
Engineering Mathematics-II

Prof. P. Panigrahi
Prof. J. Kumar
Prof. P.V.S.N. Murthy
Prof. S. Kumar



Engineering Mathematics-II

-: Course Content Developed By :-

Dr. J Kumar (Assistant Professor)


Dr. S Kumar (Professor)
Dr. P Panigrahi (Associate Professor)
Dr. P V S N Murthy (Associate Professor)
Dept. of Mathematics, IIT Kharagpur

-: Content Reviewed by :-
Dr. Tanuja Srivastava (Professor)
Dept. of Mathematics, IIT Roorkee

Index

Module 1: Matrices and Linear Algebra
Lesson 1: Linear Equations and Matrices
Lesson 2: Rank of a Matrix and Solution of a Linear System
Lesson 3: Inverse of Matrices by Determinants and Gauss-Jordan Method
Lesson 4: Vector Spaces, Linear Dependence and Independence
Lesson 5: Basis and Dimension of Vector Spaces
Lesson 6: Eigenvalues and Eigenvectors of Matrices
Lesson 7: The Cayley Hamilton Theorem and Applications
Lesson 8: Diagonalization of Matrices
Lesson 9: Linear and Orthogonal Transformations
Lesson 10: Quadratic Forms

Module 2: Complex Variables
Lesson 11: Limit, Continuity, Derivative of Function of Complex Variable
Lesson 12: Analytic Function, C-R Equations, Harmonic Functions
Lesson 13: Line Integrals in Complex Plane
Lesson 14: Cauchy's Integral Theorem and Cauchy's Integral Formula
Lesson 15: Infinite Series, Convergence Tests, Uniform Convergence
Lesson 16: Power Series
Lesson 17: Taylor and Laurent Series
Lesson 18: Zeros and Singularities
Lesson 19: Residue Theorem

Module 3: Fourier Series and Fourier Transform
Lesson 20: Introduction
Lesson 21: Fourier Series of a Periodic Function
Lesson 22: Convergence Theorems
Lesson 23: Half Range Sine and Cosine Series
Lesson 24: Integration and Differentiation of Fourier Series
Lesson 25: Bessel's Inequality and Parseval's Identity
Lesson 26: Complex Form of Series
Lesson 27: Fourier Integral
Lesson 28: Fourier Integrals (Cont.)
Lesson 29: Fourier Sine and Cosine Transform
Lesson 30: Fourier Transform
Lesson 31: Fourier Transform (Cont.)
Lesson 32: Fourier Transform (Cont.)
Lesson 33: Finite Fourier Transform

Module 4: Partial Differential Equations
Lesson 34: Partial Differential Equations
Lesson 35: Linear First Order Equation
Lesson 36: Geometric Interpretation of a First Order Equation
Lesson 37: Integral Surface through a Given Curve - The Cauchy Problem
Lesson 38: Non-Linear First Order p.d.e - Compatible System
Lesson 39: Non-Linear p.d.e of 1st Order Complete Integral - Charpit's Method
Lesson 40: Special Types of First Order Non-Linear p.d.e
Lesson 41: Classification of Semi-linear 2nd Order Partial Differential Equations
Lesson 42: Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations
Lesson 43: Non-Homogeneous Linear Equation
Lesson 44: Method of Separation of Variables
Lesson 45: One Dimensional Heat Equation
Lesson 46: One Dimensional Wave Equation
Lesson 47: Laplace Equation in 2-Dimension
Lesson 48: Application of Laplace and Fourier Transforms to Boundary Value Problems in p.d.es
Lesson 49: Laplace and Fourier Transform Techniques to Wave Equation and Laplace Equation
Module 1: Matrices and Linear Algebra

Lesson 1

Linear Equations and Matrices

1.1 Introduction

The problem of solving systems of linear equations arises in almost all areas of
science and engineering. It is an important part of linear algebra and lies at its
heart.

A linear equation on n variables x1, x2, . . . , xn is an equation of the form

a1x1 + a2x2 + . . . + anxn = b,

where a1, a2, . . . , an and b are real or complex numbers, usually known in advance.
A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables. The following is an example of a
system of linear equations:

x1 - 2x2 + 4x3 = 10
2x1 - 3x3 = -9 (1.1)

It is convenient to represent large systems of linear equations in terms of
rectangular arrays called matrices. An m × n matrix is a rectangular array of
elements with m rows and n columns. It is denoted by (aij)m × n, where i = 1, 2,
3, . . . , m, and j = 1, 2, . . . , n, and the aij are real or complex numbers
(or elements of a field) called the entries of the matrix. Almost all the
concepts in linear algebra are expressed in terms of matrices.


A system of m linear equations on n variables x1, x2, . . . , xn can be written as

a11x1 + a12x2 + . . . + a1nxn = b1

a21x1 + a22x2 + . . . + a2nxn = b2

. . .

am1x1 + am2x2 + . . . + amnxn = bm (1.2)

The m × n matrix

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\]

associated with the system (1.2) is called the co-efficient matrix of the system. The
m × (n + 1) matrix

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots &        & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{pmatrix}
\]

is called the augmented matrix of the system (1.2).

The augmented matrix of a system consists of the co-efficient matrix with an
additional column whose entries are the constants from the right-hand sides of
the equations. If in (1.2) bi = 0 for all i = 1, 2, . . . , m, then the system is
called homogeneous; otherwise it is non-homogeneous. We perform operations on
matrices not only for solving systems of linear equations but also for studying
other topics in linear algebra.
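As an aside, the co-efficient and augmented matrices are easy to set up computationally. The following small sketch of ours (assuming the NumPy library is available) builds both for system (1.1):

```python
import numpy as np

# Coefficient matrix of system (1.1); the absent x2 term in the
# second equation contributes a zero entry.
A = np.array([[1, -2,  4],
              [2,  0, -3]])
b = np.array([10, -9])

# Augmented matrix: the coefficient matrix with b appended as an extra column.
augmented = np.column_stack((A, b))
print(augmented)
# [[ 1 -2  4 10]
#  [ 2  0 -3 -9]]
```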

1.2 Matrix Operations

As matrix notation simplifies the calculations in solving systems of linear
equations, we shall discuss different kinds of matrices and operations on them.

Recall that a matrix A of size m × n over a field F (here we take F as the real or
complex field) is denoted by A = (aij)m × n, i = 1, 2, 3, . . . , m, and j = 1, 2 , . . . , n,
and aij are from F. If m = n then A is called a square matrix. In this case the entries
a11, . . . , ann are called the main diagonal or principal diagonal and other entries
are called off-diagonal entries. If aij = 0 for all i and j, then A is called the null
matrix or the zero matrix, and is denoted by 0. An identity matrix, denoted by I, is
a square matrix all of whose diagonal entries are equal to 1 and whose off-diagonal
entries are equal to zero.

A square matrix A is called a diagonal matrix if all the off-diagonal entries are
zero. A square matrix A = (aij)n × n is called lower (respectively upper) triangular
matrix if aij = 0 whenever i > j (respectively i < j), that is, all entries above
(respectively below) the main diagonal are zero.

Two matrices of the same size A = (aij)m × n and B = (bij)m × n are said to be equal if
aij = bij for all i, j.


1.2.1 Addition and Scalar Multiplication

If A = (aij)m × n is a matrix over F and α ϵ F, then the scalar multiplication of A
by α is the matrix αA = (αaij)m × n, i.e., each entry of A is multiplied by α.

If A = (aij)m × n and B = (bij)m × n are matrices of the same size over F then the
addition of A and B, denoted by A + B, is the matrix C = (cij)m × n, where cij = aij + bij.

Scalar multiplication and addition of matrices satisfy the properties given below.

For matrices A, B and C of the same size over F and α, β ϵ F:

(1) A + B = B + A (commutative)

(2) (A + B) + C = A + (B + C) (associative)

(3) A + 0 = 0+A =A, where 0 is the zero matrix of the same size as A.

(4) A + (−A) = (−A) + A = 0, where −A = (−1)A, i.e. if A = (aij)m × n then −A = (−aij)m × n.

(5) (α + β) A = αA + βA.

(6) α (A + B) = αA + αB.

(7) α(βA) = (αβ)A.

1.2.2 Matrix Multiplication

If A = (aij)m × n and B = (bij)n × p are matrices over F then the multiplication or
product of A and B, denoted by AB, is the matrix C = (cij)m × p, where

\[ c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}. \]

Matrix multiplication satisfies the properties given below.

(1) Matrix multiplication need not be commutative, that is, one can find matrices A
and B such that AB is not equal to BA.

(2) For matrices A and B if AB = 0 then it may not imply either A = 0 or B = 0.

(3) For matrices A, B, C, AB = AC need not imply B = C; that is, matrix
multiplication does not obey the cancellation law.

(4) If A, B and C are matrices of sizes m × n, n × p, and p × q respectively then

(A B) C = A (B C) (associative).

(5) If A is a matrix of size m × n and both B and C are matrices of size n × p then

A (B + C) = AB + AC (left distributive).

(6) If A, B are matrices of size m × n each and C is a matrix of size n × p then

(A + B) C = AC + BC (right distributive).

(7) For any square matrix A, AI=IA=A, where I is the identity matrix of the same
size as A.
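Properties (1) and (2) are easy to witness numerically. Here is a small sketch of ours (assuming NumPy; the matrices are hand-picked illustrations, not from the text):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 0]])
B = np.array([[1, 0],
              [1, 0]])

# Property (1): multiplication need not commute.
print(A @ B)  # [[2 0], [0 0]]
print(B @ A)  # [[1 1], [1 1]]  -- AB != BA

# Property (2): AB = 0 does not force A = 0 or B = 0.
C = np.array([[0, 1],
              [0, 0]])
print(C @ C)  # [[0 0], [0 0]] although C != 0
```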

For a matrix A = (aij)m × n, the transpose of A, denoted by AT, is the matrix
AT = (aji)n × m. In other words, AT is obtained from A by writing the rows of A as
the columns of AT in order. Some properties of the transpose operation are given below.

(1) For any matrix A, (AT)T = A.

(2) For matrices A and B of the same size


(A + B)T = AT + BT.

(3) For matrices A and B over F of sizes m × n and n × p respectively,

(AB)T = BTAT.

1.2.3 Some Special Matrices

Here we shall discuss about some of the special type of matrices which will be
used in the subsequent lectures.

We consider a square matrix A = (aij)n × n. If A is a real matrix and satisfies
A = AT then A is called symmetric. In this case aij = aji for all i, j. If A
satisfies AT = − A then A is called skew-symmetric. In this case aij = − aji for
all i, j, and therefore all diagonal entries are equal to zero.

Now we take a complex square matrix A = (aij)n × n. The conjugate of A is the
matrix Ā = (āij)n × n, where āij is the complex conjugate of aij. The matrix A is
said to be Hermitian if (Ā)T = A. In this case aij = āji and in particular
aii = āii, so the diagonal entries of a Hermitian matrix are real numbers. The
matrix A is said to be skew-Hermitian if (Ā)T = − A. By a similar argument
aij = − āji, and so the diagonal entries of a skew-Hermitian matrix are either 0
or purely imaginary. One sees that the notions of symmetric and Hermitian matrices
agree for real matrices; similarly, skew-symmetric and skew-Hermitian matrices
agree for real matrices.

A complex square matrix A = (aij)n × n is called unitary if A(Ā)T = (Ā)TA = I,
where I is the identity matrix of the same size as A. In the case of real matrices,
unitary matrices are called orthogonal; that is, a real matrix A is orthogonal if
AAT = ATA = I.

1.2.4 Elementary Row/Column Operations

For any matrix A, each of the following is called an elementary row (resp.
column) operation on A:

(1) Interchange of two rows (resp. columns).

(2) Addition of scalar multiple of one row (resp. column) to another row (resp.
column).

(3) Multiplication of a row (resp. column) by a non-zero scalar.

1.3 Determinant of Matrices

Let A = (aij)n × n be a square matrix with aij ϵ ℝ or ℂ.

We define the determinant of A, denoted by det A or | A |, recursively as below. For
n = 2,

\[ |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}. \]

For n ≥ 3,

\[ \det A = |A| = \sum_{j=1}^{n} (-1)^{i+j}\, a_{ij}\, m_{ij}, \]

where i is a fixed integer with 1 ≤ i ≤ n, and mij is the determinant of the matrix
obtained from A by deleting the ith row and jth column.


One may also find the determinant of A by using the following properties of the determinant:

(1) For identity matrix I of any size, det I =1.

(2) det A = det AT

(3) If any two rows (or columns) are interchanged, then the value of the
determinant is multiplied by (− 1).

(4) If each element of a row is multiplied by a scalar α then the value of the
determinant is multiplied by α. Therefore | αA | = αⁿ | A | for an n × n matrix A.

(5) If a non-zero scalar multiple of the elements of some row (or column) is added
to the corresponding elements of some other row (or column), then the value of
the determinant remains unchanged.

(6) The determinant of a diagonal or triangular matrix is the product of its
diagonal entries.

(7) If A and B are square matrices of the same order then det (AB) = det (A) det (B).
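These properties can be checked numerically; for instance, the sketch below (assuming NumPy; A is the matrix of Example 3.3.1 later in this module, while B is our own arbitrary choice):

```python
import numpy as np

A = np.array([[2., 0., -1.],
              [0., 1.,  2.],
              [3., 1.,  1.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 3.]])

print(np.linalg.det(A))    # approximately 1.0
print(np.linalg.det(A.T))  # property (2): det A = det A^T

# Property (7): det(AB) = det(A) det(B).
print(np.linalg.det(A @ B))                 # approximately 5.0
print(np.linalg.det(A) * np.linalg.det(B))  # approximately 5.0
```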

1.4 Conclusions

Matrices and operations on them will be used in almost all the subsequent lectures.
In the next lecture we shall solve systems of linear equations. A solution of a
system of linear equations on n variables x1, x2, . . . , xn is a list (s1, s2, . . .,sn) of
numbers such that each equation is a true statement when the values s1, s2, . . . , sn
are substituted for x1, x2, . . . , xn respectively. The set of all possible solutions is
called the solution set of the given system. Two systems are called equivalent if
they have the same solution set. That is, every solution of the first system is a
solution of the second system and vice versa. Getting solution set of a system of
two linear equations in two variables is easy because it is just finding the
intersection of two lines. However, solving a large system is not so
straightforward. For this we represent a system in matrix notation and then
perform operations on the associated matrices. From the resultant matrices we
either conclude that the system has no solution or find the solutions of the system.

Keywords: Algebra of matrices, special matrices, elementary row operations,
determinant of matrices, linear systems.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning Pvt. Ltd., New Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic Press, 1991.



Module 1: Matrices and Linear Algebra

Lesson 2

Rank of a Matrix and Solution of a Linear System

2.1 Introduction

In this lecture we shall discuss the rank of matrices and the consistency of
systems of linear equations, and finally present the Gauss elimination method
for solving linear systems. For this we need an important form of matrices,
called the echelon form, which is obtained by applying elementary row (or column)
operations.

2.2 Echelon Form of a Matrix

The echelon form of a matrix is useful in solving systems of linear equations,
finding the rank of a matrix, and checking many more results in linear algebra.

An m × n matrix A is said to be in (row) echelon form if

(i) All the zero rows of A are at the bottom.

(ii) For the non-zero rows of A, as the row number increases, the number of zero
entries at the beginning of the row also increases.

In the definition of echelon form some authors include one more condition: that
the first non-zero entry in a non-zero row is equal to 1. However, this condition
is not required for our purposes and is therefore not included in the definition
above. One finds the row echelon form of a matrix by applying elementary row
operations. By applying elementary column operations, one gets the column
echelon form of the matrix.

Example 2.2.1: Find the row echelon form of


1 3 5 
A = 1 4 3  .
1 1 9 
 

We keep the 1st row as it is. Then we make the 1st entry of the second row zero by
applying elementary row operations. So, replacing the 2nd row R2 by R2 − R1, one
gets
1 3 5 
 
 0 1 −2  .
1 1 9 
 

Then we make at least the first two entries of the 3rd row of the above matrix
zero. For this we replace R3 by R3 − R1 and get

1 3 5 
 
 0 1 −2  .
 0 −2 4 
 

Finally, replacing R3 by R3 + 2R2, one gets the echelon form of A, given by

1 3 5 
 
 0 1 −2  .
0 0 0 
 

2.3 Rank of a Matrix

The rank of a matrix has several equivalent definitions. Here we take the rank
of a matrix A to be the number of non-zero rows in the row echelon form of A. It
can equally be defined as the number of non-zero columns in the column echelon
form of the matrix. Whichever definition is used, the rank of a matrix is the
same fixed number. The rank of a matrix A has the following properties.

(1) Matrix A and its transpose have the same rank, that is, rank(A) = rank(AT).

(2) If A is a matrix of size m × n then rank(A) is at most min{m, n}.

(3) If B is a sub-matrix of A then rank (B) is less than or equal to rank(A).

Example 2.3.1: Here we find the rank of the matrix

 2 −2 3 4 −1
 
−1 1 2 5 2 
A=  .
 0 0 −1 −2 3 
 
 1 −1 2 3 0 

We find the echelon form of the matrix A. The first row is kept as it is.
Replacing R4 by R4 + R2 and then R2 by 2R2 + R1, the matrix becomes

 2 −2 3 4 −1
 
 0 0 7 14 3  .
 0 0 −1 −2 3 
 
0 0 4 8 2 

Replacing R4 by R4 + 4R3 and then replacing R3 by 7R3 + R2 the matrix will be

 2 −2 3 4 −1 
 
0 0 7 14 3 
.
0 0 0 0 24 
 
0 0 0 0 14 

Finally, replacing R4 by 12R4 − 7R3 (so that 12 · 14 − 7 · 24 = 0), the resultant
matrix is in echelon form, as given below:


 2 −2 3 4 −1 
 
0 0 7 14 3 
.
0 0 0 0 24 
 
0 0 0 0 0

Now there are three non-zero rows in the echelon form of the given matrix A.
Therefore rank of A is equal to 3.
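The same rank can be cross-checked computationally; a sketch assuming NumPy (np.linalg.matrix_rank computes the rank via singular values rather than an echelon form, but agrees with the count of non-zero rows):

```python
import numpy as np

A = np.array([[ 2, -2,  3,  4, -1],
              [-1,  1,  2,  5,  2],
              [ 0,  0, -1, -2,  3],
              [ 1, -1,  2,  3,  0]])

print(np.linalg.matrix_rank(A))  # 3, as found above
```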

2.4 Solution of a Linear System

Recall that a system of m linear equations in n variables x1, x2, . . . , xn is of
the form

a11x1 + a12x2 + . . . + a1nxn = b1
a21x1 + a22x2 + . . . + a2nxn = b2
. . .
am1x1 + am2x2 + . . . + amnxn = bm

where the aij's and bi's are real or complex numbers.

Using matrix notation this system can be expressed as Ax = b, where A is the
m × n matrix

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \]

x is the n × 1 matrix

\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \]

and b is the m × 1 matrix

\[ b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}. \]


A system of linear equations has either (i) no solution or (ii) exactly one
solution or (iii) infinitely many solutions. The system is said to be consistent if
it has at least one solution, that is (ii) or (iii) of the above hold, and is
inconsistent if it has no solution.

The following theorem gives conditions for existence of solution of the system
Ax = b.

Theorem 2.4.1: Let Ax = b be a system of m linear equations on n variables, and
let the augmented matrix of the system be Ã = (A b). Then

(i) The system is consistent if and only if rank A = rank Ã.

(ii) The system has a unique solution if rank A = rank Ã = n.

(iii) The system has infinitely many solutions if rank A = rank Ã = k < n.

Remark 2.4.1: Recall that if b = 0 then the system Ax = 0 is called
homogeneous. In this case rank A = rank Ã, and so by the above theorem a
homogeneous system is always consistent. In fact (x1, x2, . . . , xn) = (0, 0, . . . ,
0) is always a solution of Ax = 0.
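Theorem 2.4.1 translates directly into a computational test. The sketch below is our own illustration (assuming NumPy; classify_system is a hypothetical helper name):

```python
import numpy as np

def classify_system(A, b):
    """Classify the system Ax = b via the rank test of Theorem 2.4.1."""
    n = A.shape[1]
    aug = np.column_stack((A, b))  # augmented matrix (A b)
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r_A != r_aug:
        return "inconsistent"
    return "unique solution" if r_A == n else "infinitely many solutions"
```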

2.5 Gauss Elimination Method for Solving a System

The Gauss elimination method is a matrix method routinely used to solve large
systems of linear equations. The main steps in this method are as follows:

1. Consider the augmented matrix of the system.

2. Convert the augmented matrix into row echelon form. Decide whether the
system is consistent or not. If yes, go to the next step; otherwise stop.


3. Write the system of equations corresponding to the matrix in echelon form
obtained in step 2. This system is either solvable directly by back-substitution,
or it has some free variables (variables which do not occur at the beginning of
any equation of the system in this step), to which we assign arbitrary
real/complex values before solving the system by back-substitution.

We explain the above method through some examples.

Example 2.5.1: Consider the system of linear equations

2x − 2y + 3z + 4u = − 1

− x + y + 2z + 5u = 2

− z − 2u = 3

x − y + 2z + 3u = 0

The augmented matrix of this system is

 2 −2 3 4 −1
 
 −1 1 2 5 2 
 0 0 −1 −2 3 .
 
 1 −1 2 3 0 

Notice that this is the same matrix A as appears in Example 2.3.1. So its row
echelon form will be

2 −2 3 4 −1 
 
0 0 7 14 3 
0 0 0 0 24  .
 
0 0 0 0 0 


Observe that the rank of the co-efficient matrix is 2 while that of the augmented
matrix is 3. Therefore, according to Theorem 2.4.1(i), the given system is
inconsistent.

Example 2.5.2: Here we shall solve the system

2x + y − 2z = 10

3x + 2y + 2z = 1

5x + 4y + 3z = 4

2 1 −2 10 
 
The augmented matrix is  3 2 2 1 .
5 4 
 4 3 

2 1 −2 10 
 
Row echelon form of this matrix is  0 1 10 −28  .
0 −14 42 
 0

Notice that the first three columns form the row echelon form of the co-efficient
matrix, whose rank is equal to three, the same as the rank of the augmented
matrix. Therefore the system is consistent, and since the number of variables is
also equal to three, by Theorem 2.4.1(ii) the system has a unique solution.

The system corresponding to the echelon form of the augmented matrix is:

2x + y − 2z = 10

y + 10z = − 28

− 14z = 42


From the last equation we get z = −3. Then by back substitution we get y = 2
and x = 1 from the 2nd and 1st equations respectively. Hence (1, 2, −3) is the
unique solution of the given system.
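The same answer can be obtained numerically; a sketch assuming NumPy (np.linalg.solve factorizes the matrix internally instead of performing the elimination by hand, but must return the same solution):

```python
import numpy as np

A = np.array([[2., 1., -2.],
              [3., 2.,  2.],
              [5., 4.,  3.]])
b = np.array([10., 1., 4.])

print(np.linalg.solve(A, b))  # [ 1.  2. -3.], i.e. (x, y, z) = (1, 2, -3)
```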

Example 2.5.3: Here we shall solve the system

x + 2y − 3z = 6

2x − y + 4z = 2

4x + 3y − 2z = 14

 1 2 −3 6 
 
Augmented matrix of the system is  2 −1 4 2 
 4 3 −2 14 
 

 1 2 −3 6 
 
and its row echelon form is  0 −5 10 −10  .
0 0 0 0 

The rank of the co-efficient matrix and the rank of the augmented matrix are the
same, equal to 2, which is less than the number of variables. Therefore the
system has infinitely many solutions. From the row echelon form of the
augmented matrix the system becomes

x + 2y − 3z = 6

− 5y + 10z = − 10

Here z is the free variable, so it can take any real value. Let z = α, where α is
a real number. Then from the second equation y = 2 + 2α, and from the first
equation x = 2 − α. Hence the set of all solutions of the system is

{(2 − α, 2 + 2α, α) : α ϵ ℝ}.

2.6 Conclusions

In this lecture we have observed that homogeneous systems are always
consistent. We shall see in another lecture that the set of all solutions of a
homogeneous system has the linearity property, and therefore these systems are of
special interest. In a subsequent lecture we shall learn about the linearity
property of sets, which is basic to the subject of linear algebra.

Keywords: Echelon form of matrices, rank of a matrix, solution of linear
systems, Gauss elimination method.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning Pvt. Ltd., New Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic Press, 1991.



Module 1: Matrices and Linear Algebra

Lesson 3

Inverse of Matrices by Determinants and Gauss-Jordan Method

3.1 Introduction

In Lesson 1 we have seen addition and multiplication of matrices. Here we shall
discuss the reciprocal, or inverse, of matrices. The matrix inverse is one of the
basic concepts used in several topics of linear algebra. Not every matrix has an
inverse. In this lecture we shall find conditions for the existence of the inverse
of a matrix and discuss two different methods for obtaining it.

3.2 Inverse of a Matrix

Let A be a square matrix of size n. A square matrix B of size n is said to be the
inverse of A if and only if AB = BA = I, where I is the identity matrix of size n.

 −2 1 
1 2   
Example 3.2.1: Let A =   and B =  1 − 1  .
 3 4 
2 2

1 0 
Notice that AB = BA =   . So A and B are inverse of each other.
0 1 

The inverse of a matrix A is denoted by A−1. If a square matrix has an inverse
then it is called invertible or non-singular; otherwise it is non-invertible or
singular. Not all square matrices are invertible.

Theorem 3.2.1: A square matrix has an inverse if and only if its determinant is
non-zero.


Some of the properties of the inverse of a matrix are listed below:

(1) The inverse of a matrix, if it exists, is unique.

(2) The inverse of the inverse of a matrix is the matrix itself, that is, (A−1)−1 = A.

(3) Inverse and transpose operations are interchangeable, that is, (AT)−1 = (A−1)T.

(4) If A and B are invertible matrices then (AB)−1 = B−1A−1.

3.3 Inverse by Determinants

Recall that for a square matrix A = (aij), the minor of any entry aij is the
determinant of the square matrix obtained from A by removing the ith row and jth
column. Moreover, the cofactor of aij is equal to the minor of aij multiplied by
(−1)^{i+j}. The cofactor matrix associated with an n × n matrix A is the n × n
matrix Ac obtained from A by replacing each entry of A by its cofactor. The
adjugate A* of A is the transpose of the cofactor matrix of A.

The following theorem gives an idea to find inverse of a matrix.

Theorem 3.3.1: For any square matrix A,

AA* = A*A = (det A) I

where I is the identity matrix of the same size as A.

Corollary 3.3.1: If det A ≠ 0 then

\[ A\left(\frac{A^*}{\det A}\right) = \left(\frac{A^*}{\det A}\right)A = I. \]


Thus we have the formula for the inverse of a matrix given in the theorem below.

Theorem 3.3.2: For any square matrix A with det A ≠ 0,

\[ A^{-1} = \frac{1}{\det A}\, A^*. \]

Example 3.3.1: Here we find the inverse of the matrix

\[ A = \begin{pmatrix} 2 & 0 & -1 \\ 0 & 1 & 2 \\ 3 & 1 & 1 \end{pmatrix}. \]

We first check the value of the determinant of A. Since det A = 1 ≠ 0, the inverse
of A exists.

One can check that the cofactor matrix Ac of A is given by

 −1 6 −3 
 
Ac =  −1 5 −2  .
 1 −4 2 
 

Then the adjugate A* of A is


 −1 −1 1 
 
A* =  6 5 −4  .
 −3 −2 2 
 


Since det A = 1, A−1 = (1/det A) A* = A*. Hence,

\[ A^{-1} = \begin{pmatrix} -1 & -1 & 1 \\ 6 & 5 & -4 \\ -3 & -2 & 2 \end{pmatrix}. \]

The existence of the inverse of a matrix A can be linked with the rank of A
through the result given in the theorem below.

Theorem 3.3.3: For a square matrix A of size n, det A ≠ 0 if and only if
rank A = n. In other words, the inverse of A exists if and only if rank A = n.

3.4 Inverse by Gauss-Jordan Elimination

Next we shall find the inverse of a square matrix A of size n by the Gauss-Jordan
elimination method. The following steps are followed:

Step 1: If either det A ≠ 0 or rank A = n then proceed to the next step; otherwise
the inverse of A does not exist.

Step 2: Form the augmented matrix (A I) where I is the n × n identity matrix.

Step 3: Apply elementary row operations to (A I) so that its first n columns form
an upper triangular matrix, say U. The resultant matrix is then (U B).

Step 4: Again apply elementary row operations to (U B) till the first n columns
form the identity matrix. If the resultant matrix is (I K) then K is the inverse
of the matrix A.

We shall consider an example below to explain this method.


Example 3.4.1: Here we shall find the inverse of the matrix

\[ A = \begin{pmatrix} 2 & 0 & -1 \\ 5 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}. \]

One checks that the rank of A is equal to 3 (equivalently, det A ≠ 0), and so the
inverse of A exists. The augmented matrix is

\[ (A\ I) = \begin{pmatrix} 2 & 0 & -1 & 1 & 0 & 0 \\ 5 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 3 & 0 & 0 & 1 \end{pmatrix}. \]

Replacing R1 by (1/2)R1 and then R2 by R2 − 5R1 one gets

\[ \begin{pmatrix} 1 & 0 & -\tfrac{1}{2} & \tfrac{1}{2} & 0 & 0 \\ 0 & 1 & \tfrac{5}{2} & -\tfrac{5}{2} & 1 & 0 \\ 0 & 1 & 3 & 0 & 0 & 1 \end{pmatrix}. \]

Replacing R3 by R3 − R2 gives

\[ \begin{pmatrix} 1 & 0 & -\tfrac{1}{2} & \tfrac{1}{2} & 0 & 0 \\ 0 & 1 & \tfrac{5}{2} & -\tfrac{5}{2} & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} & \tfrac{5}{2} & -1 & 1 \end{pmatrix}, \]

in which the first three columns form an upper triangular matrix. Replacing R1 by
R1 + R3, R2 by R2 − 5R3, and finally R3 by 2R3 gives

\[ \begin{pmatrix} 1 & 0 & 0 & 3 & -1 & 1 \\ 0 & 1 & 0 & -15 & 6 & -5 \\ 0 & 0 & 1 & 5 & -2 & 2 \end{pmatrix}. \]

The last matrix is of the form (I K). Therefore the inverse of A is given by


 3 −1 1 
 
A− 1 =  −15 6 −5  .
 5 −2 2 
 

3.5 Conclusions

Several other methods exist for finding the inverse of a matrix, and for
particular types of matrices, such as upper or lower triangular matrices, one can
derive an easier formula for the inverse. Applying the inverse of a matrix one can
find the solution of the system Ax = b when A is a square matrix of size n and the
rank of A is n. In this case x = A−1b is the solution.

Keywords: Invertible matrices, adjugate of a matrix, Gauss-Jordan elimination
method, augmented matrix.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning Pvt. Ltd., New Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic Press, 1991.



Module 1: Matrices and Linear Algebra

Lesson 4

Vector Spaces, Linear Dependence and Independence

4.1 Introduction

In this lecture we discuss the basic algebraic structure involved in linear
algebra. This structure is known as a vector space. A vector space is a non-empty
set that satisfies some conditions with respect to addition and scalar multiplication.
Recall that by a scalar we mean a real or a complex number. The set of all real
numbers ℝ is called the real field and the set of all complex numbers ℂ is called
the complex field. From here onwards, by a field F we mean the set of real or the
set of complex numbers. The elements of a vector space are usually known as vectors
and those in the field F are called scalars. In this lecture we also discuss linear
dependence and independence of vectors.

4.2 Vector Spaces

A non-empty set V together with two operations called addition (denoted by +) and
scalar multiplication (denoted by .), in short (V, +, .), is a vector space over a
field F if the following hold:

(1) V is closed under scalar multiplication, i.e. for every element α ϵ F and u ϵ V,
α.u ϵ V. (In place of α.u usually we write simply αu).

(2) (V, +) is a commutative group, that is, (i) for every pair of elements u, v ϵ V,
u + v ϵ V; (ii) the operation + on V is associative and commutative; (iii) V has a
zero element, denoted by 0, with respect to +, i.e., u + 0 = 0 + u = u for every
element u of V; and finally (iv) every element u of V has an additive inverse,
i.e., there exists v ϵ V such that u + v = v + u = 0.


(3) For α, β ϵ F and u ϵ V, (α + β).u = α.u + β.u.

(4) For α ϵ F and u, w ϵ V, α. (u + w) = α.u + α.w.

(5) For α, β ϵ F and u ϵ V, α.(β.u) = (αβ).u

(6) 1.u = u, for all u ϵ V, where 1 is the multiplicative identity of F.

If V is a vector space over F then elements of V are called vectors and elements
of F are called scalars.

For vectors v1, v2, . . . , vn in V and scalars α1, α2, . . . , αn in F the expression
α1v1 + α2v2 + . . . + αnvn is called a linear combination of v1, v2, . . . , vn. Notice
that V contains all finite linear combinations of its elements; hence it is also
called a linear space.

Examples 4.2.1: Here we give examples of some vector spaces.

(1) ℂ is a vector space over ℝ. But ℝ is not a vector space over ℂ as it is not
closed under scalar multiplication.

(2) If F = ℝ or ℂ then Fn = {(x1, x2, . . . , xn) : xi ϵ F, 1 ≤ i ≤ n} is a vector
space over F, where addition and scalar multiplication are defined as follows:
for x = (x1, x2, . . . , xn), y = (y1, y2, . . . , yn) ϵ Fn and α ϵ F,

x + y = (x1 + y1, x2 + y2, . . . , xn + yn),

αx = (αx1, αx2, . . . , αxn).

Fn is also called the n-tuple space.


(3) The Space of m × n Matrices: Here Fm × n is the set of all m × n matrices over
F. Fm × n is a vector space over F with respect to matrix addition and matrix
scalar multiplication.

(4) The space of polynomials over F: Let P(F) be the set of all polynomials over
F, i.e.,

P(F) = {a0 + a1x + . . . + anxn : ai ϵ F, 0 ≤ i ≤ n, n ≥ 0 is an integer}.

P(F) is a vector space over F with respect to addition and scalar multiplication
of polynomials, that is,

(a0 + a1x + . . . + anxn) + (b0 + b1x + . . . + bmxm) = c0 + c1x + c2x² + . . . + ckxk,

where ci = ai + bi, k = max{m, n}, and ai = bj = 0 for i > n and j > m; and

α(a0 + a1x + . . . + anxn) = αa0 + αa1x + . . . + αanxn.

The following results can be verified easily (the proofs are left as an exercise).

Theorem 4.2.1: If V is a vector space over F then

(a) α.0 = 0, for α ϵ F, here 0 is the additive identity of V or the zero vector.

(b) 0.u = 0, for u ϵ V, here 0 in the left hand side is the scalar zero i.e. additive
identity of F and 0 in right hand side is the zero vector in V.


(c) (− α).u = − (α.u), for all α ϵ F, u ϵ V.

(d) If u ≠ 0 in V then α.u = 0 implies α = 0.

4.3 Subspaces

For every algebraic structure we have the concept of sub-structures. Here we
discuss subspaces of vector spaces.

Let V be a vector space over F. A subset W of V is called a subspace of V if W is
closed under ' + ' and ' . ' (the addition and scalar multiplication of V). In
other words, (i) for u, v ϵ W, u + v ϵ W, and (ii) for u ϵ W and α ϵ F, αu ϵ W.

The above two conditions of a subspace can be combined and expressed in a single
statement that: W is a subspace of V if and only if for u, v ϵ W and scalars α, β ϵ
F, αu + βv ϵ W.

Example 4.3.1: Here we give some examples of subspaces.

(1) The zero vector of the vector space V alone, i.e. {0}, and the vector space V
itself are subspaces of V. These are called the trivial subspaces of V.

(2) Let V = ℝ², the Euclidean plane, and let W be a straight line in ℝ² passing
through the origin (0, 0), i.e. W = {(x, y) ϵ ℝ² : ax + by = 0} for some fixed
reals a and b. Then W is a subspace of ℝ². The straight lines which do not pass
through the origin are not subspaces of ℝ².

(3) The set of all n × n symmetric matrices over F forms a subspace of Fn × n
(F is a field).


(4) The set of all n × n Hermitian matrices is not a subspace of ℂn × n (the
collection of all n × n complex matrices over ℂ), because if A is a Hermitian
matrix then the diagonal entries of A are real, and so iA is not a Hermitian
matrix. (However, the set of all n × n Hermitian matrices forms a vector space
over ℝ.)

4.4 Linear Span

Let V be a vector space over F and S be a subset of V. The linear span of S,
denoted by L(S), is the collection of all possible finite linear combinations of
elements of S. L(S) satisfies the properties given in the theorem below.

Theorem 4.4.1: For any subset S of a vector space V,

(1) L(S) is a subspace of V.

(2) L(S) is the smallest subspace of V containing S, i.e. if W is any subspace of
V containing S then L(S) is contained in W.

Example 4.4.1: In ℝ², if S = {(2, 3)} then L(S) is the straight line passing
through (0, 0) and (2, 3), i.e. L(S) = {(x, y) : 3x − 2y = 0}. If
S = {(1, 0), (0, 1)} then L(S) = ℝ².

4.5 Linear Dependence and Independence

A vector space can be expressed in terms of a few of its elements, provided these
elements span the space and satisfy a condition called linear independence. Such a
short-cut representation of a vector space is essential in many subjects, such as
Information and Coding Theory.


Consider a vector space V over a field F and a set S = {v1, v2, . . . , vk} of
vectors in V. S is said to be linearly dependent if there exist scalars
α1, α2, . . . , αk (in F), not all zero, such that

α1v1 + α2v2 + . . . + αkvk = 0.

If S is not linearly dependent then it is called linearly independent. In other
words, S is linearly independent if whenever α1v1 + α2v2 + . . . + αkvk = 0, all
the scalars αi have to be zero. This suggests a method to verify linear dependence
or independence of a given finite set of vectors, as given in the next sub-section.

4.5.1 Verification of Linear Dependence/Independence

Suppose the given set of vectors is S = {v1, v2, . . . , vk}.

Step 1: Equate the linear combination of these vectors to the zero vector, that is,
α1v1 + α2v2 + . . . + αkvk = 0, where αi’s are scalars that we have to find.

Step 2: Solve for the scalars α1, α2, . . . , αk. If all must equal zero then S is
a linearly independent set; otherwise (i.e. if at least one αi can be non-zero) S
is linearly dependent.

Properties 4.5.1: Some properties of linearly dependent/independent vectors are
given below.

(1) A superset of a linearly dependent set is linearly dependent.

(2) A subset of a linearly independent set is linearly independent.

(3) Any set which contains the zero vector is linearly dependent.


Example 4.5.1: Let V = ℝ³ be the vector space (over ℝ) and let S1 = {(1, 2, 3),
(1, 0, 2), (2, 1, 5)} and S2 = {(2, 0, 6), (1, 2, −4), (3, 2, 2)} be subsets of V.
We check the linear dependence/independence of S1 and S2.

First consider the set S1. Let α1, α2, α3 be scalars such that
α1(1, 2, 3) + α2(1, 0, 2) + α3(2, 1, 5) = (0, 0, 0)

Then we have
(α1 + α2 + 2α3, 2α1 + α3, 3α1 + 2α2 + 5α3) = (0, 0, 0)

which is equivalent to the system

α1 + α2 + 2α3 = 0
2α1 + α3 = 0
3α1 + 2α2 + 5α3 = 0.

On solving this system we get α1 = α2 = α3 = 0, so S1 is linearly independent.


Next for S2, we can take α1 = α2 = 1 and α3 = − 1 and get

α1(2, 0, 6) + α2(1, 2, − 4) + α3(3, 2, 2) = 0.

So S2 is a linearly dependent set.

We can also test linear dependence/independence of vectors in Fn (in particular
in ℝⁿ) using the echelon form of a matrix. This method is explained in the example
below.


Example 4.5.2: Let V = ℝ⁴ and let S = {(1, 2, 1, −2), (2, 1, 3, −1), (2, 0, 1, 4)}
and S1 = {(0, 1, 2, −1), (1, 2, 0, 3), (1, 3, 2, 2), (0, 1, 1, 1)} be subsets of V.
We will check the linear dependence/independence of S and S1.

We consider S first. We write the vectors in S as the rows of a matrix, then apply
elementary row operations to convert it to echelon form. If there is a zero row in
the echelon form then the set is linearly dependent; otherwise it is linearly
independent.

 1 2 1 −2   1 2 1 −2 
  R 2 →R 2 − 2R1  
 2 1 3 −1   →  0 −3 1 3 
2 0 1 4  2 0 1 4 
   

 1 2 1 −2   1 2 1 −2 

R 3 → R 3 − 2R1  R 3 →3R 3 − 2R1  

→  0 −3 1 3   →  0 −3 1 3 
 0 −2 −1 8   0 0 −5 18 
   

The last matrix is in echelon form and all the rows are non-zero. Hence S is
linearly independent.

Next we consider

S1 = {(0, 1, 2, − 1), (1, 2, 0, 3), (1, 3, 2, 2), (0, 1, 1, 1)}.

While forming the matrix we need not take the 1st vector in S1 as the 1st row,
the 2nd vector as the 2nd row, and so on. Since we have to convert the matrix into
echelon form, we may take as the 1st row of the matrix a vector in S1 whose 1st
entry is non-zero. So let the matrix be


1 2 0 3
 
0 1 1 1
.
0 1 2 −1
 
1 3 2 2

We convert this to echelon form by applying elementary row operations, obtaining

\[ \begin{pmatrix} 1 & 2 & 0 & 3 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]

There is a zero row in the echelon form so S1 is linearly dependent.
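The echelon-form test amounts to comparing the rank of the matrix of stacked vectors with the number of vectors; a sketch assuming NumPy (is_linearly_independent is a hypothetical helper of ours):

```python
import numpy as np

def is_linearly_independent(vectors):
    """A set of vectors is independent iff its matrix has rank equal to
    the number of vectors (no zero row in the echelon form)."""
    M = np.array(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

S  = [(1, 2, 1, -2), (2, 1, 3, -1), (2, 0, 1, 4)]
S1 = [(0, 1, 2, -1), (1, 2, 0, 3), (1, 3, 2, 2), (0, 1, 1, 1)]
print(is_linearly_independent(S))   # True
print(is_linearly_independent(S1))  # False
```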

4.6 Conclusions

Vector spaces are the main ingredients of the subject of linear algebra. Here we
have studied an important property of vectors, namely linear
dependence/independence. This property will be used in almost all the subsequent
lectures. In the next lecture we discuss some more basic terminology associated
with a vector space.

Keywords: Vectors, scalars, vector spaces, subspaces, linearly dependent or
independent vectors.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning Pvt. Ltd., New Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic Press, 1991.



Module 1: Matrices and Linear Algebra

Lesson 5

Basis and Dimension of Vector Spaces

5.1 Introduction

In the previous lecture we said that vector spaces can be represented in a
short-cut form in terms of a few linearly independent vectors. The set of these
few vectors is called a basis. The number of elements in a basis is fixed, and
this number is called the dimension of the vector space. In this lecture we shall
discuss these two important terms, the basis and the dimension of a vector space.
We shall also give another definition of the rank of a matrix, in terms of linearly
independent rows/columns, and finally present the rank-nullity theorem.

5.2 Basis and Dimension

Let V be a vector space over F. A subset S of V is called a basis for V if the
following hold:

(i) S is a linearly independent set.

(ii) S spans V, i.e., L(S) = V (in other words, every element of V can be
written as a finite linear combination of vectors in S).

If V contains a finite basis then V is called a finite dimensional vector space,
and the dimension of V is the number of elements in a basis. If V is not finite
dimensional then it is an infinite dimensional vector space. The dimension of a
vector space is well defined because of the theorem below.


Theorem 5.2.1: If a vector space V has a basis with k number of vectors then
every basis of V contains k vectors (in other words all bases of a vector space are
of the same cardinality).

Next we shall see some examples of vector spaces with their bases and dimensions.

Example 5.2.1:

(1) {(2, 0, 6), (1, 2, −4), (3, 2, 2)} is not a basis for ℝ³ as it is not linearly
independent, because (2, 0, 6) + (1, 2, −4) = (3, 2, 2).

(2) S = {(2, 0, 0), (3, 4, 0)} is also not a basis for ℝ³, as it does not span
ℝ³: (0, 0, α), α ≠ 0, cannot be written as a linear combination of the vectors in S.

(3) The set {(1, 0, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, 0, . . . , 1)}
of vectors in ℝⁿ forms a basis for ℝⁿ. This basis is called the standard basis of
ℝⁿ. So the dimension of ℝⁿ is n.

(4) The collection P(F) of all polynomials over F is an infinite dimensional
vector space over F, because S = {1, x, x², x³, . . .} is a linearly independent
set that spans P(F), but no finite subset of S spans P(F). However Pn(F), the set
of all polynomials of degree ≤ n, is a finite dimensional vector space with
{1, x, x², x³, . . . , xⁿ} as a basis. Hence the dimension of Pn(F) is equal to
n + 1.

(5) The set ℝ2×2 of all 2 × 2 real matrices is a finite dimensional vector space
over ℝ with

\[ \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} \]

as a basis. So dim ℝ2×2 = 4.


Next we shall list some well-known properties of an n-dimensional vector space.

Theorem 5.2.2: The following results are true in an n-dimensional vector space V:

(i) Every basis of V contains n vectors.

(ii) A set of n + 1 or more vectors in V is a linearly dependent set.

(iii) If S is a set of n vectors in V and L(S) = V then S is linearly independent.

(iv) If S is a set of n linearly independent vectors in V then L(S) = V. In other
words, S is a basis of V.

(v) If S = {v1, v2, . . . , vm} is a set of m linearly independent vectors in V,
m ≤ n, then S can be extended to a basis of V, i.e. there exist vectors
um+1, . . . , un such that {v1, v2, . . . , vm, um+1, . . . , un} is a basis for V.

(vi) If S = {w1, w2, . . . , wk}, k ≥ n, is a set of vectors in V such that
L(S) = V, then S contains a basis for V.

(vii) If W is a subspace of V then dim W ≤ dim V.

In the following example we shall use some of the results of Theorem 5.2.2 to
check for a basis.

Example 5.2.3: Here we show that S = {(1, 0, −1), (1, 1, 1), (1, 2, 4)} is a basis
for ℝ³ in two different ways. We shall use the fact that the dimension of ℝ³ is 3.

Method 1: We show that S is a linearly independent set by finding the echelon
form of the matrix formed by the vectors in S:

\[ \begin{pmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{pmatrix} \xrightarrow{R_2 \to R_2 - R_1} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 1 & 2 & 4 \end{pmatrix} \xrightarrow{R_3 \to R_3 - R_1} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 2 & 5 \end{pmatrix} \xrightarrow{R_3 \to R_3 - 2R_2} \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} \]

The last matrix is in echelon form and has no zero row. So S is a linearly
independent set of 3 vectors, and since the dimension of ℝ³ is 3, by Theorem
5.2.2(iv) S is a basis of ℝ³.

Method 2: Next, by applying Theorem 5.2.2(iii), we show that S is a basis of ℝ³
by showing that every vector in ℝ³ can be expressed as a linear combination of
the vectors in S. Let (x1, x2, x3) ϵ ℝ³ be an arbitrary vector and α, β, γ ϵ ℝ be
such that

(x1, x2, x3) = α(1, 0, −1) + β(1, 1, 1) + γ(1, 2, 4)
= (α + β + γ, β + 2γ, −α + β + 4γ).

So α + β + γ = x1, β + 2γ = x2, −α + β + 4γ = x3, which is a linear system with
unknowns α, β, γ. On solving we get

α = 2x1 − 3x2 + x3, β = −2x1 + 5x2 − 2x3, γ = x1 − 2x2 + x3.


Thus for every vector in ℝ³ we have found scalars expressing the vector as a
linear combination of the vectors in S. Hence S forms a basis for ℝ³.

[In particular, if (x1, x2, x3) = (1, 2, 3) then α = −1, β = 2, γ = 0, i.e.

(1, 2, 3) = (−1)(1, 0, −1) + 2(1, 1, 1) + 0(1, 2, 4).]
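The scalars in Method 2 solve a linear system whose co-efficient matrix has the basis vectors as columns; a sketch assuming NumPy:

```python
import numpy as np

# Columns are the basis vectors (1, 0, -1), (1, 1, 1), (1, 2, 4).
B = np.array([[ 1., 1., 1.],
              [ 0., 1., 2.],
              [-1., 1., 4.]])
x = np.array([1., 2., 3.])

# Solve B (alpha, beta, gamma)^T = x.
print(np.linalg.solve(B, x))  # [-1.  2.  0.], matching the values above
```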

In the next example we shall find a basis and the dimension of a subspace
generated by a set of vectors.

Example 5.2.4: We consider the subspace W of ℝ⁵ generated by the vectors
u = (1, 3, 1, −2, −3), v = (1, 4, 3, −1, −4), w = (2, 3, −4, −7, −3),
x = (3, 8, 1, −7, −8). Here we find a basis and the dimension of W.

The dimension of W is the maximum number of linearly independent vectors in
{u, v, w, x}. To determine this we use the echelon form of the matrix whose rows
are the vectors u, v, w, and x. The matrix is

\[ \begin{pmatrix} 1 & 3 & 1 & -2 & -3 \\ 1 & 4 & 3 & -1 & -4 \\ 2 & 3 & -4 & -7 & -3 \\ 3 & 8 & 1 & -7 & -8 \end{pmatrix}. \]

Replacing R2 by R2 − R1, R3 by R3 − 2R1 and R4 by R4 − 3R1 the matrix will be


1 3 1 −2 −3 
 
0 1 2 1 −1 
0 −3 −6 −3 3 .
 
0 −1 −2 −1 1 

Replacing R3 by R3 + 3R2 and R4 by R4 + R2 in the above matrix we get

1 3 1 −2 −3 
 
0 1 2 1 −1 
0 0 0 0 0 
 
0 0 0 0 0 
which is in echelon form.

In the echelon form there are two non-zero rows only. Therefore dimension of W is
equal to two and these non-zero rows form a basis for W. So {(1, 3, 1, − 2, − 3),
(0, 1, 2, 1, − 1)} is a basis for W.

5.3 The Rank-Nullity Theorem

Here we give a definition of the rank of a matrix in terms of linearly independent
rows or columns. The rank of a matrix A is defined as the maximum number of
linearly independent rows in A. This is the same as the dimension of the subspace
spanned by the rows of A, called the row space of A. Similarly one defines the
column space of A. It is known that the dimension of the row space of A equals the
dimension of the column space of A. Therefore the rank of a matrix is also equal
to the dimension of its column space. From this one can also conclude that a
matrix and its transpose have the same rank.


For any matrix A, its nullity may be defined as follows. Recall that a homogeneous
system of m linear equations on n variables is of the form AX = 0, where A is an
m × n matrix and X is the n × 1 matrix (x1, x2, . . . , xn)T. Homogeneous systems
are always consistent because (0, 0, . . . , 0) is always a solution. This is also
seen from the fact that the co-efficient and augmented matrices of the system have
the same rank.

Let S be the collection of all solutions of AX = 0. One can easily check that S is
a subspace of ℝⁿ; this subspace is called the solution space of the system. The
dimension of the solution space of the system AX = 0 is called the nullity of A.
Now we are ready to state the famous rank-nullity theorem for matrices.

Theorem 5.3.1: Let A be an m × n matrix. Then rank A + nullity of A = n.

We illustrate the above theorem through some examples below.

Example 5.3.1: We verify the rank-nullity theorem for the matrix

\[ A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 5 & 2 \\ 1 & 4 & 7 \\ 1 & 3 & 3 \end{pmatrix}. \]

We convert A into row echelon form, which is given by

\[ \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \]


From this we get that rank of A is equal to 2 since there are two non-zero rows in
the row echelon form of A.

The homogeneous system corresponding to A is AX = 0, where X is the 3 × 1
matrix (x1, x2, x3)T. So the system is

x1 + 2x2 − x3 = 0

2x1 + 5x2 + 2x3 = 0

x1 + 4x2 + 7x3 = 0

x1 + 3x2 + 3x3 = 0

From the echelon form of the matrix A, the above system is equivalent to

x1 + 2x2 − x3 = 0

x2 + 4x3 = 0

Here x3 is the free variable. Let x3 = α, α ϵ ℝ. Then x2 = −4α and x1 = 9α. So the
solution space of the system AX = 0 is S = {(9α, −4α, α) : α ϵ ℝ}.

A basis for S is {(9, −4, 1)} because this vector generates S; that is, all
vectors in S are scalar multiples of the vector (9, −4, 1). Therefore the nullity
of A = dim S = 1. Now rank of A + nullity of A = 2 + 1 = 3 = n, which verifies the
rank-nullity theorem.
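A numerical check of the theorem is also possible; a sketch assuming NumPy and SciPy (scipy.linalg.null_space returns an orthonormal basis of the solution space):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., -1.],
              [2., 5.,  2.],
              [1., 4.,  7.],
              [1., 3.,  3.]])

rank = np.linalg.matrix_rank(A)   # 2
nullity = null_space(A).shape[1]  # 1; the basis is a multiple of (9, -4, 1)
print(rank + nullity)             # 3 = n, as the theorem asserts
```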

5.4 Conclusions


In this lecture we have learned that if we know a basis for a vector space then the
whole vector space can be generated by taking all possible finite linear
combinations of the basis vectors. Because of this wonderful structure, vector
spaces are widely used in coding and decoding of messages in Information and
Coding theory. We shall find application of the rank-nullity theorem in some of the
subsequent lectures.

Keywords: Finite dimensional vector spaces, basis, dimension, homogeneous
systems of equations, rank-nullity theorem.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning Pvt. Ltd., New Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic Press, 1991.



Module 1: Matrices and Linear Algebra

Lesson 6

Eigenvalues and Eigenvectors of Matrices

6.1 Introduction

The concept of eigenvalues and eigenvectors of matrices is basic and has wide
application in science and engineering. Eigenvalues are useful in studying
differential equations and continuous dynamical systems. They provide critical
information in engineering design, and they also arise naturally in fields such
as physics and chemistry.

6.2 Eigenvalues and Eigenvectors

Let A be a square matrix of size n over the real or complex field F. An element λ
in F is called an eigenvalue of A if there exists a non-zero vector x in Fn (an
n × 1 matrix) such that Ax = λx.

If λ is an eigenvalue of A then the non-zero vectors x satisfying Ax = λx are
called eigenvectors corresponding to λ. For a single eigenvalue there may be
several eigenvectors associated with it. In fact, together with the zero vector,
these eigenvectors form a subspace, as we shall see below.

Theorem 6.2.1: Let A be an n × n matrix, λ an eigenvalue of A, and S the set of
all eigenvectors corresponding to λ. Then S ∪ {0} is a subspace of Fn.

Proof: Let x1, x2 be eigenvectors corresponding to λ. Then

A(x1 + x2) = Ax1 + Ax2 = λx1 + λx2 = λ(x1 + x2).


Similarly, A(αx1) = αAx1 = αλx1 = λ(αx1). So x1 + x2 and αx1 belong to S ∪ {0},
and hence the result.

If S is the set of all eigenvectors corresponding to an eigenvalue λ then the
subspace S ∪ {0} is called the eigenspace corresponding to the eigenvalue λ.
5 4
Example 6.2.1: For the matrix A =   over the real field , 6 is an
1 2
 4
eigenvalue because for the vector x =   in 2
1

 5 4  4   24   4
Ax =    =   = 6  1  = 6x.
 1 2  1   6   

2
Similarly y =  1
 
 is also an eigenvector of A corresponding to the eigenvalue 6.
2

Next we shall find all eigenvalues and associated eigenvectors of a matrix
systematically.

6.2.1 Method to find Eigenvalues and Eigenvectors

If λ is an eigenvalue of A and x is a corresponding eigenvector then

Ax = λx or (A − λI) x = 0 (6.1)

where I is the n × n identity matrix. Note that (6.1) is a homogeneous system of
linear equations. If (6.1) has a non-zero solution then rank(A − λI) < n, so
A − λI is not invertible and one gets det(A − λI) = 0. Therefore if λ is an
eigenvalue of A then it satisfies the equation det(A − λI) = 0 (because it has a
non-zero eigenvector). Since det(A − λI) is a polynomial in λ of degree n, we
obtain all values of λ by solving det(A − λI) = 0, and this equation has n
solutions, counting multiplicities. We summarize the above discussion as
follows:

1. Eigenvalues of A are the solutions of det (A − λI) = 0.

2. If A is of size n then A has n number of eigenvalues with counting multiplicities.

3. If λ is an eigenvalue of A then all non-zero solutions of the system (A − λI) x =


0 are the eigenvectors of A corresponding to λ, here x = (x1, x2, . . . , xn)T.

Eigenvalues of matrices are sometimes called characteristic values. The equation


det (A − λI) = 0 is called the characteristic equation and det (A − λI) is called the
characteristic polynomial associated with A.

We explain this method of finding eigenvalues and eigenvectors of a matrix


through an example below.

Example 6.2.2: Find all eigenvalues and their corresponding eigenvectors of the
matrix
5 4 2
 
A = 4 5 2 .
2 2 
 2

Solution: The characteristic polynomial of A is

50 WhatsApp: +91 7900900676 www.AgriMoon.Com


Eigenvalues and Eigenvectors of Matrices

5−λ 4 2
det (A - λI) = 4 5−λ 2 .
2 2 2−λ

= − ( λ − 10 )( λ − 1 )
2
.

So the characteristic equation is (λ − 10) (λ − 1)2 = 0 and the eigenvalues λ are


λ = 10, 1, 1.

Eigenvectors Corresponding to λ = 10: Here we solve the system (A – 10I) x = 0.

 −5 4 2   x1 
  
4 −5 2   x2  = 0
or 
 2 −8   x 3 
 2
 x1 
where x =  x 2  .
x 
 3

 −5 4 2
 
Echelon form of the co-efficient matrix is  0 −9 18  .
 0 0 
 0

So the given system of equations will be

– 5x1 + 4x2 + 2x3 = 0.

– 9x2 + 18x3 = 0.

51 WhatsApp: +91 7900900676 www.AgriMoon.Com


Eigenvalues and Eigenvectors of Matrices

Here x3 is free variable. So let x3 = α, α ≠ 0, α ϵ . Then we get x2 = 2α and x1 =


x2 = 2α. So the set of all eigenvectors corresponding to λ = 10 is {(2α, 2α, α) : α ϵ
, α ≠ 0}.

Eigenvectors corresponding to λ = 1: Here we have to solve the system


(A - I) x = 0.

 4 4 2   x1 
  
or  4 4 2   x 2  = 0 .
 2 2 1 x 
  3 

 4 4 2
 
Echelon form of the co-efficient matrix is  0 0 0  . So the system will be
0 0 0
 
4x1 + 4x2 + 2x3 = 0.
or 2x1 + 2x2 + 2x3 = 0.

Here x2 and x3 are both free variables. So let x2 = α, x3 = β, α, β ϵ , and α = 0,

β = 0 cannot hold simultaneously. Then x1 = – (2α + β). The set of all

eigenvectors corresponding to λ = 1 is {(– (2α + β), α, β) : α, β ϵ , α and β do

not take the zero value simultaneously}

6.2.2 Properties of Eigenvalues and Eigenvectors

In the following we present some properties of eigenvalues and eigenvectors of


matrices:

52 WhatsApp: +91 7900900676 www.AgriMoon.Com


Eigenvalues and Eigenvectors of Matrices

(1) The sum of the eigenvalues of a matrix A is equal to the sum of all diagonal
entries of A (called trace of A). This property provides a procedure for
checking eigenvalues.

(2) A matrix is invertible if and only it has non-zero eigenvalues.

This can be verified easily as det A = (A – 0I) = 0 if and only if 0 is an


eigenvalues of A. Also recall that det A = 0 if and only if A is not invertible.

(3) The eigenvalues of an upper (or lower) triangular matrix are the elements on
the main diagonal.

This is true because determinant of an upper (or lower) triangular matrix is


equal to the product of the (main) diagonal entries.

(4) If λ is an eigenvalue of A and if A is invertible then is an eigenvalue of A− 1.

Further if x is an eigenvector of A corresponding to λ then it is also an


eigenvector of A− 1 corresponding to .

The above is true because if x is an eigenvector of A corresponding to the


eigenvalue λ then Ax = λx. Multiplying both sides by A− 1, x = λ A− 1 x or

A− 1 x = x.

(5) If λ is an eigenvalue of A then αλ is an eigenvalue of αA where α is any real or


complex number. Further if x is an eigenvector of A corresponding to the
eigenvalue λ then x is also an eigenvector of αA corresponding to eigenvalue
αλ. This is true because (αA) x = (αλ) x.

(6) If λ is an eigenvalue of A then λk is an eigenvalue of Ak for any positive


integer k. Further if x is an eigenvector of A corresponding to the eigenvalue λ

53 WhatsApp: +91 7900900676 www.AgriMoon.Com


Eigenvalues and Eigenvectors of Matrices

then x is also an eigenvector of Ak corresponding to the eigenvalue λk. This is


true because if x is an eigenvector of A corresponding the eigenvalue λ then

Ak x = Ak − 1(Ax) = Ak − 1(λx) = λ (Ak − 1x) = λ2 (Ak − 2x) = . . . = λk x.

(7) If λ is an eigenvalue of A, then for any real or complex number c, λ – c is an


eigenvalue of A – cI. Further if x is an eigenvector of A corresponding to the
eigenvalue λ then x is also an eigenvector of A – cI corresponding to the
eigenvalue λ – c

This is true because (A − cI) x = Ax – cx = λx – cx = (λ − c) x for an


eigenvalue λ and its corresponding eigenvector x of A.

(8) Every eigenvalue of A is also an eigenvalue of AT. One verifies this from the
fact that determinant of a matrix is same as the determinant of this transpose
and

A − λI | = | (AT)T − λIT | = | (AT − λI)T | = | AT − Iλ |.

(9) The product of all the eigenvalues (with counting multiplicity) of a matrix
equals the determinant of the matrix.

(10) Eigenvectors corresponding to distinct eigenvalues are linearly independent.

6.3 Conclusions

Some more properties of eigenvalues and eigenvectors will be discussed in the


next lecture. In a subsequent lecture we shall show that eigenvalues and
eigenvectors are used for diagonalization of matrices.

Keywords: Characteristic equation, eigenvalues, eigenvectors, properties of


eigenvalues and eigenvectors.

54 WhatsApp: +91 7900900676 www.AgriMoon.Com


Eigenvalues and Eigenvectors of Matrices

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New


Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic


press, 1991.

55 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 1: Matrices and Linear Algebra

Lesson 7

The Cayley Hamilton Theorem and Applications

7.1 Introduction

The Cayley Hamilton theorem is one of the most powerful results in linear algebra.
This theorem basically gives a relation between a square matrix and its
characteristic polynomial. One important application of this theorem is to find
inverse and higher powers of matrices.

7.2 The Cayley Hamilton Theorem

The Cayley Hamilton theorem states that:

Theorem 7.2.1: Every square matrix satisfies its own characteristic equation.

That is if A is a matrix of size n and χA (λ) = a0 + a1λ + . . . + an −1λn − 1 + λn = 0 is


the characteristic equation of A then

χA (A) = a0I + a1A + . . . + an − 1An − 1 + An = 0n × n

where 0n × n is the zero matrix of size n, and for any positive integer i, Ai is the
product A × A . . . × A of i number of A.

1 2
Example7.2.1: Let A =   . Characteristic equation is λ – 4λ – 5 = 0. One
2

 4 3 
9 8 4 8
can check that A2 =   , 4A =   . So
16 17  16 12 

56 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

 9 8   4 8  5 0
A2 – 4A – 5I =  –  – .
16 17  16 12   0 5 
 9−4−5 8 −8 − 0  0 0
=  = .
16 − 16 − 0 17 − 12 − 5   0 0 

The Cayley-Hamilton theorem can be used to find inverse as well as higher powers
of a matrix.

7.3 Method to Find Inverse

Here we consider a square matrix A of size n and its characteristic polynomial χA


(λ) = det (A- λ I) =a0 + a1λ + . . . + an − 1λn − 1 + λn. The following is a well known
result for matrices.

Theorem 7.3.1: If χA (λ) = det (A- λ I) =a0 + a1λ + . . . + an − 1λn − 1 + λn is the


characteristic polynomial of a square matrix A then determinant of A is equal to (−
1)n a0.

The following is an immediate consequence of the above theorem.

Corollary 7.3.1: A is invertible if and only if a0 ≠ 0.

In light of the above results to find inverse of A we should have a0 ≠ 0. By the


Cayley- Hamilton theorem we have

a0I + a1A + . . . + an − 1An − 1 + An = 0.


or A(a1I + a2A + . . . + An − 1) = – a0 I.

57 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

1
or A{ − ( a1I + a2A + . . . + An − 1)} = I.
a0

1
Therefore A− 1 = − ( a1I + a2A + . . . + An − 1) which is a formula for inverse of A.
a0

We will illustrate this method in the example below.


 2 −1 1
 
Example 7.3.1: Here we find inverse of the matrix A =  3 −2 1 applying
 0 0 1
 
Cayley- Hamilton theorem. One finds that the characteristic equation of A is
det (A − λI) = − λ3 + λ2 + λ – 1 = 0.

The matrix A is invertible because a0 = − 1 ≠ 0. By the Cayley-Hamilton theorem


–A3 + A2 + A – I = 0.
or A(– A2 + A + I) = I.
 1 0 2   2 −1 1  1 0 0 
     
or A− 1 = – A2 + A + I = −  0 1 2  +  3 −2 1 +  0 1 0 
 0 0 1   0 0 1  0 0 1 
     

 2 −1 −1
 
=  3 −2 −1 .
0 0 1 
 

7.4 Computation of powers of A

Applying Cayley-Hamilton theorem we can also find higher powers of a square


matrix. For this we need a famous theorem of algebra called the division algorithm,
which is stated below.

58 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

Theorem 7.4.1: (Division Algorithm) For any polynomials f(x) and g(x) over a
field F there exist polynomials q(x) and r(x) such that f(x) = q(x) g(x) + r(x) where
r(x) = 0 or deg r(x) < deg g(x).

The polynomial r(x) is called remainder polynomial.

Here we shall discuss about a method that finds value of higher degree polynomial
on a square matrix A and in particular the value of higher power of A. The method
as follows:

Step 1: Let A be a square matrix of size n and f(A) be a polynomial in A of any


finite degree m, usually m > n.

Step 2: Compute the characteristic polynomial χ(A) of A. From division algorithm


we get f(A) = q(A) χ(A) + r(A), where q(A) and r(A) are polynomials in A and deg
r(A) < deg χ(A) or r(A) = 0.

Step 3: From Cayley-Hamilton theorem we get χ(A) = 0. Therefore f(A) = r(A),


that is f(A) is equal to a polynomial in A of degree less than n. Then we compute
r(A) which involves at the most n unknown constants and up to (n − 1)th powers of
A, that is, r(A) can be written as:

r(A) = a0I + a1A + . . . + an − 1An − 1.

To find r(A) one has to compute the co-efficients a0 , a1 , . . . , an − 1 and powers of


A. We use the eigenvalues of A to find these co-efficients. This procedure is
divided into two cases depending on the eigenvalues are distinct or not.

59 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

Step 4: In this case we assume that A has distinct eigenvalues λ1, λ2, . . . , λn. From
Cayley-Hamilton theorem we have f(A) = r(A). Therefore

f(λi) = r(λi) for all i = 1, 2, . . . , n, that is

f(λ1) = r(λ1) = a0 + a1λ1 + a2λ12 + . . . + an − 1λ1n − 1.

f(λ2) = r(λ2) = a0 + a1λ2 + a2λ22 + . . . + an − 1λ2n − 1


f(λn) = r(λn) = a0 + a1λn + a2λn2 + . . . + an − 1λnn − 1

Solving this system one finds the values a0 , a1 , . . . , an-1, since f(λi) and λi, 1 ≤ i ≤
n, are known.

Step 5: In this step we consider the case that A has multiple eigenvalues. If λi is an
eigenvalue of A of multiplicity k then we differentiate the equation f(λi) = r(λi) k –
1 times, and get k equations:

f(λi) = r(λi).

df ( λ ) dr ( λ )
= .
=
dλ λ λ=
dλ λ λi
i


d ( k-1) f ( λ ) d ( k-1) r ( λ )
= .
dλ dλ
=λ λ=
i λ λi

60 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

This is how one gets a system of n equations using all the eigenvalues of A and
from this system the values of a0, a1 , . . . , an can be determined.

 2 −1
Example 7.4.1: Here we shall find the value of f(A) = A78, for A = 
5 
,
2
applying Cayley-Hamilton theorem. Characteristic polynomial of A is
det (A − λI) = λ2 − 7λ + 12. Eigenvalues are 3 and 4. Since characteristic
polynomial of A is of degree 2 the remainder will be of degree at the most one.

Therefore

A78 = a0 I + a1A (7.1)

378 = a0 I + 3a1

478 = a0 I + 4a1

On solving we get a1 = − 378 + 478 and a0 = 4 × 378 – 3 × 478. Putting this value in
(7.1),

 2 x 378 − 478 378 − 478 


A78 =  .
 −2 x 3 + 2 x 4 −378 + 2 x 478 
78 78

1 0 1
 
Example 7.4.2: For the matrix A =  0 1 0  , we find the value of f(A) = A10 –
0 0 2
 
5A6 + 2A3.

61 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

Eigenvalues of the matrix A are 1, 1 and 2. Since the characteristic polynomial is


of degree 3 we get

f(A) = a0I + a1A + a2A2 = r(A).

For eigenvalue 2 we get the equation

210 – 5 × 26 + 2 × 23 = a0 + 2a1 + 4a2 (7.2)

Since 1 is a eigenvalue of multiplicity two we get equations

df ( λ ) dr ( λ )
f(1) = r(1) and = . That is,
=
d λ λ 1=
d λ λ 1

− 2 = a0 + a1 + a2 and − 14 = a1 + 2a2 (7.3)

From (7.2) and (7.3) we have the system

a0 + 2a1 + 4a2 = 720


a0 + a1 + a2 = − 2
a1 + 2a2 = − 14

On solving this system we get a0 = 748, a1 = − 1486 and a2 = 736.

Thus f(A) = A10 – 5A6 + 2A3 = 748 I – 1486 A + 736 A2.

62 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

1 0 3
 
A2 =  0 1 0  .
0 0 4
 

1 0 0 1 0 1 1 0 3
     
Now f(A) = 748  0 1 0  + (− 1486)  0 1 0  + 736  0 1 0  =
0 0 1 0 0 2 0 0 4
     

 −2 0 722 
 
 0 −2 0 .
 0 0 1720 
 

7.5. Conclusions

In this lecture we have seen that how powerful the Cayley-Hamilton theorem and
the concept of eigenvalues are? In the next lecture also we shall use the theory of
eigenvalues for diagonalization of matrices.

Keywords: Cayley Hamilton theoem, division algorithm, inverse of matrices,


power of marices.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New


Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.

63 WhatsApp: +91 7900900676 www.AgriMoon.Com


The Cayley Hamilton Theorem and Applications

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic


press, 1991.

64 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 1: Matrices and Linear Algebra

Lesson 8

Diagonalization of Matrices

8.1 Introduction

Diagonalizable matrices are of particular interest in linear algebra because of their


application to computation of several matrix operations and functions easily. Not
all matrices are diagonalizable. In this lecture we learn technique to identify
matrices that are diagonalizable.

8.2 Similar Matrices

Diagonalizable matrices are defined through similar matrices. Two square matrices
A and B are said to be similar if there exists an invertible matrix P such that A = P−
1
B P or equivalently PA = BP.

3 5  2 4
Example 8.2.1: (i) Matrices A =   and B =   are similar because PA
3 1  4 2
 4 0
= PB, where P =   . Note that P is invertible as det P = 20 ≠ 0. However
1 5
2 0 2 1
matrices R =   and S =   are not similar because otherwise the
 0 2  0 2
a b
matrix P1 satisfying P1R = SP1 will be of the form   and is a non-invertible
0 0
(or singular) matrix.

In the following we shall present an important result on similar matrices.

65 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

Theorem 8.2.1: Similar matrices have the same characteristic equation (and hence
the same eigenvalues).

Proof: Let A and B be similar matrices. We have to show that


det (A − λI) = det (B − λI). A = P− 1 B P, where P is an invertible matrix.

det (A − λI) = det (P− 1 B P – P− 1λI P) = det (P− 1 (B − λI) P)


= det (P− 1) det(B − λI) det (P) = det (B − λI).

The above theorem also gives a criteria for checking that the given matrices are
similar or not.

1 2 4 1
Example 8.2.1: Matrices A =   and B =   are not similar because
 4 3  3 2
their characteristic polynomials are λ2 − 4λ – 5 and λ2 − 6λ + 5 respectively.

8.3 Algebraic and Geometric Multiplicities

For diagonalization of matrices we need to understand the algebraic and geometric


multiplicities of eigenvalues. Let λ0 be an eigenvalue of A. The geometric
multiplicity of λ0 is the dimension of the eigenspace of λ0, that is the dimension of
the solution space of (A – λ0I) x = 0, which is also the nullity of (A – λ0I).
Whereas the algebraic multiplicity of λ0 is the largest positive integer k such that (λ
– λ0)k is a factor of the characteristic polynomial of A.

66 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

 −3 1 −1 
 
Example 8.3.1: Consider the matrix A =  −7 5 −1  . Characteristic polynomial
 −6 6 −2 
 
of A is det (A − λI) = (λ + 2)2 (λ − 4). So − 2 is an eigenvalue of multiplicity two
and therefore algebraic multiplicity of the eigenvalue − 2 is equal to 2. One can
check that rank of (A + 2I) is equal to two hence its nullity is equal to one. So
geometric multiplicity of the eigenvalue − 2 is equal to 1. The following theorem
gives a relation between these two multiplicities.

Theorem 8.3.1: The algebraic multiplicity of an eigenvalue is not less than its
geometric multiplicity.

8.4 Diagonalizable Matrices

A square matrix is said to be diagonalizable if it is similar to a diagonal matrix. In


other words A is diagonalizable if and only if there is an invertible matrix P such
that P-1 A P is a diagonal matrix.

The following theorem gives a criteria for diagonalizable matrices.

Theorem 8.4.1: Let An×n be an square matrix with eigenvalues λ1, λ2, . . . , λk. Let
γ1, γ2, . . . , γk be the geometric multiplicity of λ1, λ2, . . . , λk respectively. Then A is
diagonalizable if and only if γ1 + γ2 + . . . + γk = n.

From theorems 8.3.1 and 8.4.1 we get the following result.

67 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

Corollary 8.4.1: A matrix An × n is diagonalizable if and only if for every


eigenvalue λ of A, the algebraic multiplicity of λ is equal its geometric
multiplicity.

Corollary 8.4.2: If An × n has n distinct eigenvalues then A is diagonalizable.

8.5 Algorithm to Diagonalize a Matrix

Input: A square matrix An × n.

Output: A diagonal matrix similar to A.

(1) Find eigenvalues of A say λ1, λ2, . . . , λk, k ≤ n.

(2) Find geometric multiplicity γi of λi, 1 ≤ i ≤ k.

(3) If γ1 + γ2 + . . . + γk = n then continue otherwise return that A is not


diagonalizable.
λi
(4) Find basis for eigenspace of each λi. Let { x j : 1 ≤j ≤ γi } be a basis for the
eigenspace corresponding to λi, 1 ≤ i ≤ k.

(5) Take P = ( x1λ1  x γ1 λ1 x1λ2 x 2λ2  x γ 2 λ2  x1λk x 2λk  x γ k λk ) be the n × n

matrix such that each x j λi is a column vector i.e. a matrix of size n × 1.


Obviously P is invertible.

(6) P− 1 A P = diag(λ1, λ1,… ,λ1, λ2, λ2,. . . λ2 , ….λk ,λk , … λk ) is the diagonal
matrix similar to A.

68 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

5 4 2
 
Example 8.5.1: Consider the matrix A =  4 5 2  .
 2 2 2
 

A has two eigenvalues λ1 = 10 and λ2 = 1, where algebraic multiplicities of λ1 and


λ2 are 1 and 2 respectively. Recall that the eigenspace of λ1 is

S1 = {(2α, 2α, α): α ϵ R} (here we include the zero vector also).

dim S1 = 1 = γ1, the geometric multiplicity of λ1. Eigen space of λ2 is

S2 = {(− (2α + β), α, β): α, β ϵ R} and dim S2 = 2 = γ2 .

Now γ1 + γ2 = 3 = size of the matrix A. So A is diagonalizable.

A basis for S1 is {(2, 2, 1)}. A basis for S2 is {(− 1, 1, 0), (− )} (obtained by

taking α = 1, β = 0 and then α = 0, β = 1). So


P
 1
 2 −1 −
2
 
= .
2 1 0
1 0 1 
 
 

69 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

10 0 0 
 
−1
One checks that P A P =  0 1 0  and is similar A.
 0 0 1
 
Not all matrices are diagonalizable and we will see such an example below.

Example 8.5.2: As we have seen in Example 8.3.1 that for the matrix A =
 −3 1 −1 
 
 −7 5 −1  the eigenvalues are λ1 = − 2, λ2 = 4, λ1 is of multiplicity 2. Also the
 −6 6 −2 
 
algebraic multiplicity of λ1 is 2 and the geometric multiplicity of it is 1. Therefore
A is not a diagonalizable matrix.

8.6 Computation of Functions of Diagonalizable Matrices

In the following theorem we shall list some properties of diagonal and


diagonalizable matrices.

Theorem 8.6.1: The following are true for a diagonal or a diagonalizable matrix
D:

a 0
(I) If D =  0 b  is a diagonal matrix the kth power of D is equal to
 n x n
 ak 0
  .
0 bk n x n

(II) If A is a diagonalizable matrix with A = P− 1 D P, where D is a diagonal matrix,


then Ak = P− 1 Dk P. (For k=2 one verifies that A2 = A.A = (P− 1 D P) (P− 1 D P) = P− 1
D (PP− 1) D P = P− 1D2 P.)

70 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

(III) If P(x) = a0 + a1x + . . . + anxn be any polynomial and A be a diagonalizable


matrix with A = M D M−1, where D is diagonal, then P(A)= M P (D) M− 1. (One can
get this by taking P(A)= M a0 I M− 1 + M a0 D M − 1 + . . . +M a0 Dn M− 1.)

 0 1
Example 8.6.1: Here, We compute A30 for A =   . This matrix is
 −2 3 

1 1  1 0
diagonalizable as A = M D M , where M = 
−1
 and D =   . Thus by
1 2  0 2
1 1  1 0  2 −1
Theorem 8.6.1(i) and (ii), A30 = M D30 M− 1 =   30  .
1 2  0 2  −1 1 
 2 − 230 230 − 1
= .
2−2 231 − 1 
31

Example 8.6.2: If P(x) = x17 – 3x5+2x2+1 then we find the value of P(A) = A17 –
3A5 + 2A2 + I, for the same matrix A in Example 8.6.2. By Theorem 8.6.1 (iii) ,
and Example 8.61, P(A)=A17 – 3A5 + 2A2 + I

 2 − 217 217 − 1  2 − 25 25 − 1   2 − 2 2 22 − 1  1 0 
=  − 3  + 2 + .
2−2 218 − 1  2 − 26 2 6 − 1   2 − 23 23 − 1   0 1 
18

 89 − 217 −88 + 217 


= .
176 − 2 −175 + 218 
18

8.7 Conclusions

Here we have seen that finding higher powers of a diagonalizable matrix or value
of any polynomial on a diagonalizable matrix can be computed easily.

71 WhatsApp: +91 7900900676 www.AgriMoon.Com


Diagonalization of Matrices

Keywords: Similar matrices, diagonalizable matrices, algebraic multiplicity,


geometric multiplicity, functions of diagonalizable matrices.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New


Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic


press, 1991.

72 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 1: Matrices and Linear Algebra

Lesson 9

Linear and Orthogonal Transformations

9.1 Introduction

In order to compare mathematical structures of same type we study operation


presenting mappings from one structure to another. In case of vector spaces such a
mapping is called a linear transformation. Matrices and linear transformations are
closely related, in fact one can be obtained from the other easily. Orthogonal
transformations are particular type of linear transformations.

9.2. Linear Transformations

Let V and W be vector space over the same field F. A mapping T: V → W is called
a linear transformation if

(i) T (u + v) = T (u) + T (v), for u,v ϵ V.

(ii) T (αu) = αT (u) for all u ϵ V and α ϵF.

(Combiningly these two statements can be written as:


T(αu + βv) = αT(u) + β T(v), for u,v ϵ V and α, β ϵ F).

3 2
Example 9.2.1: Let T1, T2 be mappings from to defined as:
T1 (x1, x2, x3) = (x1 + x2, x3) and T2 (x1, x2, x3) = (x1x2, x3).

T1 is a linear transformation because

T1 ((x1, x2, x3) + (y1, y2, y3)) = T1 (x1+y1, x2 + y2, x3 + y3)

= (x1+y1 + x2 + y2, x3 + y3).

73 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

= (x1 + x2, x3) + (y1 + y2, y3).

= T1 (x1, x2, x3) + T2 (y1, y2, y3).

T1 (α (x1, x2, x3)) = T1 (αx1, αx2, αx3).

= (αx1 + αx2 , αx3) = α (x1 + x2, x3) = αT (x1, x2, x3).

T2 is not a linear transformation because

T2 ((x1, x2, x3) + (y1, y2, y3)) = T2 (x1+y1, x2 + y2, x3 + y3).

= ((x1+y1) (x2 + y2), (x3 + y3))

≠ (x1 x2, x3) + (y1 y2, y3) = T (x1, x2, x3) + T (y1, y2, y3).

A linear transformation T: V → W is called an isomorphism if T is a one to one


mapping. Vector spaces V and W are said to be isomorphic if there is a an
isomorphism from V on to W. A vector space V is trivially isomorphic to itself
because the identity mapping is an isomorphism from V onto itself. If V and W are
isomorphic and T is an isomorphism from V on to W then T− 1 : W → V is also an
isomorphism.

In the theorem below we list some properties of isomorphisms.

Theorem 9.2.1: Let T: V → W be a linear transformation. Then

(1) T(0) = 0. Further if T is an isomorphism then T(v) = 0 implies v = 0.

74 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

(2) If T is an isomorphism and S = {V1, V2, . . . , Vk} is a linearly independent set


of vectors in V then { T(V1 ), T(V2 ), . . . ,T (Vk )} is a linearly independent set
in W.

An important result for finite dimensional vector spaces is given in the theorem
below.

Theorem 9.2.2: Two finite dimensional vector spaces over the same field are
isomorphic if and only if they have the same dimension.

Corollary 9.2.1: Every n-dimension vector space over F is isomorphic to Fn. In


n
particular every n-dimensional vector space over is isomorphic to .

Next we shall define the null space and range space of a linear transformation. Let
T: V → W be a linear transformation. The kernel of T, Ker T, is the set Ker T = {v
ϵ V: T(v) = 0}.The set T(V) = {T(v) : v ϵ V} is called the range of T, denoted by
rang(T). It is an well-known result that Ker T = {0} if and only if T is an
isomorphism One can verify easily that Ker T is a subspace of V, called the null
space of T, and Rang(T) is also a subspace of W, called the range space of T. If V
and We are finite dimensional vector spaces then dimension of Ker T is called the
nullity of T and the dimension of rang(T) is called the rank of T. One should not
get confuse with these terminologies because very shortly we are going show that
linear transformations can be represented as matrices and the vice versa.

Example 9.2.2: (1) Consider the linear transformation T: 3


→ 2
defined as:

75 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

2
T((x1, x2, x3)) = (x1, x2, 0). Then Ker T is the z-axis and rang (T) = .

(2) Consider the linear transformation T: 3


→ 2
given by T (x1, x2, x3) = (x1, 0,
0). Then Ker T is the yz-plane and rang (T) is the x-axis.

Like matrices one can also have the rank-nullity theorem for linear
transformations.

Theorem 9.2.3: Let V and W be finite dimensional vector spaces and T : V → W


be a linear transformation. Then nullity of T + rank of T = dim V.

9.3 Linear Transformations from Matrices

Every linear transformation can be represented as a matrix and every matrix can
produce a linear transformation. So people sometime treat matrices as linear
transformations and vice versa. Here we shall discuss about the method to get a
linear transformation from a matrix.

Let V and W be finite dimensional vector spaces over F with dim V = n and dim W
= n, and Am × n = (aij)m × n be a matrix over F (same field). From Corollary 9.2.1
every vector in V can be expressed as an n-tuple of elements in F, in other words,
 x1 
 
we can take V = Fn × 1, i.e. V consists of n × 1 matrices (or column vectors    , xi
x 
 n

 x1 
 
ϵ F). Similarly elements of W can be taken as column vectors    , xi ϵ F, i.e.
x 
 m

76 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

 x1   x1 
   
W = Fm × 1. Then the mapping T: V → W defined as T    = Am × n   
x  x 
 n  n n x1
 n 
 ∑ a1j x j 
 j=1 
=    is a linear transformation because
 n 
 a x 
∑ mj j 
 j=1 

 n 
 ∑ a1 j ( x j + y j ) 
 x1   y1    j=1 
     
A    +     =
  =
 x   y    n 
 n   n   
∑ a mj ( x j + y j ) 

 j=1 

 n   n 
 ∑ a 1 j x j   ∑ a 1 j y j 
 x1   y1 


j=1
 
 
j=1

    
  +  =A   + A  
 n   n  x  y 
∑ a mj x j  ∑ a mj y j   n   n 

 j=1 
 
 j=1 

and

 α x1    x1  
    
A    = α A    
.
α x    x 
 n   n 

Notice that if A is an m × n matrix then we get a linear transformation from an n-


dimensional vector space to an m-dimensional vector space.

77 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

1 3 −2 
Example 9.3.1: Let A =  0 
1  2×3 be a matrix over . The mapping T:
 4
3
→ 2
given by:

 x1 
1 3 −2   
T ( x1 , x 2 , x 3 ) =
T
 x2
0 4 1    .
 x3 

 x1 + 3x 2 − 2x 3 
 
 4x 2 + x 3 
= is a linear transformation.

9.4 Matrix Representation of a Linear Transformation

Let V and W be vector spaces over F and T: V → W be a linear transformation.


Let dim V = n, dim W = m, {v1, v2, . . . , vn} and{w1, w2, . . . , wm} be bases for V
and W respectively. Note that T (v1), T (v2), . . . , T (vn) are vectors in W and so
these vectors can be expressed as linear combinations of vectors in{w1, w2, . . . ,
wm}. So let

T (v1) = a11w1 + a12w2 + . . . + a1mwm.


T (v2) = a21w1 + a22w2 + . . . + a2mwm.


T (vn) = an1w1 + an2w2 + . . . + anmwm , where aij ϵ F.

Then the matrix A given below is a matrix representation of T:

78 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

 a11 a 21  a n1 
 
 a12 a 22  an2 
A=      .
 
 a1m a 2m  a nm  m×n

Note that if we consider different bases in V and W then we may get different
matrix representations of T (of course these matrices are all similar). In the above
if we represent T (vi) = (ai1, ai2, . . . , ain)T. then the matrix corresponding to T can
be written as:

A = (T(v1) T(v2). . . T(vn)).

Example 9.4.1: Consider the linear transformation T: 3


→ 2
defined by T(x1,
x2, x3) = (x1 + x2, 2x3).

3 2
Take bases B = {(1, 1, 0), (0, 1, 4), (1, 2, 3)} and B1 = {(1, 0), (0, 2)} in and
respectively.

T(1, 1, 0) = (2, 0) = 2(1, 0) + 0(0, 2).

T(0, 1, 4) = (1, 8) = 1(1, 0) + 4(0, 2).

T(1, 2, 3) = (3, 6) = 3(1, 0) + 3(0, 2).

 2 1 3
So the matrix representation of T is the matrix  .
 0 4 3

9.5 Orthogonal Transformations

79 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

Before defining orthogonal transformations we recall same terminologies defined


n
in the vector space . For any two vectors x = (x1, x2, . . . , xn) and
n
y = (y1, y2, . . . , yn) in the standard inner product of x and y, denoted by ,
n

is given by = ∑x
i=1
i yi . Since ≥ 0, positive square root of

is called the norm (or length) of x. Two vectors x and y


n n
in are said to be orthogonal if = 0. A basis {v1, v2, . . . , vn } of is

said to be an orthonormal basis if = 0 for 1 ≤ i ≠ j ≤ n, and || vk || = 1 for

all k = 1, 2, . . . , n.

Recall that a real square matrix A of size n is said to be orthogonal if A AT = AT A


= I, where I is the n × n identity matrix. Orthogonal matrices satisfy the following
properties: (1) AT = A− 1
(2) det A = ± 1 and (3) Product of two orthogonal
matrices of the same size is orthogonal.

A linear transformation T: n
→ n
is called an orthogonal transformation if
n
= for every vectors u and v in . So an orthogonal
transformation not only preserves the addition and scalar multiplication, it also
preserves the length of every vector.
An orthogonal transformation is also called an isometry because of the following
result.

Theorem 9.5.1: A linear transformation T: n


→ n
is an orthogonal
n
transformation if and only if || T(v) || = || v || for all vectors v in .

80 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

 2x-y x+2y 
Example 9.5.1: The mapping T: 2
→ 2
defined as T(x, y) =  ,  is
 5 5 
an orthogonal transformation. One can check that T preserves addition and scalar
multiplication and hence is a linear transformation. Next we show that

2
|| T(x, y) || = || (x, y) ||, for all vectors (x, y) in .

{( 2x-y ) }
1
1
+ ( x+2y )
2 2 2
|| T(x, y) || = .
5
1
=
1
5
{5x + 5y } =
2 2 2
x 2 + y2 = || ( x, y ) || .

In the following theorem we show that the matrix associated with an orthogonal
transformation is also orthogonal.

Theorem 9.5.2: Let T: n


→ n
be an orthogonal transformation and A be the
n
matrix representation of T with respect to the standard basis {e1, e2, . . . , en} in .
Then A is an orthogonal matrix.

Proof: The matrix representation of T can be written as A = T(e1), T(e2), . . . ,


T(en)). Since T is an orthogonal transformation, T(ei ),T(e j ) = ei , e j The
.

standard basis in n
is orthonormal. So T(ei ),T(e j ) is equal to 1 if i = j and is

zero otherwise (i.e. i ≠ j). Thus A AT = I and A is orthogonal.

9.6 Conclusions

81 WhatsApp: +91 7900900676 www.AgriMoon.Com


Linear and Orthogonal Transformations

Linear transformations are used to recognize identical structures in linear algebra.


Using these transformations one can transfer problems in a complicated space to a
simpler space and then workout. Orthogonal transformations are also applied for
reduction of matrices to some important foms.

Keywords: Linear transformations, isomorphic vector spaces, kernel, matrix


representation, orthogonal transformations.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New


Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic


press, 1991.

82 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 1: Matrices and Linear Algebra

Lesson 10

Quadratic Forms

10.1 Introduction

The study of quadratic forms began with the pioneering work of Witt. Quadratic
forms are basically homogeneous polynomials of degree 2. They have wide
application in science and engineering.

10.2 Quadratic Forms and Matrices

Let A = (aij) be a real square matrix of size n and x be a column vector x = (x1, x2,
. . . , xn)T. A quadratic form on n variables is an expression Q = xT A x.

In other words,

 a11  a1n   x1 
  
Q = xT A x = ( x 1 , x 2 , . . . , x n )      .
a  
 n1  a nn   x n 
= a11x12 + a12x1x2 + . . . + a1nx1xn + a21x2x1 + a22x22 + . . . + a2nx2xn + . . . + an1xnx1 +
n n

an2xnx2 + . . . + annxn2 = ∑∑ a
=j 1 =i 1
ij xi x j .

The matrix A is called the matrix of the quadratic form Q. This matrix A need not
be symmetric. However, in the following theorem we show that every quadratic
form corresponds to a unique symmetric matrix. Hence there is one to one
correspondence between symmetric matrices of size n and quadratic forms on n
variables.

83 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

Theorem 10.2.1: For every quadratic form Q there is a unique symmetric matrix B
such that Q = xT B x .

Proof: We consider an arbitrary quadratic form Q = xT Ax, with A = (aij). We


a ij + a ji
construct a matrix B = (bij), where bij = . This matrix B is symmetric and xT
2
A x = xT B x, i.e. quadratic forms associated with A and B are the same.

 1 2 3
 
Example 10.2.1: For A =  4 5 6  , the quadratic form associated with A is
7 8 9
 

 1 2 3   x1 
  
Q = ( x1 x2 x3 )  4 5 6   x 2  .
7 8 9 x 
  3 

= x12 + 2x1x2 + 3x1x3+ 4x2x1 + 5x22 + 6x2x3+ 7x3x1 + 8x3x2 + 9x32.

= x12 + 6x1x2 + 10x1x3+ x22 + 14x2x3+ 7x3x1 + 9x32.

5 3 5
 
This quadratic form is equal to the quadratic form xT B x where B =  3 5 7 
5 7 9
 
which is a symmetric matrix.

If D is a diagonal matrix then the quadratic form associated with D is called a


diagonal quadratic form, that is if D = diag (a11, a22, . . . , ann) then xT D x = a11x12 +
a22x22 + . . . + annxn2. This is also called the canonical representation of a quadratic

84 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

form. The theorem below says that every quadratic form has a canonical
representation.

Theorem 10.2.2: every quadratic form xT A x can be reduced to a diagonal


quadratic form yT D y through a non-singular transformation Px = y, that is, P is a
non-singular matrix.

The above theorem says that, for x = (x1, x2, . . . , xn)T and y = (y1, y2, . . . , yn)T,
variables x1, x2, . . . , xn in xT A x can be changed to y1, y2, . . . , yn through Px = y,
P is a non-singular matrix, so that xT A x = yT D y, where D is a diagonal matrix.

We shall explain the above result through some examples.

Example 10.2.2: We reduce the quadratic forms (a) 4x12 + x22 + 9x32 – 4x1x2 +
12x1x3 and (b) x1x2 + x2x3 + x3x1 to diagonal forms.

For (a), 4x12 + x22 + 9x32 – 4x1x2 + 12x1x3

= 4{x12 + x1(3x3 – x2)} + x22 + 9x32

=4 + +

=4 + x22 + 9x32 – 9x32 + x22 + 6x2x3.

= (2x1 + 3x3 – x2)2 + 6x2x3.

We change the variables as: x1 = y1, x2 = y2, and x3 = y2 + y3. Then the above
expression (2x1 + 3x3 – x2)2 + 6x2x3

85 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

= (2y1 + 3y2 + 3y3 – y2)2 + 6y2 (y2 + y3)

= (2y1 + 2y2 + 3y3)2 + 6 −6

= (2y1 + 2y2 + 3y3)2 + 6 −

= (2y1 + 2y2 + 3y3)2 +

Finally changing the variables as 2y1 + 2y2 + 3y3 = z1, 2y2 + y3 = z2 and y3 = z3, we
get the above quadratic form is z12 + which is in diagonal form. Here

the transformation Px = z is non-singular, because here P is the non-singular the


 2 −1 3 
 
matrix  0 1 1  .
 0 1 −1
 

For the (b) part the quadratic form is x1x2 + x2x3 + x3x1. Here no square term is
there and since the 1st non-zero term is x1x2, we change the variables to x1 = y1,
x2 = y1 + y2 and x3 = y3. So this form is y1 (y1 + y2) + (y1 + y2) y3 + y1y3

= y12 + y1y2 + y1y3 + y2y3 + y1y3

= + y2y3

= − + y2y3.

= − + y2y3.

= - y22y32.

86 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

Finally replacing 2y1 + 2y2 + 3y3 = z1, and y2 = z2 and y3 = z3 the above form will
reduce to . Here also the transformation Px = z is non-singular

1 1 2
 
as the matrix P is  1 −1 0  , which is non-singular.
0 0 1
 

10.3 Classification of Quadratic Forms

Quadratic forms are classified into several categories according to their range.
These are given below.

Definition 10.3.1: A quadratic form Q = xT A x is said to be

(i) Negative definite if Q < 0 for x ≠ 0.

(ii) Negative semi-definite if Q ≤ 0 for all x and Q = 0 for some x ≠ 0.

(iii) Positive definite if Q > 0 for x ≠ 0.

(iv) Positive semi-definite if Q ≥ 0 for all x and Q = 0 for some x ≠ 0.

(v) Indefinite if Q > 0 for some x and Q < 0 for other x.

Since there is one to one correspondence between real symmetric matrices and
quadratic forms similar kind of classification is also there for the symmetric
matrices. A real symmetric matrix A belongs to a class if the corresponding
quadratic form xT A x belong to the same class.

Example 10.3.1: The form Q1 = − x12 – 2x22 is a negative definite form where as:
Q2 = − x12 + 2x1 x + x22 is a negative semi-definite because Q2 = − (x1 – x2)2 which
is always negative and also takes value zero for x1 = x2 ≠ 0. The form Q3 = 2x12 +

87 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

3x22 is positive definite where as: Q4 = x12 − 2x1x2 + x22 is positive semi-definite.
Finally Q5 = x12 − x22 is an indefinite form.

10.4 Rank and Signature of a Quadratic Form

To define rank and signature of a quadratic form we use its diagonal representation
as given below.

For a real symmetric matrix A let P(A) and N(A) be the numbers of positive and
negative diagonal entries in any diagonal form to which xTA x is reduce through a
non-singular transformation. The number P(A) – N(A) is called the signature of
the quadratic form xT A x. However rank of the matrix A is called the rank of the
form xT A x.

The quadratic form in example 10.2.2(a) has signature equal to 1 where as that in
example 10.2.2(b) has signature − 1.

The classification of quadratic forms can also be done according to their rank and
signatures as given in the theorem below.

Theorem 10.4.1: Let Q = xT A x be an n variable quadratic form with rank r and


signature s then Q is

(i) Positive definite if and only if s = n.

(ii) Positive semi-definite if and only if r = s.

(iii) Negative definite if and only if s = − n.

(iv) Negative semi-definite if and only if r = − s.

88 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

(v) Indefinite if and only if | s | < r.

The following is an important result on non-singular transformation of quadratic


forms.

Theorem 10.4.2: Two quadratic forms on the same number of variables can be
obtained from each other through a non-singular transformation if and only if they
have the same rank and signature.

10.5 Hermitian Forms

The complex analogue of real quadratic form is known as Hermitian form. Here all
vectors as well as matrices are taken as complex.

For a vector x in ₵
n
and a hermitian matrix A, the expression T
A x is called a
Hermitian form where is complex conjugate of x. Notice that if x and A are real
then Hermitian form will be a quadratic form only.

Although the vector x and the matrix A are complex, the Hermitian form always
takes real value that can be seen in the theorem below.

Theorem 10.5.1: A Hermitian form takes real values only.

Proof: Let H = xT A x be a Hermitian form. Complex conjugate of H is

= = = xT .

89 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

Since H is a scalar, H = HT = = xTAT . Since A is Hermitian so

= xT = xTAT = HT = H. Therefore A is real.

 2 3+i
Example 10.5.1: Consider a Hermitian matrix A =   . The Hermitian
3− i 1 
form associated with this is

 3 + i   x1 
( )
2
H= = x , x   
3− i 1   x2  .
1 2

= 2x1 + (3 + i) x2 + (3 − i) x1 + x2 .

= 2| x1 |2 + 2 Re + | x2 |2.

which is a real number.

10.6 Conclusions

Vast literature is there on quadratic forms, to know them on should do further


reading. Quadratic forms occur naturally in the study of conics and quadrics in
geometry.

Keywords: Quadratic forms, positive definite matrix, negative definite matrix,


rank and signature, Hermitian forms.

Suggested Readings:

Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.

90 WhatsApp: +91 7900900676 www.AgriMoon.Com


Quadratic Forms

Linear Algebra, A. R. Rao and P. Bhimasankaram, Hindustan Book Agency, New


Delhi, 2000.

Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.

Matrix Methods: An Introduction, Second Edition, Richard Bronson, Academic


press, 1991.

91 WhatsApp: +91 7900900676 www.AgriMoon.Com


e-course Linear Algebra problems

1. Identify the following matrices as symmetric, skew-symmetric, Hermitian, skew-Hermitian or none?

 
2 0 2
 
(a) 5 1 0
0 6 3
 
2 0 −1
 
(b)  0 1 1 
−1 1 3
 
0 5 −1
 
(c) −5 0 −1
1 1 0
 
1 −1 i
 
(d) −1 0 1 − i
−i 1+i 2
 
i −i 3 + i
 
(e) −i i 0 
−3 + i 0 3

2. Whether the system below is consistent? Justify.

x + 2y − 3z = 1
3x − y + 2z = 5
5x + 3y − 4z = 2

3. Solve
x + 2y − 3z + 2w = 2
2x + 5y − 8z + 6w = 5
3x + 4y − 5z + 2w = 4

4. Find rank of the matrix given below.


 
1 2 −3 0
 
2 4 −2 2
3 6 −4 3

5. Check whether the following are vector spaces?


(a) Let V be the set of all real polynomials of degree ≥ 5, with the usual addition and scalar
multiplication.

(b) Let V be the set of all nonzero real numbers with addition defined as x + y = xy and scalar
multiplication defined as αx = x.

92 WhatsApp: +91 7900900676 www.AgriMoon.Com


6. In the following, find out whether S forms a subspace of V ?

(a) V = R3 , S = {(x1 , x2 , x3 ) : x1 + 5x2 + 3x3 = 0}


(b) V = R3 , S = {(x1 , x2 , x3 ) : x1 + 5x2 + 3x3 = 1}
(c) V = R3 , S = {(x1 , x2 ) : x1 ≥ 0, x2 ≥ 0}
(d) V = P (R), the set of all polynomials over reals and S = {p(x) ∈ P (R) : P (5) = 0}
(e) V = Rn , S = {(x1 , x2 , ..., xn ) : x1 = x2 }
(f) V = R3 , S = {(x1 , x2 , ..., xn ) : x21 = x22 }.

7. Prove or disprove:

(a) Union of two subspaces of V is a subspace of V .

(b) Intersection of any number of subspaces is a subspace.

8. If x, y, z are linearly independent vectors then whether x+y, y +z, z +x are linearly independent?

9. For what values of k, do the vectors in the set {(0, 1, k), (k, 1, 0), (1, k, 1)} form a basis for R3 ?

10. Check whether the following set of vectors are linearly dependent or independent.

(a) S = {(1, 2, −2, −1), (2, 1, −1, 4), (−3, 0, 3, −2)}

(b) S = {(1, 3, −2, 5, 4), (1, 4, 1, 3, 5), (1, 4, 2, 4, 3), (2, 7, −3, 6, 13)}

11. Determine whether or not the following form a basis for R3 ?

(a) {(1, 1, 1), (1, −1, 5)}

(b) {(1, 1, 1), (1, 2, 3), (2, −1, 1)}

(c) {(1, 2, 3), (1, 0, −1), (3, −1, 0), (2, 1, −2)}

(d) {(1, 1, 2), (1, 2, 5), (5, 3, 4)}

12. Let W be a subspace of R5 generated by the vectors in 10(b). Find dimension and a basis for it.

13. Applying
 Gauss Jordan elimination method find inverse of the matrix
2 0 −1
 
A = 5 1 0 
0 1 3

93 WhatsApp: +91 7900900676 www.AgriMoon.Com


14. Whether f is a linear transformation in each of the following? If yes then whether it is as isomor-
phism?

(a) f : R2 → R2 , f (x1 , x2 ) = (x1 + x2 , x1 x2 ).


(b) f : R3 → R3 , f (x1 , x2 , x3 ) = (x2 , x1 , 0).
(c) f : R3 → R3 , f (x1 , x2 , x3 ) = (x1 , x3 , x1 ).
(d) f : R3 → R3 , f (x1 , x2 , x3 ) = (x1 − 2, x2 − 4, x3 ).

15. Let T : R3 → R2 be a linear transformation defined by T (x1 , x2 , x3 ) = (x1 − x2 , x1 + x3 ). Find


the matrix of T with respect to the basis {u1 , u2 , u3 } of R3 and {u01 , u02 } of R2 respectively, where
u1 = (1, −1, 0), u2 = (2, 0, 1), u3 = (1, 2, 1), u01 = (−1, 0) and u02 = (0, 1).

16. For the system

x + 2y − z = 0
2x + 5y + 2z = 0
x + 4y + 7z = 0
x + 3y + 3z = 0

find the solution space as well as its dimension.

 
2 1 0
 
17. Consider the matrix A = 0 1 −1
0 2 4
For this find all eigenvalues and a basis for each eigenspace. Is A diagonalizable?

18. Applying Cayley-Hamilton theorem find inverse of the matrix


 
1 2 0
 
−1 1 2
1 2 1

19. Find the symmetric matrix of the quadratic form 2x21 + 2x1 x2 − 6x2 x3 − x22

20. Check whether the matrices below are positive definite or positive semi-definite?
 
10 2 0
 
(i)  2 4 6 .
0 6 10
 
8 2 −2
 
(ii)  2 8 −2.
−2 −2 11
 
3 10 −2
 
(iii)  10 6 8 .
−2 8 12

94 WhatsApp: +91 7900900676 www.AgriMoon.Com


Answer and Hints

1. (a) none (b)symmetric (c) skew-symmetric (d) Hermitian (e) skew-Hermitian

2. Not consistent. Check that rank of the co-efficient matrix is 2 where as that of the augmented
matrix is 3.

3. Check that the system is consistent, where the rank of both the co-efficient matrix and augmented
matrix is 2. In echelon form the system is

x + 2y − 3z + 2w = 2
y − 2z + 2w = 1

Taking z and w as free variables, i.e., z = α, w = β, we get the set of all solutions is {(−α + 2β, 1 +
2α − 2β, α, β) : α, β ∈ R}.

4. Making elementary
 row operations
 R2 → −2R1 + R2 , R3 → −3R1 + R3 , R3 → −5R2 + 4R3 , get
1 2 −3 0
 
an echelon form 0 0 4 2
0 0 0 2

Thus rank of the given matrix is 3.

5. (a) Not a vector space because zero vector is not there.


(b) Yes, it is a vector space as it satisfy all the axioms. Here 1 is the zero vector, and for any
vector x its negative vector is x1 .

6. (a) yes, (b) neither closed under addition nor under scalar multiplication, (c) not closed under
scalar multiplication, (d) yes, (e) yes, (f) not closed under addition.

7. (a) No, Counter Example: V = R2 , S1 = {(x1 , x2 ) : x1 = x2 }, S2 = {(x1 , x2 ) : x1 + 2x2 = 0}.


(1, 1) ∈ S1 , (−2, 1) ∈ S2 but (1, 1) + (−2, 1) = (−1, 2) ∈ S1 , S2 .

(b) Yes. Let Si (i = 1, 2, · · · ) be subspaces of V and S = ∩∞


i=1 Si . x, y ∈ S ⇒ x, y ∈ Si ∀i. Then
x + y ∈ Si ∀i and so x + y ∈ S. Similarly S is closed under scalar multiplication. So (b) is true.

8. Yes, linearly independent.

95 WhatsApp: +91 7900900676 www.AgriMoon.Com


9. Taking scalar multiplication of the vectors and equating to 0 one gets the system in α, β, γ,
βk + γ = 0
α + β + γk = 0  
1 1 k
 
αk + γ = 0. An echelon form of the system is 0 k 1 
0 0 2 − k2
The system should have unique solution and hence rank of this matrix is 3. So k 2 6= 2. k can not
be equal to zero otherwise the set will have only two vectors. So k can be any real number other

than 0 and ± 2.

10. (a) Echelon form of the corresponding matrix


   
1 2 −2 −1 1 2 −2 −1
   
 2 1 −1 4  is 0 3 −3 −6. So the given set of vectors are linearly independent.
−3 0 3 −2 0 0 −3 −7

(b)
 Echelon form ofthecorresponding matrix
1 3 −2 5 4 1 3 −2 5 4
   
1 4 1 3 5 0 −1 −3 2 −1
   . So the given set of vectors are linearly dependent.
1 4 2 4 3 is 0 0 1 1 −2
   
2 7 −3 6 3 0 0 0 0 0

11. (a) No, because dimR3 = 3.

(b) Yes, because the set is linearly independent.

(c) No, because it contains more than 3 vectors.

(d) No, because it is linearly dependent.

12. First 3 rows of the echelon form in 10(b) forms a basis for W . Therefore dim W = 3.

 
2 0 −1 1 0 0
 
13. Consider (A|I) =  5 1 0 0 1 0 .
0 1 3 0 0 1
Apply each of these elementary row operations in the updated matrix R1 → 12 R1 , R2 → R2 − 5R1 ,
R3 → R3 + R2 , R1 → R1 + R
 3 , R2 → −R2 ,  R2 → R2 − 5R3 ,R3 → 2R3 and get
1 0 0 3 −1 1 3 −1 −1
  −1  
 0 1 0 −15 6 −5  . So, A = −15 6 −5.
0 0 1 5 −2 2 5 −2 2

14. (a) No. (b) Yes; not an isomorphism. (c)Yes; an isomorphism. (d) No

96 WhatsApp: +91 7900900676 www.AgriMoon.Com


15. T (u1 ) = (2, 1) = −2(−1, 0) + 1(0, 1)
T (u2 ) = (2, 3) = −2(−1, 0) + 3(0, 1) Ã !
−2 −2 1
T (u3 ) = (−1, 2) = 1(−1, 0) + 2(0, 1). So, answer is .
1 3 2

16. The system in echelon form is

x + 2y − z = 0
y + 4z = 0

The solution space is {(9α, −4α, α) : α ∈ R}. It’s dimension is 1.

17. Eigenvalues are 2, 2, 3. Basis for eigenspace corresponding to 2 and 3 are {(1, 0, 0)} and {(1, 1, −2)}
respectively. The matrix is not diagonalizable beacuse sum of dimension of eigenspaces is not equal
to 3.

 
−3 −2 4
1 1  
18. Characteristic polynomial is −λ3 + 3λ2 − λ + 3. So A−1 2
= 3 (A − 3A + I) = 3  3 1 −2
−3 0 3

 
2 1 0
 
19. 1 −1 3
0 3 0

20. (i) Positive semi-definite.


(ii) Positive definite. (iii) Neither of them.

97 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 2: Complex Variables

Lesson 11

Limit, Continuity, Derivative of Function of Complex Variable

11.1 Introduction

First we introduce some basic notations and terminology for the set of complex
numbers as a metric space.

11.1.1 Circle, Disk and Annulus

ρ denote the circle of radius ρ and


Let z = 1 be the unit circle and let z − a =

centre a . z − a < ρ denotes the interior of the circle of radius ρ and centre a .

It is also called an open circular disk. Similarly z − a ≤ ρ is the closed circular

disk and z − a > ρ is the exterior of the circle.

The open circular disk z − a < ρ is also called a neighbourhood of .

Also ρ1 < z − a < ρ 2 denotes an open annulus or a circular ring.

11.1.2 Half-Planes

The following notations are used for half-planes:

(i) { z =x + iy : y > 0} → upper half-plane

(ii) { z =x + iy : y < 0} → lower half-plane

(iii) { z =x + iy : x > 0} → right half-plane

(iv) { z =x + iy : x < 0} → the left half-plane

98 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

11.1.3 Interior, Exterior and Boundary Points

A point z0 is said to be an interior point of a set if there is a neighbourhood of


z0 that is entirely contained in .

A point z0 is called an exterior point of a set D if there is a neighbourhood of z0


which does not have any point of .

A point z0 is called a boundary point of a set , if every neighbourhood of

z0 contains points of as well as points of D C .

11.1.3.1 Example: The boundary of the sets , z ≤ 1 or z < 1 is z = 1 .

11.1.4 Open and Closed Sets

A set is said to be an open set if all its points are interior points. For example,
the open circular disk, the right half-plane etc. are open sets.

A set is closed if it contains all its boundary points. The closure of a set is the
closed set consisting of all points in together with the boundary of .

11.1.4.1 Example: The set { z : z ≤ ρ } is a closed set.

11.1.4.2 Example: The set { z : 0 < z ≤ 1} is neither open nor closed.

11.1.4.3 Example: The set of all complex numbers is both open and closed.

11.1.5 Connected Sets, Bounded Sets, Domain


An open set is said to be connected if each pair of points z1 and z2 can be
joined by a polygonal line, consisting of a finite number of line segments joined
end to end, that lies entirely in .

99 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

11.1.5.1 Example: The open set { z : z < 1} is connected.

Example: The open ring { z :1 < z < 2} is connected.

An open connected set is called a domain. Any neighbourhood is a domain. A


domain together with some, none or all of its boundary points is called a region.
A set D is closed if and only if its complement is open.

A set is bounded if every point of D lies inside some circle z = R , otherwise


it is unbounded.

A simple closed path is a closed path that does not intersect or touch itself. A
simply connected domain D in the complex plane is a domain such that every
simple closed path in D enclosed only points of D. A domain that is not simply
connected is called multiply connected.

11.1.5.2 Example: The set { z :1 < z < 2} is bounded whereas right half plane is

unbounded.

11.1.6 Examples

1. z − 2 + i ≤ 1 closed, bounded

2. 2 z + 3 > 4 open, connected set, unbounded


3. Im z > 1 open, connected , unbounded
4. Im z = 1
π
5. 0 ≤ arg z ≤ , ( z ≠ 0)
4
6. z − 4 ≥ z

100 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

7. Re z ≤ z

1 1
8. Re   ≤
z 2
9. Re ( z 2 ) > 0

11.2 Function

Let be a set of complex numbers. A function defined on is a rule that


assigns to each in D a complex number . The number is called the value of
at and is denoted by ; that is . The set is called the domain
of definition of . The set of all values of a function is called the range of .
Suppose that is the value of a function at , so that
.

Each of the real numbers and depends on real variable and , and so it
follows that can be expressed in terms of a pair of real-valued functions of
the real variables and :

Converse is not true, i.e., given two real functions we may not be able to
define a complex function of in an explicit form, for example,
.

11.2.1 Function in Polar Form: If the polar co-ordinates and θ are used then

u + iv =f ( reiθ ) , where and z = reiθ . So we may write

f ( z ) u (r ,θ ) + iv(r ,θ ).
=

101 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

11.2.2 Example: If f ( z ) = z 2 , then f ( x + iy ) =( x + iy ) 2 = ( x 2 − y 2 ) + 2ixy

Hence u ( x, y=
) x2 − y 2 , .
When polar co-ordinates are used,

( reiθ ) (=
re ) =
2
f= iθ
re 2 2 iθ
r 2 cos 2θ + ir 2 sin 2θ . Consequently,

u (r ,θ ) = r 2 cos 2θ , v(r ,θ ) = r 2 sin 2θ . If is always zero then is a real-valued

function of a complex variable. For example, f (=


z ) z=
2
(x 2
+ y2 ) .

11.2.3 Polynomial and Rational Functions

If a0 , a1 ,..., an are complex numbers, an ≠ 0, n ≥ 0 ,then P( z ) = a0 + a1 z + ... + an z n is


a polynomial of degree .The domain of is the entire complex plane. For

example, P( z ) =+
1 2 z − 3z 2 .

P( z )
Quotients of polynomials are called rational functions and are defined at
Q( z )
2 − z2
each point , where Q( z ) ≠ 0 . For example, g ( z ) = .
z + 4z3

11.2.4 Examples
1
1. Domain of definition of f ( z ) = is the entire complex plane excluding the
z
origin.
1
2. Domain of definition of f ( z ) = is the entire complex plane excluding
1− z
2

the circle z = 1 .

102 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

11.2.5 Multiple-Valued Function

If to each value of , there are several values of , is called a multiple-


1
valued function. For example, if w = z , thenn
may take any of values:

  θ + 2 kπ 1
  θ + 2 kπ 
=wk z cos  n
 + i sin  
  n   n 

for . In such cases, we consider those parts of the domain in


which the multiple-valued function behaves like a single-valued function. Each
one of these single valued functions is called a branch of the multiple-valued
function.

11.3 Limit of a Function

Let a function be defined in some domain containing z0 . We say that

lim f ( z ) = s, if for every ∈> 0 there exists δ > 0 such that f ( z ) − s <∈
z → z0

whenever z − z0 < δ .

11.3.1 Examples
iz 2 i
1. lim =
z →2 3 3
i 1 δ
( z − 2)
= z − 2 < <∈ whenever z − 2 < δ and δ < 3 ∈ .
3 3 3
z z x
2. lim does not exists, as along , = = 1 and along ,
z →0 z z x
z iy
= = −1.
z −iy

103 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

( z − 3i ) − ( z + i ) 
lim 
3. lim  z − 3i − z + i  =
z →∞ z →∞ z − 3i + z + i

−4 i −4 i u
= lim
= lim= 0.
z →∞ z − 3i + z + i u →0 1 − 3i u + 1 + iu

11.3.2 Theorem: Suppose that , z=


0 x0 + iy0 , and
w=
0 u0 + iv0 . Then lim f ( z ) = w0 if and only if lim u ( x , y ) = u0
z → z0 ( x , y )→( x0 , y0 )

and lim v( x, y ) = v0 .
( x , y )→( x0 , y0 )

11.3.3 Theorem: Suppose that lim f ( z ) = α 0 and lim g ( z ) = β 0 . Then


z → z0

lim [ f ( z ) ± g ( z ) ] =
α 0 + β0 , lim [ f ( z ) g ( z ) ] = α 0 β 0 , and if β 0 ≠ 0 , then
z → z0 z → z0

f ( z) α0
lim = .
z → z0 g ( z) β0

11.3.4 Infinite Limits and Limit at Infinity

We say that lim f ( z ) = ∞ if for every positive ∈ > 0 , these exists δ > 0 such
z → z0

1
that f ( z ) > whenever z − z0 < δ .

11.3.4.1 Example: lim


( iz + 3) = ∞
z →−1 z +1
We say that lim f ( z ) = w0 if for every ∈ > 0 there exists δ > 0 such that
z →∞

1 1
f ( z ) − w0 <∈ whenever z > . Equivalently, we can say that lim f   = w0 .
δ z →0
z

104 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

2z + i z
11.3.4.2 Examples: lim = 2 (ii) lim =i
z →∞ z + 1 z →∞ 2 − iz

We say that lim f ( z ) = ∞ if for every∈> 0 , there exists δ > 0 such that
z →∞

1 1 1
f ( z ) > whenever z > . One can alternatively say lim = 0.
∈ δ z →0 1
f 
z
2z3 − 1
11.3.4.3 Example: lim = ∞
z →∞ z + 1

11.4 Continuous Function

A function is continuous at a point z0 if lim f ( z ) = f ( z0 ) . Using the


z → z0

definition of limit, we define is continuous at if for every ∈> 0 , there exists

δ > 0 such that f ( z ) − f ( z0 ) < ∈whenever z − z0 < δ .

Compositions of continuous functions are again continuous.

11.4.1 Remark: If is continuous, let g ( z) = f ( z) . Now

g ( z ) − g (=
z0 ) f ( z ) − f (=
z0 ) f ( z ) − f ( z0 ) <∈ , whenever z − z0 < δ . So
is also continuous.

11.4.2 Examples

1. f ( z ) = z 3 is continuous on the whole complex plane.

sin z
2. f ( z ) = is continuous except at z = ± i.
1+ z2

105 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

 Im( z )
 ,z ≠ 0
3. f ( z ) =  z
0, z = 0

 Re( z 2 )
 ,z ≠ 0
4. f ( z ) =  z
2


0, z = 0

are not continuous at

 z2 + 1
 , z ≠ −i
5. f ( z ) =  z + i
0, z = −i

is not continuous at as lim f ( z ) =−2i ≠ f (−i ) .
z →− i

11.5 Differentiability of a Function

The derivative of a complex function at a point z0 is defined by

f ( z0 + ∆z ) − f ( z0 )
lim = f ′( z0 )
∆z →0 ∆z

provided the limit exists. Then the function is said to be differentiable at z0 .

11.5.1 Example: f ( z ) = z 2

( z0 + ∆z ) 2 − ( z0 ) 2
lim = lim (∆z + 2 z=
) 2z
∆z →0 ∆z ∆z →0

11.5.2 Remark: It can be easily seen that the differentiability of a function at a


point implies its continuity at that point.

106 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

General differentiation rules are the same as in real calculus such as


(cf )′ = cf ′, ( f + g )′ = f ′ + g ′,( fg )′ = f g
′ + fg ′ .

 f ′ f ′g − fg ′
g = g2
provided g does not vanish.
 

11.5.3 Examples:

1. f ( z ) = z .

f ( z0 + ∆z ) − f ( z0 ) ( z0 + ∆z ) − ( z0 ) ∆z ∆x − i∆y
= = =
∆z ∆z ∆z ∆x + i∆y
Now for ∆y =0 this value is and for ∆x =0 , it is . Hence
f ( z0 + ∆z ) − f ( z0 )
lim does not exist for any . That is, f z = z is not
∆z →0 ∆z
differentiable at any point.

2. f (=
z ) z= zz
2

f ( z + ∆z ) − f ( z ) ( z + ∆z )( z + ∆z ) − zz
=
∆z ∆z
∆z
= z + z + ∆z
∆z

f (0 + ∆z ) − f (0)
Now for , = ∆z
∆z
as ∆z → 0 . Hence z is differentiable at
2
which has limit . However for

∆z
any z ≠ 0 , lim
2
does not exist. Consequently z is not differentiable at any
∆z →0 ∆z

other point.

3. f ( z ) = Re( z ) is not differentiable for any .

107 WhatsApp: +91 7900900676 www.AgriMoon.Com


Limit, Continuity, Derivative of Function of Complex Variables

4. f ( z ) = Im( z ) is not differentiable for any .

5. f ( z ) = z n

( z + ∆=
z ) − zn 1  n  n−1 n
n
 n  n−2 n
  z ∆z +   z ( ∆z ) + ... +   ( ∆z ) 
2

∆z ∆z 1  2 n 


→ nz n−1 as ∆z → 0 .

Hence
d n
dz
( z ) = nz n−1 for all z.

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

108 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 2: Complex Variables

Lesson 12

Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

12.1 Analytic Functions

A function is said to be analytic at a point z0 if it is differentiable at z0 and


also at each point in some neighbourhood of z0 . The function is said to be
analytic in a domain , if it is analytic at every point in .
Analytic functions are also called holomorphic functions.

12.1.1 Examples:

1. f ( z ) = z n , a positive integer, is analytic at every point in the complex plane.

2. p ( z ) = a0 + a1 z + ... + an z n where a0 , a1 ,..., an are complex constants is analytic


at every point in the complex plane.

P( z )
3. f ( z ) = , where and are polynomials, is analytic at all points except
Q( z )
where vanishes.

12.1.2 Entire Function

A function which is analytic at all points in the complex plane is called an entire
function.

12.1.3 Examples:

1. Every polynomial is an entire function.

2. f ( z ) = z is not analytic anywhere as it is differentiable only at


2
.

109 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

1
A function is said to be analytic at z = ∞ if f   is analytic at .
z
Let us write the function

and let u x , u y , vx , v y denote the partial derivatives of and with respect to

and respectively.

12.2 Cauchy-Riemann Equations

u x = v y , u y = vx (12.2.1)

12.2.1 Theorem: Let be defined and continuous in


some neighbourhood of a point and differentiable at itself. Then at
that point the first order partial derivatives of and exist and satisfy the
Cauchy-Riemann equations (12.2.1).

Hence, if is analytic in a domain , then partial derivatives exist and


satisfy (12.2.1) at all points of .

f ( z + ∆z ) − f ( z )
Proof: Given that f ′( z ) = lim exists. This implies that
∆z →0 ∆z

lim
{[u ( x + ∆x, y + ∆y) + iv( x + ∆x, y + ∆y)] − [u ( x, y) + iv( x, y)]} exists.
( ∆x ,∆y )→(0,0) (∆x + i∆y )

Hence along (∆x,0) and (0, ∆y ) the limit should be same. Now along (∆x,0) the
limit is

110 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

lim
( u ( x + ∆x, y ) − u ( x, y ) ) + i ( v( x + ∆x, y ) − v( x, y ) )
∆x →0 ∆x
= u x ( x, y ) + ivx ( x, y ), (12.2.2)

since limit is assumed to exist.

Similarly along (0, ∆y ) the limit is

lim
( u ( x, y + ∆y ) − u ( x, y ) ) + i ( v( x, y + ∆y) − v( x, y) )
∆y →0 i∆y
= iu y ( x, y ) + v y ( x, y ) …… (12.2.3)

Equating the real and imaginary parts in (12.2.2) & (12.2.3), we get the Cauchy-
Riemann equations.

12.2.2 Example:

1. Let f ( z )= z= x − iy , u = x, v = − y. It can be easily seen that


ux =
1, v y =
−1, u y =
0, vx =
0 . Hence the Cauchy-Riemann equations are not

satisfied. So cannot be differentiable at any point.

1 x y
2. Let f ( z ) = = −i 2 , z ≠ 0.
z x +y
2 2
x + y2

y 2 − x2 2 xy
u x =2 = vy , u y =
− =
−vx except at . The function is
(x + y )
2 2
(x + y )
2 2 2

nalytic everywhere except at .

12.2.3 Theorem: If two real-valued continuous functions and of


two real variables and have continuous first order partial derivatives that

111 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

satisfy the Cauchy-Riemann equations in some domain , then the complex


function is analytic in .

Proof: Consider a neighbourhood of . Now the partial derivatives of


and are continuous. Therefore, we can write

∆=
u u ( x + ∆x, y + ∆y ) − u ( x, y )= u x ∆x + u y ∆y + ∈1 ∆x + ∈2 ∆y,

and v v( x + ∆x, y + ∆y ) − v( x, y )= vx ∆x + v y ∆y + ∈3 ∆x + ∈4 ∆y,


∆=

where ∈1 , ∈2 , ∈3 , ∈4 → 0 as ∆x, ∆y → 0 .

Now ∆w = f ( z + ∆z ) − f ( z ) = ∆u + i∆v
= (u x + ivx )∆x + (u y + iv y )∆y + (∈1 +i ∈3 )∆x + (∈2 +i ∈4 )∆y

If we apply Cauchy-Riemann equations, the above expression reduces to

∆w= (u x + ivx )∆x + (vx + iu x )∆y + (∈1 +i ∈3 )∆x + (∈2 +i ∈4 )∆y

= (u x + ivx ) ( ∆x + i∆y ) + (∈1 +i ∈3 )∆x + (∈2 +i ∈4 )∆y.

f ( z + ∆z ) − f ( z ) ∆x ∆y
So − (u x + ivx ) ≤ (∈1 +i ∈3 ) + (∈2 +i ∈4 ) .
∆z ∆z ∆z

∆x ∆y
Using the fact that ≤ 1& ≤ 1 , we get
∆z ∆z
f ( z + ∆z ) − f ( z )
lim =u x + ivx =u y + iv y .
∆z →0 ∆z

This proves that is differentiable at an arbitrary point in and so it is analytic


in .

112 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

12.2.4 Examples:

1. f ( z ) =z3 =x3 − 3xy 2 + i (3x 2 y − y 3 ) is analytic in .

 ( z )2
 ,z≠0
2. f ( z ) =  z
0, z = 0.

Then Cauchy-Riemann equations are satisfied at ( ) but is not differentiable


at ( ).

3. Let be analytic in a domain =


and f ( z ) k for all z ∈ D . So writing

, we get u 2 + v 2 =
k 2 . Differentiating with respect to
and we get
uu x + vvx =
0 (12.2.4)
and uu y + vv y =
0 (12.2.5)

Using vx = −u y in the first equation and v y = u x in the second equation, we get

uu x − vu y =
0 and uu y + vu x =
0

0 , (u 2 + v2 ) u y =
⇒ (u 2 + v2 ) ux = 0

If u 2 + v 2 = k 2 = 0 then and hence .

If k ≠ 0 then u=
x u=
y 0 , then vx and v y are also zero. So = const. , = const.

This proves that f is constant.

113 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

12.2.5 Polar Co-ordinates

= cosθ , y r sin θ . Consider the function


Let x r= . If we write
then the real and imaginary parts of are expressed in

terms of the variables and . Similarly, if we write z = reiθ , ( z ≠ 0), the real
and imaginary parts of are expressed in terms and θ . Assume
the existence and continuity of the first-order partial derivatives of and with
respect to and everywhere in some neighbourhood of a given non zero point
z0 . Then the first order partial derivatives with respect to and θ will also exist
and be continuous in some neighbourhood. Using the chain rule for
differentiating real-valued functions of two real variables we obtain

∂u ∂u ∂u ∂u ∂y ∂u ∂u ∂u ∂u ∂y
= + , = +
∂r ∂x ∂r ∂y ∂r ∂θ ∂x ∂θ ∂y ∂θ

u x cosθ + u y sin θ ,
so that ur = − u x r sin θ + u y r cosθ .
uθ = (12.2.6)

vx cosθ v y sin θ ,
Similarly vr =+ −vx r sin θ + v y r cosθ .
vθ = (12.2.7)

If the partial derivatives with respect to and also satisfy the Cauchy-
Riemann equations u x = v y , u y = −vx at z0 , then equation (12.2.7) becomes

−u y cosθ + u x sin θ , vθ =
vr = u y r sin θ + u x r cosθ (12.2.8)

Comparing (12.2.6) and (12.2.8), we get

114 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

1 1
ur = vθ and vr = − uθ . (12.2.9)
r r
f ( z ) u (r ,θ ) + i v(r ,θ ) be defined throughout
12.2.6 Theorem: Let the function=
some ∈ -neighbourhood of a non-zero point z0 = r0 exp(iθ 0 ) . Suppose that the
first order partial derivatives of the functions and with respect to and
θ exist anywhere in that neighbourhood and that they are continuous at ( r0 ,θ 0 ) .
Then if those partial derivatives satisfy the polar form (4) of the Cauchy-
Riemann equations at (r0 ,θ 0 ) , the derivatives f ′( z0 ) exists and

′( z0 ) e − iθ (ur + ivr ) ,
f=

where the right hand side is evaluated at (r0 ,θ 0 ) .

1 1 cosθ sin θ
12.2.7 Example: f ( z )= = iθ
= −i
z re r r

The conditions in the theorem are satisfied at every non-zero point z = reiθ in the
plane. Hence the derivative of exists there and

 cosθ sin θ  1 1
e − iθ 
f ′( z ) =− +i 2 = =
− 2.
  ( reiθ )
2 2
r r z

12.2.8 Example:

f ( z=
) z (Re z=
) x 2 + ixy . Then=
u x 2 x=
, u y 0,=
vx y=
, v y x.

115 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

So C. R. equations are satisfied only at the origin. Hence is not differentiable


at any point z ≠ 0 . At , partial derivatives are continuous. Hence is
differentiable at .

12.2.9 Example:
z
=
f ( z) 2
, z ≠ 0,
z
1
= , z≠0
z
x −y
=u = , v .
x2 + y 2 x2 + y 2

Here is differentiable everywhere except at .

12.3 Harmonic Functions


A real valued function φ ( x, y ) of two variables and that has continuous
second order partial derivatives in a domain and satisfies the Laplace
equation
∂ 2φ ∂ 2φ
+ =
0
∂x 2 ∂y 2
is said to be harmonic in .

12.3.1 Theorem: If is analytic in a domain , then


and satisfy Laplace’s equation

∇ 2u = u xx + u yy = 0 and ∇ 2v = vxx + v yy = 0

116 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

respectively in and have continuous second order partial derivatives in .

Proof: The function satisfies the Cauchy-Riemann equations


ux = vy , (12.3.1)

and u y = vx . (12.3.2)

Differentiating (12.3.1) with respect to x and (12.3.2) with respect to y we get

u xx = vxy (12.3.3)

and u yy = −v yx (12.3.4)

If is analytic in then and have continuous partial derivatives of all


orders in . Hence vxy = v yx . Hence adding equations (12.3.3) and (12.3.4), we

0 . Similarly we can prove that vxx + v yy =


get u xx + u yy = 0.

If two functions and are harmonic in a domain and their first order partial
derivatives satisfy the Cauchy-Riemann equations throughout , is said to be
a harmonic conjugate of .

12.3.2 Theorem: A function is analytic in a domain


if and only if is a harmonic conjugate of .

12.3.3 Example:
Let u = x 2 − y 2 − y.
Then
ux =
2 x, u xx =
2, u y =
−2 y − 1, u yy =
−2.

117 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

So u xx + u yy =
0 ; that is, is harmonic.

To find the conjugate harmonic function of , we should have


v=
y u=
x 2 x and vx =−u y =2 y + 1 . Integrating the first equation with respect to

, we get

Differentiating with respect to , we get vx =2 y + h′( x) =2 y + 1,


or, h′( x) =+1 ⇒ h( x) =+ x + k . Hence .

This is the general conjugate harmonic function of and

f ( z ) = u + iv = ( x 2 − y 2 − y ) + i ( 2 xy + x + k ) = (z 2
+ iz + ik ) is analytic.

12.3.4Remark: A conjugate of a given harmonic function is uniquely


determined up to a constant.

12.3.5 Remark: If and are any two harmonic functions, then


need not be analytic in . However, if second order partial derivatives

of nad are continuous then ( u y − vx ) + i ( u x + v y ) is analytic in .

12.3.6 Example: Let u =x2 − y 2 , v =3 x 2 y − y 3 . Then and are harmonic. But


u x ≠ v y and so is not analytic. Let U= u x − vx and V= u x + v y . Then

is analytic.

12.3.7 Example: Let u ( x, y ) = 2 x + y 3 − 3 x 2 y.

ux =
2 − 6 xy, u xx =
−6 y, u y =
3 y 2 − 3x 2 , u yy =
+6 y.

118 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

So u xx + u yy =
0 , that is, is harmonic.

For finding conjugate ,

v y = u x = 2 − 6 xy ⇒ v = 2 y − 3xy 2 + h( x)

−3 y 2 + h′( x) =
⇒ vx = −u y =
3x 2 − 3 y 2

⇒ h′( x) =3 x 2 ⇒ h( x ) =x3 + c

Hence v = 2 y − 3 xy 2 + x 3 + c. f =u + iv =2 z + iz 3 + ic is analytic.

12.3.8 Laplace Equation in Polar Form

f ( z ) u (r ,θ ) + iv(r ,θ ).
Consider the function f in polar form =

Cauchy-Riemann equations are


1
ur = vθ (12.3.5)
r

1
and uθ = vr (12.3.6)
r
rur ⇒ vrθ =ur + rurr
(12.3.5) ⇒ vθ = (12.3.7)
1
(12.3.6) ⇒ vθ r =
− uθθ (12.3.8)
r

1
Assuming urθ = vθ r , we get ur + rurr =
− uθθ
r
1 1 1
⇒ ur + rurr + uθθ =
0, or, urr + ur + 2 uθθ =
0. (12.3.9)
r r r

119 WhatsApp: +91 7900900676 www.AgriMoon.Com


Analytic Function, Cauchy-Riemann Equations, Harmonic Functions

Similarly, we will have


1 1
vrr + vr + 2 vθθ =
0. (12.3.10)
r r

Equations (12.3.9) and (12.3.10) are Laplace equations in polar form.

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications,
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

120 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 2: Complex Variables

Lesson 13

Line Integral in the Complex Plane

13.1 Introduction
Complex definite integrals are called complex line integrals written as ∫ f ( z ) dz ,
C

where C is a curve in the complex plane called the path of integration. We may
represent such a curve C by a parametric representation

z ( t=
) x ( t ) + iy ( t ) , a ≤ t ≤ b. (13.1.1)

The sense of increasing is called the positive sense on C . We assume C to be


smooth curve, that is, C has a continuous and nonzero derivative at
each point. Geometrically this means that has a unique and continuously
turning tangent. Consider the partition Let

Further, we choose point between and and consider

the sum ,

where . (13.1.2)

The limit of as the maximum of approaches zero


(consequently approaches zero) is called the line integral
of over and denoted by ∫ f ( z ) dz
C
or, by if coincides with

(that is, is a closed curve).

121 WhatsApp: +91 7900900676 www.AgriMoon.Com


Line Integral in the Complex Plane

In general all paths of integration for complex line integrals are assumed to be
piecewise smooth. The following three properties are easily implied by the
definition of the line integral.

1. Linearity: ∫ (k1 f1 ( z ) + k2 f 2 ( z ))dz =k1 ∫ f1 ( z ) + k2 ∫ f 2 ( z ).


C C C

2. Sense Reversal:

f ( z ) dz
3. Partitioning of Path: ∫= ∫ f ( z ) dz + ∫ f ( z ) dz.
C C C

13.2 Existence of the Complex Line Integral

From our assumptions of the existence of the complex integral, is continuous


and is piecewise smooth. Let us write Let us
further take and . Then the sum in
(13.1.2) becomes

(13.2.1)

These sums are real. Since is continuous, and are continuous. As


maximum of maximum of and also converges to zero and
the sum on the right becomes a real line integral.

 
lim S n =
n →∞ ∫ f ( z ) dz = ∫ udx − ∫ vdy + i  ∫ udy + ∫ vdx 
C C C C C
(13.2.2)

2
122 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

This shows that under assumptions on and , the line integral exists and its
value is independent of the choice of subdivisions and intermediate points

13.2.1 Theorem: (Indefinite integration of analytic functions)

Let be analytic in a simply connected domain (every simple closed


curve in encloses only points of Then there exists an indefinite integral of
in the domain that is, an analytic function such that
in and for all paths in joining two points and in we have

. (13.2.3)

13.2.2 Examples

1.

2.

3.

(since is periodic with period ).

4.

13.2.3 Theorem (Integration by the use of the path)

Let be a piecewise smooth path, represented by where Let


b 
be a continuous function on Then ∫ f ( z ) dz = ∫ f ( z ( t ) ) z ( t ) dt , (13.2.4)
C a

where

3
123 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

Proof: The LHS of (13.2.4) is given by (13.2.2) in terms of real line integrals,
and we show that the RHS of (13.2.4) also equals (13.2.2). We have
hence We simply write for and for We
also have and
Consequently, in (13.2.4)

= ∫ [udx − vdy + i(udy + vdx)]


C

= ∫ ( udx − vdy ) + i ∫ ( udy + vdx ).


C C

13.2.4 Examples
dz
1. ∫
C
z
= 2π i, where C is a unit circle, counter clockwise.

Solution: (representation of unit circle)

Thus from (13.2.4), we get


2π 2π
1
∫C = ∫ e .ie= ∫ dz 2π i.
− it
dz it
dt i =
z 0 0

2. m is an integer, is a constant. is circle of radius ρ with


center at counter clockwise.

Solution: can be represented in the form

4
124 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

Then .
2π 2π

∫ ( z − z=
) dz ∫ ρ i ( m +1)t
ρ e dt i ρ ∫e
m m +1
0 e i=
m imt it
dt
C 0 0

When so that the integral equals


For the two integrals vanish. Hence

2π i, m = −1
∫ (z − z ) =
m

m ≠ −1 and integer.
0
C 0,

3. Integrate from 0 to
(a) along , straight line joining origin to 1+2i
(b) along containing of and , straight lines from origin to 1 and 1 to 1
+2i.

Solution:

(a)

1
1
∫0t (1 + 2i ) dt =
∫C Re z dz = 2
+ i.
3

(b) Along

Along

1 2
1
Hence ∫
C
∫ f ( z ) dz +
f ( z ) dz =
C1 C2
∫ ∫0 t dt + ∫0 i dt =
f ( z ) dz =
2
+ 2i .

5
125 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

Thus the integral is dependent on the path.

13.3 Bounds for the Absolute Value of the Integrals

13.3.1 ML- inequality

∫ f ( z ) dz ≤ ML, where L is the length of


C
and M a constant such that

everywhere on

Proof:

Now is the length of the chord whose endpoints are and Hence
the sum on the right represents the length L* of the broken line of chords whose
endpoints are If n approaches infinity such that max
and so max tends to zero, then L* approaches the length L of the curve C,
by the definition of the length of the curve. This proves the ML- inequality.

13.3.2 Examples

∫ Re ( z ) dz, where
2
1. Evaluate is from 0 to represents
C

(a) a line segment joining the points (0,0) and (2,4),

(b) x-axis from 0 to 2, and then vertical line to ,

(c) parabola .

Solution: (a) Equation of is

∫ f ( z (t )) z′(t ) dt =−
Hence, we obtain I = ∫ ( 3t ) (1 + 2i) dt =
−8(1 + 2i ) .
2

C 0

6
126 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

(b) is
is
For
For

Hence, we obtain
= ∫ f ( z ( t ) ) z ( t ) dt + ∫ f ( z ( t ) ) z ( t ) dt
' '
I
C1 C2

(c) The parametric form of the curve can be written as

So z ' ( t ) = 1 + 2it , and

=(

2
Hence I = ∫ f ( z ( t ) ) z ' ( t ) dt = ∫ ( t 2 − t 4 ) (1 + 2it ) dt
C 0

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications,
McGraw-Hill, Inc., New York.

7
127 WhatsApp: +91 7900900676 www.AgriMoon.Com
Line Integral in the Complex Plane

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

8
128 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module2: Complex Variables

Lesson 14

Cauchy’s Integral Theorem and Cauchy’s Integral Formula

14.1 Cauchy’s Integral Theorem


A simple closed path is a closed path that does not intersect or touch itself. A
simply connected domain D in the complex plane is a domain such that every
simple closed path in D enclosed only points of D. A domain that is not simply
connected is called multiply connected.

14.1.1 Theorem (Cauchy’s Integral Theorem)

If is analytic in a simply connected domain D , then for every simple

closed path C in D , ∫ f ( z ) dz = 0
C
(14.1.1)

Proof: We have from (13.2.2),

∫ f ( z ) dz= ∫ ( udx − vdy ) + i ∫ ( udy + vdx ).


C C C

Since is analytic in D , u and v have continuous partial derivatives in D .


Hence by Green’s Theorem

 ∂v ∂u 
∫ (udx − vdy) = ∫∫  − ∂x − ∂y  dxdy,
C R

where R is the region bounded by C . By Cauchy-Riemann condition

the RHS vanishes. Similarly the second integral also vanishes.

129 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

14.1.2 Examples

∫ = ∫ c os= ∫ z= 0, =n 0,1, 2, …
z n
1. e dz 0, zdz 0, dz for any closed path C as
C C C

these are all entire functions.

2. ∫ sec z dz = 0,
C
C is the unit circle, as has singularities at

outside the unit circle.

1
3. ∫ z
C
2
+4
dz = 0, C is unit circle, are outside the unit circle.


=
4. ∫ z dz ∫= C : z ( t ) eit is the unit circle. Here z is no analytic.
e − it ieit dt 2π i,=
C 0


dz 1
5. ∫ 2
= ∫=
e .ie dt −2 it it
0, C is the unit circle taken counter clockwise. is not
C
z 0
z2

analytic at z = 0 .

1
6. ∫ z dz = 2π i,
C
C is the unit circle taken counter clockwise.

14.1.3 Theorem (Independence of Path): If is analytic in a simply


connected domain D, then the integral of is independent of the path in D.
Proof: Let and be any points in D. Consider two paths and in D
from to without further common points. Let be the path with
orientation reversed. Integrate from over to and over back to .
This is a simple closed path, and Cauchy’s theorem applies under our
assumptions and gives zero:

∫f dz + ∫ f dz =
0,
C1 C2*

⇒ ∫f − ∫ f dz =
dz = ∫ f dz .
C1 C2* C2

130 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

This proves the theorem for paths that have only the endpoints in common. For
paths with finitely many further common points the above argument is applied
to each loop.

14.2 Principle of Deformation of Path

The idea is related to path independence. We can imagine that path was
obtained from by continuously moving (with ends fixed) until it coincides
with . As long as our deforming path always contains only points at which
is analytic, the integral retains the same value. This is called the principle
of deformation of path.

14.2.1 Theorem (Existence of Indefinite Integral)

If is analytic in a simply connected domain D, then there exists an


indefinite integral of in D, thus which is analytic in D,
and for all paths in D joining any two points and in D, the integral of
from to can be evaluated by

Proof: Since f is analytic in , the line integral of from any in D to any z


in D is independent of path in D. We keep fixed. Then this integral becomes
a function of z, say F ( z ).

131 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

Now

where the path of integration from to may be selected as a line


segment.

Since we can write So

Since f is continuous at z, for each positive


whenever Choosing we have

F ( z + ∆z ) − F ( z ) 1
− f (z) < ε ∆z =ε
∆z ∆z

F ( z + ∆z ) − F ( z )
that is, lim = f ( z)
∆z →0 ∆z
or,

Since z is arbitrary, F is analytic in D.


Further if then is constant in D. That is two
independent integrals differ by a constant.

14.3 Cauchy’s Theorem for Multiply Connected Domains

Consider a doubly connected domain D with outer boundary curve and inner
curve . If f is analytic in any domain D* that contains D and its boundary

curves, then both integrals being taken counter

clockwise (or clockwise, full interior of may not belong to D*.

132 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

In general: let

(a) C be a simple closed curve (counter clockwise)


(b) are simple closed curves (all in counter clockwise directions)
and interior to C and whose interiors have no points in common.

14.3.1 Theorem: Let C and be simply closed curves as in (a) and (b).
If a function f is analytic throughout the closed region D. Then

∫ f ( z ) dz = ∑ ∫ f ( z ) dz.
C k =1 Cn

As a consequence of the above results we have the following important


observation:
2π i, m = −1
∫ − =
m
( z z ) 
m ≠ −1 and integer,
0
C 0,

for counter-clockwise integration around any simple closed path containing


in its interior.

14.3.2 Examples

∫ e dz = 0, C is unit circle, (Cauchy’s Theorem is applicable), as


(− z )
2
1. is
C

analytic in the given domain.

it 2π

1
2. ∫ = dz ∫=
ie dtit
e= | 0. Here Cauchy’s Theorem is not applicable.
C
| z |2 0
0

1 1 1 1
3. ∫ 2 z −=
C
1
dz ∫
 =
2 C (z − 1 )
dz =
2
.2π i π i, C is unit circle, (Cauchy’s Theorem is

2
applicable)

133 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

dz
4. ∫ z − 3i = 2π i, C is the circle
C
as 3i is inside this circle.

ez
5. C∫ z dz = 0 (using Cauchy’s Theorem for doubly connected domain) C is a
circle counter clockwise and clockwise.

6. is upper semi-circle of clockwise.

is lower semi-circle counter-clockwise.

0 2π
1 ieit 1 ieit
I1 = ∫C z dz = ∫ it dt = −π i
π e
and=I 2 ∫=dz
C2
z ∫
π
=
eit
dt π i
1

and are not same, i.e., principle of deformation of paths is not applicable
since the curve cannot be continuously deformed into without passing
through z=0 at which is not analytic.

dz
7. I = ∫ where C is any rectangle containing the points z = 0 and z =  2
C
z ( z + 2)

inside it.

Solution: Enclose points z = 0 and z =  2 inside circles and respectively


that do not intersect. Then applying Cauchy’s integral theorem for triply
connected domains, we get

dz dz dz
∫=
C
z ( z + 2) ∫ z ( z + 2 ) ∫ z ( z + 2 )
C1
+
C2

1  dz dz dz dz  1
= ∫ − ∫ +∫ −∫  = (2π i − 0 + 0 − 2π i ) =0.
2  C1 z C1 z + 2 C2 z C2 z + 2  2

134 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

14.4 Cauchy’s Integral Formula

14.4.1 Theorem: Let be analytic in a simply connected domain D. Then


for any point in D and any simple closed path C in D that encloses

1 f ( z)
f ( z0 ) = ∫
 dz ,
2π i C z − z0

(C is taken counter clockwise direction.)

14.4.2 Examples

2π ie , for any C which has z0 = 2 as interior point


2
ez
=
1. I ∫ z − 2 dz
C
= 
0 , for any C which has z0 = 2 as exterior point

dz
=2. I ∫=
C
2− z
, C: z 1

zz 1
Now 2 − z = 2 − = 2 − on C . Hence
z z
zdz 1 z 1 1 πi
=I ∫=
C
2z −1 ∫
 =
2 C z− 1
dz
2
=.2π i.
2 2
.
2

z2 +1
=4. I C∫ z (2 z − 1) dz, C : z 1 .
=

The integrand is not analytic at and . We write

z2 +1 z2 +1 1  πi
=I ∫ − ∫
1 C z
=dz 2π i  + 1 − 2π=
 4 
i.1
2
.
C z−
2

14.4.3 Theorem (Derivatives of Analytic Function)

135 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

If is analytic in a domain D, then it has derivatives of all orders in D, which


are then analytic functions in D. The values of these derivatives at a point in
D are given by

1 f ( z)
f ' ( z0 ) = ∫
 dz
2π i C ( z − z0 ) 2

and in general

n! f ( z)
f ( n ) ( z=
0) ∫

2π i C ( z − z0 ) n +1
dz , =
n 1, 2, …

Here C is any simple closed path in C that encloses and whose interior is a
subset of D.

14.4.4 Examples
cos z
1. ∫ ( z − π i)
C
2
2π i (cos z ) ' |z =π i =
dz = −2π i sin π i =
2π sin h(π ).

2. For any curve C for which 1 lies inside and outside

ez d  ez 
∫ ( z − 1) (
C
2
z2 + 4
dz = 2π i  2
)

dz  z + 4  z =1

 e z ( z 2 + 4) − e z .2 z  6eπ i
πi 
2=  .
 ( z + 4)
2 2
 z =1 25

14.4.5 Cauchy’s Inequality: Let be analytic within and on

and on C, then

136 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

14.4.6 Liouville’s Theorem: If an analytic function is bounded for all


values of z in the complex plane, then must be a constant.

Proof: Let By Cauchy’s inequality


for any r.

Taking we get Since is also arbitrary, So


must be a constant.

14.4.7 Maximum Modulus Principle: If a function f is analytic and not


constant in a given domain D, then has no maximum value in D.

14.4.8 Corollary: Suppose that a function f is continuous in a closed and


bounded region R and that it is analytic and not constant in the interior of .
Then the maximum value of in , which is always reached, occurs
somewhere on the boundary of and never in the interior.

14.4.9 Examples
dz
=1. I ∫ ( z
C
2
=
+ 4) 2
, C :| z − i | 2

The integrand is not analytic at The point lies inside the domain
but lies outside it. So

dz f ( z ) dz 1
=I ∫=
C
( z − 2i ) ( z + 2i )
2 ∫ ( z − 2=
2
C
i)
, where f ( z )
2
( z + 2i ) 2

π
π i f ′(2i )
= 2=
16

137 WhatsApp: +91 7900900676 www.AgriMoon.Com


Cauchy’s Integral Theorem and Cauchy’s Integral Formula

(3 z 4 + 5 z 2 + 2) dz
2. I = ∫ , where C is any simple closed curve containing the
C
( z + 1) 4

point z = 1 inside its interior.

2π i  d 3 
I =  3 (3 z 4 + 5 z 2 + 2)  −24 π i.
=
3!  dz  z = −1

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

138 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 2: Complex Variables

Lesson 15

Infinite Series, Convergence Tests, Uniform Convergence

15.1 Infinite Series

Let be a set of real or complex numbers. Then

(15.1.1)

is an infinite series of numbers and is its th term. The partial sum of the
series is defined by

The remainder of the series (after the nth term) is defined as

The series (15.1.1) is said to be convergent if the sequence of the partial


sums is convergent. The limit S of the sequence is called the sum of the
series.

15.1.1 Theorem: A necessary condition for a series to be convergent is

Proof: Suppose that the series is convergent. Then

139 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Since we get

15.1.2 Theorem (Cauchy’s criterion for convergence): The series is


convergent if and only if for any given real positive number there exists a
natural number such that

for all

15.1.3 Theorem: The series , where of complex numbers


converges to if and only if the series of the real parts converges to
and the series of the imaginary parts converges to .

15.1.4 Geometric Series

Consider the geometric series where r is any real number. We now find
the conditions for the convergence of this series.

First consider the sequence of partial sums

or,

Therefore,

140 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Note that when hence the geometric series converges to


, when

When as So the geometric series diverges in this case.

For each term in the series is unity. Hence the partial sum as
. Thus the series is divergent in this case.

For the terms in the series are +1 and 1 alternatively. Now the sequence
has two subsequences with limits 0 and 1. Hence in this case, the sequence
does not converge and consequently the series does not converge.

15.1.5 Example: Using the above argument, one can show that the series
converges to if Here is complex variable.

15.1.6 Harmonic Series:

Consider the harmonic series We show that this series is divergent.

The sequence of partial sums is defined by

and

Note that when Thus . This shows that for ,

141 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

one cannot satisfy the condition . This violates


the condition for Cauchy convergence. By Theorem 15.1.2 we conclude that the
harmonic series is not convergent.

15.2 Tests for Convergence

The following results are frequently used to test the convergence of an infinite
series.

15.2.1 Comparison Test: Let and be two real series with


positive terms and for any real positive k and Then,

(i) convergence of the series imply convergence of the series


(ii) divergence of the series implies the divergence of the series
.

15.2.2 Limit comparison test and be two real series with


positive terms and

Then, both the series and converge or diverge together.


1
15.2.3 Theorem: The series ∑n
n =1
p
, p > 0 is convergent if and divergent if

Proof: We write
1 1 1 1  1 1   1 1 1 1 
S n = p + p +…+ p = p +  p + p  +  p + p + p + p  +…
1 2 n 1 2 3  4 5 6 7 
2
1 2 4 1  1   1 
< p + p + p +…
= p +  p −1  +  p −1  +…
1 2 4 1 2  2 

142 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

The last series is a geometric series with common ratio Therefore, the

series is convergent if For Since the

1
harmonic series ∑ n is divergent, applying the comparison test, the series

1
∑n
n =1
p
is also divergent for


1
15.2.4 Example: Prove that the series ∑ n ( n + 1)
n −1
is convergent. Also find its

sum.

Solution: We can write

Now so the given series is convergent and the sum of the series is
1.

15.2.5 D’ Alembert’s test (Ratio test): Let be a real series of positive


terms or a complex series. Let

Then, the series is (i) convergent if c and (ii) divergent if . The


ratio test does not give any information on convergence of the series when

15.2.6 Examples: Apply ratio test to the following series


(i) (ii) (iii) .

Solution:

143 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

(i) . Hence

Therefore, the series is convergent when and divergent when


The test fails when

(ii) . So

So the series is convergent for all .

(iii) Here and so

So the series is divergent for all .

15.2.6 Examples When Ratio Test Fails

(i) The series is divergent. However,

(ii) The series is convergent. However,

15.2.7 Cauchy’s Root Test: Let be a real series of positive terms or a


complex series. Let Then, the series is (i) convergent if
and (ii) divergent if c . The root test does not give any information on
the convergence if

15.2.7 Example: Let . Using the Cauchy root test we

have

144 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Now,

So the series is convergent.

15.3 Alternating Series

A real series in which the terms are alternatively positive and negative is called
an alternative series and is of the form The following
theorem gives a sufficient condition for the convergence of an alternative series.

15.3.1 Theorem (Leibnitz theorem): Let be an


alternative series satisfying the following conditions

(i) The sequence { } is non-increasing, that is for all n, and


(ii)

Then, the series is convergent.

15.3.2 Examples: Using Leibnitz Theorem, we can conclude that the following
series are convergent:

1 1
(i) ∑ (−1) n

n
(ii) ∑ (−1) n

15.3.3 Absolutely Convergent Series

145 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Let be an arbitrary series of real or complex numbers. If the series of


positive terms is convergent, then we say that the series is absolutely
convergent. If the series is convergent but is divergent, then the
series is called conditionally convergent.

1
15.3.4 Example: The series ∑ (−1) n

n
is conditionally convergent.

15.4 Uniform Convergence of the Series of Functions

Let be a series of single-valued complex functions defined in


a domain D (or a series of real functions defined on a closed interval). Let
be the nth partial sum. If a point in D, the
sequence { } of partial sums converges to then we say that the series
converges to This convergence is called pointwise convergence of
the series .

We say that the series converges uniformly to , if, for a given real
positive number there exists a natural number independent of z, but
dependent on such that

for

Thus, a series which is uniformly convergent is also pointwise convergent.


Weierstrass’s M-test gives sufficient conditions for the uniform convergence of
a series.

146 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

15.4.1 Theorem: (Weierstrass’s M-test) Let be an infinite series


defined in some domain D of the complex plane and let { } be a sequence of
positive terms, where | for all n and for all z in D. If the series
is convergent, then the series is uniformly and absolutely convergent.

15.4.2 Example: We discuss the uniform convergence of the series on

the disk

Note that

for all z in

Since, the series is convergent, the given series is uniformly convergent.

15.4.3 Example: We show that the geometric series is

(i) uniformly convergent in any closed disk


(ii) not uniformly convergent in the open disk

We have

and

In the closed disk , we have

or

147 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Then

Using the right hand side can be made as small as necessary by choosing
n large enough.

Hence, for and for all z. This shows that the given series
is uniformly convergent.

If we consider the open disk we can find a z for a given n and a real
number (no matter how large) such that

by taking sufficiently close to 1. Thus, for no N we can have


for every in the open disk . Thus N depends both on
and . So the series is not uniformly convergent.

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987) Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

148 WhatsApp: +91 7900900676 www.AgriMoon.Com


Infinite Series, Convergence Tests, Uniform Convergence

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

149 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module2: Complex Variables

Lesson 16

Power Series

16.1 Introduction

A power series in powers of is a series of the form

(16.1.1)

where is a complex variable and are complex (or real) constants,


called the coefficients of the series, and is a complex (real) constant, called
the center of the series.

If , we obtain a power series in the powers of :

16.1.1 Examples

1. It can be seen easily that the series , converges


absolutely if and diverges for

2. The series is absolutely convergent for every In

fact, by the ratio test, for any fixed

150 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

16.1.2 Theorem: (Convergence of a Power Series)

(a) Every power series (16.1.1) converges at .


(b) If (16.1.1) converges at , it converges absolutely for every
closer to than , i.e.,
(c) If (16.1.1) diverges at then it diverges at every further away from
than .

Proof:

(a) The proof follows by observing that for the series reduces to .

(b) Since is convergent, the necessary condition for the


convergence of a series implies that the n-th term as
Hence the terms are bounded. So there exists M such that
for all n. Thus we have

Now for is a convergent geometric series with common ratio .


Therefore converges absolutely for

(c) The proof follows assuming contrary to assumption.

16.2 Radius of Convergence

Let be the radius of the circle with center at that contains all points at
which the series is convergent and the series is divergent at all points outside it.

151 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

Then is called the circle of convergence and is the radius of


convergence. The power series may or may not converge on the boundary. If
the series is convergent only at and if the series
converges for all .

16.2.1 Examples

1. The series converges for | z | ≤ 1 . Here R = 1 .

2. The series converges for but diverges for z = 1 . Here R = 1.

3. The series diverges for Here R = 1.

16.2.2 Theorem (Radius of Convergence) Let Then the

radius of convergence of the power series is . (The case

and is included). (Cauchy-Hadamad formula)

Proof: By the ratio test, consider

If then for all the power series will converge and so If

then for and all (for some ). Hence the

series will not converge for any . So In all other cases, the series will

converges for or and diverges for or

. Hence is the radius of convergence.

152 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

16.2.3 Example: Consider the series

Hence . So the power series converges for and diverges for

16.2.4 Remark: We can also take where .

16.2.5 Examples

1. For the series , we find

2. Consider the series . Here


( n + 1)!=
2n n +1
→∞.
n +1
2 n! 2

Hence , that is, the series converges only at

3. For the series , note that Hence

Let L = lim ( nln ln n ) n . So


1
4. Take the series

= =
log log L lim
1
lim ln ln nln ln n
n
( 1
)
lim lim (ln(ln n) 2 = 0 .
n

Hence L= e=
0
1, or , R= 1.

5. For the series , let denote the term.

153 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

Then .

Hence the series converges for | z | < 2 and diverges for | z | > 2 .

6. is a positive integer. Here

Hence

7. ,

for

Hence the series converges for

8.

as .

Hence

154 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

9.

10. .

16.3 Results on Power Series

If any given power series has a nonzero radius of convergence


we write its sum a function ;

We say that is represented by the power series or it is developed in the


power series.

16.3.1 Theorem: The function in (1) with is continuous at

Proof: . Now converges absolutely for for any .


Hence the series
with converges.

Let Then for

For Hence is continuous at

155 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

16.3.2 Theorem: Suppose that the power series and both


converge for and have the same sum for all these Then these
series are identical, i.e.,

Proof: Given .
Taking we get

Assume Then

Dividing both sides by and then taking , we get


Hence by Mathematical induction and so the two power series are
identical.

Term by term addition or subtraction of two power series with radii of


convergence and yields a power series with radius of convergence at
least equal to the smaller of and

Term by term multiplication of two power series

and

means the multiplication of each term of the first series by each term of the
second series and the collection of like power of . This gives a power series
and is given by

156 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

This power series converges absolutely for each within the circle of
convergence of each of the two given series and has the sum
.

16.3.3 Theorem: The derived series of a power series has the same radius of
convergence as the original series.

Proof: have the same radius of convergence

The series after differentiation is

Now

16.3.4 Example: Consider the series

Then

157 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

So the series converges for and diverges for

16.3.5 Theorem: The power series obtained

by integrating term by term has the same radius of


convergence as the orginal series.

16.3.6 Theorem: A power series with a nonzero radius of convergence R


represents an analytic function at every point interior to its circle of
convergence. The derivatives of this function are obtained by differentiating the
original series term by term. All the series thus obtained have the same radius of
convergence as the original series. Hence each of them is an analytic function.

Proof: Consider the two series


Let have the radius of convergence We will show that the function is
analytic and has derivative in the interior of the circle of convergence.

158 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

The bracket contains terms, and the largest coefficient is . For


the absolute value of the series is less than or
equal to

The series is the second derived series of at


and converges absolutely. Let be the sum then
f ( z + ∆z ) − f ( z )
− f1 ( z ) ≤ ∆z K ( R0 ) → 0 as ∆z → 0.
∆z

This completes the proof of the theorem.

16.3.7 Examples:

1. . Differentiating twice, we get which is

convergent for and is divergent for .

2. . Differentiating, we get whose radius of

convergence is .

3. . Consider the series . Differentiating this

twice and multiplying by , we get the original series. Now clearly the
radius of convergence of this new series is 5.

159 WhatsApp: +91 7900900676 www.AgriMoon.Com


Power Series

4. . Differentiating the series term by term times and

multiplying by we get the original series. Now the radius of convergence

of is 5.

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.

160 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 2: Complex Variables

Lesson 17

Taylor Series and Laurent Series

17.1 Taylor Series

The following theorem shows that a Taylor series can be found for an analytic
function.

17.1.1 Theorem: Let be analytic in a domain and let be any


point in . Then there is a unique Taylor series

where

and contains . This representation is valid in the largest open disk with
center in which is analytic. The remainder of (17.1.1) can be
represented as:

The coefficient satisfy the inequality , where is the maximum of

on a circle in whose interior is also in .


Proof: By Cauchy’s integral formula, we have

161 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

for lying inside . Now

This expansion is valid for , we can do so as in on and we

choose inside the circle of radius with center , so that .

Thus

Using (17.1.5) in (17.1.4), we get

162 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

This is Taylor’s formula with remainder term.

Since analytic functions have derivatives of all orders, we can take in (17.1.6)
as large as possible. If we let , we get (17.1.1). Clearly, (17.1.1) will
convergence and represent if and only if

Since is on and is inside Since is analytic inside

and on it is bounded, and so is the function , i.e.,

Also has the radius and the length


Hence by the ML-inequality, we get from (17.1.3)

163 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

Now as lies inside . Hence the term on the right as .


Hence the convergence of the Taylor series is proved. Uniqueness follows since
power series have unique representation of functions.

Finally

17.1.3 Maclaurin’s Series

A Maclaurin’s series is a Taylor series with center That is,

A point at which is not differentiable but such that every disk with
center contains points at which is differentiable. We say that is
singular at or has a singularity at .

17.1.3 Theorem: A power series with nonzero radius of convergence is the


Taylor series of its sum.

Proof: Given the power series

164 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

Then Now

Thus Further,

Thus

In general, With these coefficients the given series becomes


the Taylor’s series of .

17.1.4 Remark: Complex analytic functions have derivatives of all orders and
they can always be represented by power series of the from (17.1.1). This is not
true in general for real valued functions. In fact, there are real functions for
which derivatives of all orders exist but it cannot be represented by a power
series.

Consider for example, ,

This function cannot be represented by a Maclaurin’s series since all its


derivatives vanish at zero.

17.1.5 Examples:

1. . Then . Hence the Maclaurin’s

expansion of is the geometric series

165 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

is singular at This point lies on the circle of convergence.

2.

3.

4.

5.

6.

7.

8.

9. =

10. To find Maclaurin’s series for ,

Integrating the power series term by term:

representing the principal value of

166 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

17.2 Laurent Series

The following theorem gives the conditions for the existence of a Laurent’s
series.

17.2.1 Theorem: If is analytic on two concentric circles and with


center and in the annulus between them, then can be represented by the
Laurent series

consisting of nonnegative powers and the principal part (the negative powers).
The coefficients of this Laurent series are given by the integrals

taken counter clockwise around any simple closed path that lies in the annulus
and encircles the inner circle.

This series converges and represents in the open annulus obtained from the
given annulus by continuously increasing the outer circle and decreasing
until each of the circles reaches a point where is singular.

167 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

In the special case that is the only singular point of inside , this
circle can be shrunk to the point , giving convergence in a disk except at the
center.

Proof: By Cauchy’s integral formula for multiply connected domains, we get

where is any point in the given annulus and both and are counter-
clockwise. Now integral is exactly the Taylor series so that

with coefficients

Here can be replaced by by the principal of deformation of path as is a


point not in the annulus.

To get the expansion for we note that for on and is the

annulus.
Now

168 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

Multiplying by and integrating over on both the sides, we get

where,

The integral over can be replaced by integrals over .

We see that on the right, the power is multiplied by as given in

(17.2.2). This proves Laurent’s theorem provided

Now if the principal part consists of finitely many terms only, then there is

nothing to prove. Otherwise, we note that in is bounded in the

absolute value, say on because is analytic in the

169 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

annulus and on , and lies on and outside, so that From


this and the ML-inequality, we get

The first series in (17.2.1) is a Taylor series and hence it converges in


the disk with center whose radius equals the distance of that singularity of
which is closet to Also, must be singular at all points outside
where is singular.

The second series in (17.2.1) representing is a power series in .

Let the given annulus be where and are radii of and

respectively. Then . Hence this power series in must converge

at least in the disk . This corresponds to the exterior of

, so that is analytic for all in the exterior E of the circle with center
and radius equal to the maximum distance from to the singularities of
inside The domain common to and is the open annulus.

17.2.2 Remark: The Laurent series of a given analytic function is unique


in its annulus of existence. However, may have different Laurent series in
two annulus with the same center.

170 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

17.2.3 Examples:

1. with center 0.

for | z | > 0 . Hence the annulus is the whole complex plane except the
origin.

2.

3.

and

valid for

4. center 0

From the previous geometric series, we get by multiplying ,

5. center 0

171 WhatsApp: +91 7900900676 www.AgriMoon.Com


Taylor Series and Laurent Series

for (first for and second for

for

We can also write

for (first for and second for )

for

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.


Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.



Module 2: Complex Variables

Lesson 18

Zeros and Singularities

18.1 Singular Points

A function f(z) is singular, or has a singularity, at a point z = z_0 if f(z) is not analytic at z = z_0 but every neighbourhood of z_0 contains points at which f(z) is analytic. Then we say that z = z_0 is a singular point of f(z). The point z = z_0 is called an isolated singularity of f(z) if z_0 has a neighbourhood without further singularities of f(z).

18.1.1 Example: The function tan(1/z) has a non-isolated singularity at z = 0: its poles z = 1/(π/2 + nπ) accumulate at the origin.

18.1.2 Example: The function tan z has isolated singularities at

z = ±π/2, ±3π/2, . . . etc.

18.2 Poles

Isolated singularities of f(z) at z = z_0 can be classified by the Laurent series

f(z) = Σ_{n=0}^{∞} a_n (z − z_0)^n + Σ_{n=1}^{∞} b_n/(z − z_0)^n        (18.2.1)

valid in an immediate neighbourhood of the singular point z_0, except at z_0 itself, that is, in a region of the form 0 < |z − z_0| < R. The sum of the first series is


analytic at z = z_0. The second series, containing the negative powers, is called the principal part of (18.2.1). If it has only finitely many terms, it is of the form

b_1/(z − z_0) + ⋯ + b_m/(z − z_0)^m,   b_m ≠ 0.

Then the singularity of f(z) at z = z_0 is called a pole, and m is called the order of the pole. Poles of the first order are called simple poles. If the principal part of (18.2.1) has infinitely many terms, we say that f(z) has an isolated essential singularity at z = z_0.

18.2.1 Examples:

1. The function f(z) = 1/(z(z − 2)^5) has a simple pole at z = 0 and a pole of fifth order at z = 2.

2. The function e^{1/z} = Σ_{n=0}^{∞} 1/(n! z^n) has an isolated essential singularity at z = 0.

3. The function sin(1/z) = Σ_{n=0}^{∞} (−1)^n/((2n + 1)! z^{2n+1}) has an isolated essential singularity at z = 0.

4.

has a pole of order four at


5. The function

f(z) = 1/(z^3 − z^4) = 1/z^3 + 1/z^2 + 1/z + 1 + z + ⋯   (valid for 0 < |z| < 1)
     = −1/z^4 − 1/z^5 − 1/z^6 − ⋯                         (valid for |z| > 1)

The first expansion shows that there is a pole of order 3 at z = 0. The second expansion has infinitely many terms of negative power. But it is no contradiction, as this latter expansion is valid for |z| > 1, not in a neighbourhood of z = 0.
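The order of an isolated singularity can be read off from a computed Laurent expansion. A short sympy sketch, using the function assumed in Example 5 and an essential singularity for contrast:

import sympy as sp

z, w = sp.symbols('z w')

# Pole of order 3 at z = 0: the principal part has finitely many terms
f = 1/(z**3 - z**4)
print(sp.series(f, z, 0, 2))   # z**(-3) + z**(-2) + 1/z + 1 + z + O(z**2)

# Essential singularity at z = 0: infinitely many negative powers;
# expand e**w about w = 0, then substitute w = 1/z
h = sp.series(sp.exp(w), w, 0, 5).removeO().subs(w, 1/z)
print(sp.expand(h))            # 1 + 1/z + 1/(2*z**2) + 1/(6*z**3) + 1/(24*z**4) + ...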

18.2.2 Theorem: If f(z) is analytic and has a pole at z = z_0, then |f(z)| → ∞ as z → z_0 in any manner.

18.2.3 Example: f(z) = 1/z has a pole at z = 0, and |f(z)| → ∞ as z → 0 in any manner.

18.2.4 Theorem (Picard's Theorem): If f(z) is analytic and has an isolated essential singularity at a point z_0, it takes on every value, with at most one exceptional value, in an arbitrarily small neighbourhood of z_0.

18.2.5 Example: The function f(z) = e^{1/z} has an isolated essential singularity at z = 0. It has no limit for approach along the imaginary axis; it becomes infinite if z → 0 through positive real values and approaches 0 if z → 0 through negative real values. It takes on any given value c = c_0 e^{iα} ≠ 0 in an arbitrarily small neighbourhood of z = 0. Letting z = r e^{iθ}, we must solve the equation

e^{1/z} = c_0 e^{iα}

for r and θ. Equating the absolute values and the arguments, we have

e^{(cos θ)/r} = c_0, i.e., (cos θ)/r = ln c_0, and (sin θ)/r = −α.

From these two equations and cos²θ + sin²θ = r²(ln²c_0 + α²) = 1, we obtain the formulae

r² = 1/(ln²c_0 + α²)   and   tan θ = −α/ln c_0.

Hence r can be made arbitrarily small by adding multiples of 2π to α, leaving c unaltered.
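These formulae are easy to check numerically: every solution of e^{1/z} = c has 1/z = ln c_0 + i(α + 2nπ), and increasing n shrinks |z| while the value of e^{1/z} stays equal to c. A small sketch (the target value c = 2 + i is an arbitrary choice):

import cmath, math

c = 2 + 1j                                # arbitrary nonzero target value
for n in (1, 10, 100):
    z = 1/(cmath.log(c) + 2j*math.pi*n)   # one solution of exp(1/z) = c
    print(n, abs(z), cmath.exp(1/z))      # |z| shrinks, the value stays (2+1j)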

18.2.6 Removable Singularity

We say that a function f(z) has a removable singularity at z = z_0 if f(z) is not analytic at z = z_0 but can be made analytic there by assigning a suitable value f(z_0). Such singularities are of no interest as they can be removed.

18.2.7 Example: The function f(z) = (sin z)/z becomes analytic at z = 0 if we define f(0) = 1.

18.3 Zeros

A zero of an analytic function f(z) in a domain D is a z = z_0 in D such that f(z_0) = 0. A zero has order n if not only f but also the derivatives f′, f″, . . . , f^{(n−1)} are all 0 at z = z_0 but f^{(n)}(z_0) ≠ 0. A first order zero is called a simple zero. For a second order zero, f(z_0) = f′(z_0) = 0 but f″(z_0) ≠ 0.


18.3.1 Examples:

1. The function 1 + z² has simple zeros at z = ±i.

2. The function (1 − z⁴)² has second-order zeros at z = ±1 and z = ±i.

3. The function (z − a)³ has a third-order zero at z = a.

4. The function e^z has no zeros.

5. The function sin z has simple zeros at z = 0, ±π, ±2π, . . . , and sin²z has second-order zeros at these points.

6. The function 1 − cos z has second-order zeros at z = 0, ±2π, ±4π, . . . .
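The order of a zero can be verified by differentiating until a nonzero value appears, exactly as in the definition above. A sympy sketch for the function 1 − cos z of Example 6:

import sympy as sp

z = sp.symbols('z')
f = 1 - sp.cos(z)             # expected: second-order zero at z = 0

d = f
for k in range(4):
    print(k, d.subs(z, 0))    # k-th derivative at 0: 0, 0, 1, 0
    d = sp.diff(d, z)

The first nonzero value appears at k = 2, so the zero at z = 0 has order 2.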

18.3.2 Taylor Series at a Zero

At an n-th order zero z = z_0 of f(z), the derivatives f′(z_0), . . . , f^{(n−1)}(z_0) are all 0 and f^{(n)}(z_0) ≠ 0. Therefore, the Taylor series is of the form

f(z) = a_n (z − z_0)^n + a_{n+1}(z − z_0)^{n+1} + ⋯ = (z − z_0)^n [a_n + a_{n+1}(z − z_0) + ⋯],  a_n ≠ 0.        (18.3.1)

Conversely, if f(z) has such a Taylor series, then it has an n-th order zero at z = z_0.

18.3.3 Theorem: The zeros of an analytic function f(z) (≢ 0) are isolated, i.e., each of them has a neighbourhood that contains no further zeros of f(z).

Proof: In (18.3.1), the factor (z − z_0)^n is zero only at z = z_0. The power series in the parenthesis represents an analytic function, say g(z). Now g(z_0) = a_n ≠ 0. Since g is also continuous, g(z) ≠ 0 in some neighbourhood of z_0. Hence f(z) ≠ 0 in some neighbourhood of z_0, except at z_0 itself.


18.3.4 Theorem: Let f(z) be analytic at z = z_0 and have a zero of n-th order at z = z_0. Then 1/f(z) has a pole of n-th order at z = z_0.

The same holds for h(z)/f(z) if h(z) is analytic at z = z_0 and h(z_0) ≠ 0.
18.3.5 Analytic or Singularity at Infinity

Infinity has been added to the complex plane, resulting in the extended complex plane. The extended complex plane can be mapped onto a sphere of diameter 1 touching the plane at z = 0. The image of a complex number A is the intersection of the sphere with the segment from A to the "north pole" N. The point ∞ has the image N.

The sphere representing the extended complex plane in this way is called the Riemann number sphere. The mapping of the sphere onto the plane is called stereographic projection with center N.

Thus for investigating a function f(z) for large |z|, we set z = 1/w and investigate f(z) = f(1/w) = g(w) in the neighbourhood of w = 0. We define f(z) to be analytic or singular at infinity if g(w) is analytic or singular at w = 0. We also define

g(0) = lim_{w → 0} g(w)

if this limit exists. We say that f(z) has an n-th order zero at infinity if f(1/w) has such a zero at w = 0. Similarly we define poles and essential singularities at infinity.


18.3.6 Examples:

1. The function f(z) = 1/z² is analytic at ∞. Since g(w) = f(1/w) = w² is analytic at w = 0 and has a second-order zero there, f(z) has a second-order zero at ∞.

2. The function f(z) = z³ is singular at ∞ and has a third-order pole there, since the function g(w) = f(1/w) = 1/w³ has such a pole at w = 0.

3. The function e^z has an essential singularity at ∞ since e^{1/w} has such a singularity at w = 0. Similarly, cos z and sin z have an essential singularity at ∞.

By Liouville's theorem a bounded entire function is constant. Hence a non-constant entire function must be unbounded, and so it has a singularity at ∞: a pole if it is a polynomial, or an essential singularity if it is not.

18.3.7 Meromorphic Function

Let f(z) be an analytic function whose only singularities in the finite plane are poles. Then f(z) is called a meromorphic function. Some examples of meromorphic functions are rational functions with nonconstant denominator and the trigonometric functions tan z, cot z, sec z and cosec z.

18.3.8 Examples:

1. Here z = 0 is a singular point of f(z). Now we can write


The principal part of the Laurent series is the single term in 1/z.

Hence, z = 0 is a simple pole.

2.

which is valid for .

Hence the singularity is a simple pole.

Alternatively, we can express

for


Hence the singularity is a simple pole.

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New


York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag,


New York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.

Ponnusamy, S. (2006). Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.



Module 2: Complex Variables

Lesson 19

Residue Theorem

19.1 Residues

If f(z) has a singularity at z = z_0 inside a simple closed curve C, but is otherwise analytic on C and inside C, then we can expand the function in a Laurent series as

f(z) = Σ_{n=0}^{∞} a_n (z − z_0)^n + b_1/(z − z_0) + b_2/(z − z_0)² + ⋯

This series converges for all points near z_0 (except at z = z_0 itself) in a domain of the form 0 < |z − z_0| < R.

Now the coefficient b_1 of the first negative power 1/(z − z_0) of this Laurent series is given by

b_1 = (1/2πi) ∮_C f(z) dz.

We define b_1 to be the residue of f(z) at z = z_0 and denote it by b_1 = Res_{z=z_0} f(z).

19.1.1 Examples:


1. We want to integrate f(z) = (sin z)/z⁴ around the unit circle C. Consider the Laurent series expansion

(sin z)/z⁴ = 1/z³ − 1/(3! z) + z/5! − ⋯

This is convergent for |z| > 0. Hence

∮_C (sin z)/z⁴ dz = 2πi b_1 = −2πi/3! = −πi/3.

2. Here we integrate f(z) = 1/(z³ − z⁴) clockwise around C: |z| = 1/2. The function has singularities at z = 0 and z = 1. However, z = 1 lies outside the circle C. So we can expand f(z) in a Laurent series at z = 0 as

1/(z³(1 − z)) = 1/z³ + 1/z² + 1/z + 1 + z + ⋯

Note that the residue is 1 and, taking the clockwise orientation into account, we get

∮_C dz/(z³ − z⁴) = −2πi Res_{z=0} f(z) = −2πi.

19.1.2 Residue at Simple Pole

For a simple pole at z = z_0, the Laurent series is

f(z) = b_1/(z − z_0) + a_0 + a_1(z − z_0) + a_2(z − z_0)² + ⋯

This implies

(z − z_0) f(z) = b_1 + a_0(z − z_0) + a_1(z − z_0)² + ⋯

So Res_{z=z_0} f(z) = b_1 = lim_{z → z_0} (z − z_0) f(z).

19.1.3 Example:

19.1.4 Remark: Suppose we have f(z) = p(z)/q(z), where p(z) and q(z) are analytic, p(z_0) ≠ 0, and q(z) has a simple zero at z_0, so that f(z) has a simple pole at z_0. By Taylor series, we find

q(z) = q′(z_0)(z − z_0) + (q″(z_0)/2!)(z − z_0)² + ⋯

So

Res_{z=z_0} f(z) = lim_{z → z_0} (z − z_0) p(z)/q(z) = p(z_0)/q′(z_0).
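A quick symbolic check of this formula; the function f(z) = e^z/(z² + 1), with simple poles at ±i, is an illustrative choice rather than one of the lesson's examples:

import sympy as sp

z = sp.symbols('z')
p, q = sp.exp(z), z**2 + 1
f = p/q                                          # simple poles at z = i and z = -i

via_formula = (p/sp.diff(q, z)).subs(z, sp.I)    # p(z0)/q'(z0) at z0 = i
via_residue = sp.residue(f, z, sp.I)             # direct residue computation
print(sp.simplify(via_formula - via_residue))    # 0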

19.2 Residue at Pole of Any Order

If f(z) has a pole of order m at z = z_0, then its Laurent series can be written as

f(z) = Σ_{n=0}^{∞} a_n (z − z_0)^n + b_1/(z − z_0) + b_2/(z − z_0)² + ⋯ + b_m/(z − z_0)^m,

where b_m ≠ 0.


The residue of f(z) at z_0 is b_1. If we multiply both sides by (z − z_0)^m, we get

(z − z_0)^m f(z) = b_m + b_{m−1}(z − z_0) + ⋯ + b_1(z − z_0)^{m−1} + a_0(z − z_0)^m + ⋯

The residue b_1 of f(z) at z_0 is now the coefficient of the power (z − z_0)^{m−1} in the Taylor series of the function g(z) = (z − z_0)^m f(z) with center at z_0. So

b_1 = g^{(m−1)}(z_0)/(m − 1)!

(by Taylor's Theorem). Hence, if f(z) has a pole of the m-th order at z = z_0, the residue is given by

Res_{z=z_0} f(z) = (1/(m − 1)!) lim_{z → z_0} d^{m−1}/dz^{m−1} [ (z − z_0)^m f(z) ].

19.2.1 Example: The function f(z) = 50z/((z − 1)²(z + 4)) has a pole of second order at z = 1. So

Res_{z=1} f(z) = lim_{z → 1} d/dz [50z/(z + 4)] = lim_{z → 1} 200/(z + 4)² = 8.
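The derivative formula is easy to verify with sympy; the sketch below uses the function assumed in the example above:

import sympy as sp

z = sp.symbols('z')
f = 50*z/((z - 1)**2*(z + 4))                    # second-order pole at z = 1

m = 2
g = sp.diff(sp.simplify((z - 1)**m*f), z, m - 1) # d^{m-1}/dz^{m-1} [(z-z0)^m f(z)]
res = sp.limit(g, z, 1)/sp.factorial(m - 1)
print(res, sp.residue(f, z, 1))                  # both print 8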


19.2.2 Residue Theorem: Let the function f(z) be analytic inside a simple closed path C and on C, except for finitely many singular points z_1, z_2, . . . , z_k inside C. Then the integral of f(z) taken counterclockwise around C is given by

∮_C f(z) dz = 2πi Σ_{j=1}^{k} Res_{z=z_j} f(z).

Proof: We enclose each of the singular points z_j in a circle C_j with radius small enough that these circles and C are all separated. Then f(z) is analytic in the domain D bounded by C and C_1, . . . , C_k and on the entire boundary of D. From Cauchy's integral theorem for multiply connected domains, we thus have

∮_C f(z) dz = Σ_{j=1}^{k} ∮_{C_j} f(z) dz,

all integrals taken counterclockwise, and by the definition of residue each term on the right equals 2πi Res_{z=z_j} f(z).

19.2.3 Examples:

1. Find ∮_C (4 − 3z)/(z² − z) dz, where C is a simple closed path such that

(a) it encloses 0 and 1,

(b) 0 is inside and 1 is outside,


(c) 0 and 1 are outside,

(d) 1 is inside, 0 is outside.

The integrand has simple poles at z = 0 and z = 1 with residues

Res_{z=0} (4 − 3z)/(z(z − 1)) = −4,   Res_{z=1} (4 − 3z)/(z(z − 1)) = 1.

Hence (a) 2πi(−4 + 1) = −6πi, (b) −8πi, (c) 0, (d) 2πi.
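The four cases differ only in which residues are enclosed; a sympy sketch assembling each answer from the residues found above:

import sympy as sp

z = sp.symbols('z')
f = (4 - 3*z)/(z**2 - z)                    # simple poles at z = 0 and z = 1

r0 = sp.residue(f, z, 0)                    # -4
r1 = sp.residue(f, z, 1)                    #  1
for label, enclosed in [('(a)', r0 + r1), ('(b)', r0), ('(c)', 0), ('(d)', r1)]:
    print(label, 2*sp.pi*sp.I*enclosed)     # -6*I*pi, -8*I*pi, 0, 2*I*pi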

2.

3. Evaluate ∮_C ( z e^{πz}/(z⁴ − 16) + z e^{π/z} ) dz, where C is the ellipse 9x² + y² = 9, counterclockwise.

The first term in the integrand has simple poles at z = ±2 and z = ±2i. The poles at z = ±2 lie outside the curve C. By the formula p(z_0)/q′(z_0), the residues of the first term at z = ±2i are

z e^{πz}/(4z³) |_{z=±2i} = e^{πz}/(4z²) |_{z=±2i} = −1/16 each,

so the first part of the integral is 2πi(−1/8) = −πi/4.

For the second term, the Laurent series

z e^{π/z} = z + π + π²/(2! z) + ⋯


So Res_{z=0} z e^{π/z} = π²/2, and then the second term of the integral is 2πi · π²/2 = π³i.

Hence ∮_C ( z e^{πz}/(z⁴ − 16) + z e^{π/z} ) dz = π³i − πi/4.

4. Evaluate the integral , . We can write

5. Evaluate . The function has simple poles at

of which only lie inside the contour. So


6. Evaluate , .

The integrand has simple poles at and they all lie inside the contour.
Now for pole at

So,

Suggested Readings

Ahlfors, L.V. (1979). Complex Analysis, McGraw-Hill, Inc., New York.

Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New York.

Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.

Conway, J.B. (1993). Functions of One Complex Variable, Springer-Verlag, New


York.

Fisher, S.D. (1986). Complex Variables, Wadsworth, Inc., Belmont, CA.

Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics,


Narosa Publishing House, New Delhi.


Ponnusamy, S. (2006) Foundations of Complex Analysis, Alpha Science


International Ltd, United Kingdom.



Module 3: Fourier Series and Fourier Transform

Lesson 20

Introduction

Before we start the discussion on the Fourier transform it is important to discuss Fourier series first, because they give a pathway to understanding the Fourier transform. Fourier series have a wide range of applications, viz. in the analysis of current flow, sound waves, image analysis and many more. They are also used to solve differential equations. In a general sense, we use Fourier series to represent periodic functions; indeed, not only periodic functions, but also to represent and approximate functions defined on a finite interval.

20.1 Periodic Functions

If a function f is periodic with period T > 0 then f(t) = f(t + T), −∞ < t < ∞. The smallest value of T for which the equality f(t) = f(t + T) holds is called the fundamental period of f(t). However, if T is a period of a function f then nT, for any natural number n, is also a period of f. Some familiar periodic functions are sin x, cos x, tan x etc.

20.1.1 Properties of Periodic Functions

We consider two important properties of periodic functions. These properties will be used to discuss the Fourier series.

1. It should be noted that the sum, difference, product and quotient of two periodic functions with a common period is also a periodic function. Consider for example:

f(x) = sin x + sin 2x + cos 3x,   with periods 2π, 2π/2 = π and 2π/3, respectively.

Period of f = common period of (sin x, sin 2x, cos 3x) = 2π
One can also confirm the period of the function f(x) as

f(x + 2π) = sin(x + 2π) + sin(2(x + 2π)) + cos(3(x + 2π)) = sin(x) + sin(2x) + cos(3x) = f(x)


2. If a function is integrable on any interval of length T, then it is integrable on any other interval of the same length and the value of the integral is the same, that is,

∫_{a}^{a+T} f(x) dx = ∫_{b}^{b+T} f(x) dx = ∫_{0}^{T} f(x) dx   for any values of a and b.

This property has been depicted in Figure 20.1.

Figure 20.1: Area showing the integral of a typical periodic function

20.2 Trigonometric Polynomials and Series

• The trigonometric polynomial of order n is defined as

S_n(x) = a_0 + Σ_{k=1}^{n} [ a_k cos(πkx/l) + b_k sin(πkx/l) ]

Here a_k and b_k are some constants. Since a sum of periodic functions again represents a periodic function, S_n will be a periodic function. What will be the period of the function S_n? The period can be identified simply by looking at the common period of the functions involved in the sum as

Period of S_n(x) = common period of ( cos(πx/l), sin(πx/l), cos(2πx/l), . . . , sin(nπx/l), cos(nπx/l) ) = 2π/(π/l) = 2l.


• The infinite trigonometric series

S(x) = a_0 + Σ_{k=1}^{∞} [ a_k cos(πkx/l) + b_k sin(πkx/l) ],

if it converges, also represents a function of period 2l.

Now the question arises whether any function of period T = 2l can be represented as the sum of a trigonometric series. The answer to this question is affirmative, and it is possible for a very wide class of periodic functions. In the next lesson we will see how to obtain the constants a_k and b_k in order for this trigonometric series to represent a given periodic function.

Remark 1: Though sine and cosine functions are quite simple in nature, their sum may be quite complex. One can see the plot of sin x + sin 2x + cos 3x in Figure 20.2. The function has period 2π, which is the common period of sin x, sin 2x and cos 3x.

Figure 20.2: Plot of the trigonometric polynomial f(x) = sin x + sin 2x + cos 3x


20.3 Orthogonality Property of Trigonometric System

We call two functions φ(x) and ψ(x) orthogonal on the interval [a, b] if

∫_{a}^{b} φ(x) ψ(x) dx = 0

With this definition we can say that the basic trigonometric system, viz.

1, cos x, sin x, cos 2x, sin 2x, . . .

is orthogonal on the interval [−π, π] or [0, 2π]. In particular, we shall prove that any two distinct functions of the system are orthogonal.

To show the orthogonality we take the different possible combinations as follows.

For any integer n ≠ 0: We have the following integrals to show the orthogonality of the function 1 with any member of the sine or cosine family:

∫_{−π}^{π} 1 · cos(nx) dx = [sin(nx)/n]_{−π}^{π} = 0,   ∫_{−π}^{π} 1 · sin(nx) dx = [−cos(nx)/n]_{−π}^{π} = 0

We also have the following useful results:

∫_{−π}^{π} cos²(nx) dx = ∫_{−π}^{π} (1 + cos(2nx))/2 dx = π,   ∫_{−π}^{π} sin²(nx) dx = ∫_{−π}^{π} (1 − cos(2nx))/2 dx = π
For any integers m and n (m ≠ n): Now we show that any two different members of the same family (sine or cosine) are orthogonal. For the cosine family we have

∫_{−π}^{π} cos(nx) cos(mx) dx = (1/2) ∫_{−π}^{π} [cos(n + m)x + cos(n − m)x] dx = 0

and for the sine family we have

∫_{−π}^{π} sin(nx) sin(mx) dx = (1/2) ∫_{−π}^{π} [cos(n − m)x − cos(n + m)x] dx = 0

For any integers m and n: Here we show that any two members of the two different families (sine and cosine) are orthogonal:

∫_{−π}^{π} sin(nx) cos(mx) dx = 0

Note that the integrand is an odd function and therefore the integral is zero.
The above result can be summarized in a more general setting in the following theorem.


20.3.1 Theorem

The trigonometric system

1, cos(πx/l), sin(πx/l), cos(2πx/l), sin(2πx/l), . . .

is orthogonal on the interval [−l, l] or [a, a + 2l], where a is any real number.

Proof: Note that the common period of the trigonometric system above is 2l. Similar to the evaluation of the integrals appearing above for the basic trigonometric system, we have the following results:

a) ∫_{−l}^{l} cos(mπx/l) cos(nπx/l) dx = ∫_{a}^{a+2l} cos(mπx/l) cos(nπx/l) dx = { 0 if m ≠ n;  l if m = n ≠ 0 }

b) ∫_{−l}^{l} sin(mπx/l) sin(nπx/l) dx = ∫_{a}^{a+2l} sin(mπx/l) sin(nπx/l) dx = { 0 if m ≠ n;  l if m = n ≠ 0 }

c) ∫_{−l}^{l} sin(mπx/l) cos(nπx/l) dx = ∫_{a}^{a+2l} sin(mπx/l) cos(nπx/l) dx = 0

This completes the proof of the theorem.

To summarize: the value of the integral over a period of the integrand is zero if the integrand is a product of two different members of the trigonometric system. If the integrand is a product of two equal members from the sine or cosine family, then the value of the integral is half of the length of the interval over which the integral is performed. These results will be used to establish the Fourier series of a function of period 2l defined on the interval [−l, l] or [a, a + 2l]. It should be noted that for l = π we obtain the results for the standard trigonometric system of common period 2π.
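These orthogonality relations are easy to confirm symbolically. A minimal sympy sketch over [−l, l] for a few concrete index pairs:

import sympy as sp

x = sp.symbols('x')
l = sp.symbols('l', positive=True)

for m, n in [(1, 2), (2, 2), (3, 1)]:
    cc = sp.integrate(sp.cos(m*sp.pi*x/l)*sp.cos(n*sp.pi*x/l), (x, -l, l))
    ss = sp.integrate(sp.sin(m*sp.pi*x/l)*sp.sin(n*sp.pi*x/l), (x, -l, l))
    sc = sp.integrate(sp.sin(m*sp.pi*x/l)*sp.cos(n*sp.pi*x/l), (x, -l, l))
    print(m, n, cc, ss, sc)   # cc and ss equal l only when m == n; sc is always 0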


Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Stein, E.M. and Shakarchi, R. (2003). Fourier Analysis: An Introduction. Princeton
University Press, Princeton, New Jersey, USA.



Module 3: Fourier Series and Fourier Transform

Lesson 21

Construction of Fourier Series

In this lesson we shall introduce the Fourier series of a piecewise continuous periodic function. First we construct the Fourier series of periodic functions of standard period 2π, and then the idea will be extended to a function of arbitrary period.

21.1 Piecewise Continuous Functions

A function f is piecewise continuous on [a, b] if there are points

a < t_1 < t_2 < . . . < t_n < b

such that f is continuous on each open sub-interval (a, t_1), (t_j, t_{j+1}) and (t_n, b), and all the following one-sided limits exist and are finite:

lim_{t → a+} f(t),  lim_{t → t_j−} f(t),  lim_{t → t_j+} f(t),  and  lim_{t → b−} f(t),   j = 1, 2, . . . , n

This means that f is continuous on [a, b] except possibly at finitely many points, at each of which f has finite one-sided limits. It should be clear that all continuous functions are obviously piecewise continuous.

21.1.1 Example 1

Consider the function

f(x) = { 3 for x = −π;  2x for −π < x < 1;  1 − x² for 1 ≤ x < 2;  2 for 2 ≤ x ≤ π }

At each point of discontinuity the function has finite one-sided limits from both sides. At the end points x = −π and x = π the right- and left-sided limits exist, respectively. Therefore, the function is piecewise continuous.


21.1.2 Example 2

A simple example that is not piecewise continuous is

f(x) = { 0 for x = 0;  x^{−n} for x ∈ (0, 1], n > 0 }

Note that f is continuous everywhere except at x = 0. The function f is not piecewise continuous on [0, 1] because lim_{x → 0+} f(x) = ∞.

An important property of piecewise continuous functions is boundedness and integrability over a closed interval: a piecewise continuous function on a closed interval is bounded and integrable on the interval. Moreover, if f_1 and f_2 are two piecewise continuous functions then their product f_1 f_2 and any linear combination c_1 f_1 + c_2 f_2 are also piecewise continuous.

21.2 Fourier Series of a 2π Periodic Function

Let f be a periodic piecewise continuous function on [−π, π] and let it have the trigonometric series expansion

f ∼ a_0/2 + Σ_{k=1}^{∞} [ a_k cos(kx) + b_k sin(kx) ]        (21.1)

The aim is to determine the coefficients a_k, k = 0, 1, 2, . . . and b_k, k = 1, 2, . . .. First we assume that the above series can be integrated term by term and that its integral is equal to the integral of the function f over [−π, π], that is,

∫_{−π}^{π} f(x) dx = ∫_{−π}^{π} (a_0/2) dx + Σ_{k=1}^{∞} [ a_k ∫_{−π}^{π} cos(kx) dx + b_k ∫_{−π}^{π} sin(kx) dx ]

This implies

a_0 = (1/π) ∫_{−π}^{π} f(x) dx.
Multiplying the series by cos(nx), integrating over [−π, π] and setting its value equal to the integral of f(x) cos(nx) over [−π, π], we get

∫_{−π}^{π} f(x) cos(nx) dx = 0 + Σ_{k=1}^{∞} [ a_k ∫_{−π}^{π} cos(nx) cos(kx) dx + b_k ∫_{−π}^{π} cos(nx) sin(kx) dx ]


Note that the first term on the right hand side is zero because ∫_{−π}^{π} cos(kx) dx = 0. Further, using the orthogonality of the trigonometric system we obtain

a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx

Similarly, by multiplying the series by sin(nx) and repeating the above steps we obtain

b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx

The coefficients a_n, n = 0, 1, 2, . . . and b_n, n = 1, 2, . . . are called Fourier coefficients, and the trigonometric series (21.1) is called the Fourier series of f(x). Note that by writing the constant term as a_0/2 instead of a_0, one can use the single formula for a_n to calculate a_0 as well.

Remark 1: In the series (21.1) we can not, in general, replace ∼ by the = sign, as is clear from the determination of the coefficients. In the process we have set two integrals equal, which does not imply that the function f(x) is equal to the trigonometric series. Later we will discuss conditions under which equality holds true.

Remark 2: (Uniqueness of Fourier Series) If we alter the value of the function f at a finite number of points then the integrals defining the Fourier coefficients are unchanged. Thus functions which differ at a finite number of points have exactly the same Fourier series. In other words, if f and g are piecewise continuous functions and the Fourier series of f and g are identical, then f(x) = g(x) except at a finite number of points.

21.3 Fourier Series of a 2l Periodic Function

Let f(x) be a piecewise continuous function defined on [−l, l] which is 2l periodic. The Fourier series corresponding to f(x) is given as

f ∼ a_0/2 + Σ_{k=1}^{∞} [ a_k cos(kπx/l) + b_k sin(kπx/l) ]        (21.2)

where the Fourier coefficients, derived in exactly the same manner as in the previous case, are given as

a_k = (1/l) ∫_{−l}^{l} f(x) cos(kπx/l) dx,   k = 0, 1, 2, . . .


b_k = (1/l) ∫_{−l}^{l} f(x) sin(kπx/l) dx,   k = 1, 2, . . .

It must be noted that just for simplicity we will be discussing the Fourier series of 2π periodic functions. However, all discussions are valid for a function of an arbitrary period.

Remark 3: It should be noted that piecewise continuity of a function is sufficient for the existence of the Fourier series: if a function is piecewise continuous then it is always possible to calculate the Fourier coefficients. Now the question arises whether the Fourier series of a function f converges and represents f or not. For convergence we need additional conditions on the function f to ensure that the series converges to the desired values. These issues on convergence will be taken up in the next lesson.

21.4 Example Problems

21.4.1 Problem 1

Find the Fourier series to represent the function

f(x) = { −π for −π < x < 0;  x for 0 < x < π }

Solution: The Fourier series of the given function will represent a 2π periodic function, and the series is given by

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} ( a_n cos(nx) + b_n sin(nx) )

with

a_0 = (1/π) ∫_{−π}^{π} f(x) dx = (1/π) [ ∫_{−π}^{0} (−π) dx + ∫_{0}^{π} x dx ] = −π/2

and the coefficients a_n, n = 1, 2, . . . as

a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx = (1/π) [ ∫_{−π}^{0} (−π) cos(nx) dx + ∫_{0}^{π} x cos(nx) dx ]

= −[sin(nx)/n]_{−π}^{0} + (1/π) [ x sin(nx)/n |_{0}^{π} − ∫_{0}^{π} sin(nx)/n dx ]


It can be further simplified to give

a_n = (1/(n²π)) [(−1)^n − 1] = { 0 for n even;  −2/(n²π) for n odd }

Similarly b_n, n = 1, 2, . . . can be calculated as

b_n = (1/π) [ ∫_{−π}^{0} (−π) sin(nx) dx + ∫_{0}^{π} x sin(nx) dx ]

= [cos(nx)/n]_{−π}^{0} + (1/π) [ −x cos(nx)/n |_{0}^{π} + ∫_{0}^{π} cos(nx)/n dx ]

After simplification we get

b_n = (1/n) [1 − 2(−1)^n] = { −1/n for n even;  3/n for n odd }

Substituting the values of a_n and b_n, we get

f(x) ∼ −π/4 − (2/π) [ cos x + cos 3x/3² + cos 5x/5² + . . . ] + [ 3 sin x − sin 2x/2 + 3 sin 3x/3 − . . . ].
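The coefficients computed above can be cross-checked symbolically; a short sympy sketch for the first few a_n and b_n of this function:

import sympy as sp

x = sp.symbols('x')
f = sp.Piecewise((-sp.pi, x < 0), (x, True))     # the function of Problem 1

a0 = sp.integrate(f, (x, -sp.pi, sp.pi))/sp.pi
print('a0 =', a0)                                # -pi/2
for n in range(1, 4):
    an = sp.integrate(f*sp.cos(n*x), (x, -sp.pi, sp.pi))/sp.pi
    bn = sp.integrate(f*sp.sin(n*x), (x, -sp.pi, sp.pi))/sp.pi
    print(n, sp.simplify(an), sp.simplify(bn))   # a1 = -2/pi, b1 = 3, b2 = -1/2, ...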

Remark 4: Let a function be defined on the interval [−l, l]. It should be noted that periodicity of the function is not required for developing the Fourier series. However, the Fourier series, if it converges, defines a 2l-periodic function on R. Therefore, it is sometimes convenient to think of the given function as 2l-periodic and defined on R.

21.4.2 Problem 2

Expand f(x) = |sin x| in a Fourier series.

Solution: There are two possibilities to work out this problem. This may be treated as a function of period π, working in the interval (0, π), or we may treat the function as of period 2π and work in the interval (−π, π).

Case I: First we treat the function |sin x| as π periodic, so we have 2l = π, i.e., l = π/2. The coefficient a_0 is given as

a_0 = (1/(π/2)) ∫_{0}^{π} f(x) dx = (2/π) ∫_{0}^{π} sin x dx = (2/π) [−cos x]_{0}^{π} = 4/π.


The other coefficients a_n, n = 1, 2, . . . are given by

a_n = (2/π) ∫_{0}^{π} sin x cos(2nx) dx = (1/π) ∫_{0}^{π} [sin(2n + 1)x − sin(2n − 1)x] dx

It can be further simplified to give

a_n = (1/π) [ −cos(2n+1)x/(2n+1) |_{0}^{π} + cos(2n−1)x/(2n−1) |_{0}^{π} ] = (1/π) [ 2/(2n+1) − 2/(2n−1) ] = −4/(π(4n² − 1))

Now we compute the coefficients b_n, n = 1, 2, . . . as

b_n = (2/π) ∫_{0}^{π} sin x sin(2nx) dx = (1/π) ∫_{0}^{π} [cos(2n − 1)x − cos(2n + 1)x] dx
    = (1/π) [ sin(2n−1)x/(2n−1) |_{0}^{π} − sin(2n+1)x/(2n+1) |_{0}^{π} ] = 0

Hence the Fourier series is given by

f(x) ∼ 2/π + Σ_{n=1}^{∞} (−4/(π(4n² − 1))) cos(2nx) = 2/π − (4/π) Σ_{n=1}^{∞} cos(2nx)/(4n² − 1)

Case II: If we treat f(x) as 2π periodic then

a_n = (2/π) ∫_{0}^{π} sin x cos(nx) dx = (1/π) ∫_{0}^{π} [sin(n + 1)x − sin(n − 1)x] dx

= (1/π) [ −cos(n+1)x/(n+1) |_{0}^{π} + cos(n−1)x/(n−1) |_{0}^{π} ] = (1/π) [ (−(−1)^{n+1} + 1)/(n+1) + ((−1)^{n−1} − 1)/(n−1) ]

Thus, for n ≠ 1 we have

a_n = { 0 when n is odd;  −(1/π) · 4/(n² − 1) when n is even }

The coefficient a_1 needs to be calculated separately as

a_1 = (2/π) ∫_{0}^{π} sin x cos x dx = (1/π) ∫_{0}^{π} sin 2x dx = (1/π) [−cos 2x/2]_{0}^{π} = (1/2π)[−1 + 1] = 0

Clearly, the coefficients b_n are zero because

b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx = (1/π) ∫_{−π}^{π} |sin x| sin(nx) dx = 0,

the integrand being an odd function. The Fourier series can be written as

f(x) ∼ 2/π − (4/π) [ cos 2x/3 + cos 4x/15 + cos 6x/35 + . . . ] = 2/π − (4/π) Σ_{n=1}^{∞} cos(2nx)/(4n² − 1).

Therefore we end up with the same series.


Remark 5: If we develop the Fourier series of a function considering its period to be any integer multiple of its fundamental period, we end up with the same Fourier series.

Remark 6: Note that in the above example the given function is an even function, and therefore the Fourier series is simpler: the coefficients b_n are zero in this case. The determination of the Fourier series of a given function becomes simpler if the function is odd or even. More details of this we shall see in Lesson 23.

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Stein, E.M. and Shakarchi, R. (2003). Fourier Analysis: An Introduction. Princeton
University Press, Princeton, New Jersey, USA.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.



Module 3: Fourier Series and Fourier Transform

Lesson 22

Convergence Theorems

We have seen that piecewise continuity of a function is sufficient for the existence of the
Fourier series. We have not yet discussed the convergence of the Fourier series. Conver-
gence of the Fourier series is a very important topic to be explored in this lesson.
In order to motivate the discussion on convergence, let us construct the Fourier series of the function

f(x) = { −cos x for −π/2 ≤ x < 0;  cos x for 0 ≤ x ≤ π/2 },   f(x + π) = f(x).

In this case the function is an odd function and therefore a_n = 0, n = 0, 1, 2, . . .. We compute the Fourier coefficients b_n by

b_n = (2/π) ∫_{−π/2}^{π/2} f(x) sin(2nx) dx = (4/π) ∫_{0}^{π/2} cos x sin(2nx) dx = (8/π) · n/(4n² − 1)

The Fourier series is given by

f(x) ∼ Σ_{n=1}^{∞} b_n sin(2nx) = (8/π) Σ_{n=1}^{∞} n sin(2nx)/(4n² − 1).

Note that the Fourier series at x = 0 converges to 0. So the Fourier series of f does not converge to the value of the function at x = 0.

With this example we pose the following questions in connection with the convergence of the Fourier series:

1. Does the Fourier series of a function f(x) converge at a point x ∈ [−L, L]?

2. If the series converges at a point x, is the sum of the series equal to f(x)?

The answers to these questions are, in general, in the negative because:

1. There are Lebesgue integrable functions on [−L, L] whose Fourier series diverge everywhere on [−L, L].

2. There are continuous functions whose Fourier series diverge at a countable number of points.


3. We have already seen in the above example that the Fourier series may converge at a point while its sum is not equal to the value of the function at that point.

We need some additional conditions to ensure that the Fourier series of a function f(x) converges and that it converges to the function f(x). Though we have several notions of convergence, like pointwise, uniform, mean square, etc., we first stick to the most common notion, that is, pointwise convergence. Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges pointwise to f on [a, b] if for each x ∈ [a, b] we have lim_{m → ∞} f_m(x) = f(x). A more formal definition of pointwise convergence will be given later.

22.1 Convergence Theorem (Dirichlet's Theorem, Sufficient Conditions)

Theorem Statement: Let f be a piecewise continuous function on [−L, L] such that the one-sided derivatives of f, that is,

lim_{h → 0+} (f(x + h) − f(x+))/h  in x ∈ [−L, L)   and   lim_{h → 0+} (f(x−) − f(x − h))/h  in x ∈ (−L, L]        (22.1)

exist (and are finite). Then for each x ∈ (−L, L) the Fourier series converges and we have

(f(x+) + f(x−))/2 = a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

At both endpoints x = ±L the series converges to [f(L−) + f((−L)+)]/2, thus we have

(f(L−) + f((−L)+))/2 = a_0/2 + Σ_{n=1}^{∞} (−1)^n a_n

Remark 1: If the function is continuous at a point x, that is, f(x+) = f(x−), then we have

f(x) = a_0/2 + Σ_{k=1}^{∞} [ a_k cos(kπx/L) + b_k sin(kπx/L) ]        (22.2)

In other words, if f is continuous with f(−L) = f(L) and the one-sided derivatives (22.1) exist, then equality (22.2) holds for all x.


Remark 2: In the above theorem the conditions on f are sufficient conditions. One may replace these conditions (piecewise continuity and one-sided derivatives) by the slightly more restrictive condition of piecewise smoothness. A function is said to be piecewise smooth on [−L, L] if it is piecewise continuous and has a piecewise continuous derivative. The difference between the two similar restrictions on f will be clear from the example of the function

f(x) = { x² sin(1/x) for x ≠ 0;  0 for x = 0 }

It can easily be shown that the derivative of this function exists everywhere, so the function has one-sided derivatives and satisfies the conditions of the convergence theorem. However, the function is not piecewise smooth, because lim_{x → 0} f′(x) does not exist, as

f′(x) = { 2x sin(1/x) − cos(1/x) for x ≠ 0;  0 for x = 0 }

If a function is piecewise smooth, then it can easily be shown that left and right derivatives exist. Let f be a piecewise smooth function on [−L, L]; then lim_{x → a±} f′(x) exists for all a ∈ [−L, L]. This implies

lim_{x → a+} f′(x) = lim_{x → a+} lim_{h → 0+} (f(x + h) − f(x))/h

Interchanging the two limits on the right hand side we obtain

lim_{x → a+} f′(x) = lim_{h → 0+} lim_{x → a+} (f(x + h) − f(x))/h = lim_{h → 0+} (f(a + h) − f(a+))/h

Similarly one can show the existence of the left derivative. This example confirms that piecewise smoothness is a stronger condition than piecewise continuity with existence of one-sided derivatives.

22.2 Different Notions of Convergence

22.2.1 Mean Square Convergence

Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that the sequence {f_m}_{m=1}^{∞} converges in the mean square sense to f on [a, b] if

lim_{m → ∞} ∫_{a}^{b} |f(x) − f_m(x)|² dx = 0


22.2.2 Pointwise Convergence

Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges pointwise to f on [a, b] if for each x ∈ [a, b] we have lim_{m → ∞} f_m(x) = f(x). That is, for each x ∈ [a, b] and ε > 0 there is a natural number N(ε, x) such that

|f_n(x) − f(x)| < ε   for all n ≥ N(ε, x)

22.2.3 Uniform Convergence

Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges uniformly to f on [a, b] if for each ε > 0 there is a natural number N(ε) such that

|f_n(x) − f(x)| < ε   for all n ≥ N(ε) and for all x ∈ [a, b]

There is one more interesting fact about uniform convergence: if {f_m}_{m=1}^{∞} is a sequence of continuous functions which converges uniformly to a function f on [a, b], then f is continuous.

22.2.4 Example 1

Let u_n = x^n on [0, 1). Clearly, the sequence {u_n}_{n=1}^{∞} converges pointwise to 0; that is, for fixed x ∈ [0, 1) we have lim_{n → ∞} u_n = 0. But it does not converge uniformly to 0, as we shall show that for given ε there does not exist a natural number N independent of x such that |u_n − 0| < ε. Suppose that the sequence converges uniformly; then for a given ε with

|u_n − 0| < ε,        (22.3)

we seek a natural number N(ε) such that relation (22.3) holds for n > N. Note that relation (22.3) holds true if

x^n < ε  ⟺  n > ln ε / ln x

It should be evident now that for given x and ε one can define

N := [ ln ε / ln x ],   where [ ] gives the integer rounded towards infinity.


It once again confirms pointwise convergence. However, if x is not fixed then ln ε / ln x grows without bound for x ∈ [0, 1). Hence it is not possible to find an N which depends only on ε, and therefore the sequence u_n does not converge uniformly to 0.

22.2.5 Example 2

Let u_n = x^n/n on [0, 1). This sequence converges uniformly, and of course pointwise, to 0. For given ε > 0 take n > N := [1/ε]; then, noting [1/ε] ≥ 1/ε, we have

|u_n − 0| = x^n/n < 1/n < ε   for all n > N.

Hence the sequence u_n converges uniformly.

Now we discuss these three types of convergence for the Fourier series of a function.

• Let f be a piecewise continuous function on [−π, π]. Then the Fourier series of f converges to f in the mean square sense, that is,

lim_{m → ∞} ∫_{−π}^{π} [ f(x) − a_0/2 − Σ_{k=1}^{m} (a_k cos kx + b_k sin kx) ]² dx = 0

• Let f be a piecewise continuous function on [−π, π] such that the appropriate one-sided derivatives of f exist at each point in [−π, π]. Then for each x ∈ (−π, π) the Fourier series of f converges pointwise to the value (f(x−) + f(x+))/2.

• If f is continuous on [−π, π], f(−π) = f(π), and f′ is piecewise continuous on [−π, π], then the Fourier series of f converges uniformly (and also absolutely) to f on [−π, π].

22.3 Best Trigonometric Polynomial Approximation

An interesting property of the partial sums of a Fourier series is that, among all trigonometric polynomials of degree N, the partial sums of the Fourier series yield the best approximation of f in the mean square sense. This result is summarized in the following lemma.

22.3.1 Lemma

Let f be a piecewise continuous function on [−π, π] and let the mean square error be defined by the function


E(c_0, . . . , c_N, d_1, . . . , d_N) = ∫_{−π}^{π} [ f − c_0/2 − Σ_{k=1}^{N} (c_k cos kx + d_k sin kx) ]² dx.

Then E(a_0, . . . , a_N, b_1, . . . , b_N) ≤ E(c_0, . . . , c_N, d_1, . . . , d_N) for any real numbers c_0, c_1, . . . , c_N and d_1, d_2, . . . , d_N. Note that a_k and b_k are the Fourier coefficients of f.

22.4 Example Problems

22.4.1 Problem 1

Let the function f(x) be defined as

f(x) = { −π for −π < x < 0;  x for 0 < x < π }

Find the sum of the Fourier series at all points in [−π, π].

Solution: At x = 0, the Fourier series will converge to

(f(0+) + f(0−))/2 = (0 + (−π))/2 = −π/2

Again, x = ±π are other points of discontinuity, and the value of the series at these points will be

(f(π−) + f((−π)+))/2 = (π + (−π))/2 = 0.

At all other points the series will converge to the functional value f(x).

22.4.2 Problem 2

Let the Fourier series of the function f(x) = x + x², −π < x < π, be given by

x + x² ∼ π²/3 + Σ_{n=1}^{∞} (−1)^n [ (4/n²) cos nx − (2/n) sin nx ]

Find the sum of the Fourier series at all points in [−π, π]. Applying the result on convergence of the Fourier series, find the values of

1 + 1/2² + 1/3² + 1/4² + . . .   and   1 − 1/2² + 1/3² − 1/4² + . . .

Solution: Clearly the required series may be obtained by substituting x = ±π and x = 0. At the points of discontinuity x = ±π the series converges to

(f(π−) + f((−π)+))/2 = ((π + π²) + (−π + π²))/2 = π².

Substituting x = ±π into the series we get

π² = π²/3 + Σ_{n=1}^{∞} 4/n²  ⟹  Σ_{n=1}^{∞} 1/n² = π²/6

The point x = 0 is a point of continuity, and therefore the series will converge to f(0) = 0. Substituting x = 0 into the series we obtain

0 = π²/3 + Σ_{n=1}^{∞} (−1)^n 4/n²  ⟹  Σ_{n=1}^{∞} (−1)^{n+1} 1/n² = π²/12.
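Both conclusions are easy to observe numerically by evaluating partial sums of the series at x = π and x = 0; a short sketch:

import math

def partial_sum(x, N):
    # N-term partial sum of the Fourier series of x + x**2 on (-pi, pi)
    s = math.pi**2/3
    for n in range(1, N + 1):
        s += (-1)**n*(4/n**2*math.cos(n*x) - 2/n*math.sin(n*x))
    return s

print(partial_sum(math.pi, 20000), math.pi**2)   # tends to pi**2 at x = pi
print(partial_sum(0.0, 20000))                   # tends to 0 at x = 0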

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.



Module 3: Fourier Series and Fourier Transform

Lesson 23

Half Range Sine and Cosine Series

In this lesson we start the discussion on even and odd functions. As mentioned earlier, if the function is odd or even then the Fourier series takes a rather simple form, containing sine or cosine terms only. Then we discuss a very important topic: developing a desired Fourier series (sine or cosine) of a function defined on a finite interval by extending the given function as an odd or even function.

23.1 Even and Odd Functions

A function is said to be even about the point a if f(a − x) = f(a + x) for all x, and odd about the point a if f(a − x) = −f(a + x) for all x. Further, note the following properties of even and odd functions:

a) The product of two even or two odd functions is an even function.

b) The product of an even function and an odd function is an odd function.

Using these properties we have the following results for the Fourier coefficients:
a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx = { (2/π) ∫_{0}^{π} f(x) cos(nx) dx when f is even about 0;  0 when f is odd about 0 }

b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx = { 0 when f is even about 0;  (2/π) ∫_{0}^{π} f(x) sin(nx) dx when f is odd about 0 }

From these observations we have the following results.

23.1.1 Proposition

Assume that f is a piecewise continuous function on [−π, π]. Then


a) If f is an even function then the Fourier series takes the simple form

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} a_n cos(nx)   with   a_n = (2/π) ∫_{0}^{π} f(x) cos(nx) dx,  n = 0, 1, 2, . . . .

Such a series is called a cosine series.

b) If f is an odd function then the Fourier series of f has the form

f(x) ∼ Σ_{n=1}^{∞} b_n sin(nx)   with   b_n = (2/π) ∫_{0}^{π} f(x) sin(nx) dx,  n = 1, 2, . . . .

Such a series is called a sine series.

23.2 Example Problems

23.2.1 Problem 1

Obtain the Fourier series to represent the function f(x),

f(x) = { x when 0 ≤ x ≤ π;  2π − x when π < x ≤ 2π }

Solution: The given function is an even function about x = π and therefore

b_n = (1/π) ∫_{0}^{2π} f(x) sin(nx) dx = 0.

The coefficient a_0 is calculated as

a_0 = (1/π) ∫_{0}^{2π} f(x) dx = (1/π) [ ∫_{0}^{π} x dx + ∫_{π}^{2π} (2π − x) dx ] = (1/π) [ π²/2 + π²/2 ] = π

The other coefficients a_n are given as

a_n = (1/π) ∫_{0}^{2π} f(x) cos(nx) dx = (1/π) [ ∫_{0}^{π} x cos(nx) dx + ∫_{π}^{2π} (2π − x) cos(nx) dx ]


It can be further simplified as

a_n = (2/(n²π)) [(−1)^n − 1] = { 0 when n is even;  −4/(n²π) when n is odd }

Therefore, the Fourier series is given by

f(x) = π/2 − (4/π) [ cos x + cos 3x/3² + cos 5x/5² + . . . ],   0 ≤ x ≤ 2π.        (23.1)

In this case, as the function is continuous and f′ is piecewise continuous, the series converges uniformly to f(x) and we can write the equality (23.1).

23.2.2 Problem 2

Determine the Fourier series of f(x) = x² on [−π, π] and hence find the values of the infinite series

Σ_{n=1}^{∞} (−1)^{n+1}/n²   and   Σ_{n=1}^{∞} 1/n².

Solution: The function f(x) = x² is even on the interval [−π, π] and therefore b_n = 0 for all n. The coefficient a_0 is given as

a_0 = (1/π) ∫_{−π}^{π} x² dx = (1/π) [x³/3]_{−π}^{π} = 2π²/3.

The other coefficients can be calculated by the general formula as

a_n = (1/π) ∫_{−π}^{π} x² cos(nx) dx = (2/π) ∫_{0}^{π} x² cos(nx) dx = (2/π) [ x² sin(nx)/n |_{0}^{π} − (1/n) ∫_{0}^{π} 2x sin(nx) dx ]

Again integrating by parts we obtain

a_n = (4/(nπ)) [ x cos(nx)/n |_{0}^{π} − ∫_{0}^{π} cos(nx)/n dx ] = (4/(nπ)) [ π(−1)^n/n − 0 ] = 4(−1)^n/n²

Therefore the Fourier series is given as

x² = π²/3 + Σ_{n=1}^{∞} (4(−1)^n/n²) cos(nx)   for x ∈ [−π, π].        (23.2)

If we substitute x = 0 in equation (23.2) we get

0 = π²/3 + Σ_{n=1}^{∞} 4(−1)^n/n²  ⟹  Σ_{n=1}^{∞} (−1)^{n+1}/n² = π²/12.

If we now substitute x = π in equation (23.2) we get

π² = π²/3 + Σ_{n=1}^{∞} 4/n²  ⟹  Σ_{n=1}^{∞} 1/n² = π²/6.


23.3 Half Range Series

Suppose that f(x) is a function defined on (0, l] and we want to express f(x) as a cosine or sine series. This can be done by extending f(x) to be an even or an odd function on [−l, l]. Note that there exist an infinite number of ways to extend the function to the interval [−l, 0]. Among all possible extensions of f there are two, the even and odd extensions, that lead to simple and useful series:

a) If we want to express f(x) in a cosine series then we extend f(x) as an even function in the interval [−l, l].

b) On the other hand, if we want to express f(x) in a sine series then we extend f(x) as an odd function in [−l, l].

We summarize the above discussion in the following proposition.
We summarize the above discussion in the following proposition

23.3.1 Proposition

Let f be a piecewise continuous function defined on [0, l]. The series

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} a_n cos(nπx/l)   with   a_n = (2/l) ∫_{0}^{l} f(x) cos(nπx/l) dx

is called the half range cosine series of f. Similarly, the series

f(x) ∼ Σ_{n=1}^{∞} b_n sin(nπx/l)   with   b_n = (2/l) ∫_{0}^{l} f(x) sin(nπx/l) dx

is called the half range sine series of f.

Remark: Note that we can develop a Fourier series of a function f defined on [0, l], and it will, in general, contain both sine and cosine terms. This series, if it converges, will represent an l-periodic function. The idea of the half range Fourier series is entirely different: we extend the function f as we desire in order to obtain a sine or cosine series. The half range series of the function f will represent a 2l-periodic function.


23.4 Example Problems

23.4.1 Problem 1

Obtain the half range sine series for e^x in 0 < x < 1.

Solution: Since we are developing the sine series of f we need to compute b_n as

b_n = (2/l) ∫_{0}^{l} f(x) sin(nπx/l) dx = 2 ∫_{0}^{1} e^x sin(nπx) dx = 2 [ e^x sin(nπx) |_{0}^{1} − nπ ∫_{0}^{1} e^x cos(nπx) dx ]

= −2nπ [ e^x cos(nπx) |_{0}^{1} + nπ ∫_{0}^{1} e^x sin(nπx) dx ] = −2nπ(e(−1)^n − 1) − n²π² b_n

Taking the second term on the right side to the left side, after simplification we get

b_n = 2nπ [1 − e(−1)^n] / (1 + n²π²)

Therefore, the sine series of f is given as

e^x = 2π Σ_{n=1}^{∞} n [1 − e(−1)^n] sin(nπx) / (1 + n²π²)   for 0 < x < 1

23.4.2 Problem 2

Let f(x) = sin(πx/l) on (0, l). Find the Fourier cosine series in the range 0 < x < l.

Solution: Since we want to find the cosine series of the function f, we compute the coefficients a_n as

a_n = (2/l) ∫_{0}^{l} sin(πx/l) cos(nπx/l) dx = (1/l) ∫_{0}^{l} [ sin((n + 1)πx/l) + sin((1 − n)πx/l) ] dx

For n ≠ 1 we can compute the integrals to get

a_n = (1/π) [ −(−1)^{n+1}/(n+1) + 1/(n+1) + (−1)^{n−1}/(n−1) − 1/(n−1) ]

It can be further simplified as

a_n = { 0 when n is odd;  −4/(π(n+1)(n−1)) when n is even }


The coefficient a_1 needs to be calculated separately as

a_1 = (1/l) ∫_{0}^{l} sin(2πx/l) dx = (1/l) [ −(l/2π) cos(2πx/l) ]_{0}^{l} = (1/2π)(1 − 1) = 0

The Fourier cosine series of f is given as

sin(πx/l) = 2/π − (4/π) [ cos(2πx/l)/(1·3) + cos(4πx/l)/(3·5) + cos(6πx/l)/(5·7) + . . . ]

23.4.3 Problem 3

Expand f(x) = x, 0 < x < 2, in (i) a sine series and (ii) a cosine series.

Solution: (i) To get the sine series we calculate b_n as

b_n = (2/l) ∫_{0}^{l} f(x) sin(nπx/l) dx = (2/2) ∫_{0}^{2} x sin(nπx/2) dx

Integrating by parts we obtain

b_n = [ −x (2/nπ) cos(nπx/2) ]_{0}^{2} + (2/nπ) ∫_{0}^{2} cos(nπx/2) dx = −(4/nπ) cos nπ.

Then for 0 < x < 2 we have the Fourier sine series

x = −(4/π) Σ_{n=1}^{∞} (cos nπ/n) sin(nπx/2) = (4/π) [ sin(πx/2) − (1/2) sin(2πx/2) + (1/3) sin(3πx/2) − . . . ].

(ii) Now we express f(x) = x in a cosine series. We need to calculate a_n for n ≠ 0 as

a_n = (2/2) ∫_{0}^{2} x cos(nπx/2) dx = [ x (2/nπ) sin(nπx/2) ]_{0}^{2} − (2/nπ) ∫_{0}^{2} sin(nπx/2) dx

After simplification we obtain

a_n = (2/nπ)² [ cos(nπx/2) ]_{0}^{2} = (4/(n²π²))(cos nπ − 1) = (4/(n²π²)) [(−1)^n − 1]

The coefficient a_0 is given as

a_0 = ∫_{0}^{2} x dx = 2

Then the Fourier cosine series of f(x) = x for 0 < x < 2 is given as

x = 1 + (4/π²) Σ_{n=1}^{∞} ([(−1)^n − 1]/n²) cos(nπx/2) = 1 − (8/π²) [ cos(πx/2) + (1/3²) cos(3πx/2) + (1/5²) cos(5πx/2) + . . . ].


It is interesting to note that the given function f(x) = x, 0 < x < 2, is represented by two entirely different series: one contains only sine terms while the other contains only cosine terms.

Note that we have set each series equal to the given function because the series converges for each x ∈ (0, 2) to the function value. It should also be pointed out that one can deduce the sums of several series by putting different values of x ∈ (0, 2) into the above sine and cosine series.
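Both representations can be compared numerically at a sample point; a short sketch using 200 terms of each series:

import math

def sine_series(x, N=200):
    # half range sine series of f(x) = x on (0, 2)
    return sum(-4/(n*math.pi)*math.cos(n*math.pi)*math.sin(n*math.pi*x/2)
               for n in range(1, N + 1))

def cosine_series(x, N=200):
    # half range cosine series of f(x) = x on (0, 2)
    return 1 + sum(4*((-1)**n - 1)/(n*math.pi)**2*math.cos(n*math.pi*x/2)
                   for n in range(1, N + 1))

x = 0.7
print(sine_series(x), cosine_series(x))   # both close to 0.7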

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.



Module 3: Fourier Series and Fourier Transform

Lesson 24

Integration and Differentiation of Fourier Series

In this lesson we discuss differentiation and integration of the Fourier series of a function. We can get some idea of the behaviour of the new series by looking at its terms. In the case of differentiation we get terms like n sin(nx) and n cos(nx), where the presence of the factor n makes the magnitude of the terms larger than in the original series, and therefore convergence of the new series becomes more difficult. It is exactly the other way round in the case of integration, where n appears in the denominator and the new terms become smaller in magnitude; thus we expect better convergence in this case. We shall deal with these two cases separately in the next sections.

24.1 Differentiation

We first discuss term by term differentiation of the Fourier series. Let f be piecewise continuous with the Fourier series

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]        (24.1)

Can we differentiate the Fourier series of a function f term by term in order to obtain the Fourier series of f′? In other words, is it true that

f′(x) ∼ Σ_{n=1}^{∞} [ −n a_n sin(nx) + n b_n cos(nx) ]?        (24.2)

In general the answer to this question is no.


Let us consider the Fourier series of f(x) = x in [−π, π]. This is an odd function, and therefore its Fourier series will be

x ∼ Σ_{n=1}^{∞} (2(−1)^{n+1}/n) sin(nx).

If we differentiate the series term by term we get Σ_{n=1}^{∞} 2(−1)^{n+1} cos(nx). Note that this is not the Fourier series of f′(x) = 1, since the Fourier series of the constant function 1 is simply 1.

We consider one more simple example to illustrate this fact. Consider the half range sine series for cos x in (0, π):

cos x ∼ (8/π) Σ_{n=1}^{∞} n sin(2nx)/(4n² − 1)


If we differentiate this series term by term then we obtain the series

(16/π) Σ_{n=1}^{∞} n² cos(2nx)/(4n² − 1)

This series cannot be the Fourier series of −sin x because it diverges:

lim_{n → ∞} (16/π) n² cos(2nx)/(4n² − 1) ≠ 0

For term by term differentiation we have the following result.

24.1.1 Theorem

If f is continuous on [−π, π], f(−π) = f(π), f′ is piecewise continuous on [−π, π], and if

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]

(in fact in this case we can replace ∼ by =) is the Fourier series of f, then the Fourier series of f′ is given by

f′(x) ∼ Σ_{n=1}^{∞} [ −n a_n sin(nx) + n b_n cos(nx) ].

Moreover, if the function f′ has appropriate left and right derivatives at a point x, then we have

(f′(x+) + f′(x−))/2 = Σ_{n=1}^{∞} [ −n a_n sin(nx) + n b_n cos(nx) ].

If f′ is continuous at a point x then

f′(x) = Σ_{n=1}^{∞} [ −n a_n sin(nx) + n b_n cos(nx) ].

Proof: f′ is piecewise continuous, and this is a sufficient condition for the existence of the Fourier series of f′. So we can write the Fourier series of f′ as

f′(x) ∼ ā_0/2 + Σ_{n=1}^{∞} [ ā_n cos(nx) + b̄_n sin(nx) ]        (24.3)


where

ā_n = (1/π) ∫_{−π}^{π} f′(x) cos(nx) dx,   b̄_n = (1/π) ∫_{−π}^{π} f′(x) sin(nx) dx

Now we simplify the coefficients ā_n and b̄_n and write them in terms of a_n and b_n. Integrating by parts and using the condition f(−π) = f(π), we can easily show that

ā_0 = 0,   ā_n = n b_n,   b̄_n = −n a_n

Now the Fourier series (24.3) of f′ reduces to

f′(x) ∼ Σ_{n=1}^{∞} [ n b_n cos(nx) − n a_n sin(nx) ]

Convergence of this series to (f′(x+) + f′(x−))/2 or to f′(x) is a direct consequence of the convergence theorem for Fourier series.

24.2 Integration

In general, for an infinite series uniform convergence is required to integrate the series term by term. In the case of a Fourier series we do not even have to assume the convergence of the series to be integrated. However, term by term integration of a Fourier series does not, in general, lead to a Fourier series. The main results can be summarized as follows.

24.2.1 Theorem

Let f be a piecewise continuous function with the Fourier series

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]        (24.4)

Then, no matter whether this series converges or not, we have for each x ∈ [−π, π]

∫_{−π}^{x} f(t) dt = a_0(x + π)/2 + Σ_{n=1}^{∞} (1/n) [ a_n sin(nx) − b_n (cos(nx) − cos nπ) ]        (24.5)

and the series on the right hand side converges uniformly to the function on the left.
Proof: We define

g(x) = ∫_{−π}^{x} f(t) dt − (a_0/2) x


Since f is a piecewise continuous function, it is easy to prove that g is continuous. Also

g′(x) = f(x) − a_0/2        (24.6)

at each point of continuity of f. This implies that g′ is piecewise continuous, and further we see that

g(−π) = a_0 π/2

and

g(π) = ∫_{−π}^{π} f(t) dt − (a_0/2) π = π a_0 − (a_0/2) π = a_0 π/2

Hence the Fourier series of the function g converges uniformly to g on [−π, π]. Thus we have

g(x) = α_0/2 + Σ_{n=1}^{∞} [ α_n cos(nx) + β_n sin(nx) ]

Using Theorem 24.1.1 we have the following result for the Fourier series of g′:

g′(x) ∼ Σ_{n=1}^{∞} [ −n α_n sin(nx) + n β_n cos(nx) ]

The Fourier series of f and the relation (24.6) give

g′(x) = f(x) − a_0/2 ∼ Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]

Now comparing the last two equations we get

n β_n = a_n,   −n α_n = b_n,   n = 1, 2, . . .

Substituting these values in the Fourier series of g we obtain

g(x) = ∫_{−π}^{x} f(t) dt − (a_0/2) x = α_0/2 + Σ_{n=1}^{∞} (1/n) [ a_n sin(nx) − b_n cos(nx) ]

We can rewrite this to get

∫_{−π}^{x} f(t) dt = (a_0/2) x + α_0/2 + Σ_{n=1}^{∞} (1/n) [ a_n sin(nx) − b_n cos(nx) ]        (24.7)

To obtain α_0 we set x = π in the above equation:

α_0 = a_0 π + Σ_{n=1}^{∞} (2 b_n/n) cos(nπ)

Substituting α_0 in equation (24.7) we obtain the required result (24.5).


Remark 1: Note that the series on the right hand side of (24.5) is not a Fourier series, due to the presence of the term with x.

Remark 2: The above theorem on integration can be established in a more general sense: if f is a piecewise continuous function on −π ≤ x ≤ π and

a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]

is its Fourier series, then no matter whether this series converges or not, it is true that

∫_{a}^{x} f(t) dt = ∫_{a}^{x} (a_0/2) dx + Σ_{n=1}^{∞} ∫_{a}^{x} [ a_n cos(nx) + b_n sin(nx) ] dx

where −π ≤ a ≤ x ≤ π, and the series on the right hand side converges uniformly in x to the function on the left for any fixed value of a.

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

223 WhatsApp: +9157900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 25

Bessel’s Inequality and Parseval’s Identity

In this lesson some properties of the Fourier coefficients will be given. We will mainly de-
rive two important inequalities related to Fourier series, in particular, Bessel’s inequality
and Parseval’s identity. One of the applications of Parseval’s identity for summing certain
infinite series will be discussed.

25.1 Theorem (Bessel’s Inequality)

If f be a piecewise continuous function in [−π, π], then


∞ π
a20 X 2  1
Z
+ ak + b2k ≤ f 2 (x) dx
2 π −π
k=1

where a0 , a1 , . . . and b1 , b2 , . . . are Fourier coefficients of f .


Proof: Clearly, we have
" n
#2
π
a0 X
Z
f (x) − − [ak cos(kx) + bk sin(kx)] dx ≥ 0
−π 2
k=1

Expanding the integrands we get


Z π "Xn
#2
π 2 Z π
a
Z
2 0
f (x) dx + π + [ak cos(kx) + bk sin(kx)] dx − a0 f (x) dx
−π 2 −π k=1 −π
Z π " n # Z π "Xn
#
X
−2 f (x) [ak cos(kx) + bk sin(kx)] dx + a0 [ak cos(kx) + bk sin(kx)] dx ≥ 0
−π k=1 −π k=1

Using the orthogonality of the trigonometric system and definition of Fourier coefficients
we get
π n n
a20
Z X X
f 2 (x) dx + a2k + b2k − a20 π − 2π a2k + b2k + 0 ≥ 0
 
π+π
−π 2
k=1 k=1

This can be further simplified


π n
a2
Z X
2
f (x) dx − 0 π − π a2k + b2k ≥ 0

−π 2
k=1

224 WhatsApp: +91 7900900676 www.AgriMoon.Com


Bessel’s Inequality and Parseval’s Identity

This implies
n π
a20 X 2  1
Z
+ ak + b2k ≤ f 2 (x) dx
2 π −π
k=1

Passing the limit n → ∞, we get the required Bessel’s inequality.


Indeed the above Bessel’s inequality turns into an equality named Parseval’s identity.
However, for the sake of simplicity of proof we state the following theorem for more
restrictive function but the result holds under less restrictive conditions (only piecewise
continuity) same as in Theorem 25.1.

25.2 Theorem (Parseval’s Identity)

If f is a continuous function in [−π, π] and one sided derivatives exit then we have the
equality
∞ π
a20 X 2  1
Z
+ an + b2n = f 2 (x) dx (25.1)
2 π −π
n=1

where a0 , a1 , . . . and b1 , b2 , . . . are Fourier coefficients of f .


Proof: From the Dirichlet’s convergence theorem for x ∈ (−π, π) we have

a0 X
f (x) = + [an cos(nx) + bn sin(nx)]
2
n=1

Integrating by f (x) and integrating term by term from −π to π we obtain


π π ∞  π π 
a0
Z Z X Z Z
2
f (x) dx = f (x) dx + an f (x) cos(nx) dx + bn f (x) sin(nx) dx
−π 2 −π −π −π
n=1

Using the definition of Fourier coefficients we get


π ∞
πa20
Z X
f 2 (x) dx = a2n + b2n


−π 2
n=1

Dividing by π we obtain the required identity.

Remark: As stated earlier Parseval’s identity can be proved for piecewise continuous
functions. Further, for a piecewise continuous function on [−L, L] we can get Parseval’s
identity just by replacing π by L in (25.1).

225 WhatsApp: +9127900900676 www.AgriMoon.Com


Bessel’s Inequality and Parseval’s Identity

25.3 Example Problems

25.3.1 Problem 1

Consider the Fourier cosine series of f (x) = x :



X 4 nπx
x∼1+ [cos(nπ) − 1] cos
π 2 n2 2
n=1

a) Write Parseval’s identity corresponding to the above Fourier series


b) Determine from a) the sum of the series
1 1 1
4
+ 4 + 4 + ...
1 2 3

Solution: a) We first find the Fourier coefficient and the period of the Fourier series just
by comparing the given series with the standard Fourier series
4
a0 = 2, an = [cos(nπ) − 1], n = 1, 2 . . . , bn = 0
π 2 n2
period = 2L = 4 ⇒ L = 2

Writing Parseval’s identity as


L ∞
1 a20 X 2
Z
f 2 (x) dx = an + b2n

+
L −L 2
n=1

This implies
2 ∞
1 4 X 16
Z
2
x dx = + 4 n4
(cos(nπ) − 1)2
2 −2 2 π
n=1

This can be simplified to give


 
8 64 1 1 1
= 2+ 4 4 + 4 + 4 + ...
3 π 1 3 5

Then we obtain
1 1 1 π4
+ + + . . . =
14 34 54 96
b) Let
1 1 1
S= + + + ...
14 24 34

226 WhatsApp: +9137900900676 www.AgriMoon.Com


Bessel’s Inequality and Parseval’s Identity

This series can be rewritten as


   
1 1 1 1 1 1
S = 4 + 4 + 4 + ... + + + + ...
1 3 5 24 44 64
π4 1
= + 4S
96 2

π4
Then we have the required sum as S = .
90

25.3.2 Problem 2

Find the Fourier series of x2 , −π < x < π and use it along with Parseval’s theorem to
show that ∞ X 1 π4
=
(2n − 1)4 96
n=1

Solution: Since f (x) = x2 is an even function, so bn = 0. The Fourier coefficients an will


be given as
π
2 2
Z Z
an = f (x) cos(nx) dx = πx2 cos(nx) dx
π 0 π 0

This can be further simplified for n 6= 0 to


2 π
 
2 4
Z
an = 0− x sin(nx) dx = 2 (−1)n
π n 0 n

The coefficient a0 can be evaluated separately as

2 2π 2
Z
a0 = πx2 dx =
π 0 3

The the Fourier series of f (x) = x2 will be given as


X (−1)n ∞
π2
2
x = +4 cos(nx)
3 n2
n=1

Now by parseval’s theorem we have


π ∞
1 a2 X 2
Z
2
f (x) dx = 0 + an + b2n

π −π 2
n=1

227 WhatsApp: +9147900900676 www.AgriMoon.Com


Bessel’s Inequality and Parseval’s Identity

π
1 2π 4
Z
Using x4 dx = we get
π −π 5

4π 4 X 16 2π 4
+ =
18 n4 5
n=1

This implies

X 1 π4
=
n4 90
n=1

Now using the idea of splitting of the series from the Example 25.3.1 (b), we have
∞ ∞ ∞ ∞
X 1 X 1 1 X 1 15 X 1
= − =
(2n − 1)4 n4 16 n4 16 n4
n=1 n=1 n=1 n=1


X 1
Substituting the value of in the above equation we get the required sum.
n4
k=1

25.3.3 Problem 3

Given the Fourier series



x 2 4 X (−1)n+1
cos = + cos(nx)
2 π π (4n2 − 1)
k=1

deduce the value of



X 1
.
(4n2 − 1)2
n=1

Solution: By Parseval’s theorem for

4 4 (−1)n+1
a0 = , an = , f (x) = cos(x/2)
π π (4n2 − 1)

we have ∞ π
1 16 16 X 1 1
Z
2
+ 2 2 2
= cos2 (x/2) dx = 1
2π π (4n − 1) π −π
n=1

Then,

X 1 π2 − 8
= .
(4n2 − 1)2 16
n=1

228 WhatsApp: +9157900900676 www.AgriMoon.Com


Bessel’s Inequality and Parseval’s Identity

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

229 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 26

Complex Fourier Series

It is often convenient to work with complex form of Fourier series. In deed, the complex
form of Fourier series has applications in the field of signal processing which is of great
interest to many electrical engineers.
Given the Fourier series of a function f (x) as


1 X
f ∼ a0 + [an cos(nx) + bn sin(nx)] , −π < x < π (26.1)
2
n=1

with Z π
1
an = f (x) cos(nx) dx, n = 0, 1, 2 . . .
π −π
and Z π
1
bn = f (x) sin(nx) dx, n = 1, 2 . . .
π −π
We know from Euler’s formula
einx + e−inx einx − e−inx
cos(nx) = sin(nx) =
2 2i

Substituting these values of cos(nx) and sin(nx) into the equation (26.1) we obtain
∞  
1 X einx + e−inx einx − e−inx
f ∼ a0 + an + bn
2 2 2i
n=1
∞  
1 X 1 inx 1 −inx
= a0 + (an − ibn )e + (an + ibn )e
2 2 2
n=1

Let us define new coefficients as


1 1
cn = (an − ibn ), kn = (an + ibn ) (26.2)
2 2
Note that c0 = a0 /2 because b0 = 0. Then the Fourier series becomes

X  inx 
f ∼ c0 + cn e + kn e−inx (26.3)
n=1

230 WhatsApp: +91 7900900676 www.AgriMoon.Com


Complex Fourier Series

where the coefficients are given as


Z π Z π
1 1 1
cn = (an − ibn ) = f (x) [cos(nx) − i sin(nx)] dx = f (x)e−inx dx
2 2π −π 2π −π

Z π Z π
1 1 1
kn = (an + ibn ) = f (x) [cos(nx) + i sin(nx)] dx = f (x)einx dx
2 2π −π 2π −π

From the above calculation we get kn = c−n . Substituting the value of kn into the Fourier
series (26.3) we have

X
f∼ cn einx (26.4)
n=−∞

where
Z π
1
cn = f (x)e−inx dx, n = 0, ±1, ±2, . . . (26.5)
2π −π

The series on the right side of equation (26.4) is called complex form of the Fourier series.
For a function of period 2L defined in [−L, L], the complex form of the Fourier series can
analogously be derived to have
∞ Z L
X inπx 1 −inπx
f∼ cn e L , cn = f (x)e L dx, n = 0, ±1, ±2, . . .
n=−∞
2L −L

26.1 Example Problems

26.1.1 Problem 1

Find the complex Fourier series of

f (x) = ex if − π < x < π and f (x + 2π) = f (x)

Solution: We calculate the coefficients cn as


Z π Z π
1 1
cn = e e x −inx
dx = e(1−in)x dx
2π −π 2π −π
1 e(1−in)x1 π 1  π −inπ 
− e−π einπ

= = e e
2π 1 − in −π 2π 1 − in

231 WhatsApp: +9127900900676 www.AgriMoon.Com


Complex Fourier Series

Substituting e±inπ = cos nπ ± i sin nπ = (−1)n we get


1 1 + in 1 1 + in
cn = (−1)n sinh π = 2
(−1)n sinh π
π (1 − in)(1 + in) π (1 + n )
Then, the Fourier is given as

sinh π X 1 + in inx
f∼ (−1)n e .
π n=−∞ (1 + n2 )

26.1.2 Problem 2

Determine the complex Fourier series representation of


f (x) = x if − l < x < l and f (x + 2l) = f (x)

.
Solution: The complex Fourier series representation of a function f (x) is given as

X inπx
f∼ cn e l

n=−∞

where Z Z
l l
1 −inπx 1 −inπx
cn = f (x)e l dx = xe l dx
2l −l 2l −l
For n 6= 0, integrating by parts we get
"  Z l #
1 −inπx −l l l −inπx
cn = xe l + e l dx ,
2l inπ −l inπ −l
Further application of integration by parts simplifies to
 
1 l2 −inπ l2 inπ l2
−inπx l
cn = − e − e − e l ,
2l inπ inπ (inπ)2 | {z −l}
=0
Finally, it simplifies to
(−1)n il
cn = , n = ±1, ±2, . . .

Now c0 can be calculated as Z l
1
c0 = x dx = 0
2l −l
Therefore, the Fourier series is given as

il X (−1)n inπx
f∼ e l
π n=−∞ n
n6=0

232 WhatsApp: +9137900900676 www.AgriMoon.Com


Complex Fourier Series

26.1.3 Problem 3

Show that Parseval’s identity for the complex form of Fourier series takes the form
Z π ∞
1 2
X
{f (x)} dx = |cn |2
2π −π n=−∞

Solution: For the real form of Fourier series the Parseval’s identity is given as
∞ Z π
a20 X 2  1
+ an + b2n = {f (x)}2 dx (26.6)
2 π −π
n=1

We know that
a0 1 1
c0 = , cn = (an − ibn ), c−n = (an + ibn )
2 2 2
We can deduce that
1 1
|cn |2 = (a2n + b2n ), |c−n|2 = (a2n + b2n ) (26.7)
4 4
Diving the equation (26.6) by 2 and then splitting the second term as
∞ ∞ Z π
a20 1 X 2  1X 2  1
+ an + b2n + an + b2n = {f (x)}2 dx
4 4 4 2π −π
n=1 n=1

Using the relations (26.7) we obtain


∞ ∞ Z π
X X 1
c20 + |cn | + 2
|c−n | =2
{f (x)}2 dx
2π −π
n=1 n=1

This can be rewritten as


∞ Z π
X 1
|cn | = 2
{f (x)}2 dx
n=−∞
2π −π

26.1.4 Problem 4

Given the Fourier series



x sinh π X 1 + in inx
e ∼ (−1)n e .
π n=−∞ (1 + n2 )

233 WhatsApp: +9147900900676 www.AgriMoon.Com


Complex Fourier Series

deduce the value of



X 1
n=−∞
n2 + 1

Solution: From the given series we clearly have


eπ − e−π 1 + in
cn = (−1)n , n = 0, ±1, ±2, . . .
2π (1 + n2 )

These coefficients can be simplified


2 2
2 eπ − e−π (1 + n2 ) eπ − e−π 1
|cn | = =
4π 2 (1 + n2 )2 4π 2 (1 + n2 )

A simple calculation gives


Z π Z π
1 2 1 e2π − e−2π
{f (x)} dx = e2x dx =
2π −π 2π −π 4π

Thus, by Parseval’s identity we have


2 ∞
e2π − e−2π eπ − e−π X 1
=
4π 4π 2 n=−∞
(1 + n2 )

Therefore, we obtain
∞ 
X 1 π eπ + e−π
2)
= π − e−π )
= π cot hπ.
n=−∞
(1 + n (e

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)

234 WhatsApp: +9157900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 27

Fourier Integral

If f (x) is defined on a finite interval [−l, l] and is piecewise continuous then we can con-
struct a Fourier series corresponding to the function f and this series will represent the
function on this interval if the function satisfies some additional conditions discussed be-
fore. Furthermore, if f is periodic then we may be able to represent the function by its
Fourier series on the entire real line. Now suppose the function is not periodic and is de-
fined on the entire real line. Then we do not have any possibility to represent the function
by the Fourier series. However, we may still be able to represent the function in terms
of sine and cosines using an integral, called Fourier integral, instead of a summation. In
this lesson we discuss a representation of a non-periodic function by letting l → ∞ in the
Fourier series of a function defined on [−l, l].

27.1 Fourier Series Representation of a Function

Consider any function f (x) defined on [−l, l] that can be represented by a Fourier series as

a0 X  nπx nπx 
f (x) = + an cos + bn sin . (27.1)
2 l l
n=1

For a more general case we can replace left hand side of the above equation by the average
value (f (x+) + f (x−))/2. We now see what will happen if we let l → ∞. It should be
mentioned that as l approaches to ∞ the function f (x) becomes non-periodic defined on
the real axis. Substituting an and bn in the equation (27.1) we get

!
l l l
1 1X nπu nπx nπu nπx
Z Z Z
f (x) = f (u) du + f (u) cos du cos + f (u) sin du sin
2l −l l −l l l −l l l
n=1

Using the identity cos x cos y + sin x sin y = cos(x − y), we get
∞ Z
1 l 1X l nπ
Z
f (x) = f (u) du + f (u) cos (u − x) du (27.2)
2l −l l l
n=1 −l
Z ∞
If we assume that |f (u)| du converges, the first term on the right hand side approaches
−∞
1 Z l 1 ∞
Z
to 0 as l → ∞ since
f (u) du ≤

|f (u)| du.
2l −l 2l −∞

235 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Integral

0 2 3

Figure 27.1: Sum of area of trapezoid as area under curve in the limiting case

Letting l → ∞ in equation (27.2), we get


∞ ∞ ∞
1X nπ πX1 ∞ nπ
Z Z
f (x) = lim f (u) cos (u − x) du = lim f (u) cos (u − x) du
l→∞ l −∞ l l→∞ l π −∞ l
n=1 n=1
For simplifications, we define

π 1
Z
∆α = and F (α) = f (u) cos α(u − x) du
l π −∞
With these definitions and noting ∆α → 0 as l → ∞, we have

X
f (x) = lim ∆αF (n∆α)
∆α→0
n=1
Refereing Figure 27.1, we can write this limit of the sum in the form of improper integral
as
∞ ∞Z ∞
1
Z Z
f (x) = F (α)dα = f (u) cos α(u − x) du dα
0 π 0 −∞
This is called Fourier Integral Representation of f on the real line. Equivalently, this can
be rewritten as
∞ Z ∞  Z ∞  
1
Z
f (x) = f (u) cos αu du cos αx + f (u) sin αu du sin αx dα
π 0 −∞ −∞
It is often convenient to write
Z ∞
f (x) = [A(α) cos αx + B(α) sin αx] dα
0
where the Fourier Integral Coefficients are
∞ ∞
1 1
Z Z
A(α) = f (u) cos αu du and B(α) = f (u) sin αu du
π −∞ π −∞

236 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Integral

Remark It should be mentioned that above derivation is not rigorous proof of con-
vergence of the Fourier Integral to the function. This is just to give some idea of transition
form Fourier series to Fourier Integral. Nevertheless we summarize the convergence re-
sult, without proof, in the next theorem. In addition to all conditions required for the
convergence of Fourier series we need one more condition, namely, absolute integrability
of f . Further, note that Fourier integral representation of f (x) is entirely analogous to a
P∞
Fourier series representation of a function on finite interval n=1 · · · , is replaced with
R∞ 
0 · · · du .

27.1.1 Theorem

Assume that f is piecewise smooth on every finite interval on the x axis (or piecewise
continuous and one sided derivatives exist) and let f be absolutely integrable over entire
real axis. Then for each x on the entire axis we have
∞Z ∞
1 f (x+) + f (x−)
Z
f (u) cos α(u − x) du =
π 0 −∞ 2

As in the convergence of Fourier series if f is continuous and all other conditions are
satisfied then the Fourier integral converges

27.2 Example Problems

27.2.1 Problem 1

Let a be a real constant and the function f is defined as



 0, x < 0;


f (x) = x, 0 < x < a;

 0, x > a.

i) Find the Fourier integral representation of f . Zii) Determine the convergence of the

1 − cos α
integral at x = a. iii) Find the value of the integral 2
dα.
0 α
Solution: i) The integral representation of f is
Z ∞
f (x) ∼ [A(α) cos αx + B(α) sin αx] dα (27.3)
0

237 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Integral

where
∞  a  Z a 
1 1 1 u sin αu a sin αu
Z Z
A(α) = f (u) cos αu du = u cos αu du = − du
π −∞ 0 π π α 0 0 α
   
1 a sin αa (cos αa − 1) 1 cos αa + αa sin αa − 1
= + =
π α α2 π α2

1 a
∞   Z a 
1 1 −u cos αu a cos αu
Z Z
B(α) = f (u) sin αu du = u sin αu du = + du
π −∞ π 0 π α 0 0 α
   
1 −a cos αa sin αa 1 sin αa − αa cos αa
= + =
π α α2 π α2

Replacing A(α) and B(α) in equation (27.3), we have


∞     
1 cos αa + αa sin αa − 1 sin αa − αa cos αa
Z
f (x) ∼ cos αx + sin αx dα
π 0 α2 α2

1 cos α(a − x) + αa sin α(a − x) − cos αx
Z
= dα
π 0 α2
ii) The function is not defined at x = a. The value of the Fourier integral at x = a is given
as
1 ∞ 1 − cos αa f (a+) + f (a−) 0+a a
Z
2
dα = = =
π 0 α 2 2 2
iii) Substituting a = 1 in the above integral we get
∞ ∞
1 1 − cos α 1 1 − cos α π
Z Z
dα = =⇒ dα =
π 0 α2 2 0 α2 2

27.2.2 Problem 2

Determine the Fourier integral representing


(
1, 0 < x < 2;
f (x) =
0, x < 0 and x > 2.

sin α
Z
Further, find the value of the integral dα.
0 α
Solution: The Fourier integral representation of f is
Z ∞
f (x) ∼ [A(α) cos αx + B(α) sin αx] dα (27.4)
0

238 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Integral

where
∞ 2
1 1 1 sin αu 2 1 sin 2α
Z Z
A(α) = f (u) cos αu du = cos αu du = =
π −∞ π 0 π α 0 π α
∞ 2
1 1 1 − cos αu 2 1 (1 − cos 2α)
Z Z
B(α) = f (u) sin αu du = sin αu du = =
π −∞ π 0 π α 0 π α
Then, substituting calculated values of A(α) and B(α) in equation (27.4), we obtain
∞ 
1 sin 2α (1 − cos 2α)
Z
f (x) ∼ cos αx + sin αx dα
π 0 α α

1 sin α(2 − x) + sin αx
Z
= dα
π 0 α
To find the value of the given integral we substitute x = 1 in the above Fourier integral
and use convergence theorem to get

2 sin α
Z
dα = f (1) = 1
π 0 α
This gives the value of the desired integral as

sin α π
Z
dα =
0 α 2

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

239 WhatsApp: +9157900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 28

Fourier Integrals (Cont.)

In this lesson we shall first present complex form of Fourier integral. We then introduce
Fourier sine and cosine integral. The convergence of these integrals with its application to
evaluate integrals will be discussed. In this lesson will be very useful to introduce Fourier
transforms.

28.1 The Exponential Fourier Integral

It is often convenient to introduce complex form of Fourier integral. In fact, using com-
plex form of Fourier integral we shall introduce Fourier transform, sometimes referred
as Fourier exponential transform, in the next lesson. We start with the following Fourier
integral
∞Z ∞
1
Z
f (x) = f (u) cos α(u − x) du dα (28.1)
π 0 −∞

Note that the integral


Z ∞
f (u) cos α(u − x) du
−∞

is an even function of α and therefore the integral (28.1) can be written as


∞ ∞
1
Z Z
f (x) = f (u) cos α(u − x) du dα (28.2)
2π −∞ −∞

Also, note that the integral


Z ∞
f (u) sin α(u − x) du
−∞

is an odd function of α and therefore we have the following result


∞ ∞
1
Z Z
f (u) sin α(u − x) du dα = 0 (28.3)
2π −∞ −∞

Multiplying the equation (28.3) by i and adding into the equation (28.2) we obtain
∞ ∞
1
Z Z
f (x) = f (u) [cos α(u − x) + i sin α(u − x)] du dα (28.4)
2π −∞ −∞

240 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Integrals (Cont.)

This may be rewritten as


∞ ∞
1
Z Z
f (x) = f (u)eiα(u−x) du dα (28.5)
2π −∞ −∞

If we subtract the equation (28.3) after multiplying by i from the equation (28.2) we obtain
∞ ∞
1
Z Z
f (x) = f (u)e−iα(u−x) du dα (28.6)
2π −∞ −∞

Either (28.5) or (28.6) are exponential form of the Fourier integral.

28.1.1 Example

Compute the complex Fourier integral representation of f (x) = e−a|x| .


Solution: The complex integral representation of f is given as
∞ ∞ ∞ ∞
1 1
Z Z Z Z
iα(u−x) −iαx
f (x) = f (u)e du dα = e f (u)eiαu du dα (28.7)
2π −∞ −∞ 2π −∞ −∞

We first compute the inner integral


" #0 " #∞
∞ 0 ∞
e(a+iα)u e−(a−iα)u
Z Z Z
f (u)eiαu du = eau eiαu du + e−au eiαu du = + −
−∞ −∞ 0 a + iα a − iα
−∞ 0

This can be further simplified


∞  
1 1 2a
Z
−iαu
f (u)e du = + =
−∞ a + iα a − iα a2 + α2

Then the complex Fourier integral representation of f is



a 1
Z
f (x) = e−iαx dα (28.8)
π −∞ a2 + α2

28.2 Fourier Sine and Cosine Integrals

Consider the Fourier integral representation of a function f as


Z ∞
f (x) ∼ [A(α) cos αx + B(α) sin αx] dα
0

241 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Integrals (Cont.)

where the Fourier Integral Coefficients are


∞ ∞
1 1
Z Z
A(α) = f (u) cos αu du and B(α) = f (u) sin αu du
π −∞ π −∞

If the function f is an even function, the integral of A(α) has an even integrand. Therefore
we can simplify the integral to

2
Z
A(α) = f (u) cos αu du
π 0

Since the integrand of the integral in B(α) is odd and therefore B(α) = 0. Thus for even
function f we have
Z ∞
f (x) ∼ A(α) cos αx dα
0

Similarly, for an odd function f we have



2
Z
A(α) = 0 and B(α) = f (u) sin αu du
π 0

and
Z ∞
f (x) ∼ B(α) sin αx dα
0

Remark: Similar to half range Fourier series, we can represent a function defined for
all real x > 0 by Fourier sine or Fourier cosine integral by extending the function as an
odd function or as an even function over the entire real axis, respectively.
We summarize the above results in the following theorem:

28.2.1 Theorem

Assume that f is piecewise smooth function on every finite interval on the positive x-axis
and let f be absolutely integrable over 0 to ∞. Then f may be represented by either:
a) Fourier cosine integral
Z ∞
f (x) ∼ A(α) cos αx, dα 0 < x < ∞,
0

where

2
Z
A(α) = f (u) cos αu du
π 0

242 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Integrals (Cont.)

b) Fourier sine integral


Z ∞
f (x) ∼ B(α) sin αx dα 0<x<∞
0

where

2
Z
B(α) = f (u) sin αu du
π 0

Moreover, the above Fourier cosine and sine integrals converge to [f (x+) + f (x−)]/2.

28.3 Example Problems

28.3.1 Problem 1

For the function 



 0, −∞ < x < −π ;


 −1, −π < x < 0;
f=


 1, 0 < x < π;

0, π < x < ∞.

determine the Fourier integral. To what value does the integral converge at x = −π ?
Solution: Since the given function is an odd function we can directly put A(α) = 0 and
evaluate B(α) as
∞ π
2 2 2
Z Z
B(α) = f (u) sin αu du = sin αu du = (1 − cos απ)
π 0 π 0 πα
Therefore, the Fourier integral representation is

2 1 − cos απ
Z
f (x) ∼ sin αx dα
π 0 α
The function is not defined at x = −π and therefore the Fourier integral at x = −π will
converge to the average value 0−1 1
2 = −2.

28.3.2 Problem 2

Find a Fourier sine and cosine integral representation of


(
1, 0 < x < π ;
f=
0, π < x < ∞.

243 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Integrals (Cont.)

Hence evaluate
∞ ∞
sin πα cos πα (1 − cos πα) sin πα
Z Z
dα and dα
0 α 0 α

Solution: Fourier sine representation is given as


Z ∞
f (x) ∼ B(α) sin αx dα
0

where ∞ π
2 2 2 (1 − cos πα)
Z Z
B(α) = f (u) sin αu du = sin αu du =
π 0 π 0 π α
Therefore

2 (1 − cos πα)
Z
f (x) ∼ sin αx dα
π 0 α
Using convergence theorem, we have

∞  0,

 x > π;
2 (1 − cos πα)
Z
sin αx dα = 1/2, x = π ;
π 0 α 
 1, 0 < x < π.

To get the desired integral we substitute x = π in the above integral


∞ ∞
2 (1 − cos πα) 1 (1 − cos πα) π
Z Z
sin πα dα = =⇒ sin πα dα =
π 0 α 2 0 α 4
For the Fourier cosine representation we evaluate
∞ π
2 2 2 sin πα
Z Z
A(α) = f (u) cos αu du = cos αu du =
π 0 π 0 π α
Thus, the Fourier cosine integral representation is given as

2 sin πα cos αx
Z
f (x) ∼ dα
π 0 α
Applying convergence theorem we have

∞  0,

 x > π;
2 sin πα cos αx
Z
dα = 1/2, x = π ;
π 0 α 
 1, 0 < x < π.

To get the required integral we now substitute x = π into the above integral
∞ ∞
2 sin πα cos πα 1 sin πα cos πα π
Z Z
dα = =⇒ dα =
π 0 α 2 0 α 4

244 WhatsApp: +9157900900676 www.AgriMoon.Com


Fourier Integrals (Cont.)

Suggested Readings

Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

245 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 29

Fourier Sine and Cosine Transform

In this lesson we introduce Fourier cosine and sine transforms. Evaluation and proper-
ties of Fourier cosine and sine transform will be discussed. The parseval’s identities for
Fourier cosine and sine transform will be given.

29.1 Fourier Cosine and Sine Transform

Consider the Fourier cosine integral representation of a function f as


r Z ∞ r Z ∞ !
∞Z ∞
2 2 2
Z
f (x) = f (u) cos αu du cos αx dα = f (u) cos αu du cos αx dα
π 0 0 π 0 π 0

In this integration representation, we set


r Z ∞
2
fˆc (α) = f (u) cos αu du (29.1)
π 0
and then
r Z ∞
2
f (x) = fˆc (α) cos αx dα (29.2)
π 0

The function fˆc (α) as given by (29.1) is known as the Fourier cosine transform of f (x)
in 0 < x < ∞. We shall denote Fourier cosine transform by Fc (f ). The function f (x)
as given by (29.2) is called inverse Fourier cosine transform of fˆc (α). It is denoted by
Fc−1 (fˆc ).

Similarly we define Fourier sine and inverse Fourier sine transform by


r Z ∞ r Z ∞
2 2
Fs (f ) = fˆs (α) := f (u) sin αu du and Fs−1 (fˆ) = f (x) := fˆs (α) sin αx dα
π 0 π 0

29.2 Properties

We mention here some important properties of Fourier cosine and sine transform that will
be used in the application to solving differential equations.

246 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Sine and Cosine Transform

1. Linearity: Let f and g are piecewise continuous and absolutely integrable functions.
Then for constants a and b we have

Fc (af + bg) = aFc (f ) + bFc (g) and Fs (af + bg) = aFs (f ) + bFc (g)

Note that these properties are obvious and can be proved just using linearity of the inte-
grals.
2. Transform of Derivatives: Let f (x) be continuous and absolutely integrable on
x−axis. Let f ′ (x) be piecewise continuous and on each finite interval on [0, ∞] and
f (x) → 0 as x → ∞. Then,
r
2
Fc {f ′ (x)} = αFs {f (x)} − f (0) and Fs {f ′ (x)} = −αFc {f (x)}
π
Proof: By the definition of Fourier cosine transform we have
r Z ∞
2
Fc {f ′ (x)} = f ′ (x) cos αx dx
π 0

Integrating by parts we get


r  Z ∞ 
′ 2 ∞
Fc {f (x)} = (f (x) cos αx) + α f (x) sin αx dx

π 0 0

Using the definition of Fourier sine integral we obtain


r
′ 2
Fc {f (x)} = − f (0) + α Fs {f (x)}.
π
Similarly the other result for Fourier sine transform can be obtained.

Remark: The above results can easily be extended to the second order derivatives to
have
r r
2 ′ 2
Fc {f ′′ (x)} = −α2 Fc {f (x)} − f (0) and Fs {f ′′ (x)} = αf (0) − α2 Fs {f (x)}
π π
Note that here we have assumed continuity of f and f ′ and piecewise continuity of f ′′ .
Further, we also assumed that f and f ′ both goes to 0 as x approaches to ∞.
3. Parseval’s Identities: For Fourier sine and cosine transform we have the following
identities

247 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Sine and Cosine Transform

Z ∞ Z ∞ Z ∞h i2 Z ∞
i) fˆc (α)ĝc(α) dα = f (x)g(x) dx ii) fˆc (α) dα = [f (x)]2
0 0 0 0

Z ∞ Z ∞ Z ∞h i2 Z ∞
iii) fˆs (α)ĝs (α) dα = f (x)g(x) dx iv) fˆs (α) dα = [f (x)]2
0 0 0 0
Proof: We prove the first identity and rest can be proved similarly. We take the right hand
side of the identity and use the definition of the inverse cosine transform to get

r Z ∞ Z ∞
2
Z
f (x)g(x) dx = f (x) ĝc (α) cos(αx) dα dx
0 π 0 0

Changing the order of integration we obtain



r Z ∞ Z ∞ Z ∞
2
Z
f (x)g(x) dx = ĝc (α) f (x) cos(αx) dx dα = fˆc (α)ĝc (α) dα
0 π 0 0 0

29.3 Example Problems

29.3.1 Problem 1

Find the Fourier sine transform of e−x , x > 0. Hence show that

x sin mx πe−m
Z
d x = ,m>0
0 1 + x2 2

Solution: Using the definition of Fourier sine transform


r Z ∞
x 2
Fs {e } = e−x sin αx dx
π 0
Let us denote the integral on the right hand side by i and evaluate it by integrating by parts
as
Z ∞ ∞ Z ∞ Z ∞
−x −x −x
I= e sin αx dx = −e sin αx + α e cos αx dx = α e−x cos αx dx

0 0 0 0

Again integrating by parts


 ∞ Z ∞ 
−x −x
I = α −e cos αx − α e sin αx dx = α [1 − αI]

0 0

This implies
α
I=
1 + α2

248 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Sine and Cosine Transform

Finally substituting the value of I to the expression of Fourier sine transform above we
get r  
2 α
Fs {ex } =
π 1 + α2
Taking inverse Fourier transform
r Z ∞
2 2 ∞ α
Z
e−x = ˆ
fs (α) sin αx dα = sin αx dα
π 0 π 0 1 + α2

Changing x to m and α to x we obtain


Z ∞
x π
2
sin(xm) dx = e−m
0 1+x 2

29.3.2 Problem 2
2
Find the Fourier cosine transform of e−x , x > 0.
Solution: By the definition of Fourier cosine transform we have
r Z ∞ r Z ∞
2 2 2 2
Fc {e−x }= f (u) cos(αu) du = e−u cos(αu) du
π 0 π 0

Let us denote the integral on the right hand side by I and differentiate it with respect to α
as Z ∞ Z ∞
dI d −u2 2
= e cos(αu) du = − e−u sin(αu)u du
dα dα 0 0
Integrating by parts we get
 Z ∞ 
dI 1 −u2 −u2 α
= e sin(αu) − α e cos(αu) du = − I
dα 2 0 2

This implies
2
/4
I = c e−α
√ √
π π
Using I(0) = 2 , we evaluate the constant c = 2 . Then we have

π −α2 /4
I= e
2
Therefore the desired Fourier cosine transform is given as
r √
−x2 2 π −α2 /4 1 2
Fc {e }= e = √ e−α /4
π 2 2

249 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Sine and Cosine Transform

29.3.3 Problem 3

Using Parseval’s identities, prove that


dt ∞
t2
Z Z
π π
i) 2 2 2 2
= ii) 2
dt =
0 (a + t )(b + t ) 2ab(a + b) 0 (t + 1) 4

Solution: i) For the first part let f (x) = e−ax and g(x) = e−bx . It can easily be shown that
r Z ∞ r
2 2 a
Fc {f } = fˆc (α) = e−ax
cos αx dx =
π 0 π a + α2
2

r Z ∞ r
2 2 b
Fc {g} = fˆc (α) = e−bx cos αx dx =
π 0 π b + α2
2

Using Parseval’s identity we get


∞ ∞ ∞ ∞
2
Z Z Z Z
a b
fˆc (α)ĝc(α) dα = f (x)g(x) dx ⇒ dα = e−(a+b)x dx
0 0 π 0 a + α b + α2
2 2 2
0

This can be further simplified as



2ab dα e−(a+b)x ∞
Z
dα = −
π (a2 + α2 )(b2 + α2 ) a+b 0

0

Thus we get


Z
π
=
0 (a2 + α2 )(b2 + α2 ) ab(a + b)
ii) For the second part we use Fourier sine transform of e−x as
r
2 α
Fs {e−x } = fˆs (α) = .
π 1 + α2
Using Parseval’s identity we obtain

2 ∞
α2 1
Z Z
2
dα = ∞ e−x dx =
π 0 (1 + α2 )2 0 2

Hence we have the desired results



α2
Z
π
2 2
dα = .
0 (1 + α ) 2

250 WhatsApp: +9157900900676 www.AgriMoon.Com


Fourier Sine and Cosine Transform

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

251 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 30

FOURIER TRANSFORM

In this lesson we describe Fourier transform. We shall connect Fourier series with the
Fourier transform through Fourier integral. Several interesting properties of the Fourier
Transform such as linearity, shifting, scaling etc. will be discussed.

30.1 Fourier Transform

Consider the Fourier integral defined in earlier lessons as


∞Z ∞ 
1
Z
f (x) = f (u) cos α(u − x)du dα
π 0 −∞

Since the inner integral is an even function of α we have


∞ Z ∞ 
1
Z
f (x) = f (u) cos α(u − x)du dα (30.1)
2π −∞ −∞

Further note that


∞ ∞
1
Z Z
0= f (u) sin α(u − x)du dα (30.2)
2π −∞ −∞

as the integral Z ∞
f (u) sin α(u − x)du dα
−∞

is an odd function of α. Multiplying the equation (30.2) by the imaginary unit i and adding
to the equation (30.1), we obtain
∞ ∞
1
Z Z
f (x) = f (u)eiα(u−x) du dα (30.3)
2π −∞ −∞

This is the complex Fourier integral representation of f on the real line. Now we split the
exponential integrands and the pre-factor 1/(2π) as
∞  ∞ 
1 1
Z Z
f (x) = √ √ iαu
f (u)e du e−iαx dα (30.4)
2π −∞ 2π −∞

252 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Transform

The term in the parentheses is what we will the Fourier transform of f . Thus the Fourier
transform of f , denoted by fˆ(α), is defined as

1
Z
fˆ(α) = √ f (u)eiαu du
2π −∞

Now the equation (30.4) can be written as



1
Z
f (x) = √ fˆ(α)e−iαx dα (30.5)
2π −∞

The function f (x) in equation (30.5) is called the inverse Fourier transform of fˆ(α). We
shall use F for Fourier transformation and F −1 for inverse Fourier transformation in this
lesson.

Remark: It should be noted that there are a number of alternative forms for the
Fourier transform. Different forms deals with a different pre-factor and power of ex-
ponential. For example we can also define Fourier and inverse Fourier transform in the
following manner.
∞ ∞
1 1
Z Z
f (x) = √ fˆ(α)eiαx dα where fˆ(α) = √ f (u)e−iαu du
2π −∞ 2π −∞

or
∞ ∞
1
Z Z
f (x) = fˆ(α)eiαx dα where fˆ(α) = f (u)e−iαu du
−∞ 2π −∞

or
∞ ∞
1
Z Z
f (x) = fˆ(α)eiαx dα where fˆ(α) = f (u)e−iαu du
2π −∞ −∞

We shall remain with our original form because it is easy to remember because of the
same pre-factor in front of both forward and inverse transforms.

30.2 Properties

We now list a number of properties of the Fourier transform that are useful in their ma-
nipulation.

253 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Transform

1. Linearity: Let f and g are piecewise continuous and absolutely integrable functions.
Then for constants a and b we have

F (af + bg) = aF (f ) + bF (g)

Proof: Similar to the Fourier sine and cosine transform this property is obvious and can
be proved just using linearity of the Fourier integral.
2. Change of Scale Property: If fˆ(α) is the Fourier transform of f (x) then
1 ˆ α 
F [f (ax)] = f , a 6= 0
|a| a

Proof: By the definition of Fourier transform we get



1
Z
F [f (ax)] = √ f (ax)eiαx dx
2π −∞

Substituting ax = t so that adx = dt , we have



1 dt 1 ˆ α 
Z
iα at
F [f (ax)] = √ f (t)e = f .
2π −∞ a |a| a

3. Shifting Property: If fˆ(α) is the Fourier transform of f (x) then

F [f (x − a)] = eiαa F [f (x)]

Proof: By definition, we have


Z ∞
1
F [f (x − a)] = √ fˆ(x − a)eiαx dx
2π −∞
Z ∞
1
=√ fˆ(t)eiα(t+a) dt = eiαa fˆ(α)
2π −∞

3. Duality Property: If fˆ(α) is the Fourier transform of f (x) then

F [fˆ(x)] = f (−α)

Proof: By definition of the inverse Fourier transform, we have



1
Z
f (x) = √ fˆ(α)e−iαx dα
2π −∞

254 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Transform

Renaming x to α and α to x, we have



1
Z
f (α) = √ fˆ(x)e−iαx dx
2π −∞

Replacing α to −α, we obtain



1
Z
f (−α) = √ fˆ(x)eiαx dx = F [fˆ(x)].
2π −∞

Now we evaluate Fourier transform of some simple functions.

30.3 Example Problems

30.3.1 Problem 1

Find the Fourier transform of the following function


(
1, |x| < a,
X[−a,a] (x) = (30.6)
0, |x| > a.

Solution: By the definition of Fourier transform, we have



1
Z
X[−a,a] (x)eiαx dx
 
F X[−a,a] (x) = √
2π −∞

Using the given value of given function we get


Z a
1 1 1 iαa
eiαx dx = √ (e − e−iαa )
 
F X[−a,a] (x) = √
2π −a 2π iα
2
 iαa
e −e −iαa 
2

sin(αa)

=√ =√ .
2π 2iα 2π α

30.3.2 Problem 2
2
Find the Fourier transform of e−ax .
Solution: Using the definition of the Fourier Transform

1
Z
−ax2 ) 2
F (e =√ e−ax eiαx dx
2π −∞

255 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Transform

Further simplifications leads to


Z ∞
h
−ax2
i 1 iα 2 α2
F e =√ e[−a(x− 2a ) − 4a ] dx
2π −∞
1 − α2 ∞ −ay 2 1
Z
α2
=√ e 4a e dy = √ e− 4a
2π −∞ 2a
α2
h 1 2i
If a = 1/2 then F e− 2 x = e− 2 . This shows F [f (x)] = f (α) such function is said to be
self-reciprocal under the Fourier transformation.

30.3.3 Problem 3

Find the inverse Fourier transform of fˆ(α) = e−|α|y , where y ∈ (0, ∞).
Solution: By the definition of inverse Fourier transform
Z ∞ Z ∞
−1
h
ˆ
i 1 ˆ −iαx 1
F f (α) = √ f (α)e dα = √ e−|α|y e−iαx dα
2π −∞ 2π −∞
Z 0 Z ∞
1 αy −iαx 1
=√ e e dα + √ e−αy e−iαx dα
2π −∞ 2π 0
Combining the two exponentials in the integrands
Z 0 Z ∞
−1
h
ˆ
i 1 (y−ix)α 1
F f (α) = √ e dα + √ e−(y+ix)α dα
2π −∞ 2π 0
Now we can integrate the above two integrals to get
" #0 " #∞
h i 1 e(y−ix)α 1 e−(y+ix)α
F −1 fˆ(α) = √ +√
2π (y − ix) −∞ 2π −(y + ix) 0

Noting limα→−∞ e(y−ix)α = 0 and limα→∞ e−(y+ix)α = 0, we obtain


h i 1 1 1 1
F −1 fˆ(α) = √ +√
2π y − ix 2π y + ix
This can be further simplified to give
h i 1 y + ix + y − ix
F −1 fˆ(α) = √
2π (y − ix)(y + ix)
Hence we get r
2 y
f (x) = .
π (x2 + y 2 )

256 WhatsApp: +9157900900676 www.AgriMoon.Com


Fourier Transform

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Arfken, G.B. (2001). Mathematical Methods for Physicists. Fifth Edition, Harcourt Aca-
demic Press, San Diego.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishilers, New Delhi.

257 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 31

Fourier Transform (Cont.)

In this lesson we continues further on Fourier transform. Here we discuss some more
properties of the Fourier transform and evolution of Fourier transform of some special
functions. Some applications of Parseval’s identity and convolution property will be
demonstrated.

31.1 Fourier Transforms of Derivatives

31.1.1 Theorem

If f (x) is continuously differential and f (x) → 0 as |x| → ∞, then

F [f ′ (x)] = (−iα)F [f (x)] = (−iα)fˆ(α).

Proof: By the definition of Fourier transform we have



1
Z

F [f (x)] = √ f ′ (x)eiαx dx
2π −∞

Integrating by parts we obtain


 ∞ 
1
Z
∞
f (x)eiαx −∞ iαx


F [f (x)] = √ − f (x)e (iα)dx .
2π −∞

Since f (x) → 0 as |x| → ∞, we get

F [f ′ (x)] = −iαfˆ(α).

This proves the result.


Note that the above result can be generalized. If f (x) is continuously n-times differen-
tiable and f k (x) → 0 as |x| → ∞ for k = 1, 2, ..., n − 1, then the Fourier transform of nth
derivative is
F [f n (x)] = (−iα)n fˆ(α).

258 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

31.2 Convolution for Fourier Transforms

31.2.1 Theorem

The Fourier transform of the convolution of f (x) and g(x) is 2π times the product of the
Fourier transforms of f (x) and g(x), i.e.,

F [f ∗ g] = 2πF (f )F (g).

Proof: By definition, we have


∞ Z ∞ 
1
Z
F [f ∗ g] = √ f (y)g(x − y) dy eiαx dx
2π −∞ −∞

Changing the order of integration we obtain


∞ ∞
1
Z Z
F [f ∗ g] = √ f (y)g(x − y)eiαx dx dy
2π −∞ −∞

By substituting x − y = t ⇒ dx = dt we get
∞ ∞
1
Z Z
F [f ∗ g] = √ f (y)g(t)eiα(y+t)dt dy
2π −∞ −∞

Splitting the integrals we get


√ ∞ ∞
  
1 1
Z Z
iαy iαt
F [f ∗ g] = 2π √ f (y)e dy √ g(t)e dt
2π −∞ 2π −∞

Finally we have the following result


√ √
F [f ∗ g] = 2π F [f ]F [g] = 2π fˆ(α)ĝ(α)

This proves the result.


The above result is sometimes written by taking the inverse transform on both the sides as
Z ∞
(f ∗ g)(x) = fˆ(α)ĝ(α)e−iαx dα
−∞

or Z ∞ Z ∞
f (y)g(x − y)dy = fˆ(α)ĝ(α)e−iαx dα
−∞ −∞

259 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

31.3 Perseval’s Identity for Fourier Transforms

31.3.1 Theorem

If fˆ(α) and ĝ(α) are the Fourier transforms of the f (x) and g(x) respectively, then
Z ∞ Z ∞ Z ∞ Z ∞
(i) fˆ(α)ĝ(α) dα = f (x)g(x) dx (ii) |fˆ(α)|2 dα = |f (α)|2 dα.
−∞ −∞ −∞ −∞

Proof: (i) Use of the inversion formula for Fourier transform gives
∞ ∞  ∞ 
1
Z Z Z
iαx
f (x)g(x) dx = f (x) √ ĝ(α)e dα dx
−∞ −∞ 2π −∞

Changing the order of integration we have


∞ ∞ ∞
1
Z Z Z
f (x)g(x) dx = √ f (x)ĝ(α)eiαx dx dα
−∞ 2π −∞ −∞

Using the definition of Fourier transform we get


Z ∞ Z ∞
f (x)g(x) dx = ĝ(α)fˆ(α) dα.
−∞ −∞

(ii) Taking f (x) = g(x) we get,


Z ∞ Z ∞
fˆ(α)fˆ(α) dα = f (x)f (x) dx
−∞ −∞
Z ∞ Z ∞
This implies 2
|f (x)| dx = ˆ 2 dα.
|f (α)|
−∞ −∞

31.4 Example Problems

31.4.1 Problem 1

Find the Fourier transform of f (x) defined by


(
1, when |x| < a
f (x) =
0, when |x| > a

260 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

and hence evaluate



sin αa cos αx ∞
sin αa ∞
sin2 x
Z Z Z
(i) dα, (ii) dα and (iii) dx.
−∞ α 0 α 0 x2

Solution: (i) Let fˆ(α) be the Fourier transform of f (x). Then, by the definition of Fourier
transform
Z ∞ Z a
1 1
fˆ(α) = √ iαx
e f (x)dx = √ eiαx dx
2π −∞ 2π −a
1 1
eiαa − e−iαa dx

=√
2π iα
This gives
2 sin aα
fˆ(α) = √
2π α
From the definition of inverse Fourier transform we also know that

1
Z
f (x) = √ fˆ(α)e−iαx dα
2π −∞

This implies that


( √
Z ∞ √ 2π, when |x| < a
fˆ(α)e−iαx dα = 2πf (x) =
−∞ 0, when |x| > a

Substituting fˆ(α) in the above equation we get


( √
∞ 2π, when |x| < a
2 sin aα
Z
√ (cos αx − i sin αx) dα =
−∞ 2π α 0, when |x| > a
We now split the left hand side into real and imaginary parts to get
(
∞ ∞ π, when |x| < a
sin aα cos xα sin αa sin αx
Z Z
dα − i dα =
−∞ α −∞ α 0, when |x| > a
Equating real part on both sides we get the desired result as
(
∞ π, when |x| < a
sin αa cos αx
Z
dα =
−∞ α 0, when |x| > a

(ii) If we set x = 0 and a = 1 in the above results, we get



sin α
Z
dα = π, Since |x| < a
−∞ α

261 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

Since the integrand is an even function, we get the the desired results

sin α π
Z
=
0 α 2

(ii) We now apply Parseval’s identity for Fourier transform


Z ∞ Z ∞
|fˆ(α)| dα =
2
|f (α)|2 dα
−∞ −∞

Substituting the function f (x) and its Fourier transform we get


a

4 sin2 aα
Z Z
dα = dα = 2a
−∞ 2π α2 −a

This implies

sin2 aα
Z
dα = πa
−∞ α2
Since the integrand is an even function we have the desired result as

sin2 aα π
Z
2
dα = a
0 α 2

31.4.2 Problem 2

Evaluate the Fourier transform of the rectangular pulse function


(
1, if |t| < 1/2;
Π(t) =
0, otherwise.

Apply the convolution theorem to evaluate the Fourier transform of the triangular pulse
function
(
1 − |t|, if |t| < 1;
Λ(t) =
0, otherwise.

Solution: It is well known result that Λ = Π ∗ Π. It can easily be sheen by observing


t+1/2
 R
 R−1/2 1 · 1dy, if −1 < t < 0;

Z∞ 
1/2
(Π ∗ Π)(t) = Π(y)Π(t − y)dy = t−1/2 1 · 1dy, if 0 < t < 1;
−∞ 

 0 otherwise.

262 WhatsApp: +9157900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

Clearly, we have

 1 + t, if −1 < t < 0;
Z ∞ 

(Π ∗ Π)(t) = Π(y)Π(t − y)dy = 1 − t, if 0 < t < 1; = Λ(t)
−∞ 
 0 otherwise.

Using a = 1/2 in the previous example we have


2 sin(α/2)
F (Π) = √
2π α

Now using convolution result we get


√ 4 sin2 (α/2)
F [Λ(t)] = F [(Π ∗ Π)(t)] = 2πF (Π)F (Π) = √ .
2π α2

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

263 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 32

Fourier Transform (Cont.)

In this lesson we provide some miscellaneous examples of Fourier transforms. One of the
major applications of Fourier transforms for solving partial differential equations will not
be discussed in this module. However, we shall highlights some other applications like
evaluating special integrals and the idea of solving ordinary differential equations.

32.1 Example Problems

32.1.1 Problem 1

Find the Fourier transform of


(
1 − x2 , when |x| < 1
f (x) =
0, when |x| > 1

and hence evaluate ∞


−x cos x + sin x x
Z
3
cos dx.
0 x 2

Solution: Using the definition of Fourier transform we get


Z ∞
ˆ 1
f (α) = F [f (x)] = √ eiαx f (x) dx
2π −∞
Z 1
1
eiαx 1 − x2 dx

=√
2π −1

Integrating by parts we obtain


Z 1 iαx
ˆ 1 eiαx 1
2 e
f (α) = √ (1 − x ) − (−2x) dx
2π iα −1 −1 iα

Again, the application of integration by parts gives


 iαx Z 1 iαx
1

ˆ 2 e e
f (α) = √ x − dx
2π (iα)2 −1 −1 (iα)
2

264 WhatsApp: +91 7900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

Further simplifications leads to


eiαx 1
  
ˆ 2 1 iα −iα
f (α) = √ − 2 e +e −
α iα −1


eiα e−iα
 
1 2 iα −iα
=− √ e +e − +
2π α2 iα iα
Using Euler’s equality we obtain
 
ˆ 1 4 sin α
f (α) = − √ cos α −
2π α2 α
1 4
=√ [−α cos α + sin α]
2π α3
We know from the Fourier inversion formula that

1
Z
f (x) = √ fˆ(α)e−iαx dα
2π −∞

This implies

4 −α cos α + sin α −iαx
Z
f (x) = e dα
2π −∞ α3
Equating real parts, on both sides we get

−α cos α + sin α π
Z
cos αx dα = f (x)
−∞ α3 2

Substituting the value of the function we obtain


(
∞ π 2 when |x| < 1
−α cos α + sin α 2 (1 − x ),
Z
cos αx dα =
−∞ α3 0, when |x| > 1
Substitution x = 1/2 gives
∞  
−α cos α + sin α α π 1
Z
3
cos dα = 1− ,
−∞ α 2 2 4
This implies

−α cos α + sin α α 3π
Z
2 cos dα =
0 α3 2 8

Hence we get the desired result as



−α cos α + sin α α 3π
Z
cos dα =
0 α3 2 16

265 WhatsApp: +9127900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

32.1.2 Problem 2

Find the Fourier transformation of the function f (t) = e−at H(t), where
(
0, when t < 0
H(t) =
1, when t ≥ 0

Solution: Using the definition of Fourier transform


Z ∞
1
F [f (t)] = √ f (t)eiαt dt
2π −∞
Z ∞
1
=√ e−at eiαt dt
2π 0
Solving integral leads to
1 e(−a+iα)t ∞
F [f (t)] = √
2π (−a + iα) 0

Since we know that

lim e−at eiαt = lim e−at (cos αt + i sin αt) = 0


t→∞ t→∞

We get the required transform as


 
1 1 1 1
F [f (t)] = − √ =√ .
2π (−a + iα) 2π a − iα

32.1.3 Problem 3

Find the Fourier transform of Dirac-Delta function δ(t − a).


Solution: Recall that the Dirac-Delta function can be thought as

 0, when t < a, a > 0


δ(t − a) = lim δǫ (t − a) = 1
ǫ→0  ǫ , when a ≤ t ≤ a + ǫ
 0, when t > a + ǫ

Applying the definition of Fourier transform we get


Z ∞
1
F [δ(t − a)] = √ δ(t − a)eiαt dt
2π −∞
Z a+ǫ
1 1
=√ lim eiαt dt
2π a ǫ→0 ǫ

266 WhatsApp: +9137900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

On integrating we obtain
1 1 eiαt a+ǫ
F [δ(t − a)] = lim √
ǫ→0 2π ǫ iα a

1 1 1  iα(a+ǫ) iαa

= lim √ e −e
ǫ→0 2π ǫ iα
1 eiαǫ − 1 1
= √ eiαa lim = √ eiαa
2π ǫ→0 iαǫ 2π

With this results we deduce that F −1 (1) = 2πδ(t).

32.1.4 Problem 4

Find the Fourier transform of

f (t) = e−a|t| , − ∞ < t < ∞, a > 0.

Solution: Using the definition of Fourier transform we have


Z 0 Z ∞ 
h
−a|t|
i 1 at iαt −at iαt
F e =√ e e dt + e e dt
2π −∞ 0
" #
1 e(a+iα)t 0 e(−a+iα)t ∞
=√ +
2π a + iα −∞ −a + iα 0

 
1 1 1
=√ + (−1)
2π a + iα −a + iα
 
1 1 1 1 2a
=√ + =√ .
2π a + iα a − iα 2π a + α2
2

32.1.5 Problem 5

1
Find the inverse Fourier transform of fˆ(α) = .
2π(a − iα)2
Solution: Writing the given function as a product of two functions as
 
−1
h
ˆ
i
−1 1 1
F f (α) = F √ √
2π(a − iα) 2π(a − iα)
Application of convolution theorem gives
   
1 −1 1 −1 1 1  −at
e H(t) ∗ e−at H(t)

f (t) = √ F √ ∗F √ =√
2π 2π(a − iα) 2π(a − iα) 2π

267 WhatsApp: +9147900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

Evaluating the convolution


∞ ∞
1 e−at
Z Z
−ax −a(t−x)
f (t) = √ e H(x)e H(t − x)dx = √ H(x)H(t − x)dx
2π −∞ 2π −∞

Note that H(x)H(t − x) = 0 when x < 0 or when t − x < 0, i.e.,


(
1, if 0 < x < t;
H(x)H(t − x) =
0, otherwise

Hence we have
( −at
t te
eat √ , if t > 0;
Z
f (t) = √ dx = 2π
2π 0 0, if t < 0.

Thus we get
te−at
f (t) = √ H(t).

32.1.6 Problem 6

Using Fourier transform, find the solution of the differential equation



y − 2y = H(t)e−2t , − ∞ < t < ∞

Solution: Taking Fourier transform on both sides we get


 
h ′i 1 1
F y − 2F [y] = √
2π −2 + iα

Aplying the property of Fourier transform of derivatives we get


 
1 1
−iαŷ − 2ŷ = − √
2π −2 + iα

Simple algebraic calculation gives the value of transformed variable as


1 1
ŷ = − √
2π 4 + α2
1
Taking inverse Fourier transform we get the desired solution as y = − e−2|t| .
4

268 WhatsApp: +9157900900676 www.AgriMoon.Com


Fourier Transform (Cont.)

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.

269 WhatsApp: +9167900900676 www.AgriMoon.Com


Module 3: Fourier Series and Fourier Transform

Lesson 33

Finite Fourier Transform

The Fourier transform, cosine transform and sine transform are all motivated by the re-
spective integral representations of a function. Applying the same line of reasoning, but
using Fourier cosine and sine series instead of integrals, we obtain the so called finite
transforms. It has applications in solving partial differential equations in finite domain.

32.1 Finite Fourier Transformations

Let f (x) be defined in (0, L) and satisfies Dirichlet’s conditions in that finite domain. We
begin with the cosine Fourier series

a0 X nπx
f (x) = + an cos ,
2 L
n=1

where the Fourier coefficients are given by


L
2 nπx
Z
an = f (x) cos dx, n = 1, 2, ...
L 0 L

The sine Fourier series is given as



X nπx
f (x) = bn sin
L
n=1

where
L
2 nπx
Z
bn = f (x) sin dx, n = 1, 2, ...
L 0 L
We now define the finite Fourier cosine transform as
L
nπx
Z
Fc [f ] = fˆc (n) = f (x) cos dx
0 L

The function f (x) is called inverse finite Fourier cosine transform and is given by

h
ˆ
i 1ˆ 2Xˆ nπx
Fc−1 fc (n) = f (x) = fc (0) + fc (n) cos dx
L L L
n=1

270 WhatsApp: +91 7900900676 www.AgriMoon.Com


Finite Fourier Transform

The finite Fourier sine transform is


L
nπx
Z
Fs (f ) = fˆs (n) = f (x) sin dx
0 L

The inverse finite Fourier sine transform is given by



2Xˆ nπx
f (x) = fs (n) sin dx
L L
n=1

2
Remark: The factor may be associated with either the transformation or the in-
L r
2
verse of the transformation or the factor may be associated with both the transform
L
and the inverse.

32.2 Finite Fourier Transform (Complex form)

Similar to the finite Fourier sine and cosine transform we can also define finite Fourier
transform from complex form of Fourier series as
Z L
−inπx
F [f ] = f (x)e L dx = fˆ(n)
−L

The inverse finite Fourier transform is defined as



1 X ˆ inπx
f (x) = f (n)e L
2L n=−∞

32.3 Derivatives of Finite Fourier Sine and Cosine Transforms

32.3.1 Theorem

Let f (x) and f ′(x) be continuous and f ′′ (x) be piecewise continuous on the interval [0, l],
then

h i
fˆc (n)
′ nπ

(i) Fs f (x) = − L

271 WhatsApp: +9127900900676 www.AgriMoon.Com


Finite Fourier Transform

h ′′ i
nπ 2 ˆ nπ
f (0) + (−1)n+1 f (L)
  
(ii) Fs f (x) = − L fs (n) + L

h i
fˆc (n) + (−1)n f (L) − f (0)
′ nπ

(iii) Fc f (x) = L

h i
′′ nπ 2 ˆ ′ ′
fc (n) + (−1)n f (L) −

(iv) Fc f (x) = − L f (0)

Proof: (i) Using the definition of Fourier sine transform


L
h ′ i Z ′ nπx
Fs f (x) = f (x) sin dx
0 L
Integrating by parts, we get
" Z L #
h ′ i nπx L nπx nπ
Fs f (x) = f (x) sin − f (x) cos dx
L 0 0 L L

This implies h ′ i  nπ 
Fs f (x) = − fˆc (n)
L
(ii) By the definition of finite Fourier transform, we get
L
h ′ i Z ′ nπx
Fc f (x) = f (x) cos dx
0 L
Integrating by parts gives
" Z L #
h ′ i nπx L nπx nπ
Fc f (x) = f (x) cos − f (x) sin dx
L 0 0 L L

Thus we get h ′ i  nπ 
Fc f (x) = (−1)n f (L) − f (0) + fˆc (n)
L
Repeated applications of these above two will give (ii) and (iv).

32.4 Example Problems

32.4.1 Problem 1

Find the finite Fourier sine and cosine transform of f (x) = x2 , if 0 < x < 4.

272 WhatsApp: +9137900900676 www.AgriMoon.Com


Finite Fourier Transform

Solution: Using the definition of Fourier sine transform


4 4
nπx nπx
Z Z
Fs [f (x)] = f (x) sin dx = x2 sin dx if n = 1, 2, 3...
0 4 0 4

Integration by parts leads to


  4 Z 4
cos(nπx)/4
2 cos(nπx)/4
Fs [f (x)] = x − − 2x dx
nπ/4 0 0 nπ/4
Z 4
64 cos nπ 8 nπx
=− + x cos dx
nπ nπ 0 4

Evaluating the integral we get


4
64(−1)n+1 64(−1)n+1

32 cos(nπx)/4 128
Fs [f (x)] = − 2 2 − = + 3 3 [(−1)n − 1]
nπ n π nπ/4 0 nπ n π

We have used the fact that



1 if n even,
cos(nπ) = (−1)n =
−1 if n odd.

Now, by the definition of the finite Fourier cosine transform, we get
\[ F_c[f(x)] = \int_0^4 f(x)\cos\frac{n\pi x}{4}\,dx = \int_0^4 x^2\cos\frac{n\pi x}{4}\,dx, \qquad n = 1, 2, 3, \ldots \]
Proceeding as before we get
\[ F_c[f(x)] = \left[x^2\,\frac{\sin(n\pi x/4)}{n\pi/4}\right]_0^4 - \frac{8}{n\pi}\int_0^4 x\sin\frac{n\pi x}{4}\,dx = \frac{128(-1)^n}{n^2\pi^2}. \]
If \( n = 0 \), then \( F_c[f(x)] = \int_0^4 f(x)\,dx = \int_0^4 x^2\,dx = \dfrac{64}{3} \).
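
The closed forms above can be cross-checked symbolically. The following short sketch (Python with sympy; a verification aid, not part of the original text) recomputes both transforms of f(x) = x² on (0, 4) and compares them with the stated answers for a few values of n.

    # Sketch (Python + sympy): verify the transforms of f(x) = x^2 on (0, 4).
    import sympy as sp

    x = sp.symbols('x')
    for k in [1, 2, 3, 5]:
        Fs = sp.integrate(x**2 * sp.sin(k * sp.pi * x / 4), (x, 0, 4))
        Fc = sp.integrate(x**2 * sp.cos(k * sp.pi * x / 4), (x, 0, 4))
        Fs_closed = 64 * (-1)**(k + 1) / (k * sp.pi) \
            + 128 * ((-1)**k - 1) / (k**3 * sp.pi**3)
        Fc_closed = 128 * (-1)**k / (k**2 * sp.pi**2)
        assert sp.simplify(Fs - Fs_closed) == 0
        assert sp.simplify(Fc - Fc_closed) == 0
    print("closed forms verified for n = 1, 2, 3, 5")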

32.4.2 Problem 2

Find the finite Fourier sine and cosine transform of the function

f(t) = |t| for −1 < t ≤ 1.


Solution: For \( n \geq 1 \), we note that \( |t|\cos(n\pi t) \) is even, and hence by the finite Fourier cosine transform we have
\[ \hat{f}_c(n) = \int_{-1}^{1} f(t)\cos(n\pi t)\,dt = 2\int_0^1 t\cos(n\pi t)\,dt = 2\left[\frac{t}{n\pi}\sin(n\pi t)\right]_0^1 - \frac{2}{n\pi}\int_0^1\sin(n\pi t)\,dt = 0 + \frac{2}{n^2\pi^2}\Big[\cos(n\pi t)\Big]_0^1 = \frac{2\left[(-1)^n - 1\right]}{n^2\pi^2}. \]
For \( n = 0 \), we find
\[ \hat{f}_c(0) = \int_{-1}^{1} |t|\,dt = 1. \]
Now, we notice that \( |t|\sin(n\pi t) \) is odd, and therefore the finite Fourier sine transform vanishes:
\[ \hat{f}_s(n) = \int_{-1}^{1} f(t)\sin(n\pi t)\,dt = 0. \]

32.4.3 Problem 3

Find the finite Fourier sine transform of the function


\[ f(x) = \begin{cases} x, & \text{if } 0 \leq x \leq \pi/2,\\ \pi - x, & \text{if } \pi/2 \leq x \leq \pi. \end{cases} \]

Solution: By the definition of the finite Fourier sine transform, we have
\[ \hat{f}_s(n) = \int_0^{\pi} f(x)\sin(nx)\,dx = \int_0^{\pi/2} x\sin(nx)\,dx + \int_{\pi/2}^{\pi}(\pi - x)\sin(nx)\,dx. \]
Integrating by parts leads to
\[ \hat{f}_s(n) = \left[x\left(-\frac{\cos nx}{n}\right) + \frac{\sin nx}{n^2}\right]_0^{\pi/2} + \left[(\pi - x)\left(-\frac{\cos nx}{n}\right) - \frac{\sin nx}{n^2}\right]_{\pi/2}^{\pi}. \]
Finally, we get the finite Fourier sine transform as
\[ \hat{f}_s(n) = \frac{2}{n^2}\sin\frac{n\pi}{2}. \]

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American Mathematical Society, Providence, Rhode Island.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.



Module 4: Partial Differential Equations

Lesson 34

Partial Differential Equations

34.1 Introduction to Partial Differential Equations

Many physical processes in the real world are modelled by partial differential equations. Any equation that involves one or more terms with partial derivatives of the dependent variable is called a partial differential equation (p.d.e). For a function z depending on two independent variables x and y, i.e., z(x,y), a partial differential equation may be written as:
\[ 2\frac{\partial^2 z}{\partial x^2} + 3\frac{\partial z}{\partial x} + 2\frac{\partial^2 z}{\partial y^2} + 5\frac{\partial z}{\partial y} = \sin(x+y). \]

In general, a p.d.e may be written as:
\[ f\left(x, y, z, \frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}, \frac{\partial^2 z}{\partial x^2}, \frac{\partial^2 z}{\partial y^2}, \frac{\partial^2 z}{\partial x \partial y}, \ldots\right) = 0 \qquad (34.1) \]

The domain of the function z(x,y) is a subset of \( \mathbb{R}^2 \). It is to be noted that if the dependent function z depends on n independent variables, say \( z = z(x_1, x_2, x_3, \ldots, x_n) \), then the domain of z will be a subset of \( \mathbb{R}^n \).

Evidently, for each point (x,y) in a subset Ω of \( \mathbb{R}^2 \), there exists a value z(x,y), and the set of points {(x, y, z(x,y))} generates a surface in \( \mathbb{R}^3 \); the surface z = z(x,y) is a solution of the p.d.e. In the same manner, one has to visualize higher dimensional surfaces as solutions of p.d.es involving 3 or more independent variables.


34.2 Basic Concepts

Order of the p.d.e: The highest order partial derivative in the equation decides the order of the p.d.e. For example, \( \left(\frac{\partial z}{\partial x}\right)^3 + \frac{\partial z}{\partial y} = 0 \) is a first order p.d.e, while \( \frac{\partial^2 z}{\partial x^2} + \frac{\partial z}{\partial y} = 0 \) is a second order p.d.e.

A set Ω in the n-dimensional Euclidean space \( \mathbb{R}^n \) is called a domain if it is an open and connected set. A region is a set consisting of a domain plus some or all of its boundary points. For example, the interior of a circle \( x^2 + y^2 < a^2 \) with radius a is a domain, while the disc with its boundary, \( x^2 + y^2 \leq a^2 \), is a region.

In general, the partial differential equation is assumed to take values from the interior of the region. The boundary and initial conditions are specified on its boundary.

By a solution of the partial differential equation (34.1) we mean a function z = z(x,y) that is continuously differentiable with respect to the independent variables x and y at all points of the domain and satisfies the differential equation.

Observe that (i) \( z(x,y) = (x+y)^3 \) satisfies the p.d.e \( \frac{\partial^2 z}{\partial x^2} - \frac{\partial^2 z}{\partial y^2} = 0 \). The partial derivatives are \( \frac{\partial z}{\partial x} = 3(x+y)^2 \), \( \frac{\partial^2 z}{\partial x^2} = 6(x+y) \), \( \frac{\partial z}{\partial y} = 3(x+y)^2 \) and \( \frac{\partial^2 z}{\partial y^2} = 6(x+y) \). Also observe that (ii) \( z(x,y) = \sin(x-y) \) is a solution of the same p.d.e, as \( \frac{\partial z}{\partial x} = \cos(x-y) \), \( \frac{\partial^2 z}{\partial x^2} = -\sin(x-y) \) and \( \frac{\partial z}{\partial y} = -\cos(x-y) \), \( \frac{\partial^2 z}{\partial y^2} = -\sin(x-y) \) satisfy the same equation.

This illustrates that a partial differential equation can have more than one solution, i.e., uniqueness of solutions is not guaranteed.

Classification of First Order Partial Differential Equations (p.d.es)

The general representation given in equation (34.1) is a non-linear p.d.e, as the function f is a general function of the dependent variable and all its partial derivatives of various orders.

When we restrict to the first order partial derivatives of z(x,y) in equation (34.1), we get a first order p.d.e. The most general form of a non-linear first order p.d.e may be written as
\[ f\left(x, y, z, \frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}\right) = 0 \qquad (34.2) \]

Classification of the first order p.d.e

The first order p.d.e given by (34.2) is said to be a linear equation if it is linear in z, \( \frac{\partial z}{\partial x} \) and \( \frac{\partial z}{\partial y} \). It is of the form
\[ A(x,y)\frac{\partial z}{\partial x} + B(x,y)\frac{\partial z}{\partial y} + C(x,y)z = S(x,y) \qquad (34.3) \]
where the coefficients A, B, C and S are continuous functions of x and y in Ω. S(x,y) is called the non-homogeneous term. If \( S(x,y) \equiv 0 \ \forall (x,y) \in \Omega \), then the equation is called a homogeneous p.d.e.

Examples are: \( x\frac{\partial z}{\partial x} + y\frac{\partial z}{\partial y} = xyz + xy \), \( \frac{\partial z}{\partial x} + \frac{\partial z}{\partial y} = 1 \).

Equation (34.2) is said to be a semi-linear p.d.e if it is linear in \( \frac{\partial z}{\partial x} \) and \( \frac{\partial z}{\partial y} \) and the coefficients of \( \frac{\partial z}{\partial x} \) and \( \frac{\partial z}{\partial y} \) are functions of x and y only. The semi-linear p.d.e may be written as
\[ A(x,y)\frac{\partial z}{\partial x} + B(x,y)\frac{\partial z}{\partial y} = C(x,y,z) \qquad (34.4) \]
Examples are: \( x\frac{\partial z}{\partial x} + y\frac{\partial z}{\partial y} = xyz^3 \), \( x\frac{\partial z}{\partial x} - y\frac{\partial z}{\partial y} = \sin z \).
∂x ∂y ∂x ∂y

Equation (34.2) is called a quasi-linear p.d.e if it is linear in \( \frac{\partial z}{\partial x} \) and \( \frac{\partial z}{\partial y} \) and it is written in the form
\[ A(x,y,z)\frac{\partial z}{\partial x} + B(x,y,z)\frac{\partial z}{\partial y} = C(x,y,z) \qquad (34.5) \]
An example is \( (x^2 - z^2)\frac{\partial z}{\partial x} + xy\frac{\partial z}{\partial y} = yz^2 + x^2 \).

Equation (34.2) represents a general first order non-linear equation; a simple example is \( \frac{\partial z}{\partial x}\cdot\frac{\partial z}{\partial y} = z^2 \).

We use the notations \( p = \frac{\partial z}{\partial x} \), \( q = \frac{\partial z}{\partial y} \) for the first order partial derivatives of z = z(x,y).

34.3 Formation of Partial Differential Equations

Given a one parameter family of plane curves, we can find an ordinary differential equation for which the given one parameter family is a solution. This is done by eliminating the arbitrary constant in the family of curves. In the same way, given an arbitrary surface in \( \mathbb{R}^3 \) or in higher dimensional spaces, elimination of the arbitrary function leads to the partial differential equation for which the given surface is a solution. The following examples give more insight into the formation of partial differential equations associated with a given surface.

Example 1: Eliminate the arbitrary function F from the below given surfaces:

(i) z = x + y + F ( xy )

(ii) F(x − z, y − z) = 0.

Solution:

(i) We eliminate the arbitrary function by finding the partial derivatives p and q:
\[ p = \frac{\partial z}{\partial x} = 1 + \frac{dF(xy)}{d(xy)}\cdot\frac{\partial(xy)}{\partial x} = 1 + y\,\frac{dF(xy)}{d(xy)}, \]
\[ q = \frac{\partial z}{\partial y} = 1 + \frac{dF(xy)}{d(xy)}\cdot\frac{\partial(xy)}{\partial y} = 1 + x\,\frac{dF(xy)}{d(xy)}. \]
Eliminating \( \frac{dF(xy)}{d(xy)} \) from the above, we obtain \( xp - yq = x - y \), which is the required p.d.e. Further, note that it is a linear p.d.e.

(ii) Given F(u,v) = 0, where u = x − z, v = y − z. Using the chain rule of differentiation, we get
\[ \frac{\partial F}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial F}{\partial v}\frac{\partial v}{\partial x} = \frac{\partial F}{\partial u}(1 - p) + \frac{\partial F}{\partial v}(-p) = 0. \]
Similarly,
\[ \frac{\partial F}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial F}{\partial v}\frac{\partial v}{\partial y} = \frac{\partial F}{\partial u}(-q) + \frac{\partial F}{\partial v}(1 - q) = 0. \]
Eliminating \( \frac{\partial F}{\partial u} \) and \( \frac{\partial F}{\partial v} \) from the above two equations we get
\[ -pq + (1-q)(1-p) = 0 \;\Rightarrow\; p + q = 1, \]
which is the required linear p.d.e.

Thus eliminating an arbitrary function resulted in a linear partial differential


equation.
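
Both eliminations can be confirmed symbolically. The sketch below (Python with sympy; the symbolic function F stands in for the arbitrary function of the example) checks part (i): every surface z = x + y + F(xy) satisfies xp − yq = x − y.

    # Sketch (Python + sympy): z = x + y + F(x*y) satisfies x*p - y*q = x - y
    # for an arbitrary function F, confirming the p.d.e. of Example 1(i).
    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.Function('F')
    z = x + y + F(x * y)

    p = sp.diff(z, x)   # p = 1 + y*F'(xy)
    q = sp.diff(z, y)   # q = 1 + x*F'(xy)
    print(sp.simplify(x * p - y * q - (x - y)))   # prints 0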

Example 2: Eliminate the arbitrary parameters from the following functions.

(i) \( 2z = (ax + y)^2 + b \)  (ii) \( z = ax + by \)


Solution:

(i) Note that in the surface \( 2z = (ax+y)^2 + b \), the arbitrary constant a is non-linearly involved. Differentiating partially we obtain
\[ 2p = 2a(ax+y) \;\Rightarrow\; px = a^2x^2 + axy, \]
\[ 2q = 2(ax+y), \text{ i.e. } q = ax + y \;\Rightarrow\; qy = axy + y^2. \]
Hence
\[ q^2 = (ax+y)^2 = a^2x^2 + 2axy + y^2 = px + qy. \]
\( \therefore\; px + qy = q^2 \) is the required non-linear p.d.e.

(ii) z = ax + by; the arbitrary constants a and b are linearly involved in the function. Differentiating partially we obtain p = a and q = b. So the required p.d.e is xp + yq = z, which is a linear p.d.e.

The following are some Standard Partial Differential Equations which occur
in physics:

1. \( \dfrac{\partial z}{\partial x} + \dfrac{\partial z}{\partial y} = 0 \) (Transport equation)

2. \( \left(\dfrac{\partial u}{\partial x}\right)^2 + \left(\dfrac{\partial u}{\partial y}\right)^2 = 1 \) (Eikonal equation)

3. \( \dfrac{\partial^2 u}{\partial t^2} - \dfrac{\partial^2 u}{\partial x^2} = 0 \) (Wave equation)

4. \( \dfrac{\partial u}{\partial t} - \dfrac{\partial^2 u}{\partial x^2} = 0 \) (Heat or diffusion equation)

5. \( \dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2} = 0 \) (Laplace equation)

6. \( \left(\dfrac{\partial^2 u}{\partial x \partial y}\right)^2 - \dfrac{\partial^2 u}{\partial x^2}\cdot\dfrac{\partial^2 u}{\partial y^2} = f(x,y) \) (Monge–Ampère equation)

In the above, equations 1, 3, 4 and 5 are linear and homogeneous, while equations 2 and 6 are non-homogeneous. In equation 6, if \( f(x,y) \equiv 0 \ \forall (x,y) \in \Omega \), the equation is non-linear and homogeneous.

34.4 Checking Linearity of the given Partial Differential Equation

The linear equation (34.3) can be written in the operator form as:
\[ Lz = S \qquad (34.6) \]
where L is the linear operator defined as:
\[ L \equiv A\frac{\partial}{\partial x} + B\frac{\partial}{\partial y} + C. \]
The homogeneous equation corresponding to (34.6) is Lz = 0.

Let us check the linearity of the equation
\[ \frac{\partial^2 z}{\partial x^2} + \frac{\partial^2 z}{\partial y^2} = 0. \qquad (34.7) \]
Definition: An operator L is said to be linear if and only if, for any two functions \( z_1(x,y) \) and \( z_2(x,y) \) and arbitrary constants \( c_1, c_2 \in \mathbb{R} \), the following property holds:


L(c1 z1 + c2 z2 ) = c1Lz1 + c2 Lz2 .

The operator L is non-linear if the above property is not satisfied.


In equation (34.7), the operator is \( L = \dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} \).

Now consider
\[ L[c_1 z_1(x,y) + c_2 z_2(x,y)] = \frac{\partial^2}{\partial x^2}(c_1 z_1 + c_2 z_2) + \frac{\partial^2}{\partial y^2}(c_1 z_1 + c_2 z_2) \]
\[ = \left(c_1\frac{\partial^2 z_1}{\partial x^2} + c_2\frac{\partial^2 z_2}{\partial x^2}\right) + \left(c_1\frac{\partial^2 z_1}{\partial y^2} + c_2\frac{\partial^2 z_2}{\partial y^2}\right) = c_1 L z_1 + c_2 L z_2. \]
\( \therefore\; \dfrac{\partial^2 z}{\partial x^2} + \dfrac{\partial^2 z}{\partial y^2} = 0 \) is a linear equation.

Let us test the equation
\[ \frac{\partial z}{\partial x} + z\frac{\partial z}{\partial y} = 0 \qquad (34.8) \]
for linearity. Consider
\[ \frac{\partial}{\partial x}(c_1 z_1 + c_2 z_2) + (c_1 z_1 + c_2 z_2)\frac{\partial}{\partial y}(c_1 z_1 + c_2 z_2) \]
\[ = c_1\frac{\partial z_1}{\partial x} + c_2\frac{\partial z_2}{\partial x} + (c_1 z_1 + c_2 z_2)\left(c_1\frac{\partial z_1}{\partial y} + c_2\frac{\partial z_2}{\partial y}\right) \]
\[ \neq \left(c_1\frac{\partial z_1}{\partial x} + c_2\frac{\partial z_2}{\partial x}\right) + \left(c_1 z_1\frac{\partial z_1}{\partial y} + c_2 z_2\frac{\partial z_2}{\partial y}\right) = c_1 L z_1 + c_2 L z_2. \]
Hence equation (34.8) is a non-linear equation.
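
The failure of superposition can also be exhibited with concrete solutions. In the sketch below (Python with sympy), the particular solutions y/(x+1) and y/(x+2) of equation (34.8) are illustrative choices; their sum fails the equation.

    # Sketch (Python + sympy): two solutions of z_x + z*z_y = 0 whose sum is
    # not a solution, illustrating the non-linearity of equation (34.8).
    import sympy as sp

    x, y = sp.symbols('x y')

    def residual(z):
        # left-hand side of z_x + z*z_y = 0
        return sp.simplify(sp.diff(z, x) + z * sp.diff(z, y))

    z1 = y / (x + 1)
    z2 = y / (x + 2)
    print(residual(z1), residual(z2))   # 0 0
    print(residual(z1 + z2))            # non-zero, so z1 + z2 is not a solution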

Keywords: Order, linear equation, semi-linear, quasi-linear

Exercise 1

Eliminate the arbitrary function / constants from the following surfaces to form
an appropriate partial differential equation.

(i) \( z = (x+a)(y+b) \)  (ii) \( z^2 = 8(x + ay + b)^3 \)

(iii) \( z = F\big[(x^2+y^2)^{1/2}\big] \)  (iv) \( z = xy + F(x^2+y^2) \)

(v) \( z = F\!\left(\dfrac{xy}{z}\right) \)

References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.


Module 4: Partial Differential Equations

Lesson 35

Linear First Order Equation

35.1 Classification of Integrals (Solutions of p.d.es)

A surface z = z(x,y) which is continuously differentiable with respect to both the variables x and y in a domain \( \Omega \subseteq \mathbb{R}^2 \) and satisfies the given p.d.e f(x,y,z,p,q) = 0 is called an integral surface of it.

Let z = F(x,y,a,b) be a 2-parameter family of surfaces, with a, b as arbitrary parameters. Now \( p = \frac{\partial z}{\partial x} = \frac{\partial F}{\partial x} \) and \( q = \frac{\partial z}{\partial y} = \frac{\partial F}{\partial y} \). Using p and q, we can eliminate a and b from z = F(x,y,a,b) and form a first order p.d.e \( f\left(x,y,z,\frac{\partial z}{\partial x},\frac{\partial z}{\partial y}\right) = 0 \).

Also the surface z = F ( x, y, a, b) is a solution of the p.d.e f ( x, y, z , p, q ) = 0 .


The solution of the partial differential equation is called an integral surface of it.
It is classified as (i) complete integral (ii) general integral and (iii) singular
integral.

(i) Complete Integral: A two parameter family of surfaces z = F(x,y,a,b) that satisfies f(x,y,z,p,q) = 0 is called a complete integral if, in the domain of definition Ω, the rank of the matrix
\[ A = \begin{pmatrix} \dfrac{\partial F}{\partial a} & \dfrac{\partial^2 F}{\partial x \partial a} & \dfrac{\partial^2 F}{\partial y \partial a} \\[2mm] \dfrac{\partial F}{\partial b} & \dfrac{\partial^2 F}{\partial x \partial b} & \dfrac{\partial^2 F}{\partial y \partial b} \end{pmatrix} \]
is 2.


Let us see some examples:

Example 1: Show that \( (x-a)^2 + (y-b)^2 + z^2 = 1 \) is a complete integral of the p.d.e \( z^2(1 + p^2 + q^2) = 1 \).

Solution: Here
\[ F(x,y,z,a,b) = (x-a)^2 + (y-b)^2 + z^2 - 1, \]
\[ \frac{\partial F}{\partial a} = -2(x-a), \quad \frac{\partial^2 F}{\partial x \partial a} = -2, \quad \frac{\partial^2 F}{\partial y \partial a} = 0, \]
\[ \frac{\partial F}{\partial b} = -2(y-b), \quad \frac{\partial^2 F}{\partial x \partial b} = 0, \quad \frac{\partial^2 F}{\partial y \partial b} = -2. \]
The matrix
\[ A = \begin{pmatrix} 2(a-x) & -2 & 0 \\ 2(b-y) & 0 & -2 \end{pmatrix} \]
has a non-vanishing 2×2 minor and hence its rank is 2. We check whether the given surface is a solution of the given p.d.e. Differentiating the surface, we find \( p = \dfrac{x-a}{-z} \) and \( q = \dfrac{y-b}{-z} \), and using p and q in \( z^2(1+p^2+q^2) = 1 \) we see
\[ z^2\left[1 + \left(\frac{x-a}{-z}\right)^2 + \left(\frac{y-b}{-z}\right)^2\right] = z^2 + (x-a)^2 + (y-b)^2 = 1, \]
which is the given surface; so the surface satisfies the p.d.e.

∴ This surface is a complete integral of the given p.d.e.

Exercises
1. Show that \( z = ax + \dfrac{y}{a} + b \) is a complete integral of pq = 1.

2. Show that the 2-parameter family of surfaces \( z = ax + by + a^2 + b^2 \) is a complete integral of the p.d.e \( z - px - qy - p^2 - q^2 = 0 \).


(ii) General Integral: The general integral is also a solution of the partial differential equation, one that involves an arbitrary function. In the two parameter family of solutions z = F(x,y,a,b), take a = φ(b); we get a one parameter family of solutions of f(x,y,z,p,q) = 0, namely z = F(x,y,φ(b),b), which is a subfamily of the given two parameter family of complete integrals of f(x,y,z,p,q) = 0. Find the envelope of this one parameter sub-family, if it exists, by eliminating b between z = F(x,y,φ(b),b) and
\[ \frac{\partial F(x,y,\varphi(b),b)}{\partial a}\,\varphi'(b) + \frac{\partial F(x,y,\varphi(b),b)}{\partial b} = 0. \]
In this way we find b = b(x,y), and substituting for b in the one parameter sub-family we obtain z = F(x,y,φ(b(x,y)),b(x,y)). If the function φ which defines this sub-family is arbitrary, then such a solution is called a general integral of f(x,y,z,p,q) = 0. Different choices of φ give different particular integrals of the p.d.e. Let us illustrate this with examples.

Example 2: Find the general solution of the equation \( \dfrac{\partial z}{\partial x} + z = e^{-x} \).

Solution: Integrating the homogeneous equation \( \dfrac{\partial z}{\partial x} + z = 0 \) with respect to x, holding y as a constant, we obtain \( z(x,y) = e^{-x}f(y) \), where f is an arbitrary continuously differentiable function of y.

By inspection, we note that \( xe^{-x} \) satisfies the given equation. This is a particular solution. Thus the general solution of this p.d.e is written as
\[ z(x,y) = e^{-x}f(y) + xe^{-x}. \]


35.2 General Solution of the Linear Equation

Let us now derive the form of the general solution of the linear first order homogeneous equation
\[ A(x,y)z_x + B(x,y)z_y + C(x,y)z = 0 \qquad (35.1) \]
where A, B, C are continuously differentiable functions in some domain in \( \mathbb{R}^2 \). Choose the transformation ξ = ξ(x,y), η = η(x,y), (x,y) ∈ Ω, with Jacobian
\[ J = \begin{vmatrix} \dfrac{\partial \xi}{\partial x} & \dfrac{\partial \xi}{\partial y} \\[2mm] \dfrac{\partial \eta}{\partial x} & \dfrac{\partial \eta}{\partial y} \end{vmatrix} \neq 0 \ \text{on } \Omega. \]
Clearly,
\[ \frac{\partial z}{\partial x} = \frac{\partial z}{\partial \xi}\frac{\partial \xi}{\partial x} + \frac{\partial z}{\partial \eta}\frac{\partial \eta}{\partial x}, \qquad \frac{\partial z}{\partial y} = \frac{\partial z}{\partial \xi}\frac{\partial \xi}{\partial y} + \frac{\partial z}{\partial \eta}\frac{\partial \eta}{\partial y}. \qquad (35.2) \]
Using these in the linear equation (35.1), we obtain
\[ \left(A\frac{\partial \xi}{\partial x} + B\frac{\partial \xi}{\partial y}\right)\frac{\partial z}{\partial \xi} + \left(A\frac{\partial \eta}{\partial x} + B\frac{\partial \eta}{\partial y}\right)\frac{\partial z}{\partial \eta} + Cz = 0 \qquad (35.3) \]
Choose η such that
\[ A\frac{\partial \eta}{\partial x} + B\frac{\partial \eta}{\partial y} = 0. \qquad (35.4) \]
This is a meaningful choice because of the following argument. Assume that A(x,y) ≠ 0 and consider the o.d.e

\[ \frac{dy}{dx} = \frac{B(x,y)}{A(x,y)}. \qquad (35.5) \]
Write its general solution as η(x,y) = k, where k is an arbitrary constant and \( \frac{\partial \eta}{\partial y} \neq 0 \). Then, for
\[ \eta(x,y) = k, \qquad (35.6) \]
we have
\[ d\eta(x,y) = dk = 0 \quad \text{or} \quad \frac{\partial \eta}{\partial x}dx + \frac{\partial \eta}{\partial y}dy = 0. \]
In view of this, equation (35.4) is satisfied.

The one parameter family of curves given by (35.6) that are obtained from
equation (35.5) are called characteristic curves of the differential equation
(35.1).

Now choose \( \xi = \xi(x,y) = x \), so that
\[ J = \begin{vmatrix} 1 & 0 \\ \eta_x & \eta_y \end{vmatrix} = \eta_y \neq 0 \quad \forall (x,y) \in \Omega. \]
Now the transformation ξ = x, η = η(x,y), which is invertible (there is a one to one correspondence between (ξ,η) and (x,y)), transforms the equation (35.3) to the following simple form:
\[ A(\xi,\eta)\frac{\partial z}{\partial \xi} + C(\xi,\eta)z = 0 \qquad (35.7) \]

This equation is called the canonical form of the linear partial differential equation (35.1). It can be solved as an o.d.e.

Under the same transformation, the non-homogeneous linear equation
\[ A(x,y)z_x + B(x,y)z_y + C(x,y)z = D(x,y) \qquad (35.8) \]
gets transformed to
\[ A(\xi,\eta)z_\xi + C(\xi,\eta)z = D(\xi,\eta). \qquad (35.9) \]
We describe the Lagrange method for finding the general integral of a given quasi-linear p.d.e in the next lesson.

Example 3: Find the general solution of the linear p.d.e
\[ x z_x - y z_y + y^2 z = y^2, \qquad (x,y) \neq (0,0). \]
Solution: Given A(x,y) = x, B(x,y) = −y, C(x,y) = y², D(x,y) = y².

Now equation (35.5) gives \( \dfrac{dy}{dx} = -\dfrac{y}{x} \), whose general solution is xy = k, where k is an arbitrary constant. Now set ξ = x, η = xy as the coordinate transformation; this gives J = x ≠ 0. Then
\[ z_x = z_\xi + y\,z_\eta, \qquad z_y = x\,z_\eta, \]
and in the new variables
\[ A = \xi, \qquad B = -\frac{\eta}{\xi}, \qquad C = \frac{\eta^2}{\xi^2}, \qquad D = \frac{\eta^2}{\xi^2}. \]
The canonical form of the given equation is:
\[ \xi z_\xi + \frac{\eta^2}{\xi^2}z = \frac{\eta^2}{\xi^2} \quad \text{or} \quad z_\xi + \frac{\eta^2}{\xi^3}z = \frac{\eta^2}{\xi^3}. \]
This can be solved as an o.d.e in ξ, holding η fixed in z(ξ,η). Using the integrating factor \( e^{\int \eta^2/\xi^3\,d\xi} = e^{-\eta^2/(2\xi^2)} \), we obtain
\[ z(\xi,\eta) = e^{\eta^2/(2\xi^2)}\left[f(\eta) + \int \frac{\eta^2}{\xi^3}e^{-\eta^2/(2\xi^2)}\,d\xi\right] = e^{\eta^2/(2\xi^2)}\left[f(\eta) + e^{-\eta^2/(2\xi^2)}\right] = f(\eta)\,e^{\eta^2/(2\xi^2)} + 1, \]
where f(η) is an arbitrary function. Since \( \eta^2/(2\xi^2) = y^2/2 \),
\[ z(x,y) = f(xy)\,e^{y^2/2} + 1 \]
is the general solution of the given p.d.e.
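
The result can be verified directly; the sketch below (Python with sympy, a check rather than part of the derivation) substitutes the general solution into the p.d.e for an arbitrary f.

    # Sketch (Python + sympy): z = f(x*y)*exp(y**2/2) + 1 satisfies
    # x*z_x - y*z_y + y**2*z = y**2 for an arbitrary function f.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.Function('f')
    z = f(x * y) * sp.exp(y**2 / 2) + 1

    lhs = x * sp.diff(z, x) - y * sp.diff(z, y) + y**2 * z
    print(sp.simplify(lhs - y**2))   # prints 0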

Example 4: Find the general solution of the Euler equation
\[ x z_x + y z_y = n z, \qquad (x,y) \neq (0,0). \]
Solution: Given A(x,y) = x, B(x,y) = y, C(x,y) = −n, D(x,y) = 0. Equation (35.5) gives \( \dfrac{dy}{dx} = \dfrac{y}{x} \), i.e. \( \dfrac{1}{x}dx = \dfrac{1}{y}dy \), leading to \( \ln x = \ln k + \ln y \), or \( \dfrac{x}{y} = k \) as its characteristic curve. Now set ξ = x, η = x/y, so that \( J = \eta_y = -\dfrac{x}{y^2} \neq 0 \). Also note that A = ξ, and the canonical form of the given p.d.e is
\[ \xi z_\xi - nz = 0, \quad \text{or} \quad z_\xi - \frac{n}{\xi}z = 0. \]
Its general solution is \( z(\xi,\eta) = \xi^{\,n}f(\eta) \), i.e. \( z(x,y) = x^n f\!\left(\dfrac{x}{y}\right) \), where f is an arbitrary function.
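
Since the exponent of ξ is easy to mistake, a direct substitution is worthwhile; the sketch below (Python with sympy, with m standing in for the constant n of the text) confirms the general solution of the Euler equation.

    # Sketch (Python + sympy): z = x**m * f(x/y) satisfies the Euler equation
    # x*z_x + y*z_y = m*z for an arbitrary function f (m plays the role of n).
    import sympy as sp

    x, y, m = sp.symbols('x y m')
    f = sp.Function('f')
    z = x**m * f(x / y)

    lhs = x * sp.diff(z, x) + y * sp.diff(z, y) - m * z
    print(sp.simplify(lhs))   # prints 0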

Exercises 3: Find the general solutions of

(i) \( x z_x + y z_y = x^n \)

(ii) \( a z_x + b z_y + c z = d \), where a, b, c, d are constants such that \( a^2 + b^2 \neq 0 \).

(iii) Singular Integral: Find the envelope of the two parameter family of solutions z = F(x,y,a,b), if it exists. This is obtained by eliminating a and b from the equations
\[ z = F(x,y,a,b), \qquad \frac{\partial F(x,y,a,b)}{\partial a} = 0, \qquad \frac{\partial F(x,y,a,b)}{\partial b} = 0. \]
This is called the singular integral of the given p.d.e.

Example 5: Obtain the singular integral for \( z - px - qy - p^2 - q^2 = 0 \).

Solution: The given equation has the two parameter family of surfaces \( z = ax + by + a^2 + b^2 \) as its complete integral. Now
\[ \frac{\partial F(x,y,a,b)}{\partial a} = 0 \;\Rightarrow\; x + 2a = 0, \qquad \frac{\partial F(x,y,a,b)}{\partial b} = 0 \;\Rightarrow\; y + 2b = 0. \]
Eliminating a and b from the equations \( z = ax + by + a^2 + b^2 \), x + 2a = 0, y + 2b = 0, we obtain the singular integral as \( 4z = -(x^2 + y^2) \).

Keywords: Complete Integral, General Integral, Singular Integral,


Characteristic Curves

References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.


Module 4: Partial Differential Equations

Lesson 36

Geometric Interpretation of a First Order Equation

36.1 Geometric Interpretation of a First Order Equation

Consider the general quasi-linear partial differential equation
\[ P(x,y,u)u_x + Q(x,y,u)u_y = R(x,y,u) \qquad (36.1) \]
A possible solution, written in implicit form as
\[ f(x,y,u) = u(x,y) - u = 0, \qquad (36.2) \]
is a surface in (x,y,u) space. At any point (x,y,u) on this surface, \( \nabla f = (u_x, u_y, -1) \) gives the normal to the surface. Equation (36.1) can be re-written as:
\[ (P, Q, R)\cdot(u_x, u_y, -1) = 0 \qquad (36.3) \]
This shows that the vector (P,Q,R) must be a tangent vector of the surface given by (36.2) at (x,y,u), and this determines a direction field called the characteristic direction for the integral surface of the given p.d.e. In brief, f(x,y,u) = u(x,y) − u = 0 is a solution of (36.1) if and only if the direction vector field (P,Q,R) lies in the tangent plane of this integral surface at each point (x,y,u).


[Figure: the integral surface u = u(x,y), with the normal \( (u_x, u_y, -1) \) and the characteristic direction (P,Q,R) lying in the tangent plane at the point (x,y,u).]

Let Γ be a curve with the parametric representation x = x(t), y = y(t), u = u(t). If this space curve lies on the surface u = u(x,y), then at (x,y,u) the tangent to the curve Γ has direction ratios (P,Q,R), where \( (P,Q,R)\cdot(u_x,u_y,-1) = 0 \) is the partial differential equation for which u = u(x,y) is the solution.

Definition: A curve in (x,y,u)-space whose tangent at every point coincides with the characteristic direction field (P,Q,R) is called a characteristic curve. If the parametric representation of this characteristic curve is
\[ x = x(t), \quad y = y(t), \quad u = u(t), \qquad (36.4) \]
then the tangent vector to this curve is \( \left(\dfrac{dx}{dt}, \dfrac{dy}{dt}, \dfrac{du}{dt}\right) \), which must coincide with (P,Q,R).


The system of ordinary differential equations representing these characteristic curves is given by
\[ \frac{dx}{dt} = P(x,y,u), \quad \frac{dy}{dt} = Q(x,y,u), \quad \frac{du}{dt} = R(x,y,u). \qquad (36.5) \]
These are called the characteristic equations of the quasi-linear equation (36.1). Their solution consists of a 2-parameter family of curves in (x,y,u)-space.

The characteristic equations in non-parametric form are written as:
\[ \frac{dx}{P} = \frac{dy}{Q} = \frac{du}{R}. \qquad (36.6) \]

36.2 Method of Characteristics to Obtain the General Integral (Lagrange Method)

The general solution of the quasi-linear partial differential equation (also known as Lagrange's equation) \( P(x,y,u)u_x + Q(x,y,u)u_y = R(x,y,u) \) is written as F(φ,ψ) = 0, where F is an arbitrary function of φ and ψ, and φ(x,y,u) = C₁ and ψ(x,y,u) = C₂ are two functionally independent solutions of the characteristic system
\[ \frac{dx}{P} = \frac{dy}{Q} = \frac{du}{R}. \]
This general solution can also be written as φ = G(ψ).

Example 1: Find the general integral of the quasi-linear p.d.e \( yz\,z_x + xz\,z_y = xy \).

Solution: The characteristic system is:
\[ \frac{dx}{yz} = \frac{dy}{xz} = \frac{dz}{xy}. \]
Taking two equations at a time and integrating, we get

(i) \( \dfrac{dx}{y} = \dfrac{dy}{x} \;\Rightarrow\; x^2 - y^2 = C_1 \)

(ii) \( \dfrac{dy}{z} = \dfrac{dz}{y} \;\Rightarrow\; z^2 - y^2 = C_2 \)

The general solution is \( F(x^2 - y^2,\; z^2 - y^2) = 0 \), where F is an arbitrary function; this general solution can also be written in the form \( z^2 = y^2 + G(x^2 - y^2) \), where G is an arbitrary function.
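
A symbolic check of this general integral is immediate; the sketch below (Python with sympy, taking one branch of the implicit surface) substitutes it into the p.d.e.

    # Sketch (Python + sympy): the branch z = sqrt(y**2 + G(x**2 - y**2)) of the
    # general integral satisfies y*z*z_x + x*z*z_y = x*y for an arbitrary G.
    import sympy as sp

    x, y = sp.symbols('x y')
    G = sp.Function('G')
    z = sp.sqrt(y**2 + G(x**2 - y**2))

    lhs = y * z * sp.diff(z, x) + x * z * sp.diff(z, y)
    print(sp.simplify(lhs - x * y))   # prints 0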

Example 2: Find the general integral of \( z_x + z\,z_y = 0 \).

Solution: The characteristic system is
\[ \frac{dx}{1} = \frac{dy}{z} = \frac{dz}{0}, \]
which admits the solutions (i) z = C₁ and (ii) y − zx = C₂.

So the general solution is written as F(z, y − zx) = 0, where F is an arbitrary function, or equivalently z = G(y − zx), where G is an arbitrary function.

Exercises 1: Find the General Integral of

1) \( z(x z_x - y z_y) = y^2 - x^2 \)

2) \( yz\,z_x + xz\,z_y = x + y \)

3) \( x(y-z)z_x + y(z-x)z_y = z(x-y) \)

36.3 Linear Equation in 3 Variables

Now, let us consider the extension of the linear equation to 3 variables for the function u(x,y,z):
\[ A(x,y,z)u_x + B(x,y,z)u_y + C(x,y,z)u_z = 0. \]
For this equation, the characteristic system is given by
\[ \frac{dx}{A(x,y,z)} = \frac{dy}{B(x,y,z)} = \frac{dz}{C(x,y,z)}, \]
and this gives the family of characteristic curves as (say) g(x,y,z) = C₁ and h(x,y,z) = C₂, two functionally independent solutions of the above system; then the general solution is written as u = F(g,h).

The functions g(x,y,z), h(x,y,z) are called functionally independent if the rank of
\[ \begin{pmatrix} \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} & \dfrac{\partial g}{\partial z} \\[2mm] \dfrac{\partial h}{\partial x} & \dfrac{\partial h}{\partial y} & \dfrac{\partial h}{\partial z} \end{pmatrix} \]
is 2.

Example 3: Find the general solution of the linear equation in 3 independent variables
\[ (y-z)u_x + (z-x)u_y + (x-y)u_z = 0. \]
Solution: The characteristic curves are obtained from the characteristic system
\[ \frac{dx}{y-z} = \frac{dy}{z-x} = \frac{dz}{x-y}. \]
Note that dx + dy + dz = 0, since (y−z) + (z−x) + (x−y) = 0, and x dx + y dy + z dz = 0, since x(y−z) + y(z−x) + z(x−y) = 0. Integrating, these equations give
\[ g(x,y,z) = x + y + z = C_1 \]
and
\[ h(x,y,z) = x^2 + y^2 + z^2 = C_2. \]
Then the general solution is written as u = F(g,h), i.e.,
\[ u(x,y,z) = F(x+y+z,\; x^2+y^2+z^2), \]
where F is an arbitrary function.

Exercises 2: Find the general solution of the equations

1) \( x(y-z)u_x + y(z-x)u_y + z(x-y)u_z = 0 \)

2) \( xu_x + yu_y + zu_z = u \).

References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
Module 4: Partial Differential Equations

Lesson 37

Integral Surface Through a Given Curve - The Cauchy Problem

37.1 Integral Surface through a given Curve - The Cauchy Problem

For the quasi-linear p.d.e \( P(x,y,z)z_x + Q(x,y,z)z_y = R(x,y,z) \), with its general integral F(φ,ψ) = 0, where φ(x,y,z) = C₁ and ψ(x,y,z) = C₂ are two functionally independent solutions of the characteristic system \( \frac{dx}{P} = \frac{dy}{Q} = \frac{dz}{R} \), can we find a particular integral containing a given curve C whose parametric equations are x = x₀(s), y = y₀(s) and z = z₀(s), where s is the parameter? This is similar to finding the arbitrary constants in the general solution of an ordinary differential equation using the initial conditions.

Thus, fixing the arbitrary function in the general solution of the given p.d.e by making the solution pass through the given initial data is called the Cauchy problem.

Suppose z = z(x,y) is the integral surface passing through the initial data curve C. Then we require that the equations \( \varphi(x_0(s), y_0(s), z_0(s)) = C_1 \) and \( \psi(x_0(s), y_0(s), z_0(s)) = C_2 \) be satisfied. Now, eliminating s from these two equations, we obtain \( F(C_1, C_2) = 0 \) or \( C_1 = G(C_2) \). This fixes the arbitrary function F (or G) and produces the required surface.

Let us illustrate this by considering a few examples:

Example 1: For the p.d.e \( z(x+y)z_x + z(x-y)z_y = x^2 + y^2 \), find the integral surface that satisfies the Cauchy data z = 0 on the curve y = 2x.


Solution:

Step 1: Find the general solution. The characteristic system is:
\[ \frac{dx}{z(x+y)} = \frac{dy}{z(x-y)} = \frac{dz}{x^2+y^2}. \]
Note that \( -x\,dx + y\,dy + z\,dz = 0 \) and \( y\,dx + x\,dy - z\,dz = 0 \) along the characteristics. On integrating we get \( z^2 - x^2 + y^2 = C_1 \) and \( xy - \tfrac{1}{2}z^2 = \tfrac{1}{2}C_2 \). Thus the two characteristic invariants are
\[ \varphi = z^2 - x^2 + y^2 = C_1 \quad \text{and} \quad \psi = 2xy - z^2 = C_2. \]
The general solution is written as \( F(z^2 - x^2 + y^2,\; 2xy - z^2) = 0 \), or
\[ z^2 = (x^2 - y^2) + G(2xy - z^2), \]
where G is an arbitrary function.

Step 2: Fixing the arbitrary function. We are given the Cauchy data z = 0 on y = 2x. Its parametric representation is x = s, y = 2s, z = 0. Using this in the integrals φ = C₁ and ψ = C₂:
\[ 0 - s^2 + 4s^2 = C_1, \qquad 2\cdot s\cdot 2s - 0 = C_2, \]
or \( 3s^2 = C_1 \), \( 4s^2 = C_2 \), so that
\[ \frac{C_1}{C_2} = \frac{3}{4} \;\Rightarrow\; 4C_1 = 3C_2. \]
Thus the solution is written as:
\[ 4(z^2 - x^2 + y^2) = 3(2xy - z^2) \quad \text{or} \quad 7z^2 = 6xy + 4x^2 - 4y^2. \]
Alternative to Step 2: We have \( z^2 = (x^2 - y^2) + G(2xy - z^2) \). Using the Cauchy data, we get \( 0 = s^2 - 4s^2 + G(2\cdot s\cdot 2s) \), or \( 3s^2 = G(4s^2) \). Put \( 4s^2 = t \Rightarrow s^2 = \tfrac{t}{4} \), which leads to \( G(t) = \tfrac{3}{4}t \). This gives the integral surface as
\[ z^2 = (x^2 - y^2) + \frac{3}{4}(2xy - z^2) \quad \text{or} \quad 7z^2 = 6xy + 4(x^2 - y^2). \]
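
Both the p.d.e and the Cauchy data can be checked on the surface just found. The sketch below (Python with sympy, using the positive branch of the surface) is a verification aid only.

    # Sketch (Python + sympy): 7*z**2 = 6*x*y + 4*x**2 - 4*y**2 satisfies
    # z*(x+y)*z_x + z*(x-y)*z_y = x**2 + y**2 and gives z = 0 on y = 2*x.
    import sympy as sp

    x, y = sp.symbols('x y')
    z = sp.sqrt((6 * x * y + 4 * x**2 - 4 * y**2) / 7)

    lhs = z * (x + y) * sp.diff(z, x) + z * (x - y) * sp.diff(z, y)
    print(sp.simplify(lhs - (x**2 + y**2)))   # prints 0
    print(sp.simplify(z.subs(y, 2 * x)))      # prints 0 (the Cauchy data)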

Example 2: Find the integral surface of the equation \( (2xy-1)p + (z-2x^2)q = 2(x-yz) \) passing through the Cauchy data \( x_0(s) = 1,\; y_0(s) = 0,\; z_0(s) = s \).

Solution:

Step 1: The characteristic system is
\[ \frac{dx}{2xy-1} = \frac{dy}{z-2x^2} = \frac{dz}{2(x-yz)}. \]
Note that

(i) \( z\,dx + dy + x\,dz = 0 \;\Rightarrow\; u = xz + y = C_1 \)

(ii) \( 2x\,dx + 2y\,dy + dz = 0 \;\Rightarrow\; v = x^2 + y^2 + z = C_2 \)

∴ The integral surface is \( F(xz + y,\; x^2 + y^2 + z) = 0 \).

Step 2: Using the data x₀(s) = 1, y₀(s) = 0, z₀(s) = s in the integrals, we obtain \( 1\cdot s + 0 = C_1 \) and \( 1 + 0 + s = C_2 \), leading to \( C_1 = s \), \( 1 + s = C_2 \), or \( 1 + C_1 = C_2 \).

∴ The required integral surface is \( 1 + xz + y = x^2 + y^2 + z \), or
\[ x^2 + y^2 - xz - y + z - 1 = 0. \]

37.2 Existence and Uniqueness of Solution for the Cauchy Problem

The following result ensures the existence and uniqueness of an integral surface for the Cauchy problem.

Statement: Consider the first order quasi-linear p.d.e
\[ P(x,y,z)z_x + Q(x,y,z)z_y = R(x,y,z) \]
in the domain Ω, where P, Q, R are continuously differentiable functions in Ω. Let x = x₀(s), y = y₀(s) and z = z₀(s), 0 ≤ s ≤ 1, be the initial smooth curve in Ω, and suppose
\[ \frac{dx_0}{ds}\,Q\big(x_0(s), y_0(s), z_0(s)\big) - \frac{dy_0}{ds}\,P\big(x_0(s), y_0(s), z_0(s)\big) \neq 0, \qquad (37.1) \]
0 ≤ s ≤ 1. Then there exists one and only one solution z = z(x,y), defined in a neighbourhood of this initial curve, which satisfies the p.d.e and the initial condition \( z_0(s) = z(x_0(s), y_0(s)) \), 0 ≤ s ≤ 1.

Note: The condition given in (37.1) excludes the possibility that the initial curve x = x₀(s), y = y₀(s) could be a characteristic. Let us now illustrate examples where the given p.d.e has a unique solution, no solution, or infinitely many solutions with the Cauchy data.


Example 3: Consider the p.d.e \( yp - xq = 0 \), whose general solution is \( z = F(x^2 + y^2) \), where F is an arbitrary function.

Case 1: Consider the initial curve
\[ x = x_0(s) = s, \quad y = y_0(s) = 0, \quad z = z_0(s) = s^2, \]
which is a parabola in the (x,z) plane. The condition (37.1) becomes
\[ 1\cdot(-s) - 0\cdot 0 = -s \neq 0 \quad (s \neq 0). \]
This ensures that the Cauchy problem has a unique solution. Eliminating F: \( s^2 = F(s^2) \Rightarrow F(t) = t \Rightarrow z = x^2 + y^2 \), the circular paraboloid containing the initial curve (parabola). Thus \( z = x^2 + y^2 \) is the required integral surface.

Case 2: Consider the initial curve \( x_0(s) = \cos s,\; y_0(s) = \sin s,\; z_0(s) = \sin s \). This is the parametric representation of the ellipse \( x^2 + y^2 = 1,\; z = y \). The condition (37.1) becomes
\[ (-\sin s)(-\cos s) - (\cos s)(\sin s) = 0. \]
Thus the condition fails, so either there is no solution or there are infinitely many solutions (i.e., either existence or uniqueness of the solution is lost). The integral surface \( z = F(x^2 + y^2) \) becomes y = F(1), which is an inconsistency: the constant F(1) cannot equal the variable y. This implies there is no solution to the Cauchy problem (existence is lost). Note: the tangent vector \( (-\sin s, \cos s, \cos s) \) to the given curve is nowhere parallel to the characteristic vector \( (\sin s, -\cos s, 0) \).

Case 3: Consider the initial curve \( x_0(s) = \cos s,\; y_0(s) = \sin s,\; z_0(s) = 1 \), which is the circle \( x^2 + y^2 = 1,\; z = 1 \). The condition (37.1) again gives \( -\sin s\cdot(-\cos s) - \cos s\cdot\sin s = 0 \). Requiring the integral surface to contain the curve results in 1 = F(1). This is possible for any function F such that F(1) = 1 (e.g., \( F(w) = w^n \)). There are infinitely many such functions F, and for each of them \( z = F(x^2 + y^2) \) is an integral surface containing the curve. In this case, it is to be noted that the initial data curve is a characteristic curve.

Example 4: Solve the p.d.e \( zz_x + z_y = \dfrac{1}{2} \) with the initial condition \( z(s,s) = \dfrac{s}{4} \), 0 ≤ s ≤ 1.

Solution: The initial curve satisfies the condition given in (37.1) for s ≠ 4. The characteristic system can be written as:
\[ \frac{dx}{dt} = z, \qquad \frac{dy}{dt} = 1, \qquad \frac{dz}{dt} = \frac{1}{2}, \]
with the initial conditions \( x(s,0) = s,\; y(s,0) = s,\; z(s,0) = \dfrac{s}{4} \).

Solving the above system of ordinary differential equations using the initial conditions, we get
\[ z = \frac{t}{2} + \frac{s}{4}, \qquad y = t + s, \]
and
\[ \frac{dx}{dt} = z = \frac{t}{2} + \frac{s}{4} \;\Rightarrow\; x = \frac{t^2}{4} + \frac{ts}{4} + s. \]
Now, eliminating s and t from the above, we obtain
\[ s = \frac{4x - y^2}{4-y} \quad \text{and} \quad t = \frac{4(y-x)}{4-y}. \]
Hence the integral surface having the given Cauchy data is:
\[ z = \frac{s}{4} + \frac{t}{2} = \frac{1}{4}\left(\frac{4x-y^2}{4-y}\right) + \frac{1}{2}\left(\frac{4(y-x)}{4-y}\right) \quad \text{or} \quad z = \frac{8y - 4x - y^2}{4(4-y)}, \qquad y \neq 4. \]
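
As a check (Python with sympy; not part of the construction), the surface just obtained satisfies both the p.d.e and the initial condition:

    # Sketch (Python + sympy): z = (8*y - 4*x - y**2)/(4*(4 - y)) satisfies
    # z*z_x + z_y = 1/2 and z(s, s) = s/4.
    import sympy as sp

    x, y, s = sp.symbols('x y s')
    z = (8 * y - 4 * x - y**2) / (4 * (4 - y))

    lhs = z * sp.diff(z, x) + sp.diff(z, y)
    print(sp.simplify(lhs - sp.Rational(1, 2)))        # prints 0
    print(sp.simplify(z.subs({x: s, y: s}) - s / 4))   # prints 0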

Exercises:

1. Solve the Cauchy problem for the p.d.e \( 2z_x + yz_y = z \) containing the initial data curve \( x = x_0(s) = s,\; y = y_0(s) = s^2,\; z = z_0(s) = s \), 1 ≤ s ≤ 2.

2. Find the solution of \( p - zq + z = 0 \) for all y and x > 0, for the initial data \( x_0 = 0,\; y_0 = s,\; z_0 = -2s,\; -\infty < s < \infty \).

3. Show that the integral surface for the p.d.e \( p + q = z^2 \) with the initial condition \( z(x,0) = f(x) \) is \( z(x,y) = \dfrac{f(x-y)}{1 - yf(x-y)} \).


References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.


Module 4: Partial Differential Equations

Lesson 38

Non-Linear First order p.d.e – Compatible system

38.1 Non-linear first order p.d.e – compatible system

Two first order partial differential equations
\[ f(x,y,z,p,q) = 0 \qquad (38.1) \]
and
\[ g(x,y,z,p,q) = 0 \qquad (38.2) \]
are said to be compatible if they have common solutions. In fact, these two equations admit a one parameter family of common solutions under some conditions.

Definition: The equations (38.1) and (38.2) are compatible on a domain Ω if

(i) \( J = \dfrac{\partial(f,g)}{\partial(p,q)} \neq 0 \) on Ω, \quad (38.3)

and

(ii) the relations
\[ p = \varphi(x,y,z), \quad q = \psi(x,y,z), \qquad (38.4) \]
obtained by solving (38.1) and (38.2), render the equation
\[ dz = \varphi\,dx + \psi\,dy \qquad (38.5) \]


integrable. Below we state a necessary and sufficient condition for the


integrability of the equation (38.5).

Result: A necessary and sufficient condition for the integrability of the equation
(38.5) is:

\[ [f,g] = \frac{\partial(f,g)}{\partial(x,p)} + p\,\frac{\partial(f,g)}{\partial(z,p)} + \frac{\partial(f,g)}{\partial(y,q)} + q\,\frac{\partial(f,g)}{\partial(z,q)} = 0. \]

We now consider some examples to check the compatibility condition for the given equations.

Example 1: Find the domain in which the equations \( f = xp - yq - x = 0 \) and \( g = x^2p + q - xz = 0 \) are compatible.

Solution: Condition (38.3) requires
\[ J = \frac{\partial(f,g)}{\partial(p,q)} = \begin{vmatrix} f_p & f_q \\ g_p & g_q \end{vmatrix} = \begin{vmatrix} x & -y \\ x^2 & 1 \end{vmatrix} = x(1 + xy) \neq 0. \]
So the domain Ω should not contain points (x,y) such that x = 0 or 1 + xy = 0. In such a domain Ω, these two equations admit common solutions.

Example 2: Find the one parameter family of common solutions to the p.d.es \( f = p^2 + q^2 - 1 = 0 \) and \( g = (p^2 + q^2)x - pz = 0 \).

Solution:

Step 1: Let us find the domain in which these equations admit common solutions:
\[ J = \begin{vmatrix} 2p & 2q \\ 2px - z & 2qx \end{vmatrix} = 2qz, \qquad J \neq 0 \;\Rightarrow\; qz \neq 0 \text{ in } \Omega. \]
Step 2: Solve for p and q from f = 0 and g = 0. Since \( p^2 + q^2 = 1 \), g = 0 gives x − pz = 0, i.e.
\[ p = \frac{x}{z} = \varphi(x,y,z), \qquad q = \sqrt{1 - p^2} = \frac{\sqrt{z^2 - x^2}}{z} = \psi(x,y,z) \ \text{(say)}. \]
Step 3: Integrability of \( dz = \varphi\,dx + \psi\,dy \) leads to
\[ dz = \frac{x}{z}dx + \frac{\sqrt{z^2 - x^2}}{z}dy, \quad \text{or} \quad z\,dz - x\,dx = \sqrt{z^2 - x^2}\,dy, \]
which admits the solution
\[ z^2 = x^2 + (y+c)^2, \]
the 1-parameter family of common integrals of f = 0 and g = 0.
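
The family just found can be checked against both equations; a minimal sketch (Python with sympy) follows.

    # Sketch (Python + sympy): z**2 = x**2 + (y + c)**2 satisfies both
    # f = p**2 + q**2 - 1 = 0 and g = (p**2 + q**2)*x - p*z = 0.
    import sympy as sp

    x, y, c = sp.symbols('x y c')
    z = sp.sqrt(x**2 + (y + c)**2)
    p, q = sp.diff(z, x), sp.diff(z, y)

    print(sp.simplify(p**2 + q**2 - 1))            # prints 0
    print(sp.simplify((p**2 + q**2) * x - p * z))  # prints 0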

Example 3: Show that the equations \( xp - yq = 0 \) and \( z(xp + yq) = 2xy \) are compatible, and solve them.

Solution:

Step 1: Here \( J = \begin{vmatrix} x & -y \\ zx & zy \end{vmatrix} = 2xyz \), so J ≠ 0 requires x, y, z ≠ 0 (we always assume that both p and q are non-zero).

Step 2: Solving f = 0 and g = 0 for p and q, we obtain
\[ p = \frac{y}{z} = \varphi(x,y,z) \quad \text{and} \quad q = \frac{x}{z} = \psi(x,y,z). \]
Step 3: Integrability of \( dz = \varphi\,dx + \psi\,dy \) gives \( z\,dz = y\,dx + x\,dy = d(xy) \)
\[ \Rightarrow\; z^2 = 2xy + c, \]
which is the 1-parameter family of common solutions of f = 0 and g = 0.

Exercise:

1. Show that \( f = xp - yq - x = 0 \) and \( g = x^2p + q - xz = 0 \) are compatible. Show also that \( z = x(y+1) \) is a solution of f = 0 but not of g = 0. Hence conclude that "not all solutions of f = 0 are solutions of g = 0".

2. Show that \( z = \dfrac{x+y}{\sqrt{2}} \) is a solution of \( f = p^2 + q^2 - 1 = 0 \) and not of \( g = (p^2+q^2)x - pz = 0 \), though f = 0 and g = 0 are compatible. Also find the 1-parameter family of common solutions.

Keywords: Common Solutions, Integrability, Compatible System

References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.


Module 4: Partial Differential Equations

Lesson 39

Non – linear p.d.e of 1st order complete integral – Charpit’s method

39.1 Non – linear p.d.e of 1st order complete integral – Charpit’s method.

Given a first order p.d.e
\[ f(x,y,z,p,q) = 0, \qquad (39.1) \]
its complete integral can be obtained by considering a one parameter family of p.d.es
\[ g(x,y,z,p,q,a) = 0 \qquad (39.2) \]
which is compatible with f = 0 for each value of a. We know that if f = 0 and g = 0 are compatible, they admit common solutions.

Choose equation (39.2) such that (a) equations (39.1) and (39.2), on solving for p and q, give
\[ p = \varphi(x,y,z,a) \quad \text{and} \quad q = \psi(x,y,z,a), \qquad (39.3) \]
and (b) the equation
\[ dz = \varphi\,dx + \psi\,dy \qquad (39.4) \]


is integrable. When such a p.d.e g(x,y,z,p,q,a) = 0 is found, the solution of equation (39.4), which can be written as
\[ F(x,y,z,a,b) = 0 \qquad (39.5) \]
and contains two arbitrary constants a and b, will form the complete integral of (39.1).

Now we see the construction of such a g(x,y,z,p,q,a) = 0.
Now we see the Construction of such g ( x, y, z , p, q, a) = 0 .

As f = 0 and g = 0 are compatible, we have
\[ [f,g] = f_p\frac{\partial g}{\partial x} + f_q\frac{\partial g}{\partial y} + (pf_p + qf_q)\frac{\partial g}{\partial z} - (f_x + pf_z)\frac{\partial g}{\partial p} - (f_y + qf_z)\frac{\partial g}{\partial q} = 0. \qquad (39.6) \]
Note that equation (39.6) is obtained by expanding
\[ [f,g] = \frac{\partial(f,g)}{\partial(x,p)} + p\,\frac{\partial(f,g)}{\partial(z,p)} + \frac{\partial(f,g)}{\partial(y,q)} + q\,\frac{\partial(f,g)}{\partial(z,q)} = 0. \]

Equation (39.6) is a quasi-linear first order p.d.e for g, with x, y, z, p and q as the independent variables, and the corresponding characteristic system is
\[ \frac{dx}{f_p} = \frac{dy}{f_q} = \frac{dz}{pf_p + qf_q} = -\frac{dp}{f_x + pf_z} = -\frac{dq}{f_y + qf_z}. \qquad (39.7) \]
Now we consider any solution of this system which involves p or q or both and which contains an arbitrary constant. This choice gives us a g(x,y,z,p,q,a) = 0.
which contains an arbitrary constant. This choice gives us a g ( x, y, z, p, q, a) = 0 .


Example 1: Find a complete integral of \( f = xpq + yq^2 - 1 = 0 \).

Solution: Equation (39.7) becomes
\[ \frac{dx}{xq} = \frac{dy}{xp + 2yq} = \frac{dz}{2xpq + 2yq^2} = -\frac{dp}{pq} = -\frac{dq}{q^2}. \]
For finding g = 0, choose
\[ \frac{dp}{pq} = \frac{dq}{q^2} \;\Rightarrow\; p = aq. \]
Now we write \( g(x,y,z,p,q,a) = p - aq = 0 \). Since f = 0 and g = 0 are compatible, substituting p = aq into f = 0 gives \( q^2(ax+y) = 1 \), and we find
\[ p = \varphi(x,y,z,a) = \frac{a}{\sqrt{ax+y}} \quad \text{and} \quad q = \psi(x,y,z,a) = \frac{1}{\sqrt{ax+y}}, \]
so that
\[ dz = \frac{a\,dx + dy}{\sqrt{ax+y}} \]
is integrable. Integrating,
\[ z + b = 2\sqrt{ax+y} \quad \text{or} \quad (z+b)^2 = 4(ax+y) \]
is the complete integral, which may be written as F(x,y,z,a,b) = 0. We also note that the matrix
\[ \begin{pmatrix} F_a & F_{ax} & F_{ay} \\ F_b & F_{bx} & F_{by} \end{pmatrix} \]
is of rank two (verify!).
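
A quick substitution check of the complete integral (Python with sympy; a verification aid, not part of Charpit's construction):

    # Sketch (Python + sympy): z = 2*sqrt(a*x + y) - b, i.e. (z+b)**2 = 4*(a*x+y),
    # satisfies x*p*q + y*q**2 - 1 = 0.
    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = 2 * sp.sqrt(a * x + y) - b
    p, q = sp.diff(z, x), sp.diff(z, y)

    print(sp.simplify(x * p * q + y * q**2 - 1))   # prints 0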

Example 2: Solve \( f = q + xp - p^2 = 0 \).

Solution: Equation (39.7) gives the characteristic system as
\[ \frac{dx}{x - 2p} = \frac{dy}{1} = \frac{dz}{-2p^2 + xp + q} = \frac{dp}{-p} = \frac{dq}{0}. \]
From \( dy = \dfrac{dp}{-p} \) we obtain \( p = ae^{-y} \); choose \( g(x,y,z,p,q,a) = p - ae^{-y} = 0 \).

Solving f = 0, g = 0 for q, we get \( q = p^2 - xp = a^2e^{-2y} - axe^{-y} \).

Then \( dz = p\,dx + q\,dy \) becomes
\[ dz = ae^{-y}dx + \left(a^2e^{-2y} - axe^{-y}\right)dy. \]
On integrating we get
\[ z = axe^{-y} - \frac{1}{2}a^2e^{-2y} + b \]
as the complete integral of f = 0, where a and b are arbitrary constants.

Exercises:

1. Find a complete integral of \( f = z^2 - pqxy = 0 \) by Charpit's method.

2. Find a complete integral of the non-linear p.d.e \( f = (p^2 + q^2)y - qz = 0 \).

3. Use Charpit's method to solve the non-linear first order p.d.e \( f = x^2p^2 + y^2q^2 - 4 = 0 \).

4. Solve \( 16p^2z^2 + 9q^2z^2 + 4z^2 - 4 = 0 \).

5. Solve \( p = (z + qy)^2 \).

6. Find the complete integral of \( 2(y + zq) = q(xp + yq) \).

Keywords: Characteristic System, Complete Integral

References

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.


Module 4: Partial Differential Equations

Lesson 40
Special Types of First Order Non-Linear p.d.e

40.1 Special Types of First Order Non-Linear p.d.e

We now consider four special types of first order non-linear p.d.es for which the complete integral can be obtained easily. The underlying principle in the first three types is that of Charpit's method.

Consider the general p.d.e f(x,y,z,p,q) = 0.

Type I: The equation is free from x, y, z, i.e., f(p,q) = 0. Here \( f_x = 0,\; f_y = 0,\; f_z = 0 \), and the auxiliary system (39.7) simplifies to
\[ \frac{dx}{f_p} = \frac{dy}{f_q} = \frac{dz}{pf_p + qf_q} = \frac{dp}{0} = \frac{dq}{0}. \]
On solving, we get p = a (or q = a). Without loss of generality, take g = p − a = 0. Using this, find q from f = 0, and denote it by q = Q(a). Then \( dz = a\,dx + Q(a)\,dy \); on integration we get
\[ z = ax + Q(a)y + b \]
as the complete integral of f(p,q) = 0.

Example 1

Find a complete integral of \( f = p(1-q) + q = 0 \).

Solution: We have
\[ \frac{dx}{1-q} = \frac{dy}{1-p} = \frac{dz}{p + q - 2pq} = \frac{dp}{0} = \frac{dq}{0}. \]
From the last equation, we have q = a, a constant. Now using this in \( f = p(1-q) + q = 0 \) we get
\[ p(1-a) + a = 0 \;\Rightarrow\; p = \frac{a}{a-1}. \]
\[ \therefore\; dz = \frac{a}{a-1}dx + a\,dy \;\Rightarrow\; z = \frac{a}{a-1}x + ay + b \]
is the complete integral.
a −1 a −1

Type II: The equation is free from x, y , i.e., f ( z, p, q) = 0 .

dp dq
From the characteristic system of equations we consider = , on solving we
p q

get p = aq . Using this we find q as q = Q(a, z ) .

[Note: similarly, one can write q = ap and p = Q(a, z ) ]

Now dz = pdx + qdy = Q(a, z )(adx + dy ) , on integrating we get

dz
∫ Q ( a, z ) = ax + y + b as the complete integral.

Example 2

=
z p2 + q2

Solution: Choose p =aq ⇒ q 2 =z − a 2 q 2

z
q= 1
.
(1 + a ) 2 2

a z 1
=dz 1
dx + 1
z dy
(1 + a ) 2 2
(1 + a )2 2

1 adx + dy 1
or ∫=dz ∫ = 1 1
(ax + y ) + b ,
z
(1 + a )
2 2
(1 + a )2 2

or on simplifying we get

4(1 + a 2 ) z = (ax + y + b) 2 as the complete integral.
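
The complete integral can again be verified by direct substitution; a minimal sketch (Python with sympy):

    # Sketch (Python + sympy): z = (a*x + y + b)**2 / (4*(1 + a**2)) satisfies
    # z = p**2 + q**2.
    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = (a * x + y + b)**2 / (4 * (1 + a**2))
    p, q = sp.diff(z, x), sp.diff(z, y)

    print(sp.simplify(z - p**2 - q**2))   # prints 0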


Type III: Consider a special separable form of f(x,y,z,p,q) = 0, namely g(x,p) = h(y,q).

In this case, the auxiliary equations are
\[ \frac{dx}{g_p} = \frac{dy}{-h_q} = \frac{dz}{pg_p - qh_q} = \frac{dp}{-g_x} = \frac{dq}{h_y}. \]
Solving the first and fourth together, we get
\[ g_x\,dx + g_p\,dp = 0 \quad \text{or} \quad dg(x,p) = 0 \;\Rightarrow\; g(x,p) = a, \text{ a constant}. \]
Since g(x,p) = h(y,q), we also have h(y,q) = a. Solving for p and q, we get p = A(a,x) and q = B(a,y), and the complete integral becomes
\[ z = \int A(a,x)\,dx + \int B(a,y)\,dy + b. \]

Example 3:

Solve \( p - x^2 = q + y^2 \).

Solution: The auxiliary equations are
\[ \frac{dx}{1} = \frac{dy}{-1} = \frac{dz}{p-q} = \frac{dp}{2x} = \frac{dq}{2y}. \]
The first and the fourth equations give \( 2x\,dx - dp = 0 \Rightarrow p - x^2 = a \Rightarrow p = a + x^2 \); also \( q + y^2 = a \Rightarrow q = a - y^2 \). Hence
\[ z = \int(a + x^2)\,dx + \int(a - y^2)\,dy + b, \quad \text{or} \quad z = ax + \frac{x^3}{3} + ay - \frac{y^3}{3} + b \]
is the complete solution.

Type IV: The p.d.e is of the special form z = px + qy + h(p,q), known as the Clairaut equation. Its complete integral is z = ax + by + h(a,b), which clearly satisfies the given p.d.e; moreover, the rank of the matrix
\[ \begin{pmatrix} x + h_a & 1 & 0 \\ y + h_b & 0 & 1 \end{pmatrix} \]
is two.


Example 4

The complete integral of z = px + qy + log pq is the surface given by


z = ax + by + log ab .

Exercises

Find the complete integrals of the p.d.es

1. \( p^2 + q^2 = 9 \).

2. \( pq + p + q = 0 \).

3. \( z = px + qy + p^2q^2 \).

4. \( p(1 - q^2) = q(1 - z) \).

5. \( 1 + p^2 = qz \).

6. \( q + px = p^2 \).

7. \( p - q + 3x = 0 \).

8. \( xyp + qy + pq = yz \).

9. \( z(p^2 + q^2) + px + qy = 0 \).

Keywords: Charpit’s method, Clairaut equation.

References

Sneddon, Ian. (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Suggested Reading

Stavroulakis, I.P. and Tersian, S. (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

Logan, J. David. (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
Module 4: Partial Differential Equations

Lesson 41

Classification of Semi-linear 2nd order Partial Differential Equations

41.1 Classification of 2nd Order Partial Differential Equations: Parabolic – Hyperbolic – Elliptic Equations

A 2nd order semi-linear partial differential equation can be put in the form
\[ Lu + g(x,y,u,u_x,u_y) = 0 \qquad (41.1) \]
where the linear operator
\[ L \equiv R(x,y)\frac{\partial^2}{\partial x^2} + S(x,y)\frac{\partial^2}{\partial x \partial y} + T(x,y)\frac{\partial^2}{\partial y^2} \]
is such that the coefficient functions R, S and T are continuous functions of x, y and \( R^2 + S^2 + T^2 \neq 0 \).

We change the independent variables (x,y) to (ξ,η) as ξ = ξ(x,y) and η = η(x,y); to enforce the one-to-one correspondence of this transformation, we assume \( \xi_x\eta_y - \eta_x\xi_y \neq 0 \).

The coefficients and the partial derivatives in the given equation are written in terms of the transformed variables. The first and second order partial derivatives become:
\[ u_x = u_\xi\xi_x + u_\eta\eta_x, \qquad u_y = u_\xi\xi_y + u_\eta\eta_y, \]
\[ u_{xx} = u_{\xi\xi}\xi_x^2 + 2u_{\xi\eta}\xi_x\eta_x + u_{\eta\eta}\eta_x^2 + u_\xi\xi_{xx} + u_\eta\eta_{xx}, \]
\[ u_{xy} = u_{\xi\xi}\xi_x\xi_y + u_{\xi\eta}(\xi_x\eta_y + \xi_y\eta_x) + u_{\eta\eta}\eta_x\eta_y + u_\xi\xi_{xy} + u_\eta\eta_{xy}, \]
\[ u_{yy} = u_{\xi\xi}\xi_y^2 + 2u_{\xi\eta}\xi_y\eta_y + u_{\eta\eta}\eta_y^2 + u_\xi\xi_{yy} + u_\eta\eta_{yy}. \]
Therefore
\[ Ru_{xx} + Su_{xy} + Tu_{yy} = u_{\xi\xi}\big(R\xi_x^2 + S\xi_x\xi_y + T\xi_y^2\big) + u_{\xi\eta}\big[2R\xi_x\eta_x + S(\xi_x\eta_y + \xi_y\eta_x) + 2T\xi_y\eta_y\big] + u_{\eta\eta}\big(R\eta_x^2 + S\eta_x\eta_y + T\eta_y^2\big) + F(\xi,\eta,u,u_\xi,u_\eta), \]
and equation (41.1) becomes
\[ A(\xi_x,\xi_y)u_{\xi\xi} + 2B(\xi_x,\xi_y;\eta_x,\eta_y)u_{\xi\eta} + A(\eta_x,\eta_y)u_{\eta\eta} = G(\xi,\eta,u,u_\xi,u_\eta) \qquad (41.2) \]
where
\[ A(u,v) = Ru^2 + Suv + Tv^2 \qquad (41.3) \]
\[ B(u_1,v_1;u_2,v_2) = Ru_1u_2 + \frac{1}{2}S(u_1v_2 + u_2v_1) + Tv_1v_2 \qquad (41.4) \]

Now the problem is to determine ξ and η so that equation (41.2) takes the simplest possible (canonical) form. When the sign of the discriminant \( S^2 - 4RT \) of the quadratic form (41.3) is everywhere positive, negative or zero, it is easy to make the classification.

Case A: When \( S^2 - 4RT > 0 \) everywhere in the domain.

The new independent variables ξ and η can be so chosen that the coefficients of \( u_{\xi\xi} \) and \( u_{\eta\eta} \) in (41.2) vanish. The roots λ₁ and λ₂ of the equation \( R\alpha^2 + S\alpha + T = 0 \) are real and distinct, and the coefficients of \( u_{\xi\xi} \) and \( u_{\eta\eta} \) in (41.2) will vanish if we choose ξ and η such that
\[ \frac{\partial \xi}{\partial x} = \lambda_1\frac{\partial \xi}{\partial y}, \qquad \frac{\partial \eta}{\partial x} = \lambda_2\frac{\partial \eta}{\partial y}. \]
A suitable choice is ξ = f₁(x,y), η = f₂(x,y), where f₁(x,y) = c₁ and f₂(x,y) = c₂ are the solutions of the ordinary differential equations
\[ \frac{dy}{dx} + \lambda_1(x,y) = 0, \qquad \frac{dy}{dx} + \lambda_2(x,y) = 0, \]
respectively. It can be verified that
\[ A(\xi_x,\xi_y)A(\eta_x,\eta_y) - B^2(\xi_x,\xi_y;\eta_x,\eta_y) = \frac{(4RT - S^2)(\xi_x\eta_y - \xi_y\eta_x)^2}{4}. \qquad (41.5) \]

Now, when the A's are zero,
\[ B^2 = \frac{(S^2 - 4RT)(\xi_x\eta_y - \xi_y\eta_x)^2}{4}. \]
Since \( S^2 - 4RT > 0 \), we have \( B^2 > 0 \), and hence equation (41.2) reduces to
\[ \frac{\partial^2 u}{\partial \xi \partial \eta} = \phi(\xi,\eta,u,u_\xi,u_\eta). \qquad (41.6) \]

The curves ξ(x,y) = constant, η(x,y) = constant are called the characteristic curves of equation (41.1). Equation (41.6) is called the canonical form of equation (41.1).
Example 1

Reduce the equation \( u_{xx} - x^2u_{yy} = 0 \) to a canonical form.

Solution: Comparing with the standard form, we note that R = 1, S = 0, T = −x². Then \( S^2 - 4RT = 4x^2 > 0 \) for x ≠ 0. So \( R\alpha^2 + S\alpha + T = 0 \) becomes \( \alpha^2 - x^2 = 0 \Rightarrow \alpha = \pm x \), i.e. λ₁ = x, λ₂ = −x. Now
\[ \frac{dy}{dx} + x = 0 \;\Rightarrow\; y + \frac{1}{2}x^2 = c_1, \qquad \frac{dy}{dx} - x = 0 \;\Rightarrow\; y - \frac{1}{2}x^2 = c_2. \]
Taking \( \xi = y + \frac{1}{2}x^2 \), \( \eta = y - \frac{1}{2}x^2 \), we get
\[ u_x = u_\xi x - u_\eta x, \qquad u_y = u_\xi + u_\eta, \]
\[ u_{xx} = x^2u_{\xi\xi} - 2x^2u_{\xi\eta} + x^2u_{\eta\eta} + u_\xi - u_\eta, \qquad u_{yy} = u_{\xi\xi} + 2u_{\xi\eta} + u_{\eta\eta}. \]
\[ \therefore\; u_{xx} - x^2u_{yy} = 0 \;\text{ becomes }\; u_{\xi\eta} = \frac{u_\xi - u_\eta}{4x^2} = \frac{u_\xi - u_\eta}{4(\xi - \eta)}. \]
4x2

Case B: If \( S^2 - 4RT = 0 \).

The roots of the equation \( R\alpha^2 + S\alpha + T = 0 \) are real and equal. We define ξ as in Case A and take η to be any function of x, y which is independent of ξ. In this case we have \( A(\xi_x,\xi_y) = 0 \) as before, and hence from equation (41.5), B = 0. But \( A(\eta_x,\eta_y) \neq 0 \), since ξ and η are independent functions.


Hence the canonical form in this case is
\[ \frac{\partial^2 u}{\partial \eta^2} = \phi(\xi,\eta,u,u_\xi,u_\eta). \qquad (41.7) \]

Example 2

Reduce \( u_{xx} + 2u_{xy} + u_{yy} = 0 \) to canonical form.

Solution: Comparing with the standard form, we note that R = 1, S = 2, T = 1, and \( S^2 - 4RT = 0 \).
\[ R\alpha^2 + S\alpha + T = \alpha^2 + 2\alpha + 1 = (\alpha + 1)^2 = 0 \;\Rightarrow\; \alpha = -1, -1. \]
\[ \therefore\; \frac{dy}{dx} - 1 = 0 \;\Rightarrow\; x - y = c_1. \]
Take ξ = x − y; then choose η = x + y. Using these ξ and η, we have the canonical form
\[ \frac{\partial^2 u}{\partial \eta^2} = 0 \;\Rightarrow\; u = \eta f_1(\xi) + f_2(\xi), \]
where f₁ and f₂ are arbitrary functions. Hence the solution of the given equation is:
\[ z = (x+y)f_1(x-y) + f_2(x-y). \]
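
The solution obtained from the canonical form can be verified directly; the sketch below (Python with sympy, with arbitrary functions f1 and f2) substitutes it into the original equation.

    # Sketch (Python + sympy): u = (x + y)*f1(x - y) + f2(x - y) satisfies
    # u_xx + 2*u_xy + u_yy = 0 for arbitrary f1 and f2.
    import sympy as sp

    x, y = sp.symbols('x y')
    f1, f2 = sp.Function('f1'), sp.Function('f2')
    u = (x + y) * f1(x - y) + f2(x - y)

    lhs = sp.diff(u, x, 2) + 2 * sp.diff(u, x, y) + sp.diff(u, y, 2)
    print(sp.simplify(lhs))   # prints 0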

Case C: \( S^2 - 4RT < 0 \).

In this case, the roots of the equation \( R\alpha^2 + S\alpha + T = 0 \) are complex conjugates. Proceeding as in Case A, the canonical form is \( u_{\xi\eta} = \phi(\xi,\eta,u,u_\xi,u_\eta) \), but ξ and η are complex conjugates. To get the real canonical form, we use the transformation
\[ \alpha = \frac{1}{2}(\xi + \eta), \qquad \beta = \frac{1}{2}i(\eta - \xi) \;\Rightarrow\; \frac{\partial^2 u}{\partial \xi \partial \eta} = \frac{1}{4}\left(\frac{\partial^2 u}{\partial \alpha^2} + \frac{\partial^2 u}{\partial \beta^2}\right). \]
So the canonical form in this case is
\[ \frac{\partial^2 u}{\partial \alpha^2} + \frac{\partial^2 u}{\partial \beta^2} = \varphi(\alpha,\beta,u,u_\alpha,u_\beta). \]

Example 3

Reduce the equation \( u_{xx} + x^2u_{yy} = 0 \) to canonical form.

Solution: Clearly R = 1, S = 0, T = x², and \( S^2 - 4RT = -4x^2 < 0 \). Here \( \alpha^2 + x^2 = 0 \Rightarrow \alpha = \pm ix \), hence λ₁ = ix, λ₂ = −ix, giving
\[ \xi = iy + \frac{1}{2}x^2, \quad \eta = -iy + \frac{1}{2}x^2, \quad \text{so} \quad \alpha = \frac{1}{2}x^2, \quad \beta = y. \]
\[ \Rightarrow\; u_{\alpha\alpha} + u_{\beta\beta} = -\frac{1}{2\alpha}u_\alpha \]
is the canonical form.

Now we classify second order equations of the type (41.1) by their canonical form as: (A) hyperbolic if \( S^2 - 4RT > 0 \), (B) parabolic if \( S^2 - 4RT = 0 \), (C) elliptic if \( S^2 - 4RT < 0 \).

Clearly, the one dimensional wave equation \( u_{tt} = c^2u_{xx} \) is an example of a hyperbolic equation, the one dimensional heat conduction equation \( u_t = \alpha u_{xx} \) is an example of a parabolic equation, and the Laplace equation \( u_{xx} + u_{yy} = 0 \) is an example of an elliptic equation.

Example 4

Discuss the nature of the equation \( u_{xx} + 2xu_{xy} + (2-y^2)u_{yy} = 0 \).

Solution: Here R = 1, S = 2x, T = 2 − y², so \( S^2 - 4RT = 4(x^2 + y^2 - 2) \). Hence the given equation is hyperbolic at all points (x,y) such that \( x^2 + y^2 > 2 \), parabolic if \( x^2 + y^2 = 2 \), and elliptic if \( x^2 + y^2 < 2 \).

Exercises

1. Reduce the equation \( u_{tt} + 4u_{tx} + 4u_{xx} + 2u_x - u_t = 0 \) to its canonical form and classify it.

2. Classify the partial differential equation \( u_{tt} + (5 + 2x^2)u_{tx} + (1 + x^2)(4 + x^2)u_{xx} = 0 \).

Keywords: Elliptic ,Hyperbolic, Parabolic,

References

Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,


Singapore

330 WhatsApp: +91 7900900676 www.AgriMoon.Com


Classification of Semi-linear 2nd order Partial Differential Equations

Amaranath.T, (2003). An Elementary Course in Partial Differential


Equations.Narosa Publishing House, New Delhi

Suggested Reading

I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations. Allied


Publishers Pvt. Limited, New Delhi

J. David Logan, (2004). Partial Differential Equations. Springer(India) Private


Ltd. New Delhi

331 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 4: Partial Differential Equations

Lessons 42

Solution of Homogeneous and Non-Homogeneous Linear Partial


Differential Equations

42.1 Introduction

Consider the homogeneous linear equations with constant coefficients ki 's as

(D n
+ k1D n−1D′ + ... + kn D′n ) z =f ( x, y ) or F ( D, D′) z = f ( x, y ) where

∂ ∂
F ( D, D′) = ∑∑ Crs D r D′s , Crs are constants=
&D = ; D′
z s ∂x ∂y .
Let us find the Complementary function for this equation.

Result 1: If u is the complementary function and z1 a particular integral of a


linear differential equation, then u + z1 is a general solution of the equation.

We have F ( D, D′)u = 0

and F ( D, D′) z1 = f ( x, y )

∴ F ( D, D′) ( u + z1 ) =
f ( x, y ) .

Result 2: If u1 , u1 ,..., un are solutions of the homogeneous linear partial


differential equation F ( D, D′) z = 0 , then c1u1 + c2u2 + .......cnun is also a
solution; Cr ’s are arbitrary constants.

Let F ( D, D′) be a Linear partial differential operator.

This operator is said to be reducible if it can be written as the product of linear


function of the form ( D + ar ′ + b ) with a,b are constants. For example:

332 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

(D 2
− D′2 ) =( D + D′ )( D − D′ ) .

It is said to be irreducible if it cannot be so written. For example D r − D′ is ( )


irreducible.

42.2 Reducible Equations

Result 3: If the operator F ( D, D′) is reducible, the order in which the linear
factors occur is unimportant. Any reducible operator can be written in the form.

We have (α r D + β r D′ + rr ) (α s D + β s D′ + rs )

= α rα s D 2 + (α s β r + α r β s ) DD′ + β r β s D′2 + (rsα r + rrα s ) D + (rs β r + rr β s ) D′ + rr rs


= (α s D + β s D′ + rs )(α r D + β r D′ + rr )

Similarly this is true for any product of finite number of factors.

Result 4: If (α r D + β r D′ + γ r ) is a factor of F ( D, D′) and φr (ξ ) is an arbitrary


function of the simple variable ξ , then if α r ≠ 0 .

 γ x
exp  − r φr ( β r x − α r y )
ur =
 αr 
is a solution of the equation F ( D, D′) z = 0 .

γ γ x
− r ur + β r exp  r φ ′ ( β r x − α r y )
Proof: Dur =
αr  αr 
γ x
−α r exp  r φ ′ ( β r x − α r y )
D′ur =
 αr 

333 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

so that (α r D + β r D′ + γ r ) ur =
0 (42.1)

 n 
=
Now F ( D, D′) ∏ (α s D + β s D′ + γ s )  (α r D + β r D′ + γ r ) ur (42.2)
 s =1 

The prime after the product denotes that the factor corresponding to s = r
is omitted. Combining (42.1) & (42.2) we get F ( D, D′)ur = 0 .

Result 5: If ( β r D′ + γ r ) is a factor of F ( D, D′) and φr (ξ ) is an arbitrary

γ x
function of the simple variable ξ , then if β r ≠ 0 ; ur = exp  r φr ( β r x ) is a
 αr 
solution of the equation F ( D, D′) z = 0 .
Proof: Similar lines to that of result 5.

If F ( D, D′) is decomposed into linear factors such that (α r D + β r D′ + γ r ) is a

multiple factor; (say n=2) then the solution of F ( D, D′) z = 0 is obtained as


given below:

(α r D + β r D′ + γ r ) z=
2
0

Let Z = (α r D + β r D′ + γ r ) z .

Then (α r D + β r D′ + γ r ) Z =
2
0.

Then by result (4), it has solution


 γ x
exp  − r φr ( β r x − α r y ) if α r ≠ 0
Z=
 αr 
To find Z ; we have to solve
rx
 ∂z ∂y  − αr
r

αr + βr + γ=
rz e φr ( β r x − α r y )
 ∂x ∂x 

334 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

dx dy dz
Solution: = =
αr βr −
rr x

−rr z + e αr
φr ( β r x − α r y )
With solution:
dx dy
= ⇒ β r x − α r y =C1
αr βr
dx dz
and =
αr −
rr x
αr
rr z + e φr C1
rr x

=
⇒z
1
αr
{φ ( C ) x + C } e
r 1 2
αr

rr x

∴ z xφr ( β r x − α r y ) + ϕr ( β r x − α r y ) e
= αr

is the solution. φr & ϕr are arbitrary.

Result 6: (This is generalization of result 5) If (α r D + β r D′ + γ r ) (α r ≠ 0 ) is a


n

factor of F ( D, D′) and if the functions φr ,φr ,...,φr are arbitrary, then
1 2 n

 γ x n
exp  − r  ∑ x s −1φrs ( β r x − α r y ) is a solution of F ( D, D′) = 0 .
 α r  s =1

( β r D′ + γ r ) is a factor of F ( D, D′) and if the functions


m
Result 7: If

 γ r y  m s −1
φr ,φr ,...,φr are arbitrary, then exp  −  ∑ x φrs ( β r x ) is a solution of
 β r  s =1
1 2 n

F ( D, D′) z = 0 .

Complementary function of F ( D, D′) z = f ( x, y ) when F ( D, D′) is reducible.


n
′)
We have F ( D, D= ∑ (α D + β D′ + γ ) and if none of α r′ s is zero, then the
mr
r r r
s =1

corresponding complementary function is:

335 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

 γ r x  n s −1
n

∑ exp  − α ∑
u= x φrs ( β r x − α r y ) where
= φrs ( s 1,2,...,
= nr ; r 1,2,..., n ) are
=r 1 =  r s 1

arbitrary.

∂2 z ∂2 z ∂2 z
Consider the second order equation + k + k =
0
∂x 2 ∂x∂y ∂y 2
1 2

(
which is written in the operator form as D 2 + k1DD + k2 D′2 z =
0. )
D
Let its roots be denoted by = m1 , m2 .
D′

Case 1: These roots are real and distinct :


Say ( D − m1D′ )( D − m2 D′ ) z =
0

dx dy dz
( D − m2 D′ ) z = 0 ⇒ = = ⇒ y + m2 x = c1 , z = c2
1 −m2 0

hence=z φ ( y + m2 x ) , where φ is an arbitrary function.

Similarly ( D − m1D′ ) z = 0 ⇒ z = f ( y + m1 x ) , where f is an arbitrary function.

Hence the complete solution is z = f ( y + m1 x ) + φ ( y + m2 x ) .

Case 2: Let these roots be repeated, say m1 = m2 , then

( D − m1D′) 0 ;let ( D − m1D′ ) z =


z=
2
u , then

( D − m1D′) u = 0 ⇒ u = φ ( y + m1 x )
dx dy dz
( D − m1D′) z = φ ( y + m1 x ) ⇒ = =
1 −m1 φ ( y + m1 x )
or y + m1 x = c1; dz = φ (u )dx

or z = xφ ( y + m1 x ) + c2

or z = xφ ( y + m1 x ) + f ( y + m2 x ) is the complementary function.

336 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

Example 1

2 D 2 + 5 DD′ + 2 D′2 =
0
1
2m 2 + 5m + 2 =0 ⇒ m1 =−2, m2 =−
2
 1 
z = f1 ( y − 2 x ) + f 2  y − x  .
 2 

Example 2

r + bs + qt =
0

m 2 + bm + q =0 ⇒ m =−3, −3

z = f1 ( y − 3 x ) + xf 2 ( y − 3 x ) .

Example 3

(D 2
− D′2 ) z =
0 . m 2 − 1 =0 ⇒ m =±1

z = φ1 ( x + y ) + φ2 ( x − y ) .

Example 4

( )
Find the complementary function of D 4 + D′4 z − 2 D 2 D′2 z =
0.

Solution

( D + D′) ( D − D′) z=
2 2
0

α=
1 α=
2 1, γ1 = 0
So the solution is: β=
1 β=
2 1;γ 2 = 0
z xφ1 ( x − y ) + φ2 ( x − y ) + xϕ1 ( x + y ) + ϕ2 ( x + y ) where ϕ arbitrary function.
=

337 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

42.3 Particular Integral

Result 8: We have F ( D, D′)e ax +by = F (a, b)e ax +by .

F ( D, D′) is made up of term of the type Crs D r D′s ; F ( D, D′) = ∑∑ Crs D r D′s
r s

and D r e ax +by = a r e ax +by and D′s e ax +by = b s e ax +by ,


so Crs D r D′s = Crs a r b s e ax +by

and F ( D, D′)e ax +by = F (a, b)e ax +by .

{
Result 9: F ( D, D′) e ax +byφ ( x, y= }
) e ax +by F ( D + a, D′ + b)φ ( x, y )

( )( ) ( )
r r
Solution: D r e axφ = ∑ r C p D ρ e ax D r − ρφ = e ax ∑ r C p a ρ D r − ρ φ ( x, y )
ρ =0 ρ =0

= e ax ( D + a ) φ .
r

′s ebxφ e ax ( D′ + a ) φ .
Similarly, D=
s

Hence F ( D, D′)e ax +=
by
φ eax+by f ( D + a, D′ + a)φ ( x, y ) .
1
=
f ( D, D ′) z F ( x, y ) ⇒
= z F ( x, y )
f ( D, D′)

1 1
Case 1: e ax +by = e ax +by , provided f (a, b) ≠ 0 .
f ( D, D′) f ( a, b)

Case 2:

f ( D 2 , DD′, D′2 )sin(mx + ny ) = f ( −m 2 , −mn, −n 2 ) sin(mx + ny )cos(mx + ny )

1
∴z sin(mx + ny ) or cos(mx + ny ) .
f ( −m 2 , −mn, −n 2 )

338 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

−1
Case 3: F ( x, y ) = x m y n , m, n constants. P.I . =  f ( D, D′ )  x m y n .

1
Case 4: F ( x, y ) is any function of x and y , resolve , into partial
f ( D, D′)
fractions, treating f ( D, D′) as a function of D alone and operate each partial
function of F ( x, y ) , remembering that
1
( D − mD′)
F=
( x, y ) ∫ F ( x, c − mx)dx

where c is replaced by y + mx after integration.

Example 5

(
Find the solution of D 2 − D′2 z =
x− y )
The complementary function is : φ1 ( x + y ) + φ2 ( x − y ) .

The particular integral is obtained as:


Let =
z1 ( D + D′) z
Then ( D − D′ ) z1 =
x− y

∂z1 ∂z1 1
− = x − y ⇒ z1 = ( x − y ) + f ( x + y ) , f is arbitrary.
2

∂x ∂y u

Exercises
Find the solution of the linear p.d.e with constant coefficients:
1. D 2 + 4 DD′ − 5 D′2 z = sin(2 x + 3 y )

(
2. D 2 − DD′ z =)
cos x cos 2 y

3. D 3 − 2 D 2 D′ =2e 2 x + 3 x 2 y

4. 4 D 2 − 4 DD′ + D=
′2 16log( x + 2 y ) .

339 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

42.4 The complementary function of irreducible equations

F ( D, D′) z = f ( x, y )
Irreducible factors are treated as follows:
1
Case 1: The particular integral z = f ( x, y ) is obtained by Expanding
F ( D, D′)
the operator F −1 by the binomial theorem and then interpret the operator
D −1 , D′−1 as integration.

Example 6
Find a Particular Integral of the equation ( D 2 − D′) z =2 y − x2 .

Solution
−1
 D2  1
z= 2
1
( D − D′)
( 2 y − x 2
) =−  1− 
D′  D′
( 2 y − x2 )

 1 D2 D4 
or z =1 − − 2 − 3 − .......  ( 2 y − x 2 )
 D′ D′ D′ 

=( − y 2 + x 2 y ) −
1
(−2) − .....
D′2
=− y 2 + x2 y + y 2 =x2 y .
Case 2: If f ( x, y ) is made of term of the form exp(ax + by )
1
then P.I is: e( ax +by ) if F (a, b) ≠ 0 .
F ( a, b)

( D 2 − D′) z =
e( ax +by ) , F (a, b)= 3 ≠ 0
1 ( ax +by ) 1 ( ax +by )
So e = e .
( D 2 − D′) 3

If F (a, b) = 0 then z = we( ax +by )

340 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

and F ( D + a, D′ + b) w =
c.

F ( D, D′) z = ce( ax +by ) .

Example 7
Find the particular solution of ( D 2 − D′) z =
e( ax +by ) .
Clearly F (1,1) = 0 .

F ( D + 1, D′ + 1) = ( D + 1) − ( D′ + 1) = D 2 + 2 D − D′
2

(
or D 2 + 2 D − D′ w =
1 )
1 
1 −1  D 2 + 2 D   x
1 =  1 −  1 =
2 
 D2 + 2D  D′  D′  − y 
− D′ 1 − 
 D′ 

1 ( ax +by )
∴ P.I. are xe & − ye x + y .
2
f ( x, y ) involving trigonometric functions Re. or Img. Write it as exp(i....)
use the above method.

Otherwise: method of Undetermined Coefficients.


Ex: ( D 2 − D′)=
z A cos(lx + my ) , A, l , m are constants.
Let a P.I. =
z c1 cos(lx + my ) + c2 sin(lx + my ) .
Find D 2 z , D1 z .

Equating the coefficients of sine & cosine terms.


0 
mc1 − l 2c2 = 
We get 2 =
−l c1 + mc2 =
⇒ z
A
A
m −l
2 4 {
m sin(lx + my ) + l 2 cos(lx + my ) }

341 WhatsApp: +91 7900900676 www.AgriMoon.Com


Solution of Homogeneous and Non-Homogeneous Linear Partial Differential Equations

∂2 z ∂2 z ∂2 z
Exercises: Denote: = r , = s= , 2 t . Find the solution of
∂x 2 ∂x∂y ∂y
1. r + s − 2t =e x+ y .
2. r − s + 2q − z =x2 y 2 .
3. r + s − 2t − p − 2q =0.

∂2 z ∂2 z
4. Solve 2 + 2 = e − x cos y .
∂x ∂y

Keywords: Complementary function, Irreducible, Particular integral.

References

Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,


Singapore

Amaranath. T, (2003). An Elementary Course in Partial Differential


Equations.Narosa Publishing House, New Delhi

Suggested Reading

I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.


Allied Publishers Pvt. Limited, New Delhi

J. David Logan, (2004). Partial Differential Equations. Springer(India)


Private Ltd. New Delhi

342 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 4: Partial Differential Equations

Lesson43

Non-Homogeneous Linear Equation

43.1 Complementary Function and Particular Solution

Consider the non-homogeneous linear equation

n
f ( D, D′) z = F ( x, y ) where f ( D, D′) = ∏D
r =1
r − mDr′ − Cr ;

for some fixed r , the solution may be written as

dx dy dz
= = ⇒ y + mx = a , z = becx .
1 −m cz

( )
Example1: D 2 + 2 DD′ + D′2 − 2 D − 2 D′ z= sin ( x + 2 y )

( D + D′)( D + D′ − 2 ) =z sin ( x + 2 y )

Solution corresponding to the factor ( D + D′ − 2 ) is:

=z e 2 xφ ( y − x )

and the complementary function is: φ1 ( y − x ) + e 2 xφ ( y − x ) .

The Particular Integral is


1
sin ( x + 2 y )
( D 2
+ 2 DD′ + D ′ 2
− 2 D − 2 D′ )

343 WhatsApp: +91 7900900676 www.AgriMoon.Com


Non-Homogeneous Linear Equation

1
=
− sin ( x + 2 y )
2( D + D′) + 9
−2( D + D′) − 9
sin ( x + 2 y )
4 ( D 2 + 2 DD′ + D′2 ) − 81

1
=  2cos ( x + 2 y ) − 3sin ( x + 2 y ) 
39 

Exercises: Solve the following non-homogeneous equations

(
1. D 2 + DD′ + D′ − 1 z = )
e− x

2. ( D + D′ − 1)( D + 2 D′ − 3) z =4 + 3x + 6 y

3. ( D′ + DD′ + D′ ) z =x 2 + y 2

(
4. 2 DD′ + D′2 − 3D′= )
z 3cos ( 3 x − 2 y )

Keywords:Non-Homogeneous,

References

Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,


Singapore

Amaranath.T, (2003). An Elementary Course in Partial Differential


Equations.Narosa Publishing House, New Delhi

Suggested Reading

I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.


Allied Publishers Pvt. Limited, New Delhi

J. David Logan, (2004). Partial Differential Equations. Springer(India)


Private Ltd. New Delhi

344 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 4: Partial Differential Equations

Lesson 44
Method of Separation of Variables

44.1 Introduction

This is the oldest systematic procedure for the solving a class of partial
differential equations. The underlying principle in this method is to transform
the given partial differential equation to a set of ordinary differential equations.
The solution of the p.d.e. is then written as either the product
z ( x, y ) = X ( x) ⋅ Y ( y ) ≠ 0 or as a sum z ( x=
, y ) X ( x) + Y ( y ) where X ( x) and Y ( y ) are

functions of x and y respectively.

44.2 Method of Separation of Variables


Many practical problems in p.d.e. can be solved by the method of separation of
variables. Usually, the first order p.d.e. can be solved by this method without
the need for Fourier Series which is described in the latter lessons. Let us
illustrate the separation of variables technique by few examples.

Example 1
0 subject to the condition =
Solve the first order p.d.e. z x + 2 z y = y ) 4e −2 y .
z ( x 0,=

Solution
We look for a separable solution for z ( x, y ) in the form z ( x, y ) = X ( x) ⋅ Y ( y ) ≠ 0 .
Substituting this in the given p.d.e we obtain
X ′( x) ⋅ Y ( y ) + 2 X ( x) ⋅ Y ′( y ) =
0.

This can be separated into 2 o.d.es, one in x and the other in y as:
X ′( x) Y ′( y )
= − .
2 X ( x) Y ( y)

345 WhatsApp: +91 7900900676 www.AgriMoon.Com


Method of Separation of Variables

Note that the left hand side of the equality is a function of x alone and it is
equated to a function of y alone which is on the right hand side. This is possible
only when both are equal to the same constant (say) k which is called an
arbitrary separation constant. Thus we have

X ′( x) Y ′( y )
= k=
2 X ( x) Y ( y)

This gives two o.d.es. as: X ′( x) − 2kX ( x) =


0 , Y ′( y ) − kY ( y ) =
0

having solutions X ( x) = Ae−2 kx and Y ( y ) = Be− ky where A and B are arbitrary


constants. Hence the general solution is z ( x, y ) = X ( x) ⋅ Y ( y ) = Cek (2 x − y ) , C = AB .

Eliminating the arbitrary constant C using the given condition z (0, y ) = 4e−2 y
we get C = 4 and k = 2 . Hence the particular solution is z ( x, y ) = 4e4 x −2 y .

Let us now demonstrate this method for a non-linear p.d.e.

Example 2
 x2 
( xyz ) subject to the condition u ( x, 0) = 3exp   .
Solve y p + x q =
2 2 2 2 2

 4

Solution
Note that p = z x and q = z y write z ( x=
, y ) X ( x) ⋅ Y ( y ) in the given equation. This

will produce separate the two variables as


2 2
1  X ′( x)  1  Y ′( y ) 
2   =
1− 2  λ 2 (say)
 =
x  X ( x)  y  Y ( y) 

1  X ′( x)  1  Y ′( y ) 
⇒  =λ and  = 1− λ2 .
x  X ( x)  y  Y ( y) 

0 and Y / ( y ) − 1 − λ 2 yY ( y ) =
Solving these two o.d.es X / ( x) − λ xX ( x) = 0 , we find

λ 2 y
x 1− λ 2
X ( x) = Ae 2
and Y ( y ) = Be 2
.

2
346 WhatsApp: +91 7900900676 www.AgriMoon.Com
Method of Separation of Variables

 λ y 2 
Hence the general solution is z (=
x, y ) C exp  x 2 + 1− λ=  , C AB .
2 2 

 x2  1
The boundary condition u ( x, 0) = 3exp   implies C = 4 and λ = .
 4 2

1 3 2
∴ The particular solution =
is z ( x, y ) 4 exp  x 2 + y  .
4 4 

Let us now solve a second order equation using this method.

Example 3
∂2 z ∂z ∂y
Solve −2 + =
0.
∂x 2
∂x ∂x
Solution
Write Z ( x=
, y ) X ( x) ⋅ Y ( y ) .

∂z ∂2 z ∂z
= X ′( x) ⋅ Y ( y ) , = X ′′( x) ⋅ Y ( y ) and= X ( x) ⋅ Y ′( y ) .
∂x ∂x 2
∂y

Using these in the given equation, we get X ′′ − 2 X ′ − λ X =


0 and Y ′ + λY =
0

where λ is the arbitrary separation constant. Solving these o.d.es., we obtain

( ) (
X ( x) C1 exp  1 + 1 + λ x  + C2 exp  1 − 1 + λ x  and=
=
    )
Y ( y ) C3 exp [ −λ y ] .

Hence the required solution is

Z=
( x, y ) {C exp (1 +
4 )   (  ) }
1 + λ x  + C5 exp  1 − 1 + λ x  exp [ −λ y ]

where C4 ( = C1C3 ) and C5 ( = C2C3 ) are the arbitrary constants.

Exercise: Solve the following using the method of separation of variables.

∂u ∂u
1. =4 , given that u (0, y ) = 8e−3 y .
∂x ∂y

3
347 WhatsApp: +91 7900900676 www.AgriMoon.Com
Method of Separation of Variables

∂z ∂z
2. 4 + =
3 z subjected to=z 3e − y − e −5 y when x = 0 .
∂x ∂y

∂ 2 z ∂z
3. Find a solution of the equation − − 2z =
0 subject to the conditions:
∂x 2 ∂x
∂z
=
z ( x 0,=
y ) 0 And ( x = 0, y ) = 1 + e −3 y .
∂x

Keywords: Method of Separation of Variables, Separation of variables.

References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi

Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi
J. David Logan, (2004). Partial Differential Equations. Springer(India)
Private Ltd. New Delhi

4
348 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations

Lesson 45

One Dimensional Heat Equation

45.1 Introduction

The one dimensional heat equation is a parabolic partial differential equation.


We wish to estimate the heat transfer in a very thin long (finite or infinite) string
at some location on the string at any given time. Let x be the coordinate along
the thin rod and let t represent the time. Then the 1-dimensional heat
conduction equation is given by

∂z ∂2 z
= c2 2 (45.1)
∂t ∂x

k
where z (t , x) representing the heat conducting in the material and c 2 = is the

diffusivity constant with k being the thermal conductivity, ρ being the density
and s being the specific heat. The problem is well posed if this differential
equation is supplemented with an initial condition and two boundary conditions.
Let us attempt to solve this equation with suitable initial and boundary
conditions using some standard mathematical techniques such as the method of
separation of variables and integral transform techniques. By a solution of heat
equation, we mean a physically realistic solution that obeys the ‘natural’
physical process.

45.2 Solution of the Heat Equation – Method of separation of variables

Assume that a solution of (45.1) can be written in the form

z (=
t , x) X ( x) ⋅ T (t ) .

∂z ∂2 z
Finding and and substituting in (45.1), we get the set of ordinary
∂t ∂x 2
differential equation as

349 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Heat Equation

X ′′( x) − λ X ( x) =
0 (45.2)

and T ′(t ) − λ c 2T (t ) =
0 (45.3)

with λ as the arbitrary separation constant which takes positive or negative or


zero. Solving equation (44.2) and (44.3) for these three cases of λ we get the
following three cases for the solution z (t , x) .

Case 1: Take λ > 0 , say λ = p 2 .

( x) c1e px + c2 e − px and T (t ) = c3ec p t


2 2
In this case, X=

(t , x) ( c4 e px + c4 e − px ) ec p t
i.e., z=
2 2
(45.4)

with c4 = c1c3 and c5 = c2c3 are arbitrary constants.

Case 2: Take λ < 0 , say λ = − p 2

= X ( x) c6 cos px + c7 sin px , and T (t ) = c8e − c p t


2 2
In this case,

z (t , x) ( c9 cos px + c10 sin px ) e − c p t


and=
2 2
(45.5)

=
where c9 c=
6 c8 ; c10 c7 c8 are arbitrary constants.

Case 3: Take λ = 0 .

, x) ( c14 x + c15 )
In this case z (t= (45.6)

( x) ( c11 x + c12 ) , T (t ) = c13 where


as X= = c14 c=
11c13 ; c15 c12 c13 are arbitrary constants.

Now, among these three possible solutions, we have to choose the one that is
physically realistic. In general, the solution of heat conduction problem is
exponentially decaying with time ' t ' . This property is clearly seen only
when λ < 0 .

2
350 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

z (t , x) ( c1 cos px + c2 sin px ) e − c p t
Thus the suitable solution of the heat equation is=
2 2

The values of c1 , c2 and p are found based on the initial and boundary

conditions associated with the equation. Let us see this solution procedure in
some special situations.

Example 44.1

∂z ∂ 2 z
solve the heat conduction problem = 2 0 < x < 1, t > 0
∂t ∂x

with the initial condition = x) sin nπ x and the boundary conditions


z (t 0,=

z (t , = = 0 and z (t , x= 1)= 0 for t > 0 .


x 0)

Solution: The physically realistic solution of the given equation is

= ( c1 cos px + c2 sin px ) e− p t
2
z ( x, t )

Determining the constants c1 , c2 and p :

Using the boundary condition at x = 0 , we have c1e− p t = 0


2

This implies c1 = 0 as e− p t ≠ 0; ∀t > 0 .


2

∴ z ( x, t )= c2 sin px ⋅ e − p t
2

The other boundary condition at x = 1 gives c2 ⋅ sin p ⋅ e− p t =


2
0

Now, if we take c2 = 0 , then z ( x, t ) ≡ 0 which should be ruled out as we are


seeking a non-trivial solution for the given problem.

So c2 ≠ 0 , hence sin p =0 ⇒ p =nπ , n = 0,1, 2,....

x) an sin nπ x ⋅ e − p t
∴ z (t , =
2

3
351 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

z ( x, t ) a0 sin 0π x ⋅ e − p t ,
At this stage, note that with each n , n = 0,1, 2,.... ,we get=
2

a1 sin1π x ⋅ e − p t , a2 sin 2π x ⋅ e − p t etc. as the solutions.


2 2

Using the principle of superposition (valid only for linear p.d.es.), we can write
the general solution as the infinite sum of these solutions as

= ∑a sin nπ x ⋅ e − p t .
2
z ( x, t ) n
n =0

Now using the initial condition z (0, x) = sin nπ x ,


see z (t , x) ∑
we= = an sin nπ x sin nπ x
n =0

Comparing the coefficients on both sides, we get an= 1 ∀n .

Hence the solution of the heat equation satisfying the given initial and boundary

=
conditions is written as z (t , x) ∑ sin nπ x ⋅ e
n =0
− p 2t
.

Example 44.2

Let us replace the boundary condition z (t , x= 1)= 0 by z (t , x= 1)= 20t in the


example (44.1) and look for the solution.

z (t , x) c2 sin px ⋅ e − p t , when evaluated at x = 1 , we have


The solution=
2

z (t ,1) c2 sin p ⋅ e − p t = 20t


=
2

This will neither give any information about p nor about c2 . Thus the separation
of variables would then be futile. This example clearly indicates the restricted
use of the method of separation of variables.

Let us now consider an example with derivative boundary conditions.

4
352 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

Example 44.3

∂z ∂ 2 z
Solve the equation = , 0 < x < L , subject to the boundary conditions
∂t ∂x 2
∂z ∂z
(t , = = 0 , = (t ,=
x 0) x L=
) 0 and the initial condition =
z (t 0,=
x ) h( x ) .
∂x ∂x

Solution: Using the separation of variables method the general solution of the
z (t , x) ( A cos px + B sin px ) e − p t
heat conduction equation can be written as=
2

∂z
The boundary condition (t , 0) =0 ⇒ B =0
∂x

z (t , x) A cos px ⋅ e − p t .
Hence=
2

∂z nπ
The other condition= (t= 0 ( A ≠ 0 ) ⇒ p = ; n = 0,1, 2,...
, L) 0 ⇒ sin pL =
∂x 2

− n 2π 2
nπ x t
=
Thus we can write zn (t , x) an cos ⋅e L2
L

− n 2π 2
∞ ∞
nπ x
z (t , x) ∑ zn (t , x) ∑ an cos
t
=
and = ⋅e L2
.
= n 0= n 0 L

Using the initial condition, we get


nπ x
∑a
n =0
n cos
L
= h( x )

The unknown coefficients an are computed using the half range Fourier Cosine
Series expansion, which gives

nπ x
L L
1 2
L ∫0 ∫
a0 = h( x)dx and an = h( x) cos dx .
L0 L

5
353 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

Thus for a given function h( x) , we find an ’s and the final solution is written as
− n 2π 2

nπ x
z (t , x) ∑ an cos
t
= ⋅e L2

n =0 L

Example 44.4

Two ends A and B of a thin rod of length 10 cm have the temperature at 30o C
and 80o C until steady state is reached. The temperatures of the ends are
changed to 40o C and 60o C respectively. Find the temperature distribution in
the rod at time t .

Solution: In the steady state condition, z is a function of x i.e., z (t , x) = z ( x)

∂z ∂ 2 z ∂2 z
and = 2 becomes 2 = 0 . The steady state solution is zs ( x=
) ax + b ... (i)
∂t ∂x ∂x

The initial temperature at the ends A and B before the steady state is reached
are z (=
x 0)= 300 C ... (ii)

and z= = 800 C ... (iii).


( x 10)

These conditions imply z (0, x=) 30 + 5 x ... (iv)

The boundary conditions are

z (t , 0) = 400 C ... (v)

and z (=
t ,10) 600 C ∀t ... (vi)

i.e., the boundary values are non-zero, we split up the temperature function
z (t , x) into the sum of zs ( x) and zt (t , x) i.e., z (=
t , x) zs ( x) + zt (t , x) ......(vii)

where zs ( x) is the steady state solution (involving x only) satisfying the


boundary conditions (v) and (vi);

6
354 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

and zt (t , x) is z (t , x) − zs ( x) , which is the transient part of the solution which


decays with the increase in time.

Since zs (0) = 40 and zs (10) = 60 , the steady solution zs ( x=


) ax + b

becomes zs ( x=) 2 x + 40

∂z ∂ 2 z
The transient solution zt (t , x) is obtained by solving = subject to the
∂t ∂x 2
initial condition z (= = 30 + 5 x and the boundary conditions
x 0) z (t , 0) = 400 C

and z (t ,10) = 600 C .


∴ z (t , x) = ( 40 + 2 x ) + ∑ ( an cos px + bn sin px ) e − p t .
2

n =1

Now z (t , 0) = 40 + ∑ an cos px ⋅ e − p t
40 = ⇒ an =∀
2
0 n.


Hence z (t , x) = ( 40 + 2 x ) + ∑ bn sin px ⋅ e− p t .
2

n =1

The other boundary condition is z (t ,10) = 60


⇒ 60 = 40 + 20 + ∑ bn sin10 p ⋅ e − p t
2

n =1


Since bn ≠ 0 , sin10 p =0 ⇒ p =
10


nπ x − n10π t
∴ z (t , x) = ( 40 + 2 x ) + ∑ bn sin ⋅e
n =1 10

Now the unknown bn are obtained by making use of the initial condition

z (0, x=
) 30 + 5 x .


nπ x
⇒ 30 + 5 x = 40 + 2 x + ∑ bn sin
n =1 10

7
355 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation


nπ x
or ∑b
n =1
n sin
10
= 3 x − 10

Considering the half-range Fourier sine series expansion for ( 3x − 10 ) , we


determine

nπ x
10
2 60 20 20 20
=bn ∫ ( 3x − 10 ) sin dx =
− cos nπ + cos nπ − =
− [ 2 cos nπ + 1] .
10 0 10 nπ nπ nπ nπ

Hence the desired solution is

[ 2 cos nπ + 1] sin nπ x ⋅ e− n10π  t .


2

20
z (t , x) = (40 + 2 x) −
π

n =1 n 10

Exercises:

∂z ∂2 z
1. Solve the heat conduction problem =α2 2 , 0 < x < l ;
∂x ∂x

Subject to the boundary and initial conditions

∂z ∂z
(t , 0) = 0 , (t , l ) = 0 ; z (0, x) = x .
∂x ∂x

2. The temperatures at one end of a bar 10cm long with insulated sides is
kept at 0oC and that the other end is kept at 100 oC until steady state
condition attained. The two end are then suddenly insulated so that the
temperature gradient is zero at each end thereafter. Find the temperature
distribution in the bar.
∂z ∂2 z
3. Solve = α 2 2 subject to the conditions:
∂x ∂x
(i) z is decaying as t → ∞ in 0 < x < l ;
∂z ∂z
(ii) (t , 0)= 0= (t , l ) .
∂x ∂x

8
356 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation

Keywords: Heat Conducting, Heat equation, Parabolic Partial Differential


Equation,

References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi

Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi
J. David Logan, (2004). Partial Differential Equations. Springer(India)
Private Ltd. New Delhi

9
357 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations

Lesson 46

One Dimensional Wave Equation

46.1 Introduction

In general wave motion occurs in vibrating strings, vibrating membranes etc.


Waves travelling through a solid media, acoustic waves, water waves, shock
waves etc are normally observed in nature. The standard and the simplest
example is the vibration of a stretched flexible string which is modelled as the
one dimensional wave equation. It is an example for the hyperbolic equation.
∂2 z 2 ∂ z
2
Mathematically, it is represented as = c where z (t , x) denoting the
∂t 2 ∂x 2
deflection of the string at any position and at any point of time. The constant
P
c= denotes the wave speed with P denoting the tension in the string and m
m

is the mass per unit length of the string.

The solution of the wave equation should describe the wave motion and this
involves periodic sine and cosine terms. A particular solution of this equation
can be obtained by specifying two initial conditions and two boundary
conditions. Let us now see the solution of this one dimensional wave equation
using the separation of variables technique using various types of boundary
conditions. Note that we pose either initial displacement or initial velocity or
both for the initial conditions to make the mathematical formulation as a well
posed problem.

46.2 The Method of Separation of Variables to the 1-D Wave Equation: A


finite string of length L that is fixed at both ends and is released from rest with
an initial displacement at some position. The mathematical representation of
this problem given by

358 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

∂2 z 2 ∂ z
2
= c (46.1)
∂t 2 ∂x 2

satisfying the boundary conditions (i) z (t , = = 0 ; and (ii) z (t ,=


x 0) x L=
) 0

and the initial conditions (iii) =


u (t 0,=
x) f ( x) (Initial displacement),

∂u
(iv) = (t 0,=
x) 0 (string is released from rest, the initial velocity is zero).
∂t

We now write the solution z (=


t , x) X ( x) ⋅ T (t ) (46.2)

In the equation (1) it becomes

T ′′ X ′′
X ( x) ⋅ T ′′(t=
) c 2 X ′′( x) ⋅ T (t ) or = = λ (46.3)
c 2T X

where λ is the arbitrary separation parameter. This will result in two ordinary
differential equations as T ′′ − λ c 2T =
0 (46.4)

and X ′′ − λ X =
0 (46.5)

We consider the three cases for λ > 0 , λ < 0 , λ = 0 .

Case 1: Take λ > 0 say λ = p 2 , and solving equations (46.4) and (46.5)

( c1e px + c2e− px )( c3ecpt + c4e−cpt )


We get the solution as z (t , x) = (46.6)

Case 2: When λ < 0 say λ = − p 2 the solution becomes

( c5 cos px + c6 sin px )( c7 cos cpt + c8 sin cpt )


z (t , x) = (46.7)

( c9 x + c10 )( c11t + c12 )


Case 3: When λ = 0 ; we get z (t , x) = (46.8)

Among these three possible solutions for the wave equation, the physically
realistic solutions that represents the periodic functions of x and t is when λ < 0
( c1 cos px + c2 sin px )( c3 cos cpt + c4 sin cpt ) .
i.e., z (t , x) =

359 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

The arbitrary constants c1 , c2 , c3 , c4 and p are determined as shown below:

0 ⇒ c1 ( c3 cos cpt + c4 sin cpt ) =


The first boundary condition z (t , 0) = 0

For this to be true for all t > 0, we should have c1 =


0 .

Hence z (t , x) c2 sin px ( c3 cos cpt + c4 sin cpt )


=

∂z
One of the initial condition (0, x) = 0 implies
∂t

c2 sin px ( −c3 ⋅ cp cos cpt + c4 ⋅ cp sin cpt ) =0.

We have two possibilities, one is c2 =


0 ⇒ z (t , x) =
0 . This is ruled out in order to

have a non-trivial solution. The other possibility is c4 = 0 (note c ≠ 0; p ≠ 0 )

∴ z (t , x) =
c2 c3 sin cpt cos cpt . At this stage we use the other boundary condition

which is given as z (t , L) = 0 ⇒ c2c3 sin pL cos cpt =


0 ∀ t.


As c2 ≠ 0; c3 ≠ 0 , we have sin pL = 0 ⇒ p = n = 0,1, 2,...
L

nπ x
=Thus z (t , x) A=
sin cos cpt where A c2 c3 . This constant is determined using the
L
nπ x
other initial condition as: z (0, x) = f ( x) ⇒ A sin f ( x) where c2 c3 = A .
=
L

πx πx πt
Choose f ( x) = 3sin ⇒ A = 3, n = 1 , hence the solution is z (t , x) = 3sin cos .
L L L

Example 2

A tight string, 2m long with c = 30m/s is initially at rest but is given an initial
velocity 300sin 4π x from its equilibrium position. Determine the displacement at
1
the position x = m of the string.
8

360 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

Solution:

∂2 z ∂2 z
Given = 900 Subject to z (t , 0)= 0= z (t , 2) , and z (0, x) = 0 and
∂t 2 ∂x 2
∂z
(0, x) = 300sin 4π x . The solution may be written as
∂t

z (t , x) = ( A cos 30 pt + B sin 30 pt ) ⋅ D sin px .

Now, z (t , 0) = 0 ⇒ c = 0

∴ z (t=
, x) ( A cos 30 pt + B sin 30 pt ) ⋅ D sin px

Also zero initial displacement ⇒ A =


0

Hence z (t , x) =
B ⋅ D sin px ⋅ sin 30 pt .

∂z 300 5
As (0, x)= 300sin 4π x= 30 p ⋅ B ⋅ D ⋅ sin px ⇒ p =
4π and B=
⋅D =
∂t 30 ⋅ 4π 2π

5
∴ z=
(t , x) sin120π t ⋅ sin 4π x .

1
We now determine the maximum displacement at x = occurs when
8
2.5
sin120π t = 1 and then zmax = .
π

Note that the condition z (t , 2) = 0 is not used in determining the arbitrary


constants but it is satisfied automatically.

Example 3

A string of length L which is fixed at both ends is initially in equilibrium

position. It is set in vibrating mode by given each point a velocity


v0  πx 3π x 
4 3sin L − sin L  . Find the displacement at any point of the string.

361 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

Solution:

∂2 z 2 ∂ z
2
The equation of the vibrating sting is = c .
∂t 2 ∂x 2

The boundary conditions are z (t , 0) = 0 , z (t , L) = 0 . Also given the initial


∂z v0  πx 3π x 
conditions are z (0, x) = 0 and=
(0, x) 3sin L − sin L  . As seen in the
∂t 4

earlier examples, here also the solution of the vibrating string after applying the
boundary conditions reduces to

nπ x  cnπ t cnπ t 
z (t , x) c2 sin  c3 cos + c4 sin .
L  L L 

nπ x
Now the initial condition z (0, x) = 0 ⇒ c2c3 sin 0 ∀x ⇒ c2 c3 =
= 0 ⇒ c3 =
0 (for
L
non-trivial solution).

nπ x cnπ t
∴ z (t , x) =
bn sin sin where bn = c2c4 .
L L

As the wave equation is linear, by the principle of superimposition, we can


nπ x cnπ t
write the general solution as z (t , x) = ∑ bn sin sin .
L L

Applying the other initial condition we have

∂z v0  πx 3π x  ∞
cnπ nπ x
∂t
(0, x=
)
4  3sin
L
− sin
L 
= ∑n =1 L
bn sin
L
.

nπ x
Comparing the coefficients of sin on both sides we see
L

3Lv0 − Lv0
b1 = , b3 = , b= b= = 0
b4 ...
4cπ 12cπ
2 3

Hence the solution of the given problems is

362 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

Lv0  πx cπ t 3π x 3cπ t 
=z (t , x) 9sin L sin L − sin L sin L  .
12cπ

46.3 The D’Alembert Solution of the Wave Equation:

∂2 z 2 ∂ z
2
Consider the wave equation 2 = a 2 (46.9)
∂t ∂x

and introduce the new independent variables ξ =


x − at ;η =
x + at (46.10)

These are the two characteristics of the hyperbolic equation. The equatin (46.9)
∂2 z
is transformed to its canonical form as =0 (46.11)
∂ξ∂η

∂z ∂z ∂z ∂z ∂z ∂z ∂2 z ∂2 z ∂2 z ∂2 z
This is because =+ ; = −a +a , = + 2 +
∂x ∂ξ ∂η ∂t ∂ξ ∂η ∂u 2 ∂ξ 2 ∂ξ∂η ∂η 2

∂2 z 2 ∂ z
2
2 ∂ z
2
2 ∂ z
2
and =
a − 2 a + a .
∂t 2 ∂ξ 2 ∂ξ∂η ∂η 2

Substituting these in equation (4.9) results in equation (46.11). Now integrating


∂z
(46.11) with respect to ξ gives = h (η ) , where h (η ) is a arbitrary function
∂η

z (η , ξ )
of η . Integrating again w.r.t η , we get= ∫ h (η ) dη + g (ξ ) which can be

, ξ ) f (η ) + g (ξ ) with g (ξ ) is an arbitrary function of ξ


also be written as z (η=

alone and the integral is a function of η alone and is written as f (η ) .

Thus the solution is written as z (t , x) = f ( x + at ) + g ( x − at ) (46.12)

This is the called the D’ Alembert’s solution of the wave equation.

∂z
Case 1: Let the initial conditions be z (0, x) = φ ( x) and (0, x) = 0 .
∂t

Now differentiating (4) with respect to ‘ t ’ and putting t = 0 ,

363 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

cf ′( x + ct ) t 0=
We obtain = − cg ′( x − ct ) t 0 =
0 , ⇒ f ′( x) = g ′( x) ⇒ f ( x) = g ( x) + k , where

k is a constant. Also, , 0) φ=
z ( x= ( x) f ( x) + g ( x) φ ( x) ,
⇒ 2 g ( x) + k =

1 1 1
⇒ g ( x)= [φ ( x) − k ] . ( x) g (=
Hence f= x) + k [φ ( x) − k ] , or=
f ( x) [φ ( x) + k ] .
2 2 2

Hence the general solution (46.12) takes the form

1
z (t=
, x) (φ ( x) + ct ) + (φ ( x) − ct )  . (46.13)
2

∂z
Case 2: Suppose now that z (0, x) = 0 and (0, x) = θ ( x)
∂t

∂z
From equation (4), we have (0, x) = af ′( x) − ag ′( x) = θ ( x)
∂t

s
1
a ∫0
⇒ f ( x) − g=
( x) θ ( s )ds + D

where s is a dummy variable of integration and D is an arbitrary constant.

Also, z (0, x) = 0 ⇒ f ( x) + g ( x) =
0

⇒ f ( x) =
− g ( x) or f (0) − g (0) =
C=
2 f (0) =
−2 g (0) .

s s
1 1
This ⇒ =
f ( x) ∫
2a 0
− ∫ θ ( s )ds + g (0) ,
θ ( s )ds + f (0) and g ( x) =
2a 0

and finally, the solution of the wave equation becomes


1  
x + at x − at x + at
1
=z (t , x) 
2a  ∫0
θ ( s )ds − ∫0
θ ( s )ds 

=
2a x −∫at
θ ( s )ds .

Thus from these two cases, it is evident that a particular solution is obtained for
a given φ ( x) and θ ( x) respectively.

364 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

Example 4:

φ ( x) k (sin x − sin 2 x) and obtain a particular solution of the wave


In case, 1 take=
equation.

Solution:

1
We have z ( x,=
t) [ f ( x + ct ) + f ( x − ct )]
2

 k {sin ( x + ct ) − sin 2( x + ct )} + k {sin ( x − ct ) − sin 2( x − ct )}


1
=
2

= k (sin x cos ct − sin 2 x cos 2ct ) .

Keywords: Separation of Variables, Separation of variables, separation of


variables, Wave Equation, D’ Alembert’s solution

Exercises:

1. Using D’Alembert Method, find the deflection of a vibrating string of unit


length having fixed ends with initial velocity zero and initial deflection:

a
(i) f (=
x) a( x − x 2 ) (ii) f (=
x) (1 + cos 2kx)
2

2. An infinite string is given the initial velocity

0, x < −1
10( x + 1), −1 ≤ x ≤ 0

θ ( x) = 
10(1 − x), 0 ≤ x ≤ 1
0,1 < x

If the string has zero initial displacement find the solution of the wave equation.

365 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

∂2 z 2 ∂ z
2
3. Solve = c , 0 < x < L subject to the conditions
∂t 2 ∂x 2

3π x ∂z
z= (=
(0, t ) z= =
L, t ) 0 ; z (0, x) sin ; (0, x) 0 .
L ∂x

∂2 z ∂2 z
4. Solve = , 0 < x < L , subject to
∂t 2 ∂x 2

∂z
=
z (0, t ) 0; z (= x, 0) µ x( L − x);
L, t ) 0; z (= ( x, 0) = 0 .
∂t

5. The points of trisection of a sting are pulled aside through the same distance
on opposite sides of the position of equilibrium and the string is released from
rest. Derive an expression for the displacement of the string at subsequent time.

∂2 z ∂2 z
6. Solve = , 0 < x < L; z=
(0, t ) z=
( L, t ) 0;
∂t 2 ∂x 2

 1
 x, 0 ≤ x ≤ 2 ∂z
z ( x, 0) =  ; ( x, 0) = 0 .
1 − x, 1 ≤ x ≤ 1 ∂t
 2

References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi

Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi

366 WhatsApp: +91 7900900676 www.AgriMoon.Com


One Dimensional Wave Equation

J. David Logan, (2004). Partial Differential Equations. Springer(India)


Private Ltd. New Delhi

367 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 4: Partial Differential Equations

Lesson 47
Laplace Equation in 2-Dimensions

47.1 Introduction

∂z  ∂2 z ∂2 z 
= α 2 + 2 
Heat conduction in a two dimensional region is given by
∂t  ∂x ∂y 

where z (t , x, y ) denoting the temperature in the region. This is clearly a parabolic


equation. When we consider steady state conditions, z = z ( x, y ) i.e, z is
∂2 z ∂2 z
independent of time and the equation reduces to + =
0 which will be
∂x 2 ∂y 2

elliptic in nature. Unlike the hyperbolic and parabolic equations where initial
conditions are also specified, in case of elliptic equation only boundary
conditions are specified, thus making these problems as pure boundary value
problems. Let Ω be the interior of a simple closed differentiable boundary curve
Γ and f be a continuous function defined on the boundary Γ . The problem of

finding the solution of the above Laplace equation in Ω such that it coincides
with the function f on the boundary Γ is called the Dirichlet Problem.

Finding a function z ( x, y ) that satisfies the Laplace equation in Ω and satisfies


∂z ∂
= f ( s ) on Γ where representing the normal derivative along the outward
∂η ∂η

normal direction to the surface z ( x, y ) that obeys ∫ f (s)ds = 0


Γ
is known as the

Neumann Problem. The third boundary value problem , known as the Robin
Problem is one in which the solution of the Laplace equation is obtained in Ω
∂z
that satisfies the condition + g (s) z (s) =
0 on Γ where g ( s ) ≥ 0 and g ( s ) ≠ 0 . We
∂η

now describe the method of Separation of variables technique for the Laplace
equation.

368 WhatsApp: +91 7900900676 www.AgriMoon.Com


Laplace equation in 2-Dimensions

∂2 z ∂2 z
We have the Laplace equation given by + =
0 (47.1)
∂x 2 ∂y 2

Let z ( x, y ) = X ( x) Y ( y ) (47.2)

∂2 z ∂2 z
Finding and and substituting these in (1) and separating them into
∂x 2 ∂y 2

two ordinary differential equations, we get

X ′′ − λ X =
0 and Y ′′ + λY =
0 . where λ is the arbitrary separation parameter.
Solving these equations, we get three possible solutions for λ = p 2 , λ = − p 2 and
λ = 0 . These forms are:

( c1e px + c2e− px ) ( c3 cos py + c4 sin py ) ; λ > 0


a) z ( x, y ) =

( c5 cos px + c6 sin px ) ( c7 e py + c2e− py ) ; λ < 0


b) z ( x, y ) =

and c) z ( x, y ) =( c9 x + c10 )( c11 y + c12 ) ; λ =0.

Of these, we take that solution which is consistent with the given boundary
conditions.

47.2 Dirichlet Problem in a Rectangular Region:

Example 1:

∂2 z ∂2 z
Solve the Laplace equation + =
0 in the region with the boundary
∂x 2 ∂y 2

conditions as shown in the figure

0OC

0OC Ω 1
50sin π y O C

0OC 2
369 WhatsApp: +91 7900900676 www.AgriMoon.Com

2 x
Laplace equation in 2-Dimensions

Solution:

X ′′ Y ′′
=
− = p2
X Y

Note: Here we considered λ = p 2 to allow sinusoidal variation with y , to be


consistent with the boundary conditions.

Then the general solution is

( Ae px + Be− px ) ( C cos β y + D sin β y ) .


z ( x, y ) =

The boundary conditions are expressed as

=
z ( x 0,=
y ) 00 C ; = y ) 50sin π y 0C
z ( x 2,=

z ( x, y= 0)= 00 C ; z ( x, y= 1)= 00 C .

Now z (0, y ) =0∀y ⇒ A + B =0 ⇒ A =− B .

Hence z ( x, 0) = 0∀x ⇒ D = 0 .

z ( x,1) = 0∀x ⇒ sin β = 0 ⇒ β = π

∴ z ( x, y ) = AC ( eπ x − e −π x ) sin π y

The non-homogeneous boundary condition at x = 2

⇒ AC ( e 2π − e −2π ) sin π y =
50sin π y

50

= AC = 0.0934;( x, y )
( e − e−2π )

Thus the temperature at any point ( x, y ) is written as


z ( x, y ) 0.0934 ( eπ x − e −π x ) sin π y .
=

3
370 WhatsApp: +91 7900900676 www.AgriMoon.Com
Laplace equation in 2-Dimensions

47.3 Temperature Distribution is Studied in an Infinitely Long Plate.

Example 2:

An infinitely long plane uniform plate is bounded by two parallel edges and at
an end at right angles to them as shown in the adjacent figure. Find the
temperature distribution at any point of the plate in the steady state.

u OC
0O C 0O C

π x
Solution:

The steady state temperature distribution in this infinitely long plate is obtained
∂2 z ∂2 z
by solving + =
0.
∂x 2 ∂y 2

The boundary conditions are = y ) 00 C , z ( x= π , y =


z ( x 0,= ) 00 C∀y > 0 ,

z ( x, = = u0 0C for 0 < x < π ,


y 0) z ( x, y → ∞) → 0 for 0 < x < π .

Among the three possibilities for solution i.e., solution forms (a), (b), (c), we
chose a solution that is consistent with the given boundary conditions. He
solution given in equation (a) cannot satisfy the boundary condition z (0, y ) = 0∀y
. The solution given in equation (c) cannot satisfy the condition
in z ( x, y → ∞) → 0 in 0 < x < π .

4
371 WhatsApp: +91 7900900676 www.AgriMoon.Com
Laplace equation in 2-Dimensions

( A cos px + B sin px ) ( Ce py + De− py ) .


Thus we have the solution as z ( x, y ) =

Now z (0, y ) = A ( Ce py + De− py ) = 0 ⇒ A =


0

( x, y ) B sin px ( Ce py + De − py ) .
∴ z=

z (π , y ) = 0∀y ⇒ B sin pπ ( Ce py + De − py ) = n , an integer ( B ≠ 0 ) .


0 ⇒ p=

x, y ) B sin nx ( Ce py + De − py ) .
∴ z (=

As z ( x, y → ∞) → 0 ⇒ c = BD sin nxe − ny .
0 ∴ z ( x, y ) =

Taking BD = bn and write the general form of the solution as



z ( x, y ) = ∑ bn sin nxe − ny .
n =1


Using the non-homogeneous boundary condition u ( x, 0)= u=
0 ∑b
n =1
n sin nx

The unknown coefficients are found using the half range Fourier sine series
expansion in (0, π ) as

π  4u0
2 2  ,=
n 2m − 1
= ∫ = u0 1 − (−1)  =  nπ n
bn u0 sin nxdx , m is a positive integer.
π 0 π 0, n = 2m

4u0  − y 1 
Thus z ( x, y ) =  e sin x + e −3 y sin 3 x + ... .
π  3 

Example 3

∂2 z ∂2 z
Solve + =
0 subject to z=
(0, y ) z=
(a, y ) z=
( x, b ) 0 and
∂x 2 ∂y 2

z ( x, 0)= z (a − x), 0 < x < a .

5
372 WhatsApp: +91 7900900676 www.AgriMoon.Com
Laplace equation in 2-Dimensions

Solution

Physically realistic solution here is

( c1 cos px + c2 sin px ) ( c3e py + c4e− py ) .


z ( x, y ) =

z (0, y ) =0 ⇒ c1 =0 ,


z (a, y ) =0 ⇒ sin pa =0 ⇒ p = , n is an integer
a

nπ x  nπa y − nπ y

=
∴ z ( x, y ) c2 sin  3
c e + c4 e a

a  

Take c2c3 = A , c2c4 = B .

 −nπ y 
nπ y − nπ y − B exp  
z ( x, b ) =
0 ⇒ Ae a
+ Be a
=
0 ⇒ A=  a .
 nπ y 
exp  
 a 

− nπ b
 − nπ y 
nπ x  − Be a nπa y
=
∴ z ( x, y ) sin e + Be a 
a  nπ b 
 e 
a

nπ ( y −b ) − nπ ( y −b )
−B nπ x  
= nπ b
sin e a
−e a
.
a
a  
e

So the general solution is now written as


nπ x nπ ( y − b ) −2 B
z ( x, y ) = ∑ bn sin sinh , where bn = nπ b
n =1 a a
ea

Now using the non-homogeneous condition


nπ b nπ x ∞
nπ x
z ( x, 0) = x(a − x) = ∑ bn sinh
n =1 a
sin
a
= ∑ Bn sin
n =1 a
(say)

6
373 WhatsApp: +91 7900900676 www.AgriMoon.Com
Laplace equation in 2-Dimensions

nπ x
x
2
=
the coefficient Bn are found as Bn
a0∫ x(a − x) sin
a
dx

 8a 2
4a 2  ,=
n 2m − 1
= 3 3 (1 − cos nπ ) =  n3π 3 , m is a positive integer.
nπ 0, n = 2m

4nπ (b − y )
sin
8a 2 nπ x
∴ z ( x, y ) = ∑ a
π n =1,3,5,... n3 sinh nπ (b − y )
3
sin
a
a

(2n + 1)π (b − y )
8a 2
or z ( x, y ) = 3 ∑
∞ sinh
a sin
( 2n + 1) π x .
π n =0 (2n + 1)3 sinh (2n + 1)π (b − y ) a
a

Keywords: Dirichlet Problem, Neumann Problem, Robin Problem

Exercises 1

∂2 z ∂2 z
1. Solve + =
0 in 0 < x < π,0 < y < π; with the conditions
∂x 2 ∂y 2

z= (π , y ) z=
(0, y ) z= ( x, π ) 0 ; z ( x, 0) = sin 2 x

2. A rectangular plate has sides a and b . taking the side of length a as OX


and that of length b as OY and other sides to be x = a and y = b , the sides
πx
x = 0 , x = a , y = b are insulated and the edge y = 0 is kept at temperature u0 cos .
a
Find the temperature z ( x, y ) in the steady state.

∂2 z ∂2 z
3. Solve 2 + 2 = 0 subject to
∂x ∂y

7
374 WhatsApp: +91 7900900676 www.AgriMoon.Com
Laplace equation in 2-Dimensions

(i) =
z (0, y ) 0;=
z ( x, 0 0);= z ( x,1) 100sin π x .
z (1, y ) 0;=

(ii) = z (=
z (0, y ) 0;= =
x, 0 0); z (1, y ) 100sin π y; z ( x,1) 0 .

∂z ∂z
=
(iii) z (0, y ) 0;=
( x, 0) 0; =
(1, y ) 0;=
z ( x,1) 100 .
∂y ∂x

=
(iv) =
z (0, y ) 100; =
z ( x, 0) 100; =
z (1, y ) 200; z ( x,1) 100 .

References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi

Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi
J. David Logan, (2004). Partial Differential Equations. Springer(India)
Private Ltd. New Delhi

8
375 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations

Lesson 48

Application of Laplace and Fourier Transforms to Boundary Value


Problems in p.d.es

48.1 Introduction

One dimensional heat and wave equations in semi-infinite or infinite region can
be solved using the Laplace transform or Fourier transform techniques.
Application of these transforms on these one dimensional equations reduce the
p.d.e to an ordinary differential equation. The solution of this o.d.e. involves the
parameter that is associated with the transformation and on applying the inverse
transformation, this solution gives the solution of the given p.d.e.

48.2 Selection of Transform Technique

If for a problem, z (t , x = 0) is given, then we make use infinite sine transform to


∂2 z ∂z
remove form the differential equation. In case if (t , x = 0) (flux
∂x 2 ∂x

∂2 z
condition) is given, then we employ infinite cosine transform to remove . If
∂x 2
in a problem z (t , 0) and z (t , L) (Dirichlet boundary condition) are given, then we
∂2 z ∂z
use finite sine transform to remove 2 in the p.d.e similarly if (t , 0) and
∂x ∂x
∂z
(t , L) (Neumann boundary condition.) are given, then use finite cosine
∂x

∂2 z
transform to remove . Let us consider some examples to explain this
∂x 2
technique. Let us now see the transform of the partial derivatives.

Example 1:
∂z ∂2 z ∂z ∂2 z
find the Laplace transform L of (i) (ii) 2 (iii) (iv) 2
∂t ∂t ∂x ∂x

376 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

Solution:

∂z ∂z
(i) L   = ∫ e− st dt
 ∂t  0 ∂t

∂z
p

= lim ∫ e − st dt
p →∞
0
∂t

 − st 
p

lim e z (t , x) + s ∫ e − st z (t , x)dt 
p

p →∞
 0
0 

= sL [ z (t , x) ] − z (0, x)

 ∂2 z  ∂z
(ii) Show that L  2 = s 2 L [ z (t , x)] − s [ z (0, x)] − (0, x) is left as an exercise.
 ∂t  ∂t
∞ ∞
  ∂z ∂z d d
=
(iii) L   ∫=
e − st dx = ∫ e − st zdx L [ z (t , x) ]
 ∂x  0
∂x dx 0 dx

 ∂2 z  d 2
(iv) L  2
= 2 L [ z (t , x) ] is left as an exercise.
 ∂x  dx

Example 2:
 ∂2 z   ∂2 z   ∂2 z 
Find (i)   2  (ii) s  2  and (iii) c  2  where  is the Fourier transform,
 ∂x   ∂x   ∂x 

s is the sine and c is the cosine Fourier transform.

Solution:

 ∂2 z  ∂2 z
By definition   2  = ∫ e−isx 2 dx
 ∂x  −∞ ∂x
∞ ∞
∂2 z ∂z
∫ ise
− isx − isx
= e + dx
∂x 2 −∞ −∞
∂x
∞ ∞
∂z ∞
= e − isx − ise − isx z − s 2 ∫ e − isx z ⋅ dx
∂x −∞ −∞
−∞

∂z
= − s 2  [ z (t , x) ] provided z and tend to zero as x → ±∞ .
∂x

377 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

 ∂2 z  ∂z
∴ s  2  = − s 2   z ( t , x )  provided both z and → 0 as x → ±∞ .
 ∂x  ∂x

 ∂2 z  ∞ ∂2 z
(ii) By definition, s  2
= ∫ 2 sin sxdx
 ∂x  0 ∂x
∞ ∞
∂z ∂z
= sin sx − ∫ s cos x dx
∂x 0 0 ∂x

∂z
− s cos x ⋅ z 0 − s 2 s  z ( t , x ) 

= sin sx
∂x 0

∂z
s ⋅ z ( t , x ) x =0 − s 2 s  z ( t , x )  provided z → 0 ,
= → 0 as x → ∞ .
∂x

 ∂2 z  ∞ ∂2 z
(iii) By definition, c  2  = ∫ 2 cos sxdx
 ∂x  0 ∂x

Integrating by parts, twice, we obtain


∂z
=
− − s 2 c  z ( t , x )  provided z → 0 as x → ∞ .
∂x x =0

Note: these results indicate that (i) if z ( t , x ) is specified at x = 0∀t , then Fourier
∂z
sine transform is useful and (ii) if at x = 0∀t is specified , then the Fourier
∂x
cosine is useful in semi-infinite region.

48.3 Heat Conduction – This Infinite Rod - Use of Fourier Transform


Method

Example 3:

∂z ∂2 z
Solve = k 2 , −∞ < x < ∞; t > 0; subject to z ( x, 0) = f ( x) , −∞ < x < ∞ and z ( x, t ) is
∂x ∂x
bounded as x → ±∞ .

Solution:

1
let z (t , s) indicate   z ( t , x )  i.e.,   z ( =
t , x )  z=
(t , s ) ∫ z ( t , x )e
− isx
dx
2π −∞

378 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

In the above we used the other form of Fourier transformation of a function.


∂z ∂2 z
Apply Fourier transform to the equation =k 2
∂t ∂x


we get z (t , s ) + ks 2 z (t , s ) =
0
∂t

Its solution is z (t , s) = A( s)e− ks t where A an arbitrary function to be determined


2

from the initial condition. Applying transform to the initial condition,



1
 [ z (0, x) ] = ∫ z (0, x)e
isx
dx
2π −∞


1
or z (0, s) =
2π ∫
−∞
f ( x)eisx dx = F ( s ) (say)

⇒ A( s ) =
F ( s ).

Thus the solution is z (t , s) = F ( s)e− ks t .


2

Taking inverse Fourier transform to z (t , s) , we get



1
 −1
[ z (=
t , s)] z=
(t , x) ∫ F ( s )e
− ks 2t − isx
e ds
2π −∞

which is the solution of the given equation.

48.4 Heat Conduction Problem - Fourier Sine Transform

Example 4:
∂z ∂ 2 z
Solve = subject to z (t , 0) = 0 for t > 0 ,
∂t ∂x 2

1, 0 < x < 1


z (0, x) =  and z (t , x) is bounded.
0, x > 1

379 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

Solution:
∂z ∂ 2 z
Let z (t , s) denote s [ z (t , s)] , apply Fourier sine transform to =
∂t ∂x 2
∞ ∞
∂z ∂2 z ∂
we get ∫0 ∂t sin sxdx = ∫0 ∂x 2 sin sxdx, ∂t
z (t , s ) =
− s 2 z (t , s ) + sz (t , 0)

∂z
⇒ =− s 2 z ⇒ z =Ae − s t .
2

∂t

1 − cos s
1
From the initial condition we have s [ z ( x, 0)] = ∫ 1⋅ sin sx ⋅ ds =
0
s

1 − cos s
or z ( s, 0)= A=
s

 1 − cos s  − s 2t
⇒ z ( s, t ) =
 e
 s 

Taking inverse Fourier sine transform, we get



2 1 − cos s − s 2t
s [ z (=
x, 0) ] z=
π ∫0
−1
( x, t ) e sin sx ⋅ ds is the solution.
s

Note: The integral in general cannot be evaluated using simple integration rules
with real variables, these involve complex integration techniques, so these
integrals are left as they are.

48.5 Heat Conduction Equation - Fourier Cosine Transform Method:

Example 3:

Using cosine Fourier transform technique


∂z ∂2 z ∂z
= k 2 0 ≤ x ≤ ∞, t > 0 subject to z (0, x) = 0 for x ≥ 0 (t , 0) = −a (constant)
∂t ∂x ∂t
z (t , x) is bounded .

380 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

Solution:
Let z ( s, t ) denote c [ z (t , x)]

∂z ∂2 z
Applying Fourier cosine transform to the equation =k 2
∂t ∂x
∂z (t , s )  ∂z (t , 0)  ∂z
we get =k  − s 2 z − or + ks 2 z =
ka
∂t  ∂t  ∂t

 e ks t  2
2

z (t , s )  ka 2 + c  e − ks t
which has the solution=  ks 
 
where c is the arbitrary constant.

Taking transform to the initial condition, we get z (0, s) = 0


ka −a
⇒ z (0, s ) = 2
+c = 0⇒c = 2
ks s
a
( )
∴ z (t , s ) = 2 1 − e − ks t .
s
2

Taking inverse Fourier cosine transform , we get



2 1
( )
z (t , x) =∫ 2 1 − e − ks t cos sx ⋅ ds
π 0s
2

Keywords: Fourier Cosine Transform Method, Fourier Sine Transform,


Laplace transform, Fourier transform, boundary value problem

Exercises 1
∂z ∂2 z
1: Solve = k 2 , 0 ≤ x ≤ ∞, t > 0 Subject to
∂t ∂x
(i) z (t ,=
0) z0 , t > 0 (ii) z (0, x=
) 0;0 < x < ∞ and (iii) z (t , x) is bounded.

∂z ∂ 2 z
2. Solve = ; 0 < x < ∞; t > 0 subjected to the boundary conditions
∂t ∂x 2
∂z
(i) (t , 0) = 0 for t > 0 (ii) z (t , x) is bounded for x > 0, t > 0
∂x

381 WhatsApp: +91 7900900676 www.AgriMoon.Com


Application of Laplace and Fourier transforms to boundary value problems in p.d.es

 x, 0 ≤ x ≤ 1
and the initial condition (iii) z (0, x) =  .
0, x > 1

∂z ∂2 z
3. Solve = 2 2 if= z (0, x) e − x and z (t , x) is bounded where x > 0 and
z (t , 0) 0;=
∂t ∂x
t > 0.

References

Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,


Singapore

Amaranath. T, (2003). An Elementary Course in Partial Differential Equations.


Narosa Publishing House, New Delhi

Suggested Reading

I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.


Allied Publishers Pvt. Limited, New Delhi

J. David Logan, (2004). Partial Differential Equations. Springer(India)


Private Ltd. New Delhi

382 WhatsApp: +91 7900900676 www.AgriMoon.Com


Module 4: Partial Differential Equations

Lesson 49

Laplace And Fourier Transform Techniques To Wave Equation And


Laplace Equation

49.1 Introduction

In this lesson we see the utility of the Fourier transform technique for the hyperbolic (wave) and elliptic (Laplace) equations.

Example 1:

Solve the problem of vibrations of an infinite string governed by

∂²z/∂t² − c² ∂²z/∂x² = 0, −∞ < x < ∞, t > 0,

subject to the initial conditions

(i) z(0, x) = f(x), (ii) ∂z/∂t (0, x) = g(x), −∞ < x < ∞,

and the far-field boundary conditions z(t, x), ∂z/∂x (t, x) → 0 as x → ±∞.
∂x
Solution: Taking the Fourier transform of the governing equation and of the initial conditions (i), (ii), we get

∂²z̄(t, s)/∂t² + c²s² z̄(t, s) = 0, .... (iii)

z̄(0, s) = F(s), ∂z̄/∂t (0, s) = G(s),

where ℱ[z(t, x)] = z̄(t, s), ℱ[f(x)] = F(s) and ℱ[g(x)] = G(s).

Equation (iii) ⇒ z̄(t, s) = A e^(isct) + B e^(−isct).

From the transformed initial conditions we get A = (1/2)[F(s) + G(s)/(ics)] and B = (1/2)[F(s) − G(s)/(ics)].

∴ z̄(t, s) = (1/2)[F(s) + G(s)/(ics)] e^(isct) + (1/2)[F(s) − G(s)/(ics)] e^(−isct).



Now taking the inverse Fourier transform of this solution,

z(t, x) = ℱ⁻¹[z̄(t, s)] = (1/√(2π)) ∫_{−∞}^∞ e^(−isx) z̄(t, s) ds,

we get

z(t, x) = (1/2) [ (1/√(2π)) ∫_{−∞}^∞ e^(−is(x−ct)) F(s) ds + (1/√(2π)) ∫_{−∞}^∞ e^(−is(x+ct)) F(s) ds ]
        + (1/2c) [ (1/√(2π)) ∫_{−∞}^∞ e^(−is(x−ct)) (G(s)/(is)) ds − (1/√(2π)) ∫_{−∞}^∞ e^(−is(x+ct)) (G(s)/(is)) ds ]

= (1/2)[f(x − ct) + f(x + ct)] − (1/2c) ∫_0^{x−ct} g(ξ) dξ + (1/2c) ∫_0^{x+ct} g(ξ) dξ

= (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(ξ) dξ.

This is D'Alembert's solution of the wave equation.
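D'Alembert's formula is easy to verify numerically. A minimal Python sketch (assuming NumPy and SciPy; f, g, c and the sample point below are illustrative choices) evaluates the formula and checks that the residual z_tt − c² z_xx vanishes up to discretization error:

import numpy as np
from scipy.integrate import quad

c = 2.0
f = lambda x: np.exp(-x**2)       # initial displacement f(x)
g = lambda x: x * np.exp(-x**2)   # initial velocity g(x)

def z(t, x):
    # D'Alembert: (1/2)(f(x-ct) + f(x+ct)) + (1/2c) * integral of g over (x-ct, x+ct)
    integral, _ = quad(g, x - c * t, x + c * t)
    return 0.5 * (f(x - c * t) + f(x + c * t)) + integral / (2.0 * c)

# central-difference residual of z_tt - c^2 z_xx at a sample point
t0, x0, h = 0.7, 0.3, 1e-2
z_tt = (z(t0 + h, x0) - 2 * z(t0, x0) + z(t0 - h, x0)) / h**2
z_xx = (z(t0, x0 + h) - 2 * z(t0, x0) + z(t0, x0 - h)) / h**2
print(z_tt - c**2 * z_xx)   # small compared with z_tt itself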

Exercise 1:

A tightly stretched flexible string has its ends fixed at x = 0 and x = L. At time t = 0 the string is given a shape defined by f(x) = μx(L − x), where μ is a constant, and then released. Find the displacement of any point x of the string at any time t > 0.

Example 2:

An infinitely long string having one end at x = 0 is initially at rest along the x-axis. The end x = 0 is given a transverse displacement f(t) for t > 0. Find the displacement of any point of the string at any time.

Solution: Let z(t, x) denote the displacement of the string at any point x at any time t. The wave equation is given by


∂²z/∂t² = c² ∂²z/∂x², 0 < x < ∞, t > 0,

subject to the initial conditions z(0, x) = 0, ∂z/∂t (0, x) = 0, the boundary condition z(t, 0) = f(t), and z(t, x) is bounded.

Now taking the Laplace transform on both sides of the governing equation, we get

L[∂²z/∂t²] = c² L[∂²z/∂x²] ⇒ s² z̄ − s z(0, x) − ∂z/∂t (0, x) = c² ∂²z̄/∂x², where L[z(t, x)] = z̄(s, x).

Using the initial conditions, we get s² z̄ = c² ∂²z̄/∂x², i.e. ∂²z̄/∂x² = (s/c)² z̄,

and its solution is z̄(s, x) = A e^(sx/c) + B e^(−sx/c), where A and B are arbitrary constants.
where A and B are arbitrary constants.
We have z(t, 0) = f(t), so at x = 0, L[z(t, 0)] = z̄(s, 0) = L[f(t)] = f̄(s), say.

Since z(t, x) is bounded as x → ∞, we must have A = 0 in the above equation.

∴ z̄(s, x) = B e^(−sx/c).

Now z̄(s, 0) = B = f̄(s).

∴ The solution in terms of the transformed variable s is z̄(s, x) = f̄(s) e^(−sx/c).

On finding the inverse Laplace transform of f̄(s) e^(−sx/c), we obtain, by the second shifting theorem, the required solution as z(t, x) = f(t − x/c) for t > x/c, with z(t, x) = 0 for t < x/c.

Note: Alternatively, use the complex inversion formula

z(t, x) = (1/2πi) ∫_{a−i∞}^{a+i∞} f̄(s) e^((t − x/c)s) ds.
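The solution is a wave travelling to the right with speed c: the boundary signal f(t) reaches station x only after the delay x/c. A minimal Python sketch of this behaviour (hedged: f and c below are illustrative choices; the condition t > x/c encodes the causality coming from the second shifting theorem):

import numpy as np

c = 3.0
f = lambda t: np.sin(np.pi * t)   # sample boundary displacement

def z(t, x):
    # z(t, x) = f(t - x/c) for t > x/c, and 0 before the wave front arrives
    tau = t - x / c
    return np.where(tau > 0, f(tau), 0.0)

print(z(1.0, 1.5))   # tau = 0.5, the point is already moving: f(0.5) = 1
print(z(1.0, 4.5))   # tau = -0.5, the disturbance has not reached x yet: 0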


49.2 Solution of the Laplace Equation in the Upper Half Plane: Dirichlet Problem

Example 3:

Solve ∂²z/∂x² + ∂²z/∂y² = 0, −∞ < x < ∞, y > 0, .... (i)

subject to the boundary conditions

z(x, 0) = f(x), −∞ < x < ∞, ..... (ii)

z(x, y) is bounded as y → ∞, −∞ < x < ∞, ..... (iii)

both z and ∂z/∂x → 0 as x → ±∞. ..... (iv)

Solution:

Let z̄(s, y) denote the Fourier transform of z(x, y) with respect to x. Observe that z is defined for −∞ < x < ∞, so the Fourier transform technique can be used. Taking the Fourier transform of ∂²z/∂x² + ∂²z/∂y² = 0,

we get ∂²z̄(s, y)/∂y² − s² z̄(s, y) = 0. .... (v)

Its solution is given by z̄(s, y) = A(s) e^(sy) + B(s) e^(−sy). .... (vi)

Given that z(x, y) is bounded as y → ∞, z̄(s, y) must also be bounded as y → ∞.

This means A(s) = 0 for s > 0 and B(s) = 0 for s < 0.

∴ z̄(s, y) = z̄(s, 0) e^(−|s|y),

where z̄(s, 0) = ℱ[z(x, 0)] = ℱ[f(x)] = F(s).

∴ z̄(s, y) = F(s) e^(−|s|y). .... (vii)


We note that ℱ⁻¹(e^(−|s|y)) = √(2/π) · y/(y² + x²).

Taking the inverse Fourier transform on both sides of equation (vii) and applying the convolution theorem, we get

z(x, y) = f(x) * √(2/π) · y/(y² + x²)

= (1/√(2π)) ∫_{−∞}^∞ f(ξ) √(2/π) · y/(y² + (x − ξ)²) dξ = (y/π) ∫_{−∞}^∞ f(ξ)/(y² + (x − ξ)²) dξ.

Thus z(x, y) = (y/π) ∫_{−∞}^∞ f(ξ)/(y² + (x − ξ)²) dξ is the solution of the Laplace equation in the upper half plane.
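This Poisson integral is easy to check numerically. A hedged Python sketch (the boundary data f below is an illustrative choice that decays at infinity; SciPy's quad handles the infinite range):

import numpy as np
from scipy.integrate import quad

f = lambda xi: 1.0 / (1.0 + xi**2)   # sample boundary data

def z(x, y):
    # z(x, y) = (y/pi) * integral of f(xi) / (y^2 + (x - xi)^2) over the real line
    integrand = lambda xi: f(xi) / (y**2 + (x - xi)**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return y / np.pi * val

print(z(0.0, 1e-3))   # close to f(0) = 1: the boundary data is recovered as y -> 0+
print(z(0.0, 10.0))   # small: the harmonic extension averages out and decays with y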


Exercise: 1. Take f(x) = 1 and obtain z(x, y).
2. Take f(x) = x and obtain z(x, y).

49.3 Solution of the Laplace Equation in the Upper Half Plane: Neumann Problem

Example 4:

Solve the Laplace equation with a derivative boundary condition:

∂²z/∂x² + ∂²z/∂y² = 0, −∞ < x < ∞, y > 0, subject to ∂z/∂y (x, 0) = g(x), −∞ < x < ∞,

with the conditions that z is bounded as y → ∞, that z and ∂z/∂x vanish as x → ±∞,

and that ∫_{−∞}^∞ g(x) dx = 0, which is the necessary condition for the existence of a solution.

Solution: Use the transformation v(x, y) = ∂z(x, y)/∂y.

Then z(x, y) = ∫_a^y v(x, η) dη, where y = a is the reference level at which z vanishes.


Now ∂²v/∂x² + ∂²v/∂y² = ∂/∂y (∂²z/∂x² + ∂²z/∂y²) = 0,

and v(x, 0) = ∂z(x, 0)/∂y = g(x) (given).

Thus we have ∂²v/∂x² + ∂²v/∂y² = 0, −∞ < x < ∞, y > 0,

subject to v(x, 0) = g(x), −∞ < x < ∞;

v is bounded as y → ∞, and both v and ∂v/∂x vanish as x → ±∞.

Thus we have the Dirichlet problem in terms of the new dependent function v(x, y). Its solution is given by

v(x, y) = (y/π) ∫_{−∞}^∞ g(ξ)/(y² + (ξ − x)²) dξ

⇒ z(x, y) = (1/π) ∫_a^y [ ∫_{−∞}^∞ η g(ξ)/(η² + (ξ − x)²) dξ ] dη.

Interchanging the order of integration and using ∫_a^y η/(η² + (ξ − x)²) dη = (1/2) log[((ξ − x)² + y²)/((ξ − x)² + a²)], we get

z(x, y) = (1/2π) ∫_{−∞}^∞ g(ξ) log[((ξ − x)² + y²)/((ξ − x)² + a²)] dξ,

which is the required solution.
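A hedged numerical check of this formula in Python (g below is an illustrative choice with ∫ g = 0, and a is the reference level where z is taken to vanish; near the boundary the reconstructed z should satisfy ∂z/∂y ≈ g(x)):

import numpy as np
from scipy.integrate import quad

g = lambda xi: xi * np.exp(-xi**2)   # odd, so it integrates to zero over the real line
a = 1.0                              # reference height: z(x, a) = 0 by construction

def z(x, y):
    integrand = lambda xi: g(xi) * np.log(((xi - x)**2 + y**2) / ((xi - x)**2 + a**2))
    val, _ = quad(integrand, -np.inf, np.inf, limit=200)
    return val / (2.0 * np.pi)

# central-difference approximation of dz/dy at a small height, compared with g
x0, y0, h = 0.5, 1e-2, 1e-3
print((z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h), g(x0))   # the two values should be close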

References

Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.

Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.

Suggested Reading

I.P. Stavroulakis, Stephen A. Tersian, (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.

J. David Logan, (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
