Engineering Mathematics 2
Engineering Mathematics 2
Mathematics-II
Prof. P. Panigrahi
Prof. J. Kumar
Prof. P.V.S.N. Murthy
Prof. S. Kumar
-:Content Reviewed by :-
Dr. Tanuja Srivastava (Professor)
Dept. of Mathematics, IIT, Roorkee
Lesson 1
1.1 Introduction
The problem of solving system of linear equations arises in almost all areas of
science and engineering. This is an important part of linear algebra and lies at the
heart of it.
A linear equation on n variables x1, x2, . . . , xn is an equation of the form a1x1 + a2x2
+…+ anxn = b,
where a1, a2, . . . , an and b are real or complex numbers, usually known in advance.
A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables. The following is an example of a
system of linear equations:
x1 - 2x2 + 4x3 = 10
2x1 - 3x3 = -9 (1.1)
The m × n matrix
associated with the system (1.2) is called the co-efficient matrix of the system. The
m × (n + 1) matrix
matrices not only for solving system of linear equations but also for studying other
topics in linear algebra.
Recall that a matrix A of size m × n over a field F (here we take F as the real or
complex field) is denoted by A = (aij)m × n, i = 1, 2, 3, . . . , m, and j = 1, 2 , . . . , n,
and aij are from F. If m = n then A is called a square matrix. In this case the entries
a11, . . . , ann are called the main diagonal or principal diagonal and other entries
are called off-diagonal entries. If aij = 0 for all i and j, then A is called the null
matrix or the zero matrix, and is denoted by 0. An identity matrix, denoted by I, is
a square matrix whose all diagonal entries are equal to 1 and off diagonal entries
are equal to zero.
A square matrix A is called a diagonal matrix if all the off-diagonal entries are
zero. A square matrix A = (aij)n × n is called lower (respectively upper) triangular
matrix if aij = 0 whenever i > j (respectively i < j), that is, all entries above
(respectively below) the main diagonal are zero.
Two matrices of the same size A = (aij)m × n and B = (bij)m × n are said to be equal if
aij = bij for all i, j.
If A = (aij)m × n and b = (bij)m × n are matrices of the same size over F then addition
of A and B denoted by, A + B, is the matrix C = (cij)m × n , where cij = aij + bij.
(1) A + B = B + A (commutative)
(2) (A + B) + C = A + (B + C) (associative)
(3) A + 0 = 0+A =A, where 0 is the zero matrix of the same size as A.
(5) (α + β) A = αA + βA.
(6) α (A + B) = αA + αB.
(7) α (βA) = α β A.
(1) Matrix multiplication need not be commutative, that is, one can find matrices A
and B such that AB is not equal to BA.
(A B) C = A (B C) (associative).
(5) If A is a matrix of size m × n and both B and C are matrices of size n × p then
A (B + C) = AB + AC (left distributive).
(A + B) C = AC + BC (right distributive).
(7) For any square matrix A, AI=IA=A, where I is the identity matrix of the same
size as A.
For matrix A = (aij)m × n , the transpose of A, denoted by AT, is the matrix AT = (aji)n
× m. In other words AT is obtained from A by writing the rows of A as the columns
of AT in order. Some properties of transpose operation are as given below.
(A + B)T = AT + BT.
(AB)T = BTAT.
Here we shall discuss about some of the special type of matrices which will be
used in the subsequent lectures.
matrices are called orthogonal, that is, a real matrix A is orthogonal if AAT = ATA
= I.
For any matrix A, each of the following is called an elementary row (resp.
columns) operation on A:
(2) Addition of scalar multiple of one row (resp. column) to another row (resp.
column).
a11 a12
|A|= a a 22 = a11a22 – a12a21.
21
For n ≥ 3,
m
det A = | A | = ∑ ( −1)
i+j
a ij mij .
j=1
Where i is a fixed integer with 1 ≤ i ≤ n, and mij is the determinant of the matrix
obtained from A by deleting ith row and jth column.
(3) If any two rows (or columns) are interchanged, then the value of the
determinant is multiplied by (− 1).
(4) If each element of a row is multiplied by a scalar α then the value of the
determinant is multiplied by α. Therefore | α A | = αn | A |.
(5) If a non-zero scalar multiple of the elements of some row (or column) is added
to the corresponding elements of some other row (or column), then the value of
the determinant remains unchanged.
(7) If A and B are the matrices of the same order then det (AB) = det (A) det (B).
1.4 Conclusions
Matrices and operations on them will be used in almost all the subsequent lectures.
In the next lecture we shall solve systems of linear equations. A solution of a
system of linear equations on n variables x1, x2, . . . , xn is a list (s1, s2, . . .,sn) of
numbers such that each equation is a true statement when the values s1, s2, . . . , sn
are substituted for x1, x2, . . . , xn respectively. The set of all possible solutions is
called the solution set of the given system. Two systems are called equivalent if
they have the same solution set. That is, every solution of the first system is a
solution of the second system and vice versa. Getting solution set of a system of
two linear equations in two variables is easy because it is just finding the
intersection of two lines. However, solving a large system is not so straight-
forward. For this we represent a system in matrix notation and then we perform
some operations on the associated matrices. From the resultant matrices either we
draw conclusion that the system has no solution or find solutions of the system.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 2
2.1 Introduction
In this lecture we shall discuss about the rank of matrices, the consistency of
systems of linear equations, and finally present the Gauss elimination method
for solving the linear systems. For this we need an important form of matrices
called echelon form which is obtained by applying elementary row (or column)
operations.
(ii) For the non-zero rows of A, as the row number increases, the number of zero
entries at the beginning of the row also increases.
In the echelon form of a matrix some people consider one more condition that
the 1st non-zero entry in a non-zero row is equal to 1. However this condition is
not required for us and therefore not included in the definition of echelon form
of a matrix. One finds row echelon form of a matrix by applying elementary
row operations. By applying elementary column operations, one gets column
echelon form of the matrix.
1 3 5
A = 1 4 3 .
1 1 9
We keep 1st row as it. Then we make 1st entry of the second row zero by
applying elementary row operations. So replacing 2nd row R2 by R2 − R1 one
gets
1 3 5
0 1 −2 .
1 1 9
Then we make at least 1st two entries of the 3rd row of the above matrix zero.
For this we replace R3 by R3 − R1 in the above matrix and get
1 3 5
0 1 −2 .
0 −2 4
Finally by replacing R3 by R3 + 2R2 one gets the echelon form of A and is given
by
1 3 5
0 1 −2 .
0 0 0
The rank of a matrix has several equivalent definitions. Here we take the rank
of a matrix A as the number of non-zero rows in the row echelon form of A. It
is also defined as the number of nonzero columns in the column echelon form of
the matrix. Whatever way the definition may be given the rank of a matrix will
be the same, is a fixed number. Therefore the rank of a matrix A has the
following properties.
(1) Matrix A and its transpose have the same rank, that is, rank(A) = rank(AT).
(2) If A is a matrix of size m × n then rank (A) is at the most min{m, n}.
2 −2 3 4 −1
−1 1 2 5 2
A= .
0 0 −1 −2 3
1 −1 2 3 0
Here we find echelon form of the matrix A. First row will be kept as it is.
Replacing R4 by R4 + R2 and then R2 by 2R2+ R1 the matrix will be
2 −2 3 4 −1
0 0 7 14 3 .
0 0 −1 −2 3
0 0 4 8 2
2 −2 3 4 −1
0 0 7 14 3
.
0 0 0 0 24
0 0 0 0 14
as given below:
2 −2 3 4 −1
0 0 7 14 3
.
0 0 0 0 24
0 0 0 0 0
Now there are three non-zero rows in the echelon form of the given matrix A.
Therefore rank of A is equal to 3.
b1
x1
b
x is the n × 1 matrix x = , b is the m × 1 matrix 2 .
x
n
bm
A system of linear equations has either (i) no solution or (ii) exactly one
solution or (iii) infinitely many solutions. The system is said to be consistent if
it has at least one solution, that is (ii) or (iii) of the above hold, and is
inconsistent if it has no solution.
The following theorem gives conditions for existence of solution of the system
Ax = b.
(iii) The system has infinitely many solutions if rank A = rank ϵA =k < n.
2. Convert the augmented matrix in to row echelon form. Decide whether the
system is consistent or not. If yes then go to the next step, stop otherwise.
2x − 2y + 3z + 4u = − 1
− x + y + 2z + 5u = 3
− z − 2u = 3
x − y + 2z + 3u = 0
2 −2 3 4 −1
−1 1 2 5 2
0 0 −1 −2 3 .
1 −1 2 3 0
Notice that this is the same matrix A appears in Example 3.1. So its row echelon
form will be
2 −2 3 4 −1
0 0 7 14 3
0 0 0 0 24 .
0 0 0 0 0
Observe that the rank of the co-efficient matrix is 2 and that of the augmented
matrix is 3. Therefore according to Theorem 4.1(i) the given system is
inconsistent.
2x + y − 2z = 10
3x + 2y + 2z = 1
5x + 4y + 3z = 4
2 1 −2 10
The augmented matrix is 3 2 2 1 .
5 4
4 3
2 1 −2 10
Row echelon form of this matrix is 0 1 10 −28 .
0 −14 42
0
Notice that 1st three columns is the row echelon form of the co-efficient matrix
and its rank is equal to three which is same as the rank of the augmented matrix.
Therefore the system is consistent and since the number of variables is also
equal to three from Theorem 4.1(ii) the system has a unique solution.
The system corresponding to the echelon form of the augmented matrix is:
2x + y − 2z = 10
y + 10z = − 28
− 14z = 42
x + 2y − 3z = 6
2x − y + 4z = 2
4x + 3y − 2z = 14
1 2 −3 6
Augmented matrix of the system is 2 −1 4 2
4 3 −2 14
1 2 −3 6
and its row echelon form is 0 −5 10 −10 .
0 0 0 0
The rank of the co-efficient matrix and the rank of the augmented matrix are
same and is equal to 2 which is less than the number of variables. Therefore the
system has infinite number of solutions. From the row echelon form of the
augmented matrix the system will be
x + 2y − 3z = 6
− 5y + 10z = − 10
Here z is the free variable. So it can take any real value. Let z = α, α is a real
number. Then from the second equation of the above system y = 2 + 2α and
then from the first equation x = 2 – α. Hence the set of all solutions of the
system is
{(2 – α, 2 + 2α, α) : α ϵ }.
2.6 Conclusions
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd.,
New Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 3
3.1 Introduction
−2 1
1 2
Example 3.2.1: Let A = and B = 1 − 1 .
3 4
2 2
1 0
Notice that AB = BA = . So A and B are inverse of each other.
0 1
Theorem 3.2.1: A square matrix has an inverse if and only if its determinant is
non-zero.
(2) Inverse of inverse of a matrix is the matrix itself, that is, (A-1)-1 = A.
(3) Inverse and transpose operations are interchangeable, that is, (AT)-1 = (A-1)T.
Recall that for a square matrix A = (aij), the minor of any entry aij is the
determinant of the square matrix obtained from A by removing ith row and jth
column. Moreover the cofactor of aij is equal to the minor of aij multiplied by (− 1)i
+ j
. The cofactor matrix associated with an n × n matrix A is an n × n matrix Ac
obtained from A by replacing each entry of A by its cofactor. The adjugate A* of A
is the transpose of the cofactor matrix of A.
A∗ A∗
A = A = I.
det A det A
Thus we have the following formula for inverse of a matrix given in the theorem
below.
A− 1 = A*.
2 0 −1
A = 0 1 2
3 1 1
.
−1 6 −3
Ac = −1 5 −2 .
1 −4 2
−1 −1 1
Hence, A −1
= 6 5 −4 .
−3 −2 2
Existence of inverse of a matrix can be linked with rank of A through the result
below given in the theorem.
Step 4: Again apply elementary row operations to (U B) till first n columns form
the identity matrix. If the resultant matrix is (I K) then K is the inverse of
matrix A.
2 0 −1 1 0 0
inverse of A exists. The augmented matrix is 5 1 0 0 1 0 .
0 1 3 0 0 1
−1 1
−1 1 1 0 0 0
1 0 2 2
0 0
2 2
R 2 →R 2 - 5R1 −5
1 0
5
5 1 0 0 1 0 → 0 1
0 1 2 2
3 0 0 1
0 1 3 0 0 1
−1 1
1 0 2 2
0 0 1 0 0 3 −1 1
−5 5 −5
→ 0 1 1 0
R 3 →R 3 - R 2 5 R1 → R1 + R 3
1 0 → 0 1
2 2 2 2
0 0 1 5 1 5
−1 1 0 0 −1 1
2 2 2 2
1 0 0 3 −1 1
→ 0 1 0 −15 6 −5
0 0 1 5 −2 2
The last matrix is of the form (I K). Therefore the inverse of A is given by
3 −1 1
A− 1 = −15 6 −5 .
5 −2 2
3.5 Conclusions
Several other methods are also there to find inverse of a matrix and for particular
type of matrices like upper or lower triangular matrices one can derive an easier
formula for the inverse. Applying inverse of a matrix one can find solution of the
system Ax =b if A is a square matrix of size n and rank of A is n. In this case x=A-
1
b is the solution.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 4
4.1 Introduction
In this lecture we discuss about the basic algebraic structure involved in linear
algebra. This structure is known as the vector space. A vector space is a non-empty
set that satisfies some conditions with respect to addition and scalar multiplication.
Recall that by a scalar we mean a real or a complex number. The set of all real
numbers is called the real field and the set of all complex numbers ₵ is called
the complex field. Here onwards by a field F we mean the set of real or the set of
complex numbers. The elements of vector spaces are usually known as vectors and
that in field F are called scalars. In this lecture we also discuss about linearly
dependency or independency of vectors.
A non-empty set V together with two operations called addition (denoted by +) and
scalar multiplication (denoted by.), in short (V, +, .), is a vector space over a field
F if the following hold:
(1) V is closed under scalar multiplication, i.e. for every element α ϵ F and u ϵ V,
α.u ϵ V. (In place of α.u usually we write simply αu).
(2) (V, + ) is a commutative group, that is, (i) forevery pair of elements u, v ϵ V,
u+v ϵ V (ii) elements of V are associative and commutative with respect to +
(iii) V has the zero element, denoted by 0, with respect to +, i.e, u+0 =0+u=0,
for every element u of V and finally (iv) every element u of V has additive
inverse, i.e, there exists v ϵ V such that u+v = v+u = 0.
If V is vector space over F then elements of V are called vectors and elements of F
are called scalars.
For vectors v1, v2, . . . , vn in V and scalars α1, α2, . . . , αn in F the expression
α1v1, α2v2, . . . , αnvn is called a linear combination of v1, v2, . . . , vn. Notice that V
contains all finite linear combinations of its elements hence it is also called a linear
space.
(1) ₵ is a vector space over . But is not a vector space over ₵ as it is not
closed under scalar multiplication.
(2) If F= or Fn then
₵ = {( x1, x2, . . . , xn) : xi ϵ F, 1 ≤ i ≤ n} is a vector space
over F where addition and scalar multiplication are as defined below:
(3) The Space of m × n Matrices: Here Fm × n is the set of all m × n matrices over
F. Fm × n is a vector space over F with respect to matrix addition and matrix
scalar multiplication.
(4) The space of polynomials over F: Let (F) be the set of all polynomials over
F, i.e.,
P(F) is a vector space over F with respect to addition and scalar multiplication
of polynomials, that is,
The following results can be verified easily (proof of which can be taken as
exercise).
(a) α.0 = 0, for α ϵ F, here 0 is the additive identity of V or the zero vector.
(b) 0.u = 0, for u ϵ V, here 0 in the left hand side is the scalar zero i.e. additive
identity of F and 0 in right hand side is the zero vector in V.
4.3 Subspaces
The above two conditions of a subspace can be combined and expressed in a single
statement that: W is a subspace of V if and only if for u, v ϵ W and scalars α, β ϵ
F, αu + βv ϵ W.
(1) The zero vector of the vector space V alone i.e. {0} and the vector space V
itself are subspaces of V. These subspaces are called trivial subspaces of V.
2 2
(2) Let V = , the Euclidean plane, and W be the straight line in passing
through (0, 0) and (a, b), i.e. W = {(x, y) ϵ 2
: ax + by = 0}. Then W is a
2
subspace of . Whereas the straight lines which do not pass through the origin
2
are not subspaces of .
× n
(4) The set of all n × n Hermitian matrices is not a subspace of ₵n (the
collection of all n × n complex matrices), because if A is a Hermitian matrix
then diagonal entries of A are real and so iA is not a Hermitian matrix
(However the set of all n × n Hermitian matrices forms a vector space over .
Let V be a vector space over F and S be a subset of V. The liner span of S, denoted
by (S), is the collection of all possible finite linear combinations of elements in S.
Then (S) satisfies the following properties given in the theorem below.
2
Example 4.4.1: In if S = {(2, 3)} then (S) is the straight line passing through
2
(0, 0) and (2, 3) i.e. (S) = 2x + 3y = 0. If S = {(1, 0), (0, 1)} then (S) = .
A vector space can be expressed in terms of very few elements of it, provided that ,
these elements spans the space and satisfy a condition called linearly
independency. Short-cut representation of a vector space is essential in many
subjects like Information and Coding Theory.
Consider a vector space V over a field F and a set S={ v1, v2, . . . , vk } of vectors in
V. S is said to be linearly dependent if exist scalars α1, α2, . . . , αk (in F), not
all zero such that
α1v1 + α2v2 + . . . + αkvk = 0.
Step 1: Equate the linear combination of these vectors to the zero vector, that is,
α1v1 + α2v2 + . . . + αkvk = 0, where αi’s are scalars that we have to find.
Step 2: Solve for scalars α1, α2, . . . , αk. If all are equal to zero then S is a linearly
independent set, otherwise (i.e. at least one αi is non-zero) the S is linearly
dependent.
(3) Any set which contains the zero vector is linearly dependent.
3
Example 4.5.1: Let V = be the vector space (over ) and S1 = {(1, 2, 3), (1, 0,
2), (2, 1, 5)} and S2 = {(2, 0, 6), (1, 2, − 4), (3, 2, 2)} be subsets of V. We check
linearly dependency/independency of S1 and S2.
First consider the set S1. Let α1, α2, α3 be scalars such that
α1(1, 2, 3) + α2(1, 0, 2) + α3(2, 1, 5) = (0, 0, 0)
Then we have
(α1 + α2 + 2α3, 2α1 + α3, 3α1 + 2α2 + 5α3) = (0, 0, 0)
1 2 1 −2 1 2 1 −2
R 2 →R 2 − 2R1
2 1 3 −1 → 0 −3 1 3
2 0 1 4 2 0 1 4
1 2 1 −2 1 2 1 −2
R 3 → R 3 − 2R1 R 3 →3R 3 − 2R1
→ 0 −3 1 3 → 0 −3 1 3
0 −2 −1 8 0 0 −5 18
The last matrix is in echelon form and all the rows are non-zero. Hence S is
linearly independent.
Next we consider
While forming the matrix we may not have to take 1st vector in S1 as 1st row, 2nd
vector as 2nd row and so on. Since we have to convert the matrix into echelon form
we may take 1st row of the matrix a vector in S for which the 1st entry is non-zero.
So let the matrix be
1 2 0 3
0 1 1 1
.
0 1 2 −1
1 3 2 2
4.6 Conclusions
Vector spaces are the main ingredients of the subject linear algebra. Here we have
studied an important property of the vectors that is linearly
dependency/independency. This property will be used in almost all the lectures. In
the next lecture also we discuss about some basic terminologies associated with a
vector space.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 5
5.1 Introduction
In the previous lecture we have already said that vector spaces can be represented
in a short-cut form in terms of few linearly independent vectors. The set of these
few vectors have a name called basis. The number of elements in a basis is fixed
and this number is called the dimension of the vector space. In this lecture we
shall discuss on these two important terms basis and dimension of a vector space.
We shall also give an another definition of the rank of a matrix in terms of linearly
independent rows/columns and finally present the rank-nullity theorem.
(ii) S spans V i.e., (S) = V (or in other words every element of V can be
written as a finite linear combination of vectors in S).
If V contains a finite basis then V is called a finite dimensional vector space and
dimension of V is the number of elements in . If V is not finite dimensional then
it is infinite dimensional vector space. Dimension of a vector space is well defined
because of the theorem below.
Theorem 5.2.1: If a vector space V has a basis with k number of vectors then
every basis of V contains k vectors (in other words all bases of a vector space are
of the same cardinality).
Next we shall see some examples of vector spaces with their bases and dimensions.
Example 5.2.1:
3
(1) {(2,0,6), (1,2,-4), (3,2,2)} is not a basis for as it is not linearly independent
because (2,0,6)+(1,2,-4)=(3,2,2).
(4) The collection of all polynomials over F, P(F) is an infinite dimensional vector
space over F because S={1,x,x2,x3,…..} is a linearly independent set and spans
P(F) but no finite subset of S spans P(F). However Pn(F) , the set of all
polynomials of degree ≤ n, is a finite dimensional vector space with
{1,x,x2,x3,…,xn} as a basis. Hence dimension of P(F) is equal to n + 1.
2×2
(5) The set of all 2 × 2 real matrices is a finite dimensional vector space over
1 0 0 1 0 0 0 0
with , , , as a basis. So dim 2×2
= 4.
0 0 0 0 1 0 0 1
Next we shall list some of the well known properties of an n-dimensional vector
space.
Theorem 5.2.2: The following results are true in an n-dimensional vector space V:
In the following example we shall use some of the results of Theorem 2.2 to check
for a basis.
Example 5.2.3: Here we show that S = {(1, 0, − 1), (1, 1, 1), (1, 2, 4)} is a basis
3 3
for in two different ways. Here we shall use the fact that dimension of is
3.
Method 1: We will show that S is a linearly independent set. We get the echelon
form of the matrix formed by the vectors in S. The matrix and its row reduced
matrices are as follow:
1 0 −1 1 0 −1
R 2 →R 2 -R1
1 1 1 → 0 1 2
1 2 4 1 2 4
1 0 −1 1 0 −1
R 3 → R 3 -R1 R 3 → R 3 -2R 2
→ 0 1 2 → 0 1 2
0 2 5 0 0 1
The last matrix is in echelon form and no zero row is there in it. So S is a linearly
3
independent set of 3 vectors and since dimension of is 3, by Theorem 2.2(iv) S
3
is a basis of .
3
Method 2: Next by applying Theorem 2.2(iii) we show that S is a basis of .
3
Here we show that every vector in can be expressed as a linear combination of
vectors in S. Let (x1, x2, x3) ϵ 3
be an arbitrary vector and α, β, γ ϵ such that
= (α + β + γ, β + 2γ, − α + β + 4γ)
3
Thus for every vector in we have found scalars to express the vector as a linear
3
combination of vectors in S. Hence S forms a basis for .
In the next example we shall find basis and dimension of a subspace generated by a
set of vectors.
5
Example 5.2.4: We consider the subspace W of generated by the vectors u =
(1, 3, 1, − 2, − 3), v = (1, 4, 3, − 1, − 4), w = (2, 3, − 4, − 7, − 3), x = (3, 8, 1, − 7,
− 8).
1 3 1 −2 −3
1 4 3 −1 −4
2 3 −4 −7 −3 .
3 8 1 −7 −8
1 3 1 −2 −3
0 1 2 1 −1
0 −3 −6 −3 3 .
0 −1 −2 −1 1
1 3 1 −2 −3
0 1 2 1 −1
0 0 0 0 0
0 0 0 0 0
which is in echelon form.
In the echelon form there are two non-zero rows only. Therefore dimension of W is
equal to two and these non-zero rows form a basis for W. So {(1, 3, 1, − 2, − 3),
(0, 1, 2, 1, − 1)} is a basis for W.
For any matrix A its nullity may be defined as below. Recall that a homogeneous
system of m linear equations on n variables is of the form AX = 0, where A is a m
× n matrix and X is the n × 1 matrix (x1, x2, . . . , xn). Homogeneous systems are
always consistent because (0, 0, . . . , 0) is always a solution of it. Also this is true
because of the fact that the co-efficient and augment matrices of this system have
the same rank.
Let S be the collection of all solutions of AX = 0. One can easily check that S is a
n
subspace of and this subspace is called the solution space of the system. The
dimension of the solution space of the system AX = 0 is called the nullity of A.
Now we are ready to state the famous rank-nullity theorem for matrices.
1 2 −1
2 5 2
=1 4 7 .
1 3 3
1 2 −1
0 1 4
We convert A into row echelon form and is given by 0 0 0 .
0 0 0
From this we get that rank of A is equal to 2 since there are two non-zero rows in
the row echelon form of A.
x1 + 2x2 − x3 = 0
x1 + 4x2 + 7x3 = 0
x1 + 3x2 + 3x3 = 0
From the echelon form of the matrix A, the above system is equivalent to
x1 + 2x2 − x3 = 0
x2 + 4x3 = 0
A basis for S is {(9, − 4, 1)} because this vector generates S, that is, all other
vectors in S are scalar multiple of the vector (9, − 4, 1). Therefore nullity of
A = dim S = 1. Now rank of A + nullity of A = 3 which verifies the rank-nullity
theorem.
5.4 Conclusions
In this lecture we have learned that if we know a basis for a vector space then the
whole vector space can be generated by taking all possible finite linear
combinations of the basis vectors. Because of this wonderful structure, vector
spaces are widely used in coding and decoding of messages in Information and
Coding theory. We shall find application of the rank-nullity theorem in some of the
subsequent lectures.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 6
6.1 Introduction
The concept of eigenvalues and eigenvectors of matrices is very basic and having
wide application in science and engineering. Eigenvalues are useful in studying
differential equations and continuous dynamical systems. They provide critical
information in engineering design and also naturally arise in fields such as physics
and chemistry.
5 4 4 24 4
Ax = = = 6 1 = 6x.
1 2 1 6
2
Similarly y = 1
is also an eigenvector of A corresponding to the eigenvalue 6.
2
Ax = λx or (A − λI) x = 0 (6.1)
Example 6.2.2: Find all eigenvalues and their corresponding eigenvectors of the
matrix
5 4 2
A = 4 5 2 .
2 2
2
5−λ 4 2
det (A - λI) = 4 5−λ 2 .
2 2 2−λ
= − ( λ − 10 )( λ − 1 )
2
.
−5 4 2 x1
4 −5 2 x2 = 0
or
2 −8 x 3
2
x1
where x = x 2 .
x
3
−5 4 2
Echelon form of the co-efficient matrix is 0 −9 18 .
0 0
0
– 9x2 + 18x3 = 0.
4 4 2 x1
or 4 4 2 x 2 = 0 .
2 2 1 x
3
4 4 2
Echelon form of the co-efficient matrix is 0 0 0 . So the system will be
0 0 0
4x1 + 4x2 + 2x3 = 0.
or 2x1 + 2x2 + 2x3 = 0.
(1) The sum of the eigenvalues of a matrix A is equal to the sum of all diagonal
entries of A (called trace of A). This property provides a procedure for
checking eigenvalues.
(3) The eigenvalues of an upper (or lower) triangular matrix are the elements on
the main diagonal.
A− 1 x = x.
(8) Every eigenvalue of A is also an eigenvalue of AT. One verifies this from the
fact that determinant of a matrix is same as the determinant of this transpose
and
(9) The product of all the eigenvalues (with counting multiplicity) of a matrix
equals the determinant of the matrix.
6.3 Conclusions
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 7
7.1 Introduction
The Cayley Hamilton theorem is one of the most powerful results in linear algebra.
This theorem basically gives a relation between a square matrix and its
characteristic polynomial. One important application of this theorem is to find
inverse and higher powers of matrices.
Theorem 7.2.1: Every square matrix satisfies its own characteristic equation.
where 0n × n is the zero matrix of size n, and for any positive integer i, Ai is the
product A × A . . . × A of i number of A.
1 2
Example7.2.1: Let A = . Characteristic equation is λ – 4λ – 5 = 0. One
2
4 3
9 8 4 8
can check that A2 = , 4A = . So
16 17 16 12
9 8 4 8 5 0
A2 – 4A – 5I = – – .
16 17 16 12 0 5
9−4−5 8 −8 − 0 0 0
= = .
16 − 16 − 0 17 − 12 − 5 0 0
The Cayley-Hamilton theorem can be used to find inverse as well as higher powers
of a matrix.
1
or A{ − ( a1I + a2A + . . . + An − 1)} = I.
a0
1
Therefore A− 1 = − ( a1I + a2A + . . . + An − 1) which is a formula for inverse of A.
a0
2 −1 −1
= 3 −2 −1 .
0 0 1
Theorem 7.4.1: (Division Algorithm) For any polynomials f(x) and g(x) over a
field F there exist polynomials q(x) and r(x) such that f(x) = q(x) g(x) + r(x) where
r(x) = 0 or deg r(x) < deg g(x).
Here we shall discuss about a method that finds value of higher degree polynomial
on a square matrix A and in particular the value of higher power of A. The method
as follows:
Step 4: In this case we assume that A has distinct eigenvalues λ1, λ2, . . . , λn. From
Cayley-Hamilton theorem we have f(A) = r(A). Therefore
f(λn) = r(λn) = a0 + a1λn + a2λn2 + . . . + an − 1λnn − 1
Solving this system one finds the values a0 , a1 , . . . , an-1, since f(λi) and λi, 1 ≤ i ≤
n, are known.
Step 5: In this step we consider the case that A has multiple eigenvalues. If λi is an
eigenvalue of A of multiplicity k then we differentiate the equation f(λi) = r(λi) k –
1 times, and get k equations:
f(λi) = r(λi).
df ( λ ) dr ( λ )
= .
=
dλ λ λ=
dλ λ λi
i
d ( k-1) f ( λ ) d ( k-1) r ( λ )
= .
dλ dλ
=λ λ=
i λ λi
This is how one gets a system of n equations using all the eigenvalues of A and
from this system the values of a0, a1 , . . . , an can be determined.
2 −1
Example 7.4.1: Here we shall find the value of f(A) = A78, for A =
5
,
2
applying Cayley-Hamilton theorem. Characteristic polynomial of A is
det (A − λI) = λ2 − 7λ + 12. Eigenvalues are 3 and 4. Since characteristic
polynomial of A is of degree 2 the remainder will be of degree at the most one.
Therefore
378 = a0 I + 3a1
478 = a0 I + 4a1
On solving we get a1 = − 378 + 478 and a0 = 4 × 378 – 3 × 478. Putting this value in
(7.1),
1 0 1
Example 7.4.2: For the matrix A = 0 1 0 , we find the value of f(A) = A10 –
0 0 2
5A6 + 2A3.
df ( λ ) dr ( λ )
f(1) = r(1) and = . That is,
=
d λ λ 1=
d λ λ 1
1 0 3
A2 = 0 1 0 .
0 0 4
1 0 0 1 0 1 1 0 3
Now f(A) = 748 0 1 0 + (− 1486) 0 1 0 + 736 0 1 0 =
0 0 1 0 0 2 0 0 4
−2 0 722
0 −2 0 .
0 0 1720
7.5. Conclusions
In this lecture we have seen that how powerful the Cayley-Hamilton theorem and
the concept of eigenvalues are? In the next lecture also we shall use the theory of
eigenvalues for diagonalization of matrices.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 8
Diagonalization of Matrices
8.1 Introduction
Diagonalizable matrices are defined through similar matrices. Two square matrices
A and B are said to be similar if there exists an invertible matrix P such that A = P−
1
B P or equivalently PA = BP.
3 5 2 4
Example 8.2.1: (i) Matrices A = and B = are similar because PA
3 1 4 2
4 0
= PB, where P = . Note that P is invertible as det P = 20 ≠ 0. However
1 5
2 0 2 1
matrices R = and S = are not similar because otherwise the
0 2 0 2
a b
matrix P1 satisfying P1R = SP1 will be of the form and is a non-invertible
0 0
(or singular) matrix.
Theorem 8.2.1: Similar matrices have the same characteristic equation (and hence
the same eigenvalues).
The above theorem also gives a criteria for checking that the given matrices are
similar or not.
1 2 4 1
Example 8.2.1: Matrices A = and B = are not similar because
4 3 3 2
their characteristic polynomials are λ2 − 4λ – 5 and λ2 − 6λ + 5 respectively.
−3 1 −1
Example 8.3.1: Consider the matrix A = −7 5 −1 . Characteristic polynomial
−6 6 −2
of A is det (A − λI) = (λ + 2)2 (λ − 4). So − 2 is an eigenvalue of multiplicity two
and therefore algebraic multiplicity of the eigenvalue − 2 is equal to 2. One can
check that rank of (A + 2I) is equal to two hence its nullity is equal to one. So
geometric multiplicity of the eigenvalue − 2 is equal to 1. The following theorem
gives a relation between these two multiplicities.
Theorem 8.3.1: The algebraic multiplicity of an eigenvalue is not less than its
geometric multiplicity.
Theorem 8.4.1: Let An×n be an square matrix with eigenvalues λ1, λ2, . . . , λk. Let
γ1, γ2, . . . , γk be the geometric multiplicity of λ1, λ2, . . . , λk respectively. Then A is
diagonalizable if and only if γ1 + γ2 + . . . + γk = n.
(6) P− 1 A P = diag(λ1, λ1,… ,λ1, λ2, λ2,. . . λ2 , ….λk ,λk , … λk ) is the diagonal
matrix similar to A.
5 4 2
Example 8.5.1: Consider the matrix A = 4 5 2 .
2 2 2
10 0 0
−1
One checks that P A P = 0 1 0 and is similar A.
0 0 1
Not all matrices are diagonalizable and we will see such an example below.
Example 8.5.2: As we have seen in Example 8.3.1 that for the matrix A =
−3 1 −1
−7 5 −1 the eigenvalues are λ1 = − 2, λ2 = 4, λ1 is of multiplicity 2. Also the
−6 6 −2
algebraic multiplicity of λ1 is 2 and the geometric multiplicity of it is 1. Therefore
A is not a diagonalizable matrix.
Theorem 8.6.1: The following are true for a diagonal or a diagonalizable matrix
D:
a 0
(I) If D = 0 b is a diagonal matrix the kth power of D is equal to
n x n
ak 0
.
0 bk n x n
0 1
Example 8.6.1: Here, We compute A30 for A = . This matrix is
−2 3
1 1 1 0
diagonalizable as A = M D M , where M =
−1
and D = . Thus by
1 2 0 2
1 1 1 0 2 −1
Theorem 8.6.1(i) and (ii), A30 = M D30 M− 1 = 30 .
1 2 0 2 −1 1
2 − 230 230 − 1
= .
2−2 231 − 1
31
Example 8.6.2: If P(x) = x17 – 3x5+2x2+1 then we find the value of P(A) = A17 –
3A5 + 2A2 + I, for the same matrix A in Example 8.6.2. By Theorem 8.6.1 (iii) ,
and Example 8.61, P(A)=A17 – 3A5 + 2A2 + I
2 − 217 217 − 1 2 − 25 25 − 1 2 − 2 2 22 − 1 1 0
= − 3 + 2 + .
2−2 218 − 1 2 − 26 2 6 − 1 2 − 23 23 − 1 0 1
18
8.7 Conclusions
Here we have seen that finding higher powers of a diagonalizable matrix or value
of any polynomial on a diagonalizable matrix can be computed easily.
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 9
9.1 Introduction
Let V and W be vector space over the same field F. A mapping T: V → W is called
a linear transformation if
3 2
Example 9.2.1: Let T1, T2 be mappings from to defined as:
T1 (x1, x2, x3) = (x1 + x2, x3) and T2 (x1, x2, x3) = (x1x2, x3).
≠ (x1 x2, x3) + (y1 y2, y3) = T (x1, x2, x3) + T (y1, y2, y3).
An important result for finite dimensional vector spaces is given in the theorem
below.
Theorem 9.2.2: Two finite dimensional vector spaces over the same field are
isomorphic if and only if they have the same dimension.
Next we shall define the null space and range space of a linear transformation. Let
T: V → W be a linear transformation. The kernel of T, Ker T, is the set Ker T = {v
ϵ V: T(v) = 0}.The set T(V) = {T(v) : v ϵ V} is called the range of T, denoted by
rang(T). It is an well-known result that Ker T = {0} if and only if T is an
isomorphism One can verify easily that Ker T is a subspace of V, called the null
space of T, and Rang(T) is also a subspace of W, called the range space of T. If V
and We are finite dimensional vector spaces then dimension of Ker T is called the
nullity of T and the dimension of rang(T) is called the rank of T. One should not
get confuse with these terminologies because very shortly we are going show that
linear transformations can be represented as matrices and the vice versa.
2
T((x1, x2, x3)) = (x1, x2, 0). Then Ker T is the z-axis and rang (T) = .
Like matrices one can also have the rank-nullity theorem for linear
transformations.
Every linear transformation can be represented as a matrix and every matrix can
produce a linear transformation. So people sometime treat matrices as linear
transformations and vice versa. Here we shall discuss about the method to get a
linear transformation from a matrix.
Let V and W be finite dimensional vector spaces over F with dim V = n and dim W
= n, and Am × n = (aij)m × n be a matrix over F (same field). From Corollary 9.2.1
every vector in V can be expressed as an n-tuple of elements in F, in other words,
x1
we can take V = Fn × 1, i.e. V consists of n × 1 matrices (or column vectors , xi
x
n
x1
ϵ F). Similarly elements of W can be taken as column vectors , xi ϵ F, i.e.
x
m
x1 x1
W = Fm × 1. Then the mapping T: V → W defined as T = Am × n
x x
n n n x1
n
∑ a1j x j
j=1
= is a linear transformation because
n
a x
∑ mj j
j=1
n
∑ a1 j ( x j + y j )
x1 y1 j=1
A + =
=
x y n
n n
∑ a mj ( x j + y j )
j=1
n n
∑ a 1 j x j ∑ a 1 j y j
x1 y1
j=1
j=1
+ =A + A
n n x y
∑ a mj x j ∑ a mj y j n n
j=1
j=1
and
α x1 x1
A = α A
.
α x x
n n
1 3 −2
Example 9.3.1: Let A = 0
1 2×3 be a matrix over . The mapping T:
4
3
→ 2
given by:
x1
1 3 −2
T ( x1 , x 2 , x 3 ) =
T
x2
0 4 1 .
x3
x1 + 3x 2 − 2x 3
4x 2 + x 3
= is a linear transformation.
T (vn) = an1w1 + an2w2 + . . . + anmwm , where aij ϵ F.
a11 a 21 a n1
a12 a 22 an2
A= .
a1m a 2m a nm m×n
Note that if we consider different bases in V and W then we may get different
matrix representations of T (of course these matrices are all similar). In the above
if we represent T (vi) = (ai1, ai2, . . . , ain)T. then the matrix corresponding to T can
be written as:
3 2
Take bases B = {(1, 1, 0), (0, 1, 4), (1, 2, 3)} and B1 = {(1, 0), (0, 2)} in and
respectively.
2 1 3
So the matrix representation of T is the matrix .
0 4 3
is given by = ∑x
i=1
i yi . Since ≥ 0, positive square root of
all k = 1, 2, . . . , n.
A linear transformation T: n
→ n
is called an orthogonal transformation if
n
= for every vectors u and v in . So an orthogonal
transformation not only preserves the addition and scalar multiplication, it also
preserves the length of every vector.
An orthogonal transformation is also called an isometry because of the following
result.
2x-y x+2y
Example 9.5.1: The mapping T: 2
→ 2
defined as T(x, y) = , is
5 5
an orthogonal transformation. One can check that T preserves addition and scalar
multiplication and hence is a linear transformation. Next we show that
2
|| T(x, y) || = || (x, y) ||, for all vectors (x, y) in .
{( 2x-y ) }
1
1
+ ( x+2y )
2 2 2
|| T(x, y) || = .
5
1
=
1
5
{5x + 5y } =
2 2 2
x 2 + y2 = || ( x, y ) || .
In the following theorem we show that the matrix associated with an orthogonal
transformation is also orthogonal.
standard basis in n
is orthonormal. So T(ei ),T(e j ) is equal to 1 if i = j and is
9.6 Conclusions
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
Lesson 10
Quadratic Forms
10.1 Introduction
The study of quadratic forms began with the pioneering work of Witt. Quadratic
forms are basically homogeneous polynomials of degree 2. They have wide
application in science and engineering.
Let A = (aij) be a real square matrix of size n and x be a column vector x = (x1, x2,
. . . , xn)T. A quadratic form on n variables is an expression Q = xT A x.
In other words,
a11 a1n x1
Q = xT A x = ( x 1 , x 2 , . . . , x n ) .
a
n1 a nn x n
= a11x12 + a12x1x2 + . . . + a1nx1xn + a21x2x1 + a22x22 + . . . + a2nx2xn + . . . + an1xnx1 +
n n
an2xnx2 + . . . + annxn2 = ∑∑ a
=j 1 =i 1
ij xi x j .
The matrix A is called the matrix of the quadratic form Q. This matrix A need not
be symmetric. However, in the following theorem we show that every quadratic
form corresponds to a unique symmetric matrix. Hence there is one to one
correspondence between symmetric matrices of size n and quadratic forms on n
variables.
Theorem 10.2.1: For every quadratic form Q there is a unique symmetric matrix B
such that Q = xT B x .
1 2 3
Example 10.2.1: For A = 4 5 6 , the quadratic form associated with A is
7 8 9
1 2 3 x1
Q = ( x1 x2 x3 ) 4 5 6 x 2 .
7 8 9 x
3
5 3 5
This quadratic form is equal to the quadratic form xT B x where B = 3 5 7
5 7 9
which is a symmetric matrix.
form. The theorem below says that every quadratic form has a canonical
representation.
The above theorem says that, for x = (x1, x2, . . . , xn)T and y = (y1, y2, . . . , yn)T,
variables x1, x2, . . . , xn in xT A x can be changed to y1, y2, . . . , yn through Px = y,
P is a non-singular matrix, so that xT A x = yT D y, where D is a diagonal matrix.
Example 10.2.2: We reduce the quadratic forms (a) 4x12 + x22 + 9x32 – 4x1x2 +
12x1x3 and (b) x1x2 + x2x3 + x3x1 to diagonal forms.
=4 + +
We change the variables as: x1 = y1, x2 = y2, and x3 = y2 + y3. Then the above
expression (2x1 + 3x3 – x2)2 + 6x2x3
Finally changing the variables as 2y1 + 2y2 + 3y3 = z1, 2y2 + y3 = z2 and y3 = z3, we
get the above quadratic form is z12 + which is in diagonal form. Here
For the (b) part the quadratic form is x1x2 + x2x3 + x3x1. Here no square term is
there and since the 1st non-zero term is x1x2, we change the variables to x1 = y1,
x2 = y1 + y2 and x3 = y3. So this form is y1 (y1 + y2) + (y1 + y2) y3 + y1y3
= + y2y3
= − + y2y3.
= − + y2y3.
= - y22y32.
Finally replacing 2y1 + 2y2 + 3y3 = z1, and y2 = z2 and y3 = z3 the above form will
reduce to . Here also the transformation Px = z is non-singular
1 1 2
as the matrix P is 1 −1 0 , which is non-singular.
0 0 1
Quadratic forms are classified into several categories according to their range.
These are given below.
Since there is one to one correspondence between real symmetric matrices and
quadratic forms similar kind of classification is also there for the symmetric
matrices. A real symmetric matrix A belongs to a class if the corresponding
quadratic form xT A x belong to the same class.
Example 10.3.1: The form Q1 = − x12 – 2x22 is a negative definite form where as:
Q2 = − x12 + 2x1 x + x22 is a negative semi-definite because Q2 = − (x1 – x2)2 which
is always negative and also takes value zero for x1 = x2 ≠ 0. The form Q3 = 2x12 +
3x22 is positive definite where as: Q4 = x12 − 2x1x2 + x22 is positive semi-definite.
Finally Q5 = x12 − x22 is an indefinite form.
To define rank and signature of a quadratic form we use its diagonal representation
as given below.
For a real symmetric matrix A let P(A) and N(A) be the numbers of positive and
negative diagonal entries in any diagonal form to which xTA x is reduce through a
non-singular transformation. The number P(A) – N(A) is called the signature of
the quadratic form xT A x. However rank of the matrix A is called the rank of the
form xT A x.
The quadratic form in example 10.2.2(a) has signature equal to 1 where as that in
example 10.2.2(b) has signature − 1.
The classification of quadratic forms can also be done according to their rank and
signatures as given in the theorem below.
Theorem 10.4.2: Two quadratic forms on the same number of variables can be
obtained from each other through a non-singular transformation if and only if they
have the same rank and signature.
The complex analogue of real quadratic form is known as Hermitian form. Here all
vectors as well as matrices are taken as complex.
For a vector x in ₵
n
and a hermitian matrix A, the expression T
A x is called a
Hermitian form where is complex conjugate of x. Notice that if x and A are real
then Hermitian form will be a quadratic form only.
Although the vector x and the matrix A are complex, the Hermitian form always
takes real value that can be seen in the theorem below.
= = = xT .
2 3+i
Example 10.5.1: Consider a Hermitian matrix A = . The Hermitian
3− i 1
form associated with this is
3 + i x1
( )
2
H= = x , x
3− i 1 x2 .
1 2
= 2x1 + (3 + i) x2 + (3 − i) x1 + x2 .
= 2| x1 |2 + 2 Re + | x2 |2.
10.6 Conclusions
Suggested Readings:
Linear Algebra, Kenneth Hoffman and Ray Kunze, PHI Learning pvt. Ltd., New
Delhi, 2009.
Linear Algebra and Its Applications, Fourth Edition, Gilbert Strang, Thomson
Books/Cole, 2006.
2 0 2
(a) 5 1 0
0 6 3
2 0 −1
(b) 0 1 1
−1 1 3
0 5 −1
(c) −5 0 −1
1 1 0
1 −1 i
(d) −1 0 1 − i
−i 1+i 2
i −i 3 + i
(e) −i i 0
−3 + i 0 3
x + 2y − 3z = 1
3x − y + 2z = 5
5x + 3y − 4z = 2
3. Solve
x + 2y − 3z + 2w = 2
2x + 5y − 8z + 6w = 5
3x + 4y − 5z + 2w = 4
(b) Let V be the set of all nonzero real numbers with addition defined as x + y = xy and scalar
multiplication defined as αx = x.
7. Prove or disprove:
8. If x, y, z are linearly independent vectors then whether x+y, y +z, z +x are linearly independent?
9. For what values of k, do the vectors in the set {(0, 1, k), (k, 1, 0), (1, k, 1)} form a basis for R3 ?
10. Check whether the following set of vectors are linearly dependent or independent.
(b) S = {(1, 3, −2, 5, 4), (1, 4, 1, 3, 5), (1, 4, 2, 4, 3), (2, 7, −3, 6, 13)}
(c) {(1, 2, 3), (1, 0, −1), (3, −1, 0), (2, 1, −2)}
12. Let W be a subspace of R5 generated by the vectors in 10(b). Find dimension and a basis for it.
13. Applying
Gauss Jordan elimination method find inverse of the matrix
2 0 −1
A = 5 1 0
0 1 3
x + 2y − z = 0
2x + 5y + 2z = 0
x + 4y + 7z = 0
x + 3y + 3z = 0
2 1 0
17. Consider the matrix A = 0 1 −1
0 2 4
For this find all eigenvalues and a basis for each eigenspace. Is A diagonalizable?
19. Find the symmetric matrix of the quadratic form 2x21 + 2x1 x2 − 6x2 x3 − x22
20. Check whether the matrices below are positive definite or positive semi-definite?
10 2 0
(i) 2 4 6 .
0 6 10
8 2 −2
(ii) 2 8 −2.
−2 −2 11
3 10 −2
(iii) 10 6 8 .
−2 8 12
2. Not consistent. Check that rank of the co-efficient matrix is 2 where as that of the augmented
matrix is 3.
3. Check that the system is consistent, where the rank of both the co-efficient matrix and augmented
matrix is 2. In echelon form the system is
x + 2y − 3z + 2w = 2
y − 2z + 2w = 1
Taking z and w as free variables, i.e., z = α, w = β, we get the set of all solutions is {(−α + 2β, 1 +
2α − 2β, α, β) : α, β ∈ R}.
4. Making elementary
row operations
R2 → −2R1 + R2 , R3 → −3R1 + R3 , R3 → −5R2 + 4R3 , get
1 2 −3 0
an echelon form 0 0 4 2
0 0 0 2
6. (a) yes, (b) neither closed under addition nor under scalar multiplication, (c) not closed under
scalar multiplication, (d) yes, (e) yes, (f) not closed under addition.
(b)
Echelon form ofthecorresponding matrix
1 3 −2 5 4 1 3 −2 5 4
1 4 1 3 5 0 −1 −3 2 −1
. So the given set of vectors are linearly dependent.
1 4 2 4 3 is 0 0 1 1 −2
2 7 −3 6 3 0 0 0 0 0
12. First 3 rows of the echelon form in 10(b) forms a basis for W . Therefore dim W = 3.
2 0 −1 1 0 0
13. Consider (A|I) = 5 1 0 0 1 0 .
0 1 3 0 0 1
Apply each of these elementary row operations in the updated matrix R1 → 12 R1 , R2 → R2 − 5R1 ,
R3 → R3 + R2 , R1 → R1 + R
3 , R2 → −R2 , R2 → R2 − 5R3 ,R3 → 2R3 and get
1 0 0 3 −1 1 3 −1 −1
−1
0 1 0 −15 6 −5 . So, A = −15 6 −5.
0 0 1 5 −2 2 5 −2 2
14. (a) No. (b) Yes; not an isomorphism. (c)Yes; an isomorphism. (d) No
x + 2y − z = 0
y + 4z = 0
17. Eigenvalues are 2, 2, 3. Basis for eigenspace corresponding to 2 and 3 are {(1, 0, 0)} and {(1, 1, −2)}
respectively. The matrix is not diagonalizable beacuse sum of dimension of eigenspaces is not equal
to 3.
−3 −2 4
1 1
18. Characteristic polynomial is −λ3 + 3λ2 − λ + 3. So A−1 2
= 3 (A − 3A + I) = 3 3 1 −2
−3 0 3
2 1 0
19. 1 −1 3
0 3 0
Lesson 11
11.1 Introduction
First we introduce some basic notations and terminology for the set of complex
numbers as a metric space.
centre a . z − a < ρ denotes the interior of the circle of radius ρ and centre a .
11.1.2 Half-Planes
A set is said to be an open set if all its points are interior points. For example,
the open circular disk, the right half-plane etc. are open sets.
A set is closed if it contains all its boundary points. The closure of a set is the
closed set consisting of all points in together with the boundary of .
11.1.4.3 Example: The set of all complex numbers is both open and closed.
A simple closed path is a closed path that does not intersect or touch itself. A
simply connected domain D in the complex plane is a domain such that every
simple closed path in D enclosed only points of D. A domain that is not simply
connected is called multiply connected.
11.1.5.2 Example: The set { z :1 < z < 2} is bounded whereas right half plane is
unbounded.
11.1.6 Examples
1. z − 2 + i ≤ 1 closed, bounded
7. Re z ≤ z
1 1
8. Re ≤
z 2
9. Re ( z 2 ) > 0
11.2 Function
Each of the real numbers and depends on real variable and , and so it
follows that can be expressed in terms of a pair of real-valued functions of
the real variables and :
Converse is not true, i.e., given two real functions we may not be able to
define a complex function of in an explicit form, for example,
.
11.2.1 Function in Polar Form: If the polar co-ordinates and θ are used then
f ( z ) u (r ,θ ) + iv(r ,θ ).
=
Hence u ( x, y=
) x2 − y 2 , .
When polar co-ordinates are used,
( reiθ ) (=
re ) =
2
f= iθ
re 2 2 iθ
r 2 cos 2θ + ir 2 sin 2θ . Consequently,
example, P( z ) =+
1 2 z − 3z 2 .
P( z )
Quotients of polynomials are called rational functions and are defined at
Q( z )
2 − z2
each point , where Q( z ) ≠ 0 . For example, g ( z ) = .
z + 4z3
11.2.4 Examples
1
1. Domain of definition of f ( z ) = is the entire complex plane excluding the
z
origin.
1
2. Domain of definition of f ( z ) = is the entire complex plane excluding
1− z
2
the circle z = 1 .
θ + 2 kπ 1
θ + 2 kπ
=wk z cos n
+ i sin
n n
lim f ( z ) = s, if for every ∈> 0 there exists δ > 0 such that f ( z ) − s <∈
z → z0
whenever z − z0 < δ .
11.3.1 Examples
iz 2 i
1. lim =
z →2 3 3
i 1 δ
( z − 2)
= z − 2 < <∈ whenever z − 2 < δ and δ < 3 ∈ .
3 3 3
z z x
2. lim does not exists, as along , = = 1 and along ,
z →0 z z x
z iy
= = −1.
z −iy
( z − 3i ) − ( z + i )
lim
3. lim z − 3i − z + i =
z →∞ z →∞ z − 3i + z + i
−4 i −4 i u
= lim
= lim= 0.
z →∞ z − 3i + z + i u →0 1 − 3i u + 1 + iu
and lim v( x, y ) = v0 .
( x , y )→( x0 , y0 )
lim [ f ( z ) ± g ( z ) ] =
α 0 + β0 , lim [ f ( z ) g ( z ) ] = α 0 β 0 , and if β 0 ≠ 0 , then
z → z0 z → z0
f ( z) α0
lim = .
z → z0 g ( z) β0
We say that lim f ( z ) = ∞ if for every positive ∈ > 0 , these exists δ > 0 such
z → z0
1
that f ( z ) > whenever z − z0 < δ .
∈
1 1
f ( z ) − w0 <∈ whenever z > . Equivalently, we can say that lim f = w0 .
δ z →0
z
2z + i z
11.3.4.2 Examples: lim = 2 (ii) lim =i
z →∞ z + 1 z →∞ 2 − iz
We say that lim f ( z ) = ∞ if for every∈> 0 , there exists δ > 0 such that
z →∞
1 1 1
f ( z ) > whenever z > . One can alternatively say lim = 0.
∈ δ z →0 1
f
z
2z3 − 1
11.3.4.3 Example: lim = ∞
z →∞ z + 1
g ( z ) − g (=
z0 ) f ( z ) − f (=
z0 ) f ( z ) − f ( z0 ) <∈ , whenever z − z0 < δ . So
is also continuous.
11.4.2 Examples
sin z
2. f ( z ) = is continuous except at z = ± i.
1+ z2
Im( z )
,z ≠ 0
3. f ( z ) = z
0, z = 0
Re( z 2 )
,z ≠ 0
4. f ( z ) = z
2
0, z = 0
z2 + 1
, z ≠ −i
5. f ( z ) = z + i
0, z = −i
is not continuous at as lim f ( z ) =−2i ≠ f (−i ) .
z →− i
f ( z0 + ∆z ) − f ( z0 )
lim = f ′( z0 )
∆z →0 ∆z
11.5.1 Example: f ( z ) = z 2
( z0 + ∆z ) 2 − ( z0 ) 2
lim = lim (∆z + 2 z=
) 2z
∆z →0 ∆z ∆z →0
f ′ f ′g − fg ′
g = g2
provided g does not vanish.
11.5.3 Examples:
1. f ( z ) = z .
f ( z0 + ∆z ) − f ( z0 ) ( z0 + ∆z ) − ( z0 ) ∆z ∆x − i∆y
= = =
∆z ∆z ∆z ∆x + i∆y
Now for ∆y =0 this value is and for ∆x =0 , it is . Hence
f ( z0 + ∆z ) − f ( z0 )
lim does not exist for any . That is, f z = z is not
∆z →0 ∆z
differentiable at any point.
2. f (=
z ) z= zz
2
f ( z + ∆z ) − f ( z ) ( z + ∆z )( z + ∆z ) − zz
=
∆z ∆z
∆z
= z + z + ∆z
∆z
f (0 + ∆z ) − f (0)
Now for , = ∆z
∆z
as ∆z → 0 . Hence z is differentiable at
2
which has limit . However for
∆z
any z ≠ 0 , lim
2
does not exist. Consequently z is not differentiable at any
∆z →0 ∆z
other point.
5. f ( z ) = z n
( z + ∆=
z ) − zn 1 n n−1 n
n
n n−2 n
z ∆z + z ( ∆z ) + ... + ( ∆z )
2
Hence
d n
dz
( z ) = nz n−1 for all z.
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 12
12.1.1 Examples:
P( z )
3. f ( z ) = , where and are polynomials, is analytic at all points except
Q( z )
where vanishes.
A function which is analytic at all points in the complex plane is called an entire
function.
12.1.3 Examples:
1
A function is said to be analytic at z = ∞ if f is analytic at .
z
Let us write the function
and respectively.
u x = v y , u y = vx (12.2.1)
f ( z + ∆z ) − f ( z )
Proof: Given that f ′( z ) = lim exists. This implies that
∆z →0 ∆z
lim
{[u ( x + ∆x, y + ∆y) + iv( x + ∆x, y + ∆y)] − [u ( x, y) + iv( x, y)]} exists.
( ∆x ,∆y )→(0,0) (∆x + i∆y )
Hence along (∆x,0) and (0, ∆y ) the limit should be same. Now along (∆x,0) the
limit is
lim
( u ( x + ∆x, y ) − u ( x, y ) ) + i ( v( x + ∆x, y ) − v( x, y ) )
∆x →0 ∆x
= u x ( x, y ) + ivx ( x, y ), (12.2.2)
lim
( u ( x, y + ∆y ) − u ( x, y ) ) + i ( v( x, y + ∆y) − v( x, y) )
∆y →0 i∆y
= iu y ( x, y ) + v y ( x, y ) …… (12.2.3)
Equating the real and imaginary parts in (12.2.2) & (12.2.3), we get the Cauchy-
Riemann equations.
12.2.2 Example:
1 x y
2. Let f ( z ) = = −i 2 , z ≠ 0.
z x +y
2 2
x + y2
y 2 − x2 2 xy
u x =2 = vy , u y =
− =
−vx except at . The function is
(x + y )
2 2
(x + y )
2 2 2
∆=
u u ( x + ∆x, y + ∆y ) − u ( x, y )= u x ∆x + u y ∆y + ∈1 ∆x + ∈2 ∆y,
where ∈1 , ∈2 , ∈3 , ∈4 → 0 as ∆x, ∆y → 0 .
Now ∆w = f ( z + ∆z ) − f ( z ) = ∆u + i∆v
= (u x + ivx )∆x + (u y + iv y )∆y + (∈1 +i ∈3 )∆x + (∈2 +i ∈4 )∆y
f ( z + ∆z ) − f ( z ) ∆x ∆y
So − (u x + ivx ) ≤ (∈1 +i ∈3 ) + (∈2 +i ∈4 ) .
∆z ∆z ∆z
∆x ∆y
Using the fact that ≤ 1& ≤ 1 , we get
∆z ∆z
f ( z + ∆z ) − f ( z )
lim =u x + ivx =u y + iv y .
∆z →0 ∆z
12.2.4 Examples:
( z )2
,z≠0
2. f ( z ) = z
0, z = 0.
, we get u 2 + v 2 =
k 2 . Differentiating with respect to
and we get
uu x + vvx =
0 (12.2.4)
and uu y + vv y =
0 (12.2.5)
uu x − vu y =
0 and uu y + vu x =
0
0 , (u 2 + v2 ) u y =
⇒ (u 2 + v2 ) ux = 0
If k ≠ 0 then u=
x u=
y 0 , then vx and v y are also zero. So = const. , = const.
terms of the variables and . Similarly, if we write z = reiθ , ( z ≠ 0), the real
and imaginary parts of are expressed in terms and θ . Assume
the existence and continuity of the first-order partial derivatives of and with
respect to and everywhere in some neighbourhood of a given non zero point
z0 . Then the first order partial derivatives with respect to and θ will also exist
and be continuous in some neighbourhood. Using the chain rule for
differentiating real-valued functions of two real variables we obtain
∂u ∂u ∂u ∂u ∂y ∂u ∂u ∂u ∂u ∂y
= + , = +
∂r ∂x ∂r ∂y ∂r ∂θ ∂x ∂θ ∂y ∂θ
u x cosθ + u y sin θ ,
so that ur = − u x r sin θ + u y r cosθ .
uθ = (12.2.6)
vx cosθ v y sin θ ,
Similarly vr =+ −vx r sin θ + v y r cosθ .
vθ = (12.2.7)
If the partial derivatives with respect to and also satisfy the Cauchy-
Riemann equations u x = v y , u y = −vx at z0 , then equation (12.2.7) becomes
−u y cosθ + u x sin θ , vθ =
vr = u y r sin θ + u x r cosθ (12.2.8)
1 1
ur = vθ and vr = − uθ . (12.2.9)
r r
f ( z ) u (r ,θ ) + i v(r ,θ ) be defined throughout
12.2.6 Theorem: Let the function=
some ∈ -neighbourhood of a non-zero point z0 = r0 exp(iθ 0 ) . Suppose that the
first order partial derivatives of the functions and with respect to and
θ exist anywhere in that neighbourhood and that they are continuous at ( r0 ,θ 0 ) .
Then if those partial derivatives satisfy the polar form (4) of the Cauchy-
Riemann equations at (r0 ,θ 0 ) , the derivatives f ′( z0 ) exists and
′( z0 ) e − iθ (ur + ivr ) ,
f=
1 1 cosθ sin θ
12.2.7 Example: f ( z )= = iθ
= −i
z re r r
The conditions in the theorem are satisfied at every non-zero point z = reiθ in the
plane. Hence the derivative of exists there and
cosθ sin θ 1 1
e − iθ
f ′( z ) =− +i 2 = =
− 2.
( reiθ )
2 2
r r z
12.2.8 Example:
f ( z=
) z (Re z=
) x 2 + ixy . Then=
u x 2 x=
, u y 0,=
vx y=
, v y x.
12.2.9 Example:
z
=
f ( z) 2
, z ≠ 0,
z
1
= , z≠0
z
x −y
=u = , v .
x2 + y 2 x2 + y 2
∇ 2u = u xx + u yy = 0 and ∇ 2v = vxx + v yy = 0
and u y = vx . (12.3.2)
u xx = vxy (12.3.3)
and u yy = −v yx (12.3.4)
If two functions and are harmonic in a domain and their first order partial
derivatives satisfy the Cauchy-Riemann equations throughout , is said to be
a harmonic conjugate of .
12.3.3 Example:
Let u = x 2 − y 2 − y.
Then
ux =
2 x, u xx =
2, u y =
−2 y − 1, u yy =
−2.
So u xx + u yy =
0 ; that is, is harmonic.
, we get
f ( z ) = u + iv = ( x 2 − y 2 − y ) + i ( 2 xy + x + k ) = (z 2
+ iz + ik ) is analytic.
is analytic.
ux =
2 − 6 xy, u xx =
−6 y, u y =
3 y 2 − 3x 2 , u yy =
+6 y.
So u xx + u yy =
0 , that is, is harmonic.
v y = u x = 2 − 6 xy ⇒ v = 2 y − 3xy 2 + h( x)
−3 y 2 + h′( x) =
⇒ vx = −u y =
3x 2 − 3 y 2
⇒ h′( x) =3 x 2 ⇒ h( x ) =x3 + c
Hence v = 2 y − 3 xy 2 + x 3 + c. f =u + iv =2 z + iz 3 + ic is analytic.
f ( z ) u (r ,θ ) + iv(r ,θ ).
Consider the function f in polar form =
1
and uθ = vr (12.3.6)
r
rur ⇒ vrθ =ur + rurr
(12.3.5) ⇒ vθ = (12.3.7)
1
(12.3.6) ⇒ vθ r =
− uθθ (12.3.8)
r
1
Assuming urθ = vθ r , we get ur + rurr =
− uθθ
r
1 1 1
⇒ ur + rurr + uθθ =
0, or, urr + ur + 2 uθθ =
0. (12.3.9)
r r r
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications,
McGraw-Hill, Inc., New York.
Lesson 13
Line Integral in the Complex Plane
13.1 Introduction
Complex definite integrals are called complex line integrals, written as ∫_C f(z) dz, where C is a curve in the complex plane called the path of integration. We may represent such a curve C by a parametric representation
z(t) = x(t) + iy(t), a ≤ t ≤ b. (13.1.1)
We subdivide [a, b] into n parts by points t₀ = a < t₁ < ⋯ < t_n = b, set z_m = z(t_m), choose a point ζ_m on C between z_{m−1} and z_m, and form the sum
S_n = Σ_{m=1}^{n} f(ζ_m) ∆z_m, where ∆z_m = z_m − z_{m−1}. (13.1.2)
In general all paths of integration for complex line integrals are assumed to be piecewise smooth. The following three properties are easily implied by the definition of the line integral.
1. Linearity: ∫_C [k₁ f₁(z) + k₂ f₂(z)] dz = k₁ ∫_C f₁(z) dz + k₂ ∫_C f₂(z) dz.
2. Sense Reversal: reversing the sense of integration along the same path reverses the sign: ∫_{z₀}^{Z} f(z) dz = −∫_{Z}^{z₀} f(z) dz.
3. Partitioning of Path: if C consists of the arcs C₁ and C₂, then ∫_C f(z) dz = ∫_{C₁} f(z) dz + ∫_{C₂} f(z) dz.
lim_{n→∞} S_n = ∫_C f(z) dz (13.2.1)
∫_C f(z) dz = ∫_C u dx − ∫_C v dy + i [∫_C u dy + ∫_C v dx] (13.2.2)
This shows that, under continuity assumptions on u and v, the line integral exists and its value is independent of the choice of subdivisions and intermediate points ζ_m. (13.2.3)
13.2.2 Examples
1.
2.
3.
4.
where
13.2.3 Theorem: If C has the representation z(t), a ≤ t ≤ b, and f is continuous on C, then
∫_C f(z) dz = ∫_a^b f(z(t)) ż(t) dt. (13.2.4)
Proof: The LHS of (13.2.4) is given by (13.2.2) in terms of real line integrals, and we show that the RHS of (13.2.4) also equals (13.2.2). We have z = x + iy, hence ż = ẋ + iẏ. We simply write ẋ for dx/dt and ẏ for dy/dt. We also have dx = ẋ dt and dy = ẏ dt. Consequently, in (13.2.4),
∫_a^b f(z(t)) ż(t) dt = ∫_a^b (u + iv)(ẋ + iẏ) dt = ∫_C [u dx − v dy + i(u dy + v dx)].
13.2.4 Examples
1. ∫_C dz/z = 2πi, where C is the unit circle, counter clockwise.
2. Let C be the circle z(t) = z₀ + ρe^{it}, 0 ≤ t ≤ 2π. Then dz = iρe^{it} dt and
∫_C (z − z₀)^m dz = ∫_0^{2π} ρ^m e^{imt} iρ e^{it} dt = iρ^{m+1} ∫_0^{2π} e^{i(m+1)t} dt,
so that
∫_C (z − z₀)^m dz = 2πi for m = −1, and 0 for integer m ≠ −1.
3. Integrate f(z) = Re z from 0 to 1 + 2i
(a) along C*, the straight line joining the origin to 1 + 2i;
(b) along C consisting of C₁ and C₂, the straight lines from the origin to 1 and from 1 to 1 + 2i.
Solution:
(a) Here z(t) = t(1 + 2i), 0 ≤ t ≤ 1, so Re z = t and dz = (1 + 2i) dt. Hence
∫_{C*} Re z dz = ∫_0^1 t(1 + 2i) dt = 1/2 + i.
(b) Along C₁: z(t) = t, 0 ≤ t ≤ 1, with Re z = t and dz = dt. Along C₂: z(t) = 1 + it, 0 ≤ t ≤ 2, with Re z = 1 and dz = i dt.
Hence ∫_C f(z) dz = ∫_{C₁} f(z) dz + ∫_{C₂} f(z) dz = ∫_0^1 t dt + ∫_0^2 i dt = 1/2 + 2i.
13.3.1 Theorem (ML-inequality): |∫_C f(z) dz| ≤ ML, where L is the length of C and M is a constant such that |f(z)| ≤ M everywhere on C.
Proof: From (13.1.2), |S_n| = |Σ f(ζ_m) ∆z_m| ≤ Σ |f(ζ_m)| |∆z_m| ≤ M Σ |∆z_m|.
Now |∆z_m| is the length of the chord whose endpoints are z_{m−1} and z_m. Hence the sum on the right represents the length L* of the broken line of chords whose endpoints are z₀, z₁, …, z_n. If n approaches infinity such that max |∆t_m|, and so max |∆z_m|, tends to zero, then L* approaches the length L of the curve C, by the definition of the length of a curve. This proves the ML-inequality.
13.3.2 Examples
1. Evaluate ∫_C Re(z²) dz, where C is from 0 to 2 + 4i and represents
(a) the straight line joining 0 to 2 + 4i;
(b) the segments from 0 to 2 and from 2 to 2 + 4i;
(c) the parabola y = x², 0 ≤ x ≤ 2.
Solution:
(a) Here z(t) = t(1 + 2i), 0 ≤ t ≤ 2, so f(z(t)) = Re(z²) = t² − 4t² = −3t² and z′(t) = 1 + 2i. Hence, we obtain
I = ∫_C f(z(t)) z′(t) dt = ∫_0^2 (−3t²)(1 + 2i) dt = −8(1 + 2i).
(b) C₁ is the segment z(t) = t, 0 ≤ t ≤ 2; C₂ is the segment z(t) = 2 + it, 0 ≤ t ≤ 4.
For C₁: f(z(t)) = t² and z′(t) = 1.
For C₂: f(z(t)) = 4 − t² and z′(t) = i.
Hence, we obtain
I = ∫_{C₁} f(z(t)) z′(t) dt + ∫_{C₂} f(z(t)) z′(t) dt = ∫_0^2 t² dt + i ∫_0^4 (4 − t²) dt = 8/3 − (16/3) i.
(c) Along the parabola, z(t) = t + it², 0 ≤ t ≤ 2, so f(z(t)) = t² − t⁴ and z′(t) = 1 + 2it. Hence
I = ∫_C f(z(t)) z′(t) dt = ∫_0^2 (t² − t⁴)(1 + 2it) dt = −56/15 − (40/3) i.
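Since Re(z²) is not analytic, the three answers above differ. A small numerical sketch (not from the source; the trapezoid rule and step counts are arbitrary choices) confirms the path dependence:

import numpy as np

def path_integral(f, z, dz, a, b, n=200001):
    # approximate the line integral of f along C given by z(t), a <= t <= b
    t = np.linspace(a, b, n)
    return np.trapz(f(z(t)) * dz(t), t)

f = lambda w: (w**2).real                                  # Re(z^2)

Ia = path_integral(f, lambda t: t*(1 + 2j),
                   lambda t: (1 + 2j)*np.ones_like(t), 0, 2)
Ic = path_integral(f, lambda t: t + 1j*t**2,
                   lambda t: 1 + 2j*t, 0, 2)

print(Ia)    # ~ -8 - 16j          = -8(1 + 2i)
print(Ic)    # ~ -3.7333 - 13.333j = -56/15 - (40/3)i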
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications,
McGraw-Hill, Inc., New York.
Module 2: Complex Variables
Lesson 14
14.1 Cauchy's Integral Theorem
If f(z) is analytic in a simply connected domain D, then for every simple closed path C in D,
∫_C f(z) dz = 0. (14.1.1)
With f = u + iv, Green's theorem gives
∫_C (u dx − v dy) = ∫∫_R (−∂v/∂x − ∂u/∂y) dx dy,
which vanishes by the Cauchy-Riemann equation u_y = −v_x.
14.1.2 Examples
1. ∫_C e^z dz = 0, ∫_C cos z dz = 0, ∫_C z^n dz = 0 (n = 0, 1, 2, …) for any closed path C, as these functions are entire.
2. ∫_C sec z dz = 0, C the unit circle, as sec z has singularities at z = ±π/2, ±3π/2, …, all outside the unit circle.
3. ∫_C dz/(z² + 4) = 0, C the unit circle, as the singularities z = ±2i are outside the unit circle.
4. ∫_C z̄ dz = ∫_0^{2π} e^{−it} i e^{it} dt = 2πi, where C: z(t) = e^{it} is the unit circle. Here z̄ is not analytic.
5. ∫_C dz/z² = ∫_0^{2π} e^{−2it} · i e^{it} dt = 0, C the unit circle taken counter clockwise; note that 1/z² is not analytic at z = 0.
6. ∫_C dz/z = 2πi, C the unit circle taken counter clockwise.
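A quick numerical sanity check of Examples 4–6 (an illustrative sketch, not part of the lesson) parametrizes the unit circle and applies the trapezoid rule:

import numpy as np

t = np.linspace(0, 2*np.pi, 100001)
z = np.exp(1j*t)                      # unit circle, counter clockwise
dz = 1j*np.exp(1j*t)

print(np.trapz(np.conj(z)*dz, t))     # ~ 2*pi*i : conj(z) is not analytic
print(np.trapz(dz/z, t))              # ~ 2*pi*i : 1/z, singular at 0
print(np.trapz(dz/z**2, t))           # ~ 0      : 1/z^2, matching Example 5
print(np.trapz(np.exp(z)*dz, t))      # ~ 0      : e^z entire (Cauchy's Theorem)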
∫_{C₁} f dz + ∫_{C₂*} f dz = 0,
⇒ ∫_{C₁} f dz = −∫_{C₂*} f dz = ∫_{C₂} f dz,
where C₂* denotes C₂ traversed in the opposite sense.
This proves the theorem for paths that have only the endpoints in common. For
paths with finitely many further common points the above argument is applied
to each loop.
The idea is related to path independence. We can imagine that the path C₂ was obtained from C₁ by continuously moving C₁ (with ends fixed) until it coincides with C₂. As long as the deforming path always contains only points at which f is analytic, the integral retains the same value. This is called the principle of deformation of path.
Now
|[F(z + ∆z) − F(z)]/∆z − f(z)| < (1/|∆z|) ε |∆z| = ε,
that is,
lim_{∆z→0} [F(z + ∆z) − F(z)]/∆z = f(z),
or, F′(z) = f(z).
Consider a doubly connected domain D with outer boundary curve C₁ and inner boundary curve C₂. If f is analytic in any domain D* that contains D and its boundary curves, then ∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz, both taken counter clockwise.
In general, let C₁, C₂, …, C_n be disjoint simple closed curves inside C.
14.3.1 Theorem: Let C and C₁, …, C_n be simple closed curves as in figures (a) and (b). If a function f is analytic throughout the closed region D bounded by C and C₁, …, C_n, then
∫_C f(z) dz = Σ_{k=1}^{n} ∫_{C_k} f(z) dz.
14.3.2 Examples
2. ∫_C dz/|z|² = ∫_0^{2π} i e^{it} dt = e^{it}|_0^{2π} = 0, C the unit circle. Here Cauchy's Theorem is not applicable, since 1/|z|² is not analytic.
3. ∫_C dz/(2z − 1) = (1/2) ∫_C dz/(z − 1/2) = (1/2) · 2πi = πi, C the unit circle; the singularity z = 1/2 lies inside C, so the value follows from ∫_C (z − z₀)^{−1} dz = 2πi (Example 2 of 13.2.4).
4. ∫_C dz/(z − 3i) = 2πi, where C is any circle containing 3i in its interior, as 3i is inside this circle.
5. ∫_C e^z/z dz = 0 (using Cauchy's Theorem for a doubly connected domain), where C consists of an outer circle traversed counter clockwise and an inner circle traversed clockwise, both enclosing z = 0; e^z/z is analytic in the annulus between them.
6. Let C₁ be the upper half of the unit circle from −1 to 1 and C₂ the lower half from −1 to 1. Then
I₁ = ∫_{C₁} dz/z = ∫_π^0 (i e^{it}/e^{it}) dt = −πi and I₂ = ∫_{C₂} dz/z = ∫_π^{2π} (i e^{it}/e^{it}) dt = πi.
I₁ and I₂ are not the same, i.e., the principle of deformation of paths is not applicable, since C₁ cannot be continuously deformed into C₂ without passing through z = 0, at which 1/z is not analytic.
7. I = ∫_C dz/(z(z + 2)), where C is any rectangle containing the points z = 0 and z = −2 inside it. Enclosing the singular points in small circles C₁ (around 0) and C₂ (around −2),
∫_C dz/(z(z + 2)) = ∫_{C₁} dz/(z(z + 2)) + ∫_{C₂} dz/(z(z + 2)).
Using 1/(z(z + 2)) = (1/2)[1/z − 1/(z + 2)],
I = (1/2)[∫_{C₁} dz/z − ∫_{C₁} dz/(z + 2) + ∫_{C₂} dz/z − ∫_{C₂} dz/(z + 2)] = (1/2)(2πi − 0 + 0 − 2πi) = 0.
14.4.1 Cauchy's Integral Formula: If f is analytic in a simply connected domain D and C is a simple closed path in D enclosing z₀, then
f(z₀) = (1/(2πi)) ∫_C f(z)/(z − z₀) dz.
14.4.2 Examples
2. I = ∫_C dz/(2 − z̄), C: |z| = 1.
Now z z̄ = 1 on C, so 2 − z̄ = 2 − 1/z = (2z − 1)/z. Hence
I = ∫_C z dz/(2z − 1) = (1/2) ∫_C z/(z − 1/2) dz = (1/2) · 2πi · (1/2) = πi/2.
4. I = ∫_C (z² + 1)/(z(2z − 1)) dz, C: |z| = 1.
Since 1/(z(2z − 1)) = 1/(z − 1/2) − 1/z, we have
I = ∫_C (z² + 1)/(z − 1/2) dz − ∫_C (z² + 1)/z dz = 2πi(1/4 + 1) − 2πi · 1 = πi/2.
f′(z₀) = (1/(2πi)) ∫_C f(z)/(z − z₀)² dz,
and in general
f^{(n)}(z₀) = (n!/(2πi)) ∫_C f(z)/(z − z₀)^{n+1} dz, n = 1, 2, …
Here C is any simple closed path in D that encloses z₀ and whose interior is a subset of D.
14.4.4 Examples
1. ∫_C cos z/(z − πi)² dz = 2πi (cos z)′|_{z=πi} = −2πi sin(πi) = 2π sinh(π), C enclosing πi.
2. ∫_C e^z/((z − 1)²(z² + 4)) dz = 2πi (d/dz)[e^z/(z² + 4)]|_{z=1}
= 2πi [e^z(z² + 4) − e^z · 2z]/(z² + 4)²|_{z=1} = (6πe/25) i.
These formulas apply whenever f is analytic inside and on C.
14.4.9 Examples
1. I = ∫_C dz/(z² + 4)², C: |z − i| = 2.
The integrand is not analytic at z = ±2i. The point 2i lies inside the contour but −2i lies outside it. So
I = ∫_C dz/((z − 2i)²(z + 2i)²) = ∫_C f(z)/(z − 2i)² dz, where f(z) = 1/(z + 2i)²,
= 2πi f′(2i) = π/16.
2. I = ∫_C (3z⁴ + 5z² + 2)/(z + 1)⁴ dz, where C is any simple closed curve containing the point z = −1 inside it. Then
I = (2πi/3!) (d³/dz³)(3z⁴ + 5z² + 2)|_{z=−1} = −24πi.
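Both examples can be verified with sympy, since each integral equals 2πi times the residue at the enclosed pole (a hedged sketch; not taken from the source text):

import sympy as sp

z = sp.symbols('z')

# Example 14.4.4-2: pole of order 2 at z = 1
e1 = sp.exp(z) / ((z - 1)**2 * (z**2 + 4))
print(sp.simplify(2*sp.pi*sp.I*sp.residue(e1, z, 1)))    # 6*I*pi*E/25

# Example 14.4.9-2: pole of order 4 at z = -1
e2 = (3*z**4 + 5*z**2 + 2) / (z + 1)**4
print(2*sp.pi*sp.I*sp.residue(e2, z, -1))                # -24*I*pi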
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 15
An expression of the form
Σ_{n=1}^{∞} a_n = a₁ + a₂ + a₃ + ⋯ (15.1.1)
is an infinite series of numbers, and a_n is its nth term. The nth partial sum of the series is defined by S_n = a₁ + a₂ + ⋯ + a_n. Since a_n = S_n − S_{n−1}, we get a_n → 0 for all convergent series.
Consider the geometric series Σ_{n=0}^{∞} r^n = 1 + r + r² + ⋯, where r is any real number. We now find the conditions for the convergence of this series. We have
S_n = 1 + r + ⋯ + r^{n−1}, or, r S_n = r + r² + ⋯ + r^n.
Therefore, S_n = (1 − r^n)/(1 − r) for r ≠ 1, so the series converges to 1/(1 − r) for |r| < 1 and diverges for |r| > 1.
For r = 1, each term in the series is unity. Hence the partial sum S_n = n → ∞ as n → ∞. Thus the series is divergent in this case.
For r = −1, the terms in the series are +1 and −1 alternately. Now the sequence {S_n} has two subsequences with limits 0 and 1. Hence in this case the sequence does not converge, and consequently the series does not converge.
15.1.5 Example: Using the above argument, one can show that the series Σ_{n=0}^{∞} z^n converges to 1/(1 − z) if |z| < 1. Here z is a complex variable.
The following results are frequently used to test the convergence of an infinite
series.
15.2.3 Theorem: The series Σ_{n=1}^{∞} 1/n^p, p > 0, is convergent if p > 1 and divergent if p ≤ 1.
Proof: We write
S_n = 1/1^p + 1/2^p + ⋯ + 1/n^p = 1/1^p + (1/2^p + 1/3^p) + (1/4^p + 1/5^p + 1/6^p + 1/7^p) + ⋯
< 1/1^p + 2/2^p + 4/4^p + ⋯ = 1/1^p + 1/2^{p−1} + 1/4^{p−1} + ⋯
The last series is a geometric series with common ratio 1/2^{p−1}, which is less than 1 for p > 1; hence the series converges for p > 1. Since the harmonic series Σ 1/n is divergent, applying the comparison test the series Σ_{n=1}^{∞} 1/n^p is also divergent for p ≤ 1.
15.2.4 Example: Prove that the series Σ_{n=1}^{∞} 1/(n(n + 1)) is convergent. Also find its sum.
Now 1/(n(n + 1)) = 1/n − 1/(n + 1), so S_n = 1 − 1/(n + 1) → 1 as n → ∞; the given series is convergent and the sum of the series is 1.
Solution:
(i) . Hence
(ii) . So
have
Now,
A real series in which the terms are alternately positive and negative is called an alternating series and is of the form a₁ − a₂ + a₃ − a₄ + ⋯ with a_n > 0. The following theorem gives a sufficient condition for the convergence of an alternating series.
15.3.2 Examples: Using the Leibniz Theorem, we can conclude that the following series are convergent:
(i) Σ (−1)^n (1/n)  (ii) Σ (−1)^n (1/n²)
15.3.4 Example: The series Σ (−1)^n (1/n) is conditionally convergent, since it converges while Σ 1/n does not.
We say that the series Σ f_n(z) converges uniformly to S(z) if, for a given real positive number ε, there exists a natural number N, independent of z but dependent on ε, such that |S(z) − S_n(z)| < ε for all n > N and all z in the region considered.
Consider, for example, the geometric series Σ z^n with sum S(z) = 1/(1 − z) on the disk |z| ≤ r, r < 1. Note that |1 − z| ≥ 1 − |z| ≥ 1 − r for all z in the disk. We have
S(z) − S_n(z) = Σ_{k=n+1}^{∞} z^k = z^{n+1}/(1 − z),
so that
|S(z) − S_n(z)| ≤ r^{n+1}/(1 − r).
Using r < 1, the right hand side can be made as small as necessary by choosing n large enough. Hence |S(z) − S_n(z)| < ε for n > N(ε) and for all z. This shows that the given series is uniformly convergent on |z| ≤ r.
If we consider the open disk |z| < 1, we can find a z for a given n and a real number K (no matter how large) such that |z^{n+1}/(1 − z)| > K, so the convergence is not uniform there.
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 16
Power Series
16.1 Introduction
A power series about a point z₀ is a series of the form
Σ_{n=0}^{∞} a_n (z − z₀)^n = a₀ + a₁(z − z₀) + a₂(z − z₀)² + ⋯ (16.1.1)
16.1.1 Examples
Proof:
(a) The proof follows by observing that for z = z₀ the series reduces to the single term a₀.
Let R be the radius of the circle with centre at z₀ that contains all points at which the series is convergent, the series being divergent at all points outside it.
16.2.1 Examples
If the series converges only at z = z₀, it will not converge for any z ≠ z₀, so R = 0. In all other cases, the series will have a radius of convergence 0 < R ≤ ∞.
16.2.5 Examples
1. For a_n = (ln n)^{ln ln n}:
log L = lim_{n→∞} (1/n) ln[(ln n)^{ln ln n}] = lim_{n→∞} (ln ln n)²/n = 0.
Hence L = e⁰ = 1, or R = 1.
Then .
Hence the series converges for | z | < 2 and diverges for | z | > 2 .
Hence
7. ,
for
8.
as .
Hence
9.
10. .
Proof: Given .
Taking we get
Assume Then
and
The Cauchy product of two power series Σ a_n z^n and Σ b_n z^n means the multiplication of each term of the first series by each term of the second series and the collection of like powers of z. This gives a power series Σ c_n z^n, where
c_n = a₀b_n + a₁b_{n−1} + ⋯ + a_n b₀.
This power series converges absolutely for each z within the circle of convergence of each of the two given series and has the sum s(z) = f(z) g(z).
16.3.3 Theorem: The derived series of a power series has the same radius of
convergence as the original series.
Now
Then
16.3.7 Examples:
convergence is .
Differentiating a geometric-type series twice and multiplying by a suitable factor, we get the original series. Now clearly the radius of convergence of this new series is 5; hence the radius of convergence of the given series is 5.
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 17
Taylor and Laurent Series
The following theorem shows that a Taylor series can be found for an analytic function.
where the coefficients involve the derivatives of f at z₀ and the region of validity contains z₀. This representation is valid in the largest open disk with centre z₀ in which f is analytic. The remainder of (17.1.1) can be represented as R_n(z). Since analytic functions have derivatives of all orders, we can take n in (17.1.6) as large as we please. If we let n → ∞, we get (17.1.1). Clearly, (17.1.1) will converge and represent f if and only if R_n(z) → 0 as n → ∞.
Finally, a point z₀ at which f is not differentiable, but such that every disk with centre z₀ contains points at which f is differentiable, is called a singular point of f. We say that f is singular at z₀ or has a singularity at z₀.
Then Now
Thus Further,
Thus
17.1.4 Remark: Complex analytic functions have derivatives of all orders and can always be represented by power series of the form (17.1.1). This is not true in general for real-valued functions. In fact, there are real functions which have derivatives of all orders but cannot be represented by a power series.
17.1.5 Examples:
2.
3.
4.
5.
6.
7.
8.
9. =
The following theorem gives the conditions for the existence of a Laurent series, consisting of a part with nonnegative powers and the principal part (the negative powers). The coefficients of this Laurent series are given by integrals taken counter clockwise around any simple closed path that lies in the annulus and encircles the inner circle.
This series converges and represents f(z) in the open annulus obtained from the given annulus by continuously increasing the outer circle and decreasing the inner one until each of the circles reaches a point where f(z) is singular.
In the special case that z₀ is the only singular point of f(z) inside the inner circle, this circle can be shrunk to the point z₀, giving convergence in a disk except at the centre.
where z is any point in the given annulus and both paths of integration are taken counter-clockwise. Now the first integral is exactly the Taylor-series integral, so that
with coefficients
annulus.
Now
where,
Now if the principal part consists of finitely many terms only, then f is analytic for all z in the exterior E of the circle with centre z₀ and radius equal to the maximum distance from z₀ to the singularities of f inside. The domain common to E and the region of convergence of the nonnegative part is the open annulus.
17.2.3 Examples:
1. With centre 0, the expansion is valid for |z| > 0. Hence the annulus is the whole complex plane except the origin.
2.
3.
and
valid for
4. center 0
5. center 0
for
for
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 18
Singularities and Zeros
18.2 Poles
18.2.1 Examples:
order
3.
4.
5.
The first expansion shows that there is a pole of order 3 at the point in question. The second expansion has infinitely many terms of negative powers, but this is no contradiction, as the latter expansion is valid in a different annulus.
for and . Equating the absolute values and the arguments, we have
, i.e., and .
and .
define =1.
18.3 Zeros
18.3.1 Examples:
(18.3.1)
Infinity has been added to the complex plane, resulting in the extended complex plane. The extended complex plane can be mapped onto a sphere of diameter 1 touching the plane at z = 0. The image of a complex number A is the intersection of the sphere with the segment from A to the "north pole" N. The point at infinity has N itself as its image.
The sphere representing the extended complex plane in this way is called the Riemann number sphere. The mapping of the sphere onto the plane is called stereographic projection with centre N.
18.3.6 Examples:
Let f(z) be an analytic function whose only singularities in the finite plane are poles. Then f(z) is called a meromorphic function. Some examples of meromorphic functions are rational functions with nonconstant denominator, and the trigonometric functions tan z, cot z, sec z and cosec z.
18.3.7 Examples:
1. .
2.
for
Suggested Readings
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 19
Residue Theorem
19.1 Residues
This series is convergent for all points near z₀ (except at z₀ itself) in a domain of the form 0 < |z − z₀| < R. The coefficient b₁ of the first negative power 1/(z − z₀) of this Laurent series is called the residue of f at z₀ and is given by
b₁ = Res_{z=z₀} f(z) = (1/(2πi)) ∫_C f(z) dz.
19.1.1 Examples:
So .
19.1.3 Example:
So
If f(z) has a pole of order m at z = z₀ then its Laurent series can be written as
f(z) = Σ_{n=0}^{∞} a_n (z − z₀)^n + b₁/(z − z₀) + ⋯ + b_m/(z − z₀)^m, where b_m ≠ 0.
So the residue can be computed as
Res_{z=z₀} f(z) = (1/(m − 1)!) lim_{z→z₀} (d^{m−1}/dz^{m−1}) [(z − z₀)^m f(z)].
19.2.2 Residue Theorem: Let the function f(z) be analytic inside a simple closed path C and on C, except for finitely many singular points z₁, z₂, …, z_k inside C. Then the integral of f(z) taken counter clockwise around C is given by
∫_C f(z) dz = 2πi Σ_{j=1}^{k} Res_{z=z_j} f(z).
Proof: We enclose each of the singular points z_j in a circle C_j with radius small enough that these circles and C are all separated. Then f(z) is analytic in the multiply connected domain D bounded by C and C₁, …, C_k and on the entire boundary of D. From Cauchy's integral theorem we thus have ∫_C f(z) dz = Σ_j ∫_{C_j} f(z) dz, and each ∫_{C_j} f(z) dz = 2πi Res_{z=z_j} f(z) by the definition of the residue.
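As an illustration of the theorem (a hedged sketch, not taken from the lesson), the integral of Example 7 of Lesson 14 can be recomputed from residues with sympy:

import sympy as sp

z = sp.symbols('z')
f = 1/(z*(z + 2))

# contour enclosing both poles z = 0 and z = -2:
print(2*sp.pi*sp.I*(sp.residue(f, z, 0) + sp.residue(f, z, -2)))   # 0

# contour enclosing only z = 0:
print(2*sp.pi*sp.I*sp.residue(f, z, 0))                            # I*pi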
19.2.3 Examples:
2.
3. Evaluate , is ellipse .
Hence .
6. Evaluate , .
The integrand has simple poles at the indicated points, and they all lie inside the contour.
Now for pole at
So,
Suggested Readings
Boas, R.P. (1987). Invitation to Complex Analysis, McGraw-Hill, Inc., New York.
Brown, J.W. and Churchill, R.V. (1996). Complex Variables and Applications.
McGraw-Hill, Inc., New York.
Lesson 20
Introduction
If a function f is periodic with period T > 0 then f(t) = f(t + T), −∞ < t < ∞. The smallest T for which the equality f(t) = f(t + T) holds is called the fundamental period of f(t). Further, if T is a period of a function f then nT, for any natural number n, is also a period of f. Some familiar periodic functions are sin x, cos x, tan x, etc.
We consider two important properties of periodic function. These properties will be used
to discuss the Fourier series.
1. It should be noted that the sum, difference, product and quotient of two periodic functions is also a periodic function (provided the ratio of the two periods is rational). Consider for example
f(x) = sin x + sin 2x + cos 3x,
whose terms have periods 2π, 2π/2 = π and 2π/3, respectively.
Period of f = common period of (sin x, sin 2x, cos 3x) = 2π.
One can also confirm the period of the function f(x) graphically: the graph repeats over any two intervals [a, a + T] and [b, b + T] of length T.
Now the question arises whether any function of period T = 2l can be represented as the sum of a trigonometric series. The answer to this question is affirmative, and it is possible for a very wide class of periodic functions. In the next lesson we will see how to obtain the constants a_n and b_n in order for this trigonometric series to represent a given periodic function.
Remark 1: Though sine and cosine functions are quite simple in nature, their sum may be quite complicated. One can see the plot of sin x + sin 2x + cos 3x in Figure 20.2. However, the function has period 2π, which is a common period of sin x, sin 2x and cos 3x.
(Figure 20.2: plot of f(x) = sin x + sin 2x + cos 3x for 0 ≤ x ≤ 14.)
We call two functions φ(x) and ψ(x) orthogonal on the interval [a, b] if
∫_a^b φ(x) ψ(x) dx = 0.
With this definition we can say that the basic trigonometric system, viz.
1, cos x, sin x, cos 2x, sin 2x, …,
is orthogonal on the interval [−π, π] or [0, 2π]. In particular, we shall prove that any two distinct functions of the system are orthogonal.
To show the orthogonality we take different possible combination as:
For any integer n ≠ 0: we have the following integrals, showing the orthogonality of the function 1 with any member of the sine or cosine family:
∫_{−π}^{π} 1 · cos(nx) dx = [sin(nx)/n]_{−π}^{π} = 0,  ∫_{−π}^{π} 1 · sin(nx) dx = [−cos(nx)/n]_{−π}^{π} = 0.
For any integers m and n: here we show that any two members of the two different families (sine and cosine) are orthogonal:
∫_{−π}^{π} sin(nx) cos(mx) dx = 0.
Note that the integrand is an odd function and therefore the integral is zero.
The above result can be summarized in a more general setting in the following theorem.
20.3.1 Theorem
The common period of the trigonometric system
1, cos(πx/l), sin(πx/l), cos(2πx/l), sin(2πx/l), …
is 2l, and the system is orthogonal over any interval of length 2l. Similar to the evaluation of the integrals above for the basic trigonometric system, we have the following results:
a) ∫_{−l}^{l} cos(mπx/l) cos(nπx/l) dx = ∫_a^{a+2l} cos(mπx/l) cos(nπx/l) dx = 0 if m ≠ n, and l if m = n ≠ 0.
b) ∫_{−l}^{l} sin(mπx/l) sin(nπx/l) dx = ∫_a^{a+2l} sin(mπx/l) sin(nπx/l) dx = 0 if m ≠ n, and l if m = n ≠ 0.
c) ∫_{−l}^{l} sin(mπx/l) cos(nπx/l) dx = ∫_a^{a+2l} sin(mπx/l) cos(nπx/l) dx = 0.
To summarize: the integral over one full period of the integrand is zero whenever the integrand is a product of two different members of the trigonometric system; if the integrand is the product of a member of the sine or cosine family with itself, the value of the integral is half the length of the interval of integration. These results will be used to establish the Fourier series of a function of period 2l defined on the interval [−l, l] or [a, a + 2l]. It should be noted that for l = π we obtain the results for the standard trigonometric system of common period 2π.
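These orthogonality relations are easy to confirm numerically; the following Python sketch (with l = 2 chosen arbitrarily; not part of the source) evaluates the three integrals:

import numpy as np
from scipy.integrate import quad

l = 2.0
cc = lambda m, n: quad(lambda x: np.cos(m*np.pi*x/l)*np.cos(n*np.pi*x/l), -l, l)[0]
ss = lambda m, n: quad(lambda x: np.sin(m*np.pi*x/l)*np.sin(n*np.pi*x/l), -l, l)[0]
sc = lambda m, n: quad(lambda x: np.sin(m*np.pi*x/l)*np.cos(n*np.pi*x/l), -l, l)[0]

print(cc(2, 3), ss(2, 3), sc(2, 3))   # ~ 0, 0, 0  (different members)
print(cc(2, 2), ss(2, 2))             # ~ 2, 2     (same member: value l)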
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Stein, E.M. and Shakarchi, R. (2003). Fourier Analysis: An Introduction. Princeton
University Press, Princeton, New Jersey, USA.
Lesson 21
In this lesson we shall introduce Fourier series of a piecewise continuous periodic func-
tion. First we construct Fourier series of periodic functions of standard period 2π and then
the idea will be extended for a function of arbitrary period.
A function f is said to be piecewise continuous on [a, b] if there exist finitely many points a < t₁ < t₂ < ⋯ < t_n < b such that f is continuous on each open sub-interval (a, t₁), (t_j, t_{j+1}) and (t_n, b), and all the corresponding one sided limits exist and are finite.
This means that f is continuous on [a, b] except possibly at finitely many points, at each of which f has finite one sided limits. It should be clear that all continuous functions are piecewise continuous.
21.1.1 Example 1
At each point of discontinuity the function has finite one sided limits from both sides. At
the end points x = −π and π right and left sided limits exist, respectively. Therefore, the
function is piecewise continuous.
21.1.2 Example 2
Note that f is continuous everywhere except at x = 0. The function f is not piecewise continuous on [0, 1] because lim_{x→0+} f(x) = ∞.
An important property of piecewise continuous functions is boundedness and integrability
over closed interval. A piecewise continuous function on a closed interval is bounded and
integrable on the interval. Moreover, if f1 and f2 are two piecewise continuous functions
then their product, f1 f2 , and linear combination, c1 f1 +c2 f2 , are also piecewise continuous.
Let f be a periodic piecewise continuous function on [−π, π] and suppose it has the trigonometric series expansion
f ∼ a₀/2 + Σ_{k=1}^{∞} [a_k cos(kx) + b_k sin(kx)]. (21.1)
Integrating over [−π, π] term by term, this implies
a₀ = (1/π) ∫_{−π}^{π} f(x) dx.
Multiplying the series by cos(nx), integrating over [−π, π] and assuming its value equal to the integral of f(x) cos(nx) over [−π, π], we get
∫_{−π}^{π} f(x) cos(nx) dx = 0 + Σ_{k=1}^{∞} [a_k ∫_{−π}^{π} cos(nx) cos(kx) dx + b_k ∫_{−π}^{π} cos(nx) sin(kx) dx].
Note that the first term on the right hand side is zero because ∫_{−π}^{π} cos(nx) dx = 0. Further, using the orthogonality of the trigonometric system we obtain
a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx.
Similarly, by multiplying the series by sin(nx) and repeating the above steps we obtain
b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx.
Remark 1: In the series (21.1) we cannot, in general, replace ∼ by the = sign, as is clear from the determination of the coefficients. In the process we have set two integrals equal, which does not imply that the function f(x) is equal to the trigonometric series. Later we will discuss conditions under which equality holds.
Let f(x) be a piecewise continuous function defined on [−l, l] which is 2l-periodic. The Fourier series corresponding to f(x) is given as
f ∼ a₀/2 + Σ_{k=1}^{∞} [a_k cos(kπx/l) + b_k sin(kπx/l)] (21.2)
where the Fourier coefficients, derived in exactly the same manner as in the previous case, are given as
a_k = (1/l) ∫_{−l}^{l} f(x) cos(kπx/l) dx, k = 0, 1, 2, …
b_k = (1/l) ∫_{−l}^{l} f(x) sin(kπx/l) dx, k = 1, 2, …
It must be noted that just for simplicity we will be discussing the Fourier series of 2π-periodic functions. However, all discussions are valid for a function of arbitrary period.
21.4.1 Problem 1
Find the Fourier series of the 2π-periodic function given by f(x) = −π for −π < x < 0 and f(x) = x for 0 ≤ x < π.
Solution: The Fourier series of the given function will represent a 2π-periodic function, and the series is given by
f(x) ∼ a₀/2 + Σ_{n=1}^{∞} (a_n cos(nx) + b_n sin(nx))
with
a₀ = (1/π) ∫_{−π}^{π} f(x) dx = (1/π) [∫_{−π}^{0} (−π) dx + ∫_{0}^{π} x dx] = −π/2
and the coefficients a_n, n = 1, 2, …, as
a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx = (1/π) [∫_{−π}^{0} (−π) cos(nx) dx + ∫_{0}^{π} x cos(nx) dx]
= −[sin(nx)/n]_{−π}^{0} + (1/π) {[x sin(nx)/n]_{0}^{π} − ∫_{0}^{π} sin(nx)/n dx}
= (1/π)(cos(nπ) − 1)/n² = ((−1)^n − 1)/(π n²).
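The coefficients just obtained can be reproduced symbolically; the sketch below (illustrative only, assuming the piecewise definition stated above) uses sympy:

import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', integer=True, positive=True)

a0 = (sp.integrate(-sp.pi, (x, -sp.pi, 0)) + sp.integrate(x, (x, 0, sp.pi)))/sp.pi
an = (sp.integrate(-sp.pi*sp.cos(n*x), (x, -sp.pi, 0))
      + sp.integrate(x*sp.cos(n*x), (x, 0, sp.pi)))/sp.pi
bn = (sp.integrate(-sp.pi*sp.sin(n*x), (x, -sp.pi, 0))
      + sp.integrate(x*sp.sin(n*x), (x, 0, sp.pi)))/sp.pi

print(sp.simplify(a0))    # -pi/2
print(sp.simplify(an))    # ((-1)**n - 1)/(pi*n**2)
print(sp.simplify(bn))    # (1 - 2*(-1)**n)/n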
Remark 4: Let a function be defined on the interval [−l, l]. It should be noted that periodicity of the function is not required for developing the Fourier series. However, the Fourier series, if it converges, defines a 2l-periodic function on the whole real line. Therefore, it is sometimes convenient to think of the given function as a 2l-periodic function defined on the whole real line.
21.4.2 Problem 2
Find the Fourier series of f(x) = |sin x|.
Case I: First we treat the function |sin x| as π-periodic, so that 2l = π ⇒ l = π/2. The coefficient a₀ is given as
a₀ = (1/(π/2)) ∫_{0}^{π} f(x) dx = (2/π) ∫_{0}^{π} sin x dx = (2/π) [−cos x]_{0}^{π} = 4/π.
Remark 5: If we develop the Fourier series of a function considering its period to be any integer multiple of its fundamental period, we end up with the same Fourier series.
Remark 6: Note that in the above example the given function is even, and therefore the Fourier series is simpler: the coefficients b_n are zero in this case. The determination of the Fourier series of a given function becomes simpler if the function is odd or even. More details will be seen in Lesson 23.
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Stein, E.M. and Shakarchi, R. (2003). Fourier Analysis: An Introduction. Princeton
University Press, Princeton, New Jersey, USA.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 22
Convergence Theorems
We have seen that piecewise continuity of a function is sufficient for the existence of the
Fourier series. We have not yet discussed the convergence of the Fourier series. Conver-
gence of the Fourier series is a very important topic to be explored in this lesson.
In order to motivate the discussion on convergence, let us construct the Fourier series of the function
f(x) = −cos x for −π/2 ≤ x < 0, f(x) = cos x for 0 ≤ x ≤ π/2, with f(x + π) = f(x).
Note that the Fourier series at x = 0 converges to 0. So the Fourier series of f does not converge to the value of the function at x = 0.
With this example we pose the following questions in connection to the convergence of
the Fourier series
1. Does the Fourier series of a function f(x) converge at a point x ∈ [−L, L]?
2. If the series converges at a point x, is the sum of the series equal to f(x)?
The answers to these questions are, in general, negative because:
1. There are Lebesgue integrable functions on [−L, L] whose Fourier series diverge
everywhere on [−L, L].
2. There are continuous functions whose Fourier series diverge at a countable number
of points.
3. We have already seen in the above example that the Fourier series may converge at a point while the sum is not equal to the value of the function at that point.
We need some additional conditions to ensure that the Fourier series of a function f(x) converges, and converges to the function f(x). Though we have several notions of convergence, like pointwise, uniform and mean square, we first stick to the most common notion, that is, pointwise convergence. Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges pointwise to f on [a, b] if for each x ∈ [a, b] we have lim_{m→∞} f_m(x) = f(x). A more formal definition of pointwise convergence will be given later.
At both endpoints x = ±L the series converges to [f(L−) + f((−L)+)]/2; thus we have
[f(L−) + f((−L)+)]/2 = a₀/2 + Σ_{n=1}^{∞} (−1)^n a_n.
Remark 1: If the function is continuous at a point x, that is, f(x+) = f(x−), then we have
f(x) = a₀/2 + Σ_{k=1}^{∞} [a_k cos(kπx/L) + b_k sin(kπx/L)]. (22.2)
In other words, if f is continuous with f(−L) = f(L) and the one sided derivatives (22.1) exist, then equality (22.2) holds for all x.
Remark 2: In the above theorem the conditions on f are sufficient, not necessary. One may replace these conditions (piecewise continuity and one sided derivatives) by the slightly more restrictive condition of piecewise smoothness. A function is said to be piecewise smooth on [−L, L] if it is piecewise continuous and has a piecewise continuous derivative. The difference between the two similar restrictions on f will be clear from the example of the function
f(x) = x² sin(1/x) for x ≠ 0, f(0) = 0.
It can easily be shown that the derivative of the function exists everywhere, and thus the function has one sided derivatives and satisfies the conditions of the convergence theorem. However, the function is not piecewise smooth, because lim_{x→0} f′(x) does not exist:
f′(x) = 2x sin(1/x) − cos(1/x) for x ≠ 0, f′(0) = 0.
If a function is piecewise smooth then it can easily be shown that left and right derivatives exist. Let f be a piecewise smooth function on [−L, L]; then lim_{x→a±} f′(x) exists for all a ∈ [−L, L]. This implies
lim_{x→a+} f′(x) = lim_{x→a+} lim_{h→0+} [f(x + h) − f(x)]/h.
Interchanging the two limits on the right hand side we obtain
lim_{x→a+} f′(x) = lim_{h→0+} lim_{x→a+} [f(x + h) − f(x)]/h = lim_{h→0+} [f(a + h) − f(a+)]/h.
Similarly one can show the existence of the left derivative. This confirms that piecewise smoothness is a stronger condition than piecewise continuity with existence of one sided derivatives.
Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that the sequence {f_m}_{m=1}^{∞} converges in the mean square sense to f on [a, b] if
lim_{m→∞} ∫_a^b |f(x) − f_m(x)|² dx = 0.
Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges pointwise to f on [a, b] if for each x ∈ [a, b] we have lim_{m→∞} f_m(x) = f(x). That is, for each x ∈ [a, b] and ε > 0 there is a natural number N(ε, x) such that |f_m(x) − f(x)| < ε for all m ≥ N(ε, x).
Let {f_m}_{m=1}^{∞} be a sequence of functions defined on [a, b] and let f be defined on [a, b]. We say that {f_m}_{m=1}^{∞} converges uniformly to f on [a, b] if for each ε > 0 there is a natural number N(ε) such that
|f_n(x) − f(x)| < ε for all n ≥ N(ε) and for all x ∈ [a, b].
There is one more interesting fact about uniform convergence: if {f_m}_{m=1}^{∞} is a sequence of continuous functions which converges uniformly to a function f on [a, b], then f is continuous.
22.2.4 Example 1
Consider f_n(x) = x^n on [0, 1), which converges pointwise to 0. Given x ∈ (0, 1) and 0 < ε < 1, we seek a natural number N(ε) such that relation (22.3) holds for n > N. Note that relation (22.3) holds true if
x^n < ε ⟺ n > ln ε / ln x.
It should be evident now that for given x and ε one can define
N := ⌈ln ε / ln x⌉, where ⌈ ⌉ rounds towards infinity.
Since N depends on x as well as on ε, this convergence is pointwise only.
22.2.5 Example 2
Let u_n = x^n/n on [0, 1). This sequence converges uniformly, and of course pointwise, to 0. For given ε > 0 take n > N := ⌈1/ε⌉; then, noting ⌈1/ε⌉ ≥ 1/ε, we have |u_n − 0| = x^n/n < 1/n < ε for all x ∈ [0, 1).
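The contrast between the two examples can be seen numerically: the following sketch (not from the text; the grid resolution is an arbitrary choice) tracks the sup-norm of the error on [0, 1):

import numpy as np

x = np.linspace(0, 1, 10001, endpoint=False)
for n in (10, 100, 1000):
    print(n, (x**n).max(), (x**n / n).max())
# sup |x^n|   stays close to 1: x^n does not converge uniformly on [0, 1);
# sup |x^n/n| <= 1/n -> 0    : x^n/n converges uniformly.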
• Let f be a piecewise continuous function on [−π, π] and the appropriate one sided
derivatives of f at each point in [−π, π] exists then for each x ∈ [−π, π] the Fourier
series of f converges pointwise to the value (f (x−) + f (x+))/2.
• If f is continuous on [−π, π], f (−π) = f (π), and f ′ is piecewise continuous on
[−π, π], then the Fourier series of f converges uniformly (and also absolutely) to f
on [−π, π].
An interesting property of the partial sums of a Fourier series is that, among all trigonometric polynomials of degree N, the partial sum of the Fourier series yields the best approximation of f in the mean square sense. This result is summarized in the following lemma.
22.3.1 Lemma
Let f be a piecewise continuous function on [−π, π] and let the mean square error be defined by the function
E(c₀, …, c_N, d₁, …, d_N) = ∫_{−π}^{π} [f − (c₀/2 + Σ_{k=1}^{N} (c_k cos kx + d_k sin kx))]² dx.
Then E is minimized precisely when the c_k and d_k are the Fourier coefficients of f.
22.4.1 Problem 1
For the function of Problem 21.4.1 (f(x) = −π on (−π, 0) and f(x) = x on [0, π)), find the sum of the Fourier series at every point of [−π, π].
Solution: At x = 0, the Fourier series will converge to
[f(0+) + f(0−)]/2 = [0 + (−π)]/2 = −π/2.
Again, x = ±π are further points of discontinuity, and the value of the series at these points will be
[f(π−) + f((−π)+)]/2 = [π + (−π)]/2 = 0.
At all other points the series converges to the functional value f(x).
22.4.2 Problem 2
Let the Fourier series of the function f(x) = x + x², −π < x < π, be given by
x + x² ∼ π²/3 + Σ_{n=1}^{∞} (−1)^n [(4/n²) cos nx − (2/n) sin nx].
Find the sum of the Fourier series at every point of [−π, π]. Applying the result on convergence of the Fourier series, find the values of
1 + 1/2² + 1/3² + 1/4² + ⋯ and 1 − 1/2² + 1/3² − 1/4² + ⋯
The point x = 0 is a point of continuity and therefore the series will converge to f(0) = 0. Substituting x = 0 into the series we obtain
π²/3 + Σ_{n=1}^{∞} (−1)^n (4/n²) = 0 ⟹ Σ_{n=1}^{∞} (−1)^{n+1}/n² = π²/12.
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 23
In this lesson we start the discussion of even and odd functions. As mentioned earlier, if the function is odd or even then the Fourier series takes a rather simple form, containing sine or cosine terms only. Then we discuss the very important topic of developing a desired Fourier series (sine or cosine) of a function defined on a finite interval by extending the given function as an odd or even function.
A function is said to be even about the point a if f(a − x) = f(a + x) for all x, and odd about the point a if f(a − x) = −f(a + x) for all x. Further, note the following properties of even and odd functions:
a) The product of two even or two odd functions is an even function.
b) The product of an even function and an odd function is an odd function.
Using these properties we have the following results for the Fourier coefficients:
a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx = (2/π) ∫_{0}^{π} f(x) cos(nx) dx when f is even about 0, and a_n = 0 when f is odd about 0;
b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx = 0 when f is even about 0, and b_n = (2/π) ∫_{0}^{π} f(x) sin(nx) dx when f is odd about 0.
23.1.1 Proposition
a) If f is an even function then the Fourier series takes the simple form
f(x) ∼ a₀/2 + Σ_{n=1}^{∞} a_n cos(nx), with a_n = (2/π) ∫_{0}^{π} f(x) cos(nx) dx, n = 0, 1, 2, …
b) If f is an odd function then, similarly,
f(x) ∼ Σ_{n=1}^{∞} b_n sin(nx), with b_n = (2/π) ∫_{0}^{π} f(x) sin(nx) dx, n = 1, 2, …
23.2.1 Problem 1
23.2.2 Problem 2
Determine the Fourier series of f(x) = x² on [−π, π] and hence find the values of the infinite series Σ_{n=1}^{∞} (−1)^{n+1}/n² and Σ_{n=1}^{∞} 1/n².
Solution: The function f(x) = x² is even on the interval [−π, π] and therefore b_n = 0 for all n. The coefficient a₀ is given as
a₀ = (1/π) ∫_{−π}^{π} x² dx = (1/π) [x³/3]_{−π}^{π} = 2π²/3.
The other coefficients can be calculated by the general formula as
a_n = (1/π) ∫_{−π}^{π} x² cos(nx) dx = (2/π) ∫_{0}^{π} x² cos(nx) dx = (2/π) {[x² sin(nx)/n]_{0}^{π} − (2/n) ∫_{0}^{π} x sin(nx) dx}.
Again integrating by parts we obtain
a_n = (4/(nπ)) {[x cos(nx)/n]_{0}^{π} − (1/n) ∫_{0}^{π} cos(nx) dx} = (4/(nπ)) · π(−1)^n/n = 4(−1)^n/n².
Therefore the Fourier series is given as
x² = π²/3 + Σ_{n=1}^{∞} (4(−1)^n/n²) cos(nx) for x ∈ [−π, π]. (23.2)
If we substitute x = 0 in equation (23.2) we get
0 = π²/3 + Σ_{n=1}^{∞} 4(−1)^n/n² ⟹ Σ_{n=1}^{∞} (−1)^{n+1}/n² = π²/12.
If we now substitute x = π in equation (23.2) we get
π² = π²/3 + Σ_{n=1}^{∞} 4/n² ⟹ Σ_{n=1}^{∞} 1/n² = π²/6.
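A short numerical sketch (illustrative only; not part of the lesson) confirms both sums and the series value at an interior point:

import numpy as np

n = np.arange(1, 200001)
print(np.sum((-1.0)**(n + 1)/n**2), np.pi**2/12)   # both ~ 0.822467
print(np.sum(1.0/n**2), np.pi**2/6)                # both ~ 1.644934

x0 = 1.0                                           # check (23.2) at x = 1
s = np.pi**2/3 + np.sum(4*(-1.0)**n/n**2 * np.cos(n*x0))
print(s, x0**2)                                    # both ~ 1.0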
Suppose that f(x) is a function defined on (0, l] and we want to express f(x) as a cosine or sine series. This can be done by extending f(x) to be an even or an odd function on [−l, l]. Note that there exist infinitely many ways to extend the function to the interval [−l, 0]. Among all possible extensions of f there are two, the even and odd extensions, that lead to simple and useful series:
a) If we want to express f(x) in a cosine series then we extend f(x) as an even function on the interval [−l, l].
b) On the other hand, if we want to express f(x) in a sine series then we extend f(x) as an odd function on [−l, l].
23.3.1 Proposition
Remark: Note that we can develop a Fourier series of a function f defined on [0, l] directly, and it will in general contain both sine and cosine terms. This series, if it converges, will represent an l-periodic function. The idea of half range Fourier series is entirely different: we extend the function f as desired so as to obtain a sine or cosine series. The half range series of the function f will represent a 2l-periodic function.
23.4.1 Problem 1
Find the Fourier sine series of f(x) = eˣ on (0, 1).
Solution: Computing b_n = 2 ∫_0^1 eˣ sin(nπx) dx by integrating by parts twice, taking the resulting copy of the integral to the left side and simplifying, we get
b_n = 2nπ [1 − e(−1)^n]/(1 + n²π²).
Therefore, the sine series of f is given as
eˣ = 2π Σ_{n=1}^{∞} (n [1 − e(−1)^n]/(1 + n²π²)) sin(nπx) for 0 < x < 1.
23.4.2 Problem 2
Find the Fourier cosine series of f(x) = sin(πx/l) on (0, l).
Solution: Since we want the cosine series of the function f, we compute the coefficients a_n as
a_n = (2/l) ∫_0^l sin(πx/l) cos(nπx/l) dx = (1/l) ∫_0^l [sin((n + 1)πx/l) + sin((1 − n)πx/l)] dx.
23.4.3 Problem 3
Expand f(x) = x, 0 < x < 2, in (i) a sine series and (ii) a cosine series.
Solution: (i) To get the sine series we calculate b_n as
b_n = (2/l) ∫_0^l f(x) sin(nπx/l) dx = ∫_0^2 x sin(nπx/2) dx.
Integrating by parts we obtain
b_n = [−x cos(nπx/2) · 2/(nπ)]_0^2 + (2/(nπ)) ∫_0^2 cos(nπx/2) dx = −(4/(nπ)) cos nπ.
Then for 0 < x < 2 we have the Fourier sine series
x = −(4/π) Σ_{n=1}^{∞} (cos nπ/n) sin(nπx/2) = (4/π) [sin(πx/2) − (1/2) sin(πx) + (1/3) sin(3πx/2) − ⋯].
(ii) For the cosine series, a₀ = ∫_0^2 x dx = 2 and a_n = ∫_0^2 x cos(nπx/2) dx = (4/(n²π²))(cos nπ − 1), so that for 0 < x < 2
x = 1 − (8/π²) [cos(πx/2) + (1/3²) cos(3πx/2) + (1/5²) cos(5πx/2) + ⋯].
It is interesting to note that the given function f (x) = x, 0 < x < 2 is represented by
two entirely different series. One contains only sine terms while the other contains only
cosine terms.
Note that we have set the series equal to the given function because each series converges to the function value for every x ∈ (0, 2). It should also be pointed out that one can deduce the sums of several series by putting different values of x ∈ (0, 2) into the above sine and cosine series.
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 24
In this lesson we discuss differentiation and integration of the Fourier series of a function. We can get some idea of the behaviour of the new series by looking at its terms. In the case of differentiation we get terms like n sin(nx) and n cos(nx), where the factor n makes the magnitude of the terms larger than in the original series, and therefore convergence of the new series becomes more difficult. It is exactly the other way round in the case of integration, where n appears in the denominator: the new terms become smaller in magnitude, and we thus expect better convergence. We shall deal with these two cases separately in the next sections.
24.1 Differentiation
We first discuss term by term differentiation of the Fourier series. Let f be piecewise continuous with the Fourier series
f(x) ∼ a₀/2 + Σ_{n=1}^{∞} [a_n cos(nx) + b_n sin(nx)]. (24.1)
Can we differentiate the Fourier series of a function f term by term in order to obtain the Fourier series of f′? In other words, is it true that
f′(x) ∼ Σ_{n=1}^{∞} [−n a_n sin(nx) + n b_n cos(nx)]? (24.2)
We consider a simple example to illustrate that this may fail. Consider the half range sine series for cos x in (0, π):
cos x ∼ (8/π) Σ_{n=1}^{∞} n sin(2nx)/(4n² − 1).
The term by term derivative, with terms (16/π) n² cos(2nx)/(4n² − 1), cannot be the Fourier series of −sin x, because
lim_{n→∞} (16/π) n² cos(2nx)/(4n² − 1) ≠ 0,
so the differentiated series diverges.
24.1.1 Theorem
If f is continuous on [−π, π], f(−π) = f(π), f′ is piecewise continuous on [−π, π], and if
f(x) ∼ a₀/2 + Σ_{n=1}^{∞} [a_n cos(nx) + b_n sin(nx)]
(in fact, in this case we may replace ∼ by =) is the Fourier series of f, then the Fourier series of f′ is given by
f′(x) ∼ Σ_{n=1}^{∞} [−n a_n sin(nx) + n b_n cos(nx)].
Moreover, if the function f′ has the appropriate left and right derivatives at a point x, then we have
[f′(x+) + f′(x−)]/2 = Σ_{n=1}^{∞} [−n a_n sin(nx) + n b_n cos(nx)].
Proof: f′ is piecewise continuous, which is a sufficient condition for the existence of the Fourier series of f′. So we can write the Fourier series of f′ as
f′(x) ∼ ā₀/2 + Σ_{n=1}^{∞} [ā_n cos(nx) + b̄_n sin(nx)] (24.3)
where
ā_n = (1/π) ∫_{−π}^{π} f′(x) cos(nx) dx, b̄_n = (1/π) ∫_{−π}^{π} f′(x) sin(nx) dx.
Now we simplify the coefficients ā_n and b̄_n and write them in terms of a_n and b_n. Using the condition f(−π) = f(π) and integration by parts, we can easily show that
ā₀ = 0, ā_n = n b_n, b̄_n = −n a_n.
Convergence of this series to [f′(x+) + f′(x−)]/2, or to f′(x), is a direct consequence of the convergence theorem of Fourier series.
24.2 Integration
In general, for an infinite series uniform convergence is required to integrate the series
term by term. In the case of Fourier series we do not even have to assume the convergence
of the Fourier series to be integrated. However, integration term by term of a Fourier series
does not, in general, lead to a Fourier series. The main results can be summarize as:
24.2.1 Theorem
Let f be a piecewise continuous function with the Fourier series
f(x) ∼ a₀/2 + Σ_{n=1}^{∞} [a_n cos(nx) + b_n sin(nx)]. (24.4)
Then, no matter whether this series converges or not, we have for each x ∈ [−π, π]
∫_{−π}^{x} f(t) dt = a₀(x + π)/2 + Σ_{n=1}^{∞} [(a_n/n) sin(nx) − (b_n/n)(cos(nx) − cos nπ)] (24.5)
and the series on the right hand side converges uniformly to the function on the left.
Proof: We define
g(x) = ∫_{−π}^{x} f(t) dt − a₀x/2.
Using Theorem 24.1.1 we have the following result for the Fourier series of g′:
g′(x) ∼ Σ_{n=1}^{∞} [−n α_n sin(nx) + n β_n cos(nx)],
where α_n, β_n are the Fourier coefficients of g, and
n β_n = a_n, −n α_n = b_n, n = 1, 2, …
Remark 1: Note that the series on the right hand side of (24.5) is not a Fourier series, due to the presence of x.
Remark 2: If f is piecewise continuous on [−π, π] and (24.4) is its Fourier series, then no matter whether this series converges or not, it is true that
∫_a^x f(t) dt = ∫_a^x (a₀/2) dt + Σ_{n=1}^{∞} ∫_a^x [a_n cos(nt) + b_n sin(nt)] dt,
where −π ≤ a ≤ x ≤ π, and the series on the right hand side converges uniformly in x to the function on the left for any fixed value of a.
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 25
In this lesson some properties of the Fourier coefficients will be given. We will mainly derive two important relations for Fourier series, namely Bessel's inequality and Parseval's identity. One application of Parseval's identity, summing certain infinite series, will also be discussed.
Expanding ∫_{−π}^{π} [f(x) − S_n(x)]² dx ≥ 0, using the orthogonality of the trigonometric system and the definition of the Fourier coefficients, we get
∫_{−π}^{π} f²(x) dx + [a₀²π/2 + π Σ_{k=1}^{n} (a_k² + b_k²)] − [a₀²π + 2π Σ_{k=1}^{n} (a_k² + b_k²)] ≥ 0.
This implies Bessel's inequality
a₀²/2 + Σ_{k=1}^{n} (a_k² + b_k²) ≤ (1/π) ∫_{−π}^{π} f²(x) dx.
If f is a continuous function on [−π, π] and the one sided derivatives exist, then we have the equality (Parseval's identity)
a₀²/2 + Σ_{n=1}^{∞} (a_n² + b_n²) = (1/π) ∫_{−π}^{π} f²(x) dx. (25.1)
Remark: As stated earlier Parseval’s identity can be proved for piecewise continuous
functions. Further, for a piecewise continuous function on [−L, L] we can get Parseval’s
identity just by replacing π by L in (25.1).
25.3.1 Problem 1
The Fourier series 1 + Σ_{n=1}^{∞} (4/(π²n²))[cos(nπ) − 1] cos(nπx/2) represents f(x) = |x| on [−2, 2]. Use Parseval's identity to find the sums
a) 1/1⁴ + 1/3⁴ + 1/5⁴ + ⋯ and b) 1/1⁴ + 1/2⁴ + 1/3⁴ + ⋯
Solution: a) We first find the Fourier coefficients and the period of the Fourier series just by comparing the given series with the standard Fourier series:
a₀ = 2, a_n = (4/(π²n²))[cos(nπ) − 1], n = 1, 2, …, b_n = 0; period = 2L = 4 ⇒ L = 2.
Parseval's identity then implies
(1/2) ∫_{−2}^{2} x² dx = 4/2 + Σ_{n=1}^{∞} (16/(π⁴n⁴)) (cos(nπ) − 1)².
Since (cos(nπ) − 1)² equals 4 for odd n and 0 for even n, we obtain
1/1⁴ + 1/3⁴ + 1/5⁴ + ⋯ = π⁴/96.
b) Let
S = 1/1⁴ + 1/2⁴ + 1/3⁴ + ⋯
Splitting S into its odd and even terms gives S = π⁴/96 + S/16, and hence the required sum is S = π⁴/90.
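Both values can be confirmed by direct summation (a sketch, not part of the lesson; the truncation point is arbitrary):

import numpy as np

n = np.arange(1, 2000001)
print(np.sum(1.0/n[n % 2 == 1]**4), np.pi**4/96)   # both ~ 1.0146780
print(np.sum(1.0/n**4), np.pi**4/90)               # both ~ 1.0823232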
25.3.2 Problem 2
Find the Fourier series of x², −π < x < π, and use it along with Parseval's identity to show that
Σ_{n=1}^{∞} 1/(2n − 1)⁴ = π⁴/96.
Solution: From Problem 23.2.2, b_n = 0, a_n = 4(−1)^n/n², and
a₀ = (2/π) ∫_0^π x² dx = 2π²/3.
Using (1/π) ∫_{−π}^{π} x⁴ dx = 2π⁴/5, Parseval's identity gives
4π⁴/18 + Σ_{n=1}^{∞} 16/n⁴ = 2π⁴/5.
This implies
Σ_{n=1}^{∞} 1/n⁴ = π⁴/90.
Now, using the idea of splitting the series from Problem 25.3.1 b), we have
Σ_{n=1}^{∞} 1/(2n − 1)⁴ = Σ_{n=1}^{∞} 1/n⁴ − (1/16) Σ_{n=1}^{∞} 1/n⁴ = (15/16) Σ_{n=1}^{∞} 1/n⁴.
Substituting the value of Σ 1/n⁴ in the above equation we get the required sum π⁴/96.
25.3.3 Problem 3
For f(x) = cos(x/2) on [−π, π], the Fourier coefficients are
a₀ = 4/π, a_n = (4/π) (−1)^{n+1}/(4n² − 1), b_n = 0.
By Parseval's identity we have
16/(2π²) + (16/π²) Σ_{n=1}^{∞} 1/(4n² − 1)² = (1/π) ∫_{−π}^{π} cos²(x/2) dx = 1.
Then,
Σ_{n=1}^{∞} 1/(4n² − 1)² = (π² − 8)/16.
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 26
It is often convenient to work with the complex form of Fourier series. Indeed, the complex form of Fourier series has applications in the field of signal processing, which is of great interest to many electrical engineers.
Given the Fourier series of a function f(x) as
f ∼ a₀/2 + Σ_{n=1}^{∞} [a_n cos(nx) + b_n sin(nx)], −π < x < π, (26.1)
with
a_n = (1/π) ∫_{−π}^{π} f(x) cos(nx) dx, n = 0, 1, 2, …
and
b_n = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx, n = 1, 2, …
We know from Euler's formula that
cos(nx) = (e^{inx} + e^{−inx})/2, sin(nx) = (e^{inx} − e^{−inx})/(2i).
Substituting these values of cos(nx) and sin(nx) into the equation (26.1) we obtain
f ∼ a₀/2 + Σ_{n=1}^{∞} [a_n (e^{inx} + e^{−inx})/2 + b_n (e^{inx} − e^{−inx})/(2i)]
= a₀/2 + Σ_{n=1}^{∞} [(1/2)(a_n − i b_n) e^{inx} + (1/2)(a_n + i b_n) e^{−inx}].
Writing c₀ = a₀/2, c_n = (a_n − i b_n)/2 and k_n = (a_n + i b_n)/2, we have
k_n = (1/2)(a_n + i b_n) = (1/(2π)) ∫_{−π}^{π} f(x) [cos(nx) + i sin(nx)] dx = (1/(2π)) ∫_{−π}^{π} f(x) e^{inx} dx.
From the above calculation we get k_n = c_{−n}. Substituting the value of k_n into the Fourier series (26.3) we have
f ∼ Σ_{n=−∞}^{∞} c_n e^{inx} (26.4)
where
c_n = (1/(2π)) ∫_{−π}^{π} f(x) e^{−inx} dx, n = 0, ±1, ±2, … (26.5)
The series on the right side of equation (26.4) is called the complex form of the Fourier series. For a function of period 2L defined on [−L, L], the complex form of the Fourier series can be derived analogously:
f ∼ Σ_{n=−∞}^{∞} c_n e^{inπx/L}, c_n = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx, n = 0, ±1, ±2, …
26.1.1 Problem 1
26.1.2 Problem 2
Find the complex Fourier series of f(x) = x on (−l, l).
Solution: The complex Fourier series representation of a function f(x) is given as
f ∼ Σ_{n=−∞}^{∞} c_n e^{inπx/l}
where
c_n = (1/(2l)) ∫_{−l}^{l} f(x) e^{−inπx/l} dx = (1/(2l)) ∫_{−l}^{l} x e^{−inπx/l} dx.
For n ≠ 0, integrating by parts we get
c_n = (1/(2l)) {[x e^{−inπx/l} · (−l/(inπ))]_{−l}^{l} + (l/(inπ)) ∫_{−l}^{l} e^{−inπx/l} dx}.
A further integration simplifies this to
c_n = −(1/(2l)) {(l²/(inπ)) e^{−inπ} + (l²/(inπ)) e^{inπ} + [(l/(inπ))² e^{−inπx/l} · (inπ/l) · 0]},
where the last bracket vanishes. Finally, since e^{inπ} = e^{−inπ} = (−1)^n, it simplifies to
c_n = (−1)^n i l/(nπ), n = ±1, ±2, …
Now c₀ can be calculated as
c₀ = (1/(2l)) ∫_{−l}^{l} x dx = 0.
Therefore, the Fourier series is given as
f ∼ (il/π) Σ_{n≠0} ((−1)^n/n) e^{inπx/l}.
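The coefficient formula can be double-checked with sympy (an illustrative sketch; l = 1 is an arbitrary choice):

import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', integer=True, nonzero=True)
l = 1

cn = sp.integrate(x*sp.exp(-sp.I*n*sp.pi*x/l), (x, -l, l))/(2*l)
print(sp.simplify(cn))                       # I*(-1)**n/(pi*n), i.e. i*l*(-1)^n/(n*pi)
print(sp.integrate(x, (x, -l, l))/(2*l))     # 0, i.e. c_0 = 0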
26.1.3 Problem 3
Show that Parseval's identity for the complex form of the Fourier series takes the form
(1/(2π)) ∫_{−π}^{π} {f(x)}² dx = Σ_{n=−∞}^{∞} |c_n|².
Solution: For the real form of the Fourier series, Parseval's identity is given as
a₀²/2 + Σ_{n=1}^{∞} (a_n² + b_n²) = (1/π) ∫_{−π}^{π} {f(x)}² dx. (26.6)
We know that
c₀ = a₀/2, c_n = (a_n − i b_n)/2, c_{−n} = (a_n + i b_n)/2.
We can deduce that
|c_n|² = (a_n² + b_n²)/4, |c_{−n}|² = (a_n² + b_n²)/4. (26.7)
Dividing the equation (26.6) by 2 and then splitting the second term, we get
a₀²/4 + (1/4) Σ_{n=1}^{∞} (a_n² + b_n²) + (1/4) Σ_{n=1}^{∞} (a_n² + b_n²) = (1/(2π)) ∫_{−π}^{π} {f(x)}² dx,
and by (26.7) the left hand side equals |c₀|² + Σ_{n=1}^{∞} (|c_n|² + |c_{−n}|²) = Σ_{n=−∞}^{∞} |c_n|².
26.1.4 Problem 4
Therefore, we obtain
Σ_{n=−∞}^{∞} 1/(1 + n²) = π (e^π + e^{−π})/(e^π − e^{−π}) = π coth π.
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Lesson 27
Fourier Integral
If f (x) is defined on a finite interval [−l, l] and is piecewise continuous then we can con-
struct a Fourier series corresponding to the function f and this series will represent the
function on this interval if the function satisfies some additional conditions discussed be-
fore. Furthermore, if f is periodic then we may be able to represent the function by its
Fourier series on the entire real line. Now suppose the function is not periodic and is de-
fined on the entire real line. Then we do not have any possibility to represent the function
by the Fourier series. However, we may still be able to represent the function in terms
of sine and cosines using an integral, called Fourier integral, instead of a summation. In
this lesson we discuss a representation of a non-periodic function by letting l → ∞ in the
Fourier series of a function defined on [−l, l].
Consider any function f(x) defined on [−l, l] that can be represented by a Fourier series as
f(x) = a₀/2 + Σ_{n=1}^{∞} [a_n cos(nπx/l) + b_n sin(nπx/l)]. (27.1)
For a more general case we can replace the left hand side of the above equation by the average value (f(x+) + f(x−))/2. We now see what happens if we let l → ∞. It should be mentioned that as l approaches ∞, the function f(x) becomes a non-periodic function defined on the real axis. Substituting a_n and b_n in the equation (27.1) we get
f(x) = (1/(2l)) ∫_{−l}^{l} f(u) du + (1/l) Σ_{n=1}^{∞} [(∫_{−l}^{l} f(u) cos(nπu/l) du) cos(nπx/l) + (∫_{−l}^{l} f(u) sin(nπu/l) du) sin(nπx/l)].
Using the identity cos x cos y + sin x sin y = cos(x − y), we get
f(x) = (1/(2l)) ∫_{−l}^{l} f(u) du + (1/l) Σ_{n=1}^{∞} ∫_{−l}^{l} f(u) cos((nπ/l)(u − x)) du. (27.2)
If we assume that ∫_{−∞}^{∞} |f(u)| du converges, the first term on the right hand side approaches 0 as l → ∞, since
|(1/(2l)) ∫_{−l}^{l} f(u) du| ≤ (1/(2l)) ∫_{−∞}^{∞} |f(u)| du.
(Figure 27.1: Sum of areas of trapezoids as the area under a curve in the limiting case.)
Remark: It should be mentioned that the above derivation is not a rigorous proof of convergence of the Fourier integral to the function; it is just meant to give some idea of the transition from Fourier series to Fourier integral. Nevertheless we summarize the convergence result, without proof, in the next theorem. In addition to all conditions required for the convergence of Fourier series, we need one more condition, namely absolute integrability of f. Further, note that the Fourier integral representation of f(x) is entirely analogous to a Fourier series representation of a function on a finite interval: Σ_{n=1}^{∞} (⋯) is replaced by ∫_0^∞ (⋯) dα.
27.1.1 Theorem
Assume that f is piecewise smooth on every finite interval of the x-axis (or piecewise continuous with one sided derivatives) and let f be absolutely integrable over the entire real axis. Then for each x on the entire axis we have
(1/π) ∫_0^∞ ∫_{−∞}^{∞} f(u) cos α(u − x) du dα = (f(x+) + f(x−))/2.
As in the convergence of Fourier series, if f is continuous and all other conditions are satisfied, then the Fourier integral converges to f(x).
27.2.1 Problem 1
Let f(x) = x for 0 < x < a and f(x) = 0 otherwise. i) Find the Fourier integral representation of f. ii) Determine the convergence of the integral at x = a. iii) Find the value of the integral ∫_0^∞ (1 − cos α)/α² dα.
Solution: i) The integral representation of f is
f(x) ∼ ∫_0^∞ [A(α) cos αx + B(α) sin αx] dα (27.3)
where
A(α) = (1/π) ∫_{−∞}^{∞} f(u) cos αu du = (1/π) ∫_0^a u cos αu du = (1/π) {[u sin(αu)/α]_0^a − ∫_0^a sin(αu)/α du}
= (1/π) [a sin(αa)/α + (cos(αa) − 1)/α²] = (cos αa + αa sin αa − 1)/(πα²),
B(α) = (1/π) ∫_{−∞}^{∞} f(u) sin αu du = (1/π) ∫_0^a u sin αu du = (1/π) {[−u cos(αu)/α]_0^a + ∫_0^a cos(αu)/α du}
= (1/π) [−a cos(αa)/α + sin(αa)/α²] = (sin αa − αa cos αa)/(πα²).
27.2.2 Problem 2
Find the Fourier integral representation of f(x) = 1 for 0 < x < 2 and f(x) = 0 otherwise; hence evaluate ∫_0^∞ (sin α/α) dα.
Solution: Here
A(α) = (1/π) ∫_{−∞}^{∞} f(u) cos αu du = (1/π) ∫_0^2 cos αu du = (1/π) [sin(αu)/α]_0^2 = sin 2α/(πα),
B(α) = (1/π) ∫_{−∞}^{∞} f(u) sin αu du = (1/π) ∫_0^2 sin αu du = (1/π) [−cos(αu)/α]_0^2 = (1 − cos 2α)/(πα).
Then, substituting the calculated values of A(α) and B(α) in equation (27.4), we obtain
f(x) ∼ (1/π) ∫_0^∞ [(sin 2α/α) cos αx + ((1 − cos 2α)/α) sin αx] dα
= (1/π) ∫_0^∞ [sin α(2 − x) + sin αx]/α dα.
To find the value of the given integral we substitute x = 1 in the above Fourier integral and use the convergence theorem to get
(2/π) ∫_0^∞ (sin α/α) dα = f(1) = 1.
This gives the value of the desired integral as
∫_0^∞ (sin α/α) dα = π/2.
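The Dirichlet integral just derived can be checked numerically; the sketch below (not from the lesson) splits off the oscillatory tail and handles it with scipy's sine-weighted quadrature:

import numpy as np
from scipy.integrate import quad

head, _ = quad(lambda a: np.sinc(a/np.pi), 0, 1)               # sin(a)/a on [0, 1]
tail, _ = quad(lambda a: 1/a, 1, np.inf, weight='sin', wvar=1)  # oscillatory tail
print(head + tail, np.pi/2)                                     # ~ 1.5707963, both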
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 28
In this lesson we first present the complex form of the Fourier integral. We then introduce the Fourier sine and cosine integrals. The convergence of these integrals, with applications to evaluating integrals, will be discussed. This lesson will be very useful for introducing Fourier transforms.
It is often convenient to introduce the complex form of the Fourier integral. In fact, using the complex form of the Fourier integral we shall introduce the Fourier transform, sometimes referred to as the Fourier exponential transform, in the next lesson. We start with the following Fourier integral:
f(x) = (1/π) ∫_0^∞ ∫_{−∞}^{∞} f(u) cos α(u − x) du dα. (28.1)
Multiplying the equation (28.3) by i and adding it to the equation (28.2), we obtain
f(x) = (1/(2π)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(u) [cos α(u − x) + i sin α(u − x)] du dα. (28.4)
If we subtract the equation (28.3) multiplied by i from the equation (28.2), we obtain
f(x) = (1/(2π)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(u) e^{−iα(u−x)} du dα. (28.6)
28.1.1 Example
If the function f is an even function, the integrand of A(α) is even. Therefore we can simplify the integral to
A(α) = (2/π) ∫_0^∞ f(u) cos αu du.
Since the integrand of the integral in B(α) is then odd, B(α) = 0. Thus for an even function f we have
f(x) ∼ ∫_0^∞ A(α) cos αx dα.
Similarly, if f is an odd function, then A(α) = 0 and
f(x) ∼ ∫_0^∞ B(α) sin αx dα.
Remark: Similar to half range Fourier series, we can represent a function defined for
all real x > 0 by Fourier sine or Fourier cosine integral by extending the function as an
odd function or as an even function over the entire real axis, respectively.
We summarize the above results in the following theorem:
28.2.1 Theorem
Assume that f is piecewise smooth function on every finite interval on the positive x-axis
and let f be absolutely integrable over 0 to ∞. Then f may be represented by either:
a) Fourier cosine integral
Z ∞
f (x) ∼ A(α) cos αx, dα 0 < x < ∞,
0
where
∞
2
Z
A(α) = f (u) cos αu du
π 0
where
∞
2
Z
B(α) = f (u) sin αu du
π 0
Moreover, the above Fourier cosine and sine integrals converge to [f (x+) + f (x−)]/2.
28.3.1 Problem 1
determine the Fourier integral. To what value does the integral converge at x = −π ?
Solution: Since the given function is an odd function we can directly put A(α) = 0 and
evaluate B(α) as
∞ π
2 2 2
Z Z
B(α) = f (u) sin αu du = sin αu du = (1 − cos απ)
π 0 π 0 πα
Therefore, the Fourier integral representation is
∞
2 1 − cos απ
Z
f (x) ∼ sin αx dα
π 0 α
The function is not defined at x = −π and therefore the Fourier integral at x = −π will
converge to the average value 0−1 1
2 = −2.
28.3.2 Problem 2
Hence evaluate
∞ ∞
sin πα cos πα (1 − cos πα) sin πα
Z Z
dα and dα
0 α 0 α
where ∞ π
2 2 2 (1 − cos πα)
Z Z
B(α) = f (u) sin αu du = sin αu du =
π 0 π 0 π α
Therefore
∞
2 (1 − cos πα)
Z
f (x) ∼ sin αx dα
π 0 α
Using convergence theorem, we have
∞ 0,
x > π;
2 (1 − cos πα)
Z
sin αx dα = 1/2, x = π ;
π 0 α
1, 0 < x < π.
To get the required integral we now substitute x = π into the above integral
∞ ∞
2 sin πα cos πα 1 sin πα cos πα π
Z Z
dα = =⇒ dα =
π 0 α 2 0 α 4
Suggested Readings
Davis, H.F. (1963). Fourier Series and Orthogonal Functions. Dover Publications, Inc.
New York.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 29
In this lesson we introduce Fourier cosine and sine transforms. Evaluation and proper-
ties of Fourier cosine and sine transform will be discussed. The parseval’s identities for
Fourier cosine and sine transform will be given.
The function fˆc (α) as given by (29.1) is known as the Fourier cosine transform of f (x)
in 0 < x < ∞. We shall denote Fourier cosine transform by Fc (f ). The function f (x)
as given by (29.2) is called inverse Fourier cosine transform of fˆc (α). It is denoted by
Fc−1 (fˆc ).
29.2 Properties
We mention here some important properties of Fourier cosine and sine transform that will
be used in the application to solving differential equations.
1. Linearity: Let f and g are piecewise continuous and absolutely integrable functions.
Then for constants a and b we have
Fc (af + bg) = aFc (f ) + bFc (g) and Fs (af + bg) = aFs (f ) + bFc (g)
Note that these properties are obvious and can be proved just using linearity of the inte-
grals.
2. Transform of Derivatives: Let f (x) be continuous and absolutely integrable on
x−axis. Let f ′ (x) be piecewise continuous and on each finite interval on [0, ∞] and
f (x) → 0 as x → ∞. Then,
r
2
Fc {f ′ (x)} = αFs {f (x)} − f (0) and Fs {f ′ (x)} = −αFc {f (x)}
π
Proof: By the definition of Fourier cosine transform we have
r Z ∞
2
Fc {f ′ (x)} = f ′ (x) cos αx dx
π 0
Remark: The above results can easily be extended to the second order derivatives to
have
r r
2 ′ 2
Fc {f ′′ (x)} = −α2 Fc {f (x)} − f (0) and Fs {f ′′ (x)} = αf (0) − α2 Fs {f (x)}
π π
Note that here we have assumed continuity of f and f ′ and piecewise continuity of f ′′ .
Further, we also assumed that f and f ′ both goes to 0 as x approaches to ∞.
3. Parseval’s Identities: For Fourier sine and cosine transform we have the following
identities
Z ∞ Z ∞ Z ∞h i2 Z ∞
i) fˆc (α)ĝc(α) dα = f (x)g(x) dx ii) fˆc (α) dα = [f (x)]2
0 0 0 0
Z ∞ Z ∞ Z ∞h i2 Z ∞
iii) fˆs (α)ĝs (α) dα = f (x)g(x) dx iv) fˆs (α) dα = [f (x)]2
0 0 0 0
Proof: We prove the first identity and rest can be proved similarly. We take the right hand
side of the identity and use the definition of the inverse cosine transform to get
∞
r Z ∞ Z ∞
2
Z
f (x)g(x) dx = f (x) ĝc (α) cos(αx) dα dx
0 π 0 0
29.3.1 Problem 1
Find the Fourier sine transform of e−x , x > 0. Hence show that
∞
x sin mx πe−m
Z
d x = ,m>0
0 1 + x2 2
This implies
α
I=
1 + α2
Finally substituting the value of I to the expression of Fourier sine transform above we
get r
2 α
Fs {ex } =
π 1 + α2
Taking inverse Fourier transform
r Z ∞
2 2 ∞ α
Z
e−x = ˆ
fs (α) sin αx dα = sin αx dα
π 0 π 0 1 + α2
29.3.2 Problem 2
2
Find the Fourier cosine transform of e−x , x > 0.
Solution: By the definition of Fourier cosine transform we have
r Z ∞ r Z ∞
2 2 2 2
Fc {e−x }= f (u) cos(αu) du = e−u cos(αu) du
π 0 π 0
Let us denote the integral on the right hand side by I and differentiate it with respect to α
as Z ∞ Z ∞
dI d −u2 2
= e cos(αu) du = − e−u sin(αu)u du
dα dα 0 0
Integrating by parts we get
Z ∞
dI 1 −u2 −u2 α
= e sin(αu) − α e cos(αu) du = − I
dα 2 0 2
This implies
2
/4
I = c e−α
√ √
π π
Using I(0) = 2 , we evaluate the constant c = 2 . Then we have
√
π −α2 /4
I= e
2
Therefore the desired Fourier cosine transform is given as
r √
−x2 2 π −α2 /4 1 2
Fc {e }= e = √ e−α /4
π 2 2
29.3.3 Problem 3
∞
dt ∞
t2
Z Z
π π
i) 2 2 2 2
= ii) 2
dt =
0 (a + t )(b + t ) 2ab(a + b) 0 (t + 1) 4
Solution: i) For the first part let f (x) = e−ax and g(x) = e−bx . It can easily be shown that
r Z ∞ r
2 2 a
Fc {f } = fˆc (α) = e−ax
cos αx dx =
π 0 π a + α2
2
r Z ∞ r
2 2 b
Fc {g} = fˆc (α) = e−bx cos αx dx =
π 0 π b + α2
2
Thus we get
∞
dα
Z
π
=
0 (a2 + α2 )(b2 + α2 ) ab(a + b)
ii) For the second part we use Fourier sine transform of e−x as
r
2 α
Fs {e−x } = fˆs (α) = .
π 1 + α2
Using Parseval’s identity we obtain
2 ∞
α2 1
Z Z
2
dα = ∞ e−x dx =
π 0 (1 + α2 )2 0 2
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 30
FOURIER TRANSFORM
In this lesson we describe Fourier transform. We shall connect Fourier series with the
Fourier transform through Fourier integral. Several interesting properties of the Fourier
Transform such as linearity, shifting, scaling etc. will be discussed.
as the integral Z ∞
f (u) sin α(u − x)du dα
−∞
is an odd function of α. Multiplying the equation (30.2) by the imaginary unit i and adding
to the equation (30.1), we obtain
∞ ∞
1
Z Z
f (x) = f (u)eiα(u−x) du dα (30.3)
2π −∞ −∞
This is the complex Fourier integral representation of f on the real line. Now we split the
exponential integrands and the pre-factor 1/(2π) as
∞ ∞
1 1
Z Z
f (x) = √ √ iαu
f (u)e du e−iαx dα (30.4)
2π −∞ 2π −∞
The term in the parentheses is what we will the Fourier transform of f . Thus the Fourier
transform of f , denoted by fˆ(α), is defined as
∞
1
Z
fˆ(α) = √ f (u)eiαu du
2π −∞
The function f (x) in equation (30.5) is called the inverse Fourier transform of fˆ(α). We
shall use F for Fourier transformation and F −1 for inverse Fourier transformation in this
lesson.
Remark: It should be noted that there are a number of alternative forms for the
Fourier transform. Different forms deals with a different pre-factor and power of ex-
ponential. For example we can also define Fourier and inverse Fourier transform in the
following manner.
∞ ∞
1 1
Z Z
f (x) = √ fˆ(α)eiαx dα where fˆ(α) = √ f (u)e−iαu du
2π −∞ 2π −∞
or
∞ ∞
1
Z Z
f (x) = fˆ(α)eiαx dα where fˆ(α) = f (u)e−iαu du
−∞ 2π −∞
or
∞ ∞
1
Z Z
f (x) = fˆ(α)eiαx dα where fˆ(α) = f (u)e−iαu du
2π −∞ −∞
We shall remain with our original form because it is easy to remember because of the
same pre-factor in front of both forward and inverse transforms.
30.2 Properties
We now list a number of properties of the Fourier transform that are useful in their ma-
nipulation.
1. Linearity: Let f and g are piecewise continuous and absolutely integrable functions.
Then for constants a and b we have
Proof: Similar to the Fourier sine and cosine transform this property is obvious and can
be proved just using linearity of the Fourier integral.
2. Change of Scale Property: If fˆ(α) is the Fourier transform of f (x) then
1 ˆ α
F [f (ax)] = f , a 6= 0
|a| a
F [fˆ(x)] = f (−α)
30.3.1 Problem 1
30.3.2 Problem 2
2
Find the Fourier transform of e−ax .
Solution: Using the definition of the Fourier Transform
∞
1
Z
−ax2 ) 2
F (e =√ e−ax eiαx dx
2π −∞
30.3.3 Problem 3
Find the inverse Fourier transform of fˆ(α) = e−|α|y , where y ∈ (0, ∞).
Solution: By the definition of inverse Fourier transform
Z ∞ Z ∞
−1
h
ˆ
i 1 ˆ −iαx 1
F f (α) = √ f (α)e dα = √ e−|α|y e−iαx dα
2π −∞ 2π −∞
Z 0 Z ∞
1 αy −iαx 1
=√ e e dα + √ e−αy e−iαx dα
2π −∞ 2π 0
Combining the two exponentials in the integrands
Z 0 Z ∞
−1
h
ˆ
i 1 (y−ix)α 1
F f (α) = √ e dα + √ e−(y+ix)α dα
2π −∞ 2π 0
Now we can integrate the above two integrals to get
" #0 " #∞
h i 1 e(y−ix)α 1 e−(y+ix)α
F −1 fˆ(α) = √ +√
2π (y − ix) −∞ 2π −(y + ix) 0
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc.. New York.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Arfken, G.B. (2001). Mathematical Methods for Physicists. Fifth Edition, Harcourt Aca-
demic Press, San Diego.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishilers, New Delhi.
Lesson 31
In this lesson we continues further on Fourier transform. Here we discuss some more
properties of the Fourier transform and evolution of Fourier transform of some special
functions. Some applications of Parseval’s identity and convolution property will be
demonstrated.
31.1.1 Theorem
F [f ′ (x)] = −iαfˆ(α).
31.2.1 Theorem
√
The Fourier transform of the convolution of f (x) and g(x) is 2π times the product of the
Fourier transforms of f (x) and g(x), i.e.,
√
F [f ∗ g] = 2πF (f )F (g).
By substituting x − y = t ⇒ dx = dt we get
∞ ∞
1
Z Z
F [f ∗ g] = √ f (y)g(t)eiα(y+t)dt dy
2π −∞ −∞
or Z ∞ Z ∞
f (y)g(x − y)dy = fˆ(α)ĝ(α)e−iαx dα
−∞ −∞
31.3.1 Theorem
If fˆ(α) and ĝ(α) are the Fourier transforms of the f (x) and g(x) respectively, then
Z ∞ Z ∞ Z ∞ Z ∞
(i) fˆ(α)ĝ(α) dα = f (x)g(x) dx (ii) |fˆ(α)|2 dα = |f (α)|2 dα.
−∞ −∞ −∞ −∞
Proof: (i) Use of the inversion formula for Fourier transform gives
∞ ∞ ∞
1
Z Z Z
iαx
f (x)g(x) dx = f (x) √ ĝ(α)e dα dx
−∞ −∞ 2π −∞
31.4.1 Problem 1
Solution: (i) Let fˆ(α) be the Fourier transform of f (x). Then, by the definition of Fourier
transform
Z ∞ Z a
1 1
fˆ(α) = √ iαx
e f (x)dx = √ eiαx dx
2π −∞ 2π −a
1 1
eiαa − e−iαa dx
=√
2π iα
This gives
2 sin aα
fˆ(α) = √
2π α
From the definition of inverse Fourier transform we also know that
∞
1
Z
f (x) = √ fˆ(α)e−iαx dα
2π −∞
Since the integrand is an even function, we get the the desired results
∞
sin α π
Z
=
0 α 2
This implies
∞
sin2 aα
Z
dα = πa
−∞ α2
Since the integrand is an even function we have the desired result as
∞
sin2 aα π
Z
2
dα = a
0 α 2
31.4.2 Problem 2
Apply the convolution theorem to evaluate the Fourier transform of the triangular pulse
function
(
1 − |t|, if |t| < 1;
Λ(t) =
0, otherwise.
Clearly, we have
1 + t, if −1 < t < 0;
Z ∞
(Π ∗ Π)(t) = Π(y)Π(t − y)dy = 1 − t, if 0 < t < 1; = Λ(t)
−∞
0 otherwise.
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 32
In this lesson we provide some miscellaneous examples of Fourier transforms. One of the
major applications of Fourier transforms for solving partial differential equations will not
be discussed in this module. However, we shall highlights some other applications like
evaluating special integrals and the idea of solving ordinary differential equations.
32.1.1 Problem 1
This implies
∞
4 −α cos α + sin α −iαx
Z
f (x) = e dα
2π −∞ α3
Equating real parts, on both sides we get
∞
−α cos α + sin α π
Z
cos αx dα = f (x)
−∞ α3 2
32.1.2 Problem 2
Find the Fourier transformation of the function f (t) = e−at H(t), where
(
0, when t < 0
H(t) =
1, when t ≥ 0
32.1.3 Problem 3
On integrating we obtain
1 1 eiαt a+ǫ
F [δ(t − a)] = lim √
ǫ→0 2π ǫ iα a
1 1 1 iα(a+ǫ) iαa
= lim √ e −e
ǫ→0 2π ǫ iα
1 eiαǫ − 1 1
= √ eiαa lim = √ eiαa
2π ǫ→0 iαǫ 2π
√
With this results we deduce that F −1 (1) = 2πδ(t).
32.1.4 Problem 4
32.1.5 Problem 5
1
Find the inverse Fourier transform of fˆ(α) = .
2π(a − iα)2
Solution: Writing the given function as a product of two functions as
−1
h
ˆ
i
−1 1 1
F f (α) = F √ √
2π(a − iα) 2π(a − iα)
Application of convolution theorem gives
1 −1 1 −1 1 1 −at
e H(t) ∗ e−at H(t)
f (t) = √ F √ ∗F √ =√
2π 2π(a − iα) 2π(a − iα) 2π
Hence we have
( −at
t te
eat √ , if t > 0;
Z
f (t) = √ dx = 2π
2π 0 0, if t < 0.
Thus we get
te−at
f (t) = √ H(t).
2π
32.1.6 Problem 6
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Iorio, R. and Iorio, V. de M. (2001). Fourier Analysis and Partial Differential Equations.
Cambridge University Press. United Kingdom.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Pinkus, A. and Zafrany, S. (1997). Fourier Series and Integral Transforms. Cambridge
University Press. United Kingdom.
Peter, V. O’Neil (2008). Advanced Engineering Mathematics. Cengage Learning (Indian
edition)
Kreyszig, E. (1993). Advanced Engineering Mathematics. Seventh Edition, John Willey
& Sons, Inc., New York.
Lesson 33
The Fourier transform, cosine transform and sine transform are all motivated by the re-
spective integral representations of a function. Applying the same line of reasoning, but
using Fourier cosine and sine series instead of integrals, we obtain the so called finite
transforms. It has applications in solving partial differential equations in finite domain.
Let f (x) be defined in (0, L) and satisfies Dirichlet’s conditions in that finite domain. We
begin with the cosine Fourier series
∞
a0 X nπx
f (x) = + an cos ,
2 L
n=1
where
L
2 nπx
Z
bn = f (x) sin dx, n = 1, 2, ...
L 0 L
We now define the finite Fourier cosine transform as
L
nπx
Z
Fc [f ] = fˆc (n) = f (x) cos dx
0 L
The function f (x) is called inverse finite Fourier cosine transform and is given by
∞
h
ˆ
i 1ˆ 2Xˆ nπx
Fc−1 fc (n) = f (x) = fc (0) + fc (n) cos dx
L L L
n=1
2
Remark: The factor may be associated with either the transformation or the in-
L r
2
verse of the transformation or the factor may be associated with both the transform
L
and the inverse.
Similar to the finite Fourier sine and cosine transform we can also define finite Fourier
transform from complex form of Fourier series as
Z L
−inπx
F [f ] = f (x)e L dx = fˆ(n)
−L
32.3.1 Theorem
Let f (x) and f ′(x) be continuous and f ′′ (x) be piecewise continuous on the interval [0, l],
then
h i
fˆc (n)
′ nπ
(i) Fs f (x) = − L
h ′′ i
nπ 2 ˆ nπ
f (0) + (−1)n+1 f (L)
(ii) Fs f (x) = − L fs (n) + L
h i
fˆc (n) + (−1)n f (L) − f (0)
′ nπ
(iii) Fc f (x) = L
h i
′′ nπ 2 ˆ ′ ′
fc (n) + (−1)n f (L) −
(iv) Fc f (x) = − L f (0)
This implies h ′ i nπ
Fs f (x) = − fˆc (n)
L
(ii) By the definition of finite Fourier transform, we get
L
h ′ i Z ′ nπx
Fc f (x) = f (x) cos dx
0 L
Integrating by parts gives
" Z L #
h ′ i nπx L nπx nπ
Fc f (x) = f (x) cos − f (x) sin dx
L 0 0 L L
Thus we get h ′ i nπ
Fc f (x) = (−1)n f (L) − f (0) + fˆc (n)
L
Repeated applications of these above two will give (ii) and (iv).
32.4.1 Problem 1
Find the finite Fourier sine and cosine transform of f (x) = x2 , if 0 < x < 4.
32.4.2 Problem 2
Find the finite Fourier sine and cosine transform of the function
Solution: For n ≥ 1, we note that |t| cos(nπt) is even and hence by the finite Fourier
cosine transform we have
Z 1
fˆc (n) = f (t) cos(nπt) dt
−1
Z 1
=2 t cos(nπt) dt
0
1 Z 1
t 1
=2 sin(nπt) −2 sin(nπt) dt
nπ t=0 0 nπ
2 (−1)n − 1
1 h i1
= 0 + 2 2 cos(nπt) =
n π t=0 n2 π 2
For n = 0, we find Z 1
fˆc (n) = |t| dt = 1
−1
Now, we notice that |t| sin(nπt) is odd and, therefore, finite Fourier sine transform is cal-
culated as Z 1
fˆs (n) = f (t) sin(nπt) dt = 0
−1
32.4.3 Problem 3
Suggested Readings
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Folland, G.B. (1992). Fourier Analysis and Its Applications. Indian Edition. American
Mathematical Society. Providence, Rhode Islands.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Hanna, J.R. and Rowland, J.H. (1990). Fourier Series, Transforms and Boundary Value
Problems. Second Edition. Dover Publications, Inc. New York.
Lesson 34
∂2 z ∂z ∂ 2 z ∂z
2 2 + 3 + 2 + 5 = sin( x + y ) .
∂x ∂x ∂y ∂y
∂z ∂z ∂ 2 z ∂ 2 z ∂ 2 z
f ( x, y , z , , , 2 , 2 , ,.....) = 0 (34.1)
∂x ∂y ∂x ∂y ∂xy
Evidently, for each point (x,y) in Ω Subset of 2 , there exists a value for z(x,y),
and this set of points {(x, y, z)} generates a surface in 3 . z=z(x,y), is the
solution of a p.d.e. In the same manner, one has to visualize higher dimensional
surfaces as solutions of p.d.es involving 3 or more independent variables.
Order of the p.d.e: The highest order Partial derivative in the equation decides
∂z ∂z
3
radius a is called the domain while the circle with its boundary x 2 + y 2 ≤ a 2 is
called the region.
In general, the partial differential equation is assumed to take values from the
interior of the region. The boundary and initial conditions are specified on its
boundary.
∂2 z ∂2 z
Observe that (i) z ( x, y=
) ( x + y ) satisfies the p.d.e
3
− =
0.
∂x 2 ∂y 2
∂z ∂2 z ∂z
The partial derivatives are = 3( x + y ) 2 , = 6( x + y ) , = 3( x + y ) 2 and
∂x ∂x 2
∂y
∂2 z
= 6( x + y ) . Also observe (ii) z ( x=
, y ) sin( x − y ) is a solution of the same
∂y 2
∂z ∂2 z
p.d.e, as:= cos( x − y ) , 2 = − sin( x − y ) and
∂x ∂x
∂z ∂2 z
− cos( x − y ) , 2 =
= − sin( x − y ) satisfies the same equation.
∂y ∂y
This illustrates that a partial differential equation can have more than one
solution i.e., uniqueness of solution is not seen.
When we restrict to the first order partial derivatives of z(x,y) in equation (1),
we get a first order p.d.e. The most general form of a non- linear 1st order p.d.e
∂z ∂z
may be written as f ( x, y , z , , )=0 (34.2)
∂x ∂y
The first order p.d.e given by (1) is said to be a linear equation if it is linear z,
∂z ∂z
and . It is of the form
∂x ∂y
∂z ∂z
A( x, y ) + B ( x, y ) + C ( x, y ) z =
S ( x, y ) (34.3)
∂x ∂y
∂z ∂z ∂z ∂z
Examples are: x + y =xyz + xy , + =
1.
∂x ∂y ∂x ∂y
∂z ∂z
Equation (34.2) is said to be a semi-linear p.d.e if it is linear in and and
∂x ∂y
∂z ∂z
the coefficient of and are functions of x and y only. The semi linear
∂x ∂y
p.d.e may be written as
∂z ∂z
A( x, y ) + B ( x, y ) = C ( x, y , z ) (34.4)
∂x ∂y
∂z ∂z ∂z ∂z
Examples are : x +y = xyz 3 , x −y = sin z .
∂x ∂y ∂x ∂y
∂z ∂z
Equation (2) is called a quasi-linear p.d.e if it is linear in and and it is
∂x ∂y
written in the form
∂z ∂z
A( x, y, z ) + B ( x, y , z ) = C ( x, y , z ) (34.5)
∂x ∂y
∂z ∂z
An example is (x 2
− z2 )
∂x
+ xy =
∂y
yz 2 + x 2
We use these notations for the first order partial derivatives of z = z ( x, y ) as:
∂z ∂z
p= ,q = .
∂x ∂y
Example 1: Eliminate the arbitrary function F from the below given surfaces:
(i) z = x + y + F ( xy )
(ii) F ( x − z , y − z ) =
0.
Solution:
(i) we eliminate the arbitrary function by finding the partial derivatives p and q.
∂z dF ( xy ) ∂ ( xy ) dF ( xy )
p= =
1+ . =
1+ y
∂x d ( xy ) ∂x d ( xy )
∂z dF ( xy ) ∂ ( xy ) dF ( xy )
q= =
1+ . =
1+ x
∂y d ( xy ) ∂y d ( xy )
dF ( xy )
Eliminating from the above, we obtain xp − yq =x − y which is the
d ( xy )
required p.d.e. Further, note that it is a linear p.d.e
∂F ∂u ∂F ∂v ∂F ∂F
p= + = (1 − p ) + (− p=
) 0
∂u ∂x ∂v ∂y ∂u ∂u
∂F ∂u ∂F ∂v ∂F ∂F
Similarly q= + = (−q) + (1 − q )= 0
∂u ∂y ∂v ∂y ∂u ∂u
∂F ∂F
Eliminating and from the above two equation we get
∂u ∂v
− pq + (1 − q )(1 − p ) =
0⇒ p+q=
1 which is the required linear p.d.e.
Solution:
(i) Note that in the surface 2 z = (ax + y ) 2 + b , the arbitrary constant a is non-
linearly involved. Differentiating partially we obtain,
=
2q 2(ax + y )1 or q = (ax + y ) ⇒ qy = y 2 + axy
Also 2 z = q 2 + b ⇒ b = 2 z − q 2
q 2 = 2 z − b = (ax + y ) 2 = px + qy
∴ px + qy =
q 2 is the required non-linear p.d.e
(ii)=
z ax + by ; the arbitrary constants a & b are linearly involved in the
function. Differentiating partially we obtain p=a and q=b. So the required
p.d.e is xp+yq=z which is a linear p.d.e.
The following are some Standard Partial Differential Equations which occur
in physics:
∂z ∂z
1. + =
0 (Transport equation)
∂x ∂y
∂u 2 ∂u 2
2. + =
1 (Eikonal equation)
∂x ∂y
∂ 2u ∂ 2u
3. 2 − 2 = 0 (Wave equation)
∂t ∂x
∂u ∂ 2u
4. − =
0 (Heat or Diffusion equation)
∂t ∂x 2
∂ 2u ∂ 2u
5. 2 + 2 = 0 (Laplace equation)
∂x ∂y
∂ 2u 2 ∂ 2u ∂ 2u
6. − . =
f ( x, y ) (Monge-Ampere equation)
∂x∂y ∂x 2 ∂y 2
In the above, equations 34.1, 34.3, 34.4, 34.5 are linear and homogeneous
equations while equations 34.2, 34.6 are non-homogeneous equations. In
equation 6, if f ( x, y ) ≡ 0∀( x, y ) ∈ Ω , the equation is a non-linear and
homogeneous equation.
The Linear equation (34.3) can be written in the operator form as:
Lz = S (34.6)
∂ ∂
L: A + B +C
∂x ∂y
∂2 z ∂2 z
Let us check the linearity of equation + =
0. (34.7)
∂x 2 ∂y 2
Definition: An operator L is said to be linear if and only if, for two functions
z1 ( x, y ) and z2 ( x, y ) with arbitrary constants c1 and c2 ∈ , the following
property holds. :
8
Now consider
∂2 ∂2
L[c1 z1 ( x, y ) + c2 z2 ( x, y )]= (c1 z1 + c2 z2 ) + 2 (c1 z1 + c2 z2 )
∂x 2 ∂y
∂2 z ∂2 z ∂2 z ∂2 z
= c1 21 + c2 22 + c1 21 + c2 21
∂x ∂x ∂y ∂y
∂ 2 z1 ∂ 2 z2 ∂ 2 z1 ∂ 2 z2
= c1 2 + 2 + c2 2 + 2
∂x ∂x ∂y ∂y
= c1Lz1 + c2 Lz2
∂ 2 z1 ∂ 2 z2
∴ + =
0 is a linear equation.
∂x 2 ∂x 2
∂z ∂z
Let us test the equation +z = 0 (34.8)
∂x ∂y
∂ ∂
for linearity. Consider (c1 z1 + c2 z2 ) + (c1 z1 + c2 z2 ) (c1 z1 + c2 z2 )
∂x ∂y
∂z1 ∂z ∂z ∂z
= c1 + c2 2 + (c1 z1 + c2 z2 ) c1 1 + c2 2
∂x ∂x ∂y ∂y
∂z ∂z ∂z ∂z
≠ c1 1 + c2 2 + c1 z1 1 + c2 z2 2
∂x ∂x ∂y ∂y
Exercise 1
Eliminate the arbitrary function / constants from the following surfaces to form
an appropriate partial differential equation.
1
=
(iii) z F [( x + y ) ] (iv) z =
2 2 2
xy + F ( x 2 + y 2 )
xy
(v) z = F
z
References
Suggested Reading
10
Lesson 35
∂F ∂2 F ∂2 F
∂a ∂x∂a ∂y∂a
definition Ω , the rank of the matrix A = is 2.
∂F ∂2 F ∂2 F
∂b ∂x∂b ∂y∂b
Solution:
F ( x, y , z , a , b ) = ( x − a ) + ( y − b ) + z 2 − 1
2 2
∂F ∂2 F ∂2 F
=
−2( x − a ) , = −2 , =0,
∂a ∂x∂a ∂y∂a
∂F ∂2 F ∂2 F
=
−2( y − b) , = −2 , = 0.
∂b ∂x∂b ∂y∂b
2(a − x) −2 0
The matrix A= is with a non-vanishing 2x2 minor and hence
2(b − x) 0 −2
its rank is 2. We check whether the given surface is a solution of the given
x−a y −b
p.d.e. We find p= and q = and using p and q in z 2 (1 + p 2 + q 2 ) =
1,
−z −z
x − a 2 y − b 2
z 1 + + = ( x − a) + ( y − b) + z2 =
2 2 2
we see
1 or 1 is the given
−z −z
surface that is satisfying the p.d.e.
Exercises
y
1. Show that z = ax + + b is a complete integral of pq = 1 .
a
(ii) General Integral: The general integral is also a solution of the partial
differential equation that involves an arbitrary function. In the two parameter
family of solutions z = F ( x, y, a, b) , take a = ϕ (b) , we get a one parameter
family of solutions of f ( x, y, z , p, q ) = 0 . We obtain z = F ( x, y,ϕ (b), b) which is
a subfamily of the given two parameter family of complete integral of
f ( x, y , z , p , q ) = 0 . Find the envelope of this one parameter sub-family by
eliminating b between z = F ( x, y,ϕ (b), b) and
∂F ( x, y,ϕ (b), b) / ∂F ( x, y,ϕ (b), b)
ϕ (b) + =
0 if exists. This way we will be able
∂a ∂b
to find b = b( x, y ) and substituting for b in the one parameter sub family, we
obtain z = F ( x, y,ϕ (b( x, y )), b( x, y )) . If the function ϕ which defines this sub-
family is arbitrary, then such a solution is called a general integral of
f ( x, y, z , p, q ) = 0 . Different choices of ϕ give different particular integrals of
the p.d.e. Let us illustrate this with examples.
∂z
Example 2: Find the general solution of the equation +z=e− x .
∂x
Solution:
∂z
Integrating the homogeneous equation +z=0 with respect of x, holding y
∂x
as a constant, we obtain z ( x, y ) = e − x f ( y ) where f is an arbitrary function
which is a continuously differentiable function of y.
Let us now derive the form of the general solution of the linear first order
homogeneous equation.
A( x, y ) z x + B( x, y ) z y + C ( x, y ) z z =
0 (35.1)
∂ξ ∂ξ
∂x ∂y
=J ≠ 0 on Ω .
∂y ∂y
∂x ∂y
∂z ∂z ∂ξ ∂z ∂η ∂z ∂z ∂ξ ∂z ∂η
Clearly, = + =
, and + (35.2)
∂x ∂ξ ∂x ∂η ∂x ∂y ∂ξ ∂y ∂η ∂y
∂ξ ∂ξ ∂z ∂η ∂η ∂z
∂x + + + + Cz =
∂y ∂ξ ∂x ∂y ∂η
A B A B 0 (35.3)
∂η ∂η
Chose η such that A ∂x + B ∂y =
0 (35.4)
∂y B ( x, y )
= (35.5)
∂x A( x, y )
∂η ∂η
we have dη ( x, y= = 0 or
) dk dx + dy =
0
∂x ∂y
The one parameter family of curves given by (35.6) that are obtained from
equation (35.5) are called characteristic curves of the differential equation
(35.1).
=
Now chose ξ ξ=
( x, y ) x , such that
1 0
J= J= = η y ≠ 0 ∀( x, y ) ∈ Ω
ηx η y
∂z
A(ξ ,η ) + C (ξ ,η ) z =
0 (35.7)
∂ξ
This equation is called canonical form for the linear partial differential equation
(35.1). This can be solved as an o.d.e.
A( x, y ) z x + B( x, y ) z y + C ( x, y ) z z =
D ( x, y ) (35.8)
gets transformed to
A(ξ ,η ) zξ + C (ξ ,η ) z =
D(ξ ,η ) (35.9)
We describe the Lagrange method for finding the general integral of the given
quasi-linear p.d.e in the next lesson.
x z − y zy + =
y 2 z y 2 , ( x, y ) ≠ 0
Solution:
Given A( x, y ) =
x, B ( x, y ) =
− y , C ( x, y ) =
y 2 , D ( x, y ) =
y2
dy y
Now equation (35.5) gives = − which gives its general solution as
dx x
xy = k where k is an arbitrary constant. Now set ξ = x , η = xy as the co-
ordinate transformation; This gives J= x ≠ 0 .
dz dz dξ dz dη dz dz dξ dz dη
Now = + =
zξ .1 + zη . y , and = + = y zη ,
dx dξ dx dη dx dy dξ dy dη dy
η η2 η2
A( x, y )= x= ξ , B( x, y ) =− y =− , C ( x, y ) = 2 , D( x, y ) = 2 .
ξ ξ ξ
∫ ξ 3 dξ η 2 ∫ ξ 3 dξ
η 2
η 2
−
z (ξ ,η ) e
= f (η ) + ∫ 3 e dξ
ξ
η2 − 2
η 2
= e 2ξ 2
f (η ) + e 2ξ
η2
= f (η )e 2ξ 2
+ 1 , where f (η ) is an arbitrary function.
y2
Thus =
z ( x, y ) f ( xy )e 2
+ 1 is the general solution of the given p.d.e.
Solution:
Given =
A( x, y ) x= ; C ( x, y ) η=
; B( x, y ) y= , D( x, y ) 0 , equation (35.5) gives
dy y 1 1 x
=⇒ dx =dy , leading to ln=
x ln k + ln y or =k as its
dx x x y y
x
ξ x=
characteristic curve. Now set= ,η ,
y
x
A( x, y ) ξ=
− 2 ≠ 0 . Also, note that=
J= , C ( x, y ) η , and the canonical form
y
η
for the given p.d.e is ξ zx + η z =
0 , or zξ + z=
0 . Its general solution is
ξ
x
z (ξ ,η ) = ξ − n f (η ) or z ( x, y ) = x − n f where f is an arbitrary function.
y
(i) x z x + y z y =
xn
(iii) Singular integral: Find the envelope of the two parameter family of
solutions z = F ( x, y, a, b) , if exists. This is obtained by eliminating a and b
∂F ( x, y, a, b) ∂F ( x, y, a, b)
from the equations z = F ( x, y, a, b) , = 0, = 0 . This is
∂a ∂b
called the singular integral of the given p.d.e.
Solution:
∂F ( x, y, a, b)
= 0 ⇒ x + 2a = 0,
∂a
∂F ( x, y, a, b)
= 0 ⇒ y + 2b = 0,
∂b
References
Suggested Reading
Lesson 36
P( x, y, u )u x + Q( x, y, u )u y =
R ( x, y , u ) (36.1)
f ( x, y=
, u ) u ( x, y ) =
−u 0 (36.2)
as:
( P, Q, R)(u x , u y , −1) =
0 (36.3)
This shows that the vector ( P, Q, R) must be a tangent vector of the surface
given by (36.2) at ( x, y, u ) and this determines a direction filed called the
Characteristic Direction for the integral surface for of the given p.d.e. In brief;
f(x,y,u) = u(x,y) - u = 0 is a solution of (1) if and only if the direction vector
field (P,Q,R) lies in the tangent plane of this integral surface at each point
( x, y , u ) .
u x , u x , −1
u normal
(P,Q,R)
x, y , u
tangent plane
o
y
=x x=
(t ), y y=
(t ), u u (t ) (36.4)
dx dy du
then the tangent vector to this curve is , , which must coincide with
dt dt dt
( P, Q, R ) .
2
296 WhatsApp: +91 7900900676 www.AgriMoon.Com
Geometric Interpretation of a First Order Equation
dx dy du
= P=( x, y, u ); Q=
( x, y, u ); R ( x, y , u ) (36.5)
dt dt dt
These are called the characteristic equations of the Quasi linear equation (36.1).
Its solution consist of a 2-p family of curves in (x,y,u) – space.
dx dy du
= = (35.6)
P Q R
The general solution of the quasi-linear partial differential equation (also known
as the Lagrange’s equation) P( x, y, u )u x + Q( x, y, u )u y =
R( x, y, u ) is written as
Example 1: Find the general integral of the quasi linear p.d.e yzz x + xzz y =
xy .
Solution:
dx dy dz
The characteristic system is: = =
yz xz xy
3
297 WhatsApp: +91 7900900676 www.AgriMoon.Com
Geometric Interpretation of a First Order Equation
dx dy
(i) = ⇒ x 2 − y 2 = C1
y x
dy dz
(ii) = ⇒ z 2 − y 2 = C2
z y
Solution:
dx dy dz
The characteristic system is = =
1 z 0
1) z ( xz x − yz y ) =y 2 − x 2
2) yzz x + xzz y =
x+ y
3) x( y − z ) z x + y ( z − x) z y = z ( x − y )
4
298 WhatsApp: +91 7900900676 www.AgriMoon.Com
Geometric Interpretation of a First Order Equation
Now, let us consider the extension of the linear equation in 3-variable for the
function u ( x, y, z ) as
A( x, y, z )u x + B( x, y, z )u y + C ( x, y, z )u z =
0.
dx dy dz
= =
A( x, y, z ) B ( x, y, z ) C ( x, y, z )
∂g ∂g ∂g
∂x ∂y ∂z
is 2.
∂h ∂h ∂h
∂x ∂y ∂z
Solution:
The characteristic curves are obtained from the characteristic system
5
299 WhatsApp: +91 7900900676 www.AgriMoon.Com
Geometric Interpretation of a First Order Equation
dx dy dz
= = .
( y − z ) ( z − x) ( z − y )
Note that dx + dy + dz = ( y − z + z − x + x − y ) = 0 ,
and xdx + ydy + zdz = x( y − z ) + y ( z − x) + z ( x − y ) = 0 .
and h( x, y, z ) = x 2 + y 2 + z 2 = C2 .
1) x( y − z )u x + y ( z − x)u y + z ( x − y )u z =
0
2) xu x + yu y + zu z =
u.
References
6
300 WhatsApp: +91 7900900676 www.AgriMoon.Com
Geometric Interpretation of a First Order Equation
Suggested Reading
7
301 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations
Lesson 37
Thus fixing the arbitrary function in the general solution of the given p.d.e by
making it to pass through the given initial data is called the Cauchy Problem.
Suppose z = z ( x, y ) is the integral surface passing through the initial data curve
C then we require that the equations φ ( x0 ( s ), y0 ( s ), z0 ( s ) ) = C1 and
Find the integral surface that satisfies the Cauchy data z = 0 on the curve y = 2 x .
Solution:
dx dy dz
= = 2
z( x + y) z( x − y) x + y 2
1
C1 and xy − z 2 =
On integrating we get z 2 − x 2 + y 2 = C2
2
or 3s 2 = C1 ; 4s 2 = C2
3 C1
⇒ = ⇒ 4C1 =
3C2
4 C2
Thus the solution is written as:
4 ( z 2 − x 2 + y 2=
) 3( 2 xy − z 2 ) or 7 z 2 = 6 xy + 4 x 2 − 4 y 2
t 3
Put 4s 2 = t ⇒ s 2 = leads to 3s 2 = t
4 4
3
∴ G (t ) = t
4
3
This gives the integral surface as z 2 = ( x 2 − y 2 ) + (2 xy − z 2 )
4
or 7 z 2 =6 xy + 4( x 2 − y 2 ).
Solution:
dx dy dz
Step 1: The characteristic system is = =
2 xy − 1 z − 2 x 2
2( x − yz )
Note that 0 ⇒ u = xz + y = C1
(i) zdx + dy + xdz =
∴ Integral surface is F ( xz + y, x 2 + y 2 + z ) =
0
or x 2 + y 2 − xz − y + z − 1 =0.
The following result ensures the existence and uniqueness of an integral surface
for the Cauchy problem.
dx0 dy
B ( x0 ( s ), y0 ( s ), z0 ( s ) ) − 0 A ( x0 ( s ), y0 ( s ), z0 ( s ) ) ≠ 0 ….. (37.1)
ds ds
Note: The condition given in (37.1) excludes the possibility that the initial
=
curve x x=
0 ( s ), y y0 ( s ) could be a characteristic. Let us now illustrate an
example where the given p.d.e has a unique, no, infinitely many solutions
with the Cauchy data.
=x x=
0 (s) s ,=
y y0 (=
s ) 0,=
z z0 (=
s) s 2
1⋅ −s − 0 ⋅ 0 = −s ≠ 0
Thus =
z x 2 + y 2 is the required integral surface.
This the condition fails, so either there is no solution or there are infinitely
many solutions. (i.e., either existence of the solution is lost or the uniqueness of
the solution is lost). The integral surface=z F ( x2 + y 2 ) becomes
curve. In this case, it is to be noted that the initial data curve is a characteristic
curve.
1
Example 4: Solve the p.d.e zz x + z y = with the initial condition
2
s
z ( s , s=
) ,0 ≤ s ≤ 1 .
4
Solution: The initial curve satisfies the condition given in (37.1) for s ≠ 4 . The
characteristic system can also be written as:
dx dy dz 1
= z ,= 1,= with the initial conditions
dt dt dt 2
s
= , y ( s,0) s , z ( s,0) = .
x( s,0) s=
4
Solving the above system of ordinary differential equations using the initial
conditions, we get
1 1 s
z= t + C1 ( s,0) = t+ , y=
t + C2 ( s,0) =
t + s and
2 2 4
dx 1 s t 2 ts t 2 ts
= z= t + or x = + + C3 ( s,0) or x = + + s.
dt 2 4 4 4 4 4
4x − y2 4( y − x)
s= and t =
4− y 4− y
Hence the integral surface having the given Cauchy data is:
s t 1 4 x − y 2 1 4( y − x) 8 y − 4x − y2
z= + = + or z = for y= s ≠ 4.
4 2 4 4 − y 2 4 − y 4(4 − y )
Exercises:
data curve=
x x0 (=
s ) s= s ) s 2 , z= z0 ( s=
, y y0 (= ) s, 1 ≤ s ≤ 2 .
2. Find the solution of p − zq + z =0 for all y and x > 0 , for the initial data
x0 = 0, y0 = s, z0 = −2 s, −∞ < s < ∞ .
3. Show that the integral surface for the p.d.e p + q =z 2 with the initial
f ( x − y)
condition z ( x,0) = f ( x) is z ( x, y ) = .
1 − yf ( x − y )
References
Suggested Reading
Lesson 38
f ( x, y , z , p , q ) = 0 (38.1)
and
g ( x, y , z , p , q ) = 0 (38.2)
are said to be compatible if they have common solutions. In fact these two
equations admit a one parameter family of common solutions under some
conditions.
∂( f , g )
(i) =J ≠ 0 on Ω (38.3)
∂ ( p, q )
and
(ii) =p φ=
( x, y, z ), q ψ ( x, y, z ) (38.4)
dz φ dx +ψ dy
= (38.5)
Result: A necessary and sufficient condition for the integrability of the equation
(38.5) is:
∂( f , g ) ∂( f , g ) ∂( f , g ) ∂( f , g )
+p + +q =
0.
∂ ( x, p ) ∂ ( z , p ) ∂ ( y, q) ∂( z, q)
We now consider some examples to check compatibility condition for the given
equations.
Solution:
∂( f , g ) fp gp x x2
J== = = x(1 + xy ) ≠ 0
∂ ( p, q ) fq gq − y 1
Example 2: Find the one parameter family of common solutions to the p.d.es
f = p 2 + q 2 − 1= 0 and g = (p 2
+ q 2 ) x − pz = 0 .
Solution:
Step 1: Let us find the domain in which these equations admit common
solutions:
2p 2q
=J = 2qz , J ≠ 0 ⇒ z ≠ 0 in Ω
2 px − z 2qx
x z 2 − x2
This gives p= = φ ( x, y, z ) and q 2 =1 − p 2 ⇒ q = =ψ ( x, y, z ) (say)
z z
x z 2 − x2
dz φ dx +ψ dy leads to =
Step 3: Integrability of = dz dx + dy
z z
Solution:
Step 1: J ≠ 0 ⇒ x ≠ 0 (we always assume that both p and q are non zero).
y x
we obtain p= = φ ( x, y, z ) and q= = ψ ( x, y, z )
z z
dz φ dx + ϕ dy
Step 3: Integrability of = ⇒ zdz = ydx + xdy
Exercise:
that=
z x( y + 1) is a solution of f = 0 but not of g = 0 . Hence conclude that “not
( x + y)
2. Show that z = is a solution of f = p 2 + q 2 − 1= 0 and not of
2
g= (p 2
+ q 2 ) x − pz = 0 though f = 0 and g = 0 are compatible. Also find the 1-
References
Suggested Reading
Lesson 39
39.1 Non – linear p.d.e of 1st order complete integral – Charpit’s method.
f ( x, y , z , p , q ) = 0 (39.1)
g ( x, y , z , p , q , a ) = 0 (39.2)
Choose equation (38.2) such that (a) equations (38.1) and (38.2) on solving for
p and q give
p = φ ( x, y, z , a ) and q = ψ ( x, y, z , a ) (39.3)
dz φ dx +ψ dy
= (39.4)
F ( x, y , z , a , b ) = 0 (39.5)
containing two arbitrary constants a and b will form the complete integral of
(39.1).
∂g ∂g ∂g ∂g ∂g
[ f , g] = fp
∂x
+ fq
∂y
+ ( pf p + qf q )
∂z ∂p
(
− ( f x + pf z ) − f y + qf z
∂q
= )
0. (39.6)
∂( f , g ) ∂( f , g ) ∂( f , g ) ∂( f , g )
[ f , g] = +p + +q =0
∂ ( x, p ) ∂ ( z , p ) ∂ ( y, q) ∂( z, q)
Equation (39.6) is a quasi linear first order p.d.e for g with x, y, z, p and q as the
independent variables, and the corresponding characteristic system is
dx dy dz dp dq
= = = − =
− . (39.7)
f p f q pf p + qf q f x + pf z f y + qf z
Solution:
dx dy dz dp dq
= = =
− =
− 2.
xq 2 yq + xp 2 xpq + 2 yq 2
pq q
dp dq
For finding g = 0 chose = ⇒ p=
aq.
pq q 2
a 1 adx + dy
such =
that dz dx + dy Is integrable, i.e. dz =
ax + y ax + y ax + y
Fa Fax Fay
be written as F ( x, y, z, a, b) = 0 . we also note that the matrix is of
Fb Fbx Fby
Example 2: Solve f =q + xp − p .
2
dx dy dz dp dq
= = = = .
x − 2 p 1 −2 p + xp + q − p 0
2
Chose g ( x, y, z , p, q, a ) = 0 as p = ae − y or g =−
p ae − y =
0.
Solving=
f 0,= −axe − y + a 2 e −2 y .
g 0 for q , we get q =
Then =
dz pdx + qdy becomes dz = ae − y dx + ( a 2 e −2 y − axe − y ) dy
1
On integrating we get z =axe − y − a 2 e −2 y + b as the complete integral of f = 0
2
where a and b are arbitrary constants.
Exercises:
f = (p 2
+ q 2 ) y − qz = 0 .
=
f x 2 p 2 + y 2 q 2 −=
4 0.
4. Solve 16 p 2 z 2 + 9q 2 z 2 + 4 z 2 − 4 =0.
5. Solve p= ( z + qy ) .
2
References
Suggested Reading
Lesson 40
Special Types of First Order Non-Linear p.d.e
We now consider 4 special types of first order non-linear p.d.es for which the
complete integral can be obtained easily. The underlying principle in the first
three types is that of the Charpit’s method.
Here=
f x 0,=
f y 0,=
fz 0 .
dx dy dz dp dq
= = = = .
f p f q pf p + qf q 0 0
Then =
dz adx + Q(a )dy , on integration we get z =
ax + Q(a ) y =
b as the complete
integral of f ( p, q) = 0 .
Example 1
dx dy dz dp dq
Solution: We have = = = = .
1 − q − p + 1 p + q − 2 pq 0 0
a
p (1 − a ) + a =0 ⇒ p= .
a −1
a a
∴=
dz dx + ady ⇒=z x +=
ay b is the complete integral.
a −1 a −1
dp dq
From the characteristic system of equations we consider = , on solving we
p q
dz
∫ Q ( a, z ) = ax + y + b as the complete integral.
Example 2
=
z p2 + q2
z
q= 1
.
(1 + a ) 2 2
a z 1
=dz 1
dx + 1
z dy
(1 + a ) 2 2
(1 + a )2 2
1 adx + dy 1
or ∫=dz ∫ = 1 1
(ax + y ) + b ,
z
(1 + a )
2 2
(1 + a )2 2
or on simplifying we get
2
320 WhatsApp: +91 7900900676 www.AgriMoon.Com
Special Types of First Order Non-Linear p.d.e
dx dy dz dp dq
= = = =
g p −hq pg p − qhq − g x hy
Since g ( x, p) = h( y, q) ⇒ h( y, q) = a ,
Example 3:
Solve p − x 2 =q + y 2 .
dx dy dz dp dq
Solution: The auxiliary equations are = = = = .
1 −1 p + q 2x 2 y
x3 y3
z= ∫ ( a + x ) dx + ∫ ( a − y ) dy +b , or z = ax + + ay − + b is the complete solution.
2 2
3 3
x+ha 1 0
matrix is two.
y + hb 0 1
3
321 WhatsApp: +91 7900900676 www.AgriMoon.Com
Special Types of First Order Non-Linear p.d.e
Example 4
Exercises
1. p 2 + q 2 =
9.
2. pq + p + q =0.
3. z = px + qy + p 2 q 2 .
4. p(1 − q 2 ) = q(1 − z ) .
5. 1 + p 2 =
qz .
6. q + px =
p2 .
7. p − q + 3x =
0.
8. xyp + qy + pq =
yz .
9. z ( p 2 + q 2 ) + px + qy =
0.
References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi
4
322 WhatsApp: +91 7900900676 www.AgriMoon.Com
Special Types of First Order Non-Linear p.d.e
Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi
J. David Logan, (2004). Partial Differential Equations. Springer(India)
Private Ltd. New Delhi
5
323 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations
Lesson 41
A2nd order semi linear partial differential equation can be put in the form
Lu + g ( x, y, u , u x , u y ) =
0 (41.1)
∂2 ∂ ∂2
where the linear operator L ≡ R( x, y ) 2 + S ( x, y ) + T ( x, y ) 2 is such that
∂x ∂x∂y ∂y
the coefficient functions R, S and T are continuous function of x , y and
R2 + S 2 + T 2 ≠ 0 .
The coefficients and the partial derivatives in the given equation are written in
terms of the transformed variables. The first and second order partial derivatives
become:
u x uξ ξ x + uηη x ; =
= u y uξ ξ y + uηη y
∴ Ru xx + Su xy + Tu=
yy uξξ ( Rξ x 2 + Sξ xξ y + T ξ y 2 )
+uξη (2 Rη xξ x + S (η yξ x + ξ yη x ) + 2T ξ yη y )
+uηη ( Rη x 2 + Sη xη y + T ξ y 2 ) + F (ξ ,η , uξ , uη , u ) .
1
B(u1 , v1; u2 , v2 ) = Ru1u2 + S (u1v2 + u2v1 ) + Tv1v2 (41.4)
2
Now, the problem is to determine ξ &η so that the equation (41.2) takes the
simplest possible (Canonical) form.
∂ξ ∂ξ ∂y ∂y
+ λ1 ; λ2
=
∂x ∂y ∂x ∂y
dy dy
+ λ1 ( x, y ) =0; + λ2 ( x, y ) =0 respectively.
dx dx
Example 1
Reduce the equation u xx − x 2u yy =
0 to a canonical form.
Solution
comparing with the standard form, we note that =
R 1,=
S 0,=
T x2
Then S 2 − RT = 4 x 2 > 0 .
0 becomes α 2 − x 2 =
So Rα 2 + Sα + T = 0 ⇒α =±x .
⇒ λ1 =x; λ2 =
−x
dy 1
Now + x = 0 ⇒ y + x 2 = c1
dx 2
dy 1
− x = 0 ⇒ y − x 2 = c2
dx 2
1 1
Taking ξ= y + x 2 ;η= y − x 2
2 2
u x = uξ ξ x + uηη x = uξ x + uη (− x) = uξ x − uη x
u=
y uξ + uη u xx= x 2uξξ − 2 x 2uξη + x 2uηη + uξ − uη u yy =uξξ + 2uξη + uηη
, ,
1 1
∴ u xx − x 2u yy = =
0 becomes uξη = (u − u ) (uξ − uη ).
4 (ξ − η )
ξ η
4x2
Case B: If S 2 − 4 RT =
0
∂ 2u
Hence the canonical form in this case is = φ (ξ ,η , u , u x , u y ) (41.7)
∂η 2
Example 2
u xx + 2u xy + u yy =
0 canonical form.
Solution
Rα 2 + Sα + T =α 2 + 2α + 1 =(α + 1) =0 ⇒ α =−1, −1 .
2
dy
∴ − 1 = 0 ⇒ x − y = c1 , take ξ= x − y
dx
Then chose η= x + y .
∂ 2ξ
Using these ξ &η : we have the canonical form as =0
∂η 2
ξ η f1 (ξ ) + f 2 (ξ ) where f1 & f 2 are arbitrary functions.
⇒=
Hence the solution of the given equation is:
z = ( x + y ) f1 ( x − y ) + f 2 ( x − y ).
Case C: S 2 − 4 RT < 0 .
In this case, the roots of the equation Rα 2 + Sα + T =
0 are complexconjugates.
∂ 2u
Proceeding as in case A; the canonical form = φ (ξ ,η , u , u x , u y ) .
∂η 2
But ξ &η are complex conjugates.To get the real canonical form, we use the
1 1 ∂ 2u 1 ∂ 2u ∂ 2u
transformation α = (ξ + η ), β = i (η − ξ ) ⇒ = + .
2 2 ∂ξ ∂η 2 4 ∂α 2 ∂β 2
∂ 2u ∂ 2u
So the canonical form in this case is + ϕ (α , β , u , uα , uβ ) .
=
∂α 2 ∂β 2
Example 3
Solution
Clearly, =
R 1,=
S 0,=
T x 2 , and S 2 − 4 RT < 0 .
α 2 + x 2 =⇒
0 α=±ix , hence λ1 = ix; λ2 = −ix ,
1 1 1 2
ξ=iy + x 2 ;η2 = x2 ,α
−iy + = =x ;β y
2 2 2
1
⇒ uαα + uββ =
− uα is the canonical form
2α
No we classify second order equation of the type (41.1) by their canonical form as:
A) Hyperbolic if S 2 − 4 RT > 0 , B) Parabolic if S 2 − 4 RT =
0,
C) Elliptic if S 2 − 4 RT < 0 .
Clearly the one dimensional wave equation given by utt = c u xx is an example for
2
Example 4
u xx + 2 xu xy + (2 − y 2 )u yy =
0
Discuss the nature of the equation
Solution
Clearly S 2 − RT = ( x 2 + y 2 − 2) .
Hence the given equation isHyperbolic at all points ( x, y ) such that x + y > 2 ,
2 2
Parabolic if x 2 + y 2 =
2 and Elliptic if x 2 + y 2 < 2 .
Exercises
1. Reduce the equation to its canonical form and classify
it: utt + 4utx + 4u xx + 2u x − ut =
0
.
2. Classify the partial differential equation:
References
Suggested Reading
Lessons 42
42.1 Introduction
(D n
+ k1D n−1D′ + ... + kn D′n ) z =f ( x, y ) or F ( D, D′) z = f ( x, y ) where
∂ ∂
F ( D, D′) = ∑∑ Crs D r D′s , Crs are constants=
&D = ; D′
z s ∂x ∂y .
Let us find the Complementary function for this equation.
We have F ( D, D′)u = 0
and F ( D, D′) z1 = f ( x, y )
∴ F ( D, D′) ( u + z1 ) =
f ( x, y ) .
(D 2
− D′2 ) =( D + D′ )( D − D′ ) .
Result 3: If the operator F ( D, D′) is reducible, the order in which the linear
factors occur is unimportant. Any reducible operator can be written in the form.
We have (α r D + β r D′ + rr ) (α s D + β s D′ + rs )
γ x
exp − r φr ( β r x − α r y )
ur =
αr
is a solution of the equation F ( D, D′) z = 0 .
γ γ x
− r ur + β r exp r φ ′ ( β r x − α r y )
Proof: Dur =
αr αr
γ x
−α r exp r φ ′ ( β r x − α r y )
D′ur =
αr
so that (α r D + β r D′ + γ r ) ur =
0 (42.1)
n
=
Now F ( D, D′) ∏ (α s D + β s D′ + γ s ) (α r D + β r D′ + γ r ) ur (42.2)
s =1
The prime after the product denotes that the factor corresponding to s = r
is omitted. Combining (42.1) & (42.2) we get F ( D, D′)ur = 0 .
γ x
function of the simple variable ξ , then if β r ≠ 0 ; ur = exp r φr ( β r x ) is a
αr
solution of the equation F ( D, D′) z = 0 .
Proof: Similar lines to that of result 5.
(α r D + β r D′ + γ r ) z=
2
0
Let Z = (α r D + β r D′ + γ r ) z .
Then (α r D + β r D′ + γ r ) Z =
2
0.
αr + βr + γ=
rz e φr ( β r x − α r y )
∂x ∂x
dx dy dz
Solution: = =
αr βr −
rr x
−rr z + e αr
φr ( β r x − α r y )
With solution:
dx dy
= ⇒ β r x − α r y =C1
αr βr
dx dz
and =
αr −
rr x
αr
rr z + e φr C1
rr x
−
=
⇒z
1
αr
{φ ( C ) x + C } e
r 1 2
αr
rr x
−
∴ z xφr ( β r x − α r y ) + ϕr ( β r x − α r y ) e
= αr
factor of F ( D, D′) and if the functions φr ,φr ,...,φr are arbitrary, then
1 2 n
γ x n
exp − r ∑ x s −1φrs ( β r x − α r y ) is a solution of F ( D, D′) = 0 .
α r s =1
γ r y m s −1
φr ,φr ,...,φr are arbitrary, then exp − ∑ x φrs ( β r x ) is a solution of
β r s =1
1 2 n
F ( D, D′) z = 0 .
γ r x n s −1
n
∑ exp − α ∑
u= x φrs ( β r x − α r y ) where
= φrs ( s 1,2,...,
= nr ; r 1,2,..., n ) are
=r 1 = r s 1
arbitrary.
∂2 z ∂2 z ∂2 z
Consider the second order equation + k + k =
0
∂x 2 ∂x∂y ∂y 2
1 2
(
which is written in the operator form as D 2 + k1DD + k2 D′2 z =
0. )
D
Let its roots be denoted by = m1 , m2 .
D′
dx dy dz
( D − m2 D′ ) z = 0 ⇒ = = ⇒ y + m2 x = c1 , z = c2
1 −m2 0
( D − m1D′) u = 0 ⇒ u = φ ( y + m1 x )
dx dy dz
( D − m1D′) z = φ ( y + m1 x ) ⇒ = =
1 −m1 φ ( y + m1 x )
or y + m1 x = c1; dz = φ (u )dx
or z = xφ ( y + m1 x ) + c2
Example 1
2 D 2 + 5 DD′ + 2 D′2 =
0
1
2m 2 + 5m + 2 =0 ⇒ m1 =−2, m2 =−
2
1
z = f1 ( y − 2 x ) + f 2 y − x .
2
Example 2
r + bs + qt =
0
m 2 + bm + q =0 ⇒ m =−3, −3
z = f1 ( y − 3 x ) + xf 2 ( y − 3 x ) .
Example 3
(D 2
− D′2 ) z =
0 . m 2 − 1 =0 ⇒ m =±1
z = φ1 ( x + y ) + φ2 ( x − y ) .
Example 4
( )
Find the complementary function of D 4 + D′4 z − 2 D 2 D′2 z =
0.
Solution
( D + D′) ( D − D′) z=
2 2
0
α=
1 α=
2 1, γ1 = 0
So the solution is: β=
1 β=
2 1;γ 2 = 0
z xφ1 ( x − y ) + φ2 ( x − y ) + xϕ1 ( x + y ) + ϕ2 ( x + y ) where ϕ arbitrary function.
=
F ( D, D′) is made up of term of the type Crs D r D′s ; F ( D, D′) = ∑∑ Crs D r D′s
r s
{
Result 9: F ( D, D′) e ax +byφ ( x, y= }
) e ax +by F ( D + a, D′ + b)φ ( x, y )
( )( ) ( )
r r
Solution: D r e axφ = ∑ r C p D ρ e ax D r − ρφ = e ax ∑ r C p a ρ D r − ρ φ ( x, y )
ρ =0 ρ =0
= e ax ( D + a ) φ .
r
′s ebxφ e ax ( D′ + a ) φ .
Similarly, D=
s
Hence F ( D, D′)e ax +=
by
φ eax+by f ( D + a, D′ + a)φ ( x, y ) .
1
=
f ( D, D ′) z F ( x, y ) ⇒
= z F ( x, y )
f ( D, D′)
1 1
Case 1: e ax +by = e ax +by , provided f (a, b) ≠ 0 .
f ( D, D′) f ( a, b)
Case 2:
1
∴z sin(mx + ny ) or cos(mx + ny ) .
f ( −m 2 , −mn, −n 2 )
−1
Case 3: F ( x, y ) = x m y n , m, n constants. P.I . = f ( D, D′ ) x m y n .
1
Case 4: F ( x, y ) is any function of x and y , resolve , into partial
f ( D, D′)
fractions, treating f ( D, D′) as a function of D alone and operate each partial
function of F ( x, y ) , remembering that
1
( D − mD′)
F=
( x, y ) ∫ F ( x, c − mx)dx
Example 5
(
Find the solution of D 2 − D′2 z =
x− y )
The complementary function is : φ1 ( x + y ) + φ2 ( x − y ) .
∂z1 ∂z1 1
− = x − y ⇒ z1 = ( x − y ) + f ( x + y ) , f is arbitrary.
2
∂x ∂y u
Exercises
Find the solution of the linear p.d.e with constant coefficients:
1. D 2 + 4 DD′ − 5 D′2 z = sin(2 x + 3 y )
(
2. D 2 − DD′ z =)
cos x cos 2 y
3. D 3 − 2 D 2 D′ =2e 2 x + 3 x 2 y
4. 4 D 2 − 4 DD′ + D=
′2 16log( x + 2 y ) .
F ( D, D′) z = f ( x, y )
Irreducible factors are treated as follows:
1
Case 1: The particular integral z = f ( x, y ) is obtained by Expanding
F ( D, D′)
the operator F −1 by the binomial theorem and then interpret the operator
D −1 , D′−1 as integration.
Example 6
Find a Particular Integral of the equation ( D 2 − D′) z =2 y − x2 .
Solution
−1
D2 1
z= 2
1
( D − D′)
( 2 y − x 2
) =− 1−
D′ D′
( 2 y − x2 )
1 D2 D4
or z =1 − − 2 − 3 − ....... ( 2 y − x 2 )
D′ D′ D′
=( − y 2 + x 2 y ) −
1
(−2) − .....
D′2
=− y 2 + x2 y + y 2 =x2 y .
Case 2: If f ( x, y ) is made of term of the form exp(ax + by )
1
then P.I is: e( ax +by ) if F (a, b) ≠ 0 .
F ( a, b)
( D 2 − D′) z =
e( ax +by ) , F (a, b)= 3 ≠ 0
1 ( ax +by ) 1 ( ax +by )
So e = e .
( D 2 − D′) 3
and F ( D + a, D′ + b) w =
c.
Example 7
Find the particular solution of ( D 2 − D′) z =
e( ax +by ) .
Clearly F (1,1) = 0 .
F ( D + 1, D′ + 1) = ( D + 1) − ( D′ + 1) = D 2 + 2 D − D′
2
(
or D 2 + 2 D − D′ w =
1 )
1
1 −1 D 2 + 2 D x
1 = 1 − 1 =
2
D2 + 2D D′ D′ − y
− D′ 1 −
D′
1 ( ax +by )
∴ P.I. are xe & − ye x + y .
2
f ( x, y ) involving trigonometric functions Re. or Img. Write it as exp(i....)
use the above method.
∂2 z ∂2 z ∂2 z
Exercises: Denote: = r , = s= , 2 t . Find the solution of
∂x 2 ∂x∂y ∂y
1. r + s − 2t =e x+ y .
2. r − s + 2q − z =x2 y 2 .
3. r + s − 2t − p − 2q =0.
∂2 z ∂2 z
4. Solve 2 + 2 = e − x cos y .
∂x ∂y
References
Suggested Reading
Lesson43
n
f ( D, D′) z = F ( x, y ) where f ( D, D′) = ∏D
r =1
r − mDr′ − Cr ;
dx dy dz
= = ⇒ y + mx = a , z = becx .
1 −m cz
( )
Example1: D 2 + 2 DD′ + D′2 − 2 D − 2 D′ z= sin ( x + 2 y )
( D + D′)( D + D′ − 2 ) =z sin ( x + 2 y )
=z e 2 xφ ( y − x )
1
=
− sin ( x + 2 y )
2( D + D′) + 9
−2( D + D′) − 9
sin ( x + 2 y )
4 ( D 2 + 2 DD′ + D′2 ) − 81
1
= 2cos ( x + 2 y ) − 3sin ( x + 2 y )
39
(
1. D 2 + DD′ + D′ − 1 z = )
e− x
2. ( D + D′ − 1)( D + 2 D′ − 3) z =4 + 3x + 6 y
3. ( D′ + DD′ + D′ ) z =x 2 + y 2
(
4. 2 DD′ + D′2 − 3D′= )
z 3cos ( 3 x − 2 y )
Keywords:Non-Homogeneous,
References
Suggested Reading
Lesson 44
Method of Separation of Variables
44.1 Introduction
This is the oldest systematic procedure for the solving a class of partial
differential equations. The underlying principle in this method is to transform
the given partial differential equation to a set of ordinary differential equations.
The solution of the p.d.e. is then written as either the product
z ( x, y ) = X ( x) ⋅ Y ( y ) ≠ 0 or as a sum z ( x=
, y ) X ( x) + Y ( y ) where X ( x) and Y ( y ) are
Example 1
0 subject to the condition =
Solve the first order p.d.e. z x + 2 z y = y ) 4e −2 y .
z ( x 0,=
Solution
We look for a separable solution for z ( x, y ) in the form z ( x, y ) = X ( x) ⋅ Y ( y ) ≠ 0 .
Substituting this in the given p.d.e we obtain
X ′( x) ⋅ Y ( y ) + 2 X ( x) ⋅ Y ′( y ) =
0.
This can be separated into 2 o.d.es, one in x and the other in y as:
X ′( x) Y ′( y )
= − .
2 X ( x) Y ( y)
Note that the left hand side of the equality is a function of x alone and it is
equated to a function of y alone which is on the right hand side. This is possible
only when both are equal to the same constant (say) k which is called an
arbitrary separation constant. Thus we have
X ′( x) Y ′( y )
= k=
2 X ( x) Y ( y)
Eliminating the arbitrary constant C using the given condition z (0, y ) = 4e−2 y
we get C = 4 and k = 2 . Hence the particular solution is z ( x, y ) = 4e4 x −2 y .
Example 2
x2
( xyz ) subject to the condition u ( x, 0) = 3exp .
Solve y p + x q =
2 2 2 2 2
4
Solution
Note that p = z x and q = z y write z ( x=
, y ) X ( x) ⋅ Y ( y ) in the given equation. This
1 X ′( x) 1 Y ′( y )
⇒ =λ and = 1− λ2 .
x X ( x) y Y ( y)
0 and Y / ( y ) − 1 − λ 2 yY ( y ) =
Solving these two o.d.es X / ( x) − λ xX ( x) = 0 , we find
λ 2 y
x 1− λ 2
X ( x) = Ae 2
and Y ( y ) = Be 2
.
2
346 WhatsApp: +91 7900900676 www.AgriMoon.Com
Method of Separation of Variables
λ y 2
Hence the general solution is z (=
x, y ) C exp x 2 + 1− λ= , C AB .
2 2
x2 1
The boundary condition u ( x, 0) = 3exp implies C = 4 and λ = .
4 2
1 3 2
∴ The particular solution =
is z ( x, y ) 4 exp x 2 + y .
4 4
Example 3
∂2 z ∂z ∂y
Solve −2 + =
0.
∂x 2
∂x ∂x
Solution
Write Z ( x=
, y ) X ( x) ⋅ Y ( y ) .
∂z ∂2 z ∂z
= X ′( x) ⋅ Y ( y ) , = X ′′( x) ⋅ Y ( y ) and= X ( x) ⋅ Y ′( y ) .
∂x ∂x 2
∂y
( ) (
X ( x) C1 exp 1 + 1 + λ x + C2 exp 1 − 1 + λ x and=
=
)
Y ( y ) C3 exp [ −λ y ] .
Z=
( x, y ) {C exp (1 +
4 ) ( ) }
1 + λ x + C5 exp 1 − 1 + λ x exp [ −λ y ]
∂u ∂u
1. =4 , given that u (0, y ) = 8e−3 y .
∂x ∂y
3
347 WhatsApp: +91 7900900676 www.AgriMoon.Com
Method of Separation of Variables
∂z ∂z
2. 4 + =
3 z subjected to=z 3e − y − e −5 y when x = 0 .
∂x ∂y
∂ 2 z ∂z
3. Find a solution of the equation − − 2z =
0 subject to the conditions:
∂x 2 ∂x
∂z
=
z ( x 0,=
y ) 0 And ( x = 0, y ) = 1 + e −3 y .
∂x
References
Ian Sneddon, (1957). Elements of Partial Differential Equations. McGraw-Hill,
Singapore
Amaranath. T, (2003). An Elementary Course in Partial Differential
Equations.Narosa Publishing House, New Delhi
Suggested Reading
I.P. Stavroulakis, Stephen a tersian, (2003). Partial Differential Equations.
Allied Publishers Pvt. Limited, New Delhi
J. David Logan, (2004). Partial Differential Equations. Springer(India)
Private Ltd. New Delhi
4
348 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 4: Partial Differential Equations
Lesson 45
45.1 Introduction
∂z ∂2 z
= c2 2 (45.1)
∂t ∂x
k
where z (t , x) representing the heat conducting in the material and c 2 = is the
sρ
diffusivity constant with k being the thermal conductivity, ρ being the density
and s being the specific heat. The problem is well posed if this differential
equation is supplemented with an initial condition and two boundary conditions.
Let us attempt to solve this equation with suitable initial and boundary
conditions using some standard mathematical techniques such as the method of
separation of variables and integral transform techniques. By a solution of heat
equation, we mean a physically realistic solution that obeys the ‘natural’
physical process.
z (=
t , x) X ( x) ⋅ T (t ) .
∂z ∂2 z
Finding and and substituting in (45.1), we get the set of ordinary
∂t ∂x 2
differential equation as
X ′′( x) − λ X ( x) =
0 (45.2)
and T ′(t ) − λ c 2T (t ) =
0 (45.3)
(t , x) ( c4 e px + c4 e − px ) ec p t
i.e., z=
2 2
(45.4)
=
where c9 c=
6 c8 ; c10 c7 c8 are arbitrary constants.
Case 3: Take λ = 0 .
, x) ( c14 x + c15 )
In this case z (t= (45.6)
Now, among these three possible solutions, we have to choose the one that is
physically realistic. In general, the solution of heat conduction problem is
exponentially decaying with time ' t ' . This property is clearly seen only
when λ < 0 .
2
350 WhatsApp: +91 7900900676 www.AgriMoon.Com
One Dimensional Heat Equation
z (t , x) ( c1 cos px + c2 sin px ) e − c p t
Thus the suitable solution of the heat equation is=
2 2
The values of c1 , c2 and p are found based on the initial and boundary
conditions associated with the equation. Let us see this solution procedure in
some special situations.
Example 44.1
Solve the heat conduction problem ∂z/∂t = ∂²z/∂x², 0 < x < 1, t > 0.
The general solution is z(x, t) = (c1 cos px + c2 sin px) e^{−p²t}.
∴ z(x, t) = c2 sin px · e^{−p²t} (the boundary condition at x = 0 forces c1 = 0),
∴ zn(t, x) = an sin nπx · e^{−n²π²t} (the boundary condition at x = 1 forces p = nπ).
At this stage, note that for each n, n = 0, 1, 2, …, we get zn(x, t) = an sin nπx · e^{−n²π²t}, the n = 0 term vanishing identically. Using the principle of superposition (valid only for linear p.d.e.s), we can write the general solution as the infinite sum of these solutions:
z(x, t) = Σ_{n=0}^∞ an sin nπx · e^{−n²π²t}.
Matching at t = 0 with the given initial temperature, we see z(0, x) = Σ_{n=0}^∞ an sin nπx, and comparison with the prescribed initial data fixes the coefficients; here an = 1 for every n.
Hence the solution of the heat equation satisfying the given initial and boundary conditions is written as
z(t, x) = Σ_{n=0}^∞ sin nπx · e^{−n²π²t}.
Example 45.2
This will neither give any information about p nor about c2 . Thus the separation
of variables would then be futile. This example clearly indicates the restricted
use of the method of separation of variables.
Example 45.3
Solve the equation ∂z/∂t = ∂²z/∂x², 0 < x < L, subject to the boundary conditions
∂z/∂x(t, x = 0) = 0, ∂z/∂x(t, x = L) = 0 and the initial condition z(t = 0, x) = h(x).
Solution: Using the separation of variables method, the general solution of the heat conduction equation can be written as z(t, x) = (A cos px + B sin px) e^{−p²t}.
The boundary condition ∂z/∂x(t, 0) = 0 ⇒ B = 0.
Hence z(t, x) = A cos px · e^{−p²t}.
The other condition ∂z/∂x(t, L) = 0 ⇒ sin pL = 0 (A ≠ 0) ⇒ p = nπ/L; n = 0, 1, 2, …
Thus we can write zn(t, x) = an cos(nπx/L) · e^{−n²π²t/L²}
and z(t, x) = Σ_{n=0}^∞ zn(t, x) = Σ_{n=0}^∞ an cos(nπx/L) · e^{−n²π²t/L²}.
The initial condition z(0, x) = h(x) gives
Σ_{n=0}^∞ an cos(nπx/L) = h(x).
The unknown coefficients an are computed using the half-range Fourier cosine series expansion, which gives
a0 = (1/L)∫_0^L h(x) dx and an = (2/L)∫_0^L h(x) cos(nπx/L) dx.
Thus for a given function h(x), we find the an's and the final solution is written as
z(t, x) = Σ_{n=0}^∞ an cos(nπx/L) · e^{−n²π²t/L²}.
Example 45.4
Two ends A and B of a thin rod of length 10 cm are kept at temperatures 30°C and 80°C respectively until steady state is reached. The temperatures of the ends are then changed to 40°C and 60°C respectively. Find the temperature distribution in the rod at time t.
Solution: In the steady state the temperature is independent of t, so ∂z/∂t = 0 and ∂z/∂t = ∂²z/∂x² becomes ∂²z/∂x² = 0. The steady state solution is zs(x) = ax + b … (i)
The temperatures at the ends A and B before the change are z(x = 0) = 30°C … (ii) and z(x = 10) = 80°C … (iii), so that initially z(0, x) = 30 + 5x. After the change, the boundary values are z(t, 0) = 40°C ∀t … (v) and z(t, 10) = 60°C ∀t … (vi)
Since the boundary values are non-zero, we split the temperature function z(t, x) into the sum of zs(x) and zt(t, x), i.e., z(t, x) = zs(x) + zt(t, x) … (vii)
Applying the new boundary conditions (v) and (vi) to (i), the steady state part becomes zs(x) = 2x + 40.
The transient part zt(t, x) is obtained by solving ∂z/∂t = ∂²z/∂x² subject to the initial condition z(0, x) = 30 + 5x and the boundary conditions z(t, 0) = 40°C and z(t, 10) = 60°C.
∴ z(t, x) = (40 + 2x) + Σ_{n=1}^∞ (an cos px + bn sin px) e^{−p²t}.
Now z(t, 0) = 40 ⇒ 40 = 40 + Σ an e^{−p²t} ⇒ an = 0 ∀n.
Hence z(t, x) = (40 + 2x) + Σ_{n=1}^∞ bn sin px · e^{−p²t}.
The condition z(t, 10) = 60 ⇒ 60 = 40 + 20 + Σ_{n=1}^∞ bn sin 10p · e^{−p²t}.
Since bn ≠ 0, sin 10p = 0 ⇒ p = nπ/10.
∴ z(t, x) = (40 + 2x) + Σ_{n=1}^∞ bn sin(nπx/10) · e^{−n²π²t/100}.
Now the unknown bn are obtained by making use of the initial condition z(0, x) = 30 + 5x:
30 + 5x = 40 + 2x + Σ_{n=1}^∞ bn sin(nπx/10)
or Σ_{n=1}^∞ bn sin(nπx/10) = 3x − 10,
bn = (2/10)∫_0^{10} (3x − 10) sin(nπx/10) dx = −(60/nπ) cos nπ + (20/nπ) cos nπ − 20/nπ = −(20/nπ)[2 cos nπ + 1].
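With bn in hand, the full temperature distribution can be evaluated by truncating the series. A minimal sketch (Python/NumPy; the diffusivity is taken as 1, as in the worked solution, and the truncation level is an arbitrary choice):

    # Sketch: z(t, x) = (40 + 2x) + sum_n b_n sin(n pi x/10) exp(-n^2 pi^2 t/100)
    import numpy as np

    def rod_temperature(t, x, n_terms=200):
        n = np.arange(1, n_terms + 1)
        bn = -20.0/(n*np.pi) * (2*np.cos(n*np.pi) + 1)
        return 40 + 2*x + np.sum(bn * np.sin(n*np.pi*x/10) *
                                 np.exp(-(n*np.pi/10)**2 * t))

    print(rod_temperature(0.0, 5.0))    # ~55, the initial profile 30 + 5x at x = 5
    print(rod_temperature(100.0, 5.0))  # ~50, the new steady state 40 + 2x at x = 5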
Exercises:
1. Solve the heat conduction problem ∂z/∂t = α² ∂²z/∂x², 0 < x < l;
∂z/∂x(t, 0) = 0, ∂z/∂x(t, l) = 0; z(0, x) = x.
2. The temperature at one end of a bar 10 cm long with insulated sides is kept at 0°C and that at the other end at 100°C until steady state conditions are attained. The two ends are then suddenly insulated, so that the temperature gradient is zero at each end thereafter. Find the temperature distribution in the bar.
3. Solve ∂z/∂t = α² ∂²z/∂x² subject to the conditions:
(i) z decays as t → ∞ in 0 < x < l;
(ii) ∂z/∂x(t, 0) = 0 = ∂z/∂x(t, l).
References
Ian Sneddon (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.
Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.
Suggested Reading
I.P. Stavroulakis and Stephen A. Tersian (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.
J. David Logan (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
Module 4: Partial Differential Equations
Lesson 46
46.1 Introduction
The solution of the wave equation should describe the wave motion and this
involves periodic sine and cosine terms. A particular solution of this equation
can be obtained by specifying two initial conditions and two boundary
conditions. Let us now see the solution of this one dimensional wave equation by the separation of variables technique, with various types of boundary conditions. Note that we pose either the initial displacement or the initial velocity or both as the initial conditions, to make the mathematical formulation a well-posed problem.
The one dimensional wave equation for the displacement z(t, x) is
∂²z/∂t² = c² ∂²z/∂x² (46.1)
(iv) ∂z/∂t(t = 0, x) = 0 (the string is released from rest; the initial velocity is zero).
Substituting z(t, x) = X(x)·T(t) gives X(x)·T″(t) = c²X″(x)·T(t), or T″/(c²T) = X″/X = λ (46.3)
where λ is the arbitrary separation parameter. This results in the two ordinary differential equations
T″ − λc²T = 0 (46.4)
and X″ − λX = 0 (46.5)
Case 1: Take λ > 0, say λ = p², and solve equations (46.4) and (46.5); the cases λ = −p² and λ = 0 are treated in the same way. Among the three possible solutions for the wave equation, the physically realistic one, representing periodic functions of x and t, is obtained when λ < 0,
i.e., z(t, x) = (c1 cos px + c2 sin px)(c3 cos cpt + c4 sin cpt).
One of the initial conditions, ∂z/∂t(0, x) = 0, implies c4 = 0, and the boundary condition z(t, 0) = 0 gives c1 = 0.
∴ z(t, x) = c2c3 sin px cos cpt. At this stage we use the other boundary condition z(t, L) = 0:
as c2 ≠ 0 and c3 ≠ 0, we have sin pL = 0 ⇒ p = nπ/L, n = 0, 1, 2, …
Thus z(t, x) = A sin(nπx/L) cos(cnπt/L), where A = c2c3. This constant is determined using the other initial condition: z(0, x) = f(x) ⇒ A sin(nπx/L) = f(x).
Choosing f(x) = 3 sin(πx/L) gives A = 3 and n = 1; hence the solution is z(t, x) = 3 sin(πx/L) cos(cπt/L).
Example 2
A tight string, 2 m long, with c = 30 m/s, is initially at rest in its equilibrium position and is given an initial velocity 300 sin 4πx. Determine the displacement at the position x = 1/8 m of the string.
Solution:
Given ∂²z/∂t² = 900 ∂²z/∂x², subject to z(t, 0) = 0 = z(t, 2), z(0, x) = 0 and ∂z/∂t(0, x) = 300 sin 4πx. The solution may be written as
z(t, x) = (C cos px + D sin px)(A cos 30pt + B sin 30pt).
Now z(t, 0) = 0 ⇒ C = 0,
∴ z(t, x) = (A cos 30pt + B sin 30pt)·D sin px,
and z(0, x) = 0 ⇒ A = 0. Hence z(t, x) = B·D sin px · sin 30pt.
As ∂z/∂t(0, x) = 300 sin 4πx = 30p·B·D·sin px, we get p = 4π and B·D = 300/(30·4π) = 5/(2π).
∴ z(t, x) = (5/2π) sin 120πt · sin 4πx.
The maximum displacement at x = 1/8 occurs when sin 120πt = 1, and then z_max = 2.5/π.
Example 3
Solution:
The equation of the vibrating string is ∂²z/∂t² = c² ∂²z/∂x².
As in the earlier examples, the solution of the vibrating string after applying the boundary conditions reduces to
z(t, x) = c2 sin(nπx/L)[c3 cos(cnπt/L) + c4 sin(cnπt/L)].
Now the initial condition z(0, x) = 0 ⇒ c2c3 sin(nπx/L) = 0 ∀x ⇒ c2c3 = 0 ⇒ c3 = 0 (for a non-trivial solution).
∴ z(t, x) = bn sin(nπx/L) sin(cnπt/L), where bn = c2c4.
The initial velocity condition gives
∂z/∂t(0, x) = (v0/4)[3 sin(πx/L) − sin(3πx/L)] = Σ_{n=1}^∞ (cnπ/L) bn sin(nπx/L).
Comparing the coefficients of sin(nπx/L) on both sides, we see
b1 = 3Lv0/(4cπ), b3 = −Lv0/(12cπ), b2 = b4 = … = 0.
∴ z(t, x) = (Lv0/12cπ)[9 sin(πx/L) sin(cπt/L) − sin(3πx/L) sin(3cπt/L)].
Consider the wave equation ∂²z/∂t² = a² ∂²z/∂x² (46.9)
Introducing the characteristic variables ξ = x − at and η = x + at (these are the two characteristics of the hyperbolic equation), the equation (46.9) is transformed to its canonical form
∂²z/∂ξ∂η = 0 (46.11)
This is because ∂z/∂x = ∂z/∂ξ + ∂z/∂η; ∂z/∂t = −a ∂z/∂ξ + a ∂z/∂η; ∂²z/∂x² = ∂²z/∂ξ² + 2 ∂²z/∂ξ∂η + ∂²z/∂η²;
and ∂²z/∂t² = a² ∂²z/∂ξ² − 2a² ∂²z/∂ξ∂η + a² ∂²z/∂η².
Integrating (46.11) partially w.r.t. ξ, ∂z/∂η = h(η), a function of η alone. Integrating again w.r.t. η, we get z(η, ξ) = ∫h(η)dη + g(ξ), which can be written as
z = f(η) + g(ξ) = f(x + at) + g(x − at) (46.12)
Case 1: Let the initial conditions be z(0, x) = φ(x) and ∂z/∂t(0, x) = 0.
We obtain [af′(x + at) − ag′(x − at)]|_{t=0} = 0 ⇒ f′(x) = g′(x) ⇒ f(x) = g(x) + k, where k is a constant. Also, z(x, 0) = f(x) + g(x) = φ(x),
⇒ 2g(x) + k = φ(x)
⇒ g(x) = ½[φ(x) − k]. Hence f(x) = g(x) + k = ½[φ(x) + k].
∴ z(t, x) = ½[φ(x + at) + φ(x − at)]. (46.13)
Case 2: Suppose now that z(0, x) = 0 and ∂z/∂t(0, x) = θ(x).
From the general solution (46.12), we have ∂z/∂t(0, x) = af′(x) − ag′(x) = θ(x)
⇒ f(x) − g(x) = (1/a)∫_0^x θ(s) ds + D.
Also, z(0, x) = 0 ⇒ f(x) + g(x) = 0 ⇒ f(x) = −g(x), so that f(0) − g(0) = D = 2f(0) = −2g(0).
This gives f(x) = (1/2a)∫_0^x θ(s) ds + f(0) and g(x) = −(1/2a)∫_0^x θ(s) ds + g(0),
so that z(t, x) = f(x + at) + g(x − at) = (1/2a)∫_{x−at}^{x+at} θ(s) ds.
Thus from these two cases, it is evident that a particular solution is obtained for
a given φ ( x) and θ ( x) respectively.
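Both cases combine into d'Alembert's formula z(t, x) = ½[φ(x + at) + φ(x − at)] + (1/2a)∫_{x−at}^{x+at} θ(s) ds, which the following sketch evaluates numerically (Python/NumPy; the sample φ and θ are hypothetical choices, not taken from the text):

    # Sketch: d'Alembert's solution for initial displacement phi and velocity theta
    import numpy as np

    def dalembert(t, x, phi, theta, a=1.0, n_quad=2001):
        s = np.linspace(x - a*t, x + a*t, n_quad)
        ds = s[1] - s[0]                       # zero when t = 0
        return 0.5*(phi(x + a*t) + phi(x - a*t)) + theta(s).sum()*ds/(2.0*a)

    phi = lambda x: np.exp(-x**2)              # hypothetical initial displacement
    theta = lambda x: np.zeros_like(x)         # released from rest (Case 1)
    print(dalembert(1.0, 0.5, phi, theta))     # = [phi(1.5) + phi(-0.5)]/2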
Example 4:
Solution:
We have z(x, t) = ½[f(x + ct) + f(x − ct)].
Exercises:
1. Find the solution of the wave equation for the initial displacement
(i) f(x) = a(x − x²) (ii) f(x) = (a/2)(1 + cos 2kx).
2. The initial velocity of the string is given by
θ(x) = { 0, x < −1; 10(x + 1), −1 ≤ x ≤ 0; 10(1 − x), 0 ≤ x ≤ 1; 0, 1 < x }.
If the string has zero initial displacement, find the solution of the wave equation.
3. Solve ∂²z/∂t² = c² ∂²z/∂x², 0 < x < L, subject to the conditions
z(0, t) = z(L, t) = 0; z(x, 0) = sin(3πx/L); ∂z/∂t(x, 0) = 0.
4. Solve ∂²z/∂t² = ∂²z/∂x², 0 < x < L, subject to
z(0, t) = 0; z(L, t) = 0; z(x, 0) = μx(L − x); ∂z/∂t(x, 0) = 0.
5. The points of trisection of a string are pulled aside through the same distance on opposite sides of the position of equilibrium, and the string is released from rest. Derive an expression for the displacement of the string at any subsequent time.
6. Solve ∂²z/∂t² = ∂²z/∂x², 0 < x < 1; z(0, t) = z(1, t) = 0;
z(x, 0) = { x, 0 ≤ x ≤ 1/2; 1 − x, 1/2 ≤ x ≤ 1 }; ∂z/∂t(x, 0) = 0.
References
Ian Sneddon (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.
Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.
Suggested Reading
I.P. Stavroulakis and Stephen A. Tersian (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.
Lesson 47
Laplace Equation in 2-Dimensions
47.1 Introduction
Heat conduction in a two dimensional region is governed by ∂z/∂t = α²(∂²z/∂x² + ∂²z/∂y²); in the steady state this reduces to the Laplace equation ∂²z/∂x² + ∂²z/∂y² = 0, which is
elliptic in nature. Unlike the hyperbolic and parabolic equations, where initial conditions are also specified, in the case of an elliptic equation only boundary conditions are specified, making these problems pure boundary value problems. Let Ω be the interior of a simple closed differentiable boundary curve Γ and f be a continuous function defined on the boundary Γ. The problem of finding the solution of the above Laplace equation in Ω such that it coincides with the function f on the boundary Γ is called the Dirichlet problem.
If instead the normal derivative ∂z/∂η is prescribed on Γ, the problem is called the Neumann problem. The third boundary value problem, known as the Robin problem, is one in which the solution of the Laplace equation is obtained in Ω satisfying the condition ∂z/∂η + g(s)z(s) = 0 on Γ, where g(s) ≥ 0 and g(s) ≠ 0. We now describe the method of separation of variables for the Laplace equation.
We have the Laplace equation
∂²z/∂x² + ∂²z/∂y² = 0 (47.1)
Let z ( x, y ) = X ( x) Y ( y ) (47.2)
Finding ∂²z/∂x² and ∂²z/∂y², substituting these in (47.1) and separating the variables gives
X″ − λX = 0 and Y″ + λY = 0, where λ is the arbitrary separation parameter.
Solving these equations, we get three possible solutions, corresponding to λ = p², λ = −p² and λ = 0. These forms are:
(a) z(x, y) = (c1e^{px} + c2e^{−px})(c3 sin py + c4 cos py) (λ = p²),
(b) z(x, y) = (c1 cos px + c2 sin px)(c3e^{py} + c4e^{−py}) (λ = −p²),
(c) z(x, y) = (c1x + c2)(c3y + c4) (λ = 0).
Of these, we take the solution which is consistent with the given boundary conditions.
Example 1:
Solve the Laplace equation ∂²z/∂x² + ∂²z/∂y² = 0 in the rectangular region Ω shown in the figure.
[Figure: rectangle Ω, 0 < x < 2, 0 < y < 1; the sides y = 0, y = 1 and x = 0 are kept at 0°C, and the side x = 2 at 50 sin πy °C.]
Solution:
Separating variables with X″/X = −Y″/Y = p² gives z(x, y) = (Ae^{px} + Be^{−px})(C sin py + D cos py). The boundary conditions are
z(x = 0, y) = 0°C; z(x = 2, y) = 50 sin πy °C; z(x, y = 0) = 0°C; z(x, y = 1) = 0°C.
Hence z(x, 0) = 0 ∀x ⇒ D = 0; z(0, y) = 0 ⇒ B = −A; and z(x, 1) = 0 ⇒ sin p = 0, i.e. p = π.
∴ z(x, y) = AC(e^{πx} − e^{−πx}) sin πy.
The condition at x = 2 then gives AC(e^{2π} − e^{−2π}) sin πy = 50 sin πy
⇒ AC = 50/(e^{2π} − e^{−2π}) = 0.0934, so that z(x, y) = 0.0934(e^{πx} − e^{−πx}) sin πy.
Example 2:
An infinitely long plane uniform plate is bounded by two parallel edges and an end at right angles to them, as shown in the adjacent figure. Find the temperature distribution at any point of the plate in the steady state.
[Figure: semi-infinite plate of width π; the long edges x = 0 and x = π are kept at 0°C and the base y = 0 at u0 °C.]
Solution:
The steady state temperature distribution in this infinitely long plate is obtained by solving ∂²z/∂x² + ∂²z/∂y² = 0.
Among the three possible solution forms (a), (b), (c), we choose the one consistent with the given boundary conditions. The solution (a) cannot satisfy the boundary condition z(0, y) = 0 ∀y, and the solution (c) cannot satisfy the condition z(x, y → ∞) → 0 in 0 < x < π.
∴ z(x, y) = B sin px (Ce^{py} + De^{−py}).
The condition z(π, y) = 0 gives sin pπ = 0, i.e. p = n, an integer, so z(x, y) = B sin nx (Ce^{ny} + De^{−ny}).
As z(x, y → ∞) → 0 ⇒ C = 0. ∴ z(x, y) = BD sin nx · e^{−ny}.
Using the non-homogeneous boundary condition, z(x, 0) = u0 = Σ_{n=1}^∞ bn sin nx.
The unknown coefficients are found using the half-range Fourier sine series expansion in (0, π):
bn = (2/π)∫_0^π u0 sin nx dx = (2u0/nπ)[1 − (−1)ⁿ] = { 4u0/nπ, n = 2m − 1; 0, n = 2m }, m a positive integer.
Thus z(x, y) = (4u0/π)[e^{−y} sin x + (1/3)e^{−3y} sin 3x + …].
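The series converges quickly for y > 0 because of the e^{−ny} factor. A minimal evaluation sketch (Python/NumPy; u0 = 100 is an arbitrary sample value, not from the text):

    # Sketch: z(x, y) = (4 u0/pi) sum over odd n of e^{-n y} sin(n x)/n
    import numpy as np

    def plate_temperature(x, y, u0=100.0, n_terms=400):
        n = np.arange(1, n_terms, 2)          # odd n only
        return 4*u0/np.pi * np.sum(np.exp(-n*y) * np.sin(n*x) / n)

    print(plate_temperature(np.pi/2, 0.01))   # ~u0 near the heated base
    print(plate_temperature(np.pi/2, 3.0))    # small far from the base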
Example 3
Solve ∂²z/∂x² + ∂²z/∂y² = 0 subject to z(0, y) = z(a, y) = z(x, b) = 0 and z(x, 0) = x(a − x).
Solution
Taking the solution in the form z(x, y) = (c1 cos px + c2 sin px)(c3e^{py} + c4e^{−py}):
z(0, y) = 0 ⇒ c1 = 0,
z(a, y) = 0 ⇒ sin pa = 0 ⇒ p = nπ/a, n an integer,
∴ z(x, y) = c2 sin(nπx/a)(c3e^{nπy/a} + c4e^{−nπy/a}).
Writing A = c2c3 and B = c2c4, the condition z(x, b) = 0 ⇒ Ae^{nπb/a} + Be^{−nπb/a} = 0 ⇒ A = −B e^{−nπb/a}/e^{nπb/a}.
∴ z(x, y) = sin(nπx/a)[−(B e^{−nπb/a}/e^{nπb/a}) e^{nπy/a} + B e^{−nπy/a}]
= (−B/e^{nπb/a}) sin(nπx/a)[e^{nπ(y−b)/a} − e^{−nπ(y−b)/a}].
Hence z(x, y) = Σ_{n=1}^∞ bn sin(nπx/a) sinh(nπ(y − b)/a), where bn = −2B/e^{nπb/a}.
The remaining condition gives z(x, 0) = x(a − x) = Σ_{n=1}^∞ bn sinh(−nπb/a) sin(nπx/a) = Σ_{n=1}^∞ Bn sin(nπx/a) (say).
The coefficients Bn are found as
Bn = (2/a)∫_0^a x(a − x) sin(nπx/a) dx = (4a²/n³π³)(1 − cos nπ) = { 8a²/n³π³, n = 2m − 1; 0, n = 2m }, m a positive integer.
∴ z(x, y) = (8a²/π³) Σ_{n=1,3,5,…} (1/n³) [sinh(nπ(b − y)/a)/sinh(nπb/a)] sin(nπx/a),
or z(x, y) = (8a²/π³) Σ_{n=0}^∞ [sinh((2n + 1)π(b − y)/a) / ((2n + 1)³ sinh((2n + 1)πb/a))] sin((2n + 1)πx/a).
Exercises
1. Solve ∂²z/∂x² + ∂²z/∂y² = 0 in 0 < x < π, 0 < y < π, with the conditions
z(π, y) = z(0, y) = z(x, π) = 0; z(x, 0) = sin 2x.
3. Solve ∂²z/∂x² + ∂²z/∂y² = 0 subject to
(i) z(0, y) = 0; z(x, 0) = 0; z(1, y) = 0; z(x, 1) = 100 sin πx.
(ii) z(0, y) = 0; z(x, 0) = 0; z(1, y) = 100 sin πy; z(x, 1) = 0.
(iii) z(0, y) = 0; ∂z/∂y(x, 0) = 0; ∂z/∂x(1, y) = 0; z(x, 1) = 100.
(iv) z(0, y) = 100; z(x, 0) = 100; z(1, y) = 200; z(x, 1) = 100.
References
Ian Sneddon (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.
Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.
Suggested Reading
I.P. Stavroulakis and Stephen A. Tersian (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.
J. David Logan (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
Module 4: Partial Differential Equations
Lesson 48
48.1 Introduction
One dimensional heat and wave equations in semi-infinite or infinite regions can be solved using the Laplace transform or Fourier transform techniques. Applying these transforms to such one dimensional equations reduces the p.d.e. to an ordinary differential equation. The solution of this o.d.e. involves the parameter associated with the transformation; applying the inverse transformation to it gives the solution of the given p.d.e.
If in a problem on a semi-infinite region z(t, 0) is given, we employ the infinite sine transform, and if ∂z/∂x(t, 0) (a Neumann condition) is given, we employ the infinite cosine transform to remove ∂²z/∂x². If in a problem z(t, 0) and z(t, L) (Dirichlet boundary conditions) are given, then we use the finite sine transform to remove ∂²z/∂x² in the p.d.e.; similarly, if ∂z/∂x(t, 0) and ∂z/∂x(t, L) (Neumann boundary conditions) are given, then we use the finite cosine transform to remove ∂²z/∂x². Let us consider some examples to explain this technique. Let us first see the transforms of the partial derivatives.
Example 1:
Find the Laplace transform L of (i) ∂z/∂t (ii) ∂²z/∂t² (iii) ∂z/∂x (iv) ∂²z/∂x².
Solution:
(i) L[∂z/∂t] = ∫_0^∞ e^{−st} (∂z/∂t) dt
= lim_{p→∞} ∫_0^p e^{−st} (∂z/∂t) dt
= lim_{p→∞} { [e^{−st} z(t, x)]_0^p + s∫_0^p e^{−st} z(t, x) dt }
= sL[z(t, x)] − z(0, x).
(ii) Showing that L[∂²z/∂t²] = s²L[z(t, x)] − s z(0, x) − ∂z/∂t(0, x) is left as an exercise.
(iii) L[∂z/∂x] = ∫_0^∞ e^{−st} (∂z/∂x) dt = (d/dx)∫_0^∞ e^{−st} z dt = (d/dx) L[z(t, x)].
(iv) L[∂²z/∂x²] = (d²/dx²) L[z(t, x)] is left as an exercise.
Example 2:
Find (i) F[∂²z/∂x²], (ii) F_s[∂²z/∂x²] and (iii) F_c[∂²z/∂x²], where F denotes the Fourier transform, F_s the Fourier sine transform and F_c the Fourier cosine transform.
Solution:
(i) By definition, F[∂²z/∂x²] = ∫_{−∞}^∞ e^{−isx} (∂²z/∂x²) dx
= [e^{−isx} ∂z/∂x]_{−∞}^∞ + is ∫_{−∞}^∞ e^{−isx} (∂z/∂x) dx
= [e^{−isx} ∂z/∂x]_{−∞}^∞ + [is e^{−isx} z]_{−∞}^∞ − s² ∫_{−∞}^∞ e^{−isx} z dx.
∴ F[∂²z/∂x²] = −s² F[z(t, x)], provided both z and ∂z/∂x → 0 as x → ±∞.
(ii) By definition, F_s[∂²z/∂x²] = ∫_0^∞ (∂²z/∂x²) sin sx dx
= [sin sx · ∂z/∂x]_0^∞ − s∫_0^∞ cos sx (∂z/∂x) dx
= [sin sx · ∂z/∂x]_0^∞ − s[cos sx · z]_0^∞ − s² F_s[z(t, x)]
= s z(t, x)|_{x=0} − s² F_s[z(t, x)], provided z → 0 and ∂z/∂x → 0 as x → ∞.
(iii) By definition, F_c[∂²z/∂x²] = ∫_0^∞ (∂²z/∂x²) cos sx dx = −(∂z/∂x)(t, 0) − s² F_c[z(t, x)], under the same conditions.
Note: these results indicate that (i) if z(t, x) is specified at x = 0 ∀t, then the Fourier sine transform is useful, and (ii) if ∂z/∂x at x = 0 ∀t is specified, then the Fourier cosine transform is useful in a semi-infinite region.
Example 3:
Solve ∂z/∂t = k ∂²z/∂x², −∞ < x < ∞, t > 0, subject to z(x, 0) = f(x), −∞ < x < ∞, where z(x, t) is bounded as x → ±∞.
Solution:
Let z̄(t, s) denote F[z(t, x)], i.e. z̄(t, s) = (1/√(2π)) ∫_{−∞}^∞ z(t, x) e^{−isx} dx.
Applying the Fourier transform to the equation, we get (∂/∂t)z̄(t, s) + ks² z̄(t, s) = 0, whose solution is z̄(t, s) = A(s) e^{−ks²t}.
From the initial condition, z̄(0, s) = (1/√(2π))∫_{−∞}^∞ f(x) e^{−isx} dx = F(s) (say)
⇒ A(s) = F(s), and inverting z̄(t, s) = F(s)e^{−ks²t} gives the solution z(t, x).
Example 4:
Solve ∂z/∂t = ∂²z/∂x², 0 < x < ∞, t > 0, subject to z(t, 0) = 0 for t > 0 and the initial condition z(0, x) = 1 for 0 < x < 1, z(0, x) = 0 for x ≥ 1.
Solution:
Let z̄(t, s) denote F_s[z(t, x)]. Applying the Fourier sine transform to ∂z/∂t = ∂²z/∂x²,
we get ∫_0^∞ (∂z/∂t) sin sx dx = ∫_0^∞ (∂²z/∂x²) sin sx dx, i.e. (∂/∂t)z̄(t, s) = −s² z̄(t, s) + s z(t, 0).
Since z(t, 0) = 0, ∂z̄/∂t = −s² z̄ ⇒ z̄ = A e^{−s²t}.
From the initial condition we have F_s[z(0, x)] = ∫_0^1 1·sin sx dx = (1 − cos s)/s,
so z̄(0, s) = A = (1 − cos s)/s
⇒ z̄(t, s) = ((1 − cos s)/s) e^{−s²t},
and the inverse sine transform gives z(t, x) = (2/π)∫_0^∞ ((1 − cos s)/s) e^{−s²t} sin sx ds.
Note: such integrals in general cannot be evaluated using simple integration rules with real variables; they involve complex integration techniques, so these integrals are left as they are.
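Even though the inversion integral has no elementary closed form, it is amenable to numerical quadrature. A rough sketch (Python/NumPy; the truncation of the s-range and the step count are arbitrary choices):

    # Sketch: z(t, x) = (2/pi) int_0^inf ((1 - cos s)/s) e^{-s^2 t} sin(s x) ds
    import numpy as np

    def z(t, x, s_max=200.0, n_quad=20000):
        s = np.linspace(1e-8, s_max, n_quad)
        ds = s[1] - s[0]
        integrand = (1 - np.cos(s))/s * np.exp(-s**2 * t) * np.sin(s*x)
        return 2/np.pi * integrand.sum() * ds

    print(z(0.1, 0.5))   # temperature at x = 0.5, t = 0.1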
Example 5:
Solve ∂z/∂t = k ∂²z/∂x², 0 < x < ∞, t > 0, given the flux condition ∂z/∂x(t, 0) = −a.
Solution:
Let z̄(t, s) denote F_c[z(t, x)].
Applying the Fourier cosine transform to the equation ∂z/∂t = k ∂²z/∂x²,
we get (∂/∂t)z̄(t, s) = k(−s² z̄ − ∂z/∂x(t, 0)), or ∂z̄/∂t + ks² z̄ = ka,
which has the solution z̄(t, s) = e^{−ks²t}[ka e^{ks²t}/(ks²) + c] = a/s² + c e^{−ks²t}, where c is an arbitrary constant.
Exercises
1. Solve ∂z/∂t = k ∂²z/∂x², 0 ≤ x < ∞, t > 0, subject to
(i) z(t, 0) = z0, t > 0, (ii) z(0, x) = 0, 0 < x < ∞, and (iii) z(t, x) is bounded.
2. Solve ∂z/∂t = ∂²z/∂x², 0 < x < ∞, t > 0, subject to the boundary conditions
(i) ∂z/∂x(t, 0) = 0 for t > 0, (ii) z(t, x) is bounded for x > 0, t > 0,
and the initial condition (iii) z(0, x) = { x, 0 ≤ x ≤ 1; 0, x > 1 }.
3. Solve ∂z/∂t = 2 ∂²z/∂x² if z(t, 0) = 0, z(0, x) = e^{−x} and z(t, x) is bounded, where x > 0 and t > 0.
References
Ian Sneddon (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.
Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.
Suggested Reading
I.P. Stavroulakis and Stephen A. Tersian (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.
J. David Logan (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.
Lesson 49
49.1 Introduction
In this lesson we see the utility of the Fourier transform technique for the hyperbolic (wave) and elliptic (Laplace) equations.
Example 1:
Solve the problem of vibrations of an infinite string governed by
∂²z/∂t² − c² ∂²z/∂x² = 0, −∞ < x < ∞, t > 0, subject to the initial conditions
(i) z(0, x) = f(x), (ii) ∂z/∂t(0, x) = g(x), −∞ < x < ∞,
and boundary conditions at the far field given by z(t, x), ∂z/∂x(t, x) → 0 as x → ±∞.
Solution: Taking the Fourier transform of the governing equation and the initial conditions (i), (ii), we get
∂²z̄(t, s)/∂t² + c²s² z̄(t, s) = 0, z̄(0, s) = F(s), ∂z̄/∂t(0, s) = G(s),
where F[z(t, x)] = z̄(t, s), F[f(x)] = F(s) and F[g(x)] = G(s).
The general solution is z̄(t, s) = Ae^{icst} + Be^{−icst}; from all these equations we get
A = ½[F(s) + G(s)/(ics)] and B = ½[F(s) − G(s)/(ics)].
∴ z̄(t, s) = ½[F(s) + G(s)/(ics)] e^{icst} + ½[F(s) − G(s)/(ics)] e^{−icst}.
Now taking the inverse Fourier transform, z(t, x) = F⁻¹[z̄(t, s)] = (1/√(2π)) ∫_{−∞}^∞ e^{isx} z̄(t, s) ds
= ½[f(x − ct) + f(x + ct)] − (1/2c)∫_0^{x−ct} g(ξ) dξ + (1/2c)∫_0^{x+ct} g(ξ) dξ
= ½[f(x − ct) + f(x + ct)] + (1/2c)∫_{x−ct}^{x+ct} g(ξ) dξ.
Example 2
An infinitely long string having one end at x = 0 is initially at rest along the x-axis. The end x = 0 is given a transverse displacement f(t) when t > 0. Find the displacement of any point of the string at any time.
Solution: Let z(t, x) denote the displacement of the string at any point x at any time t. The wave equation is
∂²z/∂t² = c² ∂²z/∂x², 0 < x < ∞, t > 0, subject to the initial conditions z(0, x) = 0 and ∂z/∂t(0, x) = 0, the boundary condition z(t, 0) = f(t), and z(t, x) is bounded.
Now taking the Laplace transform on both sides of the governing equation, we get
L[∂²z/∂t²] = c² L[∂²z/∂x²] ⇒ s²z̄ − s z(0, x) − ∂z/∂t(0, x) = c² d²z̄/dx², where L[z(t, x)] = z̄(s, x).
Using the initial conditions, d²z̄/dx² = (s²/c²) z̄, whose solution is z̄(s, x) = Ae^{sx/c} + Be^{−sx/c}.
Since z(t, x) is bounded as x → ∞, A = 0.
∴ z̄(s, x) = B e^{−sx/c}, and z̄(s, 0) = B = f̄(s), where f̄(s) = L[f(t)].
∴ The solution in terms of the transformed variable s is z̄(s, x) = f̄(s) e^{−sx/c}.
On finding the inverse Laplace transform of f̄(s) e^{−sx/c} (by the second shifting property), we obtain the required solution as
z(t, x) = f(t − x/c) for t > x/c, and z(t, x) = 0 otherwise.
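The result is a wave travelling to the right with speed c: a point x feels nothing until time x/c and then replays the end motion. A small sketch (Python/NumPy; the end displacement f is a hypothetical choice, not from the text):

    # Sketch: z(t, x) = f(t - x/c) for t > x/c, and 0 before the wave arrives
    import numpy as np

    def displacement(t, x, f, c=1.0):
        arg = t - x/c
        return np.where(arg > 0, f(np.maximum(arg, 0.0)), 0.0)

    f = lambda t: np.sin(np.pi*t)             # hypothetical end displacement
    print(displacement(1.5, 1.0, f))          # f(0.5) = 1.0
    print(displacement(0.5, 1.0, f))          # 0.0: the wave has not arrived yet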
49.2 Solution of the Laplace Equation in the Upper Half Plane (Dirichlet Problem)
Example 3:
Solve ∂²z/∂x² + ∂²z/∂y² = 0, −∞ < x < ∞, y > 0 … (i)
subject to z(x, 0) = f(x), −∞ < x < ∞ … (ii)
z(x, y) bounded as y → ∞ … (iii)
and both z and ∂z/∂x → 0 as x → ±∞ … (iv)
Solution:
Taking the Fourier transform (in x) of ∂²z/∂x² + ∂²z/∂y² = 0,
we get ∂²z̄(s, y)/∂y² − s² z̄(s, y) = 0 … (v)
Its solution is given by z̄(s, y) = A(s)e^{sy} + B(s)e^{−sy} … (vi)
Given that z(x, y) is bounded as y → ∞, z̄(s, y) must also be bounded as y → ∞; hence
z̄(s, y) = F(s) e^{−|s|y} … (vii)
where z̄(s, 0) = F[z(x, 0)] = F[f(x)] = F(s).
We note that F⁻¹(e^{−|s|y}) = √(2/π) · y/(y² + x²).
Taking the inverse Fourier transform on both sides of equation (vii) and applying the convolution theorem, we get
z(x, y) = f(x) ∗ [√(2/π) · y/(y² + x²)]
= (1/√(2π)) ∫_{−∞}^∞ f(ξ) √(2/π) · y/(y² + (x − ξ)²) dξ = (y/π) ∫_{−∞}^∞ f(ξ)/(y² + (x − ξ)²) dξ.
Thus z(x, y) = (y/π) ∫_{−∞}^∞ f(ξ)/(y² + (x − ξ)²) dξ is the solution of the Laplace equation in the upper half plane.
Example 4:
Solve the corresponding Neumann problem in the upper half plane: ∂²z/∂x² + ∂²z/∂y² = 0, −∞ < x < ∞, y > 0, with ∂z/∂y(x, 0) = g(x), where ∫_{−∞}^∞ g(x) dx = 0, which is the necessary condition for the existence of a solution.
Solution: Use the transformation v(x, y) = ∂z(x, y)/∂y.
Then ∂²v/∂x² + ∂²v/∂y² = (∂/∂y)(∂²z/∂x² + ∂²z/∂y²) = 0,
and v(x, 0) = ∂z(x, 0)/∂y = g(x) (given).
Thus we have ∂²v/∂x² + ∂²v/∂y² = 0, −∞ < x < ∞, y > 0, which is the Dirichlet problem solved in Example 3, so v(x, y) = (y/π)∫_{−∞}^∞ g(ξ)/(y² + (ξ − x)²) dξ. Integrating v w.r.t. y from a to y,
z(x, y) = (1/π) ∫_a^y ∫_{−∞}^∞ η g(ξ)/(η² + (ξ − x)²) dξ dη
= (1/2π) ∫_{−∞}^∞ g(ξ) log[((ξ − x)² + y²)/((ξ − x)² + a²)] dξ, which is the required solution.
References
Ian Sneddon (1957). Elements of Partial Differential Equations. McGraw-Hill, Singapore.
Amaranath, T. (2003). An Elementary Course in Partial Differential Equations. Narosa Publishing House, New Delhi.
Suggested Reading
I.P. Stavroulakis and Stephen A. Tersian (2003). Partial Differential Equations. Allied Publishers Pvt. Limited, New Delhi.
J. David Logan (2004). Partial Differential Equations. Springer (India) Private Ltd., New Delhi.