Mathematics For Engineers III
The Module aims to introduce students to the various properties and definitions of Fourier
series, Partial Differential Equations and basic Probability and Statistics. The module will
deal with these topics at a basic level, leaving more advanced techniques to the specialist
courses in Engineering.
5.2 LEARNING OUTCOMES
1. Knowledge and Understanding
Having successfully completed the module, students should be able to demonstrate
knowledge and understanding of:
1.1 Simple Fourier series and partial differential equations and the basics of
probability and statistics.
1.2 The implications of the basic mathematical theories.
2. Cognitive/Intellectual skills/Application of Knowledge
2.1 Present simple arguments and conclusions using the mathematical theories.
2.2 Analyse and evaluate problems in mathematics and engineering.
3. Communication/ICT/Numeracy/Analytic Techniques/Practical Skills
Having successfully completed the module, students should be able to:
3.1 Apply the mathematical knowledge to solve problems in a range of Engineering situations.
3.2. Carry out mathematical and numerical manipulation with confidence and
accuracy.
4. General transferable skills
Having successfully completed the module, students should be able to:
4.1 Assimilate Abstract Ideas.
4.2 Communicate information having a mathematical content.
6. INDICATIVE CONTENT
Unit I Fourier series and Introduction to Fourier Transforms
1.1. Fourier series expansion
1.2. Fourier transform
Unit II Partial Differential Equations including Boundary Value Problems
2.1. Formulation and Solution of standard types of first order equations
2.2. Lagrange’s equation
2.3. Linear Homogeneous partial differential equations of second order with constant
coefficients.
2.4. Classification of second order linear partial differential equations
2.5. Solutions of one–dimensional wave equation, one-dimensional heat equation.
Unit III Introduction to Probability and Statistics
3.1. Descriptive Statistics: Measures of Central tendency, Measures of Dispersion and
Measures of Forms.
3.2. Probability: Basic concepts and definition of probability, Conditional probability.
3.3. Probability distributions including Discrete distributions e.g. binomial and Poisson
distributions and Continuous distribution e.g. Normal Distribution.
3.4. Simple linear regression analysis.
1.0.Introduction
Mathematicians of the eighteenth century, including Daniel Bernoulli and Leonard Euler,
expressed the problem of the vibratory motion of a stretched string through partial
differential equations that had no solutions in terms of ‘elementary functions’. Their
resolution of this difficulty was to introduce infinite series of sine and cosine functions that
satisfied the equations. In the early nineteenth century, Joseph Fourier, while studying the
problem of heat flow, developed a cohesive theory of such series. Consequently, they were
named after him. One important advantage of Fourier series is that it can represent a
function containing discontinuities, whereas Maclaurin’s and Taylor’s series require the
function to be continuous throughout. Fourier series and Fourier transform are investigated
in this chapter.
In this section we develop the Fourier series expansion of periodic functions, Dirichlet’s
conditions, Half range sine and Cosine series.
A function f(x) is said to have a period T, or to be periodic with period T, if for all x, f(x) = f(x + T), where T is a positive constant. The least value of T > 0 is called the least period or simply the period of f(x). Equivalently, a function f is periodic with period T if the graph of f is invariant under translation in the x-direction by a distance T. A function that is not periodic is called aperiodic.
Examples 1:
1. The sine function is periodic with period 2π, since sin(x + 2π) = sin x for all values of x. This function repeats on intervals of length 2π.
2. The period of 𝑠𝑖𝑛 𝑛𝑥 or 𝑐𝑜𝑠 𝑛𝑥, where n is a positive integer, is 2𝜋 ⁄𝑛.
3. The period of 𝑡𝑎𝑛 𝑥 𝑖𝑠 𝜋.
4. A constant has any positive number as period .
5. Everyday examples are seen when the variable is time; for instance the hands of a
clock or the phases of the moon show periodic behaviour. Periodic motion is
motion in which the position(s) of the system are expressible as periodic functions,
all with the same period.
Example: State the amplitude and period of each of the following:
a) y = 3 sin 5x      b) y = sin(x/2)      c) y = 6 sin(2x/3)
Answers:
No   Amplitude   Period
a)   3           2π/5
b)   1           4π
c)   6           3π
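The amplitude and period claimed above are easy to check numerically. The short Python sketch below is an added illustration (not part of the original notes): it samples each function, estimates its amplitude, and verifies that y(x + T) = y(x) for the stated period T.

```python
import numpy as np

# claimed (amplitude, period) for y = 3 sin 5x, y = sin(x/2), y = 6 sin(2x/3)
cases = [
    (lambda x: 3 * np.sin(5 * x),      3, 2 * np.pi / 5),
    (lambda x: np.sin(x / 2),          1, 4 * np.pi),
    (lambda x: 6 * np.sin(2 * x / 3),  6, 3 * np.pi),
]

x = np.linspace(0, 20 * np.pi, 20001)
for f, A, T in cases:
    amp = f(x).max()                              # numerical amplitude
    shift_err = np.max(np.abs(f(x + T) - f(x)))   # ~0 if T really is a period
    print(f"amplitude ≈ {amp:.4f} (claimed {A}), |f(x+T)-f(x)| ≤ {shift_err:.2e}")
```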
1.1.2. Harmonics
A function 𝑓(𝑥) is sometimes expressed as a series of a number of different sine
components. The component with the largest period is the first harmonic, or fundamental
of 𝑓(𝑥).
𝑦 = 𝐴1 sin 𝑥 is the first harmonic or fundamental
𝑦 = 𝐴2 sin 2𝑥 is the second harmonic
𝑦 = 𝐴3 sin 3𝑥 is the third harmonic
And in general
y = Aₙ sin nx is the n-th harmonic, with amplitude Aₙ and period 360°/n = 2π/n.
𝑓(𝑥) = 𝑓(𝑥 + 6)
EXERCISE 1.1
1. Sketch the graphs of the following, inserting relevant values
a) f(x) = { 4, 0 < x < 5
            0, 5 < x < 8,      f(x) = f(x + 8)
b) f(x) = { 3x − x², 0 < x < 3,      f(x) = f(x + 3)
c) f(x) = { 2 sin x, 0 < x < π
            0, π < x < 2π,      f(x) = f(x + 2π)
d) f(x) = { x²/4, 0 < x < 4
            4, 4 < x < 6
            0, 6 < x < 8,      f(x) = f(x + 8)
∫_{−L}^{L} f(x) cos(πx/L) dx = a₁ ∫_{−L}^{L} cos(πx/L) cos(πx/L) dx = a₁L   (after using the above orthogonality conditions)
Therefore, ∫_{−L}^{L} f(x) cos(πx/L) dx = a₁L, so that
a₁ = (1/L) ∫_{−L}^{L} f(x) cos(πx/L) dx
Examples 5: Determine the Fourier series expansion of the following periodic functions
of period 2𝜋:
a) 𝑓 (𝑥) = 𝑥, 0 < 𝑥 < 2𝜋
b) 𝑓 (𝑥) = 𝑥 2 + 𝑥, −𝜋 < 𝑥 < 𝜋
c) f(x) = { x,         0 < x < π/2
            π/2,       π/2 < x < π
            π − x/2,   π < x < 2π
Answers:
Fourier’s coefficients determination:
a) a₀ = (1/π) ∫₀^{2π} f(x) dx = (1/π) ∫₀^{2π} x dx = (1/π) [x²/2]₀^{2π} = 2π
b) a₀ = (1/π) ∫_{−π}^{π} f(x) dx = (1/π) ∫_{−π}^{π} (x² + x) dx = (2/3)π²
   aₙ = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx = (1/π) ∫_{−π}^{π} (x² + x) cos nx dx = (4/n²) cos nπ = (4/n²)(−1)ⁿ
   bₙ = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx = (1/π) ∫_{−π}^{π} (x² + x) sin nx dx = −(2/n) cos nπ = −(2/n)(−1)ⁿ
Hence,
   f(x) = π²/3 + Σ_{n=1}^{∞} (−1)ⁿ [ (4/n²) cos nx − (2/n) sin nx ]
c) a₀ = (1/π) ∫₀^{2π} f(x) dx = (1/π) [ ∫₀^{π/2} x dx + ∫_{π/2}^{π} (π/2) dx + ∫_{π}^{2π} (π − x/2) dx ] = (5/8)π
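As a quick sanity check of part b), the coefficients can be approximated by numerical quadrature and compared with 4(−1)ⁿ/n² and −2(−1)ⁿ/n. This is an added illustration, not part of the original notes.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x**2 + x
for n in range(1, 5):
    an, _ = quad(lambda x: f(x) * np.cos(n * x) / np.pi, -np.pi, np.pi)
    bn, _ = quad(lambda x: f(x) * np.sin(n * x) / np.pi, -np.pi, np.pi)
    print(n, round(an, 4), round(4 * (-1)**n / n**2, 4),
             round(bn, 4), round(-2 * (-1)**n / n, 4))
```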
Remark: In the Fourier series corresponding to an odd function, only sine terms can be
present. In the Fourier series corresponding to an even function, only cosine terms (and
possibly a constant which we shall consider a cosine term) can be present.
Here f(x + 0) and f(x − 0) are the right- and left-hand limits of f(x) at x and represent lim_{ε→0⁺} f(x + ε) and lim_{ε→0⁺} f(x − ε) respectively.
f) 𝑓 (𝑥) = 𝑦 where 𝑥 2 + 𝑦 2 = 9
Answers: A given function can be represented by a Fourier series provided it satisfies the three Dirichlet conditions above.
No   Statement        No   Statement
a)   Yes              d)   Yes
For a function 𝑓(𝑥) defined only over the finite interval 0 ≤ 𝑥 ≤ 𝐿 its even periodic
extension 𝐹(𝑥) is the even periodic function defined by
F(x) = { f(x),    0 < x < L
         f(−x),   −L < x < 0
F(x + 2L) = F(x)
If f(x) satisfies Dirichlet's conditions in the interval 0 < x < L, then, being an even function of period 2L, the even periodic extension F(x) has a convergent Fourier series consisting of cosine terms only,
F(x) = a₀/2 + Σ_{n=1}^{∞} aₙ cos(nπx/L)   (3)
where
aₙ = (2/L) ∫₀^{L} f(x) cos(nπx/L) dx,   n = 0, 1, 2, 3, ⋯
For a function f(x) defined only over the finite interval 0 ≤ x ≤ L, its odd periodic extension G(x) is the odd periodic function defined by
G(x) = { f(x),     0 < x < L
         −f(−x),   −L < x < 0
G(x + 2L) = G(x)
If f(x) satisfies Dirichlet's conditions in the interval 0 < x < L, then, since G(x) is an odd function of period 2L, the odd periodic extension G(x) will have a convergent Fourier series representation consisting of sine terms only and given by
G(x) = Σ_{n=1}^{∞} bₙ sin(nπx/L)   (4)
where
bₙ = (2/L) ∫₀^{L} f(x) sin(nπx/L) dx,   n = 1, 2, 3, ⋯
Example 7: Consider the following function defined in the interval 0 < 𝑥 < 4
𝑓 (𝑥) = 𝑥,
Obtain:
a) a half-range cosine series expansion
b) a half-range sine series expansion
a₀ = (2/4) ∫₀⁴ f(x) dx = (1/2) ∫₀⁴ x dx = 4
aₙ = (2/L) ∫₀^{L} f(x) cos(nπx/L) dx = (2/4) ∫₀⁴ x cos(nπx/4) dx = (1/2) [ (4x/(nπ)) sin(nπx/4) + (16/(nπ)²) cos(nπx/4) ]₀⁴
   = (8/(nπ)²)(cos nπ − 1) = { 0,            for n even
                               −16/(nπ)²,   for n odd
Hence,
F(x) = 2 − (16/π²) ( cos(πx/4) + (1/3²) cos(3πx/4) + (1/5²) cos(5πx/4) + ⋯ )
or
F(x) = 2 − (16/π²) Σ_{n=1}^{∞} (1/(2n−1)²) cos((2n−1)πx/4)
Since F(x) = f(x) for 0 < x < 4, it follows that this Fourier series is representative of f(x) within this interval. Thus the half-range cosine series expansion of f(x) is
f(x) = x = 2 − (16/π²) Σ_{n=1}^{∞} (1/(2n−1)²) cos((2n−1)πx/4),   for 0 < x < 4
bₙ = (2/L) ∫₀^{L} f(x) sin(nπx/L) dx = (2/4) ∫₀⁴ x sin(nπx/4) dx = (1/2) [ −(4x/(nπ)) cos(nπx/4) + (16/(nπ)²) sin(nπx/4) ]₀⁴
   = −(8/(nπ)) cos nπ = −(8/(nπ))(−1)ⁿ = (8/(nπ))(−1)^{n+1}
Hence,
G(x) = (8/π) ( sin(πx/4) − (1/2) sin(2πx/4) + (1/3) sin(3πx/4) − ⋯ )
or
G(x) = (8/π) Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin(nπx/4)
Since G(x) = f(x) for 0 < x < 4, it follows that this Fourier series is representative of f(x) within this interval. Thus the half-range sine series expansion of f(x) is
f(x) = x = (8/π) Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin(nπx/4),   for 0 < x < 4
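The following short Python sketch (added for illustration) evaluates partial sums of both half-range expansions of f(x) = x on (0, 4) and reports how closely they reproduce the function; the cosine series converges noticeably faster because its coefficients decay like 1/n².

```python
import numpy as np

x = np.linspace(0.01, 3.99, 400)
N = 50  # number of terms kept in each partial sum

cosine = 2 - (16 / np.pi**2) * sum(
    np.cos((2*n - 1) * np.pi * x / 4) / (2*n - 1)**2 for n in range(1, N + 1))
sine = (8 / np.pi) * sum(
    (-1)**(n + 1) * np.sin(n * np.pi * x / 4) / n for n in range(1, N + 1))

print("max |cosine series - x| =", np.max(np.abs(cosine - x)))
print("max |sine   series - x| =", np.max(np.abs(sine - x)))
```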
PARSEVAL’S IDENTITY
If 𝑎 𝑛 and 𝑏𝑛 are the Fourier coefficients corresponding to 𝑓(𝑥) and if 𝑓 (𝑥) satisfies the
Dirichlet conditions, then
(1/L) ∫_{−L}^{L} {f(x)}² dx = a₀²/2 + Σ_{n=1}^{∞} (aₙ² + bₙ²)
Example 8: Consider the following function defined in the interval 0 < 𝑥 < 4
𝑓 (𝑥) = 𝑥,
Applying Parseval's identity to the even (half-range cosine) extension of f(x) = x over (0, 4), with a₀ = 4 and aₙ = −16/(nπ)² for odd n:
32/3 = 8 + (256/π⁴) ( 1 + 1/3⁴ + 1/5⁴ + 1/7⁴ + ⋯ )
This implies that
1 + 1/3⁴ + 1/5⁴ + 1/7⁴ + ⋯ = Σ_{n=1}^{∞} 1/(2n−1)⁴ = π⁴/96
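A quick numerical check of this identity (an added illustration): the partial sums of Σ 1/(2n−1)⁴ approach π⁴/96 ≈ 1.0147.

```python
import math

s = sum(1 / (2*n - 1)**4 for n in range(1, 100001))
print(s, math.pi**4 / 96)   # both ≈ 1.014678...
```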
1.4.FOURIER TRANSFORM
1.4.1. The Fourier integral
f(x) = ∫₀^{∞} [ A(α) cos αx + B(α) sin αx ] dα   (5)
where
A(α) = (1/π) ∫_{−∞}^{∞} f(x) cos αx dx
B(α) = (1/π) ∫_{−∞}^{∞} f(x) sin αx dx
𝐴 (𝛼) and 𝐵(𝛼) with −∞ < 𝛼 < ∞ are generalizations of the Fourier coefficients 𝑎 𝑛 and
𝑏𝑛. The right-hand side of (5)is also called a Fourier integral expansion of f .
Remarks:
• The result (5) holds if 𝑥 is a point of continuity of 𝑓(𝑥).
• If x is a point of discontinuity, we must replace f(x) by [f(x + 0) + f(x − 0)]/2, as in the case of Fourier series. Note that the above conditions are sufficient but not necessary.
• In the generalization of Fourier coefficients to Fourier integrals, a₀ may be neglected, since
|a₀| = | (1/L) ∫_{−L}^{L} f(x) dx | → 0 as L → ∞
EQUIVALENT FORMS OF FOURIER’S INTEGRAL THEOREM
φ(x) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(u) e^{iα(u−x)} dα du   (6)
where it is understood that, if f(x) is not continuous at x, the left side must be replaced by [f(x + 0) + f(x − 0)]/2.
These results can be simplified somewhat if 𝑓 (𝑥) is either an odd or an even function,
and we have:
φ(x) = (2/π) ∫₀^{∞} ( ∫₀^{∞} f(u) cos αx cos αu du ) dα,   if f(x) is an even function   (7)
φ(x) = (2/π) ∫₀^{∞} ( ∫₀^{∞} f(u) sin αx sin αu du ) dα,   if f(x) is an odd function   (8)
Formulas (7) and (8) are called the Fourier cosine integral and the Fourier sine integral, respectively.
An entity of importance in evaluating integrals and solving differential and integral equations is the Fourier transform; to obtain it, we can put (6) in the following form:
This expression is the Fourier integral of
f(t) = { 1, |t| ≤ 1
         0, |t| > 1
Example 10: Determine the Fourier transform of f if
f(x) = { e^{−x}, for x ≥ 0
         e^{2x}, for x < 0
Answers:
F(iα) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iαx} dx = (1/√(2π)) { ∫_{−∞}^{0} e^{2x} e^{−iαx} dx + ∫_{0}^{∞} e^{−x} e^{−iαx} dx }
      = (1/√(2π)) { ∫_{−∞}^{0} e^{(2−iα)x} dx + ∫_{0}^{∞} e^{−(1+iα)x} dx }
      = (1/√(2π)) { [ e^{(2−iα)x}/(2−iα) ]_{x→−∞}^{x=0} − [ e^{−(1+iα)x}/(1+iα) ]_{x=0}^{x→∞} }
      = (1/√(2π)) { 1/(2−iα) + 1/(1+iα) } = (1/√(2π)) · 3/(2 + α² + iα)
Hence
F(iα) = 3(2 + α² − iα) / ( √(2π) (4 + α²)(1 + α²) )
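The closed form above can be cross-checked numerically; the sketch below (an added illustration) compares a quadrature approximation of (1/√(2π)) ∫ f(x) e^{−iαx} dx with the stated expression at a few values of α.

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(alpha):
    # real part uses cos, imaginary part uses -sin, split over the two pieces of f
    re1, _ = quad(lambda x: np.exp(2*x) * np.cos(alpha*x), -50, 0)
    re2, _ = quad(lambda x: np.exp(-x) * np.cos(alpha*x), 0, 50)
    im1, _ = quad(lambda x: -np.exp(2*x) * np.sin(alpha*x), -50, 0)
    im2, _ = quad(lambda x: -np.exp(-x) * np.sin(alpha*x), 0, 50)
    return (re1 + re2 + 1j*(im1 + im2)) / np.sqrt(2*np.pi)

def F_closed(alpha):
    return 3*(2 + alpha**2 - 1j*alpha) / (np.sqrt(2*np.pi)*(4 + alpha**2)*(1 + alpha**2))

for a in (0.0, 1.0, 2.5):
    print(a, F_numeric(a), F_closed(a))
```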
Remarks:
• If f(x) is an even function, equation (7) yields
Fc(iα) = √(2/π) ∫₀^{∞} f(x) cos αx dx
f(x) = √(2/π) ∫₀^{∞} Fc(iα) cos αx dα
and we call Fc(iα) and f(x) Fourier cosine transforms of each other.
• If f(x) is an odd function, equation (8) yields
Fs(iα) = √(2/π) ∫₀^{∞} f(x) sin αx dx
f(x) = √(2/π) ∫₀^{∞} Fs(iα) sin αx dα
and we call Fs(iα) and f(x) Fourier sine transforms of each other.
• When the product of Fourier transforms is considered, a new concept called
convolution comes into being, and in conjunction with it, a new pair (function
and its Fourier transform) arises. In particular, if 𝐹 (𝑖𝛼) and 𝐺 (𝑖𝛼) are the Fourier
transforms of 𝑓 and 𝑔, respectively, and the convolution of 𝑓 and 𝑔 is defined to
be
f ∗ g = (1/√(2π)) ∫_{−∞}^{∞} f(u) g(x − u) du   (11)
Then
F(iα) G(iα) = (1/√(2π)) ∫_{−∞}^{∞} e^{−iαx} (f ∗ g)(x) dx   (12)
f ∗ g = (1/√(2π)) ∫_{−∞}^{∞} e^{iαx} F(α) G(α) dα   (13)
where in both (11) and (13) the convolution f ∗ g is a function of x.
Now equate the representations of f ∗ g expressed in (11) and (13), i.e.,
f ∗ g = (1/√(2π)) ∫_{−∞}^{∞} f(u) g(x − u) du = (1/√(2π)) ∫_{−∞}^{∞} e^{iαx} F(α) G(α) dα
In other words
ℱ{ df/dx } = (iα) F(iα)
Repeating the argument n times, it follows that
ℱ{ dⁿf/dxⁿ } = (iα)ⁿ F(iα)   (15)
If 𝒙 = 𝒕 ≡ 𝒕𝒊𝒎𝒆 the above result (15) is referred to as the time-differentiation property,
and may be used to obtain frequency-domain representation of differential equations.
Example 11: Show that if the time signals 𝑦(𝑡) and 𝑢(𝑡) have the Fourier transforms
𝑌(𝑖𝛼) and 𝑈(𝑖𝛼) respectively, and if
That is
𝓕 {𝒇(𝒕 − 𝝉)} = 𝒆 −𝒊𝜶𝝉 𝑭(𝒊𝜶) (𝟏𝟔)
The result (16) is known as the time-shift property, and implies that delaying a signal by a
time 𝝉 causes its Fourier transform to be multiplied by 𝑒 −𝑖𝛼𝜏 .
Since
|𝑒 −𝑖𝛼𝜏 | = |cos 𝛼𝜏 − 𝑖 sin 𝛼𝜏 | = 1
= 𝐹 (𝑖𝛼̃ )
Thus,
𝓕{𝒆𝒊𝜶𝟎 𝒕 𝒇(𝒕)} = 𝑭[𝒊 (𝜶 − 𝜶𝟎 )] (𝟏𝟕)
Example 12: Determine the frequency spectrum of the signal 𝑔(𝑡) = 𝑓(𝑡) cos 𝛼𝑐 𝑡
Answer: As cos α_c t = (1/2)[ e^{iα_c t} + e^{−iα_c t} ],
ℱ{g(t)} = ℱ{ (1/2) f(t)[ e^{iα_c t} + e^{−iα_c t} ] } = (1/2)[ ℱ{e^{iα_c t} f(t)} + ℱ{e^{−iα_c t} f(t)} ]
Using property (17), we get:
ℱ{g(t)} = (1/2) F[i(α − α_c)] + (1/2) F[i(α + α_c)]
The effect of multiplying the signal 𝑓(𝑡) by the carrier signal cos 𝛼𝑐 𝑡 is thus to produce a
signal whose spectrum consists of two (scaled) version of 𝐹(𝑖𝛼) , the spectrum of 𝑓 (𝑡) ; one
centred on 𝜶 = 𝜶𝒄 and the other on 𝜶 = −𝜶𝒄 . The carrier signal cos 𝛼𝑐 𝑡 is said to be
modulated by the signal 𝑓 (𝑡) .
1.5.5. The symmetry property
We can establish the exact form of the symmetry property as follows:
f(t) = (1/√(2π)) ∫_{−∞}^{∞} F(iα) e^{iαt} dα
or, on replacing t by −w (and the integration variable α by y),
√(2π) f(−w) = ∫_{−∞}^{∞} F(iy) e^{−iyw} dy
EXERCISES ON 1 ST CHAPTER
1.1. Sketch the graphs of the following, inserting relevant values.
a) f(x) = { 4, 0 < x < 5
            0, 5 < x < 8,      f(x) = f(x + 8)
b) f(x) = { 3, 0 < x < 4
            5, 4 < x < 7,      f(x) = f(x + 10)
c) f(x) = { 3x − x², 0 < x < 3,      f(x) = f(x + 3)
d) f(x) = { 2 sin x, 0 < x < π
            0, π < x < 2π,      f(x) = f(x + 2π)
e) f(x) = { x/2, 0 < x < π
            (π − x)/2, π < x < 2π,      f(x) = f(x + 2π)
f) f(x) = { x²/4, 0 < x < 4
            4, 4 < x < 6
            0, 6 < x < 8,      f(x) = f(x + 8)
1.2.State whether each of the following products is odd, even, or neither
a) x² sin 2x   b) x³ cos x   c) cos 2x cos 3x   d) (2x + 3) sin 4x   e) x³ eˣ   f) (cosh x)/(x + 2)
1.3.If 𝑓(𝑥) is defined in the interval −𝜋 < 𝑥 < 𝜋 and 𝑓 (𝑥) = 𝑓 (𝑥 + 2𝜋) , state whether
or not each of the following functions can be represented by a Fourier series.
a) f(x) = x⁴   b) f(x) = 3 − 2x   c) f(x) = 1/x   d) f(x) = e^{2x}   e) f(x) = csc x   f) f(x) = ±√(4x)
1.4. Prove
a) ∫_{−L}^{L} cos(mπx/L) cos(nπx/L) dx = ∫_{−L}^{L} sin(mπx/L) sin(nπx/L) dx = { 0, m ≠ n
                                                                                 L, m = n
where m, n = 1, 2, 3, ⋯
2.1. Find the Fourier series of the following functions:
a) f(x) = { 0, −1 < x < 0
            1, 0 < x < 1
b) f(x) = |x|, x ∈ [−1, 1]      c) f(x) = x, x ∈ [−1, 1]
d) f(x) = { −1, −2 < x < −1
             0, −1 ≤ x ≤ 1
             1, 1 < x < 2
e) f(x) = { a, 0 < x < π/3
            0, π/3 < x < 2π/3
            −a, 2π/3 < x < π,      f(x) = f(x + π)
2.2. a) Find the Fourier sine series of the following function: f(x) = { −1, −2 < x < 0
                                                                          1, 0 ≤ x < 2
b) Show that the half-range Fourier sine series expansion of the function f(x) = 1, 0 < x < π, is
(4/π) Σ_{n=1}^{∞} sin((2n − 1)x)/(2n − 1),   0 < x < π.
2.3. a) Find the Fourier cosine series of the following function:
f(x) = { 0, −2 < x < −1
         1, −1 ≤ x < 1
         0, 1 < x < 2
b) Determine the half-range Fourier cosine series expansion of the function f(x) = 2x − 1, 0 < x < π.
c) Determine the Fourier cosine series to represent the function f(x) where
f(x) = { cos x, 0 < x < π/2
         0, π/2 < x < π,      f(x) = f(x + 2π)
2.4. a) If 𝑓 (𝑥) is defined by 𝑓 (𝑥) = 𝑥 (𝜋 − 𝑥), 0 < 𝑥 < 𝜋 , express the function as
i) a half-range cosine series ii) a half-range sine series
b) If 𝑓 (𝑥) is defined by 𝑓(𝑥) = 1 − 𝑥 2 , 0 < 𝑥 < 1 , express the function as
i) a half-range cosine series ii) a half-range sine series
3.1. Calculate the Fourier transform of the two-sided exponential pulse given by
f(t) = { e^{at}, t ≤ 0
         e^{−at}, t > 0,      a > 0
Here N is called the order of the PDE; N is the maximum order of derivative appearing in the equation.
Definition 0.2: The order of a PDE is the order of the highest derivatives appearing in the
differentials.
Examples 0.1: Consider 𝑢 = 𝑢(𝑡, 𝑥) as a function of two variables
1) 𝜕𝑡2 𝑢 + (1 + cos 𝑢)𝜕𝑥3 𝑢 = 0 is a third-order PDE
2) 𝜕𝑡2 𝑢 + 2𝜕𝑥2 𝑢 + 𝑢 = 0 is a second-order PDE
Definition 0.3: A PDE is termed linear PDE if and only if it is linear in the unknown
function 𝑢 and the partial derivatives of 𝑢. All other PDE are termed non-linear PDE.
A linear PDE can be written as
ℒ𝑢 = 𝑓(𝑥1 ,𝑥 2 , 𝑥3 , 𝑥 4, ⋯ , 𝑥 𝑛)
For some linear operator ℒ and some function 𝑓 of the coordinates.
ℒ is a linear operator iff ℒ(𝑎𝑢 + 𝑏𝑣) = 𝑎ℒ (𝑢) + 𝑏ℒ(𝑣) for 𝑎, 𝑏 ∈ ℝ and all function 𝑢, 𝑣.
Examples 0.2: Consider = 𝑢(𝑡, 𝑥) , 𝑢 = 𝑢(𝑥, 𝑦) 𝑎𝑛𝑑 𝑣 = 𝑣(𝑥, 𝑦) as functions of two
variables
1) 𝜕𝑡2 𝑢 + (1 + cos 𝑢)𝜕𝑥3 𝑢 = 0 is a third-order non-linear PDE
2) 𝜕𝑡2 𝑢 + 2𝜕𝑥2 𝑢 + 𝑢 = 0 is a second-order linear PDE
3) The Cauchy-Riemann equations ∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x form a first-order linear system.
If u₁, u₂, ⋯, uₙ are solutions of a linear homogeneous PDE ℒu = 0, then so is any linear combination
Σᵢ₌₁ⁿ cᵢuᵢ,   for c₁, c₂, ⋯, cₙ ∈ ℝ
The following are examples of important partial differential equations that commonly
arise in problems of mathematical physics.
for example the wave equation, the heat equation and Laplace's equation.
Examples 1.1: If, for example, we take u to be the dependent variable and 𝑥, 𝑦 and 𝑡 to be
independent variables, then the following equations:
1) (∂u/∂x)² + ∂u/∂t = 0 is first-order in two variables,
2) x ∂u/∂x + y ∂u/∂y + ∂u/∂t = 0 is first-order in three variables.
Problem 2.1
Eliminate the constants 𝑎 and 𝑏 from the following equations:
a) 𝑧 = (𝑥 + 𝑎) (𝑦 + 𝑏)
b) 2𝑧 = (𝑎𝑥 + 𝑦) 2 + 𝑏
c) 𝑎𝑥 2 + 𝑏𝑦 2 + 𝑧 2 = 1
2.2.LAGRANGE’S LINEAR PDE
2.2.1. Formulation of Lagrange’s linear PDE
By eliminating of arbitrary functions
Let 𝑢 and 𝑣 be any two given functions of 𝑥, 𝑦 and 𝑧. Let 𝑢 and 𝑣 be connected by an
arbitrary function 𝜑 by the relation
𝝋(𝒖, 𝒗) = 𝟎 (∗∗∗∗)
Now, we want to eliminate 𝜑.
Differentiating partially with respect to x and y (with p = ∂z/∂x, q = ∂z/∂y), we obtain
∂φ/∂u (∂u/∂x + ∂u/∂z ∂z/∂x) + ∂φ/∂v (∂v/∂x + ∂v/∂z ∂z/∂x) = 0
∂φ/∂u (∂u/∂y + ∂u/∂z ∂z/∂y) + ∂φ/∂v (∂v/∂y + ∂v/∂z ∂z/∂y) = 0
that is,
∂φ/∂u (∂u/∂x + p ∂u/∂z) + ∂φ/∂v (∂v/∂x + p ∂v/∂z) = 0
∂φ/∂u (∂u/∂y + q ∂u/∂z) + ∂φ/∂v (∂v/∂y + q ∂v/∂z) = 0
Eliminating ∂φ/∂u and ∂φ/∂v, we obtain
| ∂u/∂x + p ∂u/∂z    ∂v/∂x + p ∂v/∂z |
| ∂u/∂y + q ∂u/∂z    ∂v/∂y + q ∂v/∂z | = 0
Which simplifies to
𝑷𝒑 + 𝑸𝒒 = 𝑹 (𝟏. 𝟐)
where
P = ∂u/∂y ∂v/∂z − ∂u/∂z ∂v/∂y ≡ ∂(u, v)/∂(y, z)
Q = ∂u/∂z ∂v/∂x − ∂u/∂x ∂v/∂z ≡ ∂(u, v)/∂(z, x)
R = ∂u/∂x ∂v/∂y − ∂u/∂y ∂v/∂x ≡ ∂(u, v)/∂(x, y)
y ∂z/∂x − x ∂z/∂y = 0
ii) Now the given relation is of the form φ(u, v) = 0 where
u = x² + y² + z²
v = lx + my + nz
Hence, the PDE is
𝑃𝑝 + 𝑄𝑞 = 𝑅
where
P = ∂u/∂y ∂v/∂z − ∂u/∂z ∂v/∂y = 2ny − 2mz
Q = ∂u/∂z ∂v/∂x − ∂u/∂x ∂v/∂z = 2lz − 2nx
R = ∂u/∂x ∂v/∂y − ∂u/∂y ∂v/∂x = 2mx − 2ly
Therefore the required PDE is
(2𝑛𝑦 − 2𝑚𝑧)𝑝 + (2𝑙𝑧 − 2𝑛𝑥) 𝑞 = (2𝑚𝑥 − 2𝑙𝑦)
(ny − mz) ∂z/∂x + (lz − nx) ∂z/∂y = (mx − ly)
Problem 2.2
1. Form the PDE by eliminating the arbitrary functions from:
a) 𝑧 = 𝑓(𝑥 2 + 𝑦 2 ) b) 𝑧 = 𝑓 (𝑥 + 𝑐𝑡) + 𝛷(𝑥 − 𝑐𝑡) c) 𝑧 = 𝑓(𝑎𝑥 + 𝑏𝑦) + 𝑔(𝛼𝑥 +
𝛽𝑦)
d) 𝑧 = 𝑥𝑦 + 𝑓(𝑥 2 + 𝑦 2 + 𝑧 2 ) e) 𝑧 = 𝑓(𝑥 2 + 𝑦 2 + 𝑧 2 , 𝑥 + 𝑦 + 𝑧)
f) 𝑧 = 𝑓 (2𝑥 + 𝑦) + 𝑔(3𝑥 − 𝑦)
2.2.2. Solution of Lagrange’s linear PDE
Theorem 1.1: The general solution of the linear PDE 𝑷𝒑 + 𝑸𝒒 = 𝑹 is 𝜑(𝑢, 𝑣) = 0
Where 𝜑 is an arbitrary function and 𝑢(𝑥, 𝑦, 𝑧) = 𝑐1 and 𝑣 (𝑥, 𝑦, 𝑧) = 𝑐2form a solution
of the equations
dx/P = dy/Q = dz/R   (1.3)
Procedure: To solve the equation 𝑷𝒑 + 𝑸𝒒 = 𝑹 we follow the following steps:
STEP1: Form the auxiliary simultaneous equations
dx/P = dy/Q = dz/R
STEP2: Solve these auxiliary simultaneous equations giving two independent solutions
𝑢(𝑥, 𝑦, 𝑧) = 𝑐1 and 𝑣(𝑥, 𝑦, 𝑧) = 𝑐2 ;
STEP3: Then write down the solution as 𝜑(𝑢, 𝑣) = 0 or 𝑢 = 𝑓(𝑣) or 𝑣 = 𝐹(𝑢), where the
function is arbitrary.
Examples 1.3: Find the general integral of 𝑝𝑥 + 𝑞𝑦 = 𝑧
Answer: Comparing with Pp + Qq = R, we get P = x, Q = y, and R = z.
STEP 1: Form the auxiliary simultaneous equations
dx/P = dy/Q = dz/R ↔ dx/x = dy/y = dz/z
STEP 2: Solve these auxiliary simultaneous equations. From dx/x = dy/y we get x/y = c₁, and from dy/y = dz/z we get y/z = c₂.
STEP 3: The general integral is therefore φ(x/y, y/z) = 0.
When no pair of the ratios can be integrated directly, suitable multipliers may be used; for instance, with multipliers x, y, z and 1/x, 1/y, 1/z one obtains combinations such as
(x dx + y dy + z dz) / [ x²(z² − y²) + y²(z² − y²) + z²(z² − y²) ] = [ (1/x) dx + (1/y) dy + (1/z) dz ] / [ (z² − y²) + (z² − y²) + (z² − y²) ]
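As an added illustration (not part of the original notes), sympy can confirm that a function built from the two integrals found in STEP 2 solves px + qy = z; here we take the particular family z = x·g(y/x), which corresponds to φ(x/y, y/z) = 0 for a suitable φ.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
g = sp.Function('g')          # arbitrary differentiable function

z = x * g(y / x)              # one family of solutions of p x + q y = z
p = sp.diff(z, x)             # p = dz/dx
q = sp.diff(z, y)             # q = dz/dy

print(sp.simplify(x * p + y * q - z))   # prints 0, so Pp + Qq = R holds
```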
We shall see some standard forms of such equations and solve them by special methods.
2.3.1. Type 1. 𝑭( 𝒑, 𝒒) = 𝟎
If the PDE contains 𝑝 and 𝑞 only, then suppose that 𝑧 = 𝑎𝑥 + 𝑏𝑦 + 𝑐 is a solution of the
equation 𝐹 ( 𝑝, 𝑞) = 0
Then p = ∂z/∂x = a and q = ∂z/∂y = b.
For eliminating 𝑎 from this above system, then we obtain the general solution.
Examples 1.6: Solve this PDE 𝑝 2 + 𝑞 2 = 𝑛𝑝𝑞
Answer: The solution of this PDE is 𝑧 = 𝑎𝑥 + 𝑏𝑦 + 𝑐 subject to 𝑎 2 + 𝑏 2 = 𝑛𝑎𝑏
Solving for b, we get b = [na ± √(n²a² − 4a²)]/2 = (a/2)[n ± √(n² − 4)]
Hence the complete solution is z = ax + (a/2)[n ± √(n² − 4)] y + c
As ∂z/∂c = 0 gives 0 = 1, which is absurd, there is no singular solution.
∂z/∂a = 0 = x + φ′(a) y + (d/da) f(a, φ(a))   (5)
Eliminating a between (4) and (5), we obtain the general solution of the given PDE.
Examples 1.7: Solve this PDE 𝑧 = 𝑝𝑥 + 𝑞𝑦 + 𝑝 2 𝑞 2
Answer: This is Clairaut’s form
The complete solution of this PDE is 𝑧 = 𝑎𝑥 + 𝑏𝑦 + 𝑎 2 𝑏 2
Differentiating z = ax + by + a²b² partially with respect to a and b, we get
∂z/∂a = 0 = x + 2ab²   and   ∂z/∂b = 0 = y + 2a²b,   i.e.   x = −2ab² and y = −2a²b
x/b = y/a = −2ab ≡ 1/k → a = ky and b = kx
x = −2ab² = −2k³x²y → k³ = −1/(2xy)
z = ax + by + a²b² = kxy + kxy + k⁴x²y²
z = 2kxy + k⁴x²y² = 2kxy + kx²y²(−1/(2xy)) = 2kxy − (k/2)xy = (3/2)kxy
z³ = (27/8)k³x³y³ = (27/8)(−1/(2xy))x³y³ = −(27/16)x²y²
16z³ + 27x²y² = 0 is a singular solution.
Taking b = φ(a), (2) becomes
z = ax + φ(a) y + a²[φ(a)]²   (*)
Differentiating (*) partially with respect to a, we get
∂z/∂a = 0 = x + φ′(a) y + 2a[φ(a)]² + 2a²φ(a)φ′(a)   (**)
Eliminating a between (*) and (**), we obtain the general solution of the given PDE.
2.3.3. Type 3.
Case1. 𝑭(𝒛, 𝒑, 𝒒) = 𝟎
This form of PDE does not contain x and y explicitly. As a trial solution, assume that z is a
function of 𝑢 = 𝑥 + 𝑎𝑦, where 𝑎 is an arbitrary constant.
p = ∂z/∂x = (dz/du)(∂u/∂x) = (dz/du)·1 = dz/du
q = ∂z/∂y = (dz/du)(∂u/∂y) = (dz/du)·a = a dz/du
Substituting these values of p and q in F(z, p, q) = 0, we obtain F(z, dz/du, a dz/du) = 0, which is an ordinary differential equation of the first order. Solving for dz/du, we obtain dz/du = φ(z, a), so that
dz/φ(z, a) = du ↔ ∫ dz/φ(z, a) = u + c → f(z, a) = u + c = x + ay + c
f(z, a) = x + ay + c
Case 2. F(x, p, q) = 0
Here y does not appear explicitly. Put q = a, solve F(x, p, a) = 0 for p = f(x, a), and integrate dz = p dx + q dy; then
z = f(x, a) + ay + c
is a complete integral of the given PDE since it contains two arbitrary constants a and c.
Case 3. F(y, p, q) = 0
Here x does not appear explicitly. Put p = a, solve F(y, a, q) = 0 for q = f(y, a), and use
dz = (∂z/∂x) dx + (∂z/∂y) dy = p dx + q dy
to obtain
z = ax + f(y, a) + c
which is a complete integral of the given PDE since it contains two arbitrary constants a and c.
Examples 1.8: Solve the following PDEs
a) 𝑝(1 + 𝑞) = 𝑞𝑧
b) 𝑞 = 𝑝𝑥 + 𝑝 2
c) 𝑝𝑞 = 𝑦
Answers:
a) 𝑝 (1 + 𝑞) = 𝑞𝑧
Assume u = x + ay, so that p = dz/du and q = a dz/du. Then
(dz/du)(1 + a dz/du) = a z (dz/du) ↔ 1 + a dz/du = az → a dz = (az − 1) du
→ ∫ a dz/(az − 1) = ∫ du ↔ ln(az − 1) = u + c
ln(az − 1) = x + ay + c.   This is the complete integral.
b) q = px + p². Since y does not appear explicitly, put q = a; then px + p² = a gives p = [−x ± √(x² + 4a)]/2, and
∫ dz = ∫ ([−x ± √(x² + 4a)]/2) dx + ∫ a dy → z = −x²/4 ± (1/2) ∫ √(x² + 4a) dx + ay + b
z = −x²/4 ± (1/2) { 2a sinh⁻¹(x/(2√a)) + (x/2)√(x² + 4a) } + ay + b
This is the complete integral. The singular and general integrals are found out as usual.
c) pq = y
Assume that p = a = constant; then the equation becomes aq = y ↔ q = y/a.
Since dz = p dx + q dy = a dx + (y/a) dy,
∫ dz = ∫ a dx + ∫ (y/a) dy → z = ax + y²/(2a) + b
z = ax + y²/(2a) + b
This is the complete integral. The singular and general integrals are found out as usual.
2.3.4. Type 4. Separable equations
We say that a first-order PDE is separable if it can be written as f(x, p) = φ(y, q).
Setting each side equal to an arbitrary constant a and solving gives p = f₁(x, a) and q = f₂(y, a), so that dz = p dx + q dy and
∫ dz = ∫ f₁(x, a) dx + ∫ f₂(y, a) dy
z = ∫ f₁(x, a) dx + ∫ f₂(y, a) dy + b
This expression contains two arbitrary constants and hence it is the complete integral. The
singular and general integrals are found out as usual.
Examples 1.9: Solve the following PDEs
𝑝 2 𝑦(1 + 𝑥 2 ) = 𝑞𝑥 2
Answers:
This equation is separable PDE.
p²(1 + x²)/x² = q/y = a
p²(1 + x²)/x² = a → p = x√a/√(1 + x²)
q/y = a → q = ay
Hence dz = p dx + q dy = (x√a/√(1 + x²)) dx + ay dy
∫ dz = √a ∫ x/√(1 + x²) dx + a ∫ y dy
z = √(a(1 + x²)) + (1/2)ay² + b
This is the complete integral.
Differentiating partially w.r.t 𝑏, we find that there is no singular integral.
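A short sympy check (added illustration, not in the original notes) confirms that the complete integral z = √(a(1 + x²)) + ay²/2 + b satisfies p²y(1 + x²) = qx²:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = sp.sqrt(a * (1 + x**2)) + a * y**2 / 2 + b

p = sp.diff(z, x)
q = sp.diff(z, y)
print(sp.simplify(p**2 * y * (1 + x**2) - q * x**2))   # prints 0
```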
2.3.5. Type 5. Equations reducible to standard forms
Case 1. F(xᵐp, yⁿq) = 0, where m and n are constants
Put X = x^{1−m} and Y = y^{1−n} (m ≠ 1, n ≠ 1); then xᵐp = (1 − m)P and yⁿq = (1 − n)Q, where P = ∂z/∂X and Q = ∂z/∂Y. Hence the equation reduces to F((1 − m)P, (1 − n)Q) = 0, which is of the form f(P, Q) = 0.
Case 2. F(xᵐp, yⁿq, z) = 0, where m and n are constants
This type of PDE can be transformed into standard form: by putting X = x^{1−m} and Y = y^{1−n}, where m ≠ 1 and n ≠ 1, we get f(P, Q, z) = 0.
Case3. For 1 st and 2 nd cases ∀𝒎 = 𝒏 = 𝟏
If m = 1, put X = ln x, and if n = 1, put Y = ln y; then we get
p = ∂z/∂x = (∂z/∂X)(dX/dx) = (1/x) P → px = P
q = ∂z/∂y = (∂z/∂Y)(dY/dy) = (1/y) Q → qy = Q
Case4. 𝑭(𝒛𝒌𝒑, 𝒛𝒌 𝒒) = 𝟎, ∀𝒌 𝒊𝒔 𝒄𝒐𝒏𝒔𝒕𝒂𝒏𝒕
This type of PDE can be transform into an equation of the 1 st type by proper substitution
If k ≠ −1, put Z = z^{k+1}; then P = ∂Z/∂x = (k + 1) zᵏ ∂z/∂x = (k + 1) zᵏ p → zᵏp = P/(k + 1), and
Q = ∂Z/∂y = (k + 1) zᵏ ∂z/∂y = (k + 1) zᵏ q → zᵏq = Q/(k + 1)
Hence the equation reduces to F( P/(k + 1), Q/(k + 1) ) = 0, which is of the form f(P, Q) = 0.
If k = −1, put Z = ln z; then P = ∂Z/∂x = (1/z) ∂z/∂x = (1/z) p and Q = ∂Z/∂y = (1/z) ∂z/∂y = (1/z) q. Hence the equation again reduces to the form f(P, Q) = 0.
Example: Solve x²p² + y²q² = z².
Answers:
a) 𝑥 2 𝑝 2 + 𝑦 2 𝑞 2 = 𝑧 2
This equation is not in any of the four standard types. But this is reducible to one of the
standard types by proper substitution of the variables.
x²p² + y²q² = z² ↔ (xp/z)² + (yq/z)² = 1
This is of the form discussed above, with m = 1, n = 1 and k = −1. Then put X = ln x, Y = ln y, Z = ln z.
Then P = ∂Z/∂X = (∂Z/∂z)(∂z/∂x)(∂x/∂X) = (1/z)·p·x = px/z and Q = ∂Z/∂Y = qy/z.
EXERCISES 2.1
a) px² + qy² = (x + y)z   b) p√x + q√y = √z   c) px² − qy² = z²   d) px + qy = nz
5. Show that the PDE ∂²u/∂x² − ∂²u/∂y² = 2u/x is satisfied by u = f(y − x) + (1/x) f′(y − x).
6. If z = f(x + iy) + F(x − iy), prove that ∂²z/∂x² + ∂²z/∂y² = 0, where f and F are arbitrary functions.
7. If u = f(x² + y) + F(x² − y), show that ∂²u/∂x² − (1/x)(∂u/∂x) − 4x² ∂²u/∂y² = 0.
Let Dₓ = ∂/∂x, D_y = ∂/∂y, Dₓⁱ = ∂ⁱ/∂xⁱ, D_yⁱ = ∂ⁱ/∂yⁱ. Consider the homogeneous equation
∂²u/∂x² + k₁ ∂²u/∂x∂y + k₂ ∂²u/∂y² = 0   (2.4.1)
which can be written as
(Dₓ² + k₁DₓD_y + k₂D_y²) u = 0   (2.4.2)
The auxiliary equation is
Dₓ² + k₁DₓD_y + k₂D_y² = 0
Let the roots of this equation be m 1 and m2, that is, Dx=m1Dy, Dx=m 2Dy
The auxiliary system of equations for p − m₂q = 0 is of the type
dx/1 = dy/(−m₂) = du/0
Thus, u = φ(y + m₂x) is a solution of (2.4.1). From (2.4.2) we also have (Dₓ − m₁D_y)u = 0, or p − m₁q = 0.
Its auxiliary system of equations is
dx/1 = dy/(−m₁) = du/0
This gives −m₁dx = dy, or m₁x + y = c₁, and u = c₂; so u = ψ(y + m₁x) is a solution of (2.4.1).
For a repeated root m₁, (Dₓ − m₁D_y)² u = 0; writing (Dₓ − m₁D_y)u = φ(y + m₁x), its auxiliary system of equations is
dx/1 = dy/(−m₁) = du/φ(y + m₁x)
which gives y + m₁x = a.
∂²u/∂x² − ∂²u/∂y² = 0
The auxiliary equation Dₓ² − D_y² = 0 has roots m = 1 and m = −1; that is, the equation factors into p − q = 0 and p + q = 0.
The auxiliary system of equations for p − q = 0 is dx/1 = dy/(−1) = du/0, which gives x + y = c.
The auxiliary system for p + q = 0 is dx/1 = dy/1 = du/0, which gives x − y = c. Hence the general solution is u = φ(x + y) + ψ(x − y).
∂²u/∂x² + k₁ ∂²u/∂x∂y + k₂ ∂²u/∂y² = f(x, y)   (2.4.5)
We have discussed the method for finding the general solution (complementary function); the inverse-operator results below are applicable in finding a particular solution of partial differential equations of the type (2.4.3).
Let f(Dₓ, D_y) be a linear partial differential operator with constant coefficients; the corresponding inverse operator is denoted 1/f(Dₓ, D_y).
The following results hold:
f(Dₓ, D_y) [ 1/f(Dₓ, D_y) ] φ(x, y) = φ(x, y)   (2.4.6)
1/[f₁(Dₓ, D_y) f₂(Dₓ, D_y)] φ(x, y) = 1/f₁(Dₓ, D_y) [ 1/f₂(Dₓ, D_y) φ(x, y) ]   (2.4.7)
                                    = 1/f₂(Dₓ, D_y) [ 1/f₁(Dₓ, D_y) φ(x, y) ]   (2.4.8)
1/f(Dₓ, D_y) [ φ₁(x, y) + φ₂(x, y) ] = 1/f(Dₓ, D_y) φ₁(x, y) + 1/f(Dₓ, D_y) φ₂(x, y)   (2.4.9)
1/f(Dₓ, D_y) e^{ax+by} = e^{ax+by} / f(a, b),   provided f(a, b) ≠ 0   (2.4.10)
1/f(Dₓ, D_y) [ e^{ax+by} φ(x, y) ] = e^{ax+by} 1/f(Dₓ + a, D_y + b) φ(x, y)   (2.4.11)
       = e^{ax} 1/f(Dₓ + a, D_y) [ e^{by} φ(x, y) ] = e^{by} 1/f(Dₓ, D_y + b) [ e^{ax} φ(x, y) ]   (2.4.12)
1/f(Dₓ², D_y²) cos(ax + by) = 1/f(−a², −b²) cos(ax + by)   (2.4.13)
1/f(Dₓ², D_y²) sin(ax + by) = 1/f(−a², −b²) sin(ax + by)   (2.4.14)
When φ(x, y) is any function of x and y, we resolve 1/f(Dₓ, D_y) into partial fractions, treating f(Dₓ, D_y) as a function of Dₓ alone, and operate each partial fraction on φ(x, y), remembering that
1/(Dₓ − mD_y) φ(x, y) = ∫ φ(x, c − mx) dx
where c is replaced by y + mx after the integration.
Example 2.4.2. Find the particular solution of the following partial differential equations:
(i) 3 ∂²u/∂x² + 4 ∂²u/∂x∂y − ∂u/∂y = e^{x−3y}      (ii) 3 ∂²u/∂x² − ∂u/∂y = eˣ sin(x + y)
(i) (3Dₓ² + 4DₓD_y − D_y) u = e^{x−3y}
u_p = 1/(3Dₓ² + 4DₓD_y − D_y) e^{x−3y}
    = 1/(3 + 4(−3) − (−3)) e^{x−3y}      by (2.4.10)
    = −(1/6) e^{x−3y}
(ii) u_p = 1/(3Dₓ² − D_y) eˣ sin(x + y)
    = eˣ 1/(3(Dₓ + 1)² − D_y) sin(x + y)
    = eˣ 1/(3Dₓ² + 6Dₓ + 3 − D_y) sin(x + y)
    = eˣ 1/(3(−1) + 6Dₓ + 3 − D_y) sin(x + y)
    = eˣ 1/(6Dₓ − D_y) sin(x + y) = eˣ (6Dₓ + D_y)/(36Dₓ² − D_y²) sin(x + y)
    = eˣ (6Dₓ + D_y)/(−36 + 1) sin(x + y) = −(1/35) eˣ ( 6 cos(x + y) + cos(x + y) )
    = −(1/5) eˣ cos(x + y).
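Both particular solutions can be verified symbolically; the sympy sketch below is an added check, not part of the original notes.

```python
import sympy as sp

x, y = sp.symbols('x y')

# (i) 3 u_xx + 4 u_xy - u_y = exp(x - 3y)
u1 = -sp.exp(x - 3*y) / 6
lhs1 = 3*sp.diff(u1, x, 2) + 4*sp.diff(u1, x, y) - sp.diff(u1, y)
print(sp.simplify(lhs1 - sp.exp(x - 3*y)))                 # 0

# (ii) 3 u_xx - u_y = exp(x) sin(x + y)
u2 = -sp.exp(x) * sp.cos(x + y) / 5
lhs2 = 3*sp.diff(u2, x, 2) - sp.diff(u2, y)
print(sp.simplify(lhs2 - sp.exp(x) * sp.sin(x + y)))       # 0
```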
∂²u/∂t² − c² ∂²u/∂x² = e^{−x} sin t
(D_t² − c²Dₓ²) u = e^{−x} sin t
u_p = 1/(D_t² − c²Dₓ²) e^{−x} sin t = e^{−x} 1/(D_t² − c²(Dₓ − 1)²) sin t = e^{−x} 1/(−1 − c²) sin t
    = −(1/(c² + 1)) e^{−x} sin t
The complementary function is u_c = φ(x − ct) + ψ(x + ct), so
u(x, t) = φ(x − ct) + ψ(x + ct) − (1/(c² + 1)) e^{−x} sin t
The solution u_c is known as d'Alembert's solution of the wave equation
∂²u/∂t² − c² ∂²u/∂x² = 0.
A second-order equation in which the partial derivatives appear in an algebraically linear form, that is, of the first degree,
A ∂²u/∂x² + 2B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + F u = f(x, y)   (2.5.1)
where the coefficients A, B, C, D, E and F and the function f are functions of x and y, is a linear second-order PDE. The left-hand side of (2.5.1) can be abbreviated by Lu, where u has continuous partial derivatives of appropriate order; the equation can then be written as Lu = f, where L is a differential operator, that is, L carries a function u into a function Lu. The operator L is called a linear differential operator if L(αu + βv) = αLu + βLv, where α and β are scalars and u and v are any functions with continuous partial derivatives of appropriate order. A partial differential equation is called homogeneous if Lu = 0, that is, if f on the right-hand side of the partial differential equation is zero, say f = 0 in (2.5.1).
Examples 2.1: (first-order and second-order linear equations)
It is known from analytical geometry and algebra that the polynomial equation Ax² + 2Bxy + Cy² + Dx + Ey + F = 0 represents a hyperbola, a parabola or an ellipse according as the discriminant B² − AC is positive, zero, or negative. Thus, the partial differential equation (2.5.3) is classified as hyperbolic, parabolic or elliptic according as B² − AC is positive, zero, or negative. The equation
A dy² − 2B dx dy + C dx² = 0
is called the characteristic equation of the partial differential equation (2.5.3). Solutions of the characteristic equation are called characteristics.
Example 2.2: Examine whether the following partial differential equations are hyperbolic, parabolic, or elliptic.
(i) ∂²u/∂x² + x ∂²u/∂y² + 4 = 0   (ii) ∂²u/∂x² + y ∂²u/∂y² = 0   (iii) y² ∂²u/∂x² − ∂²u/∂y² = 0
(iv) u_xx + x² u_yy = 0   (v) x u_xx + 2x u_xy + y u_yy = 0
Solutions: (i) A = 1, B = 0, C = x, so B² − AC = −x: the equation is hyperbolic if x < 0, parabolic if x = 0 and elliptic if x > 0.
(ii) A = 1, B = 0, C = y. B² − AC = 0 − y > 0 if y < 0, and so the equation is hyperbolic if y < 0. It is parabolic if y = 0 and elliptic if y > 0.
(v) A = x, B = x, C = y, so B² − AC = x(x − y): the equation is hyperbolic where x(x − y) > 0, and where x = y we have B² − AC = 0, so the equation is parabolic there.
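The classification test is easy to automate. The small Python helper below is an added illustration using the B² − AC convention of this section:

```python
def classify(A, B, C):
    """Classify A u_xx + 2B u_xy + C u_yy + ... at a point from its discriminant."""
    disc = B**2 - A*C
    if disc > 0:
        return "hyperbolic"
    return "parabolic" if disc == 0 else "elliptic"

# (ii) u_xx + y u_yy = 0 at a few values of y
for y in (-2.0, 0.0, 3.0):
    print(f"y = {y}: {classify(1, 0, y)}")
```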
Exercises2.2
Write down the order and degree of partial differential equations in problems 1-5.
1. ∂u/∂x + ∂u/∂y = u²
2. ∂²u/∂x² = ∂u/∂t
3. (∂u/∂x)³ + ∂u/∂y = 0
4. ∂u/∂t + 100 ∂u/∂x = 0
6. Verify that the functions u(x, y) = x² − y² and u(x, y) = eˣ sin y are solutions of the equation
∂²u/∂x² + ∂²u/∂y² = 0
7. x uₓ − y u_y = 0.
Examine whether cos(xy), e^{xy} and (xy)³ are solutions of this partial differential equation.
8. 4 u xx-7 u xy + 3 u yy= 0
9. 4 u xx-8 u xy + 4 u yy= 0
11. 4 ∂²u/∂t² − 12 ∂²u/∂x∂t + 9 ∂²u/∂x² = 0
12. 8 ∂²u/∂x² − 2 ∂²u/∂x∂y − 3 ∂²u/∂y² = 0
For what values of x and y are the following partial differential equations hyperbolic, parabolic, or elliptic?
15. u xx + x 2 u yy = 0
20. 2(u+xp+yq)=yp 2
21. u 2=pqxy
23. pq=1
27. ∂²u/∂x² + 12 ∂u/∂x + 2 = 0
28. 4 ∂²u/∂x² − 16 ∂²u/∂x∂y + 15 ∂²u/∂y² = 0
29. 3 ∂²u/∂x² + 4 ∂²u/∂x∂y − ∂u/∂y = 0
30. 3 ∂²u/∂x² − ∂u/∂y = sin(ax + by)
31. 3 ∂²u/∂x² − 2 ∂²u/∂x∂y − 5 ∂²u/∂y² = 3x + y + e^{x−y}
For a material of constant density ρ, constant specific heat μ and constant thermal
conductivity K, the partial differential equation governing the temperature u at any location
(x, y, z) and any time t is
∂u/∂t = k ∇²u,   where k = K/(μρ)
Example 2.6.1
Find the temperature at any point in the bar at any subsequent time.
The partial differential equation governing the temperature u(x, t) in the bar is
∂u/∂t = k ∂²u/∂x²   [parabolic]
[Note that if an end of the bar is insulated, instead of being maintained at a constant temperature, then the boundary condition changes to ∂u/∂x(0, t) = 0 or ∂u/∂x(L, t) = 0.]
Substituting u(x, t) = X(x) T(t) and separating the variables,
X T′ = k X″ T ⇒ T′/(kT) = X″/X = c
Again, when a function of t only equals a function of x only, both functions must equal the
same absolute constant. Unfortunately, the two boundary conditions cannot both be
satisfied unless T1 = T2 = 0. Therefore we need to treat this more general case as a
perturbation of the simpler (T1 = T2 = 0) case.
∂v/∂t = k ∂²v/∂x²
together with the boundary conditions
v(0, t) = v(L, t) = 0
and the initial condition
v(x, 0) = f (x) – g(x)
T′ + (nπ/L)² k T = 0
whose general solution is
Tₙ(t) = cₙ e^{−n²π²kt/L²}
Therefore
vₙ(x, t) = Xₙ(x) Tₙ(t) = cₙ sin(nπx/L) exp(−n²π²kt/L²)
If the initial temperature distribution f(x) − g(x) is a simple multiple of sin(nπx/L) for some integer n, then the solution for v is just v(x, t) = cₙ sin(nπx/L) exp(−n²π²kt/L²).
The Fourier sine series coefficients are
cₙ = (2/L) ∫₀^{L} ( f(z) − g(z) ) sin(nπz/L) dz
v(x, t) = Σ_{n=1}^{∞} [ (2/L) ∫₀^{L} ( f(z) − ((T₂ − T₁)/L) z − T₁ ) sin(nπz/L) dz ] sin(nπx/L) exp(−n²π²kt/L²)
u(x, t) = v(x, t) + ((T₂ − T₁)/L) x + T₁
cₙ = (2/2) ∫₀² ( (145z² − 240z + 100) − (50z + 100) ) sin(nπz/2) dz
cₙ = 145 ∫₀² (z² − 2z) sin(nπz/2) dz
cₙ = 145 [ ( (2/(nπ)) z(2 − z) + 16/(nπ)³ ) cos(nπz/2) + ( 8(z − 1)/(nπ)² ) sin(nπz/2) ]_{z=0}^{z=2}
cₙ = (2320/(nπ)³) ( (−1)ⁿ − 1 )
u(x, t) = 50x + 100 − (2320/π³) Σ_{n=1}^{∞} ( (1 − (−1)ⁿ)/n³ ) sin(nπx/2) exp(−9n²π²t/4)
Some snapshots of the temperature distribution (from the tenth partial sum) from the Maple
file at "www.engr.mun.ca/~ggeorge/5432/demos/ex451.mws" are shown on the
next page.
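A short Python sketch (added here as an illustration of the same kind of snapshots the Maple file produces) evaluates a truncated form of this series at a few times:

```python
import numpy as np

def u(x, t, terms=10):
    """Truncated series solution u(x, t) for the bar of length 2."""
    s = sum((1 - (-1)**n) / n**3
            * np.sin(n*np.pi*x/2) * np.exp(-9*n**2*np.pi**2*t/4)
            for n in range(1, terms + 1))
    return 50*x + 100 - 2320/np.pi**3 * s

x = np.linspace(0, 2, 5)
for t in (0.0, 0.01, 0.1, 1.0):
    print(f"t = {t}:", np.round(u(x, t), 2))
```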
Example 2.6.1 (continued)
The wave equation: ∂²u/∂t² = c² ∇²u, or its one-dimensional special case ∂²u/∂t² = c² ∂²u/∂x² [which is hyperbolic everywhere]
(where u is the displacement and c is the speed of the wave);
The heat (or diffusion) equation: μρ ∂u/∂t = K ∇²u + ∇K · ∇u,
a one-dimensional special case of which is
∂u/∂t = (K/(μρ)) ∂²u/∂x²   [which is parabolic everywhere]
(where u is the temperature, μ is the specific heat of the medium, ρ is the density and K is the thermal conductivity);
Laplace's equation: ∇²u = 0, a special case of which is ∂²u/∂x² + ∂²u/∂y² = 0 [which is elliptic everywhere].
The complete solution of a PDE requires additional information, in the form of initial
conditions (values of the dependent variable and its first partial derivatives at t = 0),
boundary conditions (values of the dependent variable on the boundary of the domain) or
some combination of these conditions.
Example 2.6.2
Show that
y(x, t) = ( f(x + ct) + f(x − ct) ) / 2
is a solution to the wave equation
∂²y/∂x² − (1/c²) ∂²y/∂t² = 0
with initial conditions y(x, 0) = f(x) and ∂y/∂t |_{t=0} = 0.
Let r = x + ct and s = x − ct; then y(r, s) = ( f(r) + f(s) ) / 2 and
∂y/∂x = (∂y/∂r)(∂r/∂x) + (∂y/∂s)(∂s/∂x) = (1/2)( f′(r)·1 + f′(s)·1 ),
∂²y/∂x² = (1/2)( f″(r) + f″(s) ),
∂y/∂t = (∂y/∂r)(∂r/∂t) + (∂y/∂s)(∂s/∂t) = (1/2)( f′(r)·c + f′(s)·(−c) ),
∂²y/∂t² = (1/2)( c² f″(r) + c² f″(s) ),
∂²y/∂x² − (1/c²) ∂²y/∂t² = (1/2)( f″(r) + f″(s) ) − (1/(2c²))( c² f″(r) + c² f″(s) ) = 0.
Therefore y(x, t) = ( f(x + ct) + f(x − ct) ) / 2 is a solution to the wave equation for all twice differentiable functions f(x). This is part of the d'Alembert solution.
A more general d'Alembert solution to the wave equation for an infinitely long string is
y(x, t) = ( f(x + ct) + f(x − ct) ) / 2 + (1/(2c)) ∫_{x−ct}^{x+ct} g(u) du
satisfying the initial displacement y(x, 0) = f(x) and the initial speed of the string ∂y/∂t |_{(x, 0)} = g(x) for all x,
for any twice differentiable functions f(x) and g(x).
Physically, this represents two identical waves, moving with speed c in opposite directions along the string.
Proof that y(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} g(u) du satisfies both initial conditions:
y(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} g(u) du   ⇒   y(x, 0) = (1/(2c)) ∫_{x}^{x} g(u) du = 0
Example 2.6.3
An elastic string of infinite length is displaced into the form y = cos x/2 on [–1, 1] only (and
y = 0 elsewhere) and is released from rest. Find the displacement y(x, t) at all locations on the
string x and at all subsequent times (t > 0).
y(x, t) = ( f(x + ct) + f(x − ct) ) / 2,
where f(x + ct) = { cos( π(x + ct)/2 ),   −1 − ct ≤ x ≤ 1 − ct
                    0,                     (otherwise)
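The displacement is easy to evaluate numerically; the sketch below is an added illustration, taking c = 1 for concreteness.

```python
import numpy as np

def f(s):
    """Initial displacement: cos(pi*s/2) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(s) <= 1, np.cos(np.pi * s / 2), 0.0)

def y(x, t, c=1.0):
    # d'Alembert solution for release from rest: two half-amplitude copies
    return 0.5 * (f(x + c*t) + f(x - c*t))

x = np.linspace(-4, 4, 9)
for t in (0.0, 1.0, 2.0):
    print(f"t = {t}:", np.round(y(x, t), 3))
```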
u(x, y) = f₁(y + λ₁x) + f₂(y + λ₂x),
where
λ₁ = (−B − √D)/(2A)   and   λ₂ = (−B + √D)/(2A)
and D = B² − 4AC
and f1, f2 are arbitrary twice-differentiable functions of their arguments.
λ 1 and λ 2 are the roots (or eigenvalues) of the characteristic equation.
Example 2.6.4
Solve
∂²u/∂x² − 3 ∂²u/∂x∂y + 2 ∂²u/∂y² = 0
subject to u(x, 0) = −x² and u_y(x, 0) = 0.
Compare with A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² = 0:
A = 1, B = −3, C = 2   ⇒   D = 9 − 4·2 = 1 > 0
λ = (+3 ± 1)/2 = 1 or 2
The complementary function (and general solution) is
u(x, y) = f (y + x) + g(y + 2x)
Initial conditions:
u(x, 0) = f (x) + g(2x) = –x2 (1)
and
u y(x, 0) = f '(x) + g'(2x) = 0 (2)
d/dx of (1):   f′(x) + 2g′(2x) = −2x   (3)
(3) − (2):   g′(2x) = −2x   ⇒   g′(x) = −x   ⇒   g(x) = −(1/2)x² + k   ⇒   g(y + 2x) = −(1/2)(y + 2x)² + k
From (1):   f(y + x) = (y + x)² − k
Therefore
u(x, y) = (y + x)² − k − (1/2)(y + 2x)² + k = (1/2)( 2y² + 4xy + 2x² − y² − 4xy − 4x² ) = (1/2)( y² − 2x² )
[It is easy to check that u = (1/2)(y² − 2x²) satisfies the equation ∂²u/∂x² − 3 ∂²u/∂x∂y + 2 ∂²u/∂y² = 0 together with both initial conditions u(x, 0) = −x² and u_y(x, 0) = 0.]
[Also note that the arbitrary constants of integration for f and g cancelled each other out. This
cancellation happens generally for this method of d’Alembert solution.]
Example 2.6.5: Solve 6 ∂²u/∂x² − 5 ∂²u/∂x∂y + ∂²u/∂y² = 14 with u(x, 0) = 2x + 1 and u_y(x, 0) = 4 − 6x.
For the particular solution, we require a function such that the combination of second partial derivatives resolves to the constant 14. It is reasonable to try a quadratic function of x and y as our particular solution, u_P = ax² + bxy + cy². Then
∂u_P/∂x = 2ax + by   and   ∂u_P/∂y = bx + 2cy
∂²u_P/∂x² = 2a,   ∂²u_P/∂x∂y = b   and   ∂²u_P/∂y² = 2c
6 ∂²u_P/∂x² − 5 ∂²u_P/∂x∂y + ∂²u_P/∂y² = 12a − 5b + 2c = 14
We have one condition on three constants, two of which are therefore a free choice.
Choose b = 0 and c = a; then 14a = 14, so c = a = 1.
Therefore a particular solution is u_P = x² + y².
Complementary function:
A = 6, B = −5, C = 1   ⇒   D = 25 − 4·6 = 1 > 0
λ = (+5 ± 1)/12 = 1/3 or 1/2
The complementary function is
u_C(x, y) = f( y + (1/3)x ) + g( y + (1/2)x )
and the general solution is
u(x, y) = f( y + (1/3)x ) + g( y + (1/2)x ) + x² + y²
Example 2.6.5 (continued)
u(x, y) = f( y + (1/3)x ) + g( y + (1/2)x ) + x² + y²
∂u/∂y = f′( y + (1/3)x ) + g′( y + (1/2)x ) + 2y
u(x, 0) = f( (1/3)x ) + g( (1/2)x ) + x² = 2x + 1   (1)
and
u_y(x, 0) = f′( (1/3)x ) + g′( (1/2)x ) + 0 = 4 − 6x   (2)
d/dx of (1):   (1/3) f′( (1/3)x ) + (1/2) g′( (1/2)x ) + 2x = 2   (3)
(2) − 2×(3):   (1/3) f′( (1/3)x ) − 4x = 4 − 6x − 4
f′( (1/3)x ) = −6x = −18( (1/3)x )   ⇒   f′(x) = −18x   ⇒   f(x) = −9x² + k
Substituting back into (1):   g( (1/2)x ) = 2x + 1 − x² − f( (1/3)x ) = 2x + 1 − k,   so   g(x) = 4x + 1 − k
But
u(x, y) = f( y + (1/3)x ) + g( y + (1/2)x ) + x² + y²
u(x, y) = −9( y + (1/3)x )² + k + 4( y + (1/2)x ) + 1 − k + x² + y²
u(x, y) = 1 + 2x + 4y − 6xy − 8y²
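sympy confirms this answer (an added check): it satisfies 6u_xx − 5u_xy + u_yy = 14 together with u(x, 0) = 2x + 1 and u_y(x, 0) = 4 − 6x.

```python
import sympy as sp

x, y = sp.symbols('x y')
u = 1 + 2*x + 4*y - 6*x*y - 8*y**2

pde = 6*sp.diff(u, x, 2) - 5*sp.diff(u, x, y) + sp.diff(u, y, 2)
print(sp.simplify(pde))                                    # 14
print(sp.simplify(u.subs(y, 0) - (2*x + 1)))               # 0
print(sp.simplify(sp.diff(u, y).subs(y, 0) - (4 - 6*x)))   # 0
```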
Example 2.6.6
A = 1, B = 2, C = 1   ⇒   D = 4 − 4·1 = 0
λ = (−2 ± 0)/2 = −1 (repeated)
The complementary function (and general solution) is
u(x, y) = f(y − x) + h(x, y) g(y − x)
where h(x, y) is any convenient non-trivial linear function of (x, y) except a multiple of (y − x).
Choosing, arbitrarily, h(x, y) = x,
u(x, y) = f(y − x) + x g(y − x)
u(0, y) = 0 f (y) + 0 = 0
Therefore the function f is identically zero, for any argument including (y – x).
Therefore
u(x, y) = x g(y – x) = x (1 – (y – x))
maximum values in the data set; xMedian , the median, which is the boundary point to the left of
which (and to the right of which) are 50% of the data, and xMode , the mode, which is the most
frequently occurring value in a set of data.
Arithmetic Mean
Let x₁, x₂, …, xₙ be an array of n measurements of a variable X. Suppose that these n measurements take the k distinct values x₁, …, x_k with frequencies R₁, …, R_k (so that Σ Rᵢ = n) and relative frequencies fᵢ = Rᵢ/n. The arithmetic mean is
x̄ = (Σᵢ₌₁ⁿ xᵢ)/n = (Σᵢ₌₁ᵏ Rᵢxᵢ)/n = Σᵢ₌₁ᵏ fᵢxᵢ
Geometric Mean
For a set of positive numbers x1 , x2 , , xn , the geometric mean is the principal n th root of the
product of the n numbers.
x_G = ⁿ√( x₁ x₂ ⋯ xₙ ) = ⁿ√( Πᵢ₌₁ᵏ xᵢ^{Rᵢ} )
Harmonic Mean
The harmonic mean of a set of data x₁, x₂, …, xₙ is the reciprocal of the arithmetic mean of the reciprocals of the data:
x_H = n / ( Σᵢ₌₁ᵏ Rᵢ/xᵢ )
Example: for the height measurements of 25 students, the working table is
i    xᵢ    Rᵢ    Rᵢxᵢ    xᵢ^{Rᵢ}    Rᵢ/xᵢ
1 152 1 152 152 0.0066
2 154 1 154 154 0.0065
3 155 1 155 155 0.0065
4 159 2 318 25281 0.0126
5 160 6 960 1.67772E+13 0.0375
6 161 2 322 25921 0.0124
7 162 1 162 162 0.0062
8 167 1 167 167 0.0060
9 170 4 680 835210000 0.0235
10 171 1 171 171 0.0058
11 172 4 688 875213056 0.0233
12 173 1 173 173 0.0058
4102 2.3337E+55 0.1526
Arithmetic mean: x̄ = (Σᵢ₌₁¹² Rᵢxᵢ)/25 = 4102/25 = 164.08.
Geometric mean: x_G = ²⁵√( Πᵢ₌₁¹² xᵢ^{Rᵢ} ) = ²⁵√(2.3337 × 10⁵⁵) = 163.95.
Harmonic mean: x_H = n / ( Σᵢ₌₁¹² Rᵢ/xᵢ ) = 25/0.1526 = 163.8270.
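These three means can be reproduced directly from the frequency table; the Python sketch below is an added illustration.

```python
import numpy as np

x = np.array([152, 154, 155, 159, 160, 161, 162, 167, 170, 171, 172, 173], float)
R = np.array([  1,   1,   1,   2,   6,   2,   1,   1,   4,   1,   4,   1], float)
n = R.sum()                                    # 25 observations

arithmetic = (R * x).sum() / n
geometric = np.exp((R * np.log(x)).sum() / n)  # n-th root of the product
harmonic = n / (R / x).sum()
print(round(arithmetic, 2), round(geometric, 2), round(harmonic, 4))
```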
Median
Suppose that the n observations x₁, x₂, …, xₙ have been sorted according to size as x₍₁₎, x₍₂₎, …, x₍ₙ₎. We define the median, denoted by x_median or x̃, as the middle observation (when n is odd) or the arithmetic mean of the two middle observations (when n is even). The quartiles divide the sorted data into four equal parts.
The first quartile Q₁ is given by
Q₁ = ( x_{⌊n/4⌋} + x_{⌊n/4⌋+1} ) / 2
where ⌊n/4⌋ is the integer part of n/4.
The third quartile Q₃ is given by
Q₃ = ( x_{⌊3n/4⌋} + x_{⌊3n/4⌋+1} ) / 2
where ⌊3n/4⌋ is the integer part of 3n/4.
The interquartile range is the difference between the third and first quartiles, and is thus given
by
= Q3 − Q1
Example 3.2
Consider the height measurements of 25 students given in ascending order are
152 154 155 159 159 160 160 160 160 160 160 161 161
162 167 170 170 170 170 171 172 172 172 172 173 .
• The median is x̃ = x_{(n+1)/2} = x_{(25+1)/2} = x₍₁₃₎ = 161.
• The mode is x_mode = 160.
• Summary of the sorted data: x_min = x₍₁₎ = 152, Q₁ = 160, Q₂ = x̃ = 161, Q₃ = 170.5, x_max = x₍₂₅₎ = 173, and the range is R = 21.
The range has the drawback of involving only two values and does not involve the remaining
numbers in the data set.
Mean Deviation
The mean deviation (or average deviation) of a set of n observations x₁, …, xₙ is defined as
x_MD = (Σᵢ₌₁ⁿ |xᵢ − x̄|)/n = (Σᵢ₌₁ᵏ Rᵢ|xᵢ − x̄|)/n = Σᵢ₌₁ᵏ fᵢ|xᵢ − x̄|
Standard Deviation
The standard deviation of a set of n numbers x₁, …, xₙ, denoted by s, is the root mean square of the deviations from the arithmetic mean. More precisely,
s = √( (Σᵢ₌₁ⁿ (xᵢ − x̄)²)/n ) = √( (Σᵢ₌₁ᵏ Rᵢ(xᵢ − x̄)²)/n ) = √( Σᵢ₌₁ᵏ fᵢ(xᵢ − x̄)² )
The variance of a set of data is defined as the square of the standard deviation and is thus given by
s2 .
Coefficient of Variation
The coefficient of variation (also called the coefficient of variability, the coefficient of dispersion,
or the relative standard deviation) is defined by
CV = s/x̄   or   CV = (s/x̄) × 100%
Example 3.5
Consider the height measurements of 25 students given in table 3.1. Calculate the mean deviation,
the standard deviation and the coefficient of variation.
Solution
The mean deviation is x_MD = (Σᵢ₌₁¹² Rᵢ|xᵢ − 164.08|)/25 = 148.24/25 = 5.93.
The standard deviation is s = √( (Σᵢ₌₁¹² Rᵢ(xᵢ − 164.08)²)/25 ) = √(1031.84/25) = 6.42.
The coefficient of variation is CV = (s/x̄) × 100 = (6.42/164.08) × 100 = 3.91%.
Table 3.2. Shows basic calculations to be done.
Table 3.2: Basic calculations
i    xᵢ    Rᵢ    xᵢ − x̄    Rᵢ|xᵢ − x̄|    Rᵢ(xᵢ − x̄)²
1 152 1 -12.08 12.08 145.93
2 154 1 -10.08 10.08 101.61
3 155 1 -9.08 9.08 82.45
4 159 2 -5.08 10.16 51.61
5 160 6 -4.08 24.48 99.88
6 161 2 -3.08 6.16 18.97
7 162 1 -2.08 2.08 4.33
8 167 1 2.92 2.92 8.53
9 170 4 5.92 23.68 140.19
10 171 1 6.92 6.92 47.89
11 172 4 7.92 31.68 250.91
12 173 1 8.92 8.92 79.57
148.24 1031.84
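The same calculations can be done in a few lines of Python (added illustration):

```python
import numpy as np

x = np.array([152, 154, 155, 159, 160, 161, 162, 167, 170, 171, 172, 173], float)
R = np.array([  1,   1,   1,   2,   6,   2,   1,   1,   4,   1,   4,   1], float)
n = R.sum()

mean = (R * x).sum() / n                      # 164.08
md = (R * np.abs(x - mean)).sum() / n         # mean deviation, about 5.93
s = np.sqrt((R * (x - mean)**2).sum() / n)    # standard deviation, about 6.42
cv = 100 * s / mean                           # coefficient of variation, about 3.91 %
print(round(md, 2), round(s, 2), round(cv, 2))
```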
Skewness
This is a concept which is commonly used in statistical decision making. It refers to the degree in which a
given frequency curve is deviating away from the normal distribution
There are 2 types of skewness namely:
i. Positive skewness
ii. Negative skewness
1. Positive Skewness
This is an asymmetrical curve in which the long tail extends to the right.
[Figures: frequency curves indicating the relative positions of the mode, median and mean for positively and negatively skewed distributions.]
2. Negative Skewness
This is an asymmetrical curve in which the long tail extends to the left
NB: This frequency curve for the age distribution is characteristic of the age distribution in developed
countries
• The mode is usually bigger than the mean and median.
• The median usually occurs in between the mean and mode.
• The number of observations above the mean is usually more than that below the mean (see the shaded region).
Measures of skewness
1. Coefficient of skewness = 3(mean − median)/standard deviation
2. Coefficient of skewness = (mean − mode)/standard deviation
NB: These 2 coefficients above are also known as Pearsonian measures of skewness.
3. Quartile coefficient of skewness = (Q₃ + Q₁ − 2Q₂)/(Q₃ − Q₁)
Where Q₁ = 1st quartile, Q₂ = 2nd quartile (the median) and Q₃ = 3rd quartile.
Required
Using the Pearsonian measure of skewness, calculate the coefficients of skewness and hence comment
briefly on the nature of the distribution of the loans.
Arithmetic mean = 66.51
The standard deviation = 10.68
The Median = 65.27
Therefore the Pearsonian coefficient = 3(66.51 − 65.27)/10.68 = 0.348
Comment
The coefficient of skewness obtained suggests that the frequency distribution of the loans given was
positively skewed .
This is because the coefficient itself is positive. But the skewness is not very high implying the degree of
deviation of the frequency distribution from the normal distribution is small
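Computed directly (added illustration):

```python
mean, median, stdev = 66.51, 65.27, 10.68
pearson = 3 * (mean - median) / stdev
print(round(pearson, 3))   # 0.348 -> mildly positively skewed
```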
Example 2
Using the above data calculate the quartile coefficient of skewness
Quartile coefficient of skewness = (Q₃ + Q₁ − 2Q₂)/(Q₃ − Q₁)
Q₁ = 55.5 + ((152.75 − 94)/97) × 5 = 58.53
Percentile measure of kurtosis, K (kappa) = (1/2)(Q₃ − Q₁) / (P₉₀ − P₁₀)
Example
Refer to the table above for loans to small business firms/units
Required
Calculate the percentile coefficient of Kurtosis
P₉₀ = (90/100)(n + 1) = 0.9(610 + 1) = 0.9(611) = 549.9
The actual loan for a firm in this position is
80.5 + ((549.9 − 538)/40) × 5 = 81.99
P₁₀ = (10/100)(n + 1) = 0.1(611) = 61.1
The actual loan value given to the firm in this position is
50.5 + ((61.1 − 32)/62) × 5 = 52.85
In the study of probability, any process of observation is referred to as an experiment. The results
of an observation are called the outcomes of the experiment.
When different outcomes are obtained in repeated trials, the experiment is called a random
experiment. More precisely, an experiment is called a random experiment if its outcome cannot be
predicted.
Sample space
The set of all possible outcomes of a random experiment is called the sample space (or universal set), and it is denoted by Ω. An element in Ω is called a sample point. Each outcome of a random experiment corresponds to a sample point.
Example
If we toss a coin, there are two possible outcomes, heads ( H ) or tails ( T ).
Thus the sample space of the experiment of tossing a coin is
Ω = {H, T}.
Example
If we toss a coin twice, the sample space of this experiment is
Ω = {HH, HT, TH, TT}.
Example
If an experiment consists of measuring “lifetimes” of electric light bulbs produced by a company,
then the result of the experiment is a time t in hours that lies in some interval Ω = {t : 0 ≤ t ≤ 4000}, where we assume that no bulb lasts more than 4000 hours.
Events
An event is any subset of the sample space Ω. A sample point of Ω is often referred to as a simple or elementary event. The sample space Ω is a subset of itself, that is, Ω ⊆ Ω. Since Ω is the set of all possible outcomes, it is often called the sure or certain event. The empty set ∅ is called the impossible event, because an element of ∅ cannot occur.
Example
If we toss a coin twice, the event that only one head comes up is the subset A = {HT, TH} of the sample space.
If the sets corresponding to events A and B are disjoint, i.e., A ∩ B = ∅, the events are said to be mutually exclusive. This means that they cannot both occur. We say that a collection of events A₁, …, Aₙ is mutually exclusive if every pair in the collection is mutually exclusive.
Example
Referring to the experiment of tossing a coin twice, let A be the event “at least one head occurs”
and B the event “the second toss results is a tail”.
In any random experiment there is always uncertainty as to whether a particular event will or will
not occur. As a measure of the chance, or probability, with which we can expect the event to occur,
it is convenient to assign a number between 0 and 1. If we are sure or certain that an event will
occur, we say that its probability is 100% or 1 . If we are sure that the event will not occur, we say
that its probability is zero.
If, for example, the probability is 1/4, we would say that there is a 25% chance it will occur and a 75% chance that it will not occur.
There are two important procedures by means of which we can estimate the probability of an event.
Classical approach: If an event A can occur in h different ways out of a total of n possible
ways, all of which are equally likely, then the probability of the event is
P(A) = h/n.
Example
Suppose we want to know the probability that a head will turn up in a single toss of a coin. Since
there are two equally likely ways in which the coin can come up (heads and tails) assuming it does
not roll away or stand on its edge, and of these two ways a head can arise in only one way, we
reason that the required probability is 1/2.
Frequency approach: If after n repetitions of an experiment, where n is very large, an event A
is observed to occur in h of these, then the probability of the event is
P(A) = h/n.
This is also called the empirical probability of the event A .
Axiomatic approach: Suppose we have a sample space Ω. To each event A in the class C of events, we associate a real number P(A). Then P is called a probability function, and P(A) the probability of the event A, if the following axioms are satisfied:
Axiom 1: P(A) ≥ 0
Axiom 2: P(Ω) = 1
Axiom 3: For mutually exclusive events A₁, A₂, …,   P( ⋃ᵢ₌₁^∞ Aᵢ ) = Σᵢ₌₁^∞ P(Aᵢ)
Theorem 3.1
If A₁ ⊂ A₂, then
P(A₁) ≤ P(A₂)   and   P(A₂ − A₁) = P(A₂) − P(A₁)
Proof
If A₁ ⊂ A₂, then A₂ = A₁ ∪ (A̅₁ ∩ A₂), and since A₁ and A̅₁ ∩ A₂ are mutually exclusive, P(A₂) = P(A₁) + P(A̅₁ ∩ A₂) ≥ P(A₁), with P(A̅₁ ∩ A₂) = P(A₂ − A₁).
Theorem 3.2
For every event A,   0 ≤ P(A) ≤ 1
Theorem 3.3
For ∅, the empty set,
P(∅) = 0
Proof
Note that Ω = Ω ∪ ∅ and that Ω and ∅ are mutually exclusive. Then P(Ω) = P(Ω) + P(∅), and P(∅) = 0 since P(Ω) = 1.
Theorem 3.4
If A̅ is the complement of A, then
P(A̅) = 1 − P(A)
Proof
Note that Ω = A ∪ A̅ and that A and A̅ are mutually exclusive. Then P(Ω) = P(A) + P(A̅), and P(A̅) = 1 − P(A) since P(Ω) = 1.
Theorem 3.5
If A = A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ, where A₁, A₂, …, Aₙ are mutually exclusive events, then
P(A) = P(A₁) + P(A₂) + ⋯ + P(Aₙ)
In particular, if A = Ω, then
P(A) = P(A₁) + P(A₂) + ⋯ + P(Aₙ) = 1
Theorem 3.6
If A and B are two events, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Proof
Since A ∪ B = A ∪ (B ∩ A̅), where A and B ∩ A̅ are mutually exclusive, and B = (A ∩ B) ∪ (B ∩ A̅), where A ∩ B and B ∩ A̅ are mutually exclusive, then P(A ∪ B) = P(A) + P(B ∩ A̅) and P(B) = P(A ∩ B) + P(B ∩ A̅). Subtracting, P(A ∪ B) − P(B) = P(A) − P(A ∩ B), and thus P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
More generally,
P( ⋃ᵢ₌₁ⁿ Aᵢ ) = Σᵢ₌₁ⁿ P(Aᵢ) − Σ_{i<j} P(Aᵢ ∩ Aⱼ) + Σ_{i<j<k} P(Aᵢ ∩ Aⱼ ∩ Aₖ) − ⋯ + (−1)^{n−1} P(A₁ ∩ A₂ ∩ ⋯ ∩ Aₙ)
Theorem 3.7
For any events A and B,
P(A) = P(A ∩ B) + P(A ∩ B̅)
Theorem 3.8
If an event A must result in the occurrence of one of the mutually exclusive events A₁, A₂, …, Aₙ, then
P(A) = P(A ∩ A₁) + P(A ∩ A₂) + ⋯ + P(A ∩ Aₙ)
Example
A single die is tossed once. The sample space is Ω = {1, 2, 3, 4, 5, 6}.
• P(A̅) = 1 − P(A) = 0.80
• P(B̅) = 1 − P(B) = 0.70
• P(A̅ ∩ B̅) = P( (A ∪ B)‾ ) = 1 − P(A ∪ B) = 1 − [ P(A) + P(B) − P(A ∩ B) ] = 1 − 0.40 = 0.60
Let A and B be two events such that P(A) > 0. Denote by P(B|A) the probability of B given that A has occurred. Since A is known to have occurred, it becomes the new sample space replacing the original Ω. From this we are led to the definition
P(B|A) = P(A ∩ B)/P(A)   or   P(A ∩ B) = P(A) P(B|A)
Similarly,
P(A|B) = P(A ∩ B)/P(B)   or   P(A ∩ B) = P(B) P(A|B)
Bayes' rule:
P(A|B) = P(A) P(B|A) / P(B)
Example
Find the probability that a single toss of a die will result in a number less than 4 if
(a) no other information is given and
(b) it is given that the toss resulted in an odd number.
Solution
(a) Let B denote the event {less than 4}, i.e., B = {1, 2, 3}. Since B is the union of the events 1, 2, or 3 turning up, we see by Theorem 3.5 that
P(B) = P(1 ∪ 2 ∪ 3) = P(1) + P(2) + P(3) = 1/6 + 1/6 + 1/6 = 1/2
We assume equal probabilities for the sample points.
(b) Letting A be the event {odd number}, i.e., A = {1, 3, 5}, we see that
P(A) = P(1 ∪ 3 ∪ 5) = P(1) + P(3) + P(5) = 1/6 + 1/6 + 1/6 = 1/2.
As A ∩ B = {1, 3}, we see that
P(A ∩ B) = 2/6 = 1/3.
Then P(B|A) = P(A ∩ B)/P(A) = (1/3)/(1/2) = 2/3. Hence, the added knowledge that the toss results in an odd number raises the probability from 1/2 to 2/3.
Theorems on Conditional Probability
Theorem 3.9
For any three events A₁, A₂, A₃ we have
P(A₁ ∩ A₂ ∩ A₃) = P(A₁) P(A₂|A₁) P(A₃|A₁ ∩ A₂)
Theorem 3.10
If an event A must result in one of the mutually exclusive events A₁, A₂, …, Aₙ, then
P(A) = Σᵢ₌₁ⁿ P(Aᵢ) P(A|Aᵢ)
Total Probability
• Total Probability
Proof
Let us consider the Venn diagram of Fig. 3.1. Since the Aᵢ partition Ω,
P(B) = P(B ∩ Ω) = P( (B ∩ A₁) ∪ ⋯ ∪ (B ∩ Aₙ) ) = Σᵢ₌₁ⁿ P(B ∩ Aᵢ) = Σᵢ₌₁ⁿ P(Aᵢ) P(B|Aᵢ).
Bayes´ Theorem
Let A₁, A₂, …, Aₙ be mutually exclusive and exhaustive events. Then if B is any event in Ω, we obtain
P(Aᵢ|B) = P(Aᵢ) P(B|Aᵢ) / Σⱼ₌₁ⁿ P(Aⱼ) P(B|Aⱼ)
Example
Three facilities supply microprocessors to a manufacturer of elementary equipment. All are
supposedly made to the same specifications. However, the manufacturer has for several years
tested the microprocessors, and records indicate the following information (Table 3.1).
Table 3.1: Test results
Supply Facility Fraction Defective Fraction Supplied
1 0.02 0.15
2 0.01 0.80
3                 0.03                      0.05
The manufacturer has stopped testing because of the costs involved, and it may be reasonably
assumed that the fractions that are defective and the inventory mix are the same as during the
period of record keeping. The director of manufacturing randomly selects a microprocessor, takes
it to the test department, and finds that it is defective. If we let A be the event that an item is
defective, and Bi be the event that the item came from facility i ( i = 1, 2,3 ), then we can evaluate
P(B₃|A) = P(B₃) P(A|B₃) / [ P(B₁) P(A|B₁) + P(B₂) P(A|B₂) + P(B₃) P(A|B₃) ]
        = (0.05 × 0.03) / (0.15 × 0.02 + 0.80 × 0.01 + 0.05 × 0.03) = 3/25.
Independent Events
Two events A and B are said to be independent if and only if
P(A ∩ B) = P(A) P(B)
If two events A and B are independent, then it can be shown that A̅ and B̅ are also independent, that is,
P(A̅ ∩ B̅) = P(A̅) P(B̅)
Equivalently, if A and B are independent then P(A|B) = P(A ∩ B)/P(B) = P(A).
Three events A, B, C are independent if and only if
P(A ∩ B) = P(A) P(B),   P(A ∩ C) = P(A) P(C),   P(B ∩ C) = P(B) P(C)   and   P(A ∩ B ∩ C) = P(A) P(B) P(C)
3.3. Probability distributions including Discrete distributions e.g. binomial and Poisson
distributions and Continuous distribution e.g. Normal Distribution.
a) Introduction
Random variable: the measurements of a random variable vary in a seemingly random and
unpredictable manner. A random variable assumes a unique numerical value for each of the
outcomes in the sample space of the probability experiment.
If we toss a coin twice, the number of Heads obtained could be 0, 1 or 2. The probabilities of
these occurrences are:
P (No heads) =P (TT) = (1/2)* (1/2) =1/4
P (1 head) =P (TH) +P (HT) = (0.5*0.5) + (0.5*0.5) =0.5
P (2 heads) =P (HH) =0.5*0.5=0.25
The probability distribution is often written as follow:
x 0 1 2
P(X=x) 0.25 0.5 0.25
X can take only the values x₁, x₂, …, xₙ. The probabilities associated with these values are P₁, P₂, …, Pₙ, where
P(X = x₁) = P₁,   P(X = x₂) = P₂,   …,   P(X = xₙ) = Pₙ
and Σᵢ₌₁ⁿ Pᵢ = 1.
We always denote a random variable by a capital letter X,Y,Z….and the particular values by
lower case letter x,y,z….
x₁    f(x₁) = P₁
x₂    f(x₂) = P₂
x₃    f(x₃) = P₃
……    ……
xₙ    f(xₙ) = Pₙ
with 0 ≤ f(xᵢ) ≤ 1 and Σᵢ₌₁ⁿ f(xᵢ) = 1
• Expectation: E(X )
Expected value or Mean
Let X be a r.v. with pdf f(x) = P(X = x), where x is discrete. The expectation of X, written E(X), is given by
E(X) = Σ_x x P(X = x)   or   E(X) = Σᵢ₌₁ⁿ xᵢ Pᵢ   or   E(X) = Σᵢ₌₁ⁿ xᵢ f(xᵢ)
It is the average of the numbers giving the gains and the losses weighted by their probabilities.
The game permits to expect an average gain or an average loss (by part) of E(X).
One says that the game is favourable if E(X) is positive and it is not favourable if E(X) is
negative.
If E(X) =0, the game is balanced.
Theorems:
1. Let consider an r.v X and a real number k:
E (kX ) = k E ( X )
E(X + k ) = E(X ) + k
2. Let consider X and Y, two r.v:
E( X + Y ) = E( X ) + E(Y )
Example: A random variable has pdf as shown. Find E(X).
X -2 -1 0 1 2
P(X=x) 0.3 0.1 0.15 0.4 0.05
Solution
E(X) = Σ_x x P(X = x) = (−2)(0.3) + (−1)(0.1) + (0)(0.15) + (1)(0.4) + (2)(0.05) = −0.2
Let X be a r.v. with pdf f(x) = P(X = x), and let g(X) be a function of the r.v. X. Then E[g(X)] = Σ_x g(x) P(X = x).
Example:
The r.v X has pdf P(X=x) for x=1, 2, 3.
X 1 2 3
P(X=x) 0.1 0.6 0.3
Compute
a) E(3) b) E(X) c) E(5X) d) E(5X+3)
Solution
a) E(3) = 3 (the expectation of a constant is the constant itself)
b) E(X) = Σ x P(X = x) = (1 × 0.1) + (2 × 0.6) + (3 × 0.3) = 2.2
c) E(5X) = 5 E(X) = 11
d) E(5X + 3) = 5 E(X) + 3 = 14
• Variance: Var(X)
Let X be an r.v. having pdf f(x) = P(X = x), x ∈ ℝ, and let μ = E(X), which is a constant. The variance of X, written Var(X), is given by Var X = E(X − μ)². Alternatively,
Var X = E(X − μ)²
      = E(X² − 2μX + μ²)
      = E(X²) − 2μ E(X) + μ²
      = E(X²) − 2μ·μ + μ²
      = E(X²) − μ²
so that
Var X = E(X²) − [E(X)]² = Σ xi² Pi − E²(X), the sum taken over i = 1, ..., n,
and the standard deviation is σ_X = √(Var X).
The variance is the average of the squared deviations of the different values of the gains (or losses) from the expected value (the mean) of the gain (or loss).
The standard deviation is a parameter that indicates the spread of the gains or the losses about the mean.
Properties
Var(k) = 0
Var(kX) = k² Var(X)
Var(X + k) = Var(X)
Var(kX + b) = k² Var(X)
σ(kX) = σ_kX = |k| σ_X
Example:
An r.v. X has the following distribution:
x        1     2     3     4     5
P(X=x)   0.1   0.3   0.2   0.3   0.1
Find E(X), Var(X) and σ_X.
Hint (for x = 1, 2, ..., 6 with p(x) = 1/6):
x         1     2     3     4      5      6      sum
x²·p(x)   1/6   4/6   9/6   16/6   25/6   36/6   91/6 ≈ 15.1667
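Calculations of this kind can also be checked numerically. The sketch below is a minimal illustration using the tabulated distribution above; the helper name discrete_summary is purely illustrative.

```python
import math

def discrete_summary(xs, ps):
    """Return E(X), Var(X) and sigma_X for a discrete pdf given as lists."""
    assert abs(sum(ps) - 1.0) < 1e-9, "probabilities must sum to 1"
    mean = sum(x * p for x, p in zip(xs, ps))      # E(X) = sum of x P(X = x)
    ex2 = sum(x * x * p for x, p in zip(xs, ps))   # E(X^2)
    var = ex2 - mean ** 2                          # Var X = E(X^2) - [E(X)]^2
    return mean, var, math.sqrt(var)

xs = [1, 2, 3, 4, 5]
ps = [0.1, 0.3, 0.2, 0.3, 0.1]
print(discrete_summary(xs, ps))   # E(X) = 3.0, Var X = 1.4, sigma ≈ 1.18
```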
For two discrete random variables X and Y with joint pdf h(xi, yj) and marginal pdfs f(xi) and g(yj), the joint distribution table has the form:
X \ Y    y1          y2          ...    ym          Total
x1       h(x1, y1)   h(x1, y2)   ...    h(x1, ym)   f(x1)
...      ...         ...         ...    ...         ...
xn       h(xn, y1)   h(xn, y2)   ...    h(xn, ym)   f(xn)
Total    g(y1)       g(y2)       ...    g(ym)       1
Example: Let X and Y have the marginal distributions
xi       1     2
f(xi)    0.3   0.7
yj       2     3     4
g(yj)    0.1   0.5   0.4
and suppose X and Y are independent, so that each joint probability is the product of the marginals:
X \ Y    2                 3                 4                 Total
1        0.3×0.1 = 0.03    0.3×0.5 = 0.15    0.3×0.4 = 0.12    0.3
2        0.7×0.1 = 0.07    0.7×0.5 = 0.35    0.7×0.4 = 0.28    0.7
Total    0.1               0.5               0.4               1
Note:
If two variables X and Y are independent, then:
1) E(XY) = E(X) · E(Y)
2) Var(X + Y) = Var X + Var Y
3) Cov(X, Y) = 0
• Covariance and correlation coefficient
The covariance of X and Y is Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E(XY) − E(X) E(Y).
The Pearson correlation coefficient of X and Y is r(X, Y) = Cov(X, Y) / (σ_X · σ_Y).
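For the joint table in the example above, the covariance and correlation can be computed directly from the joint probabilities; since X and Y were constructed as independent, both come out as (numerically) zero. A minimal sketch, with the dictionary layout chosen purely for illustration:

```python
import math

# Joint pdf h(x, y) from the table above (products of the marginals)
joint = {(1, 2): 0.03, (1, 3): 0.15, (1, 4): 0.12,
         (2, 2): 0.07, (2, 3): 0.35, (2, 4): 0.28}

ex  = sum(x * p for (x, y), p in joint.items())       # E(X)
ey  = sum(y * p for (x, y), p in joint.items())       # E(Y)
exy = sum(x * y * p for (x, y), p in joint.items())   # E(XY)
ex2 = sum(x * x * p for (x, y), p in joint.items())   # E(X^2)
ey2 = sum(y * y * p for (x, y), p in joint.items())   # E(Y^2)

cov = exy - ex * ey                      # Cov(X, Y) = E(XY) - E(X)E(Y)
sx = math.sqrt(ex2 - ex ** 2)            # sigma_X
sy = math.sqrt(ey2 - ey ** 2)            # sigma_Y
r = cov / (sx * sy)                      # Pearson correlation coefficient
print(cov, r)                            # both ≈ 0, since X and Y are independent
```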
Exercises
1. What is the probability that a packet with broken biscuits found by the checker was packed by Aileen?
2. Calculate the expected value, the variance and the standard deviation for the following pdf:
xi       -5    -4    1     2
f(xi)    1/4   1/8   1/2   1/8
3. A fair coin is tossed until either a head appears or 5 tails have been obtained. Let X be the number of tosses. Compute E(X), Var(X) and σ_X.
4. Consider the two pdfs of X and Y below:
X        1     2     3     4
P(X=x)   1/4   1/4   1/4   1/4
Y        0     1     2
P(Y=y)   1/4   1/2   1/4
Binomial experiment: an experiment with a fixed number of independent trials. Each trial can
only have two outcomes, or outcomes which can be reduced to two outcomes. The probability of
each outcome must remain constant from trial to trial.
Binomial distribution: the outcomes of a binomial experiment with their corresponding
probabilities.
Consider an experiment with n independent trials and only two possible outcomes on each trial. We call one of the outcomes a success, occurring with probability P(s) = p; the other outcome is called a failure, occurring with probability P(f) = q = 1 − p.
The probability of getting exactly k successes in n trials is
Pk = B(k; n, p) = C(n, k) p^k q^(n−k),  k = 0, 1, 2, ..., n,
where C(n, k) = n! / [k!(n − k)!] is the binomial coefficient. The pdf of X is therefore
f(k) = P(X = k) = C(n, k) p^k q^(n−k),  k = 0, 1, 2, ..., n.
Example:
The probability that a person supports party A is 0.6. Find the probability that, in a randomly selected sample of 8 voters, there are:
a) exactly 3 who support party A;
b) more than 5 who support party A.
Solution
X ~ Bin(8, 0.6), so
f(k) = P(X = k) = C(8, k)(0.6)^k (0.4)^(8−k),  k = 0, 1, 2, ..., 8.
Thus
a) k = 3:
P(X = 3) = C(8, 3)(0.6)³(0.4)⁵ = 0.124
b) P(X > 5) = P(X = 6) + P(X = 7) + P(X = 8)
= C(8, 6)(0.6)⁶(0.4)² + C(8, 7)(0.6)⁷(0.4)¹ + C(8, 8)(0.6)⁸(0.4)⁰
= 0.315
For a binomial r.v., E(X) = np and Var X = npq. For example, if X ~ Bin(7, 0.4):
a) E(X) = np = 7 × 0.4 = 2.8
b) Var X = npq = 7 × 0.4 × 0.6 = 1.68
σ_X = √(Var X) = √1.68 = 1.2961
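Binomial probabilities such as these are easy to verify with Python's math.comb; the short sketch below recomputes the party A example and the mean and variance for Bin(7, 0.4) (function and variable names are illustrative).

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Party A example: X ~ Bin(8, 0.6)
n, p = 8, 0.6
print(binom_pmf(3, n, p))                          # ≈ 0.124, exactly 3 supporters
print(sum(binom_pmf(k, n, p) for k in (6, 7, 8)))  # ≈ 0.315, P(X > 5)

# Mean and variance for X ~ Bin(7, 0.4)
n, p = 7, 0.4
print(n * p, n * p * (1 - p))                      # E(X) = 2.8, Var X = 1.68
```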
Example:
If X ~ Bin(10, 0.45), find the mode of the probability distribution of X.
Solution
Here X has pdf f(k) = P(X = k) = C(n, k) p^k q^(n−k), k = 0, 1, 2, ..., n, so that
P(X = k) = C(10, k)(0.45)^k (0.55)^(10−k),  k = 0, 1, 2, ..., 10.
But E(X) = np = 10 × 0.45 = 4.5, so the mode should lie near 4 or 5.
Now P(X = 3) = C(10, 3)(0.45)³(0.55)⁷ = 0.1664
P(X = 4) = C(10, 4)(0.45)⁴(0.55)⁶ = 0.2383
P(X = 5) = C(10, 5)(0.45)⁵(0.55)⁵ = 0.2340
P(X = 6) = C(10, 6)(0.45)⁶(0.55)⁴ = 0.1595
The value of X with the highest probability is 4; therefore the mode of X is 4.
• Poisson distribution
Poisson distribution: a probability distribution used to model the number of occurrences of an event in a fixed interval of time (or space) when occurrences happen independently at a constant average rate; it is appropriate when the number of opportunities is large and the probability of occurrence on each opportunity is small.
Let X be a discrete r.v. It is said to follow the Poisson distribution with parameter λ > 0, written X ~ P₀(λ), if its pdf is of the form
P(X = x) = e^(−λ) λ^x / x!,  x = 0, 1, 2, ...
For the Poisson distribution, E(X) = Var X = λ.
Example: If X ~ P₀(3.5), find:
a) P(X = 4)
b) P(X ≤ 2)
c) P(X > 1)
Solution
P(X = x) = e^(−3.5) (3.5)^x / x!,  x = 0, 1, 2, ...
a) P(X = 4) = e^(−3.5) (3.5)⁴ / 4! = 0.1888...
b) P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2)
= e^(−3.5) (3.5)⁰/0! + e^(−3.5) (3.5)¹/1! + e^(−3.5) (3.5)²/2!
= e^(−3.5) [1 + 3.5 + (3.5)²/2!] = 0.3208
c) P(X > 1) = 1 − P(X ≤ 1)
= 1 − [P(X = 0) + P(X = 1)]
= 1 − (e^(−3.5) + e^(−3.5)(3.5)) = 0.8641
Example: If X ~ P₀(9), find:
a) E(X)
b) P(X < 4)
Solution
a) E(X) = λ = 9
b) P(X < 4) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)
= e^(−9) + e^(−9)·9 + e^(−9)·9²/2! + e^(−9)·9³/3! = 0.02122...
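These Poisson probabilities can be reproduced with a few lines of Python; this is a minimal sketch (the helper name poisson_pmf is illustrative).

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) for X ~ Po(lam)."""
    return exp(-lam) * lam**x / factorial(x)

lam = 3.5
print(poisson_pmf(4, lam))                              # ≈ 0.1888
print(sum(poisson_pmf(x, lam) for x in range(3)))       # P(X <= 2) ≈ 0.3208
print(1 - sum(poisson_pmf(x, lam) for x in range(2)))   # P(X > 1)  ≈ 0.8641

lam = 9
print(sum(poisson_pmf(x, lam) for x in range(4)))       # P(X < 4)  ≈ 0.0212
```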
Uses of Poisson distribution
There are two main practical uses of Poisson distribution:
a) When we consider the distribution of random events
When an event is randomly scattered in time (or in space) with a mean number λ of occurrences in a given interval, and X is the random variable "the number of occurrences in a given interval", then X ~ P₀(λ).
Examples: - Car accidents on a particular stretch of road in one day
- Accidents in a factory per week
- Telephone calls made to a switchboard in a given minute
Exercise
The mean number of bacteria per millilitre of a liquid is known to be 4. Assuming that the number of bacteria follows a Poisson distribution, find the probability that in 1 ml of liquid there will be:
❖ no bacteria
❖ 4 bacteria
❖ fewer than 3 bacteria
Solution
Since the mean is 4, X ~ P₀(4) and
P(X = x) = e^(−4) 4^x / x!,  x = 0, 1, 2, ...
where X is the r.v. "the number of bacteria in 1 ml of liquid".
❖ P(X = 0) = P(no bacteria in 1 ml) = e^(−4) = 0.0183
❖ P(X = 4) = P(4 bacteria in 1 ml) = e^(−4) 4⁴ / 4! = 0.195
❖ P(X < 3) = P(fewer than 3 bacteria in 1 ml)
= P(X = 0) + P(X = 1) + P(X = 2)
= e^(−4) + e^(−4)·4 + e^(−4)·4²/2! = e^(−4)(1 + 4 + 8) = 0.238
b) When we approximate the binomial distribution.
Let X be an r.v. such that X ~ Bin(n, p). Then X can be approximated by a Poisson distribution with parameter λ = np if n is large (> 50) and p is small (< 0.1).
The approximation gets better as n → ∞ and p → 0.
Example:
A factory packs bolts in boxes of 500. The probability that a bolt is defective is 0.002.
Find the probability that a box contains 2 defective bolts.
Solution
Exact (binomial) calculation:
P(X = 2) = C(500, 2)(0.002)²(0.998)⁴⁹⁸ = 0.184
Since n = 500 is large and p = 0.002 is small, we may instead use the Poisson approximation with
λ = np = 500 × 0.002 = 1,
i.e. X ~ P₀(1) with pdf p(X = x) = e^(−1) 1^x / x!,  x = 0, 1, 2, ...
We require:
P(X = 2) = e^(−1) 1² / 2! = 0.184
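The quality of this approximation is easy to check by computing both probabilities; a short sketch:

```python
from math import comb, exp, factorial

n, p = 500, 0.002
lam = n * p   # Poisson parameter lambda = np = 1

# Exact binomial probability of exactly 2 defective bolts in a box of 500
exact = comb(n, 2) * p**2 * (1 - p)**(n - 2)

# Poisson approximation P(X = 2) = e^(-lam) lam^2 / 2!
approx = exp(-lam) * lam**2 / factorial(2)

print(exact, approx)   # both ≈ 0.184, agreeing to three decimal places
```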
Mode
Example: If X ~ P₀(1.6), find the mode of X.
x = 0:  P(X = 0) = e^(−1.6) = 0.2019
x = 1:  P(X = 1) = e^(−1.6)(1.6) = 0.3230
x = 2:  P(X = 2) = e^(−1.6)(1.6)²/2! = 0.2584
x = 3:  P(X = 3) = e^(−1.6)(1.6)³/3! = 0.1378
The mode is 1.
In general, if λ is not an integer, the mode is the integer m satisfying λ − 1 < m < λ, i.e. the mode is the integer part of λ.
Continuous random variables
(i) Introduction
For a continuous r.v. X with pdf y = f(x), probabilities are represented by areas under the curve: the area under y = f(x) between x = a and x = b gives P(a ≤ X ≤ b), and the total area under the curve is 1.
The expectation, variance and standard deviation are defined by integrals over all x:
E(X) = ∫ x f(x) dx,  Var(X) = ∫ x² f(x) dx − μ²,  σ_X = √(Var(X)).
Example: The continuous random variable X has pdf
f(x) = (3/4)(1 + x²),  0 ≤ x ≤ 1
f(x) = 0, otherwise.
Find the expectation, the variance and the standard deviation of X.
Solution
E(X) = ∫ x f(x) dx = (3/4) ∫₀¹ x(1 + x²) dx = (3/4) ∫₀¹ (x + x³) dx = (3/4)[x²/2 + x⁴/4]₀¹ = (3/4)(1/2 + 1/4) = (3/4)(3/4) = 9/16 = 0.5625
E(X²) = ∫ x² f(x) dx = (3/4) ∫₀¹ (x² + x⁴) dx = (3/4)[x³/3 + x⁵/5]₀¹ = (3/4)(1/3 + 1/5) = (3/4)(8/15) = 2/5 = 0.4
Var(X) = ∫ x² f(x) dx − μ² = 0.4 − (0.5625)² = 0.0836
σ_X = √(Var X) = 0.289
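The integrals in this example can also be checked numerically, for instance with a simple midpoint (Riemann) sum over the support [0, 1]; the sketch below is only a rough numerical check, not part of the analytic solution.

```python
import math

def f(x):
    """pdf f(x) = (3/4)(1 + x^2) on [0, 1]."""
    return 0.75 * (1 + x * x)

# Midpoint rule on a fine grid over the support [0, 1]
n = 100_000
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]

total = sum(f(x) for x in xs) * h           # ≈ 1, so f is a valid pdf
mean  = sum(x * f(x) for x in xs) * h       # E(X)   ≈ 0.5625
ex2   = sum(x * x * f(x) for x in xs) * h   # E(X^2) ≈ 0.4
var = ex2 - mean ** 2                       # ≈ 0.0836
print(total, mean, var, math.sqrt(var))     # sigma_X ≈ 0.289
```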
Solution
The pdf is f(x) = (3/80)(2 + x)(4 − x) for 0 ≤ x ≤ 4; its graph rises to a maximum of about 0.3 and falls to zero at x = 4.
The mode is the value of x at which f(x) is greatest. Differentiating, f′(x) = (3/80)(2 − 2x) = 0 gives x = 1, so
Mode = 1
ii) THE NORMAL DISTRIBUTION
The normal distribution is important in statistics for several main reasons:
1. Numerous phenomena measured on continuous scales have been shown to follow (or can be approximated by) the normal distribution.
2. We can use the normal distribution to approximate various discrete probability distributions, such as the binomial and the Poisson.
3. It provides the basis for statistical process control.
By standardizing a normally distributed random variable, we need only one table. The transformation formula gives the standard normal score
Z = (X − μ_X) / σ_X
𝑃 (𝜇 − 𝜎 ≤ 𝑋 ≤ 𝜇 + 𝜎) = 0.68
𝑃 (𝜇 − 2𝜎 ≤ 𝑋 ≤ 𝜇 + 2𝜎) = 0.95
𝑃 (𝜇 − 3𝜎 ≤ 𝑋 ≤ 𝜇 + 3𝜎) = 0.997
∫ f(x) dx = 1, the integral being taken over −∞ < x < +∞.
Properties
1. 𝑃[𝑎 < 𝑋 < 𝑏] = 𝑃[𝑋 < 𝑏] − 𝑃[𝑋 < 𝑎]
2. 𝑃 [𝑋 > 𝑎] = 1 − 𝑃 [𝑋 < 𝑎]
3. For the standard normal variable Z (which is symmetric about 0): P[Z < −a] = 1 − P[Z < a]
4. P(a < X < b | X > c) = P[(a < X < b) ∩ (X > c)] / P[X > c]
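Instead of reading a printed table, the standard normal cdf Φ(z) can be evaluated with the error function, and any normal probability obtained via the standardization Z = (X − μ)/σ. The sketch below uses made-up values μ = 10 and σ = 3 purely for illustration.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf: P(Z < z)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Illustrative values only: X ~ N(mu, sigma^2) with mu = 10, sigma = 3
mu, sigma = 10, 3
z = (7 - mu) / sigma           # standardize: Z = (X - mu)/sigma
print(phi(z))                  # P(X < 7) = P(Z < -1) ≈ 0.1587

# Empirical rule check: P(mu - sigma < X < mu + sigma) ≈ 0.68
print(phi(1) - phi(-1))        # ≈ 0.6827
```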
Examples:
1. X ~ N(4, 2). Calculate P[X < 2].
Solution:
EXERCISES
1. Consider a variable Z that follows a standard normal distribution.
Calculate:
a. 𝑃 [𝑍 < 0] b. 𝑃 [0 < 𝑍 < 2] c. 𝑃[𝑍 > 2.21] d. 𝑃 [𝑍 > 2.73/𝑍 > 2] e. 𝑃 [𝑍 < 2.04]
f. 𝑃 [|𝑍| < 2]
2. Consider a variable Z that follows a standard normal distribution. Determine z ∈ ℝ in each of the following cases:
a. 𝑃 [|𝑍| < 𝑧] = 0.95 b. 𝑃 [|𝑍| < 𝑧] = 0.90 c. 𝑃 [|𝑍| < 𝑧] = 0.8238
d. 𝑃 [𝑧 < 𝑍 < 1] = 0.6826 e. 𝑃 [0 < 𝑍 < 𝑧] = 0.4878
3. The number of cars that pass through a car wash between 16:00 and 17:00 has the probability distribution given in the table below:
X       4      5      6     7     8     9
P(x)    1/12   1/12   1/4   1/4   1/6   1/6
The designation simple indicates that there is only one explanatory variable x to predict the response y, and linear means that the model (3.1) is linear in β₀ and β₁. For example, a model such as yᵢ = β₀ + β₁xᵢ² + εᵢ is linear in β₀ and β₁, whereas the model yᵢ = β₀ + e^(β₁xᵢ) + εᵢ is not.
2) Var(εᵢ) = σ² for all i. This assumption asserts that the variance of εᵢ (or yᵢ) does not depend on the value of xᵢ. This assumption is also known as the assumption of homoscedasticity, homogeneous variance or constant variance.
3) cov(εᵢ, εⱼ) = 0 for i ≠ j, or, equivalently, cov(yᵢ, yⱼ) = 0.
Estimation of β₀ and β₁
The Least Squares Method
Using the random sample of n observations yᵢ, i = 1, 2, ..., n and the fixed constants xᵢ, i = 1, 2, ..., n, we seek estimators β̂₀ and β̂₁ that minimize the sum of squares of the deviations dᵢ. In fact, for a given value xᵢ, i = 1, 2, ..., n, there will be a difference dᵢ = yᵢ − β₀ − β₁xᵢ between the observed value yᵢ and the value given by the model, so we minimize
L = Σ dᵢ² = Σ (yᵢ − β₀ − β₁xᵢ)²,  the sums running from i = 1 to n.
Setting the partial derivatives of L with respect to β₀ and β₁ equal to zero gives
∂L/∂β₀ = −2 Σ (yᵢ − β̂₀ − β̂₁xᵢ) = 0
∂L/∂β₁ = −2 Σ (yᵢ − β̂₀ − β̂₁xᵢ) xᵢ = 0
(sums over i = 1, ..., n).
Simplifying these equations leads to the following least squares normal equations:
Σ yᵢ = n β̂₀ + β̂₁ Σ xᵢ
Σ xᵢyᵢ = β̂₀ Σ xᵢ + β̂₁ Σ xᵢ²
The solution to the normal equations results in the following least squares estimators β̂₀ and β̂₁:
β̂₁ = [Σ xᵢyᵢ − (Σ xᵢ)(Σ yᵢ)/n] / [Σ xᵢ² − (Σ xᵢ)²/n]
β̂₀ = (Σ yᵢ)/n − β̂₁ (Σ xᵢ)/n
or, equivalently,
β̂₁ = [Σ xᵢyᵢ − n x̄ ȳ] / [Σ xᵢ² − n (x̄)²] = Σ (xᵢ − x̄)(yᵢ − ȳ) / Σ (xᵢ − x̄)²
β̂₀ = ȳ − β̂₁ x̄
where x̄ = (Σ xᵢ)/n, ȳ = (Σ yᵢ)/n, and all sums run from i = 1 to n.
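These formulas translate directly into code. The sketch below fits β̂₀ and β̂₁ from raw (x, y) observations using the centred form of the estimator; the small data set is made up purely for illustration.

```python
def fit_least_squares(x, y):
    """Return (b0, b1) for the simple linear regression y = b0 + b1*x."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))  # sum (x - xbar)(y - ybar)
    sxx = sum((xi - xbar) ** 2 for xi in x)                       # sum (x - xbar)^2
    b1 = sxy / sxx           # slope estimate
    b0 = ybar - b1 * xbar    # intercept estimate
    return b0, b1

# Illustrative data only (not the household data of the example below)
x = [1, 2, 3, 4, 5]
y = [2.1, 2.9, 3.8, 5.2, 5.9]
print(fit_least_squares(x, y))
```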
Example
The table below gives the annual consumption (y) of 10 households, each selected from a group of households with a fixed personal income (x). Both income and consumption are measured in units of $10,000. Fit the least squares regression line of y on x.
Table: Annual Consumption of 10 Households
The summary statistics for the data are:
Σ yᵢ = 65,  Σ xᵢ = 75,  Σ xᵢ² = 615,  Σ xᵢyᵢ = 530,  Σ yᵢ² = 459.4  (sums over i = 1, ..., 10).
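As a quick numerical check, the least squares estimators can be evaluated directly from these totals; the sketch below simply substitutes the sums above into the formulas for β̂₁ and β̂₀.

```python
n = 10
sum_y, sum_x = 65, 75
sum_x2, sum_xy = 615, 530

# b1 = [sum(xy) - sum(x)sum(y)/n] / [sum(x^2) - (sum(x))^2/n]
b1 = (sum_xy - sum_x * sum_y / n) / (sum_x2 - sum_x ** 2 / n)
# b0 = ybar - b1 * xbar
b0 = sum_y / n - b1 * sum_x / n

print(b1, b0)   # b1 ≈ 0.810, b0 ≈ 0.429
```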