Notes On Integral Transforms
Unnati Akhouri
Miranda House, Delhi University, New Delhi, 110007
September 6, 2016
Contents
1 Formula sheet
2 Recap of Fourier Series
  2.1 Introduction to Fourier Series
  2.4 Insightful examples
  2.5 Cute summary
3 Integral Transforms
  3.1 Laplace Transforms
  3.2 Properties of LT
  3.3 Cute examples
  3.4 Laplace Transform Table
Preface
The following notes were prepared in the Fall of 2016, when I was taking the Mathematical Physics V course (Semester V) at Miranda House, Delhi University. The course was taught by Dr Sonam, and I thank her for strengthening our foundations in the rigours of tensors and transforms. These notes are written from the point of view of how I worked my way through the course, and so they will take the reader on a journey through the difficulties, surprises and excitement that I encountered.
The notes are prepared from our lectures and the prescribed books, including Mathematical Methods for Physicists by Arfken, Weber and Harris, Higher Engineering Mathematics by B V Ramana and Higher Engineering Mathematics by B S Grewal. Some of the examples are also taken from notes and journals found on the web, which I shall reference as I use them. Finally, I may also mention interesting readings and anecdotes that I came across while preparing these notes. I will occasionally use some hand-made illustrations and phy-mics (physics comics) to highlight the results (these can be found independently on my website, www.fatalphysics.wordpress.com).
I thank my professors and friends (Supriya, Nikita Sonalika, Priya and Simran) for the discussions on some of the topics we found particularly intriguing.
It is my delight to share these notes, and I wish you, and myself, all the best, because a first-time experience is always something unique.
Unnati Akhouri
Fall, 2016.
1 Formula sheet
• Beta Function: β(m, n) = ∫_0^1 x^(m−1) (1 − x)^(n−1) dx = Γ(m)Γ(n) / Γ(m + n)
• Γ(1/2) = √π, Γ(−1) = ∞, Γ(0) = ∞, Γ(1) = 1.
• Gamma Function: Γ(n) = ∫_0^∞ e^(−x) x^(n−1) dx
• Γ(n + 1) = n!
• Binomial Expansion
  (a + b)^n = nC0 a^n b^0 + nC1 a^(n−1) b + nC2 a^(n−2) b^2 + ... + nCn a^0 b^n.
  (1 + x)^n = 1 + nx + [n(n−1)/2!] x^2 + ...   (here a = 1, b = x)
  – If (1 + x)^(−1), then a = 1, b = x and n = −1:
    ⇒ 1 − x + x^2 − x^3 + ... (infinite expansion)
    ⇒ Σ_(n=0)^∞ (−1)^n x^n, valid for |x| < 1 (converges).
  – If (1 + x)^(1/2), then a = 1, b = x and n = 1/2.
• Taylor Series for a single-variable function (physically, the concept is used for perturbation about a point.)
  Expansion of f(x) about x_0:
  f(x) = f(x_0) + (x − x_0) f'(x_0)/1! + (x − x_0)^2 f''(x_0)/2! + (x − x_0)^3 f'''(x_0)/3! + ...
  ⇒ Σ_(n=0)^∞ (x − x_0)^n f^(n)(x_0) / n!
• Taylor Series for a two-variable function (physically, the concept is used for perturbation about a point.)
  Expansion of f(x, y) about (x_0, y_0):
  f(x, y) = f(x_0, y_0) + [(x − x_0) f_x(x_0, y_0) + (y − y_0) f_y(x_0, y_0)] + (1/2!)[(x − x_0)^2 f_xx(x_0, y_0) + 2(x − x_0)(y − y_0) f_xy(x_0, y_0) + (y − y_0)^2 f_yy(x_0, y_0)] + ...
  where subscripts denote partial derivatives.
  Let ∆ = h ∂/∂x + k ∂/∂y, where h = (x − x_0) and k = (y − y_0).
  Then a compact version of the above expansion can be written as
  f(x, y) = f(x_0, y_0) + ∆f(x_0, y_0) + (∆^2/2!) f(x_0, y_0) + (∆^3/3!) f(x_0, y_0) + ...
  ⇒ Σ_(n=0)^∞ (∆^n / n!) f(x_0, y_0)
• Error Function
  erf(x) = (2/√π) ∫_0^x e^(−t^2) dt
  erf(−x) = −erf(x)
  erfc(x) = (2/√π) ∫_x^∞ e^(−t^2) dt
  Thus erf(x) + erfc(x) = 1.
• Dirac Delta Function
  ∫_(−∞)^∞ f(x) δ(x − a) dx = f(a)
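The identities in this formula sheet are easy to spot-check numerically. Below is a minimal sketch of my own (not from the lectures) using only the Python standard library (math.gamma, math.erf and math.erfc); the beta integral is approximated by a crude midpoint Riemann sum, just to watch the Γ(m)Γ(n)/Γ(m + n) identity emerge.

    import math

    # Gamma function values from the formula sheet
    print(math.gamma(0.5), math.sqrt(math.pi))      # Γ(1/2) = √π
    print(math.gamma(5), math.factorial(4))         # Γ(n + 1) = n!, here n = 4

    # Beta function: midpoint Riemann sum of ∫_0^1 x^(m-1) (1-x)^(n-1) dx
    # compared against Γ(m)Γ(n)/Γ(m+n), here for m = 3, n = 2.5
    m, n = 3.0, 2.5
    N = 200000
    dx = 1.0 / N
    riemann = sum(((k + 0.5) * dx) ** (m - 1) * (1 - (k + 0.5) * dx) ** (n - 1) for k in range(N)) * dx
    print(riemann, math.gamma(m) * math.gamma(n) / math.gamma(m + n))

    # Error function: erf(x) + erfc(x) = 1
    print(math.erf(0.7) + math.erfc(0.7))           # 1.0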
2 Recap of Fourier Series
2.1 Introduction to Fourier Series
In mathematics, physics, engineering and many other fields, we face a lot of mind-bending functions. Whether we are interpreting data or signals, or simply looking for a way to break a scary (ie, not well behaved) function down into a combination of cute (ie, well behaved) known functions such as the periodic functions sine and cosine, the Fourier analysis introduced by Jean-Baptiste Joseph Fourier comes across as an ingenious tool. Though he originally developed the method to better understand the flow of heat in his work Théorie Analytique de la Chaleur, these series have had a deep, many-fold influence on various other fields of science.
• Even function and Odd function. Let f : [−L, L] → R. Then f(x) is called even if f(−x) = f(x) for all x ∈ [−L, L], and f(x) is called odd if f(−x) = −f(x) for all x ∈ [−L, L].
Note: the graph of an even function is symmetric with respect to the y-axis, i.e., the graph is invariant under reflection in the y-axis. Similarly, the graph of an odd function is symmetric with respect to the origin. For a quick example, note that sin(x) is an odd function and cos(x) is an even function! (So what about a constant function f(x) = a_0?)
With these in mind we can now proceed to the formal Fourier series. Remember that this is primarily a set of notes on integral transforms, so I am going to run through the Fourier series bit assuming that you are already familiar with some basic integrals and with the facts about odd and even functions.
A formal derivation can be found in any standard textbook, and hence I will skip it and present to you the final results.
An infinite series of the form

  (1/2) a_0 + Σ_(n=1)^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

where

  a_n = (1/L) ∫_(−L)^L f(x) cos(nπx/L) dx,   n = 0, 1, 2, 3, ...   (3)

and

  b_n = (1/L) ∫_(−L)^L f(x) sin(nπx/L) dx,   n = 1, 2, 3, ...   (4)

is called the Fourier series of f(x).
As a handy trick, remember: odd fn * odd fn = even fn, and odd fn * even fn = odd fn.^a Thus, it follows from this^b that:
– if f(x) is an odd function, then its FS will only comprise sine terms (ie, the f(x) times cosine integral vanishes);
– if f(x) is an even function, then its FS will only comprise cosine terms (ie, the f(x) times sine integral vanishes; remember, even times odd = odd?).
^a I tend to use fn as a shorthand notation for function. Also FS for Fourier series and FT for Fourier transform.
^b Obviously! I mean, any approximation should agree with the function at least at these basic points! If your approximation does not agree with your function at least at the nodes, then stop whatever you are doing and start over!
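As a quick numerical illustration of this trick (a sketch of my own, assuming numpy is available): for the odd function f(x) = x on [−L, L], the cosine coefficients a_n from equation (3) come out as (numerically) zero, while the sine coefficients b_n from equation (4) do not. The integrals are approximated with a simple midpoint sum.

    import numpy as np

    L = np.pi
    N = 200000
    dx = 2 * L / N
    x = (np.arange(N) + 0.5) * dx - L      # midpoints covering [-L, L]
    f = x                                  # an odd function on [-L, L]

    for n in range(1, 4):
        a_n = np.sum(f * np.cos(n * np.pi * x / L)) * dx / L   # equation (3)
        b_n = np.sum(f * np.sin(n * np.pi * x / L)) * dx / L   # equation (4)
        print(n, round(a_n, 6), round(b_n, 6))
    # a_n ≈ 0 for every n, as expected for an odd f(x); here b_n = 2(-1)^(n+1)/n.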
2.4 Insightful examples
The examples given below are taken from Arfken and Weber, pp. 949-951. I will also present some interesting problems after the solutions.
2.5 Cute summary
Many classical problems in physics involving oscillations and vibrations can be tackled intuitively with our known functions. But many times (for eg in quantum mechanics, electromagnetism etc) we have to deal with functions which are not simple and cannot be tackled intuitively. In such cases a perturbative or approximate theory turns out to be our only shot at understanding the phenomenon. Thus, even though the FS is just an approximation, since it gives an infinite series we see that, within acceptable limits of error, just by using the first few summation terms we can get really close to our original function. Shown below are some examples where, just by adding the first 3 terms of the FS, you can already see that it starts mimicking the original function.^4
Another place where FS analysis is helpful is when you have some strange-looking data and you want to understand it piece-wise, even though you do not know a function that would fit your data. All you have to do is find coefficients for your sine and cosine terms which, upon addition, resemble your data set. Back-substitute and voila, you have at least an approximate function for your data.^5 This requires computational analysis too, but I am sure you can appreciate how mind-bogglingly cool this is. Another interesting application in engineering is that of signal detection. Say you receive some alien signal. By breaking the signal down as a FS you can find just how much weight each of the sine and cosine terms has, and thus the weight of each frequency used in that signal. For a more cute insight I recommend these videos. Give them a try:
https://www.youtube.com/watch?v=mEN7DTdHbAU
https://www.youtube.com/watch?v=kP02nBNtjrU
Figure 1: Left: the function f(x) = 1 on 0 ≤ x < L. Right: the first three partial sums of its Fourier sine series.
^4 WOAH!! Even though these are simple and easy-looking functions, still, take a moment and think about just how profound this is. Why does nature even behave this way? How am I able to approximate sharp-edged things using curvy and squiggly ones? Because magic.
^5 Approximation gives approximation. In complete darkness, a glimmer!
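If you want to reproduce a plot like Figure 1 yourself, here is a small sketch (my own addition, assuming numpy and matplotlib are available). For f(x) = 1 on 0 < x < L, the Fourier sine coefficients are b_n = 2(1 − cos nπ)/(nπ), ie 4/(nπ) for odd n and 0 for even n, and the first few partial sums already hug the flat line.

    import numpy as np
    import matplotlib.pyplot as plt

    L = 1.0
    x = np.linspace(0, L, 500)

    def partial_sum(x, terms):
        # Sum of the first `terms` sine-series terms of f(x) = 1 on (0, L)
        s = np.zeros_like(x)
        for n in range(1, terms + 1):
            b_n = 2.0 * (1 - np.cos(n * np.pi)) / (n * np.pi)   # 4/(nπ) for odd n, 0 for even n
            s += b_n * np.sin(n * np.pi * x / L)
        return s

    for terms in (1, 3, 5):
        plt.plot(x, partial_sum(x, terms), label=f"{terms} terms")
    plt.plot(x, np.ones_like(x), "k--", label="f(x) = 1")
    plt.legend()
    plt.show()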
3 Integral Transforms
An integral transform is an operator, ie a map from functions to functions, that takes the form

  I(f)(u) = ∫_(−∞)^(+∞) K(x, u) f(x) dx   (7)

where the function of two variables K is called the kernel of the transform. All integral transforms are linear, ie

  I(a f + b g)(u) = a I(f)(u) + b I(g)(u)   for constants a and b.
In general, an integral transform is useful in the sense that it can translate a complicated differential equation into a simpler form. Suppose we want to solve a differential equation with an unknown function f. We first apply the transform to the differential equation to turn it into an equation we can solve easily or analytically: often an algebraic equation for the transform F of f. We then solve this equation for F and finally apply the inverse transform to recover f.
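As a tiny illustration of this transform-solve-invert strategy (a sketch of my own, not from the lecture notes, assuming sympy is available): for y'' + y = 0 with y(0) = 0, y'(0) = 1, transforming term by term gives the algebraic equation s^2 Y − 1 + Y = 0, and inverting Y(s) recovers y(t) = sin t.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    Y = sp.symbols('Y')

    # Algebraic equation obtained by transforming y'' + y = 0 with y(0) = 0, y'(0) = 1
    Ysol = sp.solve(sp.Eq(s**2 * Y - 1 + Y, 0), Y)[0]     # Y(s) = 1/(s**2 + 1)
    y = sp.inverse_laplace_transform(Ysol, s, t)          # sin(t); sympy may attach a Heaviside(t) factor
    print(Ysol, y)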
3.1 Laplace Transforms
The Laplace transform of a function f(t), defined for t ≥ 0, is

  L{f(t)} = F(s) = ∫_0^∞ e^(−st) f(t) dt   (8)

where L is known as the Laplace transform operator, the original function f(t) is called the determining function, and the transformed function F(s) is called the generating function.
*Sufficient conditions for the existence of the LT of f(t)
The LT of f(t) exists when the improper integral on the RHS converges, ie takes a finite value. Sufficient conditions are:
1. f(t) is piece-wise continuous, ie f(t) is continuous in every sub-interval and has finite limits at the end points of these sub-intervals.
2. f(t) is of exponential order γ, ie for convergence there must exist constants M and γ such that |f(t)| < M e^(γt). (For example, f(t) = e^(t^2) is not of exponential order, and its LT does not exist.)
3.2 Properties of LT
1. With the application of the LT, a particular solution of a differential equation can be obtained directly, without the necessity of first determining the general solution and then obtaining the particular solution by substituting the initial conditions.
2. The LT solves non-homogeneous DEs without having to solve the homogeneous DE first.
3. The LT is applicable not only to continuous functions but also to piece-wise continuous functions, complicated periodic functions, step functions and impulse functions.
4. LT of a linear combination is the linear combination of the LTs.
5. Change of scale: when the argument t of f is scaled by a constant a, then s is replaced by s/a in F(s) and the whole function is multiplied by 1/a:

  L{f(at)} = (1/a) F(s/a)   (9)
6. First shift theorem: multiplication of f(t) by e^(at) amounts to a replacement of s by (s − a) in F(s), ie L{e^(at) f(t)} = F(s − a).
7. LT of a derivative: L{f'(t)} = sF(s) − f(0).
#Proof
By the definition of the LT, L{f'(t)} = ∫_0^∞ e^(−st) f'(t) dt.
Integrating by parts,
  = e^(−st) f(t) |_0^∞ − ∫_0^∞ (−s) e^(−st) f(t) dt
  ⇒ −f(0) + s ∫_0^∞ e^(−st) f(t) dt = −f(0) + sF(s).
8. LT of an integral: L{∫_0^t f(u) du} = F(s)/s.
#Proof
We want to find L{∫_0^t f(u) du}.
Let ∫_0^t f(u) du = g(t); then g'(t) = f(t) and g(0) = 0, as the integration limits would both become 0.
Taking the transform on both sides and using property 7,
  L{g'(t)} = s L{g(t)} − g(0) ⇒ F(s) = s L{g(t)} ⇒ L{∫_0^t f(u) du} = F(s)/s.
9. LT of t^n f(t) is

  L{t^n f(t)} = (−1)^n F^(n)(s)   (15)

where F^(n)(s) denotes the nth derivative of F(s).
#Proof
We want to find L{t f(t)}.
We know by definition,
  ∫_0^∞ e^(−st) f(t) dt = F(s)
Differentiating both sides with respect to s and using the Leibniz rule of differentiation under the integral sign,
  ∫_0^∞ [d(e^(−st))/ds] f(t) dt = F'(s)   (16)
which gives
  ∫_0^∞ e^(−st) (−t f(t)) dt = F'(s), ie L{t f(t)} = −F'(s).
Similarly, one can show the result for the nth iteration by repeated differentiation.
10. LT of f(t)/t is

  L{f(t)/t} = ∫_s^∞ F(u) du

#Proof
We want to find L{f(t)/t}.
We know by definition,
  ∫_0^∞ e^(−st) f(t) dt = F(s)
Integrating both sides with respect to s from s to ∞ (writing the integration variable as u),
  ∫_0^∞ ∫_s^∞ e^(−ut) f(t) du dt = ∫_s^∞ F(u) du   (17)
The inner u-integral gives e^(−st)/t, so the left-hand side is ∫_0^∞ e^(−st) [f(t)/t] dt = L{f(t)/t}, which proves the result.
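Before moving on to examples, here is a quick symbolic spot-check of properties 5, 6 and 9 (a sketch of my own, assuming sympy and its laplace_transform function are available).

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    # Property 5 (change of scale), for f(t) = sin t and a = 3: L{f(3t)} = (1/3) F(s/3)
    F = sp.laplace_transform(sp.sin(t), t, s, noconds=True)            # 1/(s**2 + 1)
    F3 = sp.laplace_transform(sp.sin(3*t), t, s, noconds=True)         # 3/(s**2 + 9)
    print(sp.simplify(F3 - sp.Rational(1, 3) * F.subs(s, s / 3)))      # 0

    # Property 6 (first shift), for f(t) = t**2 and a = 2: L{e^(2t) t**2} = F(s - 2)
    G = sp.laplace_transform(t**2, t, s, noconds=True)                 # 2/s**3
    G2 = sp.laplace_transform(sp.exp(2*t) * t**2, t, s, noconds=True)  # 2/(s - 2)**3
    print(sp.simplify(G2 - G.subs(s, s - 2)))                          # 0

    # Property 9 (multiplication by t), for f(t) = sin t: L{t sin t} = -F'(s)
    print(sp.simplify(sp.laplace_transform(t * sp.sin(t), t, s, noconds=True) + sp.diff(F, s)))  # 0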
3.3 Cute examples
1. LT of cos √t
Since we do not know any typical form into which this fits, simply expand using the Maclaurin series (the special case of the Taylor series around the origin). We know the expansion of cosine as
  cos t = 1 − t^2/2! + t^4/4! − ... = Σ_(n=0)^∞ (−1)^n t^(2n) / (2n)!
So for √t we get the expansion
  cos √t = Σ_(n=0)^∞ (−1)^n t^n / (2n)!
Thus the LT is
  L{cos √t} = L{ Σ_(n=0)^∞ (−1)^n t^n / (2n)! } = Σ_(n=0)^∞ (−1)^n L{t^n} / (2n)!
  ⇒ Σ_(n=0)^∞ (−1)^n n! / [(2n)! s^(n+1)]
2. LT of sin t / t
We know L{sin t} = 1/(s^2 + 1); then, from property 10 of the LT,
  L{sin t / t} = ∫_s^∞ du/(u^2 + 1) = π/2 − tan^(−1)(s) = cot^(−1)(s) = tan^(−1)(1/s)
3. LT of t^(n−1) / (1 − e^(−t))
By TSE^6 of the denominator,
  f(t) = t^(n−1) (1 − e^(−t))^(−1) = t^(n−1) (1 + e^(−t) + e^(−2t) + ...) = Σ_(m=0)^∞ t^(n−1) e^(−mt)
Thus the LT becomes
  L{f(t)} = L{ Σ_(m=0)^∞ t^(n−1) e^(−mt) } = Σ_(m=0)^∞ (n−1)! / (s + m)^n
4. Show that ∫_(t=0)^∞ ∫_(u=0)^t e^(−t) (sin u / u) du dt = π/4.
On rearranging, I can write the above expression as
  ∫_(t=0)^∞ e^(−t) [ ∫_(u=0)^t (sin u / u) du ] dt
and notice that this is simply the Laplace transform of the function in square brackets, evaluated at s = 1 (by the definition of the LT).
We know the LT of the term in square brackets, using property 8 and example 2, is
  (1/s) tan^(−1)(1/s).
Putting s = 1 we get tan^(−1)(1) = π/4.
^6 Abbreviation for Taylor series expansion.
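These results are easy to sanity-check numerically. The sketch below (my own addition, assuming numpy and scipy are available) compares the series from example 1 with a direct numerical transform at s = 2, evaluates example 2 at s = 1, and evaluates the double integral of example 4.

    import math
    import numpy as np
    from scipy import integrate

    # Example 1 at s = 2: the series Σ (-1)^n n! / [(2n)! s^(n+1)] vs ∫_0^∞ e^(-st) cos(√t) dt
    s = 2.0
    series = sum((-1)**n * math.factorial(n) / (math.factorial(2 * n) * s**(n + 1)) for n in range(30))
    direct, _ = integrate.quad(lambda t: np.exp(-s * t) * np.cos(np.sqrt(t)), 0, np.inf)
    print(series, direct)                      # the two values agree

    # Example 2 at s = 1: L{sin t / t}(1) = tan^(-1)(1) = π/4
    val2, _ = integrate.quad(lambda t: np.exp(-t) * np.sinc(t / np.pi), 0, np.inf)   # sinc(t/π) = sin(t)/t
    print(val2, np.pi / 4)

    # Example 4: ∫_0^∞ e^(-t) [∫_0^t sin(u)/u du] dt = π/4
    inner = lambda t: integrate.quad(lambda u: np.sinc(u / np.pi), 0, t)[0]
    val4, _ = integrate.quad(lambda t: np.exp(-t) * inner(t), 0, np.inf)
    print(val4, np.pi / 4)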
3.4 Laplace Transform Table
Sno.  Determining function f(t)    Laplace transform L{f(t)} = F(s)
1.    1                            1/s   (exists only when s > 0)
2.    t^n                          n!/s^(n+1)
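If you want to extend this table yourself, sympy can generate further entries. The short sketch below (my own addition, assuming sympy is available) prints the transforms of a few standard determining functions.

    import sympy as sp

    t, s, a = sp.symbols('t s a', positive=True)

    # Print L{f(t)} for a few standard determining functions
    for f in (sp.Integer(1), t**3, sp.exp(a*t), sp.sin(a*t), sp.cos(a*t)):
        print(f, '->', sp.laplace_transform(f, t, s, noconds=True))
    # expected (up to sympy's printing order): 1/s, 6/s**4, 1/(s - a), a/(s**2 + a**2), s/(s**2 + a**2)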
I haven't finished type-setting my notes yet. Keep checking back for more; I will keep updating this document about twice every week. See you soon!
Cheers!