Syllabus Laplace Transform
Ruben Hillewaere
Inspired by the syllabus of L. Zandarin and other works
ECAM
December 2024
Contents
1 Prerequisites
2 The Laplace Transform
Bibliography
Chapter 1
Prerequisites
Here are the basic notions you must master in order to tackle the UE Outils Mathématiques in 2BA:
• Laplace transform
• Matrices
• equations of conics, parabolas & hyperbolas + determining the translations by completing the square
Chapter 2
The Laplace Transform
2.1 Definitions
2.1.1 Causal functions and the Laplace transform
In this chapter, we will introduce the Laplace transform, a technique to model engineering
systems, and a powerful tool to solve constant coefficient linear differential equations. In
physics, events often start at a given time, which we take as the origin of time (t = 0),
and they last until some other point in time, which could be infinite - at least theoretically.
Therefore, we will consider real-valued functions f (t) of time that are equal to zero for
t < 0, and that might be discontinuous at t = 0. These functions are so-called causal or
one-sided functions, and we can think of them as input signals; cf. the course of Signals
and Systems in Bloc 3. The right-hand limit of such a causal function is denoted by f (0+),
the left-hand limit is obviously equal to zero.
The simplest causal function is the unit step function u(t), also called the Heaviside
function, which is defined by:
u(t) = 0 for t < 0,    u(t) = 1 for t ≥ 0
Figure 2.1: The unit step function u(t).
The unit step function, represented in Figure 2.1, enables us to turn any real function into
a causal function. For example, by multiplying a sine function with the unit step function,
we obtain the causal function sin t.u(t), as shown in Figure 2.2. In this chapter, functions
of time will always be considered to be causal, and if f (t) happens to be two-sided, then we
actually consider f (t).u(t), even though we might sometimes neglect to write the factor
u(t).
Figure 2.2: The two-sided function sin t and the causal function sin t · u(t).
Definition 2.1.1. Consider a real-valued causal function f (t). The (one-sided) Laplace
transform of f (t) is the complex function
L{f(t)} = ∫_0^{+∞} e^{-st} f(t) dt        (2.1)
When this integral is performed and the limits of t are substituted, the resulting expression
will only involve the complex variable s, which is independent of t. In other words, the
Laplace operator transforms the time domain function f (t) into a function F (s) of the
complex variable s. To avoid confusion, we use lower case letters for causal functions of
t, and capital letters for complex functions of s. Note that certain authors use the letter
p instead of the letter s.
Remark: for a two-sided function f(t), we can also define the two-sided Laplace transform
L{f(t)} = ∫_{-∞}^{+∞} e^{-st} f(t) dt
We won't discuss this transform in this chapter.
Example 2.1.2. Let us determine the Laplace transform of the unit step function u(t):
L{u(t)} = ∫_0^{+∞} e^{-st} u(t) dt = ∫_0^{+∞} e^{-st} dt = [ −e^{-st}/s ]_0^{+∞}
To study the convergence of this integral, write s = σ + jω and split the complex exponential into two factors: e^{-st} = e^{-σt} · e^{-jωt}.

Figure 2.3: Polar representation ρe^{jϕ} of a complex number, with modulus ρ and argument ϕ.

Let's examine what happens to each of these factors when t varies from 0 to +∞: the factor e^{-jωt} has modulus 1 and simply rotates in the complex plane, while the real factor e^{-σt} decreases to zero if σ = Re(s) > 0 and grows without bound if Re(s) < 0. To conclude, when t varies from 0 to +∞, the function e^{-st} represents a rotating vector with a decreasing modulus if Re(s) > 0 and an increasing modulus if Re(s) < 0.
Therefore, to ensure the convergence of the Laplace transform of the unit step function
u(t), it is required that Re(s) > 0. So finally we can write:
L{u(t)} = 1/s    if Re(s) > 0        (2.2)
Example 2.1.3. Let us determine the Laplace transform of the ramp function f (t) =
t.u(t), represented in Figure 2.4.
Figure 2.4: The ramp function t · u(t).
We find
L{t·u(t)} = ∫_0^{+∞} e^{-st} t dt = [ −t e^{-st}/s ]_0^{+∞} − ∫_0^{+∞} e^{-st}/(−s) dt
using integration by parts. Again, if Re(s) > 0, the complex exponential function e^{-st} decreases to zero, and we know this decay occurs faster than t tends to infinity, so:
L{t·u(t)} = 0 + (1/s) ∫_0^{+∞} e^{-st} dt = [ −e^{-st}/s^2 ]_0^{+∞} = 1/s^2
Thus,
L{t·u(t)} = 1/s^2    if Re(s) > 0        (2.3)
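As an optional illustration (SymPy is not part of this syllabus, so the snippet below is only a sketch), both results can be reproduced by evaluating the defining integral of Equation 2.1 directly, restricting s to real values inside the ROC:

    # Evaluate the defining integral for u(t) and t.u(t),
    # taking s real and positive, i.e. inside the region of convergence Re(s) > 0.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s', positive=True)

    F_step = sp.integrate(sp.exp(-s*t) * 1, (t, 0, sp.oo))   # L{u(t)}
    F_ramp = sp.integrate(sp.exp(-s*t) * t, (t, 0, sp.oo))   # L{t.u(t)}

    print(F_step)   # 1/s       (Equation 2.2)
    print(F_ramp)   # s**(-2)   (Equation 2.3)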
Figure 2.5: The causal exponential function e^{at} · u(t).

Example 2.1.4. For the exponential function f(t) = e^{at} · u(t) (a ∈ R), a completely similar computation gives
L{e^{at}·u(t)} = ∫_0^{+∞} e^{-(s−a)t} dt = 1/(s − a)    if Re(s) > a        (2.4)
Example 2.1.5. If f(t) = e^{t^2} u(t), then the integral
∫_0^{+∞} e^{-st} e^{t^2} dt
does not converge for any value of s. Therefore, the Laplace transform of this function f(t) does not exist.
In the previous examples, we noticed that the Laplace transform F(s) only exists if the integral in Equation 2.1 converges, and this is only true for a certain range of values of s ∈ C, called the region of convergence or ROC. This ROC is represented by a shaded region in the complex s-plane, efficiently showing for which values of s the Laplace transform is valid. E.g., for the step function and the ramp function we have seen that Re(s) > 0 is required, so the ROC is the right half of the complex plane. However, for the exponential function f(t) = e^{at} u(t) we need Re(s) > a, as illustrated in Figure 2.6. In the final example, the function f(t) = e^{t^2} u(t) is increasing "too fast", which makes it impossible for the integral to converge, regardless of the value of s.
Figure 2.6: The region of convergence of the transforms of the (a) step, (b) ramp and (c)
exponential function with a < 0.
In complex analysis, one can investigate general criteria for the existence of the Laplace
transform of a causal function f (t), related to the so-called exponential order of f (t), but
that goes beyond the scope of this course. We will simply compute the Laplace transforms
of certain specific functions f (t), and determine for which complex values s the integral
converges, i.e. determine the ROC.
Recall that the improper integral ∫_a^{+∞} f(x) dx is the area under the graph of f from a to b if b → ∞, as shown in Figure 2.7. This integral converges if this area is finite, in which case the function f(x) obviously tends to zero when x → ∞.

Figure 2.7: The area under the graph of y = f(x), for x ≥ a.

Theorem 2.1.6. If F(s) exists, i.e. if the integral ∫_0^{+∞} e^{-st} f(t) dt converges, then the integrand tends to zero: lim_{t→+∞} e^{-st} f(t) = 0.
2.2 Properties of the Laplace transform
2.2.1 Linearity
The Laplace transform is a linear operator: if f(t) and g(t) have Laplace transforms F(s) and G(s), then for all constants a, b ∈ R,
L{a f(t) + b g(t)} = a F(s) + b G(s)
Proof. Since the integral is linear, if f(t) and g(t) have a Laplace transform, then so does a f(t) + b g(t), and its transform is a F(s) + b G(s).
Example 2.2.2. The Laplace transform of the general step function f(t) = k·u(t) is given by (k ∈ R):
L{k·u(t)} = k/s    if Re(s) > 0
Example 2.2.3. Consider an input signal f (t) = (5e −2t − 3e −t )u(t). Using Equation
2.4, we immediately find its Laplace transform is equal to
F(s) = 5/(s + 2) − 3/(s + 1)
To determine the region of convergence, we need to combine the criteria for each of the
individual terms. The Laplace transform of the first term exists if Re(s) > −2 and the
Laplace transform of the second term exists if Re(s) > −1, hence F (s) converges if
Re(s) > −1.
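As a quick optional check (a SymPy sketch, not part of the course material), laplace_transform also reports the half-plane of convergence:

    # Transform of (5 e^{-2t} - 3 e^{-t}) u(t); laplace_transform returns the result
    # together with the abscissa a such that the ROC is Re(s) > a.
    import sympy as sp

    t, s = sp.symbols('t s')
    f = 5*sp.exp(-2*t) - 3*sp.exp(-t)          # the factor u(t) is implicit

    F, a, cond = sp.laplace_transform(f, t, s)
    print(F)   # expected: 5/(s + 2) - 3/(s + 1)
    print(a)   # expected: -1, i.e. convergence for Re(s) > -1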
Example 2.2.4. From Euler's formula
e^{jϕ} = cos ϕ + j sin ϕ
we know that
cos ϕ = (e^{jϕ} + e^{-jϕ})/2
and
sin ϕ = (e^{jϕ} − e^{-jϕ})/(2j)
Since Equation 2.4 also holds for a ∈ C if Re(s) > Re(a), we immediately find
L{cos ωt} = (1/2) L{e^{jωt} + e^{-jωt}} = (1/2) ( 1/(s − jω) + 1/(s + jω) ) = s/(s^2 + ω^2)        (2.5)
and
L{sin ωt} = (1/(2j)) L{e^{jωt} − e^{-jωt}} = (1/(2j)) ( 1/(s − jω) − 1/(s + jω) ) = ω/(s^2 + ω^2)        (2.6)
The region of convergence of both transforms is given by Re(s) > Re(jω) and Re(s) > Re(−jω), so Re(s) > 0. Note that we omitted the factor u(t) in these computations for notation simplicity.
Example 2.2.5. Analogously, we can determine the Laplace transforms of the hyperbolic functions cosh ωt = (1/2)(e^{ωt} + e^{-ωt}) and sinh ωt = (1/2)(e^{ωt} − e^{-ωt}) by computing:
L{cosh ωt} = (1/2) L{e^{ωt} + e^{-ωt}} = (1/2) ( 1/(s − ω) + 1/(s + ω) ) = s/(s^2 − ω^2)        (2.7)
and
L{sinh ωt} = (1/2) L{e^{ωt} − e^{-ωt}} = (1/2) ( 1/(s − ω) − 1/(s + ω) ) = ω/(s^2 − ω^2)        (2.8)
The region of convergence of both transforms is given by Re(s) > ω (think this through!).
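Formulas 2.5-2.8 can also be checked symbolically with SymPy (an optional sketch; the factor u(t) is again omitted):

    # Laplace transforms of cos, sin, cosh and sinh, to compare with Eqs. 2.5-2.8.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s, w = sp.symbols('s omega', positive=True)

    for f in (sp.cos(w*t), sp.sin(w*t), sp.cosh(w*t), sp.sinh(w*t)):
        F = sp.laplace_transform(f, t, s, noconds=True)
        print(f, '->', sp.simplify(F))
    # expected: s/(s**2 + omega**2), omega/(s**2 + omega**2),
    #           s/(s**2 - omega**2), omega/(s**2 - omega**2)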
So far, we have computed the Laplace transforms of a few basic signals such as step
functions, ramp functions, exponentials, trigonometric and hyperbolic functions, each with
their corresponding region of convergence. All formulas are gathered in Table 2.1 at the
end of this chapter. For each of these basic signals, the Laplace transform is a fraction
of polynomials in the complex variable s, and in such cases there are some interesting
properties related to the region of convergence.
Definition 2.2.6. The Laplace transform F (s) is called rational when it is a fraction of
polynomials in the complex variable s:
F(s) = N(s)/D(s)
The roots of the denominator D(s) are called the poles of F (s).
E.g., the transforms of the step and the ramp functions have poles at s = 0, whereas the
Laplace transform of the exponential signal e^{at}·u(t) has a pole at s = a. In Figure 2.8 these poles are indicated with the symbol "×".

Figure 2.8: The poles and the ROC of the transforms of the (a) step, (b) ramp and (c) exponential function with a < 0.

Notice for each of these examples that the region of convergence is the open half plane to the right of these poles, a property that is formalized in the following theorem.
Theorem 2.2.7. If the Laplace transform F (s) of a causal function f (t) is rational, then
the ROC is the region in the s-plane to the right of the rightmost pole.
We will accept this theorem without proof, but let us verify how this holds for the previous
examples. In Example 2.2.3, the Laplace transform is equal to
F(s) = 5/(s + 2) − 3/(s + 1)
so the poles are s = −2 and s = −1. The region of convergence and the poles are
represented in Figure 2.9 (a).
In Example 2.2.4, the poles of L{cos ωt} and L{sin ωt} are the roots of s^2 + ω^2, i.e. s = jω
and s = −jω. The region of convergence, represented in Figure 2.9 (b), is indeed the
open half plane to the right of both these poles.
Finally, in Example 2.2.5, the poles of L{cosh ωt} and L{sinh ωt} are the roots of s^2 − ω^2,
i.e. s = ω and s = −ω. The region of convergence, represented in Figure 2.9 (c), is again
the open half plane to the right of both these poles.
Definition 2.2.8. If F (s) is the Laplace transform of the causal function f (t), then f (t)
is called the inverse Laplace transform of F (s), denoted by
f(t) = L^{-1}{F(s)}
Examples 2.2.9. 1) L^{-1}{ 3/(s^2 + 9) } = sin 3t·u(t) since L{sin 3t·u(t)} = 3/(s^2 + 9),
which shows that Table 2.1 is extremely useful to determine inverse Laplace transforms.
Figure 2.9: The poles and the ROC of (a) Example 2.2.3, (b) Example 2.2.4 and (c)
Example 2.2.5.
2) L^{-1}{ 3/s^2 − 5s/(s^2 + 4) } = 3t·u(t) − 5 cos 2t·u(t), because the inverse Laplace transform is also a linear operator.
3) In order to determine L^{-1}{ (3s + 11)/((s + 3)(s + 4)) } we cannot directly use the formulas of Table 2.1. We first have to split F(s) in partial fractions (cf. paragraph 7.4 in Stewart Calculus):
F(s) = (3s + 11)/((s + 3)(s + 4)) = A/(s + 3) + B/(s + 4)
In the identity 3s + 11 = A(s + 4) + B(s + 3) we substitute s = −3 and s = −4 to find A = 2 and B = 1. So,
L^{-1}{F(s)} = L^{-1}{ 2/(s + 3) + 1/(s + 4) } = (2e^{-3t} + e^{-4t})u(t)
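For readers who want to experiment, the same split and inversion can be done with SymPy (an optional sketch; Heaviside plays the role of u(t)):

    # Partial fractions and inverse transform of (3s + 11)/((s + 3)(s + 4)).
    import sympy as sp

    t, s = sp.symbols('t s')
    F = (3*s + 11) / ((s + 3)*(s + 4))

    print(sp.apart(F, s))                          # 2/(s + 3) + 1/(s + 4)
    print(sp.inverse_laplace_transform(F, s, t))   # expected: 2*exp(-3*t)*Heaviside(t) + exp(-4*t)*Heaviside(t)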
Theorem 2.2.10. Suppose a causal function f(t) has a Laplace transform F(s). If its derivative f'(t) also has a Laplace transform, then it is given by
L{f'(t)} = s F(s) − f(0+)
Proof. First of all, note that the derivative of a causal function is also causal. By the definition of the Laplace transform, we have
L{f'(t)} = ∫_0^{+∞} e^{-st} f'(t) dt
Corollary 2.2.11. If the first n derivatives of f(t) exist and have Laplace transforms, then
L{f^(n)(t)} = s^n F(s) − s^{n−1} f(0+) − s^{n−2} f'(0+) − ··· − s f^(n−2)(0+) − f^(n−1)(0+)
In particular, if n = 2:
L{f''(t)} = s^2 F(s) − s f(0+) − f'(0+)
This means differentiation in the time domain corresponds to multiplication with s in the s-domain, which will be very useful to solve differential equations.
Example 2.2.12. If f(t) = sin t·u(t), then F(s) = 1/(s^2 + 1) for Re(s) > 0. The differentiation theorem says
L{cos t} = s F(s) = s/(s^2 + 1)
which is correct, as we already showed in Equation 2.5. We could also use the differentiation theorem to determine the Laplace transforms of sin t and cos t in an alternative way: say L{sin t} = F(s), then L{cos t} = sF(s). By differentiating a second time we find
L{−sin t} = s·(sF(s)) − cos(0+), i.e. −F(s) = s^2 F(s) − 1
which is equivalent to
F(s) = 1/(s^2 + 1)
Applying the corollary to f(t) = t^n·u(t), whose n-th derivative is n!·u(t) and whose initial values all vanish, gives L{f^(n)(t)} = s^n L{t^n}, hence L{t^n·u(t)} = n!/s^{n+1} (formula 2.11 in Table 2.1).
Let us now use the differentiation theorem to solve some differential equations.
1) Consider the equation
f'(t) + 4f(t) = 2e^{-3t}·u(t)
with f(0+) = 3. We apply the Laplace transform to both sides of the equation:
(sF(s) − 3) + 4F(s) = 2/(s + 3)
which leads to
(s + 4)F(s) = 2/(s + 3) + 3 = (3s + 11)/(s + 3)
so
F(s) = (3s + 11)/((s + 3)(s + 4))
From Example 2.2.9 (3), we already know that
F(s) = 2/(s + 3) + 1/(s + 4)
by splitting it in partial fractions, and
f(t) = L^{-1}{F(s)} = L^{-1}{ 2/(s + 3) + 1/(s + 4) } = (2e^{-3t} + e^{-4t})u(t)
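The whole computation can be mirrored in SymPy (an optional sketch, using the data f(0+) = 3 and right-hand side 2e^{-3t} of this example):

    # Solve f' + 4f = 2 e^{-3t}, f(0+) = 3: transform, solve for F(s), invert.
    import sympy as sp

    t, s = sp.symbols('t s')
    Fs = sp.symbols('Fs')                       # the unknown transform F(s)

    eq = sp.Eq((s*Fs - 3) + 4*Fs, 2/(s + 3))    # transformed equation
    F = sp.solve(eq, Fs)[0]

    print(sp.factor(F))                             # (3*s + 11)/((s + 3)*(s + 4))
    print(sp.inverse_laplace_transform(F, s, t))    # expected: 2*exp(-3*t)*Heaviside(t) + exp(-4*t)*Heaviside(t)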
2) Next, let us solve
f''(t) − f'(t) = 3t
with initial conditions f(0+) = 1 and f'(0+) = 0. Again, we apply the Laplace transform to both sides of the equation:
(s^2 F(s) − s − 0) − (sF(s) − 1) = 3/s^2
which is equivalent to
(s^2 − s)F(s) = 3/s^2 + s − 1 = (s^3 − s^2 + 3)/s^2
leading to
F(s) = (s^3 − s^2 + 3)/(s^3(s − 1))
Splitting this expression in partial fractions
F(s) = A/s + B/s^2 + C/s^3 + D/(s − 1)
leads to the identity
s^3 − s^2 + 3 = A s^2(s − 1) + B s(s − 1) + C(s − 1) + D s^3
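The coefficients A, B, C and D are not worked out above; as an optional check, SymPy's apart gives them directly:

    # Partial-fraction expansion of F(s) = (s^3 - s^2 + 3)/(s^3 (s - 1)).
    import sympy as sp

    s = sp.symbols('s')
    F = (s**3 - s**2 + 3) / (s**3 * (s - 1))
    print(sp.apart(F, s))   # expected: 3/(s - 1) - 2/s - 3/s**2 - 3/s**3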
3) Let us solve
f''(t) + ω^2 f(t) = 0
with f(0+) = a and f'(0+) = 0. By transforming we find
(s^2 F(s) − as − 0) + ω^2 F(s) = 0
so F(s) = as/(s^2 + ω^2), and Table 2.1 gives the harmonic oscillation f(t) = a·cos ωt·u(t).
Multiplication by t corresponds to differentiation in the s-domain: differentiating F(s) under the integral sign gives
F'(s) = ∫_0^{+∞} (−t) e^{-st} f(t) dt = −L{t·f(t)}
by the definition of the Laplace transform. Repeating this differentiation process n times proves the general case
L{t^n·f(t)} = (−1)^n (d^n/ds^n) F(s)
□
Examples 2.2.16.
1) L{t·cos ωt·u(t)} = −d/ds [ s/(s^2 + ω^2) ] = (s^2 − ω^2)/(s^2 + ω^2)^2        (2.13)
2) L{t·sin ωt·u(t)} = −d/ds [ ω/(s^2 + ω^2) ] = 2ωs/(s^2 + ω^2)^2        (2.14)
3) L{t^n·e^{-at}·u(t)} = (−1)^n d^n/ds^n [ 1/(s + a) ] = n!/(s + a)^{n+1}        (2.15)
Task: can you easily determine the ROC of these transforms? Find out the correct answers
in Table 2.1.
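Formulas 2.13-2.15 follow from differentiating the transforms with respect to s; a short optional SymPy sketch reproduces them:

    # Multiplication by t corresponds to -d/ds in the s-domain (Eqs. 2.13-2.15).
    import sympy as sp

    s, w, a = sp.symbols('s omega a', positive=True)
    n = 3   # any concrete n for formula 2.15

    print(sp.simplify(-sp.diff(s/(s**2 + w**2), s)))         # (s**2 - omega**2)/(s**2 + omega**2)**2
    print(sp.simplify(-sp.diff(w/(s**2 + w**2), s)))         # 2*omega*s/(s**2 + omega**2)**2
    print(sp.simplify((-1)**n * sp.diff(1/(s + a), s, n)))   # 6/(a + s)**4, i.e. n!/(s + a)**(n + 1) for n = 3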
Theorem 2.2.17. Suppose a causal function f(t) has a Laplace transform F(s). If its antiderivative ∫f(t)dt also has a Laplace transform, then it is given by
L{∫f(t)dt} = F(s)/s + [∫f(t)dt]_{t=0+} / s        (2.16)
Proof. Integration by parts gives
∫_0^{+∞} e^{-st} (∫f(t)dt) dt = [ −(1/s) e^{-st} ∫f(t)dt ]_0^{+∞} + (1/s) ∫_0^{+∞} e^{-st} f(t) dt
Since we supposed that the Laplace transform of ∫f(t)dt exists, e^{-st} ∫f(t)dt tends to zero when t → +∞. In other words, we suppose the complex variable s is in the region of convergence of both L{f(t)} and L{∫f(t)dt}. Hence,
L{∫f(t)dt} = F(s)/s + [∫f(t)dt]_{t=0+} / s
□
As a result, in case of well-chosen initial conditions, we can say that integration in the
time domain corresponds to division by s in the s-domain, which seems coherent with
our findings in the previous section. However, this will be of limited importance in our
application domain, since we will not be dealing with integral equations.
Theorem 2.2.18. Suppose a causal function f(t) has a Laplace transform F(s). If f(t)/t also has a Laplace transform, then it is given by
L{f(t)/t} = ∫_s^{+∞} F(v) dv        (2.17)
Proof. First note we use the variable v instead of s to avoid confusion with the boundary s. So, by the definition of the Laplace transform,
F(v) = ∫_0^{+∞} e^{-vt} f(t) dt
Integrating both sides from v = s to +∞ gives
∫_s^{+∞} F(v) dv = ∫_s^{+∞} ∫_0^{+∞} e^{-vt} f(t) dt dv = ∫_0^{+∞} f(t) ( ∫_s^{+∞} e^{-vt} dv ) dt
in which we swapped the two integrals and used the fact that f(t) is independent of the variable v. Integration of the exponential leads to the following conclusion:
∫_s^{+∞} F(v) dv = ∫_0^{+∞} f(t) [ e^{-vt}/(−t) ]_{v=s}^{+∞} dt = ∫_0^{+∞} (f(t)/t) e^{-st} dt = L{f(t)/t}
proving Equation 2.17.
□
Examples 2.2.19.
1) L{sin at / t} = ∫_s^{+∞} a/(u^2 + a^2) du = [ Arctg(u/a) ]_s^{+∞} = π/2 − Arctg(s/a) = Arccotg(s/a)
2) L{(e^{at} − e^{bt})/t} = ∫_s^{+∞} ( 1/(u − a) − 1/(u − b) ) du = [ ln((u − a)/(u − b)) ]_s^{+∞} = ln((s − b)/(s − a))
Note that we again omitted the factor u(t) for notation simplicity, which is common
practice.
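As an optional check, SymPy can evaluate the first of these transforms; it should return atan(a/s), which equals Arccotg(s/a) for positive s and a:

    # L{sin(at)/t} computed directly by SymPy.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s, a = sp.symbols('s a', positive=True)

    F = sp.laplace_transform(sp.sin(a*t)/t, t, s, noconds=True)
    print(F)   # expected: atan(a/s)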
Theorem 2.2.20 (second shift theorem). Suppose a causal function f(t) has a Laplace transform F(s). Then the transform of the delayed signal f(t − τ)·u(t − τ), with τ > 0, is given by
L{f(t − τ)·u(t − τ)} = e^{-sτ} F(s)

Figure 2.10: The original (green) and the delayed (red) function.

Proof. By the definition of the Laplace transform,
L{f(t − τ)·u(t − τ)} = ∫_0^{+∞} e^{-st} f(t − τ)u(t − τ) dt = ∫_τ^{+∞} e^{-st} f(t − τ) dt
since the shifted Heaviside function u(t − τ) is zero from 0 to τ. We solve this integral by applying the substitution v = t − τ:
∫_0^{+∞} e^{-s(v+τ)} f(v) dv = e^{-sτ} ∫_0^{+∞} e^{-sv} f(v) dv = e^{-sτ} F(s)
□
This theorem is valid in the same region of convergence as the transform F (s) of the
original (non-shifted) function. The factor e −sτ is called the delay factor.
Example 2.2.21. Consider the following function represented in Figure 2.11, called a
rectangular pulse with duration τ .
Figure 2.11: The rectangular pulse with duration τ.

This pulse can be described piecewise as f(t) = 1 for 0 ≤ t < τ and f(t) = 0 otherwise. In this context, however, we will prefer a shorter notation (think this through!!):
f(t) = u(t) − u(t − τ)
which enables us to compute the Laplace transform immediately with the second shift theorem:
L{f(t)} = 1/s − (1/s) e^{-sτ} = (1/s)(1 − e^{-sτ})
In order to determine the region of convergence of this transform, note that Theorem 2.2.7
cannot be applied since L f (t) is not rational ! However, given this signal is (1) of finite
duration and (2) bounded for all t, the Laplace transform converges for any s ∈ C, hence
the region of convergence is the entire s-plane.
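Since the pulse equals 1 on (0, τ) and 0 elsewhere, its transform can also be obtained by integrating e^{-st} over (0, τ) only; a small optional SymPy sketch:

    # Transform of the rectangular pulse of duration tau, via the defining integral.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s, tau = sp.symbols('s tau', positive=True)

    F = sp.integrate(sp.exp(-s*t), (t, 0, tau))   # the pulse is 1 on (0, tau) and 0 elsewhere
    print(sp.simplify(F))   # expected: (1 - exp(-s*tau))/s, possibly written as 1/s - exp(-s*tau)/s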
Figure 2.12: A causal signal of height 1, with breakpoints at t = τ, 2τ and 3τ.
Example 2.2.23. Consider the function represented in Figure 2.13, which can be written
as
f(t) = (k/τ)·t·u(t) − (k/τ)·(t − τ)·u(t − τ),
meaning it is a ramp function from 0 to τ, and a constant for t > τ (think this through!!). With the second shift theorem, its Laplace transform is
L{f(t)} = k/(τs^2) − (k/(τs^2)) e^{-sτ} = (k/(τs^2))(1 − e^{-sτ})

Figure 2.13: A ramp from 0 to τ that remains constant at the value k for t ≥ τ.

Figure 2.14: The function g(t): the signal of Figure 2.13 delayed by a time T.
Example 2.2.24. Now consider the function g(t) represented in Figure 2.14, which is the
function f (t) from the previous example that has been delayed by a time T .
Therefore g(t) can be written as g(t) = f (t − T ) and its Laplace transform is:
L{g(t)} = e^{-sT} L{f(t)} = (k/(τs^2))(1 − e^{-sτ}) e^{-sT}
Theorem 2.2.25. Suppose a causal function f(t) has a Laplace transform F(s). Then
L{e^{-at} f(t)} = F(s + a)
for a ∈ R, and the region of convergence is shifted accordingly (a to the left if a > 0 and a to the right if a < 0). Indeed,
L{e^{-at} f(t)} = ∫_0^{+∞} e^{-st} e^{-at} f(t) dt = ∫_0^{+∞} e^{-(s+a)t} f(t) dt = F(s + a)
Suppose the transform L{f(t)} = F(s) is convergent for Re(s) > b; then the integrals above will converge if Re(s) > b − a.
Examples 2.2.26.
1) L{e^{-at} sin ωt·u(t)} = ω/((s + a)^2 + ω^2)    for Re(s) > −a        (2.20)
2) L{e^{-at} cos ωt·u(t)} = (s + a)/((s + a)^2 + ω^2)    for Re(s) > −a        (2.21)
3) L{e^{-at} sinh ωt·u(t)} = ω/((s + a)^2 − ω^2)    for Re(s) > ω − a        (2.22)
4) L{e^{-at} cosh ωt·u(t)} = (s + a)/((s + a)^2 − ω^2)    for Re(s) > ω − a        (2.23)
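Formulas 2.20 and 2.21 can be reproduced with SymPy (an optional sketch; u(t) omitted, a and ω taken positive):

    # First shift theorem applied to sin and cos (Eqs. 2.20 and 2.21).
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s, a, w = sp.symbols('s a omega', positive=True)

    F1 = sp.laplace_transform(sp.exp(-a*t)*sp.sin(w*t), t, s, noconds=True)
    F2 = sp.laplace_transform(sp.exp(-a*t)*sp.cos(w*t), t, s, noconds=True)
    print(sp.simplify(F1))   # expected: omega/((a + s)**2 + omega**2)
    print(sp.simplify(F2))   # expected: (a + s)/((a + s)**2 + omega**2)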
Theorem 2.2.27. Suppose a causal function f (t) has a Laplace transform F (s), then the
Laplace transform of f (at), where a > 0, is given by
L{f(at)} = (1/a) F(s/a)        (2.24)
and the boundary of the region of convergence is scaled accordingly.
Examples 2.2.28. 1) We can quickly verify this theorem with some transforms we know already. Since L{e^t·u(t)} = 1/(s − 1) for Re(s) > 1, the scaling theorem with a > 0 says that
L{e^{at}·u(t)} = (1/a) · 1/(s/a − 1) = 1/(s − a)    for Re(s) > a.
2) Similarly, we know that L{sin t·u(t)} = 1/(s^2 + 1) for Re(s) > 0, so the scaling theorem with ω > 0 says that
L{sin ωt·u(t)} = (1/ω) · 1/((s/ω)^2 + 1) = ω/(s^2 + ω^2)    for Re(s) > 0.
3) What is the inverse Laplace transform of 1/((3s + 1)^2 + ω^2)? Formula 2.20 tells us that
L^{-1}{ 1/((s + 1)^2 + ω^2) } = (1/ω) e^{-t} sin ωt·u(t)
for Re(s) > −1, so the scaling theorem with a = 1/3 leads to
L^{-1}{ 1/((3s + 1)^2 + ω^2) } = (1/(3ω)) e^{-t/3} sin(ωt/3)·u(t)
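As an optional numerical sanity check of this last result, take ω = 2 and verify the forward transform in SymPy:

    # Check that (1/6) e^{-t/3} sin(2t/3) u(t) transforms to 1/((3s + 1)^2 + 4)  (omega = 2).
    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s', positive=True)

    F = sp.laplace_transform(sp.exp(-t/3)*sp.sin(2*t/3)/6, t, s, noconds=True)
    print(sp.simplify(F - 1/((3*s + 1)**2 + 4)))   # expected: 0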
2.2.9 Convolution
In Section 2.2.1 we saw that L{f(t)·g(t)} ≠ L{f(t)}·L{g(t)}, because the integral of a product is not the product of the integrals. In this section, we introduce a new operation called the convolution of two signals, such that its Laplace transform is the product of their transforms.
Definition 2.2.29. Consider two causal functions f (t) and g(t) with their respective
Laplace transforms F (s) and G(s). The convolution of f (t) and g(t) is the integral
(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ
The motivation behind this definition lies in the following important convolution prop-
erty.
Theorem 2.2.30. The Laplace transform of the convolution of f(t) and g(t) is the product of their respective Laplace transforms F(s) and G(s). In other words,
L{(f ∗ g)(t)} = F(s) G(s)        (2.25)
and therefore
F(s)G(s) = ∫_0^{+∞} f(τ) ∫_0^{+∞} e^{-st} g(t − τ)u(t − τ) dt dτ
Example 2.2.31. Let us compute the inverse Laplace transform of 1/(s^2(s − a)) using this convolution property (we could also split in partial fractions!). Having a closer look at this transform, we recognise it is the product of two well-known transforms: 1/s^2 = L{t·u(t)} and 1/(s − a) = L{e^{at}·u(t)}. Therefore,
L^{-1}{ 1/(s^2(s − a)) } = t ∗ e^{at} = ∫_0^t e^{aτ}(t − τ) dτ
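With a concrete value a = 2, this convolution integral can be cross-checked in SymPy (optional sketch) against the direct inverse transform:

    # Convolution t * e^{2t} versus the inverse transform of 1/(s^2 (s - 2)).
    import sympy as sp

    t, tau = sp.symbols('t tau', positive=True)
    s = sp.symbols('s')

    conv = sp.integrate(sp.exp(2*tau)*(t - tau), (tau, 0, t))     # the convolution integral
    inv = sp.inverse_laplace_transform(1/(s**2*(s - 2)), s, t)    # direct inversion

    print(sp.simplify(conv))         # expected: exp(2*t)/4 - t/2 - 1/4
    print(sp.simplify(conv - inv))   # expected: 0 (for t > 0)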
Theorem 2.2.32 (initial-value theorem). If f(t) and f'(t) both have Laplace transforms and if the initial value f(0+) exists, then it can be computed by
f(0+) = lim_{s→+∞} s F(s)
Corollary 2.2.33. If f'(t) and f''(t) both have Laplace transforms and if the initial slope f'(0+) exists, then it can be computed by
f'(0+) = lim_{s→+∞} s ( s F(s) − f(0+) )
Proof. This is immediately true by the previous theorem and the theorem of differentiation in time (Theorem 2.2.10).
□
For the transform F(s) = (3s + 11)/((s + 3)(s + 4)) of the previous examples, the initial value can be read off
f(0+) = lim_{s→+∞} s F(s) = lim_{s→+∞} (3s^2 + 11s)/(s^2 + 7s + 12) = 3
at a glance. We can verify our result with f(t) = (2e^{-3t} + e^{-4t})u(t) found in Example 2.2.9 (3). We can also compute the initial slope:
f'(0+) = lim_{s→+∞} s ( s F(s) − 3 ) = lim_{s→+∞} (−10s^2 − 36s)/(s^2 + 7s + 12) = −10
which matches f'(0+) = −6 − 4 = −10 computed from f(t) directly.
Theorem 2.2.35 (final-value theorem). If f(t) and f'(t) both have Laplace transforms and if the final value lim_{t→+∞} f(t) exists, then it can be computed by
lim_{t→+∞} f(t) = lim_{s→0} s F(s)
Examples 2.2.36. 1) Let us first verify this holds for the same example we used for the
initial-value theorem. We compute
lim_{t→+∞} f(t) = lim_{s→0} s F(s) = lim_{s→0} (3s^2 + 11s)/(s^2 + 7s + 12) = 0
2) It is really necessary to require that limt→+∞ f (t) exists for the theorem to hold. Let’s
take an easy counterexample with f (t) = sin ωt.u(t), which obviously does not have a
limit when t → +∞. If we were to apply the final-value theorem anyhow, we would obtain:
lim_{t→+∞} f(t) = lim_{s→0} s F(s) = lim_{s→0} ωs/(s^2 + ω^2) = 0
which is obviously untrue. One can state precisely the constraints on f(t) and F(s) under which the theorem does hold, but this goes beyond the scope of this course.
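Both limit theorems are easy to reproduce in SymPy for the rational transform used above (optional sketch):

    # Initial- and final-value theorems for F(s) = (3s + 11)/((s + 3)(s + 4)).
    import sympy as sp

    s = sp.symbols('s')
    F = (3*s + 11) / ((s + 3)*(s + 4))

    print(sp.limit(s*F, s, sp.oo))   # 3  -> f(0+)
    print(sp.limit(s*F, s, 0))       # 0  -> lim_{t -> +oo} f(t)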
2.3 Further applications of the Laplace transform
Consider, for ε > 0, the rectangular signal δε(t) defined by
δε(t) = 1/ε if 0 < t < ε,    δε(t) = 0 if t < 0 or t > ε
An interesting property of δε (t) is that the area of the rectangle is equal to 1 for any value
of ε > 0.
Definition 2.3.1. The Dirac delta function, also called the unit impulse, is the generalised
function defined by
δ(t) = lim δε (t)
ε→0+
Figure 2.15: The rectangular signal δε(t), of width ε and height 1/ε.
When ε → 0+, we see 1/ε tends to +∞, so the rectangular signal tends to look like
a “spike”, an impulse with infinite height and infinitesimal duration, but still with an area
equal to 1, which is why it is called a unit impulse. The Dirac δ function is commonly
represented as in Figure 2.16, where the arrowhead stops at 1.
Figure 2.16: The Dirac delta function δ(t), represented by an arrow of height 1.
The unit impulse is not an actual function, since it would be equal to zero for any t ̸= 0
and tend to +∞ for t = 0, though still have an area equal to 1. Also, it is physically
impossible to realise a true impulse, but we can think of examples that approximate its
behaviour, e.g. a single stroke with a hammer, which delivers a huge amount of energy in
a very short time. It is also referred to as the Dirac distribution, as if 100% probability was
gathered in one single point.
Theorem 2.3.2. The Laplace transform of the unit impulse is
L{δ(t)} = 1        (2.28)
for all s ∈ C.
Proof. Let us start by computing the Laplace transform of δε (t). Similarly to Example
2.2.21, we can rewrite
δε(t) = (1/ε)(u(t) − u(t − ε))
so its Laplace transform is equal to
L{δε(t)} = (1 − e^{-sε})/(εs)
If we now take the limit ε → 0+ (and suppose we can invert the limit and the Laplace
transform!), we obtain:
L{δ(t)} = lim_{ε→0+} L{δε(t)} = lim_{ε→0+} (1 − e^{-sε})/(εs) = 1
by using l'Hôpital's rule. □
Note that we have not shown why the region of convergence of the Dirac’s transform is
the entire s-plane. We can loosely accept this by analogy with Example 2.2.21.
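The limit used in the proof is elementary; as an optional check it can be evaluated in SymPy:

    # lim_{eps -> 0+} (1 - e^{-s eps})/(eps s) = 1, i.e. L{delta(t)} = 1.
    import sympy as sp

    s, eps = sp.symbols('s epsilon', positive=True)
    print(sp.limit((1 - sp.exp(-s*eps)) / (eps*s), eps, 0, '+'))   # 1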
This theorem has major consequences for further applications, as we will see in the next
section. For now, it allows us to compute more inverse Laplace transforms.
Example 2.3.3. Let us compute the inverse Laplace transform of the rational function
F(s) = (s^2 − 2s + 1)/(s^2 − s − 2),    Re(s) > 2
We cannot split this in partial fractions, since the degrees of the numerator and denominator are equal. Our first step is to perform a long division, in order to rewrite (check this!):
F(s) = 1 + (−s + 3)/(s^2 − s − 2)
The second term can be split in partial fractions, leading to (also check this!):
F(s) = 1 − (4/3)·1/(s + 1) + (1/3)·1/(s − 2)
Finally we obtain:
f(t) = δ(t) − (4/3) e^{-t} u(t) + (1/3) e^{2t} u(t)
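An optional SymPy sketch of the same example; DiracDelta plays the role of δ(t) and Heaviside that of u(t):

    # Long division / partial fractions and inversion of (s^2 - 2s + 1)/(s^2 - s - 2).
    import sympy as sp

    t, s = sp.symbols('t s')
    F = (s**2 - 2*s + 1) / (s**2 - s - 2)
    Fsplit = sp.apart(F, s)

    print(Fsplit)                                        # expected: 1 - 4/(3*(s + 1)) + 1/(3*(s - 2))
    print(sp.inverse_laplace_transform(Fsplit, s, t))    # expected: DiracDelta(t) - 4*exp(-t)*Heaviside(t)/3 + exp(2*t)*Heaviside(t)/3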
The Dirac function has other interesting properties, such as the sifting property, which
some of you will see in the course of Signals and Systems in Bloc 3. We point out one
other property, that tells us the relationship between the unit step function and the Dirac
function.
Theorem 2.3.4. The Dirac δ function is the derivative of the unit step function:
du(t)/dt = δ(t)
In other words,
u(t) = ∫_{-∞}^{t} δ(x) dx
Proof. First of all, let us agree that the above equations are “symbolic”, in that sense that
we know the unit step function u(t) is not differentiable at t = 0, it is not even continuous.
Instead, consider the function gε(t) = ∫_{-∞}^{t} δε(x) dx, which is 0 for t ≤ 0, rises linearly from 0 to 1 on [0, ε], and equals 1 for t ≥ ε.

Figure 2.17: The function gε(t) for any ε > 0, which tends to the unit step function u(t) if ε → 0+.
We could show this alternatively by looking at the integral of the Dirac function. Because
the area delimited by δ(t) is equal to 1 and δ(t) = 0 for t ̸= 0, we have
∫_{-∞}^{+∞} δ(u) du = ∫_α^β δ(u) du = 1    for any α < 0 < β
Consider a physical system described by a constant coefficient linear differential equation
a d^2y/dt^2 + b dy/dt + c y = x(t)
in which y (t) is the unknown function and x(t) is a given external force function (me-
chanical or electromotive). In this context, x(t) is often called the input signal and the
solution y (t) is called the output signal.
Now, if we assume that the initial conditions are zero, so y(0) = dy/dt(0) = 0, then the Laplace transform of the differential equation is equal to:
(a s^2 + b s + c) Y(s) = X(s)
Definition 2.3.5. If x(t) is the input signal and y (t) is the output signal of a linear system
with zero initial conditions, then the transfer function of that system is given by
H(s) = Y(s)/X(s)
In particular, if the input signal is the Dirac function δ(t), the corresponding output y(t) is called the unit impulse response, which is denoted by h(t). Since L{δ(t)} = 1, it immediately follows that H(s) = Y(s), showing the transfer function is equal to the transform of the unit impulse response h(t). So,
H(s) = L{h(t)}
This transfer function is an important notion, since it fully describes the connection be-
tween the transformed input and the transformed output, so it actually defines all the
characteristics of the physical system. Knowing the transfer function of a system allows
us to predict the output signal for any given input signal: since Y (s) = H(s)X(s) we
find:
y(t) = L^{-1}{Y(s)} = L^{-1}{H(s)X(s)} = (h ∗ x)(t)
by the convolution theorem.
So, if we have a physical system with unknown characteristics, e.g. we don't know the damping factor of a spring, or we don't know the inductor/capacitor in an electric circuit, we will proceed as follows:
• apply a unit impulse δ(t) as input and measure the output: this is the unit impulse response h(t), whose transform is the transfer function H(s);
• for any other given input signal x(t), the output is computed by y(t) = (h ∗ x)(t)
Notice that this procedure only requires one single experiment, thanks to the notions of
the unit impulse δ(t) and the transfer function H(s).
Example 2.3.6. Consider a mass m = 3 kg, attached to a spring with spring constant
k = 15 kg/s2 and damping constant λ = 6 kg/s, with an external force f (t) acting on
the spring.
Figure 2.18: The mass-spring system, with the position y measured from the equilibrium position.
If we choose a convenient reference system (see Figure 2.18), the differential equation resulting from Newton's second Law is given by:
3 ÿ + 6 ẏ + 15 y = f(t)
Assuming the initial position y(0) and initial speed ẏ(0) are equal to zero, the transfer function of this mass-spring system is equal to:
H(s) = Y(s)/F(s) = 1/(3s^2 + 6s + 15)
For a unit step force, the output can be computed as y(t) = (h ∗ f)(t), with h(t) = (1/6) e^{-t} sin 2t · u(t) and f(t) = u(t). In this example, convolution leads to a longer calculation than splitting in partial fractions, because we need to integrate by parts twice (check this, it's good practice!).
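As an optional sketch, the impulse response and the step response of this system can be obtained in SymPy without convolution, by inverting H(s) and H(s)/s respectively:

    # Impulse response h(t) and step response y(t) of H(s) = 1/(3 s^2 + 6 s + 15).
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    H = 1 / (3*s**2 + 6*s + 15)

    h = sp.inverse_laplace_transform(H, s, t)      # expected: exp(-t)*sin(2*t)/6
    y = sp.inverse_laplace_transform(H/s, s, t)    # step response, since L{u(t)} = 1/s
    print(sp.simplify(h))
    print(sp.simplify(y))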
Example 2.3.7. In Bloc 1 (see paragraph 17.3 in Stewart Calculus, edition 8), we have
seen the analogy between a mass-spring system and an electric circuit. If we consider an
electric circuit (see Figure 2.19) with a resistor R = 6 Ω, an inductor L = 3 H, a capacitor C = 1/15 F, and an external electromotive force E(t), then Kirchhoff's law leads to the differential equation:
3 d^2Q/dt^2 + 6 dQ/dt + 15 Q = E(t)
where Q(t), the charge on the capacitor, is the unknown function. The transfer function of this system is identical to the one in the previous example:
H(s) = L{Q(t)}/L{E(t)} = 1/(3s^2 + 6s + 15)
Figure 2.19: The RLC circuit with electromotive force E(t) and a switch.
Table 2.1: Laplace transforms and properties.

f(t)                     |  F(s) = L{f(t)}                  |  ROC            |  Eq.
sin ωt · u(t)            |  ω/(s^2 + ω^2)                   |  Re(s) > 0      |  2.6
cosh ωt · u(t)           |  s/(s^2 − ω^2)                   |  Re(s) > ω      |  2.7
sinh ωt · u(t)           |  ω/(s^2 − ω^2)                   |  Re(s) > ω      |  2.8
t^n · u(t)               |  n!/s^{n+1}                      |  Re(s) > 0      |  2.11
t · cos ωt · u(t)        |  (s^2 − ω^2)/(s^2 + ω^2)^2       |  Re(s) > 0      |  2.13
t · sin ωt · u(t)        |  2ωs/(s^2 + ω^2)^2               |  Re(s) > 0      |  2.14
t^n · e^{-at} · u(t)     |  n!/(s + a)^{n+1}                |  Re(s) > −a     |  2.15
∫ f(t) dt                |  F(s)/s + [∫ f(t) dt]_{t=0+}/s   |  —              |  2.16
f(t)/t                   |  ∫_s^{+∞} F(v) dv                |  —              |  2.17
e^{-at} sin ωt · u(t)    |  ω/((s + a)^2 + ω^2)             |  Re(s) > −a     |  2.20
e^{-at} cos ωt · u(t)    |  (s + a)/((s + a)^2 + ω^2)       |  Re(s) > −a     |  2.21
e^{-at} sinh ωt · u(t)   |  ω/((s + a)^2 − ω^2)             |  Re(s) > ω − a  |  2.22
e^{-at} cosh ωt · u(t)   |  (s + a)/((s + a)^2 − ω^2)       |  Re(s) > ω − a  |  2.23
f(at)                    |  (1/a) F(s/a)                    |  —              |  2.24
∫_0^t f(τ) g(t − τ) dτ   |  F(s) G(s)                       |  —              |  2.25
2.5 Exercises
1. Compute the Laplace transforms of the following causal functions (which are supposed to be multiplied by u(t)):
(a) f(t) = 2 + 5t − t^3
3. Solve the following differential equations using the Laplace transform:
(a) Falling object: when an object of mass m falls, it experiences air drag opposite
to the direction of motion and proportional to its speed. Therefore, the speed
v (t) of the object obeys the differential equation
v̇ + (λ/m) v = g
where λ is a constant related to the shape of the object and g is the gravitational
acceleration. Solve this equation assuming the initial speed v (0) is equal to
zero. After solving, check what happens with the speed v (t) when t → ∞.
(c) Spring-mass system: when a mass m is attached to a spring with spring constant
k (without friction nor external forces), Newton’s second Law leads to the
differential equation:
ẍ + (k/m) x = 0
Solve this equation for x(t) assuming x(0) = 2 and v (0) = 0.
4. Compute the Laplace transforms of the following functions based on their graphs:
(a) (graph of a causal signal with a mark on the time axis at t = π)
(b) (graph of a causal signal with marks on the time axis at t = τ and t = 2τ)
5. Compute the following inverse Laplace transforms and sketch the graphs of the
original functions:
(a) L^{-1}{ e^{-2s}/(s + 4) }
(b) L^{-1}{ e^{-s}/((s/3)^2 + 4) }
(c) L^{-1}{ (3s + 6) e^{-3s}/(s^2 + 4s + 8) }
(graph of a function f(t) with a mark on the time axis at t = 3)
8. A given physical system that is subject to an input force e(t) is described by the differential equation
y''(t) + y'(t) − 2y(t) = e(t)
Supposing all initial conditions are equal to zero,
(a) determine the transfer function H(s) of this system,
(b) compute the unit impulse response h(t) of this physical system,
(c) if e(t) = e^{-2t} u(t), compute the output y(t) in two different ways.
9. Consider a physical system with unit impulse response h(t) = e^{-t} u(t), subject to a given input signal.
(a) Without computing the output y (t) explicitly, compute the initial value y (0+)
and the final value limt→+∞ y (t).
(b) Without using convolution, compute the output y (t) of the system.
(c) Verify that your answers from previous parts are coherent.
2.6 Solutions to the exercises
(d) ( (8/9) e^{2t} + (10/9) e^{-t} − (2/3) t e^{-t} ) u(t)
(e) ( e^{-2t} + cos 2t − (1/2) sin 2t ) u(t)
3. (a) v(t) = (mg/λ)(1 − e^{-(λ/m)t}) → mg/λ
   (b) i(t) = (3/2)(1 − e^{-2t})
   (c) x(t) = 2 cos( √(k/m) t )
4. (a) F(s) = 2/(s^2 + 4) (1 − e^{-sπ})
   (b) F(s) = A/(τs^2) (1 − 2e^{-sτ} + e^{-2sτ})
5. (a) e^{-4(t−2)} u(t − 2)
   (b) (3/2) sin 6(t − 1) u(t − 1)
   (c) 3 e^{-2(t−3)} cos 2(t − 3) u(t − 3)
6. (a) ( −(1/17) e^{-4t} + (1/17) cos t − (4/17) sin t ) u(t)
   (b) ( (1/50) e^{3t} − (1/2) e^{-3t} + (12/25) e^{-2t} − (3/5) t e^{-2t} ) u(t)
   (c) ( (1/6)(1 + 3a) e^{3t} − (1/6)(1 − 3a) e^{-3t} ) u(t)
7. (e^{-bt} − e^{-at})/(a − b)
8. (a) H(s) = 1/(s^2 + s − 2)
   (b) h(t) = (1/3)(e^t − e^{-2t}) u(t)
   (c) ( (1/9) e^t − (1/3) t e^{-2t} − (1/9) e^{-2t} ) u(t)
9. (a) y(0+) = 2 and lim_{t→+∞} y(t) = 0
   (c) –
Bibliography
Signals and Stochastic Processes (Modules 14-15). J. Nehru Technological University Hyderabad. http://jntuhsd.in/uploads/programmes/Module14_LT_19.01_.2017_.PDF
Stephen Boyd. EE102: Introduction to Signals & Systems. Stanford University. http://web.stanford.edu/~boyd/ee102/
Jianping Yao. ELG 3125A Signal and System Analysis (Lecture notes). University of Ottawa. https://www.site.uottawa.ca/~jpyao/courses/ELG3120_files/Laplace.pdf