Appendix N
One-Sided and Two-Sided Laplace Transforms
1. Introduction
2. Definition of One-Sided Transform
3. The Laplace Transform on the Complex Plane
3.1 Its Origins
3.2 Inversion by Line and Contour Integrals
4. Inversion of One-Sided Laplace Transform by Residues
5. The Two-Sided Transform
5.1 Definition and Regions of Convergence
5.2 Inversion of Two-Sided Transforms
1. Introduction
You probably have some familiarity with the standard Laplace transform and its
inversion by partial fractions. However, you may not have seen its links with the
classical theory of complex variables and the role of residues in the inversion of a
transform. These notes will fill a little of that gap. They're not a substitute for proper
study of complex variables, but they will show you how to calculate transforms and
invert them by residues (at least for the class of rational polynomial transforms).
Why the emphasis on residues? Because they are the easiest way to deal with the
second major topic of these notes: the two-sided Laplace transform.
Many functions in communications and signal processing are two-sided; that is, they
are not necessarily zero for negative time. These notes show you how to represent them
with the two-sided Laplace transform and why you also need to specify a “region of
convergence”. You will see how to invert two-sided transforms of rational polynomial
type by residues.
2. Definition of One-Sided Transform
A one-sided function is zero for negative time; that is, for t < 0⁻, where 0⁻ denotes a
time just before 0 (the formulation makes allowance for impulses at time zero, δ(t)). For
exponential, sinusoidal and polynomial signals, and for systems described by linear
differential equations with constant coefficients, the Laplace transform provides a
convenient simplification. It's a way of expressing any function as a superposition
(integral) of complex exponentials.
The Laplace transform of a function x(t) is

    X(s) = ∫_{0⁻}^{∞} x(t) e^{−st} dt
where s is a complex variable. The inverse transform is given by
    x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s) e^{st} ds
where c is a real constant selected for convergence of X(s), as discussed in detail in the
next section.
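As a quick check, the defining integral can be evaluated symbolically. This is a sketch that assumes Python with sympy is available (the notes themselves rely on no software); the value a = 2 is just an illustration:

```python
# Evaluate X(s) = integral of x(t) e^{-st} dt from 0 to infinity for x(t) = e^{2t}.
# conds='none' discards the convergence condition, which here is Re[s] > 2.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
s = sp.symbols('s')
a = 2  # illustrative choice of the exponent in e^{at}
X = sp.integrate(sp.exp(a*t) * sp.exp(-s*t), (t, 0, sp.oo), conds='none')
print(sp.simplify(X))  # equals 1/(s - 2), valid for Re[s] > 2
```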
A Short Table of One-Sided Transform Pairs

    x(t), t ≥ 0⁻           X(s)
    e^{at}, real a         1/(s − a),      Re[s] > a
    e^{pt}, complex p      1/(s − p),      Re[s] > Re[p]
    sin(ωt)                ω/(s² + ω²),    Re[s] > 0
    cos(ωt)                s/(s² + ω²),    Re[s] > 0
    u₀(t) = δ(t)           1
    u₋₁(t)                 1/s,            Re[s] > 0
    u₋₂(t)                 1/s²,           Re[s] > 0
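The sinusoid entries can be spot-checked symbolically; this is a sketch assuming sympy is available, not part of the original notes:

```python
# Verify the sin and cos rows of the table for a symbolic frequency omega.
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)
S = sp.laplace_transform(sp.sin(w*t), t, s, noconds=True)
C = sp.laplace_transform(sp.cos(w*t), t, s, noconds=True)
print(S)  # equals omega/(s**2 + omega**2)
print(C)  # equals s/(s**2 + omega**2)
```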
Questions:
1. Use the definition of the transform to verify the first entry in the table above.
2. What is the Laplace transform of the function x(t) = t e^{−t}, including the condition on Re[s]?
3. The Laplace Transform on the Complex Plane
3.1 Its Origins
A function can be expanded on an interval [0,T] as a Fourier series – a sum of
sines and cosines with frequencies ω_k = k·2π/T or, equivalently, as a sum of complex
exponentials – as follows:

    x(t) = Σ_{k=−∞}^{∞} X_k e^{jω_k t},   0 ≤ t < T

where
    X_k = (1/T) ∫_{0}^{T} x(t) e^{−jω_k t} dt,   ω_k = k·2π/T

since

    ∫_{0}^{T} x(t) e^{−jω_k t} dt = Σ_{n=−∞}^{∞} X_n ∫_{0}^{T} e^{jω_n t} e^{−jω_k t} dt = T X_k
The series is a resolution of x(t) into components at frequencies ω_k, each with complex
amplitude X_k.
Although the resulting series is periodic, it does represent the function on [0,T], as
shown in the sketch.
Now let us increase T, the length of the interval over which the series equals the
original function. The frequencies ω_k will become more closely spaced, and in the limit
the sum becomes an integral

    x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

where

    X(ω) = ∫_{0⁻}^{∞} x(t) e^{−jωt} dt
This type of resolution into components along a continuous frequency axis is known as
the Fourier transform, and the pair of integrals is known as the Fourier transform pair.
The component of x(t) at frequency ω , X (ω ) , can be considered a density: if the units of
x(t) are volts, then the units of X (ω ) are volt-sec (or volt/Hz if we had been using Hz
instead of angular frequency in radian/sec). The transform X (ω ) is complex, to
represent both magnitude and phase.
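The limiting argument above can be illustrated numerically. This sketch assumes Python with numpy; the test function e^{−t} (for t ≥ 0), the fixed time step, and the target frequency of 1 rad/s are all illustrative choices. The scaled series coefficient T·X_k should approach the Fourier transform value 1/(1 + jω) as T grows:

```python
# Riemann-sum estimate of T*X_k = integral_0^T x(t) e^{-j w_k t} dt for
# x(t) = e^{-t}, compared against the Fourier transform X(w) = 1/(1 + jw).
import numpy as np

def scaled_coeff(T, k):
    n = int(T * 40_000)          # fixed time step, so accuracy doesn't degrade with T
    dt = T / n
    tt = np.arange(n) * dt
    wk = 2 * np.pi * k / T
    return np.sum(np.exp(-tt) * np.exp(-1j * wk * tt)) * dt

errors = []
for T in (5.0, 20.0, 80.0):
    k = round(T / (2 * np.pi))   # choose k so that w_k = 2*pi*k/T stays near 1 rad/s
    wk = 2 * np.pi * k / T
    errors.append(abs(scaled_coeff(T, k) - 1 / (1 + 1j * wk)))
print(errors)  # the first error (truncation at T=5) is visibly larger than the rest
```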
When one attempts to calculate the Fourier transform of some functions, however,
the integral does not converge. Examples are the unit step x(t) = u₋₁(t), or sin(ω₀t), or
exp(at) for a > 0 (try them and see). A way around this difficulty is first to multiply x(t)
by e^{−σt}, for some real σ > 0, and Fourier transform the result. With σ large enough,
most useful functions will converge. We can easily regain x(t) by multiplying the inverse
transform by e^{σt}. The pair of equations now becomes
    X(ω) = ∫_{0⁻}^{∞} x(t) e^{−(σ+jω)t} dt

    x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{(σ+jω)t} dω
Finally, a change of variable gives the Laplace transform. Define the complex frequency
variable s = σ + jω. Then

    X(s) = ∫_{0⁻}^{∞} x(t) e^{−st} dt

    x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s) e^{st} ds
where we pick c = σ to be large enough for convergence of the transform.
The values of s for which the real part σ is large enough make up the “region of
convergence”. Consider, for example, transforming x(t) = exp(−3t):

    X(s) = ∫_{0⁻}^{∞} e^{−3t} e^{−st} dt = ∫_{0⁻}^{∞} e^{−(3+s)t} dt = 1/(s + 3),   Re[s] > −3
The pole and region of convergence are illustrated below:
3.2 Inversion by Line and Contour Integrals
A Laplace transform X(s) is apparently only going to be evaluated at points along
the line s = c + jω in the inverse transform integral. Nevertheless, X(s) is a full-blown
complex function of a complex variable s = σ + jω, which can represent any point on
the complex plane.
Consider the transform pair x(t) = e^{−at} and X(s) = 1/(s + a). The two sketches
show clearly that the inverse transform is a line integral on the complex plane.
The region of convergence consists of all points to the right of s = −a – and generally, for
one-sided transforms, it is to the right of the rightmost pole – and the integration line must
lie in the region of convergence. That is, c > −a.
The line integral suggests a contour integration. Consider the integration path
shown below. Curve C consists of the original line integral, circles around the poles, and
a giant semicircle to the left. The cuts do not have to be considered, since the integration
in one direction is cancelled by the contribution from the opposite direction, which has
opposite sign.
Now for some complex variable theory. First, the Cauchy Integral Theorem: if a
function F(s) is analytic within and on a closed curve C then
    ∮_C F(s) ds = 0
This is certainly true of rational polynomial transforms. The practical result,
demonstrated by the sketch below, is that our line integral equals the sum of integrals
around each of the poles, less the integral along the semicircle.
Next consider the integration along the semicircle. If the integrand in
    x(t) = (1/2πj) ∫ X(s) e^{st} ds
approaches zero more quickly than 1/|s| on that semicircle, then the contribution from the
semicircle is negligible, and approaches zero as the radius increases. This condition is
ensured by the combination of (1) positive t and large negative σ in the exponential
factor and (2) X ( s ) → 0 uniformly as the radius becomes large. The latter condition is
in turn guaranteed if the degree of the numerator of X (s ) is less than the degree of the
denominator.
We now have the important result that our line integral equals the sum of integrals
around each of the poles, so that
    x(t) = Σ_i (1/2πj) ∮_{C_i} X(s) e^{st} ds
where Ci is a closed curve encircling the ith pole. Next, we use the Cauchy Integral
Formula: if F(s) is analytic within and on a closed curve Ci, and the point s=a is within
Ci, then
    (1/2πj) ∮_{C_i} F(s)/(s − a) ds = F(a)
F(a) is the “residue” of the integrand F(s)/(s − a) when it is integrated around the curve
centred on s = a. This is an extremely useful result. For example, we might want the
inverse transform x(t) of X(s) = 1/(s + r). Then, as shown by the sketch above, we must
integrate e^{st}/(s + r) along a closed curve about the pole s = −r. In order to use the Cauchy
Integral Formula, we identify F(s) with e^{st} and a with −r. The inverse x(t) is obtained in
one step: x(t) = e^{−rt}, the residue at s = −r, which agrees with the entry in the table of
Section 2.
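The one-step inversion can be checked symbolically; this sketch assumes sympy, whose residue function implements exactly this calculation:

```python
# The residue of e^{st}/(s + r) at the pole s = -r is e^{-rt}, i.e. F(-r)
# with F(s) = e^{st}, as the Cauchy Integral Formula predicts.
import sympy as sp

s, t, r = sp.symbols('s t r')
res = sp.residue(sp.exp(s*t)/(s + r), s, -r)
print(res)  # equals e^{-r t}
```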
To obtain the residue at a pole with multiplicity greater than one, note that
differentiation of the Cauchy Integral Formula with respect to a yields
    (1/n!) dⁿF(s)/dsⁿ |_{s=a} = (1/2πj) ∮_{C_i} F(s)/(s − a)^{n+1} ds

which again lets us evaluate the residue without explicitly performing the integration.
4. Inversion of One-Sided Laplace Transform by Residues
In this section, we’ll turn the contour integration and residue theory into a
relatively mechanical procedure for inverting a Laplace transform of the rational
polynomial type.
Given a Laplace transform X(s), we want the associated inverse transform x(t).
The integrand in the inverse transform equation is then X(s) e^{st}. Define a pole of the
integrand as a point at which it becomes infinite (a loose definition, but it
works for rational polynomials). For our rational polynomials, poles are roots of the
denominator. For example, the integrand
    (s² + 7) e^{st} / [(s − 2)(s + 3)² (s² + 2s + 5)]
has three simple poles, at s = 2 and s = −1 ± j2, and a double pole at s = −3. Let there be N
distinct poles s_k, k = 1,…,N, each of multiplicity n_k. In our example, N = 4: s₁ = 2, s₂ = −3,
s₃ = −1+j2, s₄ = −1−j2 and n₁ = 1, n₂ = 2, n₃ = 1 and n₄ = 1.
Next, define the pole coefficient associated with pole s_k as

    F_k(s) = (s − s_k)^{n_k} X(s) e^{st}
In our example,
    F₂(s) = (s² + 7) e^{st} / [(s − 2)(s² + 2s + 5)]
Notice that this definition simply removes the offending pole.
Finally, we just operate on the pole coefficients to produce the residue at pole sk.
For a simple pole the residue R_k(t) is

    R_k(t) = F_k(s_k)
In our example,
    R₁(t) = 11 e^{2t} / (25·13) = 0.0338 e^{2t}
For a multiple pole, define the residue

    R_k(t) = (1/(n_k − 1)!) d^{n_k−1} F_k(s) / ds^{n_k−1} |_{s=s_k}
which requires calculation of the (n_k − 1)th derivative. This definition is clearly valid for
simple poles, too. In our example,

    R₂(t) = −(13/100 + (2/5) t) e^{−3t}

after differentiation, substitution and simplification.
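The double-pole residue can be checked symbolically (a sketch assuming sympy is available):

```python
# R2(t) = (1/1!) d/ds F2(s) evaluated at the double pole s = -3, where
# F2(s) = (s^2 + 7) e^{st} / ((s - 2)(s^2 + 2s + 5)).
import sympy as sp

s, t = sp.symbols('s t')
F2 = (s**2 + 7) * sp.exp(s*t) / ((s - 2) * (s**2 + 2*s + 5))
R2 = sp.diff(F2, s).subs(s, -3)
print(sp.simplify(R2))  # equals -(13/100 + 2*t/5)*exp(-3*t)
```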
The inverse transform is, at last, just the sum of the residues:
    x(t) = Σ_{k=1}^{N} R_k(t)
In our example,
    x(t) = (11/(25·13)) e^{2t} − (13/100 + (2/5) t) e^{−3t} + (e^{−t}/(4·13)) (5 cos(2t) + sin(2t))

where the bracketed quantity in the last term can also be written as √26 cos(2t − tan⁻¹(1/5)).
Questions
3. Identify the poles and zeroes and their multiplicity in the transforms below:
    (a) X(s) = (s + 3)/(s³ + s² + 8s − 10)

    (b) X(s) = s²/(s³ + 7s² + 15s + 9)
and show their locations on the s-plane.
4. Invert the following transforms:
    (a) X(s) = 2/(s + 3)

    (b) X(s) = s/(s² − 5s + 6)

    (c) X(s) = 1/(s² + 2s + 5)

    (d) X(s) = s/(s² + 8s + 16)
Answers
3. (a) Poles at s = 1 and s = −1 ± j3, zero at s = −3, all simple.
(b) Poles at s = −1, s = −3 (double), zero at s = 0 (double)
4. (a) x(t) = 2 e^{−3t}, t ≥ 0   (single pole)
(b) x(t) = 3 e^{3t} − 2 e^{2t}, t ≥ 0   (two simple poles)
(c) x(t) = (1/2) e^{−t} sin(2t), t ≥ 0   (complex conjugate poles)
(d) x(t) = (1 − 4t) e^{−4t}, t ≥ 0   (double pole)
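The four answers can be cross-checked with sympy's inverse transform (assumed available for this sketch; the notes themselves do the inversions by hand):

```python
# Invert each of the four transforms of Question 4 symbolically.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
transforms = [2/(s + 3),
              s/(s**2 - 5*s + 6),
              1/(s**2 + 2*s + 5),
              s/(s**2 + 8*s + 16)]
inverses = [sp.inverse_laplace_transform(X, s, t) for X in transforms]
for X, x in zip(transforms, inverses):
    print(X, '->', sp.simplify(x))
```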
5. The Two-Sided Laplace Transform
5.1 Definition and Regions of Convergence
The two-sided Laplace transform allows time functions to be non-zero for negative
time. It includes the one-sided transform that we have discussed already as a special
case. The definition
    X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt
seems straightforward enough, but there are some subtleties, which we discover through a
set of three examples.
First, consider the function
0, t < 0
x1 (t ) = 2t −3t
e − e , t ≥ 0
It is strictly causal; that is, it is zero for negative time. Its transform is easily evaluated as
    X₁(s) = 5/(s² + s − 6)
with region of convergence (ROC) defined by σ > −3 and σ > 2 for the two terms, or
simply σ > 2. Next consider the two-sided function

    x₂(t) = { −e^{2t},   t < 0
            { −e^{−3t},  t ≥ 0
Its transform is obtained by integration:

    X₂(s) = −∫_{−∞}^{0} e^{2t} e^{−st} dt − ∫_{0}^{∞} e^{−3t} e^{−st} dt

          = 1/(s − 2) − 1/(s + 3) = 5/(s² + s − 6)
The region of convergence is defined by σ > −3 (for the second, positive time, term) and
σ < 2 (for the first, negative time, term), or simply −3 < σ < 2. Interesting – its
transform is the same as X₁(s). Finally, consider the strictly anticausal function
    x₃(t) = { −e^{2t} + e^{−3t},  t < 0
            { 0,                  t ≥ 0
Its transform is given by

    X₃(s) = −∫_{−∞}^{0} e^{2t} e^{−st} dt + ∫_{−∞}^{0} e^{−3t} e^{−st} dt

          = 1/(s − 2) − 1/(s + 3) = 5/(s² + s − 6)
and its region of convergence is defined by σ < 2 (for the first term) and σ < −3 (for the
second term), or simply σ < −3. Of more interest is that its transform is also equal to the
other two: X₁(s) = X₂(s) = X₃(s).
You have just seen three different time functions produce the same two-sided
Laplace transform. Evidently, the transform alone is not sufficient to specify the time
function – you need the transform and a region of convergence. Each such region is
bounded by a pole on either side (except, of course, for the semi-infinite regions at the
left and right), and each corresponds to a different time function, or inverse transform.
The sketch below illustrates our example.
You may be wondering about region 2 in our example. It was convenient that the
intersection of the two elementary regions of convergence σ > −3 and σ < 2 was not
empty, giving −3 < σ < 2. But are there cases in which the intersection is empty? Yes,
although they are not often encountered. Consider, for example,
    x₄(t) = { e^{−3t},  t < 0
            { e^{2t},   t ≥ 0
which requires σ < −3 and σ > 2 . Here, there is no region of the s-plane in which a
single transform can represent all of x 4 (t ) . We can still handle this – we just use
separate transforms for the negative time and positive time halves. Clumsy, but there
aren’t many options. We then have a pair of transforms
    X₄₋(s) = −1/(s + 3),   σ < −3,   for the negative time half

    X₄₊(s) = 1/(s − 2),    σ > 2,    for the positive time half
Questions
5. Transform the function

    x(t) = { −3e^{2t},  t < 0
           { −2e^{t},   t ≥ 0

and sketch its pole-zero diagram with region of convergence.
6. Consider a specific region of convergence for a Laplace transform X(s). Were the
poles to its right contributed by the negative time or positive time half of x(t)? How
about the poles to its left? What can we conclude if it is an end region, e.g., no poles
to its right?
7. Under what condition on region of convergence does the time function x(t) decay to
zero on both sides of the origin, i.e., for t → ∞ and for t → −∞ ?
8. If you convolve two time functions, the Laplace transform of the result is the product
of the individual transforms. How is the ROC of the product related to the two
original ROCs?
Answers
5. Transformation gives X(s) = (s + 1)/(s² − 3s + 2), with 1 < σ < 2.
6. Transformation of the positive time half of a function gives rise to constraints that σ
be greater than some value. Therefore, the poles to the left of the ROC are
associated with positive time. Similarly, for negative time the constraints are that σ
be less than some value, and the corresponding poles are to the right of the ROC.
7. For the function to decay to zero for positive time, the corresponding poles must have
real parts less than zero. These are the ones to the left of the ROC. Similarly, for the
function to decay to zero for negative time, the poles must have real parts greater than
zero, and they are to the right of the ROC. Consequently, the ROC must include the
imaginary axis (σ = 0) for the function to decay on both sides. This is not surprising,
since the Laplace transform equals the Fourier transform on the imaginary axis, and
convergence of the Fourier transform requires the decay on both sides.
8. The ROC is the intersection of the two original ROCs. If the intersection is empty,
then the convolution itself is unbounded and undefined.
5.2 Inversion of Two-Sided Transforms
If you can invert one-sided transforms by residues, then you can invert two-sided
transforms. The definition is as before
    x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s) e^{st} ds
Consider the sketch below, where the integration line runs up the centre ROC. Closing
the curve with a giant semicircle to the left – where σ < 0 – allows the integral along that
semicircle to go to zero for positive time (consider the exponent of e st in the integrand).
Consequently, the integral is equal to the sum of residues at poles to the left of the ROC.
We saw this in inversion of the one-sided transform back in Section 3. Similar arguments
show that we can close the curve with a giant semicircle to the right for negative time.
The residue argument also holds, although the fact that the integration proceeds in the
opposite direction produces a negative sign.
We can summarize the inversion as follows:
    x(t) = {  Σ_{k∈LSP} R_k(t),   t ≥ 0
           { −Σ_{k∈RSP} R_k(t),   t < 0
where LSP is the set of poles on the left side of the ROC and RSP is the set of poles on
the right side of the ROC.
We’ll do an example. Consider the transform
    X₁(s) = 5/(s² + s − 6) = 5/[(s + 3)(s − 2)]
with region of convergence − 3 < σ < 2 . We saw this one in Section 5.1. The sum of
residues to the left of the ROC gives
    x(t) = 5 e^{st}/(s − 2) |_{s=−3} = −e^{−3t},   t ≥ 0
and the negative of the sum of residues to the right of the ROC gives
    x(t) = −5 e^{st}/(s + 3) |_{s=2} = −e^{2t},   t < 0
The result is correct, since it is consistent with the earlier example.
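The same bookkeeping can be scripted, using the pole-coefficient idea from Section 4 (a sketch assuming sympy is available):

```python
# Two-sided inversion of X(s) = 5/((s+3)(s-2)) with ROC -3 < sigma < 2:
# the pole at -3 lies left of the ROC (gives t >= 0), the pole at 2 lies
# right of it (gives t < 0, with a sign flip).
import sympy as sp

s, t = sp.symbols('s t')
X = 5 / ((s + 3) * (s - 2))
x_pos = sp.cancel((s + 3) * X) * sp.exp(s*t)   # pole coefficient at s = -3
x_neg = sp.cancel((s - 2) * X) * sp.exp(s*t)   # pole coefficient at s = 2
print(x_pos.subs(s, -3))    # -exp(-3*t), for t >= 0
print(-x_neg.subs(s, 2))    # -exp(2*t), for t < 0
```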
Questions
9. Calculate all three inverse transforms of X(s) = (s + 1)/(s² − 3s + 2).
Answers
9. The easiest way to do this one is first to calculate both residues, then form the various
combinations for the different regions of convergence. For reference, the pole-zero
diagram and ROCs are illustrated below.
The residues are
    R₁(t) = (s + 1) e^{st}/(s − 2) |_{s=1} = −2 e^{t}    and    R₂(t) = (s + 1) e^{st}/(s − 1) |_{s=2} = 3 e^{2t}
so we have for the three regions of convergence:

    x₁(t) = { 0,                                  t ≥ 0
            { −R₁(t) − R₂(t) = 2e^{t} − 3e^{2t},  t < 0

    x₂(t) = { R₁(t) = −2e^{t},    t ≥ 0
            { −R₂(t) = −3e^{2t},  t < 0

    x₃(t) = { R₁(t) + R₂(t) = −2e^{t} + 3e^{2t},  t ≥ 0
            { 0,                                  t < 0
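Both residues for Question 9 can be confirmed symbolically (a sketch assuming sympy is available):

```python
# Pole coefficients of X(s) = (s+1)/((s-1)(s-2)), evaluated at their poles.
import sympy as sp

s, t = sp.symbols('s t')
X = (s + 1) / ((s - 1) * (s - 2))
R1 = (sp.cancel((s - 1) * X) * sp.exp(s*t)).subs(s, 1)
R2 = (sp.cancel((s - 2) * X) * sp.exp(s*t)).subs(s, 2)
print(R1, R2)  # -2*exp(t) and 3*exp(2*t)
```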