Podmanik Ebook - Chapter 2
In his lifetime, the French mathematician, statistician, physicist, and astronomer, Pierre-Simon
Laplace, made many contributions to mathematics and science. Among those contributions was
the development of the Laplace transform.
The Laplace transform is an integral function that allows us to solve a variety of important and
useful differential equations without explicit use of integration. In fact, after we develop the key
concepts and Laplace transforms, we will be resorting to algebra to solve differential equations!
Let’s first provide the rather obscure definition of the Laplace transform. The Laplace transform
of a function is defined as follows:
ℒ[f(t)] = ∫₀^∞ f(t)·e^(−st) dt
That is, "the Laplace transform of a function, f(t), is the definite integral of the product of f(t) and a function e^(−st) with respect to t from t = 0 to t = ∞".
This integral introduces a new quantity, s. Its value is left unspecified, but, as we will see, restrictions on s are used to ensure that the integral converges. Before we delve into why Laplace defined this integral in the bizarre way he did, let's compute one.
Suppose that f(t) = 1, the constant function. It follows that,
ℒ[1] = −(1/s)·e^(−st) |_(t=0)^(t=∞)
This is an improper integral, so technically we should consider the limit of this expression as t → ∞. It is helpful to think about what happens to the expression when t is "arbitrarily large" and make an argument about the outcome of the limit. That is,
ℒ[1] = −(1/s)·e^(−s·∞) − (−(1/s)·e^(−s·0))
This is where the constant s becomes important; the first term, −(1/s)·e^(−s·∞), will converge to 0 as t → ∞ only if s > 0. That is, if s is, say, 5, then e^(−5·∞) is 0. However, if s is negative, such as s = −5, then e^(−(−5)·∞) = e^(5·∞) becomes infinitely large, and so the resulting integral will not converge. (Note that s = 0 fails as well, since the integrand would then be the constant 1, whose area from 0 to ∞ is infinite.)
So, stipulating that s must be positive, we have that
ℒ[1] = −(1/s)·(0) + (1/s)·(1) = 1/s
Our conclusion is that the Laplace transform of the function f(t) = 1 is the new function, 1/s. This result is now part of what we will call our "Laplace transform tool box". While we had to do integration to obtain this result, when faced with this same function again, we can jump directly to the conclusion. That is,
ℒ[1] = 1/s
At first sight, it may appear that we have taken something “simple” and made it more
“complicated”, but we will soon see this has more value than it lets on.
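Although the derivation above is purely symbolic, the result ℒ[1] = 1/s is easy to sanity-check numerically. The sketch below is a supplementary illustration, not part of the text's development; the helper name `laplace_numeric` and the truncation point T = 40 are arbitrary choices of convenience (for s > 0, the tail of the integral beyond T is negligible).

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    """Approximate the truncated Laplace integral of f from 0 to T,
    i.e. the sum of f(t)*exp(-s*t) weights via the composite trapezoid rule."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

# L[1] should be 1/s; compare at a couple of sample values of s
for s in (2.0, 5.0):
    print(s, laplace_numeric(lambda t: 1.0, s), 1.0 / s)
```

The two printed columns agree to several decimal places, which is exactly the convergence argument made above: the e^(−st) factor tames the constant function.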
Why Laplace Liked This Integral
Why would Laplace have chosen to evaluate an integral that is a product of the function of interest and e^(−st)? From what we have seen so far, the use of this factor ensured that both terms of the definite integral had factors that converged nicely, either to 0 or to 1 (i.e. e^(−s·∞) = 0 and e^(−s·0) = 1).
Since we are certainly not limited to applying integration to "nice" functions, such as constants, Laplace had to look out for the possibility that some "quickly growing" function may result in a definite integral that diverges to positive or negative infinity. Because e^(−st) is an exponential function, we can reason that there are few functions of interest that grow more rapidly than this factor decays (provided s > 0). So, even though this factor may not force all integrals to converge nicely, it does so for the bulk of functions we would be interested in considering. So, the short answer for this choice of a factor is for the purpose of integral convergence!
From this ensues the reasoning for the limits of integration. The lower bound, t = 0, is a fairly obvious choice. When t = 0, e^(−st) = 1, and so the result is neat. The upper bound, t = ∞, does about the same thing, except it helps a term converge to 0 (again, provided that s has the correct sign).
So, with this reasoning, Laplace found that ∫₀^∞ f(t)·e^(−st) dt will be a converging integral for most f(t) functions we may want to consider in practice. (NOTE: it is easy to find a function that does not fit in the window of convergence, namely, f(t) = t^t; as t gets arbitrarily large, this function eventually grows more rapidly than e^(st) for any fixed s, so the integral diverges.)
While there is certainly more to be said about the development of this integral from a theoretical
standpoint, we need not dabble in those details.
But Why Is this Integral Needed?
Have faith, as we will shortly learn that this integral is key to solving differential equations algebraically. As a bit of foreshadowing, the Laplace transform has a friendly companion, the inverse Laplace transform. Thus, we will use the Laplace transform to move a differential equation into a new domain, and we will then use the inverse Laplace transform to bring the solution back into the correct domain.
Development of Some Common Laplace Transforms
Suppose f(t) = 8, another constant function. Then,
ℒ[8] = ∫₀^∞ 8e^(−st) dt
Without going through the integration again, we know that constant factors can be factored out of the integral. That is,
∫₀^∞ 8e^(−st) dt = 8∫₀^∞ 1·e^(−st) dt
We already know that ∫₀^∞ 1·e^(−st) dt = 1/s, and so it follows that,
ℒ[8] = 8·(1/s) = 8/s
How about non-constant functions for f(t)? As one can imagine, the integrals can get a bit more involved. Another relatively simple case would involve f(t) = e^(3t), for example. Its Laplace transform would be:
ℒ[e^(3t)] = ∫₀^∞ e^(3t)·e^(−st) dt
Combining the exponents, the integrand is e^((3−s)t), and integrating gives
ℒ[e^(3t)] = (1/(3 − s))·e^((3−s)t) |_(t=0)^(t=∞)
Now we need to make an argument about s, the constant. Most importantly, for what value(s) of s will the first term, (1/(3 − s))·e^((3−s)·∞), converge? We know that as long as the coefficient of ∞ in the exponent is negative, the term will converge to 0. That is, when will 3 − s be negative?
Mathematically,
3 − s < 0
3 < s
So, as long as s > 3, 3 − s will be negative, and so the first term will converge to 0. We impose this constraint with no consequence to the second term,
ℒ[e^(3t)] = (1/(3 − s))·(0) − (1/(3 − s))·(1)
= −1/(3 − s)
, or,
= 1/(s − 3)
ℒ[e^(3t)] = 1/(s − 3)
Modern Differential Equations © Milos Podmanik Chapter 2, Page 6
Again, our question may be, what if we consider any function of the form 𝑓(𝑡) = 𝑒 𝑎𝑡 , where 𝑎 is
some real constant? If we step through the integration process above for any value of 𝑎, we will
find that the only difference is that the expression in the denominator will be of the form 𝑠 − 𝑎
(try it!). Certainly, we have to impose the constraint that 𝑠 > 𝑎, but this is of no real
consequence.
So, more generally, we can conclude that,
ℒ[e^(at)] = 1/(s − a)
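The general result ℒ[e^(at)] = 1/(s − a), valid for s > a, can also be checked numerically. The sketch below is an illustrative check, not part of the text; the quadrature helper and the sample values a = 3 and s ∈ {4, 5} are choices of convenience.

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    # Composite trapezoid approximation of the truncated Laplace
    # integral of f from 0 to T; valid when the integrand decays by T.
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

a = 3.0
for s in (4.0, 5.0):   # both satisfy the constraint s > a
    print(s, laplace_numeric(lambda t: math.exp(a * t), s), 1 / (s - a))
```

Trying a value with s ≤ a instead would make the integrand blow up, mirroring the convergence constraint derived above.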
Next, suppose that f(t) is a sum or difference of two functions, f(t) = g(t) ± h(t). Then,
ℒ[f(t)] = ∫₀^∞ (g(t) ± h(t))·e^(−st) dt
Since we know that the integral of a sum or difference of functions is the sum/difference of the integrals, we have that,
ℒ[f(t)] = ∫₀^∞ g(t)·e^(−st) dt ± ∫₀^∞ h(t)·e^(−st) dt
We recognize that, by definition, ∫₀^∞ g(t)·e^(−st) dt is ℒ[g(t)] and that ∫₀^∞ h(t)·e^(−st) dt is ℒ[h(t)]. Thus, we have that the Laplace transform of a sum or difference of functions is the sum or difference of the Laplace transforms of those functions. In other words,
ℒ[g(t) ± h(t)] = ℒ[g(t)] ± ℒ[h(t)]
The beauty of this process is that we really are not learning any new properties, per se. All of the
properties that are at play here are the same ones that are at play with integrals.
Now here is the big one: suppose we have f(t) = dy/dt, that is, the function of interest is a derivative (first derivative). It may appear futile to even try this Laplace transform, but it is completely feasible.
ℒ[dy/dt] = ∫₀^∞ (dy/dt)·e^(−st) dt
Since dy/dt is an unknown function, it may seem challenging to compute this integral. However, we can use integration by parts to decompose it further.
Let u = e^(−st) and dv = (dy/dt) dt. Then,
du/dt = −s·e^(−st) ⇒ du = −s·e^(−st) dt
v = y
∫₀^∞ u dv = u·v |_(t=0)^(t=∞) − ∫₀^∞ v du
= y·e^(−st) |_(t=0)^(t=∞) − ∫₀^∞ y·(−s·e^(−st)) dt
Prior to evaluating at the limits of integration, we clean up the second term. Since s is a constant, it can be passed outside the integral, giving,
∫₀^∞ (dy/dt)·e^(−st) dt = y·e^(−st) |_(t=0)^(t=∞) + s∫₀^∞ y·e^(−st) dt
Since y is a function, we cannot explicitly compute the second integral (that is, y = y(t)). However, from our definition of a Laplace transform,
∫₀^∞ y·e^(−st) dt = ℒ[y]
And, so we have,
∫₀^∞ (dy/dt)·e^(−st) dt = y·e^(−st) |_(t=0)^(t=∞) + sℒ[y]
And, as long as s > 0 and y(t) grows more slowly than e^(st) (we will assume so), then
∫₀^∞ (dy/dt)·e^(−st) dt = y(∞)·(0) − y(0)·(1) + sℒ[y]
ℒ[dy/dt] = sℒ[y] − y(0)
That is, the Laplace transform of a derivative term is 𝑠 multiplied by the Laplace transform of
𝑦(𝑡), minus 𝑦(𝑡) evaluated at 𝑡 = 0 (that is, the 𝑦 value of the initial condition!).
This may seem like an elusive definition in that it adds more symbols into the mix, but, for now,
let’s keep in mind that we have deconstructed a derivative into an expression that involves no
derivatives. In solving differential equations, this is precisely what we want!
We will soon see how this all comes in handy!
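The derivative rule ℒ[dy/dt] = sℒ[y] − y(0) can be spot-checked numerically for a concrete function. The sketch below is an illustration of my own (the choice y(t) = e^(2t) and s = 5 are assumptions of convenience): both sides of the rule are computed by quadrature and compared.

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    # Composite trapezoid approximation of the truncated Laplace integral.
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

s = 5.0
y = lambda t: math.exp(2 * t)        # y(t) = e^(2t), so y(0) = 1
dy = lambda t: 2 * math.exp(2 * t)   # dy/dt = 2e^(2t)

lhs = laplace_numeric(dy, s)               # L[dy/dt]
rhs = s * laplace_numeric(y, s) - y(0.0)   # s*L[y] - y(0)
print(lhs, rhs)
```

Both numbers come out near 2/3, since ℒ[2e^(2t)] = 2/(s − 2) and s·(1/(s − 2)) − 1 = 2/(s − 2) agree, exactly as the rule promises.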
That is, if the Laplace transform of e^(at) is 1/(s − a), then it should seem natural that the inverse of 1/(s − a) is precisely e^(at). Mathematically,
ℒ⁻¹[ℒ[e^(at)]] = ℒ⁻¹[1/(s − a)]
e^(at) = ℒ⁻¹[1/(s − a)]
Find each of the following inverse Laplace transforms:
1. ℒ⁻¹[1/(s − 2.4)]
2. ℒ⁻¹[−π/s]
3. ℒ⁻¹[4/s − 2/(s + 4) + 5/(s − 0.2)]
SOLUTION:
1. The function, f(t), that has 1/(s − 2.4) as its Laplace transform is e^(2.4t). That is, since,
ℒ[e^(at)] = 1/(s − a)
, we know that,
ℒ⁻¹[1/(s − 2.4)] = e^(2.4t)
2. The function, f(t), that has −π/s = −π·(1/s) as its Laplace transform is −π·1 = −π. We know this because,
ℒ[c] = c/s
, so,
ℒ⁻¹[−π/s] = −π
3. The function, f(t), that has 4/s − 2/(s + 4) + 5/(s − 0.2) as its Laplace transform requires a bit of manipulation.
Just as the Laplace transform of a sum or difference of functions is the sum or difference of the transforms, the inverse Laplace transform can be taken for each individual term. We have,
ℒ⁻¹[4/s − 2/(s + 4) + 5/(s − 0.2)] = ℒ⁻¹[4/s] − ℒ⁻¹[2/(s + 4)] + ℒ⁻¹[5/(s − 0.2)]
We can factor out constants in the same way we can with Laplace transforms to get:
= 4·ℒ⁻¹[1/s] − 2·ℒ⁻¹[1/(s + 4)] + 5·ℒ⁻¹[1/(s − 0.2)]
= 4·1 − 2e^(−4t) + 5e^(0.2t)
ℒ⁻¹[4/s − 2/(s + 4) + 5/(s − 0.2)] = 4 − 2e^(−4t) + 5e^(0.2t)
For example,
ℒ⁻¹[1/s³] = ℒ⁻¹[(2/2)·(1/s³)]
= (1/2)·ℒ⁻¹[2!/s³]
= (1/2)t²
This trick is more of a technique that we will be referencing on a regular basis when our
transforms are off by some constant factor.
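The multiply-by-one trick can be checked in the forward direction: if ℒ⁻¹[1/s³] = t²/2, then ℒ[t²/2] must equal 1/s³. The quick numerical check below is an illustration of my own (the quadrature helper and the sample value s = 2 are assumptions of convenience).

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    # Composite trapezoid approximation of the truncated Laplace integral.
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

# L[t^2 / 2] should equal 1/s^3; try s = 2, where 1/s^3 = 0.125
s = 2.0
print(laplace_numeric(lambda t: t * t / 2, s), 1 / s ** 3)
```

The agreement confirms that bookkeeping the constant 1/2 correctly is all the trick amounts to.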
How the Laplace Transform and Inverse Laplace Transform Solve Differential Equations
So far, we know how to find Laplace transforms of a few select functions. We also know that
sums/differences of Laplace and inverse Laplace transforms can be taken term-by-term, and that
the constant factors can be passed in-and-out of both. Let’s summarize the Laplace transforms
we will be working with in the following examples:
ℒ[c] = c/s
ℒ[e^(at)] = 1/(s − a)
ℒ[dy/dt] = sℒ[y] − y(0)
ℒ[t^n] = n!/s^(n+1)
Consider the differential equation
dy/dt = 0.2y
, with y(0) = 3. Taking the Laplace transform of both sides, ℒ[dy/dt] = ℒ[0.2y]. We can evaluate the left side, but the most we can do on the right side is to factor out 0.2 (remember y is a function of t, and so we cannot expressly compute its Laplace transform):
sℒ[y] − y(0) = 0.2ℒ[y]
We are given the initial condition y(0) = 3, so we can make the substitution:
sℒ[y] − 3 = 0.2ℒ[y]
Collecting the ℒ[y] terms on the left and solving,
(s − 0.2)ℒ[y] = 3
ℒ[y] = 3/(s − 0.2)
Now that we have ℒ[y] isolated, we proceed to take the inverse Laplace transform of both sides:
ℒ⁻¹[ℒ[y]] = ℒ⁻¹[3/(s − 0.2)]
We recognize that the inverse on the right-hand side is an exponential function of the form e^(0.2t):
y = 3ℒ⁻¹[1/(s − 0.2)]
y = 3e^(0.2t)
We are done! We have effectively found the solution to the differential equation dy/dt = 0.2y with y(0) = 3. We can check our work by seeing that
y = 3e^(0.2t) ⇒ dy/dt = 0.6e^(0.2t)
dy/dt =? 0.2y
0.6e^(0.2t) =? 0.2·(3e^(0.2t))
0.6e^(0.2t) = 0.6e^(0.2t)
y(0) = 3e^(0.2·0) = 3
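Beyond the symbolic check, the solution can also be verified against a direct numerical integration of the differential equation. The sketch below is illustrative only (forward Euler with an arbitrarily chosen step size): stepping dy/dt = 0.2y forward from y(0) = 3 should land very close to 3e^(0.2t) at t = 5.

```python
import math

# Forward-Euler integration of dy/dt = 0.2y with y(0) = 3
h, y = 1e-4, 3.0
for _ in range(50_000):   # 50,000 steps of size 1e-4 reaches t = 5
    y += h * (0.2 * y)

print(y, 3 * math.exp(0.2 * 5.0))   # numerical vs. closed-form value
```

The two values agree to about four decimal places, with the small gap attributable to the first-order accuracy of Euler's method.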
A simple model from electronics for the rate of change of voltage across a capacitor in an RC-circuit over time is given by:
dv_c/dt = (V(t) − v_c)/(RC)
Often, this simple model assumes that the resistance, R, and the capacitance, C, are constant, but that the source voltage (a battery or other source), V(t), can vary over time.
Let’s assume that the voltage source provides a constant 3V, that the resistance is 𝑅 = 2, and
that the capacitance is 𝐶 = 0.75. We will initially suppose that there is a voltage across the
capacitor of 1V, that is, 𝑣𝑐 (0) = 1.
Our model is thus,
dv_c/dt = (3 − v_c)/1.5
, with v_c(0) = 1.
Qualitatively, we can see that when v_c is less than 3V, dv_c/dt will be positive, and so the voltage across the capacitor will be increasing. As v_c gets closer to 3, the rate of change of voltage across the capacitor will shrink toward zero. Similarly, when v_c is larger than 3V, we expect a decrease in voltage across the capacitor until this voltage gets closer to the source voltage.
We will use Laplace transforms to solve this equation. We first make it slightly more presentable:
dv_c/dt = 3/1.5 − v_c/1.5 = 2 − (2/3)v_c
Taking the Laplace transform of both sides,
ℒ[dv_c/dt] = ℒ[2] − (2/3)ℒ[v_c]
We also need to note that the dependent variable is being called v_c instead of y. This is only a notation difference, however, and so we treat v_c in the same way we would y:
sℒ[v_c] − v_c(0) = 2/s − (2/3)ℒ[v_c]
Substituting v_c(0) = 1 and collecting the ℒ[v_c] terms,
sℒ[v_c] + (2/3)ℒ[v_c] = 2/s + 1
ℒ[v_c]·(s + 2/3) = 2/s + 1
ℒ[v_c] = (2/s)/(s + 2/3) + 1/(s + 2/3)
ℒ[v_c] = 2/(s(s + 2/3)) + 1/(s + 2/3)
We are nearly prepared to take the inverse Laplace transform of both sides. A key to knowing whether more work needs to be done is whether all terms on the right are recognized as Laplace transforms of some functions.
CAUTION: We have a slight problem: the first term, 2/(s(s + 2/3)), is not the Laplace transform of a known function. The issue is that we have a product of two factors involving s. It is tempting to think of this as (2/s)·(1/(s + 2/3)), but, just as the integral of a product is not the product of the integrals, the inverse transform of a product is not the product of the inverse transforms!
The good news is that we have an algebraic technique available to us called partial fraction decomposition. The idea is that a product of two linear factors in a denominator can be split up into a sum/difference of fractions of the form:
2/(s(s + 2/3)) = a/s + b/(s + 2/3)
, for some constants a and b. It is likely more desirable to use technology to find these constants, but we will demonstrate the algebra of doing so for the purpose of highlighting the tedious algebra involved (note that this is just one step of the solution process!).
To satisfy the above equation, we need to solve for a and b. The traditional approach is to get rid of denominators. We can do this by multiplying both sides by s(s + 2/3) to cancel them out:
s(s + 2/3)·2/(s(s + 2/3)) = s(s + 2/3)·(a/s + b/(s + 2/3))
2 = a(s + 2/3) + b·s
It helps to collect the terms involving s and the constant terms separately:
2 = as + (2/3)a + bs
2 = (a + b)s + (2/3)a
Logically, we need the term involving s to zero out (we do not have an s-term on the left!). We also need two-thirds of a to be exactly equal to 2. This leads to the system of linear equations below:
a + b = 0
(2/3)a = 2
Solving,
(2/3)a = 2 → a = 3
a + b = 0 → 3 + b = 0 → b = −3
Thus, we have that a = 3, b = −3. We can make these substitutions back into the original partial fraction equation:
2/(s(s + 2/3)) = a/s + b/(s + 2/3)
2/(s(s + 2/3)) = 3/s + (−3)/(s + 2/3)
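A partial fraction decomposition is easy to spot-check by evaluating both sides at a few sample values of s. The quick check below is an illustration of my own, with arbitrarily chosen sample points:

```python
# Check that 2/(s(s + 2/3)) equals 3/s - 3/(s + 2/3) at several sample s values
for s in (0.5, 1.0, 2.0, 10.0):
    lhs = 2 / (s * (s + 2 / 3))
    rhs = 3 / s - 3 / (s + 2 / 3)
    print(s, lhs, rhs)
```

Matching values at more sample points than there are unknown constants is strong evidence the decomposition is correct, which makes this a useful habit whenever the algebra is done by hand.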
Finally, we can replace the complicated fraction in our transformed differential equation:
ℒ[v_c] = 2/(s(s + 2/3)) + 1/(s + 2/3)
ℒ[v_c] = 3/s + (−3)/(s + 2/3) + 1/(s + 2/3)
And we now recognize the functions that these Laplace transforms are linked back to through the inversion process:
ℒ⁻¹[ℒ[v_c]] = ℒ⁻¹[3/s + (−3)/(s + 2/3) + 1/(s + 2/3)]
v_c = 3 − 3e^(−(2/3)t) + e^(−(2/3)t)
v_c = 3 − 2e^(−(2/3)t)
We now have our solution. To verify our qualitative analysis of behavior of the voltage across
the capacitor, we look at a graph:
As predicted, the capacitor begins at 1V and charges until its voltage is consistent with the
voltage of the voltage source, namely, 3V.
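The closed-form solution can also be verified against a direct numerical integration of the original differential equation. The sketch below is illustrative (forward Euler with an arbitrarily chosen step size); it should land very close to 3 − 2e^(−(2/3)t) at t = 5.

```python
import math

# Forward-Euler integration of dv/dt = (3 - v)/1.5 with v(0) = 1
h, v = 1e-4, 1.0
for _ in range(50_000):   # 50,000 steps of size 1e-4 reaches t = 5
    v += h * (3 - v) / 1.5

closed_form = 3 - 2 * math.exp(-(2.0 / 3.0) * 5.0)
print(v, closed_form)
```

By t = 5 both values sit near 2.93V, consistent with the qualitative prediction that the capacitor charges toward the 3V source.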
Example 2.1.2 Consider an RC-circuit where the voltage source varies over time. We assume the voltage starts at 2V and that it is increased constantly by 0.5V each time unit. Assuming R = 2 and C = 0.75 as before (so that RC = 1.5), find the voltage across the capacitor.
SOLUTION:
We know that V(t) = 2 + 0.5t, given the linear change in the voltage of the voltage source. Our model is:
dv_c/dt = (2 + 0.5t − v_c)/1.5
, with v_c(0) = 1.
Distributing the 1/1.5 and taking the Laplace transform of both sides,
ℒ[dv_c/dt] = ℒ[4/3 + (1/3)t − (2/3)v_c]
sℒ[v_c] − v_c(0) = (4/3)ℒ[1] + (1/3)ℒ[t] − (2/3)ℒ[v_c]
sℒ[v_c] − 1 = (4/3)·(1/s) + (1/3)·(1!/s^(1+1)) − (2/3)ℒ[v_c]
sℒ[v_c] + (2/3)ℒ[v_c] = (4/3)·(1/s) + (1/3)·(1/s²) + 1
ℒ[v_c]·(s + 2/3) = (4/3)·(1/s) + (1/3)·(1/s²) + 1
ℒ[v_c] = (4/3)·1/(s(s + 2/3)) + (1/3)·1/(s²(s + 2/3)) + 1/(s + 2/3)
Prior to inverting both sides, we see that two terms are not directly recognizable as the transforms of known functions. We need to perform a partial fraction decomposition for the first two terms. Using technology, we have that,
(4/3)/(s(s + 2/3)) = 2/s − 2/(s + 2/3)
(1/3)/(s²(s + 2/3)) = 1/(2s²) + (3/4)·1/(s + 2/3) − 3/(4s)
NOTE: performing these partial fraction decompositions manually would be very time-consuming!
Substituting,
ℒ[v_c] = (4/3)·1/(s(s + 2/3)) + (1/3)·1/(s²(s + 2/3)) + 1/(s + 2/3)
ℒ[v_c] = (2/s − 2/(s + 2/3)) + (1/(2s²) + (3/4)·1/(s + 2/3) − 3/(4s)) + 1/(s + 2/3)
We can either perform the inverse Laplace transform process first and combine like terms later, or combine like terms first. The latter makes more sense so that we have fewer inverse Laplace transforms to take. Simplifying the equation gives:
ℒ[v_c] = (1/2)·(1/s²) − (1/4)·1/(s + 2/3) + (5/4)·(1/s)
v_c = (1/2)ℒ⁻¹[1/s²] − (1/4)ℒ⁻¹[1/(s + 2/3)] + (5/4)ℒ⁻¹[1/s]
v_c = (1/2)t − (1/4)e^(−(2/3)t) + 5/4
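As before, the closed-form answer can be checked by integrating the differential equation directly. The sketch below is illustrative only (forward Euler, arbitrary step size, comparison time t = 3 chosen for convenience); note that the source term now depends on t.

```python
import math

# Forward-Euler integration of dv/dt = (2 + 0.5t - v)/1.5 with v(0) = 1
h, v = 1e-4, 1.0
for k in range(30_000):   # 30,000 steps of size 1e-4 reaches t = 3
    t = k * h             # left-endpoint time for this step
    v += h * (2 + 0.5 * t - v) / 1.5

closed_form = 0.5 * 3 - 0.25 * math.exp(-(2.0 / 3.0) * 3) + 1.25
print(v, closed_form)
```

The agreement also quietly confirms the initial condition: at t = 0 the formula gives −1/4 + 5/4 = 1, matching v_c(0) = 1.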
More interestingly, we can compare the source voltage to the voltage across the capacitor to verify this intuition:
Final Remarks
The Laplace transform is a nice way to develop a tool box of “identities” that can be used to
algebraically solve differential equations. On the flipside, the algebra quickly becomes tedious. It
should be noted that Laplace transforms are just one way to solve differential equations. Other
techniques, such as the method of integrating factors, could also have been used here.
The beauty of Laplace transforms, as we will see, comes into play when we begin considering differential equations with piecewise rates of change. Piecewise differential equations would otherwise need to be solved by integrating each piece separately; Laplace transforms allow us to treat the differential equation as one function. Additionally, we can introduce "shocks" into our systems (such as those of physical impact, earthquakes, etc.) and handle their role(s) through the use of Laplace transforms.
Over time, variables in a physical system often change. The majority of our mathematical models of real-world phenomena have assumed constant conditions. For example, we have assumed that population sizes will only be a function of the current population size. We have assumed that the voltage source in an RC-circuit, although it may introduce changes over time, does so in exactly the same way for all time.
What if, however, we introduce migration into our population model for only one year at some
point in time? What if we “flip” a switch to turn off a voltage source at some point in time?
These questions are not only interesting, but are important to consider in modeling more
realistically.
Oliver Heaviside, an English electrical engineer, mathematician, and physicist from the early 1900s, offered a solution, which is now coined the Heaviside function. The Heaviside function is quite basic: the output of this function is either 0 or 1. In its most basic form, the output of the function will be 0 for all times t < a. Once t reaches the desired value, a, the function will instead output a 1. Mathematically, this is a piecewise function,
u_a(t) = { 0, t < a
           1, t ≥ a
It is important to note that u_5(t) takes on a value of 1 for all t ≥ 5. That is, unless we do something to switch the Heaviside function off, it will remain on as t → ∞.
While this function seems almost too simple to be useful, we should observe that this function is rarely used by itself. For example, suppose we would like to turn on a voltage source of 9V at time t = 5 in an RC-circuit. Since the Heaviside function u_5(t) will switch from an output of 0 to 1 at time t = 5, its use is exactly what we need. We can also treat the Heaviside function as a multiplier. We do this by taking 9·u_5(t). As a piecewise function this would be:
9·u_5(t) = { 9·0, t < 5
             9·1, t ≥ 5
Graphically,
The same multiplier idea works with non-constant functions. For example, multiplying the decaying function 9e^(−.1(t−5)) by u_5(t) switches it on at time t = 5:
9e^(−.1(t−5))·u_5(t) = { 9e^(−.1(t−5))·0, t < 5
                         9e^(−.1(t−5))·1, t ≥ 5
= { 0, t < 5
    9e^(−.1(t−5)), t ≥ 5
What is the Laplace transform of the Heaviside function? By definition,
ℒ[u_a(t)] = ∫₀^∞ u_a(t)·e^(−st) dt
The Heaviside function is quite simple to work with: from t = 0 to t = a, the function is valued at 0, and from a ≤ t < ∞, the function is valued at 1. Splitting up this integral gives:
ℒ[u_a(t)] = ∫₀^a 0·e^(−st) dt + ∫_a^∞ 1·e^(−st) dt
= 0 + (−1/s)·e^(−st) |_(t=a)^(t=∞)
= 0 + (1/s)·e^(−as)
ℒ[u_a(t)] = (1/s)·e^(−as)
This is quite handy! The Laplace transform of a discontinuous function is, here, continuous!
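This transform, too, is easy to sanity-check numerically: since u_a(t) vanishes below t = a, only the integral from a onward contributes. The sketch below is an illustration of my own (helper name, truncation point T = 40, and the sample values a = 1, s = 2 are choices of convenience).

```python
import math

def laplace_heaviside(a, s, T=40.0, n=100_000):
    # Trapezoid approximation of the integral of e^(-st) from a to T,
    # which is the only nonzero part of L[u_a(t)].
    h = (T - a) / n
    total = 0.5 * (math.exp(-s * a) + math.exp(-s * T))
    for k in range(1, n):
        total += math.exp(-s * (a + k * h))
    return h * total

a, s = 1.0, 2.0
print(laplace_heaviside(a, s), math.exp(-a * s) / s)
```

Both values agree closely with e^(−2)/2, matching the formula ℒ[u_a(t)] = e^(−as)/s.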
NOTE: Oftentimes, sources will substitute F(s) = ℒ[f(t)], making the right-hand side appear as e^(−as)·F(s). We will prefer the notation above for ease of use.
For a Heaviside function multiplied by a shifted function, it turns out that
ℒ[u_a(t)·f(t − a)] = e^(−as)·ℒ[f(t)]
In words, this states that the Laplace transform of a product involving a Heaviside function and some function that is shifted right the same number of units at which the Heaviside "turns on" is equal to the product of e^(−as) and the Laplace transform of the unshifted version of the function. At first, this can be quite overwhelming.
Returning to our example, we have
f(t) = 9e^(−.1t)
f(t − 5) = 9e^(−.1(t−5))
So, by identification,
ℒ[u_5(t)·f(t − 5)] = e^(−5s)·ℒ[f(t)]
We know that f(t − 5) = 9e^(−.1(t−5)) and that f(t), the unshifted function, is f(t) = 9e^(−.1t). Thus,
ℒ[u_5(t)·9e^(−.1(t−5))] = e^(−5s)·ℒ[9e^(−.1t)] = e^(−5s)·(9/(s + 0.1))
Suppose we wish to find ℒ[u_2(t)·(t − 2)³].
SOLUTION:
We have a product of the form u_a(t)·f(t − a), where a = 2. We identify that f(t − 2) = (t − 2)³. In order to determine f(t), we must, quite literally, replace t − 2 with t, giving us,
f(t) = t³
We conclude that,
ℒ[u_2(t)·(t − 2)³] = e^(−2s)·ℒ[t³] = e^(−2s)·(6/s⁴)
Just as importantly, the relationship can be run in reverse:
ℒ⁻¹[e^(−as)·ℒ[f(t)]] = u_a(t)·f(t − a)
In words, the inverse Laplace transform of a product involving e^(−as) and the Laplace transform of some function, f(t), is the product of the Heaviside function u_a(t) and the function f(t − a), which is the function shifted a units to the right. It is crucial that we pay attention to the notation here. The value of a is uniformly the same. Thus, knowing, for example, that we have e^(−2s) helps us immediately determine that a = 2. Furthermore, knowing what f(t) is will help us tremendously in writing f(t − 2).
Suppose, for example, that we wish to find
ℒ⁻¹[e^(−4s)·1/(s − 3)]
By identification, we see that a = 4. We also know that 1/(s − 3) is the Laplace transform of some function f(t). From experience, we know that f(t) = e^(3t). We have everything we need to compute the inverse:
ℒ⁻¹[e^(−4s)·1/(s − 3)] = ℒ⁻¹[e^(−4s)·ℒ[e^(3t)]] = u_4(t)·e^(3(t−4))
Example 2.2.2 Find ℒ⁻¹[e^(−1.2s)·1/s⁴].
SOLUTION:
In this case, we have that a = 1.2. The second factor, 1/s⁴, is nearly the Laplace transform of t³, except that ℒ[t³] = 3!/s^(3+1) = 6/s⁴. We are off by a factor of 3! = 6, so we multiply by "1", making this:
(1/6)·ℒ⁻¹[e^(−1.2s)·(6/s⁴)]
ℒ⁻¹[e^(−1.2s)·1/s⁴] = (1/6)·u_1.2(t)·(t − 1.2)³
Now that we have a better sense of working with Laplace transforms involving these specific products of Heaviside functions, we are well-equipped to proceed with some problem-solving.
Example 2.2.3 Consider the basic differential equation dy/dt = u_3(t) − 1, y(0) = 0. Solve the differential equation using Laplace transforms.
ℒ[dy/dt] = ℒ[u_3(t)] − ℒ[1]
sℒ[y] − y(0) = (1/s)·e^(−3s) − 1/s
sℒ[y] = (1/s)·e^(−3s) − 1/s
ℒ[y] = (1/s²)·e^(−3s) − 1/s²
Taking inverse transforms term-by-term,
y = ℒ⁻¹[e^(−3s)·(1/s²)] − ℒ⁻¹[1/s²]
We recognize the second term as the transform of t¹. The first term requires the use of the second transform introduced, ℒ⁻¹[e^(−as)·ℒ[f(t)]] = u_a(t)·f(t − a).
We see that a = 3 and that 1/s² is the Laplace transform of t¹. Therefore f(t) = t, and so it follows that f(t − 3) = t − 3.
The inverse of e^(−3s)·(1/s²) is u_3(t)·(t − 3). Thus,
y = u_3(t)·(t − 3) − t
Written as a piecewise function,
y(t) = { −t, t < 3
         (t − 3) − t, t ≥ 3
= { −t, t < 3
    −3, t ≥ 3
Graphically, we see the following:
Intuitively, dy/dt = u_3(t) − 1 produces a constant rate of change of −1 when t < 3 and then a rate of change of 0 when t ≥ 3, which fully explains the behavior of the resulting solution graph.
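The piecewise answer lends itself to a direct numerical check (an illustrative sketch; the Euler step size is an arbitrary choice): stepping dy/dt = u_3(t) − 1 forward from y(0) = 0 should reproduce y(2) ≈ −2 and y(6) ≈ −3.

```python
# Forward-Euler integration of dy/dt = u_3(t) - 1 with y(0) = 0
h, y = 1e-3, 0.0
y_at_2 = None
for k in range(6_000):           # 6,000 steps of size 1e-3 reaches t = 6
    t = k * h
    y += h * ((1.0 if t >= 3 else 0.0) - 1.0)
    if k == 1_999:               # just finished the step ending at t = 2
        y_at_2 = y

print(y_at_2, y)   # y(2) and y(6)
```

Because the right-hand side is piecewise constant, Euler's method is essentially exact here, so the numbers land almost precisely on −2 and −3.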
Suppose now that an RC-circuit's 9V source is switched off at time t = 5, so that the model is dv_c/dt = 9(1 − u_5(t)) − v_c, with v_c(0) = 0.
SOLUTION:
The factor 1 − u_5(t) acts as an "off switch":
1 − u_5(t) = { 1 − 0, t < 5
               1 − 1, t ≥ 5
= { 1, t < 5
    0, t ≥ 5
Taking the Laplace transform of both sides,
ℒ[dv_c/dt] = ℒ[9] − 9ℒ[u_5(t)] − ℒ[v_c]
sℒ[v_c] − v_c(0) = 9/s − (9/s)·e^(−5s) − ℒ[v_c]
sℒ[v_c] + ℒ[v_c] = 9/s − (9/s)·e^(−5s)
ℒ[v_c] = 9/(s(s + 1)) − e^(−5s)·9/(s(s + 1))
We notice that neither term is immediately invertible. In particular, when we notice our denominator contains a product of two polynomials involving s, the appropriate course of action is to attempt a partial fraction decomposition.
We find that,
9/(s(s + 1)) = 9/s − 9/(s + 1)
Substituting,
ℒ[v_c] = 9/s − 9/(s + 1) − 9e^(−5s)·(1/s) + 9e^(−5s)·(1/(s + 1))
v_c = 9ℒ⁻¹[1/s] − 9ℒ⁻¹[1/(s + 1)] − 9ℒ⁻¹[e^(−5s)·(1/s)] + 9ℒ⁻¹[e^(−5s)·(1/(s + 1))]
We recognize the first two terms to be 9·1 = 9 and −9e^(−t), respectively. The latter two terms require the inverse transform of the product of e^(−as) and another function, which leads us to using ℒ⁻¹[e^(−as)·ℒ[f(t)]] = u_a(t)·f(t − a).
For the third term, a = 5 and the function whose Laplace transform is 1/s is f(t) = 1, so ℒ⁻¹[e^(−5s)·(1/s)] = u_5(t)·1 = u_5(t).
For the fourth term, we also have that a = 5, but the function whose Laplace transform is 1/(s + 1) is f(t) = e^(−t) and, so, f(t − 5) = e^(−(t−5)).
We conclude that ℒ⁻¹[e^(−5s)·(1/(s + 1))] = u_5(t)·e^(−(t−5)), and so,
v_c = 9 − 9e^(−t) − 9u_5(t) + 9u_5(t)·e^(−(t−5))
To assess the behavior of the voltage across the capacitor over time, we first split the function into its respective pieces:
v_c(t) = { 9 − 9e^(−t), t < 5
           9 − 9e^(−t) − 9 + 9e^(−(t−5)), t ≥ 5
= { 9 − 9e^(−t), t < 5
    9e^(−(t−5)) − 9e^(−t), t ≥ 5
Many natural phenomena take on the form of a second-order differential equation, namely, one that contains, as its highest-ordered derivative, the second derivative. From the shock assembly in our vehicles, to vibrations in bridges with traffic, to the effects of building-sway due to earthquakes, to the motion of an airplane, second-order differential equations help us to mathematically explain many different natural behaviors in physical systems. We will soon see why, but the strange title of this section deserves a little bit of motivation first.
Rather than create an entirely new chapter for second-order equations, we will aim to study them here, in one section, without the use of Laplace transforms, only to circle back to thinking about how to solve such differential equations with Laplace transforms. Each method offers some very powerful advantages, but a bit of intuition can, in fact, go a long way in understanding second-order system solutions.
The General Model of Interest
We will soon see that these models are more natural than the following definition shows, but for recognition purposes, we will define a general second-order linear differential equation to be one of the form
d²y/dt² + p(t)·(dy/dt) + q(t)·y = g(t)
, where p(t), q(t), and g(t) are functions of t only. In this section (and for the bulk of our studies), we will only consider cases where p(t) and q(t) are constant. We call these types of equations second-order constant-coefficient linear differential equations.
In this basic system, we ignore any effects of friction in the model (these will be considered
later). Imagine a spring attached to a fixed point and then, on the other end, to a mass of size 𝑚
kg. From our intuition about springs, we know that springs do not like to be stretched or
compressed. They prefer to be at rest, or in equilibrium.
When the spring is stretched beyond its rest point, the force in the spring acts in the opposite
direction, and it wants to compress itself back to rest.
Similarly, when the spring is compressed beyond its rest point, the force caused by the spring
acts in the opposite direction, and so the spring wants to stretch itself back to rest.
𝐹 = −𝑘 ∙ 𝑦
We will define 𝑦 > 0 when the spring is compressed and 𝑦 < 0 when the spring is stretched.
We also know from Newton’s Second Law of Motion that net force, 𝐹, is the product of mass,
𝑚, and acceleration, 𝑎, that is 𝐹 = 𝑚 ∙ 𝑎.
Since we are talking about the forces exerted by the spring, we are talking about the acceleration of the mass attached to the spring. Its displacement was defined as y, or y(t), and so it follows that acceleration is the second derivative of displacement with respect to time, or d²y/dt². We can update our model to be:
F = m·(d²y/dt²)
−ky = m·(d²y/dt²)
m·(d²y/dt²) + ky = 0
To make our lives simpler, we divide through by m to make the coefficient of d²y/dt² be 1:
d²y/dt² + (k/m)y = 0
1 N = 1 kg·m/sec²
What if we were to apply a force of only 1 N to a box with a mass of 10 kg? Turning to the equation,
F = m·a
1 = 10·a ⇒ a = 0.1 m/s²
This means the velocity would only increase by 0.1 meters per second every second. Our table would look more like this (again assuming an initial velocity of 1 m/s):
Time, s    Total Displacement of Box, m    Velocity in the Next Second, m/s
0          0                               1
1          1                               1.1
2          2.1                             1.2
3          3.3                             1.3
4          4.6                             1.4
To get some hands-on experience, I highly encourage you to check out this applet:
http://www.physicsclassroom.com/Physics-Interactives/Newtons-Laws/Force/Force-Interactive
For a sense of scale (SOURCE: https://en.wikipedia.org/wiki/Orders_of_magnitude_(force) ):
10^4 N: 18 kN, the estimated bite force of a 6.1 m adult great white shark; 25.5 to 34.5 kN, the estimated bite force of a large 6.7 m adult saltwater crocodile
10^9 N: one giganewton (GN)
Returning to our model,
−ky = m·(d²y/dt²)
Since force is measured in Newtons, or kg·m/sec², the left side, −ky, must also carry units of kg·m/sec². Since y is measured in meters, we replace each quantity with its units and "solve" for the units of k:
(units of k)·m = kg·m/sec² = N → units of k = N/m
Thus, k is measured in N/m. That is, we can think of the spring constant, k, as the amount of force exerted per meter of stretch or compression of the spring. This unit is a bit elusive to make intuitive sense of, but it is helpful in one respect: a larger k means a larger force per meter of displacement.
Suppose that we have a 1 kg mass suspended from the end of a spring with spring constant k = 1 N/m. Our model is,
d²y/dt² + y = 0
We certainly cannot simply integrate with respect to t, because y is an unknown function that depends on t.
The equation reads, "the second derivative of y with respect to t, plus the original function, y, must equal zero."
To solve this equation, we must find such a function. More specifically, what function, when added to its second derivative, cancels out to give us 0? We must think about functions that have recursive derivatives, that is, derivatives that somehow maintain the form of the original function. A few such candidates come to mind:
sin t, cos t, or e^t
Let's try e^t first.
We would now need to substitute into our differential equation to see if this function works:
d²y/dt² + y =? 0
e^t + e^t =? 0
2e^t ≠ 0
The key here is that we need the function and its second derivative to be the same function with opposite signs. Both sin t and cos t have exactly this property, since the second derivative of sin t is −sin t and the second derivative of cos t is −cos t; each therefore satisfies the equation.
Are these the only solutions? Almost. Consider any two sinusoidal functions with varying amplitudes of the form,
y_1(t) = k_1·sin t
y_2(t) = k_2·cos t
These will also work, since the constant is present in both the original function and in the second derivative of the function. Having two such general solutions will be important.
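The guess can be checked without any calculus at all, by approximating the second derivative with a central finite difference. The sketch below is an illustrative check (the step h = 10⁻³ is an arbitrary choice): the residual y'' + y should be essentially zero for both candidates at any time.

```python
import math

def second_derivative(f, t, h=1e-3):
    # Central finite-difference approximation of f''(t), accurate to O(h^2)
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

# Residual of y'' + y for the two candidate solutions at several times
for f in (math.sin, math.cos):
    print([second_derivative(f, t) + f(t) for t in (0.5, 1.0, 2.0, 4.0)])
```

All residuals come out on the order of 10⁻⁷ or smaller, limited only by the finite-difference approximation, which is consistent with both functions solving y'' + y = 0 exactly.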
The beauty of this "guess-and-check" method is that it required no integration; our knowledge of derivatives perfectly sufficed.
What will such a spring-mass system look like, then? For demonstrative purposes, suppose the particular solution y(t) = 2·sin t satisfied our initial condition(s). A graph reveals the following:
Consider a slight variant of this model, namely one with m = 1 and k = 4. Our model is:
d²y/dt² + 4y = 0
Our choice of y_1(t) = k_1·sin t and y_2(t) = k_2·cos t will no longer work. Instead, consider y_1(t) = k_1·sin(at) for some constant a, whose derivatives are,
(d/dt)y_1(t) = a·k_1·cos(at)
(d/dt)((d/dt)y_1(t)) = −a²·k_1·sin(at)
Substituting into the differential equation,
−a²·k_1·sin(at) + 4k_1·sin(at) = 0
k_1·sin(at)·(−a² + 4) = 0
The only way for the left side to cancel out is for the second factor, −a² + 4, to be 0. We can solve for the value of a:
−a² + 4 = 0
4 = a²
a = ±2
It turns out (and we will show this soon) that we have some redundancy.
Furthermore, we can go through the same process with 𝑦2 = 𝑘2 cos(𝑎𝑡) and show that 𝑎 = ±2
satisfies the differential equation we are working to solve. So, in practice, it appears we have
four solutions:
y_1a(t) = k_1a·sin(2t) and y_1b(t) = k_1b·sin(−2t)
y_2a(t) = k_2a·cos(2t) and y_2b(t) = k_2b·cos(−2t)
The redundancy is due to trigonometric identities, namely that sin(−x) = −sin(x) and cos(−x) = cos(x). Using these substitutions in the functions with subscript b, we get:
y_1b(t) = −k_1b·sin(2t)
y_2b(t) = k_2b·cos(2t)
NOTE: k_1b is an arbitrary constant, so −k_1b is still an arbitrary constant, allowing us to "absorb" the sign as part of k_1b.
It follows that y_1b(t) has the same form as y_1a(t), and y_2b(t) has the same form as y_2a(t). Therefore, we really have two solutions:
y_1(t) = k_1·sin(2t)
y_2(t) = k_2·cos(2t)
We could repeat this process for any 𝑘 and 𝑚. Just to illustrate, suppose 𝑘 = 3 and 𝑚 = 2; then our model is:
𝑑²𝑦/𝑑𝑡² + (3/2)𝑦 = 0
Choosing a guess of 𝑦₁(𝑡) = 𝑘₁ sin(𝑎𝑡), we know that 𝑑𝑦₁/𝑑𝑡 = 𝑎𝑘₁ cos(𝑎𝑡) and 𝑑²𝑦₁/𝑑𝑡² = −𝑎²𝑘₁ sin(𝑎𝑡). Substituting,
−𝑎²𝑘₁ sin(𝑎𝑡) + (3/2)𝑘₁ sin(𝑎𝑡) = 0
Solving for 𝑎,
𝑘₁ sin(𝑎𝑡)(−𝑎² + 3/2) = 0
−𝑎² + 3/2 = 0
𝑎 = ±√(3/2)
Thus, 𝑦₁(𝑡) = 𝑘₁ sin(√(3/2) ∙ 𝑡) and 𝑦₂(𝑡) = 𝑘₂ cos(√(3/2) ∙ 𝑡) are both solutions.
In general, for the undamped model 𝑑²𝑦/𝑑𝑡² + (𝑘/𝑚)𝑦 = 0,
𝑦₁(𝑡) = 𝑘₁ sin(√(𝑘/𝑚) ∙ 𝑡)
𝑦₂(𝑡) = 𝑘₂ cos(√(𝑘/𝑚) ∙ 𝑡)
For any frictionless (undamped) spring-mass system, we have a general solution to the
differential equation!
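The general formulas can be confirmed symbolically for arbitrary positive 𝑘 and 𝑚. A sketch with sympy (not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
k, m = sp.symbols("k m", positive=True)

omega = sp.sqrt(k / m)                        # natural frequency sqrt(k/m)
for y in (sp.sin(omega * t), sp.cos(omega * t)):
    residual = sp.diff(y, t, 2) + (k / m) * y   # substitute into y'' + (k/m)y
    assert sp.simplify(residual) == 0
print("general undamped solutions verified")
```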
Another Second-Order Homogeneous Case
The guess-and-check method proposed works in many different special cases. For instance,
suppose we slightly modify our undamped spring-mass system, such that
𝑑²𝑦/𝑑𝑡² − 𝑐𝑦 = 0
In this instance, it is our desire to find a function (or functions), 𝑦(𝑡), such that the difference
between its second derivative and some multiple of itself is zero. What functions have this
property?
Using the substitution 𝑦(𝑡) = 𝑘𝑒^(𝑟𝑡), whose second derivative is 𝑟²𝑘𝑒^(𝑟𝑡),
𝑑²𝑦/𝑑𝑡² − 𝑐𝑦 = 0
𝑟²𝑘𝑒^(𝑟𝑡) − 𝑐𝑘𝑒^(𝑟𝑡) = 0
We now attempt to solve for 𝑟 by factoring out 𝑘𝑒 𝑟𝑡 (note that this quantity cannot be zero!):
𝑘𝑒 𝑟𝑡 (𝑟 2 − 𝑐) = 0
And, so,
𝑟2 − 𝑐 = 0
𝑟2 = 𝑐
𝑟 = ±√𝑐
Due to the quadratic nature of how 𝑟 is obtained, we again have two solutions:
𝑦₁(𝑡) = 𝑘₁𝑒^(√𝑐 ∙ 𝑡)
𝑦₂(𝑡) = 𝑘₂𝑒^(−√𝑐 ∙ 𝑡)
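Both exponential solutions can be checked symbolically for arbitrary positive 𝑐. A sketch with sympy (not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
c = sp.symbols("c", positive=True)

for y in (sp.exp(sp.sqrt(c) * t), sp.exp(-sp.sqrt(c) * t)):
    residual = sp.diff(y, t, 2) - c * y     # substitute into y'' - cy
    assert sp.simplify(residual) == 0
print("both exponential solutions verified")
```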
(Note that a polynomial guess such as 𝑦(𝑡) = 𝑘𝑡² fails here: its derivatives are
𝑑𝑦/𝑑𝑡 = 2𝑘𝑡
𝑑²𝑦/𝑑𝑡² = 2𝑘
and a constant cannot offset −𝑐𝑦 = −𝑐𝑘𝑡² for all 𝑡.)
Returning to the undamped system, we require that the initial displacement be 0.2 m and that the initial velocity be 0.1 m/sec. If we assume either of the two solution functions, such as 𝑦₁(𝑡) = 𝑘₁ sin 𝑡, we may feel compelled to satisfy the initial conditions. We can easily obtain the velocity function by taking the derivative of our solution,
𝑑𝑦₁/𝑑𝑡 = 𝑘₁ cos 𝑡
, or,
𝑦₁′(𝑡) = 𝑘₁ cos 𝑡
If 𝑦₁(𝑡) and 𝑦₂(𝑡) are any solutions to a homogeneous second-order linear differential equation of the form
𝑑²𝑦/𝑑𝑡² + 𝑝(𝑡) ∙ 𝑑𝑦/𝑑𝑡 + 𝑞(𝑡)𝑦 = 0
then any sum of constant multiples of these two solutions is also a solution. That is, given constants 𝑘₁ and 𝑘₂,
𝑦(𝑡) = 𝑘₁𝑦₁(𝑡) + 𝑘₂𝑦₂(𝑡)
is also a solution.
We can easily verify this theorem for the differential equation 𝑑²𝑦/𝑑𝑡² + 𝑦 = 0, whose solutions are 𝑦₁(𝑡) = 𝑘₁ sin(𝑡) and 𝑦₂(𝑡) = 𝑘₂ cos(𝑡). To check that the sum 𝑦(𝑡) = 𝑘₁ sin(𝑡) + 𝑘₂ cos(𝑡) is also a solution, we find its derivatives:
𝑑𝑦/𝑑𝑡 = 𝑘₁ cos(𝑡) − 𝑘₂ sin(𝑡)
𝑑²𝑦/𝑑𝑡² = −𝑘₁ sin(𝑡) − 𝑘₂ cos(𝑡)
We now substitute this solution sum into the differential equation to verify that it, too, is a
solution:
𝑑²𝑦/𝑑𝑡² + 𝑦 =? 0
(−𝑘₁ sin(𝑡) − 𝑘₂ cos(𝑡)) + (𝑘₁ sin(𝑡) + 𝑘₂ cos(𝑡)) =? 0
0 = 0
And, so, we have verified that the sum of these functions is also a solution.
The formal proof of this theorem is fairly straightforward to follow. We need this beyond our
proof-by-example so that other situations follow the same principle.
Suppose we have a general second-order differential equation:
𝑑²𝑦/𝑑𝑡² + 𝑝(𝑡) ∙ 𝑑𝑦/𝑑𝑡 + 𝑞(𝑡)𝑦 = 0
and suppose 𝑦₁(𝑡) and 𝑦₂(𝑡) are both solutions, so that,
𝑦₁″ + 𝑝(𝑡)𝑦₁′ + 𝑞(𝑡)𝑦₁ = 0
, and,
𝑦₂″ + 𝑝(𝑡)𝑦₂′ + 𝑞(𝑡)𝑦₂ = 0
In other words, they both satisfy the equation (without having to question the equality of the two sides of each equation).
We argue that 𝑦(𝑡) = 𝑘₁𝑦₁(𝑡) + 𝑘₂𝑦₂(𝑡) is also a solution. Thus, we check it in the differential equation by finding the first and second derivatives:
𝑦′ = 𝑘₁𝑦₁′ + 𝑘₂𝑦₂′
𝑦″ = 𝑘₁𝑦₁″ + 𝑘₂𝑦₂″
Substituting,
𝑘₁𝑦₁″ + 𝑘₂𝑦₂″ + 𝑝(𝑡)(𝑘₁𝑦₁′ + 𝑘₂𝑦₂′) + 𝑞(𝑡)(𝑘₁𝑦₁ + 𝑘₂𝑦₂) =? 0
This may appear quite messy, but if we distribute 𝑝(𝑡) and 𝑞(𝑡) and collect all terms related to 𝑦₁(𝑡) and 𝑦₂(𝑡) into separate groups, factoring out the constants, then we have,
𝑘₁(𝑦₁″ + 𝑝(𝑡)𝑦₁′ + 𝑞(𝑡)𝑦₁) + 𝑘₂(𝑦₂″ + 𝑝(𝑡)𝑦₂′ + 𝑞(𝑡)𝑦₂) =? 0
The first set of terms in parentheses and the terms in the second set of parentheses equal zero, since each function, 𝑦₁(𝑡) and 𝑦₂(𝑡), satisfies the differential equation. This leaves us with:
0 = 0
We have thus shown that a sum of two solutions is a solution, and the Principle of Superposition Theorem holds!
The reason this theorem is so useful relates to satisfying the initial conditions for displacement and velocity. We now know that the general solution to
𝑑²𝑦/𝑑𝑡² + 𝑦 = 0
is 𝑦(𝑡) = 𝑘₁ sin(𝑡) + 𝑘₂ cos(𝑡).
We want 𝑦(0) = 0.2 m to be the initial displacement at time 𝑡 = 0 and 𝑦′(0) = 0.1 m/sec at that same time. We already know that our solution, 𝑦(𝑡) = 𝑘₁ sin(𝑡) + 𝑘₂ cos(𝑡), is the displacement function. We can find its derivative to give us the velocity function (note that we will use 𝑦′(𝑡) instead of 𝑑𝑦/𝑑𝑡 here just to make the notation less involved):
𝑦′(𝑡) = 𝑘₁ cos(𝑡) − 𝑘₂ sin(𝑡)
NOTE: We cannot absorb the sign of 𝑘₂ in 𝑦′(𝑡) into 𝑘₂, since it comes from the original function, 𝑦(𝑡), and the derivative tells us the signs should be opposite!
This is where our theorem is useful – we now plug in our initial conditions:
𝑦(0) = 𝑘₁ sin(0) + 𝑘₂ cos(0) = 𝑘₂ = 0.2
𝑦′(0) = 𝑘₁ cos(0) − 𝑘₂ sin(0) = 𝑘₁ = 0.1
Excellent! We were able to satisfy both initial conditions because we formed a system of linear equations with two unknowns. These two unknowns can be determined by the two constraints (initial conditions) given.
Our particular solution for this spring-mass system is:
𝑦(𝑡) = 0.1 sin(𝑡) + 0.2 cos(𝑡)
Given a sine or cosine function, sin(𝑎𝑡) or cos(𝑎𝑡), the period, or the number of units of 𝑡 for one full oscillation (peak-to-peak) to occur, is given by,
period = 2𝜋/𝑎 , or identically,
period = 2𝜋/frequency
The frequency of a periodic function is the complete number of periods per 2𝜋 time units, since
frequency = 2𝜋/period = 𝑎
A sum of sinusoidal functions, such as
sin(𝑎₁𝑡) + cos(𝑎₂𝑡)
, has a non-obvious period. The period of such a sum can be computed by finding the least common multiple of the individual periods or by finding the greatest common divisor of the frequencies.
For example, if we have a function 𝑦 = 8 sin(4𝑡) + 3 cos(12𝑡), the periods of the two terms are
𝑝₁ = 2𝜋/4 = 𝜋/2
𝑝₂ = 2𝜋/12 = 𝜋/6
, respectively. Intuitively, one period of the sum of such functions occurs when both functions simultaneously terminate a cycle. The function 3 cos(12𝑡) has the higher frequency and takes only 𝜋/6 time units to complete a single cycle. After 3 cycles (𝜋/6 + 𝜋/6 + 𝜋/6 = 3𝜋/6 = 𝜋/2 time units), 3 cos(12𝑡) will be terminating its third cycle while 8 sin(4𝑡) is terminating its first cycle, simultaneously. Thus, the period of this sum function will be 𝜋/2 time units.
Formally, we seek
lcm(𝜋/2, 𝜋/6)
To do this we realize that both components contain 𝜋, and so the least common multiple will involve 𝜋 times the least common multiple of 1/2 and 1/6. Intuitively, the first few integer multiples of each of these fractions are:
1/2, 2/2 = 1, 3/2, 4/2 = 2, 5/2
1/6, 2/6 = 1/3, 3/6 = 1/2, 4/6 = 2/3
The least common multiple is 1/2. Thus, the least common multiple of the original two components is 𝜋/2. That is, one period of the sum of sinusoidal functions is 𝜋/2 ≈ 1.57 seconds.
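This least-common-multiple computation can be automated. The sketch below (using Python's standard fractions module; not part of the original text) applies the rule that the lcm of two positive rationals is the lcm of their numerators over the gcd of their denominators:

```python
from fractions import Fraction
from math import gcd, lcm, pi

def rational_lcm(x: Fraction, y: Fraction) -> Fraction:
    # lcm of two positive rationals: lcm of numerators over gcd of denominators
    return Fraction(lcm(x.numerator, y.numerator), gcd(x.denominator, y.denominator))

# Periods of 8sin(4t) and 3cos(12t), written as multiples of pi:
p1, p2 = Fraction(1, 2), Fraction(1, 6)
combined = rational_lcm(p1, p2)
print(combined, float(combined) * pi)   # 1/2, then pi/2 ≈ 1.5708
```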
Another, easier (though slightly less intuitive) way is to find the greatest common divisor of the frequencies. The frequencies of these two functions are 𝑓₁ = 4 and 𝑓₂ = 12 cycles per 2𝜋 units of time, or 𝑓₁ = 4/(2𝜋) = 2/𝜋 and 𝑓₂ = 12/(2𝜋) = 6/𝜋 cycles per 1 unit of time. We seek
gcd(2/𝜋, 6/𝜋)
We can factor (divide) out 2/𝜋 from both components:
2/𝜋 = (2/𝜋)(1)
6/𝜋 = (2/𝜋)(3)
, so the greatest common divisor of the frequencies is 2/𝜋 cycles per unit of time. The period is the reciprocal of this frequency. And, so, we have that the period of this sum of sinusoidal functions is, again, 𝜋/2 time units.
Final Thoughts
When mattresses are tested for strength and durability of the springs, a large machine is used to
impose force on the mattress repeatedly. This quality control process is another example of a
spring-mass system. The “mass”, in this case, can be thought of as the arm of the machine that
presses on the springs of the mattress. While there are other factors here that we will not
consider, we know that there is a controlled force on the mattress. This controlled force has an
associated forcing function, which varies in some pre-specified way over time.
To account for such a function, we need to revisit Newton’s Second Law of Motion:
𝐹 = 𝑚𝑎
Since we have previously defined 𝑦(𝑡) to be the displacement of the mass from the spring’s resting point, we have that 𝑎 = 𝑑²𝑦/𝑑𝑡², as in our previous model.
The net force, 𝐹, is now the sum of two forces, one being the force exerted by the spring,
𝐹𝑠𝑝𝑟𝑖𝑛𝑔 = −𝑘𝑦, and the force imposed by the forcing function, which we will denote 𝐹𝑓𝑜𝑟𝑐𝑖𝑛𝑔 =
𝑓(𝑡).
𝐹_spring + 𝐹_forcing = 𝑚 ∙ 𝑑²𝑦/𝑑𝑡²
−𝑘𝑦 + 𝑓(𝑡) = 𝑚 ∙ 𝑑²𝑦/𝑑𝑡²
𝑚 ∙ 𝑑²𝑦/𝑑𝑡² + 𝑘𝑦 = 𝑓(𝑡)
𝑑²𝑦/𝑑𝑡² + (𝑘/𝑚)𝑦 = 𝑓(𝑡)/𝑚
So that we do not have to worry about the forcing function being divided by a constant (thus making the algebra messier), we will simply rewrite 𝐹(𝑡) = 𝑓(𝑡)/𝑚. Thus, it will be assumed that 𝐹(𝑡) already takes the mass into account.
We recognize this to be a second-order constant-coefficient nonhomogeneous differential equation.
ℒ[𝑑²𝑦/𝑑𝑡²] = ∫₀^∞ (𝑑²𝑦/𝑑𝑡²) ∙ 𝑒^(−𝑠𝑡) 𝑑𝑡
First, let 𝑢 = 𝑒^(−𝑠𝑡) and 𝑑𝑣 = (𝑑²𝑦/𝑑𝑡²) 𝑑𝑡
Then,
𝑑𝑢 = −𝑠𝑒^(−𝑠𝑡) 𝑑𝑡
𝑣 = 𝑑𝑦/𝑑𝑡
Integration by parts gives,
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = 𝑒^(−𝑠𝑡) ∙ (𝑑𝑦/𝑑𝑡)|₀^∞ − ∫₀^∞ (𝑑𝑦/𝑑𝑡)(−𝑠𝑒^(−𝑠𝑡)) 𝑑𝑡
The second term on the right requires another round of integration by parts, so we let,
𝑢 = −𝑠𝑒^(−𝑠𝑡)
𝑑𝑣 = (𝑑𝑦/𝑑𝑡) 𝑑𝑡
Then,
𝑑𝑢 = 𝑠²𝑒^(−𝑠𝑡) 𝑑𝑡
𝑣 = 𝑦
By substitution,
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = 𝑒^(−𝑠𝑡)(𝑑𝑦/𝑑𝑡)|₀^∞ − [−𝑠𝑒^(−𝑠𝑡)𝑦|₀^∞ − ∫₀^∞ 𝑦𝑠²𝑒^(−𝑠𝑡) 𝑑𝑡]
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = 𝑒^(−𝑠𝑡)𝑦′(𝑡)|₀^∞ + 𝑠𝑒^(−𝑠𝑡)𝑦(𝑡)|₀^∞ + 𝑠² ∫₀^∞ 𝑦𝑒^(−𝑠𝑡) 𝑑𝑡
As in the development of the Laplace transform of the first derivative, the last term on the right is, by definition, 𝑠² ∙ ℒ[𝑦],
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = 𝑒^(−𝑠𝑡)𝑦′(𝑡)|₀^∞ + 𝑠𝑒^(−𝑠𝑡)𝑦(𝑡)|₀^∞ + 𝑠²ℒ[𝑦]
, and, provided that 𝑠 > 0 and that 𝑦(𝑡) and 𝑦′(𝑡) are of no more than exponential growth, then,
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = (0)𝑦′(∞) − (1)𝑦′(0) + 𝑠(0)𝑦(∞) − 𝑠(1)𝑦(0) + 𝑠²ℒ[𝑦]
∫₀^∞ (𝑑²𝑦/𝑑𝑡²)𝑒^(−𝑠𝑡) 𝑑𝑡 = −𝑦′(0) − 𝑠𝑦(0) + 𝑠²ℒ[𝑦]
ℒ[𝑑²𝑦/𝑑𝑡²] = 𝑠²ℒ[𝑦] − 𝑠𝑦(0) − 𝑦′(0)
The pattern continues for higher derivatives; for instance,
ℒ[𝑑³𝑦/𝑑𝑡³] = 𝑠³ℒ[𝑦] − 𝑠²𝑦(0) − 𝑠𝑦′(0) − 𝑦″(0)
Another interesting note is that the Laplace transform requires an initial condition for both the
displacement and velocity! We discovered this intuitively in the previous section.
In thinking about what type of forcing functions would likely be seen, 𝑓(𝑡) = sin(𝑎𝑡) and
𝑓(𝑡) = cos(𝑎𝑡) are two such functions. Thus, we will need to know their Laplace transforms. By
definition of a Laplace transform, then,
ℒ[sin(𝑎𝑡)] = ∫₀^∞ sin(𝑎𝑡) ∙ 𝑒^(−𝑠𝑡) 𝑑𝑡
Integrating this requires the use of integration by parts. Since this is an integral from Calculus II,
we simply use the result from a table of integrals showing the integral of a product of a
trigonometric and exponential function:
∫₀^∞ sin(𝑎𝑡) 𝑒^(−𝑠𝑡) 𝑑𝑡 = [𝑒^(−𝑠𝑡)/(𝑎² + 𝑠²)] ∙ (−𝑠 sin 𝑎𝑡 − 𝑎 cos 𝑎𝑡)|₀^∞
For 𝑠 > 0, the term at 𝑡 = ∞ vanishes, and the term at 𝑡 = 0 evaluates to −𝑎/(𝑎² + 𝑠²); subtracting,
ℒ[sin(𝑎𝑡)] = 𝑎/(𝑠² + 𝑎²)
A similar computation gives,
ℒ[cos(𝑎𝑡)] = 𝑠/(𝑠² + 𝑎²)
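Both transforms can be checked by machine. The sketch below (using the sympy library; not part of the original text) computes each transform from the definition:

```python
import sympy as sp

t, s, a = sp.symbols("t s a", positive=True)

# laplace_transform returns (F(s), convergence abscissa, conditions);
# noconds=True keeps only F(s).
Fsin = sp.laplace_transform(sp.sin(a * t), t, s, noconds=True)
Fcos = sp.laplace_transform(sp.cos(a * t), t, s, noconds=True)
assert sp.simplify(Fsin - a / (s**2 + a**2)) == 0
assert sp.simplify(Fcos - s / (s**2 + a**2)) == 0
print(Fsin, Fcos)
```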
Suppose, for example, that the forcing function is a constant, 𝐹(𝑡) = 1 Newton, and that the system starts from rest, 𝑦(0) = 0 and 𝑦′(0) = 0:
𝑑²𝑦/𝑑𝑡² + 𝑦 = 1
Taking the Laplace transform of both sides,
ℒ[𝑑²𝑦/𝑑𝑡²] + ℒ[𝑦] = ℒ[1]
Taking transforms:
𝑠²ℒ[𝑦] − 𝑠𝑦(0) − 𝑦′(0) + ℒ[𝑦] = 1/𝑠
Isolating ℒ[𝑦]:
𝑠²ℒ[𝑦] − 𝑠(0) − 0 + ℒ[𝑦] = 1/𝑠
𝑠²ℒ[𝑦] + ℒ[𝑦] = 1/𝑠
ℒ[𝑦](𝑠² + 1) = 1/𝑠
ℒ[𝑦] = 1/(𝑠(𝑠² + 1))
Prior to taking the inverse transform, we notice we have a product of polynomials in the
denominator, so we perform a partial fraction decomposition using technology. We obtain:
ℒ[𝑦] = 1/𝑠 − 𝑠/(𝑠² + 1)
The second term appears to have the form of a cosine function, since,
ℒ[cos(𝑎𝑡)] = 𝑠/(𝑠² + 𝑎²)
By observation, 𝑎2 is equal to 1, which implies that 𝑎 = 1 (we ignore the case where 𝑎 = −1 for
convenience).
𝑦 = ℒ⁻¹[1/𝑠] − ℒ⁻¹[𝑠/(𝑠² + 1²)]
𝑦(𝑡) = 1 − cos(1𝑡)
𝑦(𝑡) = 1 − cos(𝑡)
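Substituting the answer back into the differential equation and the initial conditions is a worthwhile safeguard. A sketch of that check with sympy (not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
y = 1 - sp.cos(t)

assert sp.simplify(sp.diff(y, t, 2) + y - 1) == 0   # satisfies y'' + y = 1
assert y.subs(t, 0) == 0                            # initial displacement
assert sp.diff(y, t).subs(t, 0) == 0                # initial velocity
print("solution verified")
```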
We see the periodic nature of the spring’s movement. Initially, it is difficult to conceptualize why
the forcing function of a constant 1 Newton does what it does.
As we begin keeping track of the spring’s movement, we see that 𝑦 ≥ 0 for all 𝑡. This means
that, from rest, the spring is compressed 2 meters and then, perhaps, the force of the spring on the
machine forces the spring back to rest. This behavior occurs periodically. Thus, it appears
reasonable to assume that the force, since it is positive, acts as a limited compression force on the
spring.
Example 2.4.1 Suppose that we have a periodic sinusoidal forcing function of 𝐹(𝑡) = sin(3𝑡) Newtons imposed on the mattress for a short period. Furthermore, suppose that the spring in the mattress is initially at rest, so 𝑦(0) = 0, and is not in motion, so 𝑦′(0) = 0. Our model is:
𝑑²𝑦/𝑑𝑡² + 𝑦 = sin(3𝑡)
ℒ[𝑑²𝑦/𝑑𝑡²] + ℒ[𝑦] = ℒ[sin(3𝑡)]
Taking transforms (recall that ℒ[sin(𝑎𝑡)] = 𝑎/(𝑠² + 𝑎²), so the numerator is 3):
𝑠²ℒ[𝑦] − 𝑠𝑦(0) − 𝑦′(0) + ℒ[𝑦] = 3/(𝑠² + 3²)
Isolating ℒ[𝑦]:
𝑠²ℒ[𝑦] − 𝑠(0) − 0 + ℒ[𝑦] = 3/(𝑠² + 9)
𝑠²ℒ[𝑦] + ℒ[𝑦] = 3/(𝑠² + 9)
ℒ[𝑦](𝑠² + 1) = 3/(𝑠² + 9)
ℒ[𝑦] = 3/((𝑠² + 1)(𝑠² + 9))
Performing a partial fraction decomposition:
ℒ[𝑦] = 3/(8(𝑠² + 1)) − 3/(8(𝑠² + 9))
𝑦(𝑡) = (3/8)ℒ⁻¹[1/(𝑠² + 1)] − (3/8)ℒ⁻¹[1/(𝑠² + 9)]
Both inverse transforms are identified to be sine functions, where 𝑎² = 1 in the first term and 𝑎² = 9 → 𝑎 = 3 in the second term. The second term needs a factor of 3 in the numerator, so we multiply it by 3/3:
𝑦(𝑡) = (3/8)ℒ⁻¹[1/(𝑠² + 1²)] − (1/8)ℒ⁻¹[3/(𝑠² + 3²)]
We obtain:
𝑦(𝑡) = (3/8) sin(1𝑡) − (1/8) sin(3𝑡)
An applet to visually demonstrate the behavior of this spring-mass system can be found at the
following link: https://www.geogebra.org/m/Wh5NjWWK.
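Because transform algebra of this kind invites arithmetic slips, it is worth cross-checking the result by solving the initial-value problem symbolically. A sketch with sympy's dsolve (not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# y'' + y = sin(3t),  y(0) = 0,  y'(0) = 0
ode = sp.Eq(y(t).diff(t, 2) + y(t), sp.sin(3 * t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
print(sp.expand(sol.rhs))   # expect 3*sin(t)/8 - sin(3*t)/8
```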
Example 2.4.2 Suppose that we have a periodic sinusoidal forcing function of 𝐹(𝑡) = cos(5𝑡) Newtons imposed on the mattress for a short period. Furthermore, suppose that the spring in the mattress is, as before, initially at rest, so 𝑦(0) = 0 and 𝑦′(0) = 0.
SOLUTION: Our model is:
𝑑²𝑦/𝑑𝑡² + 𝑦 = cos(5𝑡)
Taking the Laplace transform of both sides,
ℒ[𝑑²𝑦/𝑑𝑡²] + ℒ[𝑦] = ℒ[cos(5𝑡)]
Taking transforms:
𝑠²ℒ[𝑦] − 𝑠𝑦(0) − 𝑦′(0) + ℒ[𝑦] = 𝑠/(𝑠² + 5²)
Isolating ℒ[𝑦]:
𝑠²ℒ[𝑦] − 𝑠(0) − 0 + ℒ[𝑦] = 𝑠/(𝑠² + 25)
𝑠²ℒ[𝑦] + ℒ[𝑦] = 𝑠/(𝑠² + 25)
ℒ[𝑦](𝑠² + 1) = 𝑠/(𝑠² + 25)
ℒ[𝑦] = 𝑠/((𝑠² + 1)(𝑠² + 25))
Performing a partial fraction decomposition:
ℒ[𝑦] = 𝑠/(24(𝑠² + 1)) − 𝑠/(24(𝑠² + 25))
𝑦(𝑡) = (1/24)ℒ⁻¹[𝑠/(𝑠² + 1)] − (1/24)ℒ⁻¹[𝑠/(𝑠² + 25)]
Both inverse transforms are identified to be cosine functions, where 𝑎² = 1 in the first term and 𝑎² = 25 → 𝑎 = 5 in the second term:
𝑦(𝑡) = (1/24)ℒ⁻¹[𝑠/(𝑠² + 1²)] − (1/24)ℒ⁻¹[𝑠/(𝑠² + 5²)]
We obtain:
𝑦(𝑡) = (1/24) cos(1𝑡) − (1/24) cos(5𝑡)
An applet to help visualize the spring’s behavior can be observed by visiting the following link:
http://ggbm.at/vpVCs4PG.
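As with the previous example, the answer can be verified by substitution. A sketch with sympy (not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
y = (sp.cos(t) - sp.cos(5 * t)) / 24

assert sp.simplify(sp.diff(y, t, 2) + y - sp.cos(5 * t)) == 0  # the ODE holds
assert y.subs(t, 0) == 0                                       # y(0) = 0
assert sp.diff(y, t).subs(t, 0) == 0                           # y'(0) = 0
print("Example 2.4.2 verified")
```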
□
If our assumption of a frictionless spring-mass system was to hold, this would make for one
uncomfortable vehicle ride. Imagine bumping up-and-down periodically as you drive to campus!
Fortunately, our vehicles are equipped with struts. A strut acts as a damper on vibrations and,
when in good condition, it helps to keep our ride comfortable. Why not eliminate the spring in
the assembly altogether? This would also make for an uncomfortable ride, as we would feel the
impact of every bump in the road.
In order to account for a damping force in our differential equation, we make a statement as
follows: damping of a spring-mass system is the force that counteracts velocity. This force is not
equal to the velocity (as this would again result in abrupt stops in vibrations), but instead it is
proportional to the current velocity in the opposite direction.
We will call the damping coefficient, that is, the proportionality constant, 𝑏. Thus,
𝑑𝑦
𝐹𝑑𝑎𝑚𝑝𝑖𝑛𝑔 = −𝑏
𝑑𝑡
In other words, the damping force is negatively proportional to the current velocity – it acts to
slow velocity.
𝐹 = 𝑚𝑎
𝐹_spring + 𝐹_damping + 𝐹_forcing = 𝑚 ∙ 𝑑²𝑦/𝑑𝑡²
−𝑘𝑦 − 𝑏 ∙ 𝑑𝑦/𝑑𝑡 + 𝑓(𝑡) = 𝑚 ∙ 𝑑²𝑦/𝑑𝑡²
𝑚 ∙ 𝑑²𝑦/𝑑𝑡² + 𝑏 ∙ 𝑑𝑦/𝑑𝑡 + 𝑘𝑦 = 𝑓(𝑡)
𝑑²𝑦/𝑑𝑡² + (𝑏/𝑚) ∙ 𝑑𝑦/𝑑𝑡 + (𝑘/𝑚)𝑦 = 𝑓(𝑡)/𝑚
𝑑²𝑦/𝑑𝑡² + (𝑏/𝑚) ∙ 𝑑𝑦/𝑑𝑡 + (𝑘/𝑚)𝑦 = 𝐹(𝑡)
So that we do not have to worry about the forcing function being divided by a constant (thus making the algebra messier), we again rewrite 𝐹(𝑡) = 𝑓(𝑡)/𝑚, assuming that 𝐹(𝑡) already takes the mass into account.
By virtue of natural model development, we now have the form of a general constant-
coefficient differential equation. This model may require a difficult solution process, but we
will consider special cases that result in more manageable mathematics.
Unit Analysis
It is useful to develop a little bit of intuition regarding the units of the damping term, −𝑏 ∙ 𝑑𝑦/𝑑𝑡. It must be the case that this term is in Newtons, since it is a force, and so,
kg ∙ (m/sec²) = 𝑏 ∙ (𝑑𝑦/𝑑𝑡)
Since 𝑑𝑦/𝑑𝑡 is measured in m/sec, we can “solve” for the units of 𝑏:
kg ∙ (m/sec²) = [𝑏] ∙ (m/sec) → [𝑏] = kg/sec
Thus, its units are kg per second. While the non-physicist (such as the author of the book) can
only speculate, it seems reasonable to say that this unit describes the amount of counter-mass
imposed on the spring-mass system continuously each second. Then, it seems reasonable to say
that the larger the value of 𝑏, the stronger the counteracting damping force placed on the spring-
mass system.
A Homogeneous Case
In the real world, systems are often damped. If
you release the bottom end of a slinky and
watch it bounce, it will naturally come to a
complete (or relatively complete) stop.
Similarly, a vehicle that drives over a pothole in
the road will eventually cease to bounce up-and-
down.
Without even providing a solution process, we
might intuitively expect the graph of 𝑦(𝑡), the
displacement of the mass from rest, as a
function of time to result in slowing oscillations.
To see if our model agrees with our intuition, we consider the case where 𝑚 = 1, 𝑏 = 3, and 𝑘 =
2. For now, we assume there is no forcing function. Thus, our model becomes:
𝑑²𝑦/𝑑𝑡² + 3 ∙ 𝑑𝑦/𝑑𝑡 + 2𝑦 = 0
We could go about solving this differential equation in more than one way, namely, by using the guess-and-check method or by using Laplace transforms. We first attempt the former, with the guess
𝑦(𝑡) = 𝑘 sin(𝑎𝑡)
𝑑𝑦/𝑑𝑡 = 𝑎𝑘 cos(𝑎𝑡)
𝑑²𝑦/𝑑𝑡² = −𝑎²𝑘 sin(𝑎𝑡)
We can plug this guess into the differential equation to see if it is possible to find a value of 𝑎 that will make the resulting sum zero:
𝑑²𝑦/𝑑𝑡² + 3 ∙ 𝑑𝑦/𝑑𝑡 + 2𝑦 = 0
−𝑎²𝑘 sin(𝑎𝑡) + 3𝑎𝑘 cos(𝑎𝑡) + 2𝑘 sin(𝑎𝑡) =? 0
We have a problem: in general, sin(𝑎𝑡) and cos(𝑎𝑡) are not in phase, and so it is not possible to
find a value of 𝑎 that makes these terms cancel out for all 𝑡. Thus, sine and cosine are not
appropriate guesses.
Let’s instead try 𝑦(𝑡) = 𝑘𝑒^(𝑎𝑡), so that,
𝑑𝑦/𝑑𝑡 = 𝑎𝑘𝑒^(𝑎𝑡)
𝑑²𝑦/𝑑𝑡² = 𝑎²𝑘𝑒^(𝑎𝑡)
The advantage this guess has is that all the terms contain the same function, 𝑒^(𝑎𝑡). Thus, we need to find a value of 𝑎 that will make the equation true. We can begin by factoring out 𝑘𝑒^(𝑎𝑡) from each of the terms:
𝑘𝑒^(𝑎𝑡)(𝑎² + 3𝑎 + 2) =? 0
As long as 𝑎2 + 3𝑎 + 2 can equal zero, we have a match! Mathematically, we must solve for 𝑎
in the equation,
𝑎2 + 3𝑎 + 2 = 0
We can do this one of two ways: attempt to factor it, or use the quadratic formula. While this
polynomial factors into (𝑎 + 2)(𝑎 + 1) (thereby giving us 𝑎 = −2 or 𝑎 = −1), this is not likely
to be possible in typical cases.
Just In Time Review – Quadratic Formula
Given a quadratic equation of the form
𝑎𝑥² + 𝑏𝑥 + 𝑐 = 0
, the values of 𝑥 that solve this equation are given by the quadratic formula:
𝑥 = (−𝑏 ± √(𝑏² − 4𝑎𝑐)) / (2𝑎)
Applying it to 𝑎² + 3𝑎 + 2 = 0:
𝑎 = (−3 ± √(3² − 4(1)(2))) / 2
= (−3 ± √1) / 2
= (−3 ± 1) / 2
𝑎1 = −1, 𝑎2 = −2
NOTE: As a foreshadowing, we know that it is possible that the discriminant, 𝑏 2 − 4𝑎𝑐, can be
negative, which introduces a new problem we will have to handle in such circumstances.
Our two solutions are:
𝑦₁(𝑡) = 𝑘₁𝑒^(−𝑡)
𝑦₂(𝑡) = 𝑘₂𝑒^(−2𝑡)
Since we will need to satisfy two initial conditions, in general, it follows from the Superposition
Theorem that any constant multiple sum of these two functions is also a solution. Our general
solution is:
𝑦(𝑡) = 𝑘1 𝑒 −𝑡 + 𝑘2 𝑒 −2𝑡
Assuming, for example, that 𝑦(0) = 3 and 𝑦′(0) = 0 are our two initial conditions, we substitute 𝑡 = 0 into 𝑦(𝑡) and into 𝑦′(𝑡) = −𝑘₁𝑒^(−𝑡) − 2𝑘₂𝑒^(−2𝑡):
3 = 𝑘₁𝑒^(−0) + 𝑘₂𝑒^(−2(0)) = 𝑘₁ + 𝑘₂
0 = −𝑘₁ − 2𝑘₂
We can do this any number of ways, including by substitution, elimination, or by using matrix
algebra (we will learn the matrix algebra approach later on).
Solving by substitution requires that we solve one of the equations for either one of the unknowns,
substitute this into the other equation, and isolate a constant. For simplicity, we do this for the
second equation:
𝑘₁ = −2𝑘₂
Substitution yields:
3 = 𝑘₁ + 𝑘₂
3 = (−2𝑘₂) + 𝑘₂
3 = −𝑘₂
𝑘₂ = −3
It follows that 𝑘₁ = −2(−3) = 6, and so our particular solution is:
𝑦(𝑡) = 6𝑒^(−𝑡) − 3𝑒^(−2𝑡)
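Initial-condition systems like this one are ordinary linear systems, so they can also be solved numerically. A sketch with numpy (not part of the original text):

```python
import numpy as np

# y(0)  = 3  gives   k1 +  k2 = 3
# y'(0) = 0  gives  -k1 - 2k2 = 0
A = np.array([[1.0, 1.0],
              [-1.0, -2.0]])
b = np.array([3.0, 0.0])
k1, k2 = np.linalg.solve(A, b)
print(k1, k2)   # k1 = 6, k2 = -3
```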
What we see is not very exciting. It appears that the second the mass is released, the spring returns to rest quickly and without oscillations. What might be the cause of this? Since the damping term is the only difference between this model and our undamped model, it must be the culprit. This type of damped spring-mass system is termed overdamped – one that would physically result in, perhaps, an uncomfortably stiff ride in a vehicle!
The following applet can be used to visualize how the mass returns to rest:
http://ggbm.at/T3CNDSDU.
Now suppose instead that 𝑏 = 2 and 𝑘 = 2 (with 𝑚 = 1). The same guess leads to the characteristic equation 𝑎² + 2𝑎 + 2 = 0, and the quadratic formula gives:
𝑎 = (−2 ± √(2² − 4(1)(2))) / (2(1))
= (−2 ± √−4) / 2
Since the discriminant is −4, we have non-real solutions for 𝑎! While there are techniques to
remedy this issue (thanks to our friend, Leonhard Euler), we will attempt another solution
strategy: Laplace transforms.
Before we do so, we will need two additional transforms, mainly for inversion purposes. We will
not derive these two Laplace transforms, but it can be done by using the Laplace transform
definition:
ℒ[𝑒^(𝑎𝑡) ∙ sin(𝑏𝑡)] = 𝑏/((𝑠 − 𝑎)² + 𝑏²)
ℒ[𝑒^(𝑎𝑡) ∙ cos(𝑏𝑡)] = (𝑠 − 𝑎)/((𝑠 − 𝑎)² + 𝑏²)
In words, the Laplace transform of a product of an exponential function and a sinusoid (sine or
cosine) is the expression on the right. Often these types of functions are present whenever
sinusoids have amplitudes that decay exponentially.
As an example,
ℒ[𝑒^(3𝑡) ∙ cos(4𝑡)] = (𝑠 − 3)/((𝑠 − 3)² + 16)
Inverting works the same way, in reverse:
ℒ⁻¹[3/((𝑠 + 2)² + 9)] = ℒ⁻¹[3/((𝑠 − (−2))² + 3²)]
Here the numerator, 3, matches 𝑏, so this is the sine form:
ℒ⁻¹[3/((𝑠 − (−2))² + 3²)] = 𝑒^(−2𝑡) ∙ sin(3𝑡)
Returning to our model, with initial conditions 𝑦(0) = 3 and 𝑦′(0) = 0:
𝑑²𝑦/𝑑𝑡² + 2 ∙ 𝑑𝑦/𝑑𝑡 + 2𝑦 = 0
ℒ[𝑑²𝑦/𝑑𝑡²] + 2ℒ[𝑑𝑦/𝑑𝑡] + 2ℒ[𝑦] = ℒ[0]
Taking transforms:
𝑠²ℒ[𝑦] − 3𝑠 − 0 + 2(𝑠ℒ[𝑦] − 3) + 2ℒ[𝑦] = 0
Isolating ℒ[𝑦]:
ℒ[𝑦](𝑠² + 2𝑠 + 2) = 3𝑠 + 6
ℒ[𝑦] = (3𝑠 + 6)/(𝑠² + 2𝑠 + 2)
We have a new problem: the denominator is not part of a recognizable Laplace transform. We
could attempt a partial fraction decomposition, but, since this denominator is not factorable (try
it!), we will be left short-handed.
To solve a problem where a quadratic function with the linear term (2𝑠 in this case) is in the
denominator, we should attempt to complete the square.
Any quadratic 𝑎𝑥² + 𝑏𝑥 + 𝑐 with 𝑎 = 1 can be rewritten as
(𝑥 − 𝐴)² + 𝐵
, where 𝐴 and 𝐵 are constants corresponding to the 𝑥-coordinate and 𝑦-coordinate of the vertex of the parabola. This is often called vertex form. These two components can be calculated by the following formulas:
𝐴 = −𝑏/(2𝑎)
𝐵 = 𝑎𝐴² + 𝑏𝐴 + 𝑐
NOTE: These formulas are less sophisticated than they appear. Finding 𝐴 involves locating the
𝑥 position of the vertex and 𝐵 involves substituting this 𝑥 position back into the quadratic
function to evaluate the 𝑦 position. The formulas are for convenience.
𝐴 = −2/(2(1)) = −1
𝐵 = (−1)² + 2(−1) + 2 = 1
𝑠² + 2𝑠 + 2 = (𝑠 − (−1))² + 1 = (𝑠 + 1)² + 1
NOTE: We can verify that the expansion of the left-side gives us the right-hand side.
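That verification is easy to mechanize. The sketch below (using sympy; not part of the original text) computes the vertex-form constants and expands the result back:

```python
import sympy as sp

s = sp.symbols("s")
a, b, c = 1, 2, 2
A = -sp.Rational(b, 2 * a)       # x-coordinate of the vertex
B = a * A**2 + b * A + c         # y-coordinate of the vertex
print(A, B)                      # -1 1
assert sp.expand((s - A)**2 + B) == s**2 + 2*s + 2
```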
Splitting ℒ[𝑦] = (3𝑠 + 6)/((𝑠 + 1)² + 1) into two terms, we recognize the second term to be a product of an exponential function, where 𝑎 = −1, and a sine function, where 𝑏 = 1 (we can think of it as 1/((𝑠 − (−1))² + 1²)).
The first term is a bit tricky – we have the structure of the product of an exponential function, where 𝑎 = −1, and a cosine function; however, the numerator should be 𝑠 + 1, based on the structure of the Laplace transform! We address this by using the Add-One Trick.
Occasionally, we will run into a scenario in which we know we need to invert into the product
of an exponential function and a cosine function. If the numerator is not of the form 𝑠 − 𝑎,
then we can get it into this form. That is, suppose we need to invert:
𝑠/((𝑠 − 𝑎)² + 𝑏²)
We can introduce an 𝑎 term into the numerator by adding it and by subtracting it. That is, the above expression is equivalent to:
(𝑠 + 𝑎 − 𝑎)/((𝑠 − 𝑎)² + 𝑏²)
We then split the fraction into two components so that we can remove the lone 𝑎 term from the numerator (properties of fractions with common denominators):
(𝑠 − 𝑎)/((𝑠 − 𝑎)² + 𝑏²) + 𝑎/((𝑠 − 𝑎)² + 𝑏²)
ℒ[𝑦] = 3 ∙ 𝑠/((𝑠 + 1)² + 1) + 6 ∙ 1/((𝑠 + 1)² + 1)
ℒ[𝑦] = 3 ∙ (𝑠 + 1 − 1)/((𝑠 + 1)² + 1) + 6 ∙ 1/((𝑠 + 1)² + 1)
ℒ[𝑦] = 3 ∙ ((𝑠 + 1)/((𝑠 + 1)² + 1) − 1/((𝑠 + 1)² + 1)) + 6 ∙ 1/((𝑠 + 1)² + 1)
ℒ[𝑦] = 3 ∙ (𝑠 + 1)/((𝑠 + 1)² + 1) − 3 ∙ 1/((𝑠 + 1)² + 1) + 6 ∙ 1/((𝑠 + 1)² + 1)
We can either invert the right side as-is or first combine the last two terms, since they both contain constants in the numerator:
ℒ[𝑦] = 3 ∙ (𝑠 + 1)/((𝑠 + 1)² + 1) + 3 ∙ 1/((𝑠 + 1)² + 1)
𝑦 = 3ℒ⁻¹[(𝑠 + 1)/((𝑠 + 1)² + 1)] + 3ℒ⁻¹[1/((𝑠 + 1)² + 1)]
𝑦(𝑡) = 3𝑒^(−𝑡) cos(𝑡) + 3𝑒^(−𝑡) sin(𝑡)
We can think of this as a periodic function with period 2𝜋 ≈ 6.28 seconds, however, the
amplitude is decaying exponentially over time (recall that the coefficient of a sinusoidal function
describes its amplitude).
Graphing the function:
We can see that the damping term in the differential equation is doing its job. The mass makes only a brief oscillation before settling back to rest. A system of this type – complex characteristic roots producing decaying oscillations – is known as underdamped.
In practice, an inductor passes low-frequency signals, while impeding high frequencies. This
makes an inductor suitable for more sophisticated applications in which signals need to be
precisely controlled. A bass speaker, or subwoofer, is an example of a device in which we would
want to pass low frequencies and bar high frequencies (hence the “bassy” sound).
Furthermore, we define 𝑄(𝑡) to be the charge across the capacitor at time 𝑡. This charge is measured in a unit known as the Coulomb. The rate at which the charge changes, 𝑑𝑄/𝑑𝑡, is the current, or the flow of charge, measured in amperes, or amps, and denoted 𝐼. That is,
𝑑𝑄/𝑑𝑡 = 𝐼
A device in a circuit has a certain voltage drop, and a careful study of such voltage drops yields the following relationships. The voltage drop across a resistor is the product of the resistance value and the amount of current in the circuit:
𝑉_𝑅 = 𝐼 ∙ 𝑅
The voltage drop across an inductor is the product of the rating of the inductor, measured in the unit Henry, 𝐿, and the rate at which the charge, or current, is changing. That is,
𝑉_𝐿 = 𝐿 ∙ (𝑑𝐼/𝑑𝑡)
Finally, the voltage drop across a capacitor is the ratio of charge across the capacitor, 𝑄, and the
rating of the capacitor, 𝐶 Farads, that is,
𝑉_𝐶 = 𝑄/𝐶
Kirchoff’s Law states that the sum of the voltage drops equals the voltage supplied by the
voltage source (such as a battery). Mathematically,
𝑉(𝑡) = 𝑉𝑅 + 𝑉𝐿 + 𝑉𝐶
Making substitutions,
𝑉(𝑡) = 𝐼𝑅 + 𝐿 ∙ (𝑑𝐼/𝑑𝑡) + 𝑄/𝐶
Since 𝐼 = 𝑑𝑄/𝑑𝑡, it follows that,
𝑉(𝑡) = 𝑅 ∙ (𝑑𝑄/𝑑𝑡) + 𝐿 ∙ (𝑑²𝑄/𝑑𝑡²) + (1/𝐶)𝑄
𝐿 ∙ (𝑑²𝑄/𝑑𝑡²) + 𝑅 ∙ (𝑑𝑄/𝑑𝑡) + (1/𝐶)𝑄 = 𝑉(𝑡)
Differentiating both sides,
𝑑/𝑑𝑡 [𝐿 ∙ (𝑑²𝑄/𝑑𝑡²) + 𝑅 ∙ (𝑑𝑄/𝑑𝑡) + (1/𝐶)𝑄] = 𝑑/𝑑𝑡 [𝑉(𝑡)]
𝐿 ∙ (𝑑³𝑄/𝑑𝑡³) + 𝑅 ∙ (𝑑²𝑄/𝑑𝑡²) + (1/𝐶) ∙ (𝑑𝑄/𝑑𝑡) = 𝑉′(𝑡)
Since 𝐼 = 𝑑𝑄/𝑑𝑡,
𝐿 ∙ (𝑑²𝐼/𝑑𝑡²) + 𝑅 ∙ (𝑑𝐼/𝑑𝑡) + (1/𝐶)𝐼 = 𝑉′(𝑡)
If we parallel this differential equation to that of the spring-mass system, it follows that the resistor can be thought of as incorporating a “damping” effect in the charge across the capacitor. Furthermore, the larger the capacitance, the smaller the “springiness” of the charge.
To better understand this, suppose we have a circuit with 𝑅 = 24 Ohms, 𝐿 = 2 Henrys, and a
capacitor with 0.005 Farads. The power source supplies 𝑉(𝑡) = 12sin(10𝑡) volts. Initially, the
capacitor has a charge of 𝑄(0) = 0.1 Coulombs and a current of 𝐼(0) = 0.
The model is:
2 ∙ (𝑑²𝑄/𝑑𝑡²) + 24 ∙ (𝑑𝑄/𝑑𝑡) + (1/0.005)𝑄 = 12 sin(10𝑡)
Dividing through by 2,
𝑑²𝑄/𝑑𝑡² + 12 ∙ (𝑑𝑄/𝑑𝑡) + 100𝑄 = 6 sin(10𝑡)
Taking transforms, with 𝑄(0) = 0.1 and 𝑄′(0) = 𝐼(0) = 0:
𝑠²ℒ[𝑄] − 0.1𝑠 + 12𝑠ℒ[𝑄] − 1.2 + 100ℒ[𝑄] = 6 ∙ 10/(𝑠² + 10²)
ℒ[𝑄](𝑠² + 12𝑠 + 100) = 6 ∙ 10/(𝑠² + 10²) + 0.1𝑠 + 1.2
ℒ[𝑄] = 60/((𝑠² + 10²)(𝑠² + 12𝑠 + 100)) + 0.1𝑠/(𝑠² + 12𝑠 + 100) + 1.2/(𝑠² + 12𝑠 + 100)
Prior to inverting, we will need to do two things: first, perform a partial fraction decomposition on the first term; second, complete the square on 𝑠² + 12𝑠 + 100. Matching coefficients (or using technology), the decomposition works out exactly:
60/((𝑠² + 10²)(𝑠² + 12𝑠 + 100)) = (0.05𝑠 + 0.6)/(𝑠² + 12𝑠 + 100) − 0.05𝑠/(𝑠² + 10²)
Completing the square on 𝑠² + 12𝑠 + 100, we get (𝑠 + 6)² + 64, leaving us with:
ℒ[𝑄] = (0.05𝑠 + 0.6)/((𝑠 + 6)² + 8²) − 0.05 ∙ 𝑠/(𝑠² + 10²) + 0.1𝑠/((𝑠 + 6)² + 8²) + 1.2/((𝑠 + 6)² + 8²)
At this point, the algebra gets a bit cumbersome. The first, third, and fourth terms have a common denominator, so we combine their numerators to minimize the amount of algebra that needs to be performed as part of the inversion process:
(0.05𝑠 + 0.6) + 0.1𝑠 + 1.2 = 0.15𝑠 + 1.8 = 0.15(𝑠 + 12)
Thus,
ℒ[𝑄] = 0.15 ∙ (𝑠 + 12)/((𝑠 + 6)² + 8²) − 0.05 ∙ 𝑠/(𝑠² + 10²)
After careful inspection, we observe that the first term will invert into the product of an exponential function and a sinusoidal function, but its numerator must contain 𝑠 + 6 to match the quadratic term in the denominator; the second term already has the form of a non-decaying cosine function. The first term requires the add-one trick, so we add and subtract 6 in the numerator, writing 𝑠 + 12 = (𝑠 + 6) + 6:
ℒ[𝑄] = 0.15 ∙ (𝑠 + 6)/((𝑠 + 6)² + 8²) + 0.9 ∙ 1/((𝑠 + 6)² + 8²) − 0.05 ∙ 𝑠/(𝑠² + 10²)
The first term now inverts into 𝑒^(−6𝑡)cos(8𝑡), but the second term needs a constant of 8 in the numerator to invert into 𝑒^(−6𝑡)sin(8𝑡). We multiply it by 8/8:
ℒ[𝑄] = 0.15 ∙ (𝑠 + 6)/((𝑠 + 6)² + 8²) + (0.9/8) ∙ 8/((𝑠 + 6)² + 8²) − 0.05 ∙ 𝑠/(𝑠² + 10²)
Inverting term-by-term,
𝑄(𝑡) = 0.15𝑒^(−6𝑡)cos(8𝑡) + 0.1125𝑒^(−6𝑡)sin(8𝑡) − 0.05 cos(10𝑡)
Initially, we see an aberration in the periodic nature of the charge across the capacitor, after
which, the charge approaches a stable periodic nature. This stable periodic nature is caused by
the forcing function, which, in this context, is the voltage supply. Since the voltage supply
oscillates in the charge it provides, the capacitor’s charge varies, as well.
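The transform manipulation above involves many numeric steps, so solving the same initial-value problem directly with sympy's dsolve offers an independent check (a sketch; not part of the original text):

```python
import sympy as sp

t = sp.symbols("t")
Q = sp.Function("Q")

# d^2Q/dt^2 + 12 dQ/dt + 100 Q = 6 sin(10t),  Q(0) = 0.1,  Q'(0) = 0
ode = sp.Eq(Q(t).diff(t, 2) + 12 * Q(t).diff(t) + 100 * Q(t), 6 * sp.sin(10 * t))
sol = sp.dsolve(ode, Q(t),
                ics={Q(0): sp.Rational(1, 10), Q(t).diff(t).subs(t, 0): 0})
print(sp.expand(sol.rhs))
```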
This method is not significantly simpler than using Laplace transforms and yields the same
result. There are plenty of resources on the web for solving general second-order linear
differential equations with constant coefficients.
Explosions, impacts, and sudden, swift forces can all cause brief but important aberrations in spring-mass systems and electrical circuits (perhaps a voltage or current spike). The theoretical physicist Paul Dirac introduced a function that could, at least theoretically, account for such an impulse.
The impulse function, or Delta function, is not a standard mathematical function, because it consists of, as we will see, an infinitely large output value at a particular point in time that lasts for an infinitely small period of time. This function is often represented by a graph of the following sort:
The graph above shows that the Delta function is valued at 0 everywhere except at (in this case) 𝑡 = 0, where the output signal is a spike. How, then, could we think about such a function?
Suppose we take a spring-mass system and hit the mass with a hammer at time 𝑡 = 2.
𝑑²𝑦/𝑑𝑡² + 5𝑦 = 𝐹(𝑡)
The function, 𝐹(𝑡), requires some sort of mathematical representation. For now, we will define it as follows:
𝐹(𝑡) = { arbitrarily large, when 𝑡 = 2; 0, otherwise
This, of course, provides only a qualitative description of the system, since “arbitrarily large” is not quite something we can work with in practice. Since we cannot say that 𝐹(𝑡) = ∞ when 𝑡 = 2, we will be required to start with something more tangible.
Let’s define a similar function (for starters) that is 0 everywhere, except for a small interval of
time. We will call this function 𝑔Δ𝑡 and define it as follows:
𝑘, when 2 − Δ𝑡 ≤ 𝑡 ≤ 2 + Δ𝑡
𝑔Δ𝑡 = {
0, otherwise
That is, we will let this function be some value, 𝑘, when we are within Δ𝑡 units of 𝑡 = 2 and it
will be equal to 0 everywhere else. Graphically,
For ease of computation, we choose to calculate 𝑘 such that the area between 2 − Δ𝑡 and 2 + Δ𝑡 is precisely 1. Since the region is rectangular, the area is the product of the height, 𝑘, and the width, which is 2 + Δ𝑡 − (2 − Δ𝑡) = 2Δ𝑡, of the rectangle.
Thus,
𝑘 ∙ 2Δ𝑡 = 1
, so,
𝑘 = 1/(2Δ𝑡)
We can now update our function with this more mathematical representation:
𝑔_Δ𝑡 = { 1/(2Δ𝑡), when 2 − Δ𝑡 ≤ 𝑡 ≤ 2 + Δ𝑡; 0, otherwise
We really desire for Δ𝑡 to be very close to 0, so that we can simulate the sudden impact of a
hammer hitting the mass. That is, we want,
lim(Δ𝑡→0) 𝑔_Δ𝑡
Our answer at 𝑡 = 2 is clearly infinity, since smaller widths require taller heights in order to keep the area fixed at 1.
Taking the Laplace transform of 𝑔_Δ𝑡:
ℒ[𝑔_Δ𝑡] = ∫₀^∞ 𝑔_Δ𝑡 ∙ 𝑒^(−𝑠𝑡) 𝑑𝑡
= ∫ from 2−Δ𝑡 to 2+Δ𝑡 of (1/(2Δ𝑡)) ∙ 𝑒^(−𝑠𝑡) 𝑑𝑡
Since Δ𝑡 is a constant with respect to 𝑡 (it does not change with respect to change in time), we can factor it out:
= (1/(2Δ𝑡)) ∫ from 2−Δ𝑡 to 2+Δ𝑡 of 𝑒^(−𝑠𝑡) 𝑑𝑡
= (1/(2Δ𝑡)) ∙ (−1/𝑠) ∙ 𝑒^(−𝑠𝑡) | from 2−Δ𝑡 to 2+Δ𝑡
= (1/(2Δ𝑡𝑠)) ∙ (𝑒^(−𝑠(2−Δ𝑡)) − 𝑒^(−𝑠(2+Δ𝑡)))
Evaluating at the limits of integration:

= (1/(2Δ𝑡)) ∙ (−1/𝑠) ∙ (𝑒^(−𝑠(2+Δ𝑡)) − 𝑒^(−𝑠(2−Δ𝑡)))

Factoring out 𝑒^(−2𝑠)/𝑠:

= (𝑒^(−2𝑠)/𝑠) ∙ (𝑒^(𝑠Δ𝑡) − 𝑒^(−𝑠Δ𝑡))/(2Δ𝑡)

The goal here is to let Δ𝑡 → 0. That is, we want the limit of this transform as Δ𝑡 goes to 0:

lim_(Δ𝑡→0) ℒ[𝑔_Δ𝑡] = lim_(Δ𝑡→0) (𝑒^(−2𝑠)/𝑠) ∙ (𝑒^(𝑠Δ𝑡) − 𝑒^(−𝑠Δ𝑡))/(2Δ𝑡)

The first factor, 𝑒^(−2𝑠)/𝑠, is constant with respect to Δ𝑡, so we can pass it outside the limit:

= (𝑒^(−2𝑠)/𝑠) ∙ lim_(Δ𝑡→0) (𝑒^(𝑠Δ𝑡) − 𝑒^(−𝑠Δ𝑡))/(2Δ𝑡)
We now need to use our understanding of limits to evaluate the remaining limit. We can assess
the limit graphically. We see that the limit appears to be 𝑠 (a better way to see this might be to
substitute various values of 𝑠 so that there are no parameters in the expression)! We can update
our transform:

lim_(Δ𝑡→0) ℒ[𝑔_Δ𝑡] = (𝑒^(−2𝑠)/𝑠) ∙ 𝑠 = 𝑒^(−2𝑠)

Since this computation is particular to 𝑡 = 2, we can generalize: the constant multiplying −𝑠 in
the exponent is the time at which the impulse occurs. We give this limiting function a new name,
the Delta function, 𝛿_𝑎(𝑡), and record its transform:

ℒ[𝛿_𝑎(𝑡)] = 𝑒^(−𝑎𝑠)

That is, the Laplace transform of the Delta function with impulse force at time 𝑡 = 𝑎 is 𝑒^(−𝑎𝑠).
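As a sanity check (my own, not part of the text), we can reproduce the whole computation symbolically with SymPy: integrate the pulse's transform exactly, then let the width Δ𝑡 shrink to zero:

```python
import sympy as sp

t, s, dt = sp.symbols('t s Delta_t', positive=True)

# Laplace transform of the pulse of height 1/(2*dt) centered at t = 2
L_g = sp.integrate(sp.exp(-s * t) / (2 * dt), (t, 2 - dt, 2 + dt))

# Shrink the pulse width to zero
result = sp.limit(L_g, dt, 0)
print(sp.simplify(result))   # exp(-2*s)
```

The symbolic limit agrees with the hand computation: 𝑒^(−2𝑠).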
This result is quite amazing! We started with a concept, made a mathematical model for it, and
produced a continuous Laplace transform!
As a side note, the above limit could also have been computed by applying L'Hospital's Rule:
recall that if a limit has the indeterminate form 0/0, then the derivative of the numerator and of
the denominator can be taken until the indeterminacy is resolved (which here occurs after the
first derivative is taken).
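Carrying that suggestion out (a short derivation added here for completeness): at Δ𝑡 = 0 the quotient has the form 0/0, so differentiating the numerator and denominator with respect to Δ𝑡 gives

```latex
\lim_{\Delta t \to 0} \frac{e^{s\,\Delta t} - e^{-s\,\Delta t}}{2\,\Delta t}
= \lim_{\Delta t \to 0} \frac{s\,e^{s\,\Delta t} + s\,e^{-s\,\Delta t}}{2}
= \frac{s + s}{2} = s
```

which confirms the value of 𝑠 obtained graphically.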
We now return to our spring-mass system:

𝑑²𝑦/𝑑𝑡² + 5𝑦 = 𝐹(𝑡)
We now know that 𝐹(𝑡) = 𝛿₂(𝑡), so that the impulse force occurs instantaneously at 𝑡 = 2.
Further, we will assume that the mass is in motion at 𝑡 = 0, so that 𝑦(0) = 1 and 𝑦′(0) = −1.
We proceed to compute the Laplace transform:
𝑠²ℒ[𝑦] − 𝑠𝑦(0) − 𝑦′(0) + 5ℒ[𝑦] = 𝑒^(−2𝑠)

𝑠²ℒ[𝑦] − 𝑠 + 1 + 5ℒ[𝑦] = 𝑒^(−2𝑠)

ℒ[𝑦](𝑠² + 5) = 𝑒^(−2𝑠) + 𝑠 − 1

ℒ[𝑦] = 𝑒^(−2𝑠)/(𝑠² + 5) + 𝑠/(𝑠² + 5) − 1/(𝑠² + 5)
We recognize the second term to be a cosine function with 𝑎² = 5 → 𝑎 = √5, and the third term,
once scaled by √5/√5, is the sine function with 𝑎² = 5 → 𝑎 = √5. The first term is one
that we have seen before – it is a function involving the product of 𝑒^(−𝑎𝑠) and the Laplace
transform of another function (which we recognize to be sine, after we scale it by a factor of √5/√5).
We recall the following formula:

ℒ[𝑢_𝑎(𝑡) ∙ 𝑓(𝑡 − 𝑎)] = 𝑒^(−𝑎𝑠) ∙ ℒ[𝑓(𝑡)]

Here, 𝑎 = 2 and the function is 𝑓(𝑡) = sin(√5𝑡), so 𝑓(𝑡 − 2) = sin(√5(𝑡 − 2)). Thus,

ℒ⁻¹[𝑒^(−2𝑠) ∙ 1/(𝑠² + 5)] = (1/√5) ∙ 𝑢₂(𝑡) ∙ sin(√5(𝑡 − 2))
Putting the pieces together:

𝑦(𝑡) = (1/√5) 𝑢₂(𝑡) ∙ sin(√5(𝑡 − 2)) + cos(√5𝑡) − (1/√5) sin(√5𝑡)

Written piecewise, using that 𝑢₂(𝑡) = 0 for 𝑡 < 2 and 𝑢₂(𝑡) = 1 for 𝑡 ≥ 2:

𝑦(𝑡) = { cos(√5𝑡) − (1/√5) sin(√5𝑡), when 𝑡 < 2; (1/√5) sin(√5(𝑡 − 2)) + cos(√5𝑡) − (1/√5) sin(√5𝑡), when 𝑡 ≥ 2 }
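To double-check the closed form (a verification sketch of my own, using SymPy's `Heaviside` to play the role of the unit step 𝑢₂(𝑡)), we can confirm that this 𝑦(𝑡) satisfies both initial conditions and the homogeneous equation 𝑦″ + 5𝑦 = 0 away from the impulse:

```python
import sympy as sp

t = sp.symbols('t')
r5 = sp.sqrt(5)

# The solution assembled above; note 1/sqrt(5) = sin(...)/r5
y = (sp.sin(r5 * (t - 2)) / r5) * sp.Heaviside(t - 2) \
    + sp.cos(r5 * t) - sp.sin(r5 * t) / r5

print(y.subs(t, 0))              # initial position y(0) = 1
print(sp.diff(y, t).subs(t, 0))  # initial velocity y'(0) = -1

# Away from t = 2 the forcing is zero, so y'' + 5y should vanish there
residual = sp.diff(y, t, 2) + 5 * y
print(sp.simplify(residual.subs(t, 3)))  # 0
```

Both initial conditions check out, and the residual vanishes away from 𝑡 = 2.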
Producing a graph:
We notice a couple of interesting things. For one, the impulse force at time 𝑡 = 2 seems to assist
the mass in compressing. At first, the hammer blow instantly re-stretches the spring slightly, but
the added momentum then carries the mass through a larger swing on each subsequent cycle.
Secondly, we notice that the impulse increased the amplitude of the sinusoid. We can think of the
impulse force as having increased the maximum distances the spring would stretch and
compress, assuming no damping force.
Interesting Cases
We can now begin to ask more particular questions about the hammer-to-mass simulation.
Suppose the blow were 10 times stronger and opposite in direction to the original force. We
could easily update our model to become:
𝑑²𝑦/𝑑𝑡² + 5𝑦 = −10𝛿₂(𝑡)
As it turns out, depending on the magnitude of the impulse force, the direction of the impulse
force, and where in its cycle the spring-mass is, the effects of the impulse force may be radically
different.
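By linearity of the Laplace transform, the only change to the solution is that the impulse term is scaled by −10: 𝑦(𝑡) = −(10/√5) 𝑢₂(𝑡) sin(√5(𝑡 − 2)) + cos(√5𝑡) − (1/√5) sin(√5𝑡). As a quick check (again my own SymPy sketch, not part of the text), the initial conditions are unchanged while the velocity now jumps by −10 at 𝑡 = 2 instead of +1:

```python
import sympy as sp

t = sp.symbols('t')
r5 = sp.sqrt(5)

# Same homogeneous response; the impulse response is scaled by -10
y = -10 * (sp.sin(r5 * (t - 2)) / r5) * sp.Heaviside(t - 2) \
    + sp.cos(r5 * t) - sp.sin(r5 * t) / r5

print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 1 -1 (unchanged)

# Velocity just after vs. just before the blow: substitute the step
# and delta symbols by their one-sided values, then evaluate at t = 2
v = sp.diff(y, t)
v_right = v.subs(sp.Heaviside(t - 2), 1).subs(sp.DiracDelta(t - 2), 0).subs(t, 2)
v_left = v.subs(sp.Heaviside(t - 2), 0).subs(sp.DiracDelta(t - 2), 0).subs(t, 2)
print(sp.simplify(v_right - v_left))  # -10
```

The velocity jump equals the impulse strength, which is exactly why the effect on the oscillation depends so strongly on where in its cycle the mass is when the blow lands.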