Appendix I - Numerical Solution of Ordinary Differential Equations
This appendix gives an overview of numerical solution techniques for ordinary differential equations, which one might consider
in developing a general-purpose program, such as the EMTP. Since power system networks are
mostly linear, techniques for linear ordinary differential equations are given special emphasis.
I.1 Closed Form Solution
Let us assume that the differential equations are written in "state-variable form," and that
the equations are linear,
\frac{d[x]}{dt} = [A][x] + [g(t)]   (I.1)

with
constant square matrix [A], and a vector of known forcing functions [g(t)]. There is no unique way
of writing equations in state variable form, but it is common practice to choose currents in
inductances and voltages across capacitances as state variables. For example, Eq. (I.1) could have
the following form for the network of Fig. I.1:
\frac{d}{dt} \begin{bmatrix} i \\ v_c \end{bmatrix} =
\begin{bmatrix} -R/L & -1/L \\ 1/C & 0 \end{bmatrix}
\begin{bmatrix} i \\ v_c \end{bmatrix} +
\begin{bmatrix} v(t)/L \\ 0 \end{bmatrix}   (I.2)
With Laplace transform methods, especially when one output is expressed as a function of
one input, the system is often described as one nth-order differential equation, e.g., for the example
of Fig. I.1 in the form
L \frac{d^2 i}{dt^2} + R \frac{di}{dt} + \frac{1}{C} i = \frac{dv(t)}{dt}
Such an nth-order differential equation can of course always be rewritten as a system of n first-order
differential equations, by introducing extra variables x_2 = dx_1/dt, x_3 = dx_2/dt, ..., x_n = dx_{n-1}/dt, for the
higher-order derivatives, with x_1 = x. In the example, with x_1 = i and x_2 = di/dt,
\frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -1/(LC) & -R/L \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ \frac{1}{L}\frac{dv(t)}{dt} \end{bmatrix}
produces another state-variable formulation for this example. While [A] of this formulation differs
from that in Eq. (I.2), its eigenvalues are the same.
The closed-form solution of Eq. (I.1), which carries us from the state of the system at t-Δt to
that at t, is

[x(t)] = e^{[A]\Delta t}[x(t-\Delta t)] + \int_{t-\Delta t}^{t} e^{[A](t-u)}[g(u)]\,du   (I.3)

where the matrix e^{[A]Δt} is called the "transition matrix." Eq. (I.3) contains the case where [x(t)] is
simply desired as a function of t and of the initial conditions, by setting Δt = t. The computational task lies in finding this
transition matrix. Since there is no closed-form solution for the matrix exponential e^{[A]Δt}, the way
out is to transform this matrix to a diagonal matrix, whose elements can easily be evaluated by
using the eigenvalues λ_i of [A] and the matrix of eigenvectors (modal matrix) [M] of [A]. A recommended method for finding eigenvalues is the "QR
transformation" due to J.G.F. Francis [3], and for finding eigenvectors the "inverse iteration scheme"
due to J.H. Wilkinson [4], which has also been described in modified form by J.E. Van Ness [5]. With
[Λ] and [M] known, where [Λ] is the diagonal matrix of eigenvalues λ_i, e^{[A]Δt} is diagonalized^1,

[M]^{-1} e^{[A]\Delta t} [M] = [e^{\Lambda \Delta t}]
Once the diagonal elements e^{λ_i Δt} have been found, this can be converted back to give

e^{[A]\Delta t} = [M] [e^{\Lambda \Delta t}] [M]^{-1}   (I.4)

where [e^{ΛΔt}] is the diagonal matrix with elements e^{λ_i Δt}. Inserting Eq. (I.4) into Eq. (I.3) then gives

[x(t)] = [M][e^{\Lambda \Delta t}][M]^{-1}[x(t-\Delta t)] + [M] \int_{t-\Delta t}^{t} [e^{\Lambda(t-u)}][M]^{-1}[g(u)]\,du   (I.5)
The
"convolution integral" in Eq. (I.5) can be evaluated in closed form for many types of functions [g(t)].
For the network of Fig. I.1, the eigenvalues can be obtained by setting the determinant of
[A] - λ[U] to zero ([U] = identity matrix),

\det \begin{bmatrix} -R/L - \lambda & -1/L \\ 1/C & -\lambda \end{bmatrix} = 0

or

\lambda_{1,2} = -\frac{R}{2L} \pm \sqrt{ \left( \frac{R}{2L} \right)^2 - \frac{1}{LC} }   (I.6)

If R < 2√(L/C),
then the system is underdamped^2, and the argument under the square root will be negative, giving
^1 If [M] diagonalizes [A], it will also diagonalize e^{[A]Δt}. The matrix exponential is defined as the
series of Eq. (I.13), and then one simply has to show that [M] not only diagonalizes [A], but all
positive powers [A]^n as well. Since [A] = [M][Λ][M]^{-1}, it follows that [A]^n = ([M][Λ][M]^{-1})([M][Λ][M]^{-1}) ... ([M][Λ][M]^{-1}) = [M][Λ]^n[M]^{-1}. Therefore, [M]^{-1}[A]^n[M] = [Λ]^n is again diagonal.
^2 If R > 2√(L/C), then the system is overdamped, giving two real eigenvalues. The critically
damped case of R = 2√(L/C) seldom occurs in practice; it leads to two identical eigenvalues.
This latter case of "multiple eigenvalues" may require special treatment, which is not discussed
here.
\lambda_{1,2} = \sigma \pm j\omega, \quad \text{with} \quad \sigma = -\frac{R}{2L}, \quad \omega = \sqrt{ \frac{1}{LC} - \left( \frac{R}{2L} \right)^2 }   (I.7)
For the numerical values R = 1, L = 1, C = 1 (in consistent units), Eq. (I.6) gives

\lambda_{1,2} = -\frac{1}{2} \pm j\frac{\sqrt{3}}{2} = e^{\pm j120°}

and

[M] = \begin{bmatrix} 1 & 1 \\ e^{-j120°} & e^{+j120°} \end{bmatrix}, \quad
[M]^{-1} = \frac{1}{j\sqrt{3}} \begin{bmatrix} e^{+j120°} & -1 \\ -e^{-j120°} & 1 \end{bmatrix}
If we set Δt = t to obtain the state variables simply as a function of time and of the initial conditions,
then Eq. (I.5) becomes

\begin{bmatrix} i(t) \\ v_c(t) \end{bmatrix} =
e^{\sigma t} \begin{bmatrix} \cos\omega t - \frac{1}{\sqrt{3}}\sin\omega t & -\frac{2}{\sqrt{3}}\sin\omega t \\ \frac{2}{\sqrt{3}}\sin\omega t & \cos\omega t + \frac{1}{\sqrt{3}}\sin\omega t \end{bmatrix}
\begin{bmatrix} i(0) \\ v_c(0) \end{bmatrix}
+ \int_{0}^{t} e^{\sigma(t-u)} \begin{bmatrix} \left[ \cos\omega(t-u) - \frac{1}{\sqrt{3}}\sin\omega(t-u) \right] v(u) \\ \frac{2}{\sqrt{3}}\sin\omega(t-u)\,v(u) \end{bmatrix} du   (I.8)

with σ and ω as defined in Eq. (I.7). If we were to assume that the voltage source is zero and that v_c(0) = 1.0 p.u.,
then we would have the case of discharging the capacitor through R-L, and from Eq. (I.8) we would
immediately get (realizing that i(0) = 0),
i(t) = -\frac{2}{\sqrt{3}} e^{\sigma t} \sin\omega t

v_c(t) = e^{\sigma t} \left( \cos\omega t + \frac{1}{\sqrt{3}} \sin\omega t \right)
Could such a closed-form solution be used in an EMTP? For networks of moderate size, it probably
could. J.E. Van Ness had no difficulties finding eigenvalues and eigenvectors in systems of up to
120 state variables [5]. If the network contains switches which frequently change their position,
then its implementation would probably become very tricky. Combining it with Bergeron's method
for distributed-parameter lines, or with more sophisticated convolution methods for lines with
frequency-dependent parameters, should in principle be possible. Where the method becomes
almost unmanageable, or useless, is in networks with nonlinear elements. Another difficulty would
arise with the state-variable formulation, because Eq. (I.1) cannot as easily be assembled by a
computer as the node equations used in the EMTP. This difficulty could be overcome, however,
since there are ways of using node equations even for state-variable formulations, by
distinguishing node types according to the types of branches (R, L, or C) connected to them.
Where do Laplace transform methods fit into this discussion since they provide closed-form
solutions as well? To quote F.H. Branin [2], "...traditional methods for hand solution of networks are
not necessarily best for use on a computer with networks of much greater size. ... The Laplace
transform techniques fit this category and should at least be supplemented, if not supplanted, by
numerical methods better adapted to the computer." He then goes on to show that essentially all
of the information obtainable by Laplace transforms is already contained in the eigenvalues and
eigenvectors of [A]. It is surprising that very few, if any, textbooks show this relationship. The
Laplace transform of Eq. (I.1) is

s[X(s)] - [x(0)] = [A][X(s)] + [G(s)]   (I.9a)

with [X(s)] and [G(s)] being the Laplace transforms of [x(t)] and [g(t)], or

(s[U] - [A])[X(s)] = [x(0)] + [G(s)]   (I.9b)

rewritten,

[X(s)] = \left( s[U] - [A] \right)^{-1} \left( [x(0)] + [G(s)] \right)   (I.10)
The computational task in Eq. (I.10) is the determination of the inverse of (s[U] - [A]). The key to doing
this efficiently is again through the eigenvalues and eigenvectors of [A]. With that information, the
matrix (s[U] - [A]) is diagonalized,

[M]^{-1} (s[U] - [A]) [M] = s[U] - [\Lambda]   (I.11)

and then the inverse becomes

(s[U] - [A])^{-1} = [M] (s[U] - [\Lambda])^{-1} [M]^{-1}   (I.12)

in which the inverse on the right-hand side is now trivial to calculate since (s[U] - [Λ]) is a diagonal
matrix (that is, one simply takes the reciprocals of the diagonal elements). To quote again from F.H.
Branin [2], "...one of the more interesting features of this method is the fact that it is far better
suited for computer-sized problems than the traditional Laplace transform techniques involving
ratios of polynomials and the poles and zeros thereof. ... The computation of the
coefficients of the polynomials in a network function P(s)/Q(s) is not only time-consuming but also
prone to serious numerical inaccuracies, especially when the polynomials are of a high degree.
The so-called "topological" formula approach [25] to computing these network functions involves
finding all the trees of a network and then computing the sum of the corresponding tree-admittance products. But the number of trees may run into millions for a network with only 20
nodes and 40 branches. And even if this were not enough of an impediment, the computation of
the roots of the polynomials P(s) and Q(s) is hazardous because these roots may be extremely
sensitive to errors in the coefficients. In the writer's judgment, therefore, the polynomial approach
is just not matched to the network analysis tasks which the computer is called upon to handle. The
eigenvalue approach is much better suited and gives all of the theoretical information that the
Laplace transform methods are designated to provide. For example, the eigenvalues are identical
with the poles of the network functions. Moreover, any network function desired may be computed
straightforwardly and its sensitivity obtained, either with respect to frequency or with respect to
any network parameter. Finally, even the pole sensitivities can be calculated..."
I.2 Taylor Series Approximation of Transition Matrix
The matrix exponential e[A]t can be approximated by a power series, derived from a Taylor
series expansion,
e^{[A]\Delta t} = [U] + \Delta t\,[A] + \frac{\Delta t^2}{2!}[A]^2 + \frac{\Delta t^3}{3!}[A]^3 + \frac{\Delta t^4}{4!}[A]^4 + \dots   (I.13)

This series converges reasonably fast only if Δt is small enough: if the system has a large
eigenvalue (which means a small time constant), the integration step Δt "must be kept small in
order to permit rapid convergence of Eq. (I.13)" [2]. This refers to the problem encountered in "stiff
systems", where there are large differences between the magnitudes of eigenvalues, and where
the largest eigenvalues produce "ripples" of little interest to the engineer, who is more interested in
the slower changes dictated by the smaller eigenvalues, as indicated in Fig. I.2. The method of
using Eq. (I.13) becomes numerically unstable, for a given finite number of terms, if Δt is not
sufficiently small to trace the small, uninteresting ripples. It is, therefore, not a practical method
for an EMTP. The truncated series has the same type of numerical
instability as the Runge-Kutta method discussed in Section I.5, which is not too surprising, since
this method becomes identical with the fourth-order Runge-Kutta method if 5th and higher-order
terms are neglected in Eq. (I.13), at least if the forcing function [g(t)] in Eq. (I.1) is zero
("autonomous system"), as further explained in Section I.5. Since this method is not practical,
more details such as the handling of the convolution integral in Eq. (I.3) are not discussed.
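To make the convergence problem concrete, the scalar analogue of Eq. (I.13) can be evaluated for one eigenvalue at a time; the following sketch (eigenvalues and step size are assumed test values) shows how a stiff eigenvalue defeats a truncated series:

# Sketch: truncated Taylor series of e^(lambda*dt), the scalar analogue of
# Eq. (I.13), truncated after 5 terms (comparable to 4th-order Runge-Kutta).
import math

def taylor_exp(z, n_terms):
    """Sum of the first n_terms of the exponential series at z."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)
    return total

dt = 0.01
for lam in (-1.0, -1000.0):         # a slow and a fast (stiff) eigenvalue
    z = lam * dt
    print(f"lambda = {lam:7.1f}: 5-term series = {taylor_exp(z, 5):12.4g}, "
          f"exact = {math.exp(z):.4g}")
# For lambda = -1000 (z = -10) the 5-term series gives about 291 instead of
# e^-10 = 4.5e-5: dt would have to shrink ~1000-fold just for this one mode.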
I.3 Rational Approximation of Transition Matrix
A rational approximation for the matrix exponential, which is numerically stable and
therefore much better than Eq. (I.13), is due to E.J. Davison^3 [6],
^3 This was pointed out to the writer by K.N. Stanton when he was at Purdue University (now
President of ESCA Corp. in Seattle).
e^{[A]\Delta t} \approx \left( [U] - \frac{\Delta t}{2}[A] + \frac{\Delta t^2}{4}[A]^2 - \frac{\Delta t^3}{12}[A]^3 \right)^{-1} \left( [U] + \frac{\Delta t}{2}[A] + \frac{\Delta t^2}{4}[A]^2 + \frac{\Delta t^3}{12}[A]^3 \right)   (I.14)

A lower-order rational approximation, which is also numerically stable for all Δt, neglects the second-
and higher-order terms in Eq. (I.14),
e^{[A]\Delta t} \approx \left( [U] - \frac{\Delta t}{2}[A] \right)^{-1} \left( [U] + \frac{\Delta t}{2}[A] \right)   (I.15)

This is identical with the trapezoidal rule of integration discussed in the following section.
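For a scalar eigenvalue λ, both approximations become rational functions of z = λΔt whose magnitude must stay below 1 for Re(λ) < 0 if the method is to be numerically stable. A small sketch (test values assumed):

# Sketch: scalar stability check of the rational approximations to e^z.
import cmath

def r_low(z):        # Eq. (I.15), the trapezoidal rule
    return (1 + z / 2) / (1 - z / 2)

def r_high(z):       # Eq. (I.14), Davison's higher-order approximation
    num = 1 + z / 2 + z**2 / 4 + z**3 / 12
    den = 1 - z / 2 + z**2 / 4 - z**3 / 12
    return num / den

for z in (-0.1, -1.0, -10.0, complex(-1, 5), complex(-0.01, 50)):
    print(f"z = {z!s:>10}: |e^z| = {abs(cmath.exp(z)):9.3e}, "
          f"|I.15| = {abs(r_low(z)):.3f}, |I.14| = {abs(r_high(z)):.3f}")
# Both magnitudes stay at or below 1 whenever Re(z) <= 0 (numerical
# stability), while Eq. (I.14) follows e^z more accurately for small |z|.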
Would it be worthwhile to improve the accuracy of the EMTP, which now uses the
trapezoidal rule, with the higher-order rational approximation of Eq. (I.14)?
This is a difficult
question to answer. First of all, the EMTP is not based on state-variable formulations, and it is
doubtful whether this method could be applied to individual branch equations as easily as the
trapezoidal rule (see Section 1). Furthermore, if sparsity is to be exploited, much of the sparsity in
[A] could be destroyed when the higher-order terms are added in Eq. (I.14). By and large, however,
the writer would look favorably at this method if the objective is to improve the accuracy of EMTP
results, even though it is somewhat unclear how to handle the convolution integral in Eq. (I.3).
I.4 Trapezoidal Rule of Integration
Since this is the method used in the EMTP, the handling of the forcing function [g(t)] in Eq.
(I.1), or analogously the handling of the convolution integral in Eq. (I.3), shall be discussed here.
Let Eq. (I.1) be rewritten as an integral equation,
[x(t)] = [x(t-\Delta t)] + \int_{t-\Delta t}^{t} \left( [A][x(u)] + [g(u)] \right) du   (I.16)

which is still exact. By using linear interpolation on [x] and [g] between t-Δt and t, assuming for the
time being that [x] were known at t (which, in reality, is not true, thereby making the method
"implicit"), we get

[x(t)] = [x(t-\Delta t)] + \frac{\Delta t}{2}[A]\left( [x(t-\Delta t)] + [x(t)] \right) + \frac{\Delta t}{2}\left( [g(t-\Delta t)] + [g(t)] \right)   (I.17)
Linear interpolation implies that the areas under the integral of Eq. (I.16) are approximated by
trapezoids (Fig. I.3); therefore the name "trapezoidal rule of integration". The method is identical
with using "central difference quotients" in Eq. (I.1),

\frac{[x(t)] - [x(t-\Delta t)]}{\Delta t} = [A]\,\frac{[x(t)] + [x(t-\Delta t)]}{2} + \frac{[g(t)] + [g(t-\Delta t)]}{2}   (I.18)

and could just as well be called the "method of central difference quotients".

Fig. I.3 - Trapezoidal rule of integration

Eq. (I.17) and (I.18)
can be rewritten as
\left( [U] - \frac{\Delta t}{2}[A] \right)[x(t)] = \left( [U] + \frac{\Delta t}{2}[A] \right)[x(t-\Delta t)] + \frac{\Delta t}{2}\left( [g(t-\Delta t)] + [g(t)] \right)   (I.19)
which, after premultiplication with ([U] - Δt[A]/2)^{-1}, shows that we do indeed get the approximate
transition matrix of Eq. (I.15).
Working with the trapezoidal rule of integration requires the solution of a system of linear,
algebraic equations in each time step. As long as the step size Δt remains unchanged and no
modifications occur because of switching or nonlinear effects, the matrix ([U] - Δt[A]/2) for this
system of equations remains constant. It is therefore best and most efficient to triangularize this
matrix once at the beginning, and again whenever network changes occur, and to perform the
downward operations and backsubstitutions only for the right-hand side inside the time step loop,
using the information contained in the triangularized matrix. The solution process is broken up into
two parts in this scheme, one being the triangularization of the constant matrix, the other one
being the "repeat solution process" for right-hand sides (which is done repeatedly inside the time
step loop). This concept of splitting the solution process into one part for the matrix and a second
part for the right-hand side is seldom mentioned in textbooks, but it is very useful in many power
system analysis problems, not only here, but also in power flow iterations using a triangularized
[Y]-matrix, as well as in short-circuit calculations for generating columns of the inverse of [Y] one at
a time. For more details, see Appendix III.
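The organization described above is easy to sketch with SciPy's LU routines; this is only an illustration of the "triangularize once, repeat-solve inside the loop" idea applied to the state equations (I.19), not the EMTP's actual node-equation code (network data and forcing function are assumptions):

# Sketch: trapezoidal rule of Eq. (I.19) with one-time triangularization.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

R, L, C = 1.0, 1.0, 1.0
A = np.array([[-R / L, -1.0 / L], [1.0 / C, 0.0]])
dt = 0.05
U = np.eye(2)

lu, piv = lu_factor(U - dt / 2 * A)     # triangularize the constant matrix once
rhs_mat = U + dt / 2 * A                # constant right-hand-side matrix

def g(t):                               # forcing vector [v(t)/L, 0]
    return np.array([np.cos(t) / L, 0.0])

x, t = np.array([0.0, 0.0]), 0.0        # i(0) = vc(0) = 0
for _ in range(400):                    # time step loop: repeat solutions only
    rhs = rhs_mat @ x + dt / 2 * (g(t) + g(t + dt))
    x = lu_solve((lu, piv), rhs)        # downward operations + backsubstitution
    t += dt
print("i and vc at t = 20 s:", x)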
It may not be obvious that the trapezoidal rule applied to the state variable equations (I.1)
leads to the same answers as the trapezoidal rule first applied to individual branch equations,
which are then assembled into node equations, as explained in Section 1. The writer has never
proved it, but suspects that the answers are identical. For the example of Fig. I.1, this can easily be
shown to be true.
The trapezoidal rule of integration is admittedly of lower order accuracy than many other
methods, and it is therefore not much discussed in textbooks. It is numerically stable, however,
which is usually much more important in power system transient analysis than accuracy by itself.
Numerical stability more or less means that the solution does not "blow up" if Δt is too large;
instead, the higher frequencies will be incorrect in the results (in practice, they are usually filtered
out), but the lower frequencies for which the chosen Δt provides an appropriate sampling rate will
still be reasonably accurate. Fig. I.4 illustrates this for the case of a three-phase line energization.
This line was represented as a cascade connection of 18 three-phase nominal π-circuits. The curve
for Δt = 5° (based on f = 60 Hz, i.e., Δt = 231.48 μs) cannot follow some of the fast oscillations
noticeable in the more accurate solution, but the overall accuracy is still acceptable.

Fig. I.4 - Switching surge overvoltage at the receiving end in a three-phase open-ended line

The error between the
exact and approximate value at a particular instant in time is obviously not a good measure by
itself for overall accuracy, or for the usefulness of a method for these types of studies. In Fig. I.4,
an error as large as 0.6 p.u. (at the location of the arrow, assuming that the curve for Δt = 0.5°
gives the exact value) is perfectly acceptable, because the overall shape of the overvoltages is still
represented with sufficient accuracy.
A physical interpretation of the trapezoidal rule of integration for inductances is given in
Section 2.2.1. This interpretation shows that the equations resulting from the trapezoidal rule are
identical with the exact solution of a lossless stub-line, for which the answers are always
numerically stable.

I.5 Runge-Kutta Methods

Runge-Kutta methods can be used for any system of ordinary differential equations of the general form

\left[ \frac{dx}{dt} \right] = [f([x], t)]   (I.20)
There are many variants of the Runge-Kutta method, but the one most widely used appears to be
the following fourth-order method: Starting from the known value [x(t-Δt)], the slope is calculated
at the point 0 (Fig. I.5(a)),

\frac{[\Delta x^{(1)}]}{\Delta t} = [f([x(t-\Delta t)], t-\Delta t)]   (I.21a)

and this slope is used to find point 1 in the middle of the interval (Fig. I.5(a)),

[x^{(1)}] = [x(t-\Delta t)] + \frac{1}{2}[\Delta x^{(1)}]   (I.21b)

Then the slope is evaluated at point 1 (Fig. I.5(b)),

\frac{[\Delta x^{(2)}]}{\Delta t} = \left[ f\left( [x^{(1)}], t - \frac{\Delta t}{2} \right) \right]   (I.21c)

and this is used to find point 2, again in the middle of the interval (Fig. I.5(b)),

[x^{(2)}] = [x(t-\Delta t)] + \frac{1}{2}[\Delta x^{(2)}]   (I.21d)

Then the slope is evaluated for a third time, now at midpoint 2 (Fig. I.5(c)),

\frac{[\Delta x^{(3)}]}{\Delta t} = \left[ f\left( [x^{(2)}], t - \frac{\Delta t}{2} \right) \right]   (I.21e)

and this slope is used to find point 3 at the end of the interval (Fig. I.5(c)),

[x^{(3)}] = [x(t-\Delta t)] + [\Delta x^{(3)}]   (I.21f)

where the slope is evaluated for the fourth time (Fig. I.5(d)),

\frac{[\Delta x^{(4)}]}{\Delta t} = [f([x^{(3)}], t)]   (I.21g)

From these four slopes in 0, 1, 2, 3 (Fig. I.5(d)), the final value at t is obtained by using their
weighted averages,

[x(t)] = [x(t-\Delta t)] + \frac{\Delta t}{6} \left( \frac{[\Delta x^{(1)}]}{\Delta t} + 2\frac{[\Delta x^{(2)}]}{\Delta t} + 2\frac{[\Delta x^{(3)}]}{\Delta t} + \frac{[\Delta x^{(4)}]}{\Delta t} \right)   (I.22)
The mathematical derivation of the Runge-Kutta formula is quite involved (see, for example,
[7]). The basic idea is to reproduce the Taylor series expansion up to a certain order with slopes
evaluated at a few sample points (0, 1, 2, 3 in Fig. I.5). There are variants as to the locations of the sample points, and
hence as to the weights assigned to them. There are also lower-order Runge-Kutta methods which
use fewer sample points.
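In code, one integration step of Eq. (I.21a)-(I.21g) and (I.22) takes only a few lines (a generic sketch for any [f] of Eq. (I.20); the test system and step size are assumptions):

# Sketch: one step of the classical fourth-order Runge-Kutta method.
import numpy as np

def rk4_step(f, x_old, t_old, dt):
    dx1 = dt * f(x_old, t_old)                     # slope at point 0, (I.21a)
    dx2 = dt * f(x_old + dx1 / 2, t_old + dt / 2)  # slope at point 1, (I.21c)
    dx3 = dt * f(x_old + dx2 / 2, t_old + dt / 2)  # slope at point 2, (I.21e)
    dx4 = dt * f(x_old + dx3, t_old + dt)          # slope at point 3, (I.21g)
    return x_old + (dx1 + 2 * dx2 + 2 * dx3 + dx4) / 6    # Eq. (I.22)

f = lambda x, t: np.array([x[1], -x[0]])   # oscillator of Eq. (I.30)
x, t, dt = np.array([0.0, 1.0e-4]), 0.0, 0.1
for _ in range(10):
    x = rk4_step(f, x, t, dt)
    t += dt
print("x(1 s) =", x[0], " exact:", 1.0e-4 * np.sin(1.0))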
As already mentioned in Section I.2, the fourth-order Runge-Kutta method of Eq. (I.21) and
(I.22) is identical with the fourth-order Taylor series expansion of the transition matrix if the
differential equations are linear, at least for autonomous systems with [g(t)] = 0 in Eq. (I.1). In
that case, [f([x], t)] = [A][x], and Eq. (I.21a) and (I.21b) become

\frac{[\Delta x^{(1)}]}{\Delta t} = [A][x(t-\Delta t)], \quad [x^{(1)}] = \left( [U] + \frac{\Delta t}{2}[A] \right)[x(t-\Delta t)]

With these values, the second slope becomes

\frac{[\Delta x^{(2)}]}{\Delta t} = \left( [A] + \frac{\Delta t}{2}[A]^2 \right)[x(t-\Delta t)]

and, continuing through Eq. (I.21d) to (I.21f),

[x^{(3)}] = \left( [U] + \Delta t[A] + \frac{\Delta t^2}{2}[A]^2 + \frac{\Delta t^3}{4}[A]^3 \right)[x(t-\Delta t)]

so that the fourth slope becomes

\frac{[\Delta x^{(4)}]}{\Delta t} = \left( [A] + \Delta t[A]^2 + \frac{\Delta t^2}{2}[A]^3 + \frac{\Delta t^3}{4}[A]^4 \right)[x(t-\Delta t)]

Finally, the new value is obtained with Eq. (I.22) as

[x(t)] = \left( [U] + \Delta t[A] + \frac{\Delta t^2}{2}[A]^2 + \frac{\Delta t^3}{6}[A]^3 + \frac{\Delta t^4}{24}[A]^4 \right)[x(t-\Delta t)]

which is indeed identical with the Taylor series approximation of the transition matrix in Eq. (I.13).
(If the slopes are calculated at a number of points and graphically displayed as short lines, one
gets a sketch of the "direction field", as indicated in Fig. I.5(d).)
If [A] is zero in Eq. (I.1), that is, if [x] is simply the integral over the known function [g(t)],
then the fourth-order Runge-Kutta method is identical with Simpson's rule of integration, in which
the curve is approximated as a parabola going through the three known points at t-Δt, t-Δt/2, and t
(Fig. I.6).
The Runge-Kutta method is prone to numerical instability if t is not chosen small enough.
"It becomes painfully slow in the case of problems having a wide spread of eigenvalues. For the
largest eigenvalue (or, equivalently, its reciprocal, the smallest time constant) controls the
permissible size of t. But the smallest eigenvalues (largest time constants) control the network
response and so determine the total length of time over which the integration must be carried out
to characterize the response. In the case of a network with a 1000 to 1 ratio of largest to smallest
eigenvalue, for instance, it might be necessary to take in the order of 1000 times as many
integration steps with the Runge-Kutta method as with some other method which is free of the
minimum time-constant barrier" [2]. This problem is indicated in Fig. I.2: Though the ripples may
be very small in amplitude, they will cause the slopes to point all over the place, destroying the
usefulness of methods based on slopes.
I.6 Predictor-Corrector Methods
These methods can again be used for any system of ordinary differential equations of the
type of Eq. (I.20). To explain the basic idea, let us try to apply the trapezoidal rule to Eq. (I.20),
which would give us
[x^{(h)}] = [x(t-\Delta t)] + \frac{\Delta t}{2} \left( [f([x(t-\Delta t)], t-\Delta t)] + [f([x^{(h-1)}], t)] \right)   (I.23)

In the
linear case discussed in Section I.4, this equation could be solved directly for [x]. In the general
(time-varying or nonlinear) case, this direct solution is no longer possible, and iterative techniques
have to be used. This has already been indicated in Eq. (I.23) by using superscript (h) to indicate
the iteration step; at the same time, the argument "t" has been dropped to simplify the notation.
The iterative technique works as follows:
1. Use a predictor formula, discussed further on, to obtain a "predicted" guess [x^{(0)}] for the solution at time t.
2. In iteration step h (h = 1, 2, ...), insert the approximate solution [x^{(h-1)}] into the right-hand side of Eq. (I.23) to find a "corrected" solution [x^{(h)}].
3. If the difference between [x^{(h)}] and [x^{(h-1)}] is sufficiently small, then the integration from t-Δt to t is completed. Otherwise, return to step 2.
Eq. (I.23) is a second-order corrector formula. To start the iteration process, a predictor
formula is needed for the initial guess [x^{(0)}]. A suitable predictor formula for Eq. (I.23) can be
obtained from the midpoint rule,

[x^{(0)}(t)] = [x(t-2\Delta t)] + 2\Delta t \, [f([x(t-\Delta t)], t-\Delta t)]   (I.24)

or from an open Newton-Cotes-type formula,

[x^{(0)}(t)] = [x(t-3\Delta t)] + \frac{3\Delta t}{2} \left( [f([x(t-\Delta t)], t-\Delta t)] + [f([x(t-2\Delta t)], t-2\Delta t)] \right)   (I.25)
The difference in step 3 of the iteration scheme gives an estimate of the error, which can be
used (a) to decide whether the step size Δt should be decreased (error too large) or could be
increased (error much smaller than required), and (b) to improve the prediction.
It is generally better to shorten the step size Δt than to use the corrector formula repeatedly in step
2 above. In using the error estimate to improve the prediction, it is assumed that the difference
between the predicted and corrected values changes slowly over successive time steps. This "past
experience" can then be used to improve the prediction with a modifier formula. Such a modifier
formula for the predictor of Eq. (I.25) and for the corrector of Eq. (I.23) would be
[x^{(0)}_{improved}(t)] = [x^{(0)}(t)] + \frac{9}{10} \left( [x(t-\Delta t)] - [x^{(0)}(t-\Delta t)] \right)   (I.26)
Besides the second-order methods of Eq. (I.23) to Eq. (I.26), there are of course higher-order
methods. Fourth-order predictor-corrector methods seem to be used most often. Among these are
Milne's method and Hamming's method, with the latter one usually more stable numerically. The
theory underlying all predictor-corrector methods is to pass a polynomial through a number of
points at t, t-t, t-2t, ..., and to use this polynomial for integration. The end-point at t is first
predicted, and then corrected once or more often. The
stability properties of the corrector formula are more important than those of the predictor formula,
because the latter is only used to obtain a first guess and determines primarily the number of
necessary iteration steps. The predictor and corrector formula should be of the same order in the
error terms. There are different classes of predictors: Adams-Bashforth predictors (obtained from
integrating Newton's backward interpolation formulas), Milne-type predictors (obtained from an
open Newton-Cotes forward-integrating formula), and others. Note that those formulas requiring
values at t-2t, or further back, are not "self-starting"; Runge-Kutta methods are sometimes used
with such formulas to build up enough history points.
It is questionable whether non-self-starting high-order predictor-corrector formulas would be
very useful for typical power system transient studies, since waves from distributed-parameter
lines hitting lumped elements look almost like discontinuities to the lumped elements, and would
therefore require a return to second-order predictor-correctors each time a wave arrives. In linear
systems, the second-order corrector of Eq. (I.23) can be solved directly, however, and is then
identical with the trapezoidal rule as used in the EMTP.
I.7 Deferred Approach to the Limit (Richardson Extrapolation and Romberg Integration)
The idea behind these methods is fairly simple. Instead of using higher-order methods, the
second-order trapezoidal rule (either directly with Eq. (I.17) for linear systems, or iteratively with
Eq. (I.23) for more general systems) is used more than once in the interval between t-Δt and t, to
improve the accuracy. Assume that the normal step size Δt is used to find [x^{(1)}] at t from [x(t-Δt)],
as indicated in Fig. I.7. Now repeat the integration with the trapezoidal rule with half
the step size Δt/2, and perform two integration steps to obtain [x^{(2)}]. With the two values [x^{(1)}] and
[x^{(2)}], an intelligent guess can be made as to where the solution would end up if the step size were
decreased more and more. This "extrapolation towards Δt = 0" (Richardson's extrapolation) would
give us a better answer,

[x(t)] = [x^{(2)}] + \frac{1}{3} \left( [x^{(2)}] - [x^{(1)}] \right)   (I.27)

The accuracy can be further improved by repeating the integration between t-Δt and t with 4, 8, 16, ...
intervals. The corresponding extrapolation formula for Δt → 0 is known as "Romberg integration."
Whether any of these extrapolation formulas are worth the extra computational effort in an
EMTP is very difficult to judge. Some numerical analysts seem to feel that these methods look very
promising. They offer an elegant accuracy check as well.
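For the linear case, Eq. (I.27) can be sketched directly with the trapezoidal step of Eq. (I.19) (test system, step size, and zero forcing are assumptions):

# Sketch: Richardson extrapolation, Eq. (I.27), on trapezoidal steps.
import numpy as np

def trap_step(A, x, dt):
    """One trapezoidal step, Eq. (I.19), with [g] = 0."""
    U = np.eye(len(x))
    return np.linalg.solve(U - dt / 2 * A, (U + dt / 2 * A) @ x)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # oscillator of Eq. (I.30)
x0, dt = np.array([0.0, 1.0]), 0.5
exact = np.array([np.sin(dt), np.cos(dt)])

x1 = trap_step(A, x0, dt)                            # one step of size dt
x2 = trap_step(A, trap_step(A, x0, dt / 2), dt / 2)  # two steps of size dt/2
x_extrap = x2 + (x2 - x1) / 3                        # Eq. (I.27)

for name, x in (("dt", x1), ("dt/2", x2), ("extrapolated", x_extrap)):
    print(f"{name:>12}: max error = {np.max(np.abs(x - exact)):.2e}")
# The extrapolated value is markedly more accurate than either trapezoidal
# result, at the cost of three step computations instead of one.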
I.8 Numerical Stability and Implicit Integration
The writer believes that the numerical stability of the trapezoidal rule has been one of the
key factors in making the EMTP such a success. It is therefore worthwhile to expound on this point
somewhat more.
The trapezoidal rule belongs to a class of implicit integration schemes, which have recently
gained favor amongst numerical mathematicians for the solution of "stiff systems", that is, for
systems where the smallest and largest eigenvalues or time constants are orders of magnitude
apart [70]. Most power systems are probably stiff in that sense. While implicit integration schemes
of higher order than the trapezoidal rule are frequently proposed, their usefulness for the EMTP
remains questionable because they are numerically less stable. A fundamental theorem due to
Dahlquist [71] states:
Theorem: Let a multistep method be called A-stable if, when it is applied to the problem
[dx/dt] = λ[x], Re(λ) < 0, it is stable for all Δt > 0. Then (a) an explicit multistep method cannot be
A-stable, (b) the order of an A-stable multistep method cannot exceed 2, and (c) the trapezoidal
rule is the second-order A-stable method with the smallest error constant.

To illustrate what numerical stability means in practice, consider the undamped oscillation described by

\frac{d^2 x}{dt^2} + x = 0, \quad \text{with} \quad x(0) = 0, \; \frac{dx}{dt}(0) = 10^{-4}   (I.28)

with its exact solution

x(t) = 10^{-4} \sin t   (I.29)

The amplitude of 10^{-4} shall be considered as very small by definition. Eq. (I.28) must be rewritten
as a system of first-order differential equations in order to apply any of the numerical solution
techniques,
\begin{bmatrix} dx_1/dt \\ dx_2/dt \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}   (I.30)

with x_1 = x and x_2 = dx/dt. The exact solution can be written in the form

\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = e^{[A]\Delta t} \begin{bmatrix} x_1(t-\Delta t) \\ x_2(t-\Delta t) \end{bmatrix}   (I.30a)

with

[A] = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}   (I.30b)

Applying the trapezoidal rule of Eq. (I.19) to this system gives
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{1 + \frac{\Delta t^2}{4}} \begin{bmatrix} 1 - \frac{\Delta t^2}{4} & \Delta t \\ -\Delta t & 1 - \frac{\Delta t^2}{4} \end{bmatrix} \begin{bmatrix} x_1(t-\Delta t) \\ x_2(t-\Delta t) \end{bmatrix}   (I.31)

It is easy to verify that

x_1^2(t) + x_2^2(t) = x_1^2(t-\Delta t) + x_2^2(t-\Delta t)

holds in Eq. (I.31) for any choice of Δt. Therefore, if the solution is started with the correct initial conditions
x_1^2(0) + x_2^2(0) = 10^{-8}, the solution for x will always lie between -10^{-4} and +10^{-4}, even for step sizes
which are much larger than one cycle of oscillation. The trapezoidal rule simply "steps
across" oscillations which are very fast but of negligible amplitude, without any danger of
numerical instability.
Explicit integration techniques, which include Runge-Kutta methods, are inherently
unstable. They require a step size tailored to the highest frequency or smallest time constant (rule
of thumb: Δt ≤ 0.2 T_min), even though this mode may produce only negligible ripples, with the
overall behavior determined by the larger time constants in stiff systems.
Applying the fourth-order Runge-Kutta method of Eq. (I.21) and (I.22) to Eq. (I.30) gives instead

\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 1 - \frac{\Delta t^2}{2} + \frac{\Delta t^4}{24} & \Delta t - \frac{\Delta t^3}{6} \\ -\Delta t + \frac{\Delta t^3}{6} & 1 - \frac{\Delta t^2}{2} + \frac{\Delta t^4}{24} \end{bmatrix} \begin{bmatrix} x_1(t-\Delta t) \\ x_2(t-\Delta t) \end{bmatrix}   (I.32)
Plotting the curves with a reasonably small Δt, e.g., 6 samples/cycle, reveals that the Runge-Kutta
method of Eq. (I.32) is more accurate at first than the trapezoidal rule, but tends to lose
amplitude later on (Fig. I.8). This is not serious since the ripple is assumed to be unimportant in
the first place. If the step size is increased, however, to Δt > √2/π cycles (the stability limit of the
fourth-order Runge-Kutta method for this example), then the amplitude will eventually grow to
infinity. This is illustrated in Table I.1 for Δt = 1 cycle.
Table I.1 - Numerical solution of Eq. (I.28) with Δt = 1 cycle

t in cycles | exact | trapezoidal rule | Runge-Kutta
     1      |   0   |   0.58·10⁻⁴      |  -0.004
     2      |   0   |  -0.94·10⁻⁴      |  -0.32
     3      |   0   |   0.96·10⁻⁴      |  -18
     4      |   0   |  -0.63·10⁻⁴      |  -590
     5      |   0   |   0.06·10⁻⁴      |   6800
     6      |   0   |   0.53·10⁻⁴      |   2,600,000
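Table I.1 can be reproduced in a few lines by iterating the amplification matrices of Eq. (I.31) and (I.32) (a sketch; Δt = 2π because the oscillator of Eq. (I.30) has a period of one cycle = 2π seconds):

# Sketch: recomputing Table I.1 with dt = 1 cycle = 2*pi.
import numpy as np

dt = 2 * np.pi
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
U = np.eye(2)

T_trap = np.linalg.solve(U - dt / 2 * A, U + dt / 2 * A)        # Eq. (I.31)
T_rk4 = (U + dt * A + dt**2 / 2 * (A @ A)
         + dt**3 / 6 * (A @ A @ A) + dt**4 / 24 * (A @ A @ A @ A))  # Eq. (I.32)

x_trap = np.array([0.0, 1.0e-4])        # x(0) = 0, dx/dt(0) = 1e-4
x_rk4 = x_trap.copy()
for cycle in range(1, 7):
    x_trap = T_trap @ x_trap
    x_rk4 = T_rk4 @ x_rk4
    print(f"t = {cycle} cycle(s): exact = 0, trapezoidal = {x_trap[0]:9.2e}, "
          f"Runge-Kutta = {x_rk4[0]:9.2e}")
# The trapezoidal solution stays bounded within +/-1e-4 (only phase-shifted,
# hence nonzero at full cycles), while the Runge-Kutta solution grows without
# bound because dt lies far outside its stability region.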
Ref. 72 explains that the trapezoidal rule remains numerically stable even in the limiting
case where the time constant T in an equation of the form

T \frac{dx_2}{dt} = K x_1 - x_2   (I.33)

approaches zero. With T = 0, the trapezoidal rule produces

x_2(t) = K x_1(t) + \left( K x_1(t-\Delta t) - x_2(t-\Delta t) \right)   (I.34)

which is the correct answer as long as the solution starts from correct initial conditions K x_1(0) -
x_2(0) = 0. Even a slight error in the initial conditions, K x_1(0) - x_2(0) = ε, will not cause serious
problems. Since Eq. (I.34) just flips the sign of the expression from step to step, the error would
only produce ripples of amplitude ε superimposed on the true solution for x_2.
Semlyen and Dabuleanu suggest an implicit third-order integration scheme for the EMTP, in
which second-order interpolation (parabola) is used through two known points at t - 2 t and t - t,
and through the yet unknown solution point at t [73]. Applying this scheme to Eq. (I.30) produces
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} x_1(t-\Delta t) \\ x_2(t-\Delta t) \end{bmatrix} + \begin{bmatrix} c & d \\ -d & c \end{bmatrix} \begin{bmatrix} x_1(t-2\Delta t) \\ x_2(t-2\Delta t) \end{bmatrix}   (I.35)

with

a = \left( 1 - \frac{40}{144}\Delta t^2 \right) / \det, \quad
b = \frac{13}{12}\Delta t / \det, \quad
c = \frac{5}{144}\Delta t^2 / \det, \quad
d = -\frac{\Delta t}{12} / \det, \quad
\det = 1 + \frac{25}{144}\Delta t^2

Eq.
(I.35) gives indeed higher accuracy than the trapezoidal rule, but only as long as the step size is
reasonably small, and as long as the number of steps is not very large. After 40 cycles, with a step
size of 6 samples/cycle, Eq. (I.35) would produce peaks which have already grown by a factor of
20,000. This indicates that the choice of the step size in Eq. (I.35) is subject to limitations imposed by
numerical stability considerations, whereas the trapezoidal rule is not.
A step size of 6 samples/cycle is not too large for fast oscillations which have no influence on the
overall behavior; the trapezoidal rule simply filters them out. High-order implicit integration
schemes are therefore not as useful for the EMTP as one might be led to believe from recent
literature on implicit integration schemes for stiff systems.
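The quoted growth is easy to confirm by iterating Eq. (I.35) with the coefficients given above (a sketch; 6 samples/cycle means Δt = 2π/6 for the oscillator of Eq. (I.30)):

# Sketch: instability of the third-order scheme of Eq. (I.35) after 40 cycles.
import numpy as np

dt = 2 * np.pi / 6
det = 1 + 25 / 144 * dt**2
a = (1 - 40 / 144 * dt**2) / det
b = 13 / 12 * dt / det
c = 5 / 144 * dt**2 / det
d = -dt / 12 / det
P = np.array([[a, b], [-b, a]])         # multiplies [x(t - dt)]
Q = np.array([[c, d], [-d, c]])         # multiplies [x(t - 2dt)]

x_prev2 = 1e-4 * np.array([np.sin(0.0), np.cos(0.0)])   # exact history points
x_prev1 = 1e-4 * np.array([np.sin(dt), np.cos(dt)])
peak = 0.0
for _ in range(40 * 6):                 # 40 cycles at 6 samples/cycle
    x_new = P @ x_prev1 + Q @ x_prev2
    x_prev2, x_prev1 = x_prev1, x_new
    peak = max(peak, abs(x_new[0]))
print("growth of peak amplitude:", peak / 1e-4)
# The peak grows by a factor on the order of 10^4 (the text quotes 20,000),
# although the solution should stay within +/-1e-4.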
I.9 Backward Euler Method
The major drawback of the trapezoidal rule of integration of Section I.4 is the danger of
numerical oscillations when it is used as a differentiator, e.g., in
v = L di / dt
(I.36)
with current i being the forcing function. A sudden jump in di/dt, which could be caused by current
interruption in a circuit breaker, is then not reproduced as a clean step in the voltage. Instead, the
trapezoidal rule of integration produces undamped numerical oscillations around the correct
answer, as explained in Section 2.2.2. These oscillations can be damped out by adding a parallel
resistor R_p across the inductance. Section 2.2.2 shows that critical damping is achieved if R_p =
2L/Δt. In that case, the "damped" trapezoidal rule of Eq. (2.20) transforms Eq. (I.36) into
v(t) = \frac{L}{\Delta t} \left[ i(t) - i(t-\Delta t) \right]   (I.37)
which is simply the backward Euler method. Therefore, the "critically damped" trapezoidal rule
and the backward Euler method are identical.
In general, the undamped trapezoidal rule is better than the backward Euler method,
because the latter method produces too much damping. It is a good method, however, if it is only
used for a few steps to get over instants of discontinuities (see Appendix II).
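The contrast is easy to demonstrate on a chopped current: solving Eq. (I.36) for v(t) with the trapezoidal rule gives v(t) = (2L/Δt)[i(t) - i(t-Δt)] - v(t-Δt), which rings after a discontinuity, while Eq. (I.37) does not (a sketch with assumed values):

# Sketch: v = L di/dt for a chopped current, trapezoidal rule versus
# backward Euler (Eq. I.37). Assumed values: L = 0.1 H, dt = 1 ms.
L, dt = 0.1, 1.0e-3
i_hist = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]    # current chopped to zero

v_trap = 0.0
for k in range(1, len(i_hist)):
    di = i_hist[k] - i_hist[k - 1]
    v_trap = 2 * L / dt * di - v_trap           # trapezoidal rule for (I.36)
    v_be = L / dt * di                          # backward Euler, Eq. (I.37)
    print(f"step {k}: trapezoidal v = {v_trap:7.1f}, "
          f"backward Euler v = {v_be:7.1f}")
# After the jump, the trapezoidal result oscillates between -200 and +200
# forever, while the backward Euler result returns to the correct v = 0.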