Discrete Event Simulation of Continuous Systems
James Nutaro
Oak Ridge National Laboratory
[email protected]
1 Introduction
Computer simulation of a system described by differential equations requires that some element of the system be approximated by discrete quantities. There are two system aspects that can be made discrete: time and state. When time is discrete, the differential equation is approximated by a difference equation (i.e., a discrete time system), and the solution is calculated at fixed points in time. When the state is discrete, the differential equation is approximated by a discrete event system. Events correspond to jumps through the discrete state space of the approximation.
The essential feature of a discrete time approximation is that the resulting difference equations map a discrete time set to a continuous state set. The time discretization need not be regular. It may even be revised in the course of a calculation. Nonetheless, the elementary features of a discrete time base and continuous state space remain.
The basic feature of a discrete event approximation is opposite that of a discrete time approximation. The approximating discrete event system is a function from a continuous time set to a discrete state set. The state discretization need not be uniform, and it may even be revised as the computation progresses.
These two different types of discretization can be visualized by considering how the function x(t), shown in figure 1a, might be reduced to discrete points. In a discrete time approximation, the value of the function is observed at regular intervals in time. This kind of discretization is shown in figure 1b. In a discrete event approximation, the function is sampled when it takes on regularly spaced values. This type of discretization is shown in figure 1c.
[Figure 1 appears here: plots of x(t) against t, with panels (a) Continuous, (b) Discrete time, and (c) Discrete state.]
Figure 1: Time and state discretizations of a system.
From an algorithmic point of view, these two types of discretization are widely divergent. The first approach emphasizes the simulation of coupled difference equations. Some distinguishing features of a difference equation simulator are nested for loops (used to compute function values at each time step), SIMD type parallel computing (using, e.g., vector processors or automated for loop parallelization), and good locality of reference.
The second approach emphasizes the simulation of discrete event systems. The main computational features of a discrete event simulation are very different from those of a discrete time simulation. Foremost among them are event scheduling, poor locality of reference, and MIMD type asynchronous parallel algorithms. The essential data structures are different, too. Where difference equation solvers exploit a matrix representation of the system coupling, discrete event simulations often require different, but structurally equivalent, data structures (e.g., influence graphs).
Mathematically, however, they share several features. The approximation of functions via interpolation and extrapolation is central to both. Careful study of error bounds, stability regimes, conservation properties, and other elements of the approximating machinery is essential. It is not surprising that theoretical aspects of differential operators, and their discrete approximations, have a prominent place in the study of both discrete time and discrete event numerical methods.
This confluence of applied mathematics, mathematical systems theory, and computer science makes the study of discrete event numerical methods particularly challenging. This paper presents some basic results, and it avoids more advanced topics. My goal is to present essential concepts clearly, and so portions of this material will, no doubt, seem underdeveloped to a specialist. Pointers into the appropriate literature are provided for those who want a more in-depth treatment.
2 Simulation of a single ordinary differential equation
Consider an ordinary differential equation that can be written in the form

    ẋ(t) = f(x(t)).    (1)

A discrete event approximation of this system can be obtained in, at least, two different ways. To begin, consider the Taylor series expansion

    x(t + h) = x(t) + h ẋ(t) + Σ_{n=2}^∞ (h^n / n!) x^(n)(t).    (2)

If we fix the quantity D = |x(t + h) - x(t)|, then the time required for a change of size D to occur in x(t) is approximately

    h = D/|ẋ(t)| if ẋ(t) ≠ 0, and h = ∞ otherwise.    (3)
This approximation drops the summation term in equation 2 and rearranges what is left to obtain h. Algorithm 1 uses this approximation to simulate a system described by equation 1. The procedure computes successive approximations to x(t) on a grid in the phase space of the system. The resolution of the phase space grid is D, and h approximates the time at which the solution jumps from one phase space grid point to the next.
The sgn function at line 14 in algorithm 1 is defined to be

    sgn(q) = -1 if q < 0, 0 if q = 0, and 1 if q > 0.

The expression D sgn(f(x)) on line 14 could, in this instance, be replaced by h f(x) because

    h f(x) = (D/|f(x)|) f(x) = D sgn(f(x)).

However, the expression D sgn(f(x)) highlights the fact that the state space, and not the time domain, is discrete. Notice, in particular, that the computed values of x are restricted to x(0) + kD, where k is an integer and D is the phase space grid resolution. In contrast to this, the computed values of t can take any value.
The procedure can be demonstrated with a simulation of the system ẋ(t) = -x(t), with x(0) = 1 and D = 0.15. Each step of the simulation is shown in table 1. Figure 2 shows the computed x(t) as a function of t.
Algorithm 1 Simulating a single ordinary differential equation.
 1: t ← 0
 2: x ← x(0)
 3: while terminating condition not met do
 4:   print t, x
 5:   if f(x) = 0 then
 6:     h ← ∞
 7:   else
 8:     h ← D/|f(x)|
 9:   end if
10:   if h = ∞ then
11:     stop simulation
12:   else
13:     t ← t + h
14:     x ← x + D sgn(f(x))
15:   end if
16: end while
t        x      f(x)    h
0.0      1.0    -1.0    0.15
0.15     0.85   -0.85   0.1765
0.3265   0.7    -0.7    0.2143
0.5408   0.55   -0.55   0.2727
0.8135   0.4    -0.4    0.3750
1.189    0.25   -0.25   0.6
1.789    0.1    -0.1    1.5
3.289    -0.05  0.05    3.0
6.289    0.1    -0.1    1.5
7.789    -0.05  0.05    3.0

Table 1: Simulation of ẋ(t) = -x(t), x(0) = 1, using algorithm 1 with D = 0.15.
The approximation given by equation 3 can be obtained in a second way. Consider the integral

    | ∫_{t0}^{t0+h} f(x(t)) dt | = D.    (4)

As before, D is the resolution of the phase space grid and h is the time required to move from one point in the phase space grid to the next. For time to move forward, it is required that h > 0. In the interval [t0, t0 + h], the function f(x(t)) can be approximated by f(x(t0)). Substituting this approximation into equation 4 and solving for h gives

    h = D/|f(x(t0))| if f(x(t0)) ≠ 0, and h = ∞ otherwise.
This approach to obtaining h gives the same result as before.
There are two important questions that need answering before this can be considered a viable simulation procedure. First, can the discretization parameter D be used to bound the error in the simulation? Second, under what conditions is the simulation procedure stable? That is, under what circumstances can the error at the end of an arbitrarily long simulation run be bounded? Several authors (see, e.g., [25], [8], and [13]) have addressed these questions in a rigorous way. Happily, the answer to the first question is a yes! The second question, while answered satisfactorily for linear systems, remains (not surprisingly) largely unresolved for non-linear systems.
[Figure 2 appears here: a plot of the computed x(t) against t.]
Figure 2: Computed solution of ẋ(t) = -x(t), x(0) = 1 with D = 0.15.
The first question can be answered as follows: If ẋ(t) = f(x(t)) describes a stable and time invariant system (see [19], or most any other introductory systems textbook), then the error at any point in a simulation run is proportional to D. The constant of proportionality is determined by the system under consideration. The time invariant caveat is needed to avoid a situation in which the first derivative can change independently of x(t) (i.e., the derivative is described by a function f(x(t), t), rather than f(x(t))). In practice, this problem can often be overcome by treating the time varying element of f(x(t), t) as a quantized input to the integrator (see, e.g., [12]).
The linear dependence of the simulation error on D is demonstrated for two different systems in figures 3 and 4. In these examples, x(t) is computed until the time of next event exceeds a preset threshold. The error is determined at the last event time by taking the difference of the computed and known solutions. This linear dependency is strongly related to the fact that the scheme is exact when x(t) is a line or, equivalently, when the system is described by ẋ(t) = k, where k is a constant.
[Figure 3 appears here: (a) computed solutions for D = 0.001, 0.01, 0.05, 0.1, and 0.15 plotted against the exact solution exp(-t); (b) absolute error plotted as a function of D.]
Figure 3: Error in the computed solution of ẋ(t) = -x(t), x(0) = 1.
3 Simulation of coupled ordinary differential equations
Algorithm 1 can be readily extended to sets of coupled ordinary differential equations. Consider a system described by equations in the form

    ẋ(t) = f(x(t)),    (5)
[Figure 4 appears here: (a) computed solutions for D = 0.01, 0.005, 0.001, and 0.00075 plotted against the exact solution 0.02/(0.005 + (2.0 - 0.005) exp(-2t)); (b) absolute error plotted as a function of D.]
Figure 4: Error in the computed solution of ẋ(t) = (2 - 0.5x(t))x(t), x(0) = 0.01.
where x is the vector

    [x_1(t), x_2(t), ..., x_m(t)]

and f(x) is a function vector

    [f_1(x(t)), f_2(x(t)), ..., f_m(x(t))].
As before, we construct a grid in the m dimensional state space. The grid points are regularly spaced by a distance D along the state space axes. To simulate this system, four variables are needed for each x_i, and so 4m variables in total. These variables are

    x_i, the position of state variable i on its phase space axis,
    tN_i, the time until x_i reaches its next discrete point on the ith phase space axis,
    y_i, the last grid point occupied by the variable x_i, and
    tL_i, the last time at which the variable x_i was modified.
The x_i and y_i are necessary because the function f_i() is computed only at grid points in the discrete phase space. Because of this, the motion of the variable x_i along its phase space axis is described by a piecewise constant velocity. This velocity is computed using the differential function f_i() and the vector y = [y_1, ..., y_m]. The value of y_i is updated when x_i reaches a phase space grid point. The time required for the variable x_i to reach its next grid point is computed as

    h = (D - |x_i - y_i|)/|f_i(y)| if f_i(y) ≠ 0, and h = ∞ otherwise.    (6)
The quantity D is the distance separating grid points along the axis of motion, |x_i - y_i| is the distance already traveled along the axis, and f_i(y) is the velocity on the ith phase space axis.
With equation 6, and an extra variable t to keep track of the simulation time, the behavior of a system described by equation 5 can be computed with algorithm 2.
To illustrate the algorithm, consider the coupled linear system

    ẋ_1(t) = -x_1(t) + 0.5 x_2(t)    (7)
    ẋ_2(t) = -0.1 x_2(t)

with x_1(0) = x_2(0) = 1 and D = 0.1. Table 2 gives a step by step account of the first eight iterations of algorithm 2 applied to this system. The output values computed by the procedure are plotted in figure 5
Algorithm 2 Simulating a system of coupled ordinary differential equations.
t ← 0
for all i ∈ [0, m] do
  tL_i ← 0
  y_i ← x_i(0)
  x_i ← x_i(0)
end for
while terminating condition not met do
  print t, y_1, ..., y_m
  for all i ∈ [0, m] do
    tN_i ← tL_i + h_i, where h_i is given by equation 6
  end for
  t ← min{tN_1, tN_2, ..., tN_m}
  copy y to a temporary vector y_tmp
  for all i ∈ [0, m] such that tN_i = t do
    y_i ← y_i + D sgn(f_i(y_tmp))
    x_i ← y_i
    tL_i ← t
  end for
  for all j ∈ [0, m] such that a changed y_i alters the value of f_j(y) and tN_j ≠ t do
    x_j ← x_j + (t - tL_j) f_j(y_tmp)
    tL_j ← t
  end for
end while
t       x1      ẋ1     y1     tL1     h1      x2     ẋ2     y2     tL2    h2
0       1       -0.5   1      0       0.2     1      -0.1   1      0      1
0.2     0.9     -0.4   0.9    0.2     0.25
0.45    0.8     -0.3   0.8    0.45    0.3333
0.7833  0.7     -0.2   0.7    0.7833  0.5
1       0.6567  -0.25         1.0     0.5733  0.9    -0.09  0.9    1      1.111
1.573   0.6     -0.15  0.6    1.573   0.6667
2.111   0.5193  -0.2          2.111   0.9033  0.8    -0.08  0.8    2.111  1.25
3.014   0.5     -0.1   0.5    3.014   1

Table 2: Simulation of two coupled ordinary differential equations on a discrete phase space grid.
(note that the figure shows results beyond the eight iterations listed in the table). Each row in the table shows the computed values at the end of an iteration (i.e., just prior to repeating the while loop). Blank entries indicate that the variable value did not change in that iteration. The blank entries, and the irregular time intervals that separate iterations, highlight the fact that this is a discrete event simulation. An event is the arrival of a state variable at its next grid point in the discrete phase space. Clearly, not every variable arrives at its next phase space point at the same time, and so event scheduling provides a natural way to think about the evolution of the system.
Stability and error properties in the case of coupled equations are more difficult to reason about, but they generally reflect the one dimensional case. In particular, the simulation procedure is stable, in the sense that the error can be bounded at the end of an arbitrarily long run, when it is applied to a stable and time invariant linear system (see [8] and [25]). The final error resulting from the procedure is proportional to the phase space grid resolution D (see [8] and [25]).
[Figure 5 appears here: a plot of y1 and y2 against t.]
Figure 5: Plot of y(t) for the calculation shown in table 2.
4 DEVS representation of discrete event integrators
It is useful to have a compact representation of the integration scheme that is readily implemented on a computer, can be extended to produce new schemes, and provides immediate support for parallel computing. The Discrete Event System Specification (DEVS) satisfies this need. A detailed treatment of DEVS can be found in [24]. Several simulation environments for DEVS are readily available online (e.g., PowerDEVS [7], adevs [10], DEVSJAVA [26], CD++ [22], and JDEVS [3], to name just a few).
DEVS uses two types of structures to describe a discrete event system. Atomic models describe the behavior of elementary components. Here, an atomic model will be used to represent individual integrators and differential functions. Coupled models describe collections of interacting components, where components can be atomic and coupled models. In this application, a coupled model describes a system of equations as interacting integrators and function blocks.
An atomic model is described by a set of inputs, a set of outputs, a set of states, a state transition function decomposed into three parts, an output function, and a time advance function. Formally, the structure is
    M = < X, Y, S, δ_int, δ_ext, δ_con, λ, ta >

where

    X is a set of inputs,
    Y is a set of outputs,
    S is a set of states,
    δ_int : S → S is the internal state transition function,
    δ_ext : Q × X^b → S is the external state transition function,
        with Q = {(s, e) | s ∈ S and 0 ≤ e ≤ ta(s)}
        and X^b a bag of values appearing in X,
    δ_con : S × X^b → S is the confluent state transition function,
    λ : S → Y is the output function, and
    ta : S → [0, ∞] is the time advance function.
The external transition function describes how the system changes state in response to input. When input is applied to the system, it is said that an external event has occurred. The internal transition function describes the autonomous behavior of the system. When the system changes state autonomously, an internal event is said to have occurred. The confluent transition function determines the next state of the system when an internal event and an external event coincide. The output function generates output values at times that coincide with internal events. The output values are determined by the state of the system just prior to the internal event. The time advance function determines the amount of time that must elapse before the next internal event will occur, assuming that no input arrives in the interim.
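The atomic structure above can be rendered almost directly as an abstract class. The sketch below is a hypothetical, library-independent rendering (the names Atomic and Counter are mine, not part of adevs); input and output bags are modeled as vectors, and the Counter example emits 0, 1, 2, ... at times 1, 2, 3, ... to show how the pieces fit together.

```cpp
#include <vector>

// A minimal interface mirroring M = <X, Y, S, d_int, d_ext, d_con, lambda, ta>.
// The state set S is whatever data the derived class chooses to hold.
template <typename X, typename Y>
class Atomic {
public:
    virtual void delta_int() = 0;                                    // internal transition
    virtual void delta_ext(double e, const std::vector<X>& xb) = 0;  // external transition; e is the elapsed time
    virtual void delta_conf(const std::vector<X>& xb) = 0;           // confluent transition
    virtual Y output() const = 0;                                    // the output function lambda
    virtual double ta() const = 0;                                   // time advance
    virtual ~Atomic() {}
};

// Example: an autonomous model whose internal events occur every 1 unit of
// time and whose output at the kth event is k-1.
class Counter : public Atomic<int, int> {
    int n;
public:
    Counter() : n(0) {}
    void delta_int() { n++; }
    void delta_ext(double, const std::vector<int>&) {}   // ignores input
    void delta_conf(const std::vector<int>& xb) { delta_int(); }
    int output() const { return n; }
    double ta() const { return 1.0; }
};
```

The adevs Integrator class shown later in figure 6 follows this same pattern, with the state (q_l, q, q̇, σ) filling the role of S.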
Coupled models are described by a set of components and a set of component output to input mappings. For our purpose, we can restrict the coupled model description to a flat structure (i.e., a structure composed entirely of atomic models) without external input or output coupling (i.e., the component models can not be affected by elements outside of the network). With these restrictions, a coupled model is described by the structure

    N = < {M_k}, {z_ij} >

where

    {M_k} is a set of atomic models, and
    {z_ij} is a set of output to input maps z_ij : Y_i → X_j ∪ {Φ}, where the i and j indices correspond to M_i and M_j in {M_k} and Φ is the non-event.
The output to input maps describe which atomic models can affect one another. The output to input map is, in this application, somewhat over generalized and could be replaced with more conventional descriptions of computational stencils and block diagrams. The non-event is used, in this instance, to represent components that are not connected. That is, if component i does not influence component j, then z_ij(y_i) = Φ, where y_i ∈ Y_i.
These structures describe what a model can do. A canonical simulation algorithm is used to generate dynamic behavior from the description. In fact, algorithms 1 and 2 are special cases of the DEVS simulation procedure. The generalized procedure is given as algorithm 3. Its description uses the same variables as algorithm 2 wherever this is possible. Algorithm 3 assumes a coupled model N, with a component set {M_1, M_2, ..., M_n}, and a suitable set of output to input maps. For every component model M_i, there is a time of last event and a time of next event variable, tL_i and tN_i, respectively. There are also state, input, and output variables s_i, x_i, and y_i, in addition to the basic structural elements (i.e., state transition functions, output function, and time advance function). The variables x_i and y_i are bags, with elements taken from the input and output sets X_i and Y_i, respectively. The simulation time is kept in the variable t.
To map algorithm 2 into a DEVS model, each of the x variables is associated with an atomic model called an integrator. The input to the integrator is the value of the differential function, and the output of the integrator is the appropriate y variable. The integrator has four state variables:

    q_l, the last output value of the integrator,
    q, the current value of the integral,
    q̇, the last known value of the derivative, and
    σ, the time until the next output event.

The integrator's input and output events are real numbers. The value of an input event is the derivative at the time of the event. An output event gives the value of the integral at the time of the output.
The integrator generates an output event when the integral of the input changes by D. More generally, if Δq is the desired change, [t0, t0 + T] is the interval over which the change occurs, and f(x(t)) is the first derivative of the system, then

    ∫_0^T f(x(t0 + t)) dt = F(T) = Δq.    (8)

The function F(T) gives the exact change in x(t) over the interval [t0, t0 + T]. Equation 8 is used in two ways. If F(T) and Δq are known, then the time advance of the discrete event integrator is found by solving for T. If F(T) and T are known, then the next state of the integrator is given by q + F(T), where T is equal to the elapsed time (for an external event) or the time advance (for an internal event).
Algorithm 3 DEVS simulation algorithm.
t ← 0 {Initialize the models}
for all i ∈ [1, n] do
  tL_i ← 0
  set s_i to the initial state of M_i
end for
while terminating condition not met do
  for all i ∈ [1, n] do
    tN_i ← tL_i + ta(s_i)
    empty the bags x_i and y_i
  end for
  t ← min{tN_i}
  for all i ∈ [1, n] do
    if tN_i = t then
      y_i ← λ_i(s_i)
      for all j ∈ [1, n] such that j ≠ i and z_ij(y_i) ≠ Φ do
        add z_ij(y_i) to the bag x_j
      end for
    end if
  end for
  for all i ∈ [1, n] do
    if tN_i = t and x_i is empty then
      s_i ← δ_int,i(s_i)
      tL_i ← t
    else if tN_i = t and x_i is not empty then
      s_i ← δ_con,i(s_i, x_i)
      tL_i ← t
    else if tN_i ≠ t and x_i is not empty then
      s_i ← δ_ext,i(s_i, t - tL_i, x_i)
      tL_i ← t
    end if
  end for
end while
The integration scheme used by algorithms 1 and 2 approximates f(x(t)) with a piecewise constant function. At any particular time, the value of the approximation is given by the state variable q̇. Using q̇ in place of f(x(t0 + t)) in equation 8 gives

    ∫_0^T q̇ dt = q̇T.

When q̇ and T are known, then the function

    F̃(T, q̇) = q̇T    (9)

approximates F(T). Because T must be positive (i.e., we are simulating forward in time), the inverse of equation 9 can not be used to compute the time advance. However, the absolute value of the inverse,

    F̃⁻¹(Δq, q̇) = Δq/|q̇| if q̇ ≠ 0, and F̃⁻¹(Δq, q̇) = ∞ otherwise,    (10)

is suitable.
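Equations 9 and 10 amount to two one-line functions. In code (the names Ftilde and Ftilde_inv are illustrative):

```cpp
#include <cmath>
#include <limits>

// Ftilde: the approximate change in the integral over an interval of length T
// when the derivative is held constant at dq (equation 9).
static double Ftilde(double T, double dq) { return dq * T; }

// The absolute-valued inverse (equation 10): the time needed for the integral
// to change by an amount dQ when the derivative is held constant at dq.
static double Ftilde_inv(double dQ, double dq) {
    if (dq == 0.0) return std::numeric_limits<double>::infinity();
    return dQ / std::fabs(dq);
}
```

A zero derivative yields an infinite time advance, which is the DEVS idiom for a passive (quiescent) component.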
The state transition, output, and time advance functions of the integrator can be defined in terms of equations 9 and 10. This gives

    δ_int((q_l, q, q̇, σ)) = (q + F̃(σ, q̇), q + F̃(σ, q̇), q̇, F̃⁻¹(D, q̇)),
    δ_ext((q_l, q, q̇, σ), e, x) = (q_l, q + F̃(e, q̇), x, F̃⁻¹(D - |q + F̃(e, q̇) - q_l|, x)),
    δ_con((q_l, q, q̇, σ), x) = (q + F̃(σ, q̇), q + F̃(σ, q̇), x, F̃⁻¹(D, x)),
    λ((q_l, q, q̇, σ)) = q + F̃(σ, q̇), and
    ta((q_l, q, q̇, σ)) = σ.
In this definition, F̃ computes the next value of the integral using the previous value, the approximation of f(x(t)) (i.e., q̇), and the time elapsed since the last state transition. The time that will be needed for the integral to change by an amount D is computed using F̃⁻¹. The arguments to F̃⁻¹ are the distance remaining (i.e., D minus the distance already traveled) and the speed with which the distance is being covered (i.e., the approximation of f(x(t))).
An implementation of this definition is shown in figure 6. This implementation is for the adevs simulation library. The implementation is simplified by taking advantage of two facts. First, the output values can be stored in a shared array that is accessed directly, rather than via messages. Second, the derivative value, represented as an input in the formal expression, can be calculated directly from the shared array of output values whenever a transition function is executed.
The Integrator class is derived from the atomic model class, which is part of the adevs simulation library. The atomic model class has virtual methods corresponding with the output and state transition functions of the DEVS atomic structure. The time advance function for an adevs model is defined as ta() = σ, where σ is a state variable of the atomic model, and its value is set with the hold() method. The Integrator class adds a new virtual method, f(), that is specialized to compute the derivative function using the output value vector y.
A DEVS simulation of a system of ordinary differential equations, using algorithm 3, gives the same result as algorithm 2. This is demonstrated by a simulation of the two equation system 7. The code used to execute the simulation is shown in figure 7. The state transitions and output values computed in the course of the simulation are shown in table 3. A comparison of this table with table 2 confirms that they are identical.
class Integrator: public atomic {
  public:
    /* Arguments are the initial variable value, variable index,
       integration quantum, and an array for storing output values. */
    Integrator(double q0, int index, double D, double* x):
      atomic(),index(index),q(q0),D(D),x(x) { x[index] = q; }
    /* Initialize the state prior to start of the simulation. */
    void init() {
      dq = f(index,x); compute_sigma();
    }
    /* DEVS state transition functions. */
    void delta_int() {
      q = x[index]; dq = f(index,x); compute_sigma();
    }
    void delta_ext(double e, const adevs_bag<PortValue>& xb) {
      q += e*dq; dq = f(index,x); compute_sigma();
    }
    void delta_conf(const adevs_bag<PortValue>& xb) {
      q = x[index]; dq = f(index,x); compute_sigma();
    }
    /* DEVS output function. */
    void output_func(adevs_bag<PortValue>& yb) {
      x[index] += D*sgn(dq);
      output(cell_interface::out,NULL,yb); // Notify influencees of change.
    }
    /* Event garbage collection function. */
    void gc_output(adevs_bag<PortValue>& g){}
    /* Virtual derivative function. */
    virtual double f(int index, const double* x) = 0;
  private:
    /* Index of the variable associated with this integrator. */
    int index;
    /* Value of the variable, its derivative, and the integration quantum. */
    double q, dq, D;
    /* Shared output variable vector. */
    double* x;
    /* Sign function. */
    static double sgn(double z) {
      if (z > 0.0) return 1.0; if (z < 0.0) return -1.0; return 0.0;
    }
    /* Set the value of the time advance function. */
    void compute_sigma() {
      if (fabs(dq) < ADEVS_EPSILON) hold(ADEVS_INFINITY);
      else hold(fabs((D-fabs(q-x[index]))/dq));
    }
};

Figure 6: Code listing for the Integrator class.
/* Integrator for the two variable system. */
class TwoVarInteg: public Integrator {
  public:
    TwoVarInteg(double q0, int index, double D, double* x):
      Integrator(q0,index,D,x){}
    /* Derivative function. */
    double f(int index, const double* x) {
      if (index == 0) return -x[0]+0.5*x[1];
      else return -0.1*x[1];
    }
};

int main() {
  double x[2];
  TwoVarInteg* intg[2];
  // Integrator for variable x1
  intg[0] = new TwoVarInteg(1.0,0,0.1,x);
  // Integrator for variable x2
  intg[1] = new TwoVarInteg(1.0,1,0.1,x);
  // Connect the output of x2 to the input of x1
  staticDigraph g;
  g.couple(intg[1],1,intg[0],0);
  // Run the simulation for 3.361 units of time
  devssim sim(&g);
  while (sim.timeNext() <= 3.4) {
    cout << "t = " << sim.timeLast() << endl;
    for (int i = 0; i < 2; i++) {
      intg[i]->printState();
    }
    sim.execNextEvent();
  }
  // Done
  return 0;
}

Figure 7: Main simulation code for the two equation simulator.
t       q1      q̇1     y1     ta1     event type   q2     q̇2     y2     ta2
0       1       -0.5   1      0.2     init         1      -0.1   1      1
0.2     0.9     -0.4   0.9    0.25    internal
0.45    0.8     -0.3   0.8    0.3333  internal
0.7833  0.7     -0.2   0.7    0.5     internal
1       0.6567  -0.25         0.5733  external     0.9    -0.09  0.9    1.111
1.573   0.6     -0.15  0.6    0.6667  internal
2.111   0.5193  -0.2          0.9033  external     0.8    -0.08  0.8    1.25
3.014   0.5     -0.1   0.5    1       internal

Table 3: DEVS simulation of two coupled ordinary differential equations.
Figure 8: A cellspace view of the system described by equation 13.
5 The heat equation
In many instances, discrete approximations of partial differential equations can be obtained by a two step process. In the first step, a discrete approximation of the spatial derivatives is constructed. This results in a large set of coupled ordinary differential equations. The second step approximates the remaining time derivatives. This step can be accomplished with the discrete event integration scheme.
To illustrate this process, consider the heat (or diffusion) equation

    ∂u(t, x)/∂t = ∂²u(t, x)/∂x².    (11)
The function u(t, x) represents the quantity that becomes diffuse (temperature if this is the heat equation). The spatial derivative can be approximated with a center difference, thus giving

    ∂²u(t, kΔx)/∂x² ≈ [u(t, (k + 1)Δx) - 2u(t, kΔx) + u(t, (k - 1)Δx)]/Δx²,    (12)

where Δx is the resolution of the spatial approximation, and the k are indices on the discrete spatial grid.
Substituting 12 into 11 gives a set of coupled ordinary differential equations

    du(t, kΔx)/dt = [u(t, (k + 1)Δx) - 2u(t, kΔx) + u(t, (k - 1)Δx)]/Δx²    (13)

that can be simulated using the DEVS integration scheme. This difference equation describes a grid of N integrators, and each integrator is connected to its two neighbors. The integrators at the ends can be given fixed left and right values (i.e., fixing u(t, -Δx) and u(t, (N + 1)Δx) equal to a constant), or some other suitable boundary condition can be used. For the sake of illustration, let u(t, -Δx) = u(t, (N + 1)Δx) = 0.
With these boundary conditions, two equivalent views of the system can be constructed. The first view, shown in equation 14, utilizes a matrix to describe the coupling of the differential equations in 13:

    d/dt [ u(t, 0); u(t, Δx); u(t, 2Δx); ...; u(t, (N - 1)Δx); u(t, NΔx) ] =

      (1/Δx²) [ -2   1   0  ...   0
                 1  -2   1  ...   0
                 0   1  -2  ...   0
                        ...
                 0  ...  1  -2   1
                 0  ...  0   1  -2 ] [ u(t, 0); u(t, Δx); u(t, 2Δx); ...; u(t, (N - 1)Δx); u(t, NΔx) ]    (14)
Because the kth equation is directly influenced only by the (k+1)st and (k-1)st equations, it is also possible to represent equations 13 as a cell space in which each cell is influenced by its left and right neighbors. The discrete event model favors this representation. The discrete event cellspace, which is illustrated in figure 8, has an integrator at each cell, and the integrator receives input from its left and right neighbors. Figure 9 shows the adevs simulation code for equation 13. The cellspace view of the equation coupling is implemented using the adevs Cellspace class.
The discrete event approximation to equation 13 has two potential advantages over a similar discrete time approximation. The discrete time approximation is obtained from the same approximation to the spatial
class DiffInteg: public Integrator, public cell_interface {
  public:
    DiffInteg(double q0, int index, double D, double* x, double dx):
      Integrator(q0,index,D,x),cell_interface(){ dx2 = dx*dx; }
    double f(int index, const double* x) {
      return (x[index-1]-2.0*x[index]+x[index+1])/dx2;
    }
  private:
    static double dx2;
};
double DiffInteg::dx2 = 0.0;

void print(const double* x, double dx, int dim, double t) {
  for (int i = 0; i < dim; i++) {
    double soln = 100.0*sin(M_PI*i*dx/80.0)*exp(-t*M_PI*M_PI/6400.0);
    cout << i*dx << " " << x[i] << " " << fabs(x[i]-soln) << endl;
  }
}

int main() {
  // Build the solution array and assign boundary and initial values
  double len = 80.0;
  double dx = 0.1;
  int dim = len/dx;
  double* x = new double[dim+2];
  // Half sine initial conditions with zero at boundaries
  for (int i = 0; i <= dim; i++) {
    x[i] = 100.0*sin(M_PI*i*dx/80.0);
  }
  x[0] = x[dim+1] = 0.0;
  // Create the DEVS model
  double D = 10.0;
  cellSpace cs(cellSpace::SIX_POINT,dim);
  for (int i = 1; i <= dim; i++) {
    cs.add(new DiffInteg(x[i],i,D,x,dx),i-1);
  }
  // Run the model
  devssim sim(&cs);
  sim.run(300.0);
  print(x,dx,dim+2,sim.timeLast());
  // Done
  delete [] x;
  return 0;
}

Figure 9: Code listing for the heat equation solver.
derivatives, but using the explicit Euler integration scheme to approximate the time derivatives (see, e.g., [18]). Doing this gives a set of coupled difference equations

    u(t + Δt, kΔx) = u(t, kΔx) + Δt [u(t, (k + 1)Δx) - 2u(t, kΔx) + u(t, (k - 1)Δx)]/Δx².

This discrete time integration scheme has an error term that is proportional to the time step Δt. In this respect, it is similar to the discrete event scheme whose approximation to the time derivative is proportional to the quantum size D. However, there is an extra constraint in the discrete time formulation that is not present in the discrete event approximation. This extra constraint is a stability condition on the set of difference equations (not the differential equations, which are inherently stable). For a stable simulation (i.e., for the state variables to decay rather than explode), it is necessary that

    Δt ≤ Δx²/2.
Freedom from the stability constraint is a significant advantage that the discrete event scheme has over the discrete time scheme. For discrete time systems, this stability constraint can only be removed by employing implicit approximations to the time derivative. Unfortunately, this introduces a significant new computational overhead because a system of equations in the form Ax = b must be solved at each integration step (see, e.g., [18]).
The unconditional stability of the discrete event scheme can be demonstrated with a calculation. Consider a heat conducting bar with length 80. The ends are fixed at a temperature of 0. The initial temperature of the bar is given by u(0, x) = 100 sin(πx/80). Figures 10a and 10b show the computed solution at t = 300 using Δx = 0.1 and different values of D. Even with large values of D, it can be seen that the computed solution remains bounded. Figure 10c shows the error in the computed solution for the more reasonable choices of D. From the figure, the correspondence between a reduction in D and a reduction in the computational error is readily apparent.
-20
0
20
40
60
80
100
0 10 20 30 40 50 60 70 80 90
u
(3
0
0
,x
)
x
D=100
D=10
D=1
(a) Simulation with large D.
0
10
20
30
40
50
60
70
0 10 20 30 40 50 60 70 80 90
u
(3
0
0
,x
)
x
D=0.01
D=0.001
D=0.0001
(b) Simulation with small D.
0
5
10
15
20
25
30
35
0 10 20 30 40 50 60 70 80 90
A
b
s
o
lu
te
e
rro
r a
t t=
3
0
0
x
D=0.01
D=0.001
D=0.0001
(c) Absolute errors.
Figure 10: DEVS simulation of the heat equation with various quantum sizes.
In many instances, the discrete event approximation enjoys a computational advantage as well. An in-depth study of the relative advantage of a DEVS approximation to the heat equation over a discrete time approximation is described in [5] and [23]. This advantage is realized in the forest fire simulation described in [12], where a diffusive process is the spatially explicit piece of the forest fire model. In that report, the DEVS approximation is roughly four times faster than an explicit discrete time simulation giving the same errors with respect to experimental data.
The reason for the performance advantage can be understood intuitively in two related ways. The first is to observe that the time advance function determines the frequency with which state updates are calculated at a cell. The time advance at each cell is inversely proportional to the magnitude of the derivative, and so cells that are changing slowly will have large time advances relative to cells that are changing quickly. This causes the simulation algorithm to focus effort on the changing portion of the solution, with significantly less work being devoted to portions that are changing slowly. This is demonstrated in figure 11. The state
transition frequency at a point is given by the inverse of the time advance function following an internal event (i.e., |u̇(t, iΔx)|/D, where i is the grid point index). Figure 11a shows the state transition frequency at the beginning and end of the simulation. Figure 11b shows the total number of state changes that are computed at each grid point over the course of the calculation. It can be seen that the computational effort is focused on the center of the bar, where the state transition functions are evaluated most frequently.
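The relationship between derivative magnitude and update rate can be sketched directly (hypothetical helper names; this is an illustration, not code from the chapter):

```cpp
#include <cmath>
#include <limits>

// Time advance of a quantized integrator just after an internal event:
// the time needed to traverse one quantum D at the current rate du/dt.
// A zero derivative means the cell is passive (infinite time advance).
double time_advance(double dudt, double D)
{
    if (dudt == 0.0) return std::numeric_limits<double>::infinity();
    return D / std::fabs(dudt);
}

// The state transition frequency is the reciprocal: cells with small
// derivatives are visited rarely, concentrating work where u changes.
double transition_frequency(double dudt, double D)
{
    return std::fabs(dudt) / D;
}
```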
[Figure 11 plots: panel (a) shows the state transition frequency against x at t = 0 and t = 300; panel (b) the number of computed state changes at each grid point.]
Figure 11: Activity tracking in the DEVS diusion simulation using D = 0.0001.
A second explanation can be had by observing that the number of quantum crossings required for the solution at a grid point to move from its initial to final state is approximately equal to the distance between those two states divided by the quantum size. This gives a lower bound on the number of state transitions that are required to move from one state to another. It can be shown that, in many instances, the number of state transitions required by the DEVS model will closely approximate this ideal number (see [5]).
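This lower bound is easy to compute; a one-line sketch (names are hypothetical):

```cpp
#include <cmath>

// Lower bound on the number of quantum crossings needed for a state
// variable to travel from x_initial to x_final with quantum size D:
// the distance between the two states divided by D, rounded up.
long min_transitions(double x_initial, double x_final, double D)
{
    return static_cast<long>(std::ceil(std::fabs(x_final - x_initial) / D));
}
```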
6 Conservation laws
Conservation laws are an important application area where DEVS approximations of the time derivatives can be usefully applied. A DEVS simulation of Euler's fluid equations is presented in [16]. In that report, a significant performance advantage was obtained, relative to a similar time stepping method, via the activity tracking property described above. In this section, the application of DEVS to conservation laws is demonstrated for a simpler problem, where it is easier to focus on the derivation of the discrete event model.
A conservation law in one spatial dimension is described by a partial differential equation

∂u(t, x)/∂t + ∂F(u(t, x))/∂x = 0.

The flux function F(u(t, x)) describes the rate of change in the amount of u (whatever u might represent) at each point x (see, e.g., [18]). To be concrete, consider the conservation law

∂u(t, x)/∂t + u(t, x) ∂u(t, x)/∂x = 0. (15)

Equation 15 describes a material with quantity u(t, x) that moves with velocity u(t, x). In this equation, the flux function is u(t, x)²/2. Equation 15 is obtained by taking the partial derivative with respect to x of this flux function.
As before, the first step is to construct a set of coupled ordinary differential equations that approximates the partial differential equation. There are numerous schemes for approximating the derivative of the flux
class ClawInteg: public Integrator, public cell_interface {
public:
    ClawInteg(double q0, int index, double D, double x, double dx):
        Integrator(q0,index,D,x),cell_interface() { ClawInteg::dx = dx; }
    double f(int index, const double* x) {
        return 0.5*(x[index-1]*x[index-1]-x[index]*x[index])/dx;
    }
private:
    static double dx;
};
double ClawInteg::dx = 0.0;
Figure 12: Integrator for the conservation law solver.
function with respect to x (see, e.g., [9]). One of the simplest is an upwinding scheme on a spatial grid with resolution Δx. Applying an upwinding scheme to 15 gives

u(t, kΔx) ∂u(t, kΔx)/∂x ≈ −(1/(2Δx))(u(t, (k−1)Δx)² − u(t, kΔx)²). (16)

Substituting 16 into 15 gives the set of coupled ordinary differential equations

du(t, kΔx)/dt = (1/(2Δx))(u(t, (k−1)Δx)² − u(t, kΔx)²). (17)

It is common to approximate the time derivatives in equation 17 with the explicit Euler integration scheme using a time step Δt. This gives the set of difference equations

u(t + Δt, kΔx) = u(t, kΔx) + (Δt/(2Δx))(u(t, (k−1)Δx)² − u(t, kΔx)²)

that approximates the set of differential equations. The difference equations are stable provided that the condition

(Δt/Δx) max |u(iΔt, jΔx)| ≤ 1

is satisfied at every time point i and every spatial point j. Because equations 17 are nonlinear, it is not necessarily true that a discrete event approximation will be stable regardless of the size of the integration quantum. However, it is possible to find a sufficiently small quantum for which the scheme works (see [13]). This remains an open area of research, but we will move recklessly ahead and try generating solutions with several different quantum sizes and observe the effect on the solution.
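By contrast, the discrete time scheme must respect the condition above at every step. A sketch of the resulting step size bound (hypothetical names; an illustration, not code from the chapter):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Largest explicit Euler step allowed by the stability condition
// (dt/dx) * max|u| <= 1. Because u itself is the wave speed, the bound
// must be recomputed as the solution evolves.
double max_stable_dt(const std::vector<double>& u, double dx)
{
    double umax = 0.0;
    for (double v : u) umax = std::max(umax, std::fabs(v));
    return dx / umax;  // assumes at least one nonzero value in u
}
```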
For this example, a space 10 units in length is assigned the initial conditions

u(0, x) = sin(πx/4) if 0 ≤ x ≤ 4, and 0 otherwise,

and the boundary conditions u(t, 0) = u(t, 10) = 0. The integrator implementation for this model is shown in figure 12. The simulation main routine is identical to the one for the heat equation (except that DiInteg is replaced by ClawInteg; see figure 9). Figure 13 shows snapshots of the solution computed with Δx = 0.1 and three different quantum sizes: 0.1, 0.01, and 0.001. The computed solutions maintain important features of the true solution, including the shock formation and shock velocity (see [18]).
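As a concrete illustration, the initial profile can be sampled onto the grid as follows (a sketch with hypothetical names; it assumes the coefficient lost from the sine in the text is π, which makes the profile vanish at x = 4 and join the zero region continuously):

```cpp
#include <cmath>
#include <vector>

// Sample u(0, x) = sin(pi*x/4) for 0 <= x <= 4 and 0 otherwise on a
// grid with spacing dx covering [0, length].
std::vector<double> initial_condition(double length, double dx)
{
    const double pi = 3.14159265358979323846;
    int n = static_cast<int>(length / dx) + 1;
    std::vector<double> u(n, 0.0);
    for (int i = 0; i < n; ++i) {
        double x = i * dx;
        if (x <= 4.0) u[i] = std::sin(pi * x / 4.0);
    }
    return u;
}
```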
While the advantage of the discrete event scheme with respect to stability remains unresolved (but looks promising!), a potential computational advantage can be seen. From the figure, it is apparent that the larger derivatives follow the shock, with the area in front of the shock having zero derivatives and the area behind the shock having diminishing derivatives. The DEVS simulation apportions computational effort appropriately. This is shown in figure 14 for the simulation with D = 0.001. Figure 14a shows several
[Figure 13 plots u(t, x) against x at t = 0, 5, 10, and 15: panel (a) uses D = 0.1; panel (b) D = 0.01; panel (c) D = 0.001.]
Figure 13: Simulation of equation 17 with various quantum sizes.
snapshots of the cell update frequency (i.e., |u̇(t, iΔx)|/D following an internal event, where i is the grid point index) at times corresponding to the solution snapshots shown in figure 13c. Figure 14b shows the total number of state transitions computed at each cell at those times. The effect of this front tracking behavior can be significant. In [16] it is responsible for a speedup of 35 relative to a discrete time solution of Euler's equations in one spatial dimension.
[Figure 14 plots, at t = 0, 5, 10, and 15: panel (a) shows the state update frequency against x; panel (b) the total number of state transitions at each cell.]
Figure 14: Front tracking in the DEVS simulation of equation 17 with D = 0.001.
7 Two point integration schemes
The integration scheme discussed to this point is a single point scheme. It relies on a single past value of the function, and it is exact for the linear equation ẋ(t) = k, where k is a constant. Recall that the single point scheme for simulating a system described by ẋ(t) = f(x(t)) can be derived from the expression

|∫_{t₀}^{t₀+h} f(x(t)) dt| = D (18)

by approximating f(x(t)) with the value f(x(t₀)).
If the function f(x(t)) in equation 18 is approximated using the previous two values of the derivative,
then the resulting method is called a two point scheme. A DEVS model of a two point scheme requires the
state variables
- q, the current approximation to x(t),
- q_l, the last grid point occupied by q,
- σ, the time required to move from q to the next grid point,
- q̇₁ and q̇₀, the last two computed values of the derivative, and, possibly,
- h, the time interval between q̇₁ and q̇₀.
At least two different two point methods have been described (see [8] and [14]). The first method approximates f(x(t)) in equation 18 with the line connecting points q̇₁ and q̇₀. The distance moved by x(t) in the interval [h, h + T] can be approximated by

∫_{h}^{h+T} ((q̇₁ − q̇₀)t/h + q̇₀) dt = ((q̇₁ − q̇₀)/(2h))T² + q̇₁T = Δq.
The functions

F₁(T, q̇₁, q̇₀, h) = ((q̇₁ − q̇₀)/(2h))T² + q̇₁T, (19)

and

F₁⁻¹(Δq, q̇₁, q̇₀, h) = T, (20)

where T is the smallest positive root of

|((q̇₁ − q̇₀)/(2h))T² + q̇₁T| = Δq

and ∞ if such a root does not exist, can be used to define the state transition, output, and time advance functions (which will be done in a moment). Equations 19 and 20 are exact when x(t) is a quadratic.
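Computing F₁⁻¹ amounts to finding the smallest positive root of a quadratic, considering both signs of the quantum displacement. A sketch (hypothetical names): with a = (q̇₁ − q̇₀)/(2h) and b = q̇₁, the time advance is the smallest positive T satisfying |aT² + bT| = Δq, or infinity when no such T exists.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Smallest positive root of a*T^2 + b*T = c, or infinity if none exists.
double smallest_positive_root(double a, double b, double c)
{
    const double inf = std::numeric_limits<double>::infinity();
    if (a == 0.0) {               // degenerate (linear) case: b*T = c
        if (b == 0.0) return inf;
        double t = c / b;
        return t > 0.0 ? t : inf;
    }
    double disc = b*b + 4.0*a*c;  // discriminant of a*T^2 + b*T - c = 0
    if (disc < 0.0) return inf;
    double r1 = (-b + std::sqrt(disc)) / (2.0*a);
    double r2 = (-b - std::sqrt(disc)) / (2.0*a);
    double best = inf;
    if (r1 > 0.0) best = std::min(best, r1);
    if (r2 > 0.0) best = std::min(best, r2);
    return best;
}

// F1 inverse: time to move a distance Dq in either direction, where
// dq1, dq0 are the last two derivative values and h separates them.
double F1_inverse(double Dq, double dq1, double dq0, double h)
{
    double a = (dq1 - dq0) / (2.0*h);
    return std::min(smallest_positive_root(a, dq1, Dq),
                    smallest_positive_root(a, dq1, -Dq));
}
```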
The second method approximates f(x(t)) with the piecewise constant function

a q̇₁ + b q̇₀, a + b = 1. (21)

If x(t) is the line mt + c, then f(x(t)) = m, and am + bm = (a + b)m = m, so this approximation is exact. Integrating equation 21 over the interval [0, T] gives the approximating functions

F₂(T, q̇₁, q̇₀) = (a q̇₁ + b q̇₀)T, and (22)

F₂⁻¹(Δq, q̇₁, q̇₀) = Δq / |a q̇₁ + b q̇₀|. (23)

This approximation does not require the state variable h.
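The second method is simpler to invert, since the approximating derivative is constant. A sketch (hypothetical names; the weights a and b are passed explicitly):

```cpp
#include <cmath>
#include <limits>

// F2: distance traveled in time T with the weighted constant derivative
// a*dq1 + b*dq0, where a + b = 1.
double F2(double T, double dq1, double dq0, double a, double b)
{
    return (a*dq1 + b*dq0) * T;
}

// F2 inverse: time to travel a distance Dq; infinite when the weighted
// derivative vanishes (the integrator is passive).
double F2_inverse(double Dq, double dq1, double dq0, double a, double b)
{
    double rate = std::fabs(a*dq1 + b*dq0);
    if (rate == 0.0) return std::numeric_limits<double>::infinity();
    return Dq / rate;
}
```

For a linear trajectory both derivative values agree, and any weights with a + b = 1 reproduce the single point scheme exactly.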
For brevity, let q̄ denote the state of the integrator, and let dq denote the variables q̇₁, q̇₀ or q̇₁, q̇₀, h as needed. Which is intended will be clear from the context in which it is used. The time advance function for a two point scheme is given by

ta(q̄) = σ,

and the output function is defined by

λ(q̄) = F(σ, dq).
If equations 19 and 20 are used to define the integration scheme, then the resulting state transition functions are

δ_int(q̄) = (q + F₁(σ, dq), q + F₁(σ, dq), q̇₁, q̇₁, σ, F₁⁻¹(D, q̇₁, q̇₁, σ)),

δ_ext(q̄, e, x) = (q_l, q + F₁(e, dq), x, q̇₁, e, F₁⁻¹(D − |q + F₁(e, dq) − q_l|, x, q̇₁, e)), and

δ_con(q̄, x) = (q + F₁(σ, dq), q + F₁(σ, dq), x, q̇₁, σ, F₁⁻¹(D, x, q̇₁, σ)),

where the tuple components are, in order, q_l, q, q̇₁, q̇₀, h, and σ.
When equations 22 and 23 are used to define the integrator, then the state transition functions are

δ_int(q̄) = (q + F₂(σ, dq), q + F₂(σ, dq), q̇₁, q̇₁, F₂⁻¹(D, q̇₁, q̇₁)),

δ_ext(q̄, e, x) = (q_l, q + F₂(e, dq), x, q̇₁, F₂⁻¹(D − |q + F₂(e, dq) − q_l|, x, q̇₁)), and

δ_con(q̄, x) = (q + F₂(σ, dq), q + F₂(σ, dq), x, q̇₁, F₂⁻¹(D, x, q̇₁)),

where the tuple components are q_l, q, q̇₁, q̇₀, and σ (the variable h is not needed).
The scheme that is constructed using equations 19 and 20 is similar to the QSS2 method in [8], except that the input and output trajectories used here are piecewise constant rather than piecewise linear.
The scheme constructed from equations 22 and 23 is nearly second order accurate when a and b are chosen correctly. If a = 3/2 and b = −1/2, then the error in the integral of 21 is

E = (f(x₁) − (3/2)f(x₁) + (1/2)f(x₀))T + (1/2)T² (d/dt)f(x₁) + Σ_{n≥3} (1/n!) (d/dt)^{n−1} f(x₁) Tⁿ. (24)
For this scheme to be nearly second order accurate, the terms that depend on T and T² need to be as small as possible. Let h be the time separating x₁ and x₀ (i.e., x₁ = x(t₁), x₀ = x(t₀), and h = t₁ − t₀), and let γ = T/h be the ratio of the current time advance to the previous time advance. It follows that T = γh. The function (d/dt)f(x₁) can be approximated by

(d/dt)f(x₁) ≈ (f(x₁) − f(x₀))/h. (25)
Substituting equation 25 into equation 24 and dropping the high order error terms gives

E ≈ γh(γ(f(x₁) − f(x₀))/2 + (f(x₀) − f(x₁))/2). (26)

Equation 26 approaches zero as γ approaches 1. It seems reasonable to assume that T and h become increasingly similar as D is made smaller. From this assumption, it follows that the low order error terms in equation 24 vanish as D shrinks.
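Under the choice a = 3/2, b = −1/2, the low order error factors as γh(f(x₁) − f(x₀))(γ − 1)/2, which makes the role of γ plain. A sketch evaluating it (hypothetical names):

```cpp
#include <cmath>

// Leading (low order) error of the a = 3/2, b = -1/2 scheme as a
// function of the step ratio gamma = T/h and the last two derivative
// values f1, f0; zero when successive time advances are equal (gamma = 1)
// or when the derivative is not changing (f1 = f0).
double leading_error(double gamma, double h, double f1, double f0)
{
    return gamma * h * (gamma*(f1 - f0)/2.0 + (f0 - f1)/2.0);
}
```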
Figures 15a and 15b show the absolute error in the computed solution of ẋ(t) = −x(t), x(0) = 1, as a function of D for these two integration schemes. The simulation is ended at t = 1.0, and the absolute error is recorded at that time. In both cases, it can be observed that the absolute error is proportional to D².
These two schemes use additional information to reduce the approximation error with respect to the single point scheme. Fortunately, these two schemes share the unconditional linear stability of the single point scheme (see [8] and [13]), and so they represent a trade-off between storage, execution time, and accuracy. When dealing with very large systems, the single point scheme has the advantage of needing less computer memory because it has fewer state variables per integrator. However, it will, in general, be less accurate than a two point scheme for a given quantum size. Moreover, if the quantum size is selected to obtain a given error, then the two point scheme will generally use a larger quantum than the one point scheme, and so the simulation will finish more quickly using the two point scheme.
8 Conclusions
This chapter introduced some essential techniques for constructing discrete event approximations to continuous systems. Discrete event simulation of continuous systems is an active area of research, and the breadth of the field cannot be adequately covered in this short space. So, in the conclusion, some recent results are summarized and references given for the interested reader.
[Figure 15 plots the absolute error against D: panel (a) for the scheme using equations 19 and 20; panel (b) for the scheme using equations 22 and 23.]
Figure 15: Simulation error as a function of D for the system ẋ(t) = −x(t) with x(0) = 1.
In [1], an adaptive quantum scheme is introduced. This scheme allows the integration quantum to be varied during the course of the calculation in order to maintain an upper bound on the global error. An application of adaptive quantization to a fire spreading model is discussed in [11].
A methodology for approximating general time functions as DEVS models is discussed in [4]. The approximations introduced in that paper associate events with changes in the coefficients of an interpolating polynomial. An application of this methodology to partial differential equations is shown in [21].
Applying DEVS models to the finite element method for equilibrium problems is discussed in [2] and [17]. A steady state heat transfer problem is used to demonstrate the method.
Simulation of partial differential equations leads naturally to parallel computing. Parallel discrete event simulation for the numerical methods presented in this chapter is discussed in [13] and [15]. Specific issues that emerge when simulating DEVS models using logical-process based algorithms are described in [15]. Parallel discrete event simulation applied to particle in cell methods is discussed in [20] and [6].
References
[1] Jean-Sébastien Bolduc and Hans Vangheluwe. Mapping ODEs to DEVS: adaptive quantization. In Proceedings of the 2003 Summer Simulation MultiConference (SCSC'03), pages 401-407, Montréal, Canada, July 2003.
[2] M. D'Abreu and G. Wainer. Improving finite elements method models using Cell-DEVS. In Proceedings of the 2003 Summer Computer Simulation Conference, Montreal, QC, Canada, 2003.
[3] Jean-Baptiste Filippi and Paul Bisgambiglia. JDEVS: an implementation of a DEVS based formal framework for environmental modelling. Environmental Modelling & Software, 19(3):261-274, March 2004.
[4] Norbert Giambiasi, Bruno Escude, and Sumit Ghosh. GDEVS: A generalized discrete event specification for accurate modeling of dynamic systems. Trans. Soc. Comput. Simul. Int., 17(3):120-134, 2000.
[5] R. Jammalamadaka. Activity characterization of spatial models: Application to the discrete event solution of partial differential equations. Master's thesis, University of Arizona, Tucson, Arizona, USA, 2003.
[6] H. Karimabadi, J. Driscoll, Y. A. Omelchenko, and N. Omidi. A new asynchronous methodology for modeling of physical systems: breaking the curse of the Courant condition. Journal of Computational Physics, 205(2):755-775, May 2005.
[7] E. Kofman, M. Lapadula, and E. Pagliero. PowerDEVS: A DEVS-based environment for hybrid system modeling and simulation. Technical Report LSD0306, School of Electronic Engineering, Universidad Nacional de Rosario, Rosario, Argentina, 2003.
[8] Ernesto Kofman. Discrete event simulation of hybrid systems. SIAM Journal on Scientific Computing, 25(5):1771-1797, 2004.
[9] Dietmar Kroner. Numerical Schemes for Conservation Laws. Wiley, Chichester, New York, 1997.
[10] A. Muzy and J. Nutaro. Algorithms for efficient implementations of the DEVS & DSDEVS abstract simulators. In 1st Open International Conference on Modeling & Simulation, pages 401-407, ISIMA / Blaise Pascal University, France, June 2005.
[11] Alexandre Muzy, Eric Innocenti, Antoine Aiello, Jean-François Santucci, and Gabriel Wainer. Cell-DEVS quantization techniques in a fire spreading application. In Proceedings of the 2002 Winter Simulation Conference, 2002.
[12] Alexandre Muzy, Paul-Antoine Santoni, Bernard P. Zeigler, James J. Nutaro, and Rajanikanth Jammalamadaka. Discrete event simulation of large-scale spatial continuous systems. In Simulation Multiconference, 2005.
[13] James Nutaro. Parallel Discrete Event Simulation with Application to Continuous Systems. PhD thesis, University of Arizona, Tucson, Arizona, 2003.
[14] James Nutaro. Constructing multi-point discrete event integration schemes. In Proceedings of the 2005 Winter Simulation Conference, 2005.
[15] James Nutaro and Hessam Sarjoughian. Design of distributed simulation environments: A unified system-theoretic and logical processes approach. SIMULATION, 80(11):577-589, 2004.
[16] James J. Nutaro, Bernard P. Zeigler, Rajanikanth Jammalamadaka, and Salil R. Akerkar. Discrete event solution of gas dynamics within the DEVS framework. In Peter M. A. Sloot, David Abramson, Alexander V. Bogdanov, Jack Dongarra, Albert Y. Zomaya, and Yuri E. Gorbachev, editors, International Conference on Computational Science, volume 2660 of Lecture Notes in Computer Science, pages 319-328. Springer, 2003.
[17] H. Saadawi and G. Wainer. Modeling complex physical systems using 2D finite elemental Cell-DEVS. In Proceedings of MGA, Advanced Simulation Technologies Conference 2004 (ASTC'04), Arlington, VA, U.S.A., 2004.
[18] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, Massachusetts, 1986.
[19] Ferenc Szidarovszky and A. Terry Bahill. Linear Systems Theory, Second Edition. CRC Press LLC, Boca Raton, Florida, 1998.
[20] Yarong Tang, Kalyan Perumalla, Richard Fujimoto, Homa Karimabadi, Jonathan Driscoll, and Yuri Omelchenko. Parallel discrete event simulations of physical systems using reverse computation. In ACM/IEEE/SCS Workshop on Principles of Advanced and Distributed Simulation (PADS), Monterey, CA, June 2005.
[21] Gabriel A. Wainer and Norbert Giambiasi. Cell-DEVS/GDEVS for complex continuous systems. SIMULATION, 81(2):137-151, February 2005.
[22] Gabriel Wainer. CD++: a toolkit to develop DEVS models. Software: Practice and Experience, 32(13):1261-1306, 2002.
[23] Bernard P. Zeigler. Continuity and change (activity) are fundamentally related in DEVS simulation of continuous systems. In Keynote Talk at AI, Simulation, and Planning 2004 (AIS'04), October 2004.
[24] Bernard P. Zeigler, Herbert Praehofer, and Tag Gon Kim. Theory of Modeling and Simulation, 2nd Edition. Academic Press, 2000.
[25] Bernard P. Zeigler, Hessam Sarjoughian, and Herbert Praehofer. Theory of quantized systems: DEVS simulation of perceiving agents. Cybernetics and Systems, 31(6):611-647, September 2000.
[26] Bernard P. Zeigler and Hessam S. Sarjoughian. Introduction to DEVS modeling and simulation with Java: Developing component-based simulation models. Unpublished manuscript, 2005.