
DIPARTIMENTO DI INGEGNERIA DELL’INFORMAZIONE
E SCIENZE MATEMATICHE

Lecture notes of
Discrete Event Systems

Simone Paoletti

Version 0.4
October 28, 2020
Contents

Notation

Introduction

1 Basics of systems theory
1.1 Systems and mathematical models
1.2 Static vs dynamic systems
1.3 The concept of state
1.4 Time-driven vs event-driven systems
1.5 Discrete event systems

2 Untimed models of discrete event systems
2.1 State automata (with outputs)
2.2 Graphical representation

3 Timed models of discrete event systems
3.1 Introduction
3.2 The clock structure
3.3 Timed automata
3.4 Event timing dynamics

Bibliography

A Review of probability theory
A.1 Basic concepts and definitions
A.1.1 Properties of probability spaces
A.1.2 Conditional probabilities and independence
A.1.3 Total probability rule and Bayes theorem
A.2 Random variables
A.2.1 Discrete random variables
A.2.2 Continuous random variables
A.2.3 Bivariate random variables
A.2.4 Expected value and variance
A.3 Notable probability distributions
A.3.1 Discrete distributions
A.3.2 Continuous distributions
Notation

The main acronyms, symbols, operators and functions used in these notes
are introduced next.

Acronyms
DES Discrete Event System
cdf Cumulative Distribution Function
pdf Probability Density Function
pmf Probability Mass Function

Symbols, operators and functions


≜ equal by definition
∅ empty set
A^c complement of set A
A ∪ B union of sets A and B
A ∩ B intersection of sets A and B
2^A power set of set A (set of all subsets of A)
R the field of real numbers
C the field of complex numbers
Z the set of integer numbers
N the set of natural numbers
∈ belongs to
∉ does not belong to
dx/dt, ẋ first derivative of x
d²x/dt², ẍ second derivative of x
F_X(x) cdf of the random variable X
f_X(x) pdf of the random variable X
p_X(x) pmf of the random variable X
µ_X, E[X] expected value of the random variable X
σ²_X, Var(X) variance of the random variable X
Introduction

Discrete event systems are systems whose dynamic behaviour is driven
by asynchronous occurrences of events. Some examples are:

• a manufacturing plant with machines, workers, buffers, etc.;

• a bank with different types of customers and services (desks, ATMs, etc.);

• an airport with passengers in different states (check-in, security control,
gate, boarding, etc.);

• a computer system with processes accessing resources;

• a road system with cars, roads, crosses, traffic lights, etc.;

• a hospital with different types of patients and wards;

• a fast-food restaurant with a staff and different types of customers.

Broadly speaking, discrete event systems can be found in a variety of fields,
such as control, computer science, automated manufacturing, and communi-
cation, information and transportation networks.
The objective of this course is to equip the students with several model-
ing, analysis and simulation tools for discrete event systems. In this respect,
modeling, probability and programming are the main contents of the course.
Both untimed and timed automata will be introduced as models of discrete
event systems. Markov chains will be addressed as an important class of timed
automata. The main application of the tools presented in the course will be
queueing theory, since queueing systems are probably the most important and
widespread class of discrete event systems.
These lecture notes follow the line of teaching of the course, but do not
replace the textbook [1]. They should rather be considered as a complement
to it.
1 Basics of systems theory

1.1 Systems and mathematical models


In general terms, a system is what is of interest for a particular study. A
system is not necessarily associated with physical objects and natural laws:
consider, for instance, economic mechanisms or human behavior.
From the input-output point of view, a system can be represented as in
Figure 1.1, where:

• u denotes the system inputs, or independent variables;

• y denotes the system outputs, or dependent variables.


u → [ system ] → y

Figure 1.1: Input-output representation of a system.

Inputs are those variables that can be varied independently of the system,
and are fed to the system to modify its behavior. Outputs are those variables
describing how the system reacts to the inputs. Different systems may react
differently to the same inputs. For instance, the same force may determine a
different acceleration depending on the mass of the body to which it is applied.
In this sense, the outputs depend on the inputs applied and on the system itself.
A (mathematical) model describes, implicitly or explicitly, a functional
relationship between the inputs and the outputs of a system. It should be

always clear that the model is not the system. For instance, a resistor is not
Ohm’s law. Ohm’s law describes the voltage-current characteristic
of a resistor in the operating zone where it is approximately linear. In general,
a model differs from the system due to parameter uncertainties, unmodeled
dynamics, and approximations. The structure, the accuracy and the complexity of
a model depend on the system to be described and on its intended use.

1.2 Static vs dynamic systems


A first basic classification is between static and dynamic systems.

Definition 1.1 A system is termed static when the outputs at any time in-
stant depend only on the inputs applied at the same time instant.

Models of static systems are just functions mapping the inputs at time t to
the outputs at time t:
y(t) = f (u(t)). (1.1)
An example is the Ohm’s Law, mapping the current i passing through a resistor
to the voltage v across the resistor:

v(t) = R i(t) (1.2)

where R is the resistance.

Definition 1.2 A system is termed dynamic when the outputs at any time
instant depend on the whole past history of the inputs.

Differential equations are models of dynamic systems. Consider for instance
the current-voltage relation of a capacitor:

i(t) = C dv(t)/dt (1.3)
where C is the capacitance. If we integrate both sides of (1.3) between t0
(initial time) and t > t0 , we get
v(t) = v(t0) + (1/C) ∫_{t0}^{t} i(τ) dτ (1.4)
where it is apparent that the voltage (system output) at time t depends on
the past values of the current (system input) up to time t.
In this course, we will study a class of dynamic systems (namely, discrete
event systems) which is not described by differential equations.

1.3 The concept of state


Consider the following problem.

Problem 1.1 Given a system, is it sufficient to know the inputs u(t) for all
t ≥ t0 in order to determine uniquely the outputs y(t) for all t ≥ t0 ?

For static systems, the answer is yes. What about dynamic systems?

Example 1.1 A constant braking force is applied to a car. We are interested
in the space needed to stop the car. This implies that, in our problem, the
car is the system, the braking force is the input, and the displacement is the
output. We can write Newton’s second law of motion for the car:

Ma(t) = −f (1.5)

where M is the mass of the car, a(t) is the car acceleration, and f is the mod-
ulus of the braking force. Recalling that the acceleration is the first derivative
of the velocity, and integrating both sides of (1.5) between t0 (initial time) and
t > t0 , we get
v(t) = v(t0) − (f/M)(t − t0) (1.6)
where v denotes the velocity of the car. Recalling that the velocity is the first
derivative of the position, and integrating both sides of (1.6) between t0 (initial
time) and t > t0 , we further get
∆x(t) = v(t0)(t − t0) − (f/(2M))(t − t0)² (1.7)
where ∆x denotes the displacement. It can be observed that the displacement
does not depend only on the input (the braking force f), but also on the value
of the velocity of the car at time t0. This implies that knowledge of the
input alone is not sufficient to determine uniquely the output. The value
v(t0) is also needed. Notice that the velocity is neither an input nor an output
of the system. It is rather an additional variable whose value at time t0 is
necessary in order to determine uniquely the output for a given input. 
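As a sanity check on equations (1.6) and (1.7), the stopping time and stopping distance can be computed numerically. The mass, braking force and initial velocity below are illustrative assumptions, not values taken from the example:

```python
# Illustrative values (assumptions): 1000 kg car, 5 kN braking force, 20 m/s.
M = 1000.0    # mass of the car, kg
f = 5000.0    # modulus of the braking force, N
v0 = 20.0     # velocity v(t0) of the car at the initial time, m/s

t_stop = v0 * M / f                            # (1.6) with v(t) = 0
dx = v0 * t_stop - f / (2 * M) * t_stop ** 2   # stopping distance from (1.7)
print(t_stop, dx)   # 4.0 40.0
```

Eliminating t_stop gives the closed form ∆x = v(t0)² M/(2f), which shows again that the output depends on v(t0) and not on the input alone.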

From Example 1.1 we understand that, for dynamic systems, the answer
to Problem 1.1 is no. This leads to the following definition.

Definition 1.3 The state of a dynamic system is a set of variables whose
values at time t0 are necessary to determine uniquely the outputs y(t) for all
t ≥ t0, given the inputs u(t) for all t ≥ t0.

Typically, the state of a system will be denoted by x. Providing the correct
definition of state for a system is crucial in any modeling task.

Example 1.2 The queueing system of Figure 1.2 is formed by a single server
preceded by a queue (or storage space). The total capacity of the queueing
system is K, therefore the number of slots in the queue is K − 1.

queue server
Figure 1.2: Queueing system of Example 1.2.

Assume that one is interested in whether the server is idle or busy, and to this
aim defines the following variable:

x = 0 if the server is idle, and x = 1 otherwise. (1.8)

The question is whether (1.8) can be taken as a definition of state for the
system. The answer is no, and the reasoning is simple. Assume that x = 1.
This implies that the server is busy. What happens when the current service
terminates? Using only the information contained in x = 1, it is not possible
to determine uniquely whether the next value will be x′ = 0 or x′ = 1. On
the basis of Definition 1.3, the variable in (1.8) is therefore not a state for the
queueing system. Notice that the next value will be x′ = 0 if no customer is
waiting in the queue, and x′ = 1 otherwise. It is apparent that full information
about the number of customers in the system is needed for a proper definition
of state. This example will be continued in Example 1.4. 

Notice that, for a given system, the definition of state is not unique. As will be
clear later (see, e.g., Example 1.5), different definitions of state can be given for
the same system, depending on the problem or application which is of interest.

Definition 1.4 A system is termed with continuous state when its state takes
values in a continuous set. It is termed with discrete state when its state takes
values in a discrete/countable set.

There exists a class of systems, called hybrid systems, for which some compo-
nents of the state are continuous, and others are discrete.

[plot of ambient temperature (°C), ranging between 18 and 30, versus hour of the day from 0 to 24]

Figure 1.3: Plot of ambient temperature in a summer day.

1.4 Time-driven vs event-driven systems


An important classification is between time-driven and event-driven systems.
Definition 1.5 A system is termed time-driven when all its variables (inputs,
state and outputs) may change at any time instant with regular/synchronous
ticking.
The above definition implies that time is an independent variable for time-
driven systems. There is an external clock, and all system variables may
change at any clock tick. Figure 1.3 shows an example of this type, where the
observed variable is the ambient temperature in a summer day. Differential
equations are typical models used to describe time-driven systems.

Definition 1.6 A system is termed event-driven when its state changes only
upon the (typically irregular/asynchronous) occurrence of events.

It will be clear later that, for event-driven systems, “time” (meaning the time
instants when the events occur) is a dependent variable of the system. An
example of sample path (or state trajectory) of an event-driven system is shown
in Figure 1.4. Its most evident characteristic is that it is piecewise constant,
with the time instants of the state jumps determined by the asynchronous
occurrence of events.
Example 1.3 A typical example of event-driven system is the queueing sys-
tem of Example 1.2. If the state is defined as the number of customers in the
system, it is clear that the state changes only upon the occurrence of events,
in particular:

[piecewise constant trajectory x(t) with jumps at asynchronous event times]

Figure 1.4: Example of sample path of an event-driven system.

• the state is increased by one when the arrival of a new customer occurs
(except when the system is full);

• the state is decreased by one when the service of a customer terminates,
and the customer leaves the system. 

An event should be thought of as occurring instantaneously. Events can be of
different types:

• specific actions (e.g., somebody presses a button, a customer arrives in
a queue, a job is terminated, etc.);

• spontaneous occurrences (e.g., the failure of a component, an interruption
of service, etc.);

• fulfilment of logic conditions (e.g., a warning threshold is exceeded, etc.).

In the following, a generic event will be typically denoted by e.

1.5 Discrete event systems


We are now ready for the definition of discrete event system.

Definition 1.7 A discrete event system (DES) is a dynamic, event-driven
system with discrete state.

It follows that a DES is characterized by:



[states 0, 1, 2, ..., K−1, K in a row; arcs labeled a from each state x to x + 1, with a self-loop labeled a at K; arcs labeled d from each state x to x − 1]
Figure 1.5: State transition diagram for the queueing system of Example 1.4.

• a discrete set E of events;

• a discrete state space X ;

• an event-driven dynamics.

There exist real systems that can be naturally viewed as discrete event systems.
A typical example is provided next.

Example 1.4 Consider again the queueing system of Example 1.2, and define
the state x as the number of customers in the system. It turns out that x may
take values in the discrete state space

X = {0, 1, 2, . . . , K − 1, K}. (1.9)

Moreover, the state x may change only upon the arrival of a new customer, or
upon the termination of a service (assuming that customers depart from the
system after service). We can thus define the event set

E = {a, d} (1.10)

where a denotes the arrival of a new customer, and d denotes the termination
of a service in the server. The event-driven dynamics of the queueing system
can be represented through the state transition diagram in Figure 1.5. In the
state transition diagram, the nodes represent the states of X , whereas labeled
arcs represent the state transitions: an arc from x to x′ with label e means
that, when the current state is x and the next event is e, the next state will
be x′ . Notice that in state x = K the queueing system is full, and therefore
arrivals of new customers (that are still possible) are rejected due to lack of
space (the state of the system does not change). Notice also that the state
transition diagram contains the information that event d is not possible in
state x = 0 (there is no arc labeled with d exiting from node 0). Indeed, when
the queueing system is empty, no termination of service is possible. 

In other cases, real systems that are naturally time-driven, can be modeled as
event-driven for particular applications.

Example 1.5 A cart moves along a track. Sensors are located at three points
of the track (they are denoted by A, B and C in Figure 1.6). Each sensor sends
an impulse when the cart crosses the corresponding point, in both directions.
For the sake of simplicity, it is assumed that the cart never changes direction
while it is crossing a sensor.
[cart at position z on a track, with sensors located at points A, B and C]

Figure 1.6: Cart moving along a track.

The cart can be naturally seen as a time-driven system, whose motion is
modeled by Newton’s second law:

M z̈(t) = f (t) (1.11)

where M is the mass of the cart, z is the position of the cart along the track,
and f is the tractive force applied to the cart. In principle, by integrating
twice (1.11), and given the initial position z(t0 ), the initial velocity ż(t0 ), and
the applied force f (t) for all t ≥ t0 , it is possible to know from the model
the exact position z(t) of the cart at any t ≥ t0 . In practice, uncertainty
on the aforementioned quantities may lead to discrepancies between the real
system and the model. However, in some applications, it is neither required
nor feasible to know the exact position of the cart. It is sufficient to know the
interval in which the cart is localized. To this aim, the input provided by the
sensors can be used. Define the state of the system as follows:


x = 0 if z < A
    1 if A ≤ z < B
    2 if B ≤ z < C (1.12)
    3 if z ≥ C.
Thus, we have a discrete state space

X = {0, 1, 2, 3}. (1.13)

Moreover, define the set of events

E = {a, b, c} (1.14)

[states 0, 1, 2, 3 in a row; arcs labeled a between 0 and 1, b between 1 and 2, and c between 2 and 3, in both directions]
Figure 1.7: State transition diagram for the cart system of Example 1.5.

where a means that an impulse is received from the sensor located at A, and
so on. With these definitions, we can construct the state transition diagram
in Figure 1.7. Notice that the dynamics of the system so described turns out
to be event-driven. 
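The event-driven model of the cart can be sketched in a few lines of code. The transition table below is read off the state transition diagram of Figure 1.7; the function name locate and the impulse sequence used in the example are illustrative.

```python
# Transition table of the cart DES (Figure 1.7): (state, event) -> next state.
# Missing pairs correspond to events that are impossible in that state.
F = {
    (0, 'a'): 1,
    (1, 'a'): 0, (1, 'b'): 2,
    (2, 'b'): 1, (2, 'c'): 3,
    (3, 'c'): 2,
}

def locate(x0, impulses):
    """Update the interval state from a sequence of sensor impulses."""
    x = x0
    for e in impulses:
        x = F[(x, e)]   # a KeyError flags an impulse impossible in state x
    return x

# Cart starts left of A, crosses A and B, then turns back and returns.
print(locate(0, ['a', 'b', 'b', 'a']))   # 0
```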

From Example 1.5 we understand that the distinction between time-driven
and event-driven systems is not so strict. The same real system could
be suitably modeled as time-driven or event-driven, depending on the applica-
tion. We also learn that the definition of state is not unique for a given system.
In the case of the cart of Example 1.5, the state is the pair (position,velocity)
for the time-driven model, and the interval where the cart is localized for the
event-driven model.
2 Untimed models of discrete event systems

In this chapter, we introduce state automata as untimed models of discrete
event systems. The term untimed refers to the fact that these models do
not include time. Given a sequence of events, a state automaton returns the
corresponding sequence of states. No information about the time instants when
events (and thus state transitions) occur is available from the model. In
this respect, untimed models only describe the logical behaviour of discrete
event systems. An input-output representation of untimed models is given in
Figure 2.1, where ek denotes the kth event, and xk denotes the state after the
kth event, with x0 the initial state.
{e1, e2, ...} → [ untimed model ] → {x0, x1, x2, ...}

Figure 2.1: Input-output representation of untimed models.

Notice that the event sequence {e1, e2, ...} is the model input, and the cor-
responding state sequence {x0, x1, x2, ...} is the model output, provided that
event ek is possible in state xk−1 for all k = 1, 2, .... To understand this, recall
the queueing system of Example 1.4, where event d is not possible in state 0.
An event sequence in which event d should occur in state 0 is infeasible for
the queueing system.

2.1 State automata (with outputs)


Based on Definition 1.7, a “natural” class of untimed models of discrete event
systems is represented by state automata.

Definition 2.1 A state automaton is a 5-tuple (E, X, Γ, f, x0), where:

• E is a countable set of events;

• X is a countable set of states;

• Γ : X → 2^E is the active event function, such that Γ(x) ⊆ E is the set of
all events that are possible in state x ∈ X;

• f : X × E → X is the state transition function, such that x′ = f(x, e) is
the next state when event e ∈ Γ(x) occurs in the current state x ∈ X;

• x0 ∈ X is the initial state.

The following rules describe how a state automaton operates, i.e. how it
generates the state sequence corresponding to a given event sequence.

Algorithm 2.1

1. Set k = 1

2. If ek ∈
/ Γ(xk−1 ), then goto step 5

3. Compute xk = f (xk−1 , ek )

4. Set k = k + 1 and goto step 2

5. Error exit

Notice that an error exit occurs in step 5 of Algorithm 2.1. This is because
the next event is one which is actually not possible in the current state. This
leads to the following definition of feasibility for an event sequence.

Definition 2.2 The event sequence {e1 , e2 , . . .} is said to be feasible for the
DES described by the state automaton (E, X , Γ, f, x0) if ek ∈ Γ(xk−1 ) for all
k = 1, 2, . . ., where x0 is the initial state and xk = f (xk−1 , ek ).

If we draw the following analogies:

• the event set E is the alphabet;



feasibility check state update


– x0 = 0
e1 = a ∈ Γ(0) = Γ(x0 ) x1 = f (x0 , e1 ) = f (0, a) = 1
e2 = a ∈ Γ(1) = Γ(x1 ) x2 = f (x1 , e2 ) = f (1, a) = 2
e3 = d ∈ Γ(2) = Γ(x2 ) x3 = f (x2 , e3 ) = f (2, d) = 1
e4 = a ∈ Γ(1) = Γ(x3 ) x4 = f (x3 , e4 ) = f (1, a) = 2
e5 = d ∈ Γ(2) = Γ(x4 ) x5 = f (x4 , e5 ) = f (2, d) = 1
e6 = d ∈ Γ(1) = Γ(x5 ) x6 = f (x5 , e6 ) = f (1, d) = 0

Table 2.1: Iterations of Algorithm 2.1 in Example 2.1.

• the event sequences are the words;

• the set of the feasible event sequences is the language,

we can think of characterizing a DES through the language it recognizes (or
accepts). This establishes interesting connections with formal language theory.

Example 2.1 Consider the queueing system of Example 1.2. The definitions
of E and X are given in Example 1.4. The other elements of the corresponding
state automaton (E, X , Γ, f, x0) are defined next.

• Active event function Γ

Γ(0) = {a}
Γ(x) = {a, d}, x = 1, 2, . . . , K

• State transition function f

f(x, a) = x + 1 if x = 0, 1, ..., K − 1, and f(x, a) = K otherwise;
f(x, d) = x − 1 for x = 1, 2, ..., K.

The initial state is x0 = 0, if we assume that the system is initially empty.


Algorithm 2.1 can be applied to determine the state sequence correspond-
ing to the event sequence {e1 = a, e2 = a, e3 = d, e4 = a, e5 = d, e6 = d}. The
iterations of the algorithm are reported in Table 2.1. It turns out that the state
sequence is {x0 = 0, x1 = 1, x2 = 2, x3 = 1, x4 = 2, x5 = 1, x6 = 0}. On the
other hand, given the event sequence {e1 = a, e2 = a, e3 = d, e4 = d, e5 = d},
the algorithm returns an error exit, meaning that the event sequence is infea-
sible starting from x0 = 0. 
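The automaton of this example, operated by Algorithm 2.1, can be sketched as follows. The capacity K = 3 and the function names are illustrative choices, not fixed by the notes; any K ≥ 2 reproduces the run of Table 2.1.

```python
K = 3  # assumed capacity, for illustration

def gamma(x):
    # active event function of Example 2.1
    return {'a'} if x == 0 else {'a', 'd'}

def f(x, e):
    # state transition function of Example 2.1
    if e == 'a':
        return min(x + 1, K)   # an arrival in a full system is rejected
    return x - 1               # e == 'd'

def run(events, x0=0):
    # Algorithm 2.1: feasibility check (step 2) and state update (step 3)
    states = [x0]
    for e in events:
        x = states[-1]
        if e not in gamma(x):
            raise ValueError(f"event {e!r} infeasible in state {x}")
        states.append(f(x, e))
    return states

print(run(['a', 'a', 'd', 'a', 'd', 'd']))   # [0, 1, 2, 1, 2, 1, 0]
```

Running run(['a', 'a', 'd', 'd', 'd']) raises ValueError, the counterpart of the error exit of Algorithm 2.1 for the infeasible sequence discussed above.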

Definition 2.1 can be extended to the case of systems with outputs.



Definition 2.3 A state automaton with outputs is a 7-tuple (E, X , Γ, f, x0, Y, g),
where:

• (E, X , Γ, f, x0) is a state automaton;

• Y is a countable set of outputs;

• g : X → Y is the output function, such that y = g(x) is the output
corresponding to state x ∈ X.

For a state automaton with outputs, Algorithm 2.1 returns also the output
sequence {y0, y1 , y2 , . . .}, where yk = g(xk ), k = 0, 1, 2, . . ..
It is worthwhile to note that state automata with outputs include Moore
machines as special cases. In the theory of computation, a Moore machine is a
finite-state machine whose output values are determined solely by the current
state. Moore machines are typically used in sequential logic implementations.

Example 2.2 Consider again the queueing system of Examples 1.2, 1.4 and
2.1, and assume that one is only interested in whether the server is idle or
busy. To this aim, one can extend the state automaton of Example 2.1 by
adding the output set Y = {0, 1} and the output function

g(x) = 0 if x = 0, and g(x) = 1 otherwise

where outputs 0 and 1 mean that the server is idle and busy, respectively. 
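A minimal sketch of this output map, applied to the state sequence obtained in Example 2.1 (the name g is the notes' own; the list encoding is illustrative):

```python
def g(x):
    # output function of Example 2.2: 0 = server idle, 1 = server busy
    return 0 if x == 0 else 1

states = [0, 1, 2, 1, 2, 1, 0]    # state sequence computed in Example 2.1
print([g(x) for x in states])     # [0, 1, 1, 1, 1, 1, 0]
```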

2.2 Graphical representation


A state automaton (with outputs) can be represented through an oriented
graph with labeled arcs (and nodes), according to the following rules.

• The nodes of the graph are the states of the automaton.

• The oriented and labeled arcs describe the state transition function f :
an oriented arc with label e is added from node x to node x′ if and only
if e ∈ Γ(x) and x′ = f (x, e).
[arc labeled e from node x to node x′]

[states 0 to K as in Figure 1.5, with node 0 drawn as a single circle (output 0) and nodes 1, ..., K drawn as double circles (output 1)]
Figure 2.2: State transition diagram with outputs for the queueing system of
Example 2.2.

• The labels on the nodes describe the output function g: a label y is added
on node x if and only if y = g(x).

[node x with output label y]

A special case is when the output is binary, i.e. y ∈ {0, 1}.


– If g(x) = 0: node x is drawn as a single circle.

– If g(x) = 1: node x is drawn as a double circle.
✚✙
• The initial state x0 is represented by an arrow pointing to the corre-
sponding node.
[node x0 marked with an incoming arrow that does not originate from any node; arcs labeled e′ and e′′ connect x0 with nodes x′ and x′′]

The resulting graph is also called state transition diagram.

Example 2.3 For the state automaton of Example 2.1, the state transition
diagram is shown in Figure 1.5. When the variant of Example 2.2 is considered,
the state transition diagram with outputs is shown in Figure 2.2. 
3 Timed models of discrete event systems

As we have seen in the previous chapter, state automata describe only the
logical behaviour of discrete event systems. Given a sequence of events, a
state automaton returns the corresponding sequence of states. In this chapter,
we introduce timed automata as models of discrete event systems including
information about time instants when events (and thus state transitions) occur.

3.1 Introduction
One could argue that event times (i.e. time instants when events occur) could
be provided as inputs to the timed model of a DES. If this were the case,
there would be little to say. Recall that in discrete event systems state tran-
sitions occur only simultaneously with events. If event times were known, the
same would hold for the time instants when state transitions occur, and one would
only need to know the next state, given the current state and the next event.
That is, timed models of discrete event systems would remain merely logical
models. Now, recalling the definition of system input given in Section 1.1, the
question is whether event times can be actually considered as inputs to dis-
crete event systems. In other words, is it true that event times can be decided
independently of the system? The next example tries to answer this question.

Example 3.1 Consider the following two queueing disciplines:

• First In-First Out (FIFO) is a queueing discipline where the oldest task
of the queue is processed first;

task  arrival time  execution time
#1    1.0           4.0
#2    3.5           2.0
#3    4.5           2.5

Table 3.1: Task specification for Example 3.1.

• Round Robin (RR) is a queueing discipline where time slices are assigned
to each task of the queue in equal portions and in circular order.

In the case of FIFO, the ongoing task uses the resource until completion,
without interruption. In the case of RR, a time slice is assigned to the first
task of the queue. If processing of the task does not terminate before the time
slice expires, the task is relegated to the last position of the queue, and all
other tasks are shifted one position forward. On the other hand, if processing
of the task terminates before the time slice expires, the processed task leaves
the system, and all other tasks are shifted one position forward in the queue.
In both cases, the task on top of the queue is then assigned a new time slice.
Thanks to the cyclic and equitable assignment of the resource to all tasks of
the queue, one advantage of RR versus FIFO is that it is starvation-free.
Consider a resource, and tasks needing to use the resource. The system
can be modeled as the queueing system of Figure 1.2, where the resource plays
the role of the server, and the queue is the list of waiting tasks. Given the
tasks with characteristics specified in Table 3.1, we want to determine when
each task will be completed in case either FIFO or RR is applied. The resource
is assumed to be initially idle. In the following, we denote by ai and di the
arrival and the completion of the ith task, respectively.

FIFO

• The initial time instant is t0 = 0.

• At t1 = 1.0, task #1 arrives (event a1) and accesses the resource.
Execution of task #1 will terminate at t = 5.0.

#1

• At t2 = 3.5, task #2 arrives (event a2 ) and is added to the waiting list.



#2 #1

• At t3 = 4.5, task #3 arrives (event a3 ) and is added to the waiting list.

#3 #2 #1

• At t4 = 5.0, execution of task #1 terminates (event d1). Task #1 leaves
the system, and task #2 accesses the resource. Execution of task #2
will terminate at t = 7.0.

#3 #2

• At t5 = 7.0, execution of task #2 terminates (event d2). Task #2 leaves
the system, and task #3 accesses the resource. Execution of task #3
will terminate at t = 9.5.

#3

• At t6 = 9.5, execution of task #3 terminates (event d3). Task #3 leaves
the system. The system is now empty.

The sample path determined by the FIFO queueing discipline is shown in
Figure 3.1.

a1 a2 a3 d1 d2 d3

0 1.0 3.5 4.5 5.0 7.0 9.5 time

Figure 3.1: Sample path determined by the FIFO queueing discipline.
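The FIFO walkthrough above amounts to a one-pass computation: each task starts as soon as it has arrived and the resource is free. A minimal sketch, where the function name and the tuple encoding of Table 3.1 are illustrative choices:

```python
def fifo_completions(tasks):
    """Completion time of each task under FIFO.

    tasks: list of (task_id, arrival_time, execution_time).
    """
    t = 0.0          # time at which the resource becomes free
    done = {}
    for tid, arrival, exec_time in sorted(tasks, key=lambda task: task[1]):
        t = max(t, arrival) + exec_time   # wait for the resource, then run
        done[tid] = t
    return done

tasks = [(1, 1.0, 4.0), (2, 3.5, 2.0), (3, 4.5, 2.5)]   # Table 3.1
print(fifo_completions(tasks))   # {1: 5.0, 2: 7.0, 3: 9.5}
```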

RR
In the case of RR, expiration of the time slice is another event to be considered.
We denote this event by s. It is assumed that the time slice amounts to Ts = 1.5
time units.
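The RR bookkeeping can be sketched as a small simulator. This is an assumption-laden sketch, not the notes' own algorithm: arrivals during a slice join the queue ahead of the preempted task, and ties between an arrival and a slice expiration are not modeled (neither case occurs in this example); the function name and task encoding are illustrative.

```python
from collections import deque

def rr_completions(tasks, ts):
    """Completion times under Round Robin with time slice ts (single server).

    tasks: list of (task_id, arrival_time, execution_time).
    """
    arrivals = deque(sorted(tasks, key=lambda task: task[1]))
    queue = deque()        # waiting tasks as (id, residual execution time)
    t, done = 0.0, {}
    while arrivals or queue:
        if not queue:      # resource idle: jump to the next arrival
            tid, at, ex = arrivals.popleft()
            t = max(t, at)
            queue.append((tid, ex))
        tid, rem = queue.popleft()         # task on top gets a time slice
        slice_end = t + min(rem, ts)
        while arrivals and arrivals[0][1] < slice_end:
            aid, at, ex = arrivals.popleft()
            queue.append((aid, ex))        # arrivals join the waiting list
        if rem <= ts:
            t += rem                       # task completes within the slice
            done[tid] = t
        else:
            t = slice_end                  # slice expires: requeue the task
            queue.append((tid, rem - ts))
    return done

tasks = [(1, 1.0, 4.0), (2, 3.5, 2.0), (3, 4.5, 2.5)]   # Table 3.1
print(rr_completions(tasks, ts=1.5))
```

On the tasks of Table 3.1 with Ts = 1.5, the simulator reproduces the event times derived step by step in the walkthrough (task #1 completes at t = 6.5 and task #2 at t = 8.5).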

• The initial time instant is t0 = 0.

• At t1 = 1.0, task #1 arrives (event a1) and is assigned a time slice. The
execution time of task #1 amounts to 4.0 time units, which is greater
than the time slice. The assigned time slice will expire at t = 2.5.

#1

• At t2 = 2.5, the time slice expires (event s). Since task #1 is the only task
currently in the system, it is again assigned a time slice. The residual
execution time of task #1 amounts to 2.5 time units, which is greater
than the time slice. The assigned time slice will expire at t = 4.0.

#1

• At t3 = 3.5, task #2 arrives (event a2 ) and is added to the waiting list


with execution time equal to 2.0 time units.

#2 #1

• At t4 = 4.0, the time slice expires (event s). Task #1 is added to the
waiting list with residual execution time equal to 1.0 time unit. Task #2
is assigned a time slice. The execution time of task #2 amounts to
2.0 time units, which is greater than the time slice. The assigned time
slice will expire at t = 5.5.

#1 #2

• At t5 = 4.5, task #3 arrives (event a3 ) and is added to the waiting list


with execution time equal to 2.5 time units.

#3 #1 #2

• At t6 = 5.5, the time slice expires (event s). Task #2 is added to the
waiting list with residual execution time equal to 0.5 time units. Task #1
is assigned a time slice. The residual execution time of task #1 amounts
to 1.0 time unit, which is less than the time slice. Task #1 will be
completed at t = 6.5.

#2 #3 #1

• At t7 = 6.5, execution of task #1 terminates (event d1 ). Task #1 leaves


the system, and task #3 is assigned a time slice. The execution time of
task #3 amounts to 2.5 time units, which is greater than the time slice.
The assigned time slice will expire at t = 8.0.

#2 #3

• At t8 = 8.0, the time slice expires (event s). Task #3 is added to the
waiting list with residual execution time equal to 1.0 time unit. Task #2
is assigned a time slice. The residual execution time of task #2 amounts
to 0.5 time units, which is less than the time slice. Task #2 will be
completed at t = 8.5.

#3 #2

• At t9 = 8.5, execution of task #2 terminates (event d2 ). Task #2 leaves


the system, and task #3 is assigned a time slice. The residual execution
time of task #3 amounts to 1.0 time unit, which is less than the time
slice. Task #3 will be completed at t = 9.5.

#3

• At t10 = 9.5, execution of task #3 terminates (event d3). Task #3 leaves the system. The system is now empty.

a1 (1.0)  s (2.5)  a2 (3.5)  s (4.0)  a3 (4.5)  s (5.5)  d1 (6.5)  s (8.0)  d2 (8.5)  d3 (9.5)

Figure 3.2: Sample path determined by the RR queueing discipline.

The sample path determined by the RR queueing discipline is shown in Figure 3.2.
By comparing Figures 3.1 and 3.2, it can be observed that events d1 and
d2 occur at times 5.0 and 7.0 when the FIFO queueing discipline is applied,
and at times 6.5 and 8.5 when the RR queueing discipline is applied. Since
the queueing discipline is part of the definition of the queueing system, we
conclude that these event times cannot be decided independently of the system,
i.e. they cannot be considered as system inputs. 
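The RR sample path can be checked the same way. The sketch below encodes the scheduling rules used in the walkthrough (in particular, a task arriving during a slice joins the waiting list ahead of the task preempted when that slice expires); the task data are again taken from the example, and the function name is ours.

```python
from collections import deque

# Round-robin, single server, time slice ts. A sketch of the rules used in
# the walkthrough above; arrivals are assumed sorted by time.
def rr_completion_times(arrivals, services, ts):
    n = len(arrivals)
    remaining = list(services)   # residual execution time of each task
    done = [0.0] * n
    q = deque()                  # waiting list (task indices)
    t, nxt = 0.0, 0              # current time, index of next arrival
    while nxt < n or q:
        if not q:                # server idle: jump to the next arrival
            t = arrivals[nxt]
            q.append(nxt)
            nxt += 1
            continue
        task = q.popleft()       # head of the waiting list gets the server
        end = t + min(ts, remaining[task])
        # tasks arriving strictly before the slice ends join the queue now
        while nxt < n and arrivals[nxt] < end:
            q.append(nxt)
            nxt += 1
        if remaining[task] <= ts:
            done[task] = end     # task completes within this slice
        else:
            remaining[task] -= ts
            q.append(task)       # preempted: back of the waiting list
        t = end
    return done

print(rr_completion_times([1.0, 3.5, 4.5], [4.0, 2.0, 2.5], 1.5))
# → [6.5, 8.5, 9.5], i.e. events d1, d2, d3
```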

The lesson we learn from Example 3.1 is that event times may depend on
the system, and therefore they cannot be taken as system inputs in general.
Event times are rather system outputs, for which suitable modeling mechanisms should be devised.

3.2 The clock structure


3.3 Timed automata
3.4 Event timing dynamics
Bibliography

[1] C. G. Cassandras and S. Lafortune. Introduction to discrete event systems.


Springer, 2nd edition, 2008.
A Review of probability theory

These notes of probability theory are not exhaustive. They are only intended
to summarize the basic concepts, whose knowledge is preliminary for understanding the theory of stochastic DES.

A.1 Basic concepts and definitions


Probability is a model of uncertainty. Roughly speaking, it associates a number
between 0 and 1 to each possible outcome of a random phenomenon: the
greater the number, the more likely the outcome.
The fundamental elements of a probabilistic problem are the following:

• a random phenomenon;

• the set Ω of all possible outcomes of the random phenomenon, called the
sample space;

• a family F of subsets of Ω, called the events¹;

• a probability function P : F → [0, 1].

The family F must include the empty set ∅ and the sample space Ω. Moreover,
it must be closed with respect to the standard set operations (complementation, union, intersection). This means that, if A and B are events in F , then
one can build other events as follows:
¹The term event is overloaded in this course. In probability theory, events are sets. In the theory of DES, events are instantaneous occurrences.
Review of probability theory 25

• complementation
  Aᶜ ≡ A does not occur (logical NOT)

• union
  A ∪ B ≡ at least one of A and B occurs (logical OR)

• intersection
  A ∩ B ≡ both of A and B occur (logical AND)

If A ∩ B = ∅, A and B are said to be disjoint.
We do not go here into the details of how a probability function P is
defined. It is sufficient to recall that P is required to satisfy P (Ω) = 1 (in this
respect, Ω is also called the certain event), and P (A ∪ B) = P (A) + P (B) if
A, B ∈ F are disjoint (additivity property).
The triple (Ω, F , P ) is called a probability space.

A.1.1 Properties of probability spaces


As a consequence of their definition, probability functions enjoy some additional properties:

i) P (∅) = 0

   Proof. Since Ω = Ω ∪ ∅, and Ω and ∅ are disjoint, we have P (Ω) = P (Ω ∪ ∅) = P (Ω) + P (∅). Hence, P (∅) = 0.

ii) P (Aᶜ) = 1 − P (A)

   Proof. Since A ∪ Aᶜ = Ω, and A and Aᶜ are disjoint, we have 1 = P (Ω) = P (A ∪ Aᶜ) = P (A) + P (Aᶜ). Hence, P (Aᶜ) = 1 − P (A).

iii) If A ⊆ B, then P (A) ≤ P (B)

   Proof. Since B = (B ∩ A) ∪ (B ∩ Aᶜ), and A = A ∩ B, we have P (B) = P (B ∩ A) + P (B ∩ Aᶜ) = P (A) + P (B ∩ Aᶜ) ≥ P (A), since P (B ∩ Aᶜ) ≥ 0.
iv) P (A ∪ B) = P (A) + P (B) − P (A ∩ B)

   Proof. We have P (A ∪ B) = P (A) + P (B ∩ Aᶜ), because A and B ∩ Aᶜ are disjoint and their union is A ∪ B. Moreover, P (B ∩ Aᶜ) = P (B) − P (B ∩ A). Hence, P (A ∪ B) = P (A) + P (B) − P (A ∩ B).
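Properties i)–iv) can be verified mechanically on a small finite probability space; the sketch below takes Ω to be the outcomes of a fair six-sided die, with P(E) = |E|/|Ω| (the events A and B are made up for illustration).

```python
from fractions import Fraction

# Finite probability space: one roll of a fair six-sided die.
omega = frozenset(range(1, 7))

def P(event):
    """Uniform probability: |event| / |omega| as an exact fraction."""
    return Fraction(len(event), len(omega))

A = {2, 4, 6}   # "the outcome is even"
B = {4, 5, 6}   # "the outcome is greater than 3"

assert P(set()) == 0                        # property i)
assert P(omega - A) == 1 - P(A)             # property ii)
assert P({4, 6}) <= P(A)                    # property iii), since {4,6} ⊆ A
assert P(A | B) == P(A) + P(B) - P(A & B)   # property iv)
```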

A.1.2 Conditional probabilities and independence


Given two events A and B with P (A) > 0, we define the conditional probability
of B given A as the quantity

P (B|A) ≜ P (A ∩ B) / P (A).    (A.1)

Intuitively, P (B|A) is the probability of event B, modified by the fact that one
has observed event A. For this reason, P (B) is often called the prior probability,
whereas P (B|A) is called the posterior probability.
Two events A and B are said to be independent if

P (A ∩ B) = P (A)P (B).    (A.2)

If A and B are independent, then P (B|A) = P (B). This means that the fact
that event A has been observed does not modify the inference on the
occurrence of event B.

A.1.3 Total probability rule and Bayes theorem


Let {A1 , . . . , An } be a partition of the sample space Ω, that is:

i) A1 ∪ A2 ∪ · · · ∪ An = Ω

ii) Ai ∩ Aj = ∅ ∀i ≠ j (mutually disjoint events).

Moreover, let B be an event. The total probability rule states that:

P (B) = Σ_{i=1}^{n} P (B|Ai )P (Ai ).    (A.3)

If C is another event, then it holds that:

P (B|C) = Σ_{i=1}^{n} P (B|Ai ∩ C)P (Ai |C).    (A.4)

The Bayes theorem follows from the total probability rule:

P (Aj |B) = P (Aj ∩ B) / P (B) = P (B|Aj )P (Aj ) / Σ_{i=1}^{n} P (B|Ai )P (Ai ).    (A.5)
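A numeric sketch of (A.3) and (A.5) follows. The two-machine setting and all the figures (production shares 60%/40%, defect rates 2%/5%) are made up for illustration, with Ai = "part produced by machine i" and B = "part is defective".

```python
from fractions import Fraction

# Hypothetical two-machine example: machine 1 produces 60% of the parts
# with a 2% defect rate, machine 2 produces 40% with a 5% defect rate.
P_A = {1: Fraction(6, 10), 2: Fraction(4, 10)}
P_B_given_A = {1: Fraction(2, 100), 2: Fraction(5, 100)}

# total probability rule (A.3)
P_B = sum(P_B_given_A[i] * P_A[i] for i in P_A)

# Bayes theorem (A.5): probability a defective part came from machine 2
P_A2_given_B = P_B_given_A[2] * P_A[2] / P_B

print(P_B, P_A2_given_B)   # → 4/125 5/8
```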

A.2 Random variables


Random variables are real-valued functions of the outcomes of a random phenomenon. More formally, a random variable X² is a function X : Ω → R such that, for all x ∈ R, the set {ω ∈ Ω : X(ω) ≤ x} is an event in F. We typically drop the dependence on ω, and we simply write the event as {X ≤ x}.

²A random variable is typically denoted by an upper-case letter, whereas lower-case letters denote the realizations (the observed values) of the random variable.
Since {X ≤ x} is an event, we can compute its probability. The function

FX (x) ≜ P (X ≤ x) ∀x ∈ R,    (A.6)

is called the cumulative distribution function (cdf) of the random variable X.


It satisfies the following properties:

i) lim_{x→−∞} FX (x) = 0

ii) lim_{x→+∞} FX (x) = 1

iii) FX is non-decreasing and right-continuous.

A random variable X is fully specified by its cdf. When we are given a random variable with its cdf, the underlying probability space (Ω, F , P ) typically
remains unspecified.
The rules of Section A.1.1 can be applied to events of the type {X ≤ x}.
For instance, given a, b ∈ R with a < b, we have:

P (a < X ≤ b) = P ({X > a} ∩ {X ≤ b})
             = P (X > a) + P (X ≤ b) − P ({X > a} ∪ {X ≤ b})
             = P ({X ≤ a}ᶜ) + P (X ≤ b) − 1
             = P (X ≤ b) − P (X ≤ a)
             = FX (b) − FX (a),    (A.7)

where {X > a} ∪ {X ≤ b} = Ω is the certain event.

The support of a random variable X is defined as the smallest closed set


SX such that P (X ∈ SX ) = 1. Loosely speaking, the support of X can be
thought of as the closure of the set of all possible values that X can take.

A.2.1 Discrete random variables


A discrete random variable X is a random variable which takes values in a
discrete set {x(1) , x(2) , . . .}. It follows that the cdf FX of a discrete random
variable X is a right-continuous, piecewise-constant function. See Fig. A.1,
where it is assumed x(1) < x(2) < . . . without loss of generality. It can be
shown that the jump at the point x(i) amounts to P (X = x(i) ). Hence, for a

Figure A.1: Example of the cdf of a discrete random variable.

discrete random variable X the information provided by the cdf is equivalently


provided by the probability mass function (pmf), defined as:

pX (x(i)) ≜ P (X = x(i)) ∀i = 1, 2, . . . .    (A.8)

The pmf satisfies the following properties:

i) pX (x(i)) > 0 ∀i = 1, 2, . . .

ii) Σ_i pX (x(i)) = 1.
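These definitions can be sketched in a few lines of code; the pmf below is a made-up example with support {1, 2, 4}.

```python
# Hypothetical pmf on the support {1, 2, 4}; the cdf built from it is a
# right-continuous step function whose jump at x(i) equals pX(x(i)).
pmf = {1: 0.2, 2: 0.5, 4: 0.3}

def F(x):
    """Cdf: total mass at support points not exceeding x."""
    return sum(p for v, p in pmf.items() if v <= x)

assert abs(F(1.5) - 0.2) < 1e-12            # constant between jumps
assert abs(F(4.0) - 1.0) < 1e-12            # total mass is 1
assert abs((F(2) - F(1)) - pmf[2]) < 1e-12  # jump at 2 equals pX(2), cf. (A.7)
```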

A.2.2 Continuous random variables


A continuous random variable X is a random variable which takes values
in a continuous set. A continuous random variable X is fully specified by its
cdf FX . We say that X has probability density function (pdf) fX if the function
fX satisfies:

i) fX (x) ≥ 0 ∀x ∈ R

ii) ∫_{−∞}^{+∞} fX (t) dt = 1

iii) FX (x) = ∫_{−∞}^{x} fX (t) dt ∀x ∈ R.

Notice that a necessary condition for iii) to hold is that FX is a continuous
function. At points x ∈ R where FX is also differentiable, it holds that:

fX (x) = dFX (x)/dx.    (A.9)

Figure A.2: Region Ax,y .

If a continuous random variable X admits a pdf fX , the support of X is


practically defined as the closure of the set {x ∈ R : fX (x) > 0}. Moreover,
P (X = x) = 0 for all x ∈ R. Hence, P (X < x) = P (X ≤ x), etc.
Using a pdf fX , the computation of a probability involving the random
variable X boils down to the evaluation of an integral. Indeed, we have:

P (a < X ≤ b) = FX (b) − FX (a)
             = ∫_{−∞}^{b} fX (t) dt − ∫_{−∞}^{a} fX (t) dt
             = ∫_{a}^{b} fX (t) dt.    (A.10)

A.2.3 Bivariate random variables


Let X and Y be random variables. The sets {X ≤ x} and {Y ≤ y} are events.
Hence, their intersection {X ≤ x, Y ≤ y} is also an event, of which we can
compute the probability. The joint cdf of X and Y is defined as:

FX,Y (x, y) ≜ P (X ≤ x, Y ≤ y) = P ((X, Y ) ∈ Ax,y) ∀(x, y) ∈ R²,    (A.11)

where Ax,y is the region in Fig. A.2. Given the joint cdf of X and Y , their
marginal cdfs can be obtained as follows:

FX (x) = lim_{y→+∞} FX,Y (x, y) ∀x ∈ R    (A.12a)

FY (y) = lim_{x→+∞} FX,Y (x, y) ∀y ∈ R.    (A.12b)

Two random variables X and Y are said to be independent if, for all
a, b, c, d ∈ R with a ≤ b and c ≤ d, it holds that:

P (a ≤ X ≤ b, c ≤ Y ≤ d) = P (a ≤ X ≤ b) P (c ≤ Y ≤ d). (A.13)

In other words, the events {a ≤ X ≤ b} and {c ≤ Y ≤ d} must be independent


for all possible choices of a ≤ b and c ≤ d. If X and Y are independent, then
it holds that:
FX,Y (x, y) = FX (x)FY (y) ∀(x, y) ∈ R².    (A.14)

Let X and Y be continuous random variables. A function fX,Y (x, y) is


said to be a joint pdf of X and Y if:

i) fX,Y (x, y) ≥ 0 ∀(x, y) ∈ R²

ii) ∫∫_{R²} fX,Y (t, s) dt ds = 1

iii) FX,Y (x, y) = ∫∫_{Ax,y} fX,Y (t, s) dt ds ∀(x, y) ∈ R².

Then, for each (sufficiently regular) set A ⊆ R², the probability of the event
{(X, Y ) ∈ A} can be computed as:

P ((X, Y ) ∈ A) = ∫∫_{A} fX,Y (t, s) dt ds.    (A.15)

The marginal pdfs of X and Y can be obtained from their joint pdf using the
relations:
fX (x) = ∫_{−∞}^{+∞} fX,Y (x, y) dy ∀x ∈ R    (A.16a)

fY (y) = ∫_{−∞}^{+∞} fX,Y (x, y) dx ∀y ∈ R.    (A.16b)

In general, it is not possible to recover the joint pdf of two random variables
X and Y , given their marginal pdfs. It is indeed possible if X and Y are
independent, because in that case it holds that:

fX,Y (x, y) = fX (x)fY (y) ∀(x, y) ∈ R2 (A.17)

(with the possible exception of a set of measure zero).
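Equations (A.15) and (A.17) can be checked numerically for a pair of independent uniform random variables; the sketch below approximates the double integral over the hypothetical rectangle A = [0, 0.5] × [0, 1] by a midpoint rule.

```python
# Independent X ~ U(0,1) and Y ~ U(0,2): the joint pdf factorizes (A.17),
# and P((X,Y) ∈ A) is the double integral (A.15). Over A = [0,0.5]x[0,1]
# the probability is 0.5 * 0.5 = 0.25.
fX = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
fY = lambda y: 0.5 if 0.0 <= y <= 2.0 else 0.0
fXY = lambda x, y: fX(x) * fY(y)        # independence, cf. (A.17)

N = 200
hx, hy = 0.5 / N, 1.0 / N               # midpoint rule over A
prob = sum(fXY((i + 0.5) * hx, (j + 0.5) * hy) * hx * hy
           for i in range(N) for j in range(N))
assert abs(prob - 0.25) < 1e-9
```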



A.2.4 Expected value and variance


The expected value (or expectation, mean) of a random variable X, typically
denoted by µX or E[X], is an average of its values, weighted by the corresponding pmf or pdf. We distinguish the cases of discrete and continuous random
variables.

• If X is a discrete random variable taking values {x(1), x(2), . . .}, its expected value is defined as:

  E[X] ≜ Σ_i x(i) pX (x(i)),    (A.18)

  provided that the sum is finite. If φ : R → R is a function, then:

  E[φ(X)] = Σ_i φ(x(i)) pX (x(i)).    (A.19)

• If X is a continuous random variable with pdf fX , its expected value is
  defined as:

  E[X] ≜ ∫_{−∞}^{+∞} t fX (t) dt,    (A.20)

  provided that the integral converges. If φ : R → R is a function, then:

  E[φ(X)] = ∫_{−∞}^{+∞} φ(t) fX (t) dt.    (A.21)

The expected value is a linear operator. If X and Y are random variables, and
a ∈ R, we have:

i) E[X + Y ] = E[X] + E[Y ]

ii) E[a X] = a E[X].


The variance of a random variable X, typically denoted by σX² or Var(X),
is defined as:

Var(X) ≜ E[(X − µX )²].    (A.22)

It measures the dispersion of the values taken by the random variable X around
its expectation µX (see Fig. A.3). If X is a discrete random variable, we have:
Var(X) = Σ_i (x(i) − µX )² pX (x(i)),    (A.23)

Figure A.3: Pdfs of two continuous random variables with the same mean, but
different variance: the variance of the random variable with red pdf is greater.

provided that the sum is finite, while for a continuous random variable with
pdf fX :

Var(X) = ∫_{−∞}^{+∞} (t − µX )² fX (t) dt,    (A.24)

provided that the integral converges. The variance of a random variable X


enjoys the following properties, where a ∈ R:

i) Var(aX) = a² Var(X)

ii) Var(a + X) = Var(X).
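Both the linearity of the expectation and the two variance rules can be checked numerically. The sketch below uses the sample mean and the (biased) sample variance of a fixed random sample, which obey the same identities exactly, up to rounding.

```python
import random

# Fixed random sample; the transformation rules hold for sample statistics
# exactly, so the checks below do not depend on the sample size.
random.seed(0)
xs = [random.random() for _ in range(1000)]
ys = [random.random() for _ in range(1000)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

a = 3.0
# linearity of expectation: E[X + Y] = E[X] + E[Y], E[aX] = a E[X]
assert abs(mean([x + y for x, y in zip(xs, ys)]) - (mean(xs) + mean(ys))) < 1e-9
assert abs(mean([a * x for x in xs]) - a * mean(xs)) < 1e-9
# variance rules: Var(aX) = a^2 Var(X), Var(a + X) = Var(X)
assert abs(var([a * x for x in xs]) - a ** 2 * var(xs)) < 1e-9
assert abs(var([a + x for x in xs]) - var(xs)) < 1e-9
```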

A.3 Notable probability distributions


In this section we recall some notable probability distributions, useful for the
theory of DES.

A.3.1 Discrete distributions


Uniform distribution

Let X be a discrete random variable, taking values in the finite set X =


{x(1), x(2), . . . , x(N)}. X is said to have a uniform distribution over X if

pX (x(i)) = 1/N ∀i = 1, . . . , N.    (A.25)

The expectation of X coincides with the arithmetic mean of its values:

E[X] = Σ_{i=1}^{N} x(i) pX (x(i)) = (1/N) Σ_{i=1}^{N} x(i).    (A.26)

Geometric distribution

The geometric distribution derives from a sequence of independent, identically


distributed Bernoulli trials, where p ∈ (0, 1) is the probability of failure in each
trial. It describes the number of Bernoulli trials needed to get the first success.
Let X be a discrete random variable, taking values in the set {1, 2, 3, . . .}.
X is said to have a geometric distribution of parameter p ∈ (0, 1) if

pX (i) = p^{i−1}(1 − p) ∀i = 1, 2, 3, . . . .    (A.27)
The expected value of X is obtained as follows:

E[X] = Σ_{i=1}^{∞} i pX (i) = (1 − p) Σ_{i=1}^{∞} i p^{i−1} = (1 − p) d/dp (Σ_{i=0}^{∞} p^i)
     = (1 − p) d/dp (1/(1 − p)) = (1 − p) · 1/(1 − p)² = 1/(1 − p),    (A.28)

where we used the formula of the geometric series:

Σ_{i=0}^{∞} p^i = 1/(1 − p) ∀p : |p| < 1.    (A.29)
The geometric distribution is the only discrete distribution to enjoy the
memoryless property. Let n, m > 0. Then:

P (X = n + m | X > n) = P ({X = n + m} ∩ {X > n}) / P (X > n)
                      = P (X = n + m) / P (X > n)
                      = p^{n+m−1}(1 − p) / p^n
                      = p^{m−1}(1 − p) = P (X = m),    (A.30)

where the left-hand side is a posterior probability and P (X = m) a prior probability.
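Both (A.28) and (A.30) can be checked numerically; a sketch with the hypothetical value p = 0.6 follows (the truncation at 2000 terms is more than enough, since the tail decays geometrically).

```python
# Geometric pmf with failure probability p: pX(i) = p**(i-1) * (1-p).
p = 0.6
pmf = lambda i: p ** (i - 1) * (1 - p)

# truncated series for (A.28): approximates E[X] = 1/(1-p) = 2.5
mean = sum(i * pmf(i) for i in range(1, 2000))
assert abs(mean - 1 / (1 - p)) < 1e-9

# memoryless property (A.30), using P(X > n) = p**n
n, m = 3, 5
assert abs(pmf(n + m) / p ** n - pmf(m)) < 1e-12
```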

A.3.2 Continuous distributions


Uniform distribution

A continuous random variable X is said to have a uniform distribution over


the interval [a, b] with a < b, if it has a pdf of the type:

fX (x) = 1/(b − a) if x ∈ [a, b], and fX (x) = 0 otherwise.    (A.31)

Figure A.4: (a) Cdf and (b) pdf of a random variable X ∼ U(a, b).

Equivalently, the cdf of X is of the type:

FX (x) = 0 if x < a,  (x − a)/(b − a) if a ≤ x ≤ b,  1 if x > b.    (A.32)

The expected value of X is obtained as follows:

E[X] = ∫_{−∞}^{+∞} t fX (t) dt = (1/(b − a)) ∫_{a}^{b} t dt = (1/(b − a)) [t²/2]_{a}^{b} = (a + b)/2.    (A.33)

If the random variable X is uniformly distributed over the interval [a, b], we
write in compact form X ∼ U(a, b). The cdf and pdf of X ∼ U(a, b) are shown
in Fig. A.4.
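The cdf (A.32) and the mean (A.33) can be checked numerically; a sketch with the hypothetical interval [2, 6] follows (the midpoint rule is exact here, up to rounding, because the integrand in (A.20) is linear on [a, b]).

```python
# X ~ U(a, b) on the hypothetical interval [2, 6]: check the cdf (A.32)
# and the mean (A.33) via a midpoint-rule approximation of (A.20).
a, b = 2.0, 6.0
F = lambda x: 0.0 if x < a else (x - a) / (b - a) if x <= b else 1.0
f = lambda x: 1.0 / (b - a) if a <= x <= b else 0.0

assert F(a) == 0.0 and F(b) == 1.0
assert abs(F(4.0) - 0.5) < 1e-12       # midpoint of [2, 6]

N = 10000
h = (b - a) / N
mean = sum((a + (k + 0.5) * h) * f(a + (k + 0.5) * h) * h for k in range(N))
assert abs(mean - (a + b) / 2) < 1e-9  # (a + b)/2 = 4.0
```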

Exponential distribution

A continuous random variable X is said to have an exponential distribution


with rate λ > 0 if it has a pdf of the type:

fX (x) = λe^{−λx} if x ≥ 0, and fX (x) = 0 otherwise.    (A.34)

Equivalently, the cdf of X is of the type:

FX (x) = 1 − e^{−λx} if x ≥ 0, and FX (x) = 0 if x < 0.    (A.35)
The expected value of X is obtained as follows:

E[X] = ∫_{−∞}^{+∞} t fX (t) dt = ∫_{0}^{+∞} t λe^{−λt} dt
     = [−t e^{−λt}]_{0}^{+∞} + ∫_{0}^{+∞} e^{−λt} dt = [−e^{−λt}/λ]_{0}^{+∞} = 1/λ,    (A.36)

where the bracketed term [−t e^{−λt}]_{0}^{+∞} equals 0.

Figure A.5: (a) Cdf and (b) pdf of a random variable X ∼ Exp(1/λ).

If the random variable X is exponentially distributed with rate λ, we write in
compact form X ∼ Exp(1/λ), where 1/λ is called the scale parameter (it equals
the mean of X). The cdf and pdf of X ∼ Exp(1/λ) are shown in Fig. A.5.
The exponential distribution is the only continuous distribution to enjoy
the memoryless property. Let t, s > 0. Then:
T
P ({X > t + s} {X > t})
P (X > t + s|X > t) =
| {z } P (X > t)
posterior probability
P (X > t + s) 1 − FX (t + s) e−λ(t+s)
= = =
P (X > t) 1 − FX (t) e−λt
= e−λs = P (X > s) . (A.37)
| {z }
prior probability
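The computation in (A.37) can be replayed numerically from the cdf (A.35); a sketch with the hypothetical values λ = 2, t = 0.7, s = 1.3 follows.

```python
import math

# Exponential with rate lam; cdf as in (A.35). Values are hypothetical.
lam = 2.0
F = lambda x: 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

t, s = 0.7, 1.3
lhs = (1.0 - F(t + s)) / (1.0 - F(t))   # P(X > t+s | X > t)
assert abs(lhs - (1.0 - F(s))) < 1e-12  # equals P(X > s) = e**(-lam*s)
```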
