Behavioural Models
From Modelling Finite Automata to Analysing Business Processes
Matthias Kunze • Mathias Weske
Matthias Kunze
Zalando SE
Berlin, Germany

Mathias Weske
Hasso Plattner Institute (HPI)
University of Potsdam
Potsdam, Germany
Foreword
Bridges and buildings are here to stay and thus mainly need static descriptions
for engineering them. Software is designed and built to perform actions in
various forms. Designing complex behaviour is a difficult and error-prone task
that demands solid intellectual capabilities to understand the temporal and
often very dynamic behaviour of a software system. Behaviour is generally
more difficult to design than the comparatively static data and architectural structures
of software systems.
In the age of digitalisation of virtually every domain, designing appropriate
software systems is one of the big challenges of our time. Their scope, their
complexity, and their interactions with users can only be understood with
a solid theory that is at the same time practically applicable. Designing
and understanding behaviour of systems becomes relevant not only for the
software itself, but for intelligent and smart technical solutions that usually
embody several interacting subsystems, often running in a cloud environment.
Intelligent products will allow, but also compel, us to interact with them in
complex behavioural patterns that need to be designed well and thoroughly
understood and analysed when composing even more complex systems, ser-
vices and business processes. Interactions between intelligent components that
autonomously control our traffic, electric power and water supply system also
need to be established very precisely in order to avoid errors that might have
severe consequences.
Whereas most engineering disciplines use mathematical calculus for de-
scribing continuous behaviour, computer science has developed a solid theory
for digital transitions between concretely definable sets of states. This theory
has been defined and explored in various variants of finite automata, state
machines, temporal logics, Petri nets, and business processes. They all have
a shared understanding of state and discrete state changes using events and
actions.
The book written by Matthias Kunze and Mathias Weske covers the core
of this theory and incrementally adds and discusses the various extensions that
have been created over the last few decades. This includes traditional sequential
Contents

Part I Foundations

1 Introduction
  1.1 Behavioural Models
  1.2 Motivating Example
  1.3 On Modelling
  1.4 Models in Computer Science
  1.5 Modelling in System Development

4 Concurrent Systems
  4.1 State Machines
  4.2 Interacting Systems
  4.3 Petri Nets
  Bibliographical Notes
8 Verification
  8.1 Overview of Verification
  8.2 Temporal Logic
  8.3 Model Checking
  8.4 Behavioural Properties
  8.5 Business Process Compliance
  Bibliographical Notes

References
Index
Part I
Foundations
1
Introduction
This first chapter introduces the topic and the scope of the book, and it
describes the motivation for using behavioural models to capture the dynamic
aspects of a system. This chapter is intended to elucidate the underlying
principles of the modelling of behaviour, elaborating on what behaviour is and
what models are.
stay in touch with our family, friends, and colleagues. Our economy depends
on information technology, from the design of products and services to their
development and deployment and to marketing and after-sales services.
Given the continuous growth in the scope and functionality of information
technology, not only are the information technology systems involved becoming
more and more complex, but also the interactions between them. Every software
system of significant complexity consists of several subsystems that interact
with each other.
Some typical subsystems are databases, client software, and application
servers. People working in the field of software architectures have developed
methods and techniques to describe how complex software systems are designed
and built. While software architectures are important for representing the
structure of software systems, the power and complexity of software systems
result to a large extent from their behaviour.
An example of an online shop can be used to illustrate system behaviour.
When a customer points a web browser at an online shop, the shop shows
products for sale. The customer selects some of these products by putting them
in a shopping cart. Next, the customer completes the purchase by providing
payment and shipping information. Finally, the products ordered are shipped
to the customer by a logistics service provider.
These actions are events generated by the customer’s web browser, which
belongs to the environment of the online shop. The online shop reacts to
these events by internal actions, for instance by updating the internal data for
payment and shipment information and by showing the customer a confirmation
page.
Today’s online shops are far more complex than the one discussed here,
but this simplified scenario suffices for the illustration of system behaviour.
When the behaviour of an online shop is investigated, the events that drive
its interaction with its environment have to be addressed first. In the above
scenario, we can identify the events and their reactions as shown in Table 1.1.
Table 1.1: Events and reactions of the online shop

  Event           Reaction
  Checkout        Record payment and shipment information
  Pay order       Initiate shipment
  Ship products   Archive order
Looking at these events, we observe that the online shop cannot accept
every event at every possible point in time. In fact, these events depend on
each other; they are causally related. For instance, the order can only be paid
for by the customer when the purchase has been completed, otherwise there is
nothing to make a payment for.
To capture these dependencies, we refer to the concept of states, which
represent the knowledge of the system about previous interactions with its
environment. Every system, once switched on, starts in an initial state, in
which it accepts certain events to which it reacts and other events that it will
ignore. We can represent the relation between events and states for the online
shop in graphical form as a first, fairly simple behavioural model, which is
shown in Fig. 1.2.
When a customer has entered the online shop, the system is in a state
Shop entered. When it is in this state, the customer can select products and
add them to the shopping cart. When the customer proceeds to the checkout,
triggered by the event Checkout, this state is left and a subsequent state
Shopping completed is entered, in which payment and shipping information are
recorded. This is realised by a directed arc, called a state transition, labelled
with the name of the event that causes the system to transition to the next
state.
The Order paid state comes next. This state can only be entered after the
payment has been settled, indicated by the event Pay order, received from the
payment system of the online seller in the state Shopping completed. When the
products have been delivered to the customer’s shipment address, indicated
by the event Ship products, the purchase ends and the state Products shipped
is entered.
This diagram is a description of the behaviour of the online shop, a behav-
ioural model. On the one hand, it is a simplification that shows only its major
states and the transitions between them that are caused by events accepted
from the shop’s environment. On the other hand, it constrains the way the
shop performs interactions with its environment. This is a general pattern of
behavioural models: they restrict behaviour and only allow systems to behave
in a defined manner.
For instance, after the Shopping completed state has been entered, the
customer can no longer put products in the shopping cart. This is the case
because the behavioural specification of the shop does not accept an event
add products to cart in this state. In Fig. 1.2, this becomes apparent, since the
diagram has no such state transition from the Shopping completed state. The
diagram, as simple as it is, provides constraints on how the system should be
constructed.
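To make these constraints tangible, the state diagram of Fig. 1.2 can be sketched in a few lines of code. The following Python fragment is purely illustrative (the book itself introduces no code, and the names are ours): each state maps the events it accepts to the resulting successor state, and all other events are simply ignored.

```python
# Illustrative sketch of the state diagram in Fig. 1.2: each state maps the
# events it accepts to the resulting state; all other events are ignored.
TRANSITIONS = {
    "Shop entered": {"Checkout": "Shopping completed"},
    "Shopping completed": {"Pay order": "Order paid"},
    "Order paid": {"Ship products": "Products shipped"},
    "Products shipped": {},
}

def react(state, event):
    """Return the successor state, or the unchanged state if the event is
    not accepted in the current state."""
    return TRANSITIONS[state].get(event, state)

state = "Shop entered"
state = react(state, "Pay order")  # ignored: the purchase is not complete yet
state = react(state, "Checkout")   # accepted: proceed to checkout
print(state)                       # prints "Shopping completed"
```

Note how the ignored Pay order event leaves the state untouched, exactly as described above: the order can only be paid for once shopping has been completed.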
1.3 On Modelling
After sketching some of the basic aspects of models as simplified representations
of originals, this section investigates models and their properties in more detail.
This discussion also shows that models come in two flavours. A model that
describes an existing original is a descriptive model. A model is prescriptive
if it prescribes how an original should be constructed.
A subway map is a descriptive model because it describes the existing
original, i.e., a subway system. The same holds true for the profile of a person
on a social network. The profile describes an existing original, which is the
person to whom the profile belongs. In the case of the engineering plan, the
original did not exist when the plan was developed. It therefore serves as a
prescriptive model, since it prescribes how the kitchen should be constructed.
During engineering projects, prescriptive models turn into descriptive mod-
els. This is the case for most engineering plans, such as the one used to construct
the kitchen. The engineering plan of the kitchen serves as a prescriptive model,
a blueprint for how to build the kitchen. When the construction of the kitchen
is completed, the engineering plan can be considered a descriptive model, since
it now describes the kitchen. At that point in time, there exists an original
that the model describes.
Modelling Languages
We express ourselves in languages. This holds true for verbal and written
communication in natural language, but it is also true for communication via
models. Models are expressed in languages. The examples of models discussed
so far have used different modelling languages.
A graph language was used to represent the states and state transitions
of an online shop. That language consists of labelled nodes, which can be
connected by arcs. Graphs are used to describe various aspects of computer
systems.
The language of engineering plans was used to express the kitchen plan. It
consists of model elements that represent, for instance, walls, windows, and
doors. Kitchen appliances are important elements of the original, so they are
represented by corresponding language elements. Finally, measurements are
provided to allow the construction of the kitchen as planned.
Each of these diagrams uses a different modelling language. When designing
a modelling language, the key concepts of the language need first to be defined.
In the case of behavioural models, the key concepts to be expressed (and
therefore the central concepts to be represented in a modelling language) are
events and states. These concepts are associated with each other. Each state
transition is bound to an event and relates two states in an order, namely a
source state and a target state.
Once the concepts and their relationships are understood, a notation for
expressing models in that language is required. We could have used squares
and dotted arcs to express state transition diagrams, but we have opted for
ellipses and solid directed arcs instead. Ellipses represent states, while arcs
represent state transitions. States and state transitions are labelled to give
these elements meaning. It is important that the arcs are directed, because –
for each state transition – the source and target states need to be expressed.
These steps represent, basically, how modelling languages can be defined.
Later in this book, several modelling languages to express behavioural
models will be introduced. The conceptual models of these languages are
typically defined by mathematical formalisms. We can already define a simple
conceptual model for the above state diagrams. Each state diagram consists of
a set of states S and a relation R between states, such that R ⊆ S × Σ × S
is a set of triples, each a pair of states bound to an event. Each triple
(s, δ, t) ∈ R represents the fact that there is a transition from a source state
s to a target state t, which is triggered by the event δ. This conceptual
model, represented mathematically, is the basis of the language. We have
introduced a notation consisting of ellipses (for states) and directed arcs (for
state transitions) in Fig. 1.2.
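This conceptual model can be written down directly in code. The sketch below is illustrative (not from the book): it represents the relation R ⊆ S × Σ × S for the diagram of Fig. 1.2 as a Python set of triples.

```python
# The state diagram of Fig. 1.2 as a set of (source, event, target) triples.
S = {"Shop entered", "Shopping completed", "Order paid", "Products shipped"}
Sigma = {"Checkout", "Pay order", "Ship products"}  # the events
R = {
    ("Shop entered", "Checkout", "Shopping completed"),
    ("Shopping completed", "Pay order", "Order paid"),
    ("Order paid", "Ship products", "Products shipped"),
}

# Well-formedness: every triple relates two known states via a known event.
assert all(s in S and e in Sigma and t in S for (s, e, t) in R)
```

Representing the model as a plain relation, rather than as executable control flow, mirrors the mathematical formalism: the notation (ellipses and arcs) and the conceptual model (sets and triples) stay cleanly separated.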
Typically there are several different aspects to be investigated for one original,
often related to different groups of persons. Considering the kitchen example,
an engineering plan serves as a model for constructors of the kitchen. It focuses
on the relevant aspects for this target group and disregards, for instance, the
visual appearance of the kitchen, which is definitely relevant for the owner. To
serve this new modelling goal, another model of the kitchen has to be designed,
one that shows the visual appearance of the kitchen. This model is shown in
Fig. 1.5.
Goals of Models
Before turning to models in computer science, we shall take a broader view of
modelling by discussing why we model. We model to represent the aspects of
an original which are relevant for a particular modelling goal. With a model of
the core aspects at hand, people can comprehend the relevant aspects of the
original. So, comprehension is one main reason why we model. This applies
to all models, from the subway map to the personal profile and the kitchen
model, both the visual and the engineering model. Models are important for
the comprehension of complex systems.
1.4 Models in Computer Science
Not only structural models in computer science, such as data models, but also
behavioural models have model properties. To illustrate this, we return to the
behavioural aspects of the online shop example and show a variant of a state
transition diagram in Fig. 1.7.
In addition to the diagram in Fig. 1.2, the current version identifies one
state as the initial state (Shop entered) and one state as the final state (Products
shipped), represented by the incoming arrow in the initial state and the double
border of the final state.
Fig. 1.7: States of an online shop, with initial and final states
This model describes the behaviour of the system from start to finish.
States are traversed from the point in time at which the customer enters the
online shop until the completion of the ordering process. This is a model that
describes the behaviour of an original.
The original might not be as tangible as it is in the case of the data
model or the kitchen engineering plan; the original is the software system
that implements the online shop. The original may be very complex indeed: it
might include a database system to store the information required to run the
online store, and further software components such as a web interface and an
application server.
The state diagram shown in Fig. 1.7 is a behavioural model of the online
shop, since it describes its states and the events that trigger transitions between
them. Before discussing the model properties of mapping, abstraction, and
pragmatics, the concept of a state has to be elaborated on.
In programming, a state of a system – the state of a program in execution –
can be represented by an assignment of program variables to concrete values
that those variables have at a given point in time. This is a very detailed
characterisation of the state of a dynamic system.
In the online shop discussed so far, the state is more abstract and far less
detailed. Considering the shopping state, the value of the (variable) shopping
cart can change, depending on which articles have been selected by the customer.
The state transition to the paying state depends on a single variable. This
is the variable that records whether the customer has decided to complete
shopping. Therefore, the state of the online shop is an abstraction of the
programmatic state of the system.
Behavioural models can represent both detailed states and high-level states.
The use of the model depends on the modelling goal. If the behaviour of
programs is the centre of attention then detailed representation of states has
to be considered, which might use the current values of program variables.
To reduce complexity and to allow one to cope with the complexity of
software systems, abstract characterisations of states may sometimes be more
useful than detailed characterisations. It should be stressed that even when
detailed states are considered in a state diagram, the diagram provides an
abstraction. For instance, if the state of an integer variable is represented by a
state diagram, the states abstract away the physical aspects of the variable,
such as its internal representation or the memory locations of the values stored.
In this sense, even a program expressed in a programming language is a model
that abstracts away the details of the program’s representation in the computer
system.
Owing to the abstraction of states in the online shop example, not all
actions that are performed by the system actually trigger a transition in the
state diagram. For instance, when the system is in the shopping state, the
customer may put additional products in the shopping cart. This results in a
series of actions by the system, such as updating the data for the shopping
cart storage. However, only certain actions actually cause a state transition,
for instance choosing to proceed to checkout, which triggers a state transition
from the shopping state to the paying state.
Which actions cause a state transition and which actions do not change
the state of the system is a modelling decision. The modelling goal of the
state diagram shown in Fig. 1.7 is to represent the main states of the online
shop. Therefore, the particular contents of the shopping cart and the actions
taken to change those contents are not relevant. Consequently, these actions
do not cause a change in the behavioural model. Proceeding to checkout,
however, results in a state change to Shopping completed that is relevant to
the modelling goal. This is why a corresponding state change is represented in
the model.
This discussion suggests that the model shown in Fig. 1.7 satisfies the
abstraction property of models: many actions have been disregarded, and
detailed system states have been abstracted away. By using the concept of
abstraction, we can reduce the complexity of the online shop to a few states
that are considered relevant to achieving the modelling goal.
The mapping property of models is satisfied as well, because we can map
elements of the model to the original. For instance, from the incoming arc
in the Shop entered state we can conclude that the system always starts in
that state. It can be mapped to a situation in which the system allows the
customer to browse the online shop and to add articles to the shopping cart.
The behavioural model provides pragmatics, because it can serve as a
replacement for the original during the design phase. For instance, engineers
can discuss with business experts whether to allow resuming shopping after
shopping has been completed and payment information has been entered.
1.5 Modelling in System Development
Notice that this is disallowed by the original state diagram shown in Fig. 1.2,
but it is allowed by the diagram shown in Fig. 1.7, because of the edge from
the Shopping completed state to the Shop entered state.
This is one of the main purposes of engineering models: to discuss alterna-
tives in the design of a system. Models are a concise way of specifying important
aspects of the system to be developed, without people being distracted by
technical details of the software code. A further aspect of system design emerges
here. Systems should always satisfy the constraints which are represented in
the models that describe them.
[Figure: system development phases: Design, Implementation, Enactment]
When the implementation has been completed and tests do not show any
errors, i.e., the system behaves as specified in the models, the software for the
online shop can be deployed and made available on the internet. At this point,
the enactment phase starts.
During enactment, new requirements might emerge. For instance, a product
reviewing system might be needed, which allows customers to write reviews of
the articles they have bought. In this case, the requirements for the extension
of the online store’s functionality have to be elicited and the system design
has to be extended accordingly, before re-entering the implementation phase
and finally the enactment phase.
[Figure: development cycle: Requirements elicitation, Modelling, Analysis (Validation and Verification), Implementation, Enactment]
Once an initial model has been captured in the modelling phase, the model
is analysed. There are two kinds of analysis, informal and formal analysis. In
the informal analysis, the stakeholders discuss the model and evaluate whether
the model meets the modelling goals.
Typically, the first version of the model does not fully meet the modelling
goal, so that the model needs to be modified. This domain-specific, informal
analysis is called validation. Validation is a human activity. Stakeholders
communicate and answer the question of whether the model represents the
original in a sufficiently detailed manner, i.e., whether it satisfies the modelling
goal.
The formal analysis is called verification and uses mathematical precision to
answer questions in an algorithmic fashion. Formal analysis can be conducted
if the model is expressed in a mathematically precise manner, i.e., if it is a
formal model. This book will introduce a series of formal modelling languages
and ways to verify properties of models expressed in those languages. With
today’s modelling and verification techniques, generic properties of behavioural
models such as absence of deadlocks can be proven. In addition, domain-specific
properties such as compliance properties can also be investigated in the context
of system verification.
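As a tiny illustration of such a generic property, absence of deadlocks can be checked on a finite transition system by inspecting every state. The sketch below uses illustrative names of our own; designated final states are treated as legitimate endpoints rather than deadlocks.

```python
# Deadlock check on a finite transition system, given as a successor map.
# A deadlock is a non-final state with no outgoing transitions.
def deadlock_states(successors, final):
    return {s for s, succ in successors.items() if not succ and s not in final}

# The online shop of Fig. 1.7: its only terminal state is the final state.
shop = {"se": {"sc"}, "sc": {"op"}, "op": {"ps"}, "ps": set()}
assert deadlock_states(shop, final={"ps"}) == set()

# A faulty variant in which "op" can make no progress and is not final:
faulty = {"se": {"sc"}, "sc": {"op"}, "op": set()}
assert deadlock_states(faulty, final=set()) == {"op"}
```

Real model checkers explore far larger state spaces and richer temporal-logic properties, but the principle is the same: an algorithm, not a human discussion, decides whether the formal model satisfies the property.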
2
Discrete Dynamic Systems
Software systems are complex entities that can only be described by looking
at different aspects in turn, using different models. We choose the type of
model depending on the particular modelling goal. If we are interested in the
structural aspects of a system, we use different modelling languages from what
we use for modelling dynamic aspects. Rather than talking about dynamic
and static systems, we discuss dynamic and static aspects of a system, using
the appropriate modelling languages.
In the previous chapter we discussed two modelling languages that can
be used to represent the dynamic and static properties of a system, i.e., data
models and state diagrams. With state transitions, we can model dynamic
aspects and with data models static ones.
Data models describe a static aspect of a system, because they provide a
blueprint of the data stored in the system. In the example shown in Fig. 1.6,
each customer has a customer identification, a name, and an address, while
each order has an order identification, a date, and an amount. Data models
specify the structure of the data processed by systems; they do not make any
stipulations about the behaviour of the system, i.e., about its dynamic aspects.
For instance, data models cannot be used to specify that a customer can put
articles in a shopping basket only after the customer has been authenticated.
Causal dependencies cannot be expressed in data models, nor in other types of
static models, such as software architectures. Therefore, data models and also
software architectures provide a means to express static aspects of systems.
In contrast, state diagrams are dynamic models, since they explicitly
consider states and the state transitions that a system can perform. In the
case of the online shop, the state diagram introduced in Fig. 1.7
represents the states of the shop and the state transitions that are possible.
We can use state diagrams to impose constraints on the behaviour of systems,
for instance by allowing an order to be paid for only after shopping has been
completed. Since state diagrams allow us to express constraints on system
behaviour, they describe dynamic aspects of systems.
[Figure 2.1: (a) an LC circuit; (b) variation of voltage and electric current with time]
The LC circuit is a continuous system. The state of the system, i.e., its
voltage and current, consists of continuous dimensions. The state transitions are
continuous functions that change the state over time, as depicted in Fig. 2.1b.
As the capacitor is charged with the opposite polarity by the induced magnetic
field in the inductor, the voltage and current alternate between positive and
negative values.
This diagram shows that the voltage is highest when the capacitor is
maximally charged and no electric current is flowing through the circuit,
whereas the current is maximum when the capacitor is discharged and the
voltage is zero. Owing to electrical resistance in the circuit, the maximum
values of the current and voltage slowly decrease.
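For an ideal LC circuit (no resistance; the symbols L, C, q_0, and ω below are the conventional ones and are not taken from the figure), the oscillation sketched in Fig. 2.1b follows from Kirchhoff's voltage law:

```latex
L\frac{\mathrm{d}^2 q}{\mathrm{d}t^2} + \frac{q}{C} = 0
\quad\Longrightarrow\quad
q(t) = q_0 \cos(\omega t), \qquad \omega = \frac{1}{\sqrt{LC}},
```

so that \(V(t) = (q_0/C)\cos(\omega t)\) and \(I(t) = -q_0\,\omega\sin(\omega t)\): the voltage is extremal exactly when the current is zero, and vice versa, matching the curves in Fig. 2.1b. A small resistance adds an exponentially decaying factor, which accounts for the slowly decreasing maxima mentioned above.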
The majority of models in computer science, however, are not concerned
with continuous behaviour, as one is interested in the operations that a system
performs to achieve its goal. Examples are the operations executed by an
algorithm and the activities carried out to decide whether a credit application
should be granted or not.
For this purpose, the continuous time dimension is discretised. That is, it
is split into intervals and the system is described only at the expiration of
each interval. As a consequence, the states of a system can be represented by
a set instead of by a continuous dimension. Depending on the modelling goal,
a state may include an identity, a textual description, or a composite model
itself, for instance a data model. Then, the state transitions form a temporal
relation between states.
This is in line with the physical construction of a computer. In the electrical
circuits of a CPU, low and high voltages denote values of 0 and 1, respectively.
As the voltage is a continuous function of time, a clock is introduced and
voltages are interpreted only at clock ticks. Discrete dynamic systems adopt
this concept by imposing a discrete time model. At any point in time, the
system is in one particular state, whereas the state of the system may change
when a clock tick occurs.
[Figure 2.3: state diagram of a manufacturing machine, with states Manufacturing start, Machine ready, Product manufactured, Product ready, and Product faulty, and transitions labelled Obtain raw materials from warehouse, Assemble product, and Recycle product materials]
The systems discussed so far are of a sequential nature. In the online shop, a
customer first enters the shop and selects products, before submitting payment
information. Finally, the products are shipped and the process completes.
Sequential behaviour can be characterised by a sequence of state transitions.
The sequence of states of the online shop can be seen in the state diagram
shown in Fig. 1.2.
Sequential behaviour can also include choices. In a state diagram, a choice
is represented by a state with multiple transitions. We have seen this type
of state diagram in the manufacturing machine example. Figure 2.3 shows
this situation because, after the product has been manufactured, either the
state Product ready or the state Product faulty is reached. In either case, the
behaviour of the system is sequential. Models to capture the behaviour of
sequential systems will be covered in Chapter 3.
[Figure: classification of system models: a system model is static or dynamic; dynamic models are continuous or discrete; discrete models are sequential or concurrent]
The set of states and the state transition relation can be visualised by a
graph representation, which consists of nodes and edges. Each state s ∈ S is
represented by a node, and each state transition (s, s′) ∈ δ by an edge between
states s and s′. Since (s, s′) is an ordered pair, the edge is directed from s to
s′.
From Definition 2.1, it follows that two nodes can be connected by at most
one edge in the same direction.
To illustrate the concept of a state transition system, we revisit the be-
havioural model of the online shop shown in Fig. 1.2. For convenience, we
abbreviate the states as follows: Shop entered (se), Shopping completed (sc),
Order paid (op), and Products shipped (ps). Formally, this state transition
system can be described by (S, δ), such that

    S = {se, sc, op, ps} and δ = {(se, sc), (sc, op), (op, ps)}.
The three arcs shown in the graphical representation of the state transition
system in Fig. 1.2 are reflected by the three tuples in the transition relation.
[Figure: state transition system of the manufacturing machine, with abbreviated states Manufacturing start (ms), Machine ready (mr), Product manufactured (pm), Product ready (pr), and Product faulty (pf)]
At each point in time, the state transition system is in exactly one state s,
from which it may transition to another state s′ if there exists a state transition
(s, s′) ∈ δ.
If more than one state transition is possible for a given state s, for instance,
if there are two state transitions (s, s′), (s, s′′) ∈ δ such that s′ ≠ s′′, then s′
and s′′ are exclusive, i.e., exactly one state transition is chosen.
In the example, there are two transitions possible in the state Product
manufactured. Each time this state is reached, one of the transitions is used
and the other is discarded.
For instance, if in the first iteration the product is faulty, then the lower
transition to the state Product faulty is chosen and manufacturing is started
again. If the second iteration results in a product that meets its specification,
the upper transition to the state Product ready is chosen.
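This exclusive choice can be sketched as follows. The Python fragment is illustrative only and uses the abbreviated state names from the text; choosing a successor at random merely stands in for the machine's actual quality outcome.

```python
import random

# Transition relation of the manufacturing machine, as (source, target) pairs.
delta = {
    ("ms", "mr"),  # obtain raw materials from the warehouse
    ("mr", "pm"),  # assemble the product
    ("pm", "pr"),  # product meets its specification
    ("pm", "pf"),  # product is faulty
    ("pf", "ms"),  # recycle product materials and start again
}

def successors(s):
    """All states reachable from s by a single transition."""
    return {t for (u, t) in delta if u == s}

def step(s):
    """Perform one transition; if several are possible, exactly one is chosen."""
    succ = successors(s)
    return random.choice(sorted(succ)) if succ else s

assert successors("pm") == {"pr", "pf"}  # the exclusive choice after manufacturing
assert step("pm") in {"pr", "pf"}
```

The state pm is the only state with two successors; every run of step("pm") picks exactly one of them and discards the other, as described above.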
Fig. 2.6: States and labelled state transitions for an online shop
Figure 2.6 shows a variant of the state transition diagram for the online
shop that captures these actions. This version of the model is much richer,
since it associates actions that the system takes with state transitions. Given
this model, system designers have to make sure that the shop transits from the
state Shop entered to the state Shopping completed only by an action called
Checkout.
The term “action” refers to any function or procedure that the system
executes to perform a state change. The abstraction property of models also
applies to state transitions. It is a modelling decision which functionality is
associated with a particular action. Therefore, the term “action” subsumes
function calls, procedure calls, and other types of events that may happen.
The model does not define any further constraints on how the checkout
action should be implemented. In the example of the online shop, this action
is typically implemented by a button in the user interface of the online shop.
When that button is clicked on, a procedure is called on the server side, which
sends a web page to the browser with the shopping cart and further information,
for example, the overall amount payable.
Models like the one shown in Fig. 2.6 are called labelled state transition
systems. They are very useful during system design, because they constrain
the execution of procedures to certain states of the system. For instance,
invoking the checkout action in the state Order paid would not make sense,
and therefore this action is disallowed in the labelled state transition diagram
shown in Fig. 2.6.
On the other hand, certain sequences of actions by the system are allowed
in the labelled state transition diagram. In the case of the online shop, the
only sequence that is allowed is Checkout, Pay order, Ship products. When we
discuss the behaviour of systems using labelled state transition diagrams, such
actions are typically referred to as “input”. So we can say that the online shop
can reach the state Shopping completed from the state Shop entered with the
input Checkout.
To generalise, each state transition is associated with some input, and
different inputs are represented by different symbols. The set of all input
symbols of a transition system is called its alphabet, denoted by Σ. Generally
speaking, the alphabet consists of all labels in the state transition system.
Each of these labels refers to exactly one action that can be performed by the
system.
Owing to the abstraction that models provide, the software system can
perform many more actions. However, the alphabet of a labelled state transition
system represents all of the actions of a software system that are relevant
for the modelling purpose. In this book, we consider finite alphabets only, as
we are concerned with the modelling of systems for which the set of possible
actions that the system can perform is known a priori.
Definition 2.3 A labelled state transition system is a tuple (S, Σ, δ) such that S is a set of states, Σ is a finite alphabet, and δ ⊆ S × Σ × S is a state transition relation.
The elements of the state transition relation are no longer pairs of states
but triples, consisting of the source state, transition label, and target state.
The introduction of labels also allows us to define multiple transitions
between a given pair of states. As a consequence, there may be several edges
leading from one state to another in the graphical representation of the system
model, each of which has a different label.
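As a sketch, such a labelled transition system can be encoded as a set of triples; the quote example then yields three differently labelled transitions between one pair of states (an illustrative encoding, not one from the book):

```python
# A labelled state transition system (S, Sigma, delta) with several
# transitions between the same pair of states, as in Fig. 2.7.
S = {"Request received", "Quote sent"}
Sigma = {"Send quote by email", "Send quote by fax", "Send quote by letter"}
delta = {("Request received", label, "Quote sent") for label in Sigma}

# The alphabet consists of all labels occurring in the transitions.
labels = {l for (_, l, _) in delta}
```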
The semantics of such a situation is that the system model allows different
state transitions from a state s to a state s′. Since the labels of the state
transitions are different, the actions performed by the system are different, too.
However, their effects on the modelled state change of the system are identical.
This is the result of abstraction during the modelling process. In the real
system a different action is executed, which might also have slightly different
effects on the state of the system, but this difference is not relevant to the
model and therefore is abstracted away.
An example of such a situation is given in Fig. 2.7a, where there are different
ways of sending a quote. The model expresses these different ways (sending
the quote by either email, fax, or letter), showing the system engineers that
several alternative ways of sending the quote need to be developed. However,
after sending the quote using any of these ways, the system is in the state
Quote sent.
[Fig. 2.7(a) shows three transitions labelled Send quote by email, Send quote by fax, and Send quote by letter from the state Request received to the state Quote sent; Fig. 2.7(b) shows the same pair of states with a single transition labelled by the set {Send quote by email, Send quote by fax, Send quote by letter}.]
Fig. 2.7: Two labelled transition systems for a quote management system that
have several state transitions between a given pair of states
The set of all sequences that a labelled transition system can generate
characterises the set of all possible behaviours of that system. We formalise
these sequences based on the alphabet of labelled transition systems.
[Diagram: the online shop of Fig. 2.6 extended with a transition Resume shopping (R) from the state Shopping completed back to the state Shop entered.]
Fig. 2.8: Online shop labelled state transition system with loop
The labelled state transition diagram shown in this figure defines several
sequences. In fact, owing to the cycle in the diagram, an infinite number of
different behaviours are specified by the diagram. Consider a simple case, in
which a user enters the shop, proceeds to checkout, and pays for the order,
before the products are shipped. This behaviour is represented by the sequence
⟨C, P, S⟩.
If the customer chooses instead to resume shopping after the first checkout,
the following sequence occurs:
⟨C, R, C, P, S⟩.
Intuitively, these sequences start at the beginning of the process and end when
the shopping process is completed. However, sequences are not limited to
complete shopping processes. Examples of sequences that start or end in intermediate states are ⟨P, S⟩, ⟨R, C, P, S⟩, and ⟨R, C, R, C, R, C, R, C, P⟩.
Definition 2.5 Let (S, Σ, δ) be a labelled state transition system. The state
transition relation for sequences, δ∗ ⊆ S × Σ∗ × S, is defined as follows.

(s, ε, s′) ∈ δ∗ ⟺ s = s′
(s, σl, s′′) ∈ δ∗ ⟺ there is a state s′ such that (s, σ, s′) ∈ δ∗ and (s′, l, s′′) ∈ δ
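The definition of δ∗ translates directly into a recursive membership check. The sketch below consumes the sequence from the front, which is equivalent; it uses the abbreviated online-shop states (se, sc, op, ps) and labels of Fig. 2.8:

```python
def in_delta_star(delta, s, sigma, t):
    """(s, sigma, t) is in delta* iff sigma is empty and s = t, or the
    first label can be taken from s and the rest leads on to t."""
    if not sigma:
        return s == t
    head, rest = sigma[0], sigma[1:]
    return any(in_delta_star(delta, s2, rest, t)
               for (s1, l, s2) in delta if s1 == s and l == head)

# Online shop of Fig. 2.8: Checkout, Resume shopping, Pay order, Ship.
delta = {("se", "C", "sc"), ("sc", "R", "se"),
         ("sc", "P", "op"), ("op", "S", "ps")}
```

For instance, `in_delta_star(delta, "sc", ("R", "C", "R", "C", "R", "C", "P"), "op")` holds, matching the sequence discussed in the text.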
(sc, ⟨R, C, R, C, R, C, P⟩, op) ∈ δ∗
because
This shows that for each label in the sequence, there is a corresponding state
transition in the state transition system that originates from the current state.
Owing to the loop in the labelled state transition system, it would of course
suffice to show that
⟨C, R, C, P, S⟩.
The basis of this summarising example is the online shop shown in Fig. 2.6.
That diagram focused on the main phases of the online shop, disregarding
essential functionality such as authentication of users.
Fig. 2.9: Online shop labelled state transition system with authentication
web page that is sent to the customer’s web browser. This page shows the
contents of the shopping cart and associated information, for example, the
amount payable. The state transition in the model can be mapped to a series
of steps that are taken by the system to complete shopping.
The model also satisfies the pragmatics property, because it can serve as
a replacement for the software system. We can discuss the behaviour of the
system and the states that are reachable from a given state without being
distracted by details of the technical implementation. The pragmatics property
is important when it comes to new requirements that have to be incorporated
into a software system.
Based on the version of the online shop specified in Fig. 2.9, we now assume
that new requirements emerge.
Fig. 2.10: Labelled state transition system for online shop after new require-
ments have been incorporated
By analysing web logs, the management team finds out that in many cases
customers have left the shop without authenticating. The inference is that
customers do not want to authenticate before they browse the shop and find
products that they like. Clearly, before paying for their order, customers need
to have authenticated, but they do not need to do so before then. This leads to
the new requirement to delay authentication until the latest possible point
in time, i.e., immediately before payment.
The labelled state transition system proves useful in redesigning the online
shop accordingly. The management team decides to shift the state Customer
authenticated to after the state Shopping completed but before the state Order
paid. This allows customers to enter the shop and immediately start shopping,
before being asked for authentication. The resulting extended version of the
online shop is shown in Fig. 2.10.
Bibliographical Notes
The fundamentals of modelling, including the model characteristics discussed
in this chapter, were introduced in a seminal book by Stachowiak (1973). There
are several books on modelling in computer science. Henderson-Sellers (2012)
looks at modelling from a mathematical and ontological perspective. Different
aspects related to the syntax and the semantics of models in computer science
are discussed by Harel and Rumpe (2004).
In a book edited by Embley and Thalheim (2011), different types of conceptual models are discussed, ranging from data models to interaction modelling
and modelling of systems requirements. The representation and analysis of
business processes are the central features of a textbook by Weske (2012),
which introduces the concepts of process modelling and a set of languages to
represent business processes.
Part II
Models of Behaviour
3
Sequential Systems
Definition 3.1 A system shows sequential behaviour if all events of the system
are totally ordered.
Definition 3.2 A finite automaton is a tuple (S, Σ, δ, s0, F) such that
• S is a finite set of states,
• Σ is a finite alphabet,
• δ ⊆ S × Σ × S is a state transition relation,
• s0 ∈ S is the initial state, and
• F ⊆ S is a set of final states.
With this definition, we can specify an automaton that captures the be-
haviour of the ticket vending machine. This automaton is depicted in Fig. 3.2.
[Fig. 3.2: Automaton of the ticket vending machine. From the state Ticket selection, select ticket leads to the state 0 € paid; inserting 50 ct and 1 € coins leads through the states 0.5 € paid, 1 € paid, and 1.5 € paid; from 1.5 € paid, confirm leads to the final state Ticket supplied. From every payment state, cancel leads to the final state Ticket cancelled.]
The initial state of the automaton is denoted by an incoming arc that has
no source state. In the diagram, this state is called Ticket selection. The system
has two outcomes, representing the selling of the ticket and the cancellation
of the vending process. These outcomes are represented by two final states,
called Ticket supplied and Ticket cancelled. In the graphical representation of
automata, final states are denoted by a double border.
Our ticket vending machine comes with several simplifications. We assume
that all tickets cost 1.50 € and the machine accepts only precise amounts of
50 ct and 1 € coins, i.e., it does not need to give any change. These assumptions
lead to the following alphabet that covers the possible inputs to the ticket
vending machine:

Σ = {select ticket, 50 ct, 1 €, confirm, cancel}.

The payment states form the subset

{0 € paid, 0.5 € paid, 1 € paid, 1.5 € paid} ⊂ S.
In the state 0 € paid, a customer may insert either a 50 ct coin or a 1 € coin
and the value of the coin inserted is stored in different subsequent states. These
alternatives are represented by the following state transitions, which share the
same source state:
(0 € paid, 50 ct, 0.5 € paid) ∈ δ,
(0 € paid, 1 €, 1 € paid) ∈ δ.
If the customer inserts a 50 ct coin first, then they might insert another 50 ct
coin or a 1 € coin. However, if 1 € was inserted first, adding another 1 € would
exceed the ticket price. Hence, in the state 1 € paid, the automaton accepts
only a 50 ct coin.
The complete set of tuples in δ is shown in Fig. 3.2. In particular, each
arc from state s to state s′ labelled l in the automaton is represented by a state
transition (s, l, s′) ∈ δ.
The automaton allows us to cancel the ticket purchase at any time after
starting but before completing the purchase. Therefore, state transitions with
the label cancel have been introduced accordingly.
In order to extend the ticket vending machine in such a way that it offers
tickets at different prices, new states need to be introduced. For each different
ticket type, a new state that starts accepting coins needs to be created. For
each of these states, the logic to accept the correct amount of money needs to
be adapted. This leads to an increasing number of states and state transitions.
Nevertheless, the automaton will still be finite.
sequence needs to be complete with respect to the initial and final states of
the automaton.
Execution sequences have already been introduced in Section 2.2. Here, we
refine the notion of execution sequences for automata such that every sequence
starts in the initial state and ends in a final state of the automaton. We refer
to these sequences as complete execution sequences, from now on.
For the example automaton shown in Fig. 3.2, each execution sequence
starts with select ticket and terminates with either confirm or cancel. An
example of a complete execution sequence is
We can describe the behaviour of the ticket vending machine by the sequences
that the corresponding automaton can generate. The set of all these sequences
for a given automaton is called the language of the automaton.1
The language of the automaton shown in Fig. 3.2 consists of the following complete execution sequences. First, three execution sequences lead to
purchasing of the ticket:

⟨select ticket, 50 ct, 50 ct, 50 ct, confirm⟩,
⟨select ticket, 50 ct, 1 €, confirm⟩,
⟨select ticket, 1 €, 50 ct, confirm⟩.
1
In theoretical computer science, sequences are called “words”. Since we use
automata to model the behaviour of systems, however, we use the term “execution
sequence” rather than “word”.
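Since the automaton of Fig. 3.2 contains no cycles, its language can be enumerated by a simple depth-first search. The state and label names below abbreviate those of the figure:

```python
def language(delta, s0, finals):
    """All complete execution sequences of an acyclic automaton,
    collected by depth-first search from the initial state s0."""
    def walk(state, prefix):
        if state in finals:
            yield tuple(prefix)
        for (s, l, t) in delta:
            if s == state:
                yield from walk(t, prefix + [l])
    return set(walk(s0, []))

# Ticket vending machine of Fig. 3.2 (EUR written out for readability).
delta = {
    ("selection", "select ticket", "0 paid"),
    ("0 paid", "50ct", "0.5 paid"), ("0 paid", "1 EUR", "1 paid"),
    ("0.5 paid", "50ct", "1 paid"), ("0.5 paid", "1 EUR", "1.5 paid"),
    ("1 paid", "50ct", "1.5 paid"),
    ("1.5 paid", "confirm", "supplied"),
    ("0 paid", "cancel", "cancelled"), ("0.5 paid", "cancel", "cancelled"),
    ("1 paid", "cancel", "cancelled"), ("1.5 paid", "cancel", "cancelled"),
}

words = language(delta, "selection", {"supplied", "cancelled"})
purchases = {w for w in words if w[-1] == "confirm"}
```

The search confirms that exactly three complete execution sequences end with confirm; the remaining ones end with cancel.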
We could also use these sequences to define the behaviour of the system.
However, as this example already shows, it is quite hard to understand and
discuss the operations of a system using sequences alone. Graphical representa-
tion by automata helps people to model and discuss the behaviour of systems.
This also includes the maintenance of systems, for instance, when new system
requirements emerge.
To discuss this aspect, we suppose that the ticket vending machine is
extended in a way that allows us to represent multiple purchasing processes
that are carried out one after another with a single automaton. Figure 3.3
shows the corresponding adaptation. The automaton returns to its initial state
whenever a ticket is supplied or the purchase is cancelled.
This example shows that the initial state of an automaton can have incoming
arcs, and final states can have outgoing state transitions. It is even possible
that the initial state is also a final state at the same time.
After a ticket has been supplied, the automaton returns to the state
Ticket selection through a reset action. So far, actions have represented inter-
actions of the customer with the vending machine. The reset action, however,
is not triggered externally by a customer, but internally by the system. Never-
theless, the behavioural specification of the system needs to take this action
into account.
Recall that a complete execution sequence always terminates in a final state.
Since Ticket selection is the only final state of the automaton shown in Fig. 3.3,
every complete sequence needs to terminate in the state Ticket selection. As
this is the initial state of the automaton, the empty sequence ε constitutes a
complete execution sequence in terms of Definition 3.4:
[Fig. 3.3: Automaton of the ticket vending machine serving multiple customers: the automaton of Fig. 3.2, in which cancel leads from every payment state back to Ticket selection, and a reset transition leads from Ticket supplied back to Ticket selection; Ticket selection is both the initial and the only final state.]
In each labelled transition system and each automaton studied so far, one
action leads to exactly one subsequent state. For instance, in the online shop
example shown in Fig. 2.8, the action Checkout (C) leads from the state
Shop entered (se) to the state Shopping completed (sc). In the state se with
a given input C, there is exactly one state that can be reached in one step,
namely sc.
We call this behaviour deterministic because, given a state (here se) and
an action (C), the next state is uniquely determined (sc). In deterministic
automata, all transitions from a given state s have different labels, i.e.,

(s, l, s′), (s, l′, s′′) ∈ δ ∧ s′ ≠ s′′ ⟹ l ≠ l′.
Different arcs that emerge from a given state are distinguished by different
actions, denoted by different symbols from the alphabet Σ that label the state
transitions in an automaton. This makes sure that all choices are deterministic.
If we take a close look at Definition 3.2, however, this constraint cannot
be found. State transitions are defined by a relation, i.e., a set of tuples that
consists of a source state, an action from the alphabet, and a target state. The
definition does not exclude the possibility of two different state transitions
that share the same source state and the same symbol:
(s, l, s′), (s, l, s′′) ∈ δ with s′ ≠ s′′.
If, from a state s, there is more than one possible next state s′, s′′ with a
given input l, the choice is not determined by the automaton. To describe
the behaviour of the system, we must recognise that all choices are possible,
and the next state is not determined by the input label. Therefore, we call
this behaviour non-deterministic. Non-deterministic finite automata will be
addressed below.
In a deterministic finite automaton, for each state and for each symbol
from the alphabet, there is at most one state transition that leads to a target
state. Hence, the state transition is a function that maps a pair consisting of a
source state and a symbol to a target state.
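The determinism condition, at most one target state per pair of source state and symbol, can be checked mechanically. A small sketch:

```python
def is_deterministic(delta):
    """True iff no (source state, symbol) pair has two target states."""
    seen = {}
    for (s, l, t) in delta:
        if seen.setdefault((s, l), t) != t:
            return False
    return True

def as_function(delta):
    """A deterministic relation viewed as a partial function: a dict
    from (state, symbol) to the unique target state."""
    return {(s, l): t for (s, l, t) in delta}
```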
The automaton shown in Fig. 3.3 is deterministic. This is due to the fact
that all outgoing arcs of each state have different labels. For instance, the state
0 € paid has three outgoing arcs, each of which has a different label, leading
to different next states.
This example also shows that not every input symbol can be applied in
every state. This is a desired property, since a system cannot perform all
actions in all states. Automata constrain the availability of actions to certain
states. Mathematically, this constraint can be expressed by defining the state
transition function to be a partial function.
A partial function relates only a subset of its domain to elements of its
codomain. Hence, a state transition does not need to be defined for every
combination of source state and input symbol.
For example, in Fig. 3.3, the symbols select ticket and confirm are not
represented by any state transition that leaves the state 0 € paid. In a complete
automaton, every state has an outgoing state transition for every symbol of
the alphabet. For the reason stated above, virtually all systems are described
by partial state transition functions.
The state transition relation of a finite automaton can also be illustrated by
a matrix. Table 3.1 shows the matrix for the ticket vending machine automaton
introduced in Fig. 3.3.
In the matrix, the leftmost column denotes the source states and the
topmost row denotes the symbols from the alphabet. At the intersection of a
source state and a symbol is the target state. Therefore, we find the target
state s′ at position (s, l) of the matrix if and only if δ(s, l) = s′.
This representation shows in which states a certain input symbol can be
read. For instance, the confirm symbol can only be read in the state 1.5 € paid,
whereas the symbol cancel can be read in all states except Ticket selection
and Ticket supplied.
Table 3.1: The state transition relation of the automaton in Fig. 3.3
Source state     | select ticket | 50 ct      | 1 €        | confirm         | cancel           | reset
Ticket selection | 0 € paid      |            |            |                 |                  |
0 € paid         |               | 0.5 € paid | 1 € paid   |                 | Ticket selection |
0.5 € paid       |               | 1 € paid   | 1.5 € paid |                 | Ticket selection |
1 € paid         |               | 1.5 € paid |            |                 | Ticket selection |
1.5 € paid       |               |            |            | Ticket supplied | Ticket selection |
Ticket supplied  |               |            |            |                 |                  | Ticket selection
The matrix shows that for each combination of source state and input
symbol there exists at most one target state, which satisfies the requirement of
a deterministic finite automaton. It also has a lot of gaps, where a combination
of source state and symbol has no target state because the state transition
relation of the automaton is a partial function. For complete finite automata,
there are no gaps in the state transition matrix.
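The matrix view of a partial transition function can be derived from the relation; in the sketch below, `None` marks the gaps (illustrative code over a small fragment of Table 3.1):

```python
def transition_matrix(delta, states, alphabet):
    """Rows are source states, columns are input symbols; None marks a
    gap where the partial transition function is undefined."""
    func = {(s, l): t for (s, l, t) in delta}
    return {s: {l: func.get((s, l)) for l in alphabet} for s in states}

# Fragment of the ticket vending machine of Fig. 3.3.
delta = {
    ("Ticket selection", "select ticket", "0 EUR paid"),
    ("1.5 EUR paid", "confirm", "Ticket supplied"),
    ("Ticket supplied", "reset", "Ticket selection"),
}
matrix = transition_matrix(
    delta,
    ["Ticket selection", "1.5 EUR paid", "Ticket supplied"],
    ["select ticket", "confirm", "reset"],
)
```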
The output of a Moore automaton depends only on its current state, and this
output is provided whenever the state is entered. We capture this intuition by
the following definition.
[Fig. 3.5: Moore automaton for the ticket vending machine. Overpayment leads to the additional states 2 € paid, 2.5 € paid, and 3 € paid, whose outputs return the change (⟨50 ct⟩, ⟨1 €⟩, and ⟨1 €, 50 ct⟩, respectively); from every fully paid state, confirm leads to Ticket supplied, whose output is ⟨ticket⟩.]
The automaton does not provide an output for each state. For instance, in
the state 0.5 € paid, the forward slash separator and the output are missing.
In the graphical representation of states in which the automaton produces no
output, the output is omitted, as well as the forward slash that separates the
state identifier from the output sequence.
Just like the state transition function of a finite automaton, the output
function can be represented by a matrix. This is depicted in Table 3.3 for the
automaton in Fig. 3.5. An empty output sequence is denoted by ε.
The close coupling of the state transition function and the output function
can also be expressed by a combination of the two functions that maps source
states and input symbols to output sequences and target states:
δ̂ : S × Σ → Λ∗ × S.
The combined state transition function then comprises tuples (s, l, ω, s′). Consequently, state transition arcs in Mealy automata that lead from a source
state s to a target state s′ are labelled by the input symbol (l), a forward slash
(/), and the output sequence (ω).
The diagram in Fig. 3.6 depicts a Mealy automaton for the ticket vending
machine. If a state transition emits no output, i.e., the empty sequence ε, the
forward slash and the empty-sequence brackets are omitted.
[Fig. 3.6: Mealy automaton for the ticket vending machine over the states of Fig. 3.2. The transitions carry input/output labels: select ticket / ⟨selection⟩ from Ticket selection to 0 € paid; 2 € / ⟨50 ct⟩ from 0 € paid, 2 € / ⟨1 €⟩ from 0.5 € paid, 1 € / ⟨50 ct⟩ and 2 € / ⟨1 €, 50 ct⟩ from 1 € paid, all to 1.5 € paid; and confirm / ⟨ticket⟩ from 1.5 € paid to Ticket supplied.]
For any given input, the Mealy automaton behaves exactly like the Moore
automaton shown in Fig. 3.5. That is, for a given sequence of input symbols,
both automata provide the same sequence of output symbols.
The Mealy automaton, however, has a different structure. In particular,
fewer states are required. If two or more state transitions share the same source
state and the same target state but are labelled with different input symbols,
they can have different output sequences in a Mealy automaton. In a Moore
automaton this is not possible, so that multiple states are required to capture
the same behaviour.
Consider the state transitions that lead from the state 0.5 € paid to the
state 1.5 € paid. The state transition labelled 1 € results in an empty output
sequence, as the amount of 1.5 € is reached exactly, whereas the state transition
that takes 2 € as input needs to return change, resulting in the output sequence
⟨1 €⟩.
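Running a Mealy automaton means threading the state through δ̂ while concatenating the emitted output sequences. A sketch over a fragment of Fig. 3.6, with abbreviated names:

```python
def run_mealy(delta_hat, s0, inputs):
    """Execute a Mealy automaton: delta_hat maps (state, symbol) to a
    pair (output sequence, target state)."""
    state, output = s0, []
    for symbol in inputs:
        out, state = delta_hat[(state, symbol)]
        output.extend(out)
    return state, tuple(output)

# Fragment of the ticket machine: paying 50 ct and then a 2 EUR coin
# overpays by 1 EUR, which is returned as change.
delta_hat = {
    ("selection", "select ticket"): (("selection",), "0 paid"),
    ("0 paid", "50ct"): ((), "0.5 paid"),
    ("0.5 paid", "2 EUR"): (("1 EUR",), "1.5 paid"),
    ("1.5 paid", "confirm"): (("ticket",), "supplied"),
}
```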
The output function of a Mealy automaton can be represented by a matrix,
similarly to Moore automata. However, the actual output sequences depend
on the source states and input symbols. The combined output function of the
automaton, denoted by δ̂, is depicted in Table 3.4. This function maps pairs of source states and input symbols to pairs of target states and output sequences.
Source state     | select ticket            | 50 ct            | 1 €
Ticket selection | (0 € paid, ⟨selection⟩)  |                  |
0 € paid         |                          | (0.5 € paid, ⟨⟩) | (1 € paid, ⟨⟩)
0.5 € paid       |                          | (1 € paid, ⟨⟩)   | (1.5 € paid, ⟨⟩)
1 € paid         |                          | (1.5 € paid, ⟨⟩) | (1.5 € paid, ⟨50 ct⟩)
1.5 € paid       |                          |                  |

Source state     | 2 €                        | confirm
Ticket selection |                            |
0 € paid         | (1.5 € paid, ⟨50 ct⟩)      |
0.5 € paid       | (1.5 € paid, ⟨1 €⟩)        |
1 € paid         | (1.5 € paid, ⟨1 €, 50 ct⟩) |
1.5 € paid       |                            | (Ticket supplied, ⟨ticket⟩)
3.2.3 Conclusion
Finite automata with output can be used to describe the behaviour of dynamic
systems by their internal states and state transitions and by the output
generated. State transitions can be triggered from the system’s environment
by actions, represented by the input alphabet, to which the automaton reacts
and gives feedback, represented by the output alphabet.
The typical application of finite automata is in capturing behaviour in
terms of processing sequences of input. However, finite automata can also
model the internal aspects of software systems. In this case, the actions that a
software system performs are represented by input symbols. For example, if a
new customer needs to be stored in an enterprise application, the execution of
the procedure that stores the customer can be represented by an input symbol,
for instance Store new customer. The system can then reach a state Customer
stored.
Finite automata are widely used to represent the behaviour of systems, often
using a specific variant of automata, namely state machines. State machines
are part of the UML standard; they are based on extended automata, which
are introduced next.
The first extensions to finite automata are variables, assignments, and condi-
tions. These concepts will be introduced by extending the example of the ticket
vending machine that guided us through previous sections. Assume that the
public transport company that runs the ticket vending machine offers three
types of tickets.
A ticket of type A costs 2.60 €, a ticket of type B costs 2.90 €, and a ticket
of type C costs 3.20 €. To extend the ticket vending machine with these ticket
choices and more elaborate means to accept payment, we introduce three
integer (int) variables that reflect the price in euro cents, where, for instance,
a ticket of type A costs 260 cents.
[Fig. 3.7: Extended automaton with variable declarations int p := 0 and int a := 0 in the bottom left corner, and a self-loop at the state Paying labelled { 10ct / a := a + 10, 50ct / a := a + 50, 1 € / a := a + 100 }.]
If the automaton is in the state Paying, the variable p can have three differ-
ent values, 260 ct, 290 ct, or 320 ct, depending on the assignment. Furthermore,
when coins of value 10 cents, 50 cents, or 1 € are inserted, the variable a is
increased by the value of the inserted coin, while the automaton returns to
the state Paying. Hence, the variable a can be assigned infinitely many values
in the same state of the automaton.
Initial values can be specified for each variable. In the automaton shown in
Fig. 3.7 this is done by an annotation in the bottom left corner of the diagram.
In this figure, int denotes the fact that the integer numbers are the domain
of the variables. The initial values are set to 0; these are assigned when the
system enters the initial state for the first time.
In the automata discussed so far, every state transition with a source state s
could be triggered when the automaton is in s. However, we might want to
constrain a state transition to certain conditions. The concept of a conditional
state transition is therefore introduced here using the example of the ticket
vending machine.
After the customer has selected a ticket, the customer needs to insert coins
to pay for it. An extended automaton with a state Paying and a self-loop was
introduced above. The self-loop is followed whenever a coin is inserted. The
amount already inserted is represented by the variable a. When, for example,
a 50 cents coin is inserted, the value of a is incremented by 50.
The automata discussed so far have done nothing to prevent overpaying.
However, the customer should only be allowed to insert coins if the amount
currently inserted is less than the price of the ticket. This property of the
system can be reflected in an extended automaton using conditional state
transitions. For the sake of simplicity, we consider 10 cents, 50 cents, and 1 €
coins only. An extension to other coins and notes is, however, straightforward.
A condition is an expression in first-order logic over the variables of an
automaton. With variables a for the amount and p for the price, we can define
an expression that returns true if and only if the amount is smaller than the
price, by a < p. By use of a conditional state transition of this form, we can
make sure that a coin can only be inserted if that condition holds. The state
Paying in Fig. 3.8 has state transitions that loop back to that state and accept
inserted coins as long as the above expression holds.
To represent conditional state transitions, we extend the state transitions
with a condition expression, resulting in the following labelling scheme:
[ condition ] input / output; assignments
condition is a first-order logic expression over the set of variables of the
automaton. The state transition can occur only if the expression evaluates
to true. The default expression is the logical constant true.
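The labelling scheme [condition] input / output; assignments can be mimicked by representing guards and assignments as functions over the variable valuation. This is an illustrative encoding, not the book's formalism:

```python
# Each transition: (source, guard, input symbol, assignment, target).
# Guards and assignments operate on a valuation of the variables a, p.
transitions = [
    # [a < p] 50ct / ; a := a + 50     (self-loop at Paying)
    ("Paying", lambda v: v["a"] < v["p"], "50ct",
     lambda v: {**v, "a": v["a"] + 50}, "Paying"),
    # [a < p] 1 EUR / ; a := a + 100
    ("Paying", lambda v: v["a"] < v["p"], "1 EUR",
     lambda v: {**v, "a": v["a"] + 100}, "Paying"),
]

def fire(state, valuation, symbol):
    """Take an enabled transition for `symbol`, if any; otherwise the
    input is rejected and None is returned."""
    for (s, guard, label, assign, t) in transitions:
        if s == state and label == symbol and guard(valuation):
            return t, assign(valuation)
    return None
```

Once a ≥ p holds, no coin-accepting transition is enabled any more, matching the behaviour described for the state Paying.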
[Fig. 3.8: Extended automaton with the states Ticket selection, Paying, Returning change, and Ticket supplied. Ticket selection is left by the transitions { ticket A / ⟨sel. A⟩; p := 260, ticket B / ⟨sel. B⟩; p := 290, ticket C / ⟨sel. C⟩; p := 320 }. A silent transition [a ≥ p] τ / r := a − p leads from Paying to Returning change, and [r = 0] τ / ⟨ticket⟩ leads from Returning change to Ticket supplied.]
The outgoing state transitions of the state Paying in Fig. 3.8 are interpreted
as follows. If a < p and the customer inserts a 50 cents coin, the assignment
a := a + 50 increases the amount of a by 50. Similarly, the amount is increased
by 100 if the user inserts a 1 € coin. Notice that no coins can be inserted if
sufficient funds have been inserted, i.e., if a ≥ p holds.
In Table 3.5, we provide an example execution of the extended automaton
for the ticket vending machine. The state of the automaton is given in the
leftmost column and the assignment of its variables in the rightmost column.
The second and third columns show the input and output of the state transitions
leaving that state.
In the initial state Ticket selection, all variables are assigned their initial
values. From this state, the customer chooses a ticket of type A, represented
by a state transition with input ticket A and output sequence sel. A . This
state transition also updates the assignment of the variable p (price) with the
price of the chosen ticket and leads to the state Paying, which waits for coins
to be inserted.
In the example, the customer inserts 50 cents. The automaton traverses
the respective state transition, increases the amount of money inserted, a, by
50 and returns to the state Paying. When coins are inserted, this behaviour is
repeated as long as the condition a < p holds true.
After the fourth iteration, marked by Paying*, we have a = 300, and
the condition a < p is not true any more. Hence, the state transitions that
accept coins cannot be used. On the other hand, the state transition leading
58 3 Sequential Systems
In the automata discussed so far, a state transition can only occur if an input
symbol is read. On a more abstract level, this means that a state transition is
always associated with an interaction of a software system with its environment,
for instance, the insertion of a coin into a ticket vending machine.
However, there are situations in which a state transition should occur
without any interaction. We have encountered such a situation in the ticket
vending machine example in Fig. 3.3. There, we introduced a state transition
with the label reset to show that the automaton returns to its initial state and
is ready to serve new customers once the ticket has been supplied.
However, reset does not reflect any input by a customer, in contrast to all
other state transitions of the ticket vending machine. To reflect this automatic
transition of the system, a specific type of transition is used, a silent state
transition. A silent state transition is denoted by a specific input symbol τ ∈ Σ.
This represents a state transition that is not associated with any action of the
system to be modelled. Silent state transitions are also called τ -transitions.
We have used a silent state transition in the advanced ticket vending
machine shown in Fig. 3.8. The state Paying accepts inserted coins until the
customer has inserted sufficient funds. That is, if a ≥ p, no coins can be inserted
any more. Instead, the automaton proceeds to the state Returning change
without any interaction with the customer. This transition is achieved by a
τ -transition.
Silent state transitions can be associated with a condition, an output
sequence, and variable assignments. In the sample automaton, the τ -transition
from Paying to Returning change computes the amount of money to be re-
turned to the customer, r := a − p, if the condition a ≥ p evaluates to true.
The execution example in Table 3.5 shows that the ticket vending machine
has accepted an amount that is greater than the ticket price, namely a = 300
for a ticket price p = 260, as shown in the state marked Paying* . The silent
state transition computes the change to be returned and assigns it to the
variable r.
Iterating over state Returning change, the automaton returns change to
the customer. As long as the amount to return is larger than 50 cents the
automaton returns 50 cents coins, and only for small amounts does it return
10 cents coins.
This behaviour is also achieved by silent state transitions. All outgoing state
transitions of the state Returning change are qualified by mutually exclusive
conditions. That is, it is not possible that any two conditions evaluate to true
at the same time. Hence, it is clear which state transition is to be chosen,
based on the value of r.
The automaton iterates over the state Returning change until the change
has been returned and the condition r = 0 holds true. Then the automaton
outputs the ticket and terminates. All these state transitions are carried out
without customer interactions, because they are τ -transitions.
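The change-returning loop driven by τ-transitions can be simulated directly. The sketch below reads the guard for 50 ct coins as r ≥ 50, which is an assumption about the exact conditions in the figure:

```python
def return_change(r):
    """Silent self-loops at Returning change: emit 50ct coins while at
    least 50 cents remain, then 10ct coins; when r = 0, the final
    tau-transition outputs the ticket."""
    output = []
    while r > 0:
        coin = 50 if r >= 50 else 10
        output.append(f"{coin}ct")
        r -= coin
    output.append("ticket")
    return output
```

For the execution of Table 3.5, r = 300 − 260 = 40, so four 10 ct coins are returned before the ticket is supplied.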
Non-deterministic Choices
// declaration of variables
int p := 0, a := 0, r := 0;

// state Paying
while a < p {
    input(coin);
    switch coin {
        case 1 €:   a := a + 100;
        case 50 ct: a := a + 50;
        case 10 ct: a := a + 10;
    }
}
// silent state transition to Returning change
r := a - p;
The program then iterates over the state Returning change, where the while
loop starts and continues to return change without any input while r > 0.
The reader might find it valuable to reflect on the model properties of
abstraction, mapping, and pragmatics of the extended automaton, the model,
and the original, i.e., the program.
Finite automata abstract away time in such a way that only the causal order
of actions that can be carried out is captured in the model. However, cases
exist where the modelling goal requires a notion of time. For instance, a smoke
detector must raise an alarm if it detects smoke in its vicinity. However, to
avoid false alarms from cigarette-smoking passers-by, it might be required that
the alarm should be raised only if smoke has been detected continuously for
at least ten seconds. Such behaviour cannot be modelled with the automata
that we have studied so far.
However, variables and conditions equip us with the capability to capture
quantitative information about the passage of time in automata. Timed au-
tomata introduce local clocks – integer variables that serve as counters. As
time passes, these counters are increased by the environment of the system
and can be used to enable or disable state transitions according to timing
constraints.
We can make use of the modelling of time in the ticket vending machine.
A customer might select a ticket but then decide to leave without proceeding
with a purchase; subsequent customers should nevertheless be welcomed with
the same initial state as the first customer. Therefore, the machine should
return to the initial state after some time of inactivity.
To add this time-out behaviour to the automaton, we insert a new state
Ready to pay, shown in Fig. 3.10. Once the customer has selected a ticket, a
clock is started before this state is entered. The clock is represented by an
integer variable t, which is initialised in the state transitions leaving the state
Ticket selection by the assignment t := 0.
Fig. 3.10: The ticket vending machine with a time-out: a timer variable t
(declared as timer int t := 0) is started when the state Ready to pay is
entered, and the timing constraint [t > 60] triggers the τ-transition back
to the initial state.
Timed automata abstract real time away to the extent that state transitions
require no time and time passes only while an automaton is residing in a state.
Variables that serve as clocks need to be distinguished from regular variables.
This is done by the keyword timer in the declaration of the variable. When the
automaton enters the state Ready to pay, the clock t starts to tick, as defined
by the initialisation of the timer variable in the state transitions. We stipulate
that the clock value is incremented every second. Outgoing transitions of this
state can define constraints using the current clock value.
The automaton accepts coins that are inserted within the first 60 seconds
only, denoted by the condition t ≤ 60. If the customer inserts coins in time,
the automaton proceeds to the state Paying in the same manner as in Fig. 3.8,
but with initial funds, i.e., the value of the first coin inserted. For the sake of
simplicity, we have omitted the rest of the automaton from the diagram. If
the automaton remains in the state Ready to pay for more than one minute
without customer interaction, i.e., t > 60, it returns to the initial state.
An automaton can have multiple clocks. Because time is discrete in the
system model, the clock ticks need to be related to real time, for example
minutes or milliseconds. In the above example, one tick represents the passing
of one second.
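As a sketch of these ideas, the time-out behaviour of the state Ready to pay can be simulated with an explicit tick counter. This is an illustration in Python, not the book's notation; the state names follow Fig. 3.10:

```python
# Sketch (assumptions, not the book's notation): a timer variable t that the
# environment increments once per second while the automaton resides in
# Ready to pay. A coin inserted while t <= 60 leads to Paying; once t > 60,
# the tau-transition back to the initial state Ticket selection fires.
class ReadyToPay:
    def __init__(self):
        self.state = "Ready to pay"
        self.t = 0                     # timer int t := 0 on entering the state

    def tick(self):                    # one clock tick = one second of real time
        self.t += 1
        if self.t > 60:                # timing constraint [t > 60]: time-out
            self.state = "Ticket selection"   # back to the initial state

    def insert_coin(self):
        if self.state == "Ready to pay" and self.t <= 60:
            self.state = "Paying"      # guard [t <= 60] enables the transition

m = ReadyToPay()
for _ in range(61):                    # 61 seconds without interaction
    m.tick()
print(m.state)
```

After 61 ticks without interaction the sketch has returned to the initial state, whereas a coin inserted within the first minute leads to Paying.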
Even though the terms “finite automata” and “state machines” are often used
synonymously, we shall use mainly “state machine” from here onwards. Finite
automata with output and extended automata form the conceptual background
to behavioural modelling using UML state machines.
While state machines and extended automata share many concepts, for
instance states, state transitions, and variables, state machines use a different
notation. In state machines, states are represented by rectangles with rounded
corners.
However, there are also conceptual differences between extended automata
and state machines. We shall discuss these aspects using the state machine
shown in Fig. 3.11. This shows the same behaviour as the extended ticket
vending machine depicted in Fig. 3.8 as an extended automaton. Since the
labels of state transitions will not be discussed until later in this section, they
have been omitted from Fig. 3.11.
3.4 State Machines 65
Fig. 3.11: The ticket vending machine as a UML state machine, with the
states Initial state, Ticket selection, Ready to pay, Paying, Returning
change, and Ticket supplied.
In finite automata, the initial and final states are regular states, as defined in
Definition 3.3. As such, they can have incoming and outgoing state transitions
and can be traversed during the execution of an automaton.
In UML state machines, in contrast, the initial state is a so-called pseu-
dostate that represents an entry point of the automaton. It is shown as a filled
circle. It has no incoming arc and exactly one outgoing arc. The automaton
cannot remain in this state, as the only outgoing state transition from the
initial state represents the initialisation of the automaton. Therefore, this state
may only have an action assigned that initialises the system; otherwise, no
action is provided.
Final states denote the definitive termination of the automaton and, hence,
must not have any outgoing arcs. A final state is drawn as a filled circle with
a solid circle around it. Different outcomes of an automaton are modelled by
different final states.
Triggers
In finite automata, state transitions are triggered by actions, for instance the
selection of a particular ticket or the insertion of a coin into a ticket vending
machine. These actions are represented by the automaton’s input alphabet. In
order to extend the automaton with new actions or inputs, the input alphabet
needs to be modified. For instance, for each new coin or note that the ticket
vending machine is to accept, a new input symbol needs to be introduced.
The modelling goal of a ticket automaton may, however, focus not on the
actual coins and notes that will be accepted by the automaton but rather on
the amount of money. Hence, it is beneficial to abstract away the actual input
and use a more flexible approach to represent different inputs.
In state machines, this can be achieved by so-called triggers. Triggers cause
state transitions based on some action carried out, some input provided, or
some event that influences the system. Triggers may have parameters, whose
values can be used in the state transitions.
The following example can be used to illustrate triggers in state machines.
The model of the ticket vending machine should abstract away the actual
coins to be inserted, and represent only the amount of money that has been
inserted so far. Therefore, a trigger insert(amount) is introduced, where insert
represents the action of inserting money into the ticket vending machine, and
amount is a parameter that carries the value of the inserted coin or note.
State machines also offer the keyword at to denote that a state transition
is triggered at a certain point in time; for example, “at December 31, 23:59:59”
models the transition to a new year.
In a similar fashion, state machines introduce variable triggers by the
keyword when. A variable trigger is executed when the corresponding variable
is assigned a particular value. For instance, “when a = 5” denotes that a state
transition is triggered when the variable a is assigned the value 5.
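A minimal sketch of trigger dispatch may help to illustrate parametrised triggers and variable triggers. The class and method names below are illustrative, not part of the UML notation:

```python
# Sketch (illustrative names): a signal trigger insert(amount) carries the
# coin value as a parameter, and a variable trigger "when a = 5" fires as
# soon as the variable a is assigned that value.
class Machine:
    def __init__(self):
        self.a = 0
        self.fired = []                # record of variable triggers that fired

    def insert(self, amount):          # trigger with parameter: insert(amount)
        self.a += amount               # effect using the parameter value
        self.check_when()

    def check_when(self):              # variable trigger: when a = 5
        if self.a == 5:
            self.fired.append("when a = 5")

m = Machine()
m.insert(2)
m.insert(3)
print(m.fired)
```

The variable trigger fires exactly when the assignment makes the condition true, independently of which signal trigger caused the assignment.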
Fig. 3.12: The extended ticket vending machine of Fig. 3.8 as a UML state
machine, with the states Ticket selection, Idle, Ready to pay, and Ticket
supplied; the state transitions use triggers such as insert(amount) and
confirm, guards such as [a ≥ p], time triggers (after), and effects such as
printTicket(s) and the return of change.
So far, extended automata and state machines are to a large extent equiva-
lent. Figure 3.12 shows the extended automaton of Fig. 3.8 as a state machine.
Here, we observe the same states Ticket selection, Ready to pay, Paying, and
Ticket supplied and also state transitions that use triggers, guards, and assign-
ments.
Effects
State transitions also have output that uses parameters. For example, the
following output is associated with the transition that leads to the state
Ticket supplied in Fig. 3.12: printTicket(s), return(a − p).
Effects added to state transitions are useful for storing trigger parameters
in variables, since these parameters are available only in the scope of the state
transition. In the state machine shown in Fig. 3.12, this method is used to
store the chosen ticket selection and ticket price, as well as the amount of
money inserted.
Clocks were introduced for extended automata as variables that are incre-
mented while the automaton remains in a state. In state machines, time can
be employed in a similar fashion.
Figure 3.12 shows an additional feature of the ticket vending machine that
leverages clocks. If the machine is in the state Ticket selection for more than
one minute, it proceeds to the state Idle and starts showing commercials on the
screen. A time trigger using the keyword after is used for this state transition.
The state Idle is left if any interaction happens, i.e., if the customer
presses any button, denoted by the label “∗”. The same happens if there is no
interaction for five minutes in the state Ready to pay. From the diagram, one
can observe that a timer can be used in different states.
So far, this chapter has shown that the behaviour of sequential systems can be
modelled by automata or, in the context of UML, state machines. Examples
have been used to illustrate the concepts of this modelling and their application.
To ease comprehension, these examples have been of limited complexity. In
real-world application scenarios, however, not only simple but also complex
systems have to be represented in behavioural models.
The automata introduced so far can also be used to represent complex
behaviour. The resulting models, however, would also become complex, so that
communication and agreement about the behaviour represented would become
very difficult.
To deal with complexity in automata, we can abstract away from detailed
behaviour and represent such behaviour by specific states. A single state may
then represent behaviour captured by a subautomaton. Since a state in a
subautomaton can represent a further subautomaton, these automata are
organised hierarchically. States that are described by a subautomaton are
called composite states. Composite states are addressed in the next section.
To illustrate the concept of hierarchical state machines, we consider the
ticket vending machine example again. Figure 3.13 focuses on one part of the
state machine that was introduced in Fig. 3.12. In that automaton, the steps
involved in selecting a ticket are modelled by a single state, Ticket selection.
The output of selecting a ticket is represented by the state transition labelled
select(ticket, price), plus a timing constraint. This means that the individual
steps involving the selection of a ticket are abstracted away.
In practice, however, the selection of a ticket involves a number of steps,
such as choosing a ticket fare, choosing a period for which the ticket will be
valid, and finally confirming the purchase. In each of these steps, it should
be possible to return to a previous step to make changes. If our goal is to
Fig. 3.13: Part of the ticket vending machine state machine of Fig. 3.12,
with the states Idle, Ticket selection, and Ready to pay; the steps of
selecting a ticket are collapsed into the single state Ticket selection.

Fig. 3.14: The submachine of the state Ticket selection, with the states
Fare selection, Period selection, and Confirmation, connected by the state
transitions select(fare), select(period), confirm, and back.
that internal states and state transitions occur only in a certain context,
for instance when variables are assigned or timers are started.
The existence of state machines with hierarchical states shows that states can
reference other state machines that describe their internal behaviour. States
with this property, i.e., states that contain internal logic, are called composite
states. A state machine that captures the internal behaviour of a composite
state is called the submachine of that state.
It is possible to nest composite states in other composite states, which then
yields a hierarchy of states. At the lowest level of this hierarchy, there are states
that are not composite. Every state, composite or not, can have no more than
one direct parent.
Each submachine must have an initial state, which is triggered when the
composite state is entered. Hierarchical state machines can also use pseu-
dostates for initial states, as discussed earlier in this section. A state machine
cannot reside in a pseudostate and, therefore, the outgoing transition of the
initial state is triggered immediately. The submachine is then in the state that
is the target state of the initial state transition. If this state is a composite
state, then again the initial state of its submachine is triggered, and so on.
As a consequence, if an automaton is in a composite state, it is always
in a substate of that composite state as well. Hence, at any point in time,
exactly one non-composite state of an automaton is active, whereas a number
of composite states – the direct and indirect parent states of the non-composite
state – are active as well.
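The drilling down into initial substates can be sketched as follows; the hierarchy used is a hypothetical fragment of the ticket vending machine:

```python
# Sketch: resolving the active state configuration of a hierarchical state
# machine. Each composite state maps to the target of its initial
# pseudostate's transition; entering drills down until a non-composite state
# is reached, so exactly one leaf state plus all its parent states are active.
initial_substate = {                        # hypothetical hierarchy
    "Ticket selection": "Fare selection",   # composite state -> initial substate
}

def enter(state):
    active = [state]
    while state in initial_substate:        # composite: trigger its initial state
        state = initial_substate[state]
        active.append(state)
    return active                           # parents first, leaf state last

print(enter("Ticket selection"))
```

The returned list contains exactly one non-composite state (the last entry) together with all of its direct and indirect parents.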
Figure 3.13 showed the collapsed form of the composite state Ticket selec-
tion. In Fig. 3.15 we show its submachine. Visually, submachines are nested
in an expanded composite state. When the composite state Ticket selection
is entered, the initial state transition of its submachine is triggered and the
state Fare selection is entered. In that state, the customer is presented with
different fares, A, B, or C , to choose from.
Upon a choice of, for instance, ticket A, the price for the chosen fare is set,
p := 260 cent, and the submachine proceeds to the next state, Period selection.
It is possible to return to a previous state, denoted by the state transitions
labelled back. Eventually, the customer will confirm the ticket selection, which
terminates the submachine and leads to leaving the composite state.
Hierarchical automata allow not only states but also state transitions
to be grouped. If all states of a submachine have an outgoing state transition
with the same label, then these state transitions can be grouped into a single
state transition with the composite state as its source.
In Fig. 3.15, the composite state Ticket selection is the source state of
the state transition labelled cancel and therefore the state transition can be
triggered from every state of the submachine. If, for instance, the submachine
is in the state Period selection and the customer chooses to cancel the ticket
Fig. 3.15: The expanded composite state Ticket selection; its submachine
contains the states Fare selection (entry/display fare {A, B, C}), Period
selection (entry/display valid period {single, day, month}), and Confirmation
(entry/display ticket s and price p), connected by state transitions that set
the ticket s and the price p, with back transitions to the previous steps.

Fig. 3.16: The composite state Ticket selection together with a state Start
(entry/display welcome) outside it; a cancel transition leads from Ticket
selection to Start, and a state transition from Start leads directly to the
substate Confirmation.
state of the submachine, the user can cancel the interaction. This behaviour is
represented by the state transition from the state Ticket selection to the state
Start.
If a state transition leads from a state inside a submachine to a state
outside that submachine, the composite state can be left without reaching its
final state. This is different from a state transition that has the composite
state as its source state, where the state transition can be triggered from any
state inside the composite state.
This example shows that state transitions that lead to composite states
may cause inconsistencies in the submachine’s variables. For instance, if the
price had not been set as part of the effect of the state transition from Start to
Confirmation, it could not have been used in the state Confirmation. Modellers
need to be aware of this aspect of composite state transitions.
As mentioned above, effects of the entering and leaving of states can also
be specified for composite states. When a composite state is entered, the entry
effect is triggered before the submachine is initialised. When a composite state
is left, the exit effect is triggered after the termination of the submachine, i.e.,
after the final state has been reached.
The entry and exit effects of a composite state are always executed when
the composite state is entered or exited, no matter whether the state transition
that causes the entry or exit crosses the boundary of the composite state or
is attached to the composite state itself.
Exit effects are executed from the innermost state, i.e., the non-composite
state, outwards to the parent states. Entry effects are executed in the reverse
direction, starting from the outermost composite state that is entered. As the
submachine of a composite state is initialised on entry, effects attached to the
initial state transition are executed as well.
Fig. 3.17: A state transition T with effect trans leading from state S1.1,
contained in composite state S1, to state S2.1, contained in composite state
S2; S1 and S1.1 have the exit effects ex1 and ex1.1, S2 and S2.1 have the
entry effects en2 and en2.1, and the initial state transition of S2's
submachine has the effect in2.
If T is triggered, S1.1 is left, executing its exit effect ex 1.1 , followed by the
exit effect of its direct parent state S1 , i.e., ex 1 . Next, the effect of the state
transition trans is executed. Upon entering S2 , first the entry effect of the
composite state en 2 is executed. Then the submachine of the composite state is
initialised. Since the initial state is a pseudostate, its outgoing state transition
is triggered immediately, and the effect in 2 is executed. The submachine enters
state S2.1 , and its entry effect en 2.1 is executed.
In conclusion, the triggering of T leads to a state transition from S1.1 to S2.1 ,
causing the following sequence of effects to be carried out: ex 1.1 , ex 1 , trans,
en 2 , in 2 , en 2.1 .
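This ordering can be sketched programmatically. The helper below is illustrative; it simply concatenates the exit chain (innermost state first), the transition effect, and the entry chain (outermost entered state first), which is exactly the order described above:

```python
# Sketch of the effect ordering for a transition between nested states:
# exit effects run from the innermost state outwards, then the transition
# effect, then entry effects from the outermost entered state inwards,
# including the effect on the submachine's initial state transition.
def fire(exit_chain, transition_effect, entry_chain):
    # exit_chain:  [(state, exit effect), ...], leaf state first
    # entry_chain: [(state, entry effect), ...], outermost entered state first
    effects = [eff for _, eff in exit_chain]
    effects.append(transition_effect)
    effects += [eff for _, eff in entry_chain]
    return effects

seq = fire(
    exit_chain=[("S1.1", "ex1.1"), ("S1", "ex1")],
    transition_effect="trans",
    entry_chain=[("S2", "en2"), ("init", "in2"), ("S2.1", "en2.1")],
)
print(seq)
```

Applied to the transition T of Fig. 3.17, this yields the sequence ex1.1, ex1, trans, en2, in2, en2.1.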
Fig. 3.18: The history state allows returning to the last visited state in a
submachine
The use of the history state is illustrated here with the example shown in
Fig. 3.18. At any point during the ticket selection process, the customer can
choose a travel date, starting from which the selected ticket will be valid.
The state Date selection can be accessed from any substate of the composite
state Ticket configuration via the state transition labelled select date. The
state Date selection shows the current travel date and allows it to be changed;
this can be done repeatedly if required. Upon triggering of confirm, the
corresponding state transition leads to a history state within the composite
state Ticket configuration. The history state then activates the state in which
the submachine resided when it was left for date selection.
If, for instance, the submachine was in the state Period selection when the
customer chose to change the travel date, the submachine will be entered in
the same state, using the history state. The entry effect of Period selection is
carried out.
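A sketch of the history mechanism, with illustrative names: the history pseudostate can be represented by a variable that records the last active substate when the composite state is left for date selection:

```python
# Sketch (illustrative names): a shallow history pseudostate remembers the
# last active substate of Ticket configuration, so returning from Date
# selection re-enters the submachine in the state where it was left.
class TicketConfiguration:
    def __init__(self):
        self.substate = "Fare selection"   # initial substate
        self.history = None                # nothing recorded yet

    def select_date(self):                 # leave the submachine for Date selection
        self.history = self.substate       # history records the active substate
        return "Date selection"

    def confirm_date(self):                # transition targets the history state
        self.substate = self.history or "Fare selection"
        return self.substate

tc = TicketConfiguration()
tc.substate = "Period selection"           # customer is choosing a period
tc.select_date()
print(tc.confirm_date())
```

If the submachine was in Period selection when the date was changed, confirming the date re-enters Period selection, as described for Fig. 3.18.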
In Fig. 3.18, we have added a state Idle of the kind that we introduced
earlier along with the state transition for the time trigger. Hence, the state
Ticket selection is now complete and shows the expanded form of the composite
state depicted in Fig. 3.13. The design of our ticket vending machine is thus
concluded.
Bibliographical Notes
In this chapter, automata were introduced as a conceptual basis for modelling
discrete dynamic systems that also offers an intuitive notation for capturing
them graphically. Automata go back to Huffman (1954), Mealy (1955), and
Moore (1956), who envisioned ways to describe sequential switching circuits.
Imagine an electrical relay board that has input switches that can be in the
position on or off, and an output of lights that can also be on or off. Switching
circuits can be used to represent a theory of combinatorial logic, where the
output (lights) is a function of the input (switches). Sequential switching
circuits are an extension, in that they allow “remembering” of the state of
previous inputs. Consequently, the output is a function of the current input
and the previous states. This can be projected onto automata, where the
output depends on the current state and the input that led to that state.
Non-deterministic finite automata were first studied by Rabin and Scott
(1959). These authors argue that the benefit of non-deterministic automata lies
in the small number of internal states and the “ease in which specific machines
can be described”. Rabin and Scott also showed that every non-deterministic
automaton can be replaced with a behaviourally equivalent deterministic
automaton. In the worst case, the latter has 2|S| states, where S is the set of
states in the non-deterministic automaton.
Automata generally capture only the ordering of actions, and not the real-
time behaviour of discrete dynamic systems. To overcome this limitation, Alur
Fig. 4.1: A finite automaton captures sequential behaviour: all events are
causally related
once a has happened: b is causally dependent on a. The same holds for event
c, which can only happen after a has happened.
The characterisation of sequential behaviour relates only events that actu-
ally occur in a particular execution sequence. It does not relate events at the
level of the model. For instance, there is no path involving b and d in the finite
automaton. But then, b and d cannot happen in sequence in any execution.
A causal dependency between events a and b does not mean that b must
happen after a has happened. The example has shown that c can happen after
a, bringing the automaton to the state s3 . Instead, it means that b can only
happen after a has happened.
In a sequential system every event that occurs is causally dependent
on the event that occurs previously.
In the sample finite automaton shown in Fig. 4.1, the following complete
execution sequences can occur: a, c, d, e and a, b, e. In either case, every event
that occurs is causally dependent on the event that has just occurred: in the
first execution sequence, e depends on d, d depends on c, and c depends on a.
In the second execution sequence, e depends on b, which depends on a.
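The two complete execution sequences can also be enumerated from the transition relation. The concrete structure of the automaton in Fig. 4.1 is reconstructed here as an assumption from the text (a leads to s2, c leads to s3, and both branches end with e):

```python
# Sketch: enumerating the complete execution sequences of the automaton in
# Fig. 4.1. The transition structure below is an assumption reconstructed
# from the surrounding text, not taken from the figure itself.
transitions = {
    "s1": [("a", "s2")],
    "s2": [("b", "s4"), ("c", "s3")],
    "s3": [("d", "s4")],
    "s4": [("e", "s5")],
    "s5": [],                          # final state: no outgoing transitions
}

def sequences(state, prefix=()):
    if not transitions[state]:         # final state reached: complete sequence
        yield prefix
    for event, target in transitions[state]:
        yield from sequences(target, prefix + (event,))

print(sorted(sequences("s1")))
```

The enumeration yields exactly the two sequences named in the text, and in each of them consecutive events are causally related.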
Systems with this property are called sequential systems, because the events
can occur only in a sequence that is defined by the model, for instance, by a
finite automaton. We now turn our attention to concurrent systems, where the
constraint of causal dependencies between events in sequential systems does
not apply any more.
In a concurrent system, events that occur do not need to be causally
dependent on events that have previously occurred.
This means that in a concurrent system, not all events are causally related;
several events can occur independently of each other. A typical concurrent
system consists of several subsystems, which can – for some time – proceed
independently of each other.
A purchasing scenario will serve as an example of a concurrent system. A
customer wants to purchase a laptop computer and sends a request for quote to
three sellers. The customer can now proceed by allocating a budget, planning
which software tools to install on the laptop, and so on. When a seller receives
a message requesting a quote, the seller decides whether to prepare and send
a quote message to the customer. Assuming all sellers submit a quote, the
customer receives three quotes, selects one, and sends an order to one selected
seller.
This example of a typical interaction between business processes describes
a concurrent system. The system consists of four subsystems, the customer and
three sellers. Each seller’s activities to prepare a quote are causally independent
from the activities performed by the other sellers and by the customer. We will
return to similar examples when we discuss concurrency in business process
models later in this chapter.
The remainder of this chapter is organised as follows. Section 4.1 investigates
the ability of state machines to represent concurrent behaviour. Interactions
among subsystems of a concurrent system are discussed in Section 4.2. Petri
nets are the centre of attention in Section 4.3.
Fig. 4.2: (a) Automaton A1 for drink selection, with the states Drink
selection, Orange selected, Orange supplied, Lemon selected, and Lemon
supplied, and the events orange, lemon, and confirm
(b) Automaton A2 for payment
received by the automaton, the selected soda is supplied and the final state of
the automaton is reached.
The second automaton, shown in Fig. 4.2b, models the payment process.
A number of coins are inserted until the amount of 2 € is paid. Notice that
overpaying is not possible in this automaton, since, for instance, in the state
1 € paid, the automaton does not accept a 2 € coin. The state transition
confirm leads to the completion of the purchase.
The behaviour of the vending machine is defined by the composition of
the two automata. When the system starts, both automata are in their initial
states. Each subsystem progresses independently in a way that depends on the
input. At any point in time, one state transition of either automaton can take
place. Since the two automata can proceed independently of each other, they
are concurrent. For instance, selecting orange may take place before, after, or
at the same time as a 1 € coin is inserted.
Earlier in this section we characterised sequential execution by stating
that consecutive events are always causally related. However, if we investigate
the drink vending machine, there are events that occur consecutively but are
not causally related. This applies, for instance, to the events orange and 2 €.
Therefore, the drink vending machine is a concurrent system.
The behaviour of the system will now be discussed in more detail, using a
formal representation of the states of the system which is a combination of
the states of each of its subsystems. Formally, we can capture the state of a
composite system as follows.
(Drink selection, 0 € paid) ∈ S1 × S2 .
In this state, the system offers several state transitions based on the inputs to
the automata. For instance, selecting lemon in the drink selection automaton
leads to the composite state
(Lemon selected, 0 € paid) ∈ S1 × S2 ,
whereas inserting a 2 € coin into the payment automaton in the start state
leads to
(Drink selection, 2 € paid) ∈ S1 × S2 .
However, a confirm input in the state (Lemon selected, 0 € paid) results in the
state
(Lemon supplied, 0 € paid) ∈ S1 × S2 ,
because A2 does not accept the event confirm in the state 0 € paid, whereas
A1 does so in the state Lemon selected. In this state, the composite automaton
has provided the beverage without payment, which is an undesired behaviour.
This example shows two facets of behavioural modelling. On the one hand
it is beneficial to modularise the definition of behaviour into independent
subsystems, because less complex models suffice to specify the behaviour.
On the other hand, the subsystems of a given system always have some
dependencies, otherwise they would not be part of one system. In the above
example, the machine should provide the beverage only after the payment has
been received.
The required synchronisation of state transitions in distinct subsystems is
taken into account in UML state machines by orthogonal states.
In the example of the soft drink vending machine, we observed that concurrent
behaviour can be captured by several automata that together describe the
behaviour of a system. This section introduces state machines with orthogonal
states, which allow us to represent concurrent behaviour in one automaton,
rather than in several automata as presented. The idea is very similar to that
discussed above: a state machine can be in several states at the same time.
These states are called orthogonal states.
Orthogonal states are an extension of the composite states of UML state
machines that were discussed in subsection 3.4.3. In contrast to composite
states, an orthogonal state does not have only one submachine, but at least
two submachines. Each submachine is contained in its own region and all
submachines of an orthogonal state are active concurrently.
Fig. 4.3: An orthogonal state with two regions, Region A containing the
states SA.1 and SA.2, and Region B containing the states SB.1 and SB.2;
the orthogonal state is entered by a state transition from state S1.
The orthogonal state shown in Fig. 4.3 contains two submachines, each of
which is contained in one region; these regions are Region A and Region B,
respectively. For the sake of simplicity, we have omitted the labels of state
transitions. Regions can be horizontally or vertically aligned, and are named
by a label in the respective region. To avoid confusing the name of a region and
the name of the orthogonal state, the latter is put into a rectangle attached to
the orthogonal state, as shown at the top of Fig. 4.3.
The execution semantics of the orthogonal states and their submachines
in this example can be described as follows. Once the orthogonal state has
been entered through the state transition from S1 , each submachine triggers
its initial pseudostate, entering the state (SA.1 , SB.1 ). Then, each submachine
progresses normally and independently of the other submachine, in a way that
depends on the input to the system.
The orthogonal state can be exited in two ways, similarly to a composite
state. If the orthogonal state has an unlabelled outgoing state transition, that
transition is triggered once all submachines have terminated by reaching their
respective final states. Orthogonal states may also possess labelled outgoing
state transitions. If such a state transition is triggered, each submachine is
exited immediately, independent of the state it is currently in.
In contrast to composite states, orthogonal states have no entry or exit
effects. However, orthogonal states and composite states can be nested, and
states of submachines can have effects. Then, the ordering of entry and exit
effects applies, as discussed for composite states in Section 3.4.3.
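The exit rule for the unlabelled outgoing transition can be sketched as follows, using the state names of Fig. 4.3; that SA.2 and SB.2 are the final states of their regions is an assumption:

```python
# Sketch: two regions of an orthogonal state progress independently; the
# unlabelled outgoing transition fires only once every region has reached
# its final state (final states SA.2 and SB.2 are assumed).
class Orthogonal:
    FINAL = {"A": "SA.2", "B": "SB.2"}

    def __init__(self):
        self.active = {"A": "SA.1", "B": "SB.1"}  # both initial states entered

    def step(self, region, target):
        self.active[region] = target              # regions advance independently

    def can_exit_unlabelled(self):
        return all(self.active[r] == f for r, f in self.FINAL.items())

o = Orthogonal()
o.step("A", "SA.2")
print(o.can_exit_unlabelled())   # region B has not terminated yet
o.step("B", "SB.2")
print(o.can_exit_unlabelled())
```

A labelled outgoing transition, in contrast, would exit both regions immediately regardless of their current states.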
So far, we have covered the basics of concurrency in state machines. That is,
two or more submachines progress independently of one another. In most cases,
however, the individual parts of one system are not completely independent of
one another, as we have already seen in the example of the soft drink vending
machine.
Fig. 4.4: State machine model for a heating system with two concurrent
subsystems in an orthogonal state
Figure 4.4 shows how such a heating system can be modelled using state
machines with orthogonal states. Since the complete system consists of one
orthogonal state, we have omitted the initial and final states of the parent au-
tomaton. When the system is started, the two submachines Heating Regulation
and Heating Control are started in their respective initial states.
Heating Regulation maintains the room temperature by means of the states
Heating off and Heating on and their respective state transitions. We have
used t() to represent the current room temperature. The submachine starts in
state Heating off of the composite state Active. When the room temperature
drops below the desired temperature, i.e., t() < temp, the submachine turns
the heating on. This is achieved by the state transition to Heating on with
a variable trigger, indicated by the keyword when, that fires as soon as the
condition on the variable temp is met. In the same way, the heating is turned
off as soon as the desired room temperature temp is reached, i.e., t() ≥ temp.
The events win.open and win.close represent the opening and closing,
respectively, of the window. We assume that these events are emitted by sensors
attached to the window. If the window is opened in any of the substates of
the composite state Active, the heating should be turned off. This has been
modelled using the state Inactive. A time trigger is used to leave the state
Inactive after 10 minutes, i.e., 600 seconds.
To set the desired temperature temp independently of the current state of
Heating Regulation, a region is added to the orthogonal state that contains
a simple state machine consisting of one state StandBy. In that state the
machine can receive a new temperature, modelled as an input setTemperature
with an input parameter new_temp. In this case, the machine updates the
temperature with the new value, i.e., temp := new_temp. This is possible
because all variables in a state machine are global and are therefore shared
between the submachines of an orthogonal state. Therefore, Heating Regulation
reacts immediately, based on the new desired temperature.
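To make this interplay concrete, here is a minimal Python sketch of the orthogonal state of Fig. 4.4; the class name HeatingSystem, the method names, and the reduction of each region to a plain attribute are invented for this illustration:

```python
class HeatingSystem:
    """Sketch of Fig. 4.4: two regions sharing the global variable temp."""

    def __init__(self, temp=20):
        self.temp = temp                 # shared between both regions
        self.regulation = "Heating off"  # region Heating Regulation

    # Region Heating Control: the single state StandBy accepts an input.
    def set_temperature(self, new_temp):
        self.temp = new_temp             # temp := new_temp

    # Region Heating Regulation: variable triggers on t(), the room
    # temperature, fire as soon as their condition holds.
    def observe(self, room_temp):
        if self.regulation == "Heating off" and room_temp < self.temp:
            self.regulation = "Heating on"    # when t() < temp
        elif self.regulation == "Heating on" and room_temp >= self.temp:
            self.regulation = "Heating off"   # when t() >= temp

h = HeatingSystem(temp=20)
h.observe(18)            # room too cold: heating turns on
assert h.regulation == "Heating on"
h.set_temperature(15)    # Heating Control updates the shared variable
h.observe(18)            # Heating Regulation reacts immediately
assert h.regulation == "Heating off"
```

Because temp is a single shared attribute, the update in one region is visible to the other without any explicit message.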
We have already argued that the states of a concurrent system are a combination
of the states of its concurrent parts. This applies to state transitions as well. In
this section, different ways to represent dependencies between state transitions
of orthogonal state machines are discussed.
Fig. 4.5: Home automation system with separate window monitor. Signalling
is achieved by Window Monitor emitting win.open, received by Heating Regu-
lation.
The details of the submachine in the region Heating Control are omitted;
the submachine is still the same as in Fig. 4.4 and is contained in the orthogonal
state, but it has been collapsed in the diagram for the sake of simplicity.
In Fig. 4.5, the submachine in the region Window Monitor consists of two
states. The state Closed indicates that all windows are closed. Upon entry, this
state yields the output effect win.close, which serves as a signal and can be
observed by other submachines of the same orthogonal state, i.e., by Heating
Regulation. Analogously, the state Open indicates that some windows are open.
The state machine counts the number of open windows in the integer variable
ows.
If the window monitor is in the state Closed and a window is opened, ows
is set to one. The window monitor transits to the state Open, for which the
entry effect is carried out, emitting the signal win.open. This signal is used as
a state transition trigger in the Heating Regulation submachine.
This example shows how an output effect of a state transition in one region
can be used as the input trigger of a state transition in another region, which
enables interaction between orthogonal submachines.
We refer to this mechanism as signalling of state transitions, because an
output of one region resembles a signal that is sent to another region. A signal
is instantaneous. It is received and processed by any submachine that is in
[Figure (graphic not recoverable): the home automation system of Fig. 4.5 extended with a burglar alarm, alongside the regions Window Monitor, Heating Regulation, and Heating Control; the variables string code = "" and int ows are declared.]
In the state Ready, the alarm is about to ring loudly, unless the correct
code is entered within a short amount of time.
This is achieved by two outgoing state transitions of the state Ready and
one outgoing state transition attached to the state Active. If a code cd is
inserted and this code is the same as the previously stored code code, the
burglar alarm returns to the state Inactive. If the code entered is wrong, i.e.,
code ≠ cd, or no code has been inserted within 30 seconds, the submachine
proceeds to the state Alarm, which sounds an alarm. This state can only be
left by entering the correct code.
The correct code can also be inserted when the submachine is in the state
Active, for instance to disable the alarm before entering or leaving the house.
If we look closely at the state transition from Inactive to Active, a new
variant of a guard is found:
[Window Monitor in Closed].
This guard expresses the condition that the state transition can only occur if
the submachine in the region Window Monitor is in the state Closed. That is,
the alarm can only be activated if all windows are closed. In many situations,
such a condition about the state of other submachines in the orthogonal
state proves useful, because signals are instantaneous and are not stored. For
instance, the signal win.close can only be processed during a state transition.
However, if all windows are closed, the window monitor is in the state Closed,
which we can refer to.
To summarise, using a state transition guard that requires other subma-
chines to be in a particular state, we can synchronise state transitions between
submachines.
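A minimal sketch of this synchronisation mechanism, with invented Python names, might look as follows; the guard simply reads the current state of the other region:

```python
class HomeAutomation:
    """Sketch: a guard [Window Monitor in Closed] reads the current
    state of another region of the same orthogonal state."""

    def __init__(self):
        self.window_monitor = "Closed"   # region Window Monitor
        self.alarm = "Inactive"          # region with the burglar alarm

    def activate_alarm(self):
        # guarded transition: fires only if the other region is in Closed
        if self.window_monitor == "Closed":
            self.alarm = "Active"

home = HomeAutomation()
home.window_monitor = "Open"
home.activate_alarm()
assert home.alarm == "Inactive"   # guard false: transition does not fire
home.window_monitor = "Closed"
home.activate_alarm()
assert home.alarm == "Active"     # guard true: transition fires
```

Unlike a signal, the guard can be checked at any time, because it refers to stored state rather than to an instantaneous event.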
The synchronisation of state transitions enables us to complete the model of
the soft drink vending machine introduced at the beginning of subsection 4.1.1.
The corresponding state machine is depicted in Fig. 4.7. The submachines of
the orthogonal state Running have been derived from the extended automata
in Fig. 4.2.
Fig. 4.7: State machine of the soft drink vending machine with synchronised
state transitions
The previous section has addressed the modelling of single systems that show
concurrent behaviour using orthogonal states. However, monolithic systems
are rather an exception, because the majority of software systems interact
with other systems.
An example is an online shop that accepts orders and payments, and is
responsible for delivering products to customers. For this purpose, the online
shop needs to interact with the customers, payment services, and logistics
providers. In such settings, collections of autonomous systems interact in order
to coordinate their activities. We refer to these settings as interacting systems.
Each part of an interacting system is an independent system that may
show sequential or concurrent behaviour internally. Since the focus of this
section is on the interaction of systems and the resulting behaviour, the term
“interacting systems” is used; interaction is also a key aspect of distributed
systems.
Any attempt to model all systems involved in a distributed system as a
single entity, that is, a state machine with orthogonal states, would result in a
complex and probably incomprehensible model. Therefore, modularisation is
used. Each system is modelled on its own, capturing its internal behaviour using,
for example, state machines. In addition, interactions have to be captured
as well. This section looks at some methods and techniques to represent
interactions of systems.
[Figure 4.8 (graphic not recoverable): time diagram with Customer, Logistics Provider, and Payment Service, showing initiate payment, execute transaction, deliver package, and receive package over time.]
cannot be forced into any particular behaviour by other systems, which leads
to a loose coupling of these systems.
In the example at hand, the payment service provider accepts a request
message for a financial transaction. The message contains all required informa-
tion, including account holders and bank account numbers and the amount of
funds to be transferred. Upon a successful transaction, the payment provider
sends a notification message to the receiver of the funds – the online shop – to
communicate that the payment has been carried out.
It is important to understand that time diagrams abstract away the types
of messages sent. The definition of a message includes information items, for
instance an electronic record of a successful payment, and physical items, such
as the goods ordered. In time diagrams, both types of messages are represented
by directed arcs (see Fig. 4.8).
The use of messages as a means for communication enables further abstrac-
tion, depending on the goal of the model. In Fig. 4.8, the temporal ordering
of various events is shown; for instance, handing over of the package occurs
prior to delivering the package. However, to gain an overview of the processes
of the online shop, one might not be interested in the details, but only in the
order in which shipment and payment are carried out.
[Figure 4.9 (graphic not recoverable): time diagram of Online Shop and Customer, showing ship package, receive package, make payment, and receive payment over time.]
This more abstract representation is depicted in Fig. 4.9, where only the
interaction of Online Shop and Customer is shown. This figure abstracts away
the transport of the package by the logistics provider and captures it by a
single message that is sent from the online shop to the customer. The same
applies to the payment and the payment provider.
Synchronous Communication
[Figure 4.10 (graphic not recoverable): synchronous delivery-based communication between A and B with rcv(m) and snd(ack); (a) receiver ready, (b) receiver busy.]
Consequently, after the sending of a message, both the sender and the receiver
are active at the same time.
If, however, the receiver is not ready to receive a message because it is
currently busy with another task, then the delivery of the message will be
delayed. This delay is depicted by the segmented message arc in Fig. 4.10b. It
is also possible that the receiver may not accept a message from the sender A
if it is expecting a message from another party beforehand.
Recall that the sender can only proceed once its message has been received
and acknowledged. This may result in an undesired situation: if the receiver
chooses never to receive the message, then the sender cannot proceed.
Response-based communication is very similar to delivery-based communi-
cation. Rather than waiting for an acknowledgement, the sender is blocked
until a response message is received. This message contains information that
is provided by the recipient in response to the initial message.
Response-based communication is depicted in Fig. 4.11. Instead of con-
firming the delivery of the original message immediately after reception, the
recipient waits until the message has been processed and only then returns a
response to the sender.
Synchronous response-based communication is sometimes also referred to
as remote procedure call or RPC. System A wants to execute a procedure
that is available on system B. Hence, it sends a message m that contains all
required information to B; B then executes the procedure, and returns the
result as a response message res to A.
This concept is also used by HTTP requests sent by web browsers. The
browser sends a request message to a web server. Upon reception, the web
server retrieves the requested information, compiles a web-page of dynamic
content, and returns the result to the browser, which then renders it and shows
it to the user.
Just as in delivery-based communication, the sender is blocked until the
response has been received. Here, the response is sent only after the processing
of the message has been completed, as shown in Fig. 4.11a. If the receiver is
[Figure 4.11 (graphic not recoverable): synchronous response-based communication between A and B with rcv(m) and snd(res); (a) receiver ready, (b) receiver busy.]
currently busy, then the response is delayed as well. We have all witnessed this
waiting when trying to access overloaded web sites. This situation is depicted
in Fig. 4.11b.
The two variants of synchronous communication share the benefit that the
sender knows that its message has been received or responded to, but also
the drawback that the sender is blocked and cannot proceed until a response
has been received. Hence, if the receiver is unable or unwilling to accept the
sender’s message or respond to it, the sender is stuck.
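The blocking behaviour of synchronous delivery-based communication can be sketched with two threads and blocking queues; all names here are invented for the illustration:

```python
import queue
import threading

# Sketch of synchronous delivery-based communication: the sender is
# blocked until its message has been acknowledged by the receiver.
channel, ack, log = queue.Queue(), queue.Queue(), []

def sender():
    log.append("snd(m)")
    channel.put("m")        # deliver the message ...
    ack.get()               # ... then block until snd(ack) arrives
    log.append("ack received")

def receiver():
    m = channel.get()       # rcv(m)
    log.append(f"rcv({m})")
    ack.put("ack")          # acknowledging unblocks the sender

a = threading.Thread(target=sender)
b = threading.Thread(target=receiver)
a.start(); b.start(); a.join(); b.join()
assert log == ["snd(m)", "rcv(m)", "ack received"]
```

If the receiver never calls `channel.get()`, the sender stays blocked on `ack.get()` forever, which is exactly the deadlock risk described above.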
Asynchronous Communication
[Figure (graphic not recoverable): asynchronous communication between A and B with snd(m) and rcv(m); (a) receiver ready, (b) receiver busy.]
[Figure (graphic not recoverable): B receiving messages m1 and m2 in different orders; (a) asynchronous, (b) synchronous delivery-based.]
instance, Fig. 4.14 describes a situation where system B only accepts a message
from system A after it has received and processed a message from system C .
Although message m1 , sent by system A, reaches system B before m2 has even
been sent, it is not immediately accepted by system B. Only after m2 from
system C has been processed can message m1 be accepted.
[Figure (graphic not recoverable): communication between A, B, and C involving snd(m2) and rcv(res2); (a) synchronous response-based, (b) asynchronous.]
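As a rough sketch of asynchronous communication with a message buffer (all names invented), the following shows how B can defer the acceptance of m1 until m2 has been processed:

```python
# Asynchronous communication: messages are buffered, so the sender
# never blocks, and B may accept them in its own order.  Here B
# insists on processing m2 (from C) before m1 (from A).
mailbox = []        # B's message buffer
processed = []

def send(msg):      # asynchronous send: returns immediately
    mailbox.append(msg)

def b_step():
    # B's acceptance policy: take m2 first if it is waiting
    for preferred in ("m2", "m1"):
        if preferred in mailbox:
            mailbox.remove(preferred)
            processed.append(preferred)
            return

send("m1")          # m1 arrives first ...
send("m2")
b_step(); b_step()
assert processed == ["m2", "m1"]   # ... but is accepted only after m2
```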
Messages
Fig. 4.16: Sequence diagram for the online shop scenario showing episodes
where participants are active
An example is shown in Fig. 4.16; the online shop is active over the course of
the complete interaction depicted, whereas the other participants are activated
when they receive a message and remain active for the time required to process
that message.
For instance, the Logistics Provider is activated by the message sub-
mit(parcel) from the Online Shop. It then delivers the parcel to the customer;
it remains active until it receives a confirmation as a response message from
the Customer. The Customer is active as soon as they receive an invoice,
waiting for the delivery of the parcel. The Payment Service is active only for
the duration of a financial transaction, i.e., from receiving a request from a
Customer until sending the notification to the Online Shop. Waiting for a
response to a synchronous response-based communication is considered time
spent active, although the sender is blocked until the response message is
received.
Figure 4.16 also shows that messages can be labelled with a name and a
set of parameters, passed along with the message:
message-name [(parameter 1 [, parameter 2 [, ...] ] )]
The message can carry any number of parameters, which are surrounded by
parentheses. If no parameters are included, the parentheses can be omitted.
For instance, the label send(invoice, amount) expresses the fact that an
invoice is sent to a customer, which contains the invoice document and the
amount payable as parameters. After receiving the parcel, the customer uses
the latter parameter to initiate the payment with the Payment Service by
passing the same amount as a parameter of the message pay.
To match a response in synchronous response-based communication with
the message originally sent, the message name and parameters are repeated.
Additionally, the response is passed as a return value separated by a colon from
the message declaration. In Fig. 4.16 the message submit(parcel) is responded
to with the signature (sign.) of the customer.
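This label format can be sketched as a small helper function; message_label is an invented name for this illustration, not part of UML:

```python
def message_label(name, params=(), ret=None):
    """Build a label of the form  message-name[(p1, p2, ...)][: ret]."""
    label = name
    if params:                              # parentheses only with parameters
        label += "(" + ", ".join(params) + ")"
    if ret is not None:                     # return value after a colon
        label += ": " + ret
    return label

assert message_label("notify") == "notify"
assert message_label("send", ["invoice", "amount"]) == "send(invoice, amount)"
assert message_label("submit", ["parcel"], ret="sign.") == "submit(parcel): sign."
```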
Fig. 4.17: Sequence diagram with a message where sender and receiver are the
same participant
Combined Fragments
[Figure 4.18 (graphic not recoverable): sequence diagram with a par fragment in which the Online Shop sends getPrice(dest) to two logistics providers, followed by an alt fragment whose guarded operands send submit(parcel,dest) to the chosen provider.]
Figure 4.18 shows an excerpt from the shipping process that involves the
online shop and several logistics providers. Here, we see two instances of the
same system type Logistics Provider. These are distinguished by instance
identifiers l1 and l2 . In the model, the Online Shop first obtains prices for a
delivery from each Logistics Provider using the synchronous response-based
message getPrice(dest) and then chooses the least expensive offer for the
shipment of a parcel.
The sequence diagram features two fragment types. The upper fragment,
with the keyword par, consists of two operands that are carried out concurrently.
The messages between the online shop and the logistics providers can be sent
and received in any order, as long as the order of messages within each operand
is preserved. For example, getPrice(dest) can be sent first to the logistics
provider l2 and then to l1 , whereas the response from l2 can be received before
the response from l1 . The process continues after this combined fragment, i.e.,
when the sequences modelled in each operand have been carried out completely.
The return values of a response-based message can be used in the remainder
of the sequence diagram. In the example in Fig. 4.18, the Online Shop chooses
the logistics provider with the lower of the prices p1 and p2 . This behaviour
is achieved by the alternative fragment, identified by the keyword alt, which
denotes that exactly one of its operands is chosen for execution. Which of
the alternatives is chosen is determined by a guard that is attached to each
operand. If the price p1 of logistics provider l1 is the best price, i.e., p1 < p2 ,
then l1 is chosen, modelled by the message submit(parcel,dest) sent to l1 .
To avoid the possibility of the process getting stuck, every alternative
fragment should have a default operand, denoted by a guard else, which is
chosen if no other guard evaluates to true. In the example in Fig. 4.18, the
second logistics provider l2 is chosen as the default, that is, if l1 is not cheaper
than l2 .
Iterations of parts of a sequence diagram are represented by the keyword
loop. The example in Fig. 4.19 shows a login procedure for the online shop.
Customers enter their username un and password pw. The response res is a
Boolean variable that indicates a successful login with a true value, and false
otherwise.
In contrast to par and alt fragments, which have at least two operands,
loops have only one operand, which can also have a guard. Similarly to the
case of alternative operands, a guard states that the operand is executed if
the guard evaluates to true. In the example, the login procedure is repeated if
the previous attempt has failed, i.e., res = false.
Additionally, loops can have minimum and maximum iteration numbers,
denoted in parentheses following the keyword loop:
loop(min,max).
Independently of the guard, the loop is guaranteed to be executed at least min
and at most max times. In the example, the Customer can attempt to log in at
most three times: once before the loop and twice within the loop. If the third
Fig. 4.19: Sequence diagram showing a loop fragment for repeated attempts to
log in to the online shop
attempt fails, i.e., res = false, the alt fragment chooses to lock the Customer’s
account and inform the customer about this action with a locked(un) message.
UML allows the following configurations for the minimum and maximum
numbers of loop iterations.
• loop(a,b) denotes that the loop is executed at least a and at most b times,
where a and b are natural numbers, a ≥ 0, and b ≥ a.
• loop(a,*) means that the operand is repeated at least a times, with a ≥ 0.
No upper limit of repetitions is given.
• loop(*) indicates that the number of iterations is not restricted at all; it
has the same meaning as loop(0,*).
Recall that the guard has a lower priority than the configuration with minimum
and maximum numbers of iterations loop(a,b). This means that the guard is
only evaluated if a ≤ c ≤ b for the current iteration counter c.
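These rules can be sketched as a small interpreter for loop(a,b); the function run_loop is invented for this illustration:

```python
def run_loop(a, b, guard, body):
    """Sketch of loop(a,b): the body executes at least a and at most b
    times; the guard is consulted only for iterations in between."""
    c = 0
    while c < b:
        if c >= a and not guard():
            break        # guard evaluated only while a <= c <= b
        body()
        c += 1
    return c

attempts = []
# guard always false: the minimum still forces one execution
assert run_loop(1, 3, lambda: False, lambda: attempts.append("try")) == 1
# guard always true: the maximum caps the repetitions at three
assert run_loop(1, 3, lambda: True, lambda: attempts.append("try")) == 3
```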
One of the reasons for the success of Petri nets is their dual characteristic;
the graphical and mathematical representations of Petri nets are equivalent.
Petri nets are bipartite graphs that consist of places, transitions, and directed
arcs between them. The graph is bipartite because each arc connects a place
to a transition or a transition to a place.
The mathematical foundation of Petri nets allows formal reasoning about
their properties, for instance, to identify anomalies in the behaviour of the
systems described. Petri nets are defined as follows.
[Figure 4.20 (graphic not recoverable): Petri net with places p1, p2, p3 and transitions t1, t2, t3; t1 connects p1 to p2, and t2 and t3 each connect p2 to p3.]
• A transition may represent an event that can occur, whereas an input place
of a transition denotes a precondition that needs to be satisfied for the
event to occur; an output place denotes a postcondition.
• A transition may represent a task to be carried out. Its input places may
represent resources that are required for the task to be carried out, and
its output places may represent the resources that are released after the
task’s completion.
• A transition can also represent a computational step in an algorithm. An
input place may denote the activation of this step and an output place its
termination. At the same time, input places may represent input data for
the computation and output places the result of the computational step.
To summarise, Petri nets are structures that consist of places, transitions,
and arcs between them. A distinctive aspect of Petri nets is their ability to
characterise behaviour by tokens on places and rules about how the distribution
of these tokens within a Petri net might change. Tokens are represented by
black filled circles on places in a Petri net. A token on a place might represent
that a precondition is met, that resources are available, or that input data is
available.
The Petri net shown in Fig. 4.20 contains a token on p1 . Formally, the
tokens in a Petri net are captured in its marking.
For the sake of brevity, we shall use a shorthand version to denote the
marking of a net, which lists all places that carry at least one token. The
marking of the Petri net shown in Fig. 4.20 can be expressed by [p1 ]. There
is exactly one token on the place p1 and no token on any other place. If a
Petri net has one token on the places p2 , p3 and p5 , we express the marking
by [p2 , p3 , p5 ] in this shorthand.
As discussed earlier, the transitions are the active components in Petri
nets. Transitions can change the state of a system. In Petri net theory, we say
a transition fires. When a transition fires, the state of the system can change.
Rules that define under what conditions a transition can fire and what the
result of the firing of a transition is are called firing rules.
Since there are different types of Petri nets, there are different firing rules.
In general, a transition can fire only if sufficiently many tokens reside on its
input places. When a transition fires, tokens on its input places are consumed
and new tokens are produced on its output places.
A transition can fire only if it is enabled. The firing of a transition is
regarded as an atomic step that does not consume time. Therefore, on the
firing of a transition t, the deletion of tokens in the input places of t, the firing
of t, and the generation of tokens on the output places of t occur as one atomic
step. “Atomic” means that the operation is indivisible. Later in this section,
we will introduce the most important types of Petri nets for the modelling of
behaviour, based on their respective firing rules.
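Assuming all arc weights are one, the firing rule just described can be sketched as follows; markings are represented as dictionaries from place names to token counts, and all function names are invented:

```python
def enabled(marking, inputs):
    """A transition is enabled if every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    """Consume one token per input place, produce one per output place;
    the whole update is treated as one atomic step."""
    assert enabled(marking, inputs)
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

m = {"p1": 1}                 # shorthand marking [p1]
m = fire(m, ["p1"], ["p2"])   # firing t1 moves the token to p2
assert m == {"p1": 0, "p2": 1}
```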
Before doing so, another example of a Petri net will be introduced that
shows the wide applicability of the approach. Figure 4.21 shows a Petri net
representing a bicycle manufacturing process. The transitions have labels in
natural language, which makes the net easily understandable. Also, some places
have text labels, for instance “orders” and “bicycles”.
This Petri net can be interpreted as follows. When an order comes in, first a
frame has to be taken from the inventory. Afterwards, the wheels are mounted,
before the brakes and the drive train are attached. As will be discussed shortly,
the brakes and drive train can be attached in any order. Once both components
have been attached to the frame, the bicycle can be tested.
[Figure 4.21 (graphic not recoverable): bicycle manufacturing Petri net with transitions get frame, attach wheels, attach brakes, attach drive train, and test bicycle, internal places p1–p5, and the labelled places orders and bicycles.]
The basic type of Petri nets is condition event nets. The intuition behind
condition event nets is as follows. Places represent conditions and transitions
represent events. This means that a token on a place models the fact that a
condition is met. This assumption has implications for the markings that are
permitted in condition event nets.
Since a condition can be either true or false, there can be either no token
on a place (the condition is false) or one token (the condition is true). It does
not make sense to have multiple tokens on a place. Therefore, the number of
tokens on a given place is at most one in condition event nets.
Figure 4.22 shows the manufacturing process as a condition event net. In
particular, the Petri net system is shown with one token on the place p1 , one
[Figure 4.22 (graphic not recoverable): the manufacturing process as a condition event net, with a token on p1 (Order B) and tokens on p2 and p5 (Order A).]
token on p2 and one token on p5 . The tokens represent cases in the operational
business of the manufacturing company.
Order A is represented by two tokens and Order B by one token. This state
is valid in this condition event net, because there is no place with more than
one token. The firing of a transition must be forbidden in a condition event
net if the resulting marking would be disallowed.
This is the case for the attach wheels transition, for instance. There is a
token on each place in its preset, but if the transition were fired, there
would be two tokens on p2. Therefore, in condition event nets, a transition
is enabled only if there is a token on each place in its preset and no tokens
on its output places. Situations where a place is both in the preset and in the
postset of a transition will be covered below.
Based on the firing rules for condition event nets, the Petri net shown in
Fig. 4.20 can be discussed in more detail. Here, we assume that this Petri net
is a condition event net. The transition t1 is enabled, because there is a token
on each place in its preset, i.e., on [p1 ], and there is no token on any place in
its postset, i.e., on [p2 ].
Firing of t1 changes the marking of the Petri net to [p2 ]. In this state,
the transitions t2 and t3 are enabled at the same time. However, when one
transition fires, the token on p2 is removed and the other transition is no longer
enabled. This situation is called a conflict. Two transitions are in conflict
if firing one transition disables the other transition. In our example, firing
t2 disables t3 and, of course, vice versa. Hence, an exclusive choice is made
between these transitions.
In condition event nets, the firing of transitions represents the occurrence
of events. Therefore, the Petri net shown in Fig. 4.20 encodes sequences of
events. In fact, two sequences of events are possible, t1 , t2 and t1 , t3 . This
Petri net encodes sequential behaviour because each event that can occur is
causally dependent on the event that has occurred before it. For instance, t2
can only happen once t1 has happened; the same holds for t3 . This means that
Petri nets can be used to represent sequential behaviour.
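The condition event net of Fig. 4.20 and the conflict between t2 and t3 can be sketched as follows; recall that enabling additionally requires empty output places:

```python
# Net structure of Fig. 4.20: transition -> (input places, output places)
NET = {"t1": (["p1"], ["p2"]),
       "t2": (["p2"], ["p3"]),
       "t3": (["p2"], ["p3"])}

def enabled(marking, t):
    ins, outs = NET[t]
    # condition event rule: tokens on the preset, no tokens on the postset
    return (all(marking[p] == 1 for p in ins)
            and all(marking[p] == 0 for p in outs))

def fire(marking, t):
    ins, outs = NET[t]
    m = dict(marking)
    for p in ins:
        m[p] = 0
    for p in outs:
        m[p] = 1
    return m

m = {"p1": 1, "p2": 0, "p3": 0}     # marking [p1]
m = fire(m, "t1")                   # now [p2]: t2 and t3 are both enabled
assert enabled(m, "t2") and enabled(m, "t3")
m = fire(m, "t2")                   # resolving the conflict ...
assert not enabled(m, "t3")         # ... disables t3
```

The two possible firing sequences of this net, t1, t2 and t1, t3, correspond to the two ways of resolving the conflict.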
Petri nets can also represent concurrent behaviour. As discussed at the
beginning of this chapter, when a system shows concurrent behaviour, consec-
utive events do not need to be causally related; they can occur independently
of each other. These considerations are illustrated by the example shown in
Fig. 4.23.
[Figure 4.23 (graphic not recoverable): Petri net in which t1 consumes a token from p1 and produces tokens on p2 and p3; t2 connects p2 to p4, and t3 connects p3 to p5.]
attached, the token representing the mechanic is put back on the mechanics 1
place, indicating their availability for the next bike.
[Figure 4.24 (graphic not recoverable): the manufacturing Petri net extended with the places mechanic 1 and mechanic 2, modelling the availability of mechanics.]
Notice that the Petri net allows the brakes and the drive train to be
attached in any order. This is facilitated by the concurrency in the Petri net.
In any case, however, the bike can only be tested after both the brakes and
the drive train have been attached.
The discussion of the manufacturing process example has shown that not
all scenarios can be modelled adequately by condition event nets. If places
represent storage, then the storage capacity is limited to one. However, when
we look at real-world systems, this limitation does not apply. For instance, a
manufacturing company might be able to receive many orders at the same
time. There might be storage for many bikes that are in production, not just
for one, as represented by the Petri net shown in Fig. 4.25.
Fig. 4.25: A manufacturing process defined by a condition event net, with two
instances
By the weighting of the arc from the wheels place to the attach wheels
transition, the net represents the fact that two wheels are taken from the
supply and attached to the frame when the transition attach wheels fires. Place
transition nets allow us to define complex system behaviour in a more compact
way than condition event nets can.
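The weighted firing rule can be sketched for the attach wheels transition; the arc from wheels has weight two, while the concrete output places (p2 and p4) are an assumption made for this illustration:

```python
WEIGHTS_IN = {"p1": 1, "wheels": 2}    # input arcs of attach wheels
WEIGHTS_OUT = {"p2": 1, "p4": 1}       # output arcs (assumed, weight 1)

def enabled(marking):
    """Enabled iff every input place holds at least the arc weight."""
    return all(marking.get(p, 0) >= w for p, w in WEIGHTS_IN.items())

def fire(marking):
    assert enabled(marking)
    m = dict(marking)
    for p, w in WEIGHTS_IN.items():    # consume as many tokens as the weight
        m[p] -= w
    for p, w in WEIGHTS_OUT.items():   # produce tokens on the output places
        m[p] = m.get(p, 0) + w
    return m

assert not enabled({"p1": 1, "wheels": 1})   # a single wheel is not enough
m = fire({"p1": 1, "wheels": 4})
assert m == {"p1": 0, "wheels": 2, "p2": 1, "p4": 1}
```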
Fig. 4.27: State of the Petri net shown in Fig. 4.26 after attaching the wheels
and the drive train
The classes of Petri nets introduced so far are adequate to represent a broad
range of system behaviours. However, there are situations in which the mod-
elling constructs provided by these nets are not adequate to express complex
behaviour. Extended Petri nets allow us to represent such behaviour in a
compact and comprehensible manner. A broad range of extensions to classical
Petri nets has been proposed in the literature, and this section looks at two of
them: inhibitor arcs and reset arcs. In both cases, the structure of the Petri
nets is not changed: Petri nets remain bipartite graphs. These extensions are
realised by two new types of arcs, inhibitor arcs and reset arcs.
Inhibitor Arcs
In the Petri nets introduced so far, a transition can fire if there is a sufficient
number of tokens on its input places and, in the case of condition event nets,
after firing of the transition no place contains more than one token. Inhibitor
arcs reverse this approach. A transition can fire only if there are no tokens
on places that are connected to the transition by an inhibitor arc. That is, a
token in such a place prevents, or inhibits, the transition’s firing.
Inhibitor arcs only affect the enabling and not the firing semantics of a
Petri net. Therefore, inhibitor arcs can be combined with different types of
Petri nets. Since we will use mainly place transition nets in the remainder of
this book, place transition nets will be extended with inhibitor arcs.
Inhibitor arcs are distinguished graphically from regular arcs in Petri nets:
an inhibitor arc is shown by an edge with a small circle at the end, connected
to the transition that can fire only if the connected place is empty.
Two different Petri nets that represent parts of a back-ordering process in
the warehouse of the bike manufacturer are shown in Fig. 4.28. At the top of
each net, we see a part of the bike manufacturing process presented above.
In Fig. 4.28a, a normal place transition net is shown. In this net, back-
ordering of drive trains can be done arbitrarily often. Each time, five drive
trains are ordered. After the drive trains are received, they are put on the drive
trains place, which represents the storage of drive trains by the bike company.
Figure 4.28b shows a variant of this Petri net, involving an inhibitor arc. This
arc makes sure that drive trains can only be ordered if no drive trains are
available, i.e., if no token is on the drive trains place.
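The inhibitor-arc variant can be sketched as follows; the net is reduced to the drive trains place, and the place holding pending orders is omitted for brevity:

```python
def backorder_enabled(marking):
    """Inhibitor arc: back-order is enabled only while the connected
    place is empty."""
    return marking.get("drive trains", 0) == 0

def receive(marking):
    """Receiving the back-order puts five drive trains into storage
    (arc weight 5)."""
    m = dict(marking)
    m["drive trains"] = m.get("drive trains", 0) + 5
    return m

m = {"drive trains": 0}
assert backorder_enabled(m)          # storage empty: ordering allowed
m = receive(m)
assert m["drive trains"] == 5
assert not backorder_enabled(m)      # stock present: ordering inhibited
```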
Reset Arcs
Now that we have introduced inhibitor arcs, this section looks at reset arcs in
Petri nets. Recall that inhibitor arcs affect the enabling of transition firings,
[Figure 4.28 (graphic not recoverable): back-ordering of drive trains with back-order and receive transitions and an arc weight of 5; (a) back-ordering Petri net without inhibitor arc, (b) back-ordering Petri net with inhibitor arc.]
but they do not affect the actual firing. In some sense, reset arcs reverse the
semantics of inhibitor arcs in that they do not affect the enabling of transitions,
but do affect firing behaviour.
In particular, when a transition fires, all places that are connected to that
transition by reset arcs are cleansed of all their tokens, i.e., these places are
reset. In general, reset arcs can be applied to various classes of Petri nets.
An example of a place transition net with reset arcs is shown in Fig. 4.29a.
Here, firing the transition clear inventory removes all tokens from all places
that are connected to it with a reset arc, effectively removing all items from
all inventory places of the bike manufacturer. There are several different ways
to represent reset arcs in extended Petri nets. If a transition t has a single
or only a few reset places, those reset places can be connected to t with an
arc with a double arrowhead, as shown in Fig. 4.29a. If there are several reset
places for a transition t, we can mark a region of the Petri net and connect
that region to t using a single reset arc. The region can be considered as a
shorthand notation for a reset arc to each place in it. Figure 4.29b shows the
corresponding Petri net using a reset region.
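The firing rule with reset arcs can be sketched similarly, again with illustrative names that are not taken from the book: reset places do not influence enabling, but are emptied completely when the transition fires.

```python
def fire(marking, inputs, outputs, reset_places):
    """Fire a transition of a place transition net with reset arcs:
    consume one token per input place, empty every reset place,
    then produce one token per output place."""
    m = dict(marking)
    for p in inputs:
        if m.get(p, 0) < 1:
            raise ValueError("transition is not enabled")
        m[p] -= 1
    for p in reset_places:   # reset arcs do not affect enabling,
        m[p] = 0             # but clear these places on firing
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Clearing the inventory: however many tokens the inventory
# places hold, firing removes them all.
before = {"go": 1, "frames": 3, "wheels": 7}
after = fire(before, inputs=["go"], outputs=["cleared"],
             reset_places=["frames", "wheels"])
print(after)   # {'go': 0, 'frames': 0, 'wheels': 0, 'cleared': 1}
```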
Fig. 4.29: Petri nets for clearing the inventory using (a) reset arcs and (b) a reset region
Based on the definitions of inhibitor arcs and reset arcs, an extended Petri
net is a place transition net that additionally allows both types of arcs.
Fig. 4.30: Extended Petri net for cancelling an order in the bike manufacturing process
Extended Petri nets and the associated notation are illustrated in Fig. 4.30,
which shows a variation of the bike manufacturing process discussed above. In
this example, reset places are used to cancel an order. This is achieved by a
transition cancel production, which removes all tokens from the manufacturing
process in the region enclosed by the dashed line. Using inhibitor arcs, this
Petri net also specifies that an order can be cancelled only before a bike has
been tested.
Coloured Petri Nets
In traditional Petri nets, such as condition event nets or place transition nets,
tokens cannot be distinguished from each other. That is, while tokens may
represent items of material or information, transitions cannot define which
items need to be consumed and generated upon firing.
This drawback is addressed by coloured Petri nets, where a token is an
object that carries data in much the same way as variables in programming
languages. Hence, these tokens can be distinguished from each other and can
serve as both inputs and outputs of firing transitions.
Coloured Petri nets enrich classical Petri nets by extending their known
structural elements, while preserving the fundamental semantics and bipartite
graph structure of Petri nets.
Fig. 4.31: Coloured Petri net for selling tickets
This example shows a net with three transitions. Based on the outcome of
the check age transition, which takes as input a token that represents a visitor
to a museum, either a reduced or a regular entry fee has to be paid for the
visitor to become a ticket holder. The places of this coloured Petri net have
different colour sets: the place visitors carries tokens that consist of a name
and an age, whereas the places kids and grown-ups carry tokens that consist
of a name only.
When the transition fires, one binding is chosen, and tokens with the bound
values are consumed and produced. Assuming that check age fires under the
binding b2 , the variable n is bound to Jane and a to 24 during firing.
Outgoing arcs of a transition can be expressed using logical clauses that
determine which output places tokens are produced on, and how. In the
example, if the age variable a of the transition check age is less than or equal
to 12, a token that consists only of a name is produced in the place kids;
otherwise, a token is produced in the place grown-ups.
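Representing tokens as Python tuples, the firing of check age can be sketched as follows. The sketch is illustrative only: the list-based places and the function name are not part of the formalism.

```python
def fire_check_age(visitors, kids, grown_ups):
    """Fire check age once: bind a visitor token (n, a) and produce a
    token (n,) on kids if a <= 12, otherwise on grown-ups."""
    n, a = visitors.pop()         # binding chosen from the input place
    if a <= 12:
        kids.append((n,))         # arc expression: if a <= 12 then (n)
    else:
        grown_ups.append((n,))    # otherwise (n) goes to grown-ups

visitors = [("Jane", 24)]
kids, grown_ups = [], []
fire_check_age(visitors, kids, grown_ups)
print(grown_ups)    # [('Jane',)]
```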
Fig. 4.32: Coloured Petri net for selling tickets after the transition check age
is fired
Figure 4.32 shows the state of the coloured Petri net after check age has
fired under the binding b2 . According to the arc expression (n, a), the token
(Jane, 24) is removed from the input place and a new token (Jane) is produced
on the grown-ups place, as determined by the arc expression.
In the next step, the net might choose to check the age of another visitor or
proceed with the transition pay regular price. In the latter case, a new token
is produced on the place ticket holders, which represents Jane with the ticket
that she has bought.
We have not yet addressed transition guards in coloured Petri nets. There-
fore, we introduce a second example, which also shows how these nets can
be used to specify not only processes but also interconnected networks of
processes on a detailed level.
Consider a warehouse of a bike manufacturer, which consists of a central
storage place, i.e., an inventory, and an inbound dock, where back-ordered
items are delivered to. The warehouse also has an outbound dock, where
items are requested and handed out. This is shown in Fig. 4.33, where places
represent these storage areas. New items are collected from the inbound dock
and taken to the inventory, to be transferred to the outbound dock if requested.
The places inbound dock, inventory, and outbound dock have the same
colour set (prod: string, num: integer). A token on any of these places denotes
that num items of the product prod reside on that place.
For every token on the place inbound dock, there exists a binding to the
transition add that leads to the token’s consumption and the production of
Fig. 4.33: Coloured Petri net of the warehouse, with places inbound dock, inventory, and outbound dock, and transitions add, remove, request, and consolidate
an identical token in the inventory. Hence, this transition merely moves the
token from one place to another.
When an item is requested in the outbound dock, a token that carries the
name req of the required product is placed on the request place. If the place
inventory carries at least one token, then there exists a binding which maps
token values to transition variables for the transition remove. However, this
transition may not be able to fire, because of its guard.
For instance, it may be that a token (brake) is on the place request and
the only token in the place inventory is (frame, 7 ). A frame should not be
handed out if a brake was requested. Hence, the guard of the transition remove
ensures that more than 0 items of prod are in the inventory, and that the item
taken from the inventory is of the same product type as the requested item,
by means of req = prod.
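The interplay of bindings and guards for the remove transition can be sketched as follows. This is an illustrative sketch: tokens are tuples, and a binding is shown as a dictionary.

```python
def bindings_for_remove(inventory, requests):
    """Enumerate candidate bindings for the remove transition and keep
    only those satisfying the guard [num > 0, req = prod]."""
    result = []
    for (prod, num) in inventory:
        for (req,) in requests:
            if num > 0 and req == prod:   # the transition guard
                result.append({"prod": prod, "num": num, "req": req})
    return result

inventory = [("frame", 7), ("brake", 12)]
requests = [("brake",)]
print(bindings_for_remove(inventory, requests))
# only the binding of the token (brake, 12) satisfies the guard
```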
In this case, the binding is important, because only the variables of a
transition can be used in the guard. If we assume additionally that a token
(brake, 12) was in the place inventory, two bindings would exist for the remove
transition: one that binds the token (frame, 7 ) to the variables prod and num,
and one that binds the token (brake, 12 ). Only the latter satisfies the guard,
since req = prod holds for it.
Tokens also accumulate on the place inventory when several items of the same
product type are delivered to the warehouse. Assume that the place inventory carries
a token (brake, 5), stating that five brakes are in stock. Additionally, four
new brakes are added to the stock by placing a token (brake, 4) on the place
inbound dock and firing the transition add, which moves this token to the place
inventory, which now contains two tokens for brakes.
From a theoretical perspective, this is not problematic, as places can carry any
number of tokens. However, from a practical point of view this is undesirable,
because the transition remove does not delete these tokens when the supply is
emptied for a given product type, but rather returns a token with num = 0 to
the place inventory, when it consumes a token with num = 1. As a result, as
the net operates, more and more tokens accumulate on the place inventory.
This shortcoming is mended by the consolidate transition, which consumes
two tokens of the same product type and merges them into one token in the
inventory. This is possible because of the way bindings work in coloured Petri
nets.
Recall that the data of a token is stored as a tuple. A token does not
include variable names, but consists only of a tuple of values. Arc expressions
read these tuples and assign them to variables of a transition, which yields a
binding. In this way, it is also possible that more than one token is consumed
or produced on a place as a result of an arc expression in a coloured Petri net.
The arc expression (prod 1 , num 1 ), (prod 2 , num 2 ) denotes that two tokens
are required for the binding of transition consolidate. In our example, with
two tokens for the brake product type – (brake, 5) and (brake, 4) – in the place
inventory, the binding of these two tokens to the variables of the transition is
as follows:
b(prod 1 ) = brake,
b(num 1 ) = 5,
b(prod 2 ) = brake,
b(num 2 ) = 4.
The transition consolidate can fire because the guard prod 1 = prod 2 is
satisfied for this binding. If there were other tokens with different product
types in the place inventory, bindings with different values for prod 1 and prod 2
would exist as well, but these would not satisfy the guard.
Upon firing, consolidate consumes these two tokens for the same product
type, removes them from the inventory, and produces only one token as a
result on the place inventory, for which it sums the numbers num 1 and num 2 .
In our example, a new token (brake, 9) is produced in the inventory.
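The firing of consolidate can be sketched accordingly. In this illustrative sketch, a place is simply a list of tuple tokens:

```python
def consolidate(inventory):
    """Fire consolidate once: consume two tokens with prod1 = prod2 and
    produce a single token with the summed count; return the inventory
    unchanged if no binding satisfies the guard."""
    for i, (p1, n1) in enumerate(inventory):
        for j, (p2, n2) in enumerate(inventory):
            if i < j and p1 == p2:          # guard prod1 = prod2
                rest = [t for k, t in enumerate(inventory)
                        if k not in (i, j)]
                return rest + [(p1, n1 + n2)]
    return inventory

print(consolidate([("brake", 5), ("brake", 4), ("frame", 7)]))
# [('frame', 7), ('brake', 9)]
```

Repeated firing of this transition keeps at most one token per product type on the place inventory.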
This can be executed repeatedly, whenever there exist at least two tokens
for the same product type. Looking at the coloured Petri net in Fig. 4.33, we
see highly concurrent behaviour as, at the same time, new items can be added
to and removed from the warehouse. Simultaneously, the consolidation process
makes sure that the warehouse is always being tidied up.
In the next chapter, coloured Petri nets are used to define the semantics of
workflow control flow patterns, which are the conceptual basis of the control
flow constructs used in behavioural models describing business processes.
Business Process Models
Arguably, business process modelling is the most widely used form of modelling
effort in today’s organisations. The key role of business process models is to
capture the essence of how work is organised and conducted in business and
administrative organisations. From a formal perspective, a business process
model represents the behaviour that an organisation can perform. These models
are used as blueprints for designing working procedures for human workers.
They are also crucial in the development of software systems that support
business processes, completely or in part. This chapter takes two perspectives
on the modelling of business processes by looking at workflow control flow
patterns and an industry-standard business process modelling language.
Workflow patterns were introduced to provide a language-independent char-
acterisation of several different aspects of business process models, including
data, resources, exceptions, and control flow structures. Since the behaviour
of systems is the central theme of this book, we shall cover only control
flow patterns. These patterns represent recurring control flow structures that
are found in business process models and in languages designed to express
these models. Section 5.1 introduces the most important control flow patterns.
Coloured Petri nets are used to specify their behavioural semantics.
In Section 5.2, some key concepts in business process modelling are intro-
duced, before the most important elements of the industry-standard Business
Process Model and Notation, BPMN, are covered. We start by looking at busi-
ness process diagrams, and discuss their elements and the execution semantics
of these elements. Business process diagrams concentrate on single organisations
and their processes. Since concurrency can be found when business processes
interact, this section also covers collaboration diagrams. Finally, a mapping
from business process diagrams to Petri nets will be introduced, which serves
as a basis for a formal analysis of business process models.
Fig. 5.1: Sequence pattern expressed as a coloured Petri net
The sequence pattern is shown in Fig. 5.1 as a coloured Petri net. The
firing of the transition A puts a token on the place p1 , which enables B. Hence,
the transition B can only fire after A has fired, realising the sequence pattern.
Fig. 5.2: Parallel split pattern expressed as a coloured Petri net
• After a credit request has been received (activity A), the legal status of
the applicant can be checked (B) concurrently with checking his or her
financial situation (C).
• After receiving an order, an online store might choose to start packaging
the ordered goods (B) while processing the credit card transaction (C).
• After accepting an offer to organise a workshop (A), we can select a venue
(B) while inviting the keynote speakers for the workshop (C).
The name of this pattern might be misleading, since the pattern does not
enforce parallel execution in the sense that activities are executed at the same
time, parallel in time. Rather, the word “parallel” represents the concurrent
execution of activities, which was discussed in Chapter 4.
Fig. 5.4: Synchronisation pattern expressed as a coloured Petri net, with tokens for several process instances
Using coloured Petri nets allows the processing of several process instances
using a single net. To illustrate this observation, we consider a situation in
which there are tokens for several process instances in the coloured Petri net
shown in Fig. 5.4. These tokens have different case identifiers. In the figure,
tokens with different case identifiers are visualised using different colours. The
process instance with the identifier c = 1 (light grey) is already completed, as
represented by the token on the final place.
Cases 2 (grey) and 3 (black) are still active. The transition C is enabled for
the binding b(c) = 3, because there is one token for case 3 on both of its input
places. In contrast, C is not enabled for the binding b (c) = 2, since there is no
token satisfying this binding on p2 . When C fires under the binding b(c) = 3,
the tokens with value 3 are removed from p1 and p2 , and a token with value 3
is put on o1 , completing this process instance.
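The role of case identifiers in the synchronisation can be sketched as follows. This illustrative sketch uses plain integers as case-identifier tokens:

```python
def enabled_cases(p1, p2):
    """Case identifiers for which the synchronising transition C is
    enabled: a token with that identifier must lie on both input places."""
    return sorted(set(p1) & set(p2))

p1 = [2, 3]   # tokens for cases 2 and 3 on the first input place
p2 = [3]      # only case 3 has completed the other branch
print(enabled_cases(p1, p2))   # [3] -> C is enabled only for case 3
```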
Fig. 5.5: Exclusive choice pattern expressed as a coloured Petri net
Fig. 5.6: Simple merge pattern expressed as a coloured Petri net
The multi-choice pattern allows different conditions on each of the arc expressions originating from the
transition A, as shown in Fig. 5.7. This means that any subset of the places in
the postset of A can receive a token. It is the responsibility of the modeller to
ensure that the chosen subset is never empty, i.e., that for each process instance
least one branch is chosen. The multi-choice pattern also subsumes the parallel
split pattern, since all of the conditions of the former might evaluate to true.
The multi-choice pattern is also known as the inclusive-or split.
Fig. 5.7: Multi-choice pattern expressed as a coloured Petri net
Fig. 5.8: Structured synchronising merge pattern expressed as a coloured Petri net
The synchronising merge pattern, also known as the inclusive-or join, merges
the branches that were activated by a multi-choice, waiting for each active
branch to complete. The pattern shown here covers only the structured case;
for the general case of deciding the inclusive-or join, the reader is referred to
the bibliographical notes at the end of this chapter.
There are several patterns that relate to multiple instances of a given activity
in a process model. These patterns differ with respect to the synchronisation
of all instances after completion and with respect to the point in time at which
the number of instances is determined.
Consider a process with activities A, B, and C, which are executed se-
quentially. For each case there is one instance of A and one instance of C, but
several instances of activity B. This situation can be encountered, for instance,
in a procurement scenario, where an order consists of a number of line items,
and where each line item represents a product ordered. After the order has
been accepted (one instance of A), we check the availability of each line item
(several instances of B, one for each product ordered), before packaging the
goods (one instance of C).
The synchronisation here refers to the fact that C synchronises all instances
of the multiple-instance activity B. In this example, the synchronisation is
required because the goods cannot be packaged before the availability of
each product has been checked. The absence of synchronisation of multiple
instances would allow C to start before the termination of the multiple instances
activities, which occurs rarely in practice.
Regarding the point in time when the number of instances is determined,
there are three alternatives.
Fig. 5.10: Pattern of multiple instances with a priori design time knowledge
expressed as a coloured Petri net
Fig. 5.11: Deferred choice pattern expressed as a coloured Petri net
The deferred choice pattern defers the decision between alternative branches
to the environment: the branch taken is determined by the first event that
occurs, rather than by an explicit decision in the process. The execution
semantics of the deferred choice pattern is shown in Fig. 5.11.
After A has been executed, a token is put on p1 , which enables both B and
C. In this state, either of transitions B and C can fire, i.e., either event can
happen. In the credit request example, A refers to the sending of the request
from the bank to the client, B refers to receiving the response, and C represents
a time-out event, which occurs after the defined time span has expired. Note
that the two transitions are in conflict: if one transition fires, its consumption
of the token on p1 disables the other transition.
The cancel case pattern is used to stop the execution of a process in any
state that the process can be in. Cancelling a case is conceptually challenging,
because the process can be in any state when cancellation occurs. To realise
this pattern in a traditional Petri net without reset arcs or inhibitor arcs,
one dedicated transition is required for each state the process can be in that
removes tokens from the respective places. For instance, a transition t might
be responsible for cancelling the process in the state [p1 , p3 , p4 ], if that state
can be reached by the process.
Since the resulting Petri net would be very complex, we use a coloured
Petri net with reset arcs and inhibitor arcs (see subsection 4.3.3), to represent
the cancel case pattern, as shown in Fig. 5.12. First of all, one token is put
on each of the places p1 and p2 , where the token on p1 is used to start the
case. The place p2 is used to indicate that the process is active. Notice that
the case cannot be cancelled in the state [p1 , p2 ], because of the token on p1 ,
which inhibits firing of the Cancel case transition.
When the case starts, the token on p1 is removed by firing the first transition
T of the case. During the case’s execution, it can be cancelled by firing the
transition Cancel case. When this happens, all tokens in the case are removed,
as defined by the cancel region of the reset arc. Also, the token on p2 is
consumed. When the Cancel case transition fires, the process reaches the final
state [o].
Fig. 5.12: Cancel case pattern expressed as a coloured Petri net with inhibitor arcs and a reset arc
If the case is not cancelled and terminates normally, a token is put on the
place p3 . Thereby, the Cancel case transition becomes inhibited and cannot fire.
This is important, since cancelling a completed case needs to be prevented. The
complete process ends with firing of the End case transition, which removes
tokens from p2 and p3 , reaching the final state.
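The behaviour of the cancel case pattern can be sketched as follows. The sketch is illustrative: markings are dictionaries, and the cancel region is given as a list of place names.

```python
def cancel_case(marking, region):
    """Fire Cancel case if possible: the transition is inhibited by tokens
    on p1 (case not yet started) and p3 (case completed), consumes the
    token on p2, clears the cancel region, and marks the final place o."""
    if marking.get("p1", 0) > 0 or marking.get("p3", 0) > 0:
        return None                    # inhibitor arcs block the firing
    if marking.get("p2", 0) < 1:
        return None                    # no active case to cancel
    m = dict(marking)
    for p in region:                   # reset region: remove all tokens
        m[p] = 0
    m["p2"] -= 1
    m["o"] = m.get("o", 0) + 1         # reach the final state [o]
    return m

# A running case with two tokens on an internal place t of the region:
running = {"p1": 0, "p2": 1, "p3": 0, "t": 2, "o": 0}
print(cancel_case(running, region=["t"]))
# {'p1': 0, 'p2': 0, 'p3': 0, 't': 0, 'o': 1}
```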
Fig. 5.13: Business process diagram of an insurance claim handling process
The process starts with receiving a claim. This is represented by the start
event in the business process diagram. Whenever such an event occurs, i.e.,
whenever a claim is received, a new process instance is created. The first
activity to be executed is registering the claim, followed by an activity to
decide on the coverage of the claim. The process diagram shows that the
process can continue in two different ways. The actual alternative chosen
depends on the decision taken. In this example, either a letter of approval is
prepared or a letter of rejection is prepared. The process has two alternative
end events, representing its two possible outcomes.
Since business process diagrams represent activities that are performed and
events that occur, as well as causal relationships between them, business process
diagrams are behavioural models. The originals represented by business process
diagrams are concrete instances of processes performed by organisations, such
as the processing of a particular insurance claim: for instance, a claim for the
theft of a road bike of type Complexon Bravo belonging to a client Martin
Wilder with identification number 1123221 on 19 February 2015.
We shall use this example to argue that business process diagrams satisfy the
model characteristics introduced by Stachowiak.
Fig. 5.14: State transition diagram describing the lifecycle of an activity
When an activity is created, it enters the state init. In this state, the
activity is not yet ready for execution. By means of the enable state transition,
the activity enters the state ready. Activities that are in this state can enter
the state running by means of the begin state transition. Finally, there is a
state transition from running to terminated, indicating the completion of the
activity.
We shall use the insurance claim example shown in Fig. 5.13 to illustrate
the states and state transitions of activities. When a claim is received, activi-
ties for the corresponding process instance are created and initialised. Since,
immediately after receiving the claim, only the Register claim activity can
be executed, only this activity is in the state ready. Maintaining the state of
activities is important, since users can start only those activities that are in
the state ready.
If a particular activity is not required in a process instance, then the activity
can be skipped. This is represented by a skip state transition from the not
started complex state to skipped. When an activity is terminated or skipped, it
is closed.
The skipping of activities happens in the sample process, if the claim is
accepted and a letter of approval is prepared. In this situation, no letter of
rejection has to be prepared. Consequently, the respective activity enters the
state skipped and, thereby, is closed.
State transition diagrams are useful for capturing the behaviour of indi-
vidual activities in a business process. To capture the behaviour of complete
business processes, dedicated process modelling languages are used, the most
prominent of which is introduced next.

Business Process Model and Notation
This section sketches the Business Process Model and Notation, or BPMN.
BPMN has been the industry standard in business process modelling for
several years. This language provides a rich set of language elements, but many
business processes can be captured by using just a few of them.
Rather than introducing the complete set of language elements, this section
focuses on the most important ones, such as activities, events, gateways, and
sequence flow. We concentrate on the sequential and concurrent behaviour
that can be specified in business process diagrams. After introducing the basic
concepts of the language, we define the behaviour of business process diagrams
by providing a mapping to Petri nets.
Fig. 5.15: Simple reviewing process expressed as a business process diagram
When the execution of this process is investigated in detail, we see that the
first thing that happens is the receipt of the paper, which is represented by
the start event Paper received. When this event happens, the Prepare review
activity enters the state ready, as depicted in Fig. 5.14. We can map this
behaviour to a Petri net as follows.
Fig. 5.16: Mapping of a BPMN start event to a Petri net: (a) start event; (b) the event has not occurred; (c) the event has (just) occurred

An activity is mapped to a Petri net fragment with two transitions, one
representing the begin of the activity and one its termination. When the begin
transition fires, a token is moved from the ready place to the running place;
when the termination transition fires, the token is removed from the running
place and a token is put on the terminated place. Thereby, the activity lifecycle
introduced in Fig. 5.14 is mapped to a Petri net, abstracting away skipping of
process activities.
Fig. 5.17: Mapping of a process activity to a Petri net, taking into account the
activity lifecycle
To map the business process shown in Fig. 5.15 to a Petri net, events and
activities have to be mapped accordingly. In the case of sequential activities,
the terminated place of the first activity can be merged with the ready place
of the second one.
To illustrate this merging of places, consider activities A and B that are
executed sequentially. The terminated place of activity A can be merged with
the ready place of activity B. This captures the semantics of BPMN, since,
after A terminates, B immediately enters the state ready. Along the same lines,
the place representing a start event can be merged with the ready place of
the first activity, and the termination place of the last activity can be merged
with the place representing the end event.
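The effect of merging places can be sketched with two sequential activities A and B in the simplified mapping. The sketch is illustrative, with made-up place names, and abstracts away the running state:

```python
# The terminated place of A and the ready place of B are one place, p.
net = {"A": ("i", "p"),   # i: start event place, p: terminated(A) = ready(B)
       "B": ("p", "o")}   # o: end event place

def fire(marking, t):
    """Fire transition t: move one token from its input to its output."""
    src, dst = net[t]
    assert marking.get(src, 0) >= 1, f"{t} is not ready"
    m = dict(marking)
    m[src] -= 1
    m[dst] = m.get(dst, 0) + 1
    return m

m = fire({"i": 1}, "A")   # the start event has occurred, so A may fire
m = fire(m, "B")          # B became ready the moment A terminated
print(m)                  # {'i': 0, 'p': 0, 'o': 1}
```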
Fig. 5.18: Mapping of process model shown in Fig. 5.15 to a Petri net, taking
into account the activity lifecycle
Figure 5.18 shows the corresponding Petri net. For illustration purposes, we
have marked merged places with the labels of both of the original places. Since
the labelling of places is not relevant for the execution semantics of Petri nets,
we will drop the labels of places where appropriate. The corresponding Petri
net, alongside the mapped business process diagram, is shown in Fig. 5.19.
The mapping from activities to Petri nets discussed so far takes the activity
lifecycle into account. However, for some tasks of process model analysis, such
as checking soundness, the different states of process activities are not relevant.
Therefore, a simplified mapping from process activities to Petri nets is now
introduced, which abstracts away the running state of activities.
Fig. 5.19: Business process diagram of Fig. 5.15 and the corresponding Petri net
Fig. 5.20: Mapping from a process activity to a Petri net that abstracts away
the running state
Fig. 5.21: Petri net corresponding to the business process model shown in
Fig. 5.15 with abstraction of the running state
BPMN business process diagrams are the state-of-the-art technique for mod-
elling business processes. BPMN is a rich language that provides many elements
with well-described semantics. This section does not aim at providing the com-
plete syntax and semantics of BPMN. Rather we concentrate on the most
important language elements. Examples will be used to illustrate the language
elements and their semantics.
Figure 5.22 shows a business process model expressed as a BPMN business
process diagram. The model represents the activities that are performed and
their logical ordering during a typical reviewing process in academia.
Fig. 5.22: Reviewing process in academia expressed as a BPMN business process diagram
Figs. 5.23 and 5.24: Refinements of the reviewing process shown in Fig. 5.22
Fig. 5.25: Petri net representing the behaviour of the business process diagram
shown in Fig. 5.23
Fig. 5.26: Reviewing process in which the subprocess can be cancelled by an attached event
Fig. 5.27: Petri net defining the execution semantics of the business process
diagram shown in Fig. 5.26
We can study precisely in which states of the business process this can-
cellation is possible. There are four possibilities for cancelling the subprocess,
represented by the transitions t1 to t4 in the Petri net in Fig. 5.27.
The subprocess can be cancelled when reviewing has been accepted. In this
state, the subprocess is enabled, so that the cancellation can be achieved by
firing of t1 . Cancellation can also occur while the Get Paper activity is active.
The removal of the respective token by t2 interrupts that activity and cancels
the subprocess. Cancellation can also occur after getting the paper and before
preparing the review; this is achieved by firing of the transition t3 . Finally, the
subprocess can be cancelled during preparation of the review by t4 .
Notice that the subprocess can no longer be cancelled when the review has
been prepared, because in that state the subprocess has already terminated.
Fig. 5.28: Reviewing process with a non-interrupting attached event for receiving reminders
Figure 5.29 shows a Petri net that defines the semantics of the business
process diagram shown in Fig. 5.28. The interesting part of this Petri net is
the representation of the non-interrupting attached event.
We can receive a reminder only during the execution of the subprocess. This
is represented by a place p, which contains a token only when the subprocess is
active. This can be achieved by the transition AR (Accept Reviewing) putting
a token on p on entering the subprocess and by the termination transition of
the Prepare Review activity PRt removing a token from p. As long as there
is a token on p, the transition Rerc (Reminder Received) can fire. Each firing
results in a token on q, which in turn enables the transition RR (Read
Reminder). This Petri net is shown in Fig. 5.29.
Fig. 5.29: Petri net defining the semantics of the business process diagram shown in Fig. 5.28
This Petri net can also be used to argue that the process allows concurrent
activities. If we consider the state of the Petri net shown in Fig. 5.29, we
can see that the Prepare Review activity is currently running: in the Petri
net, the transition to begin the preparation of the review has fired, while the
terminating transition has not fired. At the same time, there are two unread
reminders and one read reminder. This means that, concurrently, additional
reminders can be read and the preparation of the review can terminate.
In fact, three transitions can fire independently of each other, i.e., con-
currently: the transition to terminate the preparation of the review (PRt ),
the transition to receive an additional reminder (Rerc), and the transition to
read a reminder (RR). Even after completion of review preparation there is
concurrency in the Petri net, which then involves the transitions RR and SR.
While we can express concurrency by non-interrupting attached events,
this is not the most common way to represent concurrency in business process
models. Typically, parallel gateways are used. An example of a business process
model with parallel gateways is shown in Fig. 5.30.
Fig. 5.30: Reviewing process with parallel gateways, from the point of view of the programme committee chair
This figure depicts a reviewing process from the point of view of the
programme committee chair, who invites reviews for papers that have been
submitted to a conference. Each paper is reviewed by three reviewers. These
reviews are performed concurrently. When all three reviews have been received,
the programme committee chair decides on acceptance or rejection, based on
the reviews received. (This is a simplification of real-world reviewing processes.)
Discussing the behaviour in more detail, we see that the process starts with
receiving a submitted paper, followed by the selection of three reviewers for
that paper. The parallel split gateway determines that all activities following
that gateway are enabled concurrently. Therefore, the programme committee
chair can choose to send the requests in any order. Notice that any concurrent
execution of these activities is permitted by this process model, provided that
a review can be received only after the respective invitation has been sent.
The parallel join gateway waits for all incoming edges to be signalled, and
only then signals its outgoing edge. In this case, the decision on acceptance
can only be made after all three reviews have been received.
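The token-game semantics behind such a join can be sketched in a few lines of Python (an illustrative sketch under invented place names, not part of the book):

```python
# A marking assigns tokens to places; a transition is enabled iff every
# input place holds a token. Place names below are invented for illustration.

def enabled(marking, inputs):
    """True iff each input place carries at least one token."""
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    """Consume one token from every input place, produce one on every output place."""
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# The parallel join before the acceptance decision has three input places,
# one per review; it can fire only when all three reviews are in:
join_inputs = ["review1_received", "review2_received", "review3_received"]
m = {"review1_received": 1, "review2_received": 1, "review3_received": 0}
assert not enabled(m, join_inputs)       # one review is still missing
m["review3_received"] = 1
assert enabled(m, join_inputs)           # all reviews received: the join may fire
m = fire(m, join_inputs, ["ready_to_decide"])
```

The same two functions also capture a parallel split: a transition with one input place and several output places.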
152 5 Business Process Models
Fig. 5.31: Petri net representing the behaviour of the business process diagram
shown in Fig. 5.30
The Petri net representing the behaviour of this business process diagram
is shown in Fig. 5.31. The concurrent execution of inviting the reviewers and
waiting for their reviews is illustrated in this figure. In the state shown, the
request has been sent to reviewer 1, but the review has not been received yet.
The request to reviewer 2 has not been sent, while reviewer 3 has already
submitted their review to the programme committee chair.
The term “parallel gateway” is indeed somewhat misleading. This control
flow structure does not prescribe that the next activities have to be done in
parallel, meaning at the same time. Any concurrent interleaving is permitted.
In the context of BPMN – in fact, in most languages for behavioural modelling –
the word parallel can be translated as concurrent.
BPMN provides an additional way to express concurrency. This is based
on so-called multiple-instances activities. When a multiple-instances activity
is enabled, several instances are started. Depending on the type of multiple-
instances activity – both sequential and parallel types are possible, represented
by different markers – these activities are performed sequentially or concur-
rently.
Figure 5.32 shows a parallel multiple-instances subprocess with the respec-
tive marker. Since sending a request and receiving a review belong together,
these tasks are combined into a subprocess. Assuming that the number of
instances of the multiple-instances subprocess is three, the behaviours repre-
sented by the business process diagrams shown in Fig. 5.30 and Fig. 5.32 are
equivalent. This means that the Petri net shown in Fig. 5.31 represents the
execution semantics of the reviewing process involving this multiple-instances
subprocess as well.
The language elements introduced so far and the examples to illustrate these
have centred around individual business processes, i.e., business processes
that are performed by a single organisation. When we investigate the paper-
reviewing process in detail, however, we find activities of sending and receiving
messages. In this section, the communication between business processes will be discussed.
5.3 Business Process Model and Notation 153
[Fig. 5.33: Collaboration diagram of the PC chair and a reviewer. The PC chair sends a reviewing request (SR) and the paper (SP), then either receives the review (RR) or a cancellation (Cr); the reviewer decides on reviewing (DR) and either prepares and sends the review (Rs) or sends a cancellation (SC)]
Now that we have discussed the interaction between the business processes
in some detail, we will discuss the concurrent behaviour represented here. We
observe that the two processes are, at least in part, executed concurrently. This
is due to the fact that the sending of a message represents a behaviour similar
to a parallel split: it opens a new thread of control in the business process of
the communication partner. On the other hand, a receive event corresponds to
a parallel join, since a receive activity needs to wait for an incoming message.
Notice that the collaboration diagram represents concurrent behaviour
even though all business processes involved are sequential. The concurrency is
introduced only by communication between the parties.
Fig. 5.34: Mapping of send and receive activities and send and receive events to a Petri net
Fig. 5.35: Petri net representing the behaviour of the collaboration diagram shown in Fig. 5.33
can fire only after the reviewer has decided to either perform the review (AR) or send a cancellation (SC).
This example shows how an event-based gateway can be used to express
the deferred choice pattern. It also shows that the Petri net structure that
represents the deferred choice pattern can be found in the mapped Petri net
in Fig. 5.35.
The previous version of the reviewing process contains concurrency only
through the interaction between the business processes. The next example
provides a more complete picture of a real-world reviewing process. It looks
at the end-to-end process, including the author, the programme committee
chairperson, and the reviewers. We call it an end-to-end process because
it covers the interaction starting from the author sending the paper to the
programme committee chair until the receipt of a notification message by the
author.
It is a typical situation that a client (in this case the author) starts an
interaction by submitting a request (in this case a paper) and completes
the interaction by receiving a response (the notification). In this sense, this
example can serve as a blueprint for business processes such as the handling
of an insurance claim or of an application for a loan in the banking sector.
The collaboration diagram involving the business processes of the author,
the programme committee chair and the reviewers is shown in Fig. 5.36. The
collaboration starts when the author submits a paper to the programme
Fig. 5.36: Collaboration diagram involving the author, the programme committee chairperson, and two reviewers
committee chairperson. After sending the paper, the author waits to receive a
notification, which is realised by an event-based gateway.
The chairperson decides on the reviewers and sends reviewing requests to
two reviewers concurrently. To keep the example reasonably small, we assume
that the reviewers start preparing their review as soon as they have received
the message. After the chairperson has received both reviews, he or she decides
whether to accept the paper, and sends the appropriate message to the author.
On receiving this message, the corresponding event following the event-based
gateway fires.
Fig. 5.37: Petri net of the collaboration diagram shown in Fig. 5.36. The boxes indicating the parts of the Petri net that map the respective participants have been added for the purpose of illustration
In this Petri net, regions are used to demarcate the parts of the net that map to the various participants.
The process starts with the author submitting a paper to the programme
committee chairperson. Upon receiving the paper, the chairperson opens two
concurrent branches, one for each reviewer invited. When Reviewer 1 submits a
review, the transition RR1 can fire. When RR2 has also fired, the chairperson
can decide whether to accept or reject the paper. The corresponding message
is sent by the transitions SR and SA, respectively. The deferred choice of the
author process reflects him or her waiting for a notification message. When
that message is received, the collaboration concludes.
Bibliographical Notes
With workflow patterns, van der Aalst et al. (2003) introduced an important
basis for business process modelling languages. A revised version was provided
by Russell et al. (2006). That version includes a richer set of patterns and a
formalisation based on coloured Petri nets.
The inclusive-or join has the reputation of being the most complex workflow
control flow pattern. This was highlighted by Kindler (2006), where the term
vicious circle was used to illustrate the complexity of the problem. Replacing an
inclusive-or join was addressed by Favre and Völzer (2012), while the execution
semantics of the inclusive-or join in the context of BPMN was covered by
Gfeller et al. (2011).
There are textbooks on business process management that provide a broader
perspective on the matter. Weske (2012) introduces the concepts of business
process management and provides a description of the most important business
process modelling languages, including BPMN and its various types of diagrams.
Organisational aspects of business processes are covered in the book by Dumas
et al. (2013), which also looks at process identification, discovery, and redesign.
The BPMN standards document is available by the Object Management
Group (2011). A mapping of BPMN to Petri nets was introduced by Dijkman
et al. (2008). zur Muehlen and Recker (2008), and Kunze et al. (2011a) have
investigated the occurrence of control flow structures in collections of process
models. These results inspired the selection of workflow control flow patterns
presented in this chapter.
Part III
Analysis of Behaviour
6 State Spaces
[Figure: finite automaton of the ticket vending machine, with the states Ticket selection, 0 € paid, 0.5 € paid, 1 € paid, 1.5 € paid, Ticket supplied, and Ticket cancelled, and the inputs select ticket, 50 ct, 1 €, confirm, and cancel]
will be explored in detail in Chapter 8, the basic idea behind this important
concept is introduced in this section because it is based on state spaces of
dynamic systems.
Fig. 6.2: Variant of the ticket vending machine, including a state that cannot be reached
In the ticket vending machine shown in Fig. 6.2, one state and one state
transition have been added. The state Free tickets has a state transition to the
state Ticket supplied, which means that if the system is in the former state,
a ticket can be supplied without payment. This variant of the ticket vending
machine serves an illustrative purpose, but it might also be introduced by a
modelling error.
It might seem that the state Free tickets is harmful, because free tickets
might be provided by the system. However, when we investigate the behaviour
of the ticket vending machine in more detail, it turns out that this is not
the case. Despite the new state and the state transition to the state Ticket
supplied, the vending machine cannot dispense a ticket without payment. This
is due to the fact that the state Free tickets cannot be reached from the initial
state. There is no sequence of inputs that allows the automaton to reach the
state Free tickets.
This example shows a very simple case in which not all states of a system
can actually be reached. The concept behind this example is that of reachability
of states. When analysing dynamic systems, it is essential to investigate the
states that are reachable and to analyse their properties.
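Reachability can be computed by a simple breadth-first search over the transition relation. The sketch below encodes a simplified version of the machine in Fig. 6.2 (state and input names are abbreviations, not the book's exact labels):

```python
from collections import deque

# Reachability by breadth-first search over the transition relation.
# The automaton is a simplified sketch of the machine in Fig. 6.2;
# state and input names are abbreviations, not the book's exact labels.
transitions = {
    ("selection", "select_ticket"): "0_paid",
    ("0_paid", "50ct"): "0.5_paid",
    ("0.5_paid", "50ct"): "1_paid",
    ("1_paid", "50ct"): "1.5_paid",
    ("1.5_paid", "confirm"): "supplied",
    ("0_paid", "cancel"): "cancelled",
    # The modelling error of Fig. 6.2: a transition leaves Free tickets,
    # but no transition leads *to* it.
    ("free_tickets", "confirm"): "supplied",
}

def reachable(initial, transitions):
    """All states reachable from `initial` via the transition relation."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for (src, _inp), tgt in transitions.items():
            if src == state and tgt not in seen:
                seen.add(tgt)
                frontier.append(tgt)
    return seen

states = reachable("selection", transitions)
assert "free_tickets" not in states   # the suspicious state is unreachable
assert "supplied" in states           # a ticket can still be supplied, after payment
```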
The ticket vending machine shown in Fig. 6.2 has one state that is not
reachable, namely, the state Free tickets. All other states are reachable from
the initial state. Since the behavioural model is represented by an automaton
without extensions, the model and its state space are equivalent. As one state is
not reachable, the set of reachable states consists of all states of the automaton,
except for the state Free tickets.
To summarise, the state space of a dynamic system is characterised by its states and the state transitions between them. To analyse a dynamic system, it is important to consider the set of reachable states, i.e., all states that can be reached from the initial state. State transitions represent operations performed by the system, which typically change the system state.
As introduced in Section 3.2, Moore automata are finite automata that can
generate output when states are reached. To discuss the state spaces of Moore
automata, consider Fig. 6.3, which shows the state space of the Moore automa-
ton for the ticket vending machine introduced in Fig. 3.5. The ticket vending
machine allows one to select a ticket and pay for it with 50 ct, 1 €, and 2 € coins. Recall that Moore automata provide the output directly in a state. The output of this Moore automaton announces the ticket selection, returns the change, and supplies the printed ticket.
For finite automata without any output, each state of the state space can be
specified just by the label of the respective state in the finite automaton. Since
Moore automata generate output, however, a variable needs to be introduced
to represent the current value of the output, which contributes to the state of
the system.
By abstracting away the state labels of the automaton, a state in the state
space of a Moore automaton can be represented by the current value of the
output variable. In order to capture the values of variables, the concept of
a valuation is introduced. Even though Moore automata have only a single
variable, we shall introduce the general case where a system has an arbitrary
number of variables, which will be required when extended automata are
discussed later in this chapter.
[Fig. 6.3: State space of the Moore automaton of the ticket vending machine; each state is given by a valuation of the output variable v, e.g., v = selection, v = ⊥, v = 50 ct, and v = ticket]
This definition is illustrated by the sample state space of the Moore au-
tomaton introduced in Fig. 6.3. Each state in the state space is a valuation
that assigns values to the variable v.
The initial state of the Moore automaton does not have any output; therefore, this state is represented by s0(v) = ⊥. In the next state of the automaton, an output is generated, resulting in s1(v) = selection. This state represents the fact that, on entering the 0 € paid state, the automaton generates the selection as output. When a 2 € coin is inserted into the automaton, it enters the 2 € paid state, outputting 50 cents. Therefore, s2(v) = 50 ct.
It is worth mentioning that the state space of a Moore automaton has the same
structure as the Moore automaton itself. That is, each state of the automaton
is represented by exactly one state in the state space. This is so because
every output of a Moore automaton belongs to exactly one state, so that no
additional states are required to capture the output with variables.
Mealy automata are also able to generate output. In contrast to Moore au-
tomata, Mealy automata associate this output with state transitions rather
than with states, as introduced in Section 3.2.
This property has implications for the representation of the state space
of a Mealy automaton. This is due to the fact that the value of the output
variable v is not related directly to states, but rather to state transitions. The
automaton can enter a state by traversing different transitions, which might
generate different output values for the same state. Hence, for a given state of
the state space, the output variable would assume different values.
Consider a state s of a Mealy automaton, and state transitions (s′, a, s) and (s′′, b, s) that both lead to s. Then, the automaton can reach the state s with different values a and b of the output variable. Since the state space of a dynamic system is based on the values of variables, these situations have to be distinguished. As a result, the state space contains two states s1, s2, with s1(v) = a and s2(v) = b.
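This splitting of states can be reproduced mechanically. The sketch below (with invented state, input, and output names) derives the state space of a tiny Mealy automaton in which two transitions with different outputs lead to the same control state:

```python
# Mealy transition relation: (source, input) -> (target, output).
# The names s1, s2, a, b, x, y are illustrative only.
delta = {
    ("s1", "a"): ("s2", "x"),
    ("s1", "b"): ("s2", "y"),   # same target state, different output
}

def mealy_state_space(initial, delta):
    """A state-space state is a pair (control state, output value); the
    initial state has no output yet, represented here by None (i.e., ⊥)."""
    start = (initial, None)
    seen, frontier = {start}, [start]
    while frontier:
        ctrl, _ = frontier.pop()
        for (src, _label), (tgt, out) in delta.items():
            if src == ctrl and (tgt, out) not in seen:
                seen.add((tgt, out))
                frontier.append((tgt, out))
    return seen

space = mealy_state_space("s1", delta)
# The control state s2 is represented by two state-space states,
# one per output value reaching it:
assert ("s2", "x") in space and ("s2", "y") in space
assert len(space) == 3
```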
To illustrate these concepts, we now investigate the Mealy automaton
shown in Fig. 6.4. This automaton describes the same ticket vending machine
from the previous section, but this time using outputs as part of the state
transitions. In general, this leads to more compact automata, as we have discussed in Subsection 3.2.2.
The state space of the Mealy automaton is shown in Fig. 6.5. When we
investigate the 1.5 € paid state of the Mealy automaton, it turns out that the
[Fig. 6.4: Mealy automaton of the ticket vending machine; outputs are attached to the state transitions, e.g., select ticket / selection, 2 € / 50 ct, 1 € / 50 ct, and confirm / ticket]
state space has three corresponding states. These states differ in the value of the output variable, which assumes the values ⊥ (representing no output), 50 ct, and 1 €, respectively.
In general, for each state transition with a different output value which
leads to a state s in a Mealy automaton, a separate state is required in the
automaton’s state space. Based on these considerations, the state space of a
Mealy automaton is defined as follows.
Fig. 6.5: State space of the Mealy automaton shown in Fig. 3.6
• A set of states S = {s | s : V → dom(V)}, such that for each state transition (s′, l, s′′) ∈ δm in the Mealy automaton with output ω, i.e., (s′, l, ω) ∈ λ, there is a state s in the state space such that s(v) = ω. We say that s corresponds to s′′. If s′′ is a final state in the Mealy automaton, then s is a final state in its state space.
Since the initial state s0m of the Mealy automaton does not have any incoming arcs, the corresponding state s0 in the state space has no output, which is represented by s0(v) = ⊥.
There is a transition between the states s and s′ in the state space, i.e., (s, s′) ∈ δ, iff there is a transition between the states corresponding to s and s′ in the Mealy automaton.
Extended automata, which were introduced in Section 3.3, augment the capa-
bilities of finite automata with variables, conditions, and assignments. They are
an extension of both Moore and Mealy automata, because they can have many
variables and output can be associated with both states and state transitions.
Values of variables are assigned as part of state transitions and allow
complex computations. If an extended automaton is in a particular state, its variables may have values that depend not only on the state, but also on the execution sequence that has led to that state.
An example of an extended automaton is depicted in Fig. 6.6. It has
variables i, j, each of which can assume values from a finite set of natural
numbers, i.e., i ∈ {0, 1} and j ∈ {0, 1, 2}. The state transition triggered by the
action A updates the value of i by adding 3 and computing the remainder of
the division of i by 2. The latter operation is called “modulo” and denoted by
mod. For instance, 1 mod 2 = 1, 2 mod 2 = 0, and 3 mod 2 = 1.
As a result of this computation, i has a value of either 0 or 1. The state
transition B updates the variable j in a similar manner. Only if the values of
the variables i and j are equal in the state s2 can the automaton terminate,
which is triggered by the action C .
Fig. 6.6: Extended automaton with variables i and j and transitions with
assignments
Table 6.1: States resulting from an execution sequence of the automaton shown in Fig. 6.6

σ:          A   B   A   B   A   B   A   B   A   B   A   B
State:  S1  S2  S1  S2  S1  S2  S1  S2  S1  S2  S1  S2  S1
i:       0   1   1   0   0   1   1   0   0   1   1   0   0
j:       0   0   2   2   1   1   0   0   2   2   1   1   0
This table illustrates that the values of the variables i and j do not coincide
with the states s1 and s2 , but depend on the execution sequence that has led
to the state. For instance, the first time s2 is visited, i = 1 and j = 0, whereas
the second time s2 is visited, i = 0 and j = 2.
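The table can be reproduced by simulating the two assignments (a small sketch that replays the sequence σ):

```python
# Replay of the execution sequence of Table 6.1 on the automaton of Fig. 6.6:
# action A sets i := (i + 3) mod 2, action B sets j := (j + 2) mod 3.
i, j = 0, 0
visits_of_s2 = []                  # variable values observed on entering S2
for action in "ABABABABABAB":
    if action == "A":
        i = (i + 3) % 2            # A moves the automaton from S1 to S2
        visits_of_s2.append((i, j))
    else:
        j = (j + 2) % 3            # B moves it from S2 back to S1

# The values depend on the execution history, not on the control state alone:
assert visits_of_s2[0] == (1, 0)   # first visit of S2 (cf. Table 6.1)
assert visits_of_s2[1] == (0, 2)   # second visit: same control state, other values
assert (i, j) == (0, 0)            # after twelve actions the initial values recur
```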
s0 s2 s4 s6 s8 s10
(S1,0,0) (S1,1,2) (S1,0,1) (S1,1,0) (S1,0,2) (S1,1,1)
A B A B A B A B A B A B
s1 s3 s5 s7 s9 s11
(S2,1,0) (S2,0,2) (S2,1,1) (S2,0,0) (S2,1,2) (S2,0,1)
C C
s12 s13
(S3,1,1) (S3,0,0)
Fig. 6.7: The state space of the automaton shown in Fig. 6.6
The start state s0 in the state space is represented by (S1 , 0, 0), meaning
that the automaton is in the state S1 and the state s0 is represented by
s0 (i) = 0 and s0 (j) = 0. From this state, the automaton can proceed to s1
by way of action A updating i with its new value 1, which yields a new state
s1 , with s1 (i) = 1 and s1 (j) = 0. Eventually, the state s11 is reached, which
is represented by (S2 , 0, 1). The state transition B leads to the state s0 , the
initial state of the state space.
This discussion shows that the definition of the state space of an extended
automaton is very similar to that for a Mealy automaton. Instead of a single
variable, extended automata have multiple variables. However, in each state,
each variable has exactly one value. Just as in the case of output automata, a
state in the state space of an extended automaton is a valuation on a set V of
variables.
6.2 State Spaces of Sequential Systems 173
In the state space, l is the label of the resulting transition leading to s. The state s is a final state in the state space iff the corresponding state of the extended automaton is a final state. The initial state of the state space corresponds to the initial state of the automaton, where only the entry clause has been evaluated.
This definition is illustrated by the example shown in Fig. 6.7. The initial
state of the state space is s0 ∈ S, represented by (S1 , 0, 0), since S1 is the initial
state of the extended automaton and there is no entry clause in that state,
which might change the initial values of the variables. If the variables had not
been initialised, then they would have had the value ⊥. Using Definition 6.4,
the state space of the extended automaton can be derived, as explained above.
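This derivation can be turned into a small exploration procedure. The sketch below (an illustration, not the book's algorithm) enumerates the valuations reachable in the automaton of Fig. 6.6 and recovers the 14 states of Fig. 6.7:

```python
from collections import deque

# States of the state space are valuations (control state, i, j) of the
# extended automaton in Fig. 6.6: A fires i := (i + 3) mod 2, B fires
# j := (j + 2) mod 3, and C is enabled in S2 only if i = j.
def successors(state):
    ctrl, i, j = state
    if ctrl == "S1":
        yield ("S2", (i + 3) % 2, j)       # action A
    elif ctrl == "S2":
        yield ("S1", i, (j + 2) % 3)       # action B
        if i == j:
            yield ("S3", i, j)             # action C, guarded by i = j

def explore(initial):
    """Breadth-first exploration of all reachable valuations."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

space = explore(("S1", 0, 0))
assert len(space) == 14                    # the states s0 .. s13 of Fig. 6.7
assert ("S3", 1, 1) in space and ("S3", 0, 0) in space
```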
s0 s2 s4 s6
(S1,0,0) (S1,3,2) (S1,4,1) (S1,3,0)
A B A B A B A
s1 s3 s5 s7
(S2,3,0) (S2,4,2) (S2,3,1) (S2,4,0)
Fig. 6.9: State space of the extended automaton shown in Fig. 6.8, with a loop
and without a final state
The state space is shown in Fig. 6.9. Notice that the state space is a cyclic
graph, which shows the non-terminating nature of the automaton. This is due
to the fact that in the state S2 of the extended automaton, i = j always holds,
which prevents the automaton from reaching the final state S3 .
The examples introduced so far have used finite domains of variables. However,
some automata have variables with an infinite domain, where there exists no
upper bound on the value a variable can assume. Since each different value
that a variable can assume results in a different state in the state space, the
number of states in the state space becomes infinite as well.
Before we introduce an approach to dealing with situations like this, we
will present an example of an automaton with unbounded variable values. This
[Fig. 6.10: Extended automaton whose variables i and j grow without bound]
The values of the variables can be computed easily. Initially, the automaton
is in the state S1 , and both variables have the initial value zero. On entering
the state S2 , the value of i is incremented by 3, and returning to the state S1
increments j by 2. After the first iteration of the cycle involving S1 and S2 ,
we have i = 3 and j = 2.
The final state, S3 , can be reached after the third and the fourth iteration
of the loop made up of S1 and S2 , and afterwards every fifth and sixth iteration.
After the third iteration, i = 9 and j = 4, so that (i mod 2) = (j mod 3) = 1.
s0 s2 s4 s6 s8 s10
(S1,0,0) (S1,3,2) (S1,6,4) (S1,9,6) (S1,12,8) (S1,15,10)
A B A B A B A B A B
s1 s3 s5 s7 s9 …
(S2,3,0) (S2,6,2) (S2,9,4) (S2,12,6) (S2,15,8)
C C
s11 s12
(S3,9,4) (S3,12,6)
Fig. 6.11: State space of the extended automaton with an infinitely large state
space shown in Fig. 6.10
Despite the possibility of reaching the final state, the automaton can also
use the transition B to start the next iteration. This is also possible if the
condition associated with the state transition C can be evaluated to true.
The state space of the automaton is shown in part in Fig. 6.11. It is easy
to see that after iteration n in the state S2 , i = n · 3, and j = (n − 1) · 2. It
is also easy to see that there is no upper bound on either of these variables,
resulting in an infinite state space.
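Such an automaton can only be explored up to a bound. The sketch below assumes, consistently with the values given above, that A fires i := i + 3, that B fires j := j + 2, and that the guard of C is (i mod 2) = (j mod 3); the cut-off on i is an artificial bound introduced for the exploration, not part of the model:

```python
# Bounded exploration of the automaton of Fig. 6.10. Assumptions (matching
# the values discussed in the text): A fires i := i + 3, B fires j := j + 2,
# and C is guarded by (i mod 2) = (j mod 3). The parameter max_i is an
# artificial cut-off; the real state space is infinite.
def explore_bounded(max_i):
    seen, frontier = set(), [("S1", 0, 0)]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        ctrl, i, j = state
        if ctrl == "S1" and i + 3 <= max_i:      # cut-off: bound the value of i
            frontier.append(("S2", i + 3, j))    # action A
        elif ctrl == "S2":
            frontier.append(("S1", i, j + 2))    # action B
            if i % 2 == j % 3:
                frontier.append(("S3", i, j))    # action C
    return seen

# Raising the cut-off always uncovers further states: the space is unbounded.
assert len(explore_bounded(30)) < len(explore_bounded(60))
assert ("S3", 9, 4) in explore_bounded(30)       # final state after iteration 3
```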
Infinite state spaces play a role in the formal verification of discrete dynamic systems, which is covered in Chapter 8.
[Fig. 6.12: Hierarchical automaton with a composite state containing internal states and a history pseudostate]
Since state spaces do not use composite states, these states can be eliminated by connecting all incoming state transitions of a composite state to the initial state inside that composite state. Hierarchical automata and UML state machines share the concept of pseudostates for an initial state and for history states. These are entry points to a state machine. However, the automaton cannot reside in a pseudostate; instead, the next state is entered instantly. Therefore, pseudostates are not available in state spaces and need to be eliminated as well.
State transitions that have a composite state as the source state and a
target state st in the automaton need to be replicated for each internal state
of the composite state. This means that each internal state is connected to st .
History states require special treatment. Since they allow the automaton to
return to an internal state that was active when the composite state was left,
it is necessary to store the state that was active. However, this is not possible
in state spaces. Therefore, the part of the state machine that is executed after
re-entering a composite state through a history pseudostate needs to be copied
for each internal control state.
Fig. 6.13: The state space of the hierarchical automaton shown in Fig. 6.12
Fig. 6.14: State machine of car radio navigation system with orthogonal states
causally related. For instance, we can change the radio station while the system
is navigating, and we can cancel navigation while making a phone call.
The key question in this section relates to the state space of concurrent
systems. As indicated above, the state space of a system is represented by the
values of its variables, and by events that represent system behaviour by state
transitions. Since behavioural models restrict the events that can occur during
run time, we are interested in the ordering of system events, represented by
state transitions.
As this discussion shows, all interleavings of events generated by orthogonal
subsystems represent valid system behaviour. Therefore this semantics of a
concurrent system is called interleaving semantics. For other semantics of
concurrent systems, the reader is referred to the bibliographical notes of this
chapter.
Part of the state space of the car radio and navigation system is shown
in Fig. 6.15. Since there are no dependencies between the state transitions
in the three orthogonal substates, the number of states equals the product
of the numbers of states in the submachines. Hence, the number of states
resulting from the orthogonal state is 2 · 4 · 3 = 24. This is due to the fact
that the radio subsystem has two states, the navigation subsystem has four
Fig. 6.15: Part of the state space of the car radio and navigation system. Overall, the state space consists of combinations of states of the orthogonal state machines plus the final state.
states, the phone subsystem has three states, and all combinations of states
represent valid states in the state space. Notice that we do not represent the
initial pseudostates in the state space, since these are immediately left after
being entered.
Owing to the large number of states and state transitions resulting from
this, we deliberately included only an excerpt of the complete state space in
Fig. 6.15. Note that in any state, the radio navigation system can be shut down
entirely, represented by the action DRN that originates from the orthogonal
state in Fig. 6.14. For the state space, this means that there exists a state
transition from each state to the final state s24 labelled SO, assuming that
the 24 states resulting from the orthogonal state are s0 through s23 . We have
left out the transitions to s24 for the sake of clarity, and only sketched that
behaviour by several incoming state transitions to s24 .
To investigate the behaviour of the car radio and navigation system, its
states have to be considered. In the example, it suffices to use three variables,
each of which represents the current state of one subsystem. The variable vR
is responsible for maintaining the state of the radio, vN for the navigation
system, and vP for the phone.
After the radio and navigation system is started up, the orthogonal state is
entered. In each submachine, the pseudostate is entered, which is immediately
followed by the corresponding first state of each submachine. As a result, the
system is in the state s0 = (ROF, NR, PR), meaning that the radio is off, s0(vR) = ROF, the navigation is ready, s0(vN) = NR, and the phone is ready as well, s0(vP) = PR.
In s0 , the following events might occur, corresponding to state transitions
in the three subsystems:
• In the radio subsystem, the radio can be started (event SR).
• In the navigation subsystem, an address can be entered (EA) or the
navigation can be shut down (DN ).
• In the phone subsystem, a call can be made (MC ) or received (RC ), or
the phone can be unregistered (UP).
• The radio navigation system can be shut down entirely (DRN).
These behavioural alternatives are reflected in the state space shown in
Fig. 6.15. Since the two transitions MC and RC lead to the same system state,
we have marked the corresponding edge with both labels.
It is a typical situation in concurrent systems that several events can occur
at each point in time. For example, in the state s0 , any of the seven events
mentioned in the list above can happen. Each walk through the state space of
a concurrent system serialises the events in an order in which they can occur.
In fact, one specific execution sequence of the concurrent system is represented
by a walk from the start state to an end state.
This example also shows the so-called state explosion problem. This problem
is due to the increasing number of states in the state space of concurrent systems.
The example shown has just nine states in the submachines, but 24 states
in the state space of the orthogonal state. In general, the number of states
increases exponentially with the degree of concurrency. For instance, a business process model with concurrent execution of 20 branches, each of which contains only a single activity, will result in 2^20 states, which exceeds a million states.
The bibliographical notes of this chapter contain references to papers looking
at techniques for reducing the state space of concurrent systems.
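The arithmetic behind these figures is easy to check (a trivial sketch; the component sizes are those of the example above):

```python
# Under interleaving semantics, independent components multiply: the size of
# the combined state space is the product of the component sizes.
radio, navigation, phone = 2, 4, 3       # component sizes of the example
assert radio * navigation * phone == 24  # states of the orthogonal state

# n concurrent two-state branches (activity not yet done / done) yield 2**n states:
branches = 20
assert 2 ** branches == 1_048_576        # exceeds a million states
```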
Regarding the state spaces of state machines, so far we have assumed that all
transitions in orthogonal states are independent. This assumption, however,
does not hold in general, as was discussed in Section 4.1 for a soft drink vending
machine. In this section, the implications of these dependencies for the state
spaces of state machines with orthogonal states are investigated.
Fig. 6.16: State space of the soft drink vending machine shown in Fig. 4.7
Selection is in the substate Drink Selection, while the state Payment is in the substate 0 € paid. The concurrent behaviour described by the state machine
with orthogonal states can be seen from the rather complex state space. To
keep the state space manageable, we have abstracted away the transition to
shut off the vending machine.
In the state space, the dependency between orthogonal states described
above is represented by the confirm transitions. These can only occur if
sufficient funds have been paid and the beverage has been chosen. Hence, any
dependency between orthogonal states reduces the concurrency of the state
space. In the example, for instance, a confirm transition is only possible if the
state machines of the orthogonal states are in the corresponding substates.
Fig. 6.18: State space of the Petri net shown in Fig. 6.17
6.3 State Spaces of Concurrent Systems 185
Bibliographical Notes
Some of the approaches to modelling the behaviour of discrete dynamic systems
also look at the state spaces generated by those models. There are several
different tools for computing the state space of concurrent systems. In this
chapter we used PIPE2 to compute the state space shown in Fig. 6.18. More
information about that tool can be found at http://pipe2.sourceforge.net/.
The state explosion problem was discussed by Valmari (1998). Techniques
to reduce the state space of concurrent systems were introduced by Clarke
et al. (1999a). There are several different semantics that can be associated
with models that capture concurrent behaviour. In this book we have used
interleaving semantics, which captures the behaviour by all possible interleav-
ings of actions. There is also truly concurrent semantics; readers interested in
this advanced topic are referred to Winskel and Nielsen (1995). An overview of
different execution semantics for Petri nets was given by Juhás et al. (2007).
7
Comparing Behaviour
In the previous chapter, we have shown how state spaces can be used to
capture the behaviour of discrete dynamic system models, expressed in dif-
ferent modelling languages. In this chapter, we investigate how this universal
representation allows us to compare behavioural models.
Behavioural comparison of systems is essential in a number of situations.
Take, for instance, a model of a system’s interactions, which abstracts away
the internal behaviour of the system. This model can be represented, for
instance, as a sequence diagram. The internal behaviour of the system can be
represented by another model, for instance a state machine.
In order to ensure that an implementation of the system complies with
its specification, one needs to compare the two models and verify that they
show the same behaviour regarding interactions. To address these issues, this
chapter introduces notions of behavioural equivalence.
Comparison is also useful for models of the same system. Recently, the
development of software systems has seen a major paradigm shift. A software
system is no longer a final product, but rather is under constant evolution
and modernisation. Replacing a component of a software system with a more
efficient or more versatile one must not break the overall system. This can
be ensured by comparing the behaviour of the two components and showing
that the new component refines the behaviour of the old one. The concept of
behavioural inheritance can be used to investigate these properties.
In addition to behavioural equivalence and inheritance, behavioural similar-
ity is also investigated in this chapter. This concept is less strict and allows us
to measure to what degree two systems resemble each other, even if they show
distinct behavioural features. Behavioural similarity enables one, for instance,
to search among a number of system models to avoid creating duplicates of
what already exists and, thus, can help one to reduce the maintenance effort
related to the management of model repositories.
A large body of research work has addressed how the behaviour of two
systems can be compared, including behavioural equivalence, behavioural
inheritance, and behavioural similarity. In this chapter, we cover the most
prominent of these notions.
Fig. 7.1: Two state spaces that express the same behaviour
To illustrate this observation, Fig. 7.1 shows an example of two state spaces
that express the very same behaviour. Both systems allow actions a and b to
be executed in an alternating fashion, starting with a and terminating with
b. In both systems, arbitrarily many iterations of this pair of events can be
executed.
Following the above reasoning, behavioural equivalence cannot be defined
on the basis of the structure of state spaces. Therefore, the notion of an
impartial observer who monitors a running system and observes the events
that occur has been devised. Based on this concept, behavioural equivalence
can be defined as follows.
Two systems are behaviourally equivalent if an observer is not able to
distinguish their behaviours.
Depending on the notion of equivalence, different kinds of observers are
employed. Observers differ in the information that they take into account
during observation.
For example, one observer might only consider the actions that two systems
can carry out, whereas another observer could also rely on decisions that
can be taken during a system run. Hence, equivalence relations differ in the
definition of an observer and the abstraction that the observer applies when
comparing systems.
Behavioural equivalence can be considered as a binary relation between
system models. Let M be the set of models; then E ⊆ M × M is an equivalence
relation. If two models m1 , m2 ∈ M are equivalent, then (m1 , m2 ) ∈ E. Based
on this notion, we shall now discuss properties of behavioural equivalence
relations.
Any equivalence relation – and therefore also E – is reflexive, symmetric,
and transitive. E is reflexive because a model is always equivalent to itself; it
holds that ∀ m ∈ M : (m, m) ∈ E.
It is symmetric; that is, if one system is equivalent to another system, then
the second system is also equivalent to the first system. Mathematically we
can express this property as ∀ m1 , m2 ∈ M : (m1 , m2 ) ∈ E =⇒ (m2 , m1 ) ∈ E.
Equivalences are transitive, meaning that if one system is equivalent to a
second system and the second system is equivalent to a third system, then the
first and the third system are also equivalent: ∀ m1 , m2 , m3 ∈ M : ((m1 , m2 ) ∈
E ∧ (m2 , m3 ) ∈ E) =⇒ (m1 , m3 ) ∈ E.
These properties allow many different system models to be partitioned into
clusters, called equivalence classes, where all members of one class are pairwise
equivalent in their behaviour.
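The partitioning induced by an equivalence relation can be sketched as follows. Since E is reflexive, symmetric, and transitive, it suffices to compare each model against one representative per class; the function name and the toy trace sets below are illustrative, not taken from the text:

```python
def equivalence_classes(models, equivalent):
    """Partition models into classes of pairwise equivalent members."""
    classes = []
    for m in models:
        for cls in classes:
            if equivalent(m, cls[0]):  # transitivity: one check suffices
                cls.append(m)
                break
        else:
            classes.append([m])
    return classes

# Toy example: models identified by their trace sets, compared for equality.
traces = {"m1": {("a", "b")}, "m2": {("a", "b")}, "m3": {("a", "c")}}
print(equivalence_classes(list(traces),
                          lambda x, y: traces[x] == traces[y]))
# [['m1', 'm2'], ['m3']]
```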
To decide consistency between two behavioural models, an observer is
used. This observer assumes the abstraction of the more abstract of the two
system models, i.e., the behavioural interface, and tries to mimic all possible
behaviours of this model in the internal behaviour.
The observer is capable of monitoring only the publicly observable actions,
which are the sending and receiving of messages. The general idea is as follows.
If the behavioural interface and the internal behaviour are equivalent in their
actions from the viewpoint of the observer, they are consistent.
Since the internal behaviour shows more detail than the behavioural inter-
face, we understand the internal behaviour of a system as a refinement of its
behavioural interface. Refinements allow traversing between different levels of
abstraction. A behavioural interface, for instance, is more abstract than an
implementation of a system, as it abstracts away the internal behaviour and
incorporates only those concepts that are relevant for the system’s interaction
with other systems.
In one or several refinement steps, the level of abstraction is lowered by
adding internal actions and decisions. However, not all possible refinements
preserve an implementation’s consistency with its specification; this requires
us to establish a relation between the implementation and its specification and
to examine it.
Fig. 7.4: Two BPMN business process diagrams that specify concurrent execu-
tions
Figure 7.4 shows two BPMN business process diagrams, m1 and m2 . These
models are different in their structure, but they use the same set of activities.
These models will now be used to study trace equivalence in detail.
As a first step in the task of deciding the trace equivalence of these models,
the corresponding state spaces are derived from m1 and m2 . These state
spaces are shown in Fig. 7.5. For the sake of readability, we have used the
abbreviations a, b, and c for the actions above.
Fig. 7.5: State spaces of the BPMN business process diagrams shown in Fig. 7.4
We observe that the state spaces differ in the number of states and in the
state transitions as well. In Fig. 7.5a, the order in which the actions are carried
out is chosen in the first state, resulting in six different sequences of these
actions. Nevertheless, any ordering of these actions is allowed.
In contrast, Fig. 7.5b allows one to choose from the set of actions stepwise.
First, an action can be chosen from all actions. After the first action is executed,
the next action can be chosen from the remaining two, and so on.
We first look at the traces of the state space A1 , shown in Fig. 7.5a, of the
business process model m1 .
LA1 = {⟨a, b, c⟩, ⟨a, c, b⟩, ⟨b, a, c⟩, ⟨b, c, a⟩, ⟨c, a, b⟩, ⟨c, b, a⟩}
Next, the traces of the process model m2 are investigated; its state space,
A2 , is shown in Fig. 7.5b. Choosing the upper branch results in the trace
⟨a, b, c⟩. However, after choosing a we can also select c next, which results in
the trace ⟨a, c, b⟩. Consequently, the state space of m2 shows exactly the same
set of traces as m1 . Therefore the languages of the state spaces are identical:
LA1 = LA2 . As a result, the business process diagrams m1 and m2 shown in
Fig. 7.4 are trace equivalent.
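Trace equivalence of finite, acyclic state spaces can be decided by enumerating both trace sets and comparing them. The sketch below rebuilds the state space of Fig. 7.5b (the state names and the transition encoding are our own, illustrative choices) and checks that its language is exactly the set of all orderings of a, b, and c derived above:

```python
from itertools import permutations

def traces(delta, state, finals):
    """Enumerate the traces of an acyclic state space.

    delta maps a state to a list of (label, successor) pairs.
    """
    if state in finals:
        yield ()
    for label, successor in delta.get(state, []):
        for rest in traces(delta, successor, finals):
            yield (label,) + rest

# State space of m2 (Fig. 7.5b): the next action is chosen stepwise.
a2 = {
    "s1": [("a", "s2"), ("b", "s3"), ("c", "s4")],
    "s2": [("b", "s5"), ("c", "s6")],
    "s3": [("a", "s5"), ("c", "s7")],
    "s4": [("a", "s6"), ("b", "s7")],
    "s5": [("c", "s8")], "s6": [("b", "s8")], "s7": [("a", "s8")],
}
language = set(traces(a2, "s1", {"s8"}))
print(language == set(permutations(("a", "b", "c"))))  # True
```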
7.1 Behavioural Equivalence 195
The trace equivalence introduced so far does not take the internal actions
of the systems into account. Rather, both business process models use the
same set of actions; they are on the same level of abstraction.
On the other hand, the abstraction of the internal state of a system is an
apt assumption when one is reasoning about the interaction of systems, as we
have argued in subsection 4.2.1. Therefore, trace equivalence is a well-suited
candidate for verifying that a behavioural interface fits its implementation.
To use trace equivalence, however, it is necessary to ignore the internal state
transitions that are not represented in the behavioural interface.
Figure 7.6 shows the behavioural interface of an online shop – the seller of
the goods in the above scenario. The process starts with the shop receiving
the order from a customer, followed by the concurrent sending of the shipment
and the invoice, which is then followed either by the receipt of the payment or
by the receipt of returned goods from the customer.
The behavioural interface of the order-handling process is composed only
of the send and receive actions that are required for the interaction with the
customer. This model produces the following traces:

⟨receive order, send shipment, send invoice, receive payment⟩
⟨receive order, send invoice, send shipment, receive payment⟩
⟨receive order, send shipment, send invoice, receive return⟩
⟨receive order, send invoice, send shipment, receive return⟩
Because the sending of the shipment and of the invoice are concurrent, they can
occur in either order. Additionally, the process needs to distinguish between
two alternative incoming messages, i.e., the receipt of the payment and of the
returned goods, which results in four different traces.
If we look at the internal behaviour of the order handling process depicted
in Fig. 7.7, we perceive more actions that show up in the traces, as illustrated
by the following trace:
⟨receive order, prepare shipment, send shipment, send invoice, receive payment, close order⟩

Fig. 7.7: Internal behaviour of the order-handling process
This trace is also contained in the set of traces of the behavioural interface,
which means that, for this trace, the implementation complies with the be-
havioural interface. However, in order to decide trace equivalence between a
behavioural interface and the internal behaviour of a system implementing
that interface, this property must hold for all traces.
Looking at the internal behaviour of the order-handling process, we observe
that the shipment is always sent before the invoice. While this may be a
valid assumption for an implementation of the online shop, it violates the
specification of the behavioural interface, which states that either of these
actions may occur before the other.
This means that a trace that is allowed by the interface is not allowed by
the implementation. Therefore, the behavioural interface, Fig. 7.6, and the
internal behaviour in Fig. 7.7 are not trace equivalent.
As a consequence, the internal behaviour cannot be used to handle orders
in the online shop. Since interaction partners rely only on the specification
of the interface, a customer may insist on receiving an invoice first and only
afterwards the shipment, as offered by the online shop’s behavioural interface.
If the communication with the customer is synchronous, the customer would
wait forever for the invoice while being incapable of receiving the shipment.
Figure 7.8 shows the internal behaviour of another order-handling process.
In contrast to Fig. 7.7, it provides explicitly for the selling of a gift voucher,
which is shipped and invoiced like a regular order. However, gift vouchers are
non-refundable and, therefore, the customer may not send the order back but
has to pay in every case.
Fig. 7.8: Internal behaviour of an order-handling process that allows the buying
of gift vouchers
7.1.3 Bisimulation
Bisimulation is based on an observer that considers not
only traces, but also the choices that are offered by a system during observation.
Hence, a bisimulation observer also takes into account the alternative state
transitions that can be processed in every state of a system.
Simulation means that if one system offers a choice, i.e., a set of alternative
state transitions in a certain state, any equivalent system must offer the same
set of state transitions in a corresponding state. Bisimulation requires that
the simulation property is symmetric, i.e., that two behavioural models can
simulate each other. The correspondence of states is denoted by a bisimulation
relation between two systems.
i. (s0, s0′) ∈ B,
ii. (x, x′) ∈ B ∧ (x, l, y) ∈ δ =⇒ ∃ y′ ∈ S′ : (x′, l, y′) ∈ δ′ ∧ (y, y′) ∈ B,
iii. (x, x′) ∈ B ∧ (x′, l, y′) ∈ δ′ =⇒ ∃ y ∈ S : (x, l, y) ∈ δ ∧ (y, y′) ∈ B.
The above definition ensures that the state spaces of two systems simulate
each other by establishing a correspondence between states that offer the same
set of state transitions. If the state spaces simulate each other, the systems
described by those state spaces are bisimulation equivalent, or bisimilar.
To illustrate bisimulation, we shall use an abstract example before returning
to our online shop later. Consider the automata depicted in Fig. 7.9. Note that,
while the system in Fig. 7.9b, consisting of the states {s1 , s2 , s3 , s4 }, is deter-
ministic, the system in Fig. 7.9a, which consists of the states {q1 , q2 , q3 , q4 , q5 },
is non-deterministic owing to the non-deterministic choice with label a in the
state q1 . Nevertheless, both systems produce exactly the same set of traces,
{⟨a, b⟩, ⟨a, c⟩}, and are therefore trace equivalent.
We will now investigate if these systems are also bisimilar. The first con-
dition of Definition 7.2 requires that the initial states of the two systems
correspond to each other, which means that they are one tuple in the bisimula-
tion relation B. Since bisimulation is a symmetric relation, s0 must correspond
to s0′ and vice versa. In the example in Fig. 7.9, this constitutes the initial
tuple in the bisimulation relation, i.e., (q1 , s1 ) ∈ B.
The second condition of the definition states that if two states simulate each
other, i.e., (x, x′) ∈ B, then both states must offer the same state transitions.
This means that if there exists a state transition with label l ∈ Σ that originates
from x and leads to y, i.e., (x, l, y) ∈ δ, then there must also exist a state
transition with the same label l ∈ Σ that leads from x′ to a state y′, i.e.,
(x′, l, y′) ∈ δ′. Furthermore, the target states of these transitions must also
be in the bisimulation relation, i.e., (y, y′) ∈ B. This condition must hold for
every state x ∈ S and every transition that originates from x.
In our example, the state q1 in Fig. 7.9a has two state transitions labelled
a. For each target state of these transitions, q2 and q5 , there must be a
corresponding state in Fig. 7.9b to satisfy bisimilarity. Looking at the system,
we observe only one state transition labelled a. Hence, the state s2 is the
corresponding state for both q2 and q5 , i.e., (q2 , s2 ) ∈ B and (q5 , s2 ) ∈ B.
Consequently, both of these states must offer the same set of choices of state
transitions themselves. The state q5 has a state transition labelled b, leading
to q3 , which is mirrored in Fig. 7.9b by the state transition from s2 to s3 ,
i.e., (q3 , s3 ) ∈ B. Likewise, both of the state transitions b and c from q2 are
mirrored, leading to (q3 , s3 ) ∈ B and (q4 , s4 ) ∈ B.
Since the two systems are rooted in corresponding initial states, the above
definition ensures that, no matter how far the two systems proceed in a mutual
simulation, each state transition in one system will be simulated by the other
system, leading to corresponding states. The example shows how rooting the
bisimulation relation in the initial states of two systems being compared leads
to chaining of corresponding states and state transitions.
We have already shown that the second condition holds for our systems,
that is, the deterministic system in Fig. 7.9b simulates the non-deterministic
system in Fig. 7.9a. To verify bisimilarity of these systems, we must also show
that the non-deterministic system simulates the deterministic system. Looking
at the systems in Fig. 7.9, it becomes clear that this is not the case, because
of different moments of choice. In the deterministic system in Fig. 7.9b the
choice between b and c is made after a, whereas this is not the case in the
non-deterministic system in (a). If the state transition to q5 labelled a is chosen,
then it is not possible to choose between b and c any more. The transition b is
the only continuation offered in this state.
One might ask why we chose a non-deterministic system in the example
above. The reason is as follows. For two deterministic systems that are without
silent state transitions and produce finite sets of bounded traces, bisimulation
and trace equivalence coincide.
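For finite state spaces, bisimilarity can be decided by starting from the full relation between the two state sets and repeatedly discarding pairs that violate conditions (ii) and (iii), a naive greatest-fixpoint computation. The following sketch (the encoding of transitions as triples is our own choice) reproduces the result for the two systems of Fig. 7.9:

```python
def bisimilar(delta1, delta2, init1, init2):
    """Decide strong bisimilarity of two finite state spaces.

    Transitions are given as sets of (state, label, state) triples.
    """
    states1 = {s for s, _, _ in delta1} | {t for _, _, t in delta1}
    states2 = {s for s, _, _ in delta2} | {t for _, _, t in delta2}
    rel = {(x, y) for x in states1 for y in states2}

    def matched(x, y):
        # Every move of x must be answered by y, and vice versa.
        for p, l, q in delta1:
            if p == x and not any(p2 == y and l2 == l and (q, q2) in rel
                                  for p2, l2, q2 in delta2):
                return False
        for p2, l2, q2 in delta2:
            if p2 == y and not any(p1 == x and l1 == l2 and (q1, q2) in rel
                                   for p1, l1, q1 in delta1):
                return False
        return True

    changed = True
    while changed:  # remove violating pairs until a fixpoint is reached
        changed = False
        for pair in list(rel):
            if not matched(*pair):
                rel.discard(pair)
                changed = True
    return (init1, init2) in rel

# Fig. 7.9: the non-deterministic system (a) and the deterministic system (b).
delta_a = {("q1", "a", "q2"), ("q1", "a", "q5"), ("q2", "b", "q3"),
           ("q2", "c", "q4"), ("q5", "b", "q3")}
delta_b = {("s1", "a", "s2"), ("s2", "b", "s3"), ("s2", "c", "s4")}
print(bisimilar(delta_a, delta_b, "q1", "s1"))  # False: only trace equivalent
print(bisimilar(delta_b, delta_b, "s1", "s1"))  # True
```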
Weak Bisimulation
i. The initial states of A and A′ are in the bisimulation relation: (s0, s0′) ∈ B.
ii. If (x, x′) ∈ B and (x, l, y) ∈ δ then it must be true that either l = τ and
(y, x′) ∈ B, or there exists a sequence σ′ = ⟨τ, . . . , τ, l, τ, . . . , τ⟩ that leads
from x′ to y′, i.e., (x′, σ′, y′) ∈ δ′∗ and (y, y′) ∈ B.
iii. If (x, x′) ∈ B and (x′, l, y′) ∈ δ′ then it must be true that either l = τ and
(x, y′) ∈ B, or there exists a sequence σ = ⟨τ, . . . , τ, l, τ, . . . , τ⟩ that leads
from x to y, i.e., (x, σ, y) ∈ δ∗ and (y, y′) ∈ B.
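The weak transition relation used in conditions (ii) and (iii), that is, a sequence of silent steps, then the visible action l, then further silent steps, can be computed with a τ-closure. The following sketch is illustrative only; the string "tau" stands for τ, and all names are our own:

```python
def tau_closure(delta, state):
    """States reachable from state via zero or more silent (tau) steps."""
    seen, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for p, l, q in delta:
            if p == s and l == "tau" and q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def weak_successors(delta, state, label):
    """Targets of a weak transition tau* label tau* from state."""
    targets = set()
    for s in tau_closure(delta, state):
        for p, l, q in delta:
            if p == s and l == label:
                targets |= tau_closure(delta, q)
    return targets

delta = {("r1", "tau", "r2"), ("r2", "a", "r3"), ("r3", "tau", "r4")}
print(sorted(weak_successors(delta, "r1", "a")))  # ['r3', 'r4']
```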
Branching Bisimulation
Branching bisimulation is a stronger notion of equivalence than weak bisimula-
tion and preserves the branching structure of a system even in the presence of
silent transitions, hence its name. Branching bisimulation requires that the
same set of choices is offered before and after each unobservable action. Hence,
where weak bisimulation abstracts away the internal evolution of a system,
branching bisimulation takes all intermediate steps into account.
Formally, branching bisimulation requires the following reformulation of
the definition of bisimulation presented in Definition 7.2.
i. The initial states of A and A′ are in the bisimulation relation: (s0, s0′) ∈ B.
ii. If (x, x′) ∈ B and (x, l, y) ∈ δ then it must be true that either l = τ and
(y, x′) ∈ B, or there exists a sequence σ′ = ⟨τ, . . . , τ, l, τ, . . . , τ⟩ that leads from
x to y′, i.e., (x′, σ′, y′) ∈ δ′∗ , such that every state s′i passed through
in this sequence corresponds to x, i.e., ∀ 1 ≤ i < |σ′| : (x, s′i) ∈ B, and
(y, y′) ∈ B.
iii. If (x, x′) ∈ B and (x′, l, y′) ∈ δ′ then it must be true that either l = τ and
(x, y′) ∈ B, or there exists a sequence σ = ⟨τ, . . . , τ, l, τ, . . . , τ⟩ that leads from
x to y, i.e., (x, σ, y) ∈ δ∗ , such that every state si passed through in this
sequence corresponds to x′, i.e., ∀ 1 ≤ i < |σ| : (si, x′) ∈ B, and (y, y′) ∈ B.
7.1.4 Discussion
Fig. 7.12: State spaces of various system models, used to illustrate different
equivalence relations
In the following, we briefly compare pairs of models in Fig. 7.12 and discuss
whether the models are equivalent with respect to an equivalence relation.
Recall that equivalence relations are transitive.
Table 7.1: Equivalence relations for pairs of the models shown in Fig. 7.12
Fig. 7.12 (a) + (b) (a) + (c) (a) + (d) (a) + (e)
trace equivalence
weak bisimulation equivalence
branching bisimulation equivalence
completely mirrored in the state u1 , and hence these systems are branching
bisimilar.
Consider the database module shown in Fig. 7.15, whose actions are to
begin a transaction, read a record from the database, update a record, and
commit the transaction. A database transaction allows a number of database
operations to be bundled together such that either all operations are executed
successfully or no change is made to the database at all when the transaction
is committed. This mechanism avoids database inconsistencies. The system in
Fig. 7.15 provides only a single execution trace,
⟨begin transaction, read record, update record, commit transaction⟩.
In comparison, the system model shown in Fig. 7.16 also offers the possibility
to cancel a transaction. In that case, no changes are made to the database.
This system generates the traces of the original system together with
additional traces in which the transaction is cancelled.
insert record to be executed. System components that were suitable for the
original system would then fail, as they would not be able to insert a record.
The same applies if we were to remove a state transition, say read record.
The example above has shown that the insertion of actions can introduce
anomalies into a system’s behaviour. This is not true in general, however.
This section introduces projection inheritance. Projection inheritance is based
on the proposition that actions which are not present in the system from
which behaviour is inherited are only internal actions. If the effects of these
actions are ignored, then the two systems must be branching bisimilar to
satisfy projection inheritance. This notion is called projection inheritance,
because when we investigate inheritance we project onto the common actions
and abstract away internal actions.
Consider a system B with a set of actions ΣB which inherits from a system
A with a set of actions ΣA . Since B inherits behaviour from A, it contains all
actions of A, i.e., ΣA ⊆ ΣB . B may also contain actions that are not present
in A. In this context, projection refers to hiding all additional actions, i.e., all
state transitions that are labelled with a symbol in ΣB \ ΣA are treated as
silent state transitions. Branching bisimulation can be decided for A and B.
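The projection step itself is a simple relabelling: every transition of the child system whose label is not among the parent's actions is turned into a silent transition before branching bisimulation is checked. A sketch, with state names and the "tau" encoding chosen for illustration:

```python
def project(delta, parent_actions, tau="tau"):
    """Hide child-only actions: relabel transitions whose labels are
    outside parent_actions as silent (tau) transitions."""
    return {(p, l if l in parent_actions else tau, q)
            for p, l, q in delta}

# A child system in the spirit of Fig. 7.17: "insert record" was added
# between reading and updating (state names are illustrative).
child = {("u1", "begin transaction", "u2"),
         ("u2", "read record", "u3"),
         ("u3", "insert record", "u4"),
         ("u4", "update record", "u5"),
         ("u5", "commit transaction", "u6")}
parent_actions = {"begin transaction", "read record",
                  "update record", "commit transaction"}
projected = project(child, parent_actions)
print(("u3", "tau", "u4") in projected)  # True: the new action is hidden
```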
On comparing Fig. 7.17 with the original database module shown in
Fig. 7.15, it turns out that there is an additional action insert record. Following
the above informal definition of projection inheritance, this action is treated as
a silent state transition. The two systems are then branching bisimilar, because
they offer the same set of choices in all pairs of corresponding states. After
the sequence begin transaction, read record has been executed, the external
behaviour of the system in Fig. 7.17 offers the choice to update a record
after a silent state transition, which coincides with the definition of branching
bisimulation.
For the same reason, projection inheritance also allows one or several
actions to be added in a parallel branch. Imagine an extension of our original
database module that allowed us to insert a new record independently of, and
therefore concurrently with, the reading and updating of another record. As a
consequence of the interleaved semantics, this extension would lead to a system
with the state space depicted in Fig. 7.18. This system offers a number of
choices during its execution. For instance, in the state v1 the environment, i.e.,
another software module, may choose between read record and insert record.
If reading a record is chosen, another choice is offered between updating and
inserting a record, and so on.
Fig. 7.18: State space of the database module extended with a concurrent
insert record action
Protocol inheritance, in contrast, blocks the new actions rather than hiding
them when B is considered as an
extension of A. This results in the removal of all state transitions in the state
space of B that carry labels not present in A. For protocol inheritance to hold,
the remaining behaviour must then be branching bisimilar to A.
Protocol inheritance excludes the insertion of mandatory behaviour. In
both cases, that is inserting an action into a sequence and adding a parallel
action, removal of the state transition that represents the added action would
result in disconnecting the state space.
For example, removing the state transition insert record in Fig. 7.17 would
disconnect the state space such that states u4 , u5 , and u6 would become un-
reachable from the initial state. This clearly invalidates branching bisimulation.
The same applies to added actions that are executed in parallel, owing to
the interleaved semantics of the state spaces. As can be seen in Fig. 7.18,
blocking the state transition insert record would also cut the state space into
two disconnected parts.
One particular exception exists for adding mandatory behaviour: protocol
inheritance allows mandatory behaviour to be appended to the final state of
the parent system if all state transitions that originate in this state are not
present in the parent system. In this case, these new transitions would be
blocked to verify protocol inheritance and the child system would not be able
to leave this state either, which preserves branching bisimilarity. However, it
is arguable that the system would then not reach its final state.
The addition of alternative behaviour can preserve protocol inheritance if
the actions offered as additional choices are not present in the parent system.
In the earlier example of this kind of change operation (see Fig. 7.16) blocking
of cancel transaction would lead to exactly the same behaviour as that of the
original system.
accepted either, since these changes affect the state space but are not blocked,
because no new actions have been introduced.
7.2.5 Discussion
Three relations for behaviour inheritance have been introduced, trace in-
heritance, projection inheritance, and protocol inheritance. The differences
and commonalities have been discussed by means of the change operations,
introduced in subsection 7.2.1. Up to this point, however, two change operations
have been omitted.
The first such change operation, the addition of optional behaviour, refers
to a loop that can be iterated arbitrarily often, i.e., not at all, once, or several
times. A simple example is shown in Fig. 7.20. The original database module
shown in Fig. 7.15 has been extended with an optional action insert record that
can be carried out arbitrarily often after read record and before update record.
Fig. 7.20: The database module extended with an optional insert record action
Following the discussion above, it should be clear that this kind of change
operation can be allowed for all of the inheritance relations explored so far.
Trace equivalence accepts this change unconditionally, as the non-iteration of
the extension yields the original traces of the system. Projection inheritance
and protocol inheritance accept this change only if the added behaviour consists
of new actions.
For projection inheritance, branching bisimulation holds if all of the optional
behaviour is treated as silent transitions; for protocol inheritance, only the
first action must be new, i.e., not present in the parent system, to prevent
the behavioural extension from being entered. In the above example, all inheritance
relations are preserved.
Second, parallelisation of sequential behaviour has not been addressed. We
assume that some set of actions are independent of one another, that is, the
execution of one action is not a condition for the execution of another. These
actions could be executed in any order or in parallel. Figure 7.21 shows the
state space of a behavioural modification of our original database module that
does not add new actions. Instead, the previously sequentially ordered actions
read record and update record are now carried out concurrently. This introduces
a number of choices into the system each time the next action to be carried
out is chosen. In the example, such a choice is offered in the state y2 .
7.2 Behavioural Inheritance 215
Fig. 7.21: State space of the database module in which two previously
sequential actions are carried out concurrently
Looking at trace inheritance, we infer that the set of the traces that result
from parallelisation of actions also includes the original sequential sequence.
However, for the projection and protocol inheritance relations, both of
which are based on branching bisimulation, we come to the conclusion that
neither of them can accept parallelisation as an inheritance-preserving change
operation. Since no new action is added to the system, no action can be
hidden or blocked. However, the extended system offers choices that cannot be
simulated by the parent system. Consequently, branching bisimulation does
not hold.
When comparing the three inheritance relations that we have introduced,
we observe that there is no clear hierarchy or order of strictness among them.
For each relation, a different subset of change operations preserves inheritance.
An overview of the allowed change operations is depicted in Table 7.2. The
change operations annotated with an asterisk (∗ ) preserve the corresponding
inheritance relation only if the behavioural extension consists of new actions
in the case of projection inheritance, or at least starts with new actions in the
case of protocol inheritance.
Table 7.2: Allowed change operations for the inheritance relations (the
columns list the change operations, among them the addition of sequential
behaviour, the addition of optional behaviour, the parallelisation of behaviour,
and the resequencing of behaviour)
Trace inheritance
Projection/protocol inheritance ∗
Projection inheritance ∗ ∗
Protocol inheritance ∗ ∗
Lifecycle inheritance ∗ ∗ ∗
The table also lists two more inheritance relations, namely projection/pro-
tocol inheritance and lifecycle inheritance. These inheritance relations result
from combining protocol and projection inheritance. The more restrictive
inheritance relation – projection/protocol inheritance – applies if, for a given
parent behaviour and its extension, both protocol and projection inheritance
apply at the same time. Lifecycle inheritance is less restrictive than projection
inheritance and protocol inheritance. Lifecycle inheritance holds if, for any
extension in a child model, either protocol or projection inheritance holds.
Hence it subsumes the allowed change operations for both of these inheritance
relations.
Here, A is the set of automata representing the state spaces of discrete dynamic
systems. The similarity function can assume values between 0 and 1, where
0 indicates minimum similarity. A similarity value of 1 indicates maximum
similarity, or equality, from the perspective of the similarity function.
Behavioural similarity functions have a number of properties. Similarity
is reflexive, i.e., the similarity between a system and itself is maximal:
sim(A, A) = 1. It is also symmetric, i.e., sim(A, B) = sim(B, A). This means that
it does not matter in which order two systems are compared.
duplicate process models may also originate from different information systems,
different departments of an organisation, or different views of the same business
process.
Fig. 7.23: Repair process models A and B, comprising the activities Analyse problem, Order replacement, Order spare parts, Repair machine, and Settle payment
For an example, consider the two process models shown in Fig. 7.23,
which describe the repair processes of two washing machine repair companies.
Process model A starts with the activity Analyse problem and, based on
the results, continues with Order spare parts and Repair machine, or with
Order replacement, i.e., the ordering of a completely new washing machine.
After the machine is repaired, Settle payment is executed. If, however, a
replacement machine is ordered, then payment can be settled concurrently
with ordering the replacement, since the price for the replacement is already
known. Process model B also starts with Analyse problem, but it does not
include replacing the machine. Therefore, the activity Order spare parts is
performed and followed by iterating Repair machine and Test machine until
the machine is fixed. Eventually, the activity Settle payment is performed as
well.
The two process models share a number of activities and also some be-
haviour. For instance, both processes start with analysing the problem and
terminate with settling payment. Furthermore, in both processes the machine is
repaired after spare parts have been ordered. However, there are also significant
differences, such as the alternative option to not repair the machine at all but
to order a replacement in the first process.
Upon studying these models more closely, it becomes obvious that neither
the equivalence relations nor the inheritance relations introduced earlier in this
chapter are suitable for identifying the commonalities between the models, as
both models add activities that are not present in the other model. This shows
the need for a more relaxed notion of how to compare behavioural models.
7.3 Behavioural Similarity 219
Fig. 7.24: Petri nets for the process models shown in Fig. 7.23
Recall that additional silent transitions are inserted into Petri nets to
represent exclusive and parallel gateways in process models. These have no
effect on their own, but preserve the branching structure of the business
process. From the Petri net, we can derive the state space following the
strategy discussed in subsection 6.3.3. This provides us with the state spaces
shown in Fig. 7.25, where the silent transitions of the Petri nets are represented
as silent state transitions labelled τ .
(a) State space S(A)
(b) State space S(B)
Fig. 7.25: State spaces derived from the Petri nets shown in Fig. 7.24
Now, we can compute the traces of these systems. Similarly to the approach
to comparing traces in trace equivalence, silent state transitions are ignored as
they have no effect on the environment of the process.
Comparing Traces
Traces are a purely sequential representation of the behaviour of a concurrent
system, which coincides with the sequentialisation applied when the state
space is derived from that system. If a set of activities is concurrent in the
original model, we will find a number of traces that comprise all permutations
of them in the orders in which they can be interleaved. Similarly to the state
explosion problem mentioned on page 181, this can lead to an exponential
growth in the number of traces.
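The effect can be illustrated directly: k pairwise concurrent actions give rise to k! interleaved traces. A minimal sketch (the action names are made up for illustration):

```python
from itertools import permutations
from math import factorial

# Four pairwise concurrent actions interleave into 4! = 24 sequential traces.
actions = ["a", "b", "c", "d"]           # hypothetical concurrent actions
interleavings = set(permutations(actions))
print(len(interleavings))  # 24
```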
Process model A in Fig. 7.23 allows the following traces:
⟨AP, OR, SP⟩,
⟨AP, SP, OR⟩,
⟨AP, OS, RM, SP⟩.
Looking at process model B in Fig. 7.23, we observe that the set of traces
is actually infinite. This is due to the loop around RM and TM , which is not
restricted in the model:
⟨AP, OS, RM, TM, SP⟩,
⟨AP, OS, RM, TM, RM, TM, SP⟩,
⟨AP, OS, RM, TM, RM, TM, RM, TM, SP⟩,
...
n-grams of Traces
An n-gram is a subsequence of n successive actions in a trace. The traces of
S(A) yield the 2-grams ⟨AP, OR⟩, ⟨OR, SP⟩, ⟨AP, SP⟩, ⟨SP, OR⟩, ⟨AP, OS⟩,
⟨OS, RM⟩, and ⟨RM, SP⟩; the traces of S(B) yield ⟨AP, OS⟩, ⟨OS, RM⟩, ⟨RM, TM⟩,
⟨TM, RM⟩, and ⟨TM, SP⟩. Only the 2-grams ⟨AP, OS⟩ and ⟨OS, RM⟩ are common
to the state spaces of the two business processes. To compute the similarity
between two sets of n-grams, we use the Jaccard similarity, which divides the
number of common elements in two sets by the number of the elements in the
union of these sets:
sim_Jaccard(A, A′) = |A ∩ A′| / |A ∪ A′|
To apply this to the sets of n-grams of the traces of two systems, we divide
the number of common n-grams by the total number of n-grams. For the above
2-grams this results in sim_trace(S(A), S(B)) = 2/10: two out of ten 2-grams are
common to the two processes, which leaves us with a trace similarity of 0.2
for our process models for the repair business. If none of the n-grams of two
systems are shared, the numerator equals 0, which yields a similarity of 0.
At the other extreme, if all n-grams of the trace sets of the two systems are
identical, the numerator equals the denominator, which results in a similarity of
1, which indicates the equality of the two systems with regard to the similarity
function.
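The n-gram similarity described above can be sketched in a few lines. The trace sets below are those derived for the two repair processes, with the infinite trace set of B represented by two of its traces, which already contain all of its 2-grams:

```python
def ngrams(trace, n=2):
    """The set of n-grams, i.e., tuples of n successive actions, of a trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def ngram_similarity(traces_a, traces_b, n=2):
    """Jaccard similarity of the n-gram sets of two trace sets."""
    grams_a = set().union(*(ngrams(t, n) for t in traces_a))
    grams_b = set().union(*(ngrams(t, n) for t in traces_b))
    return len(grams_a & grams_b) / len(grams_a | grams_b)

# Traces of process model A and representative traces of process model B
traces_a = [("AP", "OR", "SP"), ("AP", "SP", "OR"), ("AP", "OS", "RM", "SP")]
traces_b = [("AP", "OS", "RM", "TM", "SP"),
            ("AP", "OS", "RM", "TM", "RM", "TM", "SP")]
print(ngram_similarity(traces_a, traces_b))  # 0.2
```

Note that the parameter n directly controls the strictness of the comparison, as discussed next.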
This system is not trace equivalent to the state space in Fig. 7.25b, but has
an identical set of 2-grams.
The length of the n-grams, determined by n ∈ N, is a configuration
parameter for the strictness of the n-gram similarity function. The greater n
is, the stricter the similarity function becomes. For instance, a value of n = 1
gives a set equal to the set of actions of the system, because every 1-gram
consists of exactly one action and duplicate n-grams are ignored. Consequently,
1-gram similarity completely ignores the behaviour of a system, as all temporal
ordering relations between the actions are lost.
On the other hand, increasing n leads to increasing the length of the
subsequences that are compared by the n-gram similarity function. If n = 3,
only subsequences of three successive actions in the two systems are identified
as commonality. If n = 5, we need to find identical subsequences of at least
five actions, and so on. If n is at least as large as the length of the longest
trace of a system, i.e., n ≥ max_{σ ∈ L_A} |σ|, then n-gram equivalence coincides
with trace equivalence, because every n-gram is identical to one trace of the
system. Empirical studies have shown that the most effective similarity results
are obtained for the value n = 2.
Behavioural Profiles
Table 7.3: Behavioural profiles for the repair processes shown in Fig. 7.23
(a) Profile A
     AP  OS  RM  OR  SP
AP   +   →   →   →   →
OS   ·   +   →   +   →
RM   ·   ·   +   +   →
OR   ·   ·   ·   +   ||
SP   ·   ·   ·   ·   +

(b) Profile B
     AP  OS  RM  TM  SP
AP   +   →   →   →   →
OS   ·   +   →   →   →
RM   ·   ·   ||  ||  →
TM   ·   ·   ·   ||  →
SP   ·   ·   ·   ·   +
Here, sim_i refers to the Jaccard similarity for each of the behavioural rela-
tions, and w_i ∈ [0, 1] denotes the weight applied to that relation such that
w_+ + w_→ + w_|| = 1.
We have computed the elementary similarity values for each relation set.
Table 7.4 gives an overview, which indicates that the strongest similarity
between the repair processes is obtained with the strict order relation. Despite
Table 7.4: Elementary similarity values for each of the behavioural profile
relations
Similarity
sim_+ = 3/7 ≈ 0.43
sim_→ = 6/10 = 0.6
sim_|| = 0/4 = 0
the fact that each model adds an activity compared with the other, 60% of
their actions are still executed in the same order.
The considerable similarity in terms of exclusiveness results from the
reflexive pairs of activities Analyse problem (AP), Order spare parts (OS), and
Settle payment (SP), which can be executed at most once. Finally, the two
processes share nothing with respect to interleaving order relations, because the
interleaved execution includes different activities in the two models, resulting
in a similarity value of 0.
Based on the elementary similarity values, we have also computed the
aggregated similarity, sim_Profile, for different weights, as listed in Table 7.5.
The last row indicates the average aggregated similarity where the weights of
all individual relation sets are equal.
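The aggregation is a weighted sum of the elementary similarity values; a small sketch using the values from Table 7.4, where the equal-weight case yields the average aggregated similarity:

```python
def profile_similarity(sims, weights):
    """Weighted aggregation of elementary behavioural-profile similarities.
    The weights must sum to 1, as required for w+, w->, and w||."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[rel] * sims[rel] for rel in sims)

sims = {"+": 3 / 7, "->": 6 / 10, "||": 0 / 4}          # Table 7.4
equal_weights = {"+": 1 / 3, "->": 1 / 3, "||": 1 / 3}  # average case
print(round(profile_similarity(sims, equal_weights), 2))  # 0.34
```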
Fig. 7.26: Two process models that have a different structure but exactly the
same behaviour
Bibliographical Notes
The idea of comparing the behaviour of systems goes back to Moore (1956),
who described a thought experiment to determine the behaviour of a system.
Here, an automaton, i.e., a sequential system, that accepts input and provides
output is connected to an experimenter – a machine that successively sends
input to the automaton and receives its output. Moore elaborated on the
conclusions that could be drawn by such an experimenter. From his standpoint,
two systems that cannot be distinguished by the same experimenter are
behaviourally equivalent.
Later, Hoare (1978) extended the notion of sequential systems by adding
concurrency and introduced trace semantics – the basis of trace equivalence.
Trace semantics provides a sequential representation of concurrency by in-
terleaving actions that can be carried out in parallel. Hoare claimed that
the behaviour of a system can be described entirely by a possibly infinite
set of traces. Hence, if two systems have the same sets of traces, they are
equivalent. In Hoare (1980), a complete process algebra was presented that
provides many useful properties and operations for capturing and composing
system behaviour.
Park (1981) and Milner (1982) proposed the concept of bisimulation,
which examines whether two systems are able to simulate each other. Milner
(1982) also introduced observation equivalence, i.e., weak bisimulation, as
a means to compare systems that have silent state transitions. Branching
bisimulation was introduced by van Glabbeek (1993) to provide a stronger
notion of equivalence than weak bisimulation that also incorporates silent state
transitions. Behavioural equivalence relations were also covered by Pomello
et al. (1992), van Glabbeek and Goltz (2001), and Hidders et al. (2005), who
provided formalisations and comparisons of these relations.
Behavioural inheritance was studied by Basten (1998) for the lifecycles of
objects in a software system. Later, van der Aalst and Basten (2002) showed
how projection and protocol inheritance can be applied to problems related to
cal modelling language for business process compliance rules. We show how
compliance rules translate to formal specifications and how BPMN-Q helps to
identify compliance violations in business process models.
This chapter gives an overview of system verification by introducing the
main concepts. Business process compliance is used mainly as an example
to illustrate the steps involved in the design and verification of systems. We
should point out that the steps are identical when one is verifying other types
of system properties.
Even though a few formal definitions cannot be avoided, we have deliberately
ignored formal completeness for the sake of comprehension. The chapter
concentrates on the main concepts and aims to provide an entrance into a
more comprehensive body of literature on system analysis and verification,
which is referred to in the bibliographical notes at the end of this chapter.
remove the defect that has been discovered. An overall picture of modelling
and verification is shown in Fig. 8.1.
Fig. 8.1: Modelling and verification (system model, model checker, counterexample, refine, verified)
The central items in this figure are the system model, which specifies the
behaviour of a system, and a property model, which captures a desired property
to be satisfied by the system model. Here, the system model is a behavioural
model, such as an extended automaton, a sequence diagram, or a business
process model. This model represents the behaviour of the system. As argued
before, the state space of a system model serves as a uniform representation of
the behaviour of the system, and therefore serves as a basis for system analysis.
Desired properties of a behavioural system, as well as properties that are
undesired and must be avoided, can be captured in different ways. Initially, such
properties are typically of an informal nature. Since formal verification requires
formal models, these properties have to be formalised. This formalisation step
results in a formal specification of the property, which can then be checked.
This formal specification uses temporal logic, which will be introduced in the
next section.
Verification is based on a specific component that is able to check formal
specifications of properties against the state space of a system model automat-
ically. This component is known as a model checker. A model checker takes as
input the state space of a system and a formal specification of a property. It
checks whether that particular property is actually satisfied in the state space.
If the property holds, the system model is verified.
If the system model does not satisfy the formal property, the model checker
not only returns a negative result, but also provides a counterexample. This
counterexample shows one behaviour of the system that leads to the violation
of the formal property.
This counterexample is valuable for improving the system design. The main
use of the counterexample is shown in Fig. 8.1. As it shows undesired behaviour,
234 8 Verification
it directly fuels the refinement of the system design. However, the counterexample
might also expose problems in the formalisation of the property or in
the abstraction of the system model to the state space. Regardless of the origin
of the verification failure, such a counterexample provides useful information
to the engineers that enables them to improve the design of the system.
s : V → dom(V ).
V = {v1 , v2 }, (1)
Σ = {l}, (2)
∀ s ∈ S : |{s′ | (s, l, s′) ∈ δ, l ∈ Σ}| = 1, (3)
∀ s ∈ S \ {s0 } : |{s′ | (s′, l, s) ∈ δ, l ∈ Σ}| = 1 ∧ (4)
|{s′ | (s′, l, s0 ) ∈ δ, l ∈ Σ}| = 0,
∀ (s, l, s′) ∈ δ : s′(v1 ) = s(v1 ) + s(v2 ) ∧ s′(v2 ) = s(v1 ), (5)
s0 (v1 ) = 1, s0 (v2 ) = 0 (6)
F = ∅. (7)
Line (3) requires that every state has exactly one outgoing state transition,
and line (4) that every state other than the initial state
has exactly one incoming state transition. The initial state has, of course, no
incoming state transition.
For all state transitions, i.e., all tuples in δ, the values of the variables
in the target state s are computed from the values of these variables in the
source state s, which is expressed in line (5). The variables are declared in line
(1) and initialised in line (6). Line (7) explicitly states that this automaton
has no final state. Since all state transitions represent the same function, there
exists only one label l in the alphabet (see line (2)).
This means that the state transitions impose updating of the variables when
advancing from one state to the next, leading to the progression of states and
variables shown in Table 8.1. In fact, the state transition relation implements
the behaviour of computing the numbers of the Fibonacci sequence, which are
stored in the variable v1 .
The Fibonacci sequence is infinite. Since every state represents one partic-
ular assignment of the variables v1 and v2 , there exist an infinite number of
states. Consequently, the above automaton is infinite.
Table 8.1: Progression of states and variable assignments
State s0 s1 s2 s3 s4 s5 s6 s7 . . .
v1 1 1 2 3 5 8 13 21 . . .
v2 0 1 1 2 3 5 8 13 . . .
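The state transition relation of lines (5) and (6) can be executed directly; the following sketch reproduces the progression of v1 shown in Table 8.1:

```python
def fib_automaton(steps):
    """Iterate the transition relation of the automaton above:
    v1' = v1 + v2, v2' = v1, starting from the initial valuation
    v1 = 1, v2 = 0; collect the successive values of v1."""
    v1, v2 = 1, 0
    values = []
    for _ in range(steps):
        values.append(v1)
        v1, v2 = v1 + v2, v1
    return values

print(fib_automaton(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```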
Using states as valuations, and predicate logic for the state transition
relation as a means to put these valuations in relation, is feasible for simple
examples such as the one above. However, it becomes cumbersome and very
complex when we try to express more complicated behaviours.
Temporal logic enables us to put states into relation by means of temporal
operators. A temporal operator could, for example, be used to express the
condition that if one action has been executed, another action must be executed
as well before the system halts.
The term temporal logic refers to the ordering of actions and events in
the state space, and does not address real-time semantics, such as that of
timed automata, which were discussed in subsection 3.3.3. In general, two
major types of temporal logic are distinguished: linear temporal logic and
computation tree logic. Both address the ordering of events and actions, but
they differ in their expressiveness and in how paths in the state spaces are
evaluated.
Figure 8.2 depicts the state space of a software system that leads a customer
care agent through the various steps of placing a courier delivery: that is, an
express messenger picks up a shipment from one location and transports it
to another location. The system starts in the state s0 , where the customer
Fig. 8.2: State space of the customer care system (states s0 to s7; actions include record account number, record account details, record shipment details, settle payment, and hangup; the atomic propositions from, to, paid, sent, and fin label the states, and the final state s7 carries a τ loop)
Otherwise, the customer care system records the shipment details and
settles the payment. The final state, denoted by the atomic proposition fin, is
reached after the customer hangs up the phone.
Temporal logic extends predicate logic with temporal operators. Before these
operators are introduced, the evaluation of expressions has to be addressed.
The basis of temporal logic is state spaces, which were introduced in
Chapter 6. In subsection 2.2.3, we introduced execution sequences as a means
to express system runs by series of actions that are carried out in a particular
order. So far, the focus of behavioural analysis has been on state transitions.
In temporal logic, we are interested in the atomic propositions that are
true in certain states. Therefore the concept of computations is introduced. A
computation π is the sequence of states that is traversed in a system run. A
sample computation in the above system is given by the following sequence of
states:
π = s0 , s1 , s4 , s5 , s6 , s7 .
Computations are infinite sequences of states, because time does not stop.
This is apparent in the silent state transition that creates an iteration of the
final state s7 in Fig. 8.2. Once a system has reached a final state that will
never be left, it will remain in that state forever, iterating the silent state
transition as time passes. When we display computations in this chapter, this
property is represented by an overline on the state that is repeated infinitely
often.
Each state defines atomic propositions that hold while the system is in
that state. For example, in the state s5 the following propositions hold:
If we were interested in the names of states and the actions that had been
carried out to reach those states, we could have captured this information in
atomic propositions as well.
Using atomic propositions and Boolean operators, i.e., predicate logic, we
can reason about the properties of a state. Linear temporal logic (LTL) allows
us to reason about computations, i.e., sequences of states.
Let M = (S, Σ, δ, s0 , F ) be the state space, let π ∈ S ∗ be a computation
that starts in a state s ∈ S, and let ϕ be a predicate logic formula. Then the
formula holds for π if it holds for the first state s = π(1) in the path.
We can express an LTL formula for any computation of the system or for a
particular state. In the latter case, the formula must hold for all computations
that originate from that state:
M, s |= ϕ =⇒ ∀ π, π(1) = s : M, π |= ϕ.
In the example of the customer care system shown in Fig. 8.2, the formula
M, s1 |= from ∧ paid evaluates to true because the atomic propositions from
and paid evaluate to true. In contrast, this is not the case in the state s5 ,
which is expressed by M, s5 ⊭ from ∧ paid.
The semantics of temporal operators is defined on the basis of computations.
In LTL, a computation is represented by a linear sequence of states, hence the
name “linear temporal logic”.
To reason about system behaviour (characterised by sequences of states),
temporal logic complements predicate logic with temporal operators. These
temporal operators are the next operator (a property holds in the next state of
the computation), the until operator (property 1 holds until property 2 holds),
the eventually operator (eventually a property will hold), and the globally
operator (a property holds in all states).
Let π = s1 , s2 , . . . , sn be a computation. We refer to the ith position of
a computation by π(i), where π(i) ∈ S, because a computation is a sequence
of states in the state space M. Recall that a computation is not required to
start in the initial state of a system. In the above computation, π(1) is the
first state s1 , π(2) the second state s2 , and so on.
Next The next operator (denoted by X) requires that an expression ϕ holds
in the next state of the computation:
M, π |= X(ϕ) ⇐⇒ M, π(2) |= ϕ.
Eventually The eventually operator (denoted by F) requires that an expression
ϕ holds at some position of the computation:
M, π |= F(ϕ) ⇐⇒ ∃ i ≥ 1 : M, π(i) |= ϕ.
Globally The globally operator (denoted by G) requires that ϕ holds at every
position of the computation:
M, π |= G(ϕ) ⇐⇒ ∀ i ≥ 1 : M, π(i) |= ϕ.
Until The until operator U combines two expressions in the form ψ U ϕ. It
requires that ψ evaluates to true and will continue to be true in every
reachable state until a state π(i) is reached in which ϕ is true. Note that
ϕ might already be true in the first state of the computation:
M, π |= ψ U ϕ ⇐⇒ ∃ i ≥ 1 : M, π(i) |= ϕ ∧ ∀ j < i : M, π(j) |= ψ.
We write M, π |= ϕ to express that a property ϕ holds for a computation π, and
M, s |= ϕ
to indicate that in all paths starting from the state s, the property ϕ holds.
LTL formulas in which states are used are called state formulas. LTL formulas
in which paths are used are called path formulas.
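These semantics can be prototyped for computations whose last state repeats forever, as described above. In the sketch below, states are sets of atomic propositions and formulas are nested tuples; the labelling of the sample computation is a made-up illustration, not the exact labelling of Fig. 8.2:

```python
def holds(pi, f):
    """Evaluate an LTL formula f on a computation pi, given as a non-empty
    list of states (sets of atomic propositions) whose last state repeats
    forever. Formulas are nested tuples, e.g. ("G", ("ap", "paid"))."""
    op = f[0]
    if op == "ap":
        return f[1] in pi[0]
    if op == "not":
        return not holds(pi, f[1])
    if op == "and":
        return holds(pi, f[1]) and holds(pi, f[2])
    if op == "or":
        return holds(pi, f[1]) or holds(pi, f[2])
    if op == "X":                 # next: evaluate on the suffix pi(2), pi(3), ...
        return holds(pi[1:] if len(pi) > 1 else pi, f[1])
    if op == "F":                 # eventually: some suffix satisfies f
        return any(holds(pi[i:], f[1]) for i in range(len(pi)))
    if op == "G":                 # globally: every suffix satisfies f
        return all(holds(pi[i:], f[1]) for i in range(len(pi)))
    if op == "U":                 # until: f[2] eventually, f[1] before that
        for i in range(len(pi)):
            if holds(pi[i:], f[2]):
                return True
            if not holds(pi[i:], f[1]):
                return False
        return False
    raise ValueError(f"unknown operator {op!r}")

# G(paid -> F(sent)), written as G(not paid or F(sent))
spec = ("G", ("or", ("not", ("ap", "paid")), ("F", ("ap", "sent"))))
pi = [set(), {"paid"}, {"paid", "sent"}, {"fin"}]   # hypothetical labelling
print(holds(pi, spec))  # True
```

Because only the last state repeats, checking the finite suffixes is sufficient for these operators.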
To illustrate temporal operators in LTL, the customer care system shown
in Fig. 8.2 is revisited. It is reasonable to assume that every parcel that has
been paid for will be sent. This property of the system can be expressed in
LTL by the following state formula:
M, s0 |= G(paid → F(sent)).
This formula states that, globally, that is, in all states reachable from the initial
state s0 , if the atomic proposition paid is true, then a state must eventually
be reached in which sent is true.
The formula uses an implication, denoted by p → q, which can also be
written as ¬p ∨ q, because either p does not hold, or if p holds then q must hold
as well. Consequently, the above LTL expression is equivalent to the following
one:
M, s0 |= G(¬paid ∨ F(sent)).
We can conclude that in any state, either of the two conditions ¬paid and
F(sent) must hold. In order to find out whether this formula holds, we identify
all states for which ¬paid is violated, because only then do we need to evaluate
the second condition. That is, we search for states where paid is true.
The atomic proposition paid is true in the states s1 , s2 , s6 , and s7 . Hence,
we need to examine whether, from any of these states, a state is reachable in
which sent evaluates to true as well.
• M, s1 |= F(sent), because in the state s3 that is reachable from state s1 ,
sent is true.
• M, s2 |= F(sent), because in this state sent is true. This is an example,
where the eventually operator is satisfied immediately, without the need to
progress to another state.
• M, s6 |= F(sent), for the same reason. In this case, sent holds even before
paid. However, none of the LTL operators makes an assumption about the
history of a computation.
• M, s7 |= F(sent). In the final state s7 , sent holds as well.
Consequently, the LTL formula holds for all computations that start in a state
where paid is true. We conclude that the following LTL formula holds:
M, s0 |= G(paid → F(sent)).
It is worth discussing why the formula above uses the globally operator.
For comparison, consider the following statement without this operator:
M, s0 |= paid → F(sent).
This formula states that if the atomic proposition paid is true in the state
s0 of the state space M, then it must eventually be followed by a state in
which sent evaluates to true. However, on investigating the state space shown
in Fig. 8.2, we see that paid is not true in the state s0 and, therefore, the
right-hand side of the implication is not required to hold.
By enclosing the statement with the globally operator, we express the
condition that in all states reachable from s0 the implication must hold.
In the remainder of this section, a number of equivalences and implications
are listed that are useful for formulating and transforming LTL formulas.
The first equivalence states that in a state space M, for a path π, the
negation of a property ϕ holds if and only if on the same path the property
does not hold:
M, π |= ¬ϕ ⇐⇒ M, π ⊭ ϕ.
On a path, the property ϕ ∧ ψ holds if and only if on the path the property ϕ
and the property ψ hold:
M, π |= ϕ ∧ ψ ⇐⇒ M, π |= ϕ ∧ M, π |= ψ.
On a path, the property ϕ ∨ ψ holds if and only if on the path the property ϕ
or the property ψ holds:
M, π |= ϕ ∨ ψ ⇐⇒ M, π |= ϕ ∨ M, π |= ψ.
On a path, the property ϕ ∧ ψ holds in the next state if and only if in the next
state the property ϕ and the property ψ hold:
M, π |= X(ϕ ∧ ψ) ⇐⇒ M, π |= X(ϕ) ∧ X(ψ).
If, for a computation π, a property ϕ holds in the first state, then it also
holds eventually:
M, π |= ϕ =⇒ M, π |= F(ϕ).
If, for a computation π, a property ϕ holds in the next state then the condition
that this property shall hold eventually is satisfied as well:
M, π |= X(ϕ) =⇒ M, π |= F(ϕ).
The eventually operator can be unfolded recursively: a property holds eventually
if and only if it holds now or eventually from the next state onwards:
M, π |= F(ϕ) ⇐⇒ M, π |= ϕ ∨ X(F(ϕ)).
The globally and eventually operators are dual: a property does not hold globally
if and only if its negation holds eventually, and a property holds globally if
and only if its negation never holds eventually:
M, π |= ¬G(ϕ) ⇐⇒ M, π |= F(¬ϕ).
M, π |= G(ϕ) ⇐⇒ M, π |= ¬F(¬ϕ).
As indicated earlier, the until operator states that a property will eventually
hold, which leads to the following equivalence:
M, π |= F(ϕ) ⇐⇒ M, π |= true U ϕ.
The last two statements show that in fact the temporal operators X and U
are sufficient to express any LTL formula, as the globally operator G can be
constructed using the eventually operator F, and F can in turn be constructed
using the until operator U. The atomic proposition true is understood as a
constant that always evaluates to true. Again, the last statement emphasises
that the property expressed by the second operand of the until operator must
eventually become true.
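For atomic propositions, these equivalences can be checked exhaustively on short computations; a brute-force sketch, restricted to a single proposition p for simplicity:

```python
from itertools import product

def F(pi, pred):   # eventually: pred holds in some state
    return any(pred(s) for s in pi)

def G(pi, pred):   # globally: pred holds in every state
    return all(pred(s) for s in pi)

def U(pi, lhs, rhs):   # until: rhs eventually holds, lhs holds before that
    for s in pi:
        if rhs(s):
            return True
        if not lhs(s):
            return False
    return False

p = lambda s: "p" in s
not_p = lambda s: "p" not in s
true = lambda s: True

# Check the equivalences on every computation of length <= 5 over {p}.
for n in range(1, 6):
    for pi in product([frozenset(), frozenset({"p"})], repeat=n):
        assert (not G(pi, p)) == F(pi, not_p)   # ¬G(ϕ) ⇔ F(¬ϕ)
        assert G(pi, p) == (not F(pi, not_p))   # G(ϕ) ⇔ ¬F(¬ϕ)
        assert F(pi, p) == U(pi, true, p)       # F(ϕ) ⇔ true U ϕ
print("all equivalences hold")
```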
Above, we have explained that LTL formulas hold for a computation if they
hold for the first state of that computation. As a consequence, an LTL formula
applied to a state in the state space implicitly holds for every computation
that starts in that state.
It is interesting to know that two systems that are trace equivalent also
satisfy the same set of LTL expressions. If we say that a system model satisfies
an LTL formula, we mean that the formula holds in the initial state of the
system. Consequently, all computations of the system that start in the initial
state must satisfy the LTL expression. These computations are equivalent to
the traces of the system, because traces are sequences of actions that start in
the initial state and end in a final state of a system.
In subsection 7.1.2, we argued that two systems that are trace equivalent
have identical sets of traces. If two systems have the same set of traces, then
they also have the same set of computations, and hence satisfy the same LTL
expressions.
Linear temporal logic is only able to express properties regarding all execution
sequences that start in a certain state. Thus, we can only investigate whether
a property holds in all possible behaviours of the system.
However, it is also useful to investigate whether there exists some behaviour
with a certain property. An example of such a property is “from any state, it
is possible to reach the final state”. This property cannot be expressed in LTL,
because an LTL formula is evaluated for all computations originating from a
given state. Figure 8.3 shows an example that illustrates this observation.
In the state space shown, it is possible to reach the final state s2 , because
in the state s1 action c can be chosen. However, it is also possible that this
will never happen. In LTL, the expression F(fin) would not evaluate to
true, because there is a possible behaviour of the system in which s2 is not
reached.
8.2 Temporal Logic 243
Fig. 8.3: State space with transitions a (from s0 to s1), b (a loop at s1), and c (from s1 to s2); the final state s2 carries the atomic proposition fin and a τ loop
The property that, from any state, it is possible to reach the final state can
be expressed in CTL as follows:
M, s0 |= AG(EF(fin)).
This CTL formula reads: in every state of every computation that starts in s0,
there exists a computation along which fin eventually holds.
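For a finite state space, EF and AG can be computed as fixpoints over the transition relation. The sketch below encodes the state space of Fig. 8.3 as described in the text (a from s0 to s1, b looping at s1, c from s1 to s2, and a τ loop at s2) and checks AG(EF(fin)):

```python
# Successor relation of the state space of Fig. 8.3 (labels omitted).
delta = {"s0": {"s1"}, "s1": {"s1", "s2"}, "s2": {"s2"}}
fin = {"s2"}                      # states labelled with fin

def EF(target):
    """States from which some path reaches a state in `target`
    (backward reachability, computed as a least fixpoint)."""
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for s, succs in delta.items():
            if s not in reach and succs & reach:
                reach.add(s)
                changed = True
    return reach

def AG(good):
    """States from which every reachable state lies in `good`:
    the complement of the states that can reach a bad state."""
    return set(delta) - EF(set(delta) - set(good))

print("s0" in AG(EF(fin)))  # True: fin remains reachable from every state
```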
As discussed earlier, CTL formulas use the branching structure of the state
space. In a state space tree (or computation tree), whenever a state has a
number of outgoing state transitions, each of them leads to a new state. State
transitions cannot converge into the same state again. As a result, each state,
except for the initial state, has exactly one incoming state transition, and at
least one outgoing one, resulting in a tree structure.
A computation tree for the above system is shown in Fig. 8.4. The compu-
tation starts in the state s0 . A state transition labelled a brings the system to
the state s1 . In this state, two things can happen. Either the loop transition b
is taken, bringing the computation to the state s1 , or the transition c is taken,
leading to the state s2 . In s1 , there are again these two alternatives, which
leads to the tree structure of the computation shown in Fig. 8.4.
Since there is a τ -transition in the state s2 of the state space shown in
Fig. 8.3, the system iterates in this state forever. As a consequence, the state
space tree becomes infinite. Nevertheless, the termination of the system is
represented by the atomic proposition fin attached to all final states, which in
our case is just the state s2 .
Fig. 8.4: Computation tree of the state space shown in Fig. 8.3
The benefit of the state space tree is that it makes the branching structure
of computations explicit. Each state has a unique history, i.e., there exists
exactly one path that leads to a given state.
In computation tree logic, temporal operators and qualifiers are always
paired. This allows us to define whether a property must hold for all computa-
tions or just for one computation. Let M = (S, Σ, δ, s0 , F ) be a state space
and π a path in M. As above, π(1) refers to the first state on the path.
All The all quantifier (denoted by A) expresses the condition that a temporal
logic formula ϕ applies to all paths π starting in a state s:
M, s |= A(ϕ) ⇐⇒ ∀ π, π(1) = s : M, π |= ϕ.
Exists The exists quantifier (denoted by E) expresses the condition that there
exists at least one computation π that starts in s for which a temporal
logic formula evaluates to true:
M, s |= E(ϕ) ⇐⇒ ∃ π, π(1) = s : M, π |= ϕ.
In CTL, pairs of quantifiers and the until operator can be written AU(ψ, ϕ),
which is equivalent to A[ψ U ϕ]. We shall use the latter form from now on.
Pairing the temporal logic operators X, F, G, U with the path quantifiers E
and A results in eight possible combinations, illustrated in Fig. 8.5. Since a
CTL formula applies to computation trees that start in a given state, the tree
structure of the state space visualises in a straightforward fashion how CTL
formulas are evaluated. In the following discussion, the state s0 refers to the
root of the computation tree.
In Fig. 8.5, EX(ϕ) states that the property ϕ must hold in at least one
state that immediately succeeds s0 , whereas AX(ϕ) requires that ϕ holds in
all directly succeeding (next) states.
Similarly, EF(ϕ) requires ϕ to become true in at least one reachable future
state, while AF(ϕ) states that, for all computations, ϕ must eventually become
true in some future state. The temporal operator F does not state when a
property becomes true; hence, this may happen after different numbers of state
transitions from the initial state s0 . Nevertheless, AF(ϕ) requires that there
does not exist a single computation in which ϕ remains false indefinitely.
CTL expressions always refer to paths in the state space and, hence, EG(ϕ)
requires that ϕ always holds along at least one path. In Fig. 8.5, we have
highlighted only one path. However, it may well be the case that several paths
satisfy ϕ globally. Note that the globally operator includes the state in which
the path starts, and hence ϕ must also hold in s0 . AG(ϕ) requires that ϕ holds
henceforth in all future states of all computations.
Finally, E[ψ U ϕ] requires that along at least one path, ψ must hold until
ϕ becomes true. We have visualised this property using different shadings of
states in Fig. 8.5: ψ evaluates to true in the grey states, and ϕ in the black
states. A[ψ U ϕ] evaluates to true only if, no matter which alternatives are
chosen, ψ holds until ϕ becomes true. It is important to recall that the until
temporal operator requires the condition to become true eventually: a state
must be reached where ϕ evaluates to true, no matter which quantifier is
applied to the temporal operator.
Several equivalences hold between CTL formulas; the first set concerns
negation:
M, s |= ¬AX(ϕ) ⇐⇒ M, s |= EX(¬ϕ),
M, s |= ¬EX(ϕ) ⇐⇒ M, s |= AX(¬ϕ),
M, s |= ¬AG(ϕ) ⇐⇒ M, s |= EF(¬ϕ),
M, s |= ¬EG(ϕ) ⇐⇒ M, s |= AF(¬ϕ).
We observe that negation always switches the quantifier. That is, if a property
does not always hold, then there exists at least one path in the computation
tree where it does not hold.
The next set of equivalences is referred to as the existential rules:

AF(ϕ) ⇐⇒ A[true U ϕ],
EF(ϕ) ⇐⇒ E[true U ϕ],
A[ψ U ϕ] ⇐⇒ ¬E[(¬ϕ) U (¬ψ ∧ ¬ϕ)] ∧ ¬EG(¬ϕ).
These equivalences show that all CTL formulas can be rewritten in such a
way that only the combinations EX, E[_ U _], and AG are required, where the
underscore _ is a placeholder for formulas. For this reason, formulas that use
only these three combinations are said to be in the existential normal form of
CTL.
The relation between the formulas AF(ϕ) and A[true U ϕ], as well as between
EF(ϕ) and E[true U ϕ], becomes apparent in Fig. 8.5 when we assume that the
states shaded grey are true.
The last equivalence in particular deserves an explanation. A[ψ U ϕ] states
that ψ must hold until ϕ becomes true; that is, up to this point in time, either
ψ or ϕ must evaluate to true. The first part of the equivalence, ¬E[(¬ϕ) U (¬ψ ∧
¬ϕ)], addresses exactly this issue: there must be no path along which ϕ remains
false until a state is reached in which both ψ and ϕ are false. In other words,
ψ must not cease to hold before ϕ becomes true. The second part, ¬EG(¬ϕ),
ensures that ϕ becomes true eventually, i.e., that no path exists on which ϕ
remains false globally, and thereby forever.
We now return to the earlier example of the customer care system for ordering
express deliveries. We showed that every shipment that is paid for is also sent,
which is a system requirement. Changing the perspective, we would also like
to verify that a shipment is sent only if it has been paid for. This requirement
is captured by the following CTL formula:

M, s0 |= ¬E[¬paid U (sent ∧ fin)].

This CTL formula reads as follows: There exists no path where a shipment is
not paid until the system reaches a final state in which the shipment is sent.
Note that a final state can only be reached by way of the state transition
hang up.
Looking at the example in Fig. 8.2 and assuming that for the final states
s2 and s6 fin is true, it is apparent that no such path exists.
However, the state space is not complete with regard to the possible
system behaviour. Since the customer care agent interacts with the client over
the phone, the client may choose to hang up the phone at any time. This
observation leads to a number of additional final states depending on the point
in time at which the customer hangs up, as depicted in Fig. 8.6.
Fig. 8.6: State space of a customer care system including hang up choices of
customers
If we apply the above formula to the system design, it becomes clear that
a path in the computation tree exists that violates the desired property:
π = s0 , s3 , s4 , s5 , s11 .
Fig. 8.7: Corrected state space of a customer care system satisfying the formal
property M, s0 |= ¬E[¬paid U (sent ∧ fin)]
It might be a surprise, however, that not all LTL formulas can be expressed
in CTL, as will be discussed in Section 8.4, when behavioural properties are
analysed. Experience shows that most behavioural specifications that are
practically relevant for the design and analysis of software systems can be
expressed in CTL.
In this book, we aim to provide a basic understanding of LTL and CTL
model checking. The topic has already been covered in an exhaustive manner
in several excellent textbooks. Tools exist that can perform model checking
automatically and in a very efficient manner. For details, the interested reader
is referred to the bibliographical notes at the end of this chapter.
Fig. 8.8: The computations of the system intersect with the computations
satisfying the property ϕ
These sets are shown in the Venn diagram in Fig. 8.8. The set LM on the
left-hand side contains all computations that the system is able to perform,
i.e., the paths of the system. On the right-hand side, all computations that are
permitted by the property ϕ are shown. To illustrate the motivation behind
the LTL model-checking approach, we investigate the subsets shown in this
figure.
The intersection LM ∩ Lϕ contains all computations that are possible for the
system and, at the same time, satisfy the desired property. Lϕ − LM contains
all computations that satisfy the property but are not possible system
behaviour; since these computations can never occur, this set is of no concern.
The problematic set is LM − Lϕ , because it contains possible system
behaviour that at the same time violates the desired property. Therefore, in
LTL model checking we check whether this set is empty. If this set is empty,
then the system behaviour satisfies the property. If it is not empty, then we
have found a violation. In other words, the system satisfies the property if its
computations are a subset of the computations that satisfy the property:
LM ⊆ Lϕ . (1)
If this subset relation holds then the system always behaves in a way that
satisfies the desired property, because LM does not contain any computations
that are not covered by Lϕ . This situation is illustrated in Fig. 8.9.
Fig. 8.9: The set of computations of the system is included in the set of
computations satisfying the property ϕ
When discussing the overall picture of system verification shown in Fig. 8.1,
we highlighted the fact that model checkers return a counterexample whenever
a violation is found. The Venn diagram shown in Fig. 8.8 can be used to
illustrate the role of a counterexample in this context.
Recall that the subset relation can be decided by an emptiness check on an
intersection that involves a complement. In particular, the set of computations
that are not allowed by ϕ results from the negation of ϕ: L¬ϕ refers to the
set of computations that are not covered by Lϕ . This leads to a condition that
is equivalent to the one specified in (1):
LM ∩ L¬ϕ = ∅. (2)
This condition states that there exists no computation in the state space M
that is also in the set of disallowed computations L¬ϕ .
If this condition is violated, there must be at least one element p ∈ LM ∩L¬ϕ .
Since the path p is in both sets, we can conclude that p is a possible system
behaviour and at the same time satisfies the negation of the property ϕ.
Therefore, p is the counterexample we were looking for. This counterexample
is returned to the designer, who can use it to improve the system design.
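On a finite abstraction, the emptiness check of LM ∩ L¬ϕ and the extraction of a counterexample can be sketched in a few lines. The following sketch is illustrative only: finite sets of finite runs stand in for the languages, whereas real LTL model checkers operate on automata over infinite words; all names and the example runs are hypothetical.

```python
# Illustrative sketch only: finite sets of finite runs stand in for the
# languages L_M and L_phi; real LTL model checking uses Buechi automata.

def check(system_runs, allowed_runs):
    """Return (True, None) if L_M is a subset of L_phi, otherwise
    (False, counterexample) with some run from L_M ∩ L_not_phi."""
    violating = system_runs - allowed_runs   # L_M ∩ L_¬phi
    if not violating:
        return True, None
    return False, sorted(violating)[0]       # a counterexample run

# Hypothetical runs, each a tuple of state names.
L_M   = {("s0", "s1", "s2"), ("s0", "s3", "s2")}
L_phi = {("s0", "s1", "s2")}                 # runs satisfying phi

ok, cex = check(L_M, L_phi)
assert (ok, cex) == (False, ("s0", "s3", "s2"))
```

The counterexample returned here plays exactly the role described above: a run that the system can perform but that the property forbids.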
Although at first glance it seems that CTL is more complex than LTL, because
it adds quantifiers to temporal logic formulas, the approach to checking whether
an expression holds is, in fact, simpler.
This is due to the property that LTL formulas must be examined for every
computation of an automaton; for this reason, LTL model checking is based
on the set-theoretic approach discussed above. CTL model checking, in
contrast, labels the states of the state space with the subformulas of the
expression under examination, such as Φ = EX(¬p ∧ q), working from the
innermost subformulas outwards. To check E[ψ U ϕ], the algorithm marks the
states in which ϕ holds and traces their incoming state transitions backwards;
in each
backtracing step, it checks whether ψ holds. If this is the case, the state is
marked with E[ψ U ϕ] and all incoming state transitions to this state are also
traced back, checking for ψ. This is repeated until ψ is not true in an incoming
state.
AG(ϕ) is checked by tracing all state transitions forwards and ensuring
that in every reachable state in the state space ϕ evaluates to true.
Every time the algorithm has checked a subformula, all states in which that
subformula holds are marked accordingly, until the check has been completed
for the whole CTL expression that we started with. If the initial state is
marked with the whole CTL expression, the system satisfies the requirement.
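The labelling procedure for the combinations EX, E[_ U _], and AG can be sketched over an explicit state space given as a successor relation. The sketch below is a minimal illustration, not the book's algorithm verbatim; the state space, the labelling sets, and all function names are assumptions made for the example.

```python
# A minimal sketch of CTL labelling for EX, E[_ U _] and AG over an
# explicit state space; the data structures and names are illustrative.

def sat_EX(states, succ, sat_phi):
    # EX(phi): some direct successor satisfies phi.
    return {s for s in states if any(t in sat_phi for t in succ[s])}

def sat_EU(states, succ, sat_psi, sat_phi):
    # E[psi U phi]: start from the states satisfying phi and trace
    # transitions backwards as long as psi holds in the predecessor.
    result = set(sat_phi)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in result and s in sat_psi \
                    and any(t in result for t in succ[s]):
                result.add(s)
                changed = True
    return result

def sat_AG(states, succ, sat_phi):
    # AG(phi): greatest fixpoint; repeatedly discard states that satisfy
    # phi but have a successor outside the current candidate set.
    result = set(sat_phi)
    changed = True
    while changed:
        changed = False
        for s in list(result):
            if any(t not in result for t in succ[s]):
                result.discard(s)
                changed = True
    return result

# Tiny hypothetical state space: s0 -> s1 -> s2, with a self-loop on s2.
states = {"s0", "s1", "s2"}
succ = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
sat_p = {"s1", "s2"}                   # states labelled with p

assert sat_EX(states, succ, sat_p) == {"s0", "s1", "s2"}
assert sat_EU(states, succ, states, sat_p) == states   # E[true U p]
assert sat_AG(states, succ, sat_p) == {"s1", "s2"}
```

Because every CTL formula can be rewritten into these combinations, repeated application of such fixpoint computations suffices to label the whole state space.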
8.4.1 Reachability
In the course of this book, we have already elaborated on the most elementary
behavioural property, reachability. For instance, in Chapter 2, reachability was
introduced as the possibility of advancing to a particular state by traversing the
state transitions of a labelled state transition system. In general, reachability
expresses the property of a system that it is able to reach a certain state. It
does not prescribe that the system should always reach that state.
For instance, in the ticket vending machine example shown in Fig. 3.3, we
used reachability to argue that it should be possible to cancel the purchase of
a ticket. Of course, we did not require that a purchase must eventually be
cancelled.
Reachability of a state in which ϕ holds is expressed in CTL as follows:

CTL: M, s |= EF(ϕ).

A stronger variant nests the reachability formula inside AG:

CTL: M, s |= AG(EF(ϕ)).
In this formula, reachability of ϕ is required for s and every state that can
occur after s. This can be used to express the condition that it must always
be possible to reach a state where ϕ is true, no matter what happens. In the
ticket vending machine example, this property would be too strong, since in
every reachable state it must be possible to cancel the purchase. This is not
desired, since it would allow us to cancel a purchase even after it had been
completed.
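The property AG(EF(ϕ)) can be checked on an explicit state space by combining a backward and a forward reachability computation. The following sketch assumes a state space given as a successor relation; the vending machine states and labels are hypothetical.

```python
# Sketch of checking M, s0 |= AG(EF(phi)): compute the states from which
# phi is reachable (EF), then require that every state reachable from s0
# lies in that set (AG). The example state space is hypothetical.

def backward_reach(succ, targets):
    # EF(phi): states with some path into `targets`.
    result = set(targets)
    changed = True
    while changed:
        changed = False
        for s, ts in succ.items():
            if s not in result and any(t in result for t in ts):
                result.add(s)
                changed = True
    return result

def forward_reach(succ, start):
    # States reachable from `start`, including `start` itself.
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for t in succ[s]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def always_reachable(succ, start, sat_phi):
    return forward_reach(succ, start) <= backward_reach(succ, sat_phi)

# Vending machine sketch: s0 initial, s1 selection, s2 completed (no way
# back); cancellation is modelled as returning to s0.
succ = {"s0": {"s1"}, "s1": {"s0", "s2"}, "s2": {"s2"}}
assert always_reachable(succ, "s0", {"s0"}) is False  # s2 cannot cancel
```

The failing check mirrors the discussion above: once the purchase is completed, the cancellation state is no longer reachable, so AG(EF(ϕ)) is too strong for this example.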
Based on the equivalence of F(ϕ) and (true) U (ϕ), we can restrict the paths
in the state space for which reachability is satisfied. This observation can be
illustrated by a ticket vending machine similar to the one we have introduced
earlier in this book.
Let the atomic proposition init be true only in the initial state of the
automaton. The following expression requires that returning to the initial
state must only be possible if funds have been inserted, i.e., if the amount a of
money paid is larger than 0:
8.4.2 Safety

A safety property states that something undesired, represented by ϕ, never
happens:

LTL: M, s |= G(¬ϕ),
CTL: M, s |= AG(¬ϕ).
Both of these formulas state that in the current and all future states ϕ will
always evaluate to false. Similarly to reachability, safety can be put under
conditions. The abstract condition “while ϕ holds, ψ must not become true”
is expressed by the following formulas:
For instance, the statement that “a ticket will not be supplied unless the
ticket purchase has been confirmed” can be expressed as follows, where the
atomic proposition conf indicates that the ticket has been confirmed and supp
indicates that it has been supplied:
8.4.3 Liveness
Checking a system only for safety is, in general, not sufficient to ensure the
proper system behaviour, because every safety property can be satisfied by
preventing the system from doing anything at all. For this reason, liveness
properties require that the system makes progress.
Liveness properties can be expressed directly using the eventually temporal
operator F in LTL and CTL:
LTL: M, s |= F(ϕ),
CTL: M, s |= AF(ϕ).
These temporal logic formulas express the condition that some property should
eventually become true. For this reason, liveness properties are sometimes
also referred to as eventuality properties. A well-known example of a liveness
property is the guaranteed termination of a system; it may seem paradoxical
at first that the termination of a program is classified as a liveness property.
Again, we can put a temporal expression under conditions using the until
operator U instead of F. Recall that in CTL, A[ψ U ϕ] states that a state in
which ϕ evaluates to true must eventually be reached and, until this state is
reached, ψ must be true.
The response pattern combines the two: whenever ψ holds, ϕ must eventually
hold afterwards:

LTL: M, s |= G(ψ → F(ϕ)),
CTL: M, s |= AG(ψ → AF(ϕ)).

In LTL, encapsulating the implication in G ensures that for every occurrence
of ψ a reaction ϕ occurs. This reaction pattern is useful, for instance, in
asynchronous message-based interaction, where an apt statement of the
requirement is "every request (ψ) is eventually responded to by a message
(ϕ)". In CTL, the same is achieved by encapsulating the implication in AG,
meaning that the implication should hold for all states globally.
Safety and liveness properties are, arguably, the most important kinds
of behavioural properties, because in combination they ensure that desired
behaviour will happen, while undesired behaviour will not happen. In fact,
every temporal logic formula has an equivalent representation comprising a
conjunction of a safety property and a liveness property.
8.4.4 Fairness

A fairness property requires that some property ϕ holds infinitely often along
a computation:

LTL: M, s |= GF(ϕ).
Fairness properties are important for systems that are designed to run
forever. For instance, when designing an elevator control system, we have to
make sure that it will always be possible to get to the ground floor. Another
example is a traffic light that is modelled to show a green light infinitely often.
Note that this is different from the requirement that the traffic light should
show green forever.
It is important to mention that, in general, fairness cannot be expressed
in CTL, because CTL does not allow the G and F operators to be combined
without an intervening path quantifier.

8.5 Business Process Compliance

Compliance rules for business processes can be expressed in BPMN-Q, a
visual query language whose patterns have a formal semantics in temporal
logic. Each pattern can be translated into a CTL formula. As a result, com-
pliance rules expressed in BPMN-Q can be verified by CTL model-checking
tools.
BPMN-Q is activity-oriented, since elementary patterns prescribe rules
about the presence, absence, and ordering of activities in business processes.
It reuses graphical primitives from BPMN for start events and end events and
for activities, as shown in Fig. 8.10.
The approach is based on the matching of query elements to business
processes. For instance, a BPMN-Q start event matches the initialisation of a
business process. Along similar lines, BPMN-Q end events match every end
event in the business process. If the business process has several end events in
concurrent paths, then the triggering of all of these events indicates termination
of the business process instance and, thus, matches a BPMN-Q end event.
Fig. 8.12: Example of the difference between the BPMN-Q «leads to» and
«precedes» path edge quantifiers
Path edges can also include the keyword Exclude, which expresses the
condition that a particular activity must not appear on the path, i.e., cannot
be executed in the sequence that is represented by the path edge. This activity
is referred to by a parameter of the keyword. For instance, Exclude(A) denotes
that execution of activity A is excluded on the path.
In the present example, all activities can be executed (in different compu-
tations), so that any Exclude annotation involving activities in the process
model would not be matched.
As mentioned above, BPMN-Q compliance rules are based on patterns, which
are translated into corresponding CTL formulas that can be checked against
the state space of the business process.
Elementary Patterns

The response pattern, expressed by a path edge qualified with «leads to»,
requires that whenever activity A is executed, activity B is eventually executed
afterwards:

AG(A → AF(B)).

The precedence pattern, expressed by a path edge qualified with «precedes»,
requires that activity B can only be executed if activity A has been executed
before:

AG(¬E[¬A U B]).
This formula states that there exists no path where activity A is not executed
until B has occurred. It mirrors the response pattern in that every response
message must be preceded by a request.
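A pattern of this kind can be evaluated directly on an explicit state space. The sketch below checks the response pattern AG(A → AF(B)) by computing AF(B) as a fixpoint and then testing the implication in every state; the state space, labellings, and function names are hypothetical and serve illustration only.

```python
# Sketch of checking the response pattern AG(A -> AF(B)) on an explicit
# state space; the state space and labellings below are hypothetical.

def sat_AF(succ, sat_b):
    # AF(B): B holds now, or all successors already satisfy AF(B).
    result = set(sat_b)
    changed = True
    while changed:
        changed = False
        for s, ts in succ.items():
            if s not in result and ts and all(t in result for t in ts):
                result.add(s)
                changed = True
    return result

def response_holds(succ, sat_a, sat_b):
    af_b = sat_AF(succ, sat_b)
    # AG: the implication A -> AF(B) must hold in every state.
    return all(s in af_b for s in succ if s in sat_a)

# After A (executed in s1) the run may loop in s2 forever without ever
# executing B, so the response rule is violated.
succ = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2", "s3"}, "s3": {"s3"}}
assert response_holds(succ, sat_a={"s1"}, sat_b={"s3"}) is False
```

For simplicity the sketch quantifies over all states of the relation rather than only those reachable from the initial state, which coincides here because every state is reachable.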
The global presence pattern requires that an activity A is executed in every
instance of the business process. It connects the start event of the process
to A with a path edge qualified with «leads to», which translates into the
following CTL formula:

AG(start → AF(A)).
Using the Exclude annotation, we can also express the global absence of an
activity in every business process instance. This is shown in Fig. 8.14. This
pattern consists of a path that connects the start and end of a business process.
The exclusion of A states that there exists no such path, on which A occurs
between the start and termination of a business process. Similarly to the start
event, we also introduce an atomic proposition end that is true in every final
state of the state space. This leads to the following CTL expression:

AG(start → A[¬A U end]).

The occurrence of the start event implies that, on every path, the atomic
proposition A must remain false, i.e., activity A must not be executed, until
end becomes true, which indicates the termination of the business process.
A modification of the global absence pattern allows the presence of a
particular activity to be excluded in only some region of a business process,
i.e., between two activities. For this purpose, the start and end events in the
pattern are replaced with particular activities, which is mirrored in the CTL
formula by replacing the atomic propositions start and end with the atomic
propositions for the respective activities.
For an example of a BPMN-Q pattern and its evaluation, recall the business
process for reviewing a scientific manuscript, which we introduced in Fig. 5.26.
Briefly, the process captures the activities from the receipt of a review request
to the sending of the review by the reviewer to the programme committee
chairperson.
Figure 8.15 shows the state space for this business process, derived from
the Petri net that corresponds to the process model. Looking at the state
space, it becomes obvious that it follows a tree structure, which is required for
the proper annotation of states with atomic propositions for activities, as we
have argued above. Also, we have reused the abbreviations of activity names
for the atomic propositions of the states.
For this business process, we state the following compliance rules:
1. In every reviewing process, a decision to accept or reject a review must be
taken.
2. When a review has been accepted, a response must be sent.
Fig. 8.15: State space tree for the reviewing process presented in Fig. 5.26
A further compliance rule requires that, before the activity GP can be
executed, the activity SR must be absent. This pattern is depicted in Fig. 8.16
and is formalised by the following CTL formula:

AG(start → ¬EF(SR ∧ EF(GP))).
This expression states that no path exists that begins with the start event of
the process, continues at some point with SR and allows GP to be executed
afterwards. Note that the pattern permits SR to be executed after GP.
Looking at the process model, it is obvious that this compliance rule is
satisfied, because GP and SR lie on a path of the business process model and
therefore cannot be executed in the “wrong” order.
However, we should bear in mind that process models and compliance rules
are typically specified by different persons with different responsibilities and
that we are addressing the automatic verification of compliance rules using
CTL and state spaces as formal models for model checking. Hence, we should
apply the CTL formula to the state space shown in Fig. 8.15, verify it using a
CTL model checker, and deduce that in fact no path exists that allows SR to
be executed such that it can be followed by GP.
Advanced Patterns
The patterns introduced above provide building blocks to express more complex
compliance rules by combining them. This results in advanced BPMN-Q
patterns. Looking at the elementary patterns above, we observe that each of
them comprises a start node and an end node, both of which can be events or
activities, and a path edge that is qualified with one of the keywords «leads to»
and «precedes», and may also be annotated with Exclude. In fact, the CTL
formula results from the particular configuration of path edges in the pattern,
while the nodes are represented by atomic propositions in the CTL formula.
Fig. 8.17: An advanced compliance rule made up of the precedence and response
patterns
Note that this compliance rule requires the execution of AR and SR only if
PR is executed.
A commonly used pattern is the between scope presence pattern, which
requires that an activity B must always be executed after an activity A and
before another activity C if any one of the activities A and C is executed.
This is modelled in BPMN-Q using a combination of the response and prece-
dence patterns, as depicted in Fig. 8.18. The formalisation of that pattern is
straightforward:
Let us first elaborate on the second part of the pattern on the right-hand side
of the logical conjunction (∧), which is expressed using the after scope absence
pattern. This subformula states that in all states, it holds globally that the
occurrence of B in the state space requires that A must not ever occur before
the process terminates with the atomic proposition end. Vice versa, this also
means that if B does not occur, then A can occur. However, if A occurs then
B must not have occurred before it, otherwise the compliance rule is violated.
Furthermore, it is also allowed that neither A nor B is executed at all.
The first part of the CTL formula is analogous, but it uses the before scope
absence pattern. Here, the formula requires that no such path exists where
B can be executed after A. Hence, the execution of A excludes the execution
of B, but if A is not executed, then no restriction on the execution of B is
imposed.
Before we conclude this section, we shall attempt to shed some light on some
rather generic correctness criteria for process models. The above examples of
compliance rules were domain-specific. That is, for the particular domain of
reviewing scientific publications, we expressed a number of requirements that
any compliant reviewing process must fulfil. Generic correctness criteria, in
contrast, apply to business process models regardless of their domain.
The first generic correctness criterion states that every activity in a business
process model should be able to contribute to the successful termination of the
process. Therefore, we refer to it as the participation and termination pattern.
In detail, this means that every activity participates in a process instance,
and the process eventually reaches an end event after the activity has been
executed. For one activity, this is shown in Fig. 8.21 and is formalised in the
following CTL expression:

EF(A) ∧ AG(A → AF(end)).
In the example shown in Fig. 8.22, both activities A and B have
a path from the initial state in the state space to their execution, denoted by
the atomic propositions A and B, respectively, in Fig. 8.22b. For activity A,
there exists also a path to a terminating state, s4 . However, not all paths in
the state space lead to a state where the atomic proposition end becomes true.
Hence, the criterion is violated, because there exists the possibility that the
process does not terminate properly after A has been executed.
Recall that the correctness query in Fig. 8.21 requires that activity A can
participate in a business process, i.e., it can be executed, and its execution
leads to the successful termination of the process. For a generic correctness
check of a business process, we need to replicate the correctness query for
every activity that exists in the business process model.
Let Σ be the set of activities {a1 , a2 , . . . , an } in a business process model.
The participation of all activities and the proper termination of the process
can be verified by conjoining the correctness queries of all activities:

(EF(a1 ) ∧ AG(a1 → AF(end))) ∧ . . . ∧ (EF(an ) ∧ AG(an → AF(end))).
An even simpler generic correctness query connects the start event of a
business process directly to its end event with a «leads to» path edge. The
beauty of this correctness query lies in its simplicity, which is also
revealed by its formalisation in CTL:
AG(start → AF(end)).
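Replicating the per-activity correctness query over a whole activity set can be mechanised. The sketch below assumes the query shape EF(a) ∧ AG(a → AF(end)) and a plain textual CTL syntax; both, as well as the helper names and the activity set, are assumptions made for illustration rather than the book's exact notation.

```python
# Hypothetical helpers that build one participation-and-termination
# query per activity and conjoin them into a single correctness
# condition; formula shape and textual syntax are illustrative.

def participation_query(activity):
    return f"EF({activity}) & AG({activity} -> AF(end))"

def conjoin(formulas):
    return " & ".join(f"({f})" for f in formulas)

sigma = ["a1", "a2"]                     # hypothetical activity set
query = conjoin(participation_query(a) for a in sigma)
print(query)
# (EF(a1) & AG(a1 -> AF(end))) & (EF(a2) & AG(a2 -> AF(end)))
```

The resulting conjunction can then be handed to a CTL model checker as one formula, one conjunct per activity of the process model.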
For every BPMN-Q compliance pattern and complex query that we have
introduced above, there exists a precise formalisation in CTL. We have also
briefly covered the topic of CTL model checking in subsection 8.3.2. Because
every BPMN-Q compliance rule can be transformed into an equivalent CTL
formula, BPMN-Q compliance rules can be checked against the state space of
a business process model.
As mentioned in Section 8.1, model checkers distinguish between positive
and negative verification results. If a process model satisfies a compliance rule,
i.e., the temporal logic expression is satisfied, nothing remains to be done.
However, if a compliance rule is violated, then the process model needs to be
mended. This can be a difficult task if no information about the reason for
the violation is given. Therefore, model checkers provide information about
the sequence of state transitions that leads to a violation of the temporal logic
expression.
This information is, however, of limited use, because the model checker
returns only the first of potentially many sequences that lead to a violation.
In addition, the sequence of state transitions needs to be mapped back to the
state space and from there to the business process model to identify the root
cause of the violation, which requires manual effort.
For this reason, anti-patterns have been introduced into BPMN-Q as a
means to explain the violation of a compliance rule visually in the business
process model under examination. If the formal property expressed in a
BPMN-Q compliance rule is not satisfied by a business process model, an
anti-pattern rule is constructed. By matching this anti-pattern rule against the
business process model, we can identify and highlight all paths, i.e., complete
fragments of the process model, that violate the compliance rule.
Formally, anti-pattern rules are constructed from compliance rules by the
negation of their formal specifications in CTL. For each BPMN-Q pattern,
an anti-pattern exists. The following example shows this negation for the
response pattern, where we have applied the CTL equivalence rules introduced
in Section 8.2.2:

¬AG(A → AF(B)) ⇐⇒ EF(¬(A → AF(B))) ⇐⇒ EF(A ∧ ¬AF(B)) ⇐⇒ EF(A ∧ EG(¬B)).
The negated CTL formula can also be expressed using BPMN-Q, as shown
in Fig. 8.24. Note that the path in the anti-pattern is not qualified. This is the
case because the pattern states that there exists a path, whereas BPMN-Q
path qualifiers apply a path edge to all possible paths (see subsection 8.5.1).
The anti-patterns for the other elementary patterns can be constructed
similarly. Figure 8.25 shows anti-patterns for the precedence, global presence,
and global absence patterns.
If a compliance rule is composed of several elementary patterns, its coun-
terpart is nevertheless constructed by negating the complete CTL formula
for the compliance rule. We have explained earlier that complex rules are
constructed by joining the CTL formalisations of the elementary patterns
using logical conjunction. According to De Morgan’s laws, the negation of
conjoined statements is equivalent to the disjunction of the negated statements.
Formally, this is expressed as follows. Let ϕ1 , ϕ2 , . . . , ϕn be CTL formulas for
elementary compliance patterns:
¬(ϕ1 ∧ ϕ2 ∧ · · · ∧ ϕn ) ⇐⇒ ¬ϕ1 ∨ ¬ϕ2 ∨ · · · ∨ ¬ϕn .
Consequently, ¬ϕ1 , ¬ϕ2 , . . . , ¬ϕn are the anti-patterns of the elementary com-
pliance patterns. The counterpart of a complex compliance rule is, therefore,
the disjunction of the elementary anti-patterns.
Fig. 8.25: Anti-patterns for (a) the precedence pattern, (b) the global presence
pattern, and (c) the global absence pattern
Fig. 8.26: Process model fragment that is highlighted by the response anti-
pattern
Applied to the reviewing process model shown in Fig. 5.26, this anti-pattern
matches the only path that starts in the initial state of the state space, executes
the activity AR (Accept Reviewing), and leads to a final state while SR is never
executed; see Fig. 8.15. This is the case if, during the preparation of the review,
an exception occurs and the reviewer sends an apology back. Figure 8.26 shows
the part of the process model that is highlighted by the BPMN-Q anti-pattern.
Bibliographical Notes
We started this chapter by arguing that predicate logic was insufficient to
express the desired properties of a behavioural system in an elegant and formal
way. Instead, we proposed the use of temporal logic, which addresses the
relation between states by means of temporal operators.
Temporal logic goes back to Pnueli (1977), who introduced LTL in the
late 1970s. The second type of temporal logic is CTL, which was put forward
by Clarke and Emerson (1981) and Emerson and Halpern (1985). We have
explained that CTL and LTL are not equivalent in their expressiveness and
that neither supersedes the other. This means that there exist expressions in
each of the logics that cannot be expressed in the other.
The visualisation of possible combinations of temporal operators and CTL
qualifiers in Fig. 8.5 was inspired by Alessandro Artale’s lecture notes for his
course on formal methods at the Free University of Bozen-Bolzano.
Combining LTL and CTL leads to CTL*, which was proposed by Emerson
and Halpern (1986). Every formula that can be expressed in LTL or CTL can
also be expressed in CTL*. Put simply, CTL* removes the restriction in CTL
that every temporal operator must be paired with a qualifier, and this allows
more expressive statements.
A variety of extensions to LTL exist; one of the most notable ones is the
use of past temporal operators, which were added by Lichtenstein et al. (1985).
Past linear temporal logic (PLTL) is not more expressive than pure LTL, which
means that the same properties can be stated in both logics, but PLTL allows
more concise expressions in some cases.
Model checking is the process of testing whether a property is satisfied
by a system using a formal specification of the former and the state space
of the latter. Owing to their different semantics, LTL and CTL use different
approaches. LTL model checking is based on a set-theoretic approach that
incorporates the languages of the property and the system, and was introduced
by Lichtenstein and Pnueli (1985). The approach followed in this chapter is
that of Vardi and Wolper (1986). CTL, in contrast, can be checked by an
algorithmic approach, which was first demonstrated by Clarke et al. (1986).
Karsten Wolf’s group at the University of Rostock provides a rich set of
verification and analysis tools that are available under an open source license.
The Low Level Petri net Analyzer (LoLA) allows us to define properties in
CTL* and to verify whether a given Petri net satisfies these properties. LoLA
was introduced by Schmidt (2000).
Besides these research publications, a number of excellent textbooks exists
that cover the topics of temporal logic and model checking in an exhaustive
fashion. We mention those of Baier and Katoen (2008), Clarke et al. (1999b),
and Berard et al. (2010).
In particular, the categorisation of behavioural properties into reachability,
safety, liveness, and fairness was inspired by Berard et al. (2010). Safety and liveness
Pnueli A (1977) The temporal logic of programs. In: 18th Annual Symposium on
Foundations of Computer Science, IEEE Computer Society, pp 46–57
Pomello L, Rozenberg G, Simone C (1992) A survey of equivalence notions for net
based systems. In: Advances in Petri Nets, Springer, Lecture Notes in Computer
Science, vol 609, pp 410–472
Rabin MO, Scott D (1959) Finite automata and their decision problems. IBM J Res
Dev 3(2):114–125
Reisig W (2013) Understanding Petri Nets: Modeling Techniques, Analysis Methods,
Case Studies. Springer
Russell N, Hofstede AHMT, van der Aalst WM, Mulyar N (2006) Workflow control
flow patterns: A revised view. Tech. rep., BPM Center Report BPM-06-22
Schmidt K (2000) Lola: A low level analyser. In: ICATPN, pp 465–474
Stachowiak H (1973) Allgemeine Modelltheorie. Springer
Valmari A (1998) The state explosion problem. In: Lectures on Petri Nets I: Basic
Models, Advances in Petri Nets, Springer, Lecture Notes in Computer Science,
vol 1491, pp 429–528
Vardi MY, Wolper P (1986) An automata-theoretic approach to automatic program
verification (preliminary report). In: Proc Symposium on Logic in Computer
Science (LICS ’86), IEEE Computer Society, pp 332–344
Weber B, Reichert M, Rinderle-Ma S (2008) Change patterns and change support
features: Enhancing flexibility in process-aware information systems. Data Knowl
Eng 66(3):438–466
Weidlich M (2011) Behavioural profiles – a relational approach to behaviour consis-
tency. PhD thesis, Hasso Plattner Institute, University of Potsdam
Weidlich M, Weske M, Mendling J (2009) Change propagation in process models
using behavioural profiles. In: High Performance Computing, Networking Storage
and Analysis, IEEE Computer Society, pp 33–40
Weidlich M, Dijkman R, Mendling J (2010) The ICoP framework: Identification
of correspondences between process models. In: Advanced Information Systems
Engineering, Springer, Berlin, Heidelberg, Lecture Notes in Computer Science,
vol 6051, pp 483–498
Weidlich M, Mendling J, Weske M (2011) Efficient consistency measurement based
on behavioral profiles of process models. IEEE Trans Software Eng 37(3):410–429
Weske M (2012) Business Process Management: Concepts, Languages, Architectures,
2nd edn. Springer
Winskel G, Nielsen M (1995) Models for concurrency. In: Abramsky S, Gabbay DM,
Maibaum TSE (eds) Handbook of Logic in Computer Science, vol 4, Oxford
University Press, pp 1–148