Introduction To Trace Theory

This document is the introduction chapter to a book on trace theory. It provides background on Petri nets and formal language theory as motivations for developing trace theory to model concurrent systems. Some limitations of the interleaving approach to concurrency are discussed. Specifically, interleaving reduces concurrency to non-determinism, which conflates different types of non-determinism and makes refinement and inevitability difficult to analyze. Trace theory was developed to address these issues using tools from formal language theory while still modeling concurrent systems similarly to Petri nets. The chapter introduces basic concepts like strings, alphabets, and projections that are important to trace theory.


Chapter 1

Introduction to
Trace Theory
Antoni Mazurkiewicz
Institute of Computer Science, Polish Academy of Sciences
ul. Ordona 21, 01-237 Warszawa, and
Institute of Informatics, Jagiellonian University
ul. Nawojki 11, 31-072 Krakow, Poland
amaz@wars.ipipan.waw.pl

Contents

1.1 Introduction
1.2 Preliminary Notions
1.3 Dependency and Traces
1.4 Dependence Graphs
1.5 Histories
1.6 Ordering of Symbol Occurrences
1.7 Trace Languages
1.8 Elementary Net Systems
1.9 Conclusions

1.1 Introduction
The theory of traces has been motivated by the theory of Petri Nets and the theory
of formal languages and automata.
Already in the 1960s Carl Adam Petri developed the foundations of the theory of concurrent systems. In his seminal work [226] he presented a model based on the communication of interconnected sequential systems. He also presented an entirely new set of notions and problems that arise when dealing with concurrent systems. His model (actually a family of "net-based" models), which has since been generically referred to as Petri Nets, has provided both an intuitive informal framework for representing basic situations of concurrent systems, and a formal framework for the mathematical analysis of concurrent systems. The intuitive informal framework has been based on a very convenient graphical representation of net-based systems, while the formal framework has been based on the token game resulting from this graphical representation.
The notion of a finite automaton, seen as a restricted type of Turing Machine, had by then become a classical model of a sequential system. The strength of automata
theory is based on the "simple elegance" of the underlying model which admits
powerful mathematical tools in the investigation of both the structural properties
expressed by the underlying graph-like model, and the behavioural properties based
on the notion of the language of a system. The notion of language has provided a
valuable link with the theory of free monoids.
The original attempt of the theory of traces [189] was to use the well developed
tools of formal language theory for the analysis of concurrent systems where the
notion of concurrent system is understood in very much the same way as it is
done in the theory of Petri Nets. The idea was that in this way one would get a framework for reasoning about concurrent systems which, for a number of important problems, would be mathematically more convenient than some approaches based on formalizations of the token game.
In the 1970s, when the basic theory of traces was formulated, the most popular approach to dealing with concurrency was interleaving. In this approach concurrency is replaced by non-determinism: the concurrent execution of actions is treated as a non-deterministic choice of the order of execution of those actions. Although
the interleaving approach is quite adequate for many problems, it has a number
of serious pitfalls. We will briefly discuss here some of them, because those were
important drawbacks that we wanted to avoid in trace theory.
By reducing concurrency to non-determinism one assigns two different meanings
to the term "non-deterministic choice": the choice between two (or more) possible
actions that exclude each other, and the lack of information about the order of
two (or more) actions that are executed independently of each other. Since in the
theory of concurrent systems we are interested not only in the question "what is
computed?" but also in the question "how is it computed?", this identification may
be very misleading.
This disadvantage of the interleaving approach is clearly visible in the treatment of refinement, see e.g. [49]. It is widely recognized that refinement is one of the basic transformations of any calculus of concurrent systems from the theoretical, methodological, and practical points of view. This transformation must preserve the basic relationship between a system and its behaviour: the behaviour of the refined system is the refined behaviour of the original system. This requirement is not respected if the behaviour of the system is represented by interleaving.
Also in considerations concerning inevitability, see [192], the interleaving approach leads to serious problems. In non-deterministic systems where a choice between two partners may be repeated an infinite number of times, a run discriminating against one of the partners is possible; an action of an a priori chosen partner is not inevitable. On the other hand, if the two partners repeat their actions independently of each other, an action of any of them is inevitable. However, in the interleaving approach one does not distinguish between these two situations: both are described in the same way. Then, as a remedy against this confusion, a special notion of fairness has to be introduced, even though this notion lies outside the usual algebraic means for the description of system behaviour.
Finally, the identification of non-deterministic choice and concurrency becomes
a real drawback when considering serializability of transactions [104]. To keep
consistency of a database to which a number of users have concurrent access, the
database manager has to allow concurrent execution of those transactions that do
not interfere with each other. To this end it is necessary to distinguish transactions
that are in conflict from those which are independent of each other; the identification of the non-deterministic choice (in the case of conflicting transactions) with the choice of the execution order (in the case of independent transactions) leads to serious difficulties in the design of database management systems.
Above we have sketched some of the original motivations that led to the formulation of the theory of traces in 1977. Since then the theory has been developed both in breadth and in depth, and this volume presents the state of the art of the theory of traces. Some of the developments have followed the initial motivation coming from concurrent systems, while others fall within areas such as formal language theory, the theory of partially commutative monoids, graph grammars, combinatorics of words, etc.

In this chapter we discuss a number of notions and results that have played a crucial role in the initial development of the theory of traces, and which in our opinion are still quite relevant.

1.2 Preliminary Notions


Let X be a set and R a binary relation in X; R is an ordering relation in X, or X is ordered by R, if R is reflexive (xRx for all x ∈ X), transitive (xRyRz implies xRz), and antisymmetric (xRy and yRx imply x = y); if, moreover, R is connected (for any x, y ∈ X either xRy or yRx), then R is said to be a linear, or total, ordering. A set together with an ordering relation is an ordered set. Let X be a set ordered by a relation R. Any subset Y of X is then ordered by the restriction of R to Y, i.e. by R ∩ (Y × Y).
The set {0, 1, 2, ..., n, ...} will be denoted by ω. By an alphabet we shall understand a finite set of symbols (letters); alphabets will be denoted by Greek capital letters, e.g. Σ, Δ, etc. Let Σ be an alphabet; the set of all finite sequences of elements of Σ will be denoted by Σ*; such sequences will be referred to as strings over Σ. Initial small Latin letters a, b, ..., with possible sub- or superscripts, will denote symbols; final small Latin letters w, u, v, ..., will denote strings. If u = (a1, a2, ..., an) is a string, n is called the length of u and is denoted by |u|. For any strings (a1, a2, ..., an), (b1, b2, ..., bm), the concatenation (a1, a2, ..., an) ∘ (b1, b2, ..., bm) is the string (a1, a2, ..., an, b1, b2, ..., bm). Usually, sequences (a1, a2, ..., an) are written as a1a2...an. Consequently, a will denote the symbol a as well as the string consisting of the single symbol a; the proper meaning will always be understood from the context. The string of length 0 (containing no symbols) is denoted by ε. The set of strings over an alphabet Σ, together with the concatenation operation and the empty string as the neutral element, will be referred to as the monoid of strings over Σ, or the free monoid generated by Σ. The symbol Σ* is used to denote the monoid of strings over Σ as well as the set Σ* itself.

We say that symbol a occurs in string w if w = w1aw2 for some strings w1, w2. For each string w define Alph(w) (the alphabet of w) as the set of all symbols occurring in w. By w(a) we shall understand the number of occurrences of symbol a in string w.
Let Σ be an alphabet and w be a string (over an arbitrary alphabet); then π_Σ(w) denotes the (string) projection of w onto Σ, defined as follows:

π_Σ(w) = ε, if w = ε,
π_Σ(w) = π_Σ(u), if w = ua, a ∉ Σ,
π_Σ(w) = π_Σ(u)a, if w = ua, a ∈ Σ.

Roughly speaking, projection onto Σ deletes from strings all symbols not in Σ. The subscript in π_Σ is omitted if Σ is understood from the context.
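The recursive definition of projection translates directly into code. The following Python sketch is illustrative only (the name `project` is mine, not the chapter's):

```python
def project(w, sigma):
    """String projection pi_Sigma(w): delete from w all symbols not in sigma,
    following the recursive definition (empty-string and last-symbol cases)."""
    if w == "":
        return ""
    u, a = w[:-1], w[-1]
    return project(u, sigma) + a if a in sigma else project(u, sigma)

print(project("abcabc", {"a", "c"}))  # acac
```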
The right cancellation of symbol a in string w is the string w ÷ a defined as follows:

ε ÷ a = ε, (1.1)

(wb) ÷ a = w, if a = b; (wb) ÷ a = (w ÷ a)b, otherwise, (1.2)

for all strings w and symbols a, b. It is easy to prove that projection and cancellation commute:

π_Σ(w) ÷ a = π_Σ(w ÷ a), (1.3)

for all strings w and symbols a.
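A sketch of right cancellation in the same spirit, together with a spot-check of the commutation property (1.3) on one instance (the names `cancel` and `project` are illustrative, not from the text):

```python
def cancel(w, a):
    """Right cancellation w ÷ a: remove the last occurrence of a from w
    (w is returned unchanged if a does not occur), per (1.1)-(1.2)."""
    if w == "":
        return ""
    u, b = w[:-1], w[-1]
    return u if a == b else cancel(u, a) + b

def project(w, sigma):
    """pi_Sigma(w): keep only the symbols of w that lie in sigma."""
    return "".join(a for a in w if a in sigma)

# Commutation (1.3): projecting then cancelling equals cancelling then projecting.
w, a, sigma = "abcab", "b", {"a", "b"}
assert cancel(project(w, sigma), a) == project(cancel(w, a), sigma)
print(cancel("abcab", "b"))  # abca
```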


In our approach a language is defined by an alphabet Σ and a set of strings over Σ (i.e. a subset of Σ*). Frequently, a language will be identified with its set of strings; however, in contrast to many other approaches, two languages with the same set of strings but with different alphabets will be considered as different; in particular, two singleton languages containing only the empty string, but over different alphabets, are considered as different. Languages will be denoted by initial Latin capital letters, e.g. A, B, ..., with possible subscripts and superscripts.

If A, B are languages over a common alphabet, then their concatenation AB is the language {uv | u ∈ A, v ∈ B} over the same alphabet. If w is a string and A is a language, then (omitting braces around singletons)

wA = {wu | u ∈ A}, Aw = {uw | u ∈ A}.


The power of a language A is defined recursively:

A⁰ = {ε}, Aⁿ⁺¹ = AⁿA,

for each n = 0, 1, ..., and the iteration A* of A is the union

A* = ⋃_{n≥0} Aⁿ.

Extend the projection function from strings to languages, defining for each language A and any alphabet Σ the projection of A onto Σ as the language

π_Σ(A) = {π_Σ(u) | u ∈ A}.

For each string w, the elements of the set

Pref(w) = {u | ∃v : uv = w}

are called prefixes of w. Obviously, Pref(w) contains ε and w. For any language A over Σ define

Pref(A) = ⋃_{w ∈ A} Pref(w).

Elements of Pref(A) are called prefixes of A. Obviously, A ⊆ Pref(A). Language A is prefix closed if A = Pref(A). Clearly, for any language A the language Pref(A) is prefix closed. The prefix relation is the binary relation ⊑ in Σ* such that u ⊑ w if and only if u ∈ Pref(w). It is clear that the prefix relation is an ordering relation in any set of strings.
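The prefix operations above can be sketched in Python as follows (the names `prefixes` and `pref` are illustrative, not from the text):

```python
def prefixes(w):
    """Pref(w): the set of all prefixes of w, including '' and w itself."""
    return {w[:i] for i in range(len(w) + 1)}

def pref(A):
    """Pref(A): the union of Pref(w) over all strings w of the language A."""
    out = set()
    for w in A:
        out |= prefixes(w)
    return out

A = {"ab", "ba"}
print(sorted(pref(A)))           # ['', 'a', 'ab', 'b', 'ba']
print(pref(A) == pref(pref(A)))  # True: Pref(A) is prefix closed
```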

1.3 Dependency and Traces


By a dependency we shall mean any finite, reflexive and symmetric relation, i.e. a finite set of ordered pairs D such that if (a, b) is in D, then (b, a) and (a, a) are in D. Let D be a dependency; the domain of D will be denoted by Σ_D and called the alphabet of D. Given a dependency D, call the relation I_D = (Σ_D × Σ_D) − D the independency induced by D. Clearly, an independency is a symmetric and irreflexive relation. In particular, the empty relation, the identity relation in Σ, and the full relation in Σ (the relation Σ × Σ) are dependencies; the first has an empty alphabet, the second is the least dependency in Σ, the third is the greatest dependency in Σ. Clearly, the union and intersection of a finite number of dependencies is a dependency. It is also clear that each dependency is the union of a finite number of full dependencies (since any symmetric and reflexive relation is the union of its cliques).

Example 1.3.1 The relation D = {a, b}² ∪ {a, c}² is a dependency; Σ_D = {a, b, c}, I_D = {(b, c), (c, b)}.

In this section we choose dependency as the primary notion of trace theory; however, for other purposes it may be more convenient to take as a basic notion a concurrent alphabet, i.e. any pair (Σ, D) where Σ is an alphabet and D is a dependency, or any pair (Σ, I), where Σ is an alphabet and I is an independency, or a reliance alphabet, i.e. any triple (Σ, D, I), where Σ is an alphabet, D is a dependency, and I is the independency induced by D.

Let D be a dependency; define the trace equivalence for D as the least congruence ≡_D in the monoid Σ_D* such that for all a, b

(a, b) ∈ I_D ⇒ ab ≡_D ba. (1.4)

Equivalence classes of ≡_D are called traces over D; the trace represented by string w is denoted by [w]_D. By [Σ*]_D we shall denote the set {[w]_D | w ∈ Σ_D*}, and by [Σ]_D the set {[a]_D | a ∈ Σ_D}.

Example 1.3.2 For dependency D = {a, b}² ∪ {a, c}², the trace over D represented by string abbca is [abbca]_D = {abbca, abcba, acbba}.
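Since two strings are trace-equivalent exactly when one can be transformed into the other by repeatedly swapping adjacent independent symbols (this is made precise later in this section), the finite class [w]_D can be enumerated mechanically. The following Python sketch is illustrative only (the names `trace_class` and `indep` are mine, not the chapter's):

```python
from collections import deque

def trace_class(w, indep):
    """Enumerate the trace [w]_D by breadth-first closure under swaps
    of adjacent independent symbols."""
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for i in range(len(u) - 1):
            a, b = u[i], u[i + 1]
            if (a, b) in indep:
                v = u[:i] + b + a + u[i + 2:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen

# Independency induced by D = {a,b}^2 ∪ {a,c}^2 over {a, b, c}:
I = {("b", "c"), ("c", "b")}
print(sorted(trace_class("abbca", I)))  # ['abbca', 'abcba', 'acbba']
```

The output reproduces the class of Example 1.3.2.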

By definition, a single trace arises by identifying all strings which differ only in the ordering of adjacent independent symbols. The quotient monoid M(D) = Σ_D*/≡_D is called the trace monoid over D and its elements the traces over D. Clearly, M(D) is generated by [Σ]_D. In the monoid M(D) some symbols from Σ commute (in contrast to the monoid of strings); for that reason M(D) is also called a free partially commutative monoid over D. As in the case of the monoid of strings, we use the symbol M(D) to denote the monoid itself as well as the set of all traces over D. It is clear that in the case of full dependency, i.e. if D is a single clique, traces reduce to strings and M(D) is isomorphic with the free monoid of strings over Σ_D.

We are going to develop the algebra of traces along the same lines as it has been done in the case of strings. Let us recall that the mapping φ_D : Σ* → [Σ*]_D such that

φ_D(w) = [w]_D

is a homomorphism of Σ* onto M(D), called the natural homomorphism generated by the equivalence ≡_D.
Now, we shall give some simple facts about the trace equivalence and traces. Let D be fixed from now on; the subscript D will be omitted if it causes no ambiguity, I will be the independency induced by D, Σ will be the domain of D, all symbols will be symbols in Σ_D, all strings will be strings over Σ_D, and all traces will be traces over D, unless explicitly stated otherwise.

It is clear that u ≡ v implies Alph(u) = Alph(v); thus, for all strings w, we can define Alph([w]) as Alph(w). Denote by ~ the binary relation in Σ* such that u ~ v if and only if there are x, y ∈ Σ* and (a, b) ∈ I such that u = xaby, v = xbay; it is not difficult to prove that ≡ is the symmetric, reflexive, and transitive closure of ~. In other words, u ≡ v if and only if there exists a sequence (w0, w1, ..., wn), n ≥ 0, such that w0 = u, wn = v, and for each i, 0 < i ≤ n, w_{i−1} ~ w_i.

Permutation is the least congruence ≈ in Σ* such that for all symbols a, b

ab ≈ ba. (1.5)

If u ≈ v, we say that u is a permutation of v. Comparing (1.4) with (1.5) we see at once that u ≡ v implies u ≈ v.
Define the mirror image w^R of string w as follows:

ε^R = ε, (ua)^R = a(u^R),

for any string u and symbol a. It is clear that the following implication (the mirror rule) holds:

u ≡ v ⇒ u^R ≡ v^R; (1.6)

thus, we can define [w]^R as equal to [w^R].

Since obviously u ~ v implies (u ÷ a) ~ (v ÷ a), and ≡ is the transitive and reflexive closure of ~, we have the following property (the cancellation property) of traces:

u ≡ v ⇒ (u ÷ a) ≡ (v ÷ a); (1.7)

thus, we can define [w] ÷ a as [w ÷ a], for all strings w and symbols a.

We have also the following projection rule:

u ≡ v ⇒ π_Σ(u) ≡ π_Σ(v) (1.8)

for any alphabet Σ, being a consequence of the obvious implication u ~ v ⇒ π_Σ(u) ≡ π_Σ(v).
Let u, v be strings and a, b be symbols with a ≠ b. If ua ≡ vb, then applying rule (1.7) twice we get u ÷ b ≡ v ÷ a; denoting u ÷ b by w, by the cancellation rule again we get u = (ua) ÷ a ≡ (vb) ÷ a = (v ÷ a)b ≡ wb. Similarly, we get v ≡ wa. Since ua ≡ vb, we get wba ≡ wab, and by the definition of traces ab ≡ ba, i.e. (a, b) ∈ I. Thus, we have the following implication for all strings u, v and symbols a, b:

ua ≡ vb ∧ a ≠ b ⇒ (a, b) ∈ I ∧ ∃w : u ≡ wb ∧ v ≡ wa. (1.9)

Obviously, u ≡ v ⇒ xuy ≡ xvy. Conversely, if xuy ≡ xvy, then by the cancellation rule xu ≡ xv; by the mirror rule u^R x^R ≡ v^R x^R; again by the cancellation rule u^R ≡ v^R, and again by the mirror rule u ≡ v. Hence, from the mirror property and the cancellation property the following implication follows:

xuy ≡ xvy ⇒ u ≡ v. (1.10)
Extend the independency relation from symbols to strings, defining strings u, v to be independent, (u, v) ∈ I, if Alph(u) × Alph(v) ⊆ I. Notice that the empty string is independent of any other string: (ε, w) ∈ I trivially holds for any string w.

Proposition 1.3.3 For all strings w, u, v and each symbol a ∉ Alph(v): uav ≡ wa ⇒ (a, v) ∈ I.

Proof: By induction on the length of v. If v = ε the implication is clear. Otherwise, there are a symbol b ≠ a and a string x such that v = xb. By implication (1.9), (a, b) ∈ I and uaxb ≡ wa. By the cancellation rule, since b ≠ a, we get uax ≡ (w ÷ b)a. By the induction hypothesis (a, x) ∈ I. Since (a, b) ∈ I, (a, xb) ∈ I, which proves the proposition. □

The next theorem is a trace generalization of the Levi Lemma for strings, which reads: for all strings u, v, x, y, if uv = xy, then there exists a string w such that either uw = x and wy = v, or xw = u and wv = y. In the case of traces this useful lemma admits a more symmetrical form:

Theorem 1.3.4 (Levi Lemma for traces) For any strings u, v, x, y such that uv ≡ xy there exist strings z1, z2, z3, z4 such that (z2, z3) ∈ I and

u ≡ z1z2, v ≡ z3z4, x ≡ z1z3, y ≡ z2z4. (1.11)
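On small instances, a decomposition satisfying (1.11) can be found by brute-force search over splittings of representatives, deciding ≡ via a breadth-first swap closure. The following Python sketch is illustrative only (all names are mine, not the chapter's):

```python
from collections import deque

def trace_class(w, indep):
    """[w]_D via breadth-first closure under adjacent independent swaps."""
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for i in range(len(u) - 1):
            a, b = u[i], u[i + 1]
            if (a, b) in indep:
                v = u[:i] + b + a + u[i + 2:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen

def equiv(u, v, indep):
    return v in trace_class(u, indep)

def levi(u, v, x, y, indep):
    """Search for z1, z2, z3, z4 as in (1.11), by brute force over
    splittings of the representatives of [u] and [v]."""
    for u1 in trace_class(u, indep):
        for i in range(len(u1) + 1):
            z1, z2 = u1[:i], u1[i:]
            for v1 in trace_class(v, indep):
                for j in range(len(v1) + 1):
                    z3, z4 = v1[:j], v1[j:]
                    if (equiv(x, z1 + z3, indep) and equiv(y, z2 + z4, indep)
                            and all((a, b) in indep for a in z2 for b in z3)):
                        return z1, z2, z3, z4
    return None

I = {("b", "c"), ("c", "b")}
# uv = "abc" and xy = "acb" are trace-equivalent; find a decomposition:
print(levi("ab", "c", "ac", "b", I))  # ('a', 'b', 'c', '')
```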

Proof: Let u, v, x, y be strings such that uv ≡ xy; the proof proceeds by induction on the length of y. If y = ε, the proposition is proved by setting z1 = u, z2 = z4 = ε, z3 = v. Otherwise, let w be a string and e a symbol such that y = we. Then there can be two cases: either (1) there are strings v', v'' such that v = v'ev'' and e ∉ Alph(v''), or (2) there are strings u', u'' such that u = u'eu'' and e ∉ Alph(u''v).

In case (1), by the cancellation rule we have uv'v'' ≡ xw; by the induction hypothesis there are z1', z2', z3', z4' such that

u ≡ z1'z2', v'v'' ≡ z3'z4', x ≡ z1'z3', w ≡ z2'z4',

and (z2', z3') ∈ I. Set z1 = z1', z2 = z2', z3 = z3', z4 = z4'e. By Proposition 1.3.3, (e, v'') ∈ I, hence v = v'ev'' ≡ v'v''e ≡ z3'z4'e = z3z4; the remaining equivalences of (1.11) are easy to check: u ≡ z1z2, x ≡ z1z3, and y = we ≡ z2'z4'e = z2z4. Since (z2, z3) = (z2', z3'), we have (z2, z3) ∈ I.

In case (2), by the cancellation rule we have u'u''v ≡ xw, and by the induction hypothesis there are z1', z2', z3', z4' such that

u'u'' ≡ z1'z2', v ≡ z3'z4', x ≡ z1'z3', w ≡ z2'z4',

and (z2', z3') ∈ I. Set z1 = z1', z2 = z2'e, z3 = z3', z4 = z4'. By Proposition 1.3.3, (e, u''v) ∈ I, hence (e, z3') ∈ I and (e, z4') ∈ I. Since u = u'eu'' ≡ u'u''e, we get u ≡ z1'z2'e = z1z2; moreover, y = we ≡ z2'z4'e ≡ z2'ez4' = z2z4 (since ez4' ≡ z4'e), and the remaining equivalences of (1.11) are immediate. Finally, (z2', z3') ∈ I together with (e, z3') ∈ I yields (z2'e, z3') ∈ I, which proves (z2, z3) ∈ I. □
Let [u], [v] be traces over the same dependency. Trace [u] is a prefix of trace [v] (and [v] is a dominant of [u]) if there exists a trace [x] such that [ux] = [v]. Similarly to the case of strings, the relation "to be a prefix" in the set of all traces over a fixed dependency is an ordering relation; however, in contrast to the prefix relation in the set of strings, the prefix relation for traces orders sets of traces (in general) only partially.

The structure of the prefixes of the trace [abbca] for dependency {a, b}² ∪ {a, c}² is given in Figure 1.2.

Proposition 1.3.5 Let [w] be a trace and [u], [v] be prefixes of [w]. Then there exist the greatest common prefix and the least common dominant of [u] and [v].

Figure 1.1: Graphical interpretation of the Levi Lemma for traces.

Figure 1.2: Structure of prefixes of [abbca]_D for D = {a, b}² ∪ {a, c}².



Proof: Since [u], [v] are prefixes of [w], there are traces [x], [y] such that ux ≡ vy. Then, by the Levi Lemma for traces, there are strings z1, z2, z3, z4 such that u ≡ z1z2, x ≡ z3z4, v ≡ z1z3, y ≡ z2z4, and z2, z3 are independent. Then [z1] is the greatest common prefix and [z1z2z3] is the least common dominant of [u], [v]. Indeed, [z1] is a common prefix of both [u] and [v]; it is the greatest common prefix since, z2 and z3 being independent, no symbol of z2 occurs in z3 and no symbol of z3 occurs in z2; hence no proper extension of [z1] is a prefix of both [z1z2] = [u] and [z1z3] = [v]. Thus, any trace that is a prefix of both [u] and [v] must be a prefix of [z1]. Similarly, [z1z2z3] is a common dominant of [u] and [v]; but any proper prefix of [z1z2z3] either does not dominate [u], if it lacks a symbol from z2, or does not dominate [v], if it lacks a symbol from z3. □
Let us close this section with a characterization of trace monoids by homomorphisms (so-called dependency morphisms).

A dependency morphism w.r.t. D is any homomorphism φ from the monoid of strings over Σ_D onto another monoid such that

A1: φ(w) = φ(ε) ⇒ w = ε,
A2: (a, b) ∈ I ⇒ φ(ab) = φ(ba),
A3: φ(ua) = φ(v) ⇒ φ(u) = φ(v ÷ a),
A4: φ(ua) = φ(vb) ∧ a ≠ b ⇒ (a, b) ∈ I.

Lemma 1.3.6 Let φ be a dependency morphism, u, v ∈ Σ*, a, b ∈ Σ. If φ(ua) = φ(vb) and a ≠ b, then there exists w ∈ Σ* such that φ(u) = φ(wb) and φ(v) = φ(wa).

Proof: Let u, v, a, b be as assumed in the above Lemma and let

φ(ua) = φ(vb). (1.12)

Applying A3 twice to (1.12) we get

φ(u ÷ b) = φ(v ÷ a). (1.13)

Set w = u ÷ b. Then

φ(wa) = φ((u ÷ b)a)
      = φ((ua) ÷ b), since a ≠ b,
      = φ(v), from (1.12) by A3,

and

φ(wb) = φ((u ÷ b)b)
      = φ((v ÷ a)b), from (1.13),
      = φ((vb) ÷ a), since a ≠ b,
      = φ(u), from (1.12) by A3. □

Lemma 1.3.7 Let φ, ψ be dependency morphisms w.r.t. the same dependency. Then φ(x) = φ(y) ⇒ ψ(x) = ψ(y) for all x, y.

Proof: Let D be a dependency, φ, ψ be two dependency morphisms w.r.t. D, x, y ∈ Σ*, and φ(x) = φ(y). If x = ε, then by A1 y = ε and clearly ψ(x) = ψ(y). If x ≠ ε, then y ≠ ε. Thus, x = ua, y = vb for some u, v ∈ Σ*, a, b ∈ Σ, and we have

φ(ua) = φ(vb). (1.14)

There can be two cases. In the first case a = b; then by A3 φ(u) = φ(v), and by the induction hypothesis ψ(u) = ψ(v); thus ψ(ua) = ψ(vb), which proves ψ(x) = ψ(y). In the second case a ≠ b. By A4 (a, b) ∈ I. By Lemma 1.3.6 we get φ(u) = φ(wb), φ(v) = φ(wa) for some w ∈ Σ*. By the induction hypothesis

ψ(u) = ψ(wb), ψ(v) = ψ(wa), (1.15)

hence

ψ(ua) = ψ(wba), ψ(vb) = ψ(wab). (1.16)

By A2, since (a, b) ∈ I,

ψ(ba) = ψ(ab), (1.17)

hence ψ(wba) = ψ(wab), which proves ψ(x) = ψ(y). □

Theorem 1.3.8 If φ, ψ are dependency morphisms w.r.t. the same dependency onto monoids M, N, respectively, then M is isomorphic with N.

Proof: By Lemma 1.3.7 the mapping θ defined by the equality

θ(φ(w)) = ψ(w)

for all w ∈ Σ* is a homomorphism from M onto N. Since θ has an inverse, namely

θ⁻¹(ψ(w)) = φ(w),

which is also a homomorphism, from N onto M, θ is an isomorphism. □

Theorem 1.3.9 The natural homomorphism of the monoid of strings over Σ_D onto the monoid of traces over D is a dependency morphism.

Proof: We have to check conditions A1–A4 for φ(w) = [w]_D. Condition A1 is obviously satisfied; condition A2 follows directly from the definition of trace monoids. Condition A3 follows from the cancellation property (1.7); condition A4 follows from (1.9). □

Corollary. If φ is a dependency morphism w.r.t. D onto M, then M is isomorphic with the monoid of traces over D, and the isomorphism is induced by the mapping θ such that θ([a]_D) = φ(a) for all a ∈ Σ_D.

Proof: It follows from Theorem 1.3.8 and Theorem 1.3.9. □

1.4 Dependence Graphs


Dependence graphs are thought of as graphical representations of traces which make explicit the ordering of symbol occurrences within traces. It turns out that for a given dependency the algebra of traces is isomorphic with the algebra of dependence graphs, as defined below. Therefore, it is only a matter of taste which objects are chosen for representing concurrent processes: equivalence classes of strings or labelled graphs.

Let D be a dependency relation. Dependence graphs over D (or d-graphs, for short) are finite, oriented, acyclic graphs with nodes labelled with symbols from Σ_D in such a way that two nodes of a d-graph are connected with an arc if and only if they are different and labelled with dependent symbols. Formally, a triple

γ = (V, R, φ)

is a dependence graph (d-graph) over D, if

V is a finite set (of nodes of γ), (1.18)
R ⊆ V × V (the set of arcs of γ), (1.19)
φ : V → Σ_D (the labelling of γ), (1.20)

such that

R⁺ ∩ id_V = ∅ (acyclicity), (1.21)
(v1, v2) ∈ R ∪ R⁻¹ ∪ id_V ⇔ (φ(v1), φ(v2)) ∈ D (D-connectivity). (1.22)

Two d-graphs γ', γ'' are isomorphic, γ' ≈ γ'', if there exists a bijection between their nodes preserving labelling and arc connections. As usual, two isomorphic graphs are identified; all subsequent properties of d-graphs are formulated up to isomorphism. The empty d-graph (∅, ∅, ∅) will be denoted by λ, and the set of all isomorphism classes of d-graphs over D by Γ_D.

Example 1.4.1 Let D = {a, b}² ∪ {a, c}². Then the node-labelled graph (V, R, φ) with

V = {1, 2, 3, 4, 5},
R = {(1,2), (1,3), (1,4), (1,5), (2,4), (2,5), (3,5), (4,5)},
φ(1) = a, φ(2) = b, φ(3) = c, φ(4) = b, φ(5) = a,

is a d-graph. It is (isomorphic to) the graph in Fig. 1.3.

The following fact follows immediately from the definition:

Proposition 1.4.2 Let D', D'' be dependencies, (V, R', φ) be a d-graph over D', and (V, R'', φ) be a d-graph over D''. If D' ⊆ D'', then R' ⊆ R''.

Figure 1.3: A dependence graph over D = {a, b}² ∪ {a, c}².

A vertex of a d-graph with no arcs leaving it (leading to it, resp.) is said to be a maximum (minimum, resp.) vertex. Clearly, a d-graph has at least one maximum and one minimum vertex. Since d-graphs are acyclic, the transitive and reflexive closure of the arc relation of a d-graph is an ordering. Thus, each d-graph uniquely determines an ordering of symbol occurrences. This ordering will be discussed later on; for the time being we only mention that, considering d-graphs as descriptions of non-sequential processes and the dependency relation as a model of causal dependency, this ordering may be viewed as the causal ordering of process events.

Observe that in the case of full dependency D (i.e. if D = Σ_D × Σ_D) the arc relation of any d-graph g over D is a linear order on the vertices of g; in the case of minimum dependency D (i.e. when D is the identity relation) any d-graph over D consists of a number of connected components, each of them being a linearly ordered set of vertices labelled with a common symbol.
Define the composition of d-graphs as follows: for all graphs γ1, γ2 in Γ_D the composition (γ1 ∘ γ2) of γ1 with γ2 is the graph arising from the disjoint union of γ1 and γ2 by adding to it new arcs leading from each node of γ1 to each node of γ2, provided they are labelled with dependent symbols. Formally, (V, R, φ) ≈ (γ1 ∘ γ2) iff there are instances (V1, R1, φ1), (V2, R2, φ2) of γ1, γ2, respectively, such that

V = V1 ∪ V2, V1 ∩ V2 = ∅, (1.23)
R = R1 ∪ R2 ∪ R12, (1.24)
φ = φ1 ∪ φ2, (1.25)

where R12 denotes the binary relation in V such that

(v1, v2) ∈ R12 ⇔ v1 ∈ V1 ∧ v2 ∈ V2 ∧ (φ(v1), φ(v2)) ∈ D. (1.26)

Figure 1.4: D-graph composition.

Example 1.4.3 In Fig. 1.4 the composition of two d-graphs is presented; thin arrows represent arcs added in the composition.
Proposition 1.4.4 The composition of d-graphs is a d-graph.

Proof: In view of (1.23) and (1.25) it is clear that the composition γ of two d-graphs γ1, γ2 is a finite graph with nodes labelled with symbols from Σ_D. It is acyclic, since γ1 and γ2 are acyclic and, by (1.26), in γ there are no arcs leading from nodes of γ2 to nodes of γ1. Let v1, v2 be nodes of γ with (φ(v1), φ(v2)) ∈ D. If both of them are nodes of γ1 or of γ2, then by D-connectivity of the components and by (1.24) they are also joined in γ. If v1 is a node of γ1 and v2 is a node of γ2, then by (1.24) and (1.26) they are joined in γ, which proves D-connectivity of γ. □
Composition of d-graphs can be viewed as "sequential" as well as "parallel": composing two independent d-graphs (i.e. d-graphs such that any node of one of them is independent of any node of the other) we get a result intuitively understood as "parallel" composition, while composing two dependent d-graphs, i.e. such that any node of one of them is dependent on any node of the other, we get a result that can be viewed as the "sequential" or "serial" composition. In general, d-graph composition is a mixture of "parallel" and "sequential" compositions, where some (but not all) nodes of one d-graph are dependent on some nodes of the other, and some of them are not. The nature of the composition of d-graphs depends on the nature of the underlying dependency relation.
Denote the empty graph (with no nodes) by λ. The set of all d-graphs over a dependency D, with the composition ∘ defined as above and with the empty graph as a distinguished element, forms an algebra denoted by G(D).

Theorem 1.4.5 G(D) is a monoid.

Proof: Since the empty graph is obviously the neutral (left and right) element w.r.t. the composition, it suffices to show that the composition of d-graphs is associative. Let, for i = 1, 2, 3, (Vi, Ri, φi) be a representative of d-graph γi, such that Vi ∩ Vj = ∅ for i ≠ j. By a simple calculation we prove that ((γ1 ∘ γ2) ∘ γ3) is (isomorphic to) the d-graph (V, R, φ) with

V = V1 ∪ V2 ∪ V3, (1.27)
R = R1 ∪ R2 ∪ R3 ∪ R12 ∪ R13 ∪ R23, (1.28)
φ = φ1 ∪ φ2 ∪ φ3, (1.29)

where Rij denotes the binary relation in V such that

(v1, v2) ∈ Rij ⇔ v1 ∈ Vi ∧ v2 ∈ Vj ∧ (φ(v1), φ(v2)) ∈ D,

and the same result is obtained for (γ1 ∘ (γ2 ∘ γ3)). □


Let D be a dependency. For each string w over Σ_D denote by ⟨w⟩_D the d-graph defined recursively:

⟨ε⟩_D = ε,  ⟨wa⟩_D = ⟨w⟩_D ∘ ({a}, ∅, {(a, a)}) (1.30)

for all strings w and symbols a. In other words, ⟨wa⟩_D arises from the graph ⟨w⟩_D by adding to it a new node labelled with the symbol a and new arcs leading to it from all vertices of ⟨w⟩_D labelled with symbols dependent on a.
The d-graph ⟨abbca⟩_D with dependency D = {a, b}² ∪ {a, c}² is presented in Fig. 1.3.
Let the dependency D be fixed from now on and let Σ, I, ⟨w⟩ denote Σ_D, I_D, ⟨w⟩_D, respectively.
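The recursive construction (1.30) is easy to mechanize. Below is a small Python sketch (our own illustration, not part of the original text; the function name `dgraph_of` is ours) that builds a representative of ⟨w⟩_D as a list of labelled vertices together with its arc set:

```python
# Build (a representative of) the dependence graph <w>_D of a string w,
# following the recursive construction (1.30): each new symbol becomes a
# new vertex, with arcs from all earlier vertices carrying a dependent label.

def dgraph_of(w, D):
    """w: string; D: a symmetric dependency relation given as a set of pairs."""
    labels = list(w)                        # vertex i is the i-th symbol of w
    arcs = set()
    for j, a in enumerate(labels):
        for i in range(j):
            if (labels[i], a) in D:         # dependent => arc from i to j
                arcs.add((i, j))
    return labels, arcs

# The running example: D = {a,b}^2 u {a,c}^2.
D = {(x, y) for x in "ab" for y in "ab"} | {(x, y) for x in "ac" for y in "ac"}
labels, arcs = dgraph_of("abbca", D)
```

Here the two occurrences of b are joined to the occurrences of a, but not to c, since (b, c) is not in D; this matches the d-graph ⟨abbca⟩_D of Fig. 1.3.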

Proposition 1.4.6 For each d-graph γ over D there is a string w such that ⟨w⟩ = γ.

Proof: It is clear for the empty d-graph. By induction, let γ be a non-empty d-graph; remove from γ one of its maximal vertices. It is clear that the resulting graph is a d-graph. By the induction hypothesis there is a string w such that ⟨w⟩ is isomorphic with this restricted d-graph; then ⟨wa⟩, where a is the label of the removed vertex, is isomorphic with the original graph. □
Therefore, the mapping ⟨ ⟩ is a surjection.

Proposition 1.4.7 The mapping φ : Σ* → G(D) defined by φ(w) = ⟨w⟩ is a dependency morphism w.r.t. D.

Proof: By Proposition 1.4.6 the mapping φ is a surjection. By an easy induction we prove that for all strings u, v

φ(uv) = ⟨uv⟩ = ⟨u⟩ ∘ ⟨v⟩ = φ(u) ∘ φ(v),

and clearly φ(ε) = ⟨ε⟩ = ε. Thus, φ is a homomorphism onto G(D). Condition A1 is obviously satisfied. If (a, b) ∈ I, the d-graph ⟨ab⟩ has no arcs, hence it is isomorphic with ⟨ba⟩. This proves A2. By definition, the d-graph ⟨ua⟩, for a string u and a symbol a, has a maximal vertex labelled with a; if ⟨ua⟩ = ⟨v⟩, then ⟨v⟩ also has a maximal vertex labelled with a; removing these vertices from both graphs results in isomorphic graphs, and it is easy to see that removing such a vertex from ⟨v⟩ results in ⟨v ÷ a⟩. Hence ⟨u⟩ = ⟨v ÷ a⟩, which proves A3. If γ = ⟨ua⟩ = ⟨vb⟩ and a ≠ b, then γ has at least two maximal vertices, one of them labelled with a and the other labelled with b. Since both of them are maximal vertices, there is no arc joining them. This proves (a, b) ∈ I, and A4 is satisfied. □

Theorem 1.4.8 The trace monoid M(D) is isomorphic with the monoid G(D) of d-graphs over dependency D; the isomorphism is induced by the bijection θ such that θ([a]_D) = ⟨a⟩_D for all a ∈ Σ_D.

Proof: It follows from the Corollary to Theorem 1.3.9. □
The above theorem says that traces and d-graphs over the same dependency can be viewed as two sides of the same coin; the same concepts can be expressed in two different ways: speaking about traces, the algebraic character of a concept is stressed, while speaking about d-graphs, its causality (or ordering) features are emphasized. We can consider some graph-theoretical features of traces (e.g. connectivity of traces) as well as some algebraic properties of dependence graphs (e.g. composition of d-graphs). Using this isomorphism, one can prove facts about traces by graphical methods and, the other way around, prove some graph properties by algebraic methods. In fact, the dual nature of traces, algebraic and graph-theoretical, was the principal motivation for introducing them to represent concurrent processes.
As an illustration of graph-theoretical properties of traces, consider the notion (useful in a later part of this book) of a connected trace: a trace is connected if the d-graph corresponding to it is connected. The algebraic definition of this notion could be the following: a trace t is connected if there are no non-empty traces t_1, t_2 with t = t_1 t_2 and such that Alph(t_1) × Alph(t_2) ⊆ I. A connected component of a trace t is the trace corresponding to a connected component of the d-graph corresponding to t; the algebraic definition of this notion is much more complex.
As a second illustration of the graph representation of traces, let us represent in graphical form the projection π_1 of the trace [abcd] with dependency {a, b, c}² ∪ {b, c, d}² onto the dependency {a, c}² ∪ {c, d}², and the projection π_2 of the same trace onto the dependency {a, b}² ∪ {a, c}² ∪ {c, d}² (Fig. 1.5). Projection π_1 is onto a dependency with a smaller alphabet than the original (the symbol b is deleted under the projection); projection π_2 preserves all symbols, but deletes some dependencies, namely (b, c) and (b, d).

Figure 1.5: Trace projections

1.5 Histories
The concepts presented in this section originate in papers of M.W. Shields [253, 254, 255]. The main idea is to represent non-sequential processes by a collection of individual histories of concurrently running components; an individual history is a string of events concerning only one component, and the global history is a collection of individual ones. This approach, appealing directly to the intuitive meaning of parallel processing, is particularly well suited to CSP-like systems [138], where individual components run independently of each other, with one exception: an event concerning a number of components (in CSP at most two) can occur only coincidently in all these components (the "handshaking" or "rendez-vous" synchronization principle). The presentation and terminology used here have been adjusted to the present purposes and differ from those of the authors.
Let S = (Σ_1, Σ_2, ..., Σ_n) be an n-tuple of finite alphabets. Denote by P(S) the product monoid

Σ_1* × Σ_2* × ... × Σ_n*. (1.31)

The composition in P(S) is component-wise: if u = (u_1, u_2, ..., u_n), v = (v_1, v_2, ..., v_n), then

uv = (u_1 v_1, u_2 v_2, ..., u_n v_n), (1.32)

for all u, v ∈ Σ_1* × Σ_2* × ... × Σ_n*.
Let Σ = Σ_1 ∪ Σ_2 ∪ ... ∪ Σ_n and let π_i be the projection from Σ* onto Σ_i*, for i = 1, 2, ..., n. By the distribution in S we understand here the mapping π : Σ* → Σ_1* × Σ_2* × ... × Σ_n* defined by the equality:

π(w) = (π_1(w), π_2(w), ..., π_n(w)). (1.33)

For each a ∈ Σ the tuple π(a) will be called the elementary history of a. Thus, the elementary history of a is the n-tuple (a_1, a_2, ..., a_n) such that

a_i = a, if a ∈ Σ_i;  a_i = ε, otherwise.

Thus, the elementary history π(a) is an n-tuple consisting of the one-symbol string a in the positions corresponding to components containing the symbol a, and of the empty string in the remaining positions.
Let H(S) be the submonoid of P(S) generated by the set of all elementary histories in P(S). Elements of H(S) will be called global histories (or simply, histories), and components of global histories individual histories in S.

Example 1.5.1 Let Σ_1 = {a, b}, Σ_2 = {a, c}, S = (Σ_1, Σ_2). Then Σ = {a, b, c} and the elementary histories of S are: (a, a), (b, ε), (ε, c). The pair

π(w) = (abba, aca)

is a global history in S, since

π(w) = (a, a)(b, ε)(b, ε)(ε, c)(a, a) = π(abbca).

The pair (abba, cca) is not a history, since it cannot be obtained as a composition of elementary histories.
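A computation like that of Example 1.5.1 can be sketched in a few lines of Python (an illustration under our own naming, not the book's notation): `distribute` projects a string onto each component alphabet, and `is_history` checks consistency by repeatedly firing an elementary history that is enabled, i.e. whose symbol is the next one in every component containing it.

```python
# Distribution pi(w) over a tuple of component alphabets, and a consistency
# check: a tuple of strings is a global history iff it can be decomposed
# into elementary histories, peeled off greedily from the front.

def distribute(w, alphabets):
    return tuple("".join(c for c in w if c in A) for A in alphabets)

def is_history(tup, alphabets):
    rest = list(tup)
    sigma = set().union(*alphabets)
    progress = True
    while progress and any(rest):
        progress = False
        for a in sigma:
            owners = [i for i, A in enumerate(alphabets) if a in A]
            # a is enabled iff it is the next symbol in all its components
            if owners and all(rest[i].startswith(a) for i in owners):
                for i in owners:
                    rest[i] = rest[i][1:]   # fire the elementary history of a
                progress = True
    return not any(rest)

S = ({"a", "b"}, {"a", "c"})
```

Greedy peeling is safe here: two distinct enabled symbols never share a component (both would have to be its next symbol), so firing one cannot disable the other.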

From the point of view of concurrent processes the subalgebra of histories can be interpreted as follows. There are n sequential components of a concurrent system, each of them capable of executing (sequentially) some elementary actions, creating in this way a sequential history of the component's run. All components can act independently of each other, but an action common to a number of components can be executed only coincidently by all of them. There are no other synchronization mechanisms provided in the system. The joint action of all components is then an n-tuple of individual histories of the components; such individual histories are consistent with each other, i.e. (roughly speaking) they can be combined into one global history. The following theorem offers a formal criterion for such consistency.

Proposition 1.5.2 The distribution in S is a homomorphism from the monoid of strings over Σ onto the monoid of histories H(S).

Proof: It is clear that the distribution of any string over Σ is a history in H(S). It is also clear that any history in H(S) can be obtained as the distribution of a string over Σ, since any history is a composition of elementary histories: π(a_1)π(a_2)···π(a_m) = π(a_1 a_2 ··· a_m). The congruence property of the distribution follows from the properties of projections: π(uv) = π(u)π(v). □

Theorem 1.5.3 Let S = (Σ_1, Σ_2, ..., Σ_n), D = Σ_1² ∪ Σ_2² ∪ ... ∪ Σ_n². The distribution in S is a dependency morphism w.r.t. D onto the monoid of histories H(S).

Proof: Denote the assumed dependency by D and the induced independency by I. By Proposition 1.5.2 the distribution is a homomorphism onto the monoid of histories. By its definition we have at once π(w) = π(ε) ⇒ w = ε, which proves A1. By the definition of the dependency, two symbols a, b are dependent iff there is an index i such that both a and b belong to Σ_i; i.e. a, b are independent iff there is no index i such that Σ_i contains both of them. It follows that if (a, b) ∈ I, then π(ab) = π(ba), since for all i = 1, 2, ..., n

π_i(ab) = π_i(ba) = a, if a ∈ Σ_i;  b, if b ∈ Σ_i;  ε, in the remaining cases.

Thus, A2 holds. To prove A3, assume π(ua) = π(v); then we have π_i(ua) = π_i(v) for all i = 1, 2, ..., n; hence π_i(ua) ÷ a = π_i(v) ÷ a; since projection and cancellation commute, π_i(ua ÷ a) = π_i(v ÷ a), i.e. π_i(u) = π_i(v ÷ a) for all i, which implies π(u) = π(v ÷ a). This proves A3. Suppose now π(ua) = π(vb) with a ≠ b; if there were an i with a ∈ Σ_i and b ∈ Σ_i, we would have π_i(u)a = π_i(v)b, which is impossible. Therefore, there is no such i, i.e. (a, b) ∈ I, and A4 is satisfied. □

Theorem 1.5.4 The monoid of histories H(Σ_1, Σ_2, ..., Σ_n) is isomorphic with the monoid of traces over the dependency Σ_1² ∪ Σ_2² ∪ ... ∪ Σ_n².

Proof: It follows from Theorem 1.5.3 and the Corollary to Theorem 1.3.9. □
Therefore, similarly to the case of d-graphs, histories can be viewed as yet another, "third face" of traces; to represent a trace by a history, the underlying dependency should be converted into a suitable tuple of alphabets. One possibility, though not the only one, is to take the cliques of the dependency as the individual alphabets and then to make use of Theorem 1.5.4 for constructing histories. As an example, consider the trace [abbca] over the dependency {a, b}² ∪ {a, c}², as in the previous sections; then the system components are ({a, b}, {a, c}) and the history corresponding to the trace [abbca] is (abba, aca).

1.6 Ordering of Symbol Occurrences

Traces, dependence graphs and histories are the same objects from the algebraic point of view; since their monoids are isomorphic, they behave in the same way under composition. Nevertheless, they are different objects; the difference becomes visible when considering how symbols are ordered in the corresponding objects: traces, graphs, and histories.
There are two kinds of ordering defined by strings: the ordering of symbols occurring in a string, called the occurrence ordering, and the ordering of prefixes of a string, the prefix ordering. Looking at strings as models of sequential processes, the first determines the order of event occurrences within a process. The second is the order of process states, since each prefix of a string can be interpreted as a partial execution of the process interpreted by this string, and such a partial execution determines uniquely an intermediate state of the process. Both orderings, although different, are closely related: given one of them, the other can be reconstructed.
Traces are intended to be generalizations of strings, hence the ordering given by traces should be a generalization of that given by strings. This is actually the case; as we could expect, the ordering resulting from a trace is partial. In this section we shall discuss in detail the ordering resulting from the trace approach.
We start with string ordering, since it supplies some tools and notions used next for defining the trace ordering. We shall consider both kinds of ordering: first the prefix ordering and next the occurrence ordering. Then we consider the ordering defined within dependence graphs; finally, the ordering of symbols supplied by histories will be discussed.
In the whole section D will denote a fixed dependency relation, Σ its alphabet, and I the independency relation induced by D. All strings are assumed to be strings over Σ. The set of all traces over D will be denoted by the same symbol as the monoid of traces over D, i.e. by M(D).
Thus, in contrast to the linearly ordered prefixes of a string, the set Pref(t) is ordered by ⊑ only partially. Interpreting a trace as a single run of a system, its prefix structure can be viewed as the (partially) ordered set of all intermediate (global) system states reached during this run. In this interpretation [ε] represents the initial state, incomparable prefixes represent states arising as an effect of events occurring concurrently, and the ordering of prefixes represents the temporal ordering of system states.
Occurrence ordering in strings. At the beginning of this section we give a couple of recursive definitions of auxiliary notions. All of them define some functions on Σ_D*; in these definitions w always denotes a string over Σ_D, and e, e′, e″ symbols in Σ_D.
Let a be a symbol and w a string; the number of occurrences of a in w is denoted here by w(a). Thus, ε(a) = 0, wa(a) = w(a) + 1, and wb(a) = w(a) for all strings w and symbols a, b with a ≠ b. The occurrence set of w is a subset of Σ × ω:

Occ(w) = {(a, n) | a ∈ Σ ∧ 1 ≤ n ≤ w(a)}. (1.34)

It is clear that a string and its permutations have the same occurrence set, since they have the same numbers of occurrences of symbols; hence, all representatives of a trace over an arbitrary dependency have the same occurrence set.

Example 1.6.1 The occurrence set of the string abbca is the set

{(a, 1), (a, 2), (b, 1), (b, 2), (c, 1)}.

Let R ⊆ Σ × ω and let A be an alphabet; define the projection π_A(R) of R onto A as follows:

π_A(R) = {(a, n) ∈ R | a ∈ A}. (1.35)

It is easy to show that

π_A(Occ(w)) = Occ(π_A(w)). (1.36)

Thus, π_A(Occ(w)) ⊆ Occ(w).
The occurrence ordering in a string w is an ordering Ord(w) in the occurrence set of w defined recursively as follows:

Ord(ε) = ∅,  Ord(wa) = Ord(w) ∪ (Occ(wa) × {(a, w(a) + 1)}). (1.37)

Thus, Ord(w) ⊆ Occ(w) × Occ(w). It is obvious that Ord(w) is a linear ordering for any string w.
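Definitions (1.34) and (1.37) translate directly into code; the following Python fragment (our illustration, with names of our own choosing) computes the occurrence set and the occurrence ordering of a string:

```python
# Occ(w) per (1.34) and Ord(w) per (1.37): appending a symbol c makes
# every occurrence in Occ(wc), including the new one, a predecessor
# of the new occurrence (c, w(c)+1).

def occ(w):
    counts, out = {}, set()
    for c in w:
        counts[c] = counts.get(c, 0) + 1
        out.add((c, counts[c]))
    return out

def ord_string(w):
    order, seen, counts = set(), set(), {}
    for c in w:
        counts[c] = counts.get(c, 0) + 1
        new = (c, counts[c])
        seen.add(new)                         # seen is now Occ(wc)
        order |= {(x, new) for x in seen}     # Occ(wc) x {(c, w(c)+1)}
    return order
```

For abbca the resulting ordering is linear: it contains exactly the 15 pairs (x, y) in which x does not occur later than y, reflexive pairs included.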
Occurrence ordering in traces. In this section we generalize the notion of occurrence ordering from strings to traces. Let D be a dependency. As has already been noticed, u ≡ v implies Occ(u) = Occ(v) for all strings u, v. Thus, the occurrence set of a trace can be defined as the occurrence set of an arbitrary representative of it. Define now the occurrence ordering of a trace [w] as the intersection of the occurrence orderings of all representatives of [w]:

Ord([w]) = ⋂_{u ∈ [w]} Ord(u). (1.38)

This definition is correct, since all representatives of a trace have a common occurrence set and the intersection of different linear orderings of a common set is a (partial) ordering of this set. According to this definition, the orderings of the representatives of a single trace form all possible linearizations (extensions to a linear ordering) of the trace occurrence ordering. From this definition it follows that the occurrence ordering of a trace is the greatest ordering common to all its representatives. In the rest of this section we give some properties of this ordering and alternative definitions of it.
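On small examples, definition (1.38) can be checked mechanically: generate all representatives of [w] by closing {w} under swaps of adjacent independent symbols, then intersect their occurrence orderings. A Python sketch (ours; `ord_string` re-implements (1.37) to keep the fragment self-contained):

```python
# Ord([w]) per (1.38): intersect the occurrence orderings of all
# representatives of the trace [w].

def representatives(w, D):
    reps, frontier = {w}, [w]
    while frontier:
        u = frontier.pop()
        for i in range(len(u) - 1):
            a, b = u[i], u[i + 1]
            if (a, b) not in D:               # independent: swap them
                v = u[:i] + b + a + u[i + 2:]
                if v not in reps:
                    reps.add(v)
                    frontier.append(v)
    return reps

def ord_string(w):                            # definition (1.37)
    order, seen, counts = set(), set(), {}
    for c in w:
        counts[c] = counts.get(c, 0) + 1
        seen.add((c, counts[c]))
        order |= {(x, (c, counts[c])) for x in seen}
    return order

def ord_trace(w, D):                          # definition (1.38)
    return set.intersection(*(ord_string(u) for u in representatives(w, D)))

D = {(x, y) for x in "ab" for y in "ab"} | {(x, y) for x in "ac" for y in "ac"}
```

For abbca over {a, b}² ∪ {a, c}² the representatives are abbca, abcba and acbba; the intersection keeps each a-occurrence ordered against everything, but leaves the b-occurrences incomparable with c.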
We can interpret the ordering Ord([w]) from the point of view of concurrency as follows. Suppose there is a number of observers looking at the same concurrent process represented by a trace. The observation made by each of them is sequential; each of them sees only one representative of the trace. If two observers discover opposite orderings of the same events in the observed process, it means that these events are actually not ordered in the process and that the difference noticed by the observers results only from their specific points of view. Thus, such an ordering should not be taken into account; consequently, two events can be considered as really ordered in the process if and only if they are ordered in the same way in all possible observations of the process (all observers agree on the same ordering).
D-graph ordering. Consider now d-graphs over D. Let γ = (V, R, φ) be a dependence graph. Let, as in the case of strings, γ(a) denote the number of vertices of γ labelled with a. Define the occurrence set of γ as the set

Occ(γ) = {(a, n) | 1 ≤ n ≤ γ(a)}. (1.39)

We say that a vertex v precedes a vertex u in γ if there is an arc leading from v to u in γ. Let v(a, n) denote a vertex of γ labelled with a which is preceded in γ by precisely n − 1 vertices labelled with a. Observe that for each element (a, n) in the occurrence set of γ there exists precisely one such vertex v(a, n).

Figure 1.6: The occurrence relation for the d-graph ⟨abbca⟩_D over D = {a, b}² ∪ {a, c}².
The occurrence relation for γ is the binary relation Q(γ) in Occ(γ) such that

((a, n), (b, m)) ∈ Q(γ) ⇔ (v(a, n), v(b, m)) ∈ R. (1.40)

Example 1.6.2 The diagram of the occurrence relation for the d-graph ⟨abbca⟩ over {a, b}² ∪ {a, c}² is given in Fig. 1.6.

Since Q(γ) is acyclic, the transitive and reflexive closure of Q(γ) is an ordering relation in the set Occ(γ); this ordering will be called the occurrence ordering of γ and will be denoted by Ord(γ).

Proposition 1.6.3 For each string w, the ordering Ord(w) is an extension of Ord(⟨w⟩) to a linear ordering.

Proof: It follows easily from the definition of arc connections in d-graphs, by comparing it with the definition of Ord(w) given by (1.37). □

Proposition 1.6.4 Let γ be a d-graph, and let S be an extension of Ord(γ) to a linear ordering. Then there is a string u such that S = Ord(u) and γ = ⟨u⟩.

Proof: By induction. If γ is the empty graph, then set u = ε. Let γ be non-empty and let S be an extension of Ord(γ) to a linear ordering. Let (a, n) be the maximum element of S. Then γ has a maximal vertex labelled with a. Delete this vertex from γ and denote the resulting graph by β; delete also the maximum element from S and denote the resulting ordering by R. Clearly, R is a linear extension of Ord(β); by the induction hypothesis there is a string, say w, such that R = Ord(w) and β = ⟨w⟩. But then S = Ord(wa) and γ = ⟨wa⟩. Thus, u = wa meets the requirement of the proposition. □

Theorem 1.6.5 For all strings w: Ord([w]) = Ord(⟨w⟩).

Proof: Obvious in view of the definition of the trace ordering and Propositions 1.6.3 and 1.6.4. □
This theorem states that the intersection of the linear orderings of the representatives of a trace is the same ordering as the transitive and reflexive closure of the ordering of the d-graph corresponding to that trace.
History ordering. Now, let (Σ_1, Σ_2, ..., Σ_n) be an n-tuple of finite alphabets, Σ = Σ_1 ∪ Σ_2 ∪ ... ∪ Σ_n, D = Σ_1² ∪ Σ_2² ∪ ... ∪ Σ_n², let π_i be the projection onto Σ_i*, and let π be the distribution function.

Fact 1.6.6 For any string w over Σ and each i ≤ n:

π_i(Ord(w)) = Ord(π_i(w)). (1.41)

Proof: Equality (1.41) holds for w = ε. Let w = ua, for a string u and a symbol a. If a ∉ Σ_i, equality (1.41) holds by the induction hypothesis, since π_i(w) = π_i(u). Let then a ∈ Σ_i. By definition, Ord(ua) = Ord(u) ∪ (Occ(ua) × {(a, u(a) + 1)}). By the induction hypothesis π_i(Ord(u)) = Ord(π_i(u)); by (1.36), π_i(Occ(ua)) = Occ(π_i(ua)); a ∈ Σ_i implies π_i(u)(a) = u(a), hence π_i(Ord(ua)) = Ord(π_i(u)) ∪ (Occ(π_i(ua)) × {(a, π_i(u)(a) + 1)}) = Ord(π_i(ua)). This ends the proof. □

Fact 1.6.7 For each string w over Σ:

Occ(w) = ⋃_{i=1}^{n} Occ(π_i(w)).

Proof: Since, by definition, π_i(Occ(w)) ⊆ Occ(w), and π_i(Occ(w)) = Occ(π_i(w)) for each i ≤ n, we have ⋃_{i=1}^{n} Occ(π_i(w)) ⊆ Occ(w). Conversely, let (a, k) ∈ Occ(w); then (a, k) ∈ π_i(Occ(w)) for any i such that a ∈ Σ_i; thus (a, k) ∈ Occ(π_i(w)) for this i; but this means that (a, k) ∈ ⋃_{i=1}^{n} Occ(π_i(w)), which completes the proof. □
Let (w_1, w_2, ..., w_n) be a global history; define

Ord(w_1, w_2, ..., w_n) = (⋃_{i=1}^{n} Ord(w_i))*. (1.42)

The following theorem closes the comparison of the different ways of defining the trace ordering.
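Definition (1.42) is equally easy to mechanize: take the union of the (linear) occurrence orderings of the individual histories and close it transitively. A Python sketch (our illustration; `ord_string` again implements (1.37)):

```python
# Ord(w_1, ..., w_n) per (1.42): reflexive-transitive closure of the
# union of the occurrence orderings of the individual histories.

def ord_string(w):
    order, seen, counts = set(), set(), {}
    for c in w:
        counts[c] = counts.get(c, 0) + 1
        seen.add((c, counts[c]))
        order |= {(x, (c, counts[c])) for x in seen}
    return order

def ord_history(components):
    rel = set().union(*(ord_string(w) for w in components))
    grew = True
    while grew:                               # naive transitive closure
        grew = False
        for (x, y) in list(rel):
            for (y2, z) in list(rel):
                if y == y2 and (x, z) not in rel:
                    rel.add((x, z))
                    grew = True
    return rel

# The running example: history (abba, aca) over ({a,b}, {a,c}).
h = ord_history(("abba", "aca"))
```

Chains through shared occurrences of a order, for example, (c, 1) before (a, 2), while the b-occurrences and (c, 1) stay incomparable, exactly as in the trace [abbca].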

Figure 1.7: History ordering: e follows a.

Theorem 1.6.8 Let D = ⋃ Σ_i², let π be the distribution function in (Σ_1, Σ_2, ..., Σ_n), and let ⟨w⟩ be the d-graph ⟨w⟩_D. Then for each string w: Ord(π(w)) = Ord(⟨w⟩).

Proof: Let D be as assumed above, let ⟨w⟩ denote ⟨w⟩_D, and let π(w) = (w_1, w_2, ..., w_n). Thus, for each i: w_i = π_i(w). Since Occ(w) = Occ(⟨w⟩) and Occ(w_i) ⊆ Occ(w), we have Occ(w_i) ⊆ Occ(⟨w⟩). Any two symbols in Σ_i are dependent, hence Ord(w_i) ⊆ Ord(⟨w⟩). Therefore ⋃_{i=1}^{n} Ord(w_i) ⊆ Ord(⟨w⟩) and, since Ord(⟨w⟩) is an ordering, (⋃_{i=1}^{n} Ord(w_i))* ⊆ Ord(⟨w⟩). To prove the inverse inclusion, observe that if there is an arc in the occurrence graph from (e′, k′) to (e″, k″), then (e′, e″) ∈ D; it means that there is an i such that e′, e″ ∈ Σ_i and that ((e′, k′), (e″, k″)) ∈ Ord(w_i), hence ((e′, k′), (e″, k″)) ∈ Ord(π(w)). This ends the proof. □
The history ordering, defined by (1.42), is illustrated in Fig. 1.7, which explains its principle. Let us interpret the ordering Ord(π(w)) in a similar way as we have done in the case of traces. Each individual history of a component can be viewed as the result of a local observation of a process, limited to events belonging to the repertoire of that component. From such a viewpoint, an observer can notice only events local to the component he is situated at; the remaining actions are invisible to him. Thus, being localized at the i-th component, he notices the ordering Ord(π_i(w)) only. To discover the global ordering of events in the whole history, all such individual histories have to be put together; then events observed coincidently from different components form a sort of link between individual observations and make it possible to discover an ordering between events local to separate and remote components. The rule is: an event occurrence (a″, n″) follows another event occurrence (a′, n′) if there is a sequence of event occurrences beginning with (a′, n′) and ending with (a″, n″) in which every element follows its predecessor according to an individual history containing both of them. Such a principle is used by historians to establish a chronology of events concerning remote and separate historical objects.
Let us now compare all three ways of defining ordering within graphs, traces, and histories. They have been defined in the following way:

Ord(⟨w⟩): as the transitive and reflexive closure of the arc relation of ⟨w⟩;
Ord([w]): as the intersection of the linear orderings of all representatives of [w];
Ord(π(w)): as the transitive closure of the union of the linear orderings of all components π_i(w).

All of them describe processes as partially ordered sets of event occurrences. They have been defined in different ways: in a natural way for graphs, as the intersection of individually observed sequential orderings in the case of traces, and as the union of individually observed sequential orderings in the case of histories. In the case of traces, observations were complete, containing every action, but sometimes discovering different orderings of some events; in the case of histories, observations were incomplete, limited only to local events, but always discovering the proper ordering of the noticed events. In the case of traces the global ordering is obtained by comparing all individual observations and rejecting subjective and irrelevant information (by intersecting the individual orderings); in the case of histories, all individual observations are collected together to gain complementary and relevant information (by uniting the individual observations). While the second method has been compared to the method of historians, the first one can be compared to that of physicists. In all three cases the results are the same; this gives an additional argument for the above definitions and indicates their soundness.

1.7 Trace Languages

Let D be a dependency; by a trace language over D we shall mean any set of traces over D. The set of all trace languages over D will be denoted by T(D). For a given set L of strings over Σ_D denote by [L]_D the set {[w]_D | w ∈ L}, and for a given set T of traces over D denote by ⋃T the set {w | [w]_D ∈ T}. Clearly,

L ⊆ ⋃[L]_D and T = [⋃T]_D (1.43)

for all (string) languages L and (trace) languages T over D. A string language L such that L = ⋃[L]_D is said to be (existentially) consistent with the dependency D [3]. The composition T_1 T_2 of trace languages T_1, T_2 over the same dependency D is the set {t_1 t_2 | t_1 ∈ T_1 ∧ t_2 ∈ T_2}. The iteration of a trace language T over D is defined in the same way as the iteration of a string language:

T* = ⋃_{n=0}^{∞} T^n,

where

T⁰ = {[ε]_D} and T^{n+1} = T^n T.
The following proposition results directly from the definitions:

Proposition 1.7.1 For any dependency D, any string languages L, L_1, L_2 over Σ_D, and any family {L_i}_{i∈I} of string languages over Σ_D:

[∅]_D = ∅, (1.44)
[L_1]_D [L_2]_D = [L_1 L_2]_D, (1.45)
L_1 ⊆ L_2 ⇒ [L_1]_D ⊆ [L_2]_D, (1.46)
[L_1]_D ∪ [L_2]_D = [L_1 ∪ L_2]_D, (1.47)
⋃_{i∈I} [L_i]_D = [⋃_{i∈I} L_i]_D, (1.48)
([L]_D)* = [L*]_D. (1.49)

Before defining the synchronization of languages (which is, for the present purpose, the most important operation on languages) let us introduce the notion of projection adjusted to traces and dependencies.
Let C be a dependency, C ⊆ D, and let t be a trace over D; then the trace projection π_C(t) of t onto C is a trace over C defined as follows:

π_C(t) = [ε]_C, if t = [ε]_D;
π_C(t) = π_C(t_1), if t = t_1[e]_D and e ∉ Σ_C;
π_C(t) = π_C(t_1)[e]_C, if t = t_1[e]_D and e ∈ Σ_C.

Intuitively, the trace projection onto C deletes from traces all symbols not in Σ_C and weakens the dependency within traces. Thus, the projection of a trace over a dependency D onto a dependency C which is "smaller", but has the same alphabet as D, is a "weaker" trace, containing more representatives than the original one.
The following proposition is easy to prove:

Proposition 1.7.2 Let C, D be dependencies, D ⊆ C. Then u ≡_C v ⇒ π_D(u) ≡_D π_D(v).

Define the synchronization of a string language L_1 over Σ_1 with a string language L_2 over Σ_2 as the string language (L_1 || L_2) over Σ_1 ∪ Σ_2 such that

w ∈ (L_1 || L_2) ⇔ π_{Σ_1}(w) ∈ L_1 ∧ π_{Σ_2}(w) ∈ L_2.

Proposition 1.7.3 Let L, L_1, L_2, L_3 be string languages, Σ_1 the alphabet of L_1, Σ_2 the alphabet of L_2, let {L_i}_{i∈I} be a family of string languages over a common alphabet, and let w be a string over Σ_1 ∪ Σ_2. Then:

L || L = L,
L_1 || L_2 = L_2 || L_1,
L_1 || (L_2 || L_3) = (L_1 || L_2) || L_3,
{ε} || {ε} = {ε},
(⋃_{i∈I} L_i) || L = ⋃_{i∈I} (L_i || L),
(L_1 || L_2){w} = (L_1{π_{Σ_1}(w)}) || (L_2{π_{Σ_2}(w)}),
{w}(L_1 || L_2) = ({π_{Σ_1}(w)}L_1) || ({π_{Σ_2}(w)}L_2).

Proof: The first four equalities are obvious. For the fifth, let w ∈ (⋃_{i∈I} L_i) || L; it means that π_{Σ′}(w) ∈ ⋃_{i∈I} L_i and π_Σ(w) ∈ L, where Σ′ is the common alphabet of the family and Σ is the alphabet of L; then there is i ∈ I with π_{Σ′}(w) ∈ L_i and π_Σ(w) ∈ L, i.e. w ∈ ⋃_{i∈I} (L_i || L). For the sixth, let u ∈ (L_1 || L_2){w}; it means that there exists v such that v ∈ (L_1 || L_2) and u = vw; by the definition of synchronization, π_{Σ_1}(v) ∈ L_1 and π_{Σ_2}(v) ∈ L_2; because w is a string over Σ_1 ∪ Σ_2, this is equivalent to π_{Σ_1}(vw) ∈ L_1{π_{Σ_1}(w)} and π_{Σ_2}(vw) ∈ L_2{π_{Σ_2}(w)}; it means that u ∈ (L_1{π_{Σ_1}(w)}) || (L_2{π_{Σ_2}(w)}). The proof of the last equality is similar. □
Define the synchronization of a trace language T_1 over D_1 with a trace language T_2 over D_2 as the trace language (T_1 || T_2) over D_1 ∪ D_2 such that

t ∈ (T_1 || T_2) ⇔ π_{D_1}(t) ∈ T_1 ∧ π_{D_2}(t) ∈ T_2.

Proposition 1.7.4 For any dependencies D_1, D_2 and any string languages L_1 over Σ_{D_1}, L_2 over Σ_{D_2}:

[L_1]_{D_1} || [L_2]_{D_2} = [L_1 || L_2]_{D_1∪D_2}.

Proof:

[L_1]_{D_1} || [L_2]_{D_2} = {t | π_{D_1}(t) ∈ [L_1]_{D_1} ∧ π_{D_2}(t) ∈ [L_2]_{D_2}}
 = {[w]_{D_1∪D_2} | π_{Σ_{D_1}}(w) ∈ L_1 ∧ π_{Σ_{D_2}}(w) ∈ L_2}
 = {[w]_{D_1∪D_2} | w ∈ L_1 || L_2}
 = [L_1 || L_2]_{D_1∪D_2}. □

Figure 1.8: Synchronization of two singleton trace languages.

Proposition 1.7.5 Let T, T_1, T_2, T_3 be trace languages, let D_1 be the dependency of T_1, D_2 the dependency of T_2, let {T_i}_{i∈I} be a family of trace languages over a common dependency, and let t be a trace over D_1 ∪ D_2. Then:

T || T = T, (1.50)
T_1 || T_2 = T_2 || T_1, (1.51)
T_1 || (T_2 || T_3) = (T_1 || T_2) || T_3, (1.52)
{[ε]_{D_1}} || {[ε]_{D_2}} = {[ε]_{D_1∪D_2}}, (1.53)
(⋃_{i∈I} T_i) || T = ⋃_{i∈I} (T_i || T), (1.54)
(T_1 || T_2){t} = (T_1{π_{D_1}(t)}) || (T_2{π_{D_2}(t)}), (1.55)
{t}(T_1 || T_2) = ({π_{D_1}(t)}T_1) || ({π_{D_2}(t)}T_2). (1.56)

Proof: It is similar to that of Proposition 1.7.3. □

Example 1.7.6 Let D_1 = {a, b, d}² ∪ {b, c, d}², D_2 = {a, b, c}² ∪ {a, c, e}², T_1 = {[cabd]_{D_1}}, T_2 = {[acbe]_{D_2}}. The synchronization T_1 || T_2 is the language {[acbde]_D} over the dependency D = D_1 ∪ D_2 = {a, b, c, d}² ∪ {a, c, e}². The synchronization of these two trace languages is illustrated in Fig. 1.8, where traces are represented by the corresponding d-graphs (without the arcs resulting from others by transitivity).
Observe that in a sense the synchronization operation is inverse with respect to projection, namely for any trace t over D_1 ∪ D_2 we have the equality:

{t} = {π_{D_1}(t)} || {π_{D_2}(t)}.

Observe also that for any traces t_1 over D_1, t_2 over D_2, the synchronization {t_1} || {t_2} is either empty or a singleton set. Thus synchronization can be viewed as a partial operation on traces. In general, this is not so in the case of strings: the synchronization of two singleton string languages need not be a singleton. E.g., the synchronization of the string language {ab} over {a, b} with the string language {cd} over {c, d} is the string language {abcd, acbd, acdb, cabd, cadb, cdab}, while the synchronization of the trace language {[ab]} over {a, b}² with the trace language {[cd]} over {c, d}² is the singleton trace language {[abcd]} over {a, b}² ∪ {c, d}². This is yet another justification for using traces instead of strings to represent runs of concurrent systems.
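The string-language side of this comparison can be checked by brute force for finite languages: enumerate the strings over Σ_1 ∪ Σ_2 up to the maximal possible length and keep those whose projections fall into L_1 and L_2. A Python sketch (our own, for illustration only):

```python
# Synchronization of finite string languages, directly from the definition:
# w is in L1 || L2 iff its projections onto the two alphabets are in L1, L2.

from itertools import product

def proj(w, A):
    return "".join(c for c in w if c in A)

def sync(L1, A1, L2, A2):
    sigma = sorted(A1 | A2)
    limit = max(map(len, L1)) + max(map(len, L2))  # |w| <= |u| + |v|
    out = set()
    for k in range(limit + 1):
        for tup in product(sigma, repeat=k):
            w = "".join(tup)
            if proj(w, A1) in L1 and proj(w, A2) in L2:
                out.add(w)
    return out

result = sync({"ab"}, {"a", "b"}, {"cd"}, {"c", "d"})
```

With disjoint alphabets the synchronization of {ab} and {cd} is exactly the set of shuffles of the two strings, six in all, whereas the corresponding trace synchronization collapses them into the single trace [abcd].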

The synchronization operation can easily be extended to an arbitrary number of arguments: for a family of trace languages {T_i}_{i∈I}, with T_i being a trace language over D_i, we define ||_{i∈I} T_i as the trace language over ⋃_{i∈I} D_i such that

t ∈ ||_{i∈I} T_i ⇔ ∀i ∈ I : π_{D_i}(t) ∈ T_i.

Fact 1.7.7 If T_1, T_2 are trace languages over a common dependency, then T_1 || T_2 = T_1 ∩ T_2.

Proof: Let T_1, T_2 be trace languages over a dependency D; then t ∈ (T_1 || T_2) if and only if π_D(t) ∈ T_1 and π_D(t) ∈ T_2; but since t is a trace over D, π_D(t) = t, hence t ∈ T_1 and t ∈ T_2. □
A trace language T is prefix-closed if s ⊑ t ∈ T ⇒ s ∈ T.

Proposition 1.7.8 If T_1, T_2 are prefix-closed trace languages, then so is their synchronization T_1 || T_2.

Proof: Let T_1, T_2 be prefix-closed trace languages over dependencies D_1, D_2, respectively. Let t ∈ (T_1 || T_2) and let t′ be a prefix of t; then there is t″ such that t = t′t″. By the properties of projection, π_{D_1}(t) = π_{D_1}(t′t″) = π_{D_1}(t′)π_{D_1}(t″) ∈ T_1 and π_{D_2}(t) = π_{D_2}(t′t″) = π_{D_2}(t′)π_{D_2}(t″) ∈ T_2; since T_1, T_2 are prefix-closed, π_{D_1}(t′) ∈ T_1 and π_{D_2}(t′) ∈ T_2; hence, by definition, t′ ∈ (T_1 || T_2). □
Fixed-point equations. Let X1, X2, ..., Xn, Y be families of sets, n > 0, and
let f : X1 × X2 × ⋯ × Xn → Y; f is monotone if

(∀i : X′_i ⊆ X″_i) ⇒ f(X′_1, X′_2, ..., X′_n) ⊆ f(X″_1, X″_2, ..., X″_n)

for all X′_i, X″_i ∈ X_i, i = 1, 2, ..., n.

Fact 1.7.9 The concatenation, union, iteration, and synchronization operations on
string (trace) languages are monotone.

Proof: Obvious in view of the corresponding definitions. □

Fact 1.7.10 The superposition (composition) of any number of monotone operations is
monotone.

Proof: Clear. □
32 C H A P T E R 1. INTRODUCTION TO T R A C E THEORY

Let D1, D2, ..., Dn, D be dependencies, n > 0, and let

f : 2^{Σ*_{D1}} × 2^{Σ*_{D2}} × ⋯ × 2^{Σ*_{Dn}} → 2^{Σ*_D};

f is congruent if

(∀i : [X′_i]_{D_i} = [X″_i]_{D_i}) ⇒ [f(X′_1, X′_2, ..., X′_n)]_D = [f(X″_1, X″_2, ..., X″_n)]_D

for all X′_i, X″_i ∈ 2^{Σ*_{D_i}}, i = 1, 2, ..., n.

Fact 1.7.11 The concatenation, union, iteration, and synchronization operations on
string languages are congruent.

Proof: It follows from Proposition 1.7.1 and Proposition 1.7.4. □

Fact 1.7.12 The superposition (composition) of any number of congruent operations is
congruent.

Proof: Clear. □

Let D1, D2, ..., Dn, D be dependencies, n > 0, and let

f : 2^{Σ*_{D1}} × 2^{Σ*_{D2}} × ⋯ × 2^{Σ*_{Dn}} → 2^{Σ*_D}

be congruent. Denote by [f]_D the function

[f]_D : 2^{T_{D1}} × 2^{T_{D2}} × ⋯ × 2^{T_{Dn}} → 2^{T_D}

defined by the equality

[f]_D([L1]_{D1}, [L2]_{D2}, ..., [Ln]_{Dn}) = [f(L1, L2, ..., Ln)]_D.

This definition is correct by the congruence of f.

We say that the set x0 is the least fixed point of a function f : 2^X → 2^X if
f(x0) = x0 and for any set x with f(x) = x the inclusion x0 ⊆ x holds.

Theorem 1.7.13 Let D be a dependency and f : 2^{Σ*_D} → 2^{Σ*_D} be monotone and
congruent. If L0 is the least fixed point of f, then [L0]_D is the least fixed point of
[f]_D.

Proof: Let D, f, and L0 be as in the formulation of the theorem. First
observe that for any L with f(L) ⊆ L we have L0 ⊆ L. Indeed, let L′ be the least set
of strings such that f(L′) ⊆ L′; then L′ ⊆ L. Since f is monotone, f(f(L′)) ⊆ f(L′),
hence f(L′) also satisfies the inclusion, and consequently L′ ⊆ f(L′); hence L′ = f(L′),
and by the definition of L0 we have L0 ⊆ L′ ⊆ L. Now,

[L0]_D = [f(L0)]_D = [f]_D([L0]_D)

by the congruence of f. Thus [L0]_D is a fixed point of [f]_D. We show that [L0]_D is the
least fixed point of [f]_D. Let T be a trace language over D such that [f]_D(T) = T,
and set L = ∪T. Thus [L]_D = T and ∪T = ∪[L]_D = L. By (1.43) we have f(L) ⊆
∪[f(L)]_D; by the congruence of f we have ∪[f(L)]_D = ∪[f]_D([L]_D); by the definition of L
we have ∪[f]_D([L]_D) = ∪[f]_D(T); and by the definition of T we get ∪[f]_D(T) = ∪T
and, again by the definition of L, ∪T = L. Collecting all the above relations together, we
get f(L) ⊆ L. Thus, as we have already proved, L0 ⊆ L, hence [L0]_D ⊆ [L]_D = T.
This proves [L0]_D to be the least fixed point of [f]_D. □
This theorem can easily be generalized to tuples of monotone and congru-
ent functions. As we have already mentioned, functions built up from variables and
constants by means of the union, concatenation, and iteration operations can serve as
examples of monotone and congruent functions. Theorem 1.7.13 allows us to lift the
well-known and widely applied method of defining string languages by fixed-point
equations to the case of trace languages. It also offers a useful and handy tool for the
analysis of concurrent systems represented by elementary net systems, as shown in the
next section.

It is worthwhile to note that, under the same assumptions as in Theorem 1.7.13,
M0 can be the greatest fixed point of a function f while [M0] is not the greatest
fixed point of [f].
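The least fixed point of a monotone function on languages can be computed by Kleene iteration from the bottom element ∅. Below is a small sketch (Python) for a toy right-linear equation X = beX ∪ {b, ε}, whose least solution is the prefix closure of (be)*; the length truncation `MAXLEN` is our own device, added so that the lattice of languages is finite and the iteration terminates:

```python
MAXLEN = 8  # truncation (our device): keeps the lattice of languages finite

def f(X):
    # One application of the right-hand side of X = beX ∪ {b, ε},
    # discarding strings longer than MAXLEN; f is monotone.
    Y = {'', 'b'} | {'be' + x for x in X}
    return frozenset(w for w in Y if len(w) <= MAXLEN)

X = frozenset()          # bottom element of the lattice
while True:              # Kleene iteration: ∅ ⊆ f(∅) ⊆ f(f(∅)) ⊆ ...
    Y = f(X)
    if Y == X:
        break            # stationary: X is the least fixed point
    X = Y

print(sorted(X, key=len))  # prefixes of (be)*, up to length MAXLEN
```

The same scheme lifts verbatim to trace languages: by Theorem 1.7.13, iterating [f]_D from ∅ converges to [L0]_D.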

1.8 Elementary Net Systems


In this section we show how trace theory is related to net models of concurrent
systems. First, elementary net systems, introduced in [238] on the basis of Petri
nets, will be presented. The behaviour of such systems will be represented by
string languages describing the various sequences of system actions that can be executed
while the system is running. Next, the trace behaviour will be defined and compared
with the string behaviour. Finally, compositionality of behaviours will be shown:
the behaviour of a net composed from other nets as components turns out to be the
synchronization of the component behaviours.
Elementary net systems. By an elementary net system, called simply a net
from now on, we understand any ordered quadruple

N = (P_N, E_N, F_N, m⁰_N)

where P_N, E_N are finite disjoint sets (of places and transitions, respectively), F_N ⊆ P_N ×
E_N ∪ E_N × P_N (the flow relation), and m⁰_N ⊆ P_N. It is assumed that Dom(F_N) ∪
Cod(F_N) = P_N ∪ E_N (there are no "isolated" places or transitions) and F_N ∩ F_N⁻¹ = ∅.
Any subset of P_N is called a marking of N; m⁰_N is called the initial marking. A
place in a marking is said to be marked, or carrying a token.
As with all other types of nets, elementary net systems are represented graphically
using boxes to represent transitions, circles to represent places, and arrows leading
from circles to boxes or from boxes to circles to represent the flow relation; in such
a representation, circles corresponding to places in the initial marking are marked
with dots. In Fig. 1.9 the net (P, E, F, m) with

P = {1, 2, 3, 4, 5, 6},
E = {a, b, c, d, e},
F = {(1,a), (a,2), (2,b), (b,3), (3,c), (c,1),
     (1,d), (d,4), (5,b), (b,6), (6,e), (e,5)},
m = {1, 5}

is represented graphically.

Figure 1.9: Example of an elementary net system.
Define Pre_N, Post_N, Prox_N as functions from E_N to 2^{P_N} such that for all a ∈ E_N:

Pre_N(a) = {p | (p, a) ∈ F_N};   (1.57)
Post_N(a) = {p | (a, p) ∈ F_N};   (1.58)
Prox_N(a) = Pre_N(a) ∪ Post_N(a);   (1.59)

places in Pre_N(a) are called the entry places of a, those in Post_N(a) are called the
exit places of a, and those in Prox_N(a) are neighbours of a; the set Prox_N(a) is the
neighbourhood of a in N. The assumption F ∩ F⁻¹ = ∅ means that no place can
be both an entry and an exit of the same transition. The subscript N is omitted if the
net is understood.
The transition function of net N is the (partial) function δ_N : 2^{P_N} × E_N → 2^{P_N} such
that

δ_N(m1, a) = m2 ⇔ Pre(a) ⊆ m1, Post(a) ∩ m1 = ∅, and m2 = (m1 − Pre(a)) ∪ Post(a).

As usual, the subscript N is omitted if the net is understood; this convention will hold
for all subsequent notions and symbols related to them.

Clearly, this definition is correct, since for each marking m and transition a there
exists at most one marking equal to δ(m, a). If δ(m, a) is defined for marking m
and transition a, we say that a is enabled at marking m. We say that the transition
function describes single steps of the system.
The reachability function of net N is the function δ*_N : 2^P × E* → 2^P defined
recursively by the equalities

δ*_N(m, ε) = m,   (1.60)
δ*_N(m, wa) = δ_N(δ*_N(m, w), a)   (1.61)

for all m ∈ 2^P, w ∈ E*, a ∈ E.

The behavioural function of net N is the function β_N : E* → 2^P defined by
β_N(w) = δ*(m⁰, w) for all w ∈ E*.

As usual, the subscript N is omitted wherever the net N is understood.
The set (of strings) S_N = Dom(β) is the (sequential) behaviour of net N; the set
(of markings) R_N = Cod(β) is the set of reachable markings of N. Elements of S will
be called execution sequences of N; an execution sequence w such that β(w) = m ⊆ P
is said to lead to m.
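For concreteness, the net of Fig. 1.9 together with its transition, reachability, and behavioural functions can be encoded directly. A minimal sketch in Python (the dictionary encoding of Pre/Post and the use of `None` for undefinedness are our own choices):

```python
# Net of Fig. 1.9: Pre and Post sets of each transition, initial marking {1, 5}.
PRE  = {'a': {1}, 'b': {2, 5}, 'c': {3}, 'd': {1}, 'e': {6}}
POST = {'a': {2}, 'b': {3, 6}, 'c': {1}, 'd': {4}, 'e': {5}}
M0 = {1, 5}

def delta(m, a):
    # Transition function: defined iff Pre(a) ⊆ m and Post(a) ∩ m = ∅.
    if m is not None and PRE[a] <= m and not (POST[a] & m):
        return (m - PRE[a]) | POST[a]
    return None  # a is not enabled at m (partiality)

def delta_star(m, w):
    # Reachability function, equations (1.60)-(1.61), folded along the string.
    for a in w:
        m = delta(m, a)
    return m

def beta(w):
    # Behavioural function: marking reached from the initial marking.
    return delta_star(M0, w)

print(beta('ab'))    # {3, 6}
print(beta('abce'))  # {1, 5} -- one full cycle, back to the initial marking
print(beta('ba'))    # None   -- b is not enabled initially
```

Here Dom(β), i.e. the set of strings on which `beta` is not `None`, is exactly the sequential behaviour S_N.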
The sequential behaviour of N is clearly a prefix-closed language and, as any prefix-
closed language, it is ordered into a tree by the prefix relation. Maximal linearly
ordered subsets of S can be viewed as sequential observations of the behaviour of
N, i.e. observations made by observers capable of seeing only a single event occurrence
at a time. The ordering of symbols in strings of S reflects not only the (objective)
causal ordering of event occurrences but also a (subjective) observational ordering
resulting from a specific view of concurrent actions. Therefore, the structure of
S alone does not allow one to decide whether a difference in ordering is caused by a
conflict resolution (a decision made in the system) or by different observations of
concurrency. In order to extract from S the causal ordering of event occurrences
we must supply S with additional information; as such information we take here
the dependency of events.
The dependency relation for net N is defined as the relation D_N ⊆ E × E such that

(a, b) ∈ D_N ⇔ Prox_N(a) ∩ Prox_N(b) ≠ ∅.

Intuitively speaking, two transitions are dependent if either they share a common
entry place (then they "compete" for taking a token away from this place), or they
share a common exit place (and then they compete for putting a token into the
place), or an entry place of one transition is an exit place of the other (and then one
of them "waits" for the other). Transitions are independent if their neighbourhoods
are disjoint. If two such transitions are both enabled at a marking, they can be executed
concurrently (independently of each other).
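For the net of Fig. 1.9, the dependency relation can be computed directly from the neighbourhoods. A short sketch (Python; the set-based Pre/Post encoding of the net is our own assumption):

```python
# Pre and Post sets of the transitions of the net in Fig. 1.9 (our encoding).
PRE  = {'a': {1}, 'b': {2, 5}, 'c': {3}, 'd': {1}, 'e': {6}}
POST = {'a': {2}, 'b': {3, 6}, 'c': {1}, 'd': {4}, 'e': {5}}

def prox(a):
    # Neighbourhood of transition a: its entry places plus its exit places.
    return PRE[a] | POST[a]

E = sorted(PRE)
D = {(a, b) for a in E for b in E if prox(a) & prox(b)}

print(('a', 'e') in D)  # False: a and e have disjoint neighbourhoods
print(('b', 'e') in D)  # True:  b and e share places 5 and 6
```

Since no transition is isolated, the relation computed this way is reflexive and symmetric, as a dependency should be.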
Define the non-sequential behaviour of net N as the trace language [S_N]_{D_N} and
denote it by B_N. That is, the non-sequential behaviour of a net arises from its
sequential behaviour by identifying some execution sequences; as follows from
the definition of dependency, two such sequences are identified if they differ only in the
order of execution of independent transitions.

The consistency of the sequential and non-sequential behaviours is guaranteed by the
following proposition (where the equality of values of two partial functions means that
either both of them are defined and then their values are equal, or both of them are
undefined).

Proposition 1.8.1 For any net N the behavioural function β is congruent w.r. to
D, i.e. for any strings u, v ∈ E*, u ≡_D v implies β(u) = β(v).

Proof: Let u ≡_D v; it suffices to consider the case u = xaby and v = xbay, for
independent transitions a, b and arbitrary x, y ∈ E*. Because of the definition of the
behavioural function, we have to prove only

δ(δ(m, a), b) = δ(δ(m, b), a)

for an arbitrary marking m. By simple inspection we conclude that, because of the
independence of a and b, i.e. because of the disjointness of the neighbourhoods of a and b,
both sides of the above condition are equal to the marking

(m − (Pre(a) ∪ Pre(b))) ∪ (Post(a) ∪ Post(b))

or both of them are undefined. □


This is another motivation for introducing traces to describe the behaviour of concurrent
systems: from the point of view of reachability, all equivalent
execution sequences are the same. As we have shown earlier, firing traces can be
viewed as partially ordered sets of symbol occurrences; two occurrences are not
ordered if they represent transitions that can be executed independently of each
other. Thus, the ordering within firing traces is determined by the mutual dependen-
cies of events, and hence it reflects the causal relationship between event occurrences
rather than the non-objective ordering that follows from the string representation of
net activity.
Two traces are said to be consistent if both of them are prefixes of a common
trace. A trace language T is complete if any two consistent traces in T are prefixes
of a trace in T.

Proposition 1.8.2 The trace behaviour of any net is a complete trace language.

Proof: Let B be the trace behaviour of net N, let I be the independence induced by
D, let [w], [u] ∈ B, and let [wx] = [uy] for some strings x, y. Then, by Levi's
Lemma for traces, there are strings v1, v2, v3, v4 such that v1v2 = w, v3v4 = x, v1v3 =
u, v2v4 = y, and (v2, v3) ∈ I. Let m = β(v1). Since w ∈ Dom(β), v1v2 ∈ Dom(β);
since u ∈ Dom(β), v1v3 ∈ Dom(β); hence δ*(m, v2) is defined and δ*(m, v3) is defined;
since (v2, v3) ∈ I, v1v2v3 ≡ v1v3v2; moreover, δ*(m, v2v3) is defined, δ*(m, v3v2) is
defined, and δ*(m, v2v3) = δ*(m, v3v2). It means that v1v2v3 ≡ v1v3v2 ∈ Dom(β),
i.e. wv3 ≡ uv2 ∈ Dom(β). It ends the proof. □

Figure 1.10: Decomposition of a net (into two sequential components).

Composition of nets and their behaviours. Here we are going to present
a modular way of finding the behaviour of nets. It will be shown how to construct
the behaviour of a net from the behaviours of its parts: the first step is to decompose
a given net into modules; next, to find their behaviours; and finally, to put them
together, finding in this way the behaviour of the original net.

Let I = {1, 2, ..., n}; we say that a net N = (P, E, F, m) is composed of nets
N_i = (P_i, E_i, F_i, m_i), i ∈ I, and write

N = N1 + N2 + ⋯ + Nn,

if i ≠ j implies P_i ∩ P_j = ∅ (the N_i are pairwise place-disjoint), and

P = ∪ᵢ₌₁ⁿ P_i,  E = ∪ᵢ₌₁ⁿ E_i,  F = ∪ᵢ₌₁ⁿ F_i,  m = ∪ᵢ₌₁ⁿ m_i.

The net in Fig. 1.9 is composed of the two nets presented in Fig. 1.10.
Composition of nets could be defined without assuming the disjointness of the sets
of places; instead, we could use the disjoint union of the sets of places rather than the
set-theoretical union in the composition definition. However, this would require a
new definition of nets, with two nets considered identical if there is an isomorphism
identifying nets with different sets of places and preserving the flow relation and
the initial marking. For the sake of simplicity, we assume the sets of places of the compo-
sition components to be disjoint, and we use the usual set-theoretical union in our
definition.
Proposition 1.8.3 Composition of nets is associative and commutative.

Proof: Obvious. □

Let I = {1, 2, ..., n} and let N = N1 + N2 + ⋯ + Nn, with N_i = (P_i, E_i, F_i, m⁰_i)
for all i ∈ I. Let Pre_{N_i}, Post_{N_i}, Prox_{N_i} be denoted by Pre_i, Post_i, Prox_i; more-
over, let π_i be the projection function from E* onto E*_i. Finally, let m_i = m ∩ P_i
for any m ⊆ P.

Proposition 1.8.4 Let Q, R ⊆ P and set Q_i = Q ∩ P_i, R_i = R ∩ P_i. If P = ∪_{i∈I} P_i
and the members of the family {P_i}_{i∈I} are pairwise disjoint, then

Q ⊆ m ⇔ ∀i ∈ I: Q_i ⊆ m_i,
R = Q ∩ m ⇔ ∀i ∈ I: R_i = Q_i ∩ m_i,
m = Q ∪ R ⇔ ∀i ∈ I: m_i = Q_i ∪ R_i,
R = m − Q ⇔ ∀i ∈ I: R_i = m_i − Q_i,

and Q = ∪_{i∈I} Q_i, R = ∪_{i∈I} R_i.

Proof: Clear. □

Proposition 1.8.5 D_N = ∪ᵢ₌₁ⁿ D_{N_i}.

Proof: Let (a, b) ∈ E × E and set Q = Prox(a) ∩ Prox(b), Q_i = Prox_i(a) ∩ Prox_i(b);
by Proposition 1.8.4, Q = ∪ᵢ₌₁ⁿ Q_i. Hence (a, b) ∈ D_N ⇔ Prox(a) ∩ Prox(b) ≠ ∅ ⇔
Q ≠ ∅ ⇔ ∪ᵢ₌₁ⁿ Q_i ≠ ∅ ⇔ ∪ᵢ₌₁ⁿ (Prox_i(a) ∩ Prox_i(b)) ≠ ∅ ⇔ (a, b) ∈ ∪ᵢ₌₁ⁿ D_{N_i}. □

Proposition 1.8.6 Let δ_i denote the transition function of N_i. Then

δ(m′, a) = m″ ⇔ ∀i ∈ I: (a ∈ E_i ∧ δ_i(m′_i, a) = m″_i) ∨ (a ∉ E_i ∧ m′_i = m″_i)

for all m′, m″ ⊆ P and a ∈ E.

Proof: It is a direct consequence of Proposition 1.8.4. □


From the above proposition it follows by an easy induction:

6*(m',w) = m" &VieI: 5 ? ( ™ i , f i M ) = m'!. (1.62)

The main theorem of this section is the following:

Theorem 1.8.7

B_{N1+N2+⋯+Nn} = B_{N1} || B_{N2} || ⋯ || B_{Nn}.

Proof: Set η(m) = {w ∈ E* | δ*(m⁰, w) = m} for each m ⊆ P, and η_i(m_i) = {w ∈
E*_i | δ*_i(m⁰_i, w) = m_i} for each m_i ⊆ P_i. By (1.62), δ*(m⁰, w) = m ⇔
∀i ∈ I: δ*_i(m⁰_i, π_i(w)) = m_i; thus w ∈ η(m) ⇔ ∀i ∈ I: π_i(w) ∈ η_i(m_i). It means that

η(m) = η1(m1) || η2(m2) || ⋯ || ηn(mn).

Thus, by the definition of the behaviour, the proof is completed. □
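Theorem 1.8.7 can be checked mechanically on the running example: up to any length bound, the sequential behaviour of the net of Fig. 1.9 coincides with the synchronization of the behaviours of its two place-induced components. A brute-force sketch (Python; the subnet construction `component` and all names are ours):

```python
from itertools import product

# Net of Fig. 1.9; the components of Fig. 1.10 are induced by the
# place sets {5, 6} and {1, 2, 3, 4}.
PRE  = {'a': {1}, 'b': {2, 5}, 'c': {3}, 'd': {1}, 'e': {6}}
POST = {'a': {2}, 'b': {3, 6}, 'c': {1}, 'd': {4}, 'e': {5}}
M0 = {1, 5}

def component(places):
    # Restrict the flow and marking to `places`; keep only transitions
    # with a neighbour among those places (the component's alphabet).
    pre  = {a: PRE[a]  & places for a in PRE}
    post = {a: POST[a] & places for a in POST}
    alpha = {a for a in PRE if pre[a] | post[a]}
    return pre, post, M0 & places, alpha

def behaviour(pre, post, m0, alpha, maxlen):
    # All execution sequences of length at most maxlen (brute force).
    seqs = set()
    for n in range(maxlen + 1):
        for t in product(sorted(alpha), repeat=n):
            m = set(m0)
            for a in t:
                if pre[a] <= m and not (post[a] & m):
                    m = (m - pre[a]) | post[a]
                else:
                    m = None
                    break
            if m is not None:
                seqs.add(''.join(t))
    return seqs

def proj(w, alpha):
    return ''.join(c for c in w if c in alpha)

K = 5
p1, q1, n1, A1 = component({5, 6})        # component over {b, e}
p2, q2, n2, A2 = component({1, 2, 3, 4})  # component over {a, b, c, d}
S  = behaviour(PRE, POST, M0, set(PRE), K)
S1 = behaviour(p1, q1, n1, A1, K)
S2 = behaviour(p2, q2, n2, A2, K)

synced = set()
for n in range(K + 1):
    for t in product(sorted(A1 | A2), repeat=n):
        w = ''.join(t)
        if proj(w, A1) in S1 and proj(w, A2) in S2:
            synced.add(w)

print(S == synced)  # the two behaviours agree on all strings up to length K
```

The truncation is harmless here: a string of length at most K has projections of length at most K, so membership in the truncated component behaviours is equivalent to membership in the full ones.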



This theorem allows us to find the behaviour of a net knowing the behaviours of
its components. One can expect the behaviour of the components to be easier to find
than that of the whole net. As an example, let us find the behaviour of the net
in Fig. 1.9 using its decomposition into two components, as in Fig. 1.10. These
components are sequential; from the theory of sequential systems it is known that
their behaviours can be described by the least solutions of the equations

X1 = beX1 ∪ b ∪ ε,
X2 = abcX2 ∪ ab ∪ a ∪ d ∪ ε

for languages X1, X2 over the alphabets {b, e} and {a, b, c, d}, respectively. By the
definition of the trace behaviour of nets, by Theorem 1.8.7, and by Theorem 1.7.13 we have

[X1]_{D1} = [be]_{D1}[X1]_{D1} ∪ [b]_{D1} ∪ [ε]_{D1},
[X2]_{D2} = [abc]_{D2}[X2]_{D2} ∪ [ab]_{D2} ∪ [a]_{D2} ∪ [d]_{D2} ∪ [ε]_{D2}

with D1 = {b, e}², D2 = {a, b, c, d}². The equation for the synchronization X1 || X2
is, by the properties of synchronization, as follows:

[X1]_{D1} || [X2]_{D2} =
([be]_{D1}[X1]_{D1} ∪ [b]_{D1} ∪ [ε]_{D1}) || ([abc]_{D2}[X2]_{D2} ∪ [ab]_{D2} ∪ [a]_{D2} ∪ [d]_{D2} ∪ [ε]_{D2})

and after some easy calculations (using Proposition 1.7.5) we get

[X1 || X2]_D = [abce]_D[X1 || X2]_D ∪ [abc ∪ abe ∪ ab ∪ a ∪ d ∪ ε]_D
with D = D1 ∪ D2 = {b, e}² ∪ {a, b, c, d}². Observe that the composition of two com-
ponents that are sequential results in a net with some independent transitions; in
the considered case, the independent pairs of transitions are (a, e), (c, e), (d, e) (and
their symmetric images).
There exists a standard set of very simple nets, namely nets with a singleton
set of places, whose behaviours can be considered as known. Such nets are
called atomic, or atoms; an arbitrary net can be composed from such nets; hence
the behaviour of any net can be composed from the behaviours of its atoms. Thus,
by giving the behaviours of atoms, we supply another definition of the elementary net
system behaviour. More formally, let N = (P, E, F, m⁰) be a net; for each p ∈ P
the net

N_p = ({p}, Prox(p), F_p, (m⁰)_p),

where

F_p = {(e, p) | e ∈ Pre(p)} ∪ {(p, e) | e ∈ Post(p)},  (m⁰)_p = m⁰ ∩ {p},

is called an atom of N (determined by p). Clearly, the following proposition holds:

Proposition 1.8.8 Each net is the composition of all its atoms.



Figure 1.11: An example of an atomic net.

The behaviour of atoms can easily be found. Namely, let the atom N0 be defined
as N0 = ({p}, A ∪ Z, F, m), where A = {e | (e, p) ∈ F} and Z = {e | (p, e) ∈ F}. Say
that N0 is marked if m = {p}, and unmarked otherwise, i.e. if m = ∅. Then, by
the definition of the behavioural function, the trace behaviour B_{N0} is the trace language
[(ZA)*(Z ∪ ε)]_D if N0 is marked, and [(AZ)*(A ∪ ε)]_D if it is unmarked, where
D = (A ∪ Z)².
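The formula for atom behaviours can be sanity-checked by simulation. The sketch below (Python; our own encoding) simulates a marked one-place atom with Z = {b} and A = {e} — the atom of the net in Fig. 1.9 determined by place 5 — and the resulting sequences follow the pattern (ZA)*(Z ∪ ε):

```python
from itertools import product

def atom_sequences(A, Z, marked, maxlen):
    # Execution sequences of a one-place atom: a transition in Z consumes
    # the token, a transition in A puts it back; nothing else is enabled.
    seqs = set()
    for n in range(maxlen + 1):
        for t in product(sorted(A | Z), repeat=n):
            token, ok = marked, True
            for e in t:
                if token and e in Z:
                    token = False
                elif not token and e in A:
                    token = True
                else:
                    ok = False
                    break
            if ok:
                seqs.add(''.join(t))
    return seqs

S = atom_sequences({'e'}, {'b'}, True, 6)
print(sorted(S, key=len))  # '', 'b', 'be', 'beb', ... : the pattern (be)*(b ∪ ε)
```

For the unmarked case one would pass `marked=False` and obtain the pattern (AZ)*(A ∪ ε) instead.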

1.9 Conclusions
Some basic notions of trace theory have been briefly presented. In the next chapters
of this book this theory will be made broader and deeper; the intention of the
present chapter was to show some of the initial ideas that motivated the whole enterprise.
In the sequel the reader will be able to get acquainted with the development of trace
theory as a basis for non-standard logics, as well as with basic and involved results
from the theory of monoids, with properties of graph representations of traces, and
with a generalization of the notion of finite automaton that is consistent with the
trace approach. All this work shows that the concurrency issue is still challenging
and stimulates fruitful research.
Acknowledgements. The author is greatly indebted to Professor Grzegorz
Rozenberg for his help and encouragement; without him this paper would not have
appeared.

Figure 1.12: Atoms of the net in Fig. 1.9.
