Introduction to
Trace Theory
Antoni Mazurkiewicz
Institute of Computer Science, Polish Academy of Sciences
ul. Ordona 21, 01-237 Warszawa, and
Institute of Informatics, Jagiellonian University
ul. Nawojki 11, 31-072 Krakow, Poland
amaz@wars.ipipan.waw.pl
Contents

1.1 Introduction
1.2 Preliminary Notions
1.3 Dependency and Traces
1.4 Dependence Graphs
1.5 Histories
1.6 Ordering of Symbol Occurrences
1.7 Trace Languages
1.8 Elementary Net Systems
1.9 Conclusions
1.1 Introduction
The theory of traces has been motivated by the theory of Petri Nets and the theory of formal languages and automata.
Already in the 1960s Carl Adam Petri developed the foundations of the theory of concurrent systems. In his seminal work [226] he presented a model based on the communication of interconnected sequential systems. He also presented an entirely new set of notions and problems that arise when dealing with
concurrent systems. His model (or actually a family of "net-based" models), which has since been generically referred to as Petri Nets, has provided both an intuitive informal framework for representing basic situations of concurrent systems, and a formal framework for the mathematical analysis of concurrent systems. The intuitive informal framework has been based on a very convenient graphical representation of net-based systems, while the formal framework has been based on the token game resulting from this graphical representation.
The notion of a finite automaton, seen as a restricted type of Turing Machine, has
by then become a classical model of a sequential system. The strength of automata
theory is based on the "simple elegance" of the underlying model which admits
powerful mathematical tools in the investigation of both the structural properties
expressed by the underlying graph-like model, and the behavioural properties based
on the notion of the language of a system. The notion of language has provided a
valuable link with the theory of free monoids.
The original attempt of the theory of traces [189] was to use the well developed
tools of formal language theory for the analysis of concurrent systems where the
notion of concurrent system is understood in very much the same way as it is
done in the theory of Petri Nets. The idea was that in this way one will get a
framework for reasoning about concurrent systems which, for a number of important
problems, would be mathematically more convenient than some approaches based
on formalizations of the token game.
In the 1970s, when the basic theory of traces was formulated, the most popular approach to dealing with concurrency was interleaving. In this approach concurrency
is replaced by non-determinism where concurrent execution of actions is treated
as non-deterministic choice of the order of executions of those actions. Although
the interleaving approach is quite adequate for many problems, it has a number
of serious pitfalls. We will briefly discuss here some of them, because those were
important drawbacks that we wanted to avoid in trace theory.
By reducing concurrency to non-determinism one assigns two different meanings
to the term "non-deterministic choice": the choice between two (or more) possible
actions that exclude each other, and the lack of information about the order of
two (or more) actions that are executed independently of each other. Since in the
theory of concurrent systems we are interested not only in the question "what is
computed?" but also in the question "how is it computed?", this identification may
be very misleading.
This disadvantage of the interleaving approach is well-visible in the treatment of
refinement, see e.g. [49]. It is widely recognized that refinement is one of the basic
transformations of any calculus of concurrent systems from theoretical, method-
ological, and practical point of view. This transformation must preserve the basic
relationship between a system and its behaviour: the behaviour of the refined system
is the refined behaviour of the original system. This requirement is not respected if
the behaviour of the system is represented by interleaving.
Also in considerations concerning inevitability, see [192], the interleaving ap-
proach leads to serious problems. In non-deterministic systems where a choice
between two partners may be repeated an infinite number of times, a run discriminating against one of the partners is possible; an action of an a priori chosen partner
is not inevitable. On the other hand, if the two partners repeat their actions in-
dependently of each other, an action of any of them is inevitable. However, in the
interleaving approach one does not distinguish between these two situations: both
are described in the same way. Then, as a remedy against this confusion, a special
notion of fairness has to be introduced, where in fact this notion is outside of the
usual algebraic means for the description of system behaviour.
Finally, the identification of non-deterministic choice and concurrency becomes
a real drawback when considering serializability of transactions [104]. To keep
consistency of a database to which a number of users have concurrent access, the
database manager has to allow concurrent execution of those transactions that do
not interfere with each other. To this end it is necessary to distinguish transactions
that are in conflict from those which are independent of each other; the identification
of the non-deterministic choice (in the case of conflicting transactions) with the
choice of the execution order (in case of independent transactions) leads to serious
difficulties in the design of database managing systems.
Above we have sketched some of the original motivations that led to the formula-
tion of the theory of traces in 1977. Since then this theory has been developed both
in breadth and in depth, and this volume presents the state of the art of the theory
of traces. Some of the developments have followed the initial motivation coming
from concurrent systems, while others fall within areas such as formal language
theory, theory of partially commutative monoids, graph grammars, combinatorics
of words, etc.
In our paper we discuss a number of notions and results that have played a
crucial role in the initial development of the theory of traces, and which in our
opinion are still quite relevant.
1.2 Preliminary Notions

The concatenation of strings (a1, a2, ..., an) and (b1, b2, ..., bm) is the string (a1, a2, ..., an, b1, b2, ..., bm). Usually, sequences (a1, a2, ..., an) are written as a1a2...an. Consequently, a will denote the symbol a as well as the string consisting of the single symbol a; the proper meaning will always be understood from the context. The string of length 0 (containing no symbols) is denoted by ε. The set of strings over an alphabet Σ together with the concatenation operation and the empty string as the neutral element will be referred to as the monoid of strings over Σ, or the free monoid generated by Σ. The symbol Σ* is used to denote the monoid of strings over Σ as well as the set Σ* itself.
We say that symbol a occurs in string w if w = w1aw2 for some strings w1, w2. For each string w define Alph(w) (the alphabet of w) as the set of all symbols occurring in w. By w(a) we shall understand the number of occurrences of the symbol a in the string w.
Let Σ be an alphabet and w be a string (over an arbitrary alphabet); then π_Σ(w) denotes the (string) projection of w onto Σ defined as follows:

π_Σ(w) = ε, if w = ε,
π_Σ(w) = π_Σ(u), if w = ua, a ∉ Σ,
π_Σ(w) = π_Σ(u)a, if w = ua, a ∈ Σ.

Roughly speaking, projection onto Σ deletes from strings all symbols not in Σ. The subscript in π_Σ is omitted if Σ is understood from the context.
The right cancellation of symbol a in string w is the string w ÷ a defined as follows:

ε ÷ a = ε, (1.1)
(wb) ÷ a = w, if a = b,
(wb) ÷ a = (w ÷ a)b, otherwise, (1.2)

for all strings w and symbols a, b. It is easy to prove that projection and cancellation commute:

π_Σ(w) ÷ a = π_Σ(w ÷ a). (1.3)
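These two recursive definitions transcribe directly into code. The following Python sketch (the function names are ours, not from the text) also checks the commuting property (1.3) on a small example:

```python
def proj(w, sigma):
    """String projection onto the alphabet sigma: delete symbols not in sigma."""
    return "".join(a for a in w if a in sigma)

def cancel(w, a):
    """Right cancellation w ÷ a: remove the rightmost occurrence of a, if any."""
    i = w.rfind(a)
    return w if i < 0 else w[:i] + w[i + 1:]

# Equation (1.3): projection and cancellation commute.
w, sigma = "abbca", {"a", "c"}
assert proj(w, sigma) == "aca"
for a in "abc":
    assert cancel(proj(w, sigma), a) == proj(cancel(w, a), sigma)
```

Note that (1.3) holds for every symbol a: if a ∉ Σ, both sides equal π_Σ(w); if a ∈ Σ, both sides remove the rightmost a.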
A^0 = {ε}, A^{n+1} = A^n A, A* = ⋃_{n≥0} A^n.
Extend the projection function from strings to languages, defining for each language A and any alphabet Σ the projection of A onto Σ as the language π_Σ(A) = {π_Σ(w) | w ∈ A}. Strings u such that w = uv for some string v are called prefixes of w; the set of all prefixes of w is denoted by Pref(w). Obviously, Pref(w) contains ε and w. For any language A over Σ define

Pref(A) = ⋃_{w ∈ A} Pref(w).

Elements of Pref(A) are called prefixes of A. Obviously, A ⊆ Pref(A). Language A is prefix closed if A = Pref(A). Clearly, for any language A the language Pref(A) is prefix closed. The prefix relation is a binary relation ⊑ in Σ* such that u ⊑ w if and only if u ∈ Pref(w). It is clear that the prefix relation is an ordering relation in any set of strings.
Equivalence classes of ≡_D are called traces over D; the trace represented by string w is denoted by [w]_D. By [Σ*]_D we shall denote the set {[w]_D | w ∈ Σ_D*}, and by [Σ]_D the set {[a]_D | a ∈ Σ_D}.
By definition, a single trace arises by identifying all strings which differ only in the ordering of adjacent independent symbols. The quotient monoid M(D) = Σ_D*/≡_D is called the trace monoid over D and its elements the traces over D. Clearly, M(D) is generated by [Σ]_D. In the monoid M(D) some symbols from Σ commute (in contrast to the monoid of strings); for that reason M(D) is also called the free partially commutative monoid over D. As in the case of the monoid of strings, we use the symbol M(D) to denote the monoid itself as well as the set of all traces over D. It is clear that in the case of full dependency, i.e. if D is a single clique, traces reduce to strings and M(D) is isomorphic with the free monoid of strings over Σ_D.
We are going to develop the algebra of traces along the same lines as has been done in the case of strings. Let us recall that the mapping φ_D : Σ* → [Σ*]_D such that

φ_D(w) = [w]_D

is a homomorphism of Σ* onto M(D), called the natural homomorphism generated by the equivalence ≡_D.
Now, we shall give some simple facts about the trace equivalence and traces. Let D be fixed from now on; subscripts D will be omitted if this causes no ambiguity. I will be the independency induced by D, Σ will be the domain of D, all symbols will be symbols in Σ_D, all strings will be strings over Σ_D, and all traces will be traces over D, unless explicitly stated otherwise.
It is clear that u ≡ v implies Alph(u) = Alph(v); thus, for all strings w, we can define Alph([w]) as Alph(w). Denote by ~ a binary relation in Σ* such that u ~ v if and only if there are x, y ∈ Σ* and (a, b) ∈ I such that u = xaby, v = xbay; it is not difficult to prove that ≡ is the symmetric, reflexive, and transitive closure of ~. In other words, u ≡ v if and only if there exists a sequence (w0, w1, ..., wn), n ≥ 0, such that w0 = u, wn = v, and for each i, 0 < i ≤ n, w_{i-1} ~ w_i.
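The characterization of ≡ as the closure of ~ yields a brute-force equivalence test: close a singleton under swaps of adjacent independent symbols. A Python sketch (our encoding; the equivalence class can be exponentially large, so this is for small examples only):

```python
def trace_class(u, indep):
    """All representatives of [u]: close {u} under swaps of adjacent independent symbols."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in indep:        # w ~ w': one adjacent swap
                v = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen

# Independency induced by D = {a,b}² ∪ {a,c}²: only b and c are independent.
I = {("b", "c"), ("c", "b")}
assert trace_class("abbca", I) == {"abbca", "abcba", "acbba"}
assert "abcba" in trace_class("abbca", I)        # hence [abbca] = [abcba]
```

Two strings are then equivalent iff one lies in the class of the other.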
The next theorem is a trace generalization of the Levi Lemma for strings, which reads: for all strings u, v, x, y, if uv = xy, then there exists a string w such that either uw = x, wy = v, or xw = u, wv = y. In the case of traces this useful lemma admits a more symmetric form:

Theorem 1.3.4 (Levi Lemma for traces) For any strings u, v, x, y such that uv ≡ xy there exist strings z1, z2, z3, z4 such that (z2, z3) ∈ I and

u ≡ z1z2, v ≡ z3z4, x ≡ z1z3, y ≡ z2z4.
Proposition 1.3.5 Let [w] be a trace and [u],[v] be prefixes of [w]. Then there
exist the greatest common prefix and the least common dominant of [u] and [v].
Figure: the prefix diagram of the trace [abbca], with prefixes [ab], [abb], [abc], [abbc], [abbca].
Proof: Since [u], [v] are prefixes of [w], there are traces [x], [y] such that ux ≡ vy. Then, by the Levi Lemma for traces, there are strings z1, z2, z3, z4 such that u ≡ z1z2, x ≡ z3z4, v ≡ z1z3, y ≡ z2z4, and z2, z3 are independent. Then [z1] is the greatest common prefix and [z1z2z3] is the least common dominant of [u], [v]. Indeed, [z1] is a common prefix of both [u], [v]; it is the greatest common prefix, since any symbol in z2 does not occur in z3 and any symbol in z3 does not occur in z2; hence any extension of [z1] is not a prefix either of [z1z2] = [u] or of [z1z3] = [v]. Thus, any trace being a prefix of [u] and of [v] must be a prefix of [z1]. Similarly, [z1z2z3] is a common dominant of [u], [v]; but any proper prefix of [z1z2z3] is either not dominating [u], if it does not contain a symbol from z2, or not dominating [v], if it does not contain a symbol from z3. ∎
Let us close this section with a characterization of trace monoids by homomorphisms (so-called dependency morphisms).
A dependency morphism w.r.t. D is any homomorphism φ from the monoid of strings over Σ_D onto another monoid such that

A1: φ(w) = φ(ε) implies w = ε;
A2: (a, b) ∈ I_D implies φ(ab) = φ(ba);
A3: φ(ua) = φ(v) implies φ(u) = φ(v ÷ a);
A4: φ(ua) = φ(vb) and a ≠ b imply (a, b) ∈ I_D.

These axioms imply (Lemma 1.3.6) that if φ(ua) = φ(vb) with a ≠ b, then there is a string w with φ(u) = φ(wb) and φ(v) = φ(wa); indeed, taking w = u ÷ b,

φ(wa) = φ((u ÷ b)a)
      = φ((ua) ÷ b), since a ≠ b,
      = φ(v), from (1.12) by A3,

and similarly φ(u) = φ(wb).
Lemma 1.3.7 Let φ, ψ be dependency morphisms w.r.t. the same dependency. Then φ(x) = φ(y) ⟹ ψ(x) = ψ(y) for all x, y.

Proof: Let D be a dependency and let φ, ψ be two dependency morphisms w.r.t. dependency D, x, y ∈ Σ*, and φ(x) = φ(y). If x = ε, then by A1 y = ε and clearly ψ(x) = ψ(y). If x ≠ ε, then y ≠ ε. Thus, x = ua, y = vb for some u, v ∈ Σ*, a, b ∈ Σ, and we have

φ(ua) = φ(vb). (1.14)

There can be two cases. In the first case we have a = b; then by A3 φ(u) = φ(v) and by the induction hypothesis ψ(u) = ψ(v); thus ψ(ua) = ψ(vb), which proves ψ(x) = ψ(y). In the second case, a ≠ b. By A4 (a, b) ∈ I. By Lemma 1.3.6 we get φ(u) = φ(wb), φ(v) = φ(wa) for some w ∈ Σ*. By the induction hypothesis

ψ(u) = ψ(wb), ψ(v) = ψ(wa), (1.15)

hence

ψ(ua) = ψ(wba), ψ(vb) = ψ(wab). (1.16)

By A2, since (a, b) ∈ I,

ψ(ba) = ψ(ab), (1.17)

hence ψ(wba) = ψ(wab), which proves ψ(x) = ψ(y). ∎
θ(φ(w)) = ψ(w)

for all w ∈ Σ* is a homomorphism from M onto N. Since θ has an inverse, namely

θ⁻¹(ψ(w)) = φ(w),

which is also a homomorphism, from N onto M, θ is an isomorphism. ∎
γ = (V, R, φ)

such that V is a finite set (of nodes), R ⊆ V × V is an acyclic (arc) relation, φ : V → Σ_D is a (labelling) function, and for all nodes v1 ≠ v2:

(v1, v2) ∈ R or (v2, v1) ∈ R ⟺ (φ(v1), φ(v2)) ∈ D.

Two d-graphs γ′, γ″ are isomorphic, γ′ ≈ γ″, if there exists a bijection between their nodes preserving labelling and arc connections. As usual, two isomorphic graphs are identified; all subsequent properties of d-graphs are formulated up to isomorphism. The empty d-graph (∅, ∅, ∅) will be denoted by λ and the set of all isomorphism classes of d-graphs over D by Γ_D.
Example 1.4.1 Let D = {a, b}² ∪ {a, c}². Then the node-labelled graph (V, R, φ) with

V = {1, 2, 3, 4, 5},
R = {(1, 2), (1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (3, 5), (4, 5)},
φ(1) = a, φ(2) = b, φ(3) = c, φ(4) = b, φ(5) = a,

is a d-graph. It is (isomorphic to) the graph in Fig. 1.3.
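The example can be reconstructed mechanically: one node per symbol occurrence, and an arc from an earlier occurrence to a later one whenever their labels are dependent. A Python sketch (our encoding, with nodes numbered by position, as in the example):

```python
def dgraph(w, dep):
    """Dependence graph of w: nodes 1..len(w), arcs between dependent occurrences."""
    labels = {i + 1: a for i, a in enumerate(w)}
    arcs = {(i, j) for i in labels for j in labels
            if i < j and (labels[i], labels[j]) in dep}
    return labels, arcs

# D = {a,b}² ∪ {a,c}², encoded as a set of ordered pairs.
D = {(x, y) for x in "ab" for y in "ab"} | {(x, y) for x in "ac" for y in "ac"}
labels, R = dgraph("abcba", D)
assert labels == {1: "a", 2: "b", 3: "c", 4: "b", 5: "a"}
assert R == {(1, 2), (1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (3, 5), (4, 5)}
```

Starting from the equivalent representative abbca yields an isomorphic d-graph, as the theory predicts.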
and one minimum vertex. Since d-graphs are acyclic, the transitive and reflexive
closure of d-graph arc relation is an ordering. Thus, each d-graph uniquely deter-
mines an ordering of symbol occurrences. This ordering will be discussed later on;
for the time being we only mention that considering d-graphs as descriptions of
non-sequential processes and dependency relation as a model of causal dependency,
this ordering may be viewed as causal ordering of process events.
Observe that in the case of full dependency D (i.e. if D = Σ_D × Σ_D) the arc relation of any d-graph g over D is a linear order on the vertices of g; in the case of minimum dependency D (i.e. when D is the identity relation) any d-graph over D consists of a number of connected components, each of them being a linearly ordered set of vertices labelled with a common symbol.
Define the composition of d-graphs as follows: for all graphs γ1, γ2 in Γ_D the composition (γ1 ∘ γ2) of γ1 with γ2 is the graph arising from the disjoint union of γ1 and γ2 by adding to it new arcs leading from each node of γ1 to each node of γ2, provided they are labelled with dependent symbols. Formally, (V, R, φ) ≈ (γ1 ∘ γ2) iff there are instances (V1, R1, φ1), (V2, R2, φ2) of γ1, γ2, respectively, such that V1 ∩ V2 = ∅ and

V = V1 ∪ V2,
R = R1 ∪ R2 ∪ {(v1, v2) ∈ V1 × V2 | (φ1(v1), φ2(v2)) ∈ D},
φ = φ1 ∪ φ2.
Proof: In view of (1.23) and (1.25) it is clear that the composition γ of two d-graphs γ1, γ2 is a finite graph, with nodes labelled with symbols from Σ_D. It is acyclic, since γ1 and γ2 are acyclic and, by (1.26), in γ there are no arcs leading from nodes of γ2 to nodes of γ1. Let v1, v2 be nodes of γ with (φ(v1), φ(v2)) ∈ D. If both of them are nodes of γ1 or of γ2, then by the D-connectivity of the components and by (1.24) they are also joined in γ. If v1 is a node of γ1 and v2 is a node of γ2, then by (1.24) and (1.26) they are joined in γ, which proves the D-connectivity of γ. ∎
Composition of d-graphs can be viewed as "sequential" as well as "parallel":
composing two independent d-graphs (i.e. d-graphs such that any node of one of
them is independent of any node of the other) we get the result intuitively under-
stood as "parallel" composition, while composing two dependent d-graphs, i.e. such
that any node of one of them is dependent on any node of the other, the result can
be viewed as the "sequential" or "serial" composition. In general, d-graph composi-
tion is a mixture of "parallel" and "sequential" compositions, where some (but not
all) nodes of one d-graph are dependent on some nodes of the other, and some of
them are not. The nature of composition of d-graphs depends on the nature of the
underlying dependency relation.
Denote the empty graph (with no nodes) by e. The set of all d-graphs over a
dependency D with composition o defined as above and with the empty graph as a
distinguished element forms an algebra denoted by G(D).
Proof: Since the empty graph is obviously the neutral (left and right) element w.r.t. the composition, it suffices to show that the composition of d-graphs is associative. Let, for i = 1, 2, 3, (Vi, Ri, φi) be a representative of d-graph γi, such that Vi ∩ Vj = ∅ for i ≠ j. By a simple calculation we prove that ((γ1 ∘ γ2) ∘ γ3) is (isomorphic to) the d-graph (V, R, φ) with

V = V1 ∪ V2 ∪ V3, (1.27)
R = R1 ∪ R2 ∪ R3 ∪ R12 ∪ R13 ∪ R23, (1.28)
φ = φ1 ∪ φ2 ∪ φ3, (1.29)
for all strings w and symbols a. In other words, ⟨wa⟩ arises from the graph ⟨w⟩ by adding to it a new node labelled with the symbol a and new arcs leading to it from all vertices of ⟨w⟩ labelled with symbols dependent on a.
The d-graph ⟨abbca⟩_D with dependency D = {a, b}² ∪ {a, c}² is presented in Fig. 1.3.
Let the dependency D be fixed from now on and let Σ, I, ⟨w⟩ denote Σ_D, I_D, ⟨w⟩_D, respectively.
Proposition 1.4.6 For each d-graph γ over D there is a string w such that ⟨w⟩ = γ.
and clearly φ(ε) = ⟨ε⟩ = e. Thus, φ is a homomorphism onto G(D). Condition A1 is obviously satisfied. If (a, b) ∈ I, the d-graph ⟨ab⟩ has no arcs, hence it is isomorphic with ⟨ba⟩. This proves A2. By definition, the d-graph ⟨ua⟩, for a string u and a symbol a, has a maximum vertex labelled with a; if ⟨ua⟩ = ⟨v⟩, then ⟨v⟩ also has a maximum vertex labelled with a; removing these vertices from both graphs results in isomorphic graphs; it is easy to see that removing such a vertex from ⟨v⟩ results in ⟨v ÷ a⟩. Hence, ⟨u⟩ = ⟨v ÷ a⟩, which proves A3. If γ = ⟨ua⟩ = ⟨vb⟩ and a ≠ b, then γ has at least two maximum vertices, one of them labelled with a and the other labelled with b. Since both of them are maximum vertices, there is no arc joining them. This proves (a, b) ∈ I, and A4 is satisfied. ∎
Theorem 1.4.8 The trace monoid M(D) is isomorphic with the monoid G(D) of d-graphs over dependency D; the isomorphism is induced by the bijection θ such that θ([a]_D) = ⟨a⟩_D for all a ∈ Σ_D.

Proof: It follows from the Corollary to Theorem 1.3.9. ∎
The above theorem says that traces and d-graphs over the same dependency can be viewed as two sides of the same coin; the same concepts can be expressed in two different ways: speaking about traces, the algebraic character of the concept is stressed, while speaking about d-graphs, its causality (or ordering) features are emphasized. We can consider some graph-theoretical features of traces (e.g. connectivity of traces) as well as some algebraic properties of dependence graphs (e.g. composition of d-graphs). Using this isomorphism, one can prove facts about traces using graphical methods and, the other way around, prove some graph properties using algebraic methods. In fact, the dual nature of traces, algebraic and graph-theoretical, was the principal motivation to introduce them for representing concurrent processes.
As an illustration of graph-theoretical properties of traces, consider the notion (useful in other parts of this book) of a connected trace: a trace is connected if the d-graph corresponding to it is connected. The algebraic definition of this notion could be the following: trace t is connected if there are no non-empty traces t1, t2 with t = t1t2 and such that Alph(t1) × Alph(t2) ⊆ I. A connected component of trace t is the trace corresponding to a connected component of the d-graph corresponding to t; the algebraic definition of this notion is much more complex.
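The graph-theoretical definition gives a simple connectivity test on any representative, treating arcs as undirected edges. A Python sketch (our encoding):

```python
def connected(w, dep):
    """Is the trace [w] connected, i.e. is the d-graph of w a connected graph?"""
    if not w:
        return True
    n = len(w)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if (w[i], w[j]) in dep:       # dependent occurrences are joined by an arc
                adj[i].add(j)
                adj[j].add(i)
    seen, stack = {0}, [0]                # depth-first search from the first node
    while stack:
        for k in adj[stack.pop()] - seen:
            seen.add(k)
            stack.append(k)
    return len(seen) == n

D = {(x, y) for x in "ab" for y in "ab"} | {(x, y) for x in "ac" for y in "ac"}
assert connected("abbca", D)        # every symbol is dependent on a
assert not connected("bc", D)       # b, c independent: [bc] = [b][c] splits
```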
As the second illustration of the graph representation of traces let us represent in graphical form the projection π1 of the trace [abcd] with dependency {a, b, c}² ∪ {b, c, d}² onto the dependency {a, c}² ∪ {c, d}², and the projection π2 of the same trace onto the dependency {a, b}² ∪ {a, c}² ∪ {c, d}² (Fig. 1.5). Projection π1 is onto a dependency with a smaller alphabet than the original (the symbol b is deleted under projection); projection π2 preserves all symbols, but deletes some dependencies, namely the dependencies (b, c) and (b, d).
Figure 1.5. The trace [abcd] over {a, b, c}² ∪ {b, c, d}² (centre) and its projections π1 onto {a, c}² ∪ {c, d}² (left) and π2 onto {a, b}² ∪ {a, c}² ∪ {c, d}² (right).
1.5 Histories
The concepts presented in this section originate in papers of M.W. Shields [253,
254, 255]. The main idea is to represent non-sequential processes by a collection
of individual histories of concurrently running components; an individual history
is a string of events concerning only one component, and the global history is a
collection of individual ones. This approach, appealing directly to the intuitive
meaning of parallel processing, is particularly well suited to CSP-like systems [138]
where individual components run independently of each other, with one exception:
an event concerning a number of (in CSP at most two) components can occur only
coincidently in all these components ("handshaking" or "rendez-vous" synchroniza-
tion principle). The presentation and the terminology used here have been adjusted
to the present purposes and differ from those of the authors.
Let S = (Σ1, Σ2, ..., Σn) be an n-tuple of finite alphabets. Denote by P(S) the product monoid

Σ1* × Σ2* × ... × Σn*, (1.31)

with componentwise concatenation

(u1, u2, ..., un)(v1, v2, ..., vn) = (u1v1, u2v2, ..., unvn)

for all u, v ∈ Σ1* × Σ2* × ... × Σn*.
Let Σ = Σ1 ∪ Σ2 ∪ ... ∪ Σn and let πi be the projection onto Σi, for i = 1, 2, ..., n. By the distribution in S we understand here the mapping π : Σ* → Σ1* × Σ2* × ... × Σn* defined by the equality

π(w) = (π1(w), π2(w), ..., πn(w)).

For each a ∈ Σ the tuple π(a) will be called the elementary history of a. Thus,
πi(a) = a, if a ∈ Σi,
πi(a) = ε, otherwise.
Thus, the elementary history π(a) is an n-tuple consisting of the one-symbol string a at the positions corresponding to components containing the symbol a, and of empty strings at the remaining positions.
Let H(S) be the submonoid of P(S) generated by the set of all elementary histories in P(S). Elements of H(S) will be called global histories (or simply, histories), and components of global histories individual histories in S.
π(w) = (a, a)(b, ε)(b, ε)(ε, c)(a, a)
     = π(abbca).

The pair (abba, cca) is not a history, since it cannot be obtained as a composition of elementary histories.
From the point of view of concurrent processes the subalgebra of histories can be interpreted as follows. There are n sequential components of a concurrent system, each of them capable of executing (sequentially) some elementary actions, creating in this way a sequential history of the component's run. All components can act independently of each other, but an action common to a number of components can only be executed by all of them coincidently. There are no other synchronization mechanisms provided in the system. Then the joint action of all components is an n-tuple of individual histories of the components; such individual histories are consistent with each other, i.e. - roughly speaking - they can be combined into one global history. The following theorem offers a formal criterion for such consistency.
If (a, b) ∈ I, then no component alphabet contains both a and b; hence, for each i,

πi(ab) = πi(ba) = a, if a ∈ Σi; b, if b ∈ Σi; ε, in the remaining cases.
Thus, A2 holds. To prove A3, assume π(ua) = π(v); then we have πi(ua) = πi(v) for all i = 1, 2, ..., n; hence πi(ua) ÷ a = πi(v) ÷ a; since projection and cancellation commute, πi(ua ÷ a) = πi(v ÷ a), i.e. πi(u) = πi(v ÷ a) for all i, which implies π(u) = π(v ÷ a). This proves A3. Suppose now π(ua) = π(vb); if there were i with a ∈ Σi, b ∈ Σi, it would be πi(u)a = πi(v)b, which is impossible. Therefore, there is no such i, i.e. (a, b) ∈ I. ∎
Proof: It follows from Theorem 1.5.3 and the Corollary to Theorem 1.3.9. ∎
Therefore, similarly to the case of d-graphs, histories can be viewed as yet another, "third face" of traces; to represent a trace by a history, the underlying dependency should be converted into a suitable tuple of alphabets. One possibility, but not the only one, is to take the cliques of the dependency as individual alphabets and then to make use of Theorem 1.5.4 for constructing histories. As an example consider the trace [abbca] over dependency {a, b}² ∪ {a, c}² as in the previous sections; then the system components are ({a, b}, {a, c}) and the history corresponding to the trace [abbca] is the history (abba, aca).
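The distribution π and a consistency check can be sketched in Python (our encoding; is_history peels events greedily, using the fact that the first event of a history must head every component that contains it, and assumes each component string is over its component alphabet):

```python
def distribute(w, components):
    """Distribution π: the tuple of projections of w onto the component alphabets."""
    return tuple("".join(a for a in w if a in comp) for comp in components)

def is_history(h, components):
    """Consistency check: can the tuple h be composed from elementary histories?"""
    h = list(h)
    while any(h):
        for a in {s[0] for s in h if s}:        # candidates for the next event
            idx = [i for i, c in enumerate(components) if a in c]
            if all(h[i].startswith(a) for i in idx):
                for i in idx:                    # peel the elementary history of a
                    h[i] = h[i][1:]
                break
        else:
            return False                         # no event can occur first
    return True

components = ({"a", "b"}, {"a", "c"})            # cliques of D = {a,b}² ∪ {a,c}²
assert distribute("abbca", components) == ("abba", "aca")
assert is_history(("abba", "aca"), components)
assert not is_history(("abba", "cca"), components)
```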
1.6 Ordering of Symbol Occurrences

With each string two orderings can be associated. The first determines the order of event occurrences within a process. The second is the order of process states, since each prefix of a string can be interpreted as a partial execution of the process represented by this string, and such a partial execution determines uniquely an intermediate state of the process. Both orderings, although different, are closely related: given one of them, the second can be reconstructed.
Traces are intended to be generalizations of strings, hence the ordering given by traces should be a generalization of that given by strings. This is actually the case; as one could expect, the ordering resulting from a trace is partial. In this chapter we shall discuss in detail the ordering resulting from the trace approach.
We start from string ordering, since it gives us some tools and notions used
next for defining the trace ordering. We shall consider both kinds of ordering: first
the prefix ordering and next the occurrence ordering. Next, we consider ordering
defined within dependence graphs; finally, ordering of symbols supplied by histories
will be discussed.
Throughout this section D will denote a fixed dependency relation, Σ its alphabet, and I the independency relation induced by D. All strings are assumed to be strings over Σ. The set of all traces over D will be denoted by the same symbol as the monoid of traces over D, i.e. by M(D).
Thus, in contrast to the linearly ordered prefixes of a string, the set Pref(t) is ordered by ⊑ only partially. Interpreting a trace as a single run of a system, its prefix structure can be viewed as the (partially) ordered set of all intermediate (global) system states reached during this run. In this interpretation [ε] represents the initial state, incomparable prefixes represent states arising as the effect of events occurring concurrently, and the ordering of prefixes represents the temporal ordering of system states.
Occurrence ordering in strings. At the beginning of this section we give a couple of recursive definitions of auxiliary notions. All of them define some functions on Σ_D*; in these definitions w always denotes a string over Σ_D, and e, e′, e″ symbols in Σ_D.
Let a be a symbol and w a string; the number of occurrences of a in w is denoted here by w(a). Thus, ε(a) = 0, wa(a) = w(a) + 1, and wb(a) = w(a) for all strings w and symbols a, b with b ≠ a. The occurrence set of w is the subset of Σ × ω

Occ(w) = {(a, n) | 1 ≤ n ≤ w(a)}.
It is clear that a string and its permutation have the same occurrence set, since
they have the same occurrence number of symbols; hence, all representatives of a
trace over an arbitrary dependency have the same occurrence sets.
For instance, the occurrence set of abbca is {(a, 1), (a, 2), (b, 1), (b, 2), (c, 1)}.
The occurrence ordering Ord(w) of a string w orders its occurrences as they appear in w: ((a, n), (b, m)) ∈ Ord(w) if and only if the n-th occurrence of a does not occur later in w than the m-th occurrence of b. Thus, Ord(w) ⊆ Occ(w) × Occ(w). It is obvious that Ord(w) is a linear ordering for any string w.
Occurrence ordering in traces. In this section we generalize the notion of occurrence ordering from strings to traces. Let D be a dependency. As has already been noticed, u ≡ v implies Occ(u) = Occ(v), for all strings u, v. Thus, the occurrence set of a trace can be defined as the occurrence set of an arbitrary representative. Define now the occurrence ordering of a trace [w] as the intersection of the occurrence orderings of all its representatives:

Ord([w]) = ⋂ {Ord(u) | u ∈ [w]}.
Figure 1.6. The occurrence relation for the d-graph ⟨abbca⟩_D over D = {a, b}² ∪ {a, c}².
precisely n − 1 vertices labelled with a. Observe that for each element (a, n) in the occurrence set of γ there exists precisely one vertex v(a, n).
The occurrence relation for γ is the binary relation Q(γ) in Occ(γ) such that

((a, n), (b, m)) ∈ Q(γ) ⟺ (v(a, n), v(b, m)) ∈ R. (1.40)
Example 1.6.2 The diagram of the occurrence relation for the d-graph ⟨abbca⟩ over {a, b}² ∪ {a, c}² is given in Fig. 1.6.
Since Q(γ) is acyclic, the transitive and reflexive closure of Q(γ) is an ordering relation in the set Occ(γ); this ordering will be called the occurrence ordering of γ and will be denoted by Ord(γ).
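For small examples Ord(γ) can be computed directly from any representative: label the positions with occurrences, take the d-graph arcs, and close transitively and reflexively. A Python sketch (our encoding):

```python
def occurrence_ordering(w, dep):
    """Ord of the d-graph of w, as a set of pairs ((a,n),(b,m)) of occurrences."""
    counts, occ = {}, []
    for a in w:                                  # label positions with (a, n)
        counts[a] = counts.get(a, 0) + 1
        occ.append((a, counts[a]))
    n = len(w)
    # Arcs of the d-graph, then Floyd-Warshall-style reachability closure.
    reach = [[i == j or (i < j and (w[i], w[j]) in dep) for j in range(n)]
             for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return {(occ[i], occ[j]) for i in range(n) for j in range(n) if reach[i][j]}

D = {(x, y) for x in "ab" for y in "ab"} | {(x, y) for x in "ac" for y in "ac"}
ord_ = occurrence_ordering("abbca", D)
assert (("a", 1), ("a", 2)) in ord_          # first a precedes second a
assert (("b", 2), ("c", 1)) not in ord_      # b and c occurrences are incomparable
assert (("c", 1), ("b", 2)) not in ord_
```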
Proof: It follows easily from the definition of arc connections in d-graphs by comparing it with the definition of Ord(w) given by (1.37). ∎
Proof: By induction. If γ is the empty graph, then set u = ε. Let γ be non-empty and let S be the extension of Ord(γ) to a linear ordering. Let (a, n) be the maximum element of S. Then γ has a maximum vertex labelled with a. Delete this vertex from γ and denote the resulting graph by β; delete also the maximum element from S and denote the resulting ordering by R. Clearly, R is a linear extension of Ord(β); by the induction hypothesis there is a string, say w, such that R = Ord(w) and β = ⟨w⟩. But then S = Ord(wa) and γ = ⟨wa⟩. Thus, u = wa meets the requirement of the proposition. ∎
Proof: Since, by definition, πi(Occ(w)) ⊆ Occ(w), and πi(Occ(w)) = Occ(πi(w)) for each i ≤ n, we have ⋃_{i=1}^n Occ(πi(w)) ⊆ Occ(w). Let (a, k) ∈ Occ(w); then (a, k) ∈ πi(Occ(w)) for i such that a ∈ Σi; thus, (a, k) ∈ Occ(πi(w)) for this i; but this means that (a, k) ∈ ⋃_{i=1}^n Occ(πi(w)), which completes the proof. ∎

Let (w1, w2, ..., wn) be a global history; define

Ord(w1, w2, ..., wn) = (⋃_{i=1}^n Ord(wi))*. (1.42)
The following theorem closes the comparison of the different ways of defining trace ordering.
Proof: Let D be as assumed above, let ⟨w⟩ denote ⟨w⟩_D, and let π(w) = (w1, w2, ..., wn). Thus, for each i: wi = πi(w). Since Occ(w) = Occ(⟨w⟩) and Occ(wi) ⊆ Occ(w), we have Occ(wi) ⊆ Occ(⟨w⟩). Any two symbols in Σi are dependent, hence Ord(wi) ⊆ Ord(⟨w⟩). Therefore, ⋃_{i=1}^n Ord(wi) ⊆ Ord(⟨w⟩) and since Ord(⟨w⟩) is an ordering, (⋃_{i=1}^n Ord(wi))* ⊆ Ord(⟨w⟩). To prove the inverse inclusion, observe that if there is an arc in an occurrence graph from (e′, k′) to (e″, k″), then (e′, e″) ∈ D; this means that there is i such that e′, e″ ∈ Σi and that ((e′, k′), (e″, k″)) ∈ Ord(wi), hence ((e′, k′), (e″, k″)) ∈ Ord(π(w)). This ends the proof. ∎
The history ordering, defined by (1.42), is illustrated in Fig. 1.7; it explains the principle of history ordering.
1.7 Trace Languages
L ⊆ ⋃[L]_D and T = [⋃T]_D (1.43)

for all (string) languages L and (trace) languages T over D. A string language L such that L = ⋃[L]_D is said to be (existentially) consistent with dependency D [3]. The composition T1T2 of trace languages T1, T2 over the same dependency D is the set {t1t2 | t1 ∈ T1 ∧ t2 ∈ T2}. The iteration of a trace language T over D is defined in the same way as the iteration of a string language:

T* = ⋃_{n=0}^∞ T^n,

where

T^0 = [ε]_D and T^{n+1} = T^n T.
The following proposition results directly from the definitions:
    [∅]_D = ∅,                               (1.44)
    [L₁]_D[L₂]_D = [L₁L₂]_D,                 (1.45)
    L₁ ⊆ L₂ ⟹ [L₁]_D ⊆ [L₂]_D,              (1.46)
    [L₁]_D ∪ [L₂]_D = [L₁ ∪ L₂]_D,           (1.47)
    ⋃_{i∈I}[Lᵢ]_D = [⋃_{i∈I}Lᵢ]_D,           (1.48)
    [L]_D* = [L*]_D.                         (1.49)
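Equality (1.45) can be illustrated concretely. A minimal sketch in Python, under the assumption that a trace is represented as the set of strings reachable by swapping adjacent independent symbols (the names trace, lift and concat are ours):

```python
from collections import deque

def trace(w, indep):
    # [w]_D: all strings reachable from w by swapping adjacent independent symbols
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for i in range(len(u) - 1):
            if (u[i], u[i + 1]) in indep:
                v = u[:i] + u[i + 1] + u[i] + u[i + 2:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return frozenset(seen)

def lift(L, indep):
    # [L]_D: the trace language generated by the string language L
    return {trace(w, indep) for w in L}

def concat(t1, t2, indep):
    # trace composition: glue arbitrary representatives (this is well defined)
    return trace(min(t1) + min(t2), indep)

# c is independent of both a and b; a and b are dependent
indep = {("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")}
L1, L2 = {"ab"}, {"c"}
lhs = {concat(t1, t2, indep) for t1 in lift(L1, indep) for t2 in lift(L2, indep)}
rhs = lift({u + v for u in L1 for v in L2}, indep)
```

Both sides produce the single trace [abc]_D = {abc, acb, cab}, as (1.45) predicts.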
    π_C(t) = [ε]_C,          if t = [ε]_D,
    π_C(t) = π_C(t₁),        if t = t₁[e]_D, e ∉ Σ_C,
    π_C(t) = π_C(t₁)[e]_C,   if t = t₁[e]_D, e ∈ Σ_C.
Intuitively, trace projection onto C deletes from traces all symbols not in Σ_C and weakens the dependency within traces. Thus, the projection of a trace over dependency D onto a dependency C which is "smaller", but has the same alphabet as D, is a "weaker" trace, containing more representatives than the original one.
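The "more representatives" claim is easy to check. A minimal sketch in Python (the helper trace is ours), comparing the equivalence class of the same string under a dependency D and under a weaker C with the same alphabet:

```python
from collections import deque

def trace(w, indep):
    # the set of representatives of [w] under the given independence relation
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for i in range(len(u) - 1):
            if (u[i], u[i + 1]) in indep:
                v = u[:i] + u[i + 1] + u[i] + u[i + 2:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return frozenset(seen)

# over D, a and b are dependent; over the "smaller" C they are independent
over_D = trace("ab", indep=set())
over_C = trace("ab", indep={("a", "b"), ("b", "a")})
```

The class over D is {ab}; over the weaker C it grows to {ab, ba}.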
The following proposition is easy to prove:
Define the synchronization of the string language L₁ over Σ₁ with the string language L₂ over Σ₂ as the string language (L₁ ∥ L₂) over (Σ₁ ∪ Σ₂) such that

    w ∈ (L₁ ∥ L₂) ⟺ π_{Σ₁}(w) ∈ L₁ ∧ π_{Σ₂}(w) ∈ L₂.
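The membership condition above is exactly two projection tests, so for finite languages the synchronization can be computed by brute force. A hedged sketch in Python (proj and sync are our names; the length bound holds because every symbol of w appears in at least one projection):

```python
from itertools import product

def proj(w, sigma):
    # projection of the string w onto the alphabet sigma
    return "".join(a for a in w if a in sigma)

def sync(L1, sigma1, L2, sigma2):
    # (L1 || L2): w belongs iff its projections fall into L1 and L2.
    alphabet = sorted(sigma1 | sigma2)
    bound = max(map(len, L1)) + max(map(len, L2))
    result = set()
    for n in range(bound + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            if proj(w, sigma1) in L1 and proj(w, sigma2) in L2:
                result.add(w)
    return result

ws = sync({"ab"}, {"a", "b"}, {"cd"}, {"c", "d"})
```

On disjoint alphabets the synchronization is exactly the set of shuffles of the two strings, which is the phenomenon discussed later in this section.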
Proposition 1.7.3 Let L, L₁, L₂, L₃ be string languages, Σ₁ be the alphabet of L₁, Σ₂ be the alphabet of L₂, let {Lᵢ}_{i∈I} be a family of string languages over a common alphabet, and let w be a string over Σ₁ ∪ Σ₂. Then:

    L ∥ L = L,
    L₁ ∥ L₂ = L₂ ∥ L₁,
    L₁ ∥ (L₂ ∥ L₃) = (L₁ ∥ L₂) ∥ L₃,
    {ε} ∥ {ε} = {ε},
    (⋃_{i∈I} Lᵢ) ∥ L = ⋃_{i∈I}(Lᵢ ∥ L),
    (L₁ ∥ L₂){w} = (L₁{π_{Σ₁}(w)}) ∥ (L₂{π_{Σ₂}(w)}),
    {w}(L₁ ∥ L₂) = ({π_{Σ₁}(w)}L₁) ∥ ({π_{Σ₂}(w)}L₂).
Proof: The first four equalities are obvious. Let w ∈ (⋃_{i∈I} Lᵢ) ∥ L; it means that π_{Σ′}(w) ∈ ⋃_{i∈I} Lᵢ and π_Σ(w) ∈ L, where Σ′ is the common alphabet of the Lᵢ and Σ is the alphabet of L; then ∃i ∈ I: π_{Σ′}(w) ∈ Lᵢ and π_Σ(w) ∈ L, i.e. w ∈ ⋃_{i∈I}(Lᵢ ∥ L), which proves the fifth equality. To prove the sixth, let u ∈ (L₁ ∥ L₂){w}; it means that there exists v such that v ∈ (L₁ ∥ L₂) and u = vw; by the definition of synchronization, π_{Σ₁}(v) ∈ L₁ and π_{Σ₂}(v) ∈ L₂; because w is a string over Σ₁ ∪ Σ₂, this is equivalent to π_{Σ₁}(vw) ∈ L₁{π_{Σ₁}(w)} and π_{Σ₂}(vw) ∈ L₂{π_{Σ₂}(w)}; it means that u ∈ (L₁{π_{Σ₁}(w)}) ∥ (L₂{π_{Σ₂}(w)}). The proof of the last equality is similar. □
Define the synchronization of the trace language T₁ over D₁ with the trace language T₂ over D₂ as the trace language (T₁ ∥ T₂) over (D₁ ∪ D₂) such that

    t ∈ (T₁ ∥ T₂) ⟺ π_{D₁}(t) ∈ T₁ ∧ π_{D₂}(t) ∈ T₂.
Proposition 1.7.4 For any dependencies D₁, D₂ and any string languages L₁ over Σ_{D₁}, L₂ over Σ_{D₂}:

    [L₁]_{D₁} ∥ [L₂]_{D₂} = [L₁ ∥ L₂]_{D₁∪D₂}.
Proof:
Proposition 1.7.5 Let T, T₁, T₂, T₃ be trace languages, D₁ be the dependency of T₁, D₂ be the dependency of T₂, let {Tᵢ}_{i∈I} be a family of trace languages over a common dependency, and let t be a trace over D₁ ∪ D₂. Then:

    T ∥ T = T,                                          (1.50)
    T₁ ∥ T₂ = T₂ ∥ T₁,                                  (1.51)
    T₁ ∥ (T₂ ∥ T₃) = (T₁ ∥ T₂) ∥ T₃,                    (1.52)
    {[ε]_{D₁}} ∥ {[ε]_{D₂}} = {[ε]_{D₁∪D₂}},            (1.53)
    (⋃_{i∈I} Tᵢ) ∥ T = ⋃_{i∈I}(Tᵢ ∥ T),                 (1.54)
    (T₁ ∥ T₂){t} = (T₁{π_{D₁}(t)}) ∥ (T₂{π_{D₂}(t)}),   (1.55)
    {t}(T₁ ∥ T₂) = ({π_{D₁}(t)}T₁) ∥ ({π_{D₂}(t)}T₂).   (1.56)
Proof: It is similar to that of Proposition 1.7.3. •
Example 1.7.6 Let D₁ = {a,b,d}² ∪ {b,c,d}², D₂ = {a,b,c}² ∪ {a,c,e}², T₁ = {[cabd]_{D₁}}, T₂ = {[acbe]_{D₂}}. The synchronization T₁ ∥ T₂ is the language {[acbde]_D} over the dependency D = D₁ ∪ D₂ = {a,b,c,d}² ∪ {a,c,e}². The synchronization of these two trace languages is illustrated in Fig. 1.8, where traces are represented by the corresponding d-graphs (without arcs resulting by transitivity from other arcs).
Observe that in a sense the synchronization operation is inverse w.r.t. projection; namely, for any trace t over D₁ ∪ D₂ we have the equality:

    {t} = {π_{D₁}(t)} ∥ {π_{D₂}(t)}.
Observe also that for any traces t₁ over D₁, t₂ over D₂, the synchronization {t₁} ∥ {t₂} is either empty or a singleton set. Thus synchronization can be viewed as a partial operation on traces. In general, it is not so in the case of strings: the synchronization of two singleton string languages need not be a singleton. E.g., the synchronization
of the string language {ab} over {a,b} with the string language {cd} over {c,d} is the string language {abcd, acbd, cabd, acdb, cadb, cdab}, while the synchronization of the trace language {[ab]} over {a,b}² with the trace language {[cd]} over {c,d}² is the singleton trace language {[abcd]} over {a,b}² ∪ {c,d}². This is yet another justification for using traces instead of strings to represent runs of concurrent systems.
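That the six interleavings above form a single trace can be verified by computing a canonical representative for each of them. A sketch in Python (normal_form is our name; it computes the lexicographically least representative by repeatedly removing the least symbol occurrence that is not preceded by a dependent one):

```python
def dependent(x, y, D):
    return x == y or (x, y) in D or (y, x) in D

def normal_form(w, D):
    # lexicographically least representative of [w]_D
    w, out = list(w), []
    while w:
        best = None
        for i, a in enumerate(w):
            # position i is minimal if no earlier symbol depends on a
            if all(not dependent(w[j], a, D) for j in range(i)):
                if best is None or a < best[0]:
                    best = (a, i)
        out.append(best[0])
        del w[best[1]]
    return "".join(out)

D = {("a", "b"), ("c", "d")}  # the dependency {a,b}^2 ∪ {c,d}^2
interleavings = ["abcd", "acbd", "cabd", "acdb", "cadb", "cdab"]
forms = {normal_form(w, D) for w in interleavings}
```

All six strings reduce to the same normal form, so they represent one trace; two strings over a common alphabet are trace-equivalent exactly when their normal forms coincide.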
The synchronization of an arbitrary family {Tᵢ}_{i∈I} of trace languages is defined analogously:

    t ∈ ∥_{i∈I} Tᵢ ⟺ ∀i ∈ I: π_{Dᵢ}(t) ∈ Tᵢ.
Fact 1.7.7 If T₁, T₂ are trace languages over a common dependency, then T₁ ∥ T₂ = T₁ ∩ T₂.
Proof: Let T₁, T₂ be trace languages over dependency D; then t ∈ (T₁ ∥ T₂) if and only if π_D(t) ∈ T₁ and π_D(t) ∈ T₂; but since t is a trace over D, π_D(t) = t, hence t ∈ T₁ and t ∈ T₂. □
A trace language T is prefix-closed if s ⊑ t ∈ T implies s ∈ T. Synchronization preserves this property: if T₁, T₂ are prefix-closed, then so is T₁ ∥ T₂.
Proof: Let T₁, T₂ be prefix-closed trace languages over dependencies D₁, D₂, respectively. Let t ∈ (T₁ ∥ T₂) and let t′ be a prefix of t; then there is t″ such that t = t′t″. But then, by the properties of projection, π_{D₁}(t) = π_{D₁}(t′t″) = π_{D₁}(t′)π_{D₁}(t″) ∈ T₁ and π_{D₂}(t) = π_{D₂}(t′t″) = π_{D₂}(t′)π_{D₂}(t″) ∈ T₂; since T₁, T₂ are prefix-closed, π_{D₁}(t′) ∈ T₁ and π_{D₂}(t′) ∈ T₂; hence, by definition, t′ ∈ (T₁ ∥ T₂). □
Fixed-point equations. Let X₁, X₂, ..., Xₙ, Y be families of sets, n > 0, and let f : X₁ × X₂ × ⋯ × Xₙ → Y; f is monotone, if Lᵢ ⊆ L′ᵢ for all i, 1 ≤ i ≤ n, implies f(L₁, ..., Lₙ) ⊆ f(L′₁, ..., L′ₙ).
Proof: Clear. •
f is congruent, if [Lᵢ]_D = [L′ᵢ]_D for all i, 1 ≤ i ≤ n, implies [f(L₁, ..., Lₙ)]_D = [f(L′₁, ..., L′ₙ)]_D.
Proof: Clear. •
    [L₀]_D = [f(L₀)]_D = [f]_D([L₀]_D)
1.8. ELEMENTARY N E T SYSTEMS 33
by the congruence of f. Thus, [L₀]_D is a fixed point of [f]_D. We show that [L₀]_D is the least fixed point of [f]_D. Let T be a trace language over D such that [f]_D(T) = T and set L = ⋃T. Thus [L]_D = T and ⋃T = ⋃[L]_D = L. By (1.43) we have f(L) ⊆ ⋃[f(L)]_D; by the congruence of f we have ⋃[f(L)]_D = ⋃[f]_D([L]_D); by the definition of L we have ⋃[f]_D([L]_D) = ⋃[f]_D(T), and by the definition of T we get ⋃[f]_D(T) = ⋃T, and again by the definition of L: ⋃T = L. Collecting all the above relations together we get f(L) ⊆ L. Thus, as we have already proved, L₀ ⊆ L, hence [L₀]_D ⊆ [L]_D = T. This proves [L₀]_D to be the least fixed point of [f]_D. □
This theorem can easily be generalized to tuples of monotone and congruent functions. As we have already mentioned, functions built up from variables and constants by means of the union, concatenation and iteration operations can serve as examples of monotone and congruent functions. Theorem 1.7.13 allows one to lift the well-known and broadly applied method of defining string languages by fixed-point equations to the case of trace languages. It also offers a useful and handy tool for the analysis of concurrent systems represented by elementary net systems, as shown in the next section.
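The least fixed point of such an equation can be approximated by Kleene iteration from the empty language. A minimal sketch in Python, using a toy right-linear equation X = X·be ∪ {b, ε} (our example, in the spirit of the net-behaviour equations of the next section); the length bound is our device to make the chain stabilize:

```python
def lfp(step, bound):
    # Kleene iteration from the empty language, truncated to strings of
    # length <= bound so that the increasing chain stabilizes
    X = set()
    while True:
        Y = {w for w in step(X) if len(w) <= bound}
        if Y == X:
            return X
        X = Y

# a monotone (and congruent) language function: X |-> X.be ∪ {b, ε}
step = lambda X: {x + "be" for x in X} | {"b", ""}
sol = lfp(step, bound=5)
```

The untruncated least solution of this particular equation is (b ∪ ε)(be)*; the computed set consists of its strings of length at most 5.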
It is worthwhile to note that, under the same assumptions as in Theorem 1.7.13, M₀ can be the greatest fixed point of a function f, while [M₀]_D is not the greatest fixed point of [f]_D.
    N = (P_N, E_N, F_N, m⁰_N)

where P_N, E_N are finite disjoint sets (of places and transitions, resp.), F_N ⊆ P_N × E_N ∪ E_N × P_N (the flow relation) and m⁰_N ⊆ P_N. It is assumed that Dom(F_N) ∪ Cod(F_N) = P_N ∪ E_N (there are no "isolated" places or transitions) and F_N ∩ F_N⁻¹ = ∅. Any subset of P_N is called a marking of N; m⁰_N is called the initial marking. A place in a marking is said to be marked, or carrying a token.
As with all other types of nets, elementary net systems are represented graphically using boxes to represent transitions, circles to represent places, and arrows leading from circles to boxes or from boxes to circles to represent the flow relation; in such a representation, circles corresponding to places in the initial marking are marked with dots representing tokens.
For example, the net N = (P, E, F, m) with

    P = {1, 2, 3, 4, 5, 6},
    E = {a, b, c, d, e},
    F = {(1,a), (a,2), (2,b), (b,3), (3,c), (c,1),
         (1,d), (d,4), (5,b), (b,6), (6,e), (e,5)},
    m = {1, 5}

is represented graphically.
Define Pre, Post, Prox as functions from E to 2^P such that for all a ∈ E:

    Pre_N(a) = {p | (p, a) ∈ F_N},
    Post_N(a) = {p | (a, p) ∈ F_N},
    Prox_N(a) = Pre_N(a) ∪ Post_N(a);

places in Pre_N(a) are called the entry places of a, those in Post_N(a) are called the exit places of a, and those in Prox_N(a) are neighbours of a; the set Prox_N(a) is the neighbourhood of a in N. The assumption F ∩ F⁻¹ = ∅ means that no place can be both an entry and an exit place of the same transition. The subscript N is omitted if the net is understood.
The transition function of net N is a (partial) function δ_N : 2^{P_N} × E_N → 2^{P_N} such that

    δ_N(m₁, a) = m₂ ⟺ Pre(a) ⊆ m₁ ∧ Post(a) ∩ m₁ = ∅ ∧ m₂ = (m₁ − Pre(a)) ∪ Post(a).

As usual, the subscript N is omitted if the net is understood; this convention will hold for all subsequent notions and symbols related to them.
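The transition function is direct to implement. A minimal sketch in Python for the six-place example net given earlier in this section (the identifiers pre, post and delta are ours):

```python
# the example net: places 1..6, transitions a..e, flow relation F
P = {1, 2, 3, 4, 5, 6}
F = {(1, "a"), ("a", 2), (2, "b"), ("b", 3), (3, "c"), ("c", 1),
     (1, "d"), ("d", 4), (5, "b"), ("b", 6), (6, "e"), ("e", 5)}
m0 = {1, 5}

def pre(a):
    # entry places of transition a
    return {p for (p, t) in F if t == a and p in P}

def post(a):
    # exit places of transition a
    return {p for (t, p) in F if t == a and p in P}

def delta(m, a):
    # transition function: defined (non-None) iff a is enabled at marking m
    if pre(a) <= m and not (post(a) & m):
        return (m - pre(a)) | post(a)
    return None
```

At the initial marking {1, 5}, transition a is enabled (it moves the token from place 1 to place 2), while b is not, since its entry place 2 is unmarked.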
Clearly, this definition is correct, since for each marking m and transition a there exists at most one marking equal to δ(m, a). If δ(m, a) is defined for marking m and transition a, we say that a is enabled at marking m. We say that the transition function describes single steps of the system.
The reachability function of net N is the function δ*_N : 2^P × E* → 2^P defined recursively by the equalities:

    δ*_N(m, ε) = m,                          (1.60)
    δ*_N(m, wa) = δ_N(δ*_N(m, w), a)         (1.61)

for all m ∈ 2^P, w ∈ E*, a ∈ E.
The behavioural function of net N is the function β_N : E* → 2^P defined by β_N(w) = δ*(m⁰, w) for all w ∈ E*.
As usual, the subscript N is omitted wherever the net N is understood.
The set (of strings) S_N = Dom(β) is the (sequential) behaviour of net N; the set (of markings) R_N = Cod(β) is the set of reachable markings of N. Elements of S will be called execution sequences of N; an execution sequence w such that β(w) = m ⊆ P is said to lead to m.
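Both β and a finite approximation of S_N can be computed directly from the definitions (1.60)–(1.61). A sketch in Python for the example net of this section (beta and behaviour are our names; enumeration up to a length bound is our device, since S_N itself is infinite):

```python
from itertools import product

P = {1, 2, 3, 4, 5, 6}
F = {(1, "a"), ("a", 2), (2, "b"), ("b", 3), (3, "c"), ("c", 1),
     (1, "d"), ("d", 4), (5, "b"), ("b", 6), (6, "e"), ("e", 5)}

def pre(a):  return {p for (p, t) in F if t == a and p in P}
def post(a): return {p for (t, p) in F if t == a and p in P}

def delta(m, a):
    if pre(a) <= m and not (post(a) & m):
        return (m - pre(a)) | post(a)
    return None

def beta(w, m0=frozenset({1, 5})):
    # behavioural function: beta(w) = delta*(m0, w); None where undefined
    m = set(m0)
    for a in w:
        m = delta(m, a)
        if m is None:
            return None
    return m

def behaviour(bound):
    # the sequential behaviour S_N restricted to sequences of length <= bound
    return {"".join(t) for n in range(bound + 1)
            for t in product("abcde", repeat=n)
            if beta("".join(t)) is not None}

S = behaviour(3)
```

As expected for a sequential behaviour, the computed set is prefix-closed.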
The sequential behaviour of N is clearly a prefix-closed language and, as any prefix-closed language, it is ordered into a tree by the prefix relation. Maximal linearly ordered subsets of S can be viewed as sequential observations of the behaviour of N, i.e. observations made by observers capable of seeing only a single event occurrence at a time. The ordering of symbols in strings of S reflects not only the (objective) causal ordering of event occurrences but also a (subjective) observational ordering resulting from a specific view of concurrent actions. Therefore, the structure of S alone does not allow one to decide whether a difference in ordering is caused by a conflict resolution (a decision made in the system) or by different observations of concurrency. In order to extract from S the causal ordering of event occurrences we must supply S with additional information; as such information we take here the dependency of events.
The dependency relation for net N is defined as the relation D_N ⊆ E × E such that

    (a, b) ∈ D_N ⟺ Prox_N(a) ∩ Prox_N(b) ≠ ∅.
Intuitively speaking, two transitions are dependent if either they share a common entry place (then they "compete" for taking the token away from this place), or they share a common exit place (and then they compete for putting a token into the place), or an entry place of one transition is an exit place of the other (and then one of them "waits" for the other). Transitions are independent if their neighbourhoods are disjoint. If both such transitions are enabled at a marking, they can be executed concurrently (independently of each other).
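The dependency relation of a net is thus a simple intersection test on neighbourhoods. A sketch in Python for the example net of this section (prox and D are our identifiers):

```python
P = {1, 2, 3, 4, 5, 6}
E = {"a", "b", "c", "d", "e"}
F = {(1, "a"), ("a", 2), (2, "b"), ("b", 3), (3, "c"), ("c", 1),
     (1, "d"), ("d", 4), (5, "b"), ("b", 6), (6, "e"), ("e", 5)}

def prox(a):
    # neighbourhood of a: its entry places union its exit places
    return {p for (x, y) in F for p in (x, y) if a in (x, y) and p in P}

# dependency: pairs of transitions whose neighbourhoods intersect
D = {(s, t) for s in E for t in E if prox(s) & prox(t)}
```

In this net a and e are independent (their neighbourhoods {1,2} and {5,6} are disjoint), while c and d are dependent: they compete for place 1.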
Define the non-sequential behaviour of net N as the trace language [S_N]_{D_N} and denote it by B_N. That is, the non-sequential behaviour of a net arises from its sequential behaviour by identifying some execution sequences; as follows from the definition of dependency, two such sequences are identified if they differ only in the order of execution of independent transitions.
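In the example net of this section, the sequences abce and abec differ only in the order of the independent transitions c and e, so they are identified in the non-sequential behaviour; consistently with that, they lead to the same marking. A self-contained check in Python (beta is our name for the behavioural function):

```python
P = {1, 2, 3, 4, 5, 6}
F = {(1, "a"), ("a", 2), (2, "b"), ("b", 3), (3, "c"), ("c", 1),
     (1, "d"), ("d", 4), (5, "b"), ("b", 6), (6, "e"), ("e", 5)}

def pre(a):  return {p for (p, t) in F if t == a and p in P}
def post(a): return {p for (t, p) in F if t == a and p in P}

def beta(w, m0=frozenset({1, 5})):
    # behavioural function of the net; None where undefined
    m = set(m0)
    for a in w:
        if not (pre(a) <= m and not (post(a) & m)):
            return None
        m = (m - pre(a)) | post(a)
    return m

# c and e have disjoint neighbourhoods, so abce and abec are one trace
m1, m2 = beta("abce"), beta("abec")
```

Both sequences lead back to the initial marking {1, 5}, as Proposition 1.8.1 below guarantees for trace-equivalent strings.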
Proposition 1.8.1 For any net N the behavioural function β is congruent w.r. to D, i.e. for any strings u, v ∈ E*, u ≡_D v implies β(u) = β(v).

Proof: Let u ≡_D v; it suffices to consider the case u = xaby and v = xbay, for independent transitions a, b and arbitrary x, y ∈ E*. Because of the definition of the behavioural function, we have to prove only
Proposition 1.8.2 The trace behaviour of any net is a complete trace language.
    Q ⊆ m ⟺ ∀i ∈ I: Qᵢ ⊆ mᵢ,
    R = Q ∩ m ⟺ ∀i ∈ I: Rᵢ = Qᵢ ∩ mᵢ,
    m = Q ∪ R ⟺ ∀i ∈ I: mᵢ = Qᵢ ∪ Rᵢ,
    R = m − Q ⟺ ∀i ∈ I: Rᵢ = mᵢ − Qᵢ.
Proposition 1.8.5 D_N = ⋃ᵢ₌₁ⁿ D_{Nᵢ}.
    X₁ = X₁be ∪ b ∪ ε
    X₂ = X₂abc ∪ ab ∪ a ∪ d ∪ ε

for languages X₁, X₂ over the alphabets {b, e}, {a, b, c, d}, respectively. By the definition of the trace behaviour of nets, by Theorem 1.8.7, and by Theorem 1.7.13 we have

with D₁ = {b, e}², D₂ = {a, b, c, d}². The equation for the synchronization X₁ ∥ X₂ is, by the properties of synchronization, as follows:

    [X₁]_{D₁} ∥ [X₂]_{D₂} =
    ([X₁]_{D₁}[be]_{D₁} ∪ [b]_{D₁} ∪ [ε]_{D₁}) ∥ ([X₂]_{D₂}[abc]_{D₂} ∪ [ab]_{D₂} ∪ [a]_{D₂} ∪ [d]_{D₂} ∪ [ε]_{D₂})

    [X₁ ∥ X₂]_D =
    [X₁ ∥ X₂]_D ∪ [abc ∪ abe ∪ ab ∪ a ∪ d ∪ ε]_D
[Figure: an atom — a single place with input transitions a₁, ..., aₙ and output transitions b₁, ..., bₘ (diagram not reproduced).]
The behaviour of atoms can easily be found. Namely, let the atom N₀ be defined as N₀ = ({p}, A ∪ Z, F, m), where A = {e | (e, p) ∈ F}, Z = {e | (p, e) ∈ F}. Say that N₀ is marked if m = {p}, and unmarked otherwise, i.e. if m = ∅. Then, by the definition of the behavioural function, the trace behaviour B_{N₀} is the trace language [(ZA)*(Z ∪ ε)]_D if N₀ is marked, and [(AZ)*(A ∪ ε)]_D if it is unmarked, where D = (A ∪ Z)².
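The claimed shape (ZA)*(Z ∪ ε) of a marked atom's behaviour can be checked by simulation. A sketch in Python for a hypothetical one-consumer, one-producer atom with Z = {b} and A = {e} (all names are ours): a consumer fires only when the place holds the token, a producer only when it does not.

```python
import re

# a hypothetical marked atom: single place p, consumer Z = "b", producer A = "e"
Z, A = "b", "e"

def runs(marked, bound):
    # execution sequences of the atom up to the given length
    result, frontier = {""}, {("", marked)}
    for _ in range(bound):
        # exactly one transition is enabled at each step
        frontier = {(w + Z, False) if tok else (w + A, True)
                    for (w, tok) in frontier}
        result |= {w for (w, _) in frontier}
    return result

S = runs(marked=True, bound=4)
# the predicted behaviour of a marked atom: (ZA)*(Z ∪ ε)
pattern = re.compile("(%s%s)*(%s|)" % (Z, A, Z))
```

Every simulated execution sequence matches the predicted regular expression, in accordance with the formula above.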
1.9 Conclusions
Some basic notions of trace theory have been briefly presented. In the next chapters of this book this theory will be made broader and deeper; the intention of the present chapter was to show some initial ideas that motivated the whole enterprise. In the sequel the reader will be able to get acquainted with the development of trace theory as a basis of non-standard logic, as well as with basic and involved results from the theory of monoids, with properties of graph representations of traces, and with a generalization of the notion of finite automaton that is consistent with the trace approach. All this work shows that the concurrency issue is still challenging and stimulating fruitful research.
Acknowledgements. The author is greatly indebted to Professor Grzegorz Rozenberg for his help and encouragement; without him this paper would not have appeared.