

Model-Checking
A Tutorial Introduction

Markus Müller-Olm¹, David Schmidt², and Bernhard Steffen¹


¹ Dortmund University, Department of Computer Science, FB 4, LS 5, 44221 Dortmund, Germany, {mmo,steffen}@ls5.cs.uni-dortmund.de
² Kansas State University, Department of Computing and Information Sciences, Manhattan, Kansas 66506, USA, [email protected]

Abstract. In the past two decades, model-checking has emerged as a promising and powerful approach to fully automatic verification of hardware systems. But model-checking technology can be usefully applied to other application areas, and this article provides fundamentals that a practitioner can use to translate verification problems into model-checking questions. A taxonomy of the notions of “model,” “property,” and “model checking” is presented, and three standard model-checking approaches are described and applied to examples.

1 Introduction

In the last two decades model-checking [11,34] has emerged as a promising and
powerful approach to automatic verification of systems. Roughly speaking, a
model checker is a procedure that decides whether a given structure M is a
model of a logical formula φ, i.e. whether M satisfies φ, abbreviated M |= φ.
Intuitively, M is an (abstract) model of the system in question, typically a finite
automata-like structure, and φ, typically drawn from a temporal or modal logic,
specifies a desirable property. The model-checker then provides a push-button
approach for proving that the system modeled by M enjoys this property. This full automation, together with the fact that efficient model-checkers can be constructed for powerful logics, accounts for the attractiveness of model-checking.
The above “generic” description of model-checking leaves room for refine-
ment. What exactly is a model to be checked? What kind of formulas are used?
What is the precise interpretation of satisfaction, |=? We present a rough map
over the various answers to these questions, and in the process, we introduce the
main approaches.
The various model-checking approaches provide a cornucopia of generic deci-
sion procedures that can be applied to scenarios that go far beyond the problem
domains for which the approaches were originally invented. (The work of some of
the authors on casting data flow analysis questions as model-checking problems
is an example [35].) We intend to provide a practitioner with a basis she can use

A. Cortesi, G. Filé (Eds.): SAS'99, LNCS 1694, pp. 330–354, 1999.
© Springer-Verlag Berlin Heidelberg 1999

(Fig. 1. Example Kripke structure. Diagram omitted; its three states carry the proposition pairs x=0/y=2, x=1/y=1, and x=2/y=0.)

to translate problems into model structures and formulas that can be solved by
model checking.
The rest of this article is organized as follows: In the next section we dis-
cuss the model structures underlying model-checking—Kripke structures, labeled
transition systems and a structure combining both called Kripke transition sys-
tems. Section 3 surveys the spectrum of the logics used for specifying properties
to be automatically checked for model structures. We then introduce three basic
approaches to model-checking: the semantic or iterative approach, the automata-
theoretic approach, and the tableau method. The paper finishes with a number
of concluding remarks.

2 Models
Model-checking typically depends on a discrete model of a system—the system’s
behavior is (abstractly) represented by a graph structure, where the nodes rep-
resent the system’s states and the arcs represent possible transitions between the
states. It is common to abstract from the identity of the nodes. Graphs alone
are too weak to provide an interesting description, so they are annotated with
more specific information. Two approaches are in common use: Kripke struc-
tures, where the nodes are annotated with so-called atomic propositions, and
labeled transition systems (LTS), where the arcs are annotated with so-called
actions. We study these two structures and introduce a third called Kripke tran-
sition systems, which combines Kripke structures and labeled transition systems
and which is often more convenient for modeling purposes.

Kripke Structures. A Kripke structure (KS) over a set AP of atomic propositions


is a triple (S, R, I), where S is a set of states, R ⊆ S × S is a transition relation, and I : S → 2^AP is an interpretation. Intuitively the atomic propositions, which formally are just symbols, represent basic local properties of system states; I assigns to each state the properties enjoyed by it. We assume that a set of atomic propositions AP always contains the propositions true and false and that, for any state s, true ∈ I(s) and false ∉ I(s). A Kripke structure is called total if R is a total relation, i.e. if for all s ∈ S there is a t ∈ S such that (s, t) ∈ R; otherwise it is called partial. For model-checking purposes S and AP are usually finite.
Figure 1 displays an example Kripke structure whose propositions take the
form, var = num; the structure represents the states that arise while the pro-
gram’s components, x and y, trade two resources back and forth.
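As a concrete illustration (not from the paper), a finite Kripke structure is just three pieces of data: a state set, a transition relation, and an interpretation. The following minimal Python sketch writes down a structure in the spirit of Fig. 1; the state names and the exact arcs are illustrative assumptions.

# A Kripke structure as plain data: states S, transition relation R ⊆ S × S,
# and interpretation I mapping each state to the atomic propositions true there.
S = {"s0", "s1", "s2"}

# Illustrative transition relation in the spirit of Fig. 1 (resources traded
# back and forth between adjacent states); an assumption, not the figure itself.
R = {("s0", "s1"), ("s1", "s0"), ("s1", "s2"), ("s2", "s1")}

# Interpretation I : S -> 2^AP with propositions of the form "var=num".
I = {
    "s0": {"true", "x=0", "y=2"},
    "s1": {"true", "x=1", "y=1"},
    "s2": {"true", "x=2", "y=0"},
}

def is_total(states, rel):
    # A Kripke structure is total iff every state has at least one successor.
    return all(any(src == s for (src, _) in rel) for s in states)

print(is_total(S, R))  # True for this example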
Kripke structures were first devised as a model theory for modal logic [5,25],
whose propositions use modalities that express necessity (“must”) and possibility (“may”). In this use of a Kripke structure, the states correspond to different
“worlds” in which different basic facts (the atomic propositions) are true; tran-
sitions represent reachability between worlds. The assertion that some fact is
possibly true is interpreted to mean there is a reachable state (world) in which
the fact holds; the assertion that a fact is necessarily true means that the fact
holds in all reachable worlds. Kripke showed that the axioms and rules in dif-
ferent systems of modal logics correspond to properties that hold in different
classes of Kripke structures [28,29]. These logical settings for Kripke structures
(in particular, the notion of “worlds”) can provide useful guidance for expressing
computing applications as Kripke structures [5,16].

Labeled Transition Systems. A labeled transition system (LTS) is a triple T =


(S, Act, →), where S is a set of states, Act is a set of actions, and → ⊆ S × Act × S is a transition relation. A transition (s, a, s′) ∈ →, for which we adopt the more intuitive notation s −a→ s′, states that the system can evolve from state s to state s′, thereby exchanging action a with its environment. We call s −a→ s′ a transition from s to s′ labeled by action a, and s′ is an a-successor of s. In an LTS, the transitions are labeled with single actions, while in a Kripke structure, states are labeled with sets of atomic propositions. Labeled transition systems originate from concurrency theory, where they are used as an operational model of process behavior [33]. In model-checking applications S and Act are usually finite.
Fig. 3 shows two small examples of labeled transition systems that display the actions a vending machine might take.

Kripke Transition Systems. Labels on arcs appear naturally when the labeling
models the dynamics of a system, whereas labels on nodes appear naturally when
the labeling models static properties of states. There are various ways to encode
arc labelings by node labelings and vice versa. (One of them is described below.)
And, logical considerations usually can be translated between these two repre-
sentations. For these reasons, theoretical analyses study just one form of labeling.
For modeling purposes, however, it is often natural to have both kinds of labeling
available. Therefore, we introduce a third model structure that combines labeled
transition systems and Kripke structures.
A Kripke transition system (KTS) over a set AP of atomic propositions is a structure T = (S, Act, →, I), where S is a set of states, Act is a set of actions, → ⊆ S × Act × S is a transition relation, and I : S → 2^AP is an interpretation.
For technical reasons we assume that AP and Act are disjoint. Kripke transition
systems generalize both Kripke structures and labeled transition systems: A
Kripke structure is a Kripke transition system with an empty set of actions,
Act, and a labeled transition system is a Kripke transition system with a trivial
interpretation, I.
Kripke transition systems work well for modeling sequential imperative pro-
grams for data flow analysis purposes, as they concisely express the implied
predicate transformer scheme: nodes express the predicates or results of the
considered analysis, and edges labeled with the statements express the nodes' interdependencies. If data flow analysis is performed via model-checking, Kripke transition systems thus make it possible to use the result of one analysis phase as input for the next one.

(Fig. 2. Two Kripke transition systems for a program. Diagrams omitted; in the first, nodes carry predicates such as {true}, {z=i*x}, {i!=y, z=i*x}, and {z=x*y}; in the second, nodes carry live-variable sets such as {y}, {i,y}, {z,i,x,y}, and {}; in both, arcs are labelled with the statements and branch conditions of the program.)
Figure 2 shows two Kripke transition systems for the program, z:=0; i:=0;
while i!=y do z:=z+x; i:=i+1 end. Both systems label arcs with program
phrases. The first system uses properties that are logical propositions of the
form, var = expr; it portrays a predicate-transformer semantics. The second
system uses propositions that are program variables; it portrays the results of a
definitely-live-variable analysis.
Any Kripke transition system T = (S, Act, →, I) over AP induces in a natural way a Kripke structure KT which codes the same information. The idea is to associate the information about the action exchanged in a transition with the reached state instead of with the transition itself. This is similar to the classic translation of Mealy automata to Moore automata. Formally, KT is the Kripke structure (S × Act, R, I′) over AP ∪ Act with R = {(⟨s, a⟩, ⟨s′, a′⟩) | s −a′→ s′} and I′(⟨s, a⟩) = I(s) ∪ {a}. Logical considerations about T usually can straightforwardly be translated to considerations about KT and vice versa. Therefore, logicians usually prefer to work with the structurally simpler Kripke structures. Nevertheless, the richer framework of Kripke transition systems is often more convenient for modeling purposes.
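The translation T ↦ KT just described is entirely mechanical. The following Python sketch (not from the paper; the tuple-and-dict representation and all names are illustrative assumptions) builds the induced Kripke structure and runs it on a tiny vending-machine-like example in the spirit of Fig. 3.

# Sketch of the KTS -> KS translation described above: K_T over AP ∪ Act.
def kts_to_ks(states, actions, trans, interp):
    # trans: set of triples (s, a, s'); interp: state -> set of propositions.
    ks_states = {(s, a) for s in states for a in actions}
    # ((s, a), (s', a')) is a transition iff s -a'-> s' in the KTS; the action
    # is remembered in the *target* state (Mealy-to-Moore style).
    R = {((s, a), (s2, a2))
         for (s, a) in ks_states
         for (s1, a2, s2) in trans
         if s1 == s}
    interp2 = {(s, a): set(interp[s]) | {a} for (s, a) in ks_states}
    return ks_states, R, interp2

# Usage on an assumed two-state vending machine:
states = {"p0", "p1"}
actions = {"coin", "coffee", "tea"}
trans = {("p0", "coin", "p1"), ("p1", "coffee", "p0"), ("p1", "tea", "p0")}
interp = {s: {"true"} for s in states}
print(kts_to_ks(states, actions, trans, interp)[1])   # the relation R of K_T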
Often we may want to designate a certain state s0 ∈ S in a KS, LTS, or KTS
as the initial state. Intuitively, execution of the system starts in this state. A
structure together with such a designated initial state is called a rooted structure.

3 Logics

The interpretation, I, in a Kripke transition system defines local properties of


states. Often we are also interested in global properties connected to the transi-
tional behavior. For example, we might be interested in reachability properties,
like, “Can we reach from the initial state a state where the atomic proposition
P holds?” Temporal logics [17,36] are logical formalisms designed for expressing
such properties.
(Fig. 3. Two vending machines. Diagram omitted; the left machine has a single coin-transition followed by a choice between coffee and tea, while the right machine already branches on coin, one branch offering only coffee and the other only tea.)

Temporal logics come in two variants, linear-time and branching-time. Linear-


time logics are concerned with properties of paths. A state in a transition system
is said to satisfy a linear-time property if all paths emanating from this state
satisfy the property. In a labeled transition system, for example, two states that
generate the same language satisfy the same linear-time properties. Branching-
time logics, on the other hand, describe properties that depend on the branching
structure of the model. Two states that generate the same language but by us-
ing different branching structures can often be distinguished by a branching-time
formula.
As an example, consider the two rooted, labeled transition systems in Fig. 3,
which model two different vending machines offering tea and coffee. Both ma-
chines serve coffee or tea after a coin has been inserted, but from the customer’s
point of view the right machine is to be avoided, because it decides internally
whether to serve coffee or tea. The left machine, in contrast, leaves this decision
to the customer. Both machines have the same set of computations (maximal
paths): {⟨coin, coffee⟩, ⟨coin, tea⟩}. Thus, a linear-time logic will be unable to
distinguish the two machines. In a branching-time logic, however, the property,
“a coffee action is possible after any coin action” can be expressed, which differ-
entiates the two machines.
The choice of using a linear-time or a branching-time logic depends on the
properties to be analyzed. Due to their greater selectivity, branching-time logics
are often better for analyzing reactive systems. Linear-time logics are preferred
when only path properties are of interest, as when analyzing data-flow properties
of graphs of imperative programs.

3.1 Linear-Time Logics

Propositional linear-time logic (PLTL) is the basic prototypical linear-time logic.


It is often presented in a form to be interpreted over Kripke structures. Its
formulas are constructed as follows, where p ranges over a set AP of atomic
propositions:

φ ::= p | ¬φ | φ1 ∨ φ2 | X(φ) | U(φ, ψ) | F(φ) | G(φ)



π |= p iff p ∈ I(π0)
π |= ¬φ iff π ⊭ φ
π |= φ1 ∨ φ2 iff π |= φ1 or π |= φ2
π |= X(φ) iff |π| > 1 and π^1 |= φ
π |= U(φ, ψ) iff there is k, 0 ≤ k < |π|, with π^k |= ψ and, for all i with 0 ≤ i < k, π^i |= φ
π |= F(φ) iff there is k, 0 ≤ k < |π|, with π^k |= φ
π |= G(φ) iff for all k with 0 ≤ k < |π|, π^k |= φ

Fig. 4. Semantics of PLTL

PLTL formulas are interpreted over paths in a Kripke structure K = (S, R, I) over AP. A finite path is a finite, non-empty sequence π = ⟨π0, ..., π_{n−1}⟩ of states π0, ..., π_{n−1} ∈ S such that (π_i, π_{i+1}) ∈ R for all 0 ≤ i < n − 1; n is called the length of the path, denoted by |π|. An infinite path is an infinite sequence π = ⟨π0, π1, π2, ...⟩ of states in S such that (π_i, π_{i+1}) ∈ R for all i ≥ 0. The length of an infinite path is ∞. For 0 ≤ i < |π|, π_i denotes the i-th state in path π, and π^i is ⟨π_i, π_{i+1}, ...⟩, the tail of the path starting at π_i. In particular, π^0 = π. A path in a Kripke structure is called maximal if it cannot be extended. In particular, every infinite path is maximal.
In Fig. 4, we present an inductive definition of when a path, π, in a Kripke
structure K = (S, R, I) satisfies a PLTL formula, φ. Intuitively, π satisfies an
atomic proposition, p, if its first state does; atomic propositions represent local
properties. ¬ and ∨ are interpreted in the obvious way; further Boolean connec-
tives may be introduced as abbreviations in the usual way, e.g., φ1 ∧ φ2 can be
introduced as ¬(¬φ1 ∨ ¬φ2 ).
The modality X(φ) (“next φ”) requires the property φ for the next situation
in the path; formally, X(φ) holds if φ holds for the path obtained by removing
the first state. G(φ) (“generally φ” or “always φ”) requires φ to hold for all
situations; F(φ) (“finally φ”) for some (later) situation. Thus G and F provide a
kind of universal (resp., existential) quantification over the later situations in a
path. U(φ, ψ) (“φ until ψ”) requires ψ to become true at some later situation and φ to be true at all situations visited before. This operator is sometimes called “strong until” because it requires ψ to eventually become true; the variant called “weak until” also holds when φ stays true forever and ψ never occurs. Strong and weak until can be defined from each other using F and G:

U(φ, ψ) = WU(φ, ψ) ∧ F(ψ) and WU(φ, ψ) = U(φ, ψ) ∨ G(φ) .

They are also (approximate) duals:

¬U(φ, ψ) = WU(¬ψ, ¬φ ∧ ¬ψ) and ¬WU(φ, ψ) = U(¬ψ, ¬φ ∧ ¬ψ) .



(Fig. 5. Illustration of the linear-time modalities X(φ), G(φ), F(φ), U(φ, ψ), and WU(φ, ψ) along a path. Diagram omitted.)

Moreover, F can easily be defined in terms of U, and G in terms of WU


F(φ) = U(true, φ) and G(φ) = WU(φ, false) ,
and F and G are duals:
F(φ) = ¬G(¬φ) and G(φ) = ¬F(¬φ) .
The meaning of the modalities is illustrated in Fig. 5.
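A direct transcription of the clauses of Fig. 4 into code can help fix the indexing conventions. The sketch below (Python, not from the paper) evaluates PLTL over finite paths, where a path is given as a list of proposition sets and formulas are encoded as nested tuples; this encoding is an illustrative assumption.

# Sketch: the PLTL semantics of Fig. 4 over finite paths.
# A path is a list [I(pi_0), I(pi_1), ...]; e.g. ("U", ("p", "P"), ("p", "Q"))
# encodes U(P, Q).
def holds(path, phi):
    op = phi[0]
    if op == "p":                       # atomic proposition: look at pi_0
        return phi[1] in path[0]
    if op == "not":
        return not holds(path, phi[1])
    if op == "or":
        return holds(path, phi[1]) or holds(path, phi[2])
    if op == "X":                       # next: needs a next position
        return len(path) > 1 and holds(path[1:], phi[1])
    if op == "U":                       # strong until
        return any(holds(path[k:], phi[2]) and
                   all(holds(path[i:], phi[1]) for i in range(k))
                   for k in range(len(path)))
    if op == "F":                       # finally
        return any(holds(path[k:], phi[1]) for k in range(len(path)))
    if op == "G":                       # generally
        return all(holds(path[k:], phi[1]) for k in range(len(path)))
    raise ValueError(op)

# U(Q, P) holds on the path <{Q}, {Q}, {P,Q}>:
print(holds([{"Q"}, {"Q"}, {"P", "Q"}], ("U", ("p", "Q"), ("p", "P"))))  # True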
While the basic structures of a linear-time logic are paths, the model-checking
question usually is presented for a given Kripke structure. The question is then
to determine, for each state, whether all paths emanating from the state satisfy
the formula. Sometimes, one restricts the question to certain kinds of paths,
e.g., just infinite paths, or maximal finite paths only, or all (finite or infinite)
maximal paths. Perhaps the most common case is to consider infinite paths in
total Kripke structures.
Variants of PLTL may also be defined for speaking about the actions in a KTS or LTS. A (finite) path in a KTS or LTS is then a non-empty alternating sequence π = ⟨s0, a1, s1, ..., a_{n−1}, s_{n−1}⟩ of states and actions that begins and ends with a state and satisfies s_i −a_{i+1}→ s_{i+1} for i = 0, ..., n − 2. Again we call |π| = n the length of path π, π_i stands for s_i, and π^i denotes the path ⟨s_i, a_{i+1}, s_{i+1}, ..., s_{n−1}⟩. Infinite paths are defined similarly. With these conventions, PLTL can immediately be interpreted on such extended paths with the definition in Fig. 4. We may now also extend the syntax of PLTL by allowing formulas of the form (a), where a is an action in Act. These formulas are interpreted as follows:

π |= (a) iff |π| > 1 and a1 = a,

where a1 is the first action in π.

3.2 Branching-Time Logics


Hennessy-Milner Logic. Hennessy-Milner logic (HML) is a simple modal
logic introduced by Hennessy and Milner in [24,33]. As far as model-checking is

(Fig. 6. Illustration of the branching-time modalities: [a]φ requires φ at all a-successors, ⟨a⟩φ at some a-successor. Diagram omitted.)

concerned, Hennessy-Milner logic is limited because it can express properties of


only bounded depth. Nevertheless, it is of interest because it forms the core of
the modal µ-calculus, which appears in the next section.
HML is defined over a given set, Act, of actions, ranged over by a. Formulas
are constructed according to the grammar,
φ ::= true | false | φ1 ∧ φ2 | φ1 ∨ φ2 | [a]φ | ⟨a⟩φ
The logic is interpreted over labeled transition systems. Given an LTS T =
(S, Act, →), we define inductively when state s ∈ S satisfies HML formula φ:
s |= true and s ⊭ false, for every state s
s |= φ1 ∧ φ2 iff s |= φ1 and s |= φ2
s |= φ1 ∨ φ2 iff s |= φ1 or s |= φ2
s |= [a]φ iff for all t with s −a→ t, t |= φ
s |= ⟨a⟩φ iff there is t with s −a→ t and t |= φ
All states satisfy true and no state satisfies false. A state satisfies φ1 ∧ φ2 if it satisfies both φ1 and φ2; it satisfies φ1 ∨ φ2 if it satisfies φ1 or φ2 (or both). The most interesting operators of HML are the branching-time modalities [a] and ⟨a⟩. They relate a state to its a-successors. While [a]φ holds for a state if all its a-successors satisfy formula φ, ⟨a⟩φ holds if an a-successor satisfying formula φ exists. This is illustrated in Fig. 6. The two modalities provide a kind of universal and existential quantification over the a-successors of a state.
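Because HML has no fixpoint operators, its semantics can be computed by a plain recursion over the formula. The Python sketch below (illustrative; the state names and the tuple encoding of formulas are assumptions) checks the property “a coffee action is possible after any coin action” on two LTSs in the spirit of Fig. 3.

# Sketch: HML over an LTS given as a set of transitions (s, a, s').
def sat(trans, s, phi):
    op = phi[0]
    if op == "true":
        return True
    if op == "false":
        return False
    if op == "and":
        return sat(trans, s, phi[1]) and sat(trans, s, phi[2])
    if op == "or":
        return sat(trans, s, phi[1]) or sat(trans, s, phi[2])
    succ = [t for (u, a, t) in trans if u == s and a == phi[1]]
    if op == "box":                    # [a]phi: every a-successor satisfies phi
        return all(sat(trans, t, phi[2]) for t in succ)
    if op == "dia":                    # <a>phi: some a-successor satisfies phi
        return any(sat(trans, t, phi[2]) for t in succ)
    raise ValueError(op)

left  = {("l0", "coin", "l1"), ("l1", "coffee", "l0"), ("l1", "tea", "l0")}
right = {("r0", "coin", "r1"), ("r0", "coin", "r2"),
         ("r1", "coffee", "r0"), ("r2", "tea", "r0")}
prop = ("box", "coin", ("dia", "coffee", ("true",)))   # [coin]<coffee>true
print(sat(left, "l0", prop), sat(right, "r0", prop))   # True False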
As introduced above, HML has as its atomic propositions only true and false.
If HML is interpreted on a KTS T = (S, Act, →, I) over a certain set of atomic
propositions, AP, we may add atomic formulas p for each p ∈ AP. These formulas
are interpreted as follows:
s |= p iff p ∈ I(s)
Moreover, it is sometimes useful in practice to use modalities [A] and ⟨A⟩ that range over a set of actions A ⊆ Act instead of a single action. They can be introduced as derived operators:

[A]φ = ⋀_{a∈A} [a]φ and ⟨A⟩φ = ⋁_{a∈A} ⟨a⟩φ.

We also write [ ] for [Act] and ⟨ ⟩ for ⟨Act⟩. A version of HML suitable for Kripke structures would provide just the modalities [ ] and ⟨ ⟩.

Modal µ-Calculus. The modal mu-calculus [27] is a small, yet expressive


branching-time temporal logic that extends Hennessy-Milner logic by fixpoint
operators. Again it is defined over a set, Act, of actions. We also assume a given
infinite set Var of variables. Modal µ-calculus formulas are constructed according
to the following grammar:

φ ::= true | false | [a]φ | ⟨a⟩φ | φ1 ∧ φ2 | φ1 ∨ φ2 | X | µX . φ | νX . φ

Here, X ranges over Var and a over Act. The two fixpoint operators, µX and νX,
bind free occurrences of variable X. We will apply the usual terminology of free
and bound variables in a formula, closed and open formulas, etc. Given a least
(resp., greatest) fixpoint formula, µX . φ (νX . φ), we say that the µ (ν) is the
formula’s parity.
The above grammar does not permit negations in formulas. Of course, nega-
tion is convenient for specification purposes, but negation-free formulas, known
as formulas in positive form, are more easily handled by model-checkers. In most
logics, formulas with negation can easily be transformed into equivalent formulas
in positive form by driving negations inwards to the atomic propositions with
duality laws like

¬⟨a⟩φ = [a]¬φ,  ¬(φ1 ∧ φ2) = ¬φ1 ∨ ¬φ2,  and  ¬(µX . φ) = νX . ¬φ[¬X/X].

But there is a small complication: We might end up with a subformula of the


form ¬X from which the negation cannot be eliminated. We avoid this problem
if we pose this restriction on fixpoint formulas: in every fixpoint formula, µX . φ
or νX . φ, every free occurrence of X in φ must appear under an even number
of negations. This condition ensures also that the meaning of φ depends mono-
tonically on X, which is important for the well-definedness of the semantics of
fixpoint formulas.
Modal mu-calculus formulas are interpreted over labeled transition systems.
Given an LTS T = (S, Act, →), we interpret a closed formula, φ, as that subset of S whose states make φ true. To explain the meaning of open formulas, we employ environments, i.e., partial mappings ρ : Var → 2^S, which interpret the free variables of φ by subsets of S; ρ(X) represents an assumption about the set of states satisfying the formula X. The inductive definition of MT(φ)(ρ), the set of states of T satisfying the mu-calculus formula φ w.r.t. environment ρ, is given in Fig. 7. The meaning of a closed formula does not depend on the environment. We write, for a closed formula φ and a state s ∈ S, s |=T φ if s ∈ MT(φ)(ρ) for one (and therefore for all) environments.
Intuitively, true and false hold for all, resp., no states, and ∧ and ∨ are interpreted by conjunction and disjunction. As in HML, ⟨a⟩φ holds for a state s if there is an a-successor of s which satisfies φ, and [a]φ holds for s if all its a-successors satisfy φ. The interpretation of a variable X is as prescribed by the environment. The least fixpoint formula, µX . φ, is interpreted by the smallest subset x of S that recurs when φ is interpreted with the substitution of x for X. Similarly, the greatest fixpoint formula, νX . φ, is interpreted by the largest such

MT(true)(ρ) = S
MT(false)(ρ) = ∅
MT([a]φ)(ρ) = {s | ∀s′ : s −a→ s′ ⇒ s′ ∈ MT(φ)(ρ)}
MT(⟨a⟩φ)(ρ) = {s | ∃s′ : s −a→ s′ ∧ s′ ∈ MT(φ)(ρ)}
MT(φ1 ∧ φ2)(ρ) = MT(φ1)(ρ) ∩ MT(φ2)(ρ)
MT(φ1 ∨ φ2)(ρ) = MT(φ1)(ρ) ∪ MT(φ2)(ρ)
MT(X)(ρ) = ρ(X)
MT(µX . φ)(ρ) = fixµ Fφ,ρ
MT(νX . φ)(ρ) = fixν Fφ,ρ

Fig. 7. Semantics of the modal mu-calculus

set. These sets can be characterized as the least and greatest fixpoints, fixµ Fφ,ρ and fixν Fφ,ρ, of the functional Fφ,ρ : 2^S → 2^S defined by

Fφ,ρ(x) = MT(φ)(ρ[X ↦ x]).

Here, ρ[X ↦ x] denotes, for a set x ⊆ S and a variable X ∈ Var, the environment that maps X to x and that coincides on the other variables with ρ. We now must review the basic theory of fixpoints.
review the basic theory of fixpoints.

3.3 Fixpoints in Complete Lattices

A convenient structure accommodating fixpoint construction is the complete


lattice, i.e., a non-empty, partially ordered set in which arbitrary meets and joins
exist. We assume the reader is familiar with the basic facts of complete lattices
(for a thorough introduction see the classic books of Birkhoff and Grätzer [3,21]).
We now recall some definitions and results directly related to fixpoint theory.
Let (A, ≤A) and (B, ≤B) be complete lattices and C a subset of A. C is a chain if it is non-empty and any two elements of C are comparable with respect to ≤A. A mapping f ∈ (A → B) is monotonic if a ≤A a′ implies f(a) ≤B f(a′) for all a, a′ ∈ A. The mapping is ∨-continuous if it distributes over chains, i.e., for all chains C ⊆ A, f(⋁C) = ⋁{f(c) | c ∈ C}. The notion of ∧-continuity is defined dually. Both ∨- and ∧-continuity of a function imply monotonicity. ⊥ and ⊤ denote the smallest and largest elements of a complete lattice. Finally, a point a ∈ A is called a fixpoint of a function f ∈ (A → A) if f(a) = a. It is a pre-fixpoint of f if f(a) ≤ a and a post-fixpoint if a ≤ f(a).
Suppose that f : A → A is a monotonic mapping on a complete lattice
(A, ≤). The central result of fixpoint theory is the following [40,31]:

Theorem 1 (Knaster-Tarski fixpoint theorem). If f : A → A is a monotonic mapping on a complete lattice (A, ≤), then f has a least fixpoint fixµ f as well as a greatest fixpoint fixν f, which can be characterized as the smallest pre-fixpoint and the largest post-fixpoint, respectively:

fixµ f = ⋀{a | f(a) ≤ a} and fixν f = ⋁{a | a ≤ f(a)}.

For continuous functions there is a “constructive” characterization that obtains the least and greatest fixpoints by iterated application of the function to the smallest (greatest) element of the lattice [26]. The iterated application of f is inductively defined by the two equations f^0(a) = a and f^{i+1}(a) = f(f^i(a)).
Theorem 2 (Kleene fixpoint theorem). For a complete lattice (A, ≤), if f : A → A is ∨-continuous, then its least fixpoint is the join of the chain of iterates:

fixµ f = ⋁{f^i(⊥) | i ≥ 0}.

Dually, if f is ∧-continuous, its greatest fixpoint is the meet of the chain of iterates:

fixν f = ⋀{f^i(⊤) | i ≥ 0}.

In the above characterization we have f^0(⊥) ≤ f^1(⊥) ≤ f^2(⊥) ≤ ··· and, dually, f^0(⊤) ≥ f^1(⊤) ≥ f^2(⊤) ≥ ···. As any monotonic function on a finite lattice is both ∨-continuous and ∧-continuous, this lets us effectively calculate least and greatest fixpoints for arbitrary monotonic functions on finite complete lattices: the least fixpoint is found with the smallest value of i such that f^i(⊥) = f^{i+1}(⊥); the greatest fixpoint is calculated similarly. This observation underlies the semantic approach to model-checking described in Sect. 4.3.
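On a finite lattice the chains of Theorem 2 stabilize after finitely many steps, so the join and meet reduce to plain iteration until nothing changes. The following generic Python sketch (not from the paper; the names lfp and gfp and the toy functions are illustrative) shows this scheme on the powerset lattice (2^S, ⊆).

# Sketch: Kleene iteration on a finite powerset lattice.
def lfp(f, bottom=frozenset()):
    # Least fixpoint: iterate f from the bottom element until it stabilizes.
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

def gfp(f, top):
    # Greatest fixpoint: iterate f from the top element until it stabilizes.
    x = frozenset(top)
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

S = frozenset(range(6))
# lfp of F(X) = {0} ∪ {n+1 | n ∈ X, n < 4}: the set {0,...,4}.
print(sorted(lfp(lambda X: frozenset({0}) | frozenset(n + 1 for n in X if n < 4))))
# gfp of G(X) = {n ∈ X | n+1 ∈ X}: the iteration shrinks from S to the empty set.
print(sorted(gfp(lambda X: frozenset(n for n in X if n + 1 in X), S)))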
The following variant of Kleene’s fixpoint theorem shows that the iteration
can be started with any value below the least or above the greatest fixpoint;
it is not necessary to take the extremal values ⊥ and >. This observation can
be exploited to speed up the fixpoint calculation if a safe approximation is al-
ready known. In particular, it can be used when model-checking nested fixpoint
formulas of the same parity (see Sect. 4.3).
Theorem 3 (Variant of Kleene's fixpoint theorem). Suppose that f : A → A is a ∨-continuous function on a complete lattice (A, ≤), and a ∈ A. If a ≤ fixµ f, then fixµ f = ⋁{f^i(a) | i ≥ 0}.
Dually, if f is ∧-continuous and fixν f ≤ a, then fixν f = ⋀{f^i(a) | i ≥ 0}.
In the context of the modal mu-calculus, the fixpoint theorems are applied to the complete lattice, (2^S, ⊆), of the subsets of S ordered by set inclusion. With the expected ordering on environments (ρ ⊑ ρ′ iff dom ρ = dom ρ′ and ρ(X) ⊆ ρ′(X) for all X ∈ dom ρ), we can prove that Fφ,ρ is monotonic. Thus, the Knaster-Tarski fixpoint theorem ensures the existence of fixµ Fφ,ρ and fixν Fφ,ρ and gives us the following two equations that are often used for defining the semantics of fixpoint formulas:

MT(µX . φ)(ρ) = ⋂{x ⊆ S | MT(φ)(ρ[X ↦ x]) ⊆ x}
MT(νX . φ)(ρ) = ⋃{x ⊆ S | MT(φ)(ρ[X ↦ x]) ⊇ x}

For finite-state transition systems, the Kleene fixpoint theorem gives us:

MT(µX . φ)(ρ) = ⋃{Fφ,ρ^i(∅) | i ≥ 0}
MT(νX . φ)(ρ) = ⋂{Fφ,ρ^i(S) | i ≥ 0}

These characterizations are central for the semantic approach to model-checking


described in Sect. 4.3.
It is simple to extend the modal mu-calculus to work on Kripke transition
systems instead of labeled transition systems: We allow the underlying atomic
propositions p ∈ AP as atomic formulas p. The semantic clause for these formulas
looks as follows:
MT (p)(ρ) = {s ∈ S | p ∈ I(s)} .
If we replace the modalities [a] and ⟨a⟩ by [ ] and ⟨ ⟩, we obtain a version of the modal mu-calculus that fits pure Kripke structures as model structures.

Computational Tree Logic. Computational Tree Logic (CTL) was the first
temporal logic for which an efficient model-checking procedure was proposed
[11]. Its syntax looks as follows:

φ ::= p | ¬φ | φ1 ∨ φ2 | AU(φ, ψ) | EU(φ, ψ) | AF(φ) | EF(φ) | AG(φ) | EG(φ)

CTL has the six modalities AU, EU, AF, EF, AG, and EG. Each takes the form QL,
where Q is one of the path quantifiers A and E, and L is one of the linear-time
modalities U, F, and G. The path quantifier provides a universal (A) or existential
(E) quantification over the paths emanating from a state, and on these paths the
corresponding linear-time property must hold. For example, the formula EF(φ) is true for a state, s, if there is a path, π, starting in s on which φ becomes true at some situation; i.e., the path π has to satisfy π |= F(φ) in the sense of PLTL. In contrast, AF(φ) holds if φ eventually becomes true on all paths starting in s.
The meaning of the CTL modalities can be expressed by means of fixpoint
formulas. In this sense, CTL provides useful abbreviations for frequently used
formulas of the modal µ-calculus. Here are the fixpoint definitions of the U
modalities:
AU(φ, ψ) = µX . (ψ ∨ (φ ∧ [ ]X ∧ ⟨ ⟩true))
EU(φ, ψ) = µX . (ψ ∨ (φ ∧ ⟨ ⟩X)).

The F modalities can easily be expressed by the U modalities:

AF(φ) = AU(true, φ) and EF(φ) = EU(true, φ),

and the G modalities are easily defined as the duals of the F modalities:

AG(φ) = ¬EF(¬φ) and EG(φ) = ¬AF(¬φ).

By unfolding these definitions, direct fixpoint characterizations of the F and G modalities can easily be obtained.
The version of CTL described above operates on pure Kripke structures. For LTSs and KTSs it is less useful, as it does not specify anything about the labels on arcs. We might extend CTL's modalities by relativizing them with respect to sets of actions A ⊆ Act: the path quantifiers consider only those paths whose actions come from A; all other paths are disregarded. In the following system, e.g.,
(Diagram omitted: a small system with transitions s −a→ t and t −b→ u, an additional c-labelled transition, and the atomic proposition P holding at u.)
the state s satisfies AF{a,b}(P), as the path ⟨s, a, t, b, u⟩ is taken into account. But s does not satisfy AF{a,c}(P), as here only the path ⟨s, a, t⟩ is considered.
Again these modalities can be defined by fixpoint formulas, for example:

AGA(φ) = νX . φ ∧ [A]X and EFA(φ) = µY . φ ∨ ⟨A⟩Y.

4 Model Checking
4.1 Global vs. Local Model-Checking
There are two ways in which the model-checking problem can be specified:

Global model-checking problem: Given a finite model structure, M ,


and a formula, φ, determine the set of states in M that satisfy φ.

Local model-checking problem: Given a finite model structure, M ,


a formula, φ, and a state s in M , determine whether s satisfies φ.

While the local model-checking problem must determine modelhood of a single state, the global problem must decide modelhood for all the states in the structure. Obviously, a solution of the global model-checking problem comprises a solution of the local problem, and solving the local model-checking problem for each state in the structure solves the global model-checking problem. Thus, the two problems are closely related, but global and local model-checkers have different applications.
For example, a classic application of model-checking is the verification of
properties of models of hardware systems, where the hardware system contains
many parallel components whose interaction is modeled by interleaving. The
system’s model structure grows exponentially with the number of parallel com-
ponents, a problem known as the state-explosion problem. (A similar problem
arises when software systems, with variables ranging over a finite domain, are
analyzed—the state space grows exponentially with the number of variables.)
In such an application, local model-checking is usually preferred, because the
property of interest is often expressed with respect to a specific initial state—a
local model checker might inspect only a small part of the structure to decide

                             Branching-time   Linear-time   Global   Local
Semantic methods                   X                           X
Automata-theoretic methods                        X            X        X
Tableau methods                    X              X                     X

Fig. 8. Classification of model-checking approaches

the problem, and the part of the structure that is not inspected need not even
be constructed. Thus, local model-checking is one means for fighting the state-
explosion problem.
For other applications, like the use of model-checking for data-flow analysis,
one is really interested in solving the global question, as the very purpose of the
model-checking activity is to gain knowledge about all the states of a structure.
(For example, in Figure 2, the structure is the flow graph of the program to be
analyzed, and the property specified by a formula might be whether a certain
variable is “definitely live.”) Such applications use structures that are rather
small in comparison to those arising in verification activities, and the state-
explosion problem holds less importance. Global methods are preferred in such
situations.

4.2 Model-Checking Approaches

Model checking can be implemented by several different approaches; prominent


examples are the semantic approach, the automata-theoretic approach, and the
tableau approach.
The idea behind the semantic (or iterative) approach is to inductively com-
pute the semantics of the formula in question on the given finite model. This
generates a global model-checker and works well for branching-time logics like
the modal mu-calculus and CTL. Modalities are reduced to their fixpoint defi-
nitions, and fixpoints are computed by applying the Kleene fixpoint theorem to
the finite domain of the state powerset.
The automata-theoretic approach is mainly used for linear-time logics; it re-
duces the model-checking problem to an inclusion problem between automata.
An automaton, Aφ , is constructed from formula, φ; Aφ accepts the paths satisfy-
ing φ. Another automaton, AM , is constructed from the model, M , and accepts
the paths exhibited by the model. M satisfies φ iff L(AM ) ⊆ L(Aφ ). This prob-
lem can in turn be reduced to the problem of deciding non-emptiness of a product
automaton which is possible by a reachability analysis.
The tableau method solves the local model-checking problem by subgoaling.
Essentially, one tries to construct a proof tree that witnesses that the given state
has the given property. If no proof tree can be found, this provides a disproof of
the property for the given state. Since the tableau method intends to inspect only
a small fraction of the state space, it combines well with incremental construction
of the state space, which is a prominent approach for fighting state explosion.
Figure 8 presents typical profiles of the three approaches along the axes of
branching- vs. linear-time and global vs. local model-checking. The classification

is, of course, to be understood cum grano salis. Applications of the methods


for other scenarios are also possible but less common. In the remainder of this
section, we describe each of these approaches in more detail.

4.3 Semantic Approach

Based on the iterative characterization of fixpoints, the semantics of modal mu-


calculus formulas can be effectively evaluated on finite-state Kripke transition
systems. But in general, this is quite difficult, due to the potential interference
between least and greatest fixpoints. As we see later in this section, the alternated
nesting of least and greatest fixpoints forces us to introduce backtracking into
the fixpoint iteration procedure, which causes an exponential worst-case time
complexity of iterative model checking for the full mu-calculus. Whether this
is a tight bound for the model-checking problem as such is a challenging open
problem.
Before we explain the subtleties of alternation, we illustrate iterative model
checking for formulas whose fixpoint subformulas contain no free variables. In
this case, the iteration process can be organized in a hierarchical fashion, giving
a decision procedure whose worst-case time complexity is proportional to the
size of the underlying finite-state system and the size of the formula:

1. Associate all variables belonging to greatest fixpoints with the full set of states S and all variables belonging to least fixpoints with ∅.
2. Choose a subformula, φ = µX.φ′ (or νX.φ′), where φ′ is fixpoint-free, determine its semantics, and replace it by an atomic proposition Aφ, whose valuation is defined by this semantics.
3. Repeat the second step until the whole formula is processed.
We illustrate this hierarchical procedure for φ = AG{b}(EF{b}(⟨a⟩true)) and the following transition system T (diagram omitted; its transitions, as reconstructed from the computation below, are t −b→ s, t −b→ u, u −b→ u, u −a→ v, and v −b→ u):
Intuitively, φ describes the property that from all states reachable via b-transitions, an a-transition is finitely reachable along a path of b-transitions; u and v enjoy this property while s and t do not.
Unfolding the CTL-like operators in φ using the corresponding fixpoint definitions, we obtain

φ = νX . (µY . ⟨a⟩true ∨ ⟨b⟩Y) ∧ [b]X.

A hierarchical model-checker first evaluates (µY . ⟨a⟩true ∨ ⟨b⟩Y), the inner fixpoint formula. Letting φY denote the formula ⟨a⟩true ∨ ⟨b⟩Y, we have, for any environment ρ,

MT(µY . φY)(ρ) = fixµ FφY,ρ.

Now, by the Kleene fixpoint theorem, this fixpoint can be calculated by iterated application of FφY,ρ to the smallest element of 2^S, namely ∅. Writing F for FφY,ρ, the resulting approximations are:

F^0(∅) = ∅
F^1(∅) = M(⟨a⟩true)(ρ[Y ↦ ∅]) ∪ M(⟨b⟩Y)(ρ[Y ↦ ∅]) = {u} ∪ ∅ = {u}
F^2(∅) = F({u}) = M(⟨a⟩true)(ρ[Y ↦ {u}]) ∪ M(⟨b⟩Y)(ρ[Y ↦ {u}]) = {u} ∪ {t, u, v} = {t, u, v}
F^3(∅) = F({t, u, v}) = M(⟨a⟩true)(ρ[Y ↦ {t, u, v}]) ∪ M(⟨b⟩Y)(ρ[Y ↦ {t, u, v}]) = {u} ∪ {t, u, v} = {t, u, v}.

Thus, {t, u, v} is the meaning of µY . φY in any environment. Next, the hierarchical model-checker evaluates the formula φ′ = νX . pY ∧ [b]X, where pY is a new atomic proposition that holds true for the states t, u, and v. Again, this is done by an iteration that starts this time with S = {s, t, u, v}, as we are confronted with a greatest fixpoint. Let φX denote the formula pY ∧ [b]X and write G for FφX,ρ. The iteration's results look as follows:

G^0(S) = S
G^1(S) = M(pY)(ρ[X ↦ S]) ∩ M([b]X)(ρ[X ↦ S]) = {t, u, v} ∩ S = {t, u, v}
G^2(S) = G({t, u, v}) = M(pY)(ρ[X ↦ {t, u, v}]) ∩ M([b]X)(ρ[X ↦ {t, u, v}]) = {t, u, v} ∩ {s, u, v} = {u, v}
G^3(S) = G({u, v}) = M(pY)(ρ[X ↦ {u, v}]) ∩ M([b]X)(ρ[X ↦ {u, v}]) = {t, u, v} ∩ {s, u, v} = {u, v}.

This model check confirms our expectation that just the states u and v have property φ.
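For readers who want to replay this computation, the following Python sketch (not the paper's code) performs the two iterations directly as set operations. The transition relation used is the one reconstructed above for T, which is itself an assumption recovered from the printed approximations rather than from the original figure.

# Sketch: the two fixpoint iterations of the example as set operations.
S = {"s", "t", "u", "v"}
trans = {("t", "b", "s"), ("t", "b", "u"), ("u", "b", "u"),
         ("u", "a", "v"), ("v", "b", "u")}          # reconstructed, see above

def dia(a, X):   # <a>X: states with some a-successor in X
    return {s for (s, l, t) in trans if l == a and t in X}

def box(a, X):   # [a]X: states all of whose a-successors lie in X
    return {s for s in S
            if all(t in X for (u, l, t) in trans if u == s and l == a)}

# Inner least fixpoint:  mu Y . <a>true or <b>Y
Y = set()
while True:
    Y2 = dia("a", S) | dia("b", Y)
    if Y2 == Y:
        break
    Y = Y2
print(Y)    # {'t', 'u', 'v'}, as computed above

# Outer greatest fixpoint:  nu X . p_Y and [b]X, with p_Y the set just computed
X = set(S)
while True:
    X2 = Y & box("b", X)
    if X2 == X:
        break
    X = X2
print(X)    # {'u', 'v'}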
In the above example, the inner fixpoint formula, (µY . ⟨a⟩true ∨ ⟨b⟩Y), does
not use the variable introduced by the outer fixpoint operator, X. Therefore,
its value does not depend on the environment, ρ. This is always the case for
CTL-like formulas and enables the hierarchical approach to work correctly. If,
however, the inner fixpoint depends on the variable introduced further outwards,
we must—at least in principle—evaluate the inner fixpoint formula again and
again in each iteration of the outer formula. Fortunately, if the fixpoint formulas
have the same parity, i.e., they are either both least fixpoint formulas or both
greatest fixpoint formulas, we can avoid the problem and correctly compute the

values of the inner and outer formulas simultaneously, because the value of a
fixpoint formula depends monotonically on the value of its free variables and the
iterations of both formulas proceed in the same direction.
In the case of mutual dependencies between least and greatest fixpoints,
however, the iterations proceed in opposite directions, which excludes a simple
monotonic iteration process. Such formulas are called alternating fixpoint formu-
las. They require backtracking (or resets) in the iteration process. The following
is a minimal illustrative example.
Example 1. Consider the formula ψ = νZ . µY . (⟨b⟩Z ∨ ⟨a⟩Y), which intuitively specifies that there is a path consisting of a- and b-steps with infinitely many b-steps. We would like to check it for the following LTS (diagram omitted; as reconstructed from the iterations below, it has two states s and t, a b-transition from t to s, and an a-loop at t):
Here are the results of the iterations for the outer fixpoint variable Z, with the nested iterations for Y:

Iteration for Z:     1               2        3
Assumption for Z:    {s, t}          {t}      ∅
Iterations for Y:    ∅, {t}, {t}     ∅, ∅     ∅, ∅

Thus, we correctly calculate that neither s nor t satisfies ψ.

If, however, we do not reset the iterations of Y to ∅ in each iteration of Z but simply start Y's iterations with the old approximations, we produce the wrong result {t}, which “stabilizes” itself:

Iteration for Z:     1               2
Assumption for Z:    {s, t}          {t}
Iterations for Y:    ∅, {t}, {t}     {t}, {t}
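The difference between resetting and not resetting the inner iteration is easy to replay mechanically. The sketch below (Python, illustrative; the two-state LTS is the reconstruction assumed above, with an a-loop at t and a b-transition from t to s) computes νZ . µY . (⟨b⟩Z ∨ ⟨a⟩Y) both ways.

# Sketch: nested iteration for  nu Z . mu Y . (<b>Z or <a>Y), with and without
# resetting the inner approximation, on the reconstructed two-state LTS.
S = {"s", "t"}
trans = {("t", "a", "t"), ("t", "b", "s")}    # an assumption, see above

def dia(a, X):   # <a>X: states with some a-successor in X
    return {s for (s, l, t) in trans if l == a and t in X}

def check(reset):
    Z, Y = set(S), set()
    while True:
        if reset:
            Y = set()              # correct: restart the inner iteration at the empty set
        while True:                # inner least fixpoint for the current Z
            Y2 = dia("b", Z) | dia("a", Y)
            if Y2 == Y:
                break
            Y = Y2
        if Y == Z:                 # outer greatest fixpoint reached
            return Z
        Z = Y

print(check(reset=True))    # set(): neither s nor t satisfies psi
print(check(reset=False))   # {'t'}: the wrong, self-stabilizing result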

Due to the nested iteration, the complexity of model-checking formulas with alternation is high. A careful implementation of the iterative approach leads to an asymptotic worst-case running time of O((|T| · |φ|)^ad) [13], where |T| is the number of states and transitions in the model structure (LTS, KS, or KTS), |φ| is the size of the formula (measured, say, by the number of operators), and ad is the alternation depth of the formula, which essentially is the number of non-trivial alternations between least and greatest fixpoints. (The alternation depth of an alternation-free formula is taken to be 1.) While the precise definition
of alternation depth is not uniform in the literature, all definitions share the
intuitive idea and the above stated model-checking complexity. By exploiting
monotonicity, a better time-complexity can be achieved but at the cost of a
large storage requirement [32]. It is a challenging open problem to determine the
precise complexity of µ-calculus model-checking; it might well turn out to be
polynomial (it is known to be in the intersection of the classes NP and co-NP)!

(Fig. 9. An automaton corresponding to φ = U(U(P, Q), R). Diagram omitted; its arrows are labelled with constraints such as p, q, r, q ∧ r, and p ∧ r, following the convention explained in Example 2.)

Alternation-free formulas can be checked efficiently in time O(|T | · |φ|). This


holds in particular for all CTL-like formulas as they unfold to alternation-free
fixpoint formulas.

4.4 Automata-Theoretic Approach


For easy accessibility, we illustrate the automata-theoretic approach for checking PLTL formulas on maximal finite paths in Kripke structures: given a Kripke structure (S, R, I), a state s in S, and a PLTL formula φ, we say that s satisfies φ if every maximal finite path π = ⟨s0, s1, ..., sn⟩ starting in s (i.e., with s0 = s) satisfies φ.
A path π as above can be identified with the finite word wπ = ⟨I(s0), I(s1), ..., I(sn)⟩ over the alphabet 2^AP. Note that the letters of the alphabet are subsets of the set of atomic propositions. In a straightforward way, validity of PLTL formulas can also be defined for such words. Now, a PLTL formula, φ, induces a language of words over 2^AP containing just those words that satisfy φ. For any PLTL formula, the resulting language is regular, and there are systematic methods for constructing a finite automaton, Aφ, that accepts just this language. (This automaton can also be used to check satisfiability of φ; we merely check whether the language accepted by the automaton is non-empty, which amounts to checking whether a final state of the automaton is reachable from the initial state.) In general, the size of Aφ grows exponentially with the size of φ but is often small in practice.
Example 2. Consider the formula φ = U(U(P, Q), R). The corresponding automaton Aφ is shown in Fig. 9. We adopt the convention that an arrow marked with a lower-case letter represents transitions for all sets containing the proposition denoted by the corresponding upper-case letter. An arrow marked with p, for example, represents transitions for the sets {P}, {P, Q}, {P, R}, and {P, Q, R}. Similarly, a conjunction of lower-case letters represents transitions for all sets containing both corresponding upper-case propositions. For example, an arrow marked with p ∧ q represents transitions marked with {P, Q} and {P, Q, R}.
It is easy to construct from the Kripke structure, K, a finite automaton, AK ,
accepting just the words wπ corresponding to the maximal finite paths starting
(Fig. 10. A formula and the corresponding automata. Diagrams omitted; the formula F(P) ∧ G(Q) is turned into the automaton Aφ over the alphabet {∅, {P}, {Q}, {P, Q}} by automata construction, determinization, and minimization, and a complementation step then yields the automaton for the complement language.)

(Fig. 11. Two Kripke structures, the corresponding automata, and their products with the formula automaton from Fig. 10. Diagrams omitted. Model-checking succeeds for the left Kripke structure and fails for the right one.)

in s: it is given by AK = (S ∪ {sf}, 2^AP, δ, s, {sf}), where sf is a new state and the only final state in the automaton, and

δ = {(s, I(s), t) | (s, t) ∈ R} ∪ {(s, I(s), sf) | s is final in K}.

Here, a state s is called final in K if it has no successor, i.e., there is no t with (s, t) ∈ R.
Answering the model-checking question amounts to checking whether L(AK) ⊆ L(Aφ). This is equivalent to L(AK) ∩ L(Aφ)^c = ∅. The latter property can effectively be checked: finite automata can effectively be complemented, the product automaton of two automata accepts the intersection of their languages, and the resulting automaton can effectively be checked for emptiness.
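The chain of reductions just described (complementation, product construction, emptiness via reachability) is easy to phrase generically. Below is a Python sketch for automata over finite words; the tuple representation and the names product and is_empty are illustrative assumptions, not tied to the paper's figures.

# Sketch: product of two finite automata and an emptiness test by reachability.
# An automaton is a tuple (states, initial, finals, delta), where delta is a
# set of (state, letter, state) triples; letters can be frozensets of
# propositions, as in the text.
from collections import deque

def product(aut1, aut2):
    (S1, i1, F1, d1), (S2, i2, F2, d2) = aut1, aut2
    delta = {((p, q), a, (p2, q2))
             for (p, a, p2) in d1 for (q, b, q2) in d2 if a == b}
    finals = {(p, q) for p in F1 for q in F2}
    return ({(p, q) for p in S1 for q in S2}, (i1, i2), finals, delta)

def is_empty(aut):
    # The accepted language is empty iff no final state is reachable from the
    # initial state.
    states, init, finals, delta = aut
    seen, todo = {init}, deque([init])
    while todo:
        p = todo.popleft()
        if p in finals:
            return False
        for (q, a, q2) in delta:
            if q == p and q2 not in seen:
                seen.add(q2)
                todo.append(q2)
    return True

With automata encoded this way, the check of the text becomes is_empty(product(a_K, a_phi_complement)), where a_K and a_phi_complement are hypothetical variables holding AK and the complement automaton for Aφ in this representation; if the product is non-empty, any word reaching a final state corresponds to a path of K that violates φ.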
Example 3. Let us illustrate this approach for the formula φ = F(P) ∧ G(Q) over AP = {P, Q}. Figure 10 shows the automata generated from φ. The first

automaton in the figure is Aφ , the automaton that accepts the language over
2{P,Q} corresponding to φ. Complementation of a finite automaton is partic-
ularly simple if the automaton is deterministic and fully defined, because the
automaton has for any state exactly one transition for any input symbol. An au-
tomaton can be made deterministic by the well-known power-set construction;
full definedness can be obtained by adding a (non-final) state that “catches”
all undefined transitions. The automaton Aφ shown in Fig. 10 has been made
deterministic and is fully defined; to keep it small, it has also been minimized.
Such transformations preserve the language accepted by the automaton and are
admissible and commonly applied. The second automaton in Fig. 10, to which
we refer by Acφ in the following, accepts the complement of the language corre-
sponding to φ. As Aφ has been made deterministic and is fully defined, it can
easily be complemented by exchanging final and non-final states.
In Fig. 11, we show two rooted Kripke structures K, the corresponding automata AK, and the product automata Acφ × AK. (Strictly speaking, only the part of each product automaton that is reachable from the initial state is shown.) In order to allow easy comparison, the states in Acφ × AK have been named by pairs ij; i indicates the corresponding state in Acφ and j the corresponding state in AK.
It is intuitively clear that the left Kripke structure satisfies φ. An automata-
theoretic model-checker would analyze whether the language of Acφ × AK is
empty. Here, it would check whether a final state is reachable from the initial
state. It is easy to see that no final state is reachable; this means that K indeed
satisfies φ.
Now, consider the rooted right Kripke structure in Fig. 11. As its final state
does not satisfy the atomic proposition, Q, the Kripke structure does not satisfy
φ—a final state is reachable from the initial state in the product automaton.

A similar approach to model-checking can be used in the more common case of checking satisfaction over infinite paths [42]. In this case, automata accepting
languages of infinite words, like Büchi or Muller automata [41], are used instead
of automata on finite words. Generation of the automata Aφ and AK as well
as the automata-theoretic constructions (product construction, non-emptiness
check) are more involved, but nevertheless, the basic approach remains the same.
PLTL model-checking is in general PSPACE-complete [17]; the exponential blow-
up in the construction of Aφ is unavoidable.
The main applications of the automata-theoretic approach are linear-time logics, whose properties can be viewed as languages of words. The approach can also be applied, however, to branching-time logics by using various forms of tree automata.

4.5 Tableau Approach

The tableau approach addresses the local model-checking problem: for a model, M, and property, φ, we wish to learn whether s |=M φ holds for just the one state, s; global information is unneeded. We might attack the problem by a

search of the state space accessible from s, driving the search by decomposing φ. We write our query as s ⊢∆ φ (taking the M as implicit) and use subgoaling rules, like the following, to generate a proof search, a tableau. For the moment, the rules operate on just the Hennessy-Milner calculus; the subscript, ∆, will be explained momentarily:

s ⊢∆ φ1 ∧ φ2   generates the subgoals   s ⊢∆ φ1  and  s ⊢∆ φ2
s ⊢∆ φ1 ∨ φ2   generates the subgoal    s ⊢∆ φ1  (or, alternatively, s ⊢∆ φ2)
s ⊢∆ [a]φ      generates the subgoals   s1 ⊢∆ φ, ..., sn ⊢∆ φ,  where {s1, ..., sn} = {s′ | s −a→ s′}
s ⊢∆ ⟨a⟩φ      generates the subgoal    s′ ⊢∆ φ  for some s′ with s −a→ s′
A tableau for a Hennessy-Milner formula must be finite. We say that the tableau succeeds if all its leaves are successful, that is, they have the form (i) s ⊢∆ true, or (ii) s ⊢∆ [a]φ with no a-transitions from s (so that the rule produces no subgoals).
It is easy to prove that a successful tableau for s ⊢∆ φ implies s |=M φ; conversely, if there exists no successful tableau, we correctly conclude that s ⊭M φ. (Of course, a proof search that implements and/or subgoaling builds just one “meta-tableau” to decide the query.)
When formulas in the modal mu-calculus are analyzed by tableau, there is the danger of infinite search. But for finite-state models, we can limit the search due to the semantics of the fixed-point operators: say that s ⊢∆ µX.φX subgoals to s ⊢∆ X. We conclude that the latter subgoal is unsuccessful, because

s |=M µX.φX iff s |=M ⋁_{i≥0} Xi, where X0 = false and Xi+1 = φXi.

That is, the path in the tableau from s ⊢∆ µX.φX to s ⊢∆ X can be unfolded an arbitrary number of times, generating the Xi formulas, all of which subgoal to X0, which fails.
Dually, a path from s ⊢∆ νX.φX to s ⊢∆ X succeeds, because

s |=M νX.φX iff s |=M ⋀_{i≥0} Xi, where X0 = true and Xi+1 = φXi.

As suggested by Stirling and Walker [37], we analyze the fixed-point operators with unfolding rules, and we terminate a search path when the same state/fixed-point-formula pair repeats. Of course, we must preserve the scopes of nested fixed points, so each time a fixed-point formula is encountered in the search, we introduce a unique label, U, to denote it. The labels and the formulas they denote are saved in an environment, ∆.
Here are the rules for µ; the rules for ν work in the same way:

s ⊢∆ µX.φX   generates the subgoal   s ⊢∆′ U,   where ∆′ = ∆ + [U ↦ µX.φX] and U is fresh for ∆
s ⊢∆ U       generates the subgoal   s ⊢∆ φU,   where ∆(U) = µX.φX.  Important: see the note below.

Transition system (diagram omitted): as reconstructed from the tableau, s has an a-transition to t and a b-transition to itself; the remaining b-transition apparently leads from t back to s.

Let:
φ = νX.φ′X ∧ [b]X
φ′X = µY.⟨a⟩true ∨ ⟨b⟩Y
∆1 = [U1 ↦ φ]
∆2 = ∆1 + [U2 ↦ φ′U1]

Tableau for s ⊢∅ φ:

s ⊢∅ φ
  s ⊢∆1 U1
    s ⊢∆1 φ′U1 ∧ [b]U1
      s ⊢∆1 φ′U1                          s ⊢∆1 [b]U1
        s ⊢∆2 U2                            s ⊢∆1 U1
          s ⊢∆2 ⟨a⟩true ∨ ⟨b⟩U2
            s ⊢∆2 ⟨a⟩true
              t ⊢∆2 true

Fig. 12. Transition system and model check by tableau

Note: The second rule can be applied only when s ⊢∆′ U has not already appeared as an ancestor goal. This is how the proof search is terminated.
A leaf of the form s ⊢∆ U is successful iff ∆(U) = νX.φX. Figure 12 shows a small transition system and a proof by tableau that its state, s, has the property νX.(µY.⟨a⟩true ∨ ⟨b⟩Y) ∧ [b]X, that is, an a-transition is always finitely reachable along a path of b-transitions. The tableau uses no property of state t.
The tableau technique is pleasant because it is elegant, immune to the trou-
bles of alternating fixpoints, and applicable to both branching-time and linear-
time logics.
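To round off this section, here is a compact Python sketch of a tableau-style local check in the spirit of the rules above. It is illustrative only: it handles closed formulas in positive form, replaces the labels U and the environment ∆ by a per-branch set of visited (state, variable) pairs, and glosses over some subtleties of deeply nested alternating fixpoints; all names and the tuple encoding are assumptions.

# Sketch: tableau-style local model checking for closed mu-calculus formulas
# in positive form.  Formulas: ("true",), ("false",), ("and", f, g),
# ("or", f, g), ("box", a, f), ("dia", a, f), ("var", X), ("mu", X, f),
# ("nu", X, f).
def prove(trans, s, phi, defs=None, visited=None):
    defs = defs or {}             # variable -> (parity, body); plays the role of Delta
    visited = visited or set()    # (state, variable) pairs already seen on this branch
    op = phi[0]
    if op == "true":
        return True
    if op == "false":
        return False
    if op == "and":
        return prove(trans, s, phi[1], defs, visited) and \
               prove(trans, s, phi[2], defs, visited)
    if op == "or":
        return prove(trans, s, phi[1], defs, visited) or \
               prove(trans, s, phi[2], defs, visited)
    if op in ("box", "dia"):
        succ = [t for (u, a, t) in trans if u == s and a == phi[1]]
        step = all if op == "box" else any
        return step(prove(trans, t, phi[2], defs, visited) for t in succ)
    if op in ("mu", "nu"):
        X = phi[1]
        defs = dict(defs)
        defs[X] = (op, phi[2])    # remember the parity and body of X
        return prove(trans, s, ("var", X), defs, visited)
    if op == "var":
        X = phi[1]
        if (s, X) in visited:     # repeated goal: nu succeeds, mu fails
            return defs[X][0] == "nu"
        return prove(trans, s, defs[X][1], defs, visited | {(s, X)})
    raise ValueError(op)

# The property of Fig. 12 on the transition system reconstructed there
# (s -a-> t, s -b-> s, t -b-> s; an assumption):
trans = {("s", "a", "t"), ("s", "b", "s"), ("t", "b", "s")}
phi = ("nu", "X", ("and", ("mu", "Y", ("or", ("dia", "a", ("true",)),
                                       ("dia", "b", ("var", "Y")))),
                   ("box", "b", ("var", "X"))))
print(prove(trans, "s", phi))   # True, matching the successful tableau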

5 Conclusion

One of the major problems in the application of model checking techniques to


practical verification problems is the so-called state explosion problem: models
typically grow exponentially in the number of parallel components or data ele-
ments of an argument system. This observation has led to a number of techniques
for tackling this problem [39,14].
Most rigorous are compositional methods [2,10,23], which try to avoid the state-explosion problem in a divide-and-conquer fashion. Partial-order methods limit the size of the model representation by suppressing unnecessary interleavings, which typically arise as a result of the serialization during the model construction of concurrent systems [19,43,20]. Binary Decision Diagram-based encodings, today's industrially most successful technique, allow a polynomial system representation, but may explode in the course of the model-checking process [4,6,18]. All these techniques have their own very specific profiles. Exploring these profiles is one of the current major research topics.
Another fundamental issue is abstraction: depending on the particular prop-
erty under investigation, systems may be dramatically reduced by suppressing

details that are irrelevant for verification, see, e.g., [15,9,22,30]. Abstraction is
particularly effective when it is combined with the other techniques.
In this article we have focused on finite model structures, but recent research
shows that effective model-checking is possible also for certain classes of finitely
presented infinite structures. Work in this direction falls in two categories: First,
continuous variables have been added to finite structures. This work was moti-
vated by considerations on verified design of embedded controllers. Timed sys-
tems have found much attention (Alur and Dill’s work on timed automata [1]
is a prominent example) but also more general classes of hybrid systems have
been considered. A study of this work could start with [38] where besides a gen-
eral introduction three implemented systems, HyTech, Kronos, and Uppaal, are
described. Second, certain classes of discrete infinite systems have been studied
that are generated by various types of grammars in various ways. The interested
reader is pointed to the surveys [8,7] that also contain numerous references.

References
1. R. Alur and D. L. Dill, A theory of timed automata. Theoretical Computer Science
126 (1994) 183–235.
2. H. Andersen, C. Stirling, and G. Winskel, A compositional proof system for the
modal mu-calculus. In Proc. 9th LICS. IEEE Computer Society Press, 1994.
3. G. Birkhoff, Lattice Theory, 3d edition. Amer. Math. Soc., 1967.
4. R. Bryant, Graph-based algorithms for Boolean function manipulation. IEEE
Transactions on Computers C-35(8) (1986) 677–691.
5. R. Bull and K. Segerberg, Basic Modal Logic. In Handbook of Philosophical Logic,
Vol. 2, D. Gabbay and F. Guenther, eds., Kluwer, Dordrecht, 1994, pp. 1–88.
6. J. Burch, E. Clarke, K. McMillan, D. Dill, and L. Hwang, Symbolic model checking:
10²⁰ states and beyond. In Proc. 5th LICS. IEEE Computer Society Press, 1990.
7. O. Burkart, D. Caucal, F. Moller, and B. Steffen, Verification on infinite structures.
In Handbook of Process Algebra, J. Bergstra, A. Ponse, and S. Smolka, eds.,
Elsevier, to appear.
8. O. Burkart and J. Esparza, More infinite results. Electronic Notes in Theoretical
Computer Science 6 (1997).
URL: http://www.elsevier.nl/locate/entcs/volume6.html.
9. K. Čerāns, J. C. Godskesen, and K. G. Larsen, Timed modal specification – theory
and tools. In Computer Aided Verification (CAV’93), C. Courcoubetis, ed., Lecture
Notes in Computer Science 697, Springer, 1993, pp. 253–267.
10. E. Clarke, D. Long, and K. McMillan, Compositional model checking. In Proc. 4th
LICS. IEEE Computer Society Press, 1989.
11. E. M. Clarke, E. A. Emerson, and A. P. Sistla, Automatic verification of finite-
state concurrent systems using temporal logic specifications. ACM Transactions
on Programming Languages and Systems 8 (1986) 244–263.
12. E. M. Clarke, O. Grumberg, and D. Long, Verification tools for finite-state con-
current systems. In A Decade of Concurrency: Reflections and Perspectives, J. W.
de Bakker, W.-P. de Roever, and G. Rozenberg, eds., Lecture Notes in Computer
Science 803, Springer, 1993, pp. 124–175.
13. R. Cleaveland, M. Klein, and B. Steffen, Faster model checking for the modal mu-
calculus. In Computer Aided Verification (CAV’92), G. v. Bochmann and D. K.
Probst, eds., Lecture Notes in Computer Science 663, Springer, 1992, pp. 410–422.
14. R. Cleaveland, Pragmatics of Model Checking. Software Tools for Technology
Transfer 2(3), 1999.
15. P. Cousot and R. Cousot, Abstract interpretation: A unified lattice model for static
analysis of programs by construction or approximation of fixpoints. In Proceedings
4th POPL, Los Angeles, California, January 1977.
16. D. van Dalen, Logic and Structure, 3d edition. Springer, Berlin, 1994.
17. E. A. Emerson, Temporal and modal logic. In Handbook of Theoretical Com-
puter Science, Vol B. J. van Leeuwen, ed., Elsevier Science Publishers B.V., 1990,
pp. 995–1072.
18. R. Enders, T. Filkorn, and D. Taubner, Generating BDDs for symbolic model
checking in CCS. In Computer Aided Verification (CAV’91), K. G. Larsen and
A. Skou, eds., Lecture Notes in Computer Science 575, Springer, pp. 203–213.
19. P. Godefroid and P. Wolper, Using partial orders for the efficient verification of
deadlock freedom and safety properties. In Computer Aided Verification (CAV’91),
K. G. Larsen and A. Skou, eds., Lecture Notes in Computer Science 575, Springer,
pp. 332–342.
20. P. Godefroid and D. Pirottin, Refining dependencies improves partial-order verifi-
cation methods. In Computer Aided Verification (CAV’93), C. Courcoubetis, ed.,
Lecture Notes in Computer Science 697, Springer, pp. 438–449.
21. G. Grätzer, General Lattice Theory. Birkhäuser Verlag, 1978.
22. S. Graf and C. Loiseaux, Program Verification using Compositional Abstraction.
In Proceedings FASE/TAPSOFT, 1993.
23. S. Graf, B. Steffen, and G. Lüttgen, Compositional minimization of finite state
systems using interface specifications. Formal Aspects of Computing, 8:607–616,
1996.
24. M. C. B. Hennessy and R. Milner, Algebraic laws for nondeterminism and concur-
rency. Journal of the ACM 32 (1985) 137–161.
25. G. Hughes and M. Cresswell. An Introduction to Modal Logic. Methuen, London,
1972.
26. S. Kleene, Introduction to Metamathematics. D. van Nostrand, Princeton, 1952.
27. D. Kozen, Results on the propositional mu-calculus, Theoretical Computer Science,
27 (1983) 333–354.
28. S. Kripke, A completeness theorem in modal logic. Journal of Symbolic Logic 24
(1959) 1–14.
29. S. Kripke, Semantical considerations on modal logic. Acta Philosophica Fennica 16
(1963) 83–94.
30. K. G. Larsen, B. Steffen, and C. Weise, A constraint oriented proof methodology
based on modal transition systems. In Tools and Algorithms for the Construction
and Analysis of Systems (TACAS’95), E. Brinksma, W. R. Cleaveland, K. G.
Larsen, T. Margaria, and B. Steffen, eds., Lecture Notes in Computer Science 1019,
Springer, pp. 17–40.
31. J.-L. Lassez, V. L. Nguyen, and E. A. Sonenberg, Fixed point theorems and se-
mantics: A folk tale. Information Processing Letters 14 (1982) 112–116.
32. D. E. Long, A. Browne, E. M. Clarke, S. Jha, and W. R. Marrero, An improved
algorithm for the evaluation of fixpoint expressions. In Computer Aided Verification
(CAV’94), D. L. Dill, ed., Lecture Notes in Computer Science 818, Springer,
pp. 338–349.
33. R. Milner, Communication and Concurrency. Prentice Hall, 1989.
34. J. P. Queille and J. Sifakis, Specification and verification of concurrent systems
in CESAR. In Proc. 5th Internat. Symp. on Programming, M. Dezani-Ciancaglini
and U. Montanari, eds., Lecture Notes in Computer Science 137, Springer, 1982.
35. D. Schmidt and B. Steffen, Program analysis as model checking of abstract in-
terpretations. In Static Analysis (SAS’98), G. Levi, ed., Lecture Notes in Computer
Science 1503, Springer, 1998, pp. 351–380.
36. C. Stirling, Modal and temporal logics. In Handbook of Logic in Computer Science,
S. Abramsky, D. M. Gabbay, and T. S. E. Maibaum, eds., Clarendon Press, 1992,
pp. 477–563.
37. C. Stirling and D. Walker, Local model checking in the modal mu-calculus, Proc.
TAPSOFT ’89, J. Diaz and F. Orejas, eds., Lecture Notes in Computer Science
351, Springer, 1989, pp. 369–383.
38. Special section on timed and hybrid systems, Software Tools for Technology Trans-
fer 1 (1997) 64–153.
39. Special section on model checking, Software Tools for Technology Transfer 2/3
(1999).
40. A. Tarski, A lattice-theoretical fixpoint theorem and its application. Pacific Journal
of Mathematics 5 (1955) 285–309.
41. W. Thomas, Automata on infinite objects. In Handbook of Theoretical Computer
Science, Vol B. J. van Leeuwen, ed., Elsevier Science Publishers B.V., 1990,
pp. 133–191.
42. M. Y. Vardi and P. Wolper, Reasoning about infinite computations. Information
and Computation 115 (1994) 1–37.
43. A. Valmari, On-the-fly verification with stubborn sets. In Computer Aided Veri-
fication (CAV’93), C. Courcoubetis, ed., Lecture Notes in Computer Science 697,
Springer, pp. 397–408.
