Efficient Debugging in a Formal Verification Environment
Fady Copty, Amitai Irron, Osnat Weissberg, Nathan Kropp*, and Gila Kamhi
1 Introduction
Verification is increasingly becoming the bottleneck in the design flow of electronic
systems. Simulation of designs is very expensive in terms of time, and exhaustive
simulation is virtually impossible. As a result, designers have turned to formal meth-
ods for verification.
Formal verification guarantees full coverage of the entire state space of the de-
sign under test, thus providing high confidence in its correctness. The most automated, and therefore the most popular, formal verification technique is symbolic model checking [2]. While gaining success as a valuable method for verifying
commercial sequential designs, it is still limited with respect to the size of the veri-
fiable designs.
The capacity problem manifests itself in an additional obstacle—low productiv-
ity. A lot of effort is spent decomposing the proofs into simpler proof obligations
on the modules of the design. A global property to be verified is usually deduced
from local properties verified on the manually created abstraction [10] of the envi-
ronment for each module. The abstractions and assumptions needed to get a verifi-
cation case through increase the chance of getting false failure reports. Furthermore, in the case of valid failure reports, tracing back to the root cause is especially difficult.
T. Margaria and T. Melham (Eds.): CHARME 2001, LNCS 2144, pp. 275-292, 2001.
© Springer-Verlag Berlin Heidelberg 2001
Given the inherent capacity and productivity limitation of symbolic model check-
ing, we emphasize in this paper the importance of efficient debugging capabilities in
formal verification, and we present capabilities that we have developed in order to
augment debugging in a commercial formal verification setting. Our solution provides
debugging capabilities needed at three major stages of verification. Our experience shows that these capabilities complement one another and can considerably help the
verification engineer diagnose and fix a reported failure.
The “multi-value” nature of the counter-example annotation mechanism enables
the concise reporting of all the failures (i.e., counter-examples) as a result of one
model checking run. Understanding more than one root cause of a failing verification thus makes the failure easier to rectify. The ability to fix more than
one root cause can reduce the number of model checking runs needed to get to a pass-
ing verification run. Most importantly, multi-value counter-example reports enable
the user to pinpoint the pertinent signal values causing the failure and aid in detecting
how to change the values to correct the failure.
“Constraint-based debugging” allows the verification engineer to restrict the set of
failures (i.e., counter-example traces) to only those that satisfy a specific sequential
constraint. If this subset is empty for a given sequential constraint, this means that the
constraint is sufficient to eliminate all counter-examples found so far. However, the
model checker must still be run again to find out if the constraint is sufficient to re-
solve all counter-examples of all lengths.
The system solution that we provide reduces the time spent in the loop of model
checking, specification and design modification. The usage flow consists of running
the model checker, dumping all the model checking data needed to compute all the
counter-examples of a given length, and then debugging in an interactive environment
by loading the pre-dumped model checking data. The fact that we have taken the
model checker out of the “check-analyze-fix” loop reduces the debugging loop to
“analyze-fix” and consequently reduces the time spent in debugging considerably.
The effective usage of secondary memory allows the verification engineer to post-
process model checking data and debug without the need to add the model checking
run to the verification loop.
This paper is organized as follows. In Sect. 2, we present an overview of the formal
verification system with enhanced debugging capabilities. Section 3 depicts in detail
the capabilities of the “counter-example wizard.” Section 4 explains the algorithms
underlying our system solution. In Sect. 5, we illustrate the efficiency of these tech-
niques through verification case studies on Intel’s real-life designs. We summarize
our conclusions in Sect. 6.
2 System Overview
The formal verification system with the counter-example wizard consists of three
major components:
1. A state-of-the-art symbolic model checker which accepts an LTL-based formal
specification language
2. An interactive formal verification environment which enables access to all the
model checking facilities
3. A graphical user interface which allows the user to display annotated counter-
example traces and access interactive model checking capabilities
The usage flow consists of two major stages:
1. Model Checking. The model checker is run with the option to dump the relevant
model checking data and counter-example information to secondary memory.
2. Interactive Debugging. The user loads and interacts with the model checking data
to access different counter-examples and perform “what-if analysis” on the exis-
tence of counter-examples under specific conditions.
The easy storage and loading of relevant model checking data is due to the “data
persistency mechanism” of the model checker. At any point of the model checking
run, the data persistency mechanism can represent all the relevant, computed model
checking information in a well-defined ASCII format which later can be loaded in
an interactive model checking environment and analyzed through the usage of a
functional language.
The fact that the model checker can dump the relevant information for debugging
at any point enables easy integration of this mechanism into regression testing. When
regression test suites are run with the “counter-example data dump” facility enabled,
the analysis of the failing verification test cases can be done without rerunning the
model checker and regenerating the failing traces, which can be computationally
expensive. The computational benefit of the system is also witnessed in the specification, model verification, and modification loop.
3 Counter-Example Wizard
The central capability of the counter-example wizard is “multi-value counter-example annotation”: each signal value at each phase of a counter-example trace¹ is annotated with one of the following designations.
¹ The model checker generates counter-examples of the shortest path length. The underlying symbolic model checking algorithms that enable “multi-value counter-example annotation” will be explained in detail in Section 4.
• Strong 0/1 indicates that in all possible counter-examples that demonstrate the
failure, the value of the signal at the given phase of the trace is 0 or 1, respectively.
• Weak 0/1 indicates that although the value of the signal at the given phase is 0 or 1
respectively for this counter-example, the value of the signal at this phase can be
different for another counter-example illustrating the failure. Even though the
model checker has some leeway in the choice of a value for this signal, this signal
must preserve some relation with other signals at this phase.
• Weaker 0/1 is similar to the “weak” designation, except that weaker values are basically arbitrary, and have little or no influence on the generation of a failure.
The strong values provide the most insight into the pertinent signals causing a failure. For example, if the value of a signal at a certain phase of a counter-example is a strong zero, then correcting the design so that the signal takes the value one at that phase will often correct the failure. Hence, the error rectification problem is often reduced to determining how to cause a strong-valued signal to take on a different value.
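As a toy illustration of this idea (not the paper's BDD-based implementation), strong and weak designations can be computed directly from an explicit set of counter-example traces. Every name and trace below is hypothetical, and the weak/weaker distinction, which requires the frontier representation, is omitted:

```python
# Sketch: classify each signal/phase over an explicit set of counter-examples
# as "strong" (same value in every counter-example) or "weak" (value varies).
# Traces and signal names are illustrative, not from any real design.

def annotate(counter_examples, signals, length):
    """counter_examples: list of traces; each trace maps phase -> {signal: 0/1}."""
    annotation = {}
    for sig in signals:
        for phase in range(length):
            values = {trace[phase][sig] for trace in counter_examples}
            if len(values) == 1:
                annotation[(sig, phase)] = f"strong {values.pop()}"
            else:
                annotation[(sig, phase)] = "weak"
    return annotation

# Two counter-examples of length 2 for "if X is high, W is high a cycle later":
traces = [
    [{"X": 1, "Z": 0, "W": 0}, {"X": 0, "Z": 1, "W": 0}],
    [{"X": 1, "Z": 0, "W": 0}, {"X": 1, "Z": 0, "W": 0}],
]
ann = annotate(traces, ["X", "Z", "W"], 2)
print(ann[("X", 0)])  # strong 1 -- X must be high to trigger the property
print(ann[("Z", 0)])  # strong 0 -- Z being low is what blocks W
print(ann[("X", 1)])  # weak     -- X's second-phase value is irrelevant
```

The strong values in the first phase point directly at the root cause, mirroring the waveform reading described above.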
The counter-example wizard can make use of a waveform display to represent
the multi-value counter-example annotation. Figure 2 illustrates a screen shot of
the multi-value annotated counter-example graphical display. (Later we will show
the use of a text-based display.) The counter-example in the figure demonstrates
the violation of the specification “if X is high, W will be high a cycle later.” The
value of Y is clearly not relevant (i.e., weaker); therefore its waveform is shad-
owed out. Furthermore, the values of X and Z in the second cycle do not affect
the failure, so their waveforms in that cycle are shadowed out as well. Examina-
tion of the waveform reveals that the cause for the failure is the value of the sig-
nal Z, which causes the output of the AND gate to be low.
Our experience shows that strong values alone sometimes provide sufficient infor-
mation to figure out the root cause of a failure and speed up the debugging. Neverthe-
less, we have also witnessed many verification cases where the answer to the root
cause of failure lay in the weak values (as seen in Fig. 3). The debug of traces with
weak values is facilitated by the sequential constraint-based debugging capability
which is the second major feature of the counter-example wizard.
Fig. 2. Graphical counter-example display using multi-value annotation. In the waveform (top),
the strong (i.e., significant) signal values are represented by bold lines, while the weaker values
are represented by gray boxes
For example, let us assume that some input vector foo should be one-hot encoded
(exactly one of the bits in the vector is high and the rest are low), but in the counter-
example presented it is not encoded as one-hot. In the absence of constraint-based de-
bugging, the user would have to add an environmental assumption that foo is one-hot
and rerun the model checker to see if the erroneous encoding is indeed the root cause of
the failure—a task that can take hours. With constraint-based traces the user can write
the environmental assumption as a sequential constraint, and if there are no counter-
examples that satisfy the constraint, then the user knows that the assumption is sufficient to
eliminate all current counter-examples. Thus the user is able to check whether an as-
sumption will cure the current failure without rerunning the model checker.
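The one-hot scenario above can be sketched over an explicit set of counter-example traces; in the real tool the check is a BDD intersection with the target frontiers, so the following names and traces are purely illustrative:

```python
# Sketch: test whether a one-hot assumption on a vector `foo` eliminates
# every counter-example in an explicit trace set, without re-running the
# model checker. Signal names and traces are hypothetical.

def one_hot(bits):
    return sum(bits) == 1

def surviving(counter_examples, phase, vector):
    """Counter-examples whose `vector` bits are one-hot at `phase`."""
    return [t for t in counter_examples
            if one_hot([t[phase][b] for b in vector])]

traces = [
    [{"foo0": 1, "foo1": 1}],   # not one-hot: ruled out by the assumption
    [{"foo0": 0, "foo1": 0}],   # not one-hot: ruled out by the assumption
]
remaining = surviving(traces, 0, ["foo0", "foo1"])
# An empty result means the assumption kills all counter-examples of this
# length -- exactly the feedback constraint-based debugging provides.
print(len(remaining))  # 0
```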
Additionally, constraint-based debugging allows “what-if analysis” and the ability
to investigate the relationships between signals. For example, setting a signal to a constant value at a specific phase and observing which other signal values have consequently become strong helps the user understand the relationships between signals over time. Thus, constraint-based debugging refines the information
that weak values provide.
Figure 3 illustrates how the use of two different constraints helps debug a
failing verification task. The task is to check that the model illustrated in the
lower half of the figure satisfies the specification “if Z is high, W will be high a
cycle later.” On the upper half of the figure, three multi-value annotated counter-
example traces are illustrated. The leftmost trace shows all the counter-examples
of length three violating this specification. Viewing this trace, we observe that
only W, Z and the clock C get strong values in the annotated trace. The signals X
and Y have weak values for the first phase and weaker values for all the rest of
the phases (indicating that the values of these signals in second and third phases
are irrelevant to the failure). In this case, the strong values do not provide enough
information; therefore we analyze the weak values (i.e., the relationship between
X and Y in the first phase). The middle trace and the rightmost trace demonstrate
all the counter-examples of length three, under two constrained values of the sig-
nal X in the first phase (one and zero, respectively). When the value of X is high,
we can ignore the value of Y. Therefore, the value of Y becomes weaker under
this constraint as illustrated in the middle annotated trace. The second constraint,
as illustrated by the rightmost trace, assigns X a low value in the first phase. Un-
der this constraint, the signal Y gets a strong one value. Therefore, our conclusion
from this constraint-based debugging session is that Y must be high in the first
phase to get a violation. Furthermore, to rectify the violation both X and Y need
to get a low value in the first phase.
Fig. 3. An illustration of the usage of constraint based debugging. The significant (i.e., strong)
signal values are highlighted. The weak values are shadowed out, and weaker values are dis-
played as gray boxes
4 Underlying Algorithms
4.1 Background
A common verification problem for hardware designs is to determine if every state
reachable from a designated set of initial states lies within a specified set of “good
states” (referred to as the invariant). This problem is variously known as invariant verification² or assertion checking [4,5,6].
According to the direction of the traversal, invariant checking can be based on ei-
ther forward or backward analysis. Given an invariant G and an initial set of states A,
forward analysis starts with the BDD for A, and uses BDD functions to iterate up to a
fixed point, which is the set of states reachable from A, using the Img operator. Simi-
larly, in backward analysis, the PreImg operator is iteratively applied to compute all
states from which it is possible to reach the complement of the invariant. The BDDs
encountered at each iteration are commonly referred to as frontiers.
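As a minimal sketch of these fixed-point iterations, the Img and PreImg operators can be modeled with explicit state sets and an explicit transition relation in place of BDDs (the example machine is hypothetical):

```python
# Sketch: forward reachability and backward analysis with explicit sets.
# Img/PreImg become set comprehensions over an explicit transition relation,
# standing in for the BDD image operators used in the real tool.

def img(states, trans):
    return {t for (s, t) in trans if s in states}

def pre_img(states, trans):
    return {s for (s, t) in trans if t in states}

def fixed_point(start, step):
    reached = set(start)
    frontier = set(start)
    while frontier:
        frontier = step(frontier) - reached
        reached |= frontier
    return reached

# Tiny machine: 0 -> 1 -> 2 -> 2, plus 3 -> 2 (state 3 unreachable from 0)
trans = {(0, 1), (1, 2), (2, 2), (3, 2)}
print(fixed_point({0}, lambda f: img(f, trans)))      # forward reachable set
print(fixed_point({2}, lambda f: pre_img(f, trans)))  # states that can reach 2
```

Forward analysis from state 0 yields {0, 1, 2}; backward analysis from state 2 also picks up the unreachable state 3, which is why combining both directions is useful.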
The counter-example generation algorithm [8] is a combination of backward and forward analysis. In Figure 4, we see that counter-example generation starts with the BDD for all the states that do not satisfy the invariant (i.e., F0), and the PreImg operator is applied until a fixed point is reached or the last backward frontier has a non-empty intersection with the initial states S0 (i.e., Fi+1 ∩ S0 ≠ ∅). A counter-example is then constructed by
² Although the “debugging wizard” is a valid tool both for invariant and liveness property verification, for the sake of simplicity, in this section we will explain the underlying algorithms of the debugging wizard in the context of invariant verification.
applying forward image computation to a state selected from the states in the intersection
(i.e., all the states from which there is a path to the states that complement the invariant).
Again by iteratively intersecting the forward frontiers with the corresponding backward
frontiers and choosing a state from each intersection as a representative for the corre-
sponding phase in the counter-example, the counter-example trace is built.
Target frontiers [4] are the sets of states obtained during backward analysis, start-
ing from the error states (states that violate the invariant) and iteratively applying the
PreImg operator until reaching an intersection with the initial states. More precisely, we
define the nth target frontier, Fn, as the set of states from which one can reach an error
state in n (and no less than n) steps.
F0 = ¬Invariant (1)
Fn+1 = PreImg(Fn) − (F0 ∪ F1 ∪ … ∪ Fn)
In what follows, we denote by N the index of the last non-empty target frontier; the fixed point is reached when FN+1 = ∅. The target frontiers are disjoint, and their union, which we denote as Target, represents all the states from which a state violating the invariant can be reached:

Target = F1 ∪ F2 ∪ … ∪ FN (2)
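The construction of Eq. (1) can be sketched with explicit sets standing in for BDDs; the transition relation below is a hypothetical example:

```python
# Sketch: target-frontier construction. F0 is the set of error states, and
# each subsequent frontier collects states that reach an error state in
# exactly n steps. Explicit sets stand in for the BDD implementation.

def pre_img(states, trans):
    return {s for (s, t) in trans if t in states}

def target_frontiers(error_states, trans):
    frontiers = [set(error_states)]
    seen = set(error_states)
    while True:
        nxt = pre_img(frontiers[-1], trans) - seen
        if not nxt:                 # fixed point: the next frontier is empty
            return frontiers
        frontiers.append(nxt)
        seen |= nxt

# Tiny machine: 0 -> 1 -> 2 (error); 3 -> 1 as well
trans = {(0, 1), (1, 2), (3, 1)}
F = target_frontiers({2}, trans)
print(F)  # [{2}, {1}, {0, 3}]: disjoint, F_n reaches the error in exactly n steps
```

Subtracting the already-seen states is what makes the frontiers disjoint, matching the property stated above.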
The underlying data structure for all the algorithms of the counter-example wizard
is the reachable target frontiers. Two major characteristics of reachable target fron-
tiers make them very useful for the computations needed for debugging.
• Any trace through the frontiers is guaranteed to be a counter-example.
• All possible counter-examples of length N in this verification (where N is the number of frontiers) are included in the frontiers.
Therefore, these frontiers store all the information needed for the querying and ex-
traction of counter-examples in a very concise and flexible way. In our system, the
model checker dumps the BDDs representing the reachable target frontiers to secondary memory. Interactive debugging is done by restoring the target frontiers in the
interactive environment and querying them through the functional language.
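A sketch of such a query, again with explicit sets standing in for the restored BDDs: extracting one counter-example trace amounts to walking the frontiers from an initial state toward the error states, choosing any successor in the next frontier at each step (the machine is hypothetical, and single states stand in for full signal assignments):

```python
# Sketch: counter-example extraction from stored target frontiers.
# frontiers[0] holds the error states; frontiers[-1] intersects the initial
# states (backward analysis stopped there). Any walk through successive
# frontiers is a counter-example, so dumped frontiers suffice for debugging.

def img_of_state(state, trans):
    return {t for (s, t) in trans if s == state}

def extract_trace(frontiers, initial, trans):
    state = next(iter(frontiers[-1] & initial))
    trace = [state]
    for F in reversed(frontiers[:-1]):          # F_{N-1}, ..., F_0
        state = next(iter(img_of_state(state, trans) & F))
        trace.append(state)
    return trace

trans = {(0, 1), (1, 2), (3, 1)}
frontiers = [{2}, {1}, {0, 3}]                  # as computed by backward analysis
print(extract_trace(frontiers, {0}, trans))     # [0, 1, 2]
```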
The value of a signal x at phase i of a counter-example is annotated as:
• Strong, if Fi|x=0 = 0 or Fi|x=1 = 0 (i.e., all states in Fi agree on the value of x)
• Weak, if Fi|x=0 ≠ Fi|x=1
• Weaker, if Fi|x=0 = Fi|x=1 (i.e., x is not in the support of Fi)
where Fi is the reachable target frontier corresponding to phase i. The reported value of the signal is chosen according to the specific trace at hand.
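The cofactor conditions above can be sketched by representing a frontier as an explicit set of assignment tuples, one bit per signal; this is only an illustrative stand-in for the BDD cofactor operations:

```python
# Sketch: cofactor-based annotation. A frontier F_i is a set of assignment
# tuples; restricting bit `idx` to 0 and to 1 gives the two cofactors, and
# comparing them yields the strong/weak/weaker designation.

def cofactor(frontier, idx, val):
    """States of `frontier` with bit `idx` fixed to `val`, that bit dropped."""
    return {s[:idx] + s[idx + 1:] for s in frontier if s[idx] == val}

def annotate(frontier, idx):
    c0, c1 = cofactor(frontier, idx, 0), cofactor(frontier, idx, 1)
    if not c1:
        return "strong 0"       # every state in the frontier has the bit at 0
    if not c0:
        return "strong 1"
    return "weaker" if c0 == c1 else "weak"

# Frontier over bits (x, y): x is forced low, y is unconstrained
F = {(0, 0), (0, 1)}
print(annotate(F, 0))                 # strong 0
print(annotate(F, 1))                 # weaker: y not in the support
print(annotate({(0, 1), (1, 0)}, 0))  # weak: x's value depends on y
```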
A sequential constraint specifies a function over the signals in the design and a point in time when the function should hold. The counter-example wizard looks for a trace that leads to a failure and satisfies the constraint, and recalculates all the weak and strong signal values relative to this subset of traces.
Constraints are internally represented as BDDs, and each phase constraint is then intersected with the corresponding reachable target frontier. When a constraint is applied to the target frontiers, not every possible trace through the target frontiers is a counter-example any more. States in a frontier that do not comply with the
constraint are thrown out leaving some of the traces through the frontiers dangling
(i.e., they are not of length N, when N is the number of target frontiers). We remedy
the target frontiers by performing an N-step forward propagation followed by an N-
step backward propagation through all the frontiers.
The task of calculating a new trace under the constraint now becomes simply find-
ing any trace through the newly calculated target frontiers. The multi-value annotation
is applied to the new set of target frontiers.
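A sketch of this repair step with explicit sets in place of BDDs: the per-phase constraints are intersected with the frontiers, and a forward pass followed by a backward pass removes the states whose traces became dangling (the transition relation and constraints are hypothetical):

```python
# Sketch: constraint application and frontier repair. After intersecting
# each frontier with its phase constraint, a forward pass keeps only states
# reachable from the constrained trace start, and a backward pass keeps only
# states that can still reach the error states; any remaining walk through
# the frontiers is again a full-length counter-example.

def img(states, trans):
    return {t for (s, t) in trans if s in states}

def pre_img(states, trans):
    return {s for (s, t) in trans if t in states}

def apply_constraint(frontiers, constraints, trans):
    # frontiers[0] = error states; frontiers[-1] = start of the trace.
    fs = [f & c for f, c in zip(frontiers, constraints)]
    for i in range(len(fs) - 2, -1, -1):        # forward pass (trace order)
        fs[i] &= img(fs[i + 1], trans)
    for i in range(1, len(fs)):                 # backward pass
        fs[i] &= pre_img(fs[i - 1], trans)
    return fs

trans = {(0, 1), (3, 1), (1, 2), (3, 4)}
frontiers = [{2}, {1}, {0, 3}]                  # error states first
universe = {0, 1, 2, 3, 4}
constrained = apply_constraint(frontiers, [universe, universe, {3}], trans)
print(constrained)  # [{2}, {1}, {3}]: only traces starting at state 3 remain
```

If any repaired frontier ends up empty, no counter-example satisfies the constraint, which is exactly the "assumption suffices" verdict described earlier.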
Table 1. Experimental results comparing the model checking time required to compute
counter-examples with multi-value annotation and single value annotation making use of eight
typical Intel verification test cases
@ @1 @@ @@ @@ @@ @@ @@ @@ @@ RegRd_s02
! !! 11 @@ @@ @@ @@ @@ @@ @@ RegRd_s03
@ @@ @@ 11 @@ @@ @@ @@ @@ @@ RegRd_s04
@ @@ @@ @@ @0 @@ @@ @@ @@ @@ RegRd_s05
@ @@ @@ @@ @@ 00 @@ @@ @@ @@ RegRd_s06
----------------------------
0 1 2 3 4 5 6 7 8 9
Fig. 5. In this text-based multi-value annotated counter-example trace, the columns represent
phases, and the rows represent the values of each of the signals at each phase. The ‘@’ and ‘!’
symbols represent weak 0 and weak 1 values, respectively, whereas ‘0’ and ‘1’ represent the
strong values
Another case for which constraints are useful is when the set of counter-examples
is due to multiple root causes. Constraints can be used to partition the set of counter-
examples to identify the different root causes. This partitioning is not explicit but hap-
pens naturally during analysis while experimenting with constraints and searching for a
root cause. Once one root cause is identified, analysis is then switched to the cases not
covered by that root cause. This is repeated until the set of counter-examples has been
fully covered. The set is thus partitioned according to the various root causes.
Multiple root causes usually cannot be identified when working with a single
counter-example at a time. A given counter-example may be due to only one of the
root causes; therefore only one root cause could be found from that counter-example.
Even when a single counter-example is due to multiple root causes, it is unlikely that
all would be identified: When one apparent root cause is found, the verification engi-
neer typically assumes that this is the sole cause. Therefore when multiple root causes
exist, the counter-example wizard provides an additional advantage over traditional
single-counter-example debugging.
The counter-example wizard has also been helpful in identifying solutions to a set of
counter-examples. In general, once the root cause of a failure is found, the root cause
is usually eliminated either by expanding the model to include necessary guaranteeing
logic, or by making an environmental assumption about the behavior of signals. The
model is then re-verified after the necessary changes have been made. The expecta-
tion is that the changes will eliminate any counter-examples, yielding a successfully
completed proof.
Analyzing counter-examples one at a time is often a process of trial and error. In a
typical verification workflow, a verification run generates a single counter-example,
the verification engineer tries to determine the root cause of the failure, a solution is
implemented, the model is rebuilt, and the verification is run again. Unfortunately, the
root cause may or may not have been correctly identified. Often it results in another
counter-example report. When working with only a single counter-example at a time,
there is no way to avoid this trial and error process.
The counter-example wizard can eliminate some of this trial and error. Potential
solutions can be tested to see if they really do resolve the current set of counter-
examples. As noted above, a solution usually takes the form of either an expansion of
the model to include necessary guaranteeing logic, or an environmental assumption.
Therefore, a solution is essentially a restriction on signal behavior that disallows the
behavior observed in the counter-example. To check whether the proposed solution
will work, this restriction is given as a sequential constraint to the counter-example
wizard. If the wizard determines that there are no counter-examples that satisfy this
constraint, then the proposed solution successfully resolves all counter-examples of
the given trace length for this verification.
If the wizard does find counter-examples that satisfy the constraint, then the pro-
posed solution does not completely resolve the current set of counter-examples. In
this case a different solution can be tried, or the remaining counter-examples can be
analyzed to determine why the proposed solution did not resolve them. Possible solu-
tions to verification failures can thus be tested without implementing the solution and
rerunning the verification. This can significantly reduce the time spent in the trial and
error loop. The wizard gives feedback within seconds, instead of the minutes or some-
times hours it takes to rerun the verification.
In the example in Section 5.1 concerning the flip-flop whose clock failed to
toggle, we could quickly test our guess that this was the root cause by specifying
to the wizard the constraint that the clock should be high in the phase in which it
failed to toggle. We then searched for counter-examples under this constraint, and
when none were found, we knew that getting the clock to toggle during that phase would resolve the failure.
• Property: If a request is killed, then the register that contains the request gets
cleared. More specifically, if this “Active Register” holds a request that receives a
kill, then it will be clear for the next two cycles.
• Model Behavior:
− Eventually a request is received and is retained in the Holding Register.
− When the Active Register becomes free, the request moves from the Holding Reg-
ister into the Active Register.
− When the request is finished being serviced, it is cleared out of the Active Register.
• Micro-architectural Assumption: The same request cannot be made twice.
0 00 00 00 00 00 00 00 00 00 01 00 01 00 00 Valid request
0 00 00 00 00 00 00 00 00 00 00 00 01 00 00 Kill request
--------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Fig. 6. Traditional single counter-example trace. The columns represent phases, and the rows
represent the values of each of the signals at each phase
@ @@ @@ @@ @@ @@ @@ @@ @@ @@ @! @@ @1 @@ @@ Valid request
@ @@ @@ @@ @@ @@ @@ @@ @@ @@ @@ @@ @1 @@ @@ Kill request
--------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Fig. 7. Multi-value counter-example trace. The columns represent phases, and the rows repre-
sent the values of each of the signals at each phase. The ‘@’ and ‘!’ symbols represent weak 0
and weak 1 values, respectively, whereas ‘0’ and ‘1’ represent the strong values
In the above traces two requests (Valid request) arrive one cycle apart. For the first
request, the Write Enable for the Holding Register goes high in cycle 10, causing the
Holding Register to be valid in the next cycle (cycle 11). Also in cycle 11, the Hold-
ing→Active signal goes high, causing the request to move from the Holding Register
into the Active Register. Consequently, the Active Register is valid in cycle 12. Fur-
thermore, the trace shows a Kill request arriving in cycle 12. (This should kill the
request by clearing the Active Register, which it does: the Active Register is not valid
in cycle 13.)
However, the property states that the Active Register should remain clear for two
cycles, yet we see that it becomes valid again in the next cycle (cycle 14). This occurs
due to the second request, which moves into the Active Register by the same process
as the first request. Thus the Active Register becomes valid again after only one cy-
cle, rather than the two cycles specified by the property. Hence the violation of the
property and the failure of the proof.
Let us now illustrate how multi-value annotation helps the verification engineer to
find the root cause of the failure.
• Multi-value annotation helps narrow the scope of the search for the root cause.
− More strong values are associated with the second request than with the first.
Therefore, the focus should be on the second request.
• Multi-value annotation helps identify which logic has to be guaranteed to resolve
all counter-examples of the given length.
− Despite the assumption that the same request cannot be made more than once, the
first and second requests arrive two cycles apart from one another. A closer exami-
nation (not seen in the above trace) of the second request shows that it is indeed
identical to the first request but with one exception: its “Special” bit. Multi-value
annotation is helpful in identifying this conclusion. With single counter-example
debugging, there is no indication that the Special bit is the only exception; it simply receives a zero or one, just like every other bit that comprises the request.
However, with multi-value annotation the Special bit takes on strong values, so it
is certain that here the Special bit is the only important component of the request.
− Now that the second request, in particular its Special bit, has been identified as
important, the Special bit can be followed from the request through the registers, as
shown in the above trace. The Special bit is not getting passed from the Holding
Register to the Active Register in cycles 13-14, even though the request is getting
passed. Since the Special bit takes on strong values, we know that if the passing of
the Special bit from the Holding to the Active Register can be guaranteed, then we
will have resolved all counter-examples of this length.
This debugging session helped the verification engineer pinpoint the missing logic
that guarantees the transfer of the Special bit. Once that logic was included in the
model, the proof succeeded.
The verification case just described is derived from an actual debugging session
from our verification work. We have indeed encountered much more complex
counter-examples than the one just demonstrated. They include some with multiple
root causes that could be identified all at once using multiple counter-example capa-
bilities, but which single counter-example debugging could identify only one at a
time. Such examples are quite complex and are beyond the scope of this paper.
6 Conclusions
In this paper we have introduced a novel formal verification debugging aid, the
“counter-example wizard.” The novelty of the wizard is in its multi-value counter-
example annotation, sequential constraint-based debugging, and multiple root cause
detection mechanisms. The benefits of the counter-example wizard have been observed in an industrial formal verification setting in verifying real-life Intel designs.
The demonstrated advantages of the formal verification system augmented with the
counter-example wizard are a shorter debugging loop and unique help in diagnosing
and resolving failures. The time saved was due to faster determination of the root
cause of a set of counter-examples, and the ability to identify and resolve multiple
root causes in a single proof iteration. Furthermore, the wizard allows the verification
engineer to test solutions to verification failures and observe if they really do resolve
the apparent root cause.
References
[1] R. Bryant, “Graph-Based Algorithms for Boolean Function Manipulation”, IEEE Transactions on Computers, C-35(8):677-691, August 1986.
[2] K.L. McMillan, “Symbolic Model Checking”, Kluwer Academic Publishers, 1993.
[3] K. Ravi, F. Somenzi, “Efficient Fixpoint Computation for Invariant Checking”, in Proceedings of ICCD’99, pp. 467-474, 1999.