AN OVERVIEW OF EMBEDDED SELF-TEST DESIGN
A report by
Gaurav Gulati
Ryan Fonnesbeck
Sathya Vijayakumar
Sudheesh Madayi
Acknowledgements
CONTENTS

1. Introduction
   1.1 Why BIST?
   1.2 What is BIST?
   1.3 Basic BIST Hierarchy
   1.4 Basic BIST Architecture
   1.5 Advantages of BIST
   1.6 Disadvantages of BIST
2. Test Pattern Generation
3. Output Response Analysis
4. Delay Faults
   4.1 Delay Faults
   4.2 Delay Fault Models
   4.3 Delay Faults and BIST
5. Non-Intrusive BIST
6. Appendix
   6.1 An Overview of Different Fault Models
7. Bibliography
1. INTRODUCTION
This report presents an overview of Built-In Self-Test (BIST): its
significance, its generic architecture (with detailed coverage of its
components), and its advantages and disadvantages.
1.1 Why BIST?
Have you ever wondered about the reliability of the electronic circuits aboard
satellites and space shuttles? Once launched into space, how do these systems
maintain their functional integrity? How does one detect and diagnose
malfunctions from the earth stations? BIST is a testing paradigm that offers a
solution to these questions.
To understand the need for BIST, one needs to be aware of the various
testing procedures involved in the design and manufacture of any system.
There are three main phases in the design cycle of a product where testing
plays a crucial role: wafer and device level testing during fabrication,
manufacturing testing at the board level, and system-level testing in the
field where the system operates.
With this outline of the kinds of testing involved at various stages of a
product's design cycle, we now move on to the problems associated with these
testing procedures. The number of transistors contained in most VLSI devices
today has increased by four orders of magnitude for every order-of-magnitude
increase in the number of I/O (input/output) pins [3]. Add to this the surface
mounting of components and the implementation of embedded core functions, all
of which make the device less accessible for testing and thus make testing a
big challenge. With increasing device sizes and decreasing component sizes,
the number and types of defects that can occur during manufacturing increase
drastically, thereby increasing the cost of testing. Due to the growing
complexity of VLSI devices and system PCBs, the ability to provide some level
of fault diagnosis (information regarding the location, and possibly the type,
of the fault or defect) during manufacturing testing is needed to assist
failure mode analysis (FMA) for yield enhancement and repair procedures. This
is why BIST is needed!
BIST can partition the device into levels and then perform testing
hierarchically. BIST offers a hierarchical solution to the testing problem
such that the burden on system-level test is reduced: the same testing
approach can cover wafer and device level testing, manufacturing testing, and
system-level testing in the field where the system operates. Hence, BIST
provides vertical testability.
1.2 What is BIST?
The basic concept of BIST involves designing test circuitry around a
system that automatically tests the system by applying test stimuli and
observing the corresponding system response. Because the test framework is
embedded directly into the system hardware, the testing process has the
potential of being faster and more economical than using an external test
setup. One of the first definitions of BIST was given by Richard M. Sedmak:

"the ability of logic to verify a failure-free status automatically, without
the need for externally applied test stimuli (other than power and clock), and
without the need for the logic to be part of a running system." [3]
1.3 Basic BIST Hierarchy
Figure 1.1 presents a block diagram of the basic BIST hierarchy. The test
controller at the system level can simultaneously activate self-test on all boards.
In turn, the test controller on each board activates self-test on each chip on that
board. The pattern generator produces a sequence of test vectors for the circuit
under test (CUT), while the response analyzer compares the output response of
the CUT with its fault-free response.
Figure 1.1: Basic BIST Hierarchy.
1.4 Basic BIST Architecture

The test controller signals when the BIST sequence has completed and whether
the output response analyzer has determined the circuit to be faulty or
fault-free.
1.4.3 Output Response Analyzer (ORA):
The response of the system to the applied test vectors needs to be analyzed
and a decision made as to whether the system is faulty or fault-free. This
function of comparing the output response of the CUT with its fault-free
response is performed by the ORA. The ORA compacts the output response
patterns from the CUT into a single pass/fail indication. Response analyzers
may be implemented in hardware by making use of a comparator along with a
ROM-based lookup table that stores the fault-free response of the CUT. The
use of multiple input signature registers (MISRs) is one of the most common
techniques for ORA implementation.
Now that we have a basic idea of the concept of BIST, let us take a look at a
few of its advantages and disadvantages.
1.5 Advantages of BIST
1.6 Disadvantages of BIST
Additional Design Time and Effort: during the product's design cycle,
additional time and manpower must be devoted to implementing BIST in the
designed system.

Added Risk: a fault may exist in the BIST circuitry while the CUT itself
operates correctly. In this scenario, the whole chip would be regarded as
faulty, even though it could perform its function correctly.
2. TEST PATTERN GENERATION

The above classes of test patterns are not mutually exclusive. A BIST
application may use a combination of different test patterns; for example,
pseudo-random test patterns may be used in conjunction with deterministic
test patterns to achieve higher fault coverage during the testing process.
2.2 Linear Feedback Shift Registers
The Linear Feedback Shift Register (LFSR) is one of the most frequently used
TPG implementations in BIST applications. This can be attributed to the fact
that LFSR designs are more area-efficient than counters, requiring
comparatively less combinational logic per flip-flop. An LFSR can be
implemented using internal or external feedback; the former is also referred
to as a TYPE 1 LFSR, the latter as a TYPE 2 LFSR. The two implementations are
shown in Figure 2.1. The external feedback LFSR best illustrates the origin
of the circuit's name: a shift register with feedback paths that are linearly
combined via XOR gates. Both implementations require the same amount of logic
in terms of the number of flip-flops and XOR gates. In the internal feedback
LFSR implementation, there is just one XOR gate between any two flip-flops,
regardless of the LFSR's size. Hence, an internal feedback implementation of
a given LFSR specification will have a higher operating frequency than its
external feedback counterpart. For high-performance designs the choice would
be an internal feedback implementation, whereas an external feedback
implementation would be the choice where a more symmetric layout is desired
(since the XOR gates lie outside the shift register circuitry).
The question to be answered at this point is: how does the positioning of the
XOR gates in the feedback network of the shift register affect, or rather
govern, the test vector sequence that is generated? Let us begin answering
this question using the example illustrated in Figure 2.2. Looking at the
state diagram, one can deduce that the sequence of patterns generated is a
function of the initial state of the LFSR, i.e. the value with which it
started generating the vector sequence. The value that the LFSR is
initialized with before it begins generating a vector sequence is referred to
as the seed. The seed can be any value other than the all-zeros vector. The
all-zeros state is forbidden, as it causes the LFSR to loop in that state
indefinitely; this can be seen from the state diagram of the example above.
If we consider an n-bit LFSR, the maximum number of unique test vectors that
it can generate before any repetition occurs is 2^n - 1 (since the all-zeros
state is forbidden). An n-bit LFSR implementation that generates a sequence
of 2^n - 1 unique patterns is referred to as a maximal length sequence or
m-sequence LFSR. The LFSR illustrated in the considered example is not an
m-sequence LFSR; it generates a maximum of 6 unique patterns before
repetition occurs. The positioning of the XOR gates with respect to the
flip-flops in the shift register is defined by what is called the
characteristic polynomial of the LFSR.
The characteristic polynomial is commonly denoted P(x). Each non-zero
coefficient in it represents an XOR gate in the feedback network. The x^n and
x^0 coefficients of the characteristic polynomial are always non-zero but do
not represent the inclusion of an XOR gate in the design. Hence, the
characteristic polynomial of the example illustrated in Figure 2.2 is
P(x) = x^4 + x^3 + x + 1. The degree of the characteristic polynomial gives
the number of flip-flops in the LFSR, whereas the number of non-zero
coefficients (excluding x^n and x^0) gives the number of XOR gates in the
LFSR implementation.
2.3 Primitive Polynomials
Characteristic polynomials that result in a maximal length sequence are
called primitive polynomials, while those that do not are referred to as
non-primitive polynomials. A primitive polynomial will produce a maximal
length sequence irrespective of whether the LFSR is implemented using
internal or external feedback. However, it is important to note that the
order of vector generation differs between the two implementations. The
sequence of test patterns generated using a primitive polynomial is
pseudo-random. The internal and external feedback LFSR implementations for
the primitive polynomial P(x) = x^4 + x + 1 are shown in Figure 2.3(a) and
Figure 2.3(b) respectively.
Observe their corresponding state diagrams and note the difference in the
sequence of test vector generation. While implementing an LFSR for a BIST
application, one would like to select a primitive polynomial with the minimum
possible number of non-zero coefficients, as this minimizes the number of XOR
gates in the implementation. This leads to considerable savings in power
consumption and die area, two parameters that are always of concern to a VLSI
designer! Table 2.1 lists primitive polynomials for the implementation of
2-bit to 74-bit LFSRs.
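As a concrete illustration of these ideas, here is a minimal Python sketch of
an external-feedback LFSR (the function name, bit ordering and seed are
illustrative assumptions, not from the report). The taps list encodes the
non-zero coefficients of P(x), excluding x^0. Running it reproduces the
sequence lengths discussed above: 6 unique patterns for the non-primitive
P(x) = x^4 + x^3 + x + 1 of Figure 2.2, and 2^4 - 1 = 15 for the primitive
P(x) = x^4 + x + 1.

```python
def lfsr_sequence(taps, nbits, seed):
    """Collect the unique states of an external-feedback LFSR.

    taps  : exponents of the non-zero coefficients of P(x), excluding x^0,
            e.g. [4, 3, 1] for P(x) = x^4 + x^3 + x + 1
    nbits : degree of P(x), i.e. the number of flip-flops
    seed  : initial state; any non-zero value
    """
    assert seed != 0, "the all-zeros state locks up the LFSR"
    state, seen = seed, []
    while state not in seen:
        seen.append(state)
        # feedback bit = XOR of the tapped flip-flop outputs
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        # shift by one position and feed the new bit back in
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
    return seen

print(len(lfsr_sequence([4, 3, 1], 4, 0b0001)))  # 6  (non-primitive polynomial)
print(len(lfsr_sequence([4, 1], 4, 0b0001)))     # 15 (primitive, maximal length)
```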
2.5 Generic LFSR Design
Suppose a BIST application requires a certain set of test vector sequences,
but not all of the 2^n - 1 possible patterns generated using a given
primitive polynomial; this is where a generic LFSR design finds application.
Such an implementation makes it possible to reconfigure the LFSR to implement
a different primitive or non-primitive polynomial on the fly. A 4-bit generic
LFSR implementation making use of both internal and external feedback is
shown in Figure 2.4. The control inputs C1, C2 and C3 determine the
polynomial implemented by the LFSR: a control input is set to logic 1 for
each non-zero coefficient of the implemented polynomial.
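The following sketch (names and encoding are illustrative assumptions)
captures the generic LFSR idea in software: the control word carries one bit
per possible tap, with bit i-1 set when the coefficient of x^i is non-zero,
so a single piece of hardware, modeled here as a single function, can realize
different polynomials on the fly. The tap encoding matches the lfsr_sequence
sketch above.

```python
def generic_lfsr_step(state, control, nbits):
    """One clock of a generic external-feedback LFSR.

    control: bit i-1 is set <=> coefficient of x^i in P(x) is non-zero,
             e.g. 0b1001 -> P(x) = x^4 + x + 1
                  0b1101 -> P(x) = x^4 + x^3 + x + 1
    """
    # feedback = parity of the flip-flop outputs selected by the control word
    fb, masked = 0, state & control
    while masked:
        fb ^= masked & 1
        masked >>= 1
    return ((state << 1) | fb) & ((1 << nbits) - 1)
```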
Figure 2.5: Modified LFSR implementations for the generation of the all zeros pattern
2.7 LFSRs used as Output Response Analyzers (ORAs):
LFSRs are also used for response analysis. While the LFSRs used for test
pattern generation are closed systems (initialized only once), those used for
response/signature analysis need input data, specifically the output of the
CUT. Figure 2.7 shows a basic diagram of a single-input LFSR implementation
for response analysis.
Here the input to the LFSR is the output response of the CUT, K(x). The final
state of the LFSR is R(x), which is given by

R(x) = K(x) mod P(x)

where P(x) is the characteristic polynomial of the LFSR used. Thus R(x) is
the remainder obtained from the polynomial division of the output response of
the CUT by the characteristic polynomial of the LFSR. The next section
explains the operation of output response analyzers, also called signature
analyzers, in detail.
3. OUTPUT RESPONSE ANALYSIS
3.1 Principle behind ORAs
The response sequence R for a given order of test vectors is obtained from a
simulator, and a compaction function C(R) is defined. The number of bits in
C(R) is much smaller than the number in R. These compressed vectors are then
stored on or off chip and used during BIST. The same compaction function C is
applied to the CUT's response R* to provide C(R*). If C(R) and C(R*) are
equal, the CUT is declared fault-free. For compaction to be practical, the
compaction function C has to be simple enough to implement on a chip, the
compressed responses must be small enough and, above all, the function C
should be able to distinguish between the faulty and fault-free compressed
responses. Masking [3.3] or aliasing occurs if a faulty circuit gives the
same response as the fault-free circuit. Due to the linearity of the LFSRs
used, this occurs if and only if the error sequence, obtained by XORing the
correct and incorrect sequences, leads to a zero signature.
Compression can be performed serially, in parallel, or in any mixed manner. A
purely parallel compression yields a 'global' value C describing the complete
behavior of the CUT. On the other hand, if additional information is needed
for fault localization, then a serial compression technique has to be used.
With such a method, a separate compacted value C(R*) is generated for each
output response sequence R*, where the number of such sequences depends on
the number of output lines of the CUT.
3.2 Different Compression Methods
We now take a look at a few of the serial compression methods used in the
implementation of BIST. Let X = (x_1, ..., x_t) be a binary sequence. The
sequence X can then be compressed in the following ways:
3.2.1 Transition counting:
In this method, the signature is the number of 0-to-1 and 1-to-0 transitions
in the output data stream. The transition count is given by

T(X) = Σ_{i=1}^{t-1} (x_i ⊕ x_{i+1})    (Hayes, 1976)

where '⊕' denotes addition modulo 2, while the outer summation is ordinary
addition.
3.2.2 Syndrome testing (or ones counting):
In this method, a single output is considered and the signature is the number
of 1s appearing in the response R. It is mathematically expressed as

Sy(X) = Σ_{i=1}^{t} x_i    (Savir, 1980)
3.2.3 Accumulator compression testing:
In this method, the signature is the sum of all prefix sums of the response
sequence:

A(X) = Σ_{k=1}^{t} Σ_{i=1}^{k} x_i    (Saxena, Robinson 1986)

In each of these cases, the length of the compressed value is only of the
order O(log t). The following well-known methods lead instead to a compressed
value of constant length.
3.2.4 Parity check compression:
In this method, the compression is performed with a simple LFSR whose
polynomial is G(x) = x + 1. The signature S is the parity of the circuit
response: it is zero if the parity is even and one otherwise, i.e.

P(X) = x_1 ⊕ x_2 ⊕ ... ⊕ x_t

where '⊕' again denotes addition modulo 2. This scheme detects all single and
multiple bit errors consisting of an odd number of erroneous bits in the
response sequence, but fails when the number of erroneous bits is even.
3.2.5 Cyclic redundancy check (CRC):
A linear feedback shift register of some fixed length n >= 1 performs CRC.
Note that the parity test is the special case of CRC with n = 1.
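As a small illustration of these counting compactions (the function names and
sample bit-streams below are ours, not from the report), the sketch computes
the transition count, ones count, accumulator value and parity of a response
sequence, and shows how an error can alias under ones counting while still
being caught by transition counting:

```python
def transition_count(x):            # T(X), Hayes 1976
    return sum(a ^ b for a, b in zip(x, x[1:]))

def ones_count(x):                  # Sy(X), Savir 1980
    return sum(x)

def accumulator(x):                 # A(X), Saxena/Robinson 1986
    return sum(sum(x[:k]) for k in range(1, len(x) + 1))

def parity(x):                      # P(X), i.e. an LFSR with G(x) = x + 1
    p = 0
    for b in x:
        p ^= b
    return p

good = [0, 1, 1, 0, 1, 0, 1, 1]     # fault-free response
bad  = [1, 1, 1, 1, 1, 0, 0, 0]     # faulty response with the same number of 1s
print(ones_count(good), ones_count(bad))              # 5 5  -> fault masked
print(transition_count(good), transition_count(bad))  # 5 1  -> fault detected
```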
3.3 Response Analysis:
The basic idea behind response analysis is to divide the data polynomial (the
input to the LFSR, which is essentially the compacted response of the CUT) by
the characteristic polynomial of the LFSR. The remainder of this division is
the signature used to determine the faulty/fault-free status of the CUT at
the end of the BIST sequence. This is illustrated in Figure 3.3 for a 4-bit
signature analysis register (SAR) constructed from an internal feedback LFSR
with a characteristic polynomial from Table 2.1. Since the last bit of the
CUT's output response to enter the SAR denotes the coefficient of x^0, the
data polynomial of the output response of the CUT can be determined by
counting backward from the last bit to the first. Thus the data polynomial
for this example is given by K(x), as shown in Figure 3.3(a). The contents of
the SAR for each clock cycle of the CUT's output response are shown in Figure
3.3(b), along with the input data K(x) shifting into the SAR on the left-hand
side and the data Q(x) shifting out of the end of the SAR on the right-hand
side. The signature contained in the SAR at the end of the BIST sequence is
shown at the bottom of Figure 3.3(b) and is denoted R(x). The polynomial
division process is illustrated in Figure 3.3(c), where the division of the
CUT output data polynomial K(x) by the LFSR characteristic polynomial P(x)
results in a quotient Q(x), which is shifted out of the right end of the SAR,
and a remainder R(x), which is contained in the SAR at the end of the BIST
sequence.
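The division performed by the SAR can be checked in software. The sketch
below (function names, bit ordering and the sample stream are our
assumptions) shifts a CUT output stream, highest-degree bit first, through an
internal-feedback SAR for P(x) = x^4 + x + 1 and confirms that the final
state equals the remainder K(x) mod P(x) computed by direct polynomial
division over GF(2):

```python
def poly_mod(k_bits, p_bits):
    """Remainder of K(x) / P(x) over GF(2); bits listed highest degree first."""
    r = list(k_bits)
    n = len(p_bits) - 1
    for i in range(len(r) - n):
        if r[i]:                       # cancel the leading term with P(x)
            for j, pb in enumerate(p_bits):
                r[i + j] ^= pb
    return r[-n:]                      # coefficients x^(n-1) ... x^0

def sar_signature(k_bits, taps, n):
    """Internal-feedback SAR; s[i] holds the coefficient of x^i."""
    s = [0] * n
    for b in k_bits:
        fb = s[n - 1]                  # feedback from the last stage
        new = [0] * n
        new[0] = b ^ fb                # input XORed with the feedback
        for i in range(1, n):          # taps = stages with a feedback XOR gate
            new[i] = s[i - 1] ^ (fb if i in taps else 0)
        s = new
    return s

k = [1, 0, 0, 0, 0, 0]                             # K(x) = x^5
assert sar_signature(k, {1}, 4) == poly_mod(k, [1, 0, 0, 1, 1])[::-1]
print(sar_signature(k, {1}, 4))                    # [0, 1, 1, 0] -> R(x) = x^2 + x
```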
3.4 Multiple Input Signature Registers (MISRs)
The example above considered a signature analyzer with a single input, but
the same logic applies to a CUT with more than one output. This is where the
MISR is used. The basic MISR, shown in Figure 3.4, is obtained by adding XOR
gates between the inputs of the flip-flops of the SAR, one for each output of
the CUT. MISRs are also susceptible to signature aliasing and error
cancellation. In what follows, masking/aliasing is explained in detail.
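In software, a MISR differs from the single-input SAR above only in that each
flip-flop folds in one CUT output per clock. A hedged sketch follows (the
stage ordering mirrors the sar_signature sketch and is our assumption):

```python
def misr_step(s, m, taps, n):
    """One clock of an n-bit MISR; m[i] is the CUT output feeding stage i."""
    fb = s[n - 1]
    new = [0] * n
    for i in range(n):
        prev = s[i - 1] if i > 0 else 0
        tap = fb if (i == 0 or i in taps) else 0
        new[i] = prev ^ m[i] ^ tap     # shift, fold in CUT output, apply feedback
    return new
```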
3.5 Masking / Aliasing
The data compaction techniques considered here have the disadvantage of some
loss of information. In particular, the following situation may occur.
Suppose that during the diagnosis of some CUT, an expected sequence X0 is
changed into a sequence X, with X0 ≠ X, due to some fault F. In this case,
the fault would be detected by monitoring the complete sequence X. On the
other hand, after applying some data compaction C, it may happen that the
compressed values of the sequences are the same, i.e. C(X0) = C(X).
Consequently, the fault F that caused the change of the sequence X0 into X
cannot be detected if we only observe the compression results instead of the
whole sequences. This situation is called masking or aliasing of the fault F
by the data compression C. Obviously, the masking behavior of a data
compression scheme must be studied intensively before it can be applied in
compact testing. In general, the masking probability must be computed, or at
least estimated, and it should be sufficiently low.
The masking properties of signature analyzers depend widely on their
structure, which can be expressed algebraically by properties of their
characteristic polynomials. There are three main ways of characterizing the
masking properties of ORAs:
(i) General masking results, either expressed by the characteristic
polynomial or in terms of other LFSR properties;
(ii) Quantitative results, mostly expressed by computations or estimations of
error probabilities;
(iii) Qualitative results, e.g. concerning the general possibility or
impossibility of LFSRs masking special types of error sequences.
The first direction includes the more general masking results, based either
on the characteristic polynomial or on other ORA properties. These can be
obtained by simulating the circuit together with the compression technique to
determine which faults are detected; this method is computationally expensive
because it involves exhaustive simulation. Smith's theorem states the central
result:

Any error sequence E = (e_1, ..., e_t) is masked by an ORA S if and only if
its error polynomial p_E(x) = e_1·x^{t-1} + ... + e_{t-1}·x + e_t is
divisible by the characteristic polynomial p_S(x) [4].
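Smith's theorem can be illustrated with the sar_signature sketch from Section
3.3: adding an error sequence whose polynomial is a multiple of the
characteristic polynomial, here E(x) = x·P(x) for P(x) = x^4 + x + 1, leaves
the signature unchanged (the bit-streams below are illustrative):

```python
good = [1, 0, 1, 1, 0, 1, 0, 1, 1]            # fault-free response, degree 8 first
err  = [0, 0, 0, 1, 0, 0, 1, 1, 0]            # E(x) = x*P(x) = x^5 + x^2 + x
bad  = [g ^ e for g, e in zip(good, err)]     # faulty response
print(sar_signature(good, {1}, 4) == sar_signature(bad, {1}, 4))  # True -> aliasing
```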
The second direction in masking studies, represented in most of the papers
concerning masking problems [7], [8], can be characterized by quantitative
results, mostly expressed as computations or estimations of masking
probabilities. Computing exact probabilities is usually not possible, so all
possible outputs are assumed to be equally probable. This assumption does not
allow one to correlate the probability of obtaining an erroneous signature
with fault coverage, and hence leads to a rather loose estimate. It can be
expressed as an extension of Smith's theorem:

If we suppose that all error sequences of any fixed length are equally
likely, the masking probability of any n-stage ORA is not greater than 2^-n.
The third direction in studies on masking contains qualitative results
concerning the general possibility or impossibility of ORAs masking error
sequences of some special type. Examples of such types are burst errors, or
sequences with fixed error-sensitive positions. Traditionally, error
sequences of some fixed weight are also regarded as such a special type,
where the weight w(E) of a binary sequence E is simply its number of ones.
Masking properties for such sequences are studied without restriction on
their length. In other words:

If the ORA S is non-trivial, then masking of error sequences of weight 1 by S
is impossible.
4. DELAY FAULTS

4.1 Delay Faults

layers, and an ever-growing search space that is perpetuated by
ever-decreasing device size.
4.2 Delay Fault Models
In this section, we explore the advantages and limitations of three delay
fault models. Other delay fault models exist, but they are essentially
derivatives of these three classical models.
4.2.1 Gate Delay
The gate delay model assumes that the delays through logic gates can be
accurately characterized. It also assumes that the size and location of
probable delay faults are known. Faults are modeled as additive offsets to
the propagation of a rising or falling transition from the gate inputs to the
gate outputs. In this model, faults retain quantitative values: a delay fault
of 200 picoseconds, for example, is not the same as a delay fault of 400
picoseconds. Research efforts are currently attempting to devise a method to
prove that a test will detect any fault at a particular site with magnitude
greater than some minimum fault size. Certain methods have been proposed for
determining the fault sizes detected by a particular test, but they are
beyond the scope of this discussion.
4.2.2 Transition
A transition fault model classifies faults into two categories: slow-to-rise
and slow-to-fall. It is easy to see how these classifications can be
abstracted to the stuck-at fault model: a slow-to-rise fault corresponds to a
stuck-at-zero fault, and a slow-to-fall fault to a stuck-at-one fault. These
categories describe defects that delay the rising or falling transition of a
gate's inputs and outputs.
A test for a transition fault is comprised of an initialization pattern and a
propagation pattern. The initialization pattern sets up the initial state for
the transition, while the propagation pattern is identical to the stuck-at
fault pattern of the corresponding fault. For example, to test a slow-to-rise
fault at a gate input, the initialization pattern sets that input to 0 and
the propagation pattern applies the corresponding stuck-at-0 test, driving
the input to 1 and sensitizing a path from the fault site to an observable
output.
There are several drawbacks to the transition fault model. Its principal
weakness is the assumption of a large gate delay: often, multiple gate delay
faults that are individually undetectable as transition faults can combine to
produce a large path delay fault. This distribution of delay over circuit
elements limits the usefulness of transition fault modeling. It is also
difficult to determine the minimum size of a detectable delay fault with this
model.
4.2.3 Path Delay
The path delay model has received more attention than the gate delay and
transition fault models. Any path with a total delay exceeding the system
clock interval is said to have a path delay fault. This model accounts for
the distributed delays that are neglected in the transition fault model.
Each path that connects the circuit inputs to the outputs has two delay
paths. The rising path is the path traversed by a rising transition on the
input of the path; similarly, the falling path is the path traversed by a
falling transition. These transitions change direction whenever the path
passes through an inverting gate.
Below are three standard definitions used in path delay fault testing:
Definition 1: Let G be a gate on path P in a logic circuit, and let r be an
input to gate G; r is called an off-path sensitizing input if r is not on
path P.
Definition 2: A two-pattern test <V1, V2> is called a robust test for a delay
fault on path P if the test detects that fault independently of all other
delays in the circuit.
Definition 3: A two-pattern test <V1, V2> is called a non-robust test for a
delay fault on path P if it detects the fault under the assumption that no
other path in the circuit involving the off-path inputs of gates on P has a
delay fault.
4.3 Delay Faults and BIST
A test for each of the delay fault models described in the previous section
consists of a sequence of two test patterns: an initialization vector
followed by a propagation vector. Deriving these two-pattern tests is known
to be NP-hard. Even though test pattern generators exist for these fault
models, the cost of high-speed Automatic Test Equipment (ATE) and the
encapsulation of signals generally prevent these vectors from being applied
directly to the CUT. BIST offers a solution to these problems.
Sequential circuit testing is complicated by the inability to probe signals
internal to the circuit. Scan methods have been widely accepted as a means to
externalize these signals for testing purposes. Scan chains, in their
simplest form, are sequences of multiplexed flip-flops that can function in
normal or test mode. Aside from a slight increase in die area and delay,
scannable flip-flops are no different from normal flip-flops when not
operating in test mode. The contents of flip-flops that lack external inputs
or outputs can be loaded or examined externally by placing the flip-flops in
test mode. Scan methods have proven very effective in testing for stuck-at
faults. A toy model of this mechanism appears below.
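The following sketch (class and method names are ours, purely illustrative)
models a mux-scan chain in software: in test mode the flip-flops form a shift
register between scan-in and scan-out, while in normal mode they simply
capture functional data.

```python
class ScanChain:
    """Minimal mux-scan chain model: shift in test mode, capture otherwise."""

    def __init__(self, length):
        self.ff = [0] * length

    def clock(self, test_mode, scan_in=0, data=None):
        if test_mode:                        # shift: scan_in -> ff[0] -> ... -> out
            scan_out = self.ff[-1]
            self.ff = [scan_in] + self.ff[:-1]
            return scan_out
        self.ff = list(data)                 # capture the functional inputs
        return None

chain = ScanChain(4)
for b in [1, 0, 1, 1]:                       # shift in an initialization vector
    chain.clock(test_mode=True, scan_in=b)
print(chain.ff)                              # [1, 1, 0, 1]
```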
Figure 4.2: A scan-BIST TPG (a) and its associated subsequence (b).
5. NON-INTRUSIVE BIST
To save on silicon area, some BIST designs make use of the existing
flip-flops that implement the CUT design to create the test pattern generator
and response analyzer functions. In scan-based BIST, points of
controllability and observability for the application of test vectors and the
capture of the output responses are realized by adding logic in the
combinational path between two flip-flops. As a result, some performance
penalty is incurred, since the CUT now operates at a lower clock frequency
than a CUT not implementing BIST. Such a BIST implementation is said to be
intrusive, since it interferes with the operation of the CUT. The basic BIST
architecture shown in Figure 1 (in the introduction) is an illustration of
non-intrusive BIST: the BIST circuitry is separate from the CUT and hence
does not impose any performance penalty. As a result, non-intrusive BIST
implementations are preferred for high-performance applications. Since the
test pattern generator (TPG) and output response analyzer (ORA) functions are
external to the CUT design, they can be used to test multiple CUT blocks
(each of which may have a different functionality). This concept is
illustrated in Figure 5.1 below. This approach leads to major savings in
silicon area.
Figure 5.1: Same TPG and ORA blocks used for multiple CUTs.
As can be seen from the figure above, there is an input isolation multiplexer
between the primary inputs and the CUT. This leads to an increased set-up
time constraint on the timing specifications of the primary input signals.
There is also some additional clock-to-output delay, since the primary
outputs of the CUT also drive the output response analyzer inputs. These are
the main disadvantages of non-intrusive BIST implementations.
To further save on silicon area, current non-intrusive BIST implementations
combine the TPG and ORA functions into one block, as illustrated in Figure
5.2 below. The common block (referred to as the MISR in the figure) exploits
the similarity in design between an LFSR (used for test vector generation)
and a MISR (used for signature analysis). The block configures itself for
test vector generation or output response analysis as required.
6. APPENDIX
6.1 AN OVERVIEW OF DIFFERENT FAULT MODELS
A good fault model accurately reflects the behavior of the actual defects
that can occur during the fabrication and manufacturing processes, as well as
the behavior of the faults that can occur during system operation. A brief
description of the different fault models in use is presented here:
At this point a question may arise in our minds: what could cause the
input/output of a logic gate to be stuck at logic 0 or stuck at logic 1? This
could happen as a result of a faulty fabrication process, where the
input/output of a logic gate is accidentally routed to power (logic 1) or
ground (logic 0).
Vz = Vdd · [Rn / (Rn + Rp)]

Here, Rn and Rp represent the effective channel resistances of the pull-down
and pull-up transistor networks respectively. Depending upon the ratio of the
effective channel resistances, as well as the switching level of the gate
being driven by the faulty gate, the effect of a transistor stuck-on fault
may or may not be observable at the circuit output. For example, with
hypothetical values Rn = 2 kΩ and Rp = 8 kΩ, Vz = 0.2·Vdd, which a following
gate with a switching threshold of 0.5·Vdd would still interpret as a logic
0, so the fault would escape purely logical testing. This behavior
complicates the testing process, as Rn and Rp are functions of the inputs
applied to the gate. The only parameter of the faulty gate that will always
differ from that of the fault-free gate is the steady-state current drawn
from the power supply (IDDQ) when the fault is excited. In a fault-free
static CMOS gate, only a small leakage current flows from Vdd to Vss; in the
faulty gate, a much larger current flows between Vdd and Vss when the fault
is excited. Monitoring steady-state power supply currents has therefore
become a popular method for the detection of transistor-level stuck faults.
The dominant bridging fault model is yet another popular model used to
emulate the occurrence of bridging faults. It accurately reflects the
behavior of some shorts in CMOS circuits, where the logic value at the
destination end of the shorted wires is determined by the source gate with
the strongest drive capability. As illustrated in Figure 3, the driver of one
node dominates the driver of the other node: "A DOM B" denotes that the
driver of node A dominates, as it is stronger than the driver of node B.
7. BIBLIOGRAPHY
1. V. D. Agrawal, C. R. Kime and K. K. Saluja, "A Tutorial on Built-In
Self-Test, Part 1: Principles", IEEE Design & Test of Computers, Vol. 10,
No. 1, March 1993, pp. 73-82.
2. Charles E. Stroud, A Designer's Guide to Built-In Self-Test, Kluwer
Academic Publishers, Massachusetts, 2002.
3. V. D. Agrawal, C. R. Kime and K. K. Saluja, "A Tutorial on Built-In
Self-Test, Part 2: Applications", IEEE Design & Test of Computers, Vol. 10,
No. 2, June 1993, pp. 69-77.
4. Lutz Voelkel, "On the problem of masking special errors by signature
analyzers", TR95014, April 1995.
5. Miron Abramovici, Melvin A. Breuer and Arthur D. Friedman, Digital Systems
Testing and Testable Design, IEEE Press, Piscataway, NJ, 1990.