Test and Testability U1 U2 U3
VLSI
TESTING
digital and mixed
analogue/digital
techniques
STANLEY L. HURST
Other volumes in this series:
Volume 1 GaAs technology and its impact on circuits and systems
D. G. Haigh and J. Everard (Editors)
Volume 2 Analogue IC design: the current-mode approach
C. Toumazou, F. J. Lidgey and D. G. Haigh (Editors)
Volume 3 Analogue-digital ASICs R. S. Soin, F. Maloberti and
J. Franca (Editors)
Volume 4 Algorithmic and knowledge-based CAD for VLSI
G. E. Taylor and G. Russell (Editors)
Volume 5 Switched-currents: an analogue technique for digital technology
C. Toumazou, J. B. Hughes and N. C. Battersby (Editors)
Volume 6 High frequency circuits F. Nibler and co-authors
Volume 7 MMIC design I. D. Robertson (Editor)
Volume 8 Low-power HF microelectronics G. A. S. Machado (Editor)
While the author and the publishers believe that the information and
guidance given in this work is correct, all parties must rely upon their own
skill and judgment when making use of it. Neither the author nor the
publishers assume any liability to anyone for any loss or damage caused
by any error or omission in the work, whether such error or omission is
the result of negligence or any other cause. Any and all such liability is
disclaimed.
The moral right of the author to be identified as author of this work has
been asserted by him/her in accordance with the Copyright, Designs and
Patents Act 1988.
Preface xi
Acknowledgments xiii
List of symbols and abbreviations xv
1 Introduction 1
1.1 The need for testing 1
1.2 The problems of digital testing 7
1.3 The problems of analogue testing 9
1.4 The problems of mixed analogue/digital testing 11
1.5 Design for test 11
1.6 Printed-circuit board (PCB) testing 13
1.7 Software testing 15
1.8 Chapter summary 16
1.9 Further reading 16
2 Faults in digital circuits 19
2.1 General introduction 19
2.2 Controllability and observability 20
2.3 Fault models 25
2.3.1 Stuck-at faults 26
2.3.2 Bridging faults 30
2.3.3 CMOS technology considerations 31
2.4 Intermittent faults 35
2.5 Chapter summary 38
2.6 References 40
Appendix B Minimum cost maximum length cellular automata for n < 100 519
Index 527
Preface
Historically, the subject of testing has not been one which has fired the
imagination of many electronic design engineers. It was a subject rarely
considered in academic courses except, perhaps as the final part of some
laboratory experiment or project, and then only to confirm the correct (or
incorrect!) working of some already-designed circuit. However, the vast
increase in on-chip circuit complexity arising from the evolution of LSI and
VLSI technologies has brought this subject into rightful prominence, making
it an essential part of the overall design procedure for any complex circuit or
system.
The theory and practice of microelectronic testing has now become a
necessary part of both IC design and system design using ICs. It is a subject
area which must be considered in academic courses in microelectronic
design, and which should be understood by all practising design engineers
who are involved with increasingly complex and compact system
requirements.
Written as a self-contained text to introduce all aspects of the subject, this
book is designed as a text for students studying the subject in a formal taught
course, but contains sufficient material on more advanced concepts in order
to be suitable as a reference text for postgraduates involved in design and test
research. Current industrial practice and the economics of testing are also
covered, so that designers in industry who may not have previously
encountered this area may also find information of relevance to their work.
The book is divided into nine principal chapters, plus three appendices.
Chapter 1 is an introductory chapter, which explains the problems of
microelectronic testing and the increasing need for design for test (DFT)
techniques. Chapter 2 then continues with a consideration of the faults which
may arise in digital circuits, and introduces the fundamentals of
controllability, observability, fault models and exhaustive versus non-
exhaustive test. This consideration of digital circuit testing continues in
Stanley L. Hurst
Acknowledgments
I would like to acknowledge the work of the many pioneers who have
contributed to our present state of knowledge about and application of
testing methodologies for microelectronic circuits and systems, oftimes with
belated encouragement from the manufacturing side of industry.
Conversations with many of these people over the years have added greatly to
my awareness of this subject area, and the many References given in this text
are therefore dedicated to them.
On a more personal note, may I thank former research colleagues at the
University of Bath, England, and academic colleagues in electronic systems
engineering at the Open University who have been instrumental in helping
my appreciation of the subject. Also very many years of extremely pleasant co-
operation with the VLSI design and test group of the Department of
Computer Science of the University of Victoria, Canada, must be
acknowledged. To them and very many others, not excluding the students
who have rightfully queried many of my loose statements, specious arguments
or convoluted mathematical justifications, my grateful thanks.
Finally, I must acknowledge the cheerful help provided by the administrative
and secretarial staff of the Faculty of Technology of the Open University, who
succeeded in translating my original scribbles into recognisable text. In
particular, may I thank Carol Birkett, Angie Swain and Lesley Green for their
invaluable assistance. Any errors remaining in the text are entirely mine!
S.L.H.
List of symbols and abbreviations
The following are the symbols and abbreviations that may be encountered in
VLSI testing literature. Most but not necessarily all will be found in this text.
C capacitor
CA cellular automata
CAD computer-aided design
DA design automation
DAC digital to analogue converter
DBSC digital boundary scan cell
DDD defect density distribution
DEC double error correction
DED double error detection
DFM design for manufacturing
DFR decreasing failure rate
DFT design for test, or design for testability
DIL or DIP dual-inline package, or dual-inline plastic package
DL defect level
DMOS dynamic MOS
DR data register
DRC design rule checking
DSP digital signal processing
D-to-A digital to analogue
DUT device under test
IC integrated circuit
IDDQ quiescent current in CMOS
IEE Institution of Electrical Engineers
IEEE Institute of Electrical and Electronics Engineers
IFA inductive fault analysis
IFR increasing failure rate
IGFET insulated gate FET
I/O input/output
IR instruction register
ISL integrated Schottky logic
k 1000
K 1024
kHz kilohertz
R resistor
RAM random-access memory
RAPS random path sensitising
RAS random access scan
ROM read-only memory
RTL register transfer language
RTOK retest OK
TC test coverage
TAP test access port
TCK, TMS, TDI, TDO boundary-scan terminals of IEEE standard 1149.1
TDD test directed diagnosis
TDL test description language
TMR triple modular redundancy
TPG test pattern generation (see also ATPG)
TPL test programming language
TQM total quality management
TTL transistor-transistor logic
The first of these five activities is the sole province of the vendor, and does not
involve the OEM in any way. The vendor normally incorporates 'drop-ins'
located at random positions on the wafer, these being small circuits or
geometric structures from which the correct resistivity and other parameters
of the wafer fabrication can be verified before any functional checks are
undertaken on the surrounding circuits. This is illustrated in Figure 1.1. We
will have no occasion to consider wafer fabrication tests any further in this
text.
Figure 1.1 The vendor's check on correct wafer fabrication, using drop-in test circuits
at selected points on the wafer. These drop-ins may be alternatively known
as process evaluation devices (PEDs), process device monitors (PDMs),
process monitoring circuits (PMCs), or similar terms by different IC
manufacturers (Photo courtesy Micro Circuit Engineering, UK)
In the case of standard off the shelf ICs, available for use by any OEM, the
next two test activities are also the sole responsibility of the vendor. However,
in the case of customer-specific ICs, usually termed ASICs (application-
specific ICs) or more precisely USICs (user-specific ICs), the OEM is also
involved in phase (ii) in order to define the functionality and acceptance
details of the custom circuit. The overall vendor/OEM interface details
therefore will generally be as shown in Figure 1.2.
The final two phases of activity are clearly the province of the OEM, not
involving the vendor unless problems arise with incoming production ICs.
Such testing will be unique to a specific product and its components,
although in its design the requirements of testing, and possibly the
incorporation of design for test (DFT) features such as will be covered later
in this text, must be considered.
Figure 1.2 The IC vendor/OEM test procedures for a custom IC before assembly and
test in final production systems. Ideally, the OEM should do a 100 %
functional test of incoming ICs in a working equipment or equivalent test
rig, although this may be impractical for VLSI circuits
[Graph not reproduced: families of curves for manufacturing yields Y = 0.25, 0.50, 0.75, 0.90 and 0.99, plotted on percentage axes, showing the proportion of faulty circuits remaining after test as a function of fault coverage]
seven ICs faulty, which is far too high for most customers. Hence, to ensure a
high percentage of fault-free circuits after test, either the manufacturing yield
Y or the fault coverage FC, or both, must be high.
This, therefore, is the dilemma of testing complex integrated circuits, or
indeed any complex system. If testing efficiency is low then faulty circuits may
escape through the test procedure. On the other hand, the achievement of
near 100 % fault detection (FC= 1.0) may require such extensive testing as to
be prohibitively costly unless measures are taken at the design stage to
facilitate such a level of test.
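This trade-off between yield, fault coverage and shipped defective parts is often approximated in the testing literature by the Williams-Brown defect-level relation, DL = 1 - Y^(1-FC). The following short sketch (in Python; the function name is ours) illustrates the dilemma numerically:

```python
def defect_level(yield_y: float, fault_coverage: float) -> float:
    """Williams-Brown estimate of the fraction of tested-good ICs
    that are nevertheless defective: DL = 1 - Y**(1 - FC)."""
    return 1.0 - yield_y ** (1.0 - fault_coverage)

# With 50 % yield, even 90 % fault coverage ships about 6.7 %
# defective parts; only FC = 1.0 ships none.
print(round(defect_level(0.5, 0.90), 3))   # 0.067
print(defect_level(0.5, 1.0))              # 0.0
```

Note that with a perfect yield (Y = 1.0) the defect level is zero whatever the coverage, which is why vendors strive to improve yield as well as test quality.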
Before we continue with the main principles and practice of integrated
circuit testing, let us consider a little further Figure 1.2 and problems which
can specifically arise with OEM use of custom ICs. During the design phase,
simulation of the IC will have been undertaken and approved before
fabrication was commenced. This invariably involves the vendor's CAD
resources for the final post-layout simulation check, and from this simulation
a set of test vectors for the chip may be automatically generated which can be
downloaded into the vendor's sophisticated general-purpose test equipment.
The vendor's tests on prototype custom ICs may therefore be based upon this
simulation data, and if the circuits pass this means that they conform to the
simulation results which will have been approved by the OEM.
Unfortunately, history shows that very many custom IC designs which pass
such tests are subsequently found to be unsatisfactory under working product
conditions. This is not because the ICs are faulty, but rather that they were
not designed to provide the exact functionality required in the final product.
The given IC specification was somehow incomplete or faulty, perhaps in a
very minor way such as the active logic level of a digital input signal being
incorrectly specified as logic 1 instead of logic 0, or some product
specification change not being transferred to the IC design specification.
Other problems may also arise between parties, such as:
• a vendor's general-purpose computer-controlled VLSI test system, see
Figure 1.4, which although it may have cost several million dollars, may
not have the capability to apply some of the input conditions met in the
final product, for example nonstandard digital or analogue signals or
Schmitt trigger hysteresis requirements;
• similarly, some of the output signals which the custom circuit provides
may not be precisely monitored by the vendor's standard test system;
• the vendor may also only be prepared to apply a limited number of tests
to each production IC, and not anywhere near an exhaustive test;
• the vendor's test system may not test the IC at the operating speed or
range of speeds of the final product.
The main point we need to make is that in the world of custom
microelectronics the OEM and the vendor must co-operate closely and
intelligently to determine acceptable test procedures. When the OEM is using
standard off the shelf ICs then the responsibility for incoming component
and product testing is entirely his. However, in both cases the concepts and
Figure 1.4 A typical general-purpose VLSI tester as used by IC vendors and test
houses (Photo courtesy Advantest Corporation, Japan)
Table 1.1 An example of a test set for a combinational network with eight inputs and
four outputs

                     Test vectors                Output response
                     x1 x2 x3 x4 x5 x6 x7 x8    Y1 Y2 Y3 Y4
First test pattern   0  1  1  1  0  0  1  1     0  0  0  1
Next test pattern    0  1  1  1  0  1  0  0     0  1  0  0
Next test pattern    1  1  1  1  0  1  0  0     0  1  1  0
than any fundamental difficulty in the testing requirements. For circuits with
a small number of logic gates and internal storage circuits, the problem is not
acute; fully-exhaustive functional testing may be possible. As circuits grow to
LSI and VLSI complexity, then techniques to ease this problem such as will
be considered later become increasingly necessary.
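The mechanics of a stored-response functional test such as Table 1.1, and the exponential growth which makes exhaustive testing impractical, may be sketched as follows (the eight-input network chosen here is invented purely for illustration):

```python
from itertools import product

# A hypothetical 8-input, 4-output combinational network.
def network(bits):
    x1, x2, x3, x4, x5, x6, x7, x8 = bits
    return (x1 & x2, x3 ^ x4, x5 | x6, x7 & (x8 ^ 1))

# Apply a stored test set and compare with the known-good responses:
test_set = [(0, 1, 1, 1, 0, 0, 1, 1), (0, 1, 1, 1, 0, 1, 0, 0)]
golden = [network(v) for v in test_set]
passed = all(network(v) == g for v, g in zip(test_set, golden))
print(passed)   # True for a fault-free circuit

# A fully exhaustive functional test needs 2^8 = 256 vectors here,
# but over a million for a 20-input combinational block.
print(sum(1 for _ in product((0, 1), repeat=8)))   # 256
```

The doubling of the vector count with every added input is the essential reason why the non-exhaustive techniques considered later become necessary.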
In contrast to the very large number of logic gates and storage circuits
encountered in digital networks, purely analogue networks are usually
characterised by having relatively few circuit primitives such as operational
amplifiers, etc. The complexity of sheer numbers is replaced by the increased
complexity of each building block, and the need to test a range of parameters
such as gain, bandwidth, signal to noise ratio, common-mode rejection
(CMR), offset voltage and other factors. Although faults in digital ICs are
usually catastrophic in that incorrect 0 or 1 bits are involved, in analogue
circuits degraded circuit performance as well as nonfunctional operation has
to be checked.
Prototype analogue ICs are subject to comprehensive testing by the vendor
before production circuits are released. Such tests will involve wafer
fabrication parameters as well as circuit parameters, and the vendor must
ensure that these fabrication parameters remain constant for subsequent
production circuits. The vendor and the OEM, however, still need to test
production ICs, since surface defects or mask misalignments or other
production factors may still cause unacceptable performance, and therefore
some subset of the full prototype test procedures may need to be determined.
Completed analogue systems obviously have to be tested by the OEM after
manufacture, but this is unique to the product and will not be considered
here.
The actual testing of analogue ICs involves standard test instrumentation
such as waveform generators, signal analysers, programmable supply units,
voltmeters, etc., which may be used in the test of any type of analogue system.
Comprehensive general-purpose test systems are frequently made as an
assembly of rack-mounted instruments, under the control of a dedicated
microcontroller or processor. Such an assembly is known as a rack-and-stack
test resource, as illustrated in Figure 1.5. The inputs and outputs of the
individual instruments are connected to a backplane general purpose
instrumentation bus (GPIB), which is a standard for microprocessor or
computer-controlled commercial instruments. (The HP instrumentation bus
HPIB is electrically identical.)
In the case of custom ICs it is essential for the OEM to discuss the test
requirements very closely with the vendor. With very complex analogue
circuits a high degree of specialist ability may be involved, which can only be
acquired through considerable experience in circuit design.
Figure 1.5 A rack-and-stack facility for the testing of analogue circuits, involving a
range of instrumentation with computer control of their operation (Photo
courtesy IEEE, ©1987 IEEE)
Testing of the analogue part and testing of the digital part of a combined
analogue/digital circuit or system each require their own distinct forms of
test, as introduced in the two preceding sections.* Hence, it is usually
necessary to have the interface between the two brought out to accessible test
points so that the analogue tests and the digital tests may be performed
separately.
In the case of relatively unsophisticated mixed circuits containing, say, a
simple input A-to-D converter, some internal digital processing, and an
output D-to-A converter, it may be possible to define an acceptable test
procedure without requiring access to the internal interfaces. All such cases
must be individually considered, and so no hard and fast rules can be laid
down.
There are on-going developments which seek to combine both analogue
and digital testing by using multiple discrete voltage levels or serial bit
streams to drive the analogue circuits, or voltage-limited analogue signals for
the digital circuits. However, this work is still largely theoretical; more
successful so far in the commercial market has been the incorporation of
both analogue test instrumentation and digital test instrumentation within
one test assembly, as illustrated in Figure 1.6. Here the separate resources are
linked by an instrumentation bus, with the dedicated host processor or
computer being programmed to undertake the necessary test procedures.
When such resources are used it is essential to consider their capabilities, and
perhaps constraints, during the design phase of a mixed analogue/digital IC,
particularly as appropriate interface points between the two types of circuit
may still be required.
From the previous sections it will be clear that the problems of test escalate as
the complexity of the IC or system increases. If no thought is given to 'how-
* We exclude here in our considerations the testing of individual A-to-D and D-to-A
converters, particularly high speed flash converters. The vendor testing of these and
similar mass-production circuits usually involves extremely expensive test resources
especially designed for this purpose, similar in appearance to the VLSI test equipment
shown in Figure 1.4.
[Diagram: analogue instrumentation and digital instrumentation linked by the GPIB, with synchronisation between the two, feeding integrated signal distribution and device fixturing]
Figure 1.6 A mixed analogue/digital test resource, with the analogue and the digital
tests synchronised under host computer control. This can be a rack-and-
stack assembly, as in Figure 1.5 for the analogue-only case
shall-we-test-it' at the design stage then a product may result which cannot be
adequately tested within an acceptable time scale or cost of test.
Design for test (DFT) is therefore an essential part of the design phase of
a complex circuit. As we will see later, DFT involves building into the circuit
or system some additional feature or features which would not otherwise be
required. These may be simple features, such as:
• the provision of additional input/output (I/O) pins on an IC or system,
which will give direct access to some internal points in the circuit for
signal injection or signal monitoring;
• provision to break certain internal interconnections during the test
procedure, for example feedback loops;
• provision to partition the complete circuit into smaller parts which may be
individually tested;
or more complicated features such as reconfigurable circuit elements which
have a normal mode of operation and a test mode of operation.
As will be seen later, one of the most powerful test techniques for digital
VLSI circuits is to feed a serial bit stream into a circuit under test to load test
signals into the circuit. The resulting signals from the circuit are then fed out
in an output serial bit stream, and checked for correct response. Such
techniques are scan test techniques; they become essential when, for
example, the IC under test is severely pin-limited, precluding parallel feeding
in of all the desired test signals in a set of wide test vectors. One penalty for
having to adopt scan test methods is an increase in circuit complexity, and
hence chip size and cost. We will be considering all these factors in detail in
later chapters of this text.
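The scan principle just described may be modelled in miniature as follows; the chain length and the combinational block (a simple bitwise inversion) are invented for illustration:

```python
def scan_test(stimulus, logic):
    """Toy scan chain: serially shift a stimulus in, capture the
    combinational response in the same flip-flops, then shift it out."""
    n = len(stimulus)
    chain = [0] * n
    for bit in stimulus:           # serial shift-in, one bit per clock
        chain = [bit] + chain[:-1]
    chain = logic(chain)           # one capture clock in normal mode
    out = []
    for _ in range(n):             # serial shift-out
        out.append(chain[-1])
        chain = [0] + chain[:-1]
    return out                     # bit order cancels for bitwise logic

invert = lambda bits: [b ^ 1 for b in bits]
print(scan_test([1, 0, 1, 1], invert))   # [0, 1, 0, 0]
```

Only n clock periods are needed each way through the chain, whatever the pin count of the package; the price paid is the extra multiplexing circuitry noted above.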
All OEM systems employ some form of printed-circuit board (PCB) for the
assembly of ICs and other components. PCB complexity ranges from very
simple one- or two-sided boards to extremely complex multilayer boards
containing ten or more layers of interconnect which may be necessary for
avionics or similar areas.
PCB testing falls into three categories, namely:
(i) bare-board testing, which seeks to check the continuity of all tracks on
the board before any component assembly is begun;
(ii) in-circuit testing, which seeks to check the individual components,
including ICs, which are assembled on the PCB;
(iii) functional testing, which is a check on the correct functionality of the
completed PCB.
Bare-board testing
Simple one- or two-sided bare boards may be checked by visual inspection.
However, as layout size and complexity increases, then expensive PCB
continuity testers become necessary. Connections to the board under test are
made by a comprehensive 'bed-of-nails' fixture, which is unique for every
PCB layout, test signals being applied and monitored by a programmed
sequence from the tester's dedicated processor or computer. One hundred
per cent correct continuity is required from such tests.
In-circuit testing
In-circuit testing, the aim of which is to find gross circuit faults before
commencing any fully-detailed functional testing, may or may not be done. If
it is, then electrical connections to the individual components are again
made via a bed-of-nails fixture, the processor of the in-circuit tester being
programmed to supply and monitor all the required test signals.
The fundamental problem with in-circuit passive component measurement
is that the component being measured is not isolated from preceding and/or
following components on the board. For discrete components a
measurement technique as shown in Figure 1.7 is necessary. Here Zx and Z2
are the impedance paths either side of the component Zx being measured. By
connecting both of these paths to ground, virtually all the current flowing
into the in-circuit measuring operational amplifier comes from the test
source vs via the component Z r Current flowing through Z2 is negligible
because of the virtual earth condition on the inverting input of the
operational amplifier. Hence:

    vOUT = -(vs / Zx) × Rfb

whence

    Zx = (vs × Rfb) / |vOUT|
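Rearranging the amplifier relation gives the unknown component value directly from the measured quantities, as this small sketch shows (the function and variable names are ours):

```python
def unknown_impedance(v_source, r_feedback, v_out_magnitude):
    """In-circuit component value from the guarded measurement of
    Figure 1.7: |vOUT| = (vs / Zx) * Rfb, so Zx = vs * Rfb / |vOUT|."""
    return v_source * r_feedback / v_out_magnitude

# A 1 V test source, a 10 kOhm feedback resistor and a measured
# 2 V output magnitude imply a 5 kOhm component under test.
print(unknown_impedance(1.0, 10e3, 2.0))   # 5000.0
```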
Functional testing
In contrast to the bed of nails fixtures noted above, PCB functional testing
must access the circuit through its normal edge connector(s) or other I/O
Figure 1.7 The technique used for the in-circuit testing of passive component values.
In practice additional complexity may be present to overcome errors due to
source impedance, lead impedance, offset voltages, etc.
terminals. Probing of internal tracks on the PCB may be done for specific
fault-finding purposes, but not during any automated test procedure.
With fully-assembled PCBs we are effectively doing a systems test. This will
be unique for every OEM product, and may involve an even greater
complexity of test than individual VLSI circuits. We will not pursue functional
PCB testing any further in this text, except to note that all the theory and
techniques which will be discussed in the following chapters apply equally to
complex PCB assemblies; design for test (DFT) techniques must be con-
sidered at the design stage, and PCB layouts must incorporate the necessary
provisions for the final functional tests. Scan testing (see Chapter 5) in
particular may need to be incorporated to give test access from PCB I/Os to
internal IC packages.
This first chapter has been a broad overview of the problems of testing
circuits of VLSI complexity and systems into which they may be assembled.
As will be appreciated, the testing problem is not usually one of fundamental
technical difficulty, but much more one of the time and/or the cost neces-
sary to undertake a procedure which would guarantee 100 % correct
functionality.
The subsequent chapters of this text will therefore consider the types of
failures which may be encountered in microelectronic circuits, fault models
for digital circuits, the problems of observability and controllability and the
various techniques that are available to ease the testing of both digital and
mixed analogue/digital circuits. Finally, the financial aspects of testing which
reflect back upon the initial design of the circuit or system will be considered,
as well as the production quantities that may be involved.
We will conclude this chapter with a list of publications which may be
relevant for further general or specific reading. The more specialised ones
may be referenced again in the following chapters.
* The extreme case of this is possibly the Star Wars research and development
programme, which would have been impossible to test under operational conditions.
10 BARDELL, P.H., McCANNEY, W.H., and SAVIR, J.: 'Built-in test for VLSI:
pseudorandom techniques' (Wiley, 1987)
11 BENNETTS, R.G.: 'Design of testable logic circuits' (Addison-Wesley, 1984)
12 BATESON, J.: 'In-circuit testing' (Van Nostrand Reinhold, 1985)
13 GULATI, R.K., and HAWKINS, C.F.: 'IDDQ testing of VLSI circuits' (Kluwer, 1993)
14 BLEEKER, H., Van den EIJNDEN, P., and De JONG, F.: 'Boundary scan test: a
practical approach' (Kluwer, 1993)
15 RAJSUMAN, R.: 'Digital hardware testing: transistor-level fault-modeling and
testing' (Artech House, 1992)
16 MILLER, D.M. (Ed.): 'Developments in integrated circuit testing' (Academic
Press, 1987)
17 WILLIAMS, T.W. (Ed.): 'VLSI testing' (North-Holland, 1986)
18 RUSSELL, G., and SAYERS, I.L.: 'Advanced simulation and test methodologies for
VLSI design' (Van Nostrand Reinhold, 1989)
19 RUSSELL, G. (Ed.): 'Computer aided tools for VLSI system design' (Peter
Peregrinus, 1987)
20 MASSARA, R.E. (Ed.): 'Design and test techniques for VLSI and WSI circuits'
(Peter Peregrinus, 1989)
21 SOIN, R.S., MALOBERTI, F., and FRANCA, J. (Eds.): 'Analogue digital ASICs: circuit
techniques, design tools and applications' (IEE Peter Peregrinus, 1991)
22 TRONTELJ, J., TRONTELJ, L., and SHENTON, G.: 'Analogue digital ASIC
design' (McGraw-Hill, 1989)
23 ROBERTS, G.W., and LU, A.K.: 'Analogue signal generation for the built-in-self-
test of mixed signal ICs' (Kluwer, 1995)
24 NAISH, P., and BISHOP, P.: 'Designing ASICs' (Wiley, 1988)
25 'Design for testability'. Open University microelectronics for industry publication
PT505DFT, 1988
26 HURST, S.L.: 'Custom VLSI microelectronics' (Prentice Hall, 1992)
27 NEEDHAM, W.M.: 'Designer's guide to testable ASIC devices' (Van Nostrand
Reinhold, 1991)
28 DI GIACOMO, J.: 'Designing with high performance ASICs' (Prentice Hall, 1992)
29 WHITE, D.E.: 'Logic design for array-based circuits: a structure design
methodology' (Academic Press, 1992)
30 BENNETTS, R.G.: 'Introduction to digital board testing' (Edward Arnold, 1982)
31 MAUNDER, C.: 'The board designer's guide to testable logic circuits' (Addison-
Wesley, 1992)
32 O'CONNOR, P.D.T.: 'Practical reliability engineering' (Wiley, 1991)
33 CHRISTOU, A.: 'Integrating reliability into microelectronics manufacture'
(Wiley, 1994)
34 MYERS, G.J.: 'Software reliability: principles and practice' (Wiley, 1976)
35 MYERS, G.J.: 'The art of software testing' (Wiley, 1979)
36 SMITH, D.J., and WOOD, K.B.: 'Engineering quality software' (Elsevier, 1989)
37 MITCHELL, R.J. (Ed.), 'Managing complexity in software engineering'
(Institution of Electrical Engineers, 1990)
38 SOMMERVILLE, I.: 'Software engineering' (Addison-Wesley, 1992)
39 SIMPSON, W.R., and SHEPPARD, J.W.: 'System test and diagnosis' (Kluwer, 1994)
40 ARSENAULT, J.E., and ROBERTS, J.A. (Eds.): 'Reliability and maintainability of
electronic systems' (Computer Science Press, 1980)
41 KLINGER, D.J., NAKADA, Y., and MENENDEZ, M.A. (Eds.): 'AT&T reliability
manual' (Van Nostrand Reinhold, 1990)
Chapter 2
Faults in digital circuits
In considering the techniques that may be used for digital circuit testing, two
distinct philosophies may be found, namely:
(a) to undertake a series of functional tests and check for the correct (fault-
free) 0 or 1 output response(s);
(b) to consider the possible faults that may occur within the circuit, and
then to apply a series of tests which are specifically formulated to check
whether each of these faults is present or not.
The first of the above techniques is conventionally known as functional
testing. It does not consider how the circuit is designed, but only that it gives
the correct outputs during test. This is the only type of test which an OEM can
do on a packaged IC when no details of the circuit design and silicon layout
are known.
The second of the above techniques relies upon fault modelling. The
procedure now is to consider faults which are likely to occur on the wafer
during the manufacture of the ICs, and compute the result on the circuit
output(s) with and without each fault present. Each of the final series of tests
is then designed to show that a particular fault is or is not present. If none of
the chosen set of faults is detected then the IC is considered to be fault free.
Fault modelling relies upon a choice of the types of fault(s) to consider in
the digital circuit. It is clearly impossible to consider every conceivable
imperfection which may be present, and therefore only one or two types of
fault are normally considered. These are commonly stuck-at faults, where a
particular node in the circuit is always at logic 0 or at logic 1, and bridging
faults, where adjacent nodes or tracks are considered to be shorted together.
We will consider these faults in detail in the following sections.
The potential advantage of using fault modelling for test purposes over
functional testing is that a smaller set of tests is necessary to test the circuit.
This is aided by the fact that a test for one potential fault will often also test
for other faults, and hence the determination of a minimum test set to cover
all the faults being modelled is a powerful objective. However, in theory a
digital circuit which passes all its fault modelling tests may still not be fully
functional if some other, possibly obscure, fault is present, but the probability
of this is usually considered to be acceptably small.
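The way in which a single test vector covers several modelled faults can be demonstrated on a toy netlist; the circuit y = (a AND b) OR c used below is invented purely for illustration:

```python
from itertools import product

def evaluate(a, b, c, fault=None):
    """y = (a AND b) OR c, with an optional stuck-at fault given as
    (node_name, stuck_value) forced on one named node."""
    def v(name, val):
        # force the node value if this is the faulted node
        return fault[1] if fault and fault[0] == name else val
    a, b, c = v("a", a), v("b", b), v("c", c)
    n = v("n", a & b)          # internal node n
    return v("y", n | c)

# All single stuck-at-0 and stuck-at-1 faults on the five nodes:
faults = [(node, sa) for node in "abcny" for sa in (0, 1)]
detects = {f: [t for t in product((0, 1), repeat=3)
               if evaluate(*t, fault=f) != evaluate(*t)]
           for f in faults}

print(all(detects.values()))   # True: every fault here is detectable
# One vector often covers several faults at once:
print([f for f, t in detects.items() if (1, 1, 0) in t])
# [('a', 0), ('b', 0), ('n', 0), ('y', 0)]
```

Here the single vector (a, b, c) = (1, 1, 0) detects four of the ten modelled faults, which is exactly why a small, carefully chosen test set can cover the whole fault list.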
Clearly, the further a node is from the primary inputs, the more difficult it
becomes to control the logic value of that node.
Some nodes in a working circuit may not be controllable from the primary
inputs. Consider the monitoring circuit shown in Figure 2.2a. With a healthy
circuit the outputs from the two subcircuits will always be identical, and no
means exists of influencing the output of the monitoring circuit. An addition
such as shown in Figure 2.2b is necessary in order to provide controllability of
the monitoring circuit response.
Faults in digital circuits 21
Figure 2.2 An example of a circuit node that is not controllable from a primary input
a basic system with duplicated circuits A and B
b an addition necessary to give controllability
Monitoring circuits and also circuits containing redundancy both present
inherent difficulties in controllability; additional primary inputs just for test
purposes become necessary. Another possible difficulty may arise in the case
where the IC contains a ROM structure, the outputs of which drive further
logic circuits. Here the ROM programming may preclude the application of
certain desirable test signals to the further logic, limiting the possible
controllability of the latter.
The controllability of circuits containing latches and flip-flops (sequential
circuits) is also often difficult or impossible, since a very large number of test
vectors may have to be applied to change the value of a node in or controlled
by the sequential circuits. Additionally there will often be certain states of the
circuit which are never used, for example in a four-bit BCD counter which
only cycles through ten of the possible 16 states. It is, therefore, frequently
necessary to partition or reconfigure counter assemblies into a series of
smaller blocks, each of which can be individually addressed under test
conditions. (This is one of the design for test philosophies that we will
consider in greater detail in a later chapter.)
Turning to observability, consider the simple circuit shown in Figure 2.3.
Suppose it is necessary to observe (monitor) the logic value on node 2. In
order that this logic value propagates to the primary output Z, to give a
different logic value at Z depending upon whether the node is at logic 0 or
logic 1, it is clear that nodes 1 and 4 must be set to logic 1 and node 6 to logic
0. Hence, the primary signals must be chosen so that these conditions are
present on nodes 1, 4 and 6, in which case output Z will be solely dependent
upon node 2. Node 2 will then be observable. This procedure is sometimes
termed sensitising, forward driving or forward propagating the path from a
node to an observable output.
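The sensitising procedure can be illustrated with a short Python sketch. The gate structure used here is hypothetical, chosen only to be consistent with the description of Figure 2.3; the sensitising values (nodes 1 and 4 at logic 1, node 6 at logic 0) are taken from the text.

```python
# Hypothetical circuit consistent with the description of Figure 2.3:
# Z = ((n1 AND n2) AND n4) OR n6.  The gate arrangement is an assumption;
# only the sensitising conditions come from the text.
def circuit(n1, n2, n4, n6):
    return ((n1 & n2) & n4) | n6

# With nodes 1 and 4 at logic 1 and node 6 at logic 0, the path from
# node 2 to the primary output Z is sensitised: Z follows node 2 exactly.
for n2 in (0, 1):
    assert circuit(1, n2, 1, 0) == n2

# If a sensitising condition is violated, Z no longer reveals node 2:
assert circuit(1, 0, 1, 1) == circuit(1, 1, 1, 1) == 1   # node 6 = 1 masks node 2
```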
The general characteristics of controllability and observability for any
given network are therefore as shown in Figure 2.4. Provided that there is no
redundancy in the network, that is, all paths must at some time switch in order
to produce a fault-free output, then it is always possible to determine two (or
more) input test vectors which will check that each internal node of the
circuit correctly switches from 0 to 1 or from 1 to 0, or fails to do so if there
is a fault at the node. However, the complexity of determining the smallest set
of such vectors to test all internal nodes is high, way beyond the bounds of
computation by hand except in the case of extremely simple circuits. If
sequential circuits are also present, then there is the additional complexity of
ensuring that the sequential circuits are in their required states as well.
Many attempts have been made to quantify the controllability and
observability of a given circuit, to allow difficult to test circuits to be identified
and hopefully modified during the design phase [1-10]. The software packages
which were developed include TMEAS, 1979 [1], TEST/80, 1979 [3], SCOAP,
1980 [4], CAMELOT, 1981 [6], VICTOR, 1982 [7], and COMET, 1982 [8]. These are
discussed in Bennetts [11], Bardell et al. [12] and Abramovici et al. [13]. The basic
concepts used in the majority of these developments involve (i) the com-
putation of a number intended to represent the degree of difficulty in setting
each internal node of the circuit under test to 0 and 1 (O-controllability and
1-controllability) and (ii) the computation of another number intended to
represent the difficulty of forward propagating the logic value on each node
to an observable output. The difficulty of testing the circuit is then related to
some overall consideration of these individual numerical values. Further
developments in the use of this data so as to ease the testing difficulty were
also pursued [14].
In the majority of these developments, controllability was normalised to the
range 0 to 1, with 0 representing a node which was completely uncontrollable
from a primary input, and 1 representing a node with direct controllability.
Typically, a controllability transfer factor, CTF, for every type of combinational
logic gate or macro is derived from the expression:
   CTF = 1 - |N(0) - N(1)| / (N(0) + N(1))                    (2.1)

where N(0) and N(1) are the number of input patterns for which the
component output is logic 0 and logic 1, respectively; in other words N(0) is
the number of 0s in the truth table or Karnaugh map for the component, and
N(1) is the number of 1s. Components such as an inverter gate or an
exclusive-OR gate have a value of CTF = 1, since they have an equal number
of 0s and 1s in their truth table; n-input AND, NAND, OR and NOR gates,
however, have a controllability transfer factor value of 1/2^(n-1), as may readily
be shown by evaluating the above equation.
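Equation 2.1 may be checked with a short Python sketch which evaluates CTF directly from a component's truth table; the gate definitions are illustrative only.

```python
from itertools import product

def ctf(gate, n_inputs):
    """Controllability transfer factor (eqn 2.1):
    CTF = 1 - |N(0) - N(1)| / (N(0) + N(1))."""
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    n0 = outputs.count(0)
    n1 = outputs.count(1)
    return 1 - abs(n0 - n1) / (n0 + n1)

inverter = lambda a: 1 - a
xor2     = lambda a, b: a ^ b
nand3    = lambda a, b, c: 1 - (a & b & c)

assert ctf(inverter, 1) == 1.0          # equal 0s and 1s in the truth table
assert ctf(xor2, 2) == 1.0
assert ctf(nand3, 3) == 1 / 2 ** 2      # 1/2^(n-1) for an n-input NAND
```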
The controllability factor, CY, of a component output with inputs which are
not directly accessible from primary inputs is then computed by the equation:

   CY_output = (CTF x CY_inputs)                              (2.2)

where CTF is the controllability transfer factor for the component and
CY_inputs is the average of the controllability values CY on the component
input lines. (For components with inputs which are directly accessible
CY_inputs = 1, and hence CY_output = CTF for this special case.) Hence, working
through the circuit the
controllability value of every node from primary input to primary output can
be given a numerical value.
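Equation 2.2 may then be applied level by level. The following Python sketch uses a hypothetical two-level circuit of two-input NAND gates (each with CTF = 1/2) to show how controllability degrades with logic depth:

```python
def cy_output(ctf_value, cy_inputs):
    """Eqn 2.2: CY_output = CTF x average(CY of the input lines);
    a primary input has CY = 1 by definition."""
    return ctf_value * sum(cy_inputs) / len(cy_inputs)

CTF_NAND2 = 0.5            # 1/2^(n-1) with n = 2

# Hypothetical circuit: G1 and G2 are NAND2 gates fed from primary inputs,
# and G3 is a NAND2 gate fed from the outputs of G1 and G2.
cy_g1 = cy_output(CTF_NAND2, [1.0, 1.0])     # = 0.5
cy_g2 = cy_output(CTF_NAND2, [1.0, 1.0])     # = 0.5
cy_g3 = cy_output(CTF_NAND2, [cy_g1, cy_g2])
assert cy_g3 == 0.25        # controllability degrades with logic depth
```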
In a similar manner to determining the controllability transfer factor value
for each type of gate or macro, in considering observability an observability
transfer factor, OTF, is determined for each component. This factor is the
relative difficulty of determining from a component output value whether
there is a faulty input value (error) on an input to the component. An
inverter gate clearly has an observability transfer factor value of 1; for n-input
AND, NAND, OR and NOR gates the value is 1/2^(n-1), the same as the CTF
value. The exact equation for OTF may be found in the literature [11-13], and
does not necessarily have the same value as CTF for all logic macros.
The observability factor, OY, for each node in a circuit is next determined
by working backwards from the primary outputs through components towards
the primary inputs, generating the observability value OY for every node. The
value of OY is given by an equation similar to eqn. 2.2, namely:

   OY_inputs = (OTF x OY_outputs)                             (2.3)

where OTF is the observability transfer factor for each component, and
OY_outputs is the average of the observability values for the individual output
nodes of the component or macro.
This computation of controllability and observability values, however, is
greatly complicated by circuit factors such as reconvergent fan-out, feedback
paths and the presence of sequential circuits. Combining the two values so as
to give a single measure of testability, TY, is also problematic. The simple
relationship:

   TY_overall = ( Σ_all nodes TY_node ) / (number of nodes)   (2.5)

may be used,
but this in turn is not completely satisfactory since it does not, for example,
show any node(s) which are not controllable or observable, i.e. nodes which
have a value TY_node = 0. Although these and other numerical values for
controllability and observability generally follow the relationships shown in
Figure 2.4, experience has shown that quantification has relatively little use.
In particular:
• the analysis does not always give an accurate measure of the ease or
otherwise of testing;
• it is applied after the circuit has been designed, and does not give any
help at the initial design stage or direct guidance on how to improve
testability;
• it does not give any help in formulating a minimum set of test vectors for
the circuit.
Hence, although controllability and observability are central to digital
testing, programs which merely compute testability values have little present-
day interest, particularly with VLSI circuits where their computational
complexity may become greater than that necessary to determine a set of
acceptable test vectors for the circuit, see later. SCOAP [4], CAMELOT [6] and
VICTOR [7] possibly represent the most successful programs which were
developed. For further comments see Savir [15], Russell [16], and Agrawal and
Mercer [17].
We will have no need to refer again to testability quantification in this text,
but the concept of forward propagating test signals from primary inputs
towards the primary outputs and backward tracing towards the inputs will
arise in our further discussions.
* For a function with N nodes a total of 2N single stuck-at faults have to be considered,
but the theoretical number of possible multiple stuck-at faults is 3^N - 2N - 1. This is
clearly a very large number to consider.
Table 2.1 All possible stuck-at faults on a three-input NAND gate; the wrong outputs
are shown in parentheses

Inputs   Output Z
A B C    Fault-  A      A      B      B      C      C      Z      Z
         free    s-a-0  s-a-1  s-a-0  s-a-1  s-a-0  s-a-1  s-a-0  s-a-1
0 0 0    1       1      1      1      1      1      1      (0)    1
0 0 1    1       1      1      1      1      1      1      (0)    1
0 1 0    1       1      1      1      1      1      1      (0)    1
0 1 1    1       1      (0)    1      1      1      1      (0)    1
1 0 0    1       1      1      1      1      1      1      (0)    1
1 0 1    1       1      1      1      (0)    1      1      (0)    1
1 1 0    1       1      1      1      1      1      (0)    (0)    1
1 1 1    0       (1)    0      (1)    0      (1)    0      0      (1)
Table 2.2 The minimum test set for detecting all possible stuck-at faults in a three-input
NAND gate

Input        Healthy  Wrong    Faults detected
test vector  output   output   by test vector
A B C
0 1 1        1        0        A s-a-1 or Z s-a-0
1 0 1        1        0        B s-a-1 or Z s-a-0
1 1 0        1        0        C s-a-1 or Z s-a-0
1 1 1        0        1        A or B or C s-a-0, or Z s-a-1
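The coverage claimed in Tables 2.1 and 2.2 can be confirmed by a small Python fault-simulation sketch which injects each single stuck-at fault in turn and checks that the four-vector test set detects it:

```python
def nand3(a, b, c, fault=None):
    """Three-input NAND with an optional single stuck-at fault,
    e.g. fault=('A', 0) for A s-a-0 or fault=('Z', 1) for Z s-a-1."""
    lines = {'A': a, 'B': b, 'C': c}
    if fault and fault[0] in lines:
        lines[fault[0]] = fault[1]
    z = 1 - (lines['A'] & lines['B'] & lines['C'])
    if fault and fault[0] == 'Z':
        z = fault[1]
    return z

faults = [(line, v) for line in 'ABCZ' for v in (0, 1)]
test_set = [(0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]   # Table 2.2

# Every one of the eight single stuck-at faults changes the output
# for at least one vector of the four-vector test set.
for f in faults:
    assert any(nand3(*v) != nand3(*v, fault=f) for v in test_set)
```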
Figure 2.5 A simple example showing the test vector required to test for and propagate
the output of gate G1 stuck-at 0 to the primary output, the three lower
inputs being all don't cares
Figure 2.6 A further example giving a minimum test set of 15 test vectors which test
all the circuit nodes for possible s-a-0 and s-a-1 faults. The Xs in the input
vectors are don't cares; 1/0, 2/1, etc. in the faults-covered listing means
node 1 s-a-0 tested, node 2 s-a-1 tested, and so on (Acknowledgement:
Oxford University, UK)
This assumes that a short may occur between any two lines, but in practice
shorts between physically adjacent lines are clearly more realistic. However,
if bridging between more than two lines is considered, then the number
of theoretically possible bridging faults escalates rapidly [13]. In general, the
number of feasible bridging faults between two or more lines is usually
greater than the theoretical number of possible stuck-at faults in a
circuit, and although it is straightforward to derive a single test vector that
will cover several stuck-at faults in the circuit, this is not so for bridging
faults.
A further difficulty with bridging faults is that a fault may result in a
feedback path being established, which will then cause some sequential
circuit action. Hence, bridging faults have been classified as sequential
bridging faults or combinational bridging faults, depending upon the nature
of the fault. A sequential bridging fault may also result in circuit oscillation if
the feedback loop involves an odd number of inversions in an otherwise
purely combinational network.
Extensive consideration has been given to bridging faults, including the
consideration that bridged lines are logically equivalent to either wired-OR or
wired-AND functions [25-33], but no general fault model is possible which caters
for the physical complexity of present-day VLSI circuits. It has been suggested
that most (but not all) shorts in combinational logic networks are detected by
a test set based upon the stuck-at model, provided that the latter is designed
to cover 99 % or more of the possible stuck-at circuit faults [12]. This statement,
however, is increasingly debatable as chip size increases and where CMOS
technology is involved, see the following discussions.
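As an illustration of the wired-logic model cited above, the following Python sketch treats a bridging fault between two gate outputs as a wired-AND; the two-gate circuit itself is hypothetical, chosen only to show which vectors expose the short:

```python
from itertools import product

# Hypothetical two-output circuit: Z1 = A.B and Z2 = C + D, with a
# bridging fault between the two output lines modelled as wired-AND
# (both shorted lines take the value Z1.Z2).
def healthy(a, b, c, d):
    return (a & b, c | d)

def bridged(a, b, c, d):
    z1, z2 = healthy(a, b, c, d)
    w = z1 & z2                      # wired-AND of the shorted lines
    return (w, w)

# The bridge is detected by any vector for which Z1 != Z2, since the
# short then forces at least one line away from its healthy value.
detecting = [v for v in product((0, 1), repeat=4)
             if bridged(*v) != healthy(*v)]
assert (0, 0, 1, 0) in detecting     # Z1 = 0, Z2 = 1: wired-AND pulls Z2 low
assert (1, 1, 0, 0) in detecting     # Z1 = 1, Z2 = 0: wired-AND pulls Z1 low
```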
* Because the resistance of p-type FETs is higher than that of similar dimension n-type
FETs, it is preferable to connect the n-channel FETs in series and the p-channel FETs in
parallel. Hence, NAND gates rather than NOR gates become the preferred basic logic element.
Table 2.3 An exhaustive test set for single open-circuit (O/C) and short-circuit (S/C)
faults in a two-input CMOS NAND gate

Input test   Healthy gate   Check
vector AB    output Z
0 0          1
0 1          1              T3 S/C check*
1 1          0              T1 or T2 S/C check*; T3 or T4 O/C check
1 0          1              T2 O/C check
0 0          1
1 0          1              T4 S/C check*
1 1          0              (as test vector 11 above)
0 1          1              T1 O/C check
0 0          1

* excessive current if transistor short circuit
Bridging faults within CMOS gates also cause failures which may not be
modelled by the stuck-at fault model, particularly where more complex
CMOS structures are present. Consider the circuit shown in Figure 2.9. The
bridging fault shown will connect the gate output to ground under the input
conditions of AB + CD + AD + BC = 1 instead of the normal conditions of
AB + CD = 1. However when input conditions AD or BC = 1 are present, there
Figure 2.9 A possible bridging fault within a complex CMOS gate, the fault-free
output being Z = (AB + CD)'
a the circuit topology
b the equivalent Boolean circuit
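The faulty pull-down condition quoted above may be checked with a switch-level Python sketch of the n-channel network; the assignment of transistors to the two series chains is an assumption consistent with the caption of Figure 2.9:

```python
from itertools import product

# Switch-level sketch of the n-channel network of Figure 2.9: two series
# chains A-B and C-D between the output node and ground, with a bridging
# fault shorting the two mid-chain nodes together.
def pulls_down(a, b, c, d, bridge):
    paths = [a and b, c and d]               # the two healthy chains
    if bridge:
        paths += [a and d, c and b]          # extra paths via the short
    return int(any(paths))

for a, b, c, d in product((0, 1), repeat=4):
    healthy = a & b | c & d                  # normal condition AB + CD
    faulty  = a & b | c & d | a & d | b & c  # faulty condition AB + CD + AD + BC
    assert pulls_down(a, b, c, d, bridge=False) == healthy
    assert pulls_down(a, b, c, d, bridge=True) == faulty
```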
It has been reported that a major portion of digital system faults when
in service are intermittent (temporary) faults, and that the investigation of
such faults accounts for more than 90 % of total maintenance
expenditure [42,43].
Nonpermanent faults may be divided into two categories, namely:
(i) transient faults, which are nonrecurring faults generally caused by some
extraneous influence such as cosmic radiation, power supply surges or
electromagnetic interference;
(ii) intermittent faults, which are caused by some imperfection within the
circuit or system and which appear at random intervals.
In practice it may not always be clear from the circuit malfunctioning which
of these categories is present, particularly if the occurrence is infrequent.
By definition it is not possible to test for transient faults. In VLSI memory
circuits α-particle radiation can cause wrong bit patterns in the memory
arrays, and in other latch and flip-flop circuits it is possible to encounter
latch-up due to some strong interference. However, once the memory or
storage circuits have been returned to normal, no formal analysis of the exact
cause of the malfunctioning is possible. Experience and intuition are the
principal tools in this area.
Intermittent faults, however, may possibly be detected if tests are repeated
enough times. This may involve recreating what the system was doing
and the state of its memory at the time of the transitory fault, if known.
Alternatively, some abnormal condition may be imposed upon the circuit or
system, such as slightly increasing or decreasing the supply voltage, raising the
ambient temperature or applying some vibration, with a view to trying to
make the intermittent fault a permanent one which can then be investigated
by normal test means.
Since intermittent faults are random, they can only be modelled math-
ematically using probabilistic methods. Several authorities have considered
this, and developed probabilistic models intended to represent the behaviour
of a circuit or system with intermittent faults [12,34,42-49]. All assume that only
one intermittent fault is present (the single fault assumption), and develop
equations which attempt to relate the probability of a fault being present
when a test is being applied, and/or estimate the test time necessary in order
to detect an intermittent fault in the circuit under test.
An example of this work is that of Savir [12,44,45]. Two probabilities are first
introduced, namely:
(i) the probability of the presence of a fault, f_i, in the circuit, expressed as
PF_i = probability (fault f_i is present);
(ii) the probability of the activity of the fault, f_i, expressed as
PA_i = probability (fault f_i is active, fault f_i being present).
The circuit is assumed to have a fault set f_1, f_2, ..., f_m, with only one fault,
f_i, being present at a time. It should be appreciated that fault f_i can be present
but not affect the circuit output because that part of the circuit containing
the fault is not actively controlling the present output; probability PA_i
therefore has a (usually very considerably) lower value than PF_i. Also PF_i,
i = 1 to m, forms the fault probability distribution

   Σ_{i=1}^{m} PF_i = 1                                       (2.6)
where k is the time between test vectors, from which the lowest upper bound
on the number of tests, T, necessary to detect fault f_i with a given confidence
limit, c_i, is expressed by:

   T = log_e(1 - c_i) / log_e(1 - PD_i)                       (2.9)
(ii) build into the circuit or system some self-checking monitoring such as
parity checks (see Chapter 4), which will detect some if not all of the
possibly intermittent faults and ideally provide a clue to their location
in the circuit;
(iii) in the case of computer systems, continuously run a series of check tests
when the computer is idle, and from these hopefully build up
information pointing to the intermittent fault source when particular
check tests fail;
(iv) as previously mentioned, subject the circuit or system under test to
some abnormal working condition in an attempt to make the
intermittent fault permanent.
In the case of circuits used in safety critical applications, redundancy
techniques which will mask the effect of intermittent faults must be used. If it
is then found that, say, one of three systems operating in parallel is
occasionally out of step with the other two, this is an indication that there is
an intermittent fault in the first circuit, with possibly some clue to its source.
Finally, although this chapter has been particularly concerned with faults
in and fault models for digital circuits, no mention has yet been made of the
possible physical causes of such failures. A major problem here is that IC
manufacturers are reluctant to reveal exact details of their fabrication
defects; those statistics which are available are usually obsolete due to the
rapid and continual developments in fabrication technology and expertise.
There is a considerable volume of information available on the potential
causes of IC failure [58-64]. We must, however, distinguish between:
(a) some major failure during manufacture, such as incorrect mask
alignment or a process step incorrectly carried out, which it is the
province of the professional production engineer to detect and correct
before chips are bonded and packaged;
(b) the situation where wafer processing is within specification, but circuits
still need to be tested for individual failings.
The latter category of random scattered failures is our concern.
Considering the possible failure mechanisms which can occur in VLSI
circuits, these may be chip related, that is some fault within the circuit itself,
or assembly related, that is some fault in scribing, bonding and encapsulating
the chip, or operationally related, for example caused by interference or α-
particle radiation. Further details of these categories may be found discussed
in Rajsuman [58] and in Prince [64] for memory circuits, but no up-to-date global
information on the frequency of occurrence of these failures is available. An
early failure mode statistic quoted by Glaser and Subak-Sharpe [59] gives the
following breakdown:
metalisation failures, 26 % of all failures;
bonding failures, 33 %;
photolithography defects, 18 %;
surface defects, 7 %;
others, 16 %.
Metalisation defects still seem to be a prominent category of defect, caused
particularly by microcracks in metal tracks where they have to descend into
steep narrow vias to make contact with an underlying layer of the planar
process, together with the difficulties of cleaning out etched vias before
metalisation. Dielectric breakdown also remains a problem should a SiO2
insulation layer be of imperfect thickness*, and chip bonding is still at times
a known cause of failure.
These failings may readily be related to the stuck-at fault model and open-
circuit faults, but from the test engineer's point of view (as distinct from the
IC manufacturer's point of view) the precise cause of a functional fault is of
academic interest only. We shall, therefore, have no occasion to look deeper
* The dielectric breakdown strength of SiO2 is about 8 x 10^6 V/cm, and the usual
thickness of a SiO2 layer is about 200 Å = 2 x 10^-6 cm, giving breakdown at only
about 16 V across the layer. There is, therefore, not a very great safety factor if the
SiO2 is too thin.
into failure mechanisms in this text, but the reader is referred to the
references cited above for more in-depth information if required.
2.6 References
1 GRASON, J.: 'TMEAS—a testability measurement program'. Proceedings of 16th
IEEE conference on Design automation, 1979, pp. 156-161
2 STEVENSON, J.E., and GRASON, J.: 'A testability measure for register transfer
level digital circuits'. Proceedings of IEEE international symposium on Fault
tolerant computing, 1976, pp. 101-107
3 BREUER, M.A., and FRIEDMAN, A.D.: 'TEST/80—a proposal for an advanced
automatic test generation system'. Proceedings of IEEE Autotestcon, 1979, pp.
205-312
4 GOLDSTEIN, L.M., and THIGPEN, E.L.: 'SCOAP: Sandia controllability and
observability analysis program'. Proceedings of 17th IEEE conference on Design
automation, 1980, pp. 190-196
5 KOVIJANIC, P.G.: 'Single testability figure of merit'. Proceedings of IEEE
international conference on Test, 1981, pp. 521-529
6 BENNETTS, R.G., MAUNDER, C.M., and ROBINSON, G.D.: 'CAMELOT: a
computer-aided measure for logic testability', IEE Proc. E, 1981, 128, pp. 177-189
7 RATIU, I.M., SANGIOVANNI-VINCENTELLI, A. and PETERSON, D.O.:
'VICTOR: a fast VLSI testability analysis programme'. Proceedings of IEEE
international conference on Test, 1982, pp. 397-401
8 BERG, W.C., and HESS, R.D.: 'COMET: a testability analysis and design
modification package'. Proceedings of IEEE international conference on Test,
1982, pp. 364-378
9 DEJKA, W.J.: 'Measure of testability in device and system design'.
Proceedings of IEEE Midwest symposium on Circuits and systems, 1977, pp. 39-52
10 FONG, J.Y.O.: 'A generalised testability analysis algorithm for digital logic circuits'.
Proceedings of IEEE symposium on Circuits and systems, 1982, pp. 1160-1163
11 BENNETTS, R.G.: 'Design of testable logic circuits' (Addison-Wesley, 1984)
12 BARDELL, P.H., McANNEY, W.H., and SAVIR, J.: 'Built-in test for VLSI:
pseudorandom techniques' (Wiley, 1987)
13 ABRAMOVICI, M., BREUER, M.A. and FRIEDMAN, A.D.: 'Digital system testing
and testable design' (Computer Science Press, 1990)
14 CHEN, T-H., and BREUER, M.A.: 'Automatic design for testability via testability
measures', IEEE Trans., 1985, CAD-4, pp. 3-11
15 SAVIR, J.: 'Good controllability and observability do not guarantee good
testability', IEEE Trans., 1983, C-32, pp. 1198-1200
16 RUSSELL, G. (Ed.): 'Computer aided tools for VLSI system design' (Peter
Peregrinus, 1987)
17 AGRAWAL, V.D., and MERCER, M.R.: 'Testability measures—what do they tell
us?'. Proceedings of IEEE international conference on Test, 1982, pp. 391-396
18 KOHAVI, I., and KOHAVI, Z.: 'Detection of multiple faults in combinational
networks', IEEE Trans., 1972, C-21, pp. 556-568
19 HLAVICKA, J., and KOLTECK, E.: 'Fault model for TTL circuits', Digit. Process.,
1976, 2, pp. 160-180
20 HUGHES, J.L., and McCLUSKEY, E.J.: 'An analysis of the multiple fault detection
capabilities of single stuck-at fault test sets'. Proceedings of IEEE international
conference on Test, 1984, pp. 52-58
21 NICKEL, V.V.: 'VLSI—the inadequacy of the stuck-at fault model'. Proceedings of
IEEE international conference on Test, 1980, pp.378-381
22 KARPOVSKI, M, and SU, S.Y.H.: 'Detecting bridging and stuck-at faults at the
input and output pins of standard digital computers'. Proceedings of IEEE
international conference on Design automation, 1980, pp. 494-505
23 SCHERTZ, D.R., and METZE, G.: 'A new representation of faults in
combinational logic circuits', IEEE Trans., 1972, C-21, pp. 858-866
24 BHATTACHARYA, B.B., and GUPTA, B.: 'Anomalous effect of a stuck-at fault in a
combinational circuit', Proc. IEEE, 1983, 71, pp. 779-780
25 TIMOC, C., BUEHLER, M., GRISWOLD, T., PINA, C., SCOTT, F., and HESS, L.:
'Logical models of physical failures'. Proceedings of IEEE international
conference on Test, 1983, pp. 546-553
26 KARPOVSKI, M., and SU, S.Y.H.: 'Detection and location of input and feedback
bridging faults among input and output lines', IEEE Trans., 1980, C-29, pp.
523-527
27 ABRAHAM, J.A., and FUCHS, W.K.: 'Fault and error models for VLSI', Proc. IEEE,
1986, 75, pp. 639-654
28 MEI, K.C.Y.: 'Bridging and stuck-at faults', IEEE Trans., 1974, C-23, pp. 720-727
29 FRIEDMAN, A.D.: 'Diagnosis of short-circuit faults in combinational circuits',
IEEE Trans., 1974, C-23, pp. 746-752
30 ABRAMOVICI, M., and MENON, P.R.: 'A practical approach to fault simulation
and test generation for bridging faults', IEEE Trans., 1985, C-34, pp. 658-663
31 XU, S., and SU, S.Y.H.: 'Detecting I/O and internal feedback bridging faults',
IEEE Trans., 1985, C-34, p. 553-557
32 KODANDAPANI, K.L., and PRADHAN, D.K.: 'Undetectability of bridging faults
and validity of stuck-at fault tests', IEEE Trans., 1980, C-29, pp. 55-59
33 MALAIYA, Y.K.: 'A detailed examination of bridging faults'. Proceedings of IEEE
international conference on Computer design, 1986, pp. 78-81
34 LALA, P.K.: 'Fault tolerant and fault testable hardware design' (Prentice Hall,
1985)
35 WADSACK, R.L.: 'Fault modelling and logic simulation of CMOS and MOS
integrated circuits', Bell Syst. Tech. J., 1978, 57, pp. 1449-1474
36 GALIAY, J., CROUZET, Y, and VERGNIAULT, M.: 'Physical versus logic fault
models in MOS LSI circuits', IEEE Trans., 1980, C-29, pp. 527-531
37 EL-ZIQ, Y.M., and CLOUTIER, R.J.: 'Functional level test generation for stuck-
open faults in CMOS VLSI'. Proceedings of IEEE international conference on
Test, 1981, pp. 536-546
38 CHIANG, K.W., and VRANESIC, Z.G.: 'Test generation for complex MOS gate
networks'. Proceedings of IEEE international symposium on Fault tolerant
computing, 1982, pp. 149-157
39 RENOVELL, M., and CAMBON, G.: 'Topology dependence of floating gate faults
in MOS integrated circuits', Electron. Lett., 1986, 22, pp. 152-157
40 JAIN, S.K., and AGRAWAL, V.D.: 'Modelling and test generation algorithms for
MOS circuits', IEEE Trans., 1985, C-34, pp. 426-433
41 REDDY, M.K., and REDDY, S.M.: 'On FET stuck-open fault detectable CMOS
memory elements'. Proceedings of IEEE international conference on Test, 1985,
pp. 424-429
42 LALA, P. K., and MISSEN, J.I.: 'Method for the diagnosis of a single intermittent
fault in combinational logic circuits', Proc. IEE, 1979, 2, pp. 187-190
43 CLARY, J.B., and SACANE, R.A.: 'Self-testing computers', IEEE Trans., 1979, C-28,
pp.49-59
44 SAVIR, J.: 'Detection of single intermittent faults in sequential circuits', IEEE
Trans., 1980, C-29, pp. 673-678
45 SAVIR, J.: 'Testing for single intermittent failures in combinational circuits by
maximizing the probability of fault detection', IEEE Trans., 1980, C-29, pp.
410-416
46 TASAR, O., and TASAR, V.: 'A study of intermittent faults in digital computers'.
Proceedings of AFIPS conference, 1977, pp. 807-811
47 McCLUSKEY, E.J., and WAKERLY, J.F.: 'A circuit for detecting and analysing
temporary failures'. Proceedings of IEEE COMCON, 1981, pp. 317-321
48 MALAIYA, Y.K., and SU, S.Y.H.: 'A survey of methods for intermittent fault
analysis'. Proceedings of national Computer conference, 1979, pp.577-584
49 STIFLER, J.I.: 'Robust detection of intermittent faults'. Proceedings of IEEE
international symposium on Fault tolerant computing, 1980, pp. 216-218
50 SHOOMAN, M.L., 'Probabilistic reliability: an engineering approach' (McGraw-
Hill, 1968)
51 KOREN, I., and KOHAVI, Z.: 'Diagnosis of intermittent faults in combinational
networks', IEEE Trans., 1977, C-26, pp. 1154-1158
52 SU, S.Y.H., KOREN, I., and MALAIYA, Y.K.: 'A continuous parameter Markov
model and detection procedure for intermittent faults', IEEE Trans., 1978, C-27,
pp. 567-569
53 KRISHNAMURTHY, B., and TOLLIS, I.G.: 'Improved techniques for estimating
signal probabilities', IEEE Trans., 1989, C-38, pp. 1041-1045
54 BREUER, M.A., and PARKER, A.C.: 'Digital circuit simulation: current states and
future trends'. Proceedings of IEEE conference on Design automation, 1981, pp.
269-275
55 RENESEGERS, M.T.M.: 'The impact of testing on VLSI design methods', IEEE J.
Solid-State Circuits, 1982, SC-17, pp. 481-486
56 'Test synthesis seminar digest of papers'. IEEE international conference on Test,
Washington, DC, USA, October 1994
57 BERTRAM, W. J.: 'Yield and reliability', in SZE, S.M. (Ed.): 'VLSI technology'
(McGraw-Hill, 1983)
58 RAJSUMAN, R.: 'Digital hardware testing: transistor level fault modelling and
testing' (Artech House, 1992)
59 GLASER, A.B., and SUBAK-SHARPE, G.E.: 'Integrated circuit engineering: design
fabrication and applications' (Addison-Wesley, 1979)
60 GALLACE, L.J.: 'Reliability', in Di GIACOMO, J. (Ed.): 'VLSI handbook'
(McGraw-Hill, 1989)
61 GULATI, R.K., and HAWKINS, C.F. (Eds.): 'IDDQ testing of VLSI circuits' (Kluwer,
1993)
62 CHRISTOU, A.: 'Integrating reliability into microelectronics and packaging'
(Wiley, 1994)
63 SABNIS, A.G.: 'VLSI reliability' (Academic Press, 1990)
64 PRINCE, B.: 'Semiconductor memories' (Wiley, 1995, 2nd edn.)
Chapter 3
Digital test pattern generation
Figure 3.1 The hierarchy of digital testing objectives. Many test procedures are equally
applicable to IC and system test, but some may be IC or system specific.
Analogue testing, see later, has an identical hierarchy
[Figure 3.2 The basic digital test arrangement: an input test set is applied to the
n binary primary inputs of the network under test, and the output test response is
taken from the m binary primary outputs. The response may be compared against
that of a duplicate known-good ('gold') circuit, or against the expected healthy
responses held in store by a computer-controlled test facility (for example, see
Figure 1.4); total agreement = pass, any disagreement = fail]
Figure 3.3 The general concept of determining the fault coverage of a given test set
from fault simulation. An eventual modification to increase FC may be by
interactive intervention by the test designer
All of these differ from the serial fault simulation method by simultaneously
simulating a set of faults rather than just one fault at a time. In parallel fault
simulation, one n-bit word in the simulation program (where n = 8, 16 or 32
bits) is used to define the logic signal on a node when the node is fault free
and when n - 1 chosen faults are present within the circuit. Since the
computer operates on words rather than bits, logical operations between
words corresponding to the logic of the circuit between the nodes (i.e. AND,
NAND, OR, NOR, etc.) allow the simultaneous simulation of the n copies of
the circuit to be implemented on each input test vector, thus speeding up the
simulation by a factor of about n compared with the one-at-a-time serial fault
simulation. The main problem is that the circuit being simulated must be
expressed in Boolean terms, which means that memory and large sequential
circuit blocks are impractical or impossible to handle.
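The word-wide mechanism can be sketched in a few lines; the two-gate circuit and the three chosen stuck-at faults below are purely illustrative, and the bit positions assigned to each faulty copy are an arbitrary bookkeeping choice.

```python
# Parallel fault simulation sketch: one machine word carries a signal's
# value in several circuit copies at once -- bit 0 is the fault-free
# circuit, bits 1..WIDTH-1 are copies each containing one stuck-at fault.
WIDTH = 4                    # fault-free copy + 3 faulty copies
MASK = (1 << WIDTH) - 1

def spread(value):
    """Replicate a logic 0/1 across all simulated copies."""
    return MASK if value else 0

def inject(word, bit, stuck_at):
    """Force one copy's signal to its stuck-at value."""
    return word | (1 << bit) if stuck_at else word & ~(1 << bit)

def simulate(a, b):
    """z = NAND(a, b); copy 1 = a s-a-0, copy 2 = b s-a-1, copy 3 = z s-a-0.
    All copies are evaluated in one pass of word-wide operations."""
    av = inject(spread(a), 1, 0)       # a stuck-at-0 in copy 1
    bv = inject(spread(b), 2, 1)       # b stuck-at-1 in copy 2
    z = ~(av & bv) & MASK              # word-wide NAND
    return inject(z, 3, 0)             # z stuck-at-0 in copy 3

# Apply the test vector a=1, b=1; a copy is detected faulty when its
# output bit differs from the fault-free bit 0.
z = simulate(1, 1)
good = z & 1
detected = [bit for bit in (1, 2, 3) if ((z >> bit) & 1) != good]
print(detected)
```

Applying further vectors (a=1, b=0, and so on) detects the remaining faults, exactly as a serial simulator would, but n copies at a time.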
Deductive fault simulation, however, relies upon fault list data, that is the input/output relationships of logic gates and macros under healthy and chosen fault conditions. A fault list is associated with every signal line within the circuit, including fault data flow through storage and memory elements. For each input test vector the fault lists are serially propagated through the circuit to the primary output(s), all the faults covered by the test vector being noted after each pass. The time taken for one pass through the simulator is much greater than the time for one pass through a parallel simulator, but a large number of circuit faults will be covered on each pass. Dynamic memory capacity, however, has to be very large in order to handle all the continuously changing data in the propagation of the fault lists.
Concurrent fault simulation is the preferred present method of fault simulation. It avoids the complexity of implementing deductive fault simulation, yet retains a speed advantage over serial and parallel fault simulation.
Here a comprehensive concurrent fault list for each line is compiled for each
test vector, which includes a chosen fault on the line plus all preceding faults
which propagate to this line; if a preceding fault produces a response which
is the same as the healthy response on this line to the test vector, the former
is deleted from this concurrent fault list. By completing this procedure from
primary inputs to primary outputs, a record is built up of the number of the
chosen faults detected by the given set of test vectors, and hence the fault
coverage; only those paths in the circuit which differ between the faulty and
the fault-free state need be considered in each simulation.
Further details of fault simulation procedures may be found in the literature [3, 7-13]. It must, however, be appreciated that when using any fault model simulation to determine a fault model coverage value, FMC, for a given set of test vectors, the resultant value only relates to the set of faults which have been chosen in the simulation. Thus, FMC = 100 % only indicates that all the faults introduced in the fault simulation will be detected, which is not the same as saying that the circuit is completely fault free. The implication of this is that the value of FMC is not necessarily the same as the value of fault coverage, FC, introduced in Chapter 1, see Figure 1.3, and in theory should not be used in the defect level equation DL = (1 − Y^(1−FC)). However, if FMC is nearly 100 %, then it is often assumed that FMC ≈ FC, and this value of FMC may be used in the equation for DL. A more direct use for FMC is as a useful parameter in the development and grading of automatic test pattern generation (ATPG) algorithms, see later. We will use the designation FC rather than FMC subsequently, but the distinction when used in the equation for DL should not be forgotten.
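The sensitivity of the defect level to FC is easily checked numerically; the yield value below is illustrative only, not taken from any particular process.

```python
# Defect level DL = 1 - Y**(1 - FC), as introduced in Chapter 1.
# The yield Y = 0.5 is an assumed value for illustration.
Y = 0.5
for FC in (0.90, 0.99, 0.999):
    DL = 1 - Y ** (1 - FC)
    print(f"FC = {FC:5.3f}  ->  DL = {DL:.5f}")
```

Even at FC = 99 % roughly 0.7 % of the shipped circuits would still be defective for this yield, which is why the FMC/FC distinction matters when DL is being estimated.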
coverage can be rapid. However, for complex circuits the processing time to
find the tests for the final remaining faults may become unacceptably long,
particularly if feedback loops or other circuit complications are present. If
any redundancy is present then the ATPG program will, of course, never
succeed in finding a test for certain nodes. Hence, economics may dictate the
termination of an ATPG program when FC has reached an acceptable level,
say 99.5 %, or has run for a given time, leaving the outstanding faults to be
considered by the circuit designer if necessary. For very complex VLSI circuits
even ATPG programs are now proving to be inadequate, being replaced by
self test and other test strategies such as will be considered in later chapters.
All ATPG programs based upon fault models assume that a single fault is present when determining the test vectors. The usual fault model is the stuck-at model, which as we have seen does in practice cover a considerable number of other types of faults, but not all. The results of an ATPG program cannot, therefore, guarantee a defect-free circuit.
A basic requirement in test pattern generation is to propagate a fault at a
given node in the circuit to an observable output, such that the output is the
opposite value in the presence of the fault compared with the fault-free
output under the same input test vector. This procedure may be termed path
sensitising or forward driving. A second requirement is that the test input
vector shall establish a logic value on the node in question which is opposite
to the stuck-at condition under consideration, i.e. to test for a s-a-0 fault at node x, the test vector will give a logic 1 at the node under fault-free conditions, and vice versa.
The principle of propagating a stuck-at fault condition on a node to an
observable output has been illustrated in Chapter 2, Figure 2.5. In this earlier
example a single path was sensitised to the primary output, but in more
complex circuits it may be necessary to consider more than single-path
sensitisation. Consider, for example, a simple part of a larger circuit as shown
in Figure 3.5, and let us consider the signals required to drive a stuck-at 0
fault on line Q through G1 to the observable output node. Clearly we require P = 1, Q = 1 and R = 0 to establish this single path. However, if due to the preceding logic it is not possible to have R = 0 when Q = 1, then this single path sensitisation is not possible. But making P = 1, Q = 1, R = 1 will allow the fault to be detected at the output, the parallel paths through G1 and G2 both being sensitised. This is known as parallel reconvergence with dual-path sensitisation.
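Since Figure 3.5 is not reproduced here, the following sketch assumes a plausible reconstruction of the fragment — G1 = AND(P, Q), G2 = AND(Q, R), with the two gate outputs ORed to the observable node — purely to show the single-path versus dual-path behaviour; the actual gates in the figure may differ.

```python
# Hypothetical reconstruction of the Figure 3.5 fragment:
# G1 = AND(P, Q), G2 = AND(Q, R), observable output = OR(G1, G2).
def output(P, Q, R):
    return (P & Q) | (Q & R)

def detects_Q_sa0(P, Q, R):
    """True if the vector (P, Q, R) distinguishes Q stuck-at-0."""
    good = output(P, Q, R)
    faulty = output(P, 0, R)      # Q forced to 0
    return good != faulty

print(detects_Q_sa0(1, 1, 0))    # single path through G1 sensitised
print(detects_Q_sa0(1, 1, 1))    # both paths sensitised together
print(detects_Q_sa0(0, 1, 0))    # neither path sensitised
```

The first two vectors both expose the fault, confirming that if R = 0 is unobtainable the dual-path vector P = Q = R = 1 still serves as a test.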
In Figure 3.5 the two paths which reconverge always took the same logic
value when testing for stuck-at 0. However, it is possible to encounter recon-
vergence where the two converging signals are always opposite to each other
under test conditions. (This is sometimes termed negative reconvergence, as
distinct from positive reconvergence where the signals are the same.) This
form of reconvergence is not testable, and indicates some local redundancy
left in the circuit design. The more complex the combinational logic network,
the more likely it will be that reconvergence is present in the circuit. Hence,
the need to sensitise more than one path from a stuck-at node is often
encountered, which necessitates effective algorithms that can handle this
situation.
Most, but not all, test pattern generation algorithms have as their underlying basis the procedure which we have now indicated, namely:
(i) choose a faulty node in the circuit;
(ii) propagate the signal on this node to an observable output;
(iii) backward trace to the primary inputs in order to determine the logic
signals on the primary inputs which correctly propagate this fault signal
to the observable output.
Additionally the procedure should:
(iv) ensure that the derived test vectors cater for all possible fan-out and
reconvergence effects in the circuit, with the possibility of multiple-path
sensitisation;
(v) keep a record of what additional faulty nodes are covered by each test
vector when generating a test for a chosen node (i.e. the fault cover of
each test vector), so that duplication of effort is minimised.
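Requirements (i)-(v) above can be sketched as a driver loop. This is a sketch only: the propagation and backward-tracing work of steps (ii)-(iv) is hidden behind caller-supplied functions, and the names find_test and faults_covered_by are hypothetical, not drawn from any particular ATPG tool.

```python
# Skeleton of the fault-oriented ATPG loop in steps (i)-(v).
def generate_tests(fault_list, find_test, faults_covered_by):
    """find_test(fault) -> a test vector, or None if untestable
    (steps ii-iv); faults_covered_by(vector) -> set of faults the
    vector detects (step v, the fault cover of each vector)."""
    tests = []
    remaining = set(fault_list)
    while remaining:
        fault = remaining.pop()                  # (i) choose a faulty node
        vector = find_test(fault)                # (ii)-(iv) propagate, backtrace
        if vector is None:
            continue                             # e.g. redundancy: no test exists
        tests.append(vector)
        remaining -= faults_covered_by(vector)   # (v) drop all covered faults
    return tests

# Toy usage: each fault needs its own vector here, so three tests result.
tests = generate_tests(['f1', 'f2', 'f3'],
                       lambda f: 'v_' + f,
                       lambda v: {v[2:]})
print(sorted(tests))
```

Keeping the per-vector fault cover (step (v)) is what prevents the loop from regenerating a test for a fault that an earlier vector already happens to detect.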
Here we will consider two methods which have been used for test pattern
generation, the first of which does not in fact use the above signal
propagation procedure, and the second which does use such a procedure.
not a true differential operator in the full mathematical sense, since it does
not distinguish between a change of f(X) from 0 to 1 or vice versa, and hence
it is defined as a difference operator rather than a differential operator. Also
notice that the functions being exclusive-ORed together in eqn. (3.2) represent the complete truth table of f(X), and are not concerned with just one input combination.
Further properties of the Boolean difference operator are as follows [15]:

• function complementation:

    (d/dxᵢ) f̄(X) = (d/dxᵢ) f(X)

• literal complementation:

    (d/dx̄ᵢ) f(X) = (d/dxᵢ) f(X)

• product:

    (d/dxᵢ){f(X).g(X)} = [f(X).(d/dxᵢ)g(X)] ⊕ [g(X).(d/dxᵢ)f(X)] ⊕ [(d/dxᵢ)f(X).(d/dxᵢ)g(X)]   (3.6)

• sum:

    (d/dxᵢ){f(X) + g(X)} = [f̄(X).(d/dxᵢ)g(X)] ⊕ [ḡ(X).(d/dxᵢ)f(X)] ⊕ [(d/dxᵢ)f(X).(d/dxᵢ)g(X)]   (3.8)
For the circuit of Figure 3.6a, expanding f(X) about x₁ and simplifying, the terms containing (x₂x₄) being zero, gives:

    (d/dx₁) f(X) = x₂x̄₃x₄   (3.12)
Thus, the test vector x₁x₂x̄₃x₄ will test for x₁ s-a-0, and x̄₁x₂x̄₃x₄ will test for x₁ s-a-1.
This result is very easily confirmed by looking at the Karnaugh map of f(X) given in Figure 3.6b, and considering the x₁ = 0 and x₁ = 1 halves of the map. These two decompositions of f(X) differ only in the minterms x̄₁x₂x̄₃x₄ and x₁x₂x̄₃x₄, being f(X) = 0 in the former and f(X) = 1 in the latter. Hence the Boolean difference (d/dx₁)f(X) is x₂x̄₃x₄, giving the stuck-at test vectors for x₁ shown above.
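The decomposition (d/dxᵢ)f = f(xᵢ = 0) ⊕ f(xᵢ = 1) is easy to mechanise exhaustively for small functions. The function f used below is an assumed example for illustration, not the f(X) of Figure 3.6.

```python
from itertools import product

# Boolean difference by exhaustive decomposition:
# df/dx_i = f(..., x_i = 0, ...) XOR f(..., x_i = 1, ...).
def boolean_difference(f, n, i):
    """Return the assignments of the remaining variables (listed with
    x_i = 0 as a placeholder) for which df/dx_i = 1, i.e. for which the
    output is sensitive to x_i."""
    sensitive = []
    for bits in product((0, 1), repeat=n):
        if bits[i] == 0:                 # enumerate each 0/1 pair once
            lo = f(*bits)
            hi = f(*[b if k != i else 1 for k, b in enumerate(bits)])
            if lo != hi:
                sensitive.append(bits)
    return sensitive

f = lambda x1, x2, x3: (x1 & x2) | x3    # assumed example function
print(boolean_difference(f, 3, 0))       # d f / d x1
```

For this f the difference reduces to x₂x̄₃, so the vectors 1 1 0 and 0 1 0 test x₁ s-a-0 and s-a-1 respectively, exactly the reasoning used in the text.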
If the Boolean difference method is used to generate test vectors for
internal nodes of the circuit, the output function must be expressed in terms
Figure 3.6 A simple circuit to illustrate the use of the Boolean difference
a circuit realising f(X)
b Karnaugh map of f(X), showing the decomposition about x₁
c output function when internal node l is stuck at 0, the difference between this function and the fault-free output being as indicated
d as c but l stuck at 1
of the internal node being considered. For example, should internal node l in Figure 3.6 be considered, the output function is first rewritten in terms of l, whence the Boolean difference (d/dl)f(X) may be evaluated in exactly the same manner (eqn. 3.13) to yield the test vectors for node l stuck-at 0 and stuck-at 1.
Table 3.1 The primitive D-cubes of failure* for three-input Boolean logic gates, giving the input signals necessary to detect the output line stuck-at-0 or stuck-at-1

             Required inputs to detect faulty output   Corresponding D-cube of failure
             A  B  C   output                          A  B  C  Z
AND gate     1  1  1   s-a-0                           1  1  1  D
             0  X  X   s-a-1                           0  X  X  D̄
             X  0  X   s-a-1                           X  0  X  D̄
             X  X  0   s-a-1                           X  X  0  D̄
NAND gate    1  1  1   s-a-1                           1  1  1  D̄
             0  X  X   s-a-0                           0  X  X  D
             X  0  X   s-a-0                           X  0  X  D
             X  X  0   s-a-0                           X  X  0  D
OR gate      0  0  0   s-a-1                           0  0  0  D̄
             1  X  X   s-a-0                           1  X  X  D
             X  1  X   s-a-0                           X  1  X  D
             X  X  1   s-a-0                           X  X  1  D
NOR gate     0  0  0   s-a-0                           0  0  0  D
             1  X  X   s-a-1                           1  X  X  D̄
             X  1  X   s-a-1                           X  1  X  D̄
             X  X  1   s-a-1                           X  X  1  D̄

* In this context, a cube is an ordered set of symbols such that each symbol position defines a particular input or output node, and the value of the symbol identifies its logic state.
gate we have the D-cube 1 D 1 D, see the top line of Table 3.2. Notice also that:
(i) the propagation D-cubes also define the propagation when more than one input is D or D̄, which can arise in a circuit with reconvergent fan-out from the original D (or D̄) source. However, should D and D̄ both converge on a Boolean gate there will be no further propagation of the D or D̄ value; D and D̄ on an AND gate will always give an output 0 and on an OR gate an output 1, and hence the D and D̄ values will be lost.
(ii) for the propagation of a D (or D̄) value through a Boolean gate there is only one possible input condition; there is therefore no choice of logic 0 or 1 signals on the gate inputs to propagate the D signal(s).
Table 3.2 The propagation D-cubes for (a) three-input AND and NAND gates, and (b) three-input OR and NOR gates, the gates being fault free

a
Gate inputs, x₁ x₂ x₃              Gate output f(X)
                                   AND gate   NAND gate
1 1 D or 1 D 1 or D 1 1            D          D̄
1 1 D̄ or 1 D̄ 1 or D̄ 1 1            D̄          D
1 D D or D 1 D or D D 1            D          D̄
1 D̄ D̄ or D̄ 1 D̄ or D̄ D̄ 1            D̄          D
D D D                              D          D̄
D̄ D̄ D̄                              D̄          D

b
Gate inputs, x₁ x₂ x₃              Gate output f(X)
                                   OR gate    NOR gate
0 0 D or 0 D 0 or D 0 0            D          D̄
0 0 D̄ or 0 D̄ 0 or D̄ 0 0            D̄          D
0 D D or D 0 D or D D 0            D          D̄
0 D̄ D̄ or D̄ 0 D̄ or D̄ D̄ 0            D̄          D
D D D                              D          D̄
D̄ D̄ D̄                              D̄          D
than the two-input gates shown here, then the propagation D-cubes for these gates would define the required logic signals for forward driving the D or D̄ conditions. All possible paths from the D or D̄ source towards the primary outputs are normally considered [16, 18], although only one primary output needs to be finally monitored for the stuck-at test.
AND:
        0   1   X   D   D̄
   0    0   0   0   0   0
   1    0   1   X   D   D̄
   X    0   X   X   X   X
   D    0   D   X   D   0
   D̄    0   D̄   X   0   D̄

OR:
        0   1   X   D   D̄
   0    0   1   X   D   D̄
   1    1   1   1   1   1
   X    X   1   X   X   X
   D    D   1   X   D   1
   D̄    D̄   1   X   1   D̄

inverter:
   A    0   1   X   D   D̄
   Z    1   0   X   D̄   D

The NAND and NOR tables are the entry-by-entry complements of the AND and OR tables respectively.

Figure 3.7 Roth's five-valued D-notation applied to two-input Boolean logic gates. The relationships for three (or more) input gates may be derived by considering a cascade of two-input gates, since commutative and associative relationships still hold
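The five-valued tables above can be generated rather than stored, by treating each of 0, 1, D and D̄ as a (good, faulty) pair of ordinary logic values; the encoding and the name 'Dbar' below are implementation choices for this sketch, not part of Roth's notation itself.

```python
# Roth's five-valued algebra sketch: values 0, 1, 'X', 'D', 'Dbar'.
# Each known value is a (good-circuit, faulty-circuit) pair; 'X' is
# handled separately as "unassigned".
ENC = {0: (0, 0), 1: (1, 1), 'D': (1, 0), 'Dbar': (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def gate(op, a, b):
    """Evaluate a two-input AND or OR over the five-valued set."""
    if a == 'X' or b == 'X':
        # a controlling value still decides: 0 AND X = 0, 1 OR X = 1
        if op == 'and' and (a == 0 or b == 0):
            return 0
        if op == 'or' and (a == 1 or b == 1):
            return 1
        return 'X'
    (ag, af), (bg, bf) = ENC[a], ENC[b]
    if op == 'and':
        return DEC[(ag & bg, af & bf)]
    return DEC[(ag | bg, af | bf)]

print(gate('and', 'D', 'D'))       # D propagates through both inputs
print(gate('and', 'D', 'Dbar'))    # D and Dbar reconverging: output 0
print(gate('or', 'D', 'Dbar'))     # output 1; the D value is lost
```

The last two lines reproduce observation (i) above: reconvergence of D with D̄ on an AND gate gives 0, and on an OR gate gives 1.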
Roth's full algorithm for test pattern generation thus consists of three principal operations as shown in Figure 3.9, namely:
(i) choose a stuck-at fault source, and from the primitive D-cubes of failure data identify the signals necessary to detect this fault;
(ii) forward drive this fault D or D̄ through all paths to at least one primary output, using the information contained in the propagation D-cubes;
(iii) perform a consistency operation, that is backward trace from a primary output to which D or D̄ has been driven, to the primary inputs, allocating further logic 0 and 1 values as necessary to give the final test input vector.
Figure 3.8 An example using Roth's notation, showing a stuck-at 1 fault being driven to both primary outputs. D represents the same logic value on all lines so marked, with the line marked D̄ having the opposite logic value. The numbers in parentheses are used later in the text
This procedure is repeated until all the chosen stuck-at paths have been covered.
In undertaking the D-drive, the operation known as D-intersection is performed for each gate encountered from the source fault to the primary output(s). This is an algebraic procedure which formally matches the logic signals on the gates with the appropriate propagation D-cube data. Recall that the propagation data for any D or D̄ gate input is unique. The D-drive procedure for the simple circuit shown in Figure 3.8 would therefore proceed as shown in Table 3.3, having first identified all the paths in the circuit.
start → D-drive to primary outputs → backward trace: any inconsistencies in the backward trace? If yes, allocate other 0, 1 conditions and retry; if no, list the input test vector and all stuck-at fault lines covered → all stuck-at faults covered? If no, repeat from the start; if yes, end.

Figure 3.9 The outline schematic of Roth's D-algorithm ATPG procedure
Table 3.4 The backward tracing consistency operation for Table 3.3

Circuit path                                1  2  3  4  5  6  7  8  9  10 11 12
End of D-drive                              0  X  X  X  1  1  D̄  1  D  0  D  D
Check (8) is at 1 from G2; OK if (4) = 1    0  X  X  1  1  1  D̄  1  D  0  D  D
Check (10) is at 0 from G1;
OK if (1), (2) and (3) = 0                  0  0  0  1  1  1  D̄  1  D  0  D  D
In this example no inconsistencies are encountered in completing the backward tracing operation. However, if we had started with the equally valid primitive D-cube of failure for gate G3 of 1 0 or 1 1 instead of 0 1, then we would have encountered an inconsistency in gate G1 when backward tracing from node 10 to the inputs. Hence, in practice, we may have to recompute the D-drive conditions trying alternative primitive D-cubes as the starting point.
Further difficulties with the D-algorithm arise when exclusive-OR/NOR gates or local feedback conditions are encountered. The problem with exclusive gates is that there is not a unique input condition for propagating a D or D̄ signal, see Table 3.5, and reconvergence of D or D̄ with D or D̄ will be nonpropagating.
Table 3.5 The propagation D-cubes for two-input exclusive-OR and exclusive-NOR gates

Gate inputs        ex-OR   ex-NOR
0 D or D 0         D       D̄
0 D̄ or D̄ 0         D̄       D
1 D or D 1         D̄       D
1 D̄ or D̄ 1         D       D̄
conditions on lines (9) and (12) and s-a-0 on line (11) as well as the
fault source on line (7).
(iii) it is the difficulty of assigning input signals for a given gate output in the backward tracing operation which largely causes problems; unlike the forward D-drive operation there can be a choice of gate input signals for a given output, since 2ⁿ − 1 input combinations of any n-input Boolean gate give rise to the same gate output condition, and hence several retries of the algorithm to find a consistent backward tracing operation may be necessary.
A more detailed analysis and discussion of the D-algorithm may be found in the appendix of Bennetts [20c]*; other texts have worked examples [1, 2, 11, 20, 21] and software code fragments [11]. However, it remains a difficult topic to learn, partly because of the terminology which was introduced in the original disclosures, and because of the supporting mathematics based upon the so-called calculus of D-cubes. Nevertheless, it remains a foundation stone in ATPG theory.
Finally, for interest, the Boolean difference technique applied to the circuit of Figure 3.8 would give the Boolean difference functions at each primary output. Notice that our Roth's algorithm example merely identified one of the possible test vectors which detect the s-a-1 fault on line (7).
* The terminology 'dual' used in Reference 20c should be read with care. It is not the Boolean dual f_D(X) of f(X), where f_D(X) is the complement of f(X) with all gate inputs individually complemented, but is the changing of all Ds to D̄ and all D̄s to D in any given propagation D-cube, leaving the 0s and 1s unchanged.
start → select a stuck-at fault and assign primary inputs one at a time from X to 0 or 1, simulating after each assignment → is D (or D̄) now established at the selected fault source? If no, continue assigning; if yes, randomly assign further primary inputs from X to 0 or 1 until D (or D̄) is driven to a primary output on resimulation → all faults in the fault list now covered? If no, repeat for the next fault; if yes, end.

Figure 3.10 Outline schematic of the PODEM ATPG program, starting with all nodes and primary inputs at X. Exit paths (not shown) are present if no test for a given stuck-at fault is possible
Attempting a single-path D-drive from this fault source to the primary output via gate G5 or gate G6 would reveal an inconsistency; for example, driving through G5 only would result in 0 D 0 1 on gate G8 with the input test vector 0 0 0 1, or driving through gate G6 only would result in 1 0 D 0 with the input test vector 1 0 0 0, neither of which would give a D output. The only test possible is the test vector 0 0 0 0, which drives D through both G5 and G6 to G8. The PODEM algorithm, however, would have found this test almost immediately by trying this test vector from the given starting point of X 0 0 X. Notice that with this fairly trivial example there are only four possible test vectors to try, namely 0 0 0 0, 0 0 0 1, 1 0 0 0 and 1 0 0 1. Also the actual circuit is highly artificial, being merely the sum of two four-variable product terms which reduces to a single four-variable product term under the given stuck-at fault condition.
A further development by IBM, PODEM-X, has been used for the test pattern generation of circuits containing tens of thousands of gates [26]. PODEM-X incorporates an initial determination of a small set of test vectors which will cover a high percentage of faults in the fault list, leaving the PODEM procedure to cover the remainder. This will be considered further in Section 3.2.3. However, a more distinct variation of the PODEM algorithm is the FAN (fan-out oriented) ATPG program of Fujiwara and Shimono [29], which specifically considers the fan-out nodes of a circuit, and uses multiple-path forward and backward tracing.
The major introduction in FAN is when considering a backward trace from a given D or D̄ node. The procedure is broadly as follows:
start → any logic value inconsistencies? → has D (or D̄) reached a primary output? If not, backward trace from the furthestmost D (or D̄) nodes, assigning logic 0, 1 so as to propagate D or D̄ further; once it has, backward trace from all fan-out nodes and other lines to establish the primary inputs → end.

Figure 3.12 Outline schematic of the FAN ATPG program, starting with all nodes and primary inputs at X. Exit paths (not shown) are present if no test for a given stuck-at fault is possible
generating this minimum test set.* On the other hand a fully exhaustive test set will incur no ATPG costs, but will usually be too long to employ for large circuits. There is, however, an intermediate possibility which has been used.
* It has been reported [28] that millions of retries have been found necessary in some circuits of VLSI complexity before the test vectors to cover the complete set of faults were determined.
    FC(N) = 1 − e^(−λN)   (3.14)

where λ is a constant for the particular circuit under test and N is the number of random test vectors applied. The general characteristic of this relationship is shown in Figure 3.13, which confirms the intuitive concept that it is relatively easy to begin the fault coverage but becomes increasingly difficult to cover the more difficult remaining faults in the fault list.
Table 3.6 The fault coverage obtained on two circuits by the application of random test vectors. Note, a fully-exhaustive functional test set would contain 2⁶³ and 2… test vectors, respectively

Circuit   No. of primary inputs   No. of gates   % fault coverage obtained with N random test patterns
                                                 N = 100    1000    10 000
Figure 3.13 The general characteristic of fault coverage versus the number of random test vectors applied to the circuit
Figure 3.14 The model for sequential logic networks, where all combinational logic gates are lumped into one half of the model and all the storage (memory) elements, providing the secondary inputs and outputs, are lumped into the other half
linearly in time instead of going around the one circuit model on each clock pulse, but unfortunately this introduces the equally difficult problem of having to model multiple stuck-at combinational faults. From the dates of the references which we have cited it will be seen that there is very little new reported work in this area; the only realistic way of continuing from the initialisation stage is a functional approach, verifying the sequential circuit operation by consideration of its state table or state diagram or ASM (algorithmic state machine) chart, rather than by any computer modelling and simulation technique [2, 43].
This functional approach in turn becomes impractical as circuit size
increases to, say, 20 or more storage elements and possibly tens of thousands
of used and unused circuit states. For VLSI it is now imperative to consider
testing requirements at the circuit design stage, and build in appropriate
means of testing large sequential circuits more easily than would otherwise be
the case. This will be a major topic in Chapter 5; as will be seen, partitioning, reconfiguration and other techniques may be introduced, giving both a normal operating mode and a test mode for the complete circuit design.
An ATPG program that produces a set of test vectors which detects all the
faults in a given fault list for a circuit has obvious advantages, since it provides
a minimum length test vector sequence to test the circuit to a known standard
of test. The difficulty and cost of generating this test set, which is a one-off
operation at the design stage, must be set against the resulting minimum
amount of data to be stored in the test system, see Figure 3.2c, and the
minimum time to test each production circuit.
In general, the order of generating and applying the test vectors in a system
such as in Figure 3.2c is fully flexible, the test vectors and expected (healthy)
output responses being stored in ROM. However, deterministic test pattern
generation based upon (usually) the stuck-at model does not generally
require any specific ordering of the test vectors, each test being independent
of the other tests. Unfortunately the difficulties of determining this test set for complex VLSI circuits have become too great to undertake, and therefore present test philosophies are moving away from the cost of ATPG towards design-for-test strategies with the use of exhaustive, nonexhaustive or pseudorandom test patterns. The cost of test pattern generation in the latter cases is
now usually some relatively simple hardware circuitry, such as we shall
consider below.
* We are ignoring in our present discussions the peculiar problems of CMOS testing,
which will be considered further in Section 3.5.
Table 3.7 The number of test vectors available from binary and BCD counters

No. of input bits   No. of input test vectors
                    fully exhaustive binary   binary-coded decimal
n = 4               16                        10 (1 decade)
n = 8               256                       100 (2 decades)
n = 16              65 536                    10 000 (4 decades)
n = 32              4.3 × 10⁹                 1 × 10⁸ (8 decades)
However, there is not a very substantial saving in this alternative, and fault
coverage is now unknown. If some other subset of a full binary sequence is
considered then the greater the reduction in the number of test vectors the
lower the potential fault coverage. A more satisfactory nonexhaustive test set
strategy is that discussed in Section 3.2.1, where the circuit designer specifies
from his or her knowledge of the circuit a set of vectors which will exercise
the circuit with certain key or critical input/output functional requirements,
or alternatively will cause all or most of the internal gates to change state.
This procedure will produce a nonexhaustive set of test vectors. As covered
in Section 3.2.1, the effectiveness of this suggested test set may be
investigated by a computer check to determine how many of the internal
nodes of the circuit are toggled by this set of vectors; if this coverage is near
100 % then the probability of passing faulty circuits when under test will be
acceptably small.
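The toggle check just described is straightforward to mechanise: simulate each candidate vector and record which internal nodes have been driven to both 0 and 1. The two-gate netlist below is purely illustrative.

```python
# Toggle-coverage sketch: the fraction of internal nodes driven to both
# logic 0 and logic 1 by a candidate (nonexhaustive) test set.
def node_values(x1, x2, x3):
    """Evaluate every internal node of the assumed example netlist."""
    n1 = x1 & x2          # AND gate
    n2 = n1 | x3          # OR gate, the primary output
    return {'n1': n1, 'n2': n2}

def toggle_coverage(vectors):
    seen = {}             # node -> set of logic values observed
    for v in vectors:
        for node, val in node_values(*v).items():
            seen.setdefault(node, set()).add(val)
    toggled = sum(1 for vals in seen.values() if vals == {0, 1})
    return toggled / len(seen)

print(toggle_coverage([(0, 0, 0), (1, 1, 0)]))   # every node toggles
print(toggle_coverage([(0, 0, 0), (0, 1, 0)]))   # n1 never reaches 1
```

A result near 100 % supports the designer's test set; nodes that never toggle point to vectors that should be added, exactly the computer check referred to above.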
The source of the test vectors for nonexhaustive tests such as above cannot be made using simple hardware in the form of binary or BCD counters. Instead we have to revert to supplying these vectors from a programmable source such as ROM. This is back to the test set arrangement illustrated in Figure 3.2c rather than the simple hardware generation of Figure 3.15.
Figure 3.15 Hardware test pattern generation, similar to the general cases shown in Figure 3.2a and b: a hardware test pattern generator drives the circuit or network under test, whose outputs are compared against the healthy response. The input test sequence is usually exhaustive
Figure 3.16 Test generation using a pseudorandom test pattern generator supplying the input test vectors to the circuit or network under test. The output check is often by signature analysis (see later) rather than by comparison against a healthy response
* The terminology linear is because the exclusive-OR logic relationship realises the mod2 addition of binary values, which is a linear relationship that preserves the principle of superposition. For example, 0 0 1 1 ⊕ 1 0 1 1 = 1 0 0 0, and 1 0 0 0 ⊕ 1 0 1 1 = 0 0 1 1, and so on. No information is lost going through an exclusive-OR (or exclusive-NOR) gate.
Figure 3.17 The linear feedback shift register consisting of n D-type flip-flops in cascade, with feedback arranged to generate a maximum-length pseudorandom sequence when clocked. Sixteen stages are indicated here, which would give a maximum-length sequence of 2¹⁶ − 1 = 65 535 states before the sequence repeats. Alternative feedback connections to those shown here are also possible, see Appendix A
[Figure 3.18: a four-stage LFSR example — a the circuit, four D-type flip-flops in cascade with exclusive-OR feedback; b the resulting maximum-length sequence of 2⁴ − 1 = 15 states]
Considering any one of the n bits and visualising its values written in a continuous circle of 2ⁿ − 1 points, we may consider the runs of consecutive 0s and 1s in the sequence, the length of a run being the number of 0s or 1s in a like-valued group. Then:
(iii) In the complete M-sequence there will be a total of 2ⁿ⁻¹ runs in each bit; one half of the runs will have length 1, one quarter will have length 2, one eighth will have length 3, and so on as long as the fractions 1/2, 1/4, 1/8, ... give integer numbers of runs, plus one additional run of n 1s. In the example of Figure 3.18 there are 2³ = 8 runs, four of length 1, two of length 2, one of length 3 plus one run of four 1s in each bit.
(iv) From (iii), it follows that the number of transitions between 0 and 1 and vice versa of each bit in a complete M-sequence is 2ⁿ⁻¹.
(v) Every M-sequence has a cyclic shift and add property such that if the given sequence is term-by-term added mod2 to a shifted copy of itself, then a maximum-length sequence results which is another shift of the given sequence. For example, if the M-sequence shown in Figure 3.18b is added mod2 to the same sequence shifted up six places, it can easily be shown that this results in the sequence which is a cyclic shift of eight states from the given sequence.
(vi) Finally, if the autocorrelation of the M-sequence of 0s and 1s in each bit is considered, that is knowing a particular entry has the value 0 (or 1) how likely is any other entry in the same sequence to be 0 (or 1), we have the autocorrelation function:

    C(τ) = (1/p)(a₁ + a₂ + ... + aₚ)

where τ is the shift between the entries in the same sequence being compared, 1 ≤ τ ≤ 2ⁿ − 2, i.e. when τ = 1 adjacent entries in the sequence are being compared:

    p = 2ⁿ − 1

and aᵢ = +1 if the two entries being compared have the same value 0 or 1, and −1 if the two entries being compared have differing values.* The value of C(τ) for any M-sequence and any value of τ is:

    C(τ) = −1/p

This may be illustrated by taking the Q1 sequence in Figure 3.18b and considering a shift of, say, three positions. This gives the results tabulated in
* We may express C(τ) more explicitly than above using logic values of +1 and −1 instead of 0 and 1. This will be introduced in Chapter 4, but we have no need to do so at the present.
Table 3.8, with the total agreements and disagreements being 7 and 8 respectively, giving the autocorrelation value of −1/15.
It will also be appreciated that the shift register action between stages of a LFSR means that all the n bits in the sequence have exactly the same properties.
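These properties are easy to confirm by generating the four-stage sequence directly; the taps on stages 1 and 4 match the Figure 3.18 example, and the all-ones seed is an arbitrary nonzero starting state.

```python
# Four-stage LFSR with feedback taps on stages 1 and 4, generating the
# maximum-length sequence of 2**4 - 1 = 15 states before repeating.
def lfsr_sequence(seed=(1, 1, 1, 1), length=15):
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])            # serial output from stage 4
        fb = state[0] ^ state[3]         # exclusive-OR of tapped stages
        state = [fb] + state[:-1]        # shift, feedback into stage 1
    return out

seq = lfsr_sequence()
print(seq)                               # the 15-state M-sequence
print(seq.count(1), seq.count(0))        # 8 ones, 7 zeros

# Autocorrelation for a shift of 3 places around the circular sequence:
p = len(seq)
total = sum(1 if seq[i] == seq[(i + 3) % p] else -1 for i in range(p))
print(total, p)                          # agreements minus disagreements
```

The sequence contains eight runs (four of length 1, two of length 2, one of length 3 and one run of four 1s), and the agreement/disagreement sum is −1, giving C(3) = −1/15 as stated above.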
    G(x) = y₀x⁰ + y₁x¹ + y₂x² + ... = Σ yₘxᵐ,  m = 0 to ∞   (3.18)

where x⁰, x¹, x², ... represent the positions in the bit sequence with increasing time.
The algebraic manipulations of G(x) are all in the Galois field GF(2), that is mod2 addition, subtraction, multiplication and division of binary data. Recall also that mod2 addition and subtraction are identical, being:
Figure 3.19 The general schematic of a LFSR with n stages, the taps c₁, c₂, c₃, ..., cₙ being open (cᵢ = 0) if no connection is present; yᵢ is the input signal at particular time tᵢ. ⊕ = exclusive-OR = addition mod2
0 + 0 = 0        0 − 0 = 0
0 + 1 = 1        0 − 1 = 1
1 + 0 = 1        1 − 0 = 1
1 + 1 = 0        1 − 1 = 0
and hence polynomial multiplication and division follow as illustrated below:*
(x³ + x² + x + 1) × (x² + x + 1) is given by:

          x³ + x² + x + 1
               x² + x + 1
          ---------------
          x³ + x² + x + 1
     x⁴ + x³ + x² + x
x⁵ + x⁴ + x³ + x²
          ---------------
x⁵ + x³ + x² + 1
* Note, as we are doing all the algebra here in GF(2), we use the conventional algebra addition sign + rather than the logical exclusive-OR symbol ⊕. The latter symbol may be found in some publications in this area, but not usually. Also, we will discontinue the circle in the symbol ⊕ used in eqn. 3.17 from here on.
Similarly, (x⁵ + x³ + x² + 1) ÷ (x³ + x² + x + 1) is given by:

                      x² + x + 1
x³ + x² + x + 1 ) x⁵ + 0 + x³ + x² + 0 + 1
                  x⁵ + x⁴ + x³ + x²
                  -----------------
                       x⁴ + 0 + 0 + 0 + 1
                       x⁴ + x³ + x² + x
                       ----------------
                            x³ + x² + x + 1
                            x³ + x² + x + 1
                            ---------------
                                          0
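These GF(2) operations map directly onto integer bit operations, with bit i of an integer holding the coefficient of xⁱ; subtraction becomes exclusive-OR, so the shift-and-add of ordinary long arithmetic becomes shift-and-XOR. A sketch:

```python
# GF(2) polynomial arithmetic: a polynomial is held as an integer whose
# bit i is the coefficient of x**i, so x^3 + x^2 + x + 1 = 0b1111.
def gf2_mul(a, b):
    prod = 0
    i = 0
    while b >> i:
        if (b >> i) & 1:
            prod ^= a << i        # shift-and-XOR partial products
        i += 1
    return prod

def gf2_divmod(a, b):
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift           # subtraction is XOR in GF(2)
    return q, a                   # quotient, remainder

p = gf2_mul(0b1111, 0b111)        # (x^3+x^2+x+1)(x^2+x+1)
print(bin(p))                     # x^5 + x^3 + x^2 + 1
print(gf2_divmod(p, 0b1111))      # quotient x^2+x+1, remainder 0
```

Dividing the product back by either factor recovers the other with zero remainder, mirroring the worked multiplication and division above.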
The {0,1} sequence of G(x) with increasing time is therefore given by:

y_0 = c_1 y_{-1} + c_2 y_{-2} + c_3 y_{-3} + ... + c_n y_{-n}
y_1 = c_1 y_0 + c_2 y_{-1} + c_3 y_{-2} + ... + c_n y_{-n+1}
y_2 = c_1 y_1 + c_2 y_0 + c_3 y_{-1} + ... + c_n y_{-n+2}
...
y_m = c_1 y_{m-1} + c_2 y_{m-2} + c_3 y_{m-3} + ... + c_n y_{m-n}
Hence the input signal, y_i, of eqn. 3.17 may now be replaced by y_m, where m
denotes the increasing time of the sequence defined by eqn. 3.18.*
We may therefore rewrite eqn. 3.17 as:
y_m = Σ (j=1 to n) c_j y_{m-j}        (3.17a)
* Notice that the relationships shown above are in a matrix-like form. It is possible to
use matrix operations for this subject area, see Yarmolik21 for example, but we will not
do so in this text.
This has rearranged the terms in the brackets { } into negative powers of x
(= past time), and positive powers of x (= present and future time), and has
eliminated the summation to infinity. Collecting together terms:
G(x) = { Σ (j=1 to n) c_j x^j ( y_{-j} x^{-j} + y_{-j+1} x^{-j+1} + ... + y_{-1} x^{-1} ) } / ( 1 - Σ (j=1 to n) c_j x^j )        (3.20)

Because addition and subtraction are the same in GF(2), we may replace
the minus sign in the denominator of eqn. 3.20 with plus, giving the
denominator:

1 + Σ (j=1 to n) c_j x^j        (3.21)

where each c_j is 0 or 1 according to whether the corresponding feedback tap is
open (absent) or closed (present). For the four-stage LFSR shown in Figure 3.18 the
denominator of G(x) would therefore be:
1 + x^1 + 0 + 0 + x^4
= 1 + x^1 + x^4
This denominator which controls the sequence which the circuit generates
from a given initial condition is known as the characteristic polynomial P(x)
for the sequence; the powers of x in the characteristic polynomial are the
same as the stages in the shift register to which the feedback taps are
connected. Two further points may be noted, namely:
(i) if the initial conditions y_{-1}, y_{-2}, ..., y_{-n} were all zero, then the numerator
of G(x) would be zero, and the sequence would be 0 0 0 ... irrespective
of the characteristic polynomial;
(ii) if the initial conditions were all zero except y_{-n} which was 1, then the
numerator would become c_n, which if c_n = 1 gives:

G(x) = 1 / (1 + x^1 + 0 + 0 + x^4)        (3.22)

Evaluating this by long division in ascending powers of x gives the output
sequence term by term:

                  1 + x + x^2 + x^3 + x^5 + x^7 + ...
    1 + x + x^4 ) 1
                  1 + x + x^4
                      x + x^4
                      x + x^2 + x^5
                          x^2 + x^4 + x^5
                          x^2 + x^3 + x^6
                                x^3 + x^4 + x^5 + x^6
                                x^3 + x^4 + x^7
                                      x^5 + x^6 + x^7
                                      x^5 + x^6 + x^9
                                            x^7 + x^9
                                            ...

that is, the output sequence 1 1 1 1 0 1 0 1 ....
For example, dividing the degree-3 primitive polynomial 1 + x^1 + x^3 (1 1 0 1)
into the polynomial 1 + x^7 (1 0 0 0 0 0 0 1) terminates with remainder 0:

              1 1 1 0 1
    1 1 0 1 ) 1 0 0 0 0 0 0 1
              1 1 0 1
                1 0 1 0
                1 1 0 1
                  1 1 1 0
                  1 1 0 1
                      1 1 0 1
                      1 1 0 1
                            0
This may be further illustrated by evaluating the previous LFSR example of
Figure 3.18, dividing the primitive polynomial 1 + x^1 + x^4 (1 1 0 0 1) into the
polynomial 1 + x15. The result of this division is the same as in the previous
worked example on page 85, except that we now have a 1 rather than a 0 in
the 16th position of the numerator, which causes the division to terminate
rather than continue onwards.
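The terminating division can be confirmed mechanically. The short Python sketch below (ours) reduces 1 + x^15 modulo 1 + x^1 + x^4 and finds remainder 0, whereas 1 + x^7 leaves a nonzero remainder, as expected since the order of this degree-4 primitive polynomial is 15:

```python
# Illustrative sketch: remainder of GF(2) polynomial division,
# with bit i of each integer holding the coefficient of x^i.
def gf2_mod(dividend, divisor):
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        # align the divisor under the leading term and XOR (= subtract)
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

p = 0b10011                        # 1 + x^1 + x^4, primitive
print(gf2_mod((1 << 15) | 1, p))   # 0: divides 1 + x^15 exactly
print(gf2_mod((1 << 7) | 1, p))    # nonzero remainder x + x^3
```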
All the primitive polynomials for any n are therefore prime factors of the
polynomial 1 + x^(2^n - 1). Fortunately we do not have to calculate the primitive
polynomials for our own use, since they have been extensively calculated and
published. Appendix A at the end of this text gives the minimum primitive
polynomials for n< 100, together with further comments and references to
other published tabulations. The theory and developments of these polynomial
relationships may be found in MacWilliams and Sloane45, Brillhart
et al.46, Bardell et al.47 and elsewhere, but some further comments may be
appropriate to include here.
First, as n increases the number of possible primitive polynomials increases
rapidly. The formula for this number may be found in the developments by
Golomb48 and listed in Bardell et al.47, being for example 16 possibilities for
n = 8, 2048 possibilities for n = 16 and 276 480 possibilities for n = 24. Not all
are minimum, that is containing the fewest number of nonzero terms, but
even so there are alternative possibilities with the fewest number of terms for
n > 3. The listings given in Appendix A therefore are not the only
possibilities. A complete listing of all the possible primitive polynomials for
up to n = 16 is given in Peterson and Weldon44.
Secondly, given any minimum primitive polynomial P(x) such as listed in
Appendix A, there is always the possibility of determining its reciprocal
polynomial P*(x), which is also a minimum primitive polynomial yielding a
maximum length sequence11,47,49. The reciprocal of the polynomial is
defined by:
P*(x) = x^n · P(1/x)        (3.23)
Table 3.9 The possible primitive polynomials for n = 15 with the least number of
nonzero terms (trinomials)
P1(x) = 1 + x^1 + x^15      P1*(x) = 1 + x^14 + x^15
P2(x) = 1 + x^4 + x^15      P2*(x) = 1 + x^11 + x^15
P3(x) = 1 + x^7 + x^15      P3*(x) = 1 + x^8 + x^15
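Since eqn. 3.23 simply reverses the order of the n + 1 coefficients, the reciprocal polynomial may be computed by a bit reversal, as in this illustrative Python sketch (ours; bit i of the integer holds the coefficient of x^i):

```python
# Illustrative sketch: P*(x) = x^n . P(1/x) is a reversal of the
# n+1 coefficient bits of P(x).
def reciprocal(poly, n):
    bits = format(poly, f'0{n + 1}b')    # coefficients as a bit string
    return int(bits[::-1], 2)            # reversed order = reciprocal

p1 = (1 << 15) | (1 << 1) | 1            # P1(x) = 1 + x^1 + x^15
print(bin(reciprocal(p1, 15)))           # 1 + x^14 + x^15, as in Table 3.9
```

Applying the reversal twice returns the original polynomial, confirming that P(x) and P*(x) form a reciprocal pair.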
Finally, the LFSR circuit configuration that we have considered so far has the
feedback taps from the chosen LFSR stages all exclusive-ORed back to the
first stage. However, for any given circuit and characteristic polynomial, an
alternative circuit configuration with the same characteristic polynomial P(x)
and the same output sequence G(x), see eqn. 3.22, is possible by including the
exclusive-OR logic gates between appropriate LFSR stages. This is illustrated
in Figure 3.20. For each nonzero c_i in Figure 3.20a there is an effective
exclusive-OR gate between stages n - i and n - i + 1 in Figure 3.20b; c_n is
always nonzero, and therefore there is always a connection between Q_n and
the first stage as shown. For example, the equivalent of the LFSR circuit
shown in Figure 3.18 with the characteristic polynomial P(x) = 1 + x^1 + x^4 would
have one exclusive-OR gate between the third and final stages as shown in
Figure 3.20c.
Notice that the same number of two-input exclusive-OR gates is necessary
in both possible circuit configurations, which we have termed type A and type
B in Figure 3.20. However, although the characteristic polynomial and hence
the output sequence of both type A and type B can be the same, the precise
n-bit data held in the n stages of the type A and type B LFSRs after each clock
pulse will not always be exactly the same. It is left as an exercise for the reader
to compile the state table for the type B LFSR with the characteristic
polynomial 1 + x^1 + x^4, and compare it with the state table given in Figure 3.18b.
In general the type A LFSR of Figure 3.20 is preferable to the type B from
the manufacturing point of view, and most practical circuits show this
configuration. However, we will briefly come back to the type B in Section 4.5
of the following chapter, since it has a certain mathematical advantage when
the n-bit data in the LFSR rather than the 2^n - 1 pseudorandom output
sequence is of interest.
Further alternative circuit configurations have been investigated,
particularly the hybrid Wang-McCluskey circuits which seek to minimise the
number of exclusive-OR gates by a combination of the two circuit concepts
Figure 3.20 Two alternative circuit configurations for the same maximum length
pseudorandom sequence generation
a type A LFSR, which is the circuit so far considered
b alternative type B circuit configuration
c type B realisation of the maximum length LFSR of Figure 3.18 with
the characteristic polynomial P(x) = 1 + x^1 + x^4. Note, other
publications may refer to these two configurations as type 1 and type
2 LFSRs, but regrettably there is a lack of consistency whether a is type
1 and b is type 2, or vice versa
shown in Figure 3.20 (Reference 51). We will not pursue these and other alternatives such
as partitioning a LFSR into smaller autonomous LFSRs, particularly as the
cellular automata (CA) pseudorandom generators which we will introduce
below have theoretical advantages over LFSR generators. Further reading
may be found in References 11, 13, 44, 47 and 50.
Qk-1  Qk  Qk+1    90 output    150 output
 0    0    0          0            0
 0    0    1          1            1
 0    1    0          0            1
 0    1    1          1            0
 1    0    0          1            1
 1    0    1          0            0
 1    1    0          1            0
 1    1    1          0            1
* Other functions of Qk-1, Qk and Qk+1 have been investigated57,58, but it has now
been formally proved that only the 90 and 150 functions are appropriate to generate
maximum length sequences. We will, therefore, only consider the 90 and 150
functions in this text.
Figure 3.21 Two basic cells from which maximum length pseudorandom CA
generators may be built
a type 90 cell
b type 150 cell
150 cells, the more expensive cell of the two, and maximising the use of the
90 cell. It has been shown59 that for n ≤ 150 at most two 150 cells are
required, the remainder all being 90 cells. The list for n ≤ 150 is given in
Appendix B.
The circuit and resulting M-sequence for a four-stage autonomous CA
generator with alternating 90 and 150 cells is given in Figure 3.22. As will be
seen, the resulting maximum length sequence does not have the simple shift
characteristic of the LFSR generator, but instead has a much more random-
like relationship between the successive output vectors Q1 Q2 Q3 Q4. A
forbidden state of 0 0 0 0 is present as in an autonomous LFSR generator,
necessitating a seed of ...1... to be present to allow the M-sequence to
proceed. The circuit for n = 4 using the data tabulated in Appendix B would
be similar to Figure 3.22 but with the 90 and 150 cells interchanged—there is
always a choice of circuit configurations for n > 2.
The analysis of the sequence generated by a given CA may easily be found
by a matrix computation, where all multiplications and additions are in
GF(2). For example, consider the string of 90 and 150 cells shown in Figure
3.23; then the state transition matrix [T] is given by:
0 1 0 0 0 0 0 0
1 1 1 0 0 0 0 0
0 1 0 1 0 0 0 0
0 0 1 1 1 0 0 0
0 0 0 1 0 1 0 0
0 0 0 0 1 1 1 0
0 0 0 0 0 1 0 1
0 0 0 0 0 0 1 1
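The tridiagonal pattern makes [T] easy to construct directly from the left-to-right string of cell types. A small Python sketch (ours) for the eight-cell example of Figure 3.23:

```python
# Illustrative sketch: build the tridiagonal GF(2) state transition
# matrix [T] from a string of 90/150 cell types.
def transition_matrix(cells):
    n = len(cells)
    t = [[0] * n for _ in range(n)]
    for i, cell in enumerate(cells):
        t[i][i] = 1 if cell == 150 else 0    # 150 cell: own Qk term present
        if i > 0:
            t[i][i - 1] = 1                  # Qk-1 neighbour connection
        if i < n - 1:
            t[i][i + 1] = 1                  # Qk+1 neighbour connection
    return t

T = transition_matrix([90, 150, 90, 150, 90, 150, 90, 150])
for row in T:
    print(row)
```

The diagonal entries reproduce the 1 1 1 rows for the 150 cells and the 1 0 1 rows for the 90 cells noted in the text.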
Figure 3.22 Four-stage autonomous CA generator with alternating 90 and 150 cells,
and the resulting M-sequence:
                        Q1 Q2 Q3 Q4
initialisation
(seed) before first     1  0  0  0
clock pulse
clock pulse  1          0  1  0  0
             2          1  1  1  0
             3          1  1  1  1
             4          1  1  0  0
             5          1  0  1  0
             6          0  0  0  1
             7          0  0  1  1
             8          0  1  1  0
             9          1  0  1  1
            10          0  0  1  0
            11          0  1  0  1
            12          1  1  0  1
            13          1  0  0  1
            14          0  1  1  1
            15          1  0  0  0
                        0  1  0  0    sequence repeats
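The tabulated M-sequence can be reproduced by applying the 90 and 150 next-state functions directly, taking the missing end neighbours as constant 0 (the null boundary condition, which is what the tridiagonal matrix of Figure 3.23 implies). A Python sketch (ours):

```python
# Illustrative sketch: one clock pulse of a 90/150 cellular automaton
# with null (constant 0) boundaries.
def ca_step(q, cells):
    n = len(q)
    nxt = []
    for k in range(n):
        left = q[k - 1] if k > 0 else 0
        right = q[k + 1] if k < n - 1 else 0
        if cells[k] == 90:
            nxt.append(left ^ right)            # rule 90: Qk-1 XOR Qk+1
        else:
            nxt.append(left ^ q[k] ^ right)     # rule 150: adds Qk itself
    return nxt

cells = [90, 150, 90, 150]
q = [1, 0, 0, 0]                                # seed before first clock pulse
for pulse in range(1, 16):
    q = ca_step(q, cells)
    print(pulse, q)
# pulse 15 returns the seed 1 0 0 0, and the 2^4 - 1 sequence repeats
```

Stepping fifteen times visits all fifteen nonzero states and returns to the seed, exactly as tabulated above.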
This is a tridiagonal matrix, that is all zeros except on three diagonals, the
diagonal row entries being 1 1 1 for internal 150 cells and 1 0 1 for internal
90 cells. For any present-state output vector Q], the next-state output vector
Q+] of the CA is given by:

[T] Q] = Q+]        (3.24)
For example, taking the eighth vector in Figure 3.22b, namely 0 1 1 0, the
next-state vector for this CA is given by:
[0 1 0 0] [0]   [0+1+0+0]   [1]
[1 1 1 0] [1]   [0+1+1+0]   [0]
[0 1 0 1] [1] = [0+1+0+0] = [1]
[0 0 1 1] [0]   [0+0+1+0]   [1]
   [T]    Q]                Q+]
The following vector may be obtained by the transformation of 1 0 1 1, or by
transforming the previous vector 0 1 1 0 by [T]^2, that is:

[0 1 0 0] [0 1 0 0]   [1 1 1 0]
[1 1 1 0] [1 1 1 0]   [1 1 1 1]
[0 1 0 1] [0 1 0 1] = [1 1 0 1]
[0 0 1 1] [0 0 1 1]   [0 1 1 0]
   [T]       [T]        [T]^2
In general, the kth vector Q+k] after any given vector Q] may be obtained by:

[T]^k Q] = Q+k]        (3.25)

where [T] is the given state transition matrix.
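Eqns 3.24 and 3.25 are directly executable. The following Python sketch (ours) reproduces the worked transformations above, with all sums taken mod 2:

```python
# Illustrative sketch: GF(2) matrix-vector and matrix-matrix products
# for the CA next-state equations 3.24 and 3.25.
def mat_vec(t, q):
    return [sum(t[i][j] * q[j] for j in range(len(q))) % 2
            for i in range(len(t))]

def mat_mat(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

T = [[0, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]

print(mat_vec(T, [0, 1, 1, 0]))               # [1, 0, 1, 1], as in the text
print(mat_vec(mat_mat(T, T), [0, 1, 1, 0]))   # two clock pulses in one step
```

Applying [T]^2 to 0 1 1 0 gives the same result as applying [T] twice, which is the content of eqn. 3.25 for k = 2.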
There is, therefore, relatively little difficulty in the analysis of a given
cellular automaton.* The theory for the synthesis of a maximum length CA,
however, is much more difficult; unlike the LFSR where polynomial division
over GF(2) is involved, for the CA no polynomial operations are directly
applicable and generally unfamiliar matrix operations are deeply involved.
It has been shown by Serra et al.61-63 that autonomous LFSRs and CAs are
isomorphic to each other, so that given any maximum length LFSR a
corresponding maximum length CA may be determined; more precisely,
given any characteristic polynomial for a LFSR a corresponding CA may be
found. In this context corresponding does not imply the same output
sequence, but a reordered output sequence of the same length. The
procedure involves three principal steps namely:
(i) compile the state transition matrix of the chosen LFSR generator;
(ii) determine the companion matrix64 of this state transition matrix using
similarity transformations, the former being a matrix which is
isomorphic to the transition matrix;
(iii) tridiagonalise the companion matrix into a tridiagonal form, this being
undertaken by a development of the Lanczos tridiagonalisation
algorithm64.
The last step generates the state transition matrix for the corresponding
cellular automaton, which as we have seen must be in tridiagonal form
because all interconnections involve only Qk-1, Qk and Qk+1 signals. Further
details may be found in Serra et al, particularly in References 61-63. Note,
the type 1 and type 2 LFSRs in Serra is what we have termed type B and type
A LFSRs, respectively.
However, this method of developing a maximum length autonomous
pseudorandom CA generator is found to produce far from minimum CA
assemblies, that is with the fewest number of the more expensive 150 cells,
even when developed from minimum length primitive polynomials. No
relationship exists between minimal LFSRs and minimal CAs, and hence the
search for minimal cost CA realisations has been undertaken by a search
procedure based upon the tridiagonal characteristics of the CA state
transition matrix.
Looking back at the example state transition matrix of Figure 3.23 it will be
seen that the main diagonal has the value 0 for a type 90 cell, and 1 for a type
150 cell; the two adjacent side diagonals are always all Is. Also, if the
transition matrix for any n produces the maximum length sequence of 2^n - 1
states, then [T]^k will yield the identity matrix [I] for k = 2^n - 1, but will not yield
[I] for any k < 2^n - 1. This is so because the maximum length sequence repeats
after 2^n - 1 different states to its starting state. The search procedure to
identify the smallest number of type 150 cells for a maximum length
sequence is therefore to set all the main diagonal entries initially to zero, and
then progressively insert one 1, two 1s, ... in the main diagonal. The search is
stopped for any given n when [T]^(2^n - 1) = [I]. Further details may be found in
Reference 59. This procedure has, as previously noted, shown that only two
type 150 cells are necessary for n ≤ 150, see details in Appendix B.
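For small n the whole search is a few lines of code. The Python sketch below (our illustration of the procedure, not the program of Reference 59) tries diagonals with progressively more 1s and accepts the first for which the order of [T] is exactly 2^n - 1; for n = 4 it finds that two 150 cells are needed, at positions matching the 90/150/90/150 and 150/90/150/90 arrangements discussed earlier:

```python
# Illustrative sketch of the minimal-150-cell search over GF(2).
from itertools import combinations

def t_matrix(n, diag):
    t = [[0] * n for _ in range(n)]
    for i in range(n):
        t[i][i] = diag[i]                  # 1 = 150 cell, 0 = 90 cell
        if i: t[i][i - 1] = 1
        if i < n - 1: t[i][i + 1] = 1
    return t

def order_is_max(t, n):
    """True if the smallest k with [T]^k = [I] is exactly 2^n - 1."""
    def mat_mat(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) % 2
                 for j in range(n)] for i in range(n)]
    identity = [[int(i == j) for j in range(n)] for i in range(n)]
    p = t
    for k in range(1, 2 ** n):
        if p == identity:
            return k == 2 ** n - 1
        p = mat_mat(p, t)
    return False

n = 4
for count in range(n + 1):                 # progressively insert more 1s
    hits = [pos for pos in combinations(range(n), count)
            if order_is_max(t_matrix(n, [int(i in pos) for i in range(n)]), n)]
    if hits:
        print(count, hits)                 # fewest 150 cells, and where
        break
```

For n = 4 no diagonal with zero or one 1 gives a maximum length sequence, so the search stops at two 150 cells, consistent with the at-most-two result quoted from Reference 59.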
Looking at the autonomous LFSR circuits and the CA circuits which we
have now examined, it will be appreciated that either could be used as a
standalone hardware generator for supplying pseudorandom test vectors50,57.
The total circuit requirements of a CA generator are more complex than that
of a LFSR generator, since more exclusive-OR gates are required, but no
interconnections running the full length of the shift register are ever
required in the CA case as is always necessary in a LFSR generator. This may
The problem of CMOS testing with its dual p-channel and n-channel FET
configurations was introduced in the preceding chapter. Functional testing
and IDDQ current tests were seen to be the most appropriate rather than test
pattern generation based upon, say, the stuck-at fault model.
To cover open-circuit and short-circuit faults in a CMOS circuit we have
seen that:
(i) for any open-circuit fault it is necessary to apply a specific pair of test
vectors, the first being an initialisation vector to establish a logic 0(1) at
the gate output, the second being the test vector which checks that the
output will then switch to 1 (0);
(ii) for any short-circuit fault then on some test vectors there will be a
conducting path from VDD to ground through the gate, which may be
detected by monitoring the supply current IDD under quiescent (non-
transitory) conditions, this being the IDDQ measurement.
The latter feature is illustrated by Figure 3.24.
Let us consider first the functional tests for open-circuit faults. Table 2.3
in Chapter 2 illustrated the exhaustive test set for a simple two-input
CMOS NAND gate; the pair of input test vectors 0 1, 1 1 checked for
transistor T3 or T4 open circuit, the pair 1 1, 1 0 checked for T2 open circuit
and the pair 1 1, 0 1 checked for T1 open circuit. The mathematics for
determining an appropriate test vector sequence was not, however,
considered.
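A switch-level model makes the two-pattern requirement concrete. In the Python sketch below (ours; the assignment of T1, T2 to the p-channel pull-ups driven by inputs A, B, and T3, T4 to the series n-channel pull-downs, is our reading of Table 2.3), an open-circuit transistor never conducts and a floating output retains its previous value:

```python
# Illustrative sketch: switch-level model of a two-input CMOS NAND
# with an optional open-circuit transistor fault.
def nand_output(a, b, prev, open_fault=None):
    def on(name, cond):
        return cond and name != open_fault
    pull_up = on('T1', a == 0) or on('T2', b == 0)      # parallel p-channels
    pull_down = on('T3', a == 1) and on('T4', b == 1)   # series n-channels
    if pull_up and not pull_down:
        return 1
    if pull_down and not pull_up:
        return 0
    return prev                     # floating output: memory effect

def two_pattern_test(init_vec, test_vec, fault):
    """True if the (initialisation, test) pair detects the open fault."""
    good = nand_output(*test_vec, prev=nand_output(*init_vec, prev=0))
    bad_init = nand_output(*init_vec, prev=0, open_fault=fault)
    bad = nand_output(*test_vec, prev=bad_init, open_fault=fault)
    return good != bad

print(two_pattern_test((0, 1), (1, 1), 'T3'))   # True: pair 01, 11 detects
print(two_pattern_test((1, 1), (1, 0), 'T2'))   # True: pair 11, 10 detects
print(two_pattern_test((1, 1), (1, 1), 'T3'))   # False: no initialisation
```

The first two calls match the test pairs quoted above; the final call shows why a test vector alone, without the initialisation vector, cannot expose the memory effect of an open-circuit fault.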
VLSI TESTING: digital and mixed analogue/digital techniques

The importance of testing integrated circuits (ICs) has escalated with the
increasing complexity of circuits fabricated on a single IC chip. No longer
is it possible to design a new IC and then think about testing: such
considerations must be part of the initial design activity, and testing
strategies should be part of every circuit and system designer's education.
This book is a comprehensive introduction and reference for all aspects of
IC testing. It includes all of the basic concepts and theories necessary for
advanced students, from practical test strategies and industrial practice, to
the economic and managerial aspects of testing. In addition to detailed
coverage of digital network testing, VLSI testing also considers in depth
the growing area of testing analogue and mixed analogue/digital ICs, used
particularly in signal processing.
ISBN 978-0-852-96901-5