ATPG 2

The document discusses various algorithms for Automatic Test Pattern Generation (ATPG), focusing on the D Algorithm, PODEM, and FANout-oriented test generation methods. It highlights the strengths and weaknesses of each approach, emphasizing the importance of decision points and implications in reducing search space for testable faults. Additionally, it introduces simulation-based methods for sequential circuits and the use of nine-valued logic to overcome limitations in traditional ATPG techniques.


Test Generation

D Algorithm
D Algorithm: Pros and Cons
• + The D Algorithm is a complete ATPG algorithm
▪ Guaranteed to generate a pattern for any testable fault
• - Large search space
▪ Value assignments are allowed on internal signals
▪ Backtracking can occur at every gate
▪ Decision making at the gate level can be overkill and unproductive (is the J-frontier even necessary?)

PODEM (Path-Oriented Decision Making)
• Also a branch-and-bound search
• Decisions only on PIs — fewer decision points than the D Algorithm
▪ No J-frontier needed
▪ No internal conflicts
• The D-frontier may still become empty
▪ Backtrack whenever the D-frontier becomes empty
▪ Backtrack also when no X-path exists from any D/D' to a PO
• Decisions are selected based on a backtrace from the current objective
PODEM (Path-Oriented Decision Making)
• Idea
○ Only allow assignments to PIs
■ No assignments to internal nodes
■ Greatly reduces the search tree
○ Assign a PI, then run forward implications
○ Flip the PI when
■ the fault is not activated, or
■ there is no propagation path to a PO
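The idea above can be sketched as a tiny branch-and-bound over PI values. This is a minimal, hypothetical illustration, not a full PODEM: the "circuit" is hard-wired to z = AND(a, b) with the target fault b stuck-at-0, and detection is checked by comparing good and faulty simulation values.

```python
def and3(x, y):
    """3-valued AND over {0, 1, 'x'}."""
    if x == 0 or y == 0:
        return 0
    if x == 1 and y == 1:
        return 1
    return 'x'

def simulate(assign, fault_site, stuck_val):
    """Good/faulty simulation of the toy circuit z = AND(a, b)."""
    a = assign.get('a', 'x')
    b = assign.get('b', 'x')
    b_f = stuck_val if fault_site == 'b' else b
    return and3(a, b), and3(a, b_f)   # (good z, faulty z)

def podem(pis, fault_site, stuck_val):
    """PODEM-style search: decisions only on PIs, flip a PI when the
    current value fails, backtrack when both values fail."""
    def search(assign, depth):
        zg, zf = simulate(assign, fault_site, stuck_val)
        if zg != 'x' and zf != 'x' and zg != zf:
            return dict(assign)          # fault effect reaches the PO
        if depth == len(pis):
            return None                  # all PIs assigned, no detection
        pi = pis[depth]
        for v in (1, 0):                 # try one value, flip on failure
            assign[pi] = v
            found = search(assign, depth + 1)
            if found:
                return found
            del assign[pi]               # backtrack
        return None
    return search({}, 0)

test = podem(['a', 'b'], 'b', 0)   # target fault: b stuck-at-0
```

For b/0, the search finds a = 1 (to propagate through the AND) and b = 1 (to activate the fault).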
X-Path Used by PODEM

• The D in the circuit has no path of X's to any PO
▪ i.e., the D is blocked on every path to any PO
• PODEM checks whether there is an X-path from the fault site to a PO
• An X-path means the D can still be propagated; otherwise the D-frontier is blocked
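The X-path check is essentially a graph search: starting from a line carrying D/D', follow only lines whose value is still X and see whether a PO is reachable. A small sketch, assuming the netlist is given as a fanout map (the names `f`, `g`, `z` are illustrative):

```python
def x_path_exists(netlist, values, start, primary_outputs):
    """Check whether a line carrying D/D' still has a path of X-valued
    lines to some primary output (PODEM's X-path check)."""
    stack, seen = [start], set()
    while stack:
        line = stack.pop()
        if line in primary_outputs:
            return True
        if line in seen:
            continue
        seen.add(line)
        for nxt in netlist.get(line, []):      # fanout lines of `line`
            if values.get(nxt, 'x') == 'x':    # only X lines extend the path
                stack.append(nxt)
    return False

# Toy netlist f -> g -> z (z is the PO); the D on f is blocked once g is set
netlist = {'f': ['g'], 'g': ['z'], 'z': []}
open_path    = x_path_exists(netlist, {'f': 'D', 'g': 'x', 'z': 'x'}, 'f', {'z'})
blocked_path = x_path_exists(netlist, {'f': 'D', 'g': 0,   'z': 'x'}, 'f', {'z'})
```

When `blocked_path` is False, PODEM backtracks immediately instead of assigning more PIs.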
PODEM Example

Target fault: f/0

• 1st objective: f=1, in order to excite the target fault
• Backtrace from the objective: c=0
• Simulate(c=0): D-frontier = {g}; some gates have been assigned {c=d=e=h=0, f=D}
• 2nd objective: advance the D-frontier, a=1
• Backtrace from the objective: a=1
• Simulate(a=1): fault detected at z
Another PODEM Example

Target fault: b/0

• 1st objective: excite the fault: b=1
• Backtrace from the objective: a=0
• Simulate(a=0): b=D, c=0, d=0 — empty D-frontier
• Must backtrack
• Change the decision to a=1
• Simulate(a=1): b=0, c=1, d=1 — D-frontier still empty
• Backtrack; no more decisions. The fault is untestable.
• D Algorithm: decisions at internal nodes → too many decision points
• PODEM: decisions at PIs only → too few decision points, prone to poor choices
• FAN: decisions at headlines and fanout stems → a good tradeoff
• FAN: FANout-oriented test generation

• Four improvements over PODEM:
1. Make decisions at headlines or fanout stems
2. Forward/backward implications
3. Unique sensitization
4. Multiple backtraces
• Headlines are the output signals of fanout-free regions
• Any value on a headline can always be justified by the PIs

We only need to backtrace to the headlines, which reduces the number of decisions.

• Bound line: a line fed, directly or indirectly, by a fanout stem
• Free line: a line that is not bound
• Objectives: {k=0, m=1}
• A backtrace from k=0 may favor b=0, but Simulate(b=0) would violate the second objective, m=1!
• Multiple backtrace makes the backtrace more intelligent, so future conflicts are avoided


• Direct implications of f=1:
▪ {d=1, e=1, g=1, j=1, k=1}
• Direct implications of j=0:
▪ {h=0, g=0, f=0, w=1, w=0, z=0}
• Indirect implications of f=1, obtained by simulating the direct implications of f=1:
▪ {x=1}
• This is repeated for every node in the circuit
• Direct and indirect implications of f=1:
▪ {d=1, e=1, g=1, j=1, k=1, x=1}
• Extended backward implications are obtained by enumerating the cases for unjustified gates
▪ Unjustified gates: {d=1}
• To justify d=1, we need either a=1 or b=1
▪ Simulate(a=1, impl(f=1)) = Sa
▪ Simulate(b=1, impl(f=1)) = Sb
• The intersection of Sa and Sb is the set of extended backward implications of f=1
▪ f=1 implies {z=0}
• This is repeated for every unjustified gate, as well as for every node in the circuit
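The intersection step can be sketched directly. This is a hedged toy, not the FAN implementation: `simulate` is a stand-in for the real implication procedure, and the per-case result dictionaries are hypothetical values chosen so that only z=0 is common to both justifications, as in the slide.

```python
def learned_implications(justify_cases, simulate):
    """Extended backward implications: simulate every way of justifying an
    unjustified gate and keep only the assignments common to all cases."""
    results = [simulate(case) for case in justify_cases]
    common = set(results[0].items())
    for r in results[1:]:
        common &= set(r.items())        # set intersection of (signal, value)
    return dict(common)

# Hypothetical simulation results for justifying d=1 via a=1 or via b=1
def sim(case):
    return {'a=1': {'z': 0, 'x': 1},
            'b=1': {'z': 0, 'x': 0}}[case]

learned = learned_implications(['a=1', 'b=1'], sim)   # only z=0 survives
```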
• Static implications are computed once for the entire circuit; dynamic implications are derived during ATPG.
• Suppose c=1 has already been assigned
▪ Then to obtain z=0, b must be 0
▪ This is the intersection of having either d=0 or e=0 in the presence of c=1
• A sequential circuit has memory in addition to combinational logic.
• A test for a fault in a sequential circuit is a sequence of vectors that
• initializes the circuit to a known state,
• activates the fault, and
• propagates the fault effect to a primary output
• Methods of sequential-circuit ATPG
• Time-frame expansion methods
• Simulation-based methods
Huffman’s model of an FSM
● Sequential ATPG
○ Generate test patterns for a sequential circuit
○ without DFT or scan
Assumptions: Sequential ATPG
▪ Control only the PIs: flip-flops are not controllable
▪ Observe only the POs: flip-flops are not observable
▪ Faults in the combinational logic only

• Introduction
• Time-frame expansion method
• Simulation-based method

• Assumptions
• No scan allowed
■ control the PIs and observe the POs only
• Faults in the combinational logic only
■ No faults in FFs or latches

● Idea: replicate the circuit and connect the time frames by wires
○ yi = "states"; no FFs
○ Replace clock cycles by space
● The problem becomes a combinational ATPG problem
○ NOTE: the target fault appears in every time frame
1. Select a target fault f
2. Create a copy of the combinational logic; set it to time frame 0
3. Generate a test for f in time frame 0 using the D Algorithm
4. If the fault effect is propagated to the FFs, continue the fault-effect propagation in the next time frame
5. If there are values required at the FF outputs, continue the justification in the previous time frame
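The unrolling idea can be sketched as follows. This is a minimal illustration on an invented toy circuit (one FF, output z = a AND y, next state Y = a OR y), not the circuit from the slides; the fault is injected in every replicated frame, as noted above.

```python
def frame(a, y, fault=False):
    """One time frame of a toy circuit: output z = a AND y,
    next state Y = a OR y. `fault=True` injects a hypothetical
    stuck-at-0 on input a inside the frame."""
    af = 0 if fault else a
    return af & y, af | y           # (z, Y)

def unroll(inputs, y0, fault=False):
    """Time-frame expansion: replicate the combinational frame and
    connect frames by wires in place of the FF. The target fault
    appears in every time frame."""
    y, outputs = y0, []
    for a in inputs:
        z, y = frame(a, y, fault)
        outputs.append(z)
    return outputs

good = unroll([1, 1], y0=0)              # fault-free response at z
bad  = unroll([1, 1], y0=0, fault=True)  # response with the stuck-at-0
```

The two-vector sequence a = (1, 1) distinguishes the faulty machine (good z-sequence [0, 1] vs. faulty [0, 0]), which is exactly what combinational ATPG on the unrolled copy would find.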
● STEP 2: create time frame 0
● STEP 3: generate a test
○ a0=1; y1=0; Y1=D'
● STEP 4: fault-effect propagation to time frame 1
○ a1=1
● STEP 5: fault activation back to time frame -1
○ a-1=0
● SA0 at the input of the OR gate
● No way to propagate
○ This fault is untestable by sequential ATPG
● Endless time-frame expansion
○ Memory explosion!
Extended D-Algorithm
The Extended D-algorithm Fails!

[Figure: the circuit with the x SA1 fault expanded into time frame -1 and time frame 0; a conflict arises and Y0 is over-specified]

⚫ The extended D-algorithm fails due to a conflict
◆ It requires a=0 in time frame -1, but that line is SA1
◆ Actually, Y0 is over-specified in 5-valued logic
Why Does It Fail?

[Figure: the same two time frames; Y0 is over-specified]

⚫ Traditional 5-valued logic (0/0, 1/1, x/x, 0/1, 1/0) is NOT sufficient
◆ it cannot express 1/x, 0/x, x/0, x/1

Q: How many values do we need in total?

ANS: 9 — the fault-free and faulty values can each be 0, 1, or x, giving 3 × 3 = 9 combinations.
Nine-valued D-algorithm [Muth 76]
⚫ Solution: use 9-valued logic instead of 5-valued logic

  Symbol      Roth's 5-valued logic    Muth's 9-valued logic
              Fault-free  Faulty       Fault-free  Faulty
  D  (1/0)    1           0            1           0
  D' (0/1)    0           1            0           1
  0  (0/0)    0           0            0           0
  1  (1/1)    1           1            1           1
  X  (x/x)    x           x            x           x
  G0 (0/x)    -           -            0           x
  G1 (1/x)    -           -            1           x
  F0 (x/0)    -           -            x           0
  F1 (x/1)    -           -            x           1
Nine-valued Truth Table
⚫ Example: AND gate

  AND | 0    0/x  D'   x/0  x/x  x/1  D    1/x  1
  ----+--------------------------------------------
  0   | 0    0    0    0    0    0    0    0    0
  0/x | 0    0/x  0/x  0    0/x  0/x  0    0/x  0/x
  D'  | 0    0/x  D'   0    0/x  D'   0    0/x  D'
  x/0 | 0    0    0    x/0  x/0  x/0  x/0  x/0  x/0
  x/x | 0    0/x  0/x  x/0  x/x  x/x  x/0  x/x  x/x
  x/1 | 0    0/x  D'   x/0  x/x  x/1  x/0  x/x  x/1
  D   | 0    0    0    x/0  x/0  x/0  D    D    D
  1/x | 0    0/x  0/x  x/0  x/x  x/x  D    1/x  1/x
  1   | 0    0/x  D'   x/0  x/x  x/1  D    1/x  1
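The whole table follows from one observation: a 9-valued symbol is a (fault-free, faulty) pair over {0, 1, x}, and the composite AND is just the 3-valued AND applied to each component independently. A minimal sketch:

```python
def and3(x, y):
    """3-valued AND over {0, 1, 'x'}."""
    if x == 0 or y == 0:
        return 0
    if x == 1 and y == 1:
        return 1
    return 'x'

def and9(p, q):
    """Nine-valued AND: values are (fault-free, faulty) pairs over
    {0, 1, 'x'}; apply the 3-valued AND componentwise."""
    return (and3(p[0], q[0]), and3(p[1], q[1]))

D, Dbar, G1, F1 = (1, 0), (0, 1), (1, 'x'), ('x', 1)

r1 = and9(D, G1)      # D AND 1/x  -> (1, 0) = D, matching the table
r2 = and9(Dbar, F1)   # D' AND x/1 -> (0, 1) = D', matching the table
```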
Nine-Valued Test Generation

[Figure: the same two time frames solved with 9-valued logic; the partially specified values (0/X, 1/X, ...) leave Y0 unconstrained, so no conflict arises]

      a  b
  V1  0  0
  V2  0  1

A test pattern is successfully generated: the two-vector sequence (a, b) = (0, 0), (0, 1).
Comparison: 9-valued vs. 5-valued

[Figure: the 9-valued solution (top) succeeds on the same circuit, while the 5-valued solution (bottom) over-specifies Y0 and runs into a conflict]
Simulation-Based Methods
⚫ Idea: use logic/fault simulators to guide ATPG [Seshu 62]
◆ Simulation is faster than ATPG
⚫ Approach
◆ Generate candidate test vectors
◆ Evaluate the fitness of the candidates by logic or fault simulation
◆ Select the best candidate based on a cost function
⚫ Advantage
◆ No time-frame expansion, so memory management is easy
Simulate Many Vectors and Choose the Best

CONTEST – Concurrent Test Generator for Sequential Circuits [Agrawal & Cheng 89]

⚫ Based on an event-driven concurrent fault simulator
⚫ Searches for test vectors guided by cost functions
⚫ Three phases
◆ Initialization
◆ Concurrent fault detection
◆ Single fault detection
1. Initialization Phase
⚫ Start with an arbitrary test vector
◆ Start with the FFs in unknown states
⚫ Use logic simulation (not fault simulation)
◆ Cost = number of FFs in an unknown state
◆ Trial vectors are generated by single-bit changes of the current vector. A trial vector is accepted and becomes the current vector if it lowers the cost
⚫ Stop this phase when the cost drops below a desired value
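The greedy single-bit-change loop can be sketched on an invented toy circuit. This is a hedged illustration (the circuit, a single FF that loads OR(b, AND(a, FF)), is an assumption made up for the example), not the CONTEST implementation:

```python
def and3(x, y):
    if x == 0 or y == 0:
        return 0
    return 1 if x == 1 and y == 1 else 'x'

def or3(x, y):
    if x == 1 or y == 1:
        return 1
    return 0 if x == 0 and y == 0 else 'x'

def cost(vec, ff='x'):
    """Cost = number of FFs left unknown after one cycle of a toy
    circuit whose single FF loads OR(b, AND(a, FF))."""
    a, b = vec
    return 1 if or3(b, and3(a, ff)) == 'x' else 0

def initialize(start):
    """Greedy descent: try single-bit changes, accept only if cost drops."""
    cur, c = start, cost(start)
    improved = True
    while improved and c > 0:
        improved = False
        for i in range(len(cur)):        # trial vectors: single-bit changes
            trial = list(cur)
            trial[i] ^= 1
            trial = tuple(trial)
            if cost(trial) < c:          # accept only a cost-reducing trial
                cur, c, improved = trial, cost(trial), True
                break
    return cur, c

vec, c = initialize((1, 0))   # (1, 0) leaves the FF unknown; greedy fixes it
```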
QUIZ
Q1: Initially the FF is unknown. Given three trial vectors, simulate the circuit and decide their costs, where cost = number of unknown FFs.

Q2: Which trial vector would you pick?

[Figure: a small circuit with inputs A, B, C, internal signals J and K, and one FF initialized to X]

  AB  cost = number of unknown FFs
  00  1
  01  0
  10  1

(AB = 01 is the only trial vector that drives the FF to a known state.)
2. Concurrent Fault Detection Phase
⚫ Start with fault simulation of the generated initialization sequence
◆ Detected faults are dropped from the fault list
⚫ Compute the cost of the last vector
◆ Cost of an undetected fault f:
– COST(f) = minimum distance of its fault effect to a PO
– distance = number of levels of logic gates
◆ Cost of a vector:
– sum of the costs of all undetected faults
⚫ Trial vectors are generated by single-bit changes
◆ Only accept vectors that reduce the cost
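The vector cost above can be sketched directly. The `BIG` penalty for a fault whose effect is not visible anywhere is an assumption of this sketch (the slides leave that case implicit):

```python
BIG = 100   # assumed penalty when a fault's effect is blocked everywhere

def vector_cost(fault_effects, level_to_po):
    """CONTEST phase-2 cost of a vector: for each undetected fault, take
    the minimum distance (in gate levels) of any line carrying its fault
    effect to a PO, then sum over all undetected faults."""
    total = 0
    for fault, lines in fault_effects.items():
        if lines:
            total += min(level_to_po[l] for l in lines)
        else:
            total += BIG
    return total

# Hypothetical snapshot: fault f1's effect sits 2 levels from a PO,
# fault f2's effect has been blocked entirely
c = vector_cost({'f1': {'g'}, 'f2': set()}, {'g': 2})
```

A trial vector is accepted only if this sum decreases, which pushes fault effects toward the POs over successive vectors.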
Example

[Figure: the example circuit fault-simulated over successive vectors; the cost of each trial vector is the distance of D or D' to a PO, and the lowest-cost vector is accepted at each step]

Test vector sequence for the SA0 fault detection: AB = 01, 10, 11 (from the cost functions)
Q1: Given that the FF's initial state is zero, evaluate the costs of three trial vectors: 00, 10, 01

Q2: Which test vector would you pick?

[Figure: the circuit with an SA1 fault, simulated for each trial vector]

  AB  cost = distance of D or D' to a PO
  00  0
  10  1
  01  –
Need for Phase 3
⚫ Experience shows that test patterns for stuck-at faults are usually clustered rather than evenly distributed
⚫ When only a few faults are left, their tests are isolated vectors, so we need a different test generation strategy

Phase 2: Concurrent Fault Detection → Phase 3: Single Fault Detection
3. Single Fault Detection Phase
⚫ Start with any vector
⚫ Generate new vectors by single-bit changes to reduce the cost of the selected fault until it is detected
◆ The lowest-cost fault is picked first
⚫ Cost of a fault f at signal line g:
◆ If the fault is not activated yet:
  K·CA(f) + CP(f)
  where K is a constant, CA is the activation cost, and CP is the propagation cost
◆ If the fault is activated:
  min(CP(i)), over i in the set of inputs to signal g
CA and DC
⚫ Activation Cost, CA
◆ CA(g stuck-at-v) = DCv'(g) = dynamic controllability of line g at value v'
⚫ Dynamic Controllability, DC
◆ Similar to sequential controllability in SCOAP, except that known logic values are taken into account

  AND gate, C = AND(A, B):
    DC0(C) = min[DC0(A), DC0(B)]   if C = 1 or x;   0  if C = 0
    DC1(C) = DC1(A) + DC1(B)       if C = 0 or x;   0  if C = 1
  OR gate, C = OR(A, B):
    DC0(C) = DC0(A) + DC0(B)       if C = 1 or x;   0  if C = 0
    DC1(C) = min[DC1(A), DC1(B)]   if C = 0 or x;   0  if C = 1
  Inverter, C = NOT(A):
    DC0(C) = DC1(A)                if C = 1 or x;   0  if C = 0
    DC1(C) = DC0(A)                if C = 0 or x;   0  if C = 1
  Primary input C:
    DC0(C) = 1                     if C = 1 or x;   0  if C = 0
    DC1(C) = 1                     if C = 0 or x;   0  if C = 1
  Flip-flop, C = FF(A):
    DC0(C) = DC0(A) + K*           if C = 1 or x;   0  if C = 0
    DC1(C) = DC1(A) + K*           if C = 0 or x;   0  if C = 1

  * K is a chosen constant
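The AND-gate rule above can be written out as a small function; the example numbers are made up for illustration:

```python
def dc_and(c_val, dc_a, dc_b):
    """Dynamic controllability (DC0, DC1) of the output C of a 2-input
    AND gate. dc_a and dc_b are the inputs' (DC0, DC1) pairs;
    c_val is C's current value in {0, 1, 'x'}."""
    dc0 = 0 if c_val == 0 else min(dc_a[0], dc_b[0])   # drive one input to 0
    dc1 = 0 if c_val == 1 else dc_a[1] + dc_b[1]       # drive both inputs to 1
    return dc0, dc1

# With C still x: setting C=0 costs the cheaper input's DC0,
# setting C=1 costs the sum of both inputs' DC1
dc = dc_and('x', (2, 3), (1, 5))   # -> (1, 8)
```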
CA and DC Example
⚫ CA(g1 stuck-at-0) = DC1(g1) = 10 → the easier target
⚫ CA(g2 stuck-at-1) = DC0(g2) = 100

[Figure: the example circuit annotated with (DC0, DC1) pairs, e.g. g1 = (10, 0), g2 = (100, 104), FF = (6, 0)]
Propagation Cost, CP
⚫ CP(g) = dynamic observability of node g
⚫ Dynamic Observability (DO)
◆ Similar to combinational observability in SCOAP
◆ Measures the effort to observe the fault on a given node N:
– the number of gates between N and the POs, and
– the minimum number of PI assignments required to propagate the logic value on node N to a primary output
Dynamic Observability (DO)
⚫ Similar to combinational observability in SCOAP

  Input A of an AND gate, C = AND(A, B):   DO(A) = DO(C) + DC1(B) + 1
  Input A of an OR gate,  C = OR(A, B):    DO(A) = DO(C) + DC0(B) + 1
  Input A of an inverter, C = NOT(A):      DO(A) = DO(C) + 1
  Fanout stem A with branches C1, C2:      DO(A) = min[DO(C1), DO(C2)]
  Primary output:                          DO = 0
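A sketch of these rules as helper functions, with made-up numbers for the usage example:

```python
def do_and_input(do_c, dc1_side):
    """DO of an AND-gate input: the side input must be driven to 1."""
    return do_c + dc1_side + 1

def do_or_input(do_c, dc0_side):
    """DO of an OR-gate input: the side input must be driven to 0."""
    return do_c + dc0_side + 1

def do_stem(do_branches):
    """DO of a fanout stem: observe through the cheapest branch."""
    return min(do_branches)

# Input of an AND gate whose output is a PO (DO(C) = 0),
# with a side input whose DC1 is 4
d = do_and_input(0, 4)     # 0 + 4 + 1 = 5
s = do_stem([3, 1, 7])     # cheapest branch wins
```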
CP and DO Example
⚫ CP(g1) = DO(g1) = 1
⚫ CP(g2) = DO(g2) = 1

[Figure: the same circuit annotated with DO values; both g1 and g2 have DO = 1]
Total Cost
⚫ Fault g1: CA = 10, CP = 1
⚫ Fault g2: CA = 100, CP = 1
❑ Choose g1 SA0 as the target fault for test generation

[Figure: the annotated circuit, repeated from the previous two slides]
• Random and weighted-random ATPG are the simplest forms of simulation-based ATPG

• Challenge: how do we guide the search to generate effective vectors that achieve high fault coverage, low computation cost, and small test sets?
• A GA is made up of
▪ A population of individuals (chromosomes)
– Each individual is a candidate solution
▪ Each individual has an associated fitness
– Fitness measures the quality of the individual
▪ Genetic operators to evolve from one generation to the next
– Selection, crossover, mutation
Genetic Algorithms (GA) [Holland 1975]
⚫ General principle: survival of the fittest
◆ Keep a population of feasible solutions, not just one
◆ The parent population generates a child population
– by gene crossover, mutation, etc.
◆ Select only the best children; remove the weak ones
◆ Repeat for many generations
Crossover and Mutation
⚫ Test vectors are represented as bit-string “genes”
⚫ Crossover: two feasible solutions generate children by swapping gene tails

  01011 001          01011 110
               →
  10000 110          10000 001

⚫ Mutation: each gene bit can flip with a random probability

  01011001 → 01111001
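The two operators above can be sketched in a few lines; the crossover example reproduces the slide's tail swap at bit position 5:

```python
import random

def crossover(p1, p2, point):
    """Single-point crossover: swap the tails of two bit-string genes."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(gene, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return ''.join(('1' if b == '0' else '0') if rng.random() < rate else b
                   for b in gene)

c1, c2 = crossover('01011001', '10000110', 5)   # -> '01011110', '10000001'
mutant = mutate('01011001', 0.2, random.Random(42))
```

In a GA-based ATPG, each gene encodes a test vector (or a vector sequence), and fitness comes from fault simulation.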
Summary
⚫ Simulation-based methods
◆ Randomly generate many trial test vectors
◆ Evaluate the test vectors by simulation and pick the best
◆ Need many testability measures to make smart decisions
⚫ Advantages
◆ Better memory management than time-frame expansion
◆ Timing can be considered
◆ Genetic algorithms can be used for optimization
⚫ Disadvantages
◆ Cannot identify untestable faults
◆ Test length can be longer than with time-frame expansion
