Concept of LEC and DFT

LEC (Logic Equivalence Check)

• LEC is a critical step in the ASIC design cycle that ensures the logical
functionality of a design remains unchanged throughout the various
stages of design transformation, such as synthesis, place and route,
and ECOs (Engineering Change Orders).
• If the functionality is altered at any stage in the design process, it can
render the entire chip unusable. This makes LEC one of the most
crucial checks in the entire chip design process.
• With shrinking technology nodes and increasing design complexity, LEC
plays an indispensable role in verifying that the design’s logic remains
consistent from the RTL to the final layout.
[Flow diagram: the reference (Golden) design, standard library, and revised design are read in during Setup Mode (constraints, design setup, flattening and modelling); Mapping Mode then maps the compare points between the two designs; Compare Mode compares the designs, and failing points are diagnosed and fixed until the comparison passes and LEC is complete.]
Setup Mode:
Input Files:
• Golden Design: Typically, this is the synthesized netlist, regarded as the
reference or correct version.
• Revised Design: This is the version that has undergone modifications after
synthesis, such as post-layout or post-ECO netlists.
• Standard Library: The library of standard cells used in the design, which is
essential for accurate comparison.
Supporting Files:
• <design_name>.lec file: A script file guiding the tool through various
commands systematically.
• <design_name>.scan_const file: Contains scan-related constraints.
• <design_name>.stdlib file: Points to the standard cell library used in the
design.
Flattening and Modelling:
After loading the input files, the tool flattens and models both the Golden
and Revised designs. It then automatically detects and maps critical
elements, including:
- Primary Inputs/Outputs
- D Flip-Flops
- D Latches
- TIE-E (Error) Gates
- TIE-Z (High Impedance) Gates
- Black Boxes
Mapping Mode
Automatic Mapping: In this phase, the tool works to align the key points
between the Golden and Revised designs.
Mapping Methods:
•Name-Based Mapping: This approach matches key points based on their
signal or gate names, which is effective for handling minor logic changes.
•No-Name Mapping: This method is used when the designs have different
naming conventions or undergo significant structural changes.
Unmapped Points: Key points that the tool cannot map are categorized
into three types:
•Extra Unmapped Points: Points that exist in only one of the designs.
•Unreachable Unmapped Points: Points that lack an observable connection,
such as those not connected to a primary output.
•Not-Mapped Unmapped Points: Points that are accessible but not correctly
aligned between the two designs.
Compare Mode
Comparison Process:
The tool analyzes the mapped key points to determine if they are
logically equivalent. The comparison yields one of the following
outcomes:
•Equivalent
•Non-Equivalent
•Inverted-Equivalent
•Aborted (Inconclusive results)

Adjusting Effort Levels:


For points marked as "Aborted," you can increase the comparison effort
to attempt to resolve these inconclusive points.
Report Generation:
After the comparison, the tool generates several reports:
•Non-Equivalence Report: Details the points that are not logically
equivalent.
•Unmapped Report: Lists points that could not be mapped.
•Blackbox Report: Identifies portions of the design treated as black
boxes.
•Abort.rpt, Unreachable.rpt, Floating.rpt: Provide information on
aborted points, unreachable points, and floating signals, respectively.
•Mapped.rpt: Lists the points that were successfully mapped.
Handling LEC Failures: A Step-by-Step Guide
LEC failures can be challenging, but you can systematically troubleshoot
them by following these steps:
Step 1: Analyse the Non-Equivalent Report
• Non-Equivalent Points: Begin by examining the "non-equivalent.rpt" file,
which lists the compare points (e.g., flip-flops) that failed the LEC.
• Multibit Flops Example: In the example, 152 compare points are flagged
as non-equivalent, including multibit flops (where two single-bit flops are
merged). The report counts each bit separately, leading to a higher
number.
• Actual Issue: The real issue may involve fewer points (e.g., 72 actual non-
equivalent flops, with each contributing two bits to the total count).
Step 2: Check the Unmapped Report
• Unmapped Points: Review the "unmapped.rpt" file to identify nets with
broken logical connections. These are the nets that the tool couldn’t map.
• Tracing the Issue: Trace the unmapped nets to locate missing connections.
For example, net BUFT_net_362908 might be unmapped due to an
unintentional disconnection from its driver, possibly caused by a missing
inverter.
• Fanout Check: Verify the fanout of the net in the LEC pass database to
confirm the missing connection.
Step 3: Fix the LEC Issue
• Inserting Missing Components: Once the issue is identified (e.g., a missing
inverter), manually insert the necessary component and reconnect the net.
• Rerun LEC: After correcting the issue, rerun the LEC. The comparison
should now pass, and the "non-equivalent.rpt" file should report zero non-
equivalent points.
Common Areas of LEC Failures
• Multibit Flops: Mapping challenges often arise due to name changes in
the revised netlist.
• Clock Gating Cells: These cells may fail to map correctly after
modifications.
• Manual ECOs: Logical connections might break during the
implementation of manual ECOs.
• Functional ECOs and DFT Constraints: Missing constraints can lead to
discrepancies and mismatches.
Benefits of LEC
• Reduced Reliance on Gate-Level Simulation: LEC ensures logic
equivalence without the need for extensive simulations.
• Enhanced Confidence in Tool Revisions: LEC helps validate new tools for
synthesis and place & route.
• Bug Detection: It aids in identifying bugs that may be introduced during
the back-end process.
DFT (Design for Testability)
• Definition: Design for Test (DFT) refers to a set of techniques and
methodologies used in VLSI (Very-Large-Scale Integration) design to ensure
that integrated circuits (ICs) are testable and can be effectively verified for
manufacturing defects and functional correctness. DFT involves
incorporating additional test structures and features into the design to
facilitate the testing of the circuit’s functionality and internal logic.
Need for DFT in VLSI:
1.Manufacturing Defect Detection
2.Improved Test Coverage
3.Reduction in Testing Time and Cost
4.Facilitation of Automated Testing
5.Debugging and Diagnosis
6.Compliance with Industry Standards
Testing:
Functional Testing:
• If a circuit has N inputs, the number of possible test cases is 2^N.
This exponential growth means that as N increases, the number of
test cases required grows very quickly.
• Challenges: For large N, the number of test cases becomes
impractically large, making exhaustive testing infeasible. This
makes functional testing challenging for circuits with many inputs.
Structural Testing:
• If the circuit is analysed in terms of its internal structure rather than its
full input space, the test effort scales with the number of circuit nodes M.
Under the stuck-at fault model each node can be stuck-at-0 or stuck-at-1,
giving at most 2M faults to target, so the number of test patterns grows
roughly linearly with circuit size.
• For practical designs, 2M is far smaller than the 2^N input combinations
required for exhaustive functional testing, which is why structural testing
is the preferred approach for large circuits.
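To make the gap concrete, a quick back-of-the-envelope comparison (N and M below are assumed, purely illustrative values):

# Illustrative only: exhaustive functional patterns (2^N) vs. stuck-at fault
# count (2*M) for a hypothetical design.
N = 64          # assumed number of primary inputs
M = 500_000     # assumed number of internal nodes (gate pins/nets)

functional_patterns = 2 ** N    # exhaustive input combinations
stuck_at_faults = 2 * M         # each node stuck-at-0 or stuck-at-1

print(f"Exhaustive functional patterns: {functional_patterns:.3e}")  # ~1.8e+19
print(f"Stuck-at faults to target:      {stuck_at_faults}")          # 1000000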
Fault Model in VLSI Design
A fault model in VLSI design is a theoretical framework used to represent and
analyse potential faults that can occur in a circuit. It helps in understanding
how different types of faults might affect the functionality of the circuit and
guides the design of testing methods to detect these faults.
Types of Fault Models:
1. Stuck-At Fault Model:
A fault model where a circuit node is assumed to be stuck at a constant logic
value (0 or 1) regardless of the actual input.
2. Bridging Fault Model:
A fault model where two or more nodes in a circuit are unintentionally
connected (bridged) together, causing incorrect logic levels.
3. Open Fault Model:
A fault model where a circuit node is disconnected due to a break in the
connection or an open circuit.
4. Delay Fault Model:
A fault model where the timing of signal transitions is affected, causing
delays beyond the expected or acceptable range.
5. Transition Fault Model:
A fault model where a node fails to transition from one logic state to
another as expected.
6. Pattern Sensitive Fault Model:
A fault model where the presence of a fault depends on specific input
patterns or sequences.
Importance of Fault Models:
1.Test Design: Fault models help in designing effective test patterns and
methodologies to detect specific types of faults in the circuit.
2.Coverage Measurement: They provide a basis for measuring test
coverage and evaluating how well the testing process detects potential
faults.
3.Fault Diagnosis: Fault models assist in diagnosing and locating faults
within the circuit, improving debugging and repair processes.
4.Reliability Assessment: They contribute to assessing the reliability and
robustness of the circuit by simulating and analyzing how faults affect its
performance.
Detecting Faults Using Test Vectors
Test vectors are sequences of input values applied to a circuit to verify
its correctness and detect faults. The process involves applying these
vectors, observing the circuit’s response, and comparing it with the
expected output to identify discrepancies.
Steps to Detect a Fault Using Test Vectors:
Define Fault Model:
Identify the type of faults to test for, such as stuck-at faults, bridging faults,
or delay faults. Use fault models to understand how these faults could
affect the circuit.
Generate Test Vectors:
Develop test vectors (input patterns) that can potentially detect the faults
identified in the circuit. This can be done manually or using automated test
generation tools.
Apply Test Vectors:
Input the test vectors into the circuit through simulation tools or physical
testing setups. Record the circuit’s outputs for each test vector.
Compare Results:
Calculate the expected outputs based on the fault-free circuit. Compare
the observed outputs from the circuit under test with the expected outputs.
Identify Faults:
Any mismatch between the expected and actual outputs indicates the
presence of a fault. Use diagnostic tools or fault simulation to determine
the location and type of the detected fault.
Example of Fault Detection
Fault Model: Stuck-At Fault
Circuit: A simple combinational logic circuit with a 2-input AND gate and a
2-input OR gate, both driven by inputs A and B (the figure marks the AND
output stuck-at-0 and the OR output stuck-at-1).
Test Vectors:
• Vector 1: A = 0, B = 0
• Vector 2: A = 0, B = 1
• Vector 3: A = 1, B = 0
• Vector 4: A = 1, B = 1
Faults to Detect:
Stuck-at-0 Fault: Suppose the output of the AND gate is stuck at 0.
Stuck-at-1 Fault: Suppose the output of the OR gate is stuck at 1.
Applying Test Vectors (fault-free expected outputs):
1. Vector 1 (A = 0, B = 0): expected AND output 0, expected OR output 0
2. Vector 2 (A = 0, B = 1): expected AND output 0, expected OR output 1
3. Vector 3 (A = 1, B = 0): expected AND output 0, expected OR output 1
4. Vector 4 (A = 1, B = 1): expected AND output 1, expected OR output 1
Observations:
• If the AND output is stuck-at-0, it stays 0 for every test vector regardless
of the input values; only Vector 4, whose fault-free AND output is 1,
exposes a mismatch, so Vector 4 detects this fault.
• If the OR output is stuck-at-1, it stays 1 for every test vector regardless
of the input values; only Vector 1, whose fault-free OR output is 0,
exposes a mismatch, so Vector 1 detects this fault.
• Vectors 2 and 3 produce identical outputs in the faulty and fault-free
circuits, so by themselves they cannot detect either of these faults.
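The same fault-detection loop can be sketched in a few lines of Python (the circuit, fault list, and vectors simply mirror the example above; all names are illustrative):

# Minimal stuck-at fault simulation for the AND/OR example.
# Each circuit function maps inputs (a, b) to the two observable outputs.

def good_circuit(a, b):
    return {"AND": a & b, "OR": a | b}

def faulty_circuit(a, b, fault):
    out = good_circuit(a, b)
    node, stuck_value = fault            # e.g. ("AND", 0) = AND output stuck-at-0
    out[node] = stuck_value
    return out

vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]
faults = [("AND", 0), ("OR", 1)]         # AND output s-a-0, OR output s-a-1

for fault in faults:
    detecting = [v for v in vectors
                 if good_circuit(*v) != faulty_circuit(*v, fault)]
    print(f"{fault[0]} stuck-at-{fault[1]} detected by vectors: {detecting}")

# Prints: AND s-a-0 detected only by (1, 1); OR s-a-1 detected only by (0, 0).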
Combinational Circuit: Controllability and Observability
In combinational circuits, controllability and observability are critical
metrics used to evaluate the testability of a circuit. These metrics help in
determining how easy it is to set a circuit node to a particular logic value
(controllability) and how easy it is to observe the effect of a particular node
at the circuit's output (observability).
[Figure: a combinational circuit with an input port, internal fan-in feeding an XOR gate, fan-out, and an output port.]
Controllability:
Controllability refers to the ease with which a specific internal signal or
node in a circuit can be controlled (set to a desired logic value) through the
circuit's primary inputs.
• High Controllability: A signal or node has high controllability if it can be
easily set to a logic '1' or '0' through the available inputs. This is usually the
case when the node is directly connected to the primary inputs or when
there is a simple path from the inputs to the node.
• Low Controllability: A signal or node has low controllability if it is difficult
to set to a particular logic value. This might occur if the node is buried
deep within the circuit, has a complex logic path from the inputs, or is
dependent on multiple inputs that need to be set in a specific way.
Observability
Observability refers to how easily the value of a particular internal signal or
node can be observed at the circuit's primary outputs.
• High Observability: A node has high observability if changes in its value
can be easily detected at the circuit’s outputs. This usually happens when
there is a direct and uncomplicated path from the node to the output.
• Low Observability: A node has low observability if it is difficult to observe
its effect on the circuit's outputs. This might occur if the node is deep
inside the circuit or its effects are masked by other logic elements before
reaching the output.
Importance in Testing
1.Test Generation: High controllability and observability are desired
because they make it easier to generate test vectors that can thoroughly
test the circuit. If a circuit node has low controllability or observability, it
may be challenging to create a test that fully verifies that part of the circuit.
2.Fault Detection: Low controllability or observability can make certain
faults difficult to detect. For example, if a fault occurs on a node with low
controllability, it might be difficult to stimulate the fault condition.
Similarly, if the node has low observability, it might be challenging to
observe the fault at the output.
3.Design for Testability (DFT): Techniques like scan chains and built-in self-
test (BIST) are often employed to improve the controllability and
observability of circuit nodes, making the circuit easier to test and
improving fault coverage.
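One widely used way to put numbers on the controllability notion above is SCOAP-style testability measures, where a larger value means a node is harder to control. The sketch below applies the standard SCOAP combinational-controllability rules to a tiny made-up netlist (the netlist and values are illustrative, not from the slides):

# SCOAP-style combinational controllability (CC0, CC1); larger = harder to control.
# Primary inputs have CC0 = CC1 = 1; each gate adds 1 plus its inputs' costs.

def and_gate(in_costs):
    cc0 = min(c0 for c0, _ in in_costs) + 1   # any single 0 forces the output to 0
    cc1 = sum(c1 for _, c1 in in_costs) + 1   # all inputs must be 1
    return cc0, cc1

def or_gate(in_costs):
    cc0 = sum(c0 for c0, _ in in_costs) + 1   # all inputs must be 0
    cc1 = min(c1 for _, c1 in in_costs) + 1   # any single 1 forces the output to 1
    return cc0, cc1

# Hypothetical netlist: n1 = A AND B, n2 = n1 OR C
A = B = C = (1, 1)                            # primary inputs: (CC0, CC1)
n1 = and_gate([A, B])                         # -> (2, 3)
n2 = or_gate([n1, C])                         # -> (4, 2): setting n2 to 0 is hardest
print("n1 CC0/CC1:", n1, " n2 CC0/CC1:", n2)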
Sequential Circuit: Controllability and
Observability
Controllability:
• Problem in Sequential Circuits: In sequential circuits, setting a specific
value at any pin is more challenging than in combinational circuits. This
difficulty arises because the state of the circuit depends not only on the
current inputs but also on previous states.

[Figure: combinational logic with primary inputs and outputs, a feedback
path through an XOR gate, and a memory element (latch) storing the
present state.]
• State Traversal: To set a specific value at a pin, the circuit may need to
traverse through several states over multiple clock cycles. This state
traversal process requires careful manipulation of inputs over time.
• Time-Consuming Process: Finding the correct sequence of inputs to set a
desired state or value is often a time-consuming task, typically handled by
a sequential Automatic Test Pattern Generation (ATPG) tool.

Observability:
• Feedback Loop: The circuit has a feedback loop, meaning the outputs of
the combinational logic are fed back as inputs to the flip-flops, which store
the present state. This feedback complicates the observability, as the
effect of a particular input on the output may not be immediately visible.
• State Dependency: The observability of the circuit depends on the current
state stored in the flip-flops. To observe the impact of a particular input on
the output, the circuit may need to be in a specific state, adding complexity
to the observation process.
Solutions and Techniques for Sequential Circuit
In sequential circuits, where controllability and observability are challenging
due to the dependency on previous states and feedback loops, there are
several solutions and techniques to address these difficulties:
1. Scan Design (Scan Chains):
• Scan Chains: The most common solution is to design the sequential
circuit with scan chains. This technique involves converting flip-flops into
scan cells, allowing them to be configured as shift registers during the test
mode.
• Test Mode: In test mode, scan chains shift in test patterns, making it easier
to control (set) and observe (capture) the internal state of the circuit.
• Benefits: This technique improves controllability by allowing the direct
setting of flip-flop states and enhances observability by easily shifting out
the contents of flip-flops for comparison.
2. Built-In Self-Test (BIST):
• Self-Test Mechanisms: BIST is an approach where the circuit has built-in
hardware to test itself. This is especially useful for complex systems
where external testing is difficult.
• Test Pattern Generation and Analysis: BIST circuits generate test
patterns and compare the results internally. This approach bypasses the
need for external test pattern generation and response analysis, thus
simplifying controllability and observability.
3. Partial Scan Design:
• Targeting Specific Flip-Flops: In some cases, not all flip-flops are
included in the scan chain. Only those that are critical for achieving better
controllability and observability are targeted.
• Balancing Overhead: This approach strikes a balance between improving
testability and minimizing the design overhead (area, power, and timing)
associated with full scan chains.
4. Test Point Insertion (TPI):
• Adding Control and Observation Points: Test point insertion involves
adding additional logic (e.g., multiplexers or control points) into the circuit
to improve the ability to control or observe certain signals.
• Strategic Placement: These test points are strategically placed at
locations where controllability or observability is particularly challenging.
5. ATPG with Sequential Patterns:
• Advanced Pattern Generation: Using Automatic Test Pattern Generation
(ATPG) tools capable of handling sequential circuits is another solution.
These tools can generate test patterns that account for state
dependencies, though they are more complex and time-consuming than
those for combinational circuits.
• Sequential Analysis: ATPG tools perform state traversal and generate
patterns that ensure the desired values are set and observed through
multiple clock cycles.
Scan Chain in Design for Testability (DFT)
• A scan chain is a series of flip-flops connected in a shift register
configuration. This configuration allows data to be shifted in and out
sequentially, making it easier to control and observe the internal states of a
circuit during testing.

[Figure: a sequential circuit before and after scan insertion. Left: combinational
logic with a feedback path through ordinary D flip-flops (pins D, Q_out, Clk).
Right: the same logic with each flip-flop replaced by a scan flop that adds
SI (scan in), SE (scan enable), and SO (scan out) pins.]
Key Modifications in the Design:
1. Addition of Extra Primary Ports:
Test Mode (TM): A control signal used to switch the circuit between
normal operation and test mode.
Scan Enable (SE): Enables the scan operation, allowing the shift of data
through the scan chain.
Scan In (SI): The input port for the scan chain where test data is fed into
the scan cells.
Scan Out (SO): The output port of the scan chain where data is shifted
out for observation.
2. Replacement of D Flip-Flops with Scan Cells:
The original D flip-flops in the memory are replaced with scan cells,
which are specialized flip-flops that can function in both normal and test
modes.
1. Normal Mode: The circuit operates normally, with scan cells acting as
regular flip-flops. The scan path is inactive; Test Mode (TM) is off.
2. Shift Mode: Test data is shifted in and out through the scan chain. Scan
Enable (SE) is on, allowing data to move through the scan cells.
3. Capture Mode: Outputs from the combinational logic are captured into
the scan cells after the test vectors are applied. SE is off, and the data
from the logic is captured into the scan cells with a clock pulse.

[Figure: combinational logic with its feedback path passing through scan
flops; each scan flop has D, SI, SE, SO, and Clk pins.]

Mode    | TE | SE
Normal  | 0  | 0
Shift   | 1  | 1
Capture | 1  | 0


Scan Cell: MUXED-D
A Mux-D scan cell is a fundamental building block used in scan design to
enable testing of digital circuits. It integrates a multiplexer (Mux) and a D flip-
flop to create a scan cell that can operate in both normal and test modes.

[Figure: a 2-to-1 multiplexer with the functional data input D on select 0 and
the scan input SI on select 1, controlled by SE; its output feeds the D pin of a
flip-flop clocked by CLK, whose output is SO/Q.]

• Multiplexer (Mux): Selects between the normal data input (D) and the scan
input (SI) based on the Scan Enable (SE) signal.
• D Flip-Flop: Stores the selected input and passes it to the output.
Operation:
1. Normal Mode:
• SE = 0: The multiplexer passes the regular data input (D) to the D flip-
flop.
• The flip-flop behaves like a standard storage element, holding the data
for regular circuit operation.
2. Shift Mode:
• SE = 1: The multiplexer selects the scan input (SI), allowing test data
to be shifted through the scan chain.
• The D flip-flop shifts the test data through the scan cells.
3. Capture Mode:
• SE = 0: After applying the test vectors, the circuit enters capture
mode, where the D flip-flop captures the output from the
combinational logic into the scan chain.
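A small behavioural model of these three modes (Python used purely as illustration; the pin names follow the Mux-D cell described above):

class MuxDScanCell:
    """Behavioural model of a Mux-D scan cell: a 2:1 mux in front of a D flip-flop."""
    def __init__(self):
        self.q = 0                      # current flip-flop state (SO/Q)

    def clock(self, d, si, se):
        """One clock edge: capture the data input when SE=0, shift SI when SE=1."""
        self.q = si if se else d
        return self.q

# Normal/capture behaviour (SE = 0): the cell stores the functional data input D.
cell = MuxDScanCell()
print(cell.clock(d=1, si=0, se=0))      # -> 1 (captured D)

# Shift behaviour (SE = 1): the cell takes its value from the scan input SI.
print(cell.clock(d=0, si=1, se=1))      # -> 1 (shifted SI)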
Scan Chain Forming
Scan chain forming is a crucial step in Design for Testability (DFT) that
involves connecting multiple scan cells in a sequential manner to create a
scan chain. This scan chain is used to facilitate the testing of complex
digital circuits by improving the controllability and observability of internal
flip-flops or latches.

[Figure: top, the original circuit with inputs A, B, C, D, flip-flops F1, F2, F3,
and output Out, all clocked by Clk; bottom, the same circuit after scan
insertion, where F1, F2, F3 become scan flops whose SO and SI pins are
chained together under a common SE signal, with SI entering the first cell
and SO leaving the last.]
Steps to Form a Scan Chain:
• Identify Scan Cells:
Replace regular flip-flops in the circuit with scan cells (e.g., Mux-D scan
cells). Each scan cell should have additional inputs for Scan In (SI), Scan
Enable (SE), and Scan Out (SO) signals.
• Connecting Scan Cells:
Connect the SO (Scan Out) of one scan cell to the SI (Scan In) of the next
scan cell in the sequence. Continue this process until all the scan cells are
connected in series. The first scan cell in the chain receives its SI from an
external test pin, and the last scan cell's SO is connected to another test pin
for observation.
• Creating the Scan Path:
The scan cells are now part of a serial shift register, allowing test data to be
shifted in and out through the scan chain. This serial path enables the
application of test vectors and the capture of circuit responses without
interfering with normal operation.
• Testing Modes:
Shift Mode: When SE is enabled, test data is shifted through the scan chain.
Capture Mode: After the test vectors are applied, SE is disabled, and the
circuit operates normally to capture the response in the scan cells.
Normal Mode: During regular operation, the scan chain does not interfere
with the circuit's functionality.

By forming a scan chain, designers can ensure that the circuit is thoroughly
tested for manufacturing defects, thereby improving the reliability and
quality of the final product.
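Wiring the cells SO-to-SI gives a working shift path; the sketch below (three cells, an arbitrary pattern, names invented for illustration) shows shift and capture on top of the Mux-D cell model from the earlier sketch:

# Behavioural sketch of a 3-cell scan chain built from Mux-D scan cells.

class MuxDScanCell:
    """Same behaviour as the earlier Mux-D sketch: 2:1 mux in front of a D flop."""
    def __init__(self):
        self.q = 0
    def clock(self, d, si, se):
        self.q = si if se else d
        return self.q

class ScanChain:
    """Scan cells wired SO -> SI in series, sharing SE and the clock."""
    def __init__(self, length):
        self.cells = [MuxDScanCell() for _ in range(length)]

    def shift(self, scan_in_bit):
        """Shift mode (SE=1): push one bit in at SI, return the bit leaving SO."""
        scan_out_bit = self.cells[-1].q
        # Update from the last cell backwards so each cell sees its neighbour's old value.
        for i in reversed(range(len(self.cells))):
            previous = self.cells[i - 1].q if i > 0 else scan_in_bit
            self.cells[i].clock(d=0, si=previous, se=1)
        return scan_out_bit

    def capture(self, logic_outputs):
        """Capture mode (SE=0): load combinational-logic outputs into the cells."""
        for cell, value in zip(self.cells, logic_outputs):
            cell.clock(d=value, si=0, se=0)

chain = ScanChain(3)
for bit in [1, 0, 1]:                    # shift a test pattern into the chain
    chain.shift(bit)
print([cell.q for cell in chain.cells])  # -> [1, 0, 1]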
Scan Design Flow: Tasks
1. Design Preparation
Ensure that the design is ready for scan insertion.
Tasks:
• Identify Scannable Flip-Flops: Determine which flip-flops in the design will
be converted into scan flip-flops.
• Clock and Reset Domain Identification: Identify different clock and reset
domains to handle them appropriately during scan insertion.
• Design Rule Checks (DRC): Run checks to ensure the design adheres to the
rules necessary for scan insertion.
• Boundary Scan Analysis: Prepare the design for boundary scan insertion if
required.
2. Scan Synthesis
Insert scan chains into the design to facilitate testing.
Tasks:
a) Scan Configuration:
Define Scan Architecture: Determine the type of scan architecture (e.g.,
full scan, partial scan, Mux-D scan).
Set Scan Parameters: Configure the scan chain length, scan cell types,
and other related parameters.
b) Scan Replacement:
Replace Regular Flip-Flops with Scan Flip-Flops: Modify the design to
replace standard flip-flops with scan-enabled flip-flops.
Handle Clock and Reset Pins: Ensure the scan flip-flops are correctly
connected to the clock and reset domains.
c) Scan Reordering:
Optimize Scan Chain Order: Reorder the scan cells to optimize the
routing and reduce wire length.
Scan Chain Balancing: Ensure balanced scan chains for efficient testing.
3. Test Vector Generation
Generate test vectors to validate the functionality of the scan chains.
Tasks:
• Automatic Test Pattern Generation (ATPG): Generate test vectors to
detect stuck-at faults, transition faults, etc.
• Fault Coverage Analysis: Analyze the fault coverage provided by the
generated test vectors.
4. Scan Verification
Verify the integrity and functionality of the inserted scan chains.
Tasks:
• Scan Chain Integrity Check: Ensure the scan chains are correctly connected and
functional.
• Simulate Test Vectors: Run simulations using the generated test vectors to verify
proper scan operation.
ATPG (Automatic Test Pattern Generation)
• ATPG (Automatic Test Pattern Generation) is crucial in the semiconductor
industry as it automates the creation of test patterns that detect
manufacturing defects in integrated circuits (ICs).
• These defects, such as stuck-at faults and bridging faults, can arise during
fabrication and impact the functionality and reliability of the final product.
By generating high-quality test patterns, ATPG ensures that defects are
detected and addressed early in the production process, leading to higher
fault coverage and product reliability.
• This process is particularly important in achieving cost efficiency, as it
reduces test time and helps in optimizing yield, which in turn lowers the
overall production costs.
ATPG: Controlling and Non-Controlling Values
• Controlling Value: A controlling value is a value that can be assigned to
any input of a multi-input gate such that the output is determined (fixed),
irrespective of the values on the other inputs.
• Non-Controlling Value: A non-controlling value is a value that can be
assigned to any input of a multi-input gate such that the output is decided
by the values on the other inputs.

Type of Gate | Controlling Value | Non-Controlling Value
AND          | 0                 | 1
NAND         | 0                 | 1
OR           | 1                 | 0
NOR          | 1                 | 0
XOR          | Not defined       | Any value

This information is essential in ATPG, as it helps in understanding how to
propagate faults through different gates by assigning appropriate values to
their inputs.
ATPG using the Path Sensitization Method
• ATPG (Automatic Test Pattern Generation) can be carried out using the
Path Sensitization Method, which involves three key steps:
1.Fault Sensitization: The process of applying specific input values to
sensitize (activate) a fault, making it affect the internal logic of the circuit.
2.Fault Propagation: Ensuring that the effect of the fault is propagated
through the circuit to an observable output (e.g., a scan-out or primary
output), where it can be detected.
3.Line Justification: Setting the values of the remaining inputs to justify the
path and ensure the correct propagation of the fault effect to the output.
These steps collectively allow for the generation of test patterns that can
detect and diagnose faults within the circuit, which is crucial for verifying the
integrity and functionality of the design.
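As a concrete illustration (the two-gate circuit here is made up for this example): let n1 = A AND B feed the output Y = n1 OR C, and consider a stuck-at-0 fault on n1. Sensitization sets A = B = 1 so that the fault-free value of n1 is 1, propagation requires the OR gate's other input to take its non-controlling value (C = 0), and line justification is trivial because A, B, and C are primary inputs. The vector A=1, B=1, C=0 therefore detects the fault, as the quick Python check below confirms:

# Path sensitization example: test for n1 stuck-at-0 in Y = (A AND B) OR C.

def circuit(a, b, c, n1_stuck_at=None):
    n1 = a & b
    if n1_stuck_at is not None:          # inject the fault on the internal net n1
        n1 = n1_stuck_at
    return n1 | c                        # primary output Y

vector = dict(a=1, b=1, c=0)             # derived by sensitize / propagate / justify
print("Y fault-free:", circuit(**vector))                     # -> 1
print("Y with n1 s-a-0:", circuit(**vector, n1_stuck_at=0))   # -> 0, mismatch detects the fault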
ATPG: Challenges and Solutions
Challenges
• Re-convergent Fanouts: A significant challenge in ATPG is managing re-
convergent fanouts, where multiple paths merge back together. This can
create complications because decisions made along different paths may
conflict when they come together.
• Decision-Making Points: ATPG involves making critical decisions at
various points in the circuit to ensure faults are propagated and paths are
justified. These decisions have broader implications for the generation of
test patterns.
• Incorrect Decisions: Some decisions may turn out to be incorrect, leading
to conflicts and necessitating backtracking to explore alternative options.
Backtracking Limits
• Excessive Backtracking: If the ATPG process requires too much
backtracking, it might suggest that the fault being tested is redundant and
cannot be detected by any generated test pattern.
• User-Defined Backtrack Limits: To prevent excessive backtracking, ATPG
tools often use a predefined backtrack limit (e.g., 1000 backtracks). Once
this limit is reached, the tool will stop attempting to generate a test vector.
Algorithmic Innovations
• D-Algorithm (Roth, 1966): The original algorithm for ATPG, which laid the
foundation for systematic fault detection and test pattern generation.
• PODEM (Path-Oriented Decision Making, Prabhu Goel, 1981): An
improved approach that enhances decision-making efficiency by focusing
on paths within the circuit.
• FAN (Fanout-Oriented Algorithm, Fujiwara): An advancement aimed at
effectively managing the challenges posed by fanouts in circuits.
Automatic Test Equipment (ATE)-based testing
ATE-based testing is a method of testing electronic devices using
specialized equipment called Automatic Test Equipment (ATE). This
equipment is designed to automatically perform various tests on electronic
components or systems to ensure they are functioning correctly.
Drawbacks of ATE-based testing:
• High cost: Testing with ATE is expensive due to the equipment itself and
the large amount of data that needs to be processed.
• Limited testing environment: ATE testing is only suitable for production
environments and cannot be used for field testing.
• Difficult at-speed testing: The inductance and capacitance of the test
probes can slow down the testing process.
BIST as a solution:
Built-in self-testing: BIST incorporates features within the IC itself that
allow it to test its own operation.
Reduced ATE dependence: BIST reduces the reliance on external ATE
equipment, enabling testing and repair in the field.
Cost-effective: BIST is a more cost-effective solution compared to ATE, as
it doesn't require expensive external equipment.
Features:
1. Pseudo-random pattern generator:
Generates test patterns on-the-fly
Reduces storage requirements
2. Signature analysis:
Detects failures without comparing exact responses
Reduces storage requirements
BIST: Architecture

[Figure: BIST architecture. A BIST controller coordinates a test pattern
generator (optionally backed by a ROM), whose patterns reach the circuit
under test through an input selector (primary inputs / scan-in); the responses
(primary outputs / scan-out) feed a test response analyzer, which checks them
against a stored reference (ROM) and reports the results.]
BIST Architecture is a technique that allows a circuit to test itself. It
consists of several key components:
1.Test Pattern Generator (TPG): This component generates patterns
used to test the circuit. It can optionally store these patterns in a Read-
Only Memory (ROM).
2.Test Response Analyzer (TRA): This component analyzes the circuit's
response to the test patterns and generates a signature. It compares
this signature to a stored "golden signature" to determine if there are any
errors.
3.BIST Controller: This component oversees the entire BIST process,
controlling the TPG and TRA and coordinating the overall testing
strategy.
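The TPG and TRA are commonly built from linear-feedback shift registers: an LFSR produces pseudo-random patterns on the fly, and a compacting register folds the responses into a short signature that is compared with the stored golden one. The sketch below is a toy illustration only; the 4-bit width, tap positions, and stand-in circuit are assumptions, not taken from the slides:

# Toy BIST loop: LFSR pattern generator + simple response-compaction signature.

def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=8):
    """Pseudo-random pattern generator: a small Fibonacci LFSR (toy tap choice)."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                        # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def signature(responses, width=4):
    """Test response analyzer (toy): fold all responses into one compact signature."""
    sig = 0
    for r in responses:
        sig = ((sig << 1) ^ r) & ((1 << width) - 1)
    return sig

def circuit_under_test(pattern):
    return pattern ^ 0b0101                   # stand-in for the real logic's response

responses = [circuit_under_test(p) for p in lfsr_patterns()]
print("signature:", bin(signature(responses)))
# A faulty circuit would yield a different signature than the stored golden one.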