ET UserGuide
Contents
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
About Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Typographic and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Encounter Test Documentation Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Getting Help for Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Contacting Customer Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Encounter Test And Diagnostics Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
What We Changed for this Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1
Introduction to Automatic Test Pattern Generation . . . . . . . . . . . 25
ATPG Process Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Additional Pattern Compaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
General Types of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Scan Chain Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Logic Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Path Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
IDDq Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Parametric (Driver/Receiver) Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
IO Wrap Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
IEEE 1149.1 Boundary Scan Verification Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Core Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Low Power Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Inputs for ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Test Generation Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Invoking ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Stored Pattern Test Generation (SPTG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
True-Time Test: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Designing a Logic Model for True-Time Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
True-Time Test Pre-Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
True-Time Test Pattern Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Types of True-Time ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Automatic True-Time Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
At-Speed and Faster Than At-Speed True-Time Testing . . . . . . . . . . . . . . . . . . . . . . 40
Static ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2
Using the True-Time Use Model Script . . . . . . . . . . . . . . . . . . . . . . . . 41
Executing the Encounter Test True Time Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Setup File Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3
Static Test Generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Static Test Pattern Generation Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Performing Scan Chain Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Important Information from Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Debugging No Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Additional Tests Available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
An Overview to Scan Chain Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Performing Flush Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Debugging No Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Performing Create Logic Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Using Checkpoint and Restart Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4
Delay and Timed Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Testing Manufacturing Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Manufacturing Delay Test Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Creating Scan Chain Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Delay Scan Chain Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Performing Build Delay Model (Read SDF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Timing Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
IEEE 1497 SDF Standard Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Delay Model Timing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Delay Path Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Specifying Wire Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Performing Read SDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Performing Remove SDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
An Overview to Prepare Timed Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Performing Prepare Timed Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Create Logic Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Producing True-Time Patterns from OPCG Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Processing Custom OPCG Logic Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Processing Standard, Cadence Inserted OPCG Logic Designs . . . . . . . . . . . . . . . 120
Delay Timing Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Path Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Design Constraints File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Process Variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Pruning Paths from the Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Verifying Clocking Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Command Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5
Writing and Reporting Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Writing Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Write Vectors Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Write Vectors Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Write Vectors Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Default Timings for Clocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6
Customizing Inputs for ATPG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Linehold File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
General Lineholding Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Linehold File Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
General Semantic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Linehold Object - Defining a Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Coding Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Introduction to Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Getting Started with Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Stored Pattern Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Sequences with On-Product Clock Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Setup Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Endup Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Specifying Linehold Information in a Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . 223
Using Oscillator Pins in a Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
7
Utilities and Test Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Performing on Uncommitted Tests and Committing Test Data . . . . . . . . . . . . . . . . 234
Deleting Committed Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Deleting a Range of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Deleting an Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Encounter Test Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Define_Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Timing_Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Tester_Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Test Data for Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
8
Reading Test Data and Sequence Definitions . . . . . . . . . . . . . . . 245
Reading Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Read Vectors Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Reading Encounter Test Pattern Data (TBDpatt) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Encounter Test Pattern Data Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Reading Standard Test Interface Language (STIL) . . . . . . . . . . . . . . . . . . . . . . . . . 248
Support for STIL Standard 1450.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
STIL Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
9
Simulating and Compacting Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Compacting Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Simulating Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Analyzing Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Timing Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Test Simulation Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Specifying a Relative Range of Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Fault Simulation of Functional Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Recommendation for Fault Grading Functional Tests . . . . . . . . . . . . . . . . . . . . . . . . 267
Functional Test Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Zero vs Unit Delay Simulation with General Purpose Simulator . . . . . . . . . . . . . . . . . . 268
Using the ZDLY Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Simulating OPC Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
InfiniteX Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Pessimistic Simulation of Latches and Flip-Flops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Resolving Internal and External Logic Values Due to Termination . . . . . . . . . . . . . . . . 272
Overall Simulation Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
10
Advanced ATPG Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Create IDDq Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Iddq Compaction Effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Create Random Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Create Exhaustive Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Create Core Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Create Embedded Test - MBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Create Parametric Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Create IO Wrap Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
IEEE 1149.1 Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Configuring Scan Chains for TAP Scan SPTG . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
1149.1 SPTG Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
1149.1 Boundary Chain Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Reducing the Cost of Chip Test in Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Using On-Product MISR in Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
On-Product MISR Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Parallel Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Load Sharing Facility (LSF) Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Parallel Stored Pattern Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Parallel Simulation of Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Performing Test Generation/Fault Simulation Tasks Using Parallel Processing . . . . 304
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Prerequisite Tasks for LSF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Stored Pattern Test Generation Scenario with Parallel Processing . . . . . . . . . . . . . 309
.................................................................... 312
11
Test Pattern Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Debugging Miscompares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Using Watch List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Viewing Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Viewing Test Sequence Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
SimVision Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Performing Test Pattern Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
12
Logic Built-In Self Test (LBIST) Generation . . . . . . . . . . . . . . . . . . 323
LBIST: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
LBIST Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Performing Logic Built-In Self Test (LBIST) Generation . . . . . . . . . . . . . . . . . . . . . . . . . 329
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Seed File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Parallel LBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Task Flow for Logic Built-In Self Test (LBIST) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Debugging LBIST Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Prepare the Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Check for Matching Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Find the First Failing Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Diagnosing the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
A
Using Contrib Scripts for Higher Test Coverage . . . . . . . . . . . . . 343
Preparing Reset Lineholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Creating Reset Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
B
Three-State Contention Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
List of Tables
Table 4-1 Common Delay Model Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Table 4-2 SDC Statements and their Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Table 4-3 Methods to Generate Timed Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
List of Figures
Figure 1-1 Encounter Test Process Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 1-2 ATPG flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 1-3 Fault List Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 1-4 ATPG Menu in GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 3-1 Encounter Test Static Pattern Test Processing Flow . . . . . . . . . . . . . . . . . . . . 60
Figure 3-2 Static Scan Chain Wave Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 4-1 Flow for Delay Test Methodologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Figure 4-2 Delay Testing and Effects of Clocking and Logic in Backcones . . . . . . . . . . . . 74
Figure 4-3 Dynamic Test Waveform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Figure 4-4 SDF Delays that Impact Optimized Testing and the Effect in the Resulting Patterns . . 86
Figure 4-5 Period and Width for Clocking Data into a Memory Element . . . . . . . . . . . . . . 92
Figure 4-6 Setup Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Figure 4-7 Hold Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Figure 4-8 Scenario for Calculating Path Delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Figure 4-9 Wire Delay Scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Figure 4-10 set_case_analysis Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 4-11 set_disable Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 4-12 set_false_path Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Figure 4-13 set_multicycle_path Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Figure 4-14 false and multicycle Path Grouping Example . . . . . . . . . . . . . . . . . . . . . . . . 104
Figure 4-15 Use Model with SDC Constraints File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Figure 4-16 Creating True-time Pattern Using Custom OPCG Logic . . . . . . . . . . . . . . . 118
Figure 4-17 Creating True-time Pattern Using RTL Compiler Inserted OPCG Logic . . . 122
Figure 4-18 Creating True-time Pattern Using RC Run Script. . . . . . . . . . . . . . . . . . . . . 123
Figure 4-19 Process Variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Figure 4-20 Robust Path Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 4-21 Non-Robust Path Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Preface
Encounter Test is integrated with Encounter RTL Compiler global synthesis and inserts a
complete test infrastructure to assure high testability while reducing the cost-of-test with on-
chip test data compression.
Encounter Test also supports manufacturing test of low-power devices by using power intent
information to automatically create distinct test modes for power domains and shut-off
requirements. It also inserts design-for-test (DFT) structures to enable control of power shut-
off during test. The power-aware ATPG engine targets low-power structures, such as level
shifters and isolation cells, and generates low-power scan vectors that significantly reduce
power consumption during test. Cumulatively, these capabilities minimize power consumption
during test while still delivering the high quality of test for low-power devices.
Example: [simulation=gp|hsscan]
■ User interface elements, such as field names, button names, menus, menu commands,
and items in clickable list boxes, appear in Helvetica italic type.
Example: Select File - Delete - Model and fill in the information about the model.
[Figure: Encounter Test Documentation Roadmap (Getting Started, Encounter Test, Diagnostics, Tutorials)]
Click the Help or ? buttons on Encounter Test forms to navigate to help for the form and its
related topics.
Refer to the following in the Graphical User Interface Reference for additional details:
■ “Help Pull-down” describes the Help selections for the Encounter Test main window.
■ “View Schematic Help Pull-down” describes the Help selections for the Encounter Test
View Schematic window.
1
Introduction to Automatic Test Pattern Generation
Test pattern generation is the process of determining what stimuli to apply to a design to
demonstrate the design’s correct construction. Application of these test vectors is used to
prove the design contains no manufacturing induced defects. The test vectors may be either
automatically generated (by Encounter Test), or they can be manually generated.
The following figure shows where the test pattern generation fits in a typical Encounter Test
flow.
[Figure 1-1 Encounter Test Process Flow]
Encounter Test automatic test pattern generation (ATPG) supports designs with compression
and low power logic in a static, untimed, or timed environment. The following types of tests
are supported:
■ Scan Chain - refer to “Scan Chain Tests” on page 30
■ Logic - refer to “Logic Tests” on page 31
❑ Static
❑ Dynamic (with or without SDF timings)
■ Path - refer to “Path Tests” on page 32
■ IDDq - refer to “IDDq Tests” on page 32
■ Driver and Receiver - refer to Parametric Test in the Custom Features Reference
■ IO Wrap tests
■ IEEE 1149.1 JTAG, Boundary Scan Verification patterns
■ Core tests - refer to “Core Tests” on page 34
■ Low power tests - refer to Creating Low Power Tests in the Encounter Test Low Power
Guide for more information.
■ Commit test patterns - refer to “Committing Tests” on page 233
In addition to the above-mentioned types of tests, Encounter Test provides the following
features:
■ Compacting and manipulating test patterns
■ Compressing test patterns
■ Simulating and fault grading of test patterns
■ Fault sub-setting
■ Generating Low Power Tests
It is recommended that you perform Test Structure Verification (TSV) to verify the
conformance of a design to general design guidelines. Nonconformance to these guidelines
can result in poor test coverage or invalid test data.
Generated tests are passed to the active compaction step and accumulated. Test compaction
merges multiple test patterns into one if they do not conflict, using a user-selectable effort level.
Note: If the generated tests are applied at different oscillator frequencies, the test
compaction function does not produce any warning message but applies all the tests at the
same oscillator frequency.
When the test group reaches a certain limit, such as 32 or 64 vectors, the tests are passed to
the fault simulator, which runs a good (fault-free) machine simulation and a faulty machine
simulation to predict the output states and identify tested faults. The tested faults are marked
off in the fault list. A fault is considered to be tested when the good machine value differs from
the faulty machine value
at a valid measure point.
The results of test pattern generation, active compaction, and fault simulation are collectively
called an experiment. An experiment can be saved or committed if the user is satisfied with
the pattern and test coverage results (refer to “Committing Tests” on page 233 for more
information). It can be appended to, as in top-off or add-on patterns, or it can be discarded
and started from scratch. More complex procedures involving multiple fault lists, test modes,
experiments, and cross mark-offs of faults can be done with this type of process flow.
This process repeats until the unprocessed fault list is empty or user-defined process limits are
reached. Encounter Test then reports a summary of the test generation process with the
number of test sequences and the test coverages achieved.
Notes:
1. To perform Test Generation using the graphical interface, refer to “ATPG Pull-down” in
the Graphical User Interface Reference.
2. Also refer to “Static Test Generation” on page 59.
These scan chain tests may be static, dynamic, or dynamic timed. See “Test Vector Forms”
on page 242 for more information on these test formats.
You can generate scan chain tests for designs using test compression, which will verify the
correct operation of the spreader, compactor, channels, and masking logic.
The scan test walks each scannable flop through all possible transitions by loading an
alternating sequence (00110011...), and simulating several pulses of the shift clock(s).
■ LSSD Flush Test
For Level Sensitive Scan Design (LSSD) designs, it is possible to turn on all the shift A
clocks and all of the shift B clocks simultaneously. In this state of the design, any logic
values placed on the scan inputs will “flush” through to the scan output pins of the
respective scan chains. The LSSD flush test generated by Encounter Test sets all scan
input pins to zero and lets that value flush through to the scan outputs. Then all scan input
pins are set to one and that value is allowed to flush through to the scan outputs. Finally, all the
scan input pins are set back to zero.
LSSD flush tests are sometimes used to screen for different speed chip samples. Since
the scan chain usually traverses every portion of the chip, the LSSD flush test may be a
reasonable gauge of the overall chip speed.
An LSSD flush test is not generated if Logic Test Structure Verification (TSV) has not been
run. TSV tests determine whether an LSSD flush test can be applied. An LSSD flush test
cannot be applied if:
❑ Either the shift A or shift B clocks are chopped
❑ The scan chain contains one or more edge-sensitive storage elements (flip-flops).
❑ The shift clocks are ANDed with other shift clocks which could result in unpredictable
behavior when they are all ON.
Logic Tests
These tests are used to verify the correct operation of the logic of the chip. This is not to be
confused with functional testing. The objective of the logic tests is to detect as many
manufacturing defects as possible.
Logic tests can be static or dynamic (with or without timings). You can specify the timings
through a clock constraint file (refer to “Clock Constraints File” on page 125 for more
information) or an SDF file.
See “Test Vector Forms” on page 242 for additional detail on these test formats.
Static logic tests are the conventional mainstream logic tests. They detect common defects
such as stuck-at and shorted net defects. They may also detect open defects. Encounter Test
does not target the CMOS open defects, but stuck-at fault tests detect most of the CMOS
open defects.
Dynamic tests are used to detect dynamic, or delay types of defects. Dynamic tests specify
certain timed events, and this format can be applied to test patterns targeted for static faults.
The converse is not supported; Encounter Test does not allow static-formatted tests when
dynamic faults are targeted.
Static tests for scan designs have the following general format:
Scan_Load
Stim_PI
Pulse one or more clocks
Measure_PO (optional)
Scan_Unload
Dynamic events:
Pulse launch clock
Pulse capture clock (this is the at-speed, timed part that is applied quickly)
Measure_PO (optional)
Scan_Unload
The dynamic events section is the only part of the test that can be applied quickly (at speed).
The other events in the test are applied slowly.
The example pattern format mentioned above represents a typical delay test. Other delay test
formats can also be produced.
Path Tests
Path tests produce a transition along an arbitrary path of combinational logic. The paths to be
tested may be specified using the pathfile keyword or selected by Encounter Test by
specifying the maxpathlength keyword.
Path delay tests may or may not be timed. Paths may be tested in various degrees of
strictness ranging from Hazard-Free Robust (most strict), through Robust and Nearly-Robust, to
Non-Robust (least strict). Refer to “Path Tests” on page 137.
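As an illustration only, a path delay run following the create_<test type>_tests pattern described under “Invoking ATPG” on page 35 might look like the sketch below; the command name, keyword spellings, and values here are assumptions, so consult the Command Line Reference for the authoritative syntax:
create_path_delay_tests TESTMODE=FULLSCAN_TIMED EXPERIMENT=path1 WORKDIR=./et_workdir pathfile=./critical_paths.txt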
IDDq Tests
For static CMOS designs, it is possible to detect certain kinds of defects by applying a test
pattern and checking for excessive current drain after the switching activity quietens down.
This is called IDDq testing (IDD stands for the current and q stands for quiescent). Refer to
“Create IDDq Tests” on page 275 for more information.
Parametric (Driver/Receiver) Tests
Parametric tests exercise the off-chip drivers and on-chip receivers. For each off-chip driver,
objectives are added to the fault model for DRV1, DRV0, and if applicable, DRVZ.
For each on-chip receiver, objectives are added to the fault model for RCV1 and RCV0 at
each latch fed by the receiver. These tests are typically used to validate that the driver
produces the expected voltages and that the receiver responds at the expected thresholds.
IO Wrap Tests
These tests are produced to exercise the driver and receiver logic. The tests use the chip’s
internal logic to drive known values onto the pads and to observe these values through the
pad’s receivers. IO wrap tests may be static or dynamic. Static IO wrap tests produce a single
steady value on the pad. Dynamic IO wrap tests produce a transition on the pad.
Encounter Test also supports IEEE 1149.6 boundary scan structures and verifies compliance
with the IEEE 1149.6 standard. Verification is limited to the digital IEEE 1149.6 constructs.
Encounter Test ATPG can be run on testmodes containing 1149.x constructs. Refer to “” on
page 312 for more information.
Core Tests
Encounter Test supports core testing as per IEEE 1500 standard. Refer to “Create Core
Tests” on page 283 and IEEE 1500 Embedded Core Test in the Encounter Test Synthesis
User Guide for more information.
Committing Tests
This is an Encounter Test concept where a set of test patterns and their corresponding tested
faults are saved and stored. All subsequent runs start at the test coverage achieved by the
saved patterns. Refer to “Committing Tests” on page 233 for more information.
Invoking ATPG
Use the following syntax to invoke ATPG from the command line:
create_<test type>_tests EXPERIMENT=<experiment name> TESTMODE=<testmode name>
WORKDIR=<directory>
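For example, a hypothetical invocation targeting the remaining static logic faults might look like the following; the testmode, experiment, and directory names are placeholders, not values from this guide:
create_logic_tests TESTMODE=FULLSCAN EXPERIMENT=logic1 WORKDIR=./et_workdir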
To invoke ATPG using the graphical user interface, select ATPG - Create Tests - <Test
type>
■ Static compaction is the merging of several test patterns into a single test pattern to
target multiple defects. This reduces the number of test patterns required to test a
design.
■ Fault simulation accomplishes the following:
❑ Computes the expected response values for a defect free design. The test patterns,
including the expected response values, are written to an output file. This file
contains information to be applied on a stored pattern tester.
❑ Determines which faults are detected by the test patterns.
Fault simulation acts as a filter. For example, if a test pattern is simulated and it does
not detect any additional remaining faults (in other words, is ineffective), you can
choose to not write the test pattern to the output file. Also, if any test pattern could
potentially damage the product, you can choose to not write the pattern to the output
file.
The results of fault simulation are used to compute test coverage (the percentage of
the faults that are detected by the tests). Refer to “Fault Statistics” in the Modeling
User Guide for details on the calculation of test coverage.
Stored Pattern Test Generation applies the concepts of test pattern generation, compaction,
and fault simulation in the process shown in Figure 1-2.
The clock sequences are combined into a multi-domain sequence that simultaneously tests
the combined sequences. Refer to the preceding section in this chapter for related information.
The determination of best sequences is performed by generating test patterns for a statistical
sample of randomly selected dynamic faults. Each unique test pattern is evaluated to
ascertain how many times the test generator used it to test a fault. The patterns used
most often are considered the best sequences. Normally, the top four or five will test 80
percent of the chip. An additional option, maxsequences, is available to use more
sequences, if desired.
The maximum path length is determined by generating a distribution curve of the longest data
path delays between a random set of release and capture scan chain latch pairs. The area
under the curve is accumulated from shortest to longest and the maximum path length is the
first valley past the cutoff percentage. This method constrains the timings to ignore any
outlying paths that over inflate the test timings. Additional options are available to control this
step by providing the cutoff percentage for the curve and a maximum path length to override
the calculation.
The best sequences, and their timings and constraints are stored in the TBDseq file for the
test mode. Refer to “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference
for related information.
To perform pre-analysis using the graphical user interface, refer to “Prepare Timed
Sequences” in the Graphical User Interface Reference.
To generate patterns using static ATPG, refer to “Static Test Generation” on page 59.
The recommended flow for timed delay ATPG is to have test generation use the results of the
pre-analysis performed by Prepare Timed Sequences. This is not a required step,
but is highly recommended. If the pre-analysis phase is performed using Prepare Timed
Sequences, the applications automatically detect and process prepared sequence definitions
unless otherwise specified (through the useprep keyword). Create Logic Delay Tests
provides an option to exclude the sequence definitions produced by Prepare Timed
Sequences.
Usually, all of the best sequences are processed in the order of most to least productive.
However, you can also choose to process an individual sequence or an ordered list of
sequences. The number of faults processed for each job can be controlled by specifying
maxfaults= on the command line.
Static ATPG
Encounter Test has full support for a static ATPG flow. The processing is simple and
straightforward and has been used on thousands of chips. Refer to “Static Test Generation” on
page 59 for more information.
2
Using the True-Time Use Model Script
Encounter Test provides a script named true_time to run the Encounter Test true-time flow on
a design. The script takes a design through build model, test pattern generation using ATPG,
and writes out patterns in the desired format. The script also gives you the option of using a
portion of, or the entire, Encounter Test flow. For example, you can use the script to perform
only ATPG and not build model. This script is designed and maintained to provide the best flow
through Encounter Test for the general marketplace.
Note: You might get more optimized results using direct system calls, but the script is
designed to give good results without requiring expertise in all the Encounter Test
domains.
where setup_file is the file with options and parameters to run through the Encounter
True Time flow.
Prerequisite Tasks
Complete the following tasks before executing the true_time script:
■ Create a working directory by using the mkdir <directory name> command.
■ Fill in the setup file (<setup_file>) with required steps.
■ Set up Encounter Test in your environment (using et -c or et), as illustrated in the sketch below
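A minimal sketch of these preparation steps follows; the working directory name is a placeholder, and the final invocation form is an assumption based on the setup-file description given below:
mkdir tt_work
cd tt_work
et -c
true_time <setup_file>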
Create a copy of either of these files in your working directory. The tt_setup_static
template is a subset of tt_setup. The following topics discuss the various sections and
inputs of the tt_setup file.
This information controls the execution of the script and allows you to start or stop after
certain steps and control log files.
The following table lists the parameters and the corresponding values for this section of the
setup file:
Model Information
This section contains information used to create the Encounter Test model and fault model.
The following information is required only for build_model and/or build_faultmodel
steps.
Refer to build_model and build_faultmodel in the Command Line Reference for more
information on the respective parameters.
The information in this section is used to create the Encounter Test test modes. The
information in the following table is required only if you are using the true_time script to
build a testmode(s).
This information is used to control ATPG settings. The parameters listed in the following table
are required only if you use the true_time script to run ATPG.
This input is used to control delay ATPG settings. The parameters listed in the following table
are required only if you use the true_time script to run delay ATPG.
Note: This section does not exist in the tt_setup_static file.
For more information on delay ATPG, refer to “Delay and Timed Test” on page 71.
TEMPLATESOURCE=<file>    Used by read_celldelay_template to identify a list of required delays for a cell. This is optional.
This information is used to control the path delay ATPG settings. The parameters listed in the
following table are required only if you use the true_time script to run path delay ATPG.
This section does not exist in the tt_setup_static file.
This information is used to control the writing of the test vectors into different formats.
For more information on Test Vector Formats, refer to write_vectors in the Command Line
Reference or “Test Vector Forms” on page 242.
NC-Sim Information
This information is used to control the execution of NC-Sim. The parameters listed in the
following table require that test patterns be written out in the Verilog format.
Parameter                           Description
NCVERILOG_DESIGN=<files>            Specify the design and techlib files, separated by ",", to use for NC-Sim.
NCVERILOG_OPTIONS=<options>         Specify the ncverilog options to use, separated by ",".
NCVERILOG_DEFINE=<options>          Specify the +define options to use in ncverilog, separated by ",".
NCVERILOG_DIRECTORY=<directory>     Specify the directory in which to find ncverilog or ncsim. This avoids a potential conflict between the ncverilog used by Encounter Test and the ncverilog used for simulation.
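For illustration, a hypothetical NC-Sim section of the setup file might look like the following sketch; all file names, directory paths, and option values are placeholders, not values from this guide:
NCVERILOG_DESIGN=./netlist/top.v,./libs/cells.v
NCVERILOG_DIRECTORY=/tools/cadence/incisive/bin
NCVERILOG_DEFINE=+define+TB_TOP
NCVERILOG_OPTIONS=+access+r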
Output
Before processing, the script parses the setup file to analyze the data and check for incorrect
values:
INFO - COMPRESSION keyword not set to recommended values: opmisrplus,
opmisrplus_topoff, xor, or xor_topoff. COMPRESSIONTESTMODE keyword not set.
No compression mode in the design.
Only TESTMODE=FULLSCAN_TIMED testmode is active.
INFO (TVE-003): Verilog write vectors output file will be: ./testresults/verilog/
VER.FULLSCAN_TIMED.data.scan.ex1.ts1. [end TVE_003]
INFO (TVE-003): Verilog write vectors output file will be: ./testresults/verilog/
VER.FULLSCAN_TIMED.mainsim.v. [end TVE_003]
Completed Successfully. Continuing.
If any information is missing, for example, no NC-sim information supplied, the script exits
before starting the step.
*******************************************************************************
Setup file indicates exit before run_ncverilog. Exiting.
The output summary highlights the coverage and pattern count achieved during the run.
3
Static Test Generation
This chapter explains the concepts and commands to perform static ATPG with Encounter
Test.
Several types of tests are available for static pattern test generation. Refer to “General Types
of Tests” on page 30 for more information.
■ To perform Test Generation using the graphical interface, refer to “ATPG Pull-down” in
the Graphical User Interface Reference.
■ To perform Test Generation using command lines, refer to descriptions of commands
for creating and preparing for tests in the Command Line Reference.
1. build_model - Reads in Verilog netlists and builds the Encounter Test model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.
2. build_testmode - Creates scan chain configurations of the design
For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. verify_test_structures - Verifies scan chains and checks for design violations
Resolve any TSV violation before running ATPG.
For complete information, refer to “Logic Test Structure Verification (TSV)” in the
Verification User Guide.
4. build_faultmodel - Creates the fault model for ATPG
For complete information, refer to “Building a Fault Model” in the Modeling User Guide.
5. create_scanchain_tests - Creates test patterns to validate and test faults on the scan
chain path
For complete information, refer to “Scan Chain Tests” on page 30.
6. commit_tests - Saves the patterns and marks off faults from the scan chain test
The scan chain test is added to the master pattern set for this testmode so that
subsequent ATPG runs do not need to generate tests for these faults.
For complete information, refer to “Utilities and Test Vector Data” on page 233.
7. create_logic_tests - Creates patterns to test the remaining static faults
An experiment is created that contains the required type of tests. Most often these will
be logic tests.
For more complete information of the various test types, refer to “General Types of Tests”
on page 30.
Refer to “Advanced ATPG Tests” on page 275 for information on other types of ATPG
capabilities such as Iddq, core test, and parametric testing.
8. Should the results be kept?
If the results of the experiment are satisfactory, the experiment can be committed,
appending it’s test patterns and fault detection to the master pattern set for the testmode.
❑ Yes - commit tests
❑ No - If the results are not satisfactory, another experiment can be run with different
command line options.
You can also analyze untested faults. For complete information, refer to “Deterministic
Fault Analysis”, in the Verification User Guide.
9. write_vectors - Writes out the patterns in WGL, Verilog, or STIL format
For complete information, refer to “Writing and Reporting Test Data” on page 169.
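The flow above is sketched below as a command sequence. All keyword values are placeholders, and the keywords for commands not detailed in this chapter are elided; refer to the Command Line Reference for the complete syntax of each command:
build_model ...
build_testmode ...
verify_test_structures ...
build_faultmodel ...
create_scanchain_tests workdir=./et_workdir testmode=FULLSCAN experiment=scantest
commit_tests ...
create_logic_tests workdir=./et_workdir testmode=FULLSCAN experiment=logic1
commit_tests ...
write_vectors ...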
For compression modes, the scan chain tests test many of the faults in the compression
networks. This also applies to testing faults in any existing masking logic.
To create scan chain tests using the graphical interface, refer to “Create Scan Chain Tests” in
the Graphical User Interface Reference.
To create scan chain tests using the command line, refer to "create_scanchain_tests"
in the Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns
Prerequisite Tasks
Before executing create_scanchain_tests, build a design, testmode, and fault model.
Refer to the Modeling User Guide for more information.
Debugging No Coverage
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
For more information, refer to “Creating Reset Delay Tests” on page 344.
In summary:
■ The static scan chain test is always in the scan state.
■ All clocks are configured at scan speeds.
■ Additional clocking and sequences are added while testing for compression, MISRs, or
masking.
■ The order of sequences can change slightly based on custom-scan protocols and custom-
scan preconditioning.
To create flush tests using the graphical interface, refer to “Create LSSD Flush Tests” in the
Graphical User Interface Reference.
To create flush tests using the command line, refer to “create_lssd_flush_tests” in the
Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns
Prerequisite Tasks
Before executing create_lssd_flush_tests, build a design, testmode, and fault model.
Refer to the Modeling User Guide for more information.
Debugging No Coverage
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
To perform Create Logic Tests using the graphical interface, refer to "Create Logic Tests" in
the Graphical User Interface Reference.
To perform Create Logic Tests using the command line, refer to “create_logic_tests” in the
Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test that will be generated
Prerequisite Tasks
Complete the following tasks before executing Create Logic Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model including static faults. See “Building a Fault Model” in the Modeling
User Guide for more information.
Output
Encounter Test stores the test patterns under the specified experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
****************************************************************************
----Stored Pattern Test Generation Final Statistics----
Global Statistics
■ Static Test Coverage is the percentage of detected faults out of detectable faults. It is
calculated as #tested in mode/(#test mode faults - #redundant -
#globally ignored)
■ Static Fault Coverage is the percentage of detected faults out of all faults. It is calculated
as #tested globally/(#global faults + #globally ignored)
■ Static ATPG Effectiveness is the percentage of ATPG-resolvable faults out of all faults. It
is calculated as (#Tested + #Redundant + #globally ignored + #ATPG
Untestable + #Possibly Tested)/(#global faults + #globally
ignored)
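For example, a testmode containing 908 static faults, of which 425 are tested and none are redundant, and with no globally ignored faults, has a static test coverage of 425/(908 - 0 - 0) = 46.81 percent, which is the %TCov value reported in the log.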
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Note: Another tool to analyze low ATPG coverage is deterministic fault analysis. Refer to
Analyze Faults in the Verification User Guide for more information.
A checkpoint allows the application to save its current results. You can set the checkpoints for
the application by using the checkpoint=<value> keyword, where value is the number of
minutes between each checkpoint. The default is 60 minutes. For example, if you specify
checkpoint=120, Encounter Test sets up checkpoints and saves application data after
every two hours (120 minutes).
You can use the checkpoint data to restart in case of network or machine failure during the
execution of the application.
Note: Some processes cannot be interrupted to take a checkpoint. If one of these processes
takes longer than the specified number of minutes, the next checkpoint will be taken as soon
as the process ends.
Using the Andrew File System (AFS) might occasionally result in checkpoint files that are out
of sync and cannot be used for a restart. This problem has never occurred when using local
disk or NFS.
To restart the application using checkpoint data, re-run the same ATPG command after
adding the following:
restart=no|yes|end
Specifying restart allows you to restart an application that produced a checkpoint file
before it ended abnormally. If a checkpoint file exists, the default is to restart the application
from the checkpoint and continue processing (restart=yes).
Use restart=end to restart the application from the existing checkpoint but immediately
cleanup, write out results files, and then end.
Use restart=no to start the application from the beginning instead of using an existing
checkpoint file.
If you use the restart keyword, Encounter Test ignores any other keywords specified on the
restart command line; only the keyword values from the original command line are used.
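For example, assuming an illustrative logic test run (the keyword values are placeholders), a checkpoint interval of two hours could be requested as follows:
create_logic_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> checkpoint=120
If the machine fails partway through, re-run the same command with restart added to continue from the last checkpoint:
create_logic_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> checkpoint=120 restart=yes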
Committing Tests
By default, Encounter Test makes all test generation runs as uncommitted tests in a test
mode. Commit Tests moves the uncommitted test results into the committed vectors test data
for a test mode. Refer to “Performing on Uncommitted Tests and Committing Test Data” on
page 234 for more information on the test generation processing methodology.
Refer to “Writing and Reporting Test Data” on page 169 for more information.
4
Delay and Timed Test
This chapter provides information on the concepts and commands to perform delay and timed
ATPG with Encounter Test.
With current technologies, more and more defects are being identified that cause the product
to run slower than its designed speed. Because stuck-at testing is a slow test that only checks
the final result, paths with above-average resistance or capacitance due to impurities or a bad
etch can go undetected. These defects can change the speed of a path to the point where it
is outside the typically thin tolerance designed into the functional circuitry. This can result in
a defective product.
Delay defects take the form of spot and parametric defects. An example of a spot defect is
a partially open wire that adds resistance to the wire, thereby slowing the signal passing
through the wire. A parametric defect is a change in the process that causes a common slight
variation across multiple gates, which in turn causes the arrival of a signal or transition to take
longer than expected. An example of a problem causing a parametric failure would be an
increase in the transistor channel length.
Encounter Test provides Manufacturing (slow-to-rise and slow-to-fall faults) and Characterization
(path delay) tests for identifying spot and parametric defects. Encounter Test also provides a use
model script that automates the flow from model creation to dynamic ATPG test generation. Refer to
"Using the True-Time Use Model Script" on page 41 for more information on the script.
The following sections discuss the details of the commands that constitute the true-time
use model.
The following steps represent a typical delay ATPG use model flow for manufacturing defects:
■ Scan chain and reset fault testing
❑ create_scanchain_delay_tests - refer to “Creating Scan Chain Delay Tests” on
page 76
❍ prepare_reset_lineholds - refer to “Preparing Reset Lineholds” on page 343
❍ create_reset_delay_tests - refer to “Creating Reset Delay Tests” on page 344
■ Logic Testing
❑ read_sdf - refer to “Performing Build Delay Model (Read SDF)” on page 80
❑ prepare_timed_sequences - refer to “Performing Prepare Timed Sequences” on
page 111
❍ AT Speed and Faster than AT Speed Tests can be achieved by specifying the
appropriate clock frequency in a clock constraint file.
❑ create_logic_delay_tests - refer to “Create Logic Delay Tests” on page 114
■ Exporting and saving patterns
❑ commit_tests - refer to “Committing Tests” on page 233
❑ write_vectors - refer to “Writing Test Vectors” on page 169
The following steps represent a typical delay ATPG use model flow for Characterization
defects (path test):
■ create_path_delay_tests - refer to “Performing Create Path Tests” on page 134
■ Exporting and saving patterns
❑ commit_tests - refer to “Committing Tests” on page 233
❑ write_vectors - refer to “Writing Test Vectors” on page 169
Refer to the following figure for a high level flow of the various delay test methodologies:
In Figure 4-2, a transition fault F exists in the logic between registers A and B. See Figure 4-
22 for an example of a transition fault.
Figure 4-2 Delay Testing and Effects of Clocking and Logic in Backcones
■ Test function input pin switching (1->0, 0->1). See "Identifying Test Function Pins" in the
Modeling User Guide for related information.
■ Clock domains (controlled by lineholds or constrained timings). See CLOCKDOMAIN in
the Modeling User Guide for more information.
■ Early or late modes in the standard delay file, refer to “Timing Concepts” on page 85
■ Constrained timings, as described below.
It is realistic to assume that the tests produced to accommodate all timings will not effectively
test the fastest (1x) transitions. Those transitions must be operated twice as fast as the 2x
transitions if they are to operate at system speed in the design. Encounter Test can do this
through a technique known as Constrained Timings. This technique allows you to specify
options to exclude consideration of any paths greater than a certain cycle time (length). If a
register is fed by any path longer than the specified cycle time, the register is marked as not
observable causing no further detections to be possible at this register for this given timing
sequence. If a transition can occur in the specified cycle time or less, the register will be
measured and fault effects can be detected.
The timings specified with constrained timings can be tight timings for a given clock
sequence that allows the shorter paths to complete while the longer paths are ignored. When
consecutively run, tests for all timings (for all timing domains in the functional design) can be
generated for manufacturing by appending runs with shorter-to-longer timings after each
other. When appending runs, the fault lists (detected and undetected) are transferred
between the runs to accelerate follow-on processes. This approach detects faults at the
fastest speeds that they can be measured. The result is an increased pattern count but with
higher quality test patterns in terms of ability to detect real defects.
Lineholds and/or test inhibits can also be used to shut down paths in the design. Once a path
is no longer active, it is no longer considered by Encounter Test. Only the longest active path
will be used for any given clock sequence. Automatic, At-Speed, and Faster Than At-Speed
flows can automatically generate constraints for each clocking sequence, which can also
shut down long paths with respect to timing. While the test generator will target the longer
paths, the simulator can still detect faults on shorter paths.
Since the SDF data can contain best, normal and worst case timings, these values can be
used as is or can be linearly scaled to test for particular process points (like Vdd and/or
temperature). When this is done, Early Mode and Late Mode linear combination settings can
be used during test generation to affect the distribution of delays (variance of normal
distribution).
The methodology most used to produce a range of tests for given cycle times on the design
is to make runs with different constrained timings. That is, run with the fastest (shortest) cycle
times first. Subsequent runs are then used to increase the constrained times until the slowest
(longest) cycle times are reached. A common approach with this technique is when multiple
clock domains exist on the design. The cycle times for each clock domain are used as the
specification of the constrained timings.
To create dynamic scan chain tests using the graphical interface, refer to Create Dynamic
Scan Chain Tests in the Graphical User Interface Reference.
The commonly used keywords for this task are:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns
Prerequisite Tasks
Output
The output log, as shown below, will contain a summary of the number of patterns generated
and their representative coverage. Both static and dynamic faults should be tested.
****************************************************************************
----Stored Pattern Test Generation Final Statistics----
Testmode Statistics: FULLSCAN
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 908 425 0 437 46.81 46.81
Total Dynamic 814 316 0 498 38.82 38.82
Global Statistics
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 1022 425 0 551 41.59 41.59
Total Dynamic 1014 316 0 698 31.16 31.16
****************************************************************************
----Final Pattern Statistics----
Test Section Type # Test Sequences
----------------------------------------------------------
Scan 2
----------------------------------------------------------
Total 2
Debugging No Coverage
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Additional tests
There are additional scripts to help test the set and reset faults on scan flops. These ATPG
commands are not officially supported but are available to achieve higher test coverage.
For more information, refer to “Creating Reset Delay Tests” on page 344.
■ Delay shift #8
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input.
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input.
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
■ Scan_Unload - Observe all scan bits
■ Scan_Load - Load the scan chain bits with repeating 110 patterns. This tests the clock
slow-to-turn-off faults.
■ Stim_PI - Stay in scan state
■ Static Shift
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern.
■ Delay Shift #1
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
❑ Pulse - Pulse scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
■ .....
■ Delay Shift #5
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input.
❑ Pulse - Pulse scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
■ Static Pulse
❑ Pulse - Pulse scan clocks
❑ Scan_Unload - Observe all scan bits
In summary:
■ The delay scan chain test is always in the scan state.
■ All clocks are configured at scan speeds.
❑ Clocks are not at system speed as scan paths are typically not timed to system
speeds.
❑ Timed scan chain tests are timed to the slowest flop to flop pair for all scan clocks.
■ Additional clocking and sequences are added while testing for compression, MISRs, or
masking.
■ The order of sequence can slightly change based on custom-scan protocols and custom-
scan preconditioning.
To perform Build Delay Model using the graphical interface, refer to “Build Delay Model” or
“Read SDF” in the Graphical User Interface Reference.
To perform Build Delay Model using command lines, refer to “build_delaymodel” or “read_sdf”
in the Command Line Reference.
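An illustrative invocation, with placeholder keyword values, is sketched below; the complete set of keywords is documented in the Command Line Reference:
read_sdf workdir=<workdir> testmode=<testmode> delaymodel=<delaymodel> sdfpath=<sdfpath> sdfname=<sdfname>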
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for timed test
■ delaymodel= Internal Encounter Test name of the delay model. Multiple names can
exist for a delay model at the same time.
■ sdfpath= directory in which sdf file exists
■ sdfname= name of sdf file
■ clockconstraints = the file that includes clocking constraints. The clock domains
that are outside the domain specified in the file are not used in the delay test generation.
This ensures that the command only generates messages that are relevant to the clock
domains being tested.
Alternatively, you can specify either of the following keywords to limit the clock domains
that will be checked while building delay model:
❑ testsequence - The release/capture clocks are extracted from the specified
sequences and used to determine the relevant clock domains
❑ dynseqfilter - The type of sequences that the dynamic test generation is allowed
to generate. The default value is any for an LSSD testmode, in which case the
command only checks intra-domain sequences.
Prerequisite Tasks
Note that any checks performed when building the delay model are interpreted through
the selected test mode. Therefore, if a given test mode has certain paths disabled, the
delay model build procedure will not check for delays along these paths, nor will it include
any delay information along these paths in the delay model. This could result in missing
delay information in another test mode where these paths are enabled.
If using multiple test modes for a design, the recommended practice is to build a delay
model for each test mode unless the test modes are very similar.
3. Create an SDF file.
Use the tool of choice such as ETS (Encounter Timing System) to create an SDF file for
the expected conditions (voltage levels, temperature) to run at the tester.
Note: When reading an SDF, Encounter Test expects to map its delay information to the
cell or block level of hierarchy within the Encounter Test design model. Refer to “Delay
Timing Concepts” on page 124 for more information.
If timing information is missing for a real path that is a multi-cycle path, the timing engine will
be unable to recognize it as a multi-cycle path and will expect to measure it successfully
at speed (resulting in a miscompare).
Output
This task generates a list of cells with missing or unnecessary delays. The task also creates
an Encounter Test delay model in the tbdata directory.
Messages with IDs TDM-013, TDM-014, TDM-041, TDM-051, and TDM-055 represent delay
model errors and need to be resolved; otherwise, the timings may be incomplete.
The following table lists some common delay model error messages and the corresponding
debugging technique:
Encounter Test supports the ability to customize the SDF compatibility checks. Refer to
“Customized Delay Checking” on page 149 for information on creating
TDMcellDelayTemplateDatabase files.
Timing Concepts
The positioning of the events within the dynamic pattern (timed section) ensures that the
arrival times of each signal at a latch or PO occur at the proper time. This requires that the
delays of the product’s cells and interconnects be known accurately. The information is
imported into Encounter Test using the Standard Delay File (SDF) and stored in Encounter
Test's delay model. The delay information is generated and exported to the SDF by a timing
tool, such as IBM’s Einstimer or the Encounter Timing System (ETS). The method in which
the timing tool is run (delay mode, voltage, and temperature parameters) has a great impact
on the effectiveness of the timed tests at the tester. The range of parameters specified to the
timing tool should accurately reflect the range of conditions to be applied at the tester. SDF
information is required for the At-Speed and Faster Than At-speed True-Time methodologies.
Automatic True-Time Test can be run without SDF timings, but will benefit from SDF timing
information.
Build Delay Model creates a delay model for use in generating timing for transition tests, path
delay tests, and for use in delay test simulation.
For delay testing, the SDF should contain information about all delays in the design, cell
delays, and interconnect delays (see Figure 4-4).
Figure 4-4 SDF Delays that Impact Optimized Testing and the Effect in the Resulting
Patterns
Note that when Encounter Test reads an SDF, it expects to find delay information at the highest level
of hierarchy that is a "cell" or "block" in the Encounter Test design model. This is the only valid
layer of hierarchy for delay descriptions other than the very top level of the design (for delays
to and from PIs and POs). Delays that are described at any other layer of hierarchy are not
successfully imported into Encounter Test.
Use the Encounter Test GUI to check the hierarchical layer of a given block. Select
Information Window option Loc: Hierarchical Level and mouse over a block to query it.
Refer to the “Information Window Display Options” in the Graphical User Interface
Reference for details.
If the levels of hierarchy in your Encounter Test design do not correlate to the SDF, force a
block to be at a given level of hierarchy by adding the TYPE attribute to the block. For example,
to force a given module to be a "CELL" in Verilog, code the following:
module myBlock ( in1, in2, in3, out1 ); //! TYPE="CELL"
Refer to the Modeling User Guide for additional information on adding attributes to the
model.
The SDF can also contain best, nominal, and worst case timings or can be produced for
certain process conditions that will be used in manufacturing, such as temperature and
voltage.
Build Delay Model reads the SDF and creates a delay model for Encounter Test.
Encounter Test supports SDF version 2.1; however, the following 2.1 construct is not
supported:
■ INSTANCE statements with wildcards
The following features of SDF 2.1 are tolerated but not used in Encounter Test:
■ PATHPULSE and GLOBALPATHPULSE
■ PATHCONSTRAINT, SUM, DIFF, COND, SKEWCONSTRAINT
■ CORRELATION
■ SKEW
Note: Some of the preceding definitions are taken from the content of the document P1497
DRAFT Standard for Standard Delay Format (SDF) for the Electronic Design
Process.
The following constructs are not tolerated and not used by Encounter Test:
■ PERIODCONSTRAINT
■ PATHCONSTRAINT
■ DIFF
■ SUM
■ EXCEPTION
■ NAME
■ ARRIVAL
■ DEPARTURE
■ WAVEFORM
■ SLACK
■ SKEWCONSTRAINT
The following timing specifications are tolerated but not used by Encounter Test:
■ TIMINGENV
■ LABEL
SDF information is typically created using a static timing analysis tool. You can annotate this
information to the Encounter Test database and automatically determine the locations of
multi-cycle paths, setup violations, and hold violations.
Note: SDF annotation does not automatically determine the locations of false paths;
therefore, it is recommended to use an SDC file along with the SDF information.
The SDF information is mapped to the Encounter Test database by locating the correct cells
in the design and attaching the information there. The hierarchy definitions in the database
must match the hierarchy assumed in the SDF information for the annotation to be successful.
Tip
It is important to have complete and correctly mapped data, therefore, investigate all
the messages reported during this process rigorously. Any incomplete or incorrect
information can result in patterns that will fail good devices on the tester.
Use the following parameters to read and use SDF delay data. The following parameter points
to the directory where the SDF file resides:
SDFPATH=sdf_directory_name
The following parameter points to the name of the file containing the SDF delay data,
propagation delays, and timing check information for the design:
SDFNAME=sdf_file_name
These delays model the connection from the input pins to output pins of a cell. The I/O path
delays model a delay for a path or paths from a specific input pin to output pin. The I/O path
delay can include the explicit transitions that occur at the beginning and end of the paths.
The device delays are more general than I/O paths. They state that any of the paths from any
of the cell's inputs to any of the cell's outputs or a specific output pin are covered by this delay
value. The device delays contain no notion of the phase of the transitions either at the input
or the output.
Wire Delays
These delays describe the wire connections between different entities (hierBlocks) on a
single product or a hierarchy of products (cell, chip, and card). There are three types of wire
delays: interconnect, net, and port. Interconnect delay runs from any source pin on a net to
any sink pin on the same net, at the same or different levels of the package hierarchy. Net and
port delays run only between pins in the same net at a single level of the package hierarchy.
The exact meaning of a net delay depends upon how it is specified. If a source pin is
specified, the delay is from this pin to any sink pin on the net (at this package level). If the net
name is specified, the delay is for any source pin to any sink pin on the net (again, at the same
package level). A port delay is used from any source to a single sink pin at the same package
level. Refer to “Specifying Wire Delays” on page 97 for more information.
These delays describe a required minimum time between two events on one or two input pins
of a given cell (often relating to a memory element). Maintaining the minimum time between
the two events ensures that the cell(s) will perform correctly. In the functional mode there may
be timing check delays between input pins of non-memory elements but they are ignored by
the test applications except when checking the Minimum Pulse Width or Skew Constraints.
A minimum pulse width check specifies the minimum size pulse (or glitch) that will allow the
memory element inside a given cell to operate correctly.
These are features of a clock's edges when they arrive at a memory element. The width is
the minimum size and polarity the pulse must be for the memory element to properly clock in
data. The period is the minimum time required between a given edge of a clock pulse (leading
to leading or trailing to trailing).
Figure 4-5 Period and Width for Clocking Data into a Memory Element
The setup and hold times describe the minimum duration of time that must be maintained
between two signals that are arriving at different inputs of a cell. One pin must be a clock (C0),
and the second is usually the data pin (D0), but may be a clock (C1). The setup time is the
duration of time that the data (or C1 clock) must be stable before the specified clock edge
arrives at the clock (C0) pin.
The hold time is the amount of time the data pin must be stable after the clock edge arrives
at the clock (C0) pin.
No Change
The SDF uses this to define a window surrounding a clock pulse during which some given pin
must be stable.
Notes:
1. To perform Build Delay Model using the graphical interface, refer to “Build Delay Model”
in the Graphical User Interface Reference.
2. To perform Build Delay Model using command lines, refer to “build_delaymodel” in the
Command Line Reference.
Also refer to “Performing Build Delay Model (Read SDF)” on page 80.
This allows targeting the early mode or late mode arrival time to a particular portion of
the process curve. For example (0,1,0) uses the nominal values, while (0,.5,.5) will use
the average of the nominal and worst delays.
■ Arrival Time - The sum of the delays from a source to sink along the fastest or slowest
path.
The following is a brief description of the timing algorithm used in Encounter Test. It describes
a clock-to-clock sequence, but the same basic idea is used with PI-to-clock and clock-to-PO
sequences.
1. A test sequence consisting of launch and capture events is presented to the timer.
2. The pulse width requirements are calculated. Pulse timing checks are present at the cell
level. These requirements are backed up to the primary inputs to account for any pulse
width shrinkage which may have occurred.
3. The arrival times for all the latch data and clock inputs are calculated for the best, early,
late and worst cases. The user has control of early and late. Best and worst are the very
fastest and slowest (respectively) and are used only in special cases that are not
discussed here.
The arrival times take into account tie values, lineholds on primary inputs and in certain
cases, lineholds on latches and the values on test function primary inputs. The values
are used to disable paths that are not to be timed.
4. Linear dependencies (Setup, Minimum Pulse, and Hold time tests) are calculated. See
Figure 4-8.
The times are the distance in time between transitions (for example, from the leading edge of
the release clock to the trailing edge of the capture clock). Also, since the launch clock
launches transitions that will be captured at many latches, there are many dependencies.
Only the largest dependency that describes the relationship between edges is kept.
When running with maxpathlength or a clockconstraints file (constrained
timings), the dependencies that do not meet the timing constraints are dropped and the
involved capture latches are ignored. When running the Automatic True-Time flow, the
maximum path length is automatically determined for each clocking sequence. In the At-
Speed and Faster Than At-Speed flows, a clock constraints file is user-specified.
In Figure 4-8, the sink “P3” would be a candidate to be ignored to test paths that are
smaller than 2 Logics. The figure also shows setup and hold tests that traverse logical
paths between FFs, however these are not the only types of relationships; they also
occur between clocks that feed the same cell or at clock gates. For example, a clock-to-
clock relationship is used in a sequence with a scan clock release and a system capture
clock capture. There is a relationship from the trailing edge of the release clock to the
leading edge of the capture clock.
5. The set of linear dependencies are optimized into the timings (timeplates). In this step,
the setup, hold and pulse dependencies are combined into one set of timings to be
applied at the primary inputs. Though the maximum path lengths that are greater than the
specified maxpathlength have been discarded, the combined set of edges still may be
longer than the maxpathlength. For example, if the following dependencies are used
in the figure, the total delay will be greater than the largest of the parts:
Setup time = 4 ns
Hold time = 2 ns
Pulse width = 2.5 ns
The total delay is 7 ns, not 4 ns, because hold time + (2 x pulse width) = 2 + (2 x 2.5) = 7 ns.
The following terms and corresponding definitions are associated with wire delays:
■ Wire Cell - A cell containing a wire that leads directly from an input to an output with no
intervening primitives.
■ Partial Delay - A delay passing through the wire of a wire cell that could be incorporated
into a larger delay passing through the same wire.
■ Parent Delay - Any interconnect delay leading from and going to a primitive block. Some
parent delays can be broken down into two or more partial delays.
■ Partial Parent - A delay that exhibits properties of a partial delay and a parent delay.
Figure 4-9 displays examples of the preceding terms.
Note that PORT and NET delays do not include the length of wire within the cell; they only
include the interconnecting wires between the cells.
Delays through wire cells can either be specified as INTERCONNECT delays which span
several cells, or each segment can be specified in IOPATH and INTERCONNECT delays. If
delays are specified by parts, then all of the partial delays that comprise the parent delay must
be specified or the delay information will be incomplete.
For example, in Figure 4-9, the delays can be specified as one INTERCONNECT from A to F,
or they can be specified as INTERCONNECT delays from A to B and E to F, as well as an
IOPATH delay from B to E.
Note: Any existing SDC in the current testmode from a previous run should be removed
before reading the current SDC. Refer to “remove_sdc” in the Command Line Reference
for more information on this.
While reading and parsing an SDC file, Encounter Test creates design constraints. For more
information on Encounter Test handling and processing of these constraints, refer to
“Dynamic Constraints” on page 150.
To perform Read SDC using the graphical interface, refer to “Read SDC” in the Graphical
User Interface Reference.
To perform Read SDC using command line, refer to “read_sdc” in the Command Line
Reference.
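An illustrative invocation, with placeholder keyword values, is:
read_sdc workdir=<workdir> testmode=<testmode> sdcpath=<sdcpath> sdc=<sdc>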
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ sdcpath= directory where the sdc file exists
■ sdc= name of the sdc file
Prerequisite Tasks
Output
Encounter Test stores the constraints internal to the testmode. When running ATPG, add
usesdc=yes at the command line to use the SDC.
Refer to “Constraint Checking” on page 151 for information on how the test generator and
simulators handle design constraints.
The design constraints file supplements or replaces the SDF for delay tests that consider
small delay defects, particularly for incorporating faster technologies with specialized timing
algorithms such as on-product clocks for at-speed testing. The functional characteristics
associated with these technologies, also known as designer’s intent, are recorded in a
design constraints file. True-Time Test accepts a Synopsys® Design Constraints (SDC) file
that configures clock and delay constraint parameters.
Set_Case_Analysis
Set_Disable
Set_False_Path
The syntax is set_false_path -from pin_A -to pin_B. Figure 4-12 on page 103
depicts a configured example.
Refer to “False and Multicycle Path Grouping” on page 104 for additional information.
Figure 4-12 on page 103 depicts an example.
Set_Multicycle_Path
Refer to “False and Multicycle Path Grouping” on page 104 for additional information.
Falsepath and multicycle path statements are not required to correspond to individual
combinational paths. A single false path or multicycle path statement may represent many
paths. Specify multiple locations at the -from, -to, and -through points. The paths can
traverse the logic as desired as long as the path goes through at least one point from the
-from, -to, and -through point list.
Specify either the -from or -to point on a clock path. The paths start or end at the flip-flop/
latches downstream of the clock pin. Figure 4-14 depicts a configured example where the
-from or -to are clock pins. The dashes identify the falsepaths.
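The following lines illustrate these statements using placeholder pin and clock names, in the same bare-name style as the set_false_path syntax shown earlier; the multiplier value and the exact set of options used here are examples only:
set_false_path -from pin_A -through pin_M -to pin_B
set_multicycle_path 2 -from clk_A -to pin_B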
Boolean Constraints
The Boolean constraints are equiv, onehot, onecold, zeroonehot, and zeroonecold.
The following keywords describe Boolean constraints in either the design constraints file
or linehold file(s):
onehot - instructs that only this entity may be a logic 1; all others must be logic 0
zeroonehot - instructs that all entities are logic 0 or just one entity is a logic 1
onecold - instructs that only this entity may be a logic 0; all others must be logic 1
zeroonecold - instructs that all entities are logic 1 or only one entity is a logic 0
The preceding flow occurs after a model and test mode are created and prior to test
generation.
The following is a typical sequence of events for incorporating use of the SDC:
1. Develop/create an SDC that describes the design’s intended function.
2. Use the SDC, lineholds, and test mode to customize True-Time Test generation so that it
is consistent with the synthesis and timing analysis results.
Important
Ensure the generate_clocks statement occurs first in the SDC file or supporting
TCL scripts.
A currently used constraints file may be changed using either of the following methods:
Use Read SDC to read and verify the constraints in a design constraints file and a linehold
file. Read SDC incrementally updates existing constraints based on the content of the input
constraints/linehold file(s). The verified constraints are stored in an output constraints file for
subsequent use by test generation. The output files are in the following forms:
■ constraints.testmode
■ constraints.testmode.experiment to allow simultaneously running multiple
experiments with different SDC files
If multiple test modes are specified, the SDC is verified using the first specified test mode.
The results are applied to all specified test modes. Refer to the following to perform Read
SDC and Remove SDC:
■ “Read SDC” in the Graphical User Interface Reference
■ “read_sdc” in the Command Line Reference
■ “Remove SDC” in the Graphical User Interface Reference
■ “remove_sdc” in the Command Line Reference
Note: An RTL Compiler license is required to run Read SDC.
To perform Remove SDC using the graphical interface, refer to “Remove SDC” in the
Graphical User Interface Reference.
To perform Remove SDC using command line, refer to “remove_sdc” in the Command Line
Reference.
The commonly used keyword for Remove SDC is:
■ workdir = name of the working directory
The clock sequences are combined into a multi domain sequence that simultaneously tests
the combined sequences. Refer to Table 4-3 on page 112 for more information.
The best sequences are determined by generating test patterns for a statistical sample of
randomly selected dynamic faults. Each unique test pattern is evaluated to ascertain the
number of times the test generator used it to test a fault. The set of patterns used most often
are considered the best sequences. Typically, the top four or five patterns will test 80 percent
of the chip. Specify the maxsequences option to use more sequences, if required.
The maximum path length is determined by generating a distribution curve of the longest data
path delays between a random set of release and capture scan chain latch pairs. The area
under the curve is accumulated from the shortest to the longest and the maximum path length
is the first valley past the cutoff percentage. This method limits the timings to ignore any
outlying paths that over inflate the test timings. Additional options are available to control this
by providing the cutoff percentage for the curve and a maximum path length to override the
calculation.
The distribution summary is printed in the log; an example is given below:
Random Path Length Distribution
Percentage 80 90 95 97 98 99 100
MaxPathLength 5300 6250 6500 7250 7500 7500 12350
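For example, if the cutoff percentage were 95, the sample distribution above would yield a maximum path length in the vicinity of 6500, so the outlying paths that extend to 12350 would not inflate the test timings.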
Dynamic constraints are automatically determined by generating timings for the set of best
sequences. This occurs after the maximum path length has either been provided or
calculated. The events for each sequence are applied and the calculated delays are
evaluated for violations of the maximum allowed distance. For every detected violation, the
sources are traced and a dynamic constraint is generated to specify to the test generator to
disallow these sources to switch during the dynamic portion of the test. If no transitions occur
in these paths, the cause of the timing violation is removed, thus resulting in faster timings. In
addition to the violations, a set of ignored measure latches is generated for analysis. When
the number of ignored measures exceeds 10 percent of all of the measures for the chip, the
maximum path length for the sequence is increased to allow for better test coverage. A
summary of this activity is printed in the output log, as shown in the following example.
Encounter Test might create design constraints while running prepare_timed_sequences with
a delay model. Refer to “Dynamic Constraints” on page 150 for more information on how
Encounter Test handles and processes these constraints.
To perform pre-analysis using the graphical user interface, refer to “Prepare Timed
Sequences” in the Graphical User Interface Reference.
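A minimal command-line sketch of the prepare_timed_sequences run, with placeholder keyword values, is:
prepare_timed_sequences workdir=<workdir> testmode=<testmode> delaymodel=<delaymodel>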
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ delaymodel=<name> - Name of the delay model for the timed ATPG tests.
Note: The commonly used keywords for the prepare_timed_sequences command are:
■ clockconstraints =<file name> - List of clocks and frequencies to perform
ATPG. For more information, refer to “Clock Constraints File” on page 125.
■ dynseqfilter= <value> - Type of clocking, for example launch of capture with
same clocking or launch off shift. For more information on sequence types, refer to
“Delay Test Clock Sequences” on page 146.
The following table discusses the various methods of getting timings in the patterns:
Prerequisite Tasks
Complete the following tasks before executing Prepare Timed Sequences:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode
3. Build Fault model with dynamic faults
Output
Encounter Test stores the test sequences internal to the testmode. The sequences can be
examined by reporting the test sequence. Refer to report_sequences in the Command Line
Reference for more information.
Check the log file to see if a large number of constraints were added to the database. A large
number of constraints may prevent the test generator from ramping up the coverage,
especially in signature-based testing.
Command Output
Stim_Latch
Stim_PI_Plus_Random
Dynamic pattern events:
Pulse_Clock -ES CLK2
Pulse_Clock -ES CLK2
Static measure event:
Measure_Latch
Encounter Test might create and use design constraints while running
create_logic_delay_tests with a delay model or SDC. Refer to “Dynamic Constraints” on
page 150 for more information.
To perform create logic delay tests using the graphical interface, refer to “Create Logic Delay
Tests” in the Graphical User Interface Reference.
To perform create logic delay tests using the command line, refer to “create_logic_delay_tests”
in the Command Line Reference.
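An illustrative invocation, with placeholder keyword values, is shown below; optionally add clockconstraints=<file name>, or usesdc=yes if SDC constraints were read into the testmode:
create_logic_delay_tests workdir=<workdir> testmode=<testmode> experiment=<experiment>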
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test to be generated
The most commonly used keywords for the create_logic_delay_tests command are:
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. For more information, refer to “Clock Constraints File” on page 125.
■ dynseqfilter=<value> - Type of clocking, for example, launch off capture with the same
clocking or launch off shift. For more information on sequence types, refer to “Delay Test
Clock Sequences” on page 146.
Prerequisite Tasks
Complete the following tasks before executing Create Logic Delay Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode
3. Build Fault model with dynamic faults
Tip
For timed tests, it is recommended to run prepare_timed_sequences to
precondition the clocking sequences and timing constraints.
Output
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information on testmode, global coverage and the number of patterns
used to generate the results.
Experiment Statistics: FULLSCAN.prep2
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 908 229 0 640 25.22 25.22
Total Dynamic 814 111 0 699 13.64 13.64
INFO (TDA-001): System Resource Statistics. Maximum Storage used during the run
and Cumulative Time in hours:minutes:seconds:
Note: For information on reporting domain specific test coverage, refer to “Reporting Domain
Specific Fault Coverages” on page 345.
The following figure shows the tasks required to use custom OPCG logic within a design.
OPCG logic usually requires a special initialization sequence and sometimes requires special
event sequences for issuing functional capture clocks during application of the ATPG
patterns. This example creates special event sequences for initializing the chip and for
launching the functional capture clock.
Note that as each design is unique, customized designs require different settings and event
sequences.
Two functional clock pulses are issued from the OPCG logic. The first clock launches the
transition and the second clock captures the logic output in a downstream flip-flop.
In addition, there should be an OPCG statement in either the mode definition file or the pin
assign file. The OPCG statement block allows specifying the PLLs to be used, the reference
clocks that are used to drive them, and the programming registers that are available to
program them. It also allows specifying each OPCG clock domain, the PLL output that drives
the domain, and programming registers that are available for the domain. Refer to OPCG in
the Modeling User Guide for more information on the OPCG statement syntax.
The initialization sequence defines the launch-capture timing within the OPCG logic and waits
10,000 cycles for the PLL to lock.
The test application sequence defines the sequence of events required to get the OPCG logic
to issue the desired launch and capture clock pulses.
RTL Compiler creates the pin assign file that defines the cutpoints and the OPCG logic to
Encounter Test; you still need to create the mode initialization sequence to correctly program
any PLLs that will be used for OPCG testing.
The RTL Compiler also generates a run script that automates the various steps of
Encounter Test to produce the test patterns.
The following figure depicts the tasks required to generate test sequences using Cadence-
inserted, standard OPCG logic without using the run script generated by RTL Compiler:
Figure 4-17 Creating True-time Pattern Using RTL Compiler Inserted OPCG Logic
When using the OPCG logic inserted by RTL Compiler, you can provide the mode
initialization sequence as input to the write_et_atpg command in RC. This generates an
RC run script named runet.atpg, which automates the various steps of Encounter Test to
produce the test patterns. The script processes the OPCG and non-OPCG test modes. For
the OPCG test modes, it runs the prepare_opcg_test_sequences command that
automatically generates intradomain delay tests to be used by ATPG. It can also generate
static ATPG tests, if desired. You can modify the script to generate inter-domain tests if you
have included delay counters in the OPCG logic.
The following figure depicts the tasks required to create the test patterns using the RC run
script:
■ Build the test mode initialization sequence - This initializes the PLLs and starts the
reference oscillators to be used for test.
■ Run the RC command define_dft opcg_mode - This specifies the sequence used to
initialize the PLLs.
■ Optionally modify the run script, runet.atpg, to customize it for the desired output - For
example, if you inserted delay counters in OPCG domains and want to apply interdomain
tests, add “interdomain=yes” to the invocation of the prepare_opcg_test_sequences
command line in the script.
The following section identifies the settings to control the timings for a test generation run.
One situation where these may be useful is in testing different clock domains.
Path Length
Maximum and minimum timings represent the time between the clock edge that triggers the
release event and the clock edge that captures the result. With reference to Figure 4-24 on
page 145, note that this is not exactly equal to the path length, because the paths from the
Clock A input to LatchA and LatchB may not be equal. Regardless, it is often convenient to
think in terms of the path lengths. Paths longer than the maximum path length (after adjusting
for the clock skew as previously noted) cannot be observed by the test since they would be
expected to fail. Any latch or primary output that would have been the capture point for these
long paths is given a value of X (no measure).
Paths shorter than the minimum path length will be observed, but some small delay defects
could go undetected in a short path. Note that any dynamic test that involves paths of unequal
lengths must be timed to the longest path, so this short path concern exists regardless of
whether a minimum path length is specified.
The clock constraints file guides the creation of clock sequences used for fixed-time, at-speed
or faster than at-speed test generation. The delay test generator builds tester-specific test
sequences using the clock constraints file information and the tester description rule (TDR).
Multiple domains and inter-domain sequences can be defined in a single clock constraints
file. Certain timing-related TDR parameters are also overridden using a clock constraints file.
Statements supported are:
■ Clock domain statements
■ Return to stability statements
■ TDR override statements
The domains to be tested are specified by the clocks that control them. Several syntax
variations are supported. Each clock domain statement is used to generate a specific clock
sequence during ATPG. Multiple clock domain statements are not combined into a single
sequence.
clockname {edge, width timeUnit} {speed timeUnit};
or
clockname1 {edge, width timeUnit} clockname2{edge, width timeUnit} {speed
timeUnit};
or
clockname {speed timeUnit};
or
clockname1 clockname2 {speed timeUnit};
where:
■ clockname - Name of a clock pin. This can be a primary input pin or a pseudo primary
input (PPI).
Note: Different clock PIs can be used within the same statement in the clock constraints file.
❑ If there is only one pin, it is used as both the launch and capture clocks.
❑ If there are two pins, the first pin is the launch clock and the second pin is the capture
clock
■ edge - Either posedge or negedge, referring to which edge of the clock pin is the active
edge (opposite of stability state) for the domain.
■ width timeUnit - The pulse width. Specified as an integer. Specify timeUnit in ns,
ps, or MHz (for frequency).
■ speed timeUnit - The time between the leading edges of the launch and capture
clocks. As with width, the speed can be defined in ns, ps, or MHz.
Note:
■ The clockname must have a space after it.
■ Speed timeUnit is optional but the surrounding braces {} are required.
Examples
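The following lines sketch the statement forms above; the clock names, edges, widths, and speeds are placeholders:
// Single clock used for both launch and capture, 5 ns pulse width, 100 MHz domain
CLKA {posedge , 5 ns} {100 MHz} ;
// Separate launch (CLKA) and capture (CLKB) clocks for a 50 MHz domain
CLKA {posedge , 10 ns} CLKB {posedge , 10 ns} {50 MHz} ;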
The return to stability (RTS) statement specifies when the trailing edge of a clock is to occur
relative to the END of the tester cycle. Specify the RTS time for clock domains using one of
the following formats:
RTS clockname {edge , speed timeUnit} ;
or
RTS ALLCLKS {edge , speed timeUnit} ;
where:
■ clockname - The name of a clock pin. This can be a primary input pin or a pseudo
primary input (PPI).
■ edge - Either posedge or negedge, referring to which edge of the clock pin is the active
edge (opposite of stability state) for the domain.
■ speed timeUnit - The time, relative to the end of the tester cycle, at which the clock
returns to stability. Specify speed as an integer. Specify timeUnit in ns, ps, or MHz (for
frequency).
Note:
■ The clockname must have a space after it.
■ The Return to Stability statement applies to all Clock Domain Statements that reference
the same clock PI name.
Example:
The following example shows how the clock domain statement is modified by a return to
stability statement:
// CLKA clock domain is 50 MHz with 10 ns clock
CLKA {posedge , 10 ns} CLKA {posedge , 10 ns} {50 Mhz} ;
// Define the time consumed before the end of the tester cycle
// that CLKA returns to stability
rts CLKA {1 ns} ;
This example defines the timing for the CLKA launch and capture. If the TDR specifies that
only one clock pulse per tester cycle is to be issued, the resulting sequence will place CLKA
in two consecutive tester cycles:
The leading edge of the launch clock will be at time 90 ns and the leading edge of the capture
clock will be at time 10 ns in the next cycle. Both should have a pulse width of 10 ns.
However, the RTS entry modifies this timing. It specifies that the trailing edge of both the
launch and capture CLKA pulses should be at 99 ns (1 ns from the end of the cycle). To
accommodate this, the launch pulse width is changed to 9 ns and the capture pulse width is
changed to 89 ns.
In addition to clock domain specifications, certain TDR parameters can be overridden. The
following statements may be used in the clockconstraints file:
■ resolution {speed timeUnit} ;
■ accuracy {speed timeUnit} ;
■ period {speed timeUnit} ;
Resolution identifies the smallest increment of time on the tester. Accuracy is added to the
time between release and capture timings. Tester period identifies the time for one tester
cycle.
Example:
// tester resolution...smallest increment of time on the tester
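// The values below are placeholders that illustrate the override syntax
resolution {100 ps} ;
// accuracy...added to the time between release and capture timings
accuracy {200 ps} ;
// tester period...time for one tester cycle
period {100 ns} ;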
Process Variation
Control the timing calculation by selecting a point on the process curve. This curve is a
mathematical representation of the delay value, which if measured on a sample of parts,
would vary about some mean value; that is, the process curve is the statistical distribution of
the average delay values of the chip. Figure 4-19 shows an example process curve for a given
delay. Set the coefficients relating to best case, nominal case, and worst case delays. The
delays are calculated as
delay=(best_case_coefficient * best_case_delay) +
(nominal_case_coefficient * nominal_case_delay) +
(worst_case_coefficient * worst_case_delay)
This formula allows the selection of either the best case, worst case, or nominal delays by
setting one of the coefficients to 1 and the others to 0. However, Encounter Test accepts any
decimal number for each of the coefficients to scale the delays (with a coefficient other than
1) or use averages (with two or more non-zero coefficients).
Tests to sort the product based on its speed are created by making several different
runs and selecting a different point on the process curve each time. To ensure that all received
product is faster than the halfway point between nominal and worst case (see Figure 4-19),
specify a process variation of (0,.5,.5). Not all manufacturers will do such sorting, therefore
we recommend verifying whether an override is acceptable.
The commonly used keywords for Verify Clocking Constraints are:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. Refer to “Clock Constraints File” on page 125 for more information.
Prerequisite Tasks
Complete the following tasks before executing Verify Clocking Constraints:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode. Refer to “Performing Build Test Mode” in the Modeling
User Guide for more information.
Output Files
None
Command Output
The output log contains information if the sequences match the data stored in the testmode.
Error in Syntax:
file: clock.domain.txt.bad , the clock name CLKa2 was not found in the model.
Correct the clock constraint file value(s) and re-run. [end TTU_418]
ERROR (TTU-414): Clock_constraint file parse failed, due to syntax error at line
3. [end TTU_414]
INFO (TTU-402): Verify Clock Constraints is complete. [end TTU_402]
Another method to verify the clock constraints information against the TDR information is to
use the verify_clock_constraints command. All the aforementioned commands and the
verify_clock_constraints command check if the following criteria are met:
■ The minimum pulse width value obtained from the clock constraint file should be equal
to or greater than the specified value for the MIN_PULSE_WIDTH parameter in the TDR
■ The maximum pulse width value obtained from the clock constraint file should be less
than or equal to the specified value for the MAX_PULSE_WIDTH parameter in the TDR
■ The minimum leading to leading edge width obtained from the clock constraint file should
be equal to or greater than the specified value for the
MIN_TIME_LEADING_TO_LEADING parameter in the TDR
■ The minimum trailing to trailing edge width obtained from the clock constraint file should
be equal to or greater than the specified value for the
MIN_TIME_TRAILING_TO_TRAILING parameter in the TDR
■ The specified pulse width and clock speed values for a clock in the clock constraints file
should be unique (semantic check)
The above-mentioned clock constraint checks are invoked in the initial stages of test
generation, and the applications quickly notify you if any criterion is not met.
Note: For any of the above-mentioned checks, if the data to compare is not available from
the TDR, the command does not perform the corresponding check.
Characterization Test
Characterization test is a type of path test used primarily to speed-sort a product. That is,
using certain paths, the maximum speed at which the design can run is determined.
The goal of characterization test is to provide a starting point for schmoo-ing at the tester.
Characterization test is based upon a path test, however many of the timing attributes of the
Manufacturing Delay test apply. Many manufacturing sites have tools that facilitate changing
the timings of the patterns therefore Encounter Test produces a basic path test that can be
manipulated on the ATE. The default process for generating path tests when delays are
available (a delay model has been built) is to generate tests for the specified longest (critical)
paths and to individualize the patterns that test each path. Each test pattern can have its own
unique timeplate that can be manipulated on the ATE independent of the timeplates used for
other patterns. Encounter Test does not limit path tests (logic values and timings) to just
testing and timing individual paths; multiple test paths can be simultaneously tested. When
the paths are simultaneously tested, the speed of the tests is affected by the entire product.
Constrained timings should be used so the timings do not take into account the paths longer
than the specified paths.
An important consideration when performing path test is the pattern volume. In path test, the
trade-off is between processing time, pattern count, test coverage and the number of paths
that can be tested. A minimal pattern set is difficult with path tests. Compressing patterns is
often not possible when focusing on a particular path and ignoring other capture registers in
the design. This can cause data volume/tester time problems which must be balanced with
the number of paths to test. As the number of paths increases, they should be grouped into
paths with similar timings, then applied with the longest timing for that group. This means that
some paths will have their timings relaxed but fewer timing changes will have to be made on
the manufacturing test equipment.
Paths for characterization testing can be partially or fully specified. Encounter Test selects
paths using delay information and/or the gate-level information. If delay information exists (a
delay model was specified), the paths are determined by specifying the partial path and filling
out the rest of the paths, enumerated in longest, shortest, or random order. Also, if a path
is given entirely at the cell level, it is considered only partially specified because the paths are
stored at the cell level and there may be multiple paths within a cell. By default, Encounter
Test finds a certain number of the longest paths upon which to attempt test generation based
on a user-specified path. This is rarely an exhaustive set of paths due to the number of cells
(and paths through those cells) in a specified path.
The specified path may not be the sole determining factor for the timing of the test. The logic
in the back-cone of the capture register also affects the test's timing. In Figure 4-2, this is
shown in the smaller dotted region. For a robust test, other logic which affects the path (feeds
into the path) must be considered. If logic in the back-cone operates slower than the path,
then that is the timing that must be used. This ensures that late values can only be due to the
cone of logic the target path exists within. To limit the effect of this, the timing is done using
the actual pattern so only the logic experiencing transitions is timed.
Path tests can be stored patterns generated specifically for the paths, or previously generated patterns from another ATPG run (such as dynamic stored pattern, WRPT, or LBIST patterns) that have been simulated against the defined set of faults. For example, LBIST patterns can be resimulated to investigate whether the longest paths are tested.
Refer to “Path Tests” on page 137 for information on different types of path patterns.
To perform Create Path Delay Tests using the graphical interface, refer to Create Path Delay Tests in the Graphical User Interface Reference.
To perform Create Path Delay Tests using the command line, refer to “create_path_delay_tests” in the Command Line Reference.
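A sketch of a typical invocation is shown below; the angle-bracketed values are placeholders you supply, and maxnumberpaths and pathtype are shown with their documented defaults:
create_path_delay_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> pathfile=<pathfile> maxnumberpaths=20 pathtype=nearlyrobust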
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
■ pathfile= name of the file containing the paths to be tested. Refer to “Path File” on page 136 for more
information.
and observed by the same clock will be processed. For more information on sequence
types, refer to “Delay Test Clock Sequences” on page 146.
■ maxnumberpaths=<number> - The maximum number of paths to generate for any
one path group. There is a rising and falling group created for each specified path. The
default is 20.
■ pathtype=nearlyrobust|robust|nonrobust|hazardfree - Type of test to
generate. Refer to “Path Tests” on page 137 for more information. The default is
nearlyrobust.
To simulate paths against existing patterns, use the inexperiment and tbcmd keywords.
Prerequisite Tasks
Complete the following tasks before executing Create Path Delay Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build an Encounter Test testmode. Refer to “Performing Build Test Mode” in the Modeling User Guide for more information.
Output Files
Encounter Test stores the test patterns in the experiment.
Command Output
The output log contains information about the number of paths tested and the type of
generated test. A sample output log is given below:
Hzfree= Hazard Free Robust
Robust = Robust
NrRob = Nearly Robust
NoRob = Non Robust
****************************************************************************
----Dynamic Path Test Generation Final Statistics----
Final Experiment Statistics for Path Faults
            #Faults  #Tested  #HzFree  #Robust  #NrRob  #NoRob  #NoTest   %TCov  %HFree   %Rob  %NrRob  %NoRob  %NoTest
All Paths      4040      321        0        0       0     321        0    7.95    0.00   0.00    0.00    7.95     0.00
OR Groups       920      138        0        0       0     138        0   15.00    0.00   0.00    0.00   15.00     0.00
AND Groups        0        0        0        0       0       0        0    0.00    0.00   0.00    0.00    0.00     0.00
****************************************************************************
Path File
A path file is used as input to Create Path Delay Tests and Create Path Timed Delay Tests to
assist in processing specific paths on the design. The guidelines for path file syntax are as
follows:
■ The syntax for the content of the path file is: identifier block/pin/net ......;
■ There is no limit to the number of paths.
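An illustrative path file containing two path fault groups might look like the following sketch; the block and pin names are placeholders:
MyPath reg_a/Q and1/A and1/Z reg_b/D ;
MyPath2 reg_c/Q mux1/Z reg_d/D ;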
In the preceding example, MyPath and MyPath2 are the path identifiers that name the two
path fault groups. Specify names of nodes along the path after the path fault group name. The
nodes may be specified in any order with any number of intermediate nodes excluded. When
nodes are excluded, the intermediate nodes are determined based on how the parameter
selectby is set, whose default value is longest. Use a semi-colon (;) to indicate the end
of the path fault group node list.
Path Tests
These tests analyze whether a transition propagates through a specified user-defined path, which is a directed acyclic set of gates.
Hazard-free tests require that the only possible way for the transition to arrive at the sink is
without any interference from the gating signals that intersect the path. That is, the gating
signals must be at a constant steady state; glitches are not allowed. To make a test hazard-
free, all inputs into the path must be at a steady state (A=1s). Because the gating inputs must remain at a steady state, two intersecting paths cannot be tested hazard-free at the same time. The design in Figure 4-20 can be made hazard-free by
ensuring C and H are at a steady 1 (1s).
Robust path tests are “relaxed” hazard-free tests. The difference is that for robust tests the
controlling to non-controlling transitions do not require a steady value from the gating signals.
This means the off path pin may be unstable until the final state is achieved. Conversely, a
non-controlling to controlling transition on the path requires the other pins to be at a steady
value.
A robust path test tests the delay of a path independent of the delays of the other paths,
including paths that intersect it. So if path B-I-L in Figure 4-20 is faulty, then regardless of the
delay on the other paths, L will detect the 1 to 0 transition late. If a fault is present on B and
C, then L will detect the transition late, however we are unable to diagnose from which path
the fault has come.
Non-robust path tests allow hazards on both controlling-to-non-controlling and non-controlling-to-controlling transitions on the off-path gating. The only requirement is that the initial and final states of the path's sink are a function of the transition of the source. For example, if the source does not change state,
the sink will settle at the state that is a function of the source.
The non-robust tests create a path test; however, a faulty value on another line can invalidate
the test. An example is the hazard on C in Figure 4-21. If the glitch widens the value at L, the
value still appears reasonable even though there is a fault on B. The glitch is typically caused
by values changing from 0 to 1 and 1 to 0 on two inputs of AND or OR gates.
If the robust test is invalidated by a signal with a known definite glitch (a two input AND gate
with both inputs transitioning in opposite directions would create a definite glitch or hazard),
then the test is identified as a nearly-robust path test. A plain non-robust path test is one
where off path gating requires a steady state value on an off path input, and it is impossible
to get a steady state (for robustness) or definite hazard (for near robustness) on that pin.
To report path faults using the command line, refer to report_pathfaults in the Command
Line Reference.
Refer to Reporting Path Faults in the Modeling User Guide for more information.
Refer to “Path Tests” on page 137 for information on different types of path test.
To perform Prepare Path Delay using the command line, refer to prepare_path_delay in the
Command Line Reference.
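A sketch of a typical invocation, with angle-bracketed placeholder values, is:
prepare_path_delay workdir=<workdir> testmode=<testmode> pathfile=<pathfile>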
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ pathfile= name of the file containing the paths to be processed
Prerequisite Tasks
Complete the following tasks before executing Prepare Path Delay:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode. Refer to “Performing Build Test Mode” in the Modeling
User Guide for more information.
Output Files
Encounter Test stores the alternate fault model name as output.
Command Output
The output log displays the summary of the number of faults created. A sample log is shown
below:
There are 1 test mode(s) defined:
FULLSCAN
Delay Defects
Delay defects are modeled by dynamic faults. Catastrophic spot point defects are modeled by
dynamic pin and pattern faults. Cell-level defects and process variations use the dynamic
path pattern fault. Both use a single transition to excite the fault effect. Dynamic pin and
pattern faults require values at one gate. Path faults require a specific transition to be excited
and the effect of that transition propagated along the chosen path. Path faults also have variable detection criteria. They can be detected as hazard-free, robust, nearly-robust or non-
robust.
The following example shows an AND gate with a single (slow-to-rise) pin fault. The only pin
required to change state is the faulty pin. The other input to the AND gate may or may not see
a transition, but it must have a non-controlling value (1 for the AND gate) in the final state. The
final output state is 1 in this example. If the defect exists, the output will be 0 for longer than the expected time, during which its state must be observed. The time the defect is present can be as small as a few hundred picoseconds or as large as tens of nanoseconds. Though there is a time
component to determine whether the defect is active, the detection of the fault does not take
this into account. In our example, if the value (0) which is opposite the expected value (1) is
captured in a latch (by a clock event) or PO (by a PO measure) immediately following the
launch of the transition, the fault is detected.
The path pattern faults model a defect or process variation that accumulates across multiple gates, causing a signal to arrive late. Refer to “Path Pattern Faults” in the Modeling User
Guide.
Parametric process problems cannot necessarily be observed at a single fault site. The delay
effects of such a variation accumulate across the path. Characterization Test provides for
testing process variations by testing specific paths. The paths may be a complete set
of logic between memory elements or smaller segments within technology cells. The paths
are modeled by dynamic path pattern faults. The locations and paths of these faults can be
user-determined or paths can be selected based on their path lengths.
An element that is essential to delay testing is that the test patterns must be applied with as
little slack at the measure points (observable scan chains or POs) as possible. This can entail
running at or near the system speed, or in some cases, faster than system speed for short
paths. Encounter Test uses information contained within a Standard Delay File (SDF) as input
to the True-Time Test methodologies. The SDF is used to automatically set the timings of PI
switching and clock pulses at the tester.
In the True-Time Test methodology, the SDF is also used to automatically determine which
memory elements have paths that are significantly longer than the rest of the paths within a
given clock domain (such as multi-cycle paths). These outliers are automatically made to not
cause failures at the tester by either measuring X at these elements (ignoring the measure)
or by constraining the logic that creates the long path during test pattern generation.
In the At-Speed and Faster Than At-Speed True-Time methodologies, the SDF is used in
conjunction with a clock constraints file (described in “Clock Constraints File” on page 125).
The clock constraints file specifies the desired operating speeds of each clock domain to be
tested as well as several other user-defined timing relationships. By using these timing
relationships, the outlier paths are determined the same way as in the automatic flow and the
logic that contributes to paths that do not meet this user-defined timing is either ignored
(measure X) or constrained during test pattern generation.
The At-Speed and Faster Than At-Speed flows require a method of achieving a high clock
rate on the product during test. Sometimes this clocking can be provided by the tester, but it
is more likely some hardware assistance will be required on the part itself. Additional On-
Product Clock Generation support is available to achieve the system speeds. Refer to “On-
Product Clock Generation (OPCG) Test Modes” in the Modeling User Guide for additional
information.
Use of a design constraints file is another method to supplement At-Speed and On-Product
test generation. Refer to “Design Constraints File” on page 100 for details.
The dotted line in Figure 4-23 shows the effect of a delay defect and the difference between
a delay (AC) test and a static (DC) Test. A delay defect causes the expected switch time
(represented by the solid line), to be delayed (represented by the dotted line). The difference
between the delay and static tests is that the delay test explicitly causes the design to switch and specifies when the result should be measured, while the static test does not require a transition. In a static test, the measure can be done later since a static defect is
considered to be tied high or low.
A basic delay test is a two pattern (or two clock, or two cycle) test. The first pattern is called
the launch event because it launches the transition in the design from 0 to 1, or 1 to 0
(additional transitions of 0 to Z, 1 to Z, Z to 0, and Z to 1 are also possible in some designs).
The second pattern is called the capture event because it captures, or measures, the
transition at an observe point (register or primary output).
In its simplest form, a dynamic test consists of initiating a signal transition (the release
event) and then some measured time later, observing the effect of this transition (the
capture event) at some downstream observation point (typically a latch). For a transition to
occur, the design must be in some known initial state from which the transition is made.
Therefore, a dynamic test usually includes a setup or “initialization” phase prior to
the timed portion of the test. When the point of capture is an internal latch or flip-flop,
observation of the test involves moving the latch state to an observable output. This is done
with scan design by applying a scan operation to unload the scan chain.
The three phases of a dynamic test (initialization, timed, and observation) are depicted in
Figure 4-24 along with the three types of events in the timed portion. In general, a single
dynamic pattern may have several events of each type.
A dynamic test may be either timed or not timed. When it is timed, Encounter Test defines the
timing in a sequence definition which is a sort of pattern template and accompanies the test
data. When the dynamic tests are not timed, the tests are structured exactly the same as for
timed tests, but there is no timing template. In this case, the timing, if used at all, is supplied
by some other means, such as applying the clock pulses at some constant rate according to
the product specification. Refer to “Default Timings for Clocks” on page 172 for related
information.
The optimized timed test calculated by using delays from the SDF and tester information from
the Tester Description Rule is, depending on the tester's accuracy, a test that can run faster
than the tester's cycle time. If the release and capture clocks are different, they can be placed
in the same tester cycle and their pulses can overlap to obtain an optimized (at-speed) test.
If the release and capture events both require the same clock to be pulsed, as is the case for
most edge-triggered designs, then the two clock pulses must be placed into consecutive
tester cycles. By timing these two consecutive tester cycles differently, the pulses can be
pushed out to near the end of the first cycle and pulled in near the beginning of the following
cycle. This allows the pulses to occur with a cycle time much less than that of the tester and
much closer to that of the product. This allows the test to run at the speed necessary to truly
test the potential defect because it is “at-[the]-speed” of the design, not at-the-speed (or at
the mercy) of the test equipment.
There are two techniques for generating the two time frames:
■ Launch on last shift requires that the final shift create a transition at the outputs of the
required flops
■ Double clocking (also known as functional release) uses two clocks in system operation
(or capture) mode to create the transition and capture the results.
In Figure 4-25, the timings for the two transition generation techniques indicate that the clock
is essentially the same. The scan can be run slow (to control power), but there must be two
cycles of the clock at the test time and the time from active edge to active edge is controlled.
In launch-on-last-shift, the Mux-Scan "Scan Enable" signal must be toggled after the launch
clock edge and before the capture clock edge. In double-clock, the scan is completed, and
the clocks can even be held off for an indefinite period while the scan enable signal is toggled.
Then two timed clocks are issued, and there is again an indefinite amount of time to toggle
the scan enable. This places no significant restrictions on the scan enable distribution, which makes the timing easier in a Mux-Scan design but the test generation more difficult.
The default for MuxScan designs is to use double-clock only. It is recommended to allow the
test pattern generator to decide which method to use. If the SDF is provided, the timings of
the Scan_Enable signal can be appropriately controlled. The keywords of the
create_logic_delay_tests command for controlling clock sequence generation are:
■ dynfuncrelease=yes|no where yes restricts the clock sequences to double-clock
only, and no allows the test pattern generator to use either.
■ dynseqfilter=any|repeat|norepeat|onlyclks|clkpiclk|clockconstraint
where repeat will allow only patterns that repeat the same clock (within a domain), and
onlyclks will also allow inter-domain sequences.
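For example, a sketch of an invocation that lets the test pattern generator choose the transition generation method might look like the following; the workdir, testmode, and experiment values are placeholders, and the remaining keywords are those described above:
create_logic_delay_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> dynfuncrelease=no dynseqfilter=any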
A method of calculating the values to scan into the flops in double-clock test generation is
illustrated in Figure 4-26. Three levels of flops are depicted.
To generate a test that detects a slow-to-rise fault on the output of the OR gate, a 0 to 1 transition must be created on the output of the OR gate using the following steps:
1. Place an initial value of 0 on the output of the OR gate by scanning a 0 into the B0 and B1 flops. The three values shown on each output pin are the values after the last shift
clock, launch clock, and capture clock, respectively.
2. Place a value of 1 on the output of the OR gate. To accomplish this, the B0 flop must
capture a 1 at the launch event of the capture clock. Therefore, a value of 1 must be
scanned in the A0 flop in the previous scan shift. Similarly, for B1 to capture a 0, A1 must
have 0 scanned into it during the previous scan shift. Set input pin IN1 to 1 to propagate
the fault to C0.
The second pulse of the clock is the capture event. If the transition propagates with normal
timing, a 1 is captured into the C0 flop. If there is a delay defect, a 0 will be captured. During
fault simulation, additional transition and static faults will also be detected by this pattern.
Build Delay Model performs an integrity check on the data contained in the SDF file. The check verifies that the design, as described by the SDF data, is correctly modeled
by Encounter Test. The secondary purpose of the check is to verify the completeness of the
information within the SDF file.
CELL AND
(
TIE1 A
)
{
IOPATH B RISING Z RISING
IOPATH B FALLING Z FALLING
}
This shows two definitions for the same AND cell, but in the second case, the first input pin is
tied to 1. This latter definition will only apply to instances of the AND cell in which the first input
is tied to 1. Otherwise, the first definition will apply, as it is more generic.
Once exported in this form, the definitions can be manually edited and read back in to the
binary database used by build_delaymodel for delay integrity checking. To read a
modified set of definitions back in, use read_celldelay_template and specify your text file containing the definitions for the templatesource keyword.
Run build_delaymodel to build the delay model with the new definitions for checking.
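A sketch of this two-step flow is shown below, assuming both commands accept the usual workdir keyword; the workdir value and the text file name are placeholders, and any additional build_delaymodel keywords your flow needs are omitted:
read_celldelay_template workdir=<workdir> templatesource=<edited_definitions_file>
build_delaymodel workdir=<workdir>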
Dynamic Constraints
Dynamic constraints are intended to prevent pattern miscompares for transitions propagating
through the following types of delays:
■ Long paths (negative slacks or setup time violations)
■ Short paths (hold time violations)
■ Delaymux logic
■ Unknown delays
Constraints are included in the linehold and sequence definition files. The sequence definition
file (TBDseq) includes constraints as keyed data. The keyed data is found on the define
sequence and the key is CONSTRAINTS. The entire set of constraints for a sequence
definition is found in the string that follows the key. See “Linehold Object - Defining a Test
Sequence” on page 204 and “Keyed Data” in the Test Pattern Data Reference for
additional information.
Refer to “Linehold File Syntax” on page 199 for details on linehold syntax supporting the
transition entity.
Constraint Checking
Timing constraints are enforced in the following two ways:
■ The constraints are justified by the ATPG engine using the constraintjustify
keyword.
■ During simulation, the ATPG engine verifies the test patterns do not violate the
constraints (using the keyword constraintcheck).
In some cases, the ATPG engine is unable to generate patterns in the presence of all the
constraints, therefore, turning off constraintjustify allows the ATPG engine to generate
patterns. In this case, the simulation engine can verify that the constraints were not violated,
and if required, to take some action (either remove or keep them). The constraintcheck
keyword controls this (defaults to yes).
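For example, a sketch of an ATPG run that turns off constraint justification but keeps the simulation-time check might look like the following, assuming the keywords are given to create_logic_delay_tests and that the workdir, testmode, and experiment values are placeholders:
create_logic_delay_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> constraintjustify=no constraintcheck=yes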
If your delay test flow uses prepare_timed_sequences, then the ATPG engine will try to
locate paths that will not be safe to measure within the frequency specified in the design’s clock constraints
file. If the engine finds such paths, it creates "transition constraints" to prevent transitions from
occurring along these long paths. If the test generator can successfully stop transitions from
occurring along these paths, then the ATPG engine will not be required to create a timing X
at the measure location of this path, which will improve coverage. If Encounter Test fails to
prevent transitions down this path, it will have to measure an X on every flop that this long
path feeds. Using constraintcheck=yes allows this.
If you do not specify constraintcheck=yes, but are using one or more transition
constraints (as a result of prepare_timed_sequences or SDC), then it is highly
recommended to use an SDF-annotated simulation, such as ncsim, to verify your patterns;
otherwise the patterns might not work on the tester because of the violated constraints. The
alternative is to use the default option and let Encounter Test perform the verification and
correction (if required) in a single step.
In case of a performance issue, first examine the number of constraints that are being created
in the prepare_timed_sequences phase. If this number is excessive, check the clock
frequencies, because it means that Encounter Test has identified too many paths that do not
appear to meet your timings. Check specifically the number of violations reported by
prepare_timed_sequences and compare this number to the number of flops in the given
domain. If it is more than 1% or 2%, then your timings might be too aggressive.
Encounter Test uses a Standard Delay File (SDF) as the input for all the delays in a chip. See “Timing Concepts” on page 85 for additional
information.
The accuracy of the SDF drives the quality of the timings produced by Encounter Test. The
level of SDF accuracy directly relates to the ability of Encounter Test to create timed test
patterns that work on the tester after a single test generation iteration.
A Timed Test Pattern is composed of three basic building blocks. The building blocks are:
1. The release pulse, used to launch captured data into the system logic.
2. Separation time, the calculated amount of time that is required between the release and
capture pulses.
3. The capture, used to capture the design values in a measure point. See “General View
Circuit Values Information” in the Graphical User Interface Reference for related
information.
The release and capture pulses are composed of the following two basic elements:
1. The time at which the pulses occur
2. The length of the pulses (pulse width)
The time and width of the pulses calculated by Encounter Test are based on the SDF.
Figure 4-28 shows the five elements that control the clocks at the tester:
When Timed Test Patterns fail at the tester, it is key to understand why and where these
failures are occurring. The first step in determining why Timed Test Patterns fail is to slow the
patterns down to verify that they pass at slower speeds, similar to static tests. Static Tests are identical to Timed Test Patterns except that a static pattern is not run at faster tester speeds.
The speed of a Timed Test Pattern can be slowed to the speed of a Static Test Pattern by
performing the following steps.
1. Widen the separation time between the clock pulses. If the separation time between the
release and capture clocks is 5ns, consider widening the separation to 200 ns.
2. Increase the widths of the release and capture pulses. Encounter Test creates the Timed
Test Patterns with the minimum pulse widths required to release or capture a value at a
latch. These calculated values (from the SDF) might not be large enough for the real
product to correctly perform. Widening the pulses by 20x or 30x from the original values
helps ensure that the latches have enough time to release or capture the values. The
basic concept is to increase all the times associated with the timeplates to a big value.
Viewing the modified times associated with the five elements that control the clocks at the tester, we would see:
Time of Release Pulse = T1
Width of Release Pulse = W1*30
Separation Time = P1*200
Time of Capture Pulse = T2 = T1+W1*30+P1*200
Width of Capture Pulse = W2*30
These values can be changed at the tester using software provided by the tester company or can be modified by changing the timeplates inside the WGL files.
Applying this new slow Test Pattern at the tester verifies that the patterns work. Once the
patterns are successful at slow speeds, the timings can be moved closer to the originally
computed values. Reduce pulse widths to their point of failure before changing the separation
time. When the pulse widths are satisfactory, the separation between the clock pulses can be
lowered. Determine which latch is failing by lowering the values until some failures start to
occur.
After identifying latches that miscompare at faster tester cycles but pass at slower tester cycles, consider the following:
Are the timings associated with the logic feeding into the latch correct in the SDF? Should
this latch be ignored because it will never make its timings? Do all chips fail at the same
latch?
If the identified latch should be ignored or will not make the timings in the SDF, there are
two options:
a. Change the SDF so that it contains the correct values. Rerun Build Delay Model to
recreate or re-time the pattern set.
b. Use an ignore latch file to instruct Encounter Test to ignore this latch during
simulation. If the delay information appears correct, consider analyzing the failure
information using the Encounter Diagnostics tools to ascertain whether Diagnostics
can determine the point of failure. Refer to “Encounter Diagnostics” in the
Diagnostics User Guide for additional information.
Delay Test can also be used to print out the longest paths it finds for a certain latch.
The primary goal at the tester is to achieve passing timed test patterns. If the calculated
timings from Encounter Test do not work at the tester, verify the patterns work at a slow speed
then increase speed to a point of failure.
SDQL: An Overview
To compute SDQL, a delay defect distribution function F(s) is used to determine how likely
any one defect is to occur as a function of its size (as shown in Figure 4-29 on page 156).
Typically this is provided by the manufacturing facility. Encounter Test uses a default delay
defect distribution function if one is not provided.
For any given fault, defects smaller than a certain size cannot be observed due to slack in the
fault's longest path. This is often referred to as Tmgn or the timing redundant area.
When a fault is detected, the longest sensitized path (LSP) is determined. LSP is the longest
path along which the fault can be observed by the test patterns. The detection time (or Tdet)
is computed as the difference between the test clock period (Ttc) and the LSP. Defects larger
than this are detected by the test.
The area under the curve that remains undetected by the test reflects the probability that a
defect will occur for the given fault. This is referred to as the SDQL (or untested SDQL) for a
fault.
The SDQL for the design is computed by accumulating the SDQL for all individual faults.
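Expressed as a formula (an interpretation of the description above, with F(s) denoting the defect probability distribution), the untested SDQL contribution of one fault is the area under F(s) between that fault's Tmgn and Tdet:
untested SDQL (per fault) = integral of F(s) ds, from s = Tmgn to s = Tdet
The tested SDQL for the fault corresponds to the area from Tdet up to the maximum integration time, and the chip-level values sum these quantities over all faults.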
SDQL Effectiveness
SDQL effectiveness is calculated as:
Tested SDQL
------------------------------ x 100%
Untested SDQL + Tested SDQL
Similar to test coverage, this measurement is independent of the number of faults (or in this
case ppm) in the design.
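As an illustration with made-up round numbers: if the tested SDQL is 0.09 ppm and the untested SDQL is 0.01 ppm, the SDQL effectiveness is 0.09 / (0.01 + 0.09) x 100% = 90%.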
As small delay testing tends to run longer and generates increased pattern count as
compared to traditional delay testing, a keyword (tgsdql=n.nnn) is provided to select only
the most critical faults for small delay test generation. Any fault with a potential SDQL greater
than the tgsdql value is selected for small delay test generation. All other faults use
traditional delay test generation.
During ATPG, many patterns may detect a single transition fault. Typically, a transition fault is
marked tested as soon as a test pattern detects the fault. To identify the longest path that
detects a fault, full small delay simulation requires that a transition fault be simulated for every
test pattern, which is a time consuming process. To reduce the run time, a keyword
(simsdql=m.mmm) is provided to allow a fault to be marked tested as soon as it is detected
along a path long enough so that the untested SDQL is less than m.mmm. After the command
is executed, any detected faults not meeting the threshold are marked tested.
Prerequisite Tasks
The following prerequisite tasks are required for small delay simulation:
1. Small delay simulation uses delay data provided in the form of an SDF. Therefore, run
the read_sdf command before performing small delay simulation. Refer to “Performing
Build Delay Model (Read SDF)” on page 80 for more information.
2. Since delay data is provided only to the technology cell boundaries, SDQL for faults
internal to a technology cell cannot be computed accurately. Therefore, you should
perform small delay simulation using a cell fault model. If industrycompatible=yes
is specified for the build_model command, a cell fault model will be created by default.
Otherwise, create a cell fault model by using the build_faultmodel command with
cellfaults=yes. Refer to the Modeling User Guide for more information.
3. Create test patterns using create_logic_delay_tests or
prepare_timed_sequences with release-capture timing specified using a
clockconstraints file. For patterns without release-capture timing, a clockconstraints file
may be specified when performing small delay simulation (analyze_vectors). Refer to
“Create Logic Delay Tests” on page 114 for more information.
4. To determine the appropriate values for tgsdql and simsdql keywords, run
report_sdql to report a histogram of the potential SDQL without specifying the
experiment keyword. Specify the clockconstraints keyword instead. The
report_sdql command reports a histogram showing the distribution of the total
potential SDQL among the transition faults. Recommendations for tgsdql and simsdql
keywords are also reported, or you may select your own value.
■ tgsdql=<n.nnn> - faults with potential SDQL greater than this are chosen for small
delay ATPG
■ simsdql=<m.mmm> - faults with an untested SDQL less than this are marked tested
In addition to simsdql, the following threshold values are available to determine when a
transition fault should be marked tested.
■ ndetect=<nn> - mark a fault as tested after it has been detected the specified number
of times (default is 10000)
■ percentpath=<nn> - mark a fault as tested if it is detected along a path that is the
specified percentage of its longest possible path (default is 100)
Note:
■ When simsdql, ndetect, or percentpath is specified, faults are no longer simulated
once they meet the detection threshold. As a result, the untested SDQL reported may be
larger than the true untested SDQL for the pattern set.
■ As a fault may never reach one of the specified thresholds, it may not be marked tested
until the end of the test generation command when all faults that were detected but did
not meet a threshold are marked tested. To prevent faults that did not meet a threshold
from being marked tested, specify marksdfaultstested=no.
In this example, faults with a potential SDQL greater than the tgsdql value are processed
using small delay ATPG. Other faults are processed with traditional ATPG. Faults are not
marked tested until they are detected along a path with an untested SDQL less than the
simsdql value.
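A sketch of this flow, assuming the tgsdql and simsdql keywords are given to create_logic_delay_tests and using placeholder names and threshold values:
# Report the potential SDQL histogram to choose thresholds
report_sdql workdir=<workdir> testmode=<testmode> clockconstraints=<clockconstraints_file>
# Generate tests, using small delay ATPG only for the most critical faults
create_logic_delay_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> tgsdql=<n.nnn> simsdql=<m.mmm>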
Figure 4-32 Flow for Small Delay Simulation of Traditional ATPG Patterns
In this example, patterns are created using traditional ATPG. Small delay simulation is then
performed for all faults during analyze_vectors. As simsdql is not specified, faults are
not marked tested unless they are detected along their longest possible path. Any faults that
were detected along a shorter path are marked tested after the simulation is complete.
Setting simsdql to 0 (or not specifying simsdql) provides the most accurate SDQL report,
but analyze_vectors may run longer. report_sdql is then run to report the
effectiveness of the resimulated pattern set.
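A sketch of this resimulation flow, assuming analyze_vectors accepts the usual workdir, testmode, and inexperiment keywords along with the small delay keywords described above; all values are placeholders:
# Resimulate existing patterns with small delay simulation (simsdql left unset for accuracy)
analyze_vectors workdir=<workdir> testmode=<testmode> inexperiment=<experiment> clockconstraints=<clockconstraints_file>
# Report the effectiveness of the resimulated pattern set
report_sdql workdir=<workdir> testmode=<testmode> experiment=<experiment>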
In this example, small delay simulation is performed during traditional ATPG eliminating the
analyze_vectors step. Traditional ATPG is used for all faults because the tgsdql
keyword has not been specified. If prepare_timed_sequences is not run, a
clockconstraints file is required.
Note: When small delay simulation is being performed during traditional ATPG, fewer faults
will be marked tested during simulation due to the requirement to meet a simsdql or other
threshold. This will increase the run time for ATPG. Any faults detected but not meeting the
simulation threshold are marked tested after the test generation is complete.
In this example, instead of the default probability function, a user-specified probability function
is provided. Refer to report_sdql in the Command Line Reference for more information
on the probfile keyword.
Use the report_sdql command to create reports about the SDQL for a pattern set. Refer
to report_sdql in the Command Line Reference for more information on the command.
The first section lists the clock domains being processed and the number of faults associated
with them. Faults on PIs, POs, RAMs, ROMs, latches and flops are not included in this count.
Faults that exist in multiple domains are included in the counts for each domain. The
maximum time for integration and area under the defect probability distribution are also listed.
The next section lists the SDQL by domain for the faults that were tested during small delay
simulation (as well as those that would have been tested if they had met the detection
threshold). For faults that are included in multiple domains, their SDQL is included in each
domain, but only once in the Chip SDQL.
INFO (TDL-052): SDQL By Clock Domain - Tested Faults
--------------------------------------------------------------------------------
Untestable Untested Tested Total Test
Domain # (ppm) (ppm) (ppm) (ppm) Effectiveness
--------------------------------------------------------------------------------
1 0.0105 0.0066 0.1065 0.1236 94.66%
Chip 0.0105 0.0066 0.1065 0.1236 94.66%
--------------------------------------------------------------------------------
Note that there is a column for Untestable SDQL. During small delay ATPG, some faults may
be determined to be Untestable along their longest possible paths (LPPs). Rather than
reducing the LPPs, an Untestable SDQL is computed. This allows a better comparison to
patterns created using traditional ATPG since the total SDQL should remain the same.
The Tested Fault SDQL section of the report is followed by a similar section that includes
untested faults as well as tested faults.
INFO (TDL-052): SDQL By Clock Domain - All Faults
--------------------------------------------------------------------------------
Untestable Untested Tested Total Test
Domain # (ppm) (ppm) (ppm) (ppm) Effectiveness
--------------------------------------------------------------------------------
1 0.0105 0.0125 0.1065 0.1294 90.35%
Chip 0.0105 0.0125 0.1065 0.1294 90.35%
--------------------------------------------------------------------------------
Dropped Faults:
  TMGN <= 0      : 0
  Redundant      : 0
  Untestable     : 1371
  Clock Line     : 416
  Tested Untimed : 725
  Total          : 2512
[end TDL_052]
A table listing untested faults that were not included in the SDQL calculations is also included
to help explain any differences in total SDQL from one experiment to the next.
This report lists those faults whose untested SDQL (Defect Probability) remains above 0.002.
The report shows how the potential SDQL (all of the area under the curve except for the timing
redundant area) is distributed among the faults. This is useful when determining the simsdql
threshold to use when running small delay simulation.
This report shows that 68 faults have a potential SDQL of 0.000420 or greater and comprise
21.40% of the design's total potential SDQL. These are the most critical faults to detect along
long paths. In fact, only 457 faults account for 70.6% of the design's total SDQL.
The remaining 5859 faults have a potential SDQL less than 0.000044 and comprise only
29.4% of the total SDQL. Setting simsdql=0.00044 during small delay simulation will get
these faults marked off on the first detect, saving simulation run time.
The default defect probability distribution function may be overridden during report_sdql.
Ideally this function would be provided by the manufacturing facility. The function must be
defined in a perl script and stored in a file with the name SDQL.pl. It may be located in any
directory. The sample function below is provided in the $Install_Dir/etc/contrib directory:
sub probfunc {
    my $s = $_[0];    # s is the input value.
    my $Fs;           # Fs is the output value.
    # Compute the probability distribution function
    $Fs = 1.58e-3 * exp(-(2.1*($s))) + 4.94e-6;
    return $Fs;
}
The report_sdql command will create a file recording the defect size versus the
percentage of defects of that size (tested, untested and total). This file is used as input to the
graph_sdql command which produces a graphical representation of how defects are
distributed among various defect sizes.
Graphing SDQL
Use the graph_sdql command to create a graphical representation showing how potential
defects are distributed across the various defect sizes. A plot of the total defects, tested
defects and untested defects is drawn. One graph is displayed for each clock domain.
Jpeg files are also created for each of the graphs. They are stored in the directory specified
in the jpegdir keyword (default: $WORKDIR/testresults). If not specified, jpegdir defaults
to the current directory.
The graph_sdql command takes the plotfile keyword as input, which is the name of a
fully qualified input file containing the numeric data used to plot the test effectiveness graphs.
The plotfile is produced as an output of the report_sdql command.
Note: Specifying the plotfile keyword for report_sdql is a prerequisite for running the
graph_sdql command.
The graphs are stored in jpeg files named sdql_domain_n.jpeg, where n identifies the
clock domain number listed in the report_sdql output.
Refer to graph_sdql in the Command Line Reference for more information on the
command.
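A sketch of the two-step sequence, with placeholder file and directory names, and assuming both commands accept the usual workdir and testmode keywords:
# Produce the plot data while reporting SDQL
report_sdql workdir=<workdir> testmode=<testmode> experiment=<experiment> plotfile=<plotfile>
# Graph the plot data and write one jpeg per clock domain
graph_sdql workdir=<workdir> plotfile=<plotfile> jpegdir=<jpegdir>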
Committing Tests
By default, Encounter Test makes all test generation runs as uncommitted tests in a test
mode. Commit Tests moves the uncommitted test results into the committed vectors test data
for a test mode. Refer to “Performing on Uncommitted Tests and Committing Test Data” on
page 234 for more information on the test generation processing methodology.
Encounter Test can write the test data in the following formats:
■ Standard Test Interface Language (STIL) - an ASCII format standardized by the IEEE.
■ Waveform Generation Language (WGL) - an ASCII format from Fluence Technology, Inc.
■ Verilog - an ASCII format from Cadence Design Systems, Inc.
Refer to “Writing and Reporting Test Data” on page 169 for more information.
5
Writing and Reporting Test Data
This chapter provides information on exporting test data and sequences from Encounter Test.
Refer to Test Pattern Data Reference for detailed descriptions of these formats.
To write vectors using the graphical interface, refer to “Write Vectors” in the Graphical User
Interface Reference.
To write vectors using commands, refer to “write_vectors” in the Command Line Reference.
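For example, to export STIL patterns from an experiment, a sketch of the invocation (with placeholder values) is:
write_vectors workdir=<workdir> testmode=<testmode> inexperiment=<inexperiment> language=stil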
where:
■ workdir = name of the working directory
■ testmode= name of the testmode of experiment to export pattern
■ inexperiment= name of the experiment from which to export data
■ language=stil|verilog|wgl|tdl = Type of patterns to export
The following table lists the commonly used options for write_vectors for all languages:
When writing TDL output, you need to specify the configfile keyword to define the TDL
design configuration information. Refer to the subsequent TDL section for more information.
To limit the number of test vector output files, specify combinesections=yes, which
combines multiple test sections based on test types and creates a maximum of four files, one
each for storing static scan tests, static logic tests, dynamic scan tests, and dynamic logic
tests.
If Set_Scan_Data and Measure_Scan_Data events do not exist in all test modes, the
default timing is the following:
■ scanpioffset=scanbidioffset=scanstrobeoffset=0
■ initpulsewidth
■ initstrobeoffset
■ initstrobetype
■ inittimeunits
write_vectors uses the modeinit timings only if they are different from the test sequence
timings. In case of different modeinit timings:
■ The modeinit timings replace the test timings for all the processed modeinit sequences.
■ The modeinit timings are used for all events encountered within the init Test_Sequence
except for Scan_Load events. write_vectors uses scan timings to process any
Scan_Load event it encounters.
■ Using the modeinit timings, write_vectors writes out accumulated values
immediately after processing the modeinit sequence, prior to processing the first test. In
other words, write_vectors does not compress patterns when transitioning between
modeinit timings and test timings.
■ write_vectors prints the modeinit timings in the header area for each output vector
file.
Define the order of pins in the file by including the pin names separated by one or more blank
spaces. Block and line comments are supported in the file.
Tip
If the pin order specified for an invocation of write_vectors differs from the
previously used order, you must recreate the test vectors for the previously exported
Test_Sections.
Specifying the keywords as mentioned above results in the following dynamic timeplate
configuration for write_vectors:
■ The dynamic clock events use the timeplates in the order of Release and Capture. For
example, the first dynamic clock event goes in the Release timeplate, the second
dynamic clock event goes in the Capture timeplate, the third event in the Release
timeplate, and so on.
■ The dynamic Stim_PI events are included in the timeplate associated with the next
dynamic clock event.
■ All Measure PO events are in the Capture timeplate.
■ Cycle Offset - This is zero at the start of the scan event; at the end of the scan event, the value is scan length - 1. For all measure_PO events, this value is -1.
■ Scan Length - The number of clock shifts during a scan operation.
STIL Restrictions
The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the STIL file, for example, logic.
The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.
Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.
The optional signals file contains declarations common to the STIL vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.
The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.stil, outputfilename_value.2.stil, and so on.
The signals file is named outputfilename_value.signal.stil.
Use the latchnametype keyword to specify whether to report the output vectors at the
primitive level or at the cell level.
Note: If a Verilog model does not have the cells and macros correctly marked and you specify
latchnametype=cell, then the instance names might not match and be invalid. If the model
does not have a cell defined, then write_vectors uses primitive level for the instance
name.
WGL Restrictions
The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the WGL file, for example, logic.
The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.
Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.
The optional signals file contains declarations common to the WGL vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.
The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named outputfilename_value.1.wgl, outputfilename_value.2.wgl, and so on. The signals file is named outputfilename_value.signal.wgl.
TDL Restrictions
The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.tdl, outputfilename_value.2.tdl, and so on.
The signals file is named outputfilename_value.signal.tdl.
Writing Verilog
Write Vectors accepts either a committed vectors or an uncommitted vectors file for a
specified test mode and translates its contents into files that represent the TBDbin test data
in Verilog format.
■ To write Verilog vectors via the GUI, select Verilog as the Language option on the Write
Vectors form
■ To write Verilog vectors using commands, specify language=verilog for
write_vectors.
Parallel Verilog patterns always measure the nets at time frame 0, as opposed to serial patterns, which measure the nets at other possible times. When overriding the Test or Scan Strobe Offsets, the measures must occur before any stimulus to get the correct parallel simulation results.
Miscompares in parallel mode are reported on internal nets at the beginning of the scan cycle and are not flagged on a specific scan out pin.
Verilog Restrictions
The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the Verilog file, for example, logic.
The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.
Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.
The optional signals file contains declarations common to the Verilog vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.
The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.verilog, outputfilename_value.2.verilog,
and so on. The mainsim file is named outputfilename_value.mainsim.v.
NC-Sim Considerations
When using Write Vectors for subsequent simulation by NC-Sim, the following keywords may
be specified for NC-Sim:
The TB_VERILOG_SCAN_PIN model attribute is used to control the selection of scan, stim,
and measures points in exported Verilog test vectors when specifying the write_vectors
scanformat=parallel option, or via the graphical user interface, selecting a Scan
Format of parallel.
The TB_VERILOG_SCAN_PIN attribute may be placed on any hierarchical pin on any cell.
However, the pin must be on the scan path (or intended to be on the scan path if used within
a cell definition).
When encountered on input pins, the parent net associated with the attributed pin is selected
as a stimuli net. When encountered on output pins, the associated parent net is selected as
a measure net. This attribute may be selectively used, i.e., default net selection takes place
if no attribute is encountered for a specific bit position.
The preceding report is produced for STIL, Verilog, and WGL vectors.
Figure 5-1 on page 187 shows a graphical representation of the scan cycles for the reported
vectors:
Figure 5-1 Scan Cycle Overlap in Test Sequence Coverage Summary Report
■ Sequence Cycle Count - shows the clock cycles taken by the test sequence
■ Overlapped Cycle Count - shows the cycles that are overlapping with the next
reported vector. For example, for vector 1.1.1.2.1, out of 17 cycles, 7 cycles overlap with
the next vector 1.1.1.2.2.
While calculating the Sequence Cycle Count for a vector, the cycles overlapping with
the preceding vector are ignored.
For example, as shown in the figure, out of the 17 cycles taken by vector 1.1.1.2.2, 7
cycles overlap with the preceding vector 1.1.1.2.1. Therefore, though vector 1.1.1.2.2
takes total of 17 cycles, the 7 cycles that overlap with the preceding vector 1.1.1.2.1 are
ignored and not reported in the Sequence Cycle Count column for the vector.
■ Total Cycle Count - shows the cumulative sequence cycle count of the current and
all preceding vectors. For example, the total cycle count reported for vector 1.1.1.2.2 is
the combination of the cycle count for this vector (10) and all the vectors reported before
it, i.e., 1.1.1.2.1 (17) and 1.1.1.1.1 (1). Therefore, the reported cycle count for the vector
is 28.
There is a single vector correspondence file per test mode. The intent is that there be only
one set of vector correspondence data ever used for a given mode. Making changes to the
vector correspondence file should be undertaken only with utmost caution. If you want to
import a TBDpatt file that was produced by Encounter Test, the vector correspondence must
not be altered in the interim.
You can change the ordering within vectors by changing the positions of pins or latches in the
vector correspondence file. This may be useful if you are going to use TBDpatt as an
intermediate interface to some other format which requires that the vectors be defined in a
specific way. In such a case, you might be able to modify TBDvect in such a way as to avoid
re-mapping the vectors in another conversion program.
events, and weight events. For example, the fifth position in a Stim_PI vector is the value to
be placed on the fifth input in the primary input correspondence list.
A scan design in Encounter Test, even if composed of flip-flops, is always modeled using level-
sensitive latches. Each latch that is controlled by the last clock pulse in the scan operation
must necessarily end up in the same state as the latch that precedes it in the register. In rare
circumstances, the preceding latch may also be clocked by this same clock, and then both
latches have the same state as the next preceding latch (and so on). Some latches may be
connected in parallel, being controlled by the same scan clock and preceded by a common
latch. These latches also must end up at a common state. Encounter Test identifies all the
latches that have common values as the result of the scan operation, and each such set of
latches is said to be “correlated”. TBDpatt does not explicitly specify the value for every latch,
but only for one latch of each group of correlated latches. This latch in each group is called a
“representative stim latch” or “representative measure latch”, depending upon whether the
context is a load operation or an unload operation. The Stim Latch Correspondence List is a
list of all the latches that can be independently loaded by a scan or load operation. This
consists mainly of the representative stim latches, but contains also “skewed stim latches”
which require special treatment for the Skewed_Scan_Load event described in the Test
Pattern Data Reference. The Measure Latch Correspondence List is a list of the
representative measure latches.
In the case of PRPG and MISR latches, used for built-in self test, similar correlations may
exist; the representative latches for these are called “representative cell latches”. The PRPG
and MISR correspondence lists include the representative cell latches for the PRPGs and
MISRs respectively.
See TBDpatt and TBDseqPatt Format in the Test Pattern Data Reference for information
on vector correspondence language definition.
To perform Create Vector Correspondence using the graphical interface, refer to “Write Vector
Correspondence” in the Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
To verify cell models created to run with Encounter Test, you can simulate vectors created by
our system with other transistor level simulators. To do this, it is necessary to expand the scan
chain load and unload data into individual PI stims, clock pulses, and PO measures. Select
the Expand scan operations option on the Report Vectors Advanced form or specify the
keyword pair expandscan=yes on the command line to expand any scan operations found
in the test data.
To report Encounter Test vectors using the graphical interface, refer to “Report Vectors” in the
Graphical User Interface Reference.
To report Encounter Test vectors using command line, refer to “report_vectors” in the
Command Line Reference.
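A sketch of a typical invocation, with placeholder values; add expandscan=yes to expand the scan operations as described above:
report_vectors workdir=<workdir> testmode=<testmode> experiment=<experiment>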
where:
■ workdir = name of the working directory
■ testmode= name of the testmode of experiment to export pattern
■ experiment= name of the experiment from which to export data
Refer to “report_vectors” in the Command Line Reference for more information on these
parameters.
report_vectors requires a valid model and experiment or committed test data as input.
To report sequence definition data using the graphical interface, refer to “Report Sequences”
in the Graphical User Interface Reference.
To report sequence definition data using command line, refer to “report_sequences” in the
Command Line Reference.
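A sketch of a typical invocation, with placeholder values:
report_sequences workdir=<workdir> testmode=<testmode>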
where:
■ workdir = name of the working directory
■ testmode= name of the testmode to report the sequences on
To convert vectors for core tests using the graphical interface, refer to “Convert Vectors for
Core Tests” in the Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode of experiment to export pattern
■ inexperiment= name of the experiment from which to export data
■ exportfile= file name of exported data
Input Files for Converting Test Data for Core Tests (TDM)
Converting vectors for core tests requires a valid model and experiment data.
Output Files for Converting Test Data for Core Tests (TDM)
where:
■ workdir = name of the working directory
■ inputfile= file name of exported data
6
Customizing Inputs for ATPG
Various tasks can be performed in preparation for Encounter Test test generation
applications. This section describes concepts and techniques related to preparing special
input to test applications. Refer to the following:
■ “Linehold File”
■ “Coding Test Sequences” on page 206
■ “Ignoremeasures File” on page 230
■ “Keepmeasures file” on page 230
Linehold File
A linehold file provides a means whereby you can define statements that identify a set of
nodes to be treated specially by Encounter Test applications that support lineholding. You
may give this file any name permitted by the operating system. Although special
treatment of lineholds is application-specific, there is a set of global semantic rules common
to all Encounter Test applications which govern the specification of linehold information.
Anyone generating a linehold file must know the rules and syntax. When applying lineholds
to a flop or a net sourced from a flop, the linehold value is not guaranteed to persist through
the duration of the test, but will be set at the beginning of the test.
The linehold file also supports changing the frequency of oscillators that are connected to
OSC pins and were started by the Start_Osc event in the mode initialization sequence.
Use the following linehold statement types to specify this test generation behavior:
■ An OSCILLATE linehold overrides the frequency of a previously defined oscillating signal
or disables the oscillator. OSCILLATE statements do not have a direct effect on the test
generation or simulation step. However, the information is incorporated into the
experiment output and used by processes such as write_vectors.
■ A HOLD linehold is a hard linehold. Test patterns can never override this value. The HOLD
value is always in effect.
■ A DEFAULT linehold is a soft linehold. Test patterns can override this value if required to
generate a test for a fault. Otherwise, the DEFAULT value is in effect.
❑ A DEFAULT linehold is automatically generated for any SCAN_ENABLE,
CLOCK_ISOLATION, or OUTPUT_INHIBIT test function pin.
Important
During scan, opposite values are returned on these nets, creating 0/1 hard
contention. The test generator can ensure that the scanned-in data will be 0s for
these flops.
Refer to “Linehold File Syntax” on page 199 and “General Semantic Rules” on page 203 for
additional information.
The following criteria are used to determine the final linehold set:
■ Test mode test function pin specifications. Refer to “Identifying Test Function Pins” in the
Modeling User Guide for related information.
■ An input linehold file. The values in this file override test function pin specifications.
■ User-defined test sequences. These override the above two criteria. Refer to “Linehold
Object - Defining a Test Sequence” on page 204.
Boolean constraints can also be described in linehold files. Refer to “Boolean Constraints” on
page 104 for details.
Clock pins may not be held away from stability. This is the case whether a clock pin is held
directly, or the justification of an internal node would require a clock pin held ON. Any test
function pins lineheld to the stability value may be taken out of stability for the scanning
operation or to capture the effect of a fault in a latch.
If a linehold is specified on any pin or latch in a correlated set, then all the pins or latches in
that set will be lineheld accordingly. It is recommended that you specify linehold values only
for the representative pins or representative stim latches.
Whenever a cut point is lineheld (HOLD or DEFAULT), this applies to the associated Pseudo
Primary Input (PPI), and thus to all the cut points associated with the PPI at their respective
values. See “Linehold File Syntax” on page 199 for additional information.
where:
■ entity is the pin name optionally preceded by the keyword PIN.
■ up is a decimal number specifying the duration in nanoseconds for which the signal is 1
on each oscillator cycle. There is no default value for this parameter and the specified
value should be greater than 0.
■ down is a decimal number specifying the duration in nanoseconds for which the signal
is 0 on each oscillator cycle. There is no default value for this parameter and the specified
value should be greater than 0.
■ pulsespercycle is an integer specifying the number of oscillator cycles to be applied
in each tester cycle. There is no default value for this parameter and the specified value
should be greater than 0.
Note: To turn off an oscillator for the experiment, specify the STOPOSC statement, as shown
below:
STOPOSC entity;
Where entity is the pin name optionally preceded by the keyword PIN.
where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ NET hier_net_name
❑ BLOCK usage_block_name
❑ PPI name
■ value is one of the following:
❑ 0 - logic zero
❑ 1 - logic one
❑ Z - high impedance
❑ X - logic X (unknown)
❑ W - weak logic X
❑ L - weak logic zero
❑ H - weak logic one
■ restriction is either:
❑ all (default) - in effect for the entire sequence
❑ dynamic - in effect only for the dynamic section of the sequence
■ action is either:
❑ ignore (default) - if a test pattern violates a given constraint, then assign an unknown
(X) value to the endpoint of the constraint. This improves the runtime performance
as the test generation engine needs to protect only those constraints that would
otherwise block (with an X) one of the faults it is targeting in the test pattern.
❑ remove - consider a test pattern as invalid if a given constraint is violated. The
simulator removes all such invalid patterns from the pattern set.
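Pulling the HOLD, DEFAULT, and OSCILLATE pieces together, a linehold file might look roughly
like the sketch below. The pin names are hypothetical, and the ordering of keyword, entity, and
value within each statement is an assumption made for illustration only; see “Linehold File
Syntax” on page 199 for the authoritative statement forms.

// hard linehold: test patterns can never override this value (hypothetical pin)
HOLD PIN Pin.f.l.topcell.nl.test_inhibit 0;
// soft linehold: ATPG may override this value when needed to detect a fault
DEFAULT PIN Pin.f.l.topcell.nl.scan_select 1;
/* override a previously started oscillator; the parameter order shown here
   (up, down, pulsespercycle) is assumed from the list above */
OSCILLATE PIN Pin.f.l.topcell.nl.osc_in 5 5 1;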
RELEASE entity;
where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ PPI name
where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ NET hier_net_name
❑ BLOCK usage_block_name
❑ PPI name
■ Multiple statements per line are permitted (each ending with a semi-colon).
■ Only a single entity is permitted per statement.
■ Only a single value is permitted per HOLD or DEFAULT statement.
■ Keywords and entity identifiers must be immediately followed by at least one space.
■ There is no case sensitivity except for entity names.
■ Comments delimited by '//' or '/* */' may be used freely.
❑ floating latch
❑ latch which is only randomly controllable
The entity must be a pin that has the OSC test function and is referenced by the
Start_Osc event in the mode initialization sequence.
Block.f.l.mle1.nl.mle1.slave = 0
Block.f.l.mle1.nl.mle2.slave = 0
Block.f.l.mle1.nl.mle3.slave = 0
Block.f.l.mle1.nl.mle4.slave = 0
Block.f.l.mle1.nl.mle5.slave = 0
Block.f.l.mle1.nl.mle6.slave = 0
Block.f.l.mle1.nl.mle7.slave = 0
Block.f.l.mle1.nl.mle8.slave = 0
;
] Lineholds;
[ Pattern 1 ( pattern_type = static );
. . .
] Pattern 1;
] Define_Sequence mle1;
The following rules apply to the specification of linehold objects in user-defined test
sequences. Lineholds from user-defined test sequences override those from all other
sources.
] Keyed_Data;
[ Pattern 1.1 (pattern_type = static);
Event 1.1.1 Stim_PI ():
"Pin.f.l.tstatesq.nl.D"=0;
] Pattern 1.1 ;
] Define_Sequence ;
For WRPT and LBIST use, this is the exact sequence of stimuli that eventually get put in the
TBDbin (Vectors) file output from the test generation applications, and finally get applied at
the tester. In a similar fashion, test sequences for stored-pattern tests can also be user-
defined. User specification of sequences for stored-pattern test is required for on-product
generation or BIST controller support (referred to generically as OPC), but there may be
instances other than OPC where it is desired to constrain the test sequences for stored
patterns.
For stored patterns, the user-defined test sequence is augmented by the automatically
generated input (and scannable latch) vectors for each test. See section “Stored Pattern
Processing of User Sequences” on page 209 for a description of this process.
Common to all user-defined test sequence scenarios are the basic coding process and the
process of importing the sequence definitions. On the other hand, the detailed contents of the
sequence definition depend on its intended use (stored patterns vs. WRPT or LBIST). There
are three basic steps in creating a sequence definition:
1. Know the names of the primary inputs, pseudo PIs, and latches or other internal nets that
will be referenced in the test sequence.
2. Use the text editor of your choice to code the test sequence in TBDpatt format.
3. Import the test sequence to TBDseq.
The next subsection tells you how to get started. This is followed by an explanation of
sequence coding for stored patterns, then an explanation of sequence coding for WRPT or
LBIST, and then a brief explanation of the import process.
The sequences are coded in an ASCII file, which is created using the text editor of your
choice. Of course, if you can find an existing test sequence, it is usually quicker to copy and
edit it instead of entering one from scratch. The name of the file is arbitrary, as long as it does
not conflict with the name of some other file in the directory where it is located. Test Pattern
Data Reference contains detailed explanations of the TBDpatt format.
A TBDpatt file always starts with a TBDpatt_Format statement, which gives information
about the formatting options that are used. There are two TBDpatt format modes, denoted
by mode=node or mode=vector. In writing sequence definitions, you usually have no
need to specify values on many inputs or latches all at once, so all examples in this section
use mode=node. Mode=node is a list format that specifies only the pins that are to be set,
and the value of each.
TBDpatt statements that refer to specific pins, blocks, or nets can use either their names or
their indexes to identify them. The second TBDpatt_Format parameter is
model_entity_form, which specifies either name, index, or flat_index.
Comments can be placed in the TBDpatt file to record any explanatory information desired.
Comments can be placed in any white space in the file, but whatever follows the comment
must begin on a new line. Comments are not copied into the TBDseq and TBDbin (Vector)
files. Refer to “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference for
additional information.
The events that go inside each pattern depend upon the type of test the sequence definition
will be used for.
TBDpatt_Format (mode=node,model_entity_form=name);
[ Define_Sequence spseq1 (test);
[ Pattern 1;
Event 1.1.1 Scan_Load (): ;
] Pattern 1;
[ Pattern 2;
Event 1.2.1 Put_Stim_PI (): ;
Event 1.2.2 Measure_PO (): ;
] Pattern 2;
[ Pattern 3;
Event 1.3.1 Pulse ():
"Pin.f.l.samplckt.nl.C1"=-;
] Pattern 3;
[ Pattern 4;
Event 1.4.1 Scan_Unload (): ;
] Pattern 4;
] Define_Sequence ;
In the example of Figure 6-1, “spseq1” is an arbitrary sequence name assigned by the user.
“(test)” denotes this as a test sequence, as opposed to a modeinit sequence or a scan
operation sequence. The numbers following the keywords “Pattern” and “Event” are arbitrary
identification numbers; they are shown in the example exactly as Encounter Test would
construct them when exporting the sequence definition to an ASCII file.
“Pin.f.l.samplckt.nl.C1” is the fully-qualified name by which Encounter Test knows
the C1 clock primary input pin. The short form of the name, just “C1”, could be used in place
of the longer name shown. The “-” following the clock name signifies a negative-going pulse
(the quiescent state of this clock is a 1).
“Scan_Load” and “Scan_Unload” are the names of events which represent the operations
of loading and unloading the scan chains, respectively. Note that sequences containing these
particular events cannot be imported during test mode creation. If the sequence includes these
events, create the test mode first and then import the sequence definition.
No latch state vectors are specified, because as test generation is performed using this
sequence definition, Encounter Test will fill in these values. “Put_Stim_PI” is a placeholder
for another event, called “Stim_PI”. Stim_PIs are the events that hold the primary input
state vectors for the test. The Put_Stim_PI event was devised specifically for use in a test
sequence definition, and is not used any other place. In the context of a test sequence
definition, Put_Stim_PI is a placeholder for the generated primary input vector, while
Stim_PI is used only for manipulating control primary inputs and cannot be modified by the
test pattern generator. (In this simple example, no such control signals are used, therefore no
Stim_PI events are shown.) When the test vector is inserted into the sequence, the
Put_Stim_PI event is changed to a Stim_PI event. Note that there is no such distinction
made for the latch vectors because by its nature a Scan_Load event is not suitable for
applying control signals to a design, so its appearance in a sequence definition always
denotes where the latch test vector is to be placed.
For both the Put_Stim_PI and the Scan_Load events, PI or latch values may be specified.
When any PI or latch value is thus specified, it takes precedence over any value specified on
that PI or latch by the test generator; the automatically generated vectors are used only to fill
in the unspecified positions of the vectors in the Put_Stim_PI and Scan_Load events.
Note: When writing sequences for an OPMISR test mode, each OPMISR test sequence
should begin with a scan_load event and end with a channel_scan event.
When writing sequences for a test mode which has scan control pipelines, each test
sequence must begin with a scan_load event and end with either a scan_unload or
channel_scan event.
The testsequence option specifies which testsequence(s) the test generator can use for
creating tests for faults and how the user-defined test sequence(s) are used by the ATPG
process.
The first step is to write the test sequence template(s) to be used by ATPG. The sequence
definition is then imported into the test mode. Note that when non-scan flush sequences (data
pipes) are applicable, then the non-scan flush patterns should be included in the user-defined
test sequence (these patterns should include the non-scan flush attribute i.e.
nonscanflush).
The testsequence keyword is used to specify a list of imported test sequences which are
available to ATPG for fault detection. Test generation progresses as usual with the tests being
created for faults independent of any user-defined test sequences. A best fit process is
applied to match the ATPG sequence to user-defined test sequence specified via
testsequence. The testsequence method provides special event types (for example,
Put_Stim_PI and keyed data) to allow variability in the test sequence definition and test
sequence matching. A description of the sequence matching process and guidelines is
described later in this chapter. Note that it is possible to have ATPG generate tests which are
discarded due to the ATPG test sequence not fitting with any specified user-defined
sequences.
Tip
Test Sequence processing can be invoked from the GUI as follows:
e. Enter a range of test sequences in the Sequence name field or enter a file that
contains a range of test sequences in the Read sequence from file field.
When user sequences are specified for stored pattern tests, Encounter Test takes the
automatically generated test vectors (the primary input and scannable latch values) and
places these vectors into the user-supplied sequence. The resulting test is then simulated
and written to the TBDbin output file for eventual use at the tester. This seemingly simple
process is made more complicated by the fact that sometimes the automatic test consists of
a sequence of vectors, as in the case of a partial scan design or a design containing
embedded RAM.
Each primary input vector and each latch vector must be inserted some place in the user-
defined sequence. This has greater significance in the specification of user test sequences
when the automatic test sequence contains more than one primary input vector or more than
one latch vector. Latch vectors are inserted in the user-defined sequence at each
Scan_Load (or Skewed_Scan_Load) event. If the automatic test sequence contains the
same number of these events as the user sequence, then the first latch vector found is copied
into the first Scan_Load (or Skewed_Scan_Load) event of the user sequence, the second
is copied into the second stim latch event of the user sequence, and so on. If the number of
stim latch events in the user sequence is more or less than the number of stim latch events
(latch vectors) in the automatic sequence, then Encounter Test cannot map this automatic
test into the user-defined sequence. Either some latch vectors from the automatic sequence
would have to be discarded, or some stim latch events from the user sequence would have
to be discarded. Instead of doing that, Encounter Test discards the entire automatic
sequence, and it is not used at all.
With primary input vectors, the mapping is from Stim_PI events in the automatic sequence
to Put_Stim_PI events in the user-defined sequence. Stim_PI events in the user-defined
sequence (along with all other events except Scan_Load, Skewed_Scan_Load, and
Put_Stim_PI) are kept exactly as specified by the user. Thus, the user sequence can stim
a control primary input at any appropriate point in the sequence without any consideration of
whether the test generator will be generating a corresponding primary input vector.
The same considerations explained above for latch vectors apply equally to primary input
vectors. For each primary input vector (Stim_PI event) of the automatic test sequence there
must be a corresponding Put_Stim_PI event in the user-defined sequence to map the
vector into. If the number of Stim_PI events in the automatic sequence does not exactly
match the number of Put_Stim_PI events in the user sequence, then the mapping fails and
the automatic test is discarded.
For most designs, the automatic test generator will produce tests which vary with respect to
the number of primary input and latch vectors contained therein. For a full-scan design having
no embedded RAM, most tests will contain a single latch vector and a single primary input
vector, but when non-scan sequential elements are present, the test generator will produce
some tests with multiple input vectors to be applied in sequence with other activity such as
clock pulses.
If your test protocol can support a variety of test templates, and especially if the test generator
will produce tests with variable numbers of latch or primary input vectors, it is important to
introduce some flexibility to accommodate this variety of test templates. This flexibility is
supported in two ways. One way is to define each different sequence, or template, as a unique
test sequence and provide a list of test sequences to the test generation run.
When multiple test sequences are specified for a stored-pattern test generation run, each
generated test will be mapped, if possible, into one of the specified sequences. Encounter
Test has a two-tiered algorithm for choosing which user sequence to map the test into.
The first step of the algorithm determines whether each user sequence has the requisite
numbers of latch stims and Put_Stim_PI events, in accordance with the criteria explained
in the preceding section, “Mapping Automatic Vectors into User Sequences” on page 210.
Only those sequences which “match” the automatic test with respect to the number of latch
stims and the number of primary input stims (Put_Stim_PI vs. Stim_PI) are considered in
the second step.
The second step of the matching algorithm selects from those user sequences which can be
mapped into, the one which most closely resembles the automatic test, considering all other
factors, such as which clocks are being pulsed, the order of the clock pulses, type of scan
(skewed load, skewed unload), number and placement of Measure_PO events, etc. In this
selection, a high weight is given to the comparison of the measure latch events between the
user sequence and the test sequence. If the test generator produced a
Skewed_Scan_Unload event for an LSSD design, it is reasonable to assume that it has
stored some meaningful test information in the L1 latches; if the user sequence has a
Scan_Unload event (no preceding B clock) then the scan out will begin with a shift A clock,
destroying the test information in the L1 latches. In this case, a user sequence with a
Skewed_Scan_Unload event would get a much better “score” for matching than a user
sequence with a Scan_Unload event.
Besides allowing multiple user sequences to be specified for a test generation run, Encounter
Test has additional flexibility in the sequence matching by means of “keyed data” statements
that can be placed in the sequence definition. See “Keyed Data” in the Test Pattern Data
Reference for additional information.
There are three unique types of sequence matching control statements, and one of these has
two variations.
STIM=DELETE
In the hypothetical example of Figure 6-2, there is a choice of two system clocks, C2 and C3.
The automatic test generator picks clock C2 but the user wants to use C3. The C2 clock signal
is gated by a control primary input. If we assume that the test generator is sophisticated
enough to figure this out and place the gating signal in a separate Stim_PI event prior to
pulsing the C2 clock, the automatic test could have two Stim_PI events. The sequence
matching and mapping algorithms count each Stim_PI event in the automatic test as a
separate vector, and hence the user sequence definition must have two Put_Stim_PI
events. But let us say the user wants to leave the C2 clock gating primary input at its original
state from the first vector. This can be accomplished as shown in the figure, using the second
Put_Stim_PI event to “match” the second Stim_PI event in the automatic sequence, and
the STIM=DELETE statement to throw away this “vector” after the sequence matching has
been done.
[ Test_Sequence ;
[ Pattern 1;
Event 1 Stim_PI (): 1011001010001;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Stim_PI (): .....1.......;
Event 2 Pulse (): "C2"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Test_Sequence ;
PI_STIMS/LATCH_STIMS=n
This is the second type of sequence matching control statement, and there are two
statements of this type:
PI_STIMS=n
LATCH_STIMS=n
These statements are interpreted and applied before sequence matching takes place. Their
primary purpose is to allow the same user test sequence definition to be “matched” with
different automatic tests that contain different numbers of primary input and/or latch vectors.
For example, an event to which a PI_STIMS=3 statement is attached is effectively removed
from the sequence definition if the automatic test sequence does not have at least three
Stim_PI events. Similarly, an event with an attached LATCH_STIMS=2 statement is effectively
removed from the sequence definition if the automatic sequence does not have at least two
Scan_Load and/or Skewed_Scan_Load events.
Again using a hypothetical example, Figure 6-3 shows a possible use for the PI_STIMS=n
statement. The user's sequence definition “matches” both automatic sequences 1 and 2, and
so can be used for both of these automatic tests. In the case of test sequence 1, the sequence
contains one Stim_PI event, so the second Put_Stim_PI event of the sequence definition
is removed for this usage as it specifies PI_STIMS=2. With the second Put_Stim_PI event
removed, the number of Put_Stim_PI events remaining (one) matches the number of
primary input vectors in the first test sequence.
In the case of test sequence 2, the sequence contains two Stim_PI events, so the
Put_Stim_PI event with the PI_STIMS=2 statement is used for the purpose of matching
with the automatic test sequence. The number of Put_Stim_PI events (two) matches the
number of primary input vectors in the second test sequence.
[ Test_Sequence 1 ;
[ Pattern 1;
Event 1 Stim_PI (): 1001001000011;
Event 2 Measure_PO (): ;
] Pattern 1;
] Test_Sequence 1 ;
[ Test_Sequence 2 ;
[ Pattern 1;
Event 1 Stim_PI (): 0100101011100;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Stim_PI (): .....1.......;
Event 2 Pulse (): "C2"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Test_Sequence 2 ;
In the example of Figure 6-3, the conditional Put_Stim_PI event is discarded after
sequence matching, but the conditional (i.e., the PI_STIMS=n statement) is useful also in
cases where the vector is to be kept, as illustrated in the next example.
The PI_STIMS=n and LATCH_STIMS=n statements can be attached to any event, and are
not limited to Stim_PI and stim latch events. In the example of Figure 6-4, an entire chunk
of events consisting of a stim, a measure and a pulse is made conditional based upon
whether the automatic test sequence contains one, two, or three Stim_PI events.
TG=keyed data
This is another statement to control which pattern will be ignored by the test generator, and
there are three statements of this type:
TG=IGNORE
TG=IGNORE_FIRST
TG=IGNORE_LAST
The statement TG=IGNORE is used for a single pattern, while TG=IGNORE_FIRST and
TG=IGNORE_LAST statements allow multiple patterns to be ignored. To ignore multiple
patterns, specify TG=IGNORE_FIRST for the first pattern to be ignored, add other patterns
after this pattern, and specify TG=IGNORE_LAST for the last pattern to be ignored. All the
patterns starting from the one with TG=IGNORE_FIRST through the one with
TG=IGNORE_LAST will be ignored by the test generator.
Using a hypothetical example, the following snippet shows the possible use of the
TG=IGNORE statement in a user sequence:
[ Define_Sequence Jones (test);
[ Pattern 1;
Event 1 Scan_Load :
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
] Pattern 2;
[ Pattern 3;
[Keyed_Data ;
TG=IGNORE
] Keyed_Data ;
Event 1 Pulse () : "TCK"=-;
] Pattern 3;
[ Pattern 4;
Event 1 Pulse () : "C3"=-;
Event 2 Measure_PO () : ;
] Pattern 4;
] Define_Sequence Jones ;
The test generator ignores Pattern 3 because it has the TG=IGNORE statement specified for it,
and considers the user sequence as follows:
[ Define_Sequence Jones (test) ;
[ Pattern 1;
Event 1 Scan_Load :
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
] Pattern 2;
[ Pattern 4;
Event 1 Pulse () : "C3"=-;
Event 2 Measure_PO () : ;
] Pattern 4;
] Define_Sequence Jones ;
The following figure shows a design with a clock generated by OPC logic.
Figure 6-5 Design with OPC logic: clock inputs C1 (-EC) and C2 (+SC), the OPCG logic with its
master and slave latches, the derived clock net opcgout, and the GSD circuit it drives
An example of a static stored-pattern test sequence definition for the design of Figure 6-5 is
shown in Figure 6-6. For ease of reference the standard Encounter Test numbering
convention for patterns and events has not been followed here. Test data import ignores them
anyway, so the example is valid.
Figure 6-6 A stored-pattern test sequence definition for a design with OPC logic
TBDpatt_Format (mode=node,model_entity_form=name);
# Mode Test sequence for opcgckt
# Created by TDA 11/08/96
# Normal Load and Unload
[ Define_Sequence TDA_opcgckt_test1 (test);
[ Pattern 1;
Event 1 Force ():
"Block.f.l.opcgckt.nl.opcg1.master"=0 ;
] Pattern 1;
[ Pattern 2;
Event 2 Scan_Load (): ;
] Pattern 2;
[ Pattern 3;
Event 3 Put_Stim_PI (): ;
Event 4 Measure_PO (): ;
] Pattern 3;
[ Pattern 4;
Event 5 Pulse ():
"Pin.f.l.opcgckt.nl.C2"=-;
] Pattern 4;
[ Pattern 5;
Event 6 Pulse ():
"Pin.f.l.opcgckt.nl.C2"=-;
Event 7 Pulse_Pseudo_PI ():
"opcgoutPPI"=+ ;
] Pattern 5;
[ Pattern 6;
Event 8 Scan_Unload (): ;
] Pattern 6;
] Define_Sequence TDA_opcgckt_test1;
Event 1 is a Force event which tells a simulator that the given net (usually a latch, as in the
present case) should be at the specified state. This event is needed here because most
simulators start out by assuming an unknown initial state in all latches, and this example
design does not have a homing sequence. In the present case, the lack of a homing sequence
is not a concern, since the static behavior of the design is the same no matter which state the
frequency divider starts in. The initial state affects the phase relationship between the input
clock pulses and the derived clock pulses on net opcgout. For static logic tests this does not
matter. No Force event is needed to initialize the other OPCG logic latch, because it is flushed
in the stability state and therefore will obtain its initial value through normal simulation.
The Force event could have been placed in a setup sequence, but since no setup sequence
was defined, it is more convenient, and does no harm to put it inside the test sequence as
shown.
Scan_Load (Event 2) is usually the first event in an automatic test sequence for a scan
design. If you have an LSSD design with A and B scan clocks, you will do well to define a
second sequence identical to this one except with a Skewed_Scan_Load in place of this
Scan_Load event, and supply both sequences to the test generator. Encounter Test will use
the sequence definition that most closely matches the generated test. The latch values from
the generated sequence are moved into the event in the sequence definition; if the
Scan_Load types do not match, the test coverage could be adversely affected.
Events 2 and 3 together comprise what is commonly thought of as the “test vector,” the latch
and primary input values required for the test. Event 3 is actually a placeholder telling
Encounter Test where to put the Stim_PI portion of the test vector. Besides acting as a
placeholder, the Put_Stim_PI event allows the specification of primary input values that will
override any values specified by the test generator. When the test sequence is constructed
from this sequence definition and the generated test vector, the Put_Stim_PI event will be
replaced by the Stim_PI event.
Event 4 is an instruction to measure the primary outputs. When creating its own test
sequences, Encounter Test has algorithms that determine the sequential relationship
between clocks, primary input stims, primary output measures, and scan operations.
However, when the user specifies the sequence, Encounter Test must be told where the stims
and measures go.
Events 5 and 6 produce two cycles of the clock primary input, which cause the frequency
divider (the OPC logic in this design) to produce a single pulse on the derived clock, opcgout,
which is identified as a “cut point” in the design source or mode definition file.
Event 7 is the pulse on the derived clock, which is referred to by the name of its corresponding
pseudo PI. For this example the name opcgoutPPI was chosen. This event is used when
verifying the sequence, and for Encounter Test simulation of the tests. Note that the
automatically generated sequences will be in terms of the pseudo PIs, so this event is also
used for sequence matching when multiple sequence definitions are given to the test
generation application.
Event 8, the final step in this sequence, tells Encounter Test to scan out the latch values. As
with the Scan_Load event, when you are using LSSD with A and B scan clocks, there are
two forms of this event, the other one being Skewed_Scan_Unload. The choice of which
measure latch event to use is related to the clocking, so if the wrong event is used, the test
coverage is likely to be dramatically reduced. As with the Scan_Load event (see the
discussion of Event 2 above), you may want to define two sequences, one with each measure
latch event. Combined with the two different stim latch events, you will often have four similar
sequence definitions for an LSSD design. In contrast, a single clock edge-triggered design
may require only a single static test sequence for good fault coverage.
Setup Sequences
Encounter Test automatically creates a setup sequence for WRPT and LBIST to hold the
weight information (for WRPT), to apply the optional initializing channel scan from PRPG (for
LBIST), and to apply any lineheld primary input values that are specified.
If you code your own setup sequence, Encounter Test will augment it with the above
information if needed. You would need to code a setup sequence only if there are additional
stimuli that need to be applied, or some of the “lineholds” are to pseudo primary inputs.
Neither of these reasons is likely to exist unless you are processing in a test mode with cut
points specified.
It should be helpful to think of the setup sequence as an initialization step for the WRPT or
LBIST test loop.
In some cases of processing with cut points, especially with an on-board BIST controller, it
may also be necessary to code a pseudo PI event (Stim_PPI) in the setup sequence just to
remind Encounter Test that the corresponding cut point nets were initialized to that state in
the modeinit sequence. For example, it is possible, especially with an on-product BIST
controller, that some control signal represented as a PPI is set in the mode initialization
sequence and then left in that state to begin the test sequence. If this PPI does not have a
stability value (that is, it is not attributed as a clock, TI, nor TC), Encounter Test will assume
the test sequence starts with the PPI at X unless told otherwise.
When a setup sequence is used, it is defined as type setup and then referenced in the test
sequence definition by the following construct:
[ Define_Sequence setup_seq_name (setup) ;
.
.
.
] Define_Sequence setup_seq_name;
[ Define_Sequence test_seq_name (test) ;
[ SetupSeq=setup_seq_name ];
.
.
.
If you import the setup sequence definition, but do not refer to it by the SetupSeq object
within a test sequence, Encounter Test will ignore it. The SetupSeq object is required within
the test sequence to tell Encounter Test to use the named sequence definition as the base
for the setup sequence that will precede the loop test sequence in the test generation output
file for WRPT or LBIST.
Endup Sequences
At the end of a BIST test procedure, it may be necessary to de-activate a phase-locked-loop
or cycle the BIST controller into some special state, or quiesce the design in some other
fashion. If this is the case, you may be able to provide the stimuli to do this in the signature
observation sequence. On the other hand, you may choose to separate this operation from
the MISR observation sequence, or it may be necessary to apply this quiescing sequence
between two concatenated test sequence loops, without an intervening signature
observation.
The quiescing, or endup sequence can be coded as a separate sequence definition and
imported along with your test sequence definitions. Encounter Test will use the endup
sequence only if so directed by the presence of an EndupSeq object within a test sequence
definition. This is specified in a similar manner as a setup sequence:
[ Define_Sequence endup_seq_name (endup) ;
.
.
.
] Define_Sequence endup_seq_name;
[ Define_Sequence test_seq_name (test) ;
[ EndupSeq=endup_seq_name ];
.
.
.
This object appears before the first pattern of the test sequence definition, but it gets applied
at the end of the test sequence by means of an Apply:endup_seq_name; event that
Encounter Test will automatically insert after the loop.
We note that in the case of a “lineheld” primary input, the value can be specified directly by
means of a Stim_PI event in the sequence definition. Also, lineholds are specified because
the verification and the test generation/simulation are performed in two separate steps, and
the need for the verification step is unique to the use of cut points and pseudo primary inputs.
Therefore, it is recommended that the use of linehold specifications within a test sequence
definition be limited to fixed-value latches in OPC logic. Encounter Test places no such
restriction on their use, however, in case you have some reason to make more extensive use
of them.
The linehold object is placed before the first pattern in the sequence definition, with the
following TBDpatt syntax:
[ Define_Sequence Example_Name (test);
[ Lineholds;
Netname_A=1;
Netname_B=0;
] Lineholds;
[ Pattern ;
.
.
.
Lineholds specified in this manner through a sequence definition are limited to primary inputs
and FLH (fixed-value) latches.
Lineholds specified in a sequence definition override any conflicting linehold specified in the
model source, test mode definition, or linehold file.
This linehold object is valid in a test sequence only, not in the setup sequence.
The scan sequence definitions, which must be provided manually, are shown in Figure 6-8 on
page 229.
The cut point nets, A1_Clk, A2_Clk, B1_Clk, and B2_Clk of Figure 6-7 are defined by either
the model source or mode definition statements as pseudo primary inputs A_Clk and B_Clk.
Thus, the sequence definitions in Figure 6-8 make reference to these PPI (pseudo primary
input) names.
A shift gate signal named Scan_Enable, with a test function attribute of -SE, is assumed to
be gating the scan data path. This signal is not part of the clock circuitry of Figure 6-7.
The free-running oscillator, “Osc_In” in Figure 6-7, is assumed to have a test function of oTI,
indicating that it is “tied” to an oscillator. The mode initialization sequence must have applied
a Start_Osc event on this pin. Synchronization of the events in this sequence definition with
the oscillator is by means of Wait_Osc events.
The Wait_Osc event specifies how many oscillator cycles must have elapsed since the
previous Wait_Osc event. In our example, when the scan operation is not being executed,
the clock generation circuitry of Figure 6-7 is dormant, and has no effect upon the application
of the test patterns being applied between the scan operations. Thus, there need be no
Wait_Osc events in that portion of the test data between scan operations.
When the scan operation is started, the Scan_Enable signal is first set to its scan state of
0. Then, in preparation for turning on the Scan_Ctl signal to initiate the scan operation, a
Wait_Osc (Cycles=0) event is specified. This tells us that we must start paying attention to
the oscillator. “Paying attention” means simply to start counting oscillator pulses so that
subsequent events can be synchronized with the oscillator. Subsequent Wait_Osc events
will tell how many oscillator cycles should have elapsed at the given point in the test data.
Thus, the initial Wait_Osc event with Cycles=0 serves only as a signal to turn on one's
stopwatch, so to speak.
The next pattern turns on the Scan_Ctl input. When an oscillator is active (meaning that a
Wait_Osc has been encountered), any events following the Wait_Osc signal (even though
possibly in a different “pattern”) are assumed to be applied immediately within the first
oscillator cycle following the Wait_Osc event.
After the scansequence has completed (eight repetitions in our example), the scansectionexit
sequence, OPCKTscanSectExit, is executed. This sequence serves two purposes:
■ Provides a place to specify the Wait_Osc event which tells how many oscillator cycles
should elapse before the tester can assume that the scan operation is complete.
■ This is where the Scan_Ctl signal is returned to its normal state of 0.
Note from the timing diagram of Figure 6-7 that the scan operation automatically terminates
and the clock generation logic goes dormant four oscillator cycles after the raising of the
Scan_Ctl signal. Thus, the restoring of the Scan_Ctl signal to the 0 state does not have to
be timed, and can be done at any time following the four oscillator pulses and before it is time
for the next scan operation. The Wait_Osc (Cycles=4,Off) event not only causes the four
oscillator cycles to elapse before proceeding, but the “Off” parameter also signifies that
subsequent events are not being synchronized with the oscillator; the oscillator is considered
to be inactive (i.e. ignored but still running) at this point, and will remain inactive until the next
Wait_Osc event is encountered.
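As a rough illustration of the flow just described, the scan-start and scan-exit fragments might
be coded along the following lines in TBDpatt form. This is only a sketch: the Scan_Enable,
Scan_Ctl, and Osc_In names come from the Figure 6-7 discussion, the pattern numbers are
arbitrary, and the exact placement of the pin reference and parameters on the Wait_Osc events
is an assumption; see the Test Pattern Data Reference for the precise event syntax.

[ Pattern 1;
Event 1 Stim_PI (): "Scan_Enable"=0;
Event 2 Wait_Osc (Cycles=0): "Osc_In";
] Pattern 1;
[ Pattern 2;
Event 1 Stim_PI (): "Scan_Ctl"=1;
] Pattern 2;
[ Pattern 3;
Event 1 Wait_Osc (Cycles=4,Off): "Osc_In";
Event 2 Stim_PI (): "Scan_Ctl"=0;
] Pattern 3;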
When an oscillator is active, the intervening events between Wait_Osc events are applied
as follows:
■ All external events (e.g., Stim_PI, Pulse, Measure_PO) are applied immediately.
■ All pseudo PI events are assumed to occur in their order of appearance in the sequence,
and to be complete within the number of cycles specified in the following Wait_Osc event.
■ No other external activity occurs until the time specified in the following Wait_Osc event.
Encounter Test will not rearrange the order of the events in the sequence definitions. This
means that when composing the sequences, the user should adhere to the above guideline.
To understand how Encounter Test handles Wait_Osc events, it helps to think in terms of an
oscillator signal being either non-existent, running, or active. An oscillator comes into
existence when it is connected to a pin by a Start_Osc event. A Stop_Osc event effectively
kills the oscillator, returning the pin to a static level. The oscillator is said to be running as long
as it “exists”--that is, as long as it is connected to the pin. But connecting a pin to an oscillator
does not automatically mean that the design is responding to the oscillator. There may be
(and usually are) some enabling signals, such as Scan_Ctl in our example, that control the
effect of the oscillator upon the logic. You can think of the oscillator as being “active” whenever
it is causing design activity. While the oscillator is active, the design is usually running
autonomously, perhaps communicating to the outside world through asynchronous
interfaces.
Encounter Test does not understand asynchronous interfaces, and continually simulating a
free-running oscillator signal would be prohibitively expensive. Therefore, Encounter Test
supports oscillators only if they affect the design through cut points, and all Encounter Test
programs except for Verify OPC Sequences treat those cut points as primary inputs, ignoring
the oscillator completely. Still, consideration has to be given to the relationship between the
oscillator and other primary input stimuli so that the tester knows how to apply the patterns
and Verify On Product Clock Sequences knows how to simulate the sequence to assure that
it actually produces predictable results and that the cut point (pseudo PI) information is
correctly specified for the mainstream Encounter Test processes.
When the first Wait_Osc event is encountered, the specified pin must have previously been
connected to an oscillator by a Start_Osc event. The first Wait_Osc is a sign that the
oscillator is about to become active.
When the pattern sequence reaches a phase where the design will no longer be “listening”
to the oscillator signal, and primary input stimuli can be applied without any regard to the
oscillator, the Wait_Osc event provides a flag, Off. The Off attribute on a Wait_Osc event is
a sign that the oscillator is no longer causing any significant design activity, and is considered
to have reverted to an inactive state after the specified number of cycles.
If two or more oscillators are active simultaneously, Encounter Test assumes that they are
controlling independent portions of the design.
Ignoremeasures File
An ignoremeasures file is used to specify measure points to be ignored during test generation
or fault simulation. Measures are suppressed for all latches in the file by assigning a measure
X value instead of measure 1 or measure 0. Specify the keyword value
ignoremeasures=ignoremeasure_filename to utilize the ignoremeasures file.
The format of the file is a list of block or pin names, one name per line. The names may be
specified using either full proper form (Pin.f.l.topcell.nl.hier_name) or short form (hier_name).
You can add comments in the ignoremeasures file using any of the following characters:
//
/* <comment block> */
The Scan_Unload and/or the Measure PO for the specified latches/POs will be set to X.
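As an illustration, an ignoremeasures file is simply a flat list of names with optional comments,
for example (the latch and pin names below are hypothetical):

// measures on these latches will be X'd out
Pin.f.l.topcell.nl.core1.misr_reg3.slave
core2.spare_latch7
/* primary output observed through an analog macro */
Pin.f.l.topcell.nl.analog_out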
The following are potential results when resimulating with an ignoremeasures file:
■ Miscompare messages are produced since the miscompare checking is done before the
measures are X’d out.
■ If a measure is X’d out, the fault coverage is adjusted to remove the fault(s) detected by
that pattern.
Keepmeasures file
A keepmeasures file is used to specify measure points to be retained during test generation
or fault simulation. This file is useful if the number of measures to be ignored is greater than
the number to be kept.
All measures except those specified in the file are suppressed by assigning a measure X
value to all latches. The keepmeasures file is utilized by specifying the keyword value
keepmeasures=keepmeasure_filename.
The format of the file is a list of block or pin names, one name per line. The names may be
specified using either full proper form (Pin.f.l.topcell.nl.hier_name) or short form (hier_name).
You can add comments in the keepmeasures file using any of the following characters:
//
/* <comment block> */
The Scan_Unload and/or the Measure PO for the specified latches/POs are retained.
7
Utilities and Test Vector Data
This chapter discusses the commands and concepts related to test patterns and the ATPG
process.
Committing Tests
All test generation runs are made as uncommitted tests in a test mode. Commit Tests moves
the uncommitted test results into the committed vectors test data for a test mode. Refer to
“Performing on Uncommitted Tests and Committing Test Data” on page 234 for more detailed
information about the overall test generation processing methodology.
To perform Commit Tests using the graphical interface, refer to “Commit Tests” in the
Graphical User Interface Reference.
To perform Commit Tests using command line, refer to “commit_tests” in the Command Line
Reference.
The most commonly used keyword for the commit_tests command is:
■ force=no/yes - To force uncommitted tests to be saved even if potential errors have
been detected. Some potential errors are:
❑ TSV was not run or detected severe errors
❑ Date/time indicates that the patterns were created before a previous save
Refer to “commit_tests” in the Command Line Reference for more information on the
keyword.
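A representative invocation is shown below. The workdir and testmode keywords follow the
same conventions as the other commands in this guide; the inexperiment keyword naming the
uncommitted experiment is shown here as an assumption, so check “commit_tests” in the
Command Line Reference for the exact keyword set.

commit_tests workdir=/projects/chipA/et testmode=FULLSCAN inexperiment=logic1 force=yes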
Prerequisite Tasks
Complete the following tasks before committing tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
4. Run ATPG and create a pattern to be saved. Refer to “Invoking ATPG” on page 35 for
more information.
If you use an uncommitted test as input to another uncommitted test and append to it, and the
results of the second uncommitted test are not satisfactory, both uncommitted tests will have
to be thrown away, since they will have been combined into a single uncommitted test.
By committing an uncommitted test, you append the results of the uncommitted test to a
“committed vectors” set of test data for the test mode. The committed vectors for a test mode
is the data deemed worth sending to manufacturing. After an uncommitted test has been
committed, it is removed from the system and can no longer be manipulated independent
from the other committed vectors data. For future reference, the names of the committed tests
are saved with the committed test patterns.
After test data from one or more uncommitted tests has been committed to the committed
vectors set of test data, any subsequent test generation uncommitted tests will target only
those faults left untested by the committed test patterns. To get the most benefit from this, it
is advisable to achieve acceptable uncommitted test results before beginning a new
uncommitted test. While it is possible to run many different test generation uncommitted tests
in parallel, if they are committed there will be many unnecessary test patterns included in the
committed vectors test set since the uncommitted tests may have generated overlapping test
patterns that test the same faults.
When uncommitted test results are committed to the committed vectors for a test mode, the
commit process checks whether it is allowed to give test credit for faults in any other test
modes that have been defined. This processing is referred to as cross mode fault mark-off.
This is a mechanism that allows a fault that was detected in one test mode to be considered
already tested when a different test mode is processed in which that fault is also active.
Sometimes cross mode fault mark-off is explicitly disabled, such as when two test modes are
targeting the same faults, but for different testers (and different chip manufacturers).
To perform Delete Commit Tests using the graphical interface, refer to "Delete Committed
Tests" in the Graphical User Interface Reference.
To perform Delete Commit Tests using command line, refer to “delete_committed_tests” in the
Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
Prerequisite Tasks
Complete the following tasks before deleting committed tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
4. Commit test data. Refer to “Committing Tests” on page 233 for more information.
After removing the tests from the experiment, resimulate the new set of tests under the following
conditions:
■ If the tests in the experiment are order dependent
■ If fault status exists for the experiment. Resimulation is required to recalculate and reset
the fault status for the remaining tests.
Refer to “Simulating Vectors” on page 258 for more information.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ inexperiment= name of experiment from which to delete patterns
■ testrange= odometer value of the test patterns to delete. The values can be:
❑ all - process all test sequences
❑ odometer - process a single experiment, test section, tester loop, procedure, or
test sequence in the TBDbin file (for example 1.2.1.3)
❑ testNumber - process a single test in the set of vectors by specifying the relative
test number (for example 12)
❑ begin:end - process range of test vectors. The couplet specifies begin and end
odometers (such as 1.2.1.3.1:1.2.1.3.15) or relative test numbers (such as 1:10)
❑ begin: - process the vectors starting at the beginning odometer or relative test
number to the end of the set of test vectors.
❑ :end - process the test vectors starting at the beginning of the set of test vectors
and ending at the specified odometer or relative test number
❑ : - process the entire range of the set of test vectors (the same as testrange = all)
❑ A zero (0) value in any odometer field specifies all valid entities at that level of the
TBD hierarchy. For example, 1.1.3.0.1 indicates the first test sequence in each test
procedure.
Refer to delete_testrange in the Command Line Reference for more information on these
keywords.
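For example, the following hypothetical invocation deletes a range of tests identified by begin
and end odometers:

delete_testrange workdir=/projects/chipA/et testmode=FULLSCAN inexperiment=logic1 testrange=1.2.1.3.1:1.2.1.3.15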
Deleting an Experiment
Use the delete_tests command to remove all data associated with the specified uncommitted
experiment(s) from the working directory.
To perform Delete Tests using the graphical interface, refer to "Delete Tests" in the Graphical
User Interface Reference.
To perform Delete Tests using command line, refer to “delete_tests” in the Command Line
Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of experiment to delete
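A representative invocation, with hypothetical names, is:

delete_tests workdir=/projects/chipA/et testmode=FULLSCAN experiment=logic1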
The TBDbin hierarchy consists of the following entities:
Experiment
    Define_Sequence
        Timing_Data
        Pattern
            Event
    Test_Section
        Tester_Loop
            Test_Procedure
                Test_Sequence
                    Pattern
                        Event
If the TBDbin contains signature-based TBD data, the TBDbin hierarchy consists of the
following entities:
Experiment
    Test_Section
        Tester_Loop
            Test_Procedure
                Signature interval 1
                    Iteration 1
                    Iteration 2
                    Iteration 3
                    Iteration 4
                    Iteration 5
                Signature interval 2
                    Iteration 1
                    Iteration 2
                    .
                    .
                    .
Although these elements are not numbered in the TBDbin, individual elements can be
referenced in Encounter Test tools by using a hierarchical numbering scheme (commonly
referred to as an odometer) whose fields represent the levels in the hierarchy shown in
Figure 7-1 on page 238. So, as an example, 1.2.3.4.5.6.7 would be uncommitted test 1,
test_section 2, tester_loop 3, test_procedure 4, test_sequence 5, pattern 6, event 7.
The following gives a brief description of each of these levels of hierarchy. Additional
information about this and other data contained in the TBDbin is included in “Test Pattern
Data Overview” in the Test Pattern Data Reference.
Refer to “Viewing Test Data” in the Graphical User Interface Reference for details on
viewing and analyzing test data using the graphical interface.
Experiment
The experiment groups a set of test_sections generated from “one” test generation run.
The experiment name is specified at the invocation of the test generation application. The
“one” run may actually be multiple invocations of test generation appended to the same
experiment name. Refer to “Experiment” in the Test Pattern Data Reference for details.
Define_Sequence
This is basically a template of a test sequence that is used within this experiment. There will
be one of these for each unique test sequence template used in the experiment. The
sequence definition includes timing information for delay testing and a description (template)
of the patterns and events. The Define_Sequence currently is found only within
experiments that contain timed dynamic tests. Refer to “TBDpatt and TBDseqPatt Format” in
the Test Pattern Data Reference.
Timing_Data
This defines a tester cycle and the specific points within the tester cycle when certain events
(defined in terms of the event type and signal or pin id) are to occur. Refer to “AC Test
Application Objects” in the Test Pattern Data Reference for a more detailed description of
timing data. The sequence definition may contain several Timing_Data objects, each one
defining a different timing specification for testing the product. The choice of which
Timing_Data object to use is controlled by the Test_Sequence that uses this sequence
definition.
Test_Section
The test_section groups a set of tester_loops within an experiment that have the
same tester setup attributes (such as tester termination), and the same types of test (see
“General Types of Tests” on page 30). Refer to “Test_Section” in the Test Pattern Data
Reference for details.
Tester_Loop
The tester_loop groups a set of test_procedures that is guaranteed to start with the
design in an unknown state. Thus, the tester_loops can be applied independently at the
tester. Refer to “Tester_Loop” in the Test Pattern Data Reference for details.
Test_Procedure
The test_procedure is a set of test_sequences. Test_Procedures can sometimes
be applied independently at the tester. If the test_procedures within a
tester_loop cannot be applied independently, an attribute to that effect is included in the
containing tester_loop. Refer to “Test_Procedure” in the Test Pattern Data Reference
for details.
Test_Sequence
A test_sequence is a set of test patterns that are geared toward detecting a specific set of
faults. The patterns are intended to be applied in the given order. Refer to “Test_Sequence”
in the Test Pattern Data Reference for details.
Pattern
A pattern is a set of events that are applied at the tester in the specific order specified. Refer
to “Pattern” in the Test Pattern Data Reference for details.
Event
An event is the actual stimulus or response data (or any other data for which ordering is
important). The data within the event is represented as a string of logic values called a vector.
The pin to which the value applies is determined by the relative position of the logic value in
the vector. Encounter Test maintains a vector correspondence list that indicates the order of
the primary inputs, primary outputs and latches as they appear in vectors.
There is a file created in Encounter Test called TBDvect. This file contains the vector
correspondence information. It can be edited if you need to change the order of the PIs, POs,
and/or latches in the vectors. Refer to “Event” in the Test Pattern Data Reference.
The following sections give information on providing test data to manufacturing and analyzing
test data.
3. Verilog** - this format is used by many manufacturers to drive a “sign off” simulation of
the vectors against the component model prior to fabrication. It can also be translated into
many tester formats by the manufacturers.
See “Verilog Pattern Data Format” in the Test Pattern Data Reference for more detail
on Verilog.
4. Encounter Test Pattern Data (TBDpatt) - The TBDpatt that is output from Encounter
Test contains all the same “manufacturing data” as the TBDbin, WGL, and Verilog forms.
The TBDpatt is an ASCII form of the TBDbin.
See “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference for
additional information.
5. STIL data - Encounter Test exports test vectors in STIL format, conforming to the IEEE
Standard 1450-1999 Standard Test Interface Language (STIL), Version 1.0 standard.
See “STIL Pattern Data Format” in the Test Pattern Data Reference for additional
details.
Note: See “Writing and Reporting Test Data” on page 169 for information about creating
(exporting) these test data forms.
Static
Static tests are structured to detect static (DC) defects. The detection of DC defects does not
require transitions, so the design is expected to settle completely before the next event is
applied.
Dynamic
Dynamic tests are structured to create a rapid sequence of events. These events are found
inside the dynamic pattern and are identified as release events (launch the transition),
propagate events (propagate the transition to the capture latch) and capture events (capture
the transition). Within the dynamic pattern the events are expected to be applied in rapid
succession. The speed is at the discretion of the manufacturing site, since there are no
timings to describe how fast they can be applied.
Dynamic Timed
Dynamic timed tests are dynamic tests that have associated timings. In this case the speed
with which the events are expected to be applied at the tester is explicitly stated in the timing
data.
It is possible to create tests that can be used to sort the product by applying the test at several
rates of speed. Not all manufacturers support this, so you must contact your manufacturer
before sending them test data with several different sets of timing data.
The manufacturing process variation curve is a normal distribution. Tests are sorted by
selecting a point on the process variation curve through the setting of the (best, nominal,
worst) coefficients of a linear combination.
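As an illustrative sketch (the delay names below are descriptive, not formal keywords), the
timed value used for an event can be thought of as a weighted sum of the best-case, nominal,
and worst-case delays:
    delay = (best coefficient x best delay) + (nominal coefficient x nominal delay) + (worst coefficient x worst delay)
For example, coefficients of 0.0, 1.0, 0.0 place all of the weight on the nominal delay, while
shifting weight toward the best or worst coefficient selects a faster or slower point on the
curve for speed sorting. Refer to “Process Variation” on page 129 for the supported settings.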
8
Reading Test Data and Sequence
Definitions
This chapter discusses the commands and concepts used to read test patterns and test
sequence definitions into Encounter Test.
To read Encounter Test Pattern Data using the graphical interface, refer to “Read Vectors” in
the Graphical User Interface Reference.
To read Encounter Test Pattern Data using command lines, refer to “read_vectors” in the
Command Line Reference.
The most commonly used keywords for the read_vectors command are:
■ language= stil|tbdpatt|evcd - Type of language in which the patterns are being
read
■ importfile=STDIN|<infilename> - Allows a filename or piping of data
■ uniformsequences=no|yes - STIL import option to create test procedures with
uniform clocking sequences. Default is no.
■ identifyscantest=no|yes - STIL import option to identify a scan integrity test, if
one exists.
Refer to “read_vectors” in the Command Line Reference for more information on these
keywords.
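For example, a minimal read of a STIL pattern file from the command line might look like the
following sketch. The file name is a placeholder, and the workdir and testmode keywords are
assumed to follow the same conventions as the other commands in this guide:
    read_vectors workdir=<workdir> testmode=<testmode> language=stil importfile=my_patterns.stil uniformsequences=no identifyscantest=yes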
Prerequisite Tasks
Complete the following tasks before reading test data:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Have the test patterns in the supported format.
■ Conversion of vectors produced from designs with grandparent test modes is not
supported. Refer to "Building Multiple Test Modes" in the Modeling User Guide for
related information.
TBDpatt is the recommended format for entering manually generated test vectors into
Encounter Test for the following reasons:
■ TBDpatt format translates directly into TBDbin format
There is less chance of information loss when translating between TBDpatt and
TBDbin formats.
■ TBDpatt format can be easily created
You can create a TBDpatt file for input to Encounter Test by first generating test vectors
(even just random ones) and then converting the resulting TBDbin file into a TBDpatt
file. You can then edit the TBDpatt file to reflect the desired primary input and latch
stimulus values.
To read Encounter Test Pattern Data using the graphical interface, refer to “Read Vectors” in
the Graphical User Interface Reference.
To read Encounter Test Pattern Data using command lines, refer to “read_vectors” in the
Command Line Reference.
Initial support for STIL that conforms to the IEEE 1450.1 standard is available. Refer to
“Support for STIL Standard 1450.1.”
Encounter Test reads, parses, and performs semantic checks on the constructs found in the
input STIL file in addition to validating structural references and test vector data.
For information on Encounter Test STIL pattern data, refer to the following sections of the Test
Pattern Data Reference.
■ “STIL Pattern Data Format”
■ Appendix E, “STIL Pattern Data Examples”
STIL Restrictions
It is strongly recommended that the STIL data you read be in an Encounter Test-generated
file, produced by “Writing Standard Test Interface Language (STIL)” on page 179. The results
for STIL data generated outside of Encounter Test are not guaranteed.
The STIL scan protocol must match the scan procedures automatically generated by
Encounter Test. Custom scan protocol procedures are not supported. If you do use a custom
scan protocol, we recommend that the read tests be simulated prior to sending data to the
tester.
Refer to “Custom Scan Sequences” in the Modeling User Guide for more information.
Encounter Test does not support reading STIL data for designs with scan pipelines in
compression testmodes.
This comparison helps identify the mode initialization sequence even when the end of the
sequence is merged with the first real test pattern in the STIL data.
Note: The read_vectors command compares only the events directly controlled by the
tester. Internal events in the mode initialization sequence, such as the stimulation of pseudo-
primary inputs or the pulsing of pseudo-primary input clocks, are ignored in this
comparison.
If the first few STIL vectors match the mode initialization sequence, then read_vectors
replaces the matching vectors with the mode initialization sequence identified for this test
mode.
This replacement enables the resulting TBDbin (the binary form of TBDpatt) to correctly identify
any internal events associated with the mode initialization. This enables read_vectors to
support the automatic creation of internal events for the mode initialization sequence, but not
for normal test patterns.
If the first few patterns in the STIL data do not match the mode initialization sequence
identified in the test mode, then the read_vectors command explicitly adds a mode
initialization sequence from the TBDseq file to the output patterns before converting any of
the STIL vectors. This makes sure that the mode initialization sequence occurs before the
patterns in the STIL data in the resulting test pattern file.
As the mode initialization sequence is taken from the TBDseq file, this comparison will also
correctly identify it as a mode initialization sequence in the resulting TBDbin file. When you
resimulate the patterns, the simulator will not insert another copy of the mode initialization
sequence.
Note: The read_vectors command does not identify a mode initialization sequence that
is functionally equivalent to the mode initialization sequence of the test mode, but does not
exactly match the test mode initialization sequence.
An EVCD file is an ASCII file that contains information about value changes on selected
variables in the design. The file contains header information, variable definitions, and the
value changes for all specified variables. Encounter Test accepts EVCD files through the
Read Vectors application. For more information on creation and content of the files, refer to
the following sections in Cadence® NC-Verilog® Simulator Help:
■ Generating a Value Change Dump (VCD) File
■ Generating an Extended Value Change Dump (EVCD) File for a Mixed-Language Design
Tip
It is recommended that you follow these rules when reading EVCD:
❑ Ensure that the input data only changes when the clock(s) are OFF.
❑ Do not have the data and clock signals changing at the same time.
❑ Do not have overlapping clocks (unless they are correlated).
To read the generated EVCD file into Encounter Test using the graphical interface, refer to
“Read Vectors” in the Graphical User Interface Reference. Select EVCD for Input
Format.
To read the generated EVCD file into Encounter Test using command lines, refer to
“read_vectors” in the Command Line Reference. Specify language=evcd.
Tip
The TBDbin consists of a single Experiment, Test_Section, and Tester_Loop. The
type of Test_Section created in the TBDbin depends on the specified value for
testtype or Test section type if using the GUI. The default Test_Section type is
logic. Refer to “Encounter Test Vector Data” on page 238 for more information.
The termination keyword sets the tester termination value to be assumed for the
Test_Section. The default termination setting is none.
The inputthreshold keyword defines the input threshold. Signals less than or equal
to the specified threshold are interpreted as Z.
The outputthreshold keyword defines the output threshold. Signals less than or
equal to the specified threshold are interpreted as X.
The measurecyclesboundary keyword is used to insert a Measure_PO event on a
cycle boundary when no measure exists in the EVCD input.
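For example, a sketch of an EVCD import using the keywords described above; the file name
and the workdir and testmode values are placeholders:
    read_vectors workdir=<workdir> testmode=<testmode> language=evcd importfile=mychip.evcd testtype=logic termination=none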
EVCD Restriction
Encounter Test supports only the port variable type. A message is issued if a variable of
any other type is detected.
EVCD Output
EVCD values that refer to Primary Inputs are translated to Stim_PI or Pulse events.
Primary Output values are translated to Measure_PO events. Refer to “Event” in the Test
Pattern Data Reference for descriptions of these events.
After reading the test vectors, you can use the Test Simulation application to simulate the
vectors. You can choose to perform good machine or fault machine simulation and forward or
reverse vector simulation.
To read Sequence Definition Data using the graphical interface, refer to “Read Sequence
Definition” in the Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ importfile= location of sequences to import
Prerequisite Tasks
Complete the following tasks before reading sequence definitions:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
9
Simulating and Compacting Vectors
Encounter Test has capabilities to simulate and manipulate experiments to reduce the
number of patterns, fault simulate patterns, calculate new values, and verify existing patterns.
Compacting Vectors
With limited tester resources, when you want to limit the number of ATPG patterns to be
loaded onto a tester, it is strongly recommended that you compact the vectors to produce a
more effective set of patterns before you limit the patterns being applied to the tester.
Typically, the test pattern set created during ATPG does not represent the ideal order of
patterns to apply at a tester. For example, because of different clocking and random fault
detection, pattern #100 might test more faults than pattern #60. If you can only apply 70
patterns, you would want pattern #100 to be applied because of the number of faults it detects.
Compact Vectors is designed to help order the patterns to allow the steepest test coverage
curve.
By default, Compact Vectors sorts an experiment from ATPG based on the number of faults
tested per experiment. The tool rounds the test coverage down to the nearest 0.05% and uses
this as the cut-off coverage number. If the experiment is based on dynamic patterns, both the
static and dynamic coverages are reduced by 0.05%, and both numbers need to be reached
before stopping. The coverage number is based on AT-Cov, or adjusted fault coverage.
When the input consists of multiple experiments, test sections, or tester loops (refer to
“Encounter Test Vector Data” on page 238 for more information on these terms), Compact
Vectors tries to combine all the vectors into a single test section and tester loop. However, for
that to happen, all the following conditions must be met:
■ Test section type must be logic or WRP
■ Test types (static, dynamic) must be the same
■ Tester termination must be the same
■ Pin timings must be the same
■ Any linehold fixed-value latch (FLH) values must be consistent (that is, for each FLH, its
value must be the same for all experiments or test sections being combined)
■ There must be consistent usage (or non-usage) across the test sections of tester
PRPGs, tester signatures, product PRPGs, and product signatures
To perform Compact Vectors using the graphical interface, refer to "Compact Vectors" in the
Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to be reordered
The most commonly used keywords for the compact_vectors command are:
■ resimulate=yes|no - Set to no to not resimulate the result patterns. Default is yes
to resimulate the results.
■ reordercoverage=both|static|dynamic - Specify the fault types to drive sorting.
The default is based on the type of ATPG patterns that are being simulated.
■ maxcoveragestatic=# - Stop patterns at a specific static coverage number (for
example 99.00)
■ maxcoveragedynamic=# - Stop patterns at a specific dynamic coverage number
(for example 85.00)
■ numtests=# - Stop at a specific pattern count. This is an estimate because multiple
patterns are simulated in parallel, so the total final pattern count might be slightly higher
(by fewer than 64 patterns).
Refer to “compact_vectors” in the Command Line Reference for more information on these
keywords.
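For example, the following sketch (with placeholder working directory, testmode, and
experiment names) reorders an ATPG experiment, stops at a static coverage of 99.00, and
resimulates the result:
    compact_vectors workdir=<workdir> testmode=<testmode> inexperiment=<experiment> resimulate=yes reordercoverage=static maxcoveragestatic=99.00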
Prerequisite Tasks
Complete the following tasks before running Compact Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have experiments to compact. Refer to “Invoking ATPG” on page 35 for
more information.
Output
The output includes the re-ordered experiment. Use this output experiment for exporting or
committing test data.
Output Message
■ INFO (TWC-016)
Simulating Vectors
Simulate Vectors reads an ASCII pattern set and simulates it with one command. Another
way to do this is to perform Read Vectors (refer to “Reading Encounter Test Pattern Data
(TBDpatt)” on page 247) and then Analyze Vectors (refer to “Analyzing Vectors” on
page 260).
To perform Test Simulation using the graphical interface, refer to “Simulate Vectors” in the
Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the output experiment from simulation
■ language= Type of patterns to import
■ importfile= Location of patterns to import
Refer to “simulate_vectors” in the Command Line Reference for more information about the
most commonly used keywords for the simulate_vectors command.
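For example, a sketch of reading and simulating a TBDpatt file in one step; the file and
experiment names are placeholders:
    simulate_vectors workdir=<workdir> testmode=<testmode> experiment=manual_sim language=tbdpatt importfile=manual_patterns.tbdpatt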
Prerequisite Tasks
Complete the following tasks before running Simulate Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have experiments to compact. Refer to “Invoking ATPG” on page 35 for
more information.
4. Have STIL or TBDpatt pattern files. For limitations on patterns, refer to “Reading
Encounter Test Pattern Data (TBDpatt)” on page 247 for more information.
Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.
Output
The output is a new experiment containing the simulated results of the input patterns. The
output also contains the new calculated values in case of miscompares.
Output Messages
■ INFO (TWC-015)
Analyzing Vectors
Use Analyze Vectors to simulate an existing Encounter Test experiment. This can be useful
when cross checking simulation results, manipulating patterns, or creating scope sessions for
further debug.
To perform Test Simulation using the graphical interface, refer to “Analyze Vectors” in the
Graphical User Interface Reference.
To perform Test Simulation using command lines, refer to “analyze_vectors” in the Command
Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to be simulated
■ experiment= name of the output experiment resulting from simulation
Refer to “analyze_vectors” in the Command Line Reference for more information on these
keywords.
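For example, a sketch that resimulates an existing experiment into a new output experiment;
the experiment names are placeholders:
    analyze_vectors workdir=<workdir> testmode=<testmode> inexperiment=atpg_full experiment=atpg_full_resim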
Prerequisite Tasks
Complete the following tasks before running Analyze Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have experiments to analyze. Refer to “Invoking ATPG” on page 35 for
more information.
Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.
Output
The output is a new experiment containing the simulated results of the input patterns. The
output also contains the new calculated values in case of miscompares.
Timing Vectors
Use Time Vectors to time/re-time an existing Encounter Test experiment. This will time only
dynamic events found in the test patterns. This simulation is good machine only.
To perform Time Vectors using the graphical interface, refer to "Time Vectors" in the
Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to simulate
■ experiment= name of the output experiment resulting from simulation
■ delaymodel= name of the delay model to use
The most commonly used keywords for the time_vectors command are:
■ constraintcheck=yes|no - Ability to turn off constraint checks. Default is yes to
check constraints.
■ earlymodel/latemodel - Ability to customize delay timings. Default is 0.0,1.0,0.0
for each keyword. Refer to “Process Variation” on page 129 for more information.
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. Refer to “Clock Constraints File” on page 125 for more information.
■ printtimings=no|yes - Print timing summary for each clock
Refer to “time_vectors” in the Command Line Reference for more information on these
keywords.
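For example, a sketch that retimes a dynamic experiment against a clock constraints file; the
file, delay model, and experiment names are placeholders:
    time_vectors workdir=<workdir> testmode=<testmode> inexperiment=dynamic_atpg experiment=dynamic_atpg_timed delaymodel=<delaymodel> clockconstraints=clocks.constraints printtimings=yes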
Prerequisite Tasks
Complete the following tasks before running Time Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Have an existing dynamic experiment.
Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.
Output
The output is a new experiment that is timed according to the new timings found in the clock
constraints file or the specified options.
For more information, refer to “Reading Test Data and Sequence Definitions” on page 245.
Once the test patterns have been imported, they can be simulated by Encounter Test. The
types of simulation available in Encounter Test are:
■ High Speed Scan-Based Simulation - high speed simulation for designs and patterns
that adhere to the scan design guidelines.
■ General Purpose Simulation - simulation for designs and patterns independent of
whether they adhere to the scan design guidelines.
■ Good Machine Delay Simulation - simulation for designs and patterns that have a delay
model to accurately reflect the timing in the design.
High Speed Scan-Based Simulation provides high-speed fault simulation for efficient test
pattern generation. It requires designs and input patterns to adhere to the LSSD and GSD
guidelines for Encounter Test. Input patterns must adhere to the following constraint:
■ Patterns cannot place a test inhibit (TI) function pin or a test constraint (TC) function pin
away from its stability value.
General Purpose Simulation provides a more flexible simulation capability than High Speed
Scan-Based Simulation. A design is not required to adhere to design guidelines. Since this is
a fully event-driven simulator, general sequential logic is handled. Also, this simulation does
not impose particular clock or control input sequencing constraints on the input patterns. This
type of simulation is used:
■ to supplement High Speed Scan-Based Simulation for parts of a design that are not
compatible with automatic test generation.
■ to provide a means to insert functional tests into the tester data stream and to perform
fault grading of functional tests.
■ to verify test patterns before they are sent to a design manufacturer.
General Purpose Simulation compares its measurement results to those previously achieved
and alerts you to any differences. For example, when a miscompare between an expected
result and a simulator predicted result is found, analysis is needed to determine why the
miscompare occurred. You can analyze test patterns using View Vectors. Select the
View Vectors icon on the Encounter Test schematic display to view simulation results and
gain visual insight into the test data.
Note: Although General Purpose Simulation provides a less constrained simulation
capability compared to High Speed Scan-Based Simulation, it does so at the price of
simulation speed. General Purpose Simulation will typically be at least an order of magnitude
slower than High Speed Scan-Based Simulation on the same design with the same patterns.
The key here is that General Purpose Simulation should only be used in those cases where
High Speed Scan-Based Simulation cannot be applied.
General Purpose Simulation provides some features for optimizing its performance under
varying conditions of design size, fault count, and computer resource availability. The primary
means for controlling the run time and memory usage during simulation is a technique known
as multi-pass fault simulation. This means that General Purpose Simulation can simulate a
given pattern set multiple times. For each pass, a subset of the faults not yet simulated is
chosen for simulation. This reduces the memory requirements of fault simulation.
There is no way to determine exactly how much memory is required and hence calculate the
ideal number of faults that should be simulated for a given pass. General Purpose Simulation
makes decisions for this number, but you can specify the Number of Faults per Pass and the
Maximum Number of Overflow Passes to control the multi-pass process.
Another feature of General Purpose Simulation that controls run time is fault dropping. As
simulation progresses, the simulator detects that certain faults are consuming an inordinate
amount of run time. When detected, these faults are dropped from the simulation for the
current run. Note that any faults so dropped are attempted for simulation on successive runs,
thus detection of the fault is still a possibility. The Machine Size sets the threshold at which
fault dropping occurs.
Good Machine Delay Simulation is similar to General Purpose Simulation in that it is an event
driven simulation. As the name implies, however, it does not perform fault simulation. Good
Machine Delay Simulation should be used when simulation of a design with delays and/or
time-based stimulus and measurements is required. Good Machine Delay Simulation cannot
be used for weighted-random (WRPT) or LBIST tests unless these tests are first converted
to stored pattern format using the Manipulate Tests tool.
Notes:
1. To perform Test Simulation using the graphical interface, refer to “Simulate Vectors” in the
Graphical User Interface Reference.
2. To perform Test Simulation using command lines, refer to “simulate_vectors” in the
Command Line Reference.
3. Also refer to “Simulating Vectors” on page 258.
For example, specify write_vectors testrange=1:5 to report the first five test
sequences. The following is a sample output:
TEST SEQUENCE COVERAGE SUMMARY REPORT
Test |Static |Static |Dynamic |Dynamic |Sequence|Overlapped|Total |
Sequence |Total |Delta |Total |Delta |Cycle |Cycle |Cycle |
|Coverage|Coverage|Coverage|Coverage|Count |Count |Count |
You can also specify a combination of odometer value with relative numbers. Any testrange
entry in a comma separated list of testranges that contains a period is assumed to be an
odometer entry. Any entry without a period is assumed to be a relative test number.
For example, to extract the first 10 sequences along with the sequence 2.4.1.1.1, specify the
test range as follows:
testrange=1:10,2.4.1.1.1
Note: A relative test number cannot be used to identify a modeinit, scan or setup sequence.
These sequences must be identified using the odometer value.
To specify a range of experiments in the testrange keyword, use a period (.) between the
experiment numbers, such as shown below:
testrange=1.:2.
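For example, a sketch that writes the first ten test sequences along with sequence 2.4.1.1.1;
the workdir, testmode, and inexperiment keywords are assumed to follow the same
conventions as the other commands in this guide:
    write_vectors workdir=<workdir> testmode=<testmode> inexperiment=<experiment> testrange=1:10,2.4.1.1.1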
For more information, refer to “Reading Test Data and Sequence Definitions” on page 245.
You can import this input into Encounter Test and use it as input to the simulator. This
simulation capability provides a means of fault grading functional tests (or other manual
patterns) and inserting them into the tester data.
Fault simulation of functional tests is typically faster in zero-delay mode than in unit-delay
mode. However, with a large design, a large number of patterns required for a functional test,
and a large number of faults remaining untested after scan chain tests, it may not be feasible
to perform full fault simulation within a reasonable time. Under these conditions, we
recommend using the Random Fault Selection option, nrandsel, to fault grade your
functional tests. This allows you to obtain an estimate of the combined fault coverage of the
scan-based ATPG tests plus the functional tests.
Refer to “1149.1 Boundary Controls” in the Modeling User Guide for additional
information on these latch states.
Encounter Test detects when more than one clock is stimulated or pulsed in any one
event, and reports the clock nets that have been changed as a result. This function,
called multiclock, is not performed for init procedures, scan chain tests, and macro tests.
See “Tester_Loop” in the Test Pattern Data Reference for a description of init
procedure.
■ Bi-Directional I/Os
The functional test case should be written so as not to create three-state contentions.
Typically this would require the product drivers go to Z before any BIDI PI Stims to 1/0,
and BIDI PI Stims to Z should occur before the product drivers can go out of Z. These
rules can be difficult to follow while developing a functional test case because the chip
driver enable state must be known when working with BIDI Stims. Note that it is not
impossible to have a simultaneous clock event to enable or disable a chip driver and a
Stim Event on a BIDI PI (causing only a transient contention) because Clock and Data
PI Events must be separated to achieve race-free simulation results.
First is the issue of valid simulation results so that the tester does not fail a correctly designed
and manufactured chip. If the ATPG patterns or functional tests have been written correctly,
(refer to “Functional Test Guidelines” on page 267) and if there are no unpredictable races in
the design (that is, no severe TSV violations or data hold-time violations), either a zero-delay
or unit-delay mode of simulation can work if the models are developed correctly. A zero-delay
simulation predicts the clock always wins the clock/data race. That is, for clock gating or data
hold time races, zero-delay simulation will make calculations assuming that the clock goes
Off before new data arrives. For a unit-delay simulation model to work, the inherent races
among Flip-Flops, latches, and RAMs have to be accounted for in the models, taking clock
distribution unit-delay skews into consideration. If gate unit delays are not accurate
representations of clock distribution skew (for example, due to heavily loaded tests), you
might have to modify the models to make the simulation work.
In addition to balancing the clock distribution unit delays, another way to make the unit-delay
simulation model work properly is to insert gates (buffers) into the data inputs or outputs of
all latch flip-flops or RAM models. This delays the propagation of new data signals feeding to
opposite phase latches until their clocks can turn Off. The number of delay gates required in
the latch/flip-flop/RAM models depends on how well balanced the clock distribution tree logic
is, how much logic already exists between latches/flip-flops/RAMs in the design, and
whether you need to use the xwidth option to predict unknown values for some types of truly
unpredictable design races. This unit-delay simulation technique using the xwidth option
(Delay Uncertainty Specific Sim option on the graphical user interface) is a major reason
to select unit-delay simulation over zero-delay simulation.
Unit-delay simulation mode allows for more accurate simulation in the presence of structures
which present problems for zero-delay simulation provided the model is unit-delay accurate.
For zero-delay simulation to function properly, the clock shaping logic must be identified to the
simulation engine to prevent the suppression of pulses. Because TSV is intended to identify
such errors, unit delay is used for rules checking, as the identification of TSV violations is
required for accurate simulation. Given an accurate unit-delay model, zero-delay simulation does not
provide any advantage over unit delay simulation and will yield incorrect results in the
presence of incorrectly modeled clock shaping networks.
Confidence in Encounter Test data that is produced with cut points can be gained by use of
the Verify On-Product Clock Sequences tool (see “On Product Clock Sequence Verification”
in the Verification User Guide) if a simulation model (instead of a black box) is available for
the OPC logic. Additional confidence may be gained by resimulating the test data without the
use of cut points. You may be able to export the test data and re-import it to another test mode
that does not have cut points. However, in some cases this may not be possible. For example,
if the cut points are necessary for the definition of scan strings, and you define a test mode
without any cut points, the simulator would require an expanded form of the tests where each
scan operation appears as a long sequence of scan clock pulses. You may or may not want
to simulate the scan operations at this detailed level.
To avoid the necessity of defining additional test modes for simulation without cut points, and
to cover the case where the detailed simulation of scan operations is not needed, there is a
simulation option for ignoring cut points. Using this option, you can generate test data using
cut points and resimulate the tests in the same test mode, ignoring the cut points. Specify
useppis=no on the simulation (analyze_vectors) command line to use this feature.
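For example, a sketch of resimulating an experiment in the same test mode while ignoring cut
points; the experiment names are placeholders:
    analyze_vectors workdir=<workdir> testmode=<testmode> inexperiment=atpg_opc experiment=atpg_opc_nocut useppis=no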
InfiniteX Simulation
InfiniteX simulation can be used to control how High Speed Scan Based Simulation simulates
StimPI and StimLatch events when race conditions have been identified by Verify Test
Structures. This mode of simulation employs a pessimistic approach to simulating race
conditions by introducing Xs into the simulation of a StimPI or StimLatch event to
pessimistically simulate the race condition. For example, if Verify Test Structures has
identified certain clock-data conditions, the simulator introduces an X into StimPI events
where the PI is changing state (0->X->1 as opposed to 0->1 directly).
Invoke InfiniteX simulation by either specifying a value for the infinitex keyword in the
command environment or by selecting High Speed Scan Based Simulation option to Use
pessimistic simulation on the GUI. The default for infinitex is controlled by whether TSV
was previously run. If TSV was not run, standard simulation is performed. If TSV was run, the
default behavior for infiniteX simulation is dictated by the following conditions:
■ InfiniteX simulation for latches is performed if:
❑ Verify Test Structures check Analyze potential clock signal races did not complete
or was not run
❑ Or, Analyze potential clock signal races was run and produced message TSV-059.
For additional information, refer to “Analyze Potential Clock Signal Races” in the Custom
Features Reference and “Analyze Potential Clock Signal Races” in the Verification
User Guide.
■ InfiniteX simulation for PIs is performed if:
❑ Verify Test Structures check Analyze test clocks' control of memory elements did not
complete or was not run
❑ Or, Analyze test clocks' control of memory elements was run and produced message
TSV-008 or message TSV-310.
For additional information, refer to “Analyze test clocks’ control of memory elements” for
LSSD in the Custom Features Reference and “Analyze test clocks’ control of memory
elements” for GSD in the Verification User Guide.
Note: You can turn off the infiniteX simulation mode by specifying infinitex=none for the
create_logic_tests command.
All latches are simulated the same way in that all are simulated either optimistically or
pessimistically based on the selected option.
The following are limitations associated with the pessimistic method of simulation:
■ Simulation activity that occurs during Build Test Mode or Verify Test Structures is not
affected. This may cause the following conditions:
For a detailed analysis of the interactions of the terms and parameters used to determine the
termination values during Test Generation and Simulation, refer to “Termination Values” in the
Modeling User Guide.
❑ scan
❑ macro
❑ driver_receiver
■ Supports only the test type of static.
■ Pattern elimination for three-state contention or ineffective tests is not supported.
■ Only test sections of Logic, IDDQ, Driver_Receiver, ICT (all types), and IOWRAP (all
types) can be sorted by sequence.
■ SimVision is not supported on Linux-64 bit platforms.
■ The capability to establish simulation run time controls from information stored in vector
files generated by Encounter ATPG has a limitation related to the detection of syntax
errors introduced by manually editing the simOptions keyed data in an Encounter Test
vector file. Only invalid keyword values can be detected and reported. Erroneous
keywords themselves are not detected as syntax errors but are simply ignored. It is
generally recommended that "simOptions" keyed data not be modified.
10
Advanced ATPG Tests
This chapter discusses the commands and techniques to generate advanced ATPG test
patterns. These techniques are performed on designs at the same stage as in the basic ATPG
tests. Some of these techniques require special fault models to be built.
A few issues must be considered when attempting to integrate IDDq testing into a high-
volume manufacturing test process:
■ Measurement of the supply current takes longer than measuring signal voltages. The
design has to be given time to settle into a quiescent state after the test is applied, and
the current measurement has to be averaged over some time interval (possibly over
multiple tests) to eliminate noise in the measurement. Therefore, the test time per pattern
is quite large, so the number of current sensing tests must be kept small (considerably
less than one hundred for what most manufacturers would consider reasonable
throughput in chip testing).
■ Some designs may contain high steady-state current conditions that must be avoided.
■ The techniques used to find the cutoff between good and faulty levels of IDDq current are
often ad hoc and empirical, based on experience gained through extensive
experimentation.
Despite the challenges associated with IDDq testing, the benefit of detecting faults which are
not detected by conventional voltage sensing techniques (for example, gate-oxide shorts) has
given IDDq testing a place in many chip manufacturers' final test processes.
The Encounter Test stored pattern test generation application can automatically generate
IDDq test patterns. Since application of IDDq patterns may take significantly longer than
normal patterns due to having to wait for the design activity to quiesce, Encounter Test
provides some options for helping to keep the size of the IDDq test pattern set small.
IDDq test vectors can be used to support extended voltage screen testing via the generation
of a scan chain unload (Scan_Unload event) immediately after each Measure_Current
event. The Scan_Unload event supports the existing Ignore Latch functions as found in the
existing ATPG pattern generation.
To perform Create IDDq tests using the graphical user interface, refer to “Create Iddq Tests”
in the Graphical User Interface Reference.
To perform Create IDDq tests using the command line, refer to “create_iddq_tests” in the
Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
The most commonly used keywords for the create_iddq_tests command are:
■ compactioneffort=<value> - The amount of time and effort to reduce the
pattern size. Default is medium effort.
■ iddqeffort=<value> - The amount of time and effort for the test generator to create
tests for hard-to-test faults. Default is low effort.
■ iddqunload=no|yes - Whether to enable the measure of the scan flops. Default is no.
■ ignoremeasures=<filename> - List of flops to ignore during measures. Refer to
“Ignoremeasures File” on page 230 for more information.
■ iddqmaxpatterns=# - The maximum number of IDDq patterns. The default is the
number of sequences defined in the Tester Description Rule (TDR) used when creating
the testmode; specifying this keyword overrides that default. Refer to Tester Description
Rule (TDR) File Syntax in the Modeling User Guide for more information.
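For example, a sketch of an IDDq run that raises the compaction effort, enables scan unloads
after each current measure, and caps the pattern count; the experiment name is a
placeholder:
    create_iddq_tests workdir=<workdir> testmode=<testmode> experiment=iddq_exp compactioneffort=high iddqunload=yes iddqmaxpatterns=50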
Prerequisite Tasks
Complete the following tasks before executing Create IDDq Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
Output
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Note: Deterministic Faults Analysis will not be available for random pattern generation.
With compactioneffort=medium, the test generator compacts tests to a certain point and
then randomly fills and simulates the patterns multiple times. With compactioneffort=high,
more tests are compacted into the patterns, resulting in a higher test coverage for a given
number of tests. If you do not set the pattern limit (which is 20 patterns by default), the end
result with compactioneffort=high will most likely be a higher coverage with a smaller
number of patterns. This is because IDDq pattern generation is dominated by simulation time,
so fewer patterns simulated means less overall time.
The test generator works harder, but the way IDDq patterns are simulated and fault graded
results in simulation time making up the majority of the IDDq pattern generation task.
To perform Create Random Tests using the graphical interface, refer to “Create Random
Tests” in the Graphical User Interface Reference.
To perform Create Random Tests using the command line, refer to create_random_tests in
the Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
The most commonly used keywords for the create_random_tests command are:
■ maxrandpatterns=# - Specify the maximum number of random patterns to generate.
■ minrandpatterns=# - Specify the minimum number of random patterns to simulate.
■ detectthresholdstatic=# - Specify a decimal number (such as 0.1) as the
minimum percentage of static faults that must be detected in a given interval. Simulation
for the current clocking sequence terminates when this threshold is not met.
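For example, a sketch with placeholder names that bounds the random pattern count and sets
the static detection threshold:
    create_random_tests workdir=<workdir> testmode=<testmode> experiment=random_exp maxrandpatterns=10000 minrandpatterns=1024 detectthresholdstatic=0.1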
Prerequisite Tasks
Complete the following tasks before executing Create Random Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
Output
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
Global Statistics
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 1022 737 52 187 72.11 75.98
Total Dynamic 1014 242 44 713 23.87 24.95
****************************************************************************
----Final Pattern Statistics----
Test Section Type # Test Sequences
----------------------------------------------------------
Logic 45
----------------------------------------------------------
Total 45
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Note: Deterministic Faults Analysis will not be available for random pattern generation.
It is possible to control fault detection during Create Exhaustive Tests by inserting Keyed Data
into the test sequence. Create Exhaustive Tests checks each event for the existence of keyed
data, and if found, allows fault detection to begin with that event. The keyed data should only
be placed on a single event within a test sequence. If the keyed data exists on more than one
event, Create Exhaustive Tests begins fault detection on the last event having keyed data.
This scenario does not affect the measure points where Create Exhaustive Tests detects
faults. For example, even if the test sequence contains a Scan_Unload event, the slave
latches are still the only latches used as the detect points.
You can perform Create Exhaustive Tests using only the command line. Refer to
“create_exhaustive_tests” in the Command Line Reference for more information.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
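For example, a minimal sketch with placeholder names:
    create_exhaustive_tests workdir=<workdir> testmode=<testmode> experiment=exhaustive_exp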
Prerequisite Tasks
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
Output Files
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
Restrictions
Macro tests may be static, dynamic, or dynamic timed. See “Test Vector Forms” on page 242
for more information on these test formats. Also refer to Properties for Embedded Core Tests
in the Custom Features Reference for more information.
To perform Create Core Tests using the graphical interface, refer to “Create Core Tests” in the
Graphical User Interface Reference.
To perform Create Core Tests using the command line, refer to “create_core_tests” in
the Command Line Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
■ tdminput= Indicates the name of the input TBDbin file containing pre-existing test data
for the macro (core) being processed. This file specification is required only if you invoke
TCTmain for migrating pre-existing core tests from the core boundary to the package
boundary.
■ tdmpath= Indicates the directory path of input TBDbin files containing pre-existing test
data for the macro (core) being processed. The specification is a colon separated list of
directories to be searched, from left to right, for the input TBDtdm files. You can also set
this option using Setup Window in the graphical user interface.
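For example, a sketch that migrates pre-existing core tests to the package boundary; the file,
directory, and experiment names are placeholders:
    create_core_tests workdir=<workdir> testmode=<testmode> experiment=core_exp tdminput=core_tests.TBDbin tdmpath=./coreA_vectors:./coreB_vectors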
To perform Create MBIST patterns using the graphical interface, refer to "Create Embedded
Tests" in the Graphical User Interface Reference.
For each on-chip receiver, objectives are added to the fault model for RCV1 and RCV0 at
each latch fed by the receiver. These tests are typically used to validate that the driver
produces the expected voltages and that the receiver responds at the expected thresholds.
These tests require a fault model that includes driver/receiver faults (build_faultmodel
includedrvrcvr=yes).
To perform Create Parametric Tests using the graphical interface, refer to "Create Parametric
Tests" in the Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
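As a sketch, assuming the command-line form is create_parametric_tests and that it
follows the same keyword conventions as the other commands in this guide (placeholder
names):
    create_parametric_tests workdir=<workdir> testmode=<testmode> experiment=parametric_exp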
Prerequisite Tasks
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
Output Files
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
These tests require a fault model that includes stuck driver and shorted net objects
(build_faultmodel sdtsnt=yes).
To perform Create I/O Wrap Tests using the command line, refer to “create_iowrap_tests” in
the Command Line Reference.
To perform Create I/O Wrap Tests using the graphical user interface, refer to Create IOWrap
Tests in the Graphical User Interface Reference.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
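For example, a minimal sketch with placeholder names (the experiment keyword is assumed
to follow the same convention as the other commands in this guide):
    create_iowrap_tests workdir=<workdir> testmode=<testmode> experiment=iowrap_exp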
Prerequisite Tasks
Output Files
Encounter Test stores the test patterns in the experiment name.
Command Output
The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.
repeatedly manipulate the finite state machine of the TAP controller every time a test is
being developed.
Figure 10-1 Overview of Scan Structure for 1149.1 LSSD/GSD Scan SPTG
Figure 10-1 shows an overview of the scan structure used for 1149.1 LSSD or GSD scan
SPTG. Note the following important points:
a. The TAP controller is in the Test-Logic-Reset state at all times, both for test
generation and for scanning.
b. The scan chains used are all the defined LSSD or GSD scan chains; the 1149.1 TDI-
to-TDO scan chains are not used for scanning.
c. The scan protocol, which is automatically generated by Encounter Test, is the usual
A-E-B scan sequence, utilizing the LSSD or GSD scan clocks and gates.
2. TAP scan
For 1149.1 TAP scan SPTG the design is brought to a user-specified TAP controller state
by the mode initialization sequence, and all SPTG takes place with the controller held in
the specified state. This state is not maintained for the scan operation, however, which is
done exclusively by way of the TAP and therefore necessitates TAP controller state
changes. The test generation state desired is specified by the TAP_TG_STATE
parameter of the SCAN TYPE mode definition statement. This state may be Test-Logic-
Reset, the same as is automatically assumed for 1149.1 LSSD or GSD scan SPTG, or
may be either Run-Test/Idle, Shift-DR or Pause-DR.
Figure 10-2 Overview of Scan Structure for 1149.1 TAP Scan SPTG
Figure 10-2 shows an overview of the scan structure used for 1149.1 TAP scan SPTG. In
contrast with Figure 10-1 note the following points:
a. The TAP controller is in one of four supported states for test generation, selectable
by the user: Test-Logic-Reset, Run-Test/Idle, Shift-DR or Pause DR. (The selected
TG state does not hold for the scanning operation.)
b. The scan chain used is whatever test data register is enabled between TDI and
TDO. Optionally, other defined scan chains (SI/SO) will also be used for scanning,
but only if scannable via the TAP scan protocol (see next).
Whether to do 1149.1 SPTG by LSSD or GSD scan or TAP scan is dependent on how your
scan chains are configured. If the tester on which the chip is to be tested is not too severely
pin limited, then the best way to do stored pattern test generation is probably by LSSD or GSD
scan, fully utilizing the normal scan capabilities of the chip. If, on the other hand, you are
dealing with a severely constrained tester pin interface, as might very well be the case, for
instance, with burn-in test, then you might want to consider TAP scan SPTG. With TAP scan
SPTG all scan operations are performed by the standard 1149.1 defined scan protocol. TCK
and TMS are manipulated to cause scan data to move along the TDI-to-TDO scan chain, as
well as possibly other defined SI-to-SO scan chains. So you are not limited to just one scan
chain under TAP scan SPTG, but it is important to realize that all scan chains that are defined
must be scannable in parallel and all by the same TCK-TMS scan protocol.
The way that these two things are accomplished is by defining a special instruction - call it
TAPSCAN - that when loaded into the 1149.1 instruction register (IR) will bring the necessary
clock and scan path gating into play. Refer to Figure 10-3, which describes TAP controller
operation; it will aid in understanding why clock gating is necessary.
As seen from this figure, scanning of a test data register (TDR) must take place by exercising
the state transitions in the Select-DR-Scan column. Thus we have the following sequence of
state transitions for a TDR scan: Select-DR-Scan, Capture-DR, Shift-DR (repeat), Exit1-DR,
Update-DR. For this TDR scan protocol it is necessary to pass through the Capture-DR state,
in which data is parallel loaded into the TDR selected by the current instruction. Bearing in
mind that SPTG will already have captured test response data into this TDR prior to scanning,
you must not allow the Capture-DR state to capture new data, which would destroy the test-
generation-derived response. It is thus necessary that the use of TCK as a scan clock to the
TDR be gated off except when in the Shift-DR state. Figure 10-4 shows an example of how
this can be done for a MUXed scan design.
Figure 10-4 Example of TCK Clock Gating for TAP Scan SPTG
The TAPSCAN signal in this figure is hot (logic 1) when the instruction loaded into the IR is
TAPSCAN, the name we have arbitrarily assigned to the special instruction implemented in
support of TAP scan SPTG.
Gating of scan data must similarly use the ShiftDR TAP controller signal and the decode of
the TAPSCAN IR state. Figure 10-5 shows an example of how this might be implemented for
a MUXed scan design.
Figure 10-5 Example of scan chain Gating for TAP Scan SPTG
The first thing necessary for 1149.1 SPTG is that the test mode of interest be recognizable
as having 1149.1 characteristics. This means that it must have at least one each of the
following pins:
■ TEST_CLOCK Input
■ TEST_MODE_SELECT Input
■ TEST_DATA_INPUT
■ TEST_DATA_OUTPUT
Refer to “1149.1 Boundary Controls” in the Modeling User Guide for details on these pin
types.
Optionally, there may also be a Test Reset Input -TRST. There are three ways of identifying
these pins to Encounter Test - either in the model, in the BSDL or in Test Mode Define
ASSIGN statements. Conflicts between these three possible sources of information are
resolved by having the mode definition ASSIGN statement take precedence over the BSDL,
which in turn takes precedence over the model.
Given that the test mode being processed is 1149.1, then there are two components to the
test generation process:
1. Fault simulation of 1149.1 BSV verification sequences
Algorithmic SPTG will likely have some difficulty generating tests for faults associated
with the 1149.1 test logic (TAP controller, BYPASS register, etc.). The most efficient way
to cover these faults is by invoking the General Purpose Simulator to simulate the
verification sequences developed by 1149.1 Boundary Scan Verification (BSV). Here is
a sample mode definition which will serve both for BSV and invocation of the General
Purpose Simulator:
Tester_Description_Rule = tdr1
;
test_types none
;
faults static
;
2. Algorithmic SPTG
The determination of whether SPTG is to proceed using LSSD or GSD scan or TAP scan
is made by interrogating the mode definition TEST_TYPES and SCAN TYPE
statements:
❑ TEST_TYPES not NONE, SCAN TYPE = LSSD or GSD indicates LSSD or GSD
scan
❑ TEST_TYPES not NONE, SCAN TYPE = 1149.1 indicates TAP scan
Following are two sample mode definitions for 1149.1 SPTG, one for LSSD or GSD scan
and one for TAP scan:
/**************************************************************/
/*
/* Sample mode definition for 1149.1 LSSD or GSD scan SPTG
/*
/**************************************************************/
Tester_Description_Rule = tdr1
;
faults static
;
/**********************************************************/
/*
/* Sample mode definition for 1149.1 TAP scan SPTG
/*
/**********************************************************/
Tester_Description_Rule = tdr1
;
faults static
;
One other consideration applies with respect to mode definition, but only if TAP scan is called
for. The SCAN TYPE mode definition statement must specify both an INSTRUCTION and a
TAP_TG_STATE.
■ INSTRUCTION:
Specify the instruction to be loaded into the IR to select a test data register (TDR) for
scanning through the TAP. This instruction will configure the design under test so that
SPTG can work effectively in the TAP controller state designated by TAP_TG_STATE. It
not only gates the selected TDR for scanning but also causes the correct TCK gating to
be brought into effect for all those memory elements to be scanned. (An example of the
type of clock gating necessary is shown in Figure 10-4 on page 291.)
The instruction to be loaded is specified in one of two ways, either:
❑ bit_string
Specify the binary bit string to be loaded into the IR, with the bit closest to TDI being
the left-most bit and the bit closest to TDO being the right-most bit.
❑ instruction_name
Specify the name of the instruction to be extracted from the BSDL.
■ TAP_TG_STATE
This parameter is used to specify the TAP controller state in which test generation is to
be performed. Acceptable values:
a. RUN_TEST_IDLE (RTI)
TG is to take place in the Run-Test/Idle state of the TAP controller.
b. TEST_LOGIC_RESET (TLR)
TG is to take place in the Test-Logic-Reset state of the TAP controller.
c. SHIFT_DR (SDR)
TG is to take place in the Shift-DR state of the TAP controller.
d. PAUSE_DR (PDR)
TG is to take place in the Pause-DR state of the TAP controller.
e. CAPTURE_DR (CDR)
This option is intended for use only if you have implemented parallel scan capture
clocking via the CAPTURE_DR state. This is not a recommended way to implement
internal scan via an 1149.1 interface, but Encounter Test will support it in a limited
fashion.
For CAPTURE_DR 1149.1 test modes, Encounter Test generates a test sequence
definition that can be used when performing ATPG. When there are clocks defined
beyond TCK, an additional sequence definition is generated. It is permissible to copy
this additional sequence to use as a template for defining additional test sequences
for use during ATPG. Note that all such test sequences must include a single TCK
pulse and may optionally include as many pulses of other clocks as desired.
See TAP_TG_STATE, described under “SCAN” in the Modeling User Guide for
additional information.
From this point on, for both LSSD or GSD scan and TAP scan, the usual SPTG methodology
is followed for the 1149.1 test mode. Test Mode Define will automatically derive the mode
initialization sequence and scan protocol necessary for test generation.
Refer to “” on page 312 for a typical IEEE 1149.1 Test Generation task flow.
1. Build Model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.
2. Build Test Mode for 1149.1 Boundary
A sample of a mode definition file for this methodology follows:
TDR=bs_tdr_name
SCAN TYPE=1149.1 IN=PI OUT=PO;
TEST_TYPES=NONE;
FAULTS=NONE;
.
.
.
For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Verify 1149.1 Boundary
For complete information, refer to “Verify 1149.1 Boundary” in the Verification User
Guide.
During the verification of the 1149.1 structures, Encounter Test creates an experiment
called 1149. This experiment can be fault graded to get coverage.
4. Analyze 1149.1 patterns
Fault grade the 1149 test patterns. Refer to “Analyzing Vectors” on page 260 for more
information.
5. Commit Results
Refer to “Committing Tests” on page 69 for more information.
The following assumptions are used to address the reduction of chip manufacturing cost:
■ Tester time and test data volume are among the primary sources of test cost.
■ Any decrease in the time to apply patterns required to detect a faulty design translates
to cost savings.
Breaking the elements of cost into finer detail, it can also be assumed that even a near-minimal
set of test vectors can take too long to apply and require too much tester buffer resource; the
conclusion is that the cost of each test vector should be reduced. The two aspects that contribute to the cost
of a test vector are:
■ stimulus data - the data stored on the tester which allows us to excite certain areas of the
design.
■ response data - data stored on the tester which allows us to detect when the design
under test responds differently than expected.
For stored pattern testing, the stimulus and response data comprise about 98% of the test
data for a large chip, with the remaining 2% attributed to clocking template and control
overhead. The response data can be as much as twice as large as the stimulus data
depending on the type of tester being used and how the data bits are encoded. However,
stimulus data can be more of a problem for highly sequential (partial scan) designs. For large
chips, most of the stimulus and response data is located in the scan events.
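As a rough, purely illustrative calculation (the design size and pattern count are hypothetical):
a full-scan design with 500,000 scan latches and 4,000 stored patterns carries on the order of
500,000 x 4,000 = 2 x 10^9 stimulus bits in its scan loads alone and, by the ratio above, up to
twice that amount of response data, while the clocking templates and control events account
for only the remaining few percent.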
The Encounter Test Stored Pattern test methodology includes a method, based on the
creation and use of an On-Product MISR test mode, to reduce data volumes while supporting:
■ Both stored pattern and WRP test methods
■ Use of test reordering to improve fault coverage versus pattern counts
■ Diagnostics
■ Reduced pin count test methodology
Refer to “On-Product MISR (OPMISR) Test Mode” in the Modeling User Guide for
additional information.
The resulting test data from Stored Pattern Test Generation will contain a Channel_Scan
event and optionally, Diagnostic_Scan_Unload events to represent the scan-out operation.
There will also be Product_MISR_Signature events to denote the expect data for each test.
See the “Event” section of the Test Pattern Data Reference for details on these event
types.
When a failing MISR signature is detected in the application of OP-MISR tests, diagnosis of
the failure proceeds by switching to the diagnostics mode. In this mode the design is
reconfigured so that channel latch values can be scanned out. Encounter Test automatically
generates the Diagnostics_Observe and Diagnostics_Return sequences for
purposes of getting to and from the diagnostic test mode. Refer to “Define_Sequence” types
in the Test Pattern Data Reference for details.
Using On-Product MISR, the test data file can realize a potential reduction of up to 90% for a
large design. For tests that include Diagnostic_Scan_Unload events for diagnostics,
there are corresponding increases to the test data volume. You can limit the amount of
diagnostic test data by specifying a fault coverage threshold (diagnosticmeasures) to limit
the volume of diagnostic measure data to retain.
Parallel Processing
The ongoing quest to reduce test generation processing time has resulted in the introduction
of the data parallelism approach to the Encounter Test Generation and Fault Simulation
applications. Parallel Processing implements the concept of data parallelism, the
simultaneous execution of test generation and simulation applications on partitioned groups
of faults or patterns. This reduces the elapsed time of the application.
The data set is partitioned either dynamically (during the test generation phase of stored
pattern test generation) or statically (at the beginning of the simulation phase).
The set of hosts used for parallel processing can be designated using one of these methods:
■ Specifying a list of machines (available via the graphical user interface for the
application being run or via the command line).
■ Using Load Sharing Facility (LSF) (available via the command line only; see “Load Sharing
Facility (LSF) Support” on page 301).
Encounter Test supports parallel processing for Stored Pattern Test, Weighted Random
Pattern Test, Logic Built-In Self Test, and Test Simulation. Refer to the following for conceptual
information on these applications:
■ “Advanced ATPG Tests” on page 275
■ “LBIST Concepts” on page 323
■ “Test Simulation Concepts” on page 263
■ “Stored Pattern Test Generation Scenario with Parallel Processing” on page 309 for a
sample processing flow incorporating Parallel Processing Test Generation.
A controller process is started on the machine from which the run is initiated. Test generation
and simulation processes are started on the machines identified by the user. The controller
process also performs test generation and simulation; at any given time it is busy with only
one activity, performing test generation when it is not busy with simulation or with handling
communication from the other processes. The controller process is responsible for generating
the patterns and the fault status file.
The preceding paragraphs provide an overview of how parallel processing works for stored
pattern test generation.
For command line invocation, refer to “Parallel Processing Keywords” in the Command Line
Reference.
Parallel processing support is available for simulation of patterns. The following features are
supported:
■ High Speed Scan and General Purpose simulation (via command line
simulation=hsscan|gp)
■ Simulation of Scan Chain, Driver-Receiver, IDDq and Logic Tests.
The queuename option Numslaves=# specifies the number of ATPG slave processors on
which to run parallel processing.
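As a hedged sketch only (the exact keyword combination accepted in a single invocation is an
assumption; see the Command Line Reference for the authoritative syntax), a parallel
resimulation run might be submitted as:
analyze_vectors workdir=<workdir> testmode=<testmode> experiment=<experiment> simulation=hsscan queuename=<lsf_queue> Numslaves=4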
Parallel pattern simulation does not currently support signature-based patterns (WRPT and
LBIST patterns). Only the fault simulation phase of the algorithm is parallelized.
A controller process is started on the machine that the run is started from. Fault simulation
processes are started on the machines identified by the user. The controller process performs
good machine simulation and coordination of fault simulation processes. It also produces the
patterns and fault status files. The host used to start the application is used to run the
controller. The fault simulation phase is run in parallel on the hosts you select. Therefore, if n
hosts are selected, there are n simulation jobs running.
The Good Machine Simulation (in order to write patterns) on the originating host is performed
in parallel with the Fault Simulation running on the selected hosts. Therefore, for optimal
results, do not select the host that you originate the run from as one of the hosts to run Fault
Simulation on.
Optimum performance is achieved if the selected machines running in parallel have been
dedicated to your run.
In a parallel run, a greater number of patterns may be found effective as compared to a serial
run.
To perform parallel processing using command line, refer to “Parallel Processing Keywords”
in the Command Line Reference.
Restrictions
For Logic Built-In Self Test restrictions, refer to “Restrictions” on page 330.
■ The current parallel pattern support can be disabled using the expert option
parallelpattern=no. This option causes the algorithm to revert to the parallel
processing support available with previous releases.
■ The parallel processing architecture does not support dynamic distribution of faults. This
forces you to run on dedicated machines in order to benefit from parallel processing.
It may also produce more patterns.
■ General Purpose Simulation is not supported for WRPT and LBIST when running in
parallel mode. High Speed Scan Based Simulation should be used.
■ LSF machines that accept more than one job can cause parallel LSF to terminate. The
workaround is to use the lsfexclusive option. If you specify lsfexclusive,
ensure your queue is configured to accept exclusive jobs.
Ensure the following conditions are met in your execution environment prior to using LSF:
■ Ensure that the statements in your .profile or the profile files for your shell (.kshrc,
.cshrc) are error-free; otherwise you will not be able to run in the parallel environment.
The profile statements must be syntactically correct: invocations of commands that do
not exist or that are typed incorrectly will prevent successful setup of the parallel
processing environment.
■ Ensure that you can rsh to the machines selected for the parallel run from the machine
that you originate your run from and be able to write to the directory that contains your
part. The following command executed from the machine that you plan to originate your
run from will validate this requirement:
rsh name_of_machine_selected touch directory_containing_part/name_of_some_file
You should also be able to rsh to the selected host and be able to read the directory (and
all its subdirectories) where the code resides. Execute the following command from the
machine you plan to originate your run from in order to verify this:
rsh name_of_machine_selected ls directory_where_code_is_installed
❑ The version of rsh you use should inherit tokens obtained via the klog command
(versus one obtained as a result of login).
Notes:
a. If you are starting the run on AIX, verify that your PATH environment variable
ensures that the rsh command from the AFSRCMDS package will be picked up.
This version of rsh inherits tokens obtained via klog. It is usually installed in
/usr/afsrcmds/bin. In addition, ensure that the environment variable KAUTH is
set to yes.
b. Ensure that the directory containing your .profile or .cshrc has read and
lookup access.
c. Create an executable file in /tmp on all machines you will be using that obtains
tokens using the klog command (a sketch of such a file follows this list).
d. Ensure that the file can be read and executed only by you.
❑ xhost + enables all machines to display on the machine that issues the xhost
command.
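A minimal sketch of such a token-refresh file follows; the file name, shell, and contents are
assumptions based on the description above, and klog behavior depends on your AFS
installation:
#!/bin/ksh
# /tmp/get_tokens (hypothetical name): re-obtain AFS tokens for the parallel run
klog
Restrict the file to your own use, for example with chmod 700 /tmp/get_tokens.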
■ The PATH environment variable should be set up to find the bsub command. Specify this
setting in your .login file if you use the C shell and in your .profile if you use the Korn shell.
■ VERY IMPORTANT: The user running the LSF job must be allowed to rsh into the
machines being used while the LSF job is running.
Refer to “Parallel Processing Keywords” in the Command Line Reference for additional
information.
Also note that numerous repositories of LSF information exist on the WWW.
Input Files
Refer to the following for application-specific input files:
■ For Logic Built-In Self Test, “Input Files” on page 330
Output Files
Refer to the following for application-specific output files:
■ For Logic Built-In Self Test, “Output” on page 331
■ For Test Simulation, “Output” on page 259
During parallel processing, interim output files are created for each process.
When the parallel processing ends, these files are combined into the individual output files,
such as faultStatus. The interim files are:
■ remotetb.pid.source_string
■ pid.myapp
■ pid.mynodes
1. Build Model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.
2. Build Test Mode
For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Analyze Test Structure
11
Test Pattern Analysis
This chapter discusses the tools and concepts that can be used to analyze test patterns
within Encounter Test.
Debugging Miscompares
Miscompares can be caused by various factors, some of which are given below:
■ The model is logically incorrect
■ The model does not account for the delay mode requirements of the simulator that
generated the original data (or the one that is being used for the re-simulation). For
example, if the cell description has some extra blocks to model some logical
configuration, the unit delay simulator may find that signals do not get through the logic
on time since it assigns a unit of delay to each primitive in the path. This might work better
with a zero delay simulation.
■ The input patterns are incorrect (either due to an error in the manually generated
patterns; or due to a problem with the original simulation).
The following are some recommended considerations while analyzing the patterns:
1. The first thing to consider is where the “expected responses” come from.
❑ If you are writing manual patterns, the expected responses are included in these
patterns as an Expect_Event (see “TBDpatt and TBDseqPatt Format” in the Test
Pattern Data Reference).
❑ If you are analyzing a TBD from an Encounter Test test generation process, the
simulation portion of that process creates measures to indicate that the tester
should measure a particular set of values on primary outputs and/or latches. When
you resimulate the test data, the simulator compares its results with the previous
results specified by measure events (Measure_PO and Scan_Unload).
2. The next thing to consider is the analysis technique you plan to use. This choice will
influence the type of simulation to use for the re-simulation. Using the Test Simulation
application, you may select a variety of simulation techniques. In addition, there is an
interactive simulation technique specifically aimed at test pattern analysis (and
diagnostics).
❑ Use SimVision facilities to select and display design signals of interest. Refer to
“SimVision Overview” on page 321 for additional information.
3. To view the miscomparing patterns on the graphical view of the design, use View
Vectors to select a test sequence to analyze and invoke View Circuit Values to see the
values displayed on the View Schematic window.
See “Viewing Test Data” on page 317 and “General View Circuit Values Information” in
the Graphical User Interface Reference for more information.
4. If you are satisfied after viewing the patterns and the model and making your own
correlation of the data, or if you are limited to this choice by the data you are analyzing,
view the logic on the Encounter Test View Schematic window and manually
analyze the problem.
To manually create a watch list, use the following syntax and semantics:
■ Each line in the file is a statement, which can be of one of the following types:
Model Object Statement
Each Model Object Statement identifies one Encounter Test Model Object by type and
name. It also allows you to specify an alias name for the Object. The syntax of the
statement is the Model Object name optionally followed by the alias name. The Model
Object name can be in the full name or the short name format. A short name should have a
type specified before the name; the types are NET, PIN, or BLOCK. If a type is not specified,
then net is assumed first, then pin, and then block. The alias name can be any
combination of alphanumeric characters or the following special characters:
!#$%&()+,-.:<=>?@[]^_`|~.
Note: If an entry is a BLOCK, Encounter Test will create a watch list for all ports/pins on
that block. Refer to “expandnets” on page 316 for information on how to identify signals
within a BLOCK.
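The following illustrative Model Object Statements show a typed entry, an entry with an alias,
and a BLOCK entry; the design names are hypothetical (loosely reusing the naming from the
watch list example later in this section):
net "unit_a.buss_out[7]"
pin "unit_a.reg_1.D" reg1_data
block unit_b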
Facility Start Statement
The Facility Start statement marks the beginning of a group of Model Objects that will be
associated together. The statement syntax is the word FACILITY followed by white space,
followed by a facility name, and ending with an open brace '{'. The facility name must start
with an alphabetic character. It may contain alphanumeric characters or underscores. It
cannot end in an underscore or contain two consecutive underscores. These are the
same rules as for identifiers in VHDL. The name will always be folded to upper case and
is therefore case insensitive. When the same facility name appears more than once in
the file, only one facility by that name is created containing all the Model Objects
associated with the facility. It is an error to nest facilities. Here is an example Facility Start
Statement:
FACILITY TARGET {
block xyz
block abc
}
This example directs Encounter Test to record net switching activities for all nets within
block xyz and block abc.
Comments
The characters ’//’ and ’/*’ begin a comment. Comments are allowed at the end of a
statement or on a line by itself. Once a comment is started, it continues to the end of the
line.
■ An example of a watch list:
facility unit_a_buss_byte_0 {
"unit_a.buss_out[7]"
"unit_a.buss_out[6]"
"unit_a.buss_out[5]"
"unit_a.buss_out[4]"
"unit_a.buss_out[3]"
"unit_a.buss_out[2]"
"unit_a.buss_out[1]"
"unit_a.buss_out[0]"
}
facility expandnets {
block unit_b
block unit_c
}
"sys_clock"
"a_clock"
"b_clock"
"init_a.sys_clock"
There can be only one statement per line, and each statement must fit on a single line.
See “Encounter Test Vector Data” on page 238 for information about each level of the
hierarchy.
You can perform a variety of tasks using the hierarchy. Its capabilities include:
■ Moving interactively up and down a hierarchy.
■ Expanding and collapsing levels of a Vectors file.
■ Displaying a design view with simulation values.
■ Displaying Details about specific objects in a Vectors file. This consists of information
also available in a TBDpatt file, including data, an audit summary, and a listing of the
contents of the Vectors file for the selected object.
■ Creating a minimum Vectors file containing the minimal amount of data needed to
perform simulation.
■ Displaying Attributes of specific entities in a Vectors file.
■ Displaying simulation results requiring special analysis (for example, miscompare data).
■ Display of failing patterns based on the currently selected failures. See Reading Failures
in the Diagnostics User Guide for more information.
The vector information is collected and summarized according to test sequences having a
similar stream of patterns, events, and clocks. The report considers each unique event
stream as a template and displays it as a string of characters, with each character
representing a different event type.
If you set the vectors option to yes, the report displays the short form of vectors in the
summary. Set the key option to yes to print the description for each character used in the
template string.
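A hedged example invocation follows; the workdir, testmode, and experiment keywords are
assumed placeholders, while vectors and key are the reporting options described above:
report_vector_summary workdir=<workdir> testmode=<testmode> experiment=<experiment> vectors=yes key=yes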
The following is a sample vector summary generated for a test pattern by using the
report_vector_summary command with key=yes and vectors=yes:
Measure_PO
Stim_PI
Pulse(CLOCK)
Measure_PO
Stim_PI
Pulse(CLOCK)
Measure_PO
Scan_Unload
----Template #2---- logic
Scan_Load
Stim_PI
Pulse(CLOCK)
Measure_PO
Scan_Unload
----Template #3---- logic
Scan_Load
Stim_PI
Stim_Clock(CLOCK)
Measure_PO
Stim_Clock(CLOCK)
Scan_Unload
----Template #4---- logic
Scan_Load
Stim_PI
Measure_PO
Scan_Unload
----Template #5---- init
Stim_PI
SimVision Overview
You can use SimVision to analyze simulation results. Its capabilities include:
■ Displaying digital waveforms
■ Displaying values on nets during any time period in a simulation
■ Arranging signals (move, copy, delete, repeat, show, hide) in the window for easy
viewing, enabling better interpretation of results
■ Multiple waveform graphs per window
■ Multiple windows that allow you to organize data and to view multiple test data segments
■ Annotating waves with text
■ Performing logical and arithmetic operations on waveforms
A .dsn or .trn file can be input to SimVision for waveform analysis. The files can be created
by either running Encounter Test test generation or simulation applications or from View
Vectors by selecting a test section with scope data then clicking the custom mouse button to
invoke Create Waveforms for Pin Timing. Refer to the description for the View Vectors
“View Pull-down” in the Graphical User Interface Reference.
Prerequisite Tasks
■ Select an experiment.
■ Create a Vectors file by running a test generation or simulation application.
Input Files
An existing Encounter Test experiment, sequence definition files, and failure data (if existing).
Output Files
If Create Minimum Vectors is selected on the Test View window, a new Vectors file is
created as output.
12
Logic Built-In Self Test (LBIST)
Generation
This chapter discusses the Encounter Test support for Logic Built-in Self Test (LBIST).
LBIST: An Overview
Logic Built-In Self Test (LBIST) is a mechanism built into a design that allows the
design to effectively test itself. Encounter Test supports the design LBIST approach called
STUMPS (Self Test Using MISR and Parallel SRSG). A STUMPS design contains linear
feedback shift registers (LFSRs) which implement a Pseudo-Random Pattern Generator
(PRPG), sometimes referred to as a Shift Register Sequence Generator or SRSG, to generate
the (random) test vectors to be applied to a scan design. It also includes an LFSR which
implements a Multiple Input Signature Register (MISR) to collect the scanned response data.
Refer to “Modeling Self Test Structures” in the Custom Features Reference for additional
information.
This application can generate tests for either or both of the following test types based on user
controls:
■ scan chain tests - refer to “Scan Chain Tests” on page 30
■ logic tests - refer to “Logic Tests” on page 31
❑ Logic tests are done using manually entered test sequences.
LBIST Concepts
Logic Built-In Self Test (LBIST) is a mechanism implemented into a design to allow a design
to effectively test itself. LBIST is most often used after a design has been assembled into a
product for power-on test or field diagnosis. However, some or all of the on-board LBIST
circuitry may also be used when testing the design stand-alone using a general-purpose logic
tester. Design primary inputs and primary outputs may be connected to pseudo-random
pattern generators and signature analyzers or controlled by a boundary scan technique. The
tester may exercise considerable control over the execution of the test, unlike the situation in
a “pure” LBIST environment where the design is in near-total control. This is being pointed
out here to emphasize that:
■ Encounter Test does not require LBIST pattern generators and signature analyzers to
reside in a design.
■ Encounter Test assumes that the LBIST operations are controlled through primary input
signals.
Encounter Test supports the STUMPS style of LBIST. STUMPS is a compound acronym for
Self Test Using MISR and Parallel SRSG. MISR is an acronym for Multiple Input Signature
Register. SRSG stands for Shift Register Sequence Generator. In Encounter Test
documentation, we call an SRSG a pseudo-random pattern generator (PRPG).
STUMPS follows scan guidelines for either LSSD or GSD. STUMPS scan chains are often
called channels. The basic layout of the STUMPS registers is illustrated in Figure 12-1.
During a scan operation, the channels are completely “unloaded” into the MISR and
replenished with pseudo-random values from the PRPG. The scan operation consists of a
number of scan cycles equal to or larger than the number of bits in the longest channel. Each
scan cycle advances the PRPG by one step, the MISR by one step, and moves the data down
the channels by one bit position. Between scan operations, a normal LSSD or GSD test is
performed by pulsing the appropriate clocks that cause outputs of the combinational logic
between channels to be observed by the channel latches (and possibly the primary outputs).
STUMPS requires a means of initializing the LFSRs and a means of reading out the contents
of the signature registers. The initializing sequence is supplied to Encounter Test by the
customer. Refer to “SEQUENCE_DEFINITION” in the Modeling User Guide for more
detailed information about initialization sequences. The only support for reading out the
signature registers provided by Encounter Test is to verify signature registers are observable
by a parent test mode's scan operation.
The specification of a parent mode for scanning also allows you to alter the initial state of
some latches from one LBIST test generation run to another. Having once verified that the
initialization sequence works, Encounter Test is able to re-specify the latch states within the
initialization sequence when the sequence is copied into the test pattern data. No fixed-value
latches are allowed to be changed in this manner, because altering their initial states would
invalidate the TSI and TSV results.
The control logic that steers the LBIST operation must be modeled so that to Encounter Test
it appears that all clock signals and any other dynamic signals emanate from primary inputs
or pseudo primary inputs; control signals that are constant throughout the LBIST operation
may be fed by either primary inputs or fixed-value latches.
Encounter Test provides limited support for the initialization of RAM by non-random patterns.
The test mode initialization sequence may initialize embedded RAM either by explicit, non-
randomized patterns or, if the RAM supports it, by the use of a reset signal. Encounter Test
determines through simulation which RAMs are initialized; any uninitialized RAMs are treated
as X-generators for LBIST, and therefore they must be blocked from affecting any observation
point.
In the general case, you supply two input pattern sequences in support of fast forward:
A PRPGSAVE sequence that moves the PRPG seed into the shadow register and a
PRPGRESTORE sequence that moves the seed from the shadow register into the
PRPG.
Alternatively, Encounter Test supports a specific implementation of the PRPG save and
restore operations whereby they occur concurrently with a channel scan cycle under
control of PV (PRPG save) and PR (PRPG restore) signals. When this implementation
is used, these signals must be identified by test function pin attributes and the
PRPGSAVE and PRPGRESTORE sequences are implicitly defined. For convenience,
we refer to the support of user-specified PRPGSAVE and PRPGRESTORE sequences
as “fast forward with sequences” and we refer to the support of PV and PR pins as “fast
forward with pins”. If the PV and PR pins are defined and user sequences are supplied
also, then “fast forward with sequences” takes precedence; in this case the PV and PR
pins are held to their inactive states throughout the application of test data, much like TIs.
“Fast forward with sequences” assumes that the PRPGSAVE sequence does not disturb
any latches except the shadow register and that the PRPGRESTORE sequence does
not disturb any latches except the PRPG. This of course implies that these operations
cannot be executed concurrently with a channel scan cycle, but must be inserted
separately (between scan cycles in the case of the PRPGSAVE sequence).
There is no corresponding shadow register for MISRs. Skipping some scan cycles
changes the signature, and there is no way to avoid this fact. When fast forward is used,
two sets of MISR signatures are computed.
The PRPG shadow register must, like the PRPG itself, not be contaminated during self
test operations. The PRPGs are checked to verify that in the test mode, they have no
other function. To ensure that the PRPG shadow register is not corruptible, it must be
modeled as fixed value latches, which means that the TI inputs for this test mode prohibit
the shadow register latches from being changed.
The PRPGSAVE and PRPGRESTORE sequences will necessarily violate some TI
inputs. This is okay because Encounter Test does not simulate these sequences except
when checking to ensure that they work properly. TSV ensures that the PRPGSAVE and
PRPGRESTORE sequences work and that they do not disrupt any latches other than the
PRPG and its shadow latches.
In “fast forward with pins” the channels and MISR shift one cycle concurrently with the
restore operation, and the restore is executed on the last cycle of the scan. This works
only if the PRPG is never observed between scans; either the existence of explicit A, B,
or E shift clocks in the test sequence or a path from the PRPG to anything other than a
channel input would make it infeasible to use fast forward with pins. This is because it
would then be impossible to predict the effect of some tests being skipped; the PRPG
state would depend upon how many subsequent tests are to be skipped, and this
information is not available until a fault simulation is performed, but the fault simulation is
not possible until the PRPG state is determined.
■ Channel input signal weighting
Channel input signal weighting improves the test coverage from LBIST. A multiple-input
AND or OR design is allowed to exist between the PRPG and a channel input. The signal
weight is determined by the selection of the logic function and the number of pseudo-
random inputs to it. This selection is made by control signals fed from primary inputs.
These controls are not allowed to change during the scan operation, and so every latch
within a given channel will be weighted the same, varying only as to polarity which
depends upon the number of inversions between the channel input and the latch. The
primary inputs that are used to control the channel input weight selection must be flagged
with the test function WS (for weight selection).
Weighting logic also may be fed by Test Inhibit (TI) or Fixed Value Linehold (FLH) latches.
FLH latches may be changed from one experiment to another, causing channels to be
weighted differently. The FLH latch values may be changed by adding them as lineholds
to the lineholds file.
■ Channel scan that considers the presence of a pipelined PRPG spreading network in
LBIST test modes
The pipelined spreading PRPG network between PRPGs and channel latches enables
simultaneous calculation of spreader function and weight function in one scan cycle. The
sequential logic in the PRPG spreading network can be processed by placing a Channel
Input (CHI) test function in the design source or the test mode definition file. Encounter
Test uses the CHI to identify PRPG spreading pipeline latches during creation of the
LBIST test mode.
Encounter Test LBIST does not require scan chains to be connected to primary pins or on-
board PRPG and MISR. For example, you may have some scan chains connected to scan
data primary inputs and scan data primary outputs, other scan chains connected to PRPG(s)
and MISR(s), other scan chains connected to scan data primary inputs and MISR(s), and
other scan chains connected to PRPG(s) and scan data primary outputs, all on the same part,
all in the same test mode. The scan data primary inputs and outputs, if used, must connect
to tester PRPGs and SISRs (single-input signature registers). Encounter Test assumes that
all the scan chains are scannable simultaneously, in parallel.
See “Task Flow for Logic Built-In Self Test (LBIST)” on page 335 for the processing flow.
To perform Logic Built-In Self Test (LBIST) using command lines, refer to “create_lbist_tests”
in the Command Line Reference.
An Encounter True-Time Advanced license is required to run Create LBIST Tests. Refer to
“Encounter Test and Diagnostics Product License Configuration” in What’s New for
Encounter ® Test and Diagnostics for details on the licensing structure.
The basic command invocation is of the form:
create_lbist_tests workdir=<workdir> testmode=<testmode> testsequence=<testsequence>
where:
■ workdir = name of the working directory
■ testmode= name of the LBIST testmode
■ testsequence = name of the test sequence being used (optional but generally
specified)
The commonly used keywords for the create_lbist_tests command are given below:
■ extraprpgcycle=#
Indicates the number of times the PRPGs are shifted by clock or pulse events in the test
sequence. If using parallel simulation by setting forceparallelsim=yes, also set
extraprpgcycle to accurately simulate the test sequence.
■ extramisrcycle=#
Indicates the number of times the MISRs are shifted by clock or pulse events in the
test sequence. If using parallel simulation by setting forceparallelsim=yes, also set
extramisrcycle to accurately simulate the test sequence.
■ prpginitchannel=no|yes
Specifies whether the channels need to be initialized by the PRPG before the first test
sequence.
■ reportmisrmastersignatures=yes/reportprpgmastersignatures=yes
Reporting options to print signature and channel data at specific times.
Refer to “create_lbist_tests” in the Command Line Reference for information on all the
keywords available for the command.
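Putting the commonly used keywords together, a hedged example invocation (all values are
placeholders and the particular keyword combination is illustrative only) might look like:
create_lbist_tests workdir=<workdir> testmode=<testmode> testsequence=<testsequence> extraprpgcycle=1 extramisrcycle=1 prpginitchannel=yes reportmisrmastersignatures=yes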
Restrictions
■ General Purpose Simulation is not supported for LBIST when running in parallel mode.
High speed scan-based simulation should be used instead.
■ If stored-pattern (or OPMISR) tests are coexistent on the same TBDbin file with WRPT
or LBIST data, then resimulation of this TBDbin cannot be accomplished in a single Test
Simulation (analyze_vectors) pass. The selection of test types being processed is
controlled by the channelsim parameter. Channelsim=no (the default) will process
all stored-pattern (and OPMISR) tests; channelsim=yes will process all WRPT and
LBIST tests (see the sketch after this list).
■ Support for multiple oscillators works only in cases where the oscillators are controlling
independent sections of logic that are not communicating with each other. In some
cases, it may be possible to use Encounter Test support of multiple oscillators if the two
domains are operating asynchronously and the communication is one-way only.
■ Within a test sequence definition, lineholds can be specified only on primary inputs and
fixed value latches (FLH).
■ The useppis=no option in the Good Machine Delay Simulator is not guaranteed to
work unless scan sequences have been fully expanded.
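For the mixed-TBDbin restriction above, a two-pass resimulation sketch follows; the channelsim
usage is as described in that restriction, and the other parameter values are placeholders:
analyze_vectors workdir=<workdir> testmode=<testmode> experiment=<experiment> channelsim=no
analyze_vectors workdir=<workdir> testmode=<testmode> experiment=<experiment> channelsim=yes
The first pass processes the stored-pattern (and OPMISR) tests; the second pass processes
the WRPT and LBIST tests.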
Input Files
Output
The following is a sample output log displaying the coverage achieved for a specific clocking
sequence:
----------------- Sequence Usage Statistics ----------------------
Sequence_Name EffCycle TotCycle DC_% AC_%
==================================================================
<userSequence01> 10 512 69.03 0.00
<< cross reference for user defined sequences >> <userSequence01>:Test1
Starting Fault Imply processing... fault status may be updated.
------------------------------------------------------------------
LBIST Statistics
-----------------------------------------------------------------
Logic Test Results
Patterns simulated :512
Effective patterns :10
Static fault coverage (DC) :69.0323%
Dynamic fault coverage (AC) :0%
Seed File
The purpose of this file is to allow the specification of unique starting seeds per run for on-
board PRPGs and MISRs. This allows flexibility in specifying the initial values for floating
latches and scannable latches.
where:
type is node or vector
nameform is name, hier_index, or flat_index
latchvalues represents the specification of the latch initial values in either TBDpatt
node list format or vector format, as specified by the TBDpatt_Format statement.
The [SeqDef statement is optional, and is used primarily when multiple Scan_Load
events (seeds) are specified. seqname is the name of a test sequence definition that has
already been imported.
For more information about TBDpatt syntax, refer to the Test Pattern Data Reference
manual.
The specification of an initial value to any latch that is not scannable in the parent mode or is
a fixed value latch in the target mode is ignored. Encounter Test does not modify the initial
value of any latches that are not scannable in the parent mode, and overriding the initial value
of fixed values is not allowed because it invalidates processing done by Test Mode Build and
Test Structure Verification.
If so desired, multiple seeds (Scan_Load events) can be specified. When there are multiple
seeds, or when there is only one seed and it has an associated [SeqDef statement, each
seed is used as the starting PRPG state for only one test sequence. At the end of the
specified number of test iterations, a final signature is provided; if there are additional seeds,
then the design is reinitialized with the new PRPG seed and additional test iterations are
applied from that starting state using a different (or possibly the same) test sequence as
indicated by the associated [SeqDef. When multiple seeds are specified, any Scan_Load
event with no associated [SeqDef will be applied using the first test sequence named in the
create_wrp_tests testsequence parameter list.
A seed file can contain either a single seed or multiple seeds.
Note that the quotation marks (" ") associated with the SeqDef enclose an optional time/date
stamp. The quotation marks are required even if you choose to leave out the date/time stamp.
Parallel LBIST
This function is currently available through the graphical user interface and the command line. Optimum
performance is achieved if the selected machines running in parallel have been dedicated to
your run. In a parallel run, a greater number of patterns will be found effective as compared
to a serial run.
For command line invocation, TWTmainpar -h displays the available options for specifying
a list of machines.
Note: For LBIST, General Purpose Simulation is not supported when running in parallel
mode. High Speed Scan Based Simulation should be used.
The host used to start the application acts as the coordinator for the parallel process. As a
default, the run is made in two phases. In the first phase, the faults are partitioned across the
selected hosts. The controller performs Good Machine Simulation only to write out the
patterns.
If, for any given simulation interval, the Good Machine Simulation time exceeds the fault
simulation time, the run is switched to the second phase. In the second phase, the patterns
are partitioned across all of the processes (including the controller) and each process works
on all of the faults.
Figure 12-4 on page 334 and Figure 12-5 on page 335 illustrate the parallel process.
Figure 12-6 Encounter Test Logic Built-In Self Test Processing Flow
1. Build an Encounter Test model. Refer to “Performing Build Model” in the Modeling User
Guide for more information.
2. Build a full scan test mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide for more information.
A sample mode definition file is given below:
TDR=lbist_tdr_name
SCAN_TYPE=LSSD BOUNDARY=NO IN=PI OUT=PO;
TEST_TYPES=STATIC LOGIC SIGNATURES=NO,STATIC MACRO,SCAN_CHAIN;
TEST_FUNCTION_PIN_ATTRIBUTES=TF_FLAG;
.
.
.
3. It is recommended that you perform Test Structure Verification (TSV) to verify the
conformance of a design to the Encounter Test LSSD guidelines or the GSD guidelines.
Nonconformance to these guidelines may result in poor test coverage or invalid test data.
Refer to “Verify Test Structures” in the Verification User Guide.
4. Generate initialization sequence.
The LBIST methodology requires a user-defined initialization sequence for the test mode
to initialize the on-board BIST elements such as PRPG, MISR, and fixed-value latches.
Encounter Test does not automatically identify how to get these latches initialized; in fact,
it is only the results of the initialization sequence that tell Encounter Test which state each
fixed-value latch is supposed to be set to. Figure “An Example Test Mode Initialization
Sequence in TBDpatt Format” in the Modeling User Guide shows an example mode
initialization sequence definition for an LBIST test mode.
If the LBIST testing uses fast forward with sequences, you must also generate the
PRPGSAVE and PRPGRESTORE sequences at this time. Include these sequence
definitions in the same file with the mode initialization sequence.
LBIST processing also requires sequence definitions to specify the order of the clocks
and other events which get applied for each test. Each test generation run may use a
different sequence of clocks. These test sequence definitions may be included in the
mode initialization sequence input file, or they may be imported separately. An example
is given below.
5. Build LBIST test mode. The SCAN_TYPE must be LSSD or GSD.
Following is a sample mode definition file for LBIST:
Tester_Description_Rule = dummy_tester;
scan type = gsd boundary=internal length=1024 in = on_board out = on_board;
test_types static logic signatures only shift_register;
faults static ;
comet opcglbist markoff LBIST stats_only ;
misr Net.f.l.top.nl.BIST_MODULE.BIST_MISR.MISRREG[1048]=(0,2,21,23,53);
misr Net.f.l.top.nl.BIST_MODULE.BIST_MISR.MISRREG[1089]=(0,2,21,23,41);
prpg Net.f.l.top.nl.BIST_MODULE.BIST_PRPG.PRPGREG[52]=(1,2,6,53);
prpg Net.f.l.top.nl.BIST_MODULE.BIST_PRPG.PRPGREG[105]=(1,2,6,53);
6. Perform the TSV checks for LBIST to verify conformance to the LBIST guidelines. Refer
to “Performing Verify Test Structures” in the Verification User Guide for more
information.
7. Build a fault model for the design. Refer to the “Building a Fault Model” in the Modeling
User Guide for more information.
8. To use user-defined clock sequences, read the test sequence definitions. See “Coding
Test Sequences” on page 206 for an explanation of how to manually create test (clock)
sequences.
For complete information, refer to “Reading Test Data and Sequence Definitions” on
page 245.
The following example uses internal clocking (cutpoints) for the LBIST sequence:
TBDpatt_Format (mode=node, model_entity_form=name);
[Define_Sequence Universal_Test (test);
[ Pattern 1.1; # Set Test Constraints to pre-scan values
Event 1.1.1 Stim_PPI ():"BIST_MODULE.BIST_SCAN"=0;
] Pattern 1.1;
[ Pattern 1.2;
Event 1.2.1 Pulse_PPI ():"BIST_MODULE.BIST_CASCADE[7]"=+;
] Pattern 1.2;
[ Pattern 1.3;
Event 1.3.1 Pulse_PPI ():"BIST_MODULE.BIST_CASCADE[14]"=+;
] Pattern 1.3;
[ Pattern 1.4; # Set Test Constraints to pre-scan values
Event 1.4.1 Stim_PPI ():"BIST_MODULE.BIST_SCAN"=1;
] Pattern 1.4;
[ Pattern 1.5;
Event Channel_Scan ();
] Pattern 1.5;
] Define_Sequence Universal_Test;
Encounter Test's Verify Test Structures tool is designed to identify many such design
problems so they can be eliminated before proceeding to test generation. Even so, it is
advisable to use your logic simulator of choice to simulate the BIST operation on your design
for at least a few test iterations (patterns) and compare the resulting signature with the
signature produced by Encounter Test's Logic Built-In Self Test generation tool for the same
number of test iterations. This simulation, along with the checking offered by Encounter Test
tools, provides high confidence that the signature is correct and that the test coverage
obtained from Encounter Test's fault simulator (if used) is valid.
When the signatures from a functional logic simulator and Encounter Test's LBIST tool do not
match, the reason will not be apparent. It can be tedious and technically challenging to
identify the corrective action required. The problem may be in the BIST logic, its
interconnection with the user logic, or in the Encounter Test controls. The purpose of this
section is to explain the use of signature debug features provided with Encounter Test's Logic
Built-In Self Test generation tool.
Submit a Logic Built-In Self Test generation run, specifying the chosen number of test
iterations (called “patterns” in the control parameters for the tool). You will need to obtain the
MISR signatures; this can be done in any of three ways:
1. Request “scope” data from the test generation run: simulation=gp
watchpatterns=range watchnets=misrnetlist where range is one of the valid
watchpatterns options and misrnetlist is any valid watchnets option that
includes all the MISR positions.
2. Specify report=misrsignatures in the test generation run, or
3. After the test generation run, export the test data and look at the TBDpatt file.
In the first method, you will use View Vectors to look at the test generation results as signal
waveforms. Refer to “Test Data Display” in the Graphical User Interface Reference for
details on viewing signal waveforms. This may seem the most natural if you are used to this
common technique for debugging logic. However, you may find it more convenient to have the
MISR states in the form of bit strings when comparing the results with your functional logic
simulator.
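For example, the first two methods can be requested in one debug run; the following sketch is
illustrative only, the workdir/testmode/experiment values are placeholders, and <misrnetlist>
stands for whatever watchnets specification covers all of your MISR positions:
create_lbist_tests workdir=<workdir> testmode=<testmode> experiment=<experiment> simulation=gp watchpatterns=1:8 watchnets=<misrnetlist> report=misrsignatures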
The MISR latch values found in the signatures are manually compared with the results of the
functional logic simulator, often by reading a timing chart.
If the signatures match for the first signature interval and then fail later on, several potential
causes can be eliminated, and the problem is likely to be found in the system clocking, or
“release-capture” phase of the test. You may have to figure out which channel latch is failing
and backtrace from there to find the problem, but first you should look carefully at the clock
sequence that Encounter Test is simulating to make sure it agrees with the functional logic
simulation input. If there is no obvious discrepancy from looking at the TBDpatt form of the
sequence definition, use the waveform display to compare the waveforms with the timing
chart from the functional logic simulator. Start by comparing the waveforms at the clock
primary inputs. If this does not provide any clues, select some representative clock splitters
and compare the waveforms at the clock splitter outputs.
Just as Encounter Test signatures can be observed either by looking at the signal waveforms
or by having them printed in the output log, the channel latch states can also be observed by
either of these methods. Using the response compaction of the MISR, you can find the failing
channel latch by means of a binary search instead of having to compare the simulation states
of thousands upon thousands of channel latches. The binary search technique requires that
the Encounter Test simulation of the channel scan be expanded in some fashion.
Using a waveform display is not the recommended way of finding the failing channel latch, but
some users may be more comfortable doing it this way. Once the failing channel latch is
identified, the waveform display may prove invaluable in the next step of the diagnosis of a
failing signature.
Another debug test generation run must be made to generate the waveform data for the
channel scan. From the command line, you would use create_lbist_tests
simulation=gp watchpatterns=n watchnets=scan watchscan=yes along with
any other create_lbist_tests parameters necessary for your design (such as
testsequence), where n is the number of the failing test iteration. Note that when a single
test iteration is specified on the watchpatterns parameter, create_lbist_tests will
expand the channel scan for that test iteration so the detailed waveforms can be generated.
Refer to “Test Data Display” in the Graphical User Interface Reference for details on
viewing signal waveforms.
You may find it more convenient, when locating the failing latch, to use the debug reporting
options instead of the waveform display. Instead of, or in addition to, the BIST parameters
listed in the previous section, specify create_lbist_tests reportlatches=n1:n2
to obtain MISR signatures for each iteration of the test from iteration n1 through iteration n2.
Note that it is not necessary to specify simulation=gp to use the reportlatches
option. This reporting option also produces the states of all the channel latches in addition to
the MISR values. The information is printed to the log.
Once the failing test iteration is identified, if there appears to be a mismatch in the scan
operation, similar debug printout can be obtained for each scan cycle by specifying only a
single test iteration: create_lbist_tests reportlatches=n.
A
Using Contrib Scripts for Higher Test
Coverage
This appendix discusses the following scripts in the Encounter Test contrib directory that you
can use to achieve higher test coverage in ATPG:
■ Reset/Set Generation
❑ Prepare Reset Lineholds
❑ Create Resets Tests (static/delay)
■ Reporting Domain Coverages
The script is invoked as:
prepare_reset_lineholds workdir=<directory> testmode=<modename>
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for timed test
Note: This script is not formally supported.
Prerequisite Tasks
Complete the following tasks before executing prepare_reset_lineholds:
1. Build a design, testmode, and fault model. Refer to the Modeling User Guide for more
information.
The output log from this task summarizes what the script found. The script identifies the
set/reset clocks for the design. If it successfully identifies the set/reset pins, it creates a
linehold file in testresults/reset_lineholds.<TESTMODE NAME>. Point your subsequent
ATPG runs to this file (linehold=<filename>).
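For example, assuming the testmode is FULLSCAN and assuming your ATPG command
accepts the linehold keyword (the command name below is hypothetical), the generated file
would be picked up as follows:
create_logic_tests workdir=<workdir> testmode=FULLSCAN experiment=<experiment> linehold=testresults/reset_lineholds.FULLSCAN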
Static:
create_reset_tests workdir=<directory> testmode=<modename> experiment=<name>
Dynamic:
create_reset_delay_tests workdir=<directory> testmode=<modename>
experiment=<name>
where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns
Note: This script is not formally supported.
Prerequisite Tasks
Complete the following tasks before executing create_reset_tests or create_reset_delay_tests:
1. Build a design, testmode, and fault model. Refer to the Modeling User Guide for more
information.
The output log contains a summary of the number of generated patterns and their
representative coverage. Both static and dynamic faults should be tested.
INFO (TDA-220):--- Tests ---Faults---- ATCov ------- CPU Time ------[end TDA_220]
INFO (TDA-220):Sim. Eff. Detected Tmode Global Sim. Total[end TDA_220]
INFO (TDA-220):2 2 17 2.09% 1.68% 00:00.01 00:00.03[end TDA_220]
INFO (TDA-220):4 4 17 4.18% 3.35% 00:00.01 00:00.03[end TDA_220]
INFO (TGR-101): Simulation of a logic test section with dynamic test type completed.
[end TGR_101]
Testmode Statistics: FULLSCAN
Total Static 908 82 20 0 806 18 7 9.03 9.03
Total Dynamic 814 34 0 0 780 15 0 4.18 4.18
Debugging No Coverage
If no ATPG coverage was achieved, check for the following:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures for internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan chains.
where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
Prerequisite Tasks
Complete the following tasks before reporting domain specific fault coverages:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode. Refer to “Performing Build Test Mode” in the Modeling
User Guide for more information.
3. Generate ATPG test patterns. Refer to “Invoking ATPG” on page 35 for more information.
Output Files
None.
Command Output
The following is a sample report for experimental test coverage based on clocking sequences.
Experiment Statistics: FULLSCAN.printCLKdom
#Faults #Tested #Possibly #Redund #Untested #PTB #TestPTB %TCov %ATCov %PCov %APCov %PTBCov %APTBCov
Total 3152 1107 68 0 1977 0 0 35.12 35.12 37.28 37.28 35.12 35.12
Total Static 1584 690 47 0 847 0 0 43.56 43.56 46.53 46.53 43.56 43.56
Total Dynamic 1568 417 21 0 1130 0 0 26.59 26.59 27.93 27.93 26.59 26.59
Collapsed 1730 612 45 0 1073 0 0 35.38 35.38 37.98 37.98 35.38 35.38
Collapsed Static 928 411 33 0 484 0 0 44.29 44.29 47.84 47.84 44.29 44.29
Collapsed Dynamic 802 201 12 0 589 0 0 25.06 25.06 26.56 26.56 25.06 25.06
B
Three-State Contention Processing
This appendix describes the uses of Encounter Test test generation and simulation keywords
that control processing and reporting of three-state contention.
The following are some combinations of the keywords and resulting actions:
■ contentionreport=soft contentionmessage=all contentionremove=yes
contentionprevent=yes (all the defaults)
The test generator will prevent soft (known vs. X) and hard (0 vs. 1) contention from
occurring in the patterns it creates. If it cannot create a pattern for the fault without
causing contention, it will not create a pattern for that fault. There are no "contention
messages" issued from this process. Theoretically, the simulator will not find any
patterns to remove; BUT, just in case a burnable pattern gets through, the simulator is
there to ensure it doesn’t get into the final set of vectors. If the test generator successfully
eliminates contention from all patterns, no contention messages are produced.
■ contentionreport=soft contentionmessage=all contentionremove=yes
contentionprevent=no
When specifying contentionprevent=no, the test generator will not ensure that the
pattern does not cause contention. The simulator will remove any patterns that would
cause soft or hard contention. Messages are produced, and you will see messages for
patterns discarded due to contention. The final vectors will not include burnable patterns.
■ contentionreport=soft contentionmessage=all contentionremove=no
contentionprevent=yes
The test generator will prevent hard and soft contention from occurring in the patterns it
creates. If it cannot create a pattern for the fault without causing contention, it will not
create a pattern for that fault. There are no "contention messages" issued from this
process. Theoretically, the simulator will not find any patterns to report; however, if a
burnable pattern gets through, the simulator will report it; but it will not remove it since
contentionremove=no was specified. If the test generator allows a burnable pattern
to get through, contention messages are produced and the final vectors will include the
burnable vectors.
■ contentionreport=soft contentionmessage=all contentionremove=no
contentionprevent=no
When specifying contentionprevent=no, the test generator will not ensure that the
pattern doesn’t cause contention. The simulator will report the patterns that cause hard
or soft contention, however, because contentionremove=no was specified, the
simulator will not discard them; if you see any contention messages with these settings,
the final vectors include patterns that may burn.
The following table lists contention keyword combinations and their respective results.
contentionreport | contentionremove | contentionprevent | contentionmessage | Contention Messages in Log | Sim Removes Contention | TG Removes Contention | Contention in Final Vectors
hard, soft, or all | yes | no | greater than 0 | yes | yes | no | no
hard, soft, or all | no | yes | greater than 0 | no | no | yes | Should not be
hard, soft, or all | no | no | greater than 0 | yes | no | no | Yes, if reported
none | yes | yes | greater than 0 | no | no | hard only (contentionprevent=yes wins over contentionreport=none) | Should not be hard contention
none | no | no | greater than 0 | no | no | no | Could be: not reported, prevented, or removed
Note: If contentionmessage=0 then no messages will be printed; but the rest of the
processing is the same.
Index
Numerics
1149.1 test generation 286
1450.1 IEEE standard 248
A
analysis techniques
  for test patterns 314
C
circuit values, viewing 315
clock constraints file/clock domain file 125
clock sequences, delay test 146
cmos testing 32, 275
coding test sequences 206
commit test data 233, 234
commit tests
  overview 27, 233
concepts
  LBIST (logic built-in self test) 323
  stored pattern test generation 275
conditional events 213
constraints, dynamic 150
create exhaustive tests
  overview 281
  prerequisite tasks 281, 284, 286
  restrictions 282
customer service, contacting 23
customizing delay checks 149
D
debugging LBIST structures
  check for matching signatures 339
  diagnosing 340
  DUT preparation 339
  find a failing latch 341
  finding the first failing test 340
  overview 339
  reporting options 342
  scoping the channel scan 341
  verify primary input waveforms 341
DEFAULT statement rules 204
delay model
  concepts 85
delay model, creating
  input files 82
  output files 83
  perform 80
  prerequisite tasks 81, 99
delay model, overview 85
delay test
  characterization test 133
  delay defects 141
  delay path calculation 93
  dynamic constraints 150
  manufacturing delay test 73
  timed pattern failure analysis 151
  true-time test 37
  wire delays 91
delay test lite 71
design constraints file 100
DUT values 152
dynamic constraints 150
dynamic logic test 31
E
Encounter Test pattern data, exporting
  export files 191
  input files 191
  overview 190
Encounter Test pattern data, importing
  output files 247
  overview 247
endup sequences 223
environment variables
  TB_PERM_SPACE 306
evcd (extended value change dump) data, reading 250
events
  conditional 213
  removing 212
exporting test data
  Encounter Test pattern data 190
  printing structure-neutral TBDbin 194