
Encounter® Test Automatic Test Pattern Generation User Guide

Product Version 10.1.100
November 2010
© 2003–2010 Cadence Design Systems, Inc. All rights reserved.
Portions © IBM Corporation, the Trustees of Indiana University, University of Notre Dame, the Ohio State
University, Larry Wall. Used by permission.
Printed in the United States of America.
Cadence Design Systems, Inc. (Cadence), 2655 Seely Ave., San Jose, CA 95134, USA.
Product Encounter® Test and Diagnostics contains technology licensed from, and copyrighted by:
1. IBM Corporation, and is © 1994-2002, IBM Corporation. All rights reserved. IBM is a Trademark of
International Business Machines Corporation.
2. The Trustees of Indiana University and is © 2001-2002, the Trustees of Indiana University. All rights
reserved.
3. The University of Notre Dame and is © 1998-2001, the University of Notre Dame. All rights reserved.
4. The Ohio State University and is © 1994-1998, the Ohio State University. All rights reserved.
5. Perl Copyright © 1987-2002, Larry Wall
Associated third party license terms for this product version may be found in the README.txt file at
downloads.cadence.com.
Open SystemC, Open SystemC Initiative, OSCI, SystemC, and SystemC Initiative are trademarks or
registered trademarks of Open SystemC Initiative, Inc. in the United States and other countries and are
used with permission.
Trademarks: Trademarks and service marks of Cadence Design Systems, Inc. contained in this document
are attributed to Cadence with the appropriate symbol. For queries regarding Cadence’s trademarks,
contact the corporate legal department at the address shown above or call 800.862.4522. All other
trademarks are the property of their respective holders.
Restricted Permission: This publication is protected by copyright law and international treaties and
contains trade secrets and proprietary information owned by Cadence. Unauthorized reproduction or
distribution of this publication, or any portion of it, may result in civil and criminal penalties. Except as
specified in this permission statement, this publication may not be copied, reproduced, modified, published,
uploaded, posted, transmitted, or distributed in any way, without prior written permission from Cadence.
Unless otherwise agreed to by Cadence in writing, this statement grants Cadence customers permission to
print one (1) hard copy of this publication subject to the following conditions:
1. The publication may be used only in accordance with a written agreement between Cadence and its
customer.
2. The publication may not be modified in any way.
3. Any authorized copy of the publication or portion thereof must include all original copyright,
trademark, and other proprietary notices and this permission statement.
4. The information contained in this document cannot be used in the development of like products or
software, whether for internal or external use, and shall not be used for the benefit of any other party,
whether or not for consideration.
Disclaimer: Information in this publication is subject to change without notice and does not represent a
commitment on the part of Cadence. Except as may be explicitly set forth in such agreement, Cadence does
not make, and expressly disclaims, any representations or warranties as to the completeness, accuracy or
usefulness of the information contained in this document. Cadence does not warrant that use of such
information will not infringe any third party rights, nor does Cadence assume any liability for damages or
costs of any kind that may result from use of such information.
Restricted Rights: Use, duplication, or disclosure by the Government is subject to restrictions as set forth in
FAR 52.227-14 and DFAR 252.227-7013 et seq. or its successor.

Contents
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
About Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Typographic and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Encounter Test Documentation Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Getting Help for Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Contacting Customer Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Encounter Test And Diagnostics Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
What We Changed for this Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

1
Introduction to Automatic Test Pattern Generation . . . . . . . . . . . 25
ATPG Process Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Additional Pattern Compaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
General Types of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Scan Chain Tests ................................................... 30
Logic Tests ........................................................ 31
Path Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
IDDq Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Parametric (Driver/Receiver) Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
IO Wrap Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
IEEE 1149.1 Boundary Scan Verification Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Core Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Low Power Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Inputs for ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Test Generation Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Invoking ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Stored Pattern Test Generation (SPTG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
True-Time Test: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Designing a Logic Model for True-Time Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
True-Time Test Pre-Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
True-Time Test Pattern Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Types of True-Time ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Automatic True-Time Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
At-Speed and Faster Than At-Speed True-Time Testing . . . . . . . . . . . . . . . . . . . . . . 40
Static ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

2
Using the True-Time Use Model Script . . . . . . . . . . . . . . . . . . . . . . . . 41
Executing the Encounter Test True Time Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Setup File Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

3
Static Test Generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Static Test Pattern Generation Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Performing Scan Chain Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Important Information from Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Debugging No Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Additional Tests Available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
An Overview to Scan Chain Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Performing Flush Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Debugging No Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Performing Create Logic Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Using Checkpoint and Restart Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Writing Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4
Delay and Timed Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Testing Manufacturing Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Manufacturing Delay Test Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Creating Scan Chain Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Delay Scan Chain Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Performing Build Delay Model (Read SDF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Timing Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
IEEE 1497 SDF Standard Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Delay Model Timing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Delay Path Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Specifying Wire Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Performing Read SDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Performing Remove SDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
An Overview to Prepare Timed Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Performing Prepare Timed Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Create Logic Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Producing True-Time Patterns from OPCG Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Processing Custom OPCG Logic Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Processing Standard, Cadence Inserted OPCG Logic Designs . . . . . . . . . . . . . . . 120
Delay Timing Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Path Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Design Constraints File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Process Variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Pruning Paths from the Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Verifying Clocking Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Command Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

Verifying Clock Constraints Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132


Characterization Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Performing Create Path Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Path File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Path Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Reporting Path Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Performing Prepare Path Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Delay Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Delay ATPG Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Delay Test Clock Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Customized Delay Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Dynamic Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Constraint Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Timed Pattern Failure Analysis for Tester Miscompares . . . . . . . . . . . . . . . . . . . . . . . . 151
Performing and Reporting Small Delay Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
SDQL: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
SDQL Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Small Delay ATPG Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Performing Small Delay Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Small Delay Simulation of Traditional ATPG Patterns . . . . . . . . . . . . . . . . . . . . . . . . 160
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Writing Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

5
Writing and Reporting Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Writing Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Write Vectors Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Write Vectors Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Write Vectors Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Default Timings for Clocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

Adding Extra Timing Set for Initialization Sequences . . . . . . . . . . . . . . . . . . . . . . . . 175


Processing Dynamic Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Changing Default Pin Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Limiting Dynamic Timeplates for Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Creating Cycle Map for Output Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Writing Standard Test Interface Language (STIL) . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Writing Waveform Generation Language (WGL) . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Writing Tester Description Language (TDL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Writing Verilog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Understanding the Test Sequence Coverage Summary . . . . . . . . . . . . . . . . . . . . . 186
Create Vector Correspondence Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Create Vector Correspondence Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Reporting Encounter Test Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Reporting Sequence Definition Data ...................................... 192
Converting Test Data for Core Tests (Test Data Migration) . . . . . . . . . . . . . . . . . . . . . . 193
Reporting a Structure-Neutral TBDbin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Input for Reporting Structure-Neutral TBDbin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Output for Reporting Structure-Neutral TBDbin ........................... 194

6
Customizing Inputs for ATPG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Linehold File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
General Lineholding Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Linehold File Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
General Semantic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Linehold Object - Defining a Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Coding Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Introduction to Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Getting Started with Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Stored Pattern Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Sequences with On-Product Clock Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Setup Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Endup Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Specifying Linehold Information in a Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . 223
Using Oscillator Pins in a Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

Importing Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229


Ignoremeasures File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Keepmeasures file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

7
Utilities and Test Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Committing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Performing on Uncommitted Tests and Committing Test Data ................ 234
Deleting Committed Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Deleting a Range of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Deleting an Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Encounter Test Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Define_Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Timing_Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Tester_Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Test_Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Test Data for Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

8
Reading Test Data and Sequence Definitions . . . . . . . . . . . . . . . 245
Reading Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Read Vectors Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Reading Encounter Test Pattern Data (TBDpatt) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Encounter Test Pattern Data Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Reading Standard Test Interface Language (STIL) ........................... 248
Support for STIL Standard 1450.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
STIL Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249

Identifying Scan Tests in STIL Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249


Identifying Mode Initialization Sequences in STIL Vectors . . . . . . . . . . . . . . . . . . . . 249
Reading Extended Value Change Dump (EVCD) File . . . . . . . . . . . . . . . . . . . . . . . . . . 250
EVCD Restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
EVCD Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Reading Sequence Definition Data (TBDseq) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Sequence Definition Data Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

9
Simulating and Compacting Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Compacting Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Simulating Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Analyzing Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Timing Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Test Simulation Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Specifying a Relative Range of Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Fault Simulation of Functional Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Recommendation for Fault Grading Functional Tests . . . . . . . . . . . . . . . . . . . . . . . . 267
Functional Test Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Zero vs Unit Delay Simulation with General Purpose Simulator . . . . . . . . . . . . . . . . . . 268
Using the ZDLY Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Simulating OPC Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
InfiniteX Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Pessimistic Simulation of Latches and Flip-Flops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Resolving Internal and External Logic Values Due to Termination . . . . . . . . . . . . . . . . 272
Overall Simulation Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272

10
Advanced ATPG Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Create IDDq Tests .................................................... 275
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Iddq Compaction Effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Create Random Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Create Exhaustive Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Create Core Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Create Embedded Test - MBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Create Parametric Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Create IO Wrap Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
IEEE 1149.1 Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Configuring Scan Chains for TAP Scan SPTG . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
1149.1 SPTG Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
1149.1 Boundary Chain Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Reducing the Cost of Chip Test in Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Using On-Product MISR in Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
On-Product MISR Restrictions ........................................ 300
Parallel Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Load Sharing Facility (LSF) Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Parallel Stored Pattern Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Parallel Simulation of Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Performing Test Generation/Fault Simulation Tasks Using Parallel Processing . . . . 304
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Prerequisite Tasks for LSF ........................................... 305
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Stored Pattern Test Generation Scenario with Parallel Processing . . . . . . . . . . . . . 309

11
Test Pattern Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Debugging Miscompares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Using Watch List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Viewing Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Viewing Test Sequence Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
SimVision Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Performing Test Pattern Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

12
Logic Built-In Self Test (LBIST) Generation . . . . . . . . . . . . . . . . . . 323
LBIST: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
LBIST Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Performing Logic Built-In Self Test (LBIST) Generation . . . . . . . . . . . . . . . . . . . . . . . . . 329
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Input Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Seed File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Parallel LBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Task Flow for Logic Built-In Self Test (LBIST) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Debugging LBIST Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Prepare the Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Check for Matching Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Find the First Failing Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Diagnosing the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340

A
Using Contrib Scripts for Higher Test Coverage . . . . . . . . . . . . . 343
Preparing Reset Lineholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Creating Reset Delay Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344

Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344


Debugging No Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Reporting Domain Specific Fault Coverages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Prerequisite Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346

B
Three-State Contention Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 347

Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351

List of Tables
Table 4-1 Common Delay Model Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Table 4-2 SDC Statements and their Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Table 4-3 Methods to Generate Timed Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

List of Figures
Figure 1-1 Encounter Test Process Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 1-2 ATPG flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 1-3 Fault List Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 1-4 ATPG Menu in GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 3-1 Encounter Test Static Pattern Test Processing Flow . . . . . . . . . . . . . . . . . . . . 60
Figure 3-2 Static Scan Chain Wave Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 4-1 Flow for Delay Test Methodologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Figure 4-2 Delay Testing and Effects of Clocking and Logic in Backcones . . . . . . . . . . . . 74
Figure 4-3 Dynamic Test Waveform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Figure 4-4 SDF Delays that Impact Optimized Testing and the Effect in the Resulting Patterns . . 86
Figure 4-5 Period and Width for Clocking Data into a Memory Element . . . . . . . . . . . . . . 92
Figure 4-6 Setup Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Figure 4-7 Hold Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Figure 4-8 Scenario for Calculating Path Delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Figure 4-9 Wire Delay Scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Figure 4-10 set_case_analysis Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 4-11 set_disable Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 4-12 set_false_path Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Figure 4-13 set_multicycle_path Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Figure 4-14 false and multicycle Path Grouping Example . . . . . . . . . . . . . . . . . . . . . . . . 104
Figure 4-15 Use Model with SDC Constraints File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Figure 4-16 Creating True-time Pattern Using Custom OPCG Logic . . . . . . . . . . . . . . . 118
Figure 4-17 Creating True-time Pattern Using RTL Compiler Inserted OPCG Logic . . . 122
Figure 4-18 Creating True-time Pattern Using RC Run Script. . . . . . . . . . . . . . . . . . . . . 123
Figure 4-19 Process Variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Figure 4-20 Robust Path Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 4-21 Non-Robust Path Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

Figure 4-22 Example of a Transition Fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142


Figure 4-23 General Form of an AC Test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Figure 4-24 Dynamic Test Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Figure 4-25 Delay Test Two-Frame Clocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Figure 4-26 Delay Test Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Figure 4-27 Building Blocks of a Timed Test Pattern. . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Figure 4-28 Elements that Control Clocks at the Tester . . . . . . . . . . . . . . . . . . . . . . . . . 153
Figure 4-29 Delay Defect Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Figure 4-30 Computing Cumulative SDQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Figure 4-31 Small Delay ATPG Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Figure 4-32 Flow for Small Delay Simulation of Traditional ATPG Patterns . . . . . . . . . . 161
Figure 4-33 SDQL Effectiveness Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Figure 5-1 Scan Cycle Overlap in Test Sequence Coverage Summary Report . . . . . . . 187
Figure 6-1 A Simple Typical Stored-Pattern Test Sequence Definition . . . . . . . . . . . . . . 208
Figure 6-2 Illustration of the STIM=DELETE control statement. . . . . . . . . . . . . . . . . . . . 213
Figure 6-3 Illustration of the PI_STIMS=n control statement. . . . . . . . . . . . . . . . . . . . . . 215
Figure 6-4 Illustration of the PI_STIMS=n control statement. . . . . . . . . . . . . . . . . . . . . . 216
Figure 6-5 GSD Circuit with Clock Generated by OPC Logic . . . . . . . . . . . . . . . . . . . . . 219
Figure 6-6 A stored-pattern test sequence definition for a design with OPC logic . . . . . 220
Figure 6-7 Clock Generation Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Figure 6-8 A scanop sequence definition using a free-running oscillator . . . . . . . . . . . . 229
Figure 7-1 Hierarchy of TBDbin Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Figure 10-1 Overview of Scan Structure for 1149.1 LSSD/GSD Scan SPTG . . . . . . . . . 287
Figure 10-2 Overview of Scan Structure for 1149.1 TAP Scan SPTG. . . . . . . . . . . . . . . 288
Figure 10-3 TAP Controller State Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Figure 10-4 Example of TCK Clock Gating for TAP Scan SPTG. . . . . . . . . . . . . . . . . . . 291
Figure 10-5 Example of scan chain Gating for TAP Scan SPTG. . . . . . . . . . . . . . . . . . . 292
Figure 10-6 Encounter Test 1149.1 Boundary Chip Processing Flow . . . . . . . . . . . . . . . 297
Figure 10-7 Test Generation Parallel Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Figure 10-8 Fault Simulation Parallel Processing Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Figure 12-1 A Simplified Example STUMPS Configuration . . . . . . . . . . . . . . . . . . . . . . . 325

Figure 12-2 Example of a Seed File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332


Figure 12-3 Example of a Multiple Seed File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Figure 12-4 LBIST Parallel Processing Flow, Phase 1 . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Figure 12-5 LBIST Parallel Processing Flow, Phase 2 . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Figure 12-6 Encounter Test Logic Built-In Self Test Processing Flow . . . . . . . . . . . . . . . 336

Preface

About Encounter Test and Diagnostics


Encounter® Test uses breakthrough timing-aware and power-aware technologies to enable
customers to manufacture higher-quality, power-efficient silicon, faster and at lower cost.
Encounter Diagnostics identifies critical yield-limiting issues and locates their root causes to
speed yield ramp.

Encounter Test is integrated with Encounter RTL Compiler global synthesis and inserts a
complete test infrastructure to assure high testability while reducing the cost-of-test with on-
chip test data compression.

Encounter Test also supports manufacturing test of low-power devices by using power intent
information to automatically create distinct test modes for power domains and shut-off
requirements. It also inserts design-for-test (DFT) structures to enable control of power shut-
off during test. The power-aware ATPG engine targets low-power structures, such as level
shifters and isolation cells, and generates low-power scan vectors that significantly reduce
power consumption during test. Cumulatively, these capabilities minimize power consumption
during test while still delivering the high quality of test for low-power devices.

Encounter Test uses an XOR-based compression architecture to allow a mixed-vendor flow,
giving flexibility and options to control test costs. It works with all popular design libraries and
automatic test equipment (ATE).

Typographic and Syntax Conventions


The Encounter Test library set uses the following typographic and syntax conventions.
■ Text that you type, such as commands, filenames, and dialog values, appears in Courier
type.
Example: Type build_model -h to display help for the command.
■ Variables appear in Courier italic type.
Example: Use TB_SPACE_SCRIPT=input_filename to specify the name of the
script that determines where Encounter Test results files are stored.
■ Optional arguments are enclosed in brackets.
Example: [simulation=gp|hsscan]
■ User interface elements, such as field names, button names, menus, menu commands,
and items in clickable list boxes, appear in Helvetica italic type.
Example: Select File - Delete - Model and fill in the information about the model.

Encounter Test Documentation Roadmap


The following figure depicts a recommended flow for traversing the documentation structure.
[Figure: documentation roadmap. Getting Started, the Encounter Test Tutorial, and the
Encounter Diagnostics Tutorial lead into the remaining documents: Test Synthesis User Guide,
Modeling User Guide, Verification User Guide, ATPG User Guide, Diagnostics User Guide,
Low Power User Guide, Command Line Reference, Graphical User Interface Reference,
Message Reference, Test Pattern Data Reference, Extension Language Reference,
Custom Features Reference, Memory Built-In Self Test Reference, and Glossary.]

Getting Help for Encounter Test and Diagnostics


Use the following methods to obtain help information:

1. From the <installation_dir>/tools/bin directory, type cdnshelp at the command prompt.
2. To view a book, double-click the desired product book collection and double-click the
desired book title in the lower pane to open the book.
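For example, from a shell prompt (the installation directory is a placeholder for your actual
installation location):
cd <installation_dir>/tools/bin
cdnshelp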

Click the Help or ? buttons on Encounter Test forms to navigate to help for the form and its
related topics.

Refer to the following in the Graphical User Interface Reference for additional details:
■ “Help Pull-down” describes the Help selections for the Encounter Test main window.
■ “View Schematic Help Pull-down” describes the Help selections for the Encounter Test
View Schematic window.

Contacting Customer Service


Use the following methods to get help for your Cadence product.
■ Cadence Online Customer Support
Cadence online customer support offers answers to your most common technical
questions. It lets you search more than 40,000 FAQs, notifications, software updates,
and technical solutions documents that give step-by-step instructions on how to solve
known problems. It also gives you product-specific e-mail notifications, software updates,
service request tracking, up-to-date release information, full site search capabilities,
software update ordering, and much more.
Go to http://www.cadence.com/support/pages/default.aspx for more information on
Cadence Online Customer Support.
■ Cadence Customer Response Center (CRC)
A qualified Applications Engineer is ready to answer all of your technical questions on
the use of this product through the Cadence Customer Response Center (CRC). Contact
the CRC through Cadence Online Support. Go to http://support.cadence.com and click
the Contact Customer Support link to view contact information for your region.
■ IBM Field Design Center Customers
Contact IBM EDA Customer Services at 1-802-769-6753, FAX 1-802-769-7226. From
outside the United States call 001-1-802-769-6753, FAX 001-1-802-769-7226. The e-
mail address is [email protected].

Encounter Test And Diagnostics Licenses


Refer to “Encounter Test and Diagnostics Product License Configuration” in What’s New for
Encounter Test and Diagnostics for details on product license structure and requirements.

What We Changed for this Edition


Added the following topics:
■ “Writing Tester Description Language (TDL)” on page 182

1
Introduction to Automatic Test Pattern Generation

Test pattern generation is the process of determining what stimuli to apply to a design to
demonstrate that the design is constructed correctly. These test vectors are applied to show
that the design contains no manufacturing-induced defects. The test vectors can be generated
automatically by Encounter Test or created manually.

The following figure shows where test pattern generation fits in a typical Encounter Test flow.

Figure 1-1 Encounter Test Process Flow

Build Model

Build Fault Model

Build Test Mode

Verify Test Structures

Automatic Test Pattern Generation

Write Vectors for Manufacturing

Encounter Test automatic test pattern generation (ATPG) supports designs with compression
and low power logic in a static, untimed, or timed environment. The following types of tests
are supported:
■ Scan Chain - refer to “Scan Chain Tests” on page 30
■ Logic - refer to “Logic Tests” on page 31
❑ Static
❑ Dynamic (with or without SDF timings)
■ Path - refer to “Path Tests” on page 32
■ IDDq - refer to “IDDq Tests” on page 32
■ Driver and Receiver - refer to Parametric Test in the Custom Features Reference

■ IO Wrap tests
■ IEEE 1149.1 JTAG, Boundary Scan Verification patterns
■ Core tests - refer to “Core Tests” on page 34
■ Low power tests - refer to Creating Low Power Tests in the Encounter Test Low Power
Guide for more information.
■ Commit test patterns - refer to “Committing Tests” on page 233

In addition to the above-mentioned types of tests, Encounter Test provides the following
features:
■ Compacting and manipulating test patterns
■ Compressing test patterns
■ Simulating and fault grading of test patterns
■ Fault sub-setting
■ Generating Low Power Tests

It is recommended that you perform Test Structure Verification (TSV) to verify the
conformance of a design to general design guidelines. Nonconformance to these guidelines
results in poor test coverage or invalid test data.

ATPG Process Overview


The prerequisites for automatic test pattern generation (ATPG) in Encounter Test are a built
logic model, a test mode, and a fault model. A fault is selected from the fault list and sent
to the test generation engine. Based on the input constraints of the test mode and optional
input parameters, the engine either generates a test for the fault, classifies the fault as
untestable, or stops when it reaches its process limits.

The following figure shows this iterative process.

Figure 1-2 ATPG flow

Generated tests are passed to the active compaction step and accumulated. Test compaction
merges multiple test patterns into one, with a user-selectable effort, provided the patterns do
not conflict.
Note: If the generated tests are applied at different oscillator frequencies, the test
compaction function does not produce any warning message but applies all the tests at the
same oscillator frequency.


When the test group reaches a certain limit, such as 32 or 64 vectors, the tests are passed to
the fault simulator, which runs a good-machine (fault-free) simulation and a faulty-machine
simulation to predict the output states and identify tested faults. The tested faults are marked
off in the fault list. A fault is considered tested when the good-machine value differs from the
faulty-machine value at a valid measure point.

The results of test pattern generation, active compaction, and fault simulation are collectively
called an experiment. An experiment can be saved or committed if the user is satisfied with
the pattern and test coverage results (refer to “Committing Tests” on page 233 for more
information). It can be appended to, as in top-off or add-on patterns, or it can be discarded
and started from scratch. More complex procedures involving multiple fault lists, test modes,
experiments, and cross mark-offs of faults can be done with this type of process flow.

This process repeats until the unprocessed fault list is empty or user-defined process limits are
reached. Encounter Test then reports a summary of the test generation process with the
number of test sequences and the test coverage achieved.

The following figure shows the fault list processing:

Figure 1-3 Fault List Processing

Notes:
1. To perform Test Generation using the graphical interface, refer to “ATPG Pull-down” in
the Graphical User Interface Reference.
2. Also refer to “Static Test Generation” on page 59.

Additional Pattern Compaction


Pattern compaction is an optional step that is useful for reducing the test pattern count even
further than test compaction. It repeats the fault simulation step, processing the test
patterns in a sorted or reverse order from how they were first simulated. The faults tested by
the experiment are first reset; as the reordered patterns are resimulated and faults are marked
as tested, any subsequent pattern that tests no new faults is discarded.

General Types of Tests


Encounter Test supports the following types of tests:

Scan Chain Tests


These tests are used to verify the correct operation of scan chains. Since most scan-based
test vectors presume correct operation of the scan chains, it is important (for diagnostic
purposes) to verify that the scan chains are working before applying any test vectors that
depend on them. The Encounter Test stored pattern test generation application can
automatically generate scan chain tests to check the correct operation of scan chains. If the
design is operating in a level sensitive scan design (LSSD) mode, a scan chain LSSD flush
test may also be generated automatically.

These scan chain tests may be static, dynamic, or dynamic timed. See “Test Vector Forms”
on page 242 for more information on these test formats.

You can generate scan chain tests for designs using test compression, which will verify the
correct operation of the spreader, compactor, channels, and masking logic.

The scan test walks each scannable flop through all possible transitions by loading an
alternating sequence (00110011...) and simulating several pulses of the shift clock(s).
■ LSSD Flush Test
For Level Sensitive Scan Design (LSSD) designs, it is possible to turn on all the shift A
clocks and all of the shift B clocks simultaneously. In this state of the design, any logic

values placed on the scan inputs will “flush” through to the scan output pins of the
respective scan chains. The LSSD flush test generated by Encounter Test sets all scan
input pins to zero and lets that value flush through to the scan outputs. Then all scan input
pins are set to one and that value is allowed to flush through to the scan outputs. Finally, all
the scan input pins are set back to zero.
LSSD flush tests are sometimes used to screen for different speed chip samples. Since
the scan chain usually traverses every portion of the chip, the LSSD flush test may be a
reasonable gauge of the overall chip speed.
An LSSD flush test is not generated if Logic Test Structure Verification (TSV) has not been
run. The TSV tests determine whether an LSSD flush test can be applied. An LSSD flush test
cannot be applied if:
❑ Either the shift A or shift B clocks are chopped
❑ The scan chain contains one or more edge-sensitive storage elements (flip-flops).
❑ The shift clocks are ANDed with other shift clocks which could result in unpredictable
behavior when they are all ON.

Logic Tests
These tests are used to verify the correct operation of the logic of the chip. This is not to be
confused with functional testing. The objective of the logic tests is to detect as many
manufacturing defects as possible.

Logic tests can be static or dynamic (with or without timings). You can specify the timings
through a clock constraint file (refer to “Clock Constraints File” on page 125 for more
information) or an SDF file.

See “Test Vector Forms” on page 242 for additional detail on these test formats.

Static logic tests are the conventional mainstream logic tests. They detect common defects
such as stuck-at and shorted-net defects, and may also detect open defects. Encounter Test
does not explicitly target CMOS open defects, but stuck-at fault tests detect most of them.

Dynamic tests are used to detect dynamic, or delay types of defects. Dynamic tests specify
certain timed events, and this format can be applied to test patterns targeted for static faults.
The converse is not supported; Encounter Test does not allow static-formatted tests when
dynamic faults are targeted.

Static tests for scan designs have the following general format:

Scan_Load
Stim_PI
Pulse one or more clocks
Measure_PO (optional)
Scan_Unload

In general, dynamic tests will contain the following events:


Scan_Load
Stim_PI

Dynamic events:
Pulse launch clock
Pulse capture clock (this is the at-speed, timed part of the test, applied quickly)
Measure_PO (optional)
Scan_Unload

The dynamic events section is the only part of the test that can be applied quickly (at speed).
The other events in the test are applied slowly.

The example pattern format mentioned above represents a typical delay test. Other delay test
formats can also be produced.

Path Tests
Path tests produce a transition along an arbitrary path of combinational logic. The paths to be
tested may be specified using the pathfile keyword or selected by Encounter Test when the
maxpathlength keyword is specified.

Path delay tests may or may not be timed. Paths may be tested at various degrees of
strictness, ranging from Hazard-Free Robust (most strict) through Robust and Nearly-Robust
to Non-Robust (least strict). Refer to “Path Tests” on page 137.

Refer to the following for additional information:


■ “Delay and Timed Test” on page 71.
■ “Path Pattern Faults” in the Modeling User Guide
■ “create_path_delay_tests” in the Command Line Reference

IDDq Tests
For static CMOS designs, it is possible to detect certain kinds of defects by applying a test
pattern and checking for excessive current drain after the switching activity quiets down.
This is called IDDq testing (IDD stands for the supply current and q stands for quiescent).
Refer to “Create IDDq Tests” on page 275 for more information.

Parametric (Driver/Receiver) Tests
Parametric tests exercise the off-chip drivers and on-chip receivers. For each off-chip driver,
objectives are added to the fault model for DRV1, DRV0, and if applicable, DRVZ.

For each on-chip receiver, objectives are added to the fault model for RCV1 and RCV0 at
each latch fed by the receiver. These tests are typically used to validate that the driver
produces the expected voltages and that the receiver responds at the expected thresholds.

IO Wrap Tests
These tests are produced to exercise the driver and receiver logic. The tests use the chip’s
internal logic to drive known values onto the pads and to observe these values through the
pads’ receivers. IO wrap tests may be static or dynamic. Static IO wrap tests produce a single
steady value on the pad; dynamic IO wrap tests produce a transition on the pad.

IEEE 1149.1 Boundary Scan Verification Tests


Verification patterns are produced to validate the IEEE 1149.1 standard functions. No faults
are processed and no specific defects are targeted.

IEEE 1149.1 boundary scan verification ensures that:


■ The design is fully compliant with the IEEE 1149.1 standard.
■ The Boundary Scan Design Language (BSDL) matches the design and visa versa, and
that both are compliant with the IEEE 1149.1 Standard.

Encounter Test also supports IEEE 1149.6 boundary scan structures and verifies compliance
with the IEEE 1149.6 standard. Verification is limited to the digital IEEE 1149.6 constructs.

Encounter Test ATPG can be run on test modes containing 1149.x constructs. Refer to
page 312 for more information.

Core Tests
Encounter Test supports core testing as per IEEE 1500 standard. Refer to “Create Core
Tests” on page 283 and IEEE 1500 Embedded Core Test in the Encounter Test Synthesis
User Guide for more information.

Low Power Tests


Refer to Encounter Test Low Power User Guide for details on configuring a low power
design and producing low power test patterns.

Committing Tests
This is an Encounter Test concept where a set of test patterns and their corresponding tested
faults are saved. All subsequent runs start at the test coverage achieved by the saved
patterns. Refer to “Committing Tests” on page 233 for more information.

Inputs for ATPG


The following figure represents the required and optional input for ATPG processing.

Test Generation Restrictions


The restrictions of test generation are as follows:
■ Non-conformance of designs to Encounter Test LSSD guidelines and GSD (General
Scan Design) guidelines can result in invalid test data and reduced test coverage. The
Test Structure Verification process verifies the conformance to these guidelines.
■ There is no test generation override option to process the dynamic faults that were
excluded from the fault model or from the test mode.

Invoking ATPG
Use the following syntax to invoke ATPG through the command line:
create_<test type>_tests EXPERIMENT=<experiment name> TESTMODE=<testmode name>
WORKDIR=<directory>

For example, to create logic tests, use the following command:


create_logic_tests EXPERIMENT=name TESTMODE=name WORKDIR=<directory>

Refer to create_logic_tests in the Command Line Reference for more information.

Use the following command to commit the test results:


commit_tests INEXPERIMENT=<name> TESTMODE=<testmode name> WORKDIR=<directory>

Refer to commit_tests in the Command Line Reference for more information.

To invoke ATPG using the graphical user interface, select ATPG - Create Tests - <Test
type>

For example, to generate scan chain tests using GUI, select:

ATPG - Create Tests - Specific Static Tests - Scan Chain

To commit tests, select:

ATPG - Commit Tests

The following figure shows the ATPG menu in GUI:

Figure 1-4 ATPG Menu in GUI

Stored Pattern Test Generation (SPTG)


Stored Pattern Test Generation (SPTG) is an approach for component manufacturing test. It
performs test pattern generation, test compaction, and fault simulation to create test patterns
that can be applied to a design (using a stored pattern tester) to test for defects. You can
experiment and save the results of experiments.

The following concepts are important to this discussion.


■ Test pattern generation is the creation of a set of test patterns, sometimes referred to as
test vectors.

■ Static compaction is the merging of several test patterns into a single test pattern to
target multiple defects. This reduces the number of test patterns required to test a
design.
■ Fault simulation accomplishes the following:
❑ Computes the expected response values for a defect free design. The test patterns,
including the expected response values, are written to an output file. This file
contains information to be applied on a stored pattern tester.
❑ Determines which faults are detected by the test patterns.
Fault simulation acts as a filter. For example, if a test pattern is simulated and it does
not detect any additional remaining faults (in other words, is ineffective), you can
choose to not write the test pattern to the output file. Also, if any test pattern could
potentially damage the product, you can choose to not write the pattern to the output
file.
The results of fault simulation are used to compute test coverage (the percentage of
the faults that are detected by the tests). Refer to “Fault Statistics” in the Modeling
User Guide for details on the calculation of test coverage.

Stored Pattern Test Generation applies the concepts of test pattern generation, compaction,
and fault simulation in the process shown in Figure 1-2.

True-Time Test: An Overview


Encounter Test supports both static and delay testing. Delay test patterns can be generated
untimed or timed. The static and untimed delay test flows are very similar, with only slight
differences in the commands and options. For timed ATPG, True-Time Test provides an
alternative methodology to the Encounter Test standard delay test methodology by using a
streamlined and simplified two-phase process for generating timed patterns. This
methodology filters out a large number of specialized and potentially problematic sequences
that detect few faults and cause numerous iterations at the tester.

The following are the three phases of True-Time Test:


1. Logic model preparation
2. Pre-analysis of the chip, refer to “True-Time Test Pre-Analysis” on page 38.
3. Test pattern generation, refer to “True-Time Test Pattern Generation” on page 39.

Designing a Logic Model for True-Time Test


When building a logic model for True-Time Test, use the build_model command keyword
truetime=yes to enable additional checks. These checks help validate that the technology
cell levels of hierarchy are correctly identified to increase the likelihood of matching with the
Standard Delay File (SDF).
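For example, assuming the rest of the build_model keywords for your design are already known, enabling the additional checks is a matter of adding the truetime keyword (a sketch only; the remaining keywords are described in the Command Line Reference):

build_model workdir=<directory> ... truetime=yes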

True-Time Test Pre-Analysis


Prepare Timed Sequences determines a set of optimal parameters to run test generation and
fault simulation for timed tests. These parameters include the following:
■ The best clock sequences
■ The frequency per sequence
■ Maximum path length for each selected sequence

You do not need to perform this step for static ATPG.

The clock sequences are combined into a multi-domain sequence that tests the combined
domains simultaneously. Refer to the preceding section in this chapter for related information.

The determination of best sequences is performed by generating test patterns for a statistical
sample of randomly selected dynamic faults. Each unique test pattern is evaluated to
ascertain how many times the test generator used it to test a fault. The set of patterns used
most often are considered the best sequences. Normally, the top four or five will test 80
percent of the chip. An additional option, maxsequences, is available to use more
sequences, if desired.
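For example, a pre-analysis run that allows up to eight sequences might be invoked as follows (a sketch only; the workdir and testmode keywords are shown in the style used by the other commands in this guide, and the value 8 is illustrative):

prepare_timed_sequences workdir=<directory> testmode=<modename> maxsequences=8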

The presence of prepared sequences is denoted by sequences named DelayTestClockn in
the sequence definition.

The maximum path length is determined by generating a distribution curve of the longest data
path delays between a random set of release and capture scan chain latch pairs. The area
under the curve is accumulated from shortest to longest and the maximum path length is the
first valley past the cutoff percentage. This method constrains the timings to ignore any
outlying paths that over inflate the test timings. Additional options are available to control this
step by providing the cutoff percentage for the curve and a maximum path length to override
the calculation.

The best sequences, their timings, and their constraints are stored in the TBDseq file for the
test mode. Refer to “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference
for related information.

To perform pre-analysis using commands, refer to “prepare_timed_sequences” in the
Command Line Reference.

To perform pre-analysis using the graphical user interface, refer to “Prepare Timed
Sequences” in the Graphical User Interface Reference.

True-Time Test Pattern Generation


Encounter Test supports both static and delay ATPG.

To generate patterns using static ATPG, refer to “Static Test Generation” on page 59.

The recommended flow for delay ATPG for timed tests is to have test generation run off the
results from pre-analysis done using Prepare Timed Sequences. This is not a required step,
but is highly recommended. If the pre-analysis phase is performed using Prepare Timed
Sequences, the applications automatically detect and process prepared sequence definitions
unless otherwise specified (through the useprep keyword). Create Logic Delay Tests
provides an option to exclude the sequence definitions produced by Prepare Timed
Sequences.

Usually, all of the best sequences are processed in order from most to least productive.
However, you can also process an individual sequence or an ordered list of sequences. The
number of faults processed for each job can be controlled by specifying maxfaults= on the
command line.
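For example (a sketch only; the maxfaults value is illustrative, and the remaining keywords follow the create_<test type>_tests form shown earlier in this guide):

create_logic_delay_tests workdir=<directory> testmode=<modename> experiment=<name> maxfaults=50000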

To generate patterns, refer to the following:


■ “Create Logic Delay Tests” in the Graphical User Interface Reference
■ “create_logic_delay_tests” in the Command Line Reference.

Types of True-Time ATPG

Automatic True-Time Testing


The Automatic True-Time flow automatically controls most delay test parameters. If SDF
information is used for this flow, clocking sequences, clock timings, and transition constraints
are automatically determined. To run this flow, perform the normal preparation required for
running timed delay tests and then use the prepare_timed_sequences command to start
the process. The minimum required inputs to prepare_timed_sequences are a test
mode and delay model. To achieve higher control of the process, specify any of the available

test generation and timing options. Refer to “prepare_timed_sequences” in the Command
Line Reference for details on all associated keywords.

After the prepare_timed_sequences step is completed, run
create_logic_delay_tests. The information from prepare_timed_sequences is
used automatically.

At-Speed and Faster Than At-Speed True-Time Testing


The At-Speed and Faster Than At-Speed flows are run the same way as the Automatic True-
Time Test flow, with the addition of an input clock constraints file that contains user-specified
timings for each clock domain to be used. For the At-Speed flow, specify the functional
speed of each domain within the clock constraints file. For the Faster Than At-Speed flow, it
is recommended to start at the fastest speed, then perform subsequent runs with
incrementally less aggressive timings until arriving back at the functional system speed. This
ensures that faults are marked off only along paths with a minimum of slack; longer paths
that are not measurable at the aggressive timings are not used to mark off faults. Therefore,
only faults that feed exclusively long paths are tested at the slower timings.

Static ATPG
Encounter Test has full support for a static ATPG flow. The processing is simple and
straightforward and has been used on thousands of chips. Refer to “Static Test Generation” on
page 59 for more information.


2
Using the True-Time Use Model Script

Encounter Test provides a script named true_time to run the Encounter Test True-Time flow
on a design. The script takes a design through build model and test pattern generation using
ATPG, and writes out patterns in the desired language. The script also gives you the option
of running a portion of, or the entire, Encounter Test flow; for example, you can use the script
to perform only ATPG and not build the model. This script is designed and maintained to
provide the best flow through Encounter Test for the general marketplace.
Note: You might get more optimized results using direct system calls, but the script is
designed to give good results without requiring expertise in all the Encounter Test domains.

The script provides the following:


■ Support for the four basic use models (static, fixed time, at-speed, and faster than at-
speed)
■ Ability to perform static top-off after delay ATPG
■ Optional pattern sorting
■ Optional SDC processing
■ Up to three faster than at-speed runs
■ Support for full scan and compression modes (OPMISR+, XOR)
■ Support for user-modified TDRs, modedefs, user sequences, and lineholds
■ Support for industry compatible fault modeling:
❑ Full fault
❑ Cell boundary
■ Support for an optional cell delay template (to fix SDF related problems)
■ Testing with memories defined as blackboxes

Some other features of the script are:

■ Consistent with each release:


❑ Updated with new features for each release
❑ Removes obsolete commands and keywords
❑ Contains recommended keyword settings
■ Helps you set up Encounter Test True Time
❑ The setup file (tt_setup) contains a limited set of user inputs
❑ The script executes the required commands
❑ A single setup keyword may relate to several Encounter Test keywords or
commands (test_through_memories -> allowjustifytimeframes=n,
dynonly2clocks=no,...)
■ Helps determine scope of evaluation
■ The setup file serves as a common checklist for all evaluations
■ Prints the following output to xterm:
❑ Message summary only if warnings or errors
❑ Total fault counts
❑ Scan chain summary messages
❑ Test coverage and pattern count totals
❑ Complete logs also available
■ Stops on severe warnings
❑ Makes them more visible and encourages fixing them immediately
❑ Script allows override if necessary
■ Allows you to stop the execution after performing some steps
■ Allows restarting the execution after performing some steps
■ Saves commands in a file for additional editing, if required

Executing the Encounter Test True Time Script


The true_time command used to run the true-time script differs slightly from other
Encounter Test commands. The command takes a setup file as input and reads the rest of
the data from this file.

The syntax for the true_time command is given below.


true_time <setup_file>

where setup_file is the file with options and parameters to run through the Encounter
True Time flow.

Prerequisite Tasks
Complete the following tasks before executing the true_time script:
■ Create a working directory by using the mkdir <directory name> command.
■ Fill in the setup file (<setup_file>) with required steps.
■ Set up Encounter Test in your environment (using et -c or et)

Setup File Input


Encounter Test is shipped with the following two templates:
■ $Install_Dir/tb/etc/tb/contrib/tt_setup - Used for delay and then static ATPG
■ $Install_Dir/tb/etc/tb/contrib/tt_setup_static - Used only for static ATPG

Create a copy of either of these files in your working directory. The tt_setup_static
template is a subset of tt_setup. The following topics discuss the various sections and
inputs of the tt_setup file.
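A typical setup might look like the following shell session (the working directory name is illustrative; the template path and commands are those described above):

mkdir my_workdir
cd my_workdir
cp $Install_Dir/tb/etc/tb/contrib/tt_setup .
et -c
# edit tt_setup as described in the following sections, then:
true_time tt_setup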

Script Control Information

This information controls the execution of the script and allows you to start or stop after
certain steps and control log files.

The following table lists the parameters and the corresponding values for this section of the
setup file:

■ RESTART=<value>
  Restart processing from a previous step. This is optional. If not specified, the script
  starts from build_model.
  Values: build_model (the beginning), build_testmode, read_sdc (also does read_sdf),
  atpg, write_vectors, run_ncverilog

■ EXITBEFORE=<value>
  Exit the script before the specified step. This is optional. If not specified, the script
  stops at the end.
  Values: build_model (the beginning), build_testmode, read_sdc (also does read_sdf),
  atpg, write_vectors, run_ncverilog, end (the end)

■ REMOVE_OLD_LOGFILES=<value>
  Clear the log file directory. This is optional. If not specified, the script clears the
  log files.
  Values: yes - clear the logs (default); no - keep the logs

■ SCRIPTFILE=<filename>
  Save the executed commands in a file. If not specified, the script saves the commands
  in $WORKDIR/run_et.

■ CONTINUE_WITH_SEVERE=<value>
  Continue execution even if severe errors are found. This is optional.
  Values: yes - continue even if severe messages are encountered (default); no - stop
  processing if severe messages are encountered

■ EXECUTE=<value>
  Execute the commands as the script is created. This is optional. If not specified, all
  commands are executed by default.
  Values: yes - run all commands (default); no - create the script without executing it

■ LOGFILE=<filename>
  Save the log of the script in the specified file. This is optional. If not specified,
  the log from the script is not saved.

■ METHNAME=<methodology name>
  Create a methodology file using the information in the tt_setup file. The methodology
  file is stored in WORKDIR.
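For example, the script-control portion of a tt_setup file might contain entries such as the following (the step choices and file names are illustrative only):

RESTART=atpg
EXITBEFORE=write_vectors
REMOVE_OLD_LOGFILES=yes
CONTINUE_WITH_SEVERE=no
LOGFILE=tt_run.log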

Model Information

This section contains information used to create the Encounter Test model and fault model.
The following information is required only for build_model and/or build_faultmodel
steps.

Refer to build_model and build_faultmodel in the Command Line Reference for more
information on the respective parameters.

■ WORKDIR=<directory>
  Specify a working directory. Required for all steps.

■ DESIGNSOURCE=<file.v>
  Specify the fully qualified name of a netlist. Required only for the build_model step.

■ TECHLIB=<file1.v,file2.v>
  Specify the fully qualified names of the technology libraries (multiple files can be
  specified, separated by commas). Required only for the build_model step.

■ TOPCELL=<name>
  Specify a top-level design cell. This is optional; if not specified, Encounter Test
  selects the last cell in the design source.

■ CREATEBLACKBOXES=<value>
  Allow the tool to automatically create blackboxes for missing cells. This is optional,
  and the default value is no.
  Values: no - do not create black boxes (default); yes - create black boxes

■ BLACKBOXOUTPUTS=<value>
  Tie the outputs of black boxes to a value. This is optional, and the default value is x.
  Values: x, z, X, Z, 0, 1

■ INDUSTRYCOMPATIBLE=<value>
  Align fault models with other tools. This is optional, and the default value is no.
  Values: no - do not create an industry compatible fault model (default); yes - create
  the special fault model

■ CELLFAULTS=<value>
  Build an entire fault model or a cell fault model. This is optional, and the default
  value is no.
  Values: no - create a full fault model (default); yes - create only cell faults

■ OPTIMIZE=<value>
  Optimize the logic model for improved performance. This is optional, and the default
  value is dangling.
  Values: dangling - remove dangling logic (default); none - do not remove any logic

■ VLOGPARSER=<value>
  Specify the Verilog parser to use. This is optional. The default value is IEEEstandard,
  but Encounter Test also determines the best parser based on the comments in the files.
  Values: IEEEstandard - Verilog 2001; et - Verilog with the older //! style attributes
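For example, the model portion of a tt_setup file might contain entries such as the following (the file and cell names are hypothetical):

WORKDIR=.
DESIGNSOURCE=chip_top.v
TECHLIB=stdcells.v,iocells.v
TOPCELL=chip_top
CREATEBLACKBOXES=yes
BLACKBOXOUTPUTS=x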

Test Mode Information

The information in this section is used to create the Encounter Test test modes. The
information in the following table is required only if you are using the true_time script to
build a testmode(s).

Refer to build_testmode in the Command Line Reference for more information.

■ ASSIGNFILE=<file>
  Specify the file containing test function pins (such as clocks and scan enables). When
  using compression, specify the fullscan test mode assign file. Required for the
  build_testmode step.

■ COMPRESSION=<value>
  If compression is present in the design, specify the type of compression being used.
  Required for the build_testmode step.
  Values: opmisrplus - use OPMISRPLUS compression; opmisrplus_topoff - use OPMISRPLUS
  compression with ATPG topoff using the fullscan mode; xor - use XOR compression;
  xor_topoff - use XOR compression with ATPG topoff using the fullscan mode

■ COMPRESSIONASSIGNFILE=<file>
  Specify the file containing test function pins (such as clocks and scan enables) for
  the compression testmode. Required for the build_testmode step to build the compression
  testmode.

■ TESTMODE=<file name>
  Specify the custom mode definition file name. This is optional.

■ TDRPATH=<directory>
  Specify the TDR directory to override the default TDR. This is optional.

■ MODEDEFPATH=<directory>
  Specify the directory containing the custom mode definition file defined by TESTMODE
  above. This is optional.

■ STILFILE=<file>
  An alternative method to specify test function pins and mode definition files (STIL SPF
  file). This is optional.

■ SEQDEF=<file>
  The file where TBDseqPatt mode initialization and custom scan protocols are found. This
  is optional.

■ COMPRESSIONTESTMODE=<file name>
  Specify the custom mode definition file for the compression test mode. This is optional.

■ COMPRESSIONSEQDEF=<file>
  Specify the sequence definition file name or names that contain pre-defined input
  pattern sequences.

■ COMPRESSIONSTILFILE=<file>
  An alternative method to specify test function pins and mode definition files (STIL SPF
  file). This is optional.

ATPG Controls and Static ATPG Information

This information is used to control ATPG settings. The parameters listed in the following table
are required only if you use the true_time script to run ATPG.

Refer to create_logic_tests and create_scanchain_tests in the Command Line Reference
and “Static Test Generation” on page 59 for more information.

■ ATPGTYPE=<value>
  Specify the type of ATPG to perform. This is a required parameter.
  Values: static - static ATPG; dynamic - delay ATPG; dynamic_topoff - delay ATPG with
  static topoff; dynamic_only - only simulate dynamic faults

■ EFFORT=<value>
  Specify the ATPG effort. This is optional, and the default value is low.
  Values: low - low ATPG effort; medium - more effort; high - run the higher effort levels

■ COMPACTION=<value>
  Specify the ATPG compaction effort. This is optional, and the default value is medium.
  Values: low - less compaction; medium - good amount of compaction; high - higher level
  of compaction

■ SORTPATTERNS=<value>
  Perform sorting of the resultant ATPG vectors. This reduces coverage slightly, but can
  drastically reduce pattern counts. The default value is no.
  Values: no - do not run extra sorting; yes - do additional sorting

■ STATICSEQUENCES=<name>
  Specify the name of the test sequence for ATPG. This is imported by SEQUENCEFILE and is
  optional.

■ LINEHOLD=<file>
  Specify the user-specified linehold file. This is optional.

■ MAXCPUTIME=2880
  Specify the maximum time for the ATPG steps, in minutes (2880 minutes is 2 days). This
  is optional.

■ SEQUENCEFILE=<file>
  Specify the user-specified test sequence file for ATPG. This is optional.
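For example, the ATPG-control portion of a tt_setup file might contain entries such as the following (the value choices are illustrative only):

ATPGTYPE=dynamic_topoff
EFFORT=high
COMPACTION=high
SORTPATTERNS=yes
MAXCPUTIME=2880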

Delay ATPG Information

This input is used to control delay ATPG settings. The parameters listed in the following table
are required only if you use the true_time script to run delay ATPG.
Note: This section does not exist in the tt_setup_static file.

For more information on delay ATPG, refer to “Delay and Timed Test” on page 71.

■ SDCPATH=<directory>
  Specify the directory containing the SDC file. This is optional and is used for
  fixed-time, at-speed, and faster-than-at-speed tests.

■ SDCNAME=<name>
  Specify the name of the SDC file. This is optional and is used for fixed-time, at-speed,
  and faster-than-at-speed tests.

■ SDFPATH=<directory>
  Specify the directory containing the SDF file. This is optional and is used for at-speed
  and faster-than-at-speed tests.

■ SDFNAME=<name>
  Specify the name of the SDF file. This is optional and is used for at-speed and
  faster-than-at-speed tests.

■ CLOCKCONSTRAINTS=<file>
  Specify the clock constraint file with the required clocks and frequencies. For at-speed
  tests, this is the only clock constraint parameter you need to specify. For
  faster-than-at-speed tests, the values in the file should represent the
  faster-than-at-speed times. This is optional and is used for fixed-time, at-speed, and
  faster-than-at-speed tests.

■ CLOCKCONSTRAINTS2=<file>
  When using the faster-than-at-speed methodology, this file contains the next fastest
  frequencies. This is optional and is used for faster-than-at-speed tests.

■ CLOCKCONSTRAINTS3=<file>
  When using the faster-than-at-speed methodology, this file contains the slowest
  frequencies. This is optional and is used for faster-than-at-speed tests.

■ ANALYZE=<value>
  Analyze defect sizes. This is optional, and the default value is no.
  Values: no - do not perform defect size analysis (default); yes - perform defect size
  analysis

■ DYNAMICSEQUENCES=<name>
  Specify the name of the sequences to use for ATPG. This is optional.

■ DYNAMICCOVERAGE=<value>
  Specify the type of dynamic faults to process, that is, only faults within a domain, or
  faults within a domain and across domains. The default value is intradomain.
  Values: intradomain - perform ATPG on faults within domains; alldomains - perform ATPG
  on faults within domains and across domains

■ LASTSHIFTLAUNCH=<value>
  Specify whether to use launch off last shift. The default value is no.
  Values: no - use launch off capture; yes - use launch off shift

■ DELAYTESTTHRUMEMORIES=<value>
  Specify whether to generate tests through memories. The default value is no.
  Values: no - do not generate tests through memories; yes - generate tests through
  memories

■ EARLYMODE=<value>
  Use a linear combination of delays for timing checks that use early mode timing. This is
  optional.
  Format: x,y,z where x, y, and z add up to 1. For example, 0,1,0 or 0,0.5,0.5.

■ LATEMODE=<value>
  Use a linear combination of delays for timing checks that use late mode timing. This is
  optional.
  Format: x,y,z where x, y, and z add up to 1. For example, 0,1,0 or 0,0.5,0.5.

■ TEMPLATESOURCE=<file>
  Used by read_celldelay_template to identify a list of required delays for a cell. This
  is optional.

■ MAXSEQ=99
  Specify the maximum number of sequences for ATPG.

■ USEPREP=<value>
  Specify whether to run prepare_timed_sequences before delay ATPG. The default value is
  yes.
  Values: yes - execute prepare_timed_sequences (default); no - do not execute
  prepare_timed_sequences
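For example, the delay ATPG portion of a tt_setup file for an at-speed run might contain entries such as the following (the directory, file, and value choices are illustrative only):

SDFPATH=./timing
SDFNAME=chip_top.sdf
CLOCKCONSTRAINTS=clocks_atspeed.txt
USEPREP=yes
LASTSHIFTLAUNCH=no
MAXSEQ=10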

Path Delay ATPG Information

This information is used to control the path delay ATPG settings. The parameters listed in the
following table are required only if you use the true_time script to run path delay ATPG.
This section does not exist in the tt_setup_static file.

■ PATHTEST=<value>
  Generate a path test. This is optional, and the default value is no.
  Values: no - do not generate any path tests; yes - generate path tests in addition to
  transition tests; only - generate path tests without transition tests

■ PATHFILE=<file>
  Specify the file containing the list of paths. Required for path tests.

■ PATHNAME=<name>
  Specify an alternate fault model name. This is the last qualifier and is optional.

■ PATHTYPE=<value>
  Specify the type of path tests to generate. This is optional, and the default value is
  nearlyrobust.
  Values: nearlyrobust, robust, nonrobust, hazardfree

Write Patterns Information

This information is used to control the writing of the test vectors into different formats.

For more information on Test Vector Formats, refer to write_vectors in the Command Line
Reference or “Test Vector Forms” on page 242.

■ WRITEVERILOG=<value>
  Specify whether to write out Verilog patterns. This is optional, and the default value
  is no.
  Values: no - do not write out patterns in Verilog (default); yes - write out the
  patterns in Verilog

■ VERILOG_SCAN=<value>
  Specify the kind of patterns to write out. This is required if writing out Verilog. The
  default value is default.
  Values: serial - the expanded scan format; parallel - scan values are applied directly
  to and measured directly at the scan registers

■ WRITESTIL=<value>
  Specify whether to write out STIL patterns. This is optional, and the default value is
  no.
  Values: no - do not write out patterns in STIL (default); yes - write out the patterns
  in STIL

■ WRITEWGL=<value>
  Specify whether to write out WGL patterns. This is optional, and the default value is
  no.
  Values: no - do not write out patterns in WGL (default); yes - write out the patterns
  in WGL

NC-Sim Information

This information is used to control the execution of NC-sim. The parameters listed in the
following table require the writing out of test patterns in the Verilog format.

For more information on NC-sim, refer to “NC-Sim Considerations” on page 184.

■ NCVERILOG_DESIGN=<files>
  Specify the design and techlib files to use for NC-Sim, separated by “,”.

■ NCVERILOG_OPTIONS=<options>
  Specify the ncverilog options to use in ncverilog, separated by “,”.

■ NCVERILOG_DEFINE=<options>
  Specify the +define options to use in ncverilog, separated by “,”.

■ NCVERILOG_DIRECTORY=<directory>
  Specify the directory in which to find ncverilog or ncsim. This avoids a potential
  conflict between the ncverilog used by Encounter Test and the ncverilog used for
  simulation.

Output
Before processing, the script parses the setup file to analyze the data and check for incorrect
values:
INFO - COMPRESSION keyword not set to recommended values: opmisrplus,
opmisrplus_topoff, xor, or xor_topoff. COMPRESSIONTESTMODE keyword not set.
No compression mode in the design.
Only TESTMODE=FULLSCAN_TIMED testmode is active.

Also, messages are reported to highlight the executing events:

INFO - Saving Encounter Test commands to script file: ./run_et


INFO - Using WORKDIR=.
....
Running commit_tests logic - to save logic patterns
*******************************************************************************
Total Static Coverage: 74.01%; Total Patterns 36;
*******************************************************************************
Completed Successfully. Continuing.
*******************************************************************************
Running write_vectors Verilog

INFO (TVE-003): Verilog write vectors output file will be: ./testresults/verilog/
VER.FULLSCAN_TIMED.data.scan.ex1.ts1. [end TVE_003]
INFO (TVE-003): Verilog write vectors output file will be: ./testresults/verilog/
VER.FULLSCAN_TIMED.mainsim.v. [end TVE_003]
Completed Successfully. Continuing.

If any information is missing, for example, no NC-sim information supplied, the script exits
before starting the step.
*******************************************************************************
Setup file indicates exit before run_ncverilog. Exiting.

The output summary highlights the coverage and pattern count achieved during the run.


3
Static Test Generation

This chapter explains the concepts and commands to perform static ATPG with Encounter
Test.

Several types of tests are available for static pattern test generation. Refer to “General Types
of Tests” on page 30 for more information.
■ To perform Test Generation using the graphical interface, refer to “ATPG Pull-down” in
the Graphical User Interface Reference.
■ To perform Test Generation using command lines, refer to descriptions of commands
for creating and preparing for tests in the Command Line Reference.

The availability of test generation functions is dependent on licensing. Refer to “Encounter


Test and Diagnostics Product Configuration” in What’s New for Encounter® Test for details
on the licensing server.

The chapter discusses the following ATPG tasks:


■ Scan Chain and Reset Fault Tests
❑ “Performing Scan Chain Tests” on page 62
❍ “Creating Reset Delay Tests” on page 344
❍ “Performing Flush Tests” on page 65
■ Logic Tests
❑ “Performing Create Logic Tests” on page 66
■ Exporting and Saving Patterns
❑ “Committing Tests” on page 69
❑ “Writing Test Vectors” on page 69

Static Test Pattern Generation Flow


The following figure shows a typical processing flow for running static test pattern generation;
a consolidated command-line sketch follows the numbered steps below.

Figure 3-1 Encounter Test Static Pattern Test Processing Flow

1. build_model - Reads in verilog netlists and builds the Encounter Test model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.
2. build_testmode - Creates scan chain configurations of the design

For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. verify_test_structures - Verifies scan chains and checks for design violations
Resolve any TSV violation before running ATPG.
For complete information, refer to “Logic Test Structure Verification (TSV)” in the
Verification User Guide.
4. build_faultmodel - Creates the fault model for ATPG
For complete information, refer to “Building a Fault Model” in the Modeling User Guide.
5. create_scanchain_tests - Creates test patterns to validate and test faults on the scan
chain path
For complete information, refer to “Scan Chain Tests” on page 30.
6. commit_tests - Saves the patterns and marks off faults from the scan chain test
The scan chain test is added to the master pattern set for this testmode so that
subsequent ATPG runs do not need to generate tests for these faults.
For complete information, refer to “Utilities and Test Vector Data” on page 233.
7. create_logic_tests - Creates patterns to test the remaining static faults
An experiment is created that contains the required type of tests. Most often these will
be logic tests.
For more complete information of the various test types, refer to “General Types of Tests”
on page 30.
Refer to “Advanced ATPG Tests” on page 275 for information on other types of ATPG
capabilities such as Iddq, core test, and parametric testing.
8. Should the results be kept?
If the results of the experiment are satisfactory, the experiment can be committed,
appending its test patterns and fault detections to the master pattern set for the testmode.
❑ Yes - commit tests
❑ No - If the results are not satisfactory, another experiment can be run with different
command line options.
You can also analyze untested faults. For complete information, refer to “Deterministic
Fault Analysis”, in the Verification User Guide.
9. write_vectors - Writes out the patterns in WGL, Verilog, or STIL format

For complete information, refer to “Writing and Reporting Test Data” on page 169.
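The numbered flow above corresponds roughly to the following command sequence (a sketch only: the FULLSCAN mode name and the experiment names are illustrative, and the exact keyword set required by each command is documented in the Command Line Reference):

build_model workdir=. <design and library keywords>
build_testmode workdir=. testmode=FULLSCAN <assign file keywords>
verify_test_structures workdir=. testmode=FULLSCAN
build_faultmodel workdir=.
create_scanchain_tests workdir=. testmode=FULLSCAN experiment=scan
commit_tests workdir=. testmode=FULLSCAN inexperiment=scan
create_logic_tests workdir=. testmode=FULLSCAN experiment=logic
commit_tests workdir=. testmode=FULLSCAN inexperiment=logic
write_vectors workdir=. testmode=FULLSCAN <output format keywords>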

Performing Scan Chain Tests


This command generates a scan chain test targeting static faults along the scan paths. After
creating the scan patterns, you can commit them to save the patterns and fault status for
future ATPG runs. For more information, refer to “Scan Chain Tests” on page 30.

For compression modes, the scan chain tests test many of the faults in the compression
networks. This also applies to testing faults in any existing masking logic.

To create scan chain tests using the graphical interface, refer to “Create Scan Chain Tests” in
the Graphical User Interface Reference.

To perform create scan chain tests using command lines, refer to "create_scanchain_tests"
in the Command Line Reference.

The syntax for the create_scanchain_tests command is given below:


create_scanchain_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns

Prerequisite Tasks
Before executing create_scanchain_tests, build a design, testmode, and fault model.
Refer to the Modeling User Guide for more information.

Important Information from Log


The output log contains a summary of the number of patterns generated and their
representative static fault coverage.
****************************************************************************
----Stored Pattern Test Generation Final Statistics----
Testmode Statistics: FULLSCAN

#Faults #Tested #Redund #Untested %TCov %ATCov


Total Static 908 425 0 437 46.81 46.81

Global Statistics

#Faults #Tested #Redund #Untested %TCov %ATCov


Total Static 1022 425 0 551 41.59 41.59
****************************************************************************

----Final Pattern Statistics----

Test Section Type # Test Sequences


----------------------------------------------------------
Scan 1
----------------------------------------------------------
Total 1

Debugging No Coverage
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.

Additional Tests Available


There are additional scripts to help test the set and reset faults on scan flops. These ATPG
commands are not officially supported but are available to achieve higher test coverage.
For more information, refer to “Creating Reset Delay Tests” on page 344.

An Overview to Scan Chain Patterns


The following is the structure of a static scan chain test mode:
■ Scan_Load - Load the scan chain bits with repeating 0011 pattern
■ Stim_PI - Stay in scan state
■ Static scan chain shift #1
❑ Stim_PI - Load the next value on scan inputs to continue 0011 pattern from scan
input
❑ Pulse - Pulse the scan clocks

❑ Measure_PO - Measure values on scan outputs


...

■ Static scan chain shift #5


❑ Stim_PI - Load the next value on scan inputs to continue 0011 pattern from scan
input.
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Scan_Unload - Observe all scan bits

In summary:
■ The static scan chain test is always in the scan state.
■ All clocks are configured at scan speeds.
■ Additional clocking and sequences are added while testing for compression, MISRs, or
masking.
■ The order of sequence can change slightly based on custom-scan protocols and custom-
scan preconditioning.

The following figure depicts a static scan chain waveform:

Figure 3-2 Static Scan Chain Waveform

Performing Flush Tests


This command generates a flush test that tests static faults along the scan paths. This test is
applicable only to LSSD clock designs. After creating the flush patterns, you can commit them
to save the patterns and fault status for future ATPG runs. Refer to “LSSD Flush Test” on
page 30 for more information.

To create flush tests using the graphical interface, refer to “Create LSSD Flush Tests” in the
Graphical User Interface Reference.

To create flush tests using the command line, refer to “create_lssd_flush_tests” in the
Command Line Reference.

The syntax for the create_lssd_flush_tests command is given below:


create_lssd_flush_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns

Prerequisite Tasks
Before executing create_lssd_flush_tests, build a design, testmode, and fault model.
Refer to the Modeling User Guide for more information.

Debugging No Coverage
If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.

Performing Create Logic Tests


Create logic tests generates static ATPG patterns. For more information, refer to “Logic Tests”
on page 31.

To perform Create Logic Tests using the graphical interface, refer to "Create Logic Tests" in
the Graphical User Interface Reference.

To perform Create Logic Tests using the command line, refer to “create_logic_tests” in the
Command Line Reference.

The syntax for the create_logic_tests command is given below:


create_logic_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test that will be generated

Prerequisite Tasks
Complete the following tasks before executing Create Logic Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model including static faults. See “Building a Fault Model” in the Modeling
User Guide for more information.

Output
Encounter Test stores the test patterns under the specified experiment name.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

****************************************************************************
----Stored Pattern Test Generation Final Statistics----

Testmode Statistics: COMPRESSION_ILL

#Faults #Tested #Redund #Untested %TCov %ATCov

Total Static 1857 1489 149 171 80.18 87.18

Global Statistics

#Faults #Tested #Redund #Untested %TCov %ATCov

Total Static 1950 1489 149 264 76.36 82.68


****************************************************************************

----Final Pattern Statistics----

Test Section Type # Test Sequences


----------------------------------------------------------
Logic 43
----------------------------------------------------------
Total 43

Specify reportoutput=industry to generate the report in the industry compatible
coverage format. A sample report for static (stuck-at 0/1) faults generated using
reportoutput=industry is given below; a worked calculation using these numbers follows
the coverage definitions:
*******************************************************************************
#Faults #Tested #Possibly #Redund #Untested
Static Testmode 68 32 16 0 20
Static Global 68 32 16 0 20
Static Test Coverage 47.06%
Static Fault Coverage 47.06%
Static ATPG Effectiveness 76.47%
**************************************************************************

■ Static Test Coverage is the percentage of detected faults out of detectable faults. It is
calculated as #tested in mode/(#test mode faults - #redundant -
#globally ignored)
■ Static Fault Coverage is the percentage of detected faults out of all faults. It is calculated
as #tested globally/(#global faults + #globally ignored)
■ Static ATPG Effectiveness is the percentage of ATPG-resolvable faults out of all faults. It
is calculated as (#Tested + #Redundant + #globally ignored + #ATPG
Untestable + #Possibly Tested)/(#global faults + #globally
ignored)
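As a worked example using the sample report above (assuming no globally ignored faults, since none are listed): Static Test Coverage = 32/(68 - 0 - 0) = 47.06%, and Static Fault Coverage = 32/68 = 47.06%. The Static ATPG Effectiveness of 76.47% corresponds to 52 of the 68 faults; with 32 tested and 16 possibly tested, this implies that the remaining 4 faults counted toward effectiveness were classified as ATPG untestable.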

Debugging Low Coverage

If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Note: Another tool to analyze low ATPG coverage is deterministic fault analysis. Refer to
Analyze Faults in the Verification User Guide for more information.

Using Checkpoint and Restart Capabilities


At times, the ATPG job can terminate abnormally before completing. This section
discusses the techniques provided by Encounter Test to recover the application data in such
cases.

A checkpoint allows the application to save its current results. You can set the checkpoints for
the application by using the checkpoint=<value> keyword, where value is the number of
minutes between each checkpoint. The default is 60 minutes. For example, if you specify
checkpoint=120, Encounter Test sets up checkpoints and saves application data every
two hours.

You can use the checkpoint data to restart in case of network or machine failure during the
execution of the application.
Note: Some processes cannot be interrupted to take a checkpoint. If one of these processes
takes longer than the specified number of minutes, the next checkpoint will be taken as soon
as the process ends.

Using the Andrew File System (AFS) might occasionally result in the checkpoint files getting
out of sync, which cannot be restored for restart. However, this problem has never occurred
when using local disk or NFS.

To restart the application using checkpoint data, re-run the same ATPG command after
adding the following:

restart=no|yes|end

Specifying restart allows you to restart an application that produced a checkpoint file
before it ended abnormally. If a checkpoint file exists, the default is to restart the application
from the checkpoint and continue processing (restart=yes).

Use restart=end to restart the application from the existing checkpoint but immediately
cleanup, write out results files, and then end.

Use restart=no to start the application from the beginning instead of using an existing
checkpoint file.

If you restart from a checkpoint, Encounter Test ignores any other keywords specified on the
restart command line; that is, only the keyword values from the original command line are used.
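For example, assuming a static ATPG run made with the create_logic_tests command and
hypothetical directory and experiment names (adapt these to your own run), the original run
and a restart after a failure might look as follows:

create_logic_tests workdir=/designs/chipA testmode=FULLSCAN experiment=exp1 checkpoint=120
create_logic_tests workdir=/designs/chipA testmode=FULLSCAN experiment=exp1 checkpoint=120 restart=yes

The second invocation repeats the original command and resumes from the last checkpoint;
restart=end could be used instead to write out the results gathered so far and end.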

Committing Tests
By default, Encounter Test treats all test generation runs as uncommitted tests in a test
mode. Commit Tests moves the uncommitted test results into the committed vectors test data
for a test mode. Refer to “Performing on Uncommitted Tests and Committing Test Data” on
page 234 for more information on the test generation processing methodology.

Writing Test Vectors


Encounter Test writes the following vector formats to meet the manufacturing interface
requirements of IC manufacturers:
■ Standard Test Interface Language (STIL) - an ASCII format standardized by the IEEE.
■ Waveform Generation Language (WGL) - an ASCII format from Fluence Technology, Inc.
■ Verilog - an ASCII format from Cadence Design Systems, Inc.

Refer to “Writing and Reporting Test Data” on page 169 for more information.
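For example, a sketch of a write_vectors invocation selecting one of these formats might look
as follows; the directory and testmode names are placeholders, and the language keyword and
its values are shown here as assumptions (check write_vectors in the Command Line
Reference for the exact keywords supported by your release):

write_vectors workdir=/designs/chipA testmode=FULLSCAN language=verilog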


4
Delay and Timed Test

This chapter provides information on the concepts and commands to perform delay and timed
ATPG with Encounter Test.

With current technologies, more and more defects are being identified that cause the product
to run slower than its designed speed. As stuck-at testing is a slow test that is only designed
to check the final result, paths with above average resistance or capacitance due to impurities
or a bad etch can go undetected. These defects can change the speed of a path to the point
where it is outside of the typically thin tolerance designed into the functional circuitry. This
can result in a defective product.

Delay defects take the form of spot and parametric defects. An example of the spot defect is
a partially open wire that adds resistance to the wire thereby slowing the signal passing
through the wire. A parametric defect is a change in the process that causes a common slight
variation across multiple gates, which in turn causes the arrival of a signal or transition to take
longer than expected. An example of a problem causing a parametric failure would be an
increase in the transistor channel length.

Encounter Test provides Manufacturing (slow-to-rise and slow-to-fall faults) and Characterization
(path delay) tests for identifying spot and parametric defects. Encounter Test provides a use
model script that takes you from model creation through dynamic ATPG test generation. Refer to
“Using the True-Time Use Model Script” on page 41 for more information on the script.

The following sections discuss the details about the commands that constitute the true-time
use model.

The following steps represent a typical delay ATPG use model flow for manufacturing defects
(an illustrative command sketch follows the list):
■ Scan chain and reset fault testing
❑ create_scanchain_delay_test - refer to “Creating Scan Chain Delay Tests” on
page 76
❍ prepare_reset_lineholds - refer to “Preparing Reset Lineholds” on page 343
❍ create_reset_delay_tests - refer to “Creating Reset Delay Tests” on page 344


■ Logic Testing
❑ read_sdf - refer to “Performing Build Delay Model (Read SDF)” on page 80
❑ prepare_timed_sequences - refer to “Performing Prepare Timed Sequences” on
page 111
❍ AT Speed and Faster than AT Speed Tests can be achieved by specifying the
appropriate clock frequency in a clock constraint file.
❑ create_logic_delay_test - refer to “Create Logic Delay Tests” on page 114
■ Exporting and saving patterns
❑ commit_tests - refer to “Committing Tests” on page 233
❑ write_vectors - refer to “Writing Test Vectors” on page 169
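The following is an illustrative command-line sketch of the preceding manufacturing flow. All
directory, testmode, experiment, delay model, and file names are placeholders, the keyword
lists are abbreviated, and the inexperiment and language keywords are assumptions made for
this sketch; consult the Command Line Reference for the exact keywords in your release:

create_scanchain_delay_tests workdir=/designs/chipA testmode=FULLSCAN experiment=scan_dly
read_sdf workdir=/designs/chipA testmode=FULLSCAN delaymodel=dm1 sdfpath=/designs/chipA/sdf sdfname=chipA.sdf
prepare_timed_sequences workdir=/designs/chipA testmode=FULLSCAN delaymodel=dm1
create_logic_delay_tests workdir=/designs/chipA testmode=FULLSCAN experiment=logic_dly delaymodel=dm1
commit_tests workdir=/designs/chipA testmode=FULLSCAN inexperiment=logic_dly
write_vectors workdir=/designs/chipA testmode=FULLSCAN language=verilog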

The following steps represent a typical delay ATPG use model flow for Characterization
defects (path test):
■ create_path_delay_tests - refer to “Performing Create Path Tests” on page 134
■ Exporting and saving patterns
❑ commit_tests - refer to “Committing Tests” on page 233
❑ write_vectors - refer to “Writing Test Vectors” on page 169

Refer to the following sections for more information on delay faults:


■ “Delay Defects” on page 141
■ “Delay ATPG Patterns” on page 142
■ “Testing Manufacturing Defects” on page 73
■ “Timing Concepts” on page 85
■ “Delay Path Calculation” on page 93
■ “Characterization Test” on page 133
■ “Dynamic Constraints” on page 150

Refer to the following figure for a high level flow of the various delay test methodologies:


Figure 4-1 Flow for Delay Test Methodologies

Note: It is recommended to use Timed Tests whenever possible.

Testing Manufacturing Defects


Manufacturing delay test is a full-chip transition fault test useful for finding random defects
across the entire chip. As designs get faster, technology gets smaller, and more paths fall closer
to the critical arrival time, manufacturing delay test assists in assuring quality levels. True-Time Test
is a streamlined method for running manufacturing test.


In Figure 4-2, a transition fault F exists in the logic between registers A and B. See Figure 4-
22 for an example of a transition fault.

Figure 4-2 Delay Testing and Effects of Clocking and Logic in Backcones

■ The logic between A and B operates at a speed of 1x.


■ The transition to test the fault is released (launched) by the ClkY from register A, and
captured by the ClkZ in register B. However, this ClkY then ClkZ sequence may initiate
other transitions in the design, for example, a transition in the logic from register C to register
D. The logic in this part of the design may operate at a speed of 2x (that is, twice as slow).
■ By default, Encounter Test will time all ClkY-ClkZ sequences to the longest (slowest) path
in the system. This is to ensure the tests will work in manufacturing (the responses will
be accurate at the tester).

The following methods impact the tests and their timings:


■ Lineholds, see “Linehold File” on page 197
■ Test function input pin switching (1->0, 0->1). See “Identifying Test Function Pins” in the
Modeling User Guide for related information.
■ Clock domains (controlled by lineholds or constrained timings). See CLOCKDOMAIN in
the Modeling User Guide for more information.
■ Early or late modes in the standard delay file, refer to “Timing Concepts” on page 85
■ Constrained timings, as described below.

It is realistic to assume that the tests produced to accommodate all timings will not effectively
test the fastest (1x) transitions. Those transitions must be operated twice as fast as the 2x
transitions if they are to operate at system speed in the design. Encounter Test can do this
through a technique known as Constrained Timings. This technique allows you to specify
options to exclude consideration of any paths greater than a certain cycle time (length). If a
register is fed by any path longer than the specified cycle time, the register is marked as not
observable causing no further detections to be possible at this register for this given timing
sequence. If a transition can occur in the specified cycle time or less, the register will be
measured and fault effects can be detected.

The timings specified with constrained timings can be tight timings for a given clock
sequence that allows the shorter paths to complete while the longer paths are ignored. When
consecutively run, tests for all timings (for all timing domains in the functional design) can be
generated for manufacturing by appending runs with shorter-to-longer timings after each
other. When appending runs, the fault lists (detected and undetected) are transferred
between the runs to accelerate follow-on processes. This approach detects faults at the
fastest speeds that they can be measured. The result is an increased pattern count but with
higher quality test patterns in terms of ability to detect real defects.

Lineholds and/or test inhibits can also be used to shut down paths in the design. Once a path
is no longer active, it is no longer considered by Encounter Test. Only the longest active path
will be used for any given clock sequence. Automatic, At-Speed, and Faster Than At-Speed
flows can automatically generate constraints for each clocking sequence, which can also
shut down long paths with respect to timing. While the test generator will target the longer
paths, the simulator can still detect faults on shorter paths.

Since the SDF data can contain best, normal and worst case timings, these values can be
used as is or can be linearly scaled to test for particular process points (like Vdd and/or
temperature). When this is done, Early Mode and Late Mode linear combination settings can
be used during test generation to affect the distribution of delays (variance of normal
distribution).

The methodology most used to produce a range of tests for given cycle times on the design
is to make runs with different constrained timings. That is, run with the fastest (shortest) cycle
times first. Subsequent runs are then used to increase the constrained times until the slowest
(longest) cycle times are reached. A common approach with this technique is when multiple
clock domains exist on the design. The cycle times for each clock domain are used as the
specification of the constrained timings.
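As an illustration of this methodology, assume a design with two clock domains whose cycle
times are 5 ns and 10 ns. Two appended delay ATPG runs with increasing constrained timings
might then look as follows; the command and keyword names are illustrative, the path length
values assume pico-second units as in the path length example later in this chapter, and the
mechanism for carrying the fault status between runs (typically committing the first experiment)
should be checked in the Command Line Reference:

create_logic_delay_tests workdir=/designs/chipA testmode=FULLSCAN delaymodel=dm1 experiment=fast_domain maxpathlength=5000
create_logic_delay_tests workdir=/designs/chipA testmode=FULLSCAN delaymodel=dm1 experiment=slow_domain maxpathlength=10000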

Manufacturing Delay Test Commands


This section discusses the various commands used to test manufacturing delays in a design.

Creating Scan Chain Delay Tests


This command generates a scan chain test targeting dynamic faults along the scan paths.
After the scan patterns are created, they can be committed to save the patterns and fault
status for future ATPG runs. The scan chain test can also be a timed test where the timings
are based on the timing found in an SDF. These are the timings found only in the scan state.

To create dynamic scan chain tests using the graphical interface, refer to Create Dynamic
Scan Chain Tests in the Graphical User Interface Reference.

To create scan chain delay tests using the command line, refer to
"create_scanchain_delay_tests" in the Command Line Reference.

The syntax for the create_scanchain_delay_tests command is given below:


create_scanchain_delay_tests workdir=<directory> testmode=<modename>
experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns

The following are the commonly-used timed options:


■ delaymodel=<name> - name of the delay model for timed ATPG tests
■ earlymode/latemode - ability to customize delay timings. The defaults are
0.0,1.0,0.0 for each option. Refer to “Process Variation” on page 129 for more
information.
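For example, a timed scan chain delay test run might be invoked as follows (all names are
placeholders):

create_scanchain_delay_tests workdir=/designs/chipA testmode=FULLSCAN experiment=scan_dly delaymodel=dm1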


Prerequisite Tasks

Complete the following tasks before executing create_scanchain_delay_tests:


1. Build a design, testmode, and fault model. Refer to the Modeling User Guide for more
information.
2. If using timings, build a delay model from an SDF.

Output

The output log, as shown below, will contain a summary of the number of patterns generated
and their representative coverage. Both static and dynamic faults should be tested.
****************************************************************************
----Stored Pattern Test Generation Final Statistics----
Testmode Statistics: FULLSCAN
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 908 425 0 437 46.81 46.81
Total Dynamic 814 316 0 498 38.82 38.82
Global Statistics
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 1022 425 0 551 41.59 41.59
Total Dynamic 1014 316 0 698 31.16 31.16
****************************************************************************
----Final Pattern Statistics----
Test Section Type # Test Sequences
----------------------------------------------------------
Scan 2
----------------------------------------------------------
Total 2

Debugging No Coverage

If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.

Additional tests

There are additional scripts to help test the set and reset faults on scan flops. These ATPG
commands are not officially supported but are available to achieve higher test coverage.

For more information, refer to “Creating Reset Delay Tests” on page 344.


Delay Scan Chain Overview


When generating a delay scan chain test (create_scanchain_delay_tests), Encounter
Test generates a total of two scan patterns, which consist of the following structure:
■ Scan_Load - Load the scan chain bits with repeating 1000111 patterns. This tests the
slow to rise and slow to fall transition faults in the scan chain.
■ Stim_PI - Stay in scan state
■ Delay shift #1
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
......

■ Delay shift #8
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input.
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 1000111 pattern from
scan input.
❑ Pulse - Pulse the scan clocks
❑ Measure_PO - Measure values on scan outputs
■ Scan_Unload - Observe all scan bits
■ Scan_Load - Load the scan chain bits with repeating 110 patterns. This tests the clock
slow-to-turn-off faults.
■ Stim_PI - Stay in scan state
■ Static Shift
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern.
■ Delay Shift #1
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
❑ Pulse - Pulse scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
■ .....
■ Delay Shift #5
❑ Pulse - Pulse scan clocks
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input.
❑ Pulse - Pulse scan clocks
❑ Measure_PO - Measure values on scan outputs
❑ Stim_PI - Load the next value on scan inputs to continue 110 pattern from scan
input
■ Static Pulse
❑ Pulse - Pulse scan clocks
❑ Scan_Unload - Observe all scan bits

In summary:
■ The delay scan chain test is always in the scan state.
■ All clocks are configured at scan speeds.
❑ Clocks are not at system speed as scan paths are typically not timed to system
speeds.
❑ Timed scan chain tests are timed to the slowest flop to flop pair for all scan clocks.
■ Additional clocking and sequences are added while testing for compression, MISRs, or
masking.
■ The order of the sequence can change slightly based on custom-scan protocols and custom-
scan preconditioning.

The following figure shows a delay scan chain test diagram:

Figure 4-3 Dynamic Test Waveform

Performing Build Delay Model (Read SDF)


This command reads a Standard Delay File (SDF) and creates an Encounter Test delay
model that represents the timings. For more information about the data retrieved and used
from the SDF, refer to:
■ “Timing Concepts” on page 85
■ “Delay Path Calculation” on page 93
■ “Specifying Wire Delays” on page 97

To perform Build Delay Model using the graphical interface, refer to “Build Delay Model” or
“Read SDF” in the Graphical User Interface Reference.

To perform Build Delay Model using command lines, refer to “build_delaymodel” or “read_sdf”
in the Command Line Reference.

The syntax for the read_sdf command is given below:


read_sdf workdir=<directory> testmode=<modename> delaymodel=<name>
sdfpath=<directory> sdfname=<sdf name> clockconstraintsfile=<file name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for timed test
■ delaymodel= Internal Encounter Test name of the delay model. Multiple names can
exist for a delay model at the same time.
■ sdfpath= directory in which sdf file exists
■ sdfname= name of sdf file
■ clockconstraints = the file that includes clocking constraints. The clock domains
that are outside the domain specified in the file are not used in the delay test generation.
This ensures that the command only generates messages that are relevant to the clock
domains being tested.
Alternatively, you can specify either of the following keywords to limit the clock domains
that will be checked while building delay model:
❑ testsequence - The release/capture clocks are extracted from the specified
sequences and used to determine the relevant clock domains
❑ dynseqfilter - the type of sequences that the dynamic test generation is allowed
to generate. The default value is any for an LSSD testmode, which means that the
command only checks intra-domain paths in that case.
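For example, a minimal read_sdf invocation might look as follows (all names are placeholders):

read_sdf workdir=/designs/chipA testmode=FULLSCAN delaymodel=dm1 sdfpath=/designs/chipA/sdf sdfname=chipA.sdf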

Prerequisite Tasks

Complete the following tasks before executing Build Delay Model:


1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
When importing a design model with the intention of running timed delay tests, ensure
that your levels of hierarchy are correct such that the highest level of hierarchy with the
CELL or BOOK attribute is the layer at which you have delay information.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User Guide
for additional information.
The test mode is required to establish the locations of clock pins and can also be used
to establish a design state through tied values and lineholds. The delay model data is not
tied to a particular test mode and may be used with any test mode. Maintain awareness
that any checks performed when building the delay model will all be interpreted through
the selected test mode. Therefore, if a given test mode has certain paths disabled, the
delay model build procedure will not check for delays along these paths, nor will it include
any delay information along these paths in the delay model. This could result in missing
delay information in another test mode where these paths are enabled.
If using multiple test modes for a design, the recommended practice is to build a delay
model for each test mode unless the test modes are very similar.
3. Create an SDF file.
Use the tool of choice such as ETS (Encounter Timing System) to create an SDF file for
the expected conditions (voltage levels, temperature) to run at the tester.
Note: When reading an SDF, Encounter Test expects to map its delay information to the
cell or block level of hierarchy within the Encounter Test design model. Refer to “Delay
Timing Concepts” on page 124 for more information.

Optional Input Files


■ TDMcellDelayTemplateDatabase file
This is not a required input and is generated as required. Refer to “Customized Delay
Checking” on page 149 for details.
This binary file contains a list of timing arcs that are expected to be present for various
kinds of cells. The information is used for checking the completeness of the SDF. It can
be customized or automatically generated. The recommended timing arcs are derived
from the circuit model within Encounter Test. The application checks the topological
paths that can exist through each cell (to determine IOPATHs and INTERCONNECTs
required) and the pins that feed flops and latches (to determine where SETUP and HOLD
checks are needed).
If the delay information in this file does not match the information in the SDF, Encounter
Test generates the warning message TDM-041.
Note: Sometimes Encounter Test asks for over-specifications, such as setup checks on
reset lines (where you will never want to create a delay test) and some functionally
disabled paths (test-only paths). You can ignore the missing delay information warnings
as long as you do not create delay tests in these areas.
It is recommended that you provide the complete delay information because missing
delays (which were recommended during read_sdf) will result in the timing engine not
considering these paths during timings. Any missing piece of information tells the timing
engine that anything downstream from the point of missing information will not affect the
timing or has no timing requirements of its own. Therefore, for example, if delay
information is missing for a real path, which is a multi-cycle path, the timing engine will
be unable to realize that it is a multi-cycle path, and will expect to measure it successfully
at speed (resulting in a miscompare).

Output

This task generates a list of cells with missing or unnecessary delays. The task also creates
an Encounter Test delay model in the tbdata directory.

Debugging Common Messages

Messages with IDs TDM-013, TDM-014, TDM-041, TDM-051, and TDM-055 represent delay
model errors and need to be resolved; failure to resolve them may cause incomplete timings.

The following table lists some common delay model error messages and the corresponding
debugging technique:

Table 4-1 Common Delay Model Error Messages

Error Message Description


TDM-205 This message reports the number of timing templates created
using the read_sdf command. The number reported by this
message should match the number of different library cells used
in the design.
Refer to TDM-205 in the Message Reference for more
information.

TDM-041 A recommended delay was not specified.
0ns delay paths may not be entered within an SDF. This
message is generated if a delay between a driver and receiver
is not in the SDF by default. The option read_sdf
interconnectdelay=0 allows Encounter Test to assume that
an unspecified path has 0 time delay.
The message explicitly states the driver and receiver instance
names.
You may also get this message if your clock path has
reconvergent fanout. There are designs where multiple parallel
clock drivers are wired together and then drive the FFs.
Refer to TDM-041 in the Message Reference for more
information.
TDM-012 Unable to create a delay between the pins.
This is between two ports on the same cell instance. Probable
causes for this error can be that one port is tied to a logic value
or the instance referenced in the SDF is not the highest level
cell in the hierarchy.
Refer to TDM-012 in the Message Reference for more
information.
TDM-055 The specified SDF delays to a layer that is not a technology cell.
Block cell name: <viewName>
Instance name: <instName>
SDF line number: <sdfLine>
This indicates missing cell definitions in your netlist. Ensure that
all cells in your techlib are defined.
Refer to TDM-055 in the Message Reference for more
information.

Encounter Test supports the ability to customize the SDF compatibility checks. Refer to
“Customized Delay Checking” on page 149 for information on creating
TDMcellDelayTemplateDatabase files.


Timing Concepts
The positioning of the events within the dynamic pattern (timed section) ensures that the
arrival times of each signal at a latch or PO occur at the proper time. This requires that the
delays of the product’s cells and interconnects be known accurately. The information is
imported into Encounter Test using the Standard Delay File (SDF) and stored in Encounter
Test's delay model. The delay information is generated and exported to the SDF by a timing
tool, such as IBM’s Einstimer or the Encounter Timing System (ETS). The method in which
the timing tool is run (delay mode, voltage, and temperature parameters) has a great impact
on the effectiveness of the timed tests at the tester. The range of parameters specified to the
timing tool should accurately reflect the range of conditions to be applied at the tester. SDF
information is required for the At-Speed and Faster Than At-speed True-Time methodologies.
Automatic True-Time Test can be run without SDF timings, but will benefit from SDF timing
information.

Build Delay Model creates a delay model for use in generating timing for transition tests, path
delay tests, and for use in delay test simulation.

For delay testing, the SDF should contain information about all delays in the design, cell
delays, and interconnect delays (see Figure 4-4).


Figure 4-4 SDF Delays that Impact Optimized Testing and the Effect in the Resulting
Patterns

Note that when Encounter Test reads an SDF, it expects to find delay information at the highest level
of hierarchy that is a "cell" or "block" in the Encounter Test design model. This is the only valid
layer of hierarchy for delay descriptions other than the very top level of the design (for delays
to and from PIs and POs). Delays that are described at any other layer of hierarchy are not
successfully imported into Encounter Test.

Use the Encounter Test GUI to check the hierarchical layer of a given block. Select
Information Window option Loc: Hierarchical Level and mouse over a block to query it.
Refer to the “Information Window Display Options” in the Graphical User Interface
Reference for details.

If the levels of hierarchy in your Encounter Test design do not correlate to the SDF, force a
block to be at a given level of hierarchy by adding the TYPE attribute to the block. For example,
to force a given module to be a "CELL" in Verilog, code the following:
module myBlock ( in1, in2, in3, out1 ); //! TYPE="CELL"


Refer to the Modeling User Guide for additional information on adding attributes to the
model.

The SDF can also contain best, nominal, and worst case timings or can be produced for
certain process conditions that will be used in manufacturing, such as temperature and
voltage.

Build Delay Model reads the SDF and creates a delay model for Encounter Test.

Encounter Test supports SDF version 2.1, however the following 2.1 specification is not
supported:
■ INSTANCE statements with wildcards

The following features of SDF 2.1 are tolerated but not used in Encounter Test:
■ PATHPULSE and GLOBALPATHPULSE
■ PATHCONSTRAINT, SUM, DIFF, COND, SKEWCONSTRAINT
■ CORRELATION
■ SKEW

IEEE 1497 SDF Standard Support


Encounter Test Build Delay Model supports the following IEEE 1497 constructs:

COND The COND construct allows any path delay to be made
conditional, that is, the delay value applies only when the
specified set of values is true. This allows for the state-
dependency of path delays where the path appears more
than once in the timing model with conditions to identify the
applicable design state. Only the defined design values from
the test mode or linehold files are honored during evaluation
of conditional statements.
CONDELSE The CONDELSE statement provides the default conditional
delay to be used if none of the conditions evaluate to a
logical True.

RETAIN In an IOPATH, the RETAIN keyword specifies the time that
an output port shall retain its previous logic value after a
change at a related input port. The system reads
RETAIN delays and replaces the best case delay of the
corresponding IOPATH delay with the best case numbers of
the RETAIN triplet.
REMOVAL The REMOVAL statement behavior is identical to the SDF 2.1
HOLD statement and is treated as an alias of the HOLD
statement. Delays are merged if a HOLD and REMOVAL exists
between the same two pins.
RECREM The RECREM statement behavior is identical to the SDF 2.1
SETUPHOLD statement and is treated as an alias of the
SETUPHOLD statement. Delays are merged if a SETUPHOLD
and RECREM exists between the same two pins.
BIDIRECTSKEW The BIDIRECTSKEW construct specifies limit values for
bidirectional signal skew timing checks. A signal skew limit is
the maximum allowable delay between two signals, which if
exceeded causes devices to behave unreliably.
SCOND and CCOND An alternative syntax can be used for SETUPHOLD and
RECREM timing checks. This associates the conditions with
the stamp and check events in the timing analysis tool. A
stamp event defines the beginning of a measured interval,
and a check defines the end of a measured interval.
Separate conditions can be supplied for the stamp and check
events using the SCOND and CCOND keywords. SCOND or
CCOND or both SCOND and CCOND take precedence over
COND. These keywords are read in for the SETUPHOLD and
RECREM checks and the conditions will be evaluated in the
same way as they are evaluated for the COND statement.
Example:
(SETUPHOLD d clk (5) (7) (CCOND enb))
If enb is evaluated to a logical zero, then both the delays will
be dropped.
The construct is only honored when specifying
conditionals=yes for build_delaymodel.
PATHPULSE and PATHPULSEPERCENT
These keywords specify the pulse propagation limits (the
r-limit and the e-limit). Encounter Test does not use
these parameters; the SDF is parsed, but the delays are
ignored.

e-limit and r-limit within IOPATH statements
PATHPULSE numbers are sometimes specified with an
IOPATH delay. In this case, Encounter Test parses the
statement and ignores the e-limits and the r-limits.
Example:
(IOPATH a y ((1:1:2) (2) (3)) ((2:3:4) (4) (5)) )
Both the r-limit and e-limit values are ignored in this
case. The delay is read as:
(IOPATH a y (1:1:2) (2:3:4) )

Note: Some of the preceding definitions are taken from the content of the document P1497
DRAFT Standard for Standard Delay Format (SDF) for the Electronic Design
Process.

The following timing specifications are supported:


■ DELAY
■ TIMINGCHECK

Ignored SDF Statements and Timing Specifications

The following constructs are not tolerated and not used by Encounter Test:
❑ PERIODCONSTRAINT
❑ PATHCONSTRAINT
❑ DIFF
❑ SUM
❑ EXCEPTION
❑ NAME
❑ ARRIVAL
❑ DEPARTURE
❑ WAVEFORM
❑ SLACK
❑ SKEWCONSTRAINT

The following timing specifications are tolerated but not used by Encounter Test:

■ TIMINGENV
■ LABEL

Annotating SDF Information

SDF information is typically created using a static timing analysis tool. You can annotate this
information to the Encounter Test database and automatically determine the locations of
multi-cycle paths, setup violations, and hold violations.
Note: SDF annotation does not automatically determine the locations of false paths,
therefore, it is recommended to use an SDC file with SDF information.

The annotated SDF information is useful for:


■ Running at-speed patterns
■ Running faster-than-at-speed
■ Creating interdomain test patterns. SDF information ensures that paths between clock
domains that cause setup or hold violations are correctly masked and will not cause
miscompares.

The SDF information is mapped to the Encounter Test database by locating the correct cells
in the design and attaching the information there. The hierarchy definitions in the database
must match the hierarchy assumed in the SDF information for the annotation to be successful.

Tip
It is important to have complete and correctly mapped data, therefore, investigate all
the messages reported during this process rigorously. Any incomplete or incorrect
information can result in patterns that will fail good devices on the tester.

Use the following parameters to read and use SDF delay data. The following parameter points
to the directory where the SDF file resides:
SDFPATH=sdf_directory_name

The following parameter points to the name of the file containing the SDF delay data,
propagation delays, and timing check information for the design:
SDFNAME=sdf_file_name

Delay Model Timing Data


The following sections describe the types of timing data that are created in the delay model.


I/O Path and Device Delays

These delays model the connection from the input pins to output pins of a cell. The I/O path
delays model a delay for a path or paths from a specific input pin to output pin. The I/O path
delay can include the explicit transitions that occur at the beginning and end of the paths.

The device delays are more general than I/O paths. They state that any of the paths from any
of the cell's inputs to any of the cell's outputs or a specific output pin are covered by this delay
value. The device delays contain no notion of the phase of the transitions either at the input
or the output.

Wire Delays

These delays describe the wire connections between different entities (hierBlocks) on a
single product or a hierarchy of products (cell, chip, and card). There are three types of wire
delays: interconnect, net, and port. Interconnect delay runs from any source pin on a net to
any sink pin on the same net, at the same or different levels of the package hierarchy. Net and
port delays run only between pins in the same net at a single level of the package hierarchy.
The exact meaning of a net delay depends upon how it is specified. If a source pin is
specified, the delay is from this pin to any sink pin on the net (at this package level). If the net
name is specified, the delay is for any source pin to any sink pin on the net (again, at the same
package level). A port delay is used from any source to a single sink pin at the same package
level. Refer to “Specifying Wire Delays” on page 97 for more information.

Timing Check Delays

These delays describe a required minimum time between two events on one or two input pins
of a given cell (often relating to a memory element). Maintaining the minimum time between
the two events ensures that the cell(s) will perform correctly. In the functional mode there may
be timing check delays between input pins of non-memory elements but they are ignored by
the test applications except when checking the Minimum Pulse Width or Skew Constraints.

Minimum Path Pulse Width

The minimum size pulse (or glitch) that will allow the memory element inside a given cell to
correctly operate.

Period and Width

These are features of a clock's edges when they arrive at a memory element. The width is
the minimum size and polarity the pulse must be for the memory element to properly clock in
data. The period is the minimum time required between a given edge of a clock pulse (leading
to leading or trailing to trailing).

Figure 4-5 Period and Width for Clocking Data into a Memory Element


Setup, Hold, and Recovery Time

The setup and hold times describe the minimum duration of time that must be maintained
between two signals that are arriving at different inputs of a cell. One pin must be a clock (C0),
and the second is usually the data pin (D0), but may be a clock (C1). The setup time is the
duration of time that the data (or C1 clock) must be stable before the specified clock edge
arrives at the clock (C0) pin.

The hold time is the amount of time the data pin must be stable after the clock edge arrives
at the clock (C0) pin.

Recovery time is treated the same as setup time by Encounter Test.

Figure 4-6 Setup Time



Figure 4-7 Hold Time


No Change

The SDF uses this to define a window surrounding a clock pulse during which some given pin
must be stable.

Notes:
1. To perform Build Delay Model using the graphical interface, refer to “Build Delay Model”
in the Graphical User Interface Reference.
2. To perform Build Delay Model using command lines, refer to “build_delaymodel” in the
Command Line Reference.

Also refer to “Performing Build Delay Model (Read SDF)” on page 80.

Delay Path Calculation


The following terms and definitions will aid in the understanding of path delay calculation:
■ early mode - The least amount of time required to evaluate logic. It is a user-specified
combination of the best, nominal, and worst case delays found in the SDF.
■ late mode - The most amount of time required to evaluate logic. It is a user-specified
combination of the best, nominal, and worst case delays found in the SDF.
■ combination - Also called a linear combination; three values or weights are used to skew
the importance of the best, nominal, and worst case delay numbers from the SDF. For
example, earlymode=(a,b,c) is used as follows:

earlymode delay = ((a x best case) + (b x nominal) + (c x worst))

This allows targeting the early mode or late mode arrival time to a particular portion of
the process curve. For example, (0,1,0) uses the nominal values, while (0,.5,.5) uses
the average of the nominal and worst delays; a worked example follows this list.
■ Arrival Time - The sum of the delays from a source to sink along the fastest or slowest
path.
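As a worked illustration, assume a path segment whose SDF triplet gives best, nominal, and
worst delays of 1.0, 1.2, and 1.5 ns (values invented for this example). With earlymode=(0,1,0)
and latemode=(0,.5,.5), the timer would use:

earlymode delay = (0 x 1.0) + (1 x 1.2) + (0 x 1.5) = 1.2 ns
latemode delay = (0 x 1.0) + (.5 x 1.2) + (.5 x 1.5) = 1.35 ns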

The following is a brief description of the timing algorithm used in Encounter Test. It describes
a clock-to-clock sequence, but the same basic idea is used with PI-to-clock and clock-to-PO
sequences.
1. A test sequence consisting of launch and capture events is presented to the timer.
2. The pulse width requirements are calculated. Pulse timing checks are present at the cell
level. These requirements are backed up to the primary inputs to account for any pulse
width shrinkage which may have occurred.
3. The arrival times for all the latch data and clock inputs are calculated for the best, early,
late and worst cases. The user has control of early and late. Best and worst are the very
fastest and slowest (respectively) and are used only in special cases that are not
discussed here.
The arrival times take into account tie values, lineholds on primary inputs and in certain
cases, lineholds on latches and the values on test function primary inputs. The values
are used to disable paths that are not to be timed.
4. Linear dependencies (Setup, Minimum Pulse, and Hold time tests) are calculated. See
Figure 4-8.
The times are the distance in time between transitions (for example, from the leading edge of
the release clock to the trailing edge of the capture clock). Also, since the launch clock launches
transitions that will be captured at many latches, there are many dependencies. Only the
largest one that describes the relationship between edges is kept.
When running with maxpathlength or a clockconstraints file (constrained
timings), the dependencies that do not meet the timing constraints are dropped and the
involved capture latches are ignored. When running the Automatic True-Time flow, the
maximum path length is automatically determined for each clocking sequence. In the At-
Speed and Faster Than At-Speed flows, a clock constraints file is user-specified.
In Figure 4-8, the sink “P3” would be a candidate to be ignored to test paths that are
smaller than 2 Logics. The figure also shows setup and hold tests that traverse logical
paths between FFs, however these are not the only types of relationships; they also
occur between clocks that feed the same cell or at clock gates. For example, a clock-to-
clock relationship is used in a sequence with a scan clock release and a system capture
clock capture. There is a relationship from the trailing edge of the release clock to the
leading edge of the capture clock.
5. The set of linear dependencies are optimized into the timings (timeplates). In this step,
the setup, hold and pulse dependencies are combined into one set of timings to be
applied at the primary inputs. Though the maximum path lengths that are greater than the
specified maxpathlength have been discarded, the combined set of edges still may be
longer than the maxpathlength. For example, if the following dependencies are used
in the figure, the total delay will be greater than the largest of the parts:
Setup time = 4 ns
Hold time = 2 ns
Pulse width = 2.5 ns
The total delay is 7 ns, not 4 ns. The reason is that hold time + (2 x pulse width) = 2 + (2 x 2.5) = 7 ns.


Figure 4-8 Scenario for Calculating Path Delay


Specifying Wire Delays


Encounter Test allows you to specify delays across wires which pass directly through a cell
to generate wire path delay data.

The following terms and corresponding definitions are associated with wire delays:

Wire Cell A cell containing a wire that leads directly from an input to an
output with no intervening primitives.
Partial Delay A delay passing through the wire of a wire cell that could be
incorporated into a larger delay passing through the same wire.
Parent Delay Any interconnect delay leading from and going to a primitive
block. Some parent delays can be broken down into two or
more partial delays.
Partial Parent A delay that exhibits properties of a partial delay and a parent
delay.

Figure 4-9 displays examples of the preceding terms.


Figure 4-9 Wire Delay Scenarios

Note that PORT and NET delays do not include the length of wire within the cell; they only
include the interconnecting wires between the cells.

Delays through wire cells can either be specified as INTERCONNECT delays which span
several cells, or each segment can be specified in IOPATH and INTERCONNECT delays. If
delays are specified by parts, then all of the partial delays that comprise the parent delay must
be specified or the delay information will be incomplete.

For example, in Figure 4-9, the delays can be specified as one INTERCONNECT from A to F,
or they can be specified as INTERCONNECT delays from A to B and E to F, as well as an
IOPATH delay from B to E.

Performing Read SDC


The read_sdc command reads and verifies design constraints from a design constraints file
(SDC). For more information about the data retrieved and used from the SDC, refer to “Design
Constraints File” on page 124.


Note: Any existing SDC in the current testmode from a previous run should be removed
before reading the current SDC. Refer to “remove_sdc” in the Command Line Reference
for more information on this.

While reading and parsing an SDC file, Encounter Test creates design constraints. For more
information on Encounter Test handling and processing of these constraints, refer to
“Dynamic Constraints” on page 150.

To perform Read SDC using the graphical interface, refer to “Read SDC” in the Graphical
User Interface Reference.

To perform Read SDC using command line, refer to “read_sdc” in the Command Line
Reference.

The syntax for the read_sdc command is given below:


read_sdc workdir=<directory> testmode=<modename> sdcpath=<directory> sdc=<file name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ sdcpath= directory where the sdc file exists
■ sdc= name of the sdc file
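For example (all names are placeholders):

read_sdc workdir=/designs/chipA testmode=FULLSCAN sdcpath=/designs/chipA/sdc sdc=chipA.sdc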

Prerequisite Tasks

Complete the following tasks before executing Read SDC:


1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
Note: If SDC is required on multiple test modes for a design, you should read an SDC
into each testmode.
2. Create an SDC file
Note: Use any tool such as ETS (Encounter Timing System) or Einstimer to create an
SDC file for the expected conditions (voltage levels, temperature) to run at the tester.


Output

Encounter Test stores the constraints internal to the testmode. When running ATPG, add
usesdc=yes at the command line to use the SDC.

Common Debug Using SDC

Refer to “Constraint Checking” on page 151 for information on the test generator and
simulators handle design constraints.

Design Constraints File

The design constraints file supplements or replaces the SDF for delay tests that consider
small delay defects, particularly for incorporating faster technologies with specialized timing
algorithms such as on-product clocks for at-speed testing. The functional characteristics
associated with these technologies, also known as designer’s intent, are recorded in a
design constraints file. True-Time Test accepts a Synopsys® Design Constraints (SDC) file
that configures clock and delay constraint parameters.

Table 4-2 lists the SDC statements accepted by Encounter Test.

Table 4-2 SDC Statements and their Functions

SDC Statement Function


set_case_analysis Identifies points in the design to hold at a
known state. The values may be specified
for any hierarchical pin. Refer to
“Set_Case_Analysis” on page 101.
set_disable_timing Removes timing arcs and indicates the
logic can be don’t cared, that is, set to X
for ATPG and simulation. Refer to
“Set_Disable” on page 102.

set_false_path Identifies paths that are expected to be
inactive in the functional mode; checking
is not required. There are two types of
false paths:
■ Paths through which transitions
(including glitches) cannot propagate
■ Paths for which tests are not to be
created. A common use is to remove
inter-domain paths.
Refer to “Set_False_Path” on page 102.
set_multicycle_path Identifies logic that exceeds one cycle to
complete. Refer to “Set_Multicycle_Path”
on page 103.
No statement Describes logical constraints that are
expected in the functional process. Refer
to “Boolean Constraints” on page 104.
Note: Configured in the linehold file.

Design Constraint Syntax

The following syntax is accepted in the input design constraints file:

Set_Case_Analysis

The syntax is set_case_analysis value port_list. Figure 4-10 on page 102
depicts a configured example.


Figure 4-10 set_case_analysis Example

Set_Disable

The syntax is set_disable_timing port_list. Figure 4-11 depicts a configured
example.

Figure 4-11 set_disable Example

Set_False_Path

The syntax is set_false_path -from pin_A -to pin_B. Figure 4-12 on page 103
depicts a configured example.


Figure 4-12 set_false_path Example

Refer to “False and Multicycle Path Grouping” on page 104 for additional information.
Figure 4-12 on page 103 depicts an example.

Set_Multicycle_Path

The syntax is set_multicycle_path -from path. Figure 4-13 depicts a configured
example of a normal path and a multicycle path.

Figure 4-13 set_multicycle_path Example

Refer to “False and Multicycle Path Grouping” on page 104 for additional information.


False and Multicycle Path Grouping

False path and multicycle path statements are not required to correspond to individual
combinational paths; a single false path or multicycle path statement may represent many
paths. Specify multiple locations at the -from, -to, and -through points. The paths can
traverse as desired as long as the path goes through at least one point from the -from,
-to, and -through point list.

Specify either the -from or -to point on a clock path. The paths start or end at the flip-flop/
latches downstream of the clock pin. Figure 4-14 depicts a configured example where the
-from or -to are clock pins. The dashes identify the falsepaths.

Figure 4-14 false and multicycle Path Grouping Example

Boolean Constraints

The Boolean constraints are equiv, onehot, onecold, zeroonehot, and zeroonecold.
The following is the syntax to describe Boolean constraints in either the design constraints file
or linehold file(s); illustrative examples follow the parameter descriptions:

[Boolean] constrainttype [(] entitylist [)] [-action] [-scope];

constraintype is one of the following:


equiv - instructs that all nodes must be the same value
onehot - instructs that only one entity may be a logic 1; all others must be logic 0
zeroonehot - instructs that all entities are logic 0 or exactly one entity is a logic 1
onecold - instructs that only one entity may be a logic 0; all others must be logic 1
zeroonecold - instructs that all entities are logic 1 or exactly one entity is a logic 0

entitylist is the list of locations specified as follows:


A comma-delimited list of points in the netlist. The value affects the source of the net that
feeds the pin or that is fed by the pin as seen in the following syntax:
[polarity] entity, [polarity] entity,...
Each entity is one of the following:
❍ PIN, the hierPin name
❍ NET, the hierNet_name
❍ BLOCK, the usage block name
❍ PPI, the pseudo-primary input name
The polarity is one of the following:
❑ Not specified instructs that it is positive logic
❑ ^ instructs to use the opposite (negative) value. For example,
a,^b; enforces that a and b are always opposite each other.

-action is one of the following:


-ignore, the default, instructs to ignore all the scan bits fed by the last propagation
entity in the propagation list.
-remove instructs to discard the sequence if a constraint is violated.

-scope is one of the following:


-dynamic, the default, instructs to only check or protect the dynamic pattern
-all instructs to check and protect across the entire test.
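The following lines illustrate this syntax; the signal names are hypothetical and the option
choices are examples only:

onehot clk_sel0, clk_sel1, clk_sel2 -remove -all;
equiv enable_a, ^enable_b;

The first line discards any sequence in which the three clock-select signals are not one-hot
(exactly one at logic 1), checked across the entire test. The second relies on the default
-ignore and -dynamic behavior and requires enable_a and enable_b to hold opposite values
during the dynamic pattern.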

Design Constraints File Examples

The following is the sample content of an input SDC file:

create_clock -period 100 CLK


create_clock -period 100 -name "CLK2" CLK2MUX/Z
set_false_path -from XPNDAO167P0/D1 -to XPNDAO138P0/Z
# next is an example of a grouped false path
set_false_path -from CLK -to CLK2
set_false_path -fall -from CLK2 -to CLK
set_false_path -from CLK2 -through XPNDAO167P0/D1 -to XPNDAO138P0/Z
set_case_analysis 1 CLK2MUX/D1;
set_case_analysis 0 CLK2MUX/D0;
set_disable_timing XPNDXORS660/SD
set_multicycle_path 1 -from XPNDAO194P1_C/A \
-to XPNDAO119P0/D1 \
-through XPNDAO187P1/Z BOX841/Z

The following is the sample content of the output constraints file:


# ####################################################################
# Created by RTL Compiler (RC) v05.10-p001_1 on Fri Jul 08 06:23:10 PM EDT 2005
# ####################################################################
# current_design PIXAVE4
case CLK2MUX.D0 = 0;
case CLK2MUX.D1 = 1;
# please include the create_clock statements as comments.
#create_clock -name "CLK" -add -period 100.0 -waveform {0.0 50.0} [get_ports CLK]
#create_clock -name "CLK2" -add -period 100.0 -waveform {0.0 50.0} [get_pins CLK2MUX/Z]
false -from XPNDAO167P0.D1 -to XPNDAO138P0.Z ;
false -from CLK -to CLK2MUX.Z ;
false -fall -from CLK2MUX.Z -to CLK ;
false -from CLK2MUX.Z -through XPNDAO167P0.D1 -to XPNDAO138P0.Z ;
multicycle setup -from XPNDAO194P1_C.A -through XPNDAO187P1.Z BOX841.Z
-to XPNDAO119P0.D1 ;
dontcare XPNDXORS660/SD;

The following is a basic multicycle path example:


set_multicycle_path -to c1/b7/D 2;# Don’t allow transitions to D pin of c1.b7
set_multicycle_path -hold 1 -to c1/b7/D ; # There is a hold violation on flop c1.b7

Using the SDC in True-Time Test

An overview of implementing an SDC constraints file in the True-time flow is depicted in


Figure 4-15 on page 107.


Figure 4-15 Use Model with SDC Constraints File

The preceding flow occurs after a model and test mode are created and prior to test
generation.

The following is a typical sequence of events for incorporating use of the SDC:
1. Develop/create an SDC that describes the design’s intended function.
2. Use the SDC, lineholds, and test mode to customize True-Time Test generation to
produce synthesis and timing analysis results.

Important
Ensure the generate_clocks statement occurs first in the SDC file or supporting
TCL scripts.

A currently used constraints file may be changed using either of the following methods:

■ Rebuilding a test mode


■ Add or remove using Read SDC or Remove SDC
Note: The preceding techniques must be performed prior to test generation.

Use Read SDC to read and verify the constraints in a design constraints file and a linehold
file. Read SDC incrementally updates existing constraints based on the content of the input
constraints/linehold file(s). The verified constraints are stored in an output constraints file for
subsequent use by test generation. The output files are in the following forms:
■ constraints.testmode
■ constraints.testmode.experiment to allow simultaneously running multiple
experiments with different SDC files

If multiple test modes are specified, the SDC is verified using the first specified test mode.
The results are applied to all specified test modes. Refer to the following to perform Read
SDC and Remove SDC:
■ “Read SDC” in the Graphical User Interface Reference
■ “read_sdc” in the Command Line Reference
■ “Remove SDC” in the Graphical User Interface Reference
■ “remove_sdc” in the Command Line Reference
Note: An RTL Compiler license is required to run Read SDC.

Performing Remove SDC


The remove_sdc command removes the stored SDC data for a testmode or an experiment.

To perform Remove SDC using the graphical interface, refer to “Remove SDC” in the
Graphical User Interface Reference

To perform Remove SDC using command line, refer to “remove_sdc” in the Command Line
Reference.

The syntax for the remove_sdc command is given below:


remove_sdc workdir=<directory> testmode=<modename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode from which to remove SDC


Note: The most commonly used keyword for the remove_sdc command is experiment.
Refer to “remove_sdc” in the Command Line Reference for more information on this
keyword.

An Overview of Prepare Timed Sequences


Prepare Timed Sequences determines a set of optimal parameters to run test generation and
fault simulation for timed tests. These parameters include the following:
■ The best clock sequences
■ The frequency per sequence
■ Maximum path length for each selected sequence
Note: You do not need to perform Prepare Timed Sequences for static ATPG.

The clock sequences are combined into a multi domain sequence that simultaneously tests
the combined sequences. Refer to Table 4-3 on page 112 for more information.

The best sequences are determined by generating test patterns for a statistical sample of
randomly selected dynamic faults. Each unique test pattern is evaluated to ascertain the
number of times the test generator used it to test a fault. The set of patterns used most often
are considered the best sequences. Typically, the top four or five patterns will test 80 percent
of the chip. Specify the maxsequences option to use more sequences, if required.

The prepared sequences are represented by sequences named DelayTestClockn in the
sequence definition. The following is an example of a sequence summary:
Best 5 Sequence Pattern Events Summary
DelayTestClockSeq1 292 out of 816 faults
Static setup patterns:
Stim_Latch
Pulse_Clock -EC TheSysClock
Dynamic pattern events:
Pulse_Clock -ES TheReleaseClock
Pulse_Clock -EC TheCaptureClock
Static measure event:
Measure_Latch

DelayTestClockSeq2 58 out of 816 faults
Static setup patterns:
Stim_Latch_Plus_Random
Stim_PI_Plus_Random
Dynamic pattern events:
Pulse_Clock -EC SystemClock
Stim_PI
Pulse_Clock -EC SystemClock
Static measure event:

Measure_Latch
...

DelayTestClockSeq5 18 out of 816 faults
Static setup patterns:
Stim_Latch
Stim_PI
Dynamic pattern events:
Pulse_Clock -ES TestClock1
Pulse_Clock -ES TestClock2
Static measure event:
Measure_Latch
Measure_PO

The maximum path length is determined by generating a distribution curve of the longest data
path delays between a random set of release and capture scan chain latch pairs. The area
under the curve is accumulated from the shortest to the longest and the maximum path length
is the first valley past the cutoff percentage. This method limits the timings to ignore any
outlying paths that over inflate the test timings. Additional options are available to control this
by providing the cutoff percentage for the curve and a maximum path length to override the
calculation.

The distribution summary is printed into the log, an example is given below:
Random Path Length Distribution
Percentage 80 90 95 97 98 99 100
MaxPathLength 5300 6250 6500 7250 7500 7500 12350

Path length after area cutoff of 98 percent = 7250 pico-seconds
plus two times accuracy of 250 ps = 7750 pico-seconds
User provided maximum path length = 7000 pico-seconds
Maximum Path Length to be used for timings = 7750 pico-seconds

Dynamic constraints are automatically determined by generating timings for the set of best
sequences. This occurs after the maximum path length has either been provided or
calculated. The events for each sequence are applied and the calculated delays are
evaluated for violations of the maximum allowed distance. For every detected violation, the
sources are traced and a dynamic constraint is generated to specify to the test generator to
disallow these sources to switch during the dynamic portion of the test. If no transitions occur
in these paths, the cause of the timing violation is removed, thus resulting in faster timings. In
addition to the violations, a set of ignored measure latches is generated for analysis. When
the number of ignored measures exceeds 10 percent of all of the measures for the chip, the
maximum path length for the sequence is increased to allow for better test coverage. A
summary of this activity is printed in the output log, as shown in the following example.

Best 1 Sequence Timing Violations Summary

Test Sequence       #Violations  #Ignore Measures  Final Max Pulse Width  Final Max Path Length
DelayTestClockSeq1  2197         5490              DEFAULT               (ppi_rts->ppi_rts 2000ps )

Performing Prepare Timed Sequences


This task determines a set of optimal parameters to run test generation and fault simulation
for timed tests.

These parameters include the following:


■ Best clock sequences
■ Frequency per sequence
■ Maximum path length for each selected sequence

Encounter Test might create design constraints while running prepare_timed_sequences with
a delay model. Refer to “Dynamic Constraints” on page 150 for more information on how
Encounter Test handles and processes these constraints.

To perform pre-analysis using command line, refer to “prepare_timed_sequences” in the


Command Line Reference.

To perform pre-analysis using the graphical user interface, refer to “Prepare Timed
Sequences” in the Graphical User Interface Reference.

The syntax for the prepare_timed_sequences command is given below:


prepare_timed_sequences workdir=<directory> testmode=<modename>
delaymodel=<delaymodel name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ delaymodel=<name> - Name of the delay model for the timed ATPG tests.
Note: The commonly used keywords for the prepare_timed_sequences command are:
■ clockconstraints =<file name> - List of clocks and frequencies to perform
ATPG. For more information, refer to “Clock Constraints File” on page 125.
■ dynseqfilter=<value> - Type of clocking, for example, launch off capture with
same clocking or launch off shift. For more information on sequence types, refer to
“Delay Test Clock Sequences” on page 146.


■ maxbestsequences=<integer 1 to 99> - The default is to allow 99 different
clocking sequences to be generated, if allowed in the design.
■ addseqs=syscapture|domain|domainorder - Ensure that certain clocking
sequences will be included in the generated sequences used for ATPG.
■ earlymode/latemode - The ability to customize delay timings. The default is
0.0,1.0,0.0 for each option. For more information, refer to “Process Variation” on
page 129.

Refer to “prepare_timed_sequences” in the Command Line Reference for more
information on these keywords.
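
For example, a representative timed run that combines the keywords described above might be
invoked as follows; the working directory, delay model, and clock constraints file names are
hypothetical:

prepare_timed_sequences workdir=/designs/chipA/testwork testmode=FULLSCAN \
    delaymodel=chipA_delay clockconstraints=clock.domain.txt maxbestsequences=5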

The following table discusses the various methods of getting timings in the patterns:

Table 4-3 Methods to Generate Timed Patterns

Method: Specifying a clock constraint file with no delay model and no SDC
Encounter Test will: Use the clock timings found in the clock constraint file. All paths
are treated as valid and can be achieved in the specified timings.

Method: Specifying a clock constraint file with no delay model but with SDC
Encounter Test will: Use the clock timings found in the clock constraint file. All paths
are treated as valid except for those found in the SDC.

Method: Specifying a clock constraint file with delay model with SDC
Encounter Test will: Use the clock timings found in the clock constraint file. Encounter
Test will also use the information in the delay model to identify paths that do not make
this timing and then X them out. The SDC will be used to identify additional timing
exemptions.

Method: Specifying a delay model with no clock constraint file and SDC
Encounter Test will: Automatically identify the clocking frequency at which to time the
design by finding the longest paths without any constraints. Encounter Test will create
path distribution curve data and clock timings and pick a clock timing of about 95% of the
distribution. It will then time and ignore the longest paths based on the determined
frequency.

Method: Specifying neither a delay model, SDC, nor a clocking constraint file
Encounter Test will: Not generate timings in output patterns. Encounter Test will
consider all the paths as valid and create only untimed transition tests.


Prerequisite Tasks
Complete the following tasks before executing Prepare Timed Sequences:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode
3. Build Fault model with dynamic faults

Output
Encounter Test stores the test sequences internal to the testmode. The sequences can be
examined by reporting the test sequence. Refer to report_sequences in the Command Line
Reference for more information.

Check the log file to see if a large number of constraints were added to the database. A large
number of constraints may prevent the test generator from ramping up the coverage,
especially in signature-based testing.

Command Output

The following is an example of the three sequences identified by prepare_timed_sequences
and used for ATPG:
Best 3 Sequence Pattern Events Summary
DelayTestClockSeq1 21 out of 35 faults
Static setup patterns:
Stim_Latch
Stim_PI_Plus_Random
Dynamic pattern events:
Pulse_Clock -ES CLK1
Pulse_Clock -ES CLK1
Static measure event:
Measure_Latch
DelayTestClockSeq2 11 out of 35 faults
Static setup patterns:
Stim_Latch
Stim_PI_Plus_Random
Dynamic pattern events:
Pulse_Clock -ES CLK3
Pulse_Clock -ES CLK3
Static measure event:
Measure_Latch
DelayTestClockSeq3 3 out of 35 faults
Static setup patterns:

Stim_Latch
Stim_PI_Plus_Random
Dynamic pattern events:
Pulse_Clock -ES CLK2
Pulse_Clock -ES CLK2
Static measure event:
Measure_Latch

Create Logic Delay Tests


Refer to the following for more information on delay test concepts and faults:
■ “Delay Timing Concepts” on page 124
■ “Delay Defects” on page 141
■ “True-Time Test: An Overview” on page 37

Encounter Test might create and use design constraints while running
create_logic_delay_tests with a delay model or SDC. Refer to “Dynamic Constraints” on
page 150 for more information.

To perform create logic delay tests using the graphical interface, refer to “Create Logic Delay
Tests” in the Graphical User Interface Reference.

To perform create logic delay tests using command line, refer to “create_logic_delay_tests”
in the Command Line Reference.

The syntax for the create_logic_delay_tests command is given below:


create_logic_delay_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test to be generated

The most commonly used keywords for the create_logic_delay_tests command are:
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. For more information, refer to “Clock Constraints File” on page 125.
■ dynseqfilter=<value> - Type of clocking, for example, launch off capture with same
clocking or launch off shift. For more information on sequence types, refer to “Delay Test
Clock Sequences” on page 146.


■ maxbestsequences=<integer 1 to 99> - Default is to allow 99 different clocking
sequences to be generated, if allowed in the design.

The following are the timed options:


■ delaymodel=<name> - Name of the delay model for timed ATPG tests.
■ earlymode/latemode - Ability to customize delay timings. Default is 0.0,1.0,0.0
for each option. For more information, refer to “Process Variation” on page 129.

Refer to “create_logic_delay_tests” in the Command Line Reference for more information
on these keywords.
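
For example, a timed run that uses a delay model and a clock constraints file might be invoked
as shown below; the directory, model, and file names are hypothetical:

create_logic_delay_tests workdir=/designs/chipA/testwork testmode=FULLSCAN experiment=delay1 \
    delaymodel=chipA_delay clockconstraints=clock.domain.txt
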
Note: For more information on generating test patterns with system timings, refer to
“Performing Prepare Timed Sequences” on page 111.

Prerequisite Tasks
Complete the following tasks before executing Create Logic Delay Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode
3. Build Fault model with dynamic faults

Tip
For timed tests, it is recommended to run prepare_timed_sequences to
precondition the clocking sequences and timing constraints.

Output
Encounter Test stores the test patterns in the specified experiment.

Command Output

The output log contains information on testmode, global coverage and the number of patterns
used to generate the results.
Experiment Statistics: FULLSCAN.prep2
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 908 229 0 640 25.22 25.22
Total Dynamic 814 111 0 699 13.64 13.64


Experiment Global Statistics: FULLSCAN.prep2
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 1022 229 0 754 22.41 22.41
Total Dynamic 1014 111 0 899 10.95 10.95
----------------------------------------------------------

Total # effective tests generated: 11

INFO (TDA-001): System Resource Statistics. Maximum Storage used during the run
and Cumulative Time in hours:minutes:seconds:

Note: For information on reporting domain specific test coverage, refer to “Reporting Domain
Specific Fault Coverages” on page 345.

Producing True-Time Patterns from OPCG Logic


The on-product clock generation (OPCG) feature in Encounter Test allows you to generate at-
speed tests using the OPCG circuitry built into the design. This is required where the tester
cannot generate the clocks at the desired speeds. Encounter Test implements OPCG by
defining cutpoints and assigning pseudo primary inputs (PPIs) to the internal clock domains.
The test generator then uses those internal PPIs as the launch and capture clocks in the
design. Encounter Test supports custom OPCG logic that you define and for which you must
provide all the information on how the clocking sequences are produced and their definition.
It also supports a standard set of OPCG logic that can be inserted by Encounter RTL
Compiler and for which sequences can be automatically generated.

Processing Custom OPCG Logic Designs


For custom OPCG logic, you need to design the OPCG logic, define it to Encounter Test when
building the test mode and provide a custom test mode initialization sequence that will
program any PLLs in use and start any reference oscillators. You will also need to define the
test sequences that can be applied by the OPCG logic. The test sequences include all activity
(pulses) at the domain clock root PPIs as well as activity at the real primary inputs that cause
the PPIs to behave that way (for example, changing the value of a trigger signal primary input
pin that launches the OPCG activity). When the OPCG logic is programmable, test
sequences may be defined that include the use of a setup sequence. A setup sequence
defines how to load the programming logic into the OPCG state elements prior to using the
programming to produce the desired sequence of internal domain clock pulses. If the OPCG
programming bits are static once they are loaded, the programming is applied through the
setup sequence only once for a set of test sequences to be applied to the device. If the OPCG
programming bits are part of the normal scan chains, they must be reloaded as part of the
scan_load data for each test.


The following figure shows the tasks required to use custom OPCG logic within a design.


Figure 4-16 Creating True-time Pattern Using Custom OPCG Logic

1. Create an OPCG assignment file. This is done by defining cutpoints and PPIs and/or
using the OPCG statement in the mode definition file or pin assign file.

2. Build the test mode initialization sequence input file. This sets the design to the correct
starting state, programs any PLLs to be used, and starts any reference oscillators that are
used to run the PLLs. It is recommended that PLLs be run until they lock before exiting the
mode initialization sequence.

3. Run the build_testmode command. Specify the modedef file and assign file defined in
Step 1 with the mode initialization file you defined in Step 2.

4. Run the create_scanchain_delay_tests command.

5. Run the commit_tests command.

6. Create the OPCG test pattern application sequence. Any defined test sequence should
specify the setup sequence it uses to program the OPCG logic to produce the sequence of
PPI pulses shown in the test sequence.

7. Run the read_sequence_definition command. This reads in your custom OPCG test
sequences and verifies that they are syntactically correct.

8. Run the create_logic_delay_tests command. Specify the list of clocking sequences to
be used for targeting faults.

OPCG logic usually requires a special initialization sequence and sometimes requires special
event sequences for issuing functional capture clocks during application of the ATPG
patterns. This example creates special event sequences for initializing the chip and for
launching the functional capture clock.

Note that as each design is unique, customized designs require different settings and event
sequences.

Two functional clock pulses are issued from the OPCG logic. The first clock launches the
transition and the second clock captures the logic output in a downstream flip-flop.

The pin assignments file defines the following:


■ Clock, scan enable, and other control pins. These are the standard set of pins that must
be controlled during application of the ATPG patterns.
■ An internal cutpoint and its associated reference name; the file also assigns the
appropriate test function to this PPI.
■ The input reference oscillator pin and an enable pin for the on-chip PLL.

In addition, there should be an OPCG statement in either the mode definition file or the pin
assign file. The OPCG statement block allows specifying the PLLs to be used, the reference
clocks that are used to drive them, and the programming registers that are available to
program them. It also allows specifying each OPCG clock domain, the PLL output that drives
the domain, and programming registers that are available for the domain. Refer to OPCG in
the Modeling User Guide for more information on the OPCG statement syntax.

The initialization sequence defines the launch-capture timing within the OPCG logic and waits
10,000 cycles for the PLL to lock.

The test application sequence defines the sequence of events required to get the OPCG logic
to issue the desired launch and capture clock pulses.

Processing Standard, Cadence Inserted OPCG Logic Designs


When Cadence-defined OPCG logic is inserted using Encounter RTL Compiler, most of the
complex input for Encounter Test is automatically generated, thus easing the process of
generating OPCG tests.

RTL Compiler creates the pin assign file that defines the cutpoints and the OPCG logic to
Encounter Test; you still need to create the mode initialization sequence to correctly program
any PLLs that will be used for OPCG testing.


The RTL Compiler also generates a run script that automates the various steps of
Encounter Test to produce the test patterns.

The following figure depicts the tasks required to generate test sequences using Cadence-
inserted, standard OPCG logic without using the run script generated by RTL Compiler:


Figure 4-17 Creating True-time Pattern Using RTL Compiler Inserted OPCG Logic

1. Build the test mode initialization sequence. This programs any PLLs to be used during
OPCG testing and starts any reference oscillators. It is recommended that all PLLs be locked
before exiting the mode initialization sequence.

2. Run the build_testmode command. Use the pin assign file provided by RTL Compiler and
the mode initialization sequence created in Step 1.

3. Run the create_scanchain_delay_tests command.

4. Run the commit_tests command.

5. Run the prepare_opcg_test_sequences command. This creates valid clocking sequences
and the setup sequences that correctly program the OPCG logic to produce such tests. You
can request intradomain (default), inter-domain, and static test sequences.

6. Run the create_logic_delay_tests command, specifying that it use the sequences created
by the prepare_opcg_test_sequences command.

7. Run the commit_tests command to get fault accounting credit accumulated for all
committed tests.

8. Run the create_logic_delay_tests command again. Specify any static ATPG test
sequences generated by the prepare_opcg_test_sequences command. These should top off
the static fault coverage by targeting those static faults not detected by the generated
delay tests.


Creating Test Patterns Using RC Run Script

When using the OPCG logic inserted by RTL Compiler, you can provide the mode
initialization sequence as input to the write_et_atpg command in RC. This generates an
RC run script named runet.atpg, which automates the various steps of Encounter Test to
produce the test patterns. The script processes the OPCG and non-OPCG test modes. For
the OPCG test modes, it runs the prepare_opcg_test_sequences command that
automatically generates intradomain delay tests to be used by ATPG. It can also generate
static ATPG tests, if desired. You can modify the script to generate inter-domain tests if you
have included delay counters in the OPCG logic.

The following figure depicts the tasks required to create the test patterns using the RC run
script:

Figure 4-18 Creating True-time Pattern Using RC Run Script

1. Insert the OPCG logic using RC. Refer to the Design for Test in Encounter RTL Compiler
Guide for information on inserting OPCG logic using RC.

2. Build the test mode initialization sequence. This initializes the PLLs and starts the
reference oscillators to be used for test.

3. Run the RC command define_dft opcg_mode. This specifies the sequence used to
initialize the PLLs.

4. Run the write_et_atpg command. This generates all files needed to run Encounter Test
and also generates the run script, runet.atpg.

5. Optionally modify the run script, runet.atpg, to customize it for the desired output. For
example, if you inserted delay counters in OPCG domains and want to apply interdomain
tests, add “interdomain=yes” to the invocation of the prepare_opcg_test_sequences
command line in the script.

6. Run the runet.atpg script.


Delay Timing Concepts


A dynamic test may be either timed or not timed. When it is timed, Encounter Test defines the
timing in a sequence definition which is a sort of pattern template and accompanies the test
data. When the dynamic tests are not timed, the tests are structured exactly the same as for
timed tests, but there is no timing template. In this case, the timing, if used at all, is supplied
by some other means, such as applying the clock pulses at some constant rate according to
the product specification.

The following section identifies the settings to control the timings for a test generation run.
One situation where these may be useful is in testing different clock domains.

Path Length
Maximum and minimum timings represent the time between the clock edge that triggers the
release event and the clock edge that captures the result. With reference to Figure 4-24 on
page 145, note that this is not exactly equal to the path length, because the paths from the
Clock A input to LatchA and LatchB may not be equal. Regardless, it is often convenient to
think in terms of the path lengths. Paths longer than the maximum path length (after adjusting
for the clock skew as previously noted) cannot be observed by the test since they would be
expected to fail. Any latch or primary output that would have been the capture point for these
long paths is given a value of X (no measure).

Paths shorter than the minimum path length will be observed, but some small delay defects
could go undetected in a short path. Note that any dynamic test that involves paths of unequal
lengths must be timed to the longest path, so this short path concern exists regardless of
whether a minimum path length is specified.

Design Constraints File


This file supplements or replaces the SDF for delay tests that consider small delay defects,
specifically for incorporating faster technologies with specialized timing algorithms such as
on-product clocks for at-speed testing. Refer to “Design Constraint Syntax” on page 101 for
more information on syntax.


Clock Constraints File

The clock constraints file guides the creation of clock sequences used for fixed-time, at-speed
or faster than at-speed test generation. The delay test generator builds tester-specific test
sequences using the clock constraints file information and the tester description rule (TDR).
Multiple domains and inter-domain sequences can be defined in a single clock constraints
file. Certain timing-related TDR parameters are also overridden using a clock constraints file.
Statements supported are:
■ Clock domain statements
■ Return to stability statements
■ TDR overrides statements

Command Line Syntax


create_logic_delay_tests...clockconstraints=<filename>

Clock Constraints File Syntax

The domains to be tested are specified by the clocks that control them. Several syntax
variations are supported. Each clock domain statement is used to generate a specific clock
sequence during ATPG. Multiple clock domain statements are not combined into a single
sequence.
clockname {edge, width timeUnit} {speed timeUnit};

or
clockname1 {edge, width timeUnit} clockname2{edge, width timeUnit} {speed
timeUnit};

or
clockname {speed timeUnit};

or
clockname1 clockname2 {speed timeUnit};

where:
■ clockname - Name of a clock pin. This can be a primary input pin or a pseudo primary
input (PPI).
Note: Different clock PIs can be used within the same statement in the clock constraints file.
❑ If there is only one pin, it is used as both the launch and capture clocks.


❑ If there are two pins, the first pin is the launch clock and the second pin is the capture
clock.
■ edge - Either posedge or negedge, referring to which edge of the clock pin is the active
edge (opposite of stability state) for the domain.
■ width timeUnit - The pulse width. Specified as an integer. Specify timeUnit in ns,
ps, or MHz (for frequency).
■ speed timeUnit - The time between the leading edges of the launch and capture
clocks. As with width, the speed can be defined in ns, ps, or MHz.
Note:
■ The clockname must have a space after it.
■ Speed timeUnit is optional but the surrounding braces {} are required.

Examples

Example 1:

Tests intra-clock domain CLKA to CLKA:

CLKA {posedge , 5 ns} CLKA {posedge , 5 ns} {50 Mhz} ;

Example 2:

Tests intra-clock domain CLKA to CLKA, intra-clock domain CLKB to CLKB, and inter-domain
CLKA to CLKB:

CLKA {posedge , 5 ns} { 10 ns } ;
CLKB {posedge , 10 ns} { 20 ns } ;
CLKA {posedge , 5 ns} CLKB {posedge , 10 ns} {10 ns} ;

Return to Stability Statement

The return to stability (RTS) statement specifies when the trailing edge of a clock is to occur
relative to the END of the tester cycle. Specify the RTS time for clock domains using one of
the following formats:
RTS clockname {edge , speed timeUnit} ;

or
RTS ALLCLKS {edge , speed timeUnit} ;

where:


■ clockname - The name of a clock pin. This can be a primary input pin or a pseudo
primary input (PPI).
■ edge - Either posedge or negedge, referring to which edge of the clock pin is the active
edge (opposite of stability state) for the domain.
■ speed timeUnit - The pulse width. Specify speed as an integer. Specify timeUnit
in ns, ps, or MHz (for frequency).
Note:
■ The clockname must have a space after it.
■ The Return to Stability statement applies to all Clock Domain Statements that reference
the same clock PI name.

Example:

The following example shows how the clock domain statement is modified by a return to
stability statement:
// CLKA clock domain is 50 MHz with 10 ns clock
CLKA {posedge , 10 ns} CLKA {posedge , 10 ns} {50 Mhz} ;

// Define the time consumed before the end of the tester cycle
// that CLKA returns to stability
rts CLKA {1 ns} ;
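
A single statement can also apply the same return-to-stability time to every clock. The
following sketch uses the ALLCLKS form shown above, with an assumed value of 1 ns:

rts ALLCLKS {1 ns} ;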

Resultant Clock Sequence

This example defines the timing for the CLKA launch and capture. If the TDR specifies that
only one clock pulse per tester cycle is to be issued, the resulting sequence will place the
CLKA pulses in two consecutive tester cycles.


The leading edge of the launch clock will be at time 90 ns and the leading edge of the capture
clock will be at time 10 ns in the next cycle. Both should have a pulse width of 10 ns.

However, the RTS entry modifies this timing. It specifies that the trailing edge of both the
launch and capture CLKA pulses should be at 99 ns (1 ns from the end of the cycle). To
accommodate this, the launch pulse width is changed to 9 ns and the capture pulse width is
changed to 89 ns.

Statements to Override TDR Parameters

In addition to clock domain specifications, certain TDR parameters can be overridden. The
following statements may be used in the clockconstraints file:
■ resolution {speed timeUnit} ;
■ accuracy {speed timeUnit} ;
■ period {speed timeUnit} ;

Resolution identifies the smallest increment of time on the tester. Accuracy is added to the
time between release and capture timings. Tester period identifies the time for one tester
cycle.

Example:
// tester resolution...smallest increment of time on the tester
resolution {10 ps} ;

// accuracy of the tester it will be added to the clk to clk time
accuracy {100 ps} ;

// tester period also called the tester cycle
period {100 ns} ;

// CLKA clock domain is 50 MHz with 10 ns clock
CLKA {posedge , 10 ns} CLKA {posedge , 10 ns} {50 Mhz} ;

Process Variation
Control the timing calculation by selecting a point on the process curve. This curve is a
mathematical representation of the delay value, which if measured on a sample of parts,
would vary about some mean value; that is, the process curve is the statistical distribution of
the average delay values of the chip. Figure 4-19 shows an example process curve for a given
delay. Set the coefficients relating to best case, nominal case, and worst case delays. The
delays are calculated as
delay=(best_case_coefficient * best_case_delay) +
(nominal_case_coefficient * nominal_case_delay) +
(worst_case_coefficient * worst_case_delay)

This formula allows the selection of either the best case, worst case, or nominal delays by
setting one of the coefficients to 1 and the others to 0. However, Encounter Test accepts any
decimal number for each of the coefficients to scale the delays (with a coefficient other than
1) or use averages (with two or more non-zero coefficients).
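
For example, assuming hypothetical best case, nominal, and worst case delays of 400 ps,
500 ps, and 650 ps for a given net, coefficients of (0, 0.5, 0.5) yield
delay = (0 * 400) + (0.5 * 500) + (0.5 * 650) = 575 ps, halfway between the nominal and
worst case values.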


Figure 4-19 Process Variation

Tests to sort the product based on its speed are created by making several different runs
and selecting a different point on the process curve each time. To ensure that all received
product is faster than the halfway point between nominal and worst case (see Figure 4-19),
specify a process variation of (0, .5, .5). Not all manufacturers will do such sorting;
therefore, we recommend verifying whether an override is acceptable.

Pruning Paths from the Product


Use lineholds (LH attribute or linehold file) to hold a particular point to a static value. The effect
of the linehold is to make a path unobservable, which causes it to be ignored for timing.


Verifying Clocking Constraints


The verify_clock_constraints command verifies the data in the clock constraint file
against the Tester Description Rule (TDR) syntax.

The syntax for the verify_clock_constraints command is given below:


verify_clock_constraints workdir=<directory> testmode=<modename>
clockconstraints=<filename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. Refer to “Clock Constraints File” on page 125 for more information.
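
For example, a hypothetical invocation to check a clock constraints file against the TDR might
be:

verify_clock_constraints workdir=/designs/chipA/testwork testmode=FULLSCAN \
    clockconstraints=clock.domain.txt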

Prerequisite Tasks
Complete the following tasks before executing Verify Clocking Constraints:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode. Refer to “Performing Build Test Mode” in the Modeling
User Guide for more information.

Output Files
None

Command Output
The output log indicates whether the sequences match the data stored in the testmode.

A sample output is given below:


INFO (TTU-401): Verify Clock Constraints begins. [end TTU_401]
INFO (TTU-130): Reading ClockConstraint file clock.domain.txt. [end TTU_130]
INFO (TTU-402): Verify Clock Constraints is complete. [end TTU_402]

Error in Syntax:

INFO (TTU-130): Reading ClockConstraint file clock.domain.txt.bad. [end TTU_130]


ERROR (TTU-418): For the statement specified on line 2 within the clock constraints
file: clock.domain.txt.bad , the clock name CLKa2 was not found in the model.
Correct the clock constraint file value(s) and re-run. [end TTU_418]
ERROR (TTU-414): Clock_constraint file parse failed, due to syntax error at line
3. [end TTU_414]
INFO (TTU-402): Verify Clock Constraints is complete. [end TTU_402]

Verifying Clock Constraints Information


Before running delay applications, it is highly recommended to check the clock constraint file
against the TDR information to verify that the clock constraints are within the TDR guidelines.
This reduces resource consumption and the subsequent effort to rectify the identified
discrepancies at the end of long-running jobs.

The create_logic_delay_tests, create_path_delay_tests, prepare_timed_sequences,
create_wrp_tests, create_iowrap_timed_delay_tests, and time_vectors commands
automatically invoke the function to verify clock constraints whenever the application calls
the clock constraint parser.

Another method to verify the clock constraints information with the TDR information is to use
the verify_clock_constraints command. All the aforementioned commands and the
verify_clock_constraints command check if the following criteria are met:
■ The minimum pulse width value obtained from the clock constraint file should be equal
to or greater than the specified value for the MIN_PULSE_WIDTH parameter in the TDR
■ The maximum pulse width value obtained from the clock constraint file should be less
than or equal to the specified value for the MAX_PULSE_WIDTH parameter in the TDR
■ The minimum leading to leading edge width obtained from the clock constraint file should
be equal to or greater than the specified value for the
MIN_TIME_LEADING_TO_LEADING parameter in the TDR
■ The minimum trailing to trailing edge width obtained from the clock constraint file should
be equal to or greater than the specified value for the
MIN_TIME_TRAILING_TO_TRAILING parameter in the TDR
■ The specified pulse width and clock speed values for a clock in the clock constraints file
should be unique (semantic check)

The above-mentioned clock constraint checks are invoked in the initial stages of the test
generation and the applications quickly notify you if any criterion is not met.
Note: For any of the above-mentioned checks, if the data to compare is not available from
the TDR, the command does not perform the corresponding check.

Refer to verify_clock_constraints in the Command Line Reference for more information.


Characterization Test
Characterization test is a type of path test used to primarily speed-sort a product. That is,
using certain paths, the maximum speed at which the design can run is determined.

The goal of characterization test is to provide a starting point for schmoo-ing at the tester.
Characterization test is based upon a path test, however many of the timing attributes of the
Manufacturing Delay test apply. Many manufacturing sites have tools that facilitate changing
the timings of the patterns therefore Encounter Test produces a basic path test that can be
manipulated on the ATE. The default process for generating path tests when delays are
available (a delay model has been built) is to generate tests for the specified longest (critical)
paths and to individualize the patterns that test each path. Each test pattern can have its own
unique timeplate that can be manipulated on the ATE independent of the timeplates used for
other patterns. Encounter Test does not limit path tests (logic values and timings) to just
testing and timing individual paths; multiple test paths can be simultaneously tested. When
the paths are simultaneously tested, the speed of the tests is affected by the entire product.
Constrained timings should be used so the timings do not take into account the paths longer
than the specified paths.

An important consideration when performing path test is the pattern volume. In path test, the
trade-off is between processing time, pattern count, test coverage and the number of paths
that can be tested. A minimal pattern set is difficult with path tests. Compressing patterns is
often not possible when focusing on a particular path and ignoring other capture registers in
the design. This can cause data volume/tester time problems which must be balanced with
the number of paths to test. As the number of paths increases, they should be grouped into
paths with similar timings, then applied with the longest timing for that group. This means that
some paths will have their timings relaxed but fewer timing changes will have to be made on
the manufacturing test equipment.

Paths for characterization testing can be partially or fully specified. Encounter Test selects
paths using delay information and/or the gate level information. If delay information exists (a
delay model was specified), the paths are determined by specifying the partial path and filling
out the rest of the paths, enumerated from longest, shortest or random order. Also, if a path
is given entirely at the cell level, it is considered only partially specified because the paths are
stored at the cell level and there may be multiple paths within a cell. By default, Encounter
Test finds a certain number of the longest paths upon which to attempt test generation based
on a user-specified path. This is rarely an exhaustive set of paths due to the number of cells
(and paths through those cells) in a specified path.

The specified path may not be the sole determining factor for the timing of the test. The logic
in the back-cone of the capture register also affects the test's timing. In Figure 4-2, this is
shown in the smaller dotted region. For a robust test, other logic which affects the path (feeds
into the path) must be considered. If logic in the back-cone operates slower than the path,
then that is the timing that must be used. This ensures that late values can only be due to the
cone of logic the target path exists within. To limit the effect of this, the timing is done using
the actual pattern so only the logic experiencing transitions is timed.

The tests of paths can be stored patterns generated specifically for the paths, or previously
generated patterns from another ATPG run, such as dynamic stored pattern, WRPT, or LBIST
patterns, that have been simulated against the defined set of faults. For example, LBIST
patterns can be resimulated to investigate whether the longest paths are tested.

Performing Create Path Tests


This task generates dynamic ATPG patterns based on either a path list or starting/ending
points.

Refer to “Path Tests” on page 137 for information on different types of path patterns.

To perform create path delay tests using the graphical interface, refer to Create Path Delay
Tests in the Graphical User Interface Reference.

To perform create path delay tests using the command line, refer to
“create_path_delay_tests” in the Command Line Reference

The syntax for the create_path_delay_tests command is given below:


create_path_delay_tests workdir=<directory> testmode=<modename> experiment=<name>
pathfile=<file>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
■ pathfile= name of paths to be generated. Refer to “Path File” on page 136 for more
information.

The most commonly used general keywords for the create_path_delay_tests command are:
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. For more information, refer to “Clock Constraints File” on page 125.
■ dynseqfilter=<value> - Selects the type of clocking sequence and the path on
which faults will be processed. For example, the value repeat selects a sequence that
launches and captures with the same clock, and only paths in which all points are fed
and observed by the same clock will be processed. For more information on sequence
types, refer to “Delay Test Clock Sequences” on page 146.
■ maxnumberpaths=<number> - The maximum number of paths to generate for any
one path group. There is a rising and falling group created for each specified path. The
default is 20.
■ pathtype=nearlyrobust|robust|nonrobust|hazardfree - Type of test to
generate. Refer to “Path Tests” on page 137 for more information. The default is
nearlyrobust.

The following are the timed options:


■ delaymodel=<name> - Name of the delay model for timed ATPG tests.
■ earlymode/latemode - Ability to customize delay timings. Default is 0.0,1.0,0.0
for each option. For more information, refer to “Process Variation” on page 129.

Options to simulate paths against existing patterns:


■ inexperiment=<name> - input experiment name
■ tbcmd=sim - whether to generate path tests (tbcmd=tg) or resimulate existing path
tests (tbcmd=sim).

To simulate paths against existing patterns, use the inexperiment and tbcmd keywords.

Refer to “create_path_delay_tests” in the Command Line Reference for more information
on these keywords.
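
For example, to generate robust tests for the paths listed in a path file, the command might be
invoked as follows; the directory, experiment, and file names are hypothetical:

create_path_delay_tests workdir=/designs/chipA/testwork testmode=FULLSCAN experiment=path1 \
    pathfile=critical_paths.txt pathtype=robust maxnumberpaths=10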

Prerequisite Tasks
Complete the following tasks before executing Create Path Delay Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test testmode

Output Files
Encounter Test stores the test patterns in the experiment.


Command Output

The output log contains information about the number of paths tested and the type of
generated test. A sample output log is given below:
Hzfree= Hazard Free Robust
Robust = Robust
NrRob = Nearly Robust
NoRob = Non Robust

****************************************************************************
----Dynamic Path Test Generation Final Statistics----
Final Experiment Statistics for Path Faults

           #Faults #Tested #HzFree #Robust #NrRob #NoRob #NoTest %TCov %HFree %Rob %NrRob %NoRob %NoTest
All Paths     4040     321       0       0      0    321       0  7.95  0.00 0.00   0.00   7.95    0.00
OR Groups      920     138       0       0      0    138       0 15.00  0.00 0.00   0.00  15.00    0.00
AND Groups       0       0       0       0      0      0       0  0.00  0.00 0.00   0.00   0.00    0.00
****************************************************************************

----Final Pattern Statistics----

Test Section Type    # Test Sequences
----------------------------------------------------------
AC Path              126
----------------------------------------------------------
Total                126

Path File Syntax


“Name” “Launch Pin” “intermediate pins” “Capture Pin”;

Specifying intermediate pins is optional.

A sample entry in the path file is given below:


Path01 reg_0_.q RC_42747_0.a RC_42747_0.x newlreg_3_.d ;
Path02 flop1.q flopcapture.d;

Path File
A path file is used as input to Create Path Delay Tests and Create Path Timed Delay Tests to
assist in processing specific paths on the design. The guidelines for path file syntax are as
follows:
■ The syntax for the content of the path file is: identifier block/pin/net ......;
■ There is no limit to the number of paths.


■ Spaces or semi-colons are not allowed in the names.


■ Entities in the path may be pins, blocks, or nets.
■ Spaces are used to separate the entities.
■ Names must be in Encounter Test format (that is, not Verilog or SDF formats).
■ The path may consist of one entity or describe an entire path. The order of the paths is
of no consequence; Encounter Test will fill in the rest of the entities in the path. The
number of expanded paths is 20.
■ The last entity of a path should be an input of the block. If a latch primitive output pin or
net is used to represent the end of a path, the path will not stop there, instead traversing
to the next latch.

The following is a path file example:


MyPath FF21.Q andGate1.outPin orGate1.outPin;
MyPath2 FF33.QN andGate1.outPin orGate2.outPin;
...

In the preceding example, MyPath and MyPath2 are the path identifiers that name the two
path fault groups. Specify names of nodes along the path after the path fault group name. The
nodes may be specified in any order with any number of intermediate nodes excluded. When
nodes are excluded, the intermediate nodes are determined based on how the parameter
selectby is set, whose default value is longest. Use a semi-colon (;) to indicate the end
of the path fault group node list.

Path Tests
These tests analyze whether a transition propagates through a specified user-defined path
and a directed acyclic set of gates.

Hazard-Free Path Tests

Hazard-free tests require that the only possible way for the transition to arrive at the sink is
without any interference from the gating signals that intersect the path. That is, the gating
signals must be at a constant steady state; glitches are not allowed. To make a test hazard-
free, all inputs into the path must be at a steady state (A=1s). Being at the steady state does
not allow two paths to intersect. The design in Figure 4-20 can be made hazard-free by
ensuring C and H are at a steady 1 (1s).

The benefits of these tests are:


■ They allow the individual paths to be tested and diagnosed.


■ They find very small defects.
■ They can be used for characterization of paths to less than the system cycle. Refer to
“Characterization Test” on page 133 for additional information.

Figure 4-20 Robust Path Test

Robust Path Tests

Robust path tests are “relaxed” hazard-free tests. The difference is that for robust tests the
controlling to non-controlling transitions do not require a steady value from the gating signals.
This means the off path pin may be unstable until the final state is achieved. Conversely, a
non-controlling to controlling transition on the path requires the other pins to be at a steady
value.

Robust path test tests the delay of a path independent of the delays of the other paths,
including paths that intersect it. So if path B-I-L in Figure 4-20 is faulty, then regardless of the
delay on the other paths, L will detect the 1 to 0 transition late. If a fault is present on B and
C, then L will detect the transition late, however we are unable to diagnose from which path
the fault has come.


Non-Robust Path Tests

Non-robust path tests allow hazards on the off-path gating for both controlling to
non-controlling and non-controlling to controlling transitions. The only requirement is that
the initial and final state of the path's sink is a function of the transition of the source.
For example, if the source does not change state, the sink will settle at the state that is a
function of the source.

The non-robust tests create a path test; however, a faulty value on another line can invalidate
the test. An example is the hazard on C in Figure 4-21. If the glitch widens the value at L, the
value still appears reasonable even though there is a fault on B. The glitch is typically caused
by values changing from 0 to 1 and 1 to 0 on two inputs of AND or OR gates.

If the robust test is invalidated by a signal with a known definite glitch (a two input AND gate
with both inputs transitioning in opposite directions would create a definite glitch or hazard),
then the test is identified as a nearly-robust path test. A plain non-robust path test is one
where off path gating requires a steady state value on an off path input, and it is impossible
to get a steady state (for robustness) or definite hazard (for near robustness) on that pin.

Figure 4-21 Non-Robust Path Test

Reporting Path Faults


To report path faults using the graphical user interface, refer to Report Path Faults in the
Graphical User Interface Reference.


To report path faults using the command line, refer to report_pathfaults in the Command
Line Reference.

The syntax for the report_pathfaults command is given below:


report_pathfaults experiment=<experiment_name> testmode=<testmode_name>
workdir=<directory> globalscope=no
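
For example, to report the path faults for a previously generated experiment (the directory
and experiment names below are hypothetical):

report_pathfaults workdir=/designs/chipA/testwork testmode=FULLSCAN experiment=path1 \
    globalscope=no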

Refer to Reporting Path Faults in the Modeling User Guide for more information.

Performing Prepare Path Delay


This task builds the Encounter Test alternate fault model for a path file.

Refer to “Path Tests” on page 137 for information on different types of path test.

To perform Prepare Path Delay using the command line, refer to prepare_path_delay in the
Command Line Reference.

The syntax for the prepare_path_delay command is given below:


prepare_path_delay workdir=<directory> testmode=<modename> pathfile=<file>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ pathfile= name of paths to generate

The commonly used options for prepare_path_delay are:


■ maxnumberpaths = <number> - The maximum number of paths to generate for any
one path group. Default is 20.

Refer to prepare_path_delay in the Command Line Reference for more information on
these keywords.
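
For example, a hypothetical invocation that builds the alternate fault model from a path file
might be:

prepare_path_delay workdir=/designs/chipA/testwork testmode=FULLSCAN \
    pathfile=critical_paths.txt maxnumberpaths=10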

Prerequisite Tasks
Complete the following tasks before executing Prepare Path Delay:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.


2. Build Encounter Test Testmode. Refer to “Performing Build Test Mode” in the Modeling
User Guide for more information.

Output Files
Encounter Test stores the alternate fault model name as output.

Command Output

The output log displays the summary of the number of faults created. A sample log is shown
below:
There are 1 test mode(s) defined:
FULLSCAN

#Faults #Active #Inactive %Active
Total 2 0 2 0.00
Total Static 0
Total Dynamic 2 0 2 0.00

Delay Defects
Delay defects are modeled by dynamic faults. Catastrophic spot point defects are modeled by
dynamic pin and pattern faults. The cell level defect and process variations use the dynamic
path pattern fault. Both use a single transition to excite the fault effect. Dynamic pin and
pattern faults require values at one gate. Path faults require a specific transition to be excited
and the effect of that transition propagated along the chosen path. Path faults also have a
variable detection criteria. They can be detected as hazard-free, robust, nearly-robust or non-
robust.

Refer to the following for related information:


■ “Encounter Test Fault Model Overview” in the Modeling User Guide.
■ “Create Exhaustive Tests” on page 281

The following example shows an AND gate with a single (slow-to-rise) pin fault. The only pin
required to change state is the faulty pin. The other input to the AND gate may or may not see
a transition, but it must have a non-controlling value (1 for the AND gate) in the final state. The
final output state is 1 in this example. If the defect exists, the output will be 0 for longer than
expected time during which its state must be observed. The time the defect is present can be
as small as a few hundred picoseconds or 10's of nanoseconds. Though there is a time
component to determine whether the defect is active, the detection of the fault does not take
this into account. In our example, if the value (0) which is opposite the expected value (1) is
captured in a latch (by a clock event) or PO (by a PO measure) immediately following the
launch of the transition, the fault is detected.

Figure 4-22 Example of a Transition Fault

The path pattern faults model a defect or process variation that accumulate across multiple
gates causing a signal to arrive late. Refer to “Path Pattern Faults” in the Modeling User
Guide.

Delay ATPG Patterns


Manufacturing tests generate tests to detect random catastrophic spot delay defects. To
ensure the random defects throughout the product are detected, dynamic faults are modeled
at all gate or cell pins on the product. Dynamic faults are transition faults (slow-to-rise or slow-
to-fall). See Figure 4-22 on page 142 for an example. These dynamic faults are targeted by
True-Time Test flows as well as the fixed time delay test solution.

Parametric process problems cannot necessarily be observed at a single fault site. The delay
effects of such a variation accumulate across the path. Characterization Test provides for
testing process variations by testing specific paths. The paths may include be a complete set
of logic between memory elements or smaller segments within technology cells. The paths
are modeled by dynamic path pattern faults. The locations and paths of these faults can be
user-determined or paths can be selected based on their path lengths.

An element that is essential to delay testing is that the test patterns must be applied with as
little slack at the measure points (observable scan chains or POs) as possible. This can entail
running at or near the system speed, or in some cases, faster than system speed for short
paths. Encounter Test uses information contained within a Standard Delay File (SDF) as input
to the True-Time Test methodologies. The SDF is used to automatically set the timings of PI
switching and clock pulses at the tester.

In the True-Time Test methodology, the SDF is also used to automatically determine which
memory elements have paths that are significantly longer than the rest of the paths within a
given clock domain (such as multi-cycle paths). These outliers are automatically made to not
cause failures at the tester by either measuring X at these elements (ignoring the measure)
or by constraining the logic that creates the long path during test pattern generation.

In the At-Speed and Faster Than At-Speed True-Time methodologies, the SDF is used in
conjunction with a clock constraints file (described in “Clock Constraints File” on page 125).
The clock constraints file specifies the desired operating speeds of each clock domain to be
tested as well as several other user-defined timing relationships. By using these timing
relationships, the outlier paths are determined the same way as in the automatic flow and the
logic that contributes to paths that do not meet this user-defined timing are either ignored
(measure X) or constrained during test pattern generation.

The At-Speed and Faster Than At-Speed flows require a method of achieving a high clock
rate on the product during test. Sometimes this clocking can be provided by the tester, but it
is more likely some hardware assistance will be required on the part itself. Additional On-
Product Clock Generation support is available to achieve the system speeds. Refer to “On-
Product Clock Generation (OPCG) Test Modes” in the Modeling User Guide for additional
information.

Use of a design constraints file is another method to supplement At-Speed and On-Product
test generation. Refer to “Design Constraints File” on page 100 for details.

The dotted line in Figure 4-23 shows the effect of a delay defect and the difference between a delay (AC) test and a static (DC) test. A delay defect causes the expected switch time (represented by the solid line) to be delayed (represented by the dotted line). The difference between the delay and static tests is that a delay test explicitly causes the design to switch and specifies when the result should be measured, while a static test does not require a transition. In a static test, the measure can be done later since a static defect is considered to be tied high or low.

Figure 4-23 General Form of an AC Test

A basic delay test is a two pattern (or two clock, or two cycle) test. The first pattern is called
the launch event because it launches the transition in the design from 0 to 1, or 1 to 0
(additional transitions of 0 to Z, 1 to Z, Z to 0, and Z to 1 are also possible in some designs).
The second pattern is called the capture event because it captures, or measures, the
transition at an observe point (register or primary output).

In its simplest form, a dynamic test consists of initiating a signal transition (the release
event) and then some measured time later, observing the effect of this transition (the
capture event) at some downstream observation point (typically a latch). For a transition to
occur, the design must be in some known initial state from which the transition is made.
Therefore, a dynamic test usually includes a setup or “initialization” phase prior to the timed portion of the test. When the point of capture is an internal latch or flip-flop, observation of the test involves moving the latch state to an observable output. This is done with scan design by applying a scan operation to unload the scan chain.

The three phases of a dynamic test (initialization, timed, and observation) are depicted in
Figure 4-24 along with the three types of events in the timed portion. In general, a single
dynamic pattern may have several events of each type.

Figure 4-24 Dynamic Test Sequence

A dynamic test may be either timed or not timed. When it is timed, Encounter Test defines the
timing in a sequence definition which is a sort of pattern template and accompanies the test
data. When the dynamic tests are not timed, the tests are structured exactly the same as for
timed tests, but there is no timing template. In this case, the timing, if used at all, is supplied
by some other means, such as applying the clock pulses at some constant rate according to
the product specification. Refer to “Default Timings for Clocks” on page 172 for related
information.

The optimized timed test calculated by using delays from the SDF and tester information from
the Tester Description Rule is, depending on the tester's accuracy, a test that can run faster
than the tester's cycle time. If the release and capture clocks are different, they can be placed
in the same tester cycle and their pulses can overlap to obtain an optimized (at-speed) test.
If the release and capture events both require the same clock to be pulsed, as is the case for
most edge-triggered designs, then the two clock pulses must be placed into consecutive

tester cycles. By timing these two consecutive tester cycles differently, the pulses can be
pushed out to near the end of the first cycle and pulled in near the beginning of the following
cycle. This allows the pulses to occur with a cycle time much less than that of the tester and
much closer to that of the product. This allows the test to run at the speed necessary to truly
test the potential defect because it is “at-[the]-speed” of the design, not at-the-speed (or at the mercy) of the test equipment.

Delay Test Clock Sequences


Transition fault testing is accomplished by passing a transition from a source flop, through the
fault site, to a sink flop, and controlling the launch to capture timing. This requires design
values to be set up in at least two time frames, the first to generate the transition and the
second to capture the results.

There are two techniques for generating the two time frames:
■ Launch on last shift requires that the final shift create a transition at the outputs of the
required flops
■ Double clocking (also known as functional release) uses two clocks in system operation
(or capture) mode to create the transition and capture the results.

In Figure 4-25, the timings for the two transition generation techniques show that the clocking is essentially the same. The scan can be run slow (to control power), but there must be two cycles of the clock at the test time, and the time from active edge to active edge is controlled.

Figure 4-25 Delay Test Two-Frame Clocking

In launch-on-last-shift, the Mux-Scan "Scan Enable" signal must be toggled after the launch
clock edge and before the capture clock edge. In double-clock, the scan is completed, and
the clocks can even be held off for an indefinite period while the scan enable signal is toggled.

Then two timed clocks are issued, and there is again an indefinite amount of time to toggle the scan enable. This places no significant restrictions on the scan enable distribution, which eases the timing requirements in a Mux-Scan design but makes test generation more difficult.

The default for MuxScan designs is to use double-clock only. It is recommended to allow the
test pattern generator to decide which method to use. If the SDF is provided, the timings of
the Scan_Enable signal can be appropriately controlled. The keywords of the
create_logic_delay_tests command for controlling clock sequence generation are:
■ dynfuncrelease=yes|no where yes restricts the clock sequences to double-clock
only, and no allows the test pattern generator to use either.
■ dynseqfilter=any|repeat|norepeat|onlyclks|clkpiclk|clockconstraint
where repeat will allow only patterns that repeat the same clock (within a domain), and
onlyclks will also allow inter-domain sequences.
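For illustration only (the keyword values shown are one possible choice, and "…" stands for the remaining required keywords), a run that leaves the clocking method to the test pattern generator while accepting any clock sequence might look like:

create_logic_delay_tests … delaymodel=<dm> experiment=<exp> dynfuncrelease=no dynseqfilter=any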

A method of calculating the values to scan into the flops in double-clock test generation is
illustrated in Figure 4-26. Three levels of flops are depicted.

Figure 4-26 Delay Test Execution

To generate a test that detects a slow-to-rise fault on the output of the OR gate, a 0 to 1
transition must be created on the output of the OR gate using the following steps:
1. Place an initial value of 0 on the output of the OR gate by scanning a 0 into the B0 and
B1 flops. The three values shown on each output pin are the values after the last shift
clock, launch clock, and capture clock, respectively.
2. Place a value of 1 on the output of the OR gate. To accomplish this, the B0 flop must
capture a 1 at the launch clock event. Therefore, a value of 1 must be
scanned into the A0 flop in the previous scan shift. Similarly, for B1 to capture a 0, A1 must
have 0 scanned into it during the previous scan shift. Set input pin IN1 to 1 to propagate
the fault to C0.

The second pulse of the clock is the capture event. If the transition propagates with normal
timing, a 1 is captured into the C0 flop. If there is a delay defect, a 0 will be captured. During
fault simulation, additional transition and static faults will also be detected by this pattern.

Customized Delay Checking


Customized delay checking is an advanced capability to review and remove log messages
produced by Build Delay Model. It is highly recommended to first have a full understanding of
the information conveyed by the messages in order to successfully use this capability.

Build Delay Model performs an integrity check on the data contained in the SDF file. The check verifies that the design, as described by the SDF data, is correctly modeled by Encounter Test. The secondary purpose of the check is to verify the completeness of the information within the SDF file.

The delay checking process is driven by information in a binary file named TDMcellDelayTemplateDatabase. This file can be created via any of the following:
■ Automatic generation during the build_delaymodel process
■ Automatic generation using build_celldelay_template
■ Manual creation or update using read_celldelay_template

A TDMcellDelayTemplateDatabase file can be generated by either build_delaymodel or explicitly running build_celldelay_template with appropriate
parameters to generate delays for the desired cells. A database file can be produced in
readable form by specifying the printtemplates=filename keyword option for
build_celldelay_template. The format of this file resembles an SDF file with all of the
numerical delay information removed. The file describes which pins have timing relationships
in each cell, but does not describe what those relationships actually are. The file contains an
entry for top-level CELLs only. Each definition can be customized by the configuration of the
pins on a given instance of the CELL as seen in the following example:
CELL AND
{
IOPATH A RISING Z RISING
IOPATH A FALLING Z FALLING
IOPATH B RISING Z RISING
IOPATH B FALLING Z FALLING
}

CELL AND
(
TIE1 A
)
{
IOPATH B RISING Z RISING
IOPATH B FALLING Z FALLING
}

This shows two definitions for the same AND cell, but in the second case, the first input pin is
tied to 1. This latter definition will only apply to instances of the AND cell in which the first input is tied to 1. Otherwise, the first definition will apply, as it is more generic.

Once exported in this form, the definitions can be manually edited and read back in to the
binary database used by build_delaymodel for delay integrity checking. To read a
modified set of definitions back in, use read_celldelay_template and specify your text file containing the definitions with the templatesource keyword.

Run build_delaymodel to build the delay model with the new definitions for checking.
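A sketch of this round trip, using the keywords described above (the template file name is illustrative and "…" stands for the remaining required keywords):

build_celldelay_template … printtemplates=my_templates.txt
(edit my_templates.txt to adjust the pin timing relationships)
read_celldelay_template … templatesource=my_templates.txt
build_delaymodel …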

Dynamic Constraints
Dynamic constraints are intended to prevent pattern miscompares for transitions propagating
through the following types of delays:
■ Long paths (negative slacks or setup time violations)
■ Short paths (hold time violations)
■ Delaymux logic
■ Unknown delays

The preceding delay types may miscompare because:
■ Long paths are not measured within the expected time
■ Short paths are evaluated earlier than expected
■ The transition propagates through a delaymux

The inclusion of dynamic constraints is a method to constrain timings. Through dynamic constraints, transitions can be prevented from propagating down long paths, such that the clock-to-clock timing does not require expansion to accommodate the long path.

Constraints are included in the linehold and sequence definition files. The sequence definition
file (TBDseq) includes constraints as keyed data. The keyed data is found on the define
sequence and the key is CONSTRAINTS. The entire set of constraints for a sequence
definition is found in the string that follows the key. See “Linehold Object - Defining a Test
Sequence” on page 204 and “Keyed Data” in the Test Pattern Data Reference for
additional information.

Refer to “Linehold File Syntax” on page 199 for details on linehold syntax supporting the
transition entity.

Constraint Checking
Timing constraints are enforced in the following two ways.
■ The constraints are justified by the ATPG engine using the constraintjustify
keyword.
■ During simulation, the ATPG engine verifies the test patterns do not violate the
constraints (using the keyword constraintcheck).

In some cases, the ATPG engine is unable to generate patterns in the presence of all the constraints; therefore, turning off constraintjustify allows the ATPG engine to generate patterns. In this case, the simulation engine can verify that the constraints were not violated and, if required, take some action (either remove or keep them). The constraintcheck keyword controls this verification (defaults to yes).
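For illustration (keyword values and omitted keywords are placeholders), a run that relies on simulation checking rather than justification of the constraints might be specified as:

create_logic_delay_tests … delaymodel=<dm> constraintjustify=no constraintcheck=yes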

If your delay test flow uses prepare_timed_sequences, then the ATPG engine will try to
locate paths that will not be safe to measure within the frequency specified in the design’s clock constraints
file. If the engine finds such paths, it creates "transition constraints" to prevent transitions from
occurring along these long paths. If the test generator can successfully stop transitions from
occurring along these paths, then the ATPG engine will not be required to create a timing X
at the measure location of this path, which will improve coverage. If Encounter Test fails to
prevent transitions down this path, it will have to measure an X on every flop that this long
path feeds. Using constraintcheck=yes enables this checking and correction.

If you do not specify constraintcheck=yes, but are using one or more transition
constraints (as a result of prepare_timed_sequences or SDC), then it is highly
recommended to use an SDF-annotated simulation, such as ncsim, to verify your patterns;
otherwise the patterns might not work on the tester because of the violated constraints. The
alternative is to use the default option and let Encounter Test perform the verification and
correction (if required) in a single step.

In case of a performance issue, first examine the number of constraints that are being created
in the prepare_timed_sequences phase. If this number is excessive, check the clock frequencies, because it means that Encounter Test has identified too many paths that do not
appear to meet your timings. Check specifically the number of violations reported by
prepare_timed_sequences and compare this number to the number of flops in the given
domain. If it is more than 1% or 2%, then your timings might be too aggressive.

Timed Pattern Failure Analysis for Tester Miscompares


A Timed Test Pattern is used to detect transition faults (or dynamic faults) at a speed that is
as close as possible to the system clock speed. Encounter Test uses a Standard Delay File

(SDF) as the input for all the delays in a chip. See “Timing Concepts” on page 85 for additional
information.

The accuracy of the SDF drives the quality of the timings produced by Encounter Test. The
level of SDF accuracy directly relates to the ability of Encounter Test to create timed test
patterns that work on the tester after a single test generation iteration.

A Timed Test Pattern is comprised of three basic building blocks. The building blocks are:
1. The release pulse, used to launch captured data into the system logic.
2. Separation time, the calculated amount of time that is required between the release and
capture pulses.
3. The capture pulse, used to capture the design values at a measure point. See “General View
Circuit Values Information” in the Graphical User Interface Reference for related
information.

Figure 4-27 Building Blocks of a Timed Test Pattern

The release and capture pulses are comprised of the two following basic elements:
1. The time at which the pulses occur
2. The length of the pulses (pulse width)

The time and width of the pulses calculated by Encounter Test are based on the SDF.

Figure 4-28 shows the five elements that control the clocks at the tester:

Figure 4-28 Elements that Control Clocks at the Tester

1. Time of Release Pulse = T1
2. Width of Release Pulse = W1
3. Separation Time = S1
4. Time of Capture Pulse = T2 = T1+W1+S1
5. Width of Capture Pulse = W2

A WGL example for a generated Timed Test:


timeplate "parallel_cycle_lssd_dynamic_2010" period 80 ns
"CE0_TEST" := input[ 0ns:S ];
"D1" := input[ 0ns:S ];
"D2" := input[ 0ns:S ];
"D3" := input[ 0ns:S ];
"BCLK" := input[ 0ns:D, 8ns:S, 16ns:D ];
"SI1" := input[ 0ns:S ];
"SI2" := input[ 0ns:S ];
"CCLK" := input[ 0ns:D, 24ns:S, 32ns:D ];

Time of Release Pulse = BCLK = 8ns
Width of Release Pulse = 16ns - 8ns = 8ns
Separation Time = 24ns - 16ns = 8ns
Time of Capture Pulse = CCLK = 24ns
Width of Capture Pulse = 32ns - 24ns = 8ns

When Timed Test Patterns fail at the tester, it is key to understand why and where these
failures are occurring. The first step in determining why Timed Test Patterns fail is to slow the
patterns down to verify that they pass at slow speed, similar to static tests. Static Tests are identical to Timed Test Patterns except that a static pattern is not run at faster tester speeds.

The speed of a Timed Test Pattern can be slowed to the speed of a Static Test Pattern by
performing the following steps.
1. Widen the separation time between the clock pulses. If the separation time between the
release and capture clocks is 5ns, consider widening the separation to 200 ns.
2. Increase the widths of the release and capture pulses. Encounter Test creates the Timed
Test Patterns with the minimum pulse widths required to release or capture a value at a
latch. These calculated values (from the SDF) might not be large enough for the real
product to correctly perform. Widening the pulses by 20x or 30x from the original values
helps ensure that the latches have enough time to release or capture the values. The
basic concept is to increase all the times associated with the timeplates to a big value.

Viewing the modified times associated with the five elements that control the clocks at the tester, we would see:
Time of Release Pulse = T1
Width of Release Pulse = W1*30
Separation Time = S1*200
Time of Capture Pulse = T2 = T1+W1*30+S1*200
Width of Capture Pulse = W2*30

These values can be changed at the tester using software provided by the tester vendor, or can be modified by changing the timeplates inside the WGL files.

A modified WGL example for a generated Timed Test:


timeplate "parallel_cycle_lssd_dynamic_2010" period 2400 ns
"CE0_TEST" := input[ 0ns:S ];
"D1" := input[ 0ns:S ];
"D2" := input[ 0ns:S ];
"D3" := input[ 0ns:S ];
"BCLK" := input[ 0ns:D, 8ns:S, 248ns:D ];
"SI1" := input[ 0ns:S ];
"SI2" := input[ 0ns:S ];
"CCLK" := input[ 0ns:D, 1848ns:S, 2088ns:D ];

Time of Release Pulse = BCLK = 8ns
Width of Release Pulse = 8ns * 30 = 240ns
Separation Time = 8ns * 200 = 1600ns
Time of Capture Pulse = CCLK = 8ns + 240ns + 1600ns = 1848ns
Width of Capture Pulse = 8ns * 30 = 240ns

Applying this new slow Test Pattern at the tester verifies that the patterns work. Once the
patterns are successful at slow speeds, the timings can be moved closer to the originally
computed values. Reduce pulse widths to their point of failure before changing the separation
time. When the pulse widths are satisfactory, the separation between the clock pulses can be
lowered. Determine which latch is failing by lowering the values until some failures start to
occur.

After identifying some latches that miscompare at faster tester cycles but pass at slower test cycles, consider the following:
Are the timings associated with the logic feeding into the latch correct in the SDF? Should
this latch be ignored because it will never make its timings? Do all chips fail at the same
latch?
If the identified latch should be ignored or will not make the timings in the SDF, there are
two options:

a. Change the SDF so that it contains the correct values. Rerun Build Delay Model to
recreate or re-time the pattern set.

b. Use an ignore latch file to instruct Encounter Test to ignore this latch during
simulation. If the delay information appears correct, consider analyzing the failure
information using the Encounter Diagnostics tools to ascertain whether Diagnostics
can determine the point of failure. Refer to “Encounter Diagnostics” in the
Diagnostics User Guide for additional information.
Delay Test can also be used to print out the longest paths it finds for a certain latch.

The primary goal at the tester is to achieve passing timed test patterns. If the calculated
timings from Encounter Test do not work at the tester, verify that the patterns work at a slow speed and then increase the speed to the point of failure.

Performing and Reporting Small Delay Simulation


Encounter Test enables you to perform test generation that targets small delay defects. Based
on the Statistical Delay Quality Model (SDQM), Encounter Test uses the Statistical (or Small)
Delay Quality Level (SDQL) of a transition fault as the metric for measuring the effectiveness
of Small Delay ATPG. When accumulated for all faults tested by a set of test patterns, SDQL
measures the number of small delay defects that may escape the set of test patterns in parts
per million (ppm).

SDQL: An Overview
To compute SDQL, a delay defect distribution function F(s) is used to determine how likely
any one defect is to occur as a function of its size (as shown in Figure 4-29 on page 156).
Typically this is provided by the manufacturing facility. Encounter Test uses a default delay
defect distribution function if one is not provided.

Figure 4-29 Delay Defect Distribution Function

For any given fault, defects smaller than a certain size cannot be observed due to slack in the
fault's longest path. This is often referred to as Tmgn or the timing redundant area.

When a fault is detected, the longest sensitized path (LSP) is determined. LSP is the longest
path along which the fault can be observed by the test patterns. The detection time (or Tdet)
is computed as the difference between the test clock period (Ttc) and the LSP. Defects larger
than this are detected by the test.
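As an illustration, using numbers of the kind shown in the defect probability report later in this chapter: if the test clock period Ttc is 6100 ps and the LSP for a fault is 4758 ps, then Tdet = 6100 - 4758 = 1342 ps, and only defects that add more than 1342 ps of delay along that path are detected by the test.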

The area under the curve that remains undetected by the test reflects the probability that a defect will occur for the given fault and escape the test. This is referred to as the SDQL (or untested SDQL) for the fault.

The SDQL for the design is computed by accumulating the SDQL for all individual faults.

Figure 4-30 Computing Cumulative SDQL

SDQL Effectiveness
SDQL effectiveness is calculated as:
Tested SDQL
------------------------------ x 100%
Untested SDQL + Tested SDQL

Similar to test coverage, this measurement is independent of the number of faults (or in this
case ppm) in the design.
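For example, with hypothetical values, if the tested SDQL is 0.10 ppm and the untested SDQL is 0.02 ppm, the SDQL effectiveness is 0.10 / (0.02 + 0.10) x 100%, or approximately 83.3%.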

Small Delay ATPG Flow


Small delay test generation is performed using the create_logic_delay_tests
command:

Figure 4-31 Small Delay ATPG Flow

1. Run the read_sdf command:
   read_sdf sdfname=... delaymodel=...
2. Run the report_sdql command:
   report_sdql clockconstraints=... delaymodel=...
3. Run the prepare_timed_sequences command:
   prepare_timed_sequences clockconstraints=... delaymodel=...
4. Run the create_logic_delay_tests command:
   create_logic_delay_tests delaymodel=... smalldelay=yes tgsdql=... simsdql=...
5. Run the report_sdql command.

As small delay testing tends to run longer and generates increased pattern count as
compared to traditional delay testing, a keyword (tgsdql=n.nnn) is provided to select only
the most critical faults for small delay test generation. Any fault with a potential SDQL greater
than the tgsdql value is selected for small delay test generation. All other faults use
traditional delay test generation.

During ATPG, many patterns may detect a single transition fault. Typically, a transition fault is
marked tested as soon as a test pattern detects the fault. To identify the longest path that
detects a fault, full small delay simulation requires that a transition fault be simulated for every
test pattern, which is a time consuming process. To reduce the run time, a keyword
(simsdql=m.mmm) is provided to allow a fault to be marked tested as soon as it is detected

along a path long enough so that the untested SDQL is less than m.mmm. After the command
is executed, any detected faults not meeting the threshold are marked tested.

Prerequisite Tasks
The following prerequisite tasks are required for small delay simulation:
1. Small delay simulation uses delay data provided in the form of an SDF. Therefore, run
the read_sdf command before performing small delay simulation. Refer to “Performing
Build Delay Model (Read SDF)” on page 80 for more information.
2. Since delay data is provided only to the technology cell boundaries, SDQL for faults
internal to a technology cell cannot be computed accurately. Therefore, you should
perform small delay simulation using a cell fault model. If industrycompatible=yes
is specified for the build_model command, a cell fault model will be created by default.
Otherwise, create a cell fault model by using the build_faultmodel command with
cellfaults=yes. Refer to the Modeling User Guide for more information.
3. Create test patterns using create_logic_delay_tests or
prepare_timed_sequences with release-capture timing specified using a
clockconstraints file. For patterns without release-capture timing, a clockconstraints file
may be specified when performing small delay simulation (analyze_vectors). Refer to
“Create Logic Delay Tests” on page 114 for more information.
4. To determine the appropriate values for tgsdql and simsdql keywords, run
report_sdql to report a histogram of the potential SDQL without specifying the
experiment keyword. Specify the clockconstraints keyword instead. The
report_sdql command reports a histogram showing the distribution of the total
potential SDQL among the transition faults. Recommendations for tgsdql and simsdql
keywords are also reported, or you may select your own value.

Performing Small Delay Simulation


To perform small delay simulation run the create_logic_delay_tests command with
the following keywords:
■ smalldelay=yes - enables small delay simulation. When specified, the default value
of the writepatterns keyword is set to all. The keyword writepatterns=all
allows a pattern that may have detected a transition fault along a less than longest path
to be kept.
■ delaymodel=<dm> - the delay model used to compute path lengths

■ tgsdql=<n.nnn> - faults with potential SDQL greater than this are chosen for small
delay ATPG
■ simsdql=<m.mmm> - faults with an untested SDQL less than this are marked tested

In addition to simsdql, the following threshold values are available to determine when a
transition fault should be marked tested.
■ ndetect=<nn> - mark a fault as tested after it has been detected the specified number
of times (default is 10000)
■ percentpath=<nn> - mark a fault as tested if it is detected along a path that is the
specified percentage of its longest possible path (default is 100)
Note:
■ When simsdql, ndetect, or percentpath is specified, faults are no longer simulated
once they meet the detection threshold. As a result, the untested SDQL reported may be
larger than the true untested SDQL for the pattern set.
■ As a fault may never reach one of the specified thresholds, it may not be marked tested
until the end of the test generation command when all faults that were detected but did
not meet a threshold are marked tested. To prevent faults that did not meet a threshold
from being marked tested, specify marksdfaultstested=no.

The following is an example of small delay simulation:

Example: Small Delay Test Generation Using report_sdql and create_logic_delay_tests

report_sdql delaymodel=<dm> testmode=<tm> clockconstraints=<ccfile>
(determine the value for tgsdql and simsdql from the report_sdql log)
create_logic_delay_tests … delaymodel=<dm> smalldelay=yes simsdql=<value>
tgsdql=<value>…

In this example, faults with a potential SDQL greater than the tgsdql value are processed
using small delay ATPG. Other faults are processed with traditional ATPG. Faults are not
marked tested until they are detected along a path with an untested SDQL less than the
simsdql value.

Small Delay Simulation of Traditional ATPG Patterns


The SDQL effectiveness of test patterns created using traditional ATPG may be determined
using small delay simulation:

Figure 4-32 Flow for Small Delay Simulation of Traditional ATPG Patterns

1. Run the prepare_timed_sequences command:
   prepare_timed_sequences clockconstraints=... delaymodel=...
2. Run the create_logic_delay_tests command:
   create_logic_delay_tests delaymodel=... smalldelay=yes tgsdql=... simsdql=...
3. Run the analyze_vectors command (small delay simulation after ATPG):
   analyze_vectors smalldelay=yes delaymodel=...
4. Run the report_sdql command.

Example 1: Small Delay Simulation of traditional ATPG patterns


prepare_timed_sequences delaymodel=<dm> clockconstraints=<ccfile>…
create_logic_delay_tests … delaymodel=<dm> experiment=<exp>
analyze_vectors … delaymodel=<dm> smalldelay=yes inexperiment=<exp>
experiment=<exp_resim>
report_sdql testmode=<tm> experiment=<exp_resim>

In this example, patterns are created using traditional ATPG. Small delay simulation is then
performed for all faults during analyze_vectors. As simsdql is not specified, faults are
not marked tested unless they are detected along their longest possible path. Any faults that
were detected along a shorter path are marked tested after the simulation is complete.
Setting simsdql to 0 (or not specifying simsdql) provides the most accurate SDQL report,
but analyze_vectors may run longer. report_sdql is then run to report the
effectiveness of the resimulated pattern set.

Example 2: Small Delay Simulation of Traditional ATPG Patterns


create_logic_delay_tests … delaymodel=<dm> experiment=<exp> smalldelay=yes
clockconstraints=<ccfile>
report_sdql testmode=<tm> experiment=<exp>

In this example, small delay simulation is performed during traditional ATPG eliminating the
analyze_vectors step. Traditional ATPG is used for all faults because the tgsdql
keyword has not been specified. If prepare_timed_sequences is not run, a
clockconstraints file is required.
Note: When small delay simulation is being performed during traditional ATPG, fewer faults
will be marked tested during simulation due to the requirement to meet a simsdql or other
threshold. This will increase the run time for ATPG. Any faults detected but not meeting the
simulation threshold are marked tested after the test generation is complete.

Example 3: Small Delay Simulation during ATPG with a User-specified Probability Function

create_logic_delay_tests … delaymodel=<dm> smalldelay=yes probfile=<mydirectory>/SDQL.pl

In this example, instead of the default probability function, a user-specified probability function
is provided. Refer to report_sdql in the Command Line Reference for more information
on the probfile keyword.

Reporting SDQL Effectiveness

Use the report_sdql command to create reports about the SDQL for a pattern set. Refer
to report_sdql in the Command Line Reference for more information on the command.

The reports produced by report_sdql are described below:

Reporting SDQL for Experiment


report_sdql testmode=<tm> experiment=<exp>

The following report is produced.


INFO (TDL-062): SDQL calculations will use a maximum time of 6100 ps and a total
area of 0.0008. [end TDL_062]

INFO (TDL-050): Clock Domains Being Processed


-------------------------------------------------------------------------
Domain # Faults Release Clock Capture Clock Test Period
-------------------------------------------------------------------------
1 6316 CLK CLK 6100 ps
-------------------------------------------------------------------------
[end TDL_050]

The first section lists the clock domains being processed and the number of faults associated
with them. Faults on PIs, POs, RAMs, ROMs, latches and flops are not included in this count.
Faults that exist in multiple domains are included in the counts for each domain. The
maximum time for integration and area under the defect probability distribution are also listed.

The next section lists the SDQL by domain for the faults that were tested during small delay
simulation (as well as those that would have been tested if they had met the detection
threshold). For faults that are included in multiple domains, their SDQL is included in each
domain, but only once in the Chip SDQL.
INFO (TDL-052): SDQL By Clock Domain - Tested Faults
--------------------------------------------------------------------------------
Untestable Untested Tested Total Test
Domain # (ppm) (ppm) (ppm) (ppm) Effectiveness
--------------------------------------------------------------------------------
1 0.0105 0.0066 0.1065 0.1236 94.66%
Chip 0.0105 0.0066 0.1065 0.1236 94.66%
--------------------------------------------------------------------------------

Note that there is a column for Untestable SDQL. During small delay ATPG, some faults may
be determined to be Untestable along their longest possible paths (LPPs). Rather than
reducing the LPPs, an Untestable SDQL is computed. This allows a better comparison to
patterns created using traditional ATPG since the total SDQL should remain the same.

The Tested Fault SDQL section of the report is followed by a similar section that includes untested faults as well as tested faults.
INFO (TDL-052): SDQL By Clock Domain - All Faults
--------------------------------------------------------------------------------
Untestable Untested Tested Total Test
Domain # (ppm) (ppm) (ppm) (ppm) Effectiveness
--------------------------------------------------------------------------------
1 0.0105 0.0125 0.1065 0.1294 90.35%
Chip 0.0105 0.0125 0.1065 0.1294 90.35%
--------------------------------------------------------------------------------
Dropped Faults:
TMGN <= 0 : 0
Redundant : 0
Untestable : 1371
Clock Line : 416
Tested Untimed:725
Total :2512
[end TDL_052]

A table listing untested faults that were not included in the SDQL calculations is also included
to help explain any differences in total SDQL from one experiment to the next.

Reporting SDQL for Individual Faults


report_sdql testmode=<tm> experiment=<exp> defectprobability=0.002

A sample report is given below:

INFO (TDL-051): Defect Probability Report


------------------------------------------------------------------------------
Defect Fault Defect System Longest Poss Timing Test Longest
Size Probability Period Path Redundant Period Sens Path
(ps) (ps) (ps) (ps) (ps)
------------------------------------------------------------------------------
8371 0.000303 6100 5633 467 6100 1342 4758
8372 0.000210 6100 5573 527 6100 4727 1373
9147 0.000281 6100 5765 335 6100 5108 992
9148 0.000272 6100 5693 407 6100 4825 1275
9622 0.000211 6100 5444 656 6100 1029 5071
9890 0.000271 6100 5573 527 6100 992 5108
10148 0.000343 6100 5693 407 6100 1030 5070
11382 0.000349 6100 5804 296 6100 4888 1212
------------------------------------------------------------------------------

This report lists those faults whose untested SDQL (Defect Probability) remains above 0.002.

Reporting SDQL Thresholds


report_sdql testmode=<tm> delaymodel=<dm> clockconstraints=<ccfile>
reporthistogram=yes

The report shows how the potential SDQL (all of the area under the curve except for the timing
redundant area) is distributed among the faults. This is useful when determining the simsdql
threshold to use when running small delay simulation.

INFO (TDL-063): Histogram of Per Fault Untested SDQL (Potential)

Faults w/SDQL  # Faults  Cum. SDQL  SDQL
More Than      (Cum.)    (%)        Histogram
-------------  --------  ---------  -----------
0.000420 68 21.40 ********************** 21.40%
0.000396 76 23.82 *** 2.42%
0.000358 146 42.86 ******************** 19.04%
0.000302 163 46.98 ***** 4.12%
0.000265 165 47.38 * 0.39%
0.000221 183 50.62 **** 3.25%
0.000176 199 52.78 *** 2.16%
0.000132 222 55.23 *** 2.45%
0.000088 323 63.87 ********* 8.64%
0.000044 457 70.60 ******* 6.73%
0.000000 6316 100.00

Recommended tgsdql: 0.000044


Recommended simsdql: 0.000044
[end TDL_063]

This report shows that 68 faults have a potential SDQL of 0.000420 or greater and comprise
21.40% of the design's total potential SDQL. These are the most critical faults to detect along
long paths. In fact, only 457 faults account for 70.6% of the design's total SDQL.

The remaining 5859 faults have a potential SDQL less than 0.000044 and comprise only
29.4% of the total SDQL. Setting simsdql=0.000044 during small delay simulation will get these faults marked off on the first detect, saving simulation run time.

Specifying the Defect Probability Distribution


report_sdql … probfile=<directory>/SDQL.pl …

The default defect probability distribution function may be overridden during report_sdql.
Ideally this function would be provided by the manufacturing facility. The function must be
defined in a perl script and stored in a file with the name SDQL.pl. It may be located in any
directory. The sample function below is provided in the $Install_Dir/etc/contrib directory:
sub probfunc {
my $s = @_[0]; # s is the input value.
my $Fs; # Fs is the output value.
# Compute the probability distribution function
$Fs = 1.58e-3 * exp(-(2.1*($s))) + 4.94e-6;
return $Fs;
}

Creating a Plotfile for graph_sdql


report_sdql … experiment=<exp> plotfile=<filename> plotpoints=<nn> …

The report_sdql command will create a file recording the defect size versus the
percentage of defects of that size (tested, untested and total). This file is used as input to the
graph_sdql command which produces a graphical representation of how defects are
distributed among various defect sizes.

Graphing SDQL

Use the graph_sdql command to create a graphical representation showing how potential
defects are distributed across the various defect sizes. A plot of the total defects, tested
defects and untested defects is drawn. One graph is displayed for each clock domain.

Jpeg files are also created for each of the graphs. They are stored in the directory specified in the jpegdir keyword (for example, $WORKDIR/testresults). If jpegdir is not specified, the files are written to the current directory.

The graph_sdql command takes the plotfile keyword as input, which is the name of a
fully qualified input file containing the numeric data used to plot the test effectiveness graphs.
The plotfile is produced as an output of the report_sdql command.

Note: Specifying the plotfile keyword for report_sdql is a prerequisite for running the
graph_sdql command.

The graphs are stored in jpeg files named as sdql_domain_n.jpeg, where n identifies the
clock domain number listed in the report_sdql output.

Refer to graph_sdql in the Command Line Reference for more information on the
command.

A sample usage of the graph_sdql command is shown below:


graph_sdql plotfile=<infilename> jpegdir=$WORKDIR/testresults

A sample SDQL graph is shown in the following figure:

Figure 4-33 SDQL Effectiveness Graph

Committing Tests
By default, Encounter Test stores the results of all test generation runs as uncommitted tests in a test
mode. Commit Tests moves the uncommitted test results into the committed vectors test data
for a test mode. Refer to “Performing on Uncommitted Tests and Committing Test Data” on
page 234 for more information on the test generation processing methodology.

Writing Test Vectors


Encounter Test writes the following vector formats to meet the manufacturing interface
requirements of IC manufacturers:

■ Standard Test Interface Language (STIL) - an ASCII format standardized by the IEEE.
■ Waveform Generation Language (WGL) - an ASCII format from Fluence Technology, Inc.
■ Verilog - an ASCII format from Cadence Design Systems, Inc.

Refer to “Writing and Reporting Test Data” on page 169 for more information.

5
Writing and Reporting Test Data

This chapter provides information on exporting test data and sequences from Encounter Test.

The chapter covers the following topics:


■ “Writing Test Vectors” on page 169
■ “Create Vector Correspondence Data” on page 188
■ “Reporting Encounter Test Vector Data” on page 190
■ “Reporting Sequence Definition Data” on page 192
■ “Converting Test Data for Core Tests (Test Data Migration)” on page 193
■ “Reporting a Structure-Neutral TBDbin” on page 194

Writing Test Vectors


Encounter Test writes the following vector formats to meet the manufacturing interface needs
of IC manufacturers:
■ Standard Test Interface Language (STIL): an ASCII format standardized by the IEEE.
■ Waveform Generation Language (WGL): an ASCII format from Fluence Technology, Inc.
■ Verilog: an ASCII format from Cadence Design Systems, Inc.
■ Tester Description Language (TDL)

Refer to Test Pattern Data Reference for detailed descriptions of these formats.

To write vectors using the graphical interface, refer to “Write Vectors” in the Graphical User
Interface Reference.

To write vectors using commands, refer to “write_vectors” in the Command Line Reference.

The syntax for the write_vectors command is given below:

write_vectors workdir=<directory> testmode=<modename> inexperiment=<name>
language=<type>

where:
■ workdir = name of the working directory
■ testmode = name of the test mode containing the experiment to export
■ inexperiment = name of the experiment from which to export data
■ language=stil|verilog|wgl|tdl = type of patterns to export

The following table lists the commonly used options for write_vectors for all languages:

Language        Parameter                     Description
---------------------------------------------------------------------------
STIL            signalsfile=yes|no            Create a file containing common
                                              elements (signals). Default is yes.
                singletimeplate=no|yes        Create a single timeplate per file.
                                              Default is conditional.
WGL             signalsfile=yes|no            Create a file containing common
                                              elements (signals). Default is yes.
                singletimeplate=no|yes        Create a single timeplate per file.
                                              Default is conditional.
Verilog         scanformat=serial|parallel    Select the format of the scan in
                                              the output vectors.
                singletimeplate=no|yes        Create a single timeplate per file.
                                              Default is conditional.
                includenetnames=no|yes        Include net name in scan out
                                              miscompare messages. Default is no.
TDL             scanformat=serial|parallel    Select the format of the scan in
                                              the output vectors.
                singletimeplate=no|yes        Create a single timeplate per file.
                                              Default is conditional.
                configfile=<filename>         Specify the file that contains the
                                              TDL design configuration information.
All languages   testperiod=<decimal           Global test timing - Set the test
                0.000000 or greater>          period.
                scanperiod=<decimal           Global scan timing - Set the scan
                0.000000 or greater>          period.
                scanoverlap=yes|no            Overlap the scan out with the scan
                                              in. Default is yes.
                EXPORTDIR=<outdir>            Directory to include export data
                                              (default is the part directory).
                outputfilename=<string>       Change the default output file name
                                              to be based on this name.

Write Vectors Restrictions


Restrictions differ depending on the selected language format. Refer to subsequent STIL,
WGL, Verilog, and TDL sections for details.

Write Vectors Input Files


Write Vectors uses an Encounter Test Experiment or Committed Test Data as input. The
command also uses an optional keyword removepinsfile, which allows removal of the
specified pins from the output test vector files (by default, all pins are written to the output test

vectors). Refer to removepinsfile in the Command Line Reference for more information.

When writing TDL output, you need to specify the configfile keyword to define the TDL
design configuration information. Refer to subsequent TDL section for more information.

Write Vectors Output Files


The output differs depending on the selected language format. Refer to subsequent STIL,
WGL, Verilog, and TDL sections for details.

To limit the number of test vector output files, specify combinesections=yes, which
combines multiple test sections based on test types and creates a maximum of four files, one
each for storing static scan tests, static logic tests, dynamic scan tests, and dynamic logic
tests.

Refer to write_vectors in the Command Line Reference for more information.

In this case, by default, the files are named as language.testmode.testtype, which can be overridden using the outputfilename keyword. If the outputfilename keyword
is specified, the output file is named as outfilename.testtype, where testtype is
static_scan, dynamic_scan, static_logic, or dynamic_logic to indicate the type of test vectors
contained within the output test vector file.
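For illustration (the test mode, experiment, and output prefix are placeholders), the following invocation combines the test sections and bases the output file names on a user-supplied prefix, producing at most four files named according to the outfilename.testtype convention described above (for example, chip_vectors.static_scan):

write_vectors … testmode=<tm> inexperiment=<exp> language=wgl combinesections=yes outputfilename=chip_vectors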
Note: The write_vectors command does not write the PPIs used for test generation in
the output vector.

Default Timings for Clocks


For WGL, STIL, and Verilog vectors, default placement of test clocks within the test cycle
(specified by the value for testperiod) is based on the following algorithm:
■ clockoffset is initialized to: (the higher value of either testpioffset or
testbidioffset) + testpulsewidth
■ If, and only if, there are A_SHIFT_CLOCKs and/or pure C-clocks (not ES or BS):
❑ They are placed at clockoffset for a duration of the value specified for
testpulsewidth
❑ clockoffset is incremented by 2 times the testpulsewidth value
■ If, and only if, there are E-clocks:
❑ They are placed at clockoffset for a duration of the testpulsewidth value

❑ clockoffset is incremented by 2 times the testpulsewidth value


■ If, and only if, there are B-SHIFT_CLOCKs and/or P-clocks:
❑ They are placed at clockoffset for a duration of the testpulsewidth value
❑ clockoffset is incremented by 2 times the testpulsewidth value
■ Note that clockoffset is incremented only if a particular clock type exists. This means
that clock placement is dependent on the type of clocks used in the design.
■ Final clockoffset time must be <= testperiod to enable all clocks to fit within the
specified test cycle.
■ Default placement of scan clocks within the scan cycle is identical except that
scanperiod, scanpioffset, scanbidioffset and scanpulsewidth values are
used and only A, E, and B clocks are placed (no C or P clocks).
■ Any clock may be explicitly placed by using testpioffsetlist or scanpioffsetlist.
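As a hypothetical illustration of this placement algorithm, assume testpioffset=0, testbidioffset=0, and testpulsewidth=8, with all three clock groups present: clockoffset starts at 0 + 8 = 8; A_SHIFT_CLOCKs and pure C-clocks pulse from 8 to 16 and clockoffset becomes 24; E-clocks pulse from 24 to 32 and clockoffset becomes 40; B-SHIFT_CLOCKs and P-clocks pulse from 40 to 48, leaving a final clockoffset of 56, which fits within a testperiod of 80.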

The default timing for scanpioffset, scanbidioffset, and scanstrobeoffset varies depending on the order of the Set_Scan_Data and Measure_Scan_Data events in
the scan sequences. The following defaults apply for designs without compression:
■ If the Measure_Scan_Data event precedes the Set_Scan_Data event, the defaults
are the following:
❑ scanstrobeoffset=0
❑ scanpioffset=scanbidioffset =0+2 * scanpulsewidth
■ If the Set_Scan_Data event precedes the Measure_Scan_Data event, the defaults are
the following:
❑ scanpioffset=scanbidioffset =0
❑ scanstrobeoffset=0+scanpulsewidth

If Set_Scan_Data and Measure_Scan_Data events do not exist in all test modes, the
default timing is the following:
■ scanpioffset=scanbidioffset=scanstrobeoffset=0
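For example, with scanpulsewidth=8: if the Measure_Scan_Data event precedes the Set_Scan_Data event, the defaults become scanstrobeoffset=0 and scanpioffset=scanbidioffset=16; if the Set_Scan_Data event precedes the Measure_Scan_Data event, they become scanpioffset=scanbidioffset=0 and scanstrobeoffset=8.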

Specify testpioffset=CLK_1=120 testpulsewidth=80 to generate the timing shown in the following figure:

Specify testperiod=80 pioffset=0 bidioffset=0 strobeoffset=72 pulsewidth=8 pioffsetlist=EC=16 to generate the timings shown in the following figure:

Adding Extra Timing Set for Initialization Sequences


By default, the timing of the modeinit sequence matches the timing of the test sequences.
However, you can change the modeinit timing by using any of the following keywords and
specifying a value different than the current timing value for the test sequences:
■ initbidioffset
■ initperiod
■ initpioffset
■ initpioffsetlist

■ initpulsewidth
■ initstrobeoffset
■ initstrobetype
■ inittimeunits

Refer to write_vectors in the Command Line Reference for more information.
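For illustration (the values are placeholders, not recommendations), an invocation that applies a longer period and wider pulses to the modeinit sequence than to the test sequences might look like:

write_vectors … language=stil testperiod=80 testpulsewidth=8 initperiod=400 initpulsewidth=40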

write_vectors uses the modeinit timings only if they are different from the test sequence
timings. In case of different modeinit timings:
■ The modeinit timings replace the test timings for all the processed modeinit sequences.
■ The modeinit timings are used for all events encountered within the init Test_Sequence
except for Scan_Load events. write_vectors uses scan timings to process any
Scan_Load event it encounters.
■ Using the modeinit timings, write_vectors writes out accumulated values
immediately after processing the modeinit sequence, prior to processing the first test. In
other words, write_vectors does not compress patterns when transitioning between
modeinit timings and test timings.
■ write_vectors prints the modeinit timings in the header area for each output vector
file.

Processing Dynamic Sequences


While processing dynamic sequences, write_vectors only creates dynamic timeplates
that are unique. In other words, if a cycle of a dynamic sequence matches an already existing
dynamic sequence, then the first dynamic sequence is used. For example, if the release cycle of a dynamic sequence matches the capture cycle of any dynamic sequence, then the first encountered cycle of the dynamic sequence is used and referenced throughout the output vector files. In addition, if the keyword limittimeplates is set to yes and the release
timeplate matches the capture timeplate then write_vectors defines only the capture
timeplate.

Specify compressdynamictimeplates=no to disable the automatic compression of dynamic timeplates.

Refer to write_vectors in the Command Line Reference for more information.

Changing Default Pin Order


As an optional input, you can provide a file that includes the order in which to sequence pin
data while creating test data. Use the write_vectors keyword pinorder=<pinorder file
name> to specify a pin order file.

Refer to “write_vectors” in the Command Line Reference for more information.


Note: If you do not specify a pin order file, the command writes the pin data in the order they
were configured in the Encounter Test model (PIentry and POentry order).

Define the order of pins in the file by including the pin names separated by one or more blank
spaces. Block and line comments are supported in the file.
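A minimal pin order file might simply list the pin names in the desired order (the pin names below are hypothetical):

CLK SCAN_EN RESET
D1 D2 D3
SO1 SO2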

Tip
If the pin order specified for an invocation of write_vectors differs from the
previously used order, you must recreate the test vectors for the previously exported
Test_Sections.

Limiting Dynamic Timeplates for Vectors


Specify limittimeplates=yes to limit the number of timeplates for the test vectors. When
you set limittimeplates=yes, Encounter Test defaults the singletimeplate keyword
to yes, which limits the test and scan timeplates to 1. If dynamic timings exist, then two
additional dynamic timeplates are created, the Release timeplate and the Capture timeplate.

Refer to write_vectors in the Command Line Reference for more information.
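For illustration (keyword values are placeholders), such an invocation might be:

write_vectors … testmode=<tm> inexperiment=<exp> language=verilog limittimeplates=yes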

Specifying the keywords as mentioned above results in the following dynamic timeplate
configuration for write_vectors:
■ The dynamic clock events use the timeplates in the order of Release and Capture. For
example, the first dynamic clock event goes in the Release timeplate, the second
dynamic clock event goes in the Capture timeplate, the third event in the Release
timeplate, and so on.
■ The dynamic Stim_PI events are included in the timeplate associated with the next
dynamic clock event.
■ All Measure PO events are in the Capture timeplate.

Creating Cycle Map for Output Vectors


Use the cyclemap keyword to generate cycle information for the output vectors.
The cyclemap keyword can have any of the following values:
■ yes - creates the cycle map file for all scan and PO measures. All output vectors will be
created.
■ only - creates the cycle map only for all scan measures. No output vectors will be
created.
■ measures - creates the cycle map file for all scan and PO measures. No output vectors
will be created.
■ no (default)- does not create a cycle map file

The generated cycle map file is named as cycleMap.<testmode>|.<inexperiment> and saved in the workdir/testresults/EXPORTDIR directory.

Refer to write_vectors in the Command Line Reference for more information.
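For illustration (keyword values are placeholders), an invocation of the following kind produces the output vectors along with a cycle map like the sample shown below:

write_vectors … testmode=<tm> inexperiment=<exp> language=stil cyclemap=yes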


--------------------------------------------------------------------------------
Total Relative Relative Test Pattern Event Cycle Scan Relative
Cycle Cycle Simulation Sequence Odometer Type Offset Length Scan
Count Count Time(ns) Number Number
// The vector file output name: STIL.FULLSCAN.scan.ex1.ts1.stil

2 2 ET10.1 1 1.1.1.2.1.1 Scan_Load 0 85 ET10.1


87 87 ET10.1 1 1.1.1.2.1.1 end_of_scan 84 0 ET10.1
97 97 ET10.1 1 1.1.1.2.1.9 Scan_Unload 0 85 ET10.1
182 182 ET10.1 1 1.1.1.2.1.9 end_of_scan 84 0 ET10.1
...................
...................
...................

The cycle map file has the following columns:


■ Total Cycle Count - write_vectors starts from the top of TBDbin with value zero
and continues to count the cycles across all output files.
■ Relative Cycle Count - Beginning from the top of each output vector file,
write_vectors starts the cycle count at zero on every new output file.
■ Test Sequence Number - The test sequence number in the TBDbin.
■ Pattern Odometer - The odometer value of the reported test sequence in the TBDbin.
■ Event Type - The event type associated with the reported test sequence in the TBDbin.

■ Cycle Offset - This is zero at the start of the scan event; at the end of the scan event, the value is scan length - 1. For all measure_PO events, this value is -1.
■ Scan Length - The number of clock shifts during a scan operation.

Writing Standard Test Interface Language (STIL)


Write Vectors accepts either a committed vectors or an uncommitted vectors file for a
specified test mode and translates its contents into files that represent the TBDbin test data
in STIL format.
■ To write STIL vectors via the GUI, select STIL as the Language option on the Write
Vectors form
■ To write STIL vectors using commands, specify language=stil for write_vectors.

STIL Restrictions

The following restrictions apply:


■ Test data created for an assumed scan chain test mode cannot be written.
■ Overlapping is not allowed when using diagnostic measure events.

STIL Output Files

Write Vectors creates the following output files in STIL format:


■ An uncommitted input TBDbin.testmode.inexperiment file results in one or
more files named
STIL.testmode.inexperiment.testsectiontype.ex#.ts#.stil in the
specified directory.
■ A committed input TBDbin.testmode vectors file results in one or more files named
STIL.testmode.inexperiment.testsectiontype.ex#.ts#.stil in the
specified directory.
■ Input of either uncommitted vectors or committed vectors results in a file named
STIL.testmode.inexperiment.signals.stil in the specified directory if the
option to generate this file is selected.
The testmode and inexperiment fields contain the testmode and uncommitted
test names from the source TBDbin file.

The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the STIL file, for example, logic.

The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.

Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.

The optional signals file contains declarations common to the STIL vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.

The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.stil, outputfilename_value.2.stil, and so on.
The signals file is named outputfilename_value.signal.stil.
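For example, a command of the following form (output file name illustrative) produces files
named mychip.1.stil, mychip.2.stil, and so on, plus mychip.signal.stil if the signals file is
requested:

write_vectors workdir=<workdir> testmode=FULLSCAN language=stil outputfilename=mychip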

Use the latchnametype keyword to specify whether to report the output vectors at the
primitive level or at the cell level.
Note: If a Verilog model does not have the cells and macros correctly marked and you specify
latchnametype=cell, then the instance names might match and be invalid. If the model
does not have a cell defined, then write_vectors uses primitive level for the instance
name.
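For example, the following command (illustrative names) writes STIL vectors with instance
names reported at the cell level:

write_vectors workdir=<workdir> testmode=FULLSCAN inexperiment=ex1 language=stil latchnametype=cell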

Writing Waveform Generation Language (WGL)


Write Vectors accepts either a committed or an uncommitted vectors file for a specified test
mode and translates its contents into files that represent the TBDbin test data in WGL
format.
■ To write WGL vectors via the GUI, select WGL as the Language option on the Write
Vectors form
■ To write WGL vectors using commands, specify language=wgl for write_vectors.

WGL Restrictions

The following restrictions apply:


■ Test data created for an assumed scan chain test mode cannot be written.
■ Overlapping is not allowed when using diagnostic measure events.

WGL Output Files

Write Vectors creates the following output files in WGL format:


■ An uncommitted input TBDbin.testmode.inexperiment file results in one or
more files named
WGL.testmode.inexperiment.testsectiontype.ex#.ts#.wgl in the
specified directory.
■ A committed input TBDbin.testmode vectors file results in one or more files named
WGL.testmode.inexperiment.testsectiontype.ex#.ts#.wgl in the
specified directory.
■ Input of either uncommitted vectors or committed vectors results in a file named
WGL.testmode.inexperiment.signals.wgl in the specified directory if the
option to generate this file is selected.
The testmode and inexperiment fields contain the testmode and uncommitted
test names from the source TBDbin file.

The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the WGL file, for example, logic.

The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.

Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.

The optional signals file contains declarations common to the WGL vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.

The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named

outputfilename_value.1.wgl, outputfilename_value.2.wgl, and so on.


The signals file is named outputfilename_value.signal.wgl.

Writing Tester Description Language (TDL)


Write Vectors accepts either a committed or an uncommitted vectors file for a specified test
mode and translates its contents into files that represent the TBDbin test data in TDL
format.

To write TDL vectors using commands, specify language=tdl for write_vectors.

TDL Restrictions

The following restrictions apply:


■ Test data created for an assumed scan chain test mode cannot be written.
■ Overlapping is not allowed when using diagnostic measure events.

TDL Output Files

Write Vectors creates the following output files in TDL format:


■ Input of either uncommitted vectors or committed vectors results in a file named
<pattern_set_name>_#.tdl in the specified directory.
■ Input of either uncommitted vectors or committed vectors results in a signals file named
<pattern_set_name>.signals.tdl in the specified directory if the option to
generate this file is selected.
The optional signals file contains declarations common to the TDL vector files, for
example, I/O signal names, in order to eliminate redundant definition of these elements.
pattern_set_name is the pattern set name specified in the input config file and # is
the test section number within the test.

The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.tdl, outputfilename_value.2.tdl, and so on.
The signals file is named outputfilename_value.signal.tdl.

Writing Verilog
Write Vectors accepts either a committed or an uncommitted vectors file for a specified test
mode and translates its contents into files that represent the TBDbin test data in Verilog
format.
■ To write Verilog vectors via the GUI, select Verilog as the Language option on the Write
Vectors form
■ To write Verilog vectors using commands, specify language=verilog for
write_vectors.

Parallel vs Serial Timing

Parallel Verilog patterns always measure the nets at time frame 0, as opposed to serial
patterns, which measure the nets at other possible times. If you override the Test or Scan
Strobe Offsets, the measures must occur before any stimulus to get the correct parallel
simulation results.

Miscompares in parallel mode are reported on internal nets at the beginning of the scan
cycle and are not flagged on a specific scan out pin.
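For example, a command of the following form (illustrative names) writes Verilog vectors with
a parallel scan format:

write_vectors workdir=<workdir> testmode=FULLSCAN inexperiment=ex1 language=verilog scanformat=parallel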

Verilog Restrictions

The following restrictions apply:


■ Test data created for an assumed scan chain test mode cannot be written.
■ Overlapping is not allowed when using diagnostic measure events.

Verilog Output Files

Write Vectors creates the following output files in Verilog format:


■ An uncommitted input TBDbin.testmode.inexperiment file results in one or
more files named
VER.testmode.inexperiment.testsectiontype.ex#.ts#.verilog in
the specified directory.
■ A committed input TBDbin.testmode vectors file results in one or more files named
VER.testmode.inexperiment.testsectiontype.ex#.ts#.verilog in the
specified directory.

■ Input of either uncommitted vectors or committed vectors results in a file named


VER.testmode.inexperiment.signals.verilog in the specified directory if the
option to generate this file is selected.
The testmode and inexperiment fields contain the testmode and uncommitted
test names from the source TBDbin file.

The testsectiontype field contains the test section type value from the TBDbin and is
used to identify the type of test data contained within the Verilog file, for example, logic.

The ex#, ts#, and optional tl# fields differentiate multiple files generated from a single
TBDbin. The TBDbin hierarchical element id is substituted for #, i.e., ex# receives the
uncommitted test number within the TBDbin, ts# receives the test section number within the
uncommitted test, and tl# receives the tester loop number within the test section.

Committed and uncommitted TBDbins contain test coverage and tester cycle count
information for each test sequence.

The optional signals file contains declarations common to the Verilog vector files, for example,
I/O signal names, in order to eliminate redundant definition of these elements.

The preceding files represent default names if keyword outputfilename (or Write Vectors
form field Set the output file names) is not specified. If a value for outputfilename is
specified, multiple output files are differentiated by the presence of a numeric suffix appended
to the file name. For example, multiple committed vectors files are named
outputfilename_value.1.verilog, outputfilename_value.2.verilog,
and so on. The mainsim file is named outputfilename_value.mainsim.v.

NC-Sim Considerations

When using Write Vectors for subsequent simulation by NC-Sim, the following keywords may
be specified for NC-Sim:

+DEBUG Specify this keyword to trace the process flow.


+HEARTBEAT Specify this keyword to produce progress status messages.
+FAILSET Specify this keyword to produce a Chip-Pad-Pattern record for
each miscomparing vector. Refer to “Reading Failures” in the
Diagnostics User Guide.
+START_RANGE Specify an odometer value to indicate the beginning of a
pattern range to be simulated.

+END_RANGE Specify an odometer value to indicate the end of a pattern
range to be simulated.
+TESTFILEnum=filename
Specify an input data file to NC-Sim. Specify multiple files by
incrementing the num string. For example, specify
+TESTFILE1=data1 +TESTFILE2=data2.
+DEFINE+simvision Specify this optional compiler directive keyword to compile code
proprietary to Cadence Design Systems, Inc. Do not include
this for non-Cadence simulators. This command enables the
automatic dump of waveform data but does not invoke a
waveform dump.
+simvision Specify this optional keyword to invoke the SimVision dump of
waveform data during simulation.
+vcd Specify this optional keyword to invoke the SimVision dump of
VCD data during simulation.

The following is an example syntax:


ncxlmode \
+DEFINE+simvision \
+simvision \
+vcd \
+HEARTBEAT \
+FAILSET \
+START_RANGE=1.1.1.1.1.1. \
+END_RANGE=1.1.14.1.8 \
VER.${TESTMODE}*mainsim.v \
+TESTFILE1=VER.testmode.data.testsection.ex.ts1 \
+TESTFILE2=VER.testmode.data.testsection.ex.ts2

Another NC-Sim option is to set SEQ_UDP_DELAY+<delay in ps> during the elaboration stage
to allow Encounter Test to add some delay to the sequential elements. This option can help
fix many miscompares when running in a zero delay simulation mode.

An example to add 50ps delay is given below:


If using NC Verilog command line:
ncverilog \
+ncseq_udp_delay+50ps \

Specify the following if using the NC Elaboration command line:


ncelab \
-seq_udp_delay 50ps \

Using the TB_VERILOG_SCAN_PIN Attribute

The TB_VERILOG_SCAN_PIN model attribute is used to control the selection of scan, stim,
and measure points in exported Verilog test vectors when specifying the write_vectors
scanformat=parallel option, or via the graphical user interface, selecting a Scan
Format of parallel.

The TB_VERILOG_SCAN_PIN attribute may be placed on any hierarchical pin on any cell.
However, the pin must be on the scan path (or intended to be on the scan path if used within
a cell definition).

When encountered on input pins, the parent net associated with the attributed pin is selected
as a stimulus net. When encountered on output pins, the associated parent net is selected as
a measure net. This attribute may be selectively used, i.e., default net selection takes place
if no attribute is encountered for a specific bit position.

The following example depicts the placement of the TB_VERILOG_SCAN_PIN attribute on an
instance within a cell:
DESFQ
\$blk_DESFQ$14
(.Q ( \$net__G001 ) //! TB_VERILOG_SCAN_PIN="YES"
,.C ( \$net__H05 )
,.D ( \$net__H01 ) //! TB_VERILOG_SCAN_PIN="YES"
,.R ( \$net_NET$1 )
,.S ( \$net_NET$2 )
);

Understanding the Test Sequence Coverage Summary


The following is a sample Test Sequence Coverage Summary Report produced by
write_vectors:
--------- -------- -------- -------- -------- -------- ---------- ------
Test Static Static Dynamic Dynamic Sequence Overlapped Total
Sequence Total Delta Total Delta Cycle Cycle Cycle
Coverage Coverage Coverage Coverage Count Count Count
--------- -------- -------- -------- -------- -------- ---------- ------
1.1.1.1.1 0.00 0.00 0.00 0.00 1 0 1
1.1.1.2.1 38.31 38.31 24.14 24.14 17 7 18
1.1.1.2.2 54.55 16.24 32.18 8.04 10 7 28
1.1.1.2.3 64.94 10.39 35.63 3.45 11 0 39

The preceding report is produced for STIL, Verilog, and WGL vectors.

Figure 5-1 on page 187 shows a graphical representation of the scan cycles for the reported
vectors:

Figure 5-1 Scan Cycle Overlap in Test Sequence Coverage Summary Report

The output log includes the following columns:


■ Test Sequence - shows the sequence for which the test coverage and tester cycle
information is reported
■ Static Total Coverage - shows the static fault coverage for the test sequence
■ Static Delta Coverage - shows the difference in static fault coverage between the
current and the preceding vector. For example, the static fault coverage for vector
1.1.1.2.2 is 54.55 while the coverage of the preceding vector 1.1.1.2.1 is 38.31.
Therefore, the difference in the coverage by the two vectors is 16.24, which is reported
in the Static Delta Coverage column.
■ Dynamic Total Coverage - shows the dynamic fault coverage for the test sequence
■ Dynamic Delta Coverage - shows the difference in dynamic fault coverage between
the current and the preceding vector. For example, the dynamic fault coverage of vector
1.1.1.2.2 is 32.18 and the coverage of the preceding vector 1.1.1.2.1 is 24.14. Therefore,
the difference in the coverage by the two vectors is 8.04, which is reported in the
Dynamic Delta Coverage column.

■ Sequence Cycle Count - shows the clock cycles taken by the test sequence
■ Overlapped Cycle Count - shows the cycles that are overlapping with the next
reported vector. For example, for vector 1.1.1.2.1, out of 17 cycles, 7 cycles overlap with
the next vector 1.1.1.2.2.
While calculating the Sequence Cycle Count for a vector, the cycles overlapping with
the preceding vector are ignored.
For example, as shown in the figure, out of the 17 cycles taken by vector 1.1.1.2.2, 7
cycles overlap with the preceding vector 1.1.1.2.1. Therefore, though vector 1.1.1.2.2
takes a total of 17 cycles, the 7 cycles that overlap with the preceding vector 1.1.1.2.1 are
ignored and not reported in the Sequence Cycle Count column for the vector.
■ Total Cycle Count - shows the cumulative sequence cycle count of the current and
all preceding vectors. For example, the total cycle count reported for vector 1.1.1.2.2 is
the combination of the cycle count for this vector (10) and all the vectors reported before
it, i.e., 1.1.1.2.1 (17) and 1.1.1.1.1 (1). Therefore, the reported cycle count for the vector
is 28.

Create Vector Correspondence Data


When the input TBDpatt file contains vector format data, there must exist a specification of
the correspondence between vector bit positions and design pins and blocks. This
correspondence data is contained in a file (TBDvect) that is separate from the TBDpatt data.
The data is maintained separately since it is concerned only with formatting, and not data
content. The same vector format file is used both for importing TBDpatt data and for printing
a TBDpatt file.

There is a single vector correspondence file per test mode. The intent is that there be only
one set of vector correspondence data ever used for a given mode. Making changes to the
vector correspondence file should be undertaken only with utmost caution. If you want to
import a TBDpatt file that was produced by Encounter Test, the vector correspondence must
not be altered during the interim.

You can change the ordering within vectors by changing the positions of pins or latches in the
vector correspondence file. This may be useful if you are going to use TBDpatt as an
intermediate interface to some other format which requires that the vectors be defined in a
specific way. In such a case, you might be able to modify TBDvect in such a way as to avoid
re-mapping the vectors in another conversion program.

Test data written in vector format contains a Vector_Correspondence section consisting of
commented lines. Vector correspondence data is stored in the TBDvect file.
correspondence lists define each position of the vectors that are used in stim events, measure

events, and weight events. For example, the fifth position in a Stim_PI vector is the value to
be placed on the fifth input in the primary input correspondence list.

The vector correspondence data includes the following lists:


■ Primary Input Correspondence List
■ Primary Output Correspondence List
■ Stim Latch Correspondence List
■ Measure Latch Correspondence List
■ MISR Latch Correspondence List
■ PRPG Latch Correspondence List

A scan design in Encounter Test, even if composed of flip-flops, is always modeled using level-
sensitive latches. Each latch that is controlled by the last clock pulse in the scan operation
must necessarily end up in the same state as the latch that precedes it in the register. In rare
circumstances, the preceding latch may also be clocked by this same clock, and then both
latches have the same state as the next preceding latch (and so on). Some latches may be
connected in parallel, being controlled by the same scan clock and preceded by a common
latch. These latches also must end up at a common state. Encounter Test identifies all the
latches that have common values as the result of the scan operation, and each such set of
latches is said to be “correlated”. TBDpatt does not explicitly specify the value for every latch,
but only for one latch of each group of correlated latches. This latch in each group is called a
“representative stim latch” or “representative measure latch”, depending upon whether the
context is a load operation or an unload operation. The Stim Latch Correspondence List is a
list of all the latches that can be independently loaded by a scan or load operation. This
consists mainly of the representative stim latches, but contains also “skewed stim latches”
which require special treatment for the Skewed_Scan_Load event described in the Test
Pattern Data Reference. The Measure Latch Correspondence List is a list of the
representative measure latches.

In the case of PRPG and MISR latches, used for built-in self test, similar correlations may
exist; the representative latches for these are called “representative cell latches”. The PRPG
and MISR correspondence lists include the representative cell latches for the PRPGs and
MISRs respectively.

See TBDpatt and TBDseqPatt Format in the Test Pattern Data Reference for information
on vector correspondence language definition.

To perform Create Vector Correspondence using the graphical interface, refer to “Write Vector
Correspondence” in the Graphical User Interface Reference.

To perform Write Vector Correspondence using command line, refer to
“write_vector_correspondence” in the Command Line Reference.

The syntax for the write_vector_correspondence command is given below:


write_vector_correspondence workdir=<directory> testmode=<modename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode

Create Vector Correspondence Output


If creating the vector correspondence file, the output will be written into
tbdata/TBDvect.<TESTMODE NAME>.

Specify vectorcor=yes to include the output in TBDpatt files.

Reporting Encounter Test Vector Data


You can report the binary TBDbin test vectors generated by Encounter Test into an ASCII
(TBDpatt) file. This is useful when you need to understand how the generated test vectors
exercise your design.

To verify cell models created to run with Encounter Test, you can simulate vectors created by
our system with other transistor level simulators. To do this, it is necessary to expand the scan
chain load and unload data into individual PI stims, clock pulses, and PO measures. Select
the Expand scan operations option on the Report Vectors Advanced form or specify the
keyword pair expandscan=yes on the command line to expand any scan operations found
in the test data.

To report Encounter Test vectors using the graphical interface, refer to “Report Vectors” in the
Graphical User Interface Reference.

To report Encounter Test vectors using command line, refer to “report_vectors” in the
Command Line Reference.

The syntax for the report_vectors command is given below:


report_vectors workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode of the experiment from which to export patterns
■ experiment= name of the experiment from which to export data

The commonly used keywords for the report_vectors command are:


■ format=vector|node - Specify the pattern format. Vector produces a smaller file but
requires vector correspondence to find individual bit information. Default is vector.
■ vectorcor=yes|no - Whether to include vector correspondence data. Default is yes.
■ outputfile=STDOUT|<outfilename> - Output file name or STDOUT. Default is
output file name which is written to the testresults directory.
■ testrange=<odometerRange> - Select a range of tests. Default is all patterns.

Refer to “report_vectors” in the Command Line Reference for more information on these
parameters.
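For example, the following command (illustrative names) reports the vectors of an
uncommitted experiment in node format, without vector correspondence data, to standard
output:

report_vectors workdir=<workdir> testmode=FULLSCAN experiment=logic1 format=node vectorcor=no outputfile=STDOUT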

Input Files for Reporting Encounter Test Vector Data

report_vectors requires a valid model and experiment or committed test data as input.

Output Files for Reporting Encounter Test Vector Data

The report_vectors command produces the following default output files:


■ Reporting committed vectors results in a file named TBDpatt.testmode in the
<workdir>/testresults directory.
■ Reporting expanded committed vectors results in a file named
TBDpatt.testmode.experiment.EXPANDSCAN in the <workdir>/
testresults directory.
■ Reporting uncommitted vectors results in a file named
TBDpatt.testmode.experiment in the <workdir>/testresults directory.
■ Reporting expanded uncommitted vectors results in a file named
TBDpatt.testmode.experiment.EXPANDSCAN in the <workdir>/
testresults directory.

■ The output file name can be changed by specifying the outputfile=<filename>
keyword for the report_vectors command or by entering the file name in the GUI
Output file name entry field.

Reporting Sequence Definition Data


Encounter Test supports the reporting of sequence definitions. The report_sequences
command reports sequence definitions in TBDpatt format. Sequence definitions identify the
patterns used to establish the testmode initialization state and to perform the scan operation.
These are not used by themselves to drive a tester, but are useful in the ASCII TBDpatt
format for analysis or to be used as a basis for writing custom mode initialization or scan
sequences. For more information about test mode initialization or custom sequences, refer to
“Mode Initialization Sequences (Advanced)” and Defining a Custom Scan Sequence in the
Modeling User Guide.

To report sequence definition data using the graphical interface, refer to “Report Sequences”
in the Graphical User Interface Reference.

To report sequence definition data using command line, refer to “report_sequences” in the
Command Line Reference.

The syntax for the report_sequences command is given below:


report_sequences workdir=<directory> testmode=<modename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode to report the sequences on

The commonly used keywords for the report_sequences command are:


■ format=vector|node - Specify the pattern format. Vector produces a smaller file but
requires vector correspondence to find individual bit information. Default is vector.
Node format is required if the file is used for future build_testmodes processes.
■ outputfile=STDOUT|<outfilename> - Output file name or STDOUT. Default is
output file name which is written to the testresults directory.
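For example, the following command (illustrative names) reports the sequence definitions in
node format so the output can be used as a starting point for custom sequences:

report_sequences workdir=<workdir> testmode=FULLSCAN format=node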

Refer to “report_sequences” in the Command Line Reference for more information on
these parameters.

Input Files for Reporting Sequence Definition Data

report_sequences requires a valid model and testmode as input.

Output Files for Reporting Sequence Definition Data

Reporting Encounter Test sequence definitions results in a file named
TBDseqPatt.testmode in the <workdir>/testresults directory.

The output file name can be changed by specifying the outputfile=<filename>
keyword for the report_sequences command or by entering the file name in the GUI Output
file name entry field.

Converting Test Data for Core Tests (Test Data Migration)


The convert_vectors_for_core_tests command converts a standard TBDbin file for
a core to a structure-neutral form of TBDbin that references none of the internal structure of
the core. This is necessary so the migration process for the core test data does not require
the presence of a core logic model. The structure-neutral TBDbin, called TBDtdm, closely
resembles the standard TBDbin except that events whose interpretation relies on
knowledge of the internal structure of the core are replaced by events that require knowledge
of only the core I/O pins. The structure-neutral file can be printed using the utility
report_vectors_for_tdm. See “Reporting a Structure-Neutral TBDbin” on page 194 for
details.

To convert vectors for core tests using the graphical interface, refer to “Convert Vectors for
Core Tests” in the Graphical User Interface Reference.

To convert vectors for core tests using commands, refer to “convert_vectors_for_core_tests”
in the Command Line Reference.

The syntax for the convert_vectors_for_core_tests command is given below:


convert_vectors_for_core_tests workdir=<directory> testmode=<modename>
inexperiment=<name> exportfile=<file>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode of the experiment from which to export patterns
■ inexperiment= name of the experiment from which to export data
■ exportfile= file name of exported data

Input Files for Converting Test Data for Core Tests (TDM)

Converting vectors for core tests requires a valid model and experiment data.

Output Files for Converting Test Data for Core Tests (TDM)

The convert_vectors_for_core_tests command creates the output file
TBDtdm.cellname.

Reporting a Structure-Neutral TBDbin


The structure-neutral file can be printed using the command report_vectors_for_tdm.

The output of convert_vectors_for_core_tests can be reported and viewed using the
report_vectors_for_core_tests command. Performing
convert_vectors_for_core_tests is a prerequisite to executing
report_vectors_for_core_tests.
Note: report_vectors_for_core_tests reads only the data produced by
convert_vectors_for_core_tests.

Refer to “report_vectors_for_core_tests” in the Command Line Reference for the syntax.

The syntax for the report_vectors_for_core_tests command is given below:


report_vectors_for_core_tests workdir=<directory> inputfile=<file>

where:
■ workdir = name of the working directory
■ inputfile= file name of exported data

Input for Reporting Structure-Neutral TBDbin


The output from convert_vectors_for_core_tests, TBDtdm.cellname, is the input to
report_vectors_for_tdm.

Output for Reporting Structure-Neutral TBDbin


Specify any meaningful path/filename to write the output.

Note: This output is disallowed as input to read_vectors (tbdpatt) or
read_sequence_definition_data.

6
Customizing Inputs for ATPG

Various tasks can be performed in preparation for Encounter Test test generation
applications. This section describes concepts and techniques related to preparing special
input to test applications. Refer to the following:
■ “Linehold File”
■ “Coding Test Sequences” on page 206
■ “Ignoremeasures File” on page 230
■ “Keepmeasures file” on page 230

Linehold File
A linehold file provides a means whereby you can define statements that identify a set of
nodes to be treated specially by Encounter Test applications that support lineholding. You
may name this file any arbitrary name permitted by the operating system. Although special
treatment of lineholds is application-specific, there is a set of global semantic rules common
to all Encounter Test applications which govern the specification of linehold information.
Anyone generating a linehold file must know the rules and syntax. When applying lineholds
to a flop or a net sourced from a flop, the linehold value is not guaranteed to persist through
the duration of the test, but will be set at the beginning of the test.

The linehold file also supports changing the frequency of oscillators that are connected to
OSC pins and were started by the Start_Osc event in the mode initialization sequence.

Use the following linehold statement types to specify this test generation behavior:
■ An OSCILLATE linehold overrides the frequency of a previously defined oscillating signal
or disables the oscillator. OSCILLATE statements do not have a direct effect on the test
generation or simulation step. However, the information is incorporated into the
experiment output and used by processes such as write_vectors.
■ A HOLD linehold is a hard linehold. Test patterns can never override this value. The HOLD
value is always in effect.

■ A DEFAULT linehold is a soft linehold. Test patterns can override this value if required to
generate a test for a fault. Otherwise, the DEFAULT value is in effect.
❑ A DEFAULT linehold is automatically generated for any SCAN_ENABLE,
CLOCK_ISOLATION, or OUTPUT_INHIBIT test function pin.

The following is an example of a HOLD statement:


HOLD PIN "Pin.f.l.newpart.nl.lat1.LATCH_2$3.P01DATA"=1;

Create a file containing all flops to be lineheld. Specify LINEHOLD=lineholdfilename to
apply the linehold file content during test generation.

Important
During scan, opposite values are returned on these nets, creating 0/1 hard
contention. The test generator can ensure that the scanned-in data will be 0s for
these flops.

Refer to “Linehold File Syntax” on page 199 and “General Semantic Rules” on page 203 for
additional information.

The following criteria are used to determine the final linehold set:
■ Test mode test function pin specifications. Refer to “Identifying Test Function Pins” in the
Modeling User Guide for related information.
■ An input linehold file. The values in this file override test function pin specifications.
■ User-defined test sequences. These override the above two criteria. Refer to “Linehold
Object - Defining a Test Sequence” on page 204.

Boolean constraints can also be described in linehold files. Refer to “Boolean Constraints” on
page 104 for details.

General Lineholding Rules


In general any internal node or pin may be lineheld to a desired value. However, there are
some exceptions when dealing with latches. One of the fundamental rules of a linehold set is
that it must be possible to apply the desired values in a single load operation. Encounter Test
does not support the application of a sequence of patterns with respect to lineholds.
Therefore, any latch specifications must be consistent with respect to a single load (normal or
skewed) operation. Thus, it must be possible to justify internal nodes in a single time-frame.

Clock pins may not be held away from stability. This is the case whether a clock pin is held
directly, or the justification of an internal node would require a clock pin held ON. Any test

function pins lineheld to the stability value may be taken out of stability for the scanning
operation or to capture the effect of a fault in a latch.

If a linehold is specified on any pin or latch in a correlated set, then all the pins or latches in
that set will be lineheld accordingly. It is recommended that you specify linehold values only
for the representative pins or representative stim latches.

Whenever a cut point is lineheld (HOLD or DEFAULT), this applies to the associated Pseudo
Primary Input (PPI), and thus to all the cut points associated with the PPI at their respective
values. See “Linehold File Syntax” on page 199 for additional information.

When specifying lineholds by way of a Linehold object in a Define_Sequence statement
(user-specified test sequence), each node must be either controllable in the target test mode,
or in the case of Line Hold (LH) fixed-value latches, controllable in the parent test mode.
Lineholds on other internal nodes are not permitted in Linehold objects. See
“Define_Sequence” in the Test Pattern Data Reference for related information.

Linehold File Syntax


The following defines the syntax for linehold file statements. See “General Semantic Rules”
on page 203 for additional information.

OSCILLATE entity = [up] [down] [pulsespercycle]; (using “=” is optional)

where:
■ entity is the pin name optionally preceded by the keyword PIN.
■ up is a decimal number specifying the duration in nanoseconds for which the signal is 1
on each oscillator cycle. There is no default value for this parameter and the specified
value should be greater than 0.
■ down is a decimal number specifying the duration in nanoseconds for which the signal
is 0 on each oscillator cycle. There is no default value for this parameter and the specified
value should be greater than 0.
■ pulsespercycle is an integer specifying the number of oscillator cycles to be applied
in each tester cycle. There is no default value for this parameter and the specified value
should be greater than 0.
Note: To turn off an oscillator for the experiment, specify the STOPOSC statement, as shown
below:

STOPOSC entity;

Where entity is the pin name optionally preceded by the keyword PIN.
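For example, statements of the following form (pin names and timing values are illustrative)
override a previously started oscillator to a 10 ns high, 10 ns low waveform with one
oscillator cycle per tester cycle, and stop a second oscillator:

OSCILLATE PIN "OSC1" = 10 10 1;
STOPOSC PIN "OSC2";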

HOLD entity = value [restriction] [action]; (using “=” is optional)

where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ NET hier_net_name
❑ BLOCK usage_block_name
❑ PPI name
■ value is one of the following:
❑ 0 - logic zero
❑ 1 - logic one
❑ Z - high impedance
❑ X - logic X (unknown)
❑ W - weak logic X
❑ L - weak logic zero
❑ H - weak logic one
■ restriction is either:
❑ all (default) - in effect for the entire sequence
❑ dynamic - in effect only for the dynamic section of the sequence
■ action is either:
❑ ignore (default) - if a test pattern violates a given constraint, then assign an unknown
(X) value to the endpoint of the constraint. This improves the runtime performance
as the test generation engine needs to protect only those constraints that would
otherwise block (with an X) one of the faults it is targeting in the test pattern.
❑ remove - consider a test pattern as invalid if a given constraint is violated. The
simulator removes all such invalid patterns from the pattern set.
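For example, the following statement (pin name illustrative) holds a pin to 0 only during the
dynamic section of the sequence and removes any pattern that violates the constraint:

HOLD PIN "top.core1.clk_gate_en" = 0 dynamic remove;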

DEFAULT entity = value; (using “=” is optional)

where:

■ entity is one of the following:


❑ [PIN] hier_pin_name
❑ NET hier_net_name
❑ BLOCK usage_block_name
❑ PPI ppi_name
■ value is one of the following:
❑ 0 - logic zero
❑ 1 - logic one
❑ Z - high impedance
❑ X - logic X (unknown)
❑ W - weak logic X
❑ L - weak logic zero
❑ H - weak logic one

RELEASE entity;

where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ PPI name
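For example, a statement of the following form (pin name illustrative) applies RELEASE to a
Linehold (LH) flagged primary input:

RELEASE PIN "TEST_EN";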

TRANSITION entity = value [restriction] [action] [propagation]; (using “=” is optional)

where:
■ entity is one of the following:
❑ [PIN] hier_pin_name
❑ NET hier_net_name
❑ BLOCK usage_block_name
❑ PPI name

■ value is one of the following:


❑ NONE - no transition permitted
■ restriction is either:
❑ all (default) - in effect for the entire sequence
❑ dynamic - in effect only for the dynamic section of the sequence
■ action is either:
❑ ignore (default) - ignore all the scan bits fed by the last propagation entity in the
propagation list
❑ remove - discard the sequence if a constraint is violated
■ propagation is one of the following:
❑ none (default) - the constraint is considered violated even if not measured at an
observe point
❑ any - the constraint is considered violated if a value not equal to value occurs at the
entity and its effects propagate to any observable point in the forward cone. The
constraint is violated if value is violated and the effects of the transitions are
propagated to one of the listed entities.
❑ a comma-separated list of observe points as follows:
entity, entity,...
where each entity is one of the following:
❍ [PIN] hier_pin_name
❍ NET hier_net_name
❍ BLOCK usage_block_name
Note: The Transition constraint is only checked by the simulator.
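For example, the following statement (pin name illustrative) disallows transitions on a pin
during the dynamic section of the sequence and discards any sequence that violates the
constraint:

TRANSITION PIN "top.core1.scan_enable" = NONE dynamic remove;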

The following is a sample linehold file.


default in2c = 1 ;
hold in2 = 0 ;
hold in2c = 0 ;

where in2c and in2 are pin names.

The syntax rules governing linehold parameter specification are as follows:


■ A statement may occupy only a single line.

■ Multiple statements per line are permitted (each ending with a semi-colon).
■ Only a single entity is permitted per statement.
■ Only a single value is permitted per HOLD or DEFAULT statement.
■ Keywords and entity identifiers must be immediately followed by at least one space.
■ There is no case sensitivity except for entity names.
■ Comments delimited by '//' or '/* */' may be used freely.

General Semantic Rules


The following are general semantic rules for the specification of entity.
■ The entity name must be in the proper form unless it exists only in the top-level of the
hierarchy (i.e., simple names may be specified for primary inputs).
■ The entity name is assumed to correspond to a pin if no entity type (pin, block, or net)
keyword is specified.
■ If the entity type keyword is block, this block may only have a single output pin (to avoid
ambiguity).
■ The corresponding node must be active in the current test mode.
■ The corresponding node should appear only ONCE in the linehold file. If conflicting
usages of a node are detected, the last valid statement containing the node takes
precedence.
■ The corresponding node may NOT be one of the following:
❑ Test Inhibit Primary Input, Test Inhibit pseudo primary input (PPI), or Test Inhibit
fixed-value latch, or a block, pin, or net controlled by such (to conflicting value).
❑ Test Constraint Primary Input, Test Constraint pseudo primary input (PPI), or Test
Constraint fixed-value latch, or a block, pin, or net controlled by such (to conflicting
value).
❑ RAM or ROM
❑ output of a chop block (CHOPL, CHOPT, NCHOPL, NCHOPT)
❑ tie block
❑ load clear latch
❑ load preset latch

❑ floating latch
❑ latch which is only randomly controllable

Special Rules for the HOLD and DEFAULT statements

The following are special rules associated with entity.


■ If the associated node corresponds to a controllable latch, the value may only be 0 or 1
■ If the associated node is a primary input not contacted by the tester, the value may only
be Z or X
■ If the value is a weak value (W, L, H), the corresponding net must be sourced by a
resistor, nfet, or pfet.
■ If the associated node corresponds to a non-controllable point in the design, the
justification of this (together with all other held non-controllable points), must be able to
be done in a single time-frame. Otherwise this point will not be held.

Special Rules for the RELEASE Statement

The following are special rules associated with entity.

The entity must correspond to one of the following:


■ A Linehold (LH) flagged PI
■ A Linehold (LH) flagged PPI
■ A correlated Primary Input whose Representative PI is a Linehold (LH flagged pin).

Special Rules for the OSCILLATE Statement

The entity must be a pin that has the OSC test function and is referenced by the
Start_Osc event in the mode initialization sequence.

Linehold Object - Defining a Test Sequence


The following is an example test sequence definition containing a linehold object. Refer to
“Lineholds” in the Test Pattern Data Reference for additional information.
TBDpatt_Format (mode=node , model_entity_form=name);
[ Define_Sequence mle1 (test, user_defined);
[ Lineholds ():

Block.f.l.mle1.nl.mle1.slave = 0
Block.f.l.mle1.nl.mle2.slave = 0
Block.f.l.mle1.nl.mle3.slave = 0
Block.f.l.mle1.nl.mle4.slave = 0
Block.f.l.mle1.nl.mle5.slave = 0
Block.f.l.mle1.nl.mle6.slave = 0
Block.f.l.mle1.nl.mle7.slave = 0
Block.f.l.mle1.nl.mle8.slave = 0
;
] Lineholds;
[ Pattern 1 ( pattern_type = static );
. . .
] Pattern 1;
] Define_Sequence mle1;

The following rules apply to the specification of linehold objects in user-defined test
sequences. Lineholds from user-defined test sequences override those from all other
sources.

These rules are enforced when importing test sequences.


■ No more than one Linehold object may be specified in a Define_Sequence statement.
Refer to “Define_Sequence” in the Test Pattern Data Reference for additional
information.
■ A Linehold object may be specified only in a user sequence of type test. Refer to the
definition for test in the “Define_Sequence” section of the Test Pattern Data
Reference.
■ Entries within the Linehold object must be consistent (no conflicting linehold values on a
node).
■ Entries within the Linehold object must be consistent with any stimulus events supplied
with the sequence definition
■ The sequence definition must include stimulus events required to apply specified
lineholds.
■ The node associated with a Linehold object entry must be either:
❑ controllable in the target test mode(s) - those the test sequence is intended for.
or
❑ a Line Hold fixed-value latch, controllable in the parent mode of the target mode(s)

The following is an example sequence definition containing linehold information in keyed
data. Refer to “Keyed Data” in the Test Pattern Data Reference for additional information.
[ Define_Sequence Test1 1 (test);
[ Keyed_Data;
"CONSTRAINTS" = "transition A1 = T01; transition A2 = ANY dynamic ignore A1,
pin A2; equiv + A1, - A2;"

] Keyed_Data;
[ Pattern 1.1 (pattern_type = static);
Event 1.1.1 Stim_PI ():
"Pin.f.l.tstatesq.nl.D"=0;
] Pattern 1.1 ;
] Define_Sequence ;

Coding Test Sequences

Introduction to Test Sequences


In the present context, test sequence refers to a template to be used for a set of tests. The
template typically specifies certain clock primary inputs to be pulsed in a given order for each
test as well as certain non-clock primary input (control pin) stimuli. The template, or test
sequence, is imported as a sequence definition and stored in the TBDseq.testmode file.

For WRPT and LBIST use, this is the exact sequence of stimuli that eventually get put in the
TBDbin (Vectors) file output from the test generation applications, and finally get applied at
the tester. In a similar fashion, test sequences for stored-pattern tests can also be user-
defined. User specification of sequences for stored-pattern test is required for on-product
generation or BIST controller support (referred to generically as OPC), but there may be
instances other than OPC where it is desired to constrain the test sequences for stored
patterns.

For stored patterns, the user-defined test sequence is augmented by the automatically
generated input (and scannable latch) vectors for each test. See section “Stored Pattern
Processing of User Sequences” on page 209 for a description of this process.

Common to all user-defined test sequence scenarios are the basic coding process and the
process of importing the sequence definitions. On the other hand, the detailed contents of the
sequence definition depend on its intended use (stored patterns vs. WRPT or LBIST). There
are three basic steps in creating a sequence definition:
1. Know the names of the primary inputs, pseudo PIs, and latches or other internal nets that
will be referenced in the test sequence.
2. Use the text editor of your choice to code the test sequence in TBDpatt format.
3. Import the test sequence to TBDseq.

The next subsection tells you how to get started. This is followed by an explanation of
sequence coding for stored patterns, then an explanation of sequence coding for WRPT or
LBIST, and then a brief explanation of the import process.

Getting Started with Test Sequences


Sequence definitions are coded in the format of TBDpatt, which is the ASCII version of
TBDbin and TBDseq. You will need to know the names of the primary inputs and internal
blocks that you will be setting values on. Mostly these are clocks; you may want to specify
explicit static values on certain other inputs too. If you are not sure of the exact names by
which Encounter Test recognizes the pins, then you can copy the names from the vector
correspondence file (TBDvect.testmode). This file can be created any time after the
design is imported and the test mode is built by running Create Vector Correspondence from
the Tools pull-down. If the vector correspondence file already exists, then it is not necessary
to rebuild it. If you choose to use the vector correspondence file, then you can copy the
pertinent names directly from it into the TBDpatt file that you are creating.

The sequences are coded in an ASCII file, which is created using the text editor of your
choice. Of course, if you can find an existing test sequence, it is usually quicker to copy and
edit it instead of entering one from scratch. The name of the file is arbitrary, as long as it does
not conflict with the name of some other file in the directory where it is located. Test Pattern
Data Reference contains detailed explanations of the TBDpatt format.

A TBDpatt file always starts with a TBDpatt_Format statement, which gives information
about the formatting options that are used. There are two TBDpatt format modes, denoted
by mode=node or mode=vector. In writing sequence definitions, you usually have no
need to specify values on many inputs or latches all at once, so all examples in this section
use mode=node. Mode=node is a list format that specifies only the pins that are to be set,
and the value of each.

TBDpatt statements that refer to specific pins, blocks, or nets can use either their names or
their indexes to identify them. The second TBDpatt_Format parameter is
model_entity_form, which specifies either name, index, or flat_index.

Comments can be placed in the TBDpatt file to record any explanatory information desired.
Comments can be placed in any white space in the file, but whatever follows the comment
must begin on a new line. Comments are not copied into the TBDseq and TBDbin (Vector)
files. Refer to “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference for
additional information.

Following the TBDpatt_Format statement can be any number of sequence definitions.
Each sequence definition starts with a [Define_Sequence statement and ends with a
]Define_Sequence statement. A sequence contains patterns which contain
events. The events specify exactly what is to be done and what it is to be done to, for
example: apply a negative-going pulse on clock pin Aclk (Pulse), set primary input DATA5 to
1 (Stim_PI), or scan out and look for a 0 in latch XYZ (Scan_Unload). The grouping of
events into patterns should coincide with tester cycles; that is, each pattern contains the
events to be applied in one tester cycle.

The events that go inside each pattern depend upon the type of test the sequence definition
will be used for.

Stored Pattern Test Sequences


A typical test sequence for logic that conforms to the Encounter Test scan (LSSD or GSD)
guidelines starts by loading the scannable latches, then applies the primary input test vector,
measures primary outputs, then pulses a “system” clock, and then unloads the values
captured in the scannable latches by the system clock pulse. Figure 6-1 shows this simple test
in the form of Encounter Test's TBDpatt.

Figure 6-1 A Simple Typical Stored-Pattern Test Sequence Definition

TBDpatt_Format (mode=node,model_entity_form=name);
[ Define_Sequence spseq1 (test);
[ Pattern 1;
Event 1.1.1 Scan_Load (): ;
] Pattern 1;
[ Pattern 2;
Event 1.2.1 Put_Stim_PI (): ;
Event 1.2.2 Measure_PO (): ;
] Pattern 2;
[ Pattern 3;
Event 1.3.1 Pulse ():
"Pin.f.l.samplckt.nl.C1"=-;
] Pattern 3;
[ Pattern 4;
Event 1.4.1 Scan_Unload (): ;
] Pattern 4;
] Define_Sequence ;

In the example of Figure 6-1, “spseq1” is an arbitrary sequence name assigned by the user.
“(test)” denotes this as a test sequence, as opposed to a modeinit sequence or a scan
operation sequence. The numbers following the keywords “Pattern” and “Event” are arbitrary
identification numbers; they are shown in the example exactly as Encounter Test would
construct them when exporting the sequence definition to an ASCII file.
“Pin.f.l.samplckt.nl.C1” is the fully-qualified name by which Encounter Test knows
the C1 clock primary input pin. The short form of the name, just “C1”, could be used in place
of the longer name shown. The “-” following the clock name signifies a negative-going pulse
(the quiescent state of this clock is a 1).

“Scan_Load” and “Scan_Unload” are the names of events which represent the operations
of loading and unloading the scan chains, respectively. Note that these particular events may
not be imported during test mode creation. If the sequence includes these events, create the
test mode first, then import the sequence definition.

No latch state vectors are specified, because as test generation is performed using this
sequence definition, Encounter Test will fill in these values. “Put_Stim_PI” is a placeholder
for another event, called “Stim_PI”. Stim_PIs are the events that hold the primary input
state vectors for the test. The Put_Stim_PI event was devised specifically for use in a test
sequence definition, and is not used any other place. In the context of a test sequence
definition, Put_Stim_PI is a placeholder for the generated primary input vector, while
Stim_PI is used only for manipulating control primary inputs and cannot be modified by the
test pattern generator. (In this simple example, no such control signals are used, therefore no
Stim_PI events are shown). When the test vector is inserted into the sequence, the
Put_Stim_PI event is changed to a Stim_PI event. Note that there is no such distinction
made for the latch vectors because by its nature a Scan_Load event is not suitable for
applying control signals to a design, so its appearance in a sequence definition always
denotes where the latch test vector is to be placed.

For both the Put_Stim_PI and the Scan_Load events, PI or latch values may be specified.
When any PI or latch value is thus specified, it takes precedence over any value specified on
that PI or latch by the test generator; the automatically generated vectors are used only to fill
in the unspecified positions of the vectors in the Put_Stim_PI and Scan_Load events.
Note: When writing sequences for an OPMISR test mode, each OPMISR test sequence
should begin with a scan_load event and end with a channel_scan event.

When writing sequences for a test mode which has scan control pipelines, each test
sequence must begin with a scan_load event and end with either a scan_unload or
channel_scan event.

Stored Pattern Processing of User Sequences

The testsequence option specifies which testsequence(s) the test generator can use for
creating tests for faults and how the user-defined test sequence(s) are used by the ATPG
process.
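For example, assuming a sequence named spseq1 has already been imported into the test
mode, a test generation run might reference it as follows (the command and keyword values
shown are illustrative):

create_logic_tests workdir=<workdir> testmode=FULLSCAN experiment=logic1 testsequence=spseq1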

The first step is to write the test sequence template(s) to be used by ATPG. The sequence
definition is then imported into the test mode. Note that when non-scan flush sequences (data
pipes) are applicable, the non-scan flush patterns should be included in the user-defined
test sequence (these patterns should include the non-scan flush attribute, i.e.,
nonscanflush).

Processing via Test Sequence

The testsequence keyword is used to specify a list of imported test sequences which are
available to ATPG for fault detection. Test generation progresses as usual with the tests being
created for faults independent of any user-defined test sequences. A best-fit process is then
applied to match each ATPG sequence to a user-defined test sequence specified via
testsequence. The testsequence method provides special event types (for example,
Put_Stim_PI and keyed data) to allow variability in the test sequence definition and test
sequence matching. The sequence matching process and related guidelines are
described later in this chapter. Note that it is possible to have ATPG generate tests which are
discarded due to the ATPG test sequence not fitting with any specified user-defined
sequences.
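
For example, a command-line sketch that makes two previously imported sequence definitions available to ATPG might look like the following. The create_logic_tests command name and the comma-separated list form are assumptions for illustration; confirm the exact syntax of the testsequence keyword in the Command Line Reference.

create_logic_tests workdir=<directory> testmode=<modename> experiment=<name> testsequence=Jones,Hamilton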

Tip
Test Sequence processing can be invoked from the GUI as follows:

a. Click the Advanced button on a test generation form

b. Click the General tab

c. Select Use manual sequences

d. Select To fit clocking sequences

e. Enter a range of test sequences in the Sequence name field or enter a file that
contains a range of test sequences in the Read sequence from file field.

Mapping Automatic Vectors into User Sequences

When user sequences are specified for stored pattern tests, Encounter Test takes the
automatically generated test vectors (the primary input and scannable latch values) and
places these vectors into the user-supplied sequence. The resulting test is then simulated
and written to the TBDbin output file for eventual use at the tester. This seemingly simple
process is made more complicated by the fact that sometimes the automatic test consists of
a sequence of vectors, as in the case of a partial scan design or a design containing
embedded RAM.

Each primary input vector and each latch vector must be inserted some place in the user-
defined sequence. This has greater significance in the specification of user test sequences
when the automatic test sequence contains more than one primary input vector or more than
one latch vector. Latch vectors are inserted in the user-defined sequence at each
Scan_Load (or Skewed_Scan_Load) event. If the automatic test sequence contains the
same number of these events as the user sequence, then the first latch vector found is copied
into the first Scan_Load (or Skewed_Scan_Load) event of the user sequence, the second
is copied into the second stim latch event of the user sequence, and so on. If the number of
stim latch events in the user sequence is more or less than the number of stim latch events
(latch vectors) in the automatic sequence, then Encounter Test cannot map this automatic
test into the user-defined sequence. Either some latch vectors from the automatic sequence
would have to be discarded, or some stim latch events from the user sequence would have
to be discarded. Instead of doing that, Encounter Test discards the entire automatic
sequence, and it is not used at all.

With primary input vectors, the mapping is from Stim_PI events in the automatic sequence
to Put_Stim_PI events in the user-defined sequence. Stim_PI events in the user-defined
sequence (along with all other events except Scan_Load, Skewed_Scan_Load, and
Put_Stim_PI) are kept exactly as specified by the user. Thus, the user sequence can stim
a control primary input at any appropriate point in the sequence without any consideration of
whether the test generator will be generating a corresponding primary input vector.

The same considerations explained above for latch vectors apply equally to primary input
vectors. For each primary input vector (Stim_PI event) of the automatic test sequence there
must be a corresponding Put_Stim_PI event in the user-defined sequence to map the
vector into. If the number of Stim_PI events in the automatic sequence does not exactly
match the number of Put_Stim_PI events in the user sequence, then the mapping fails and
the automatic test is discarded.

Choosing from a List of User Sequences

For most designs, the automatic test generator will produce tests which vary with respect to
the number of primary input and latch vectors contained therein. For a full-scan design having
no embedded RAM, most tests will contain a single latch vector and a single primary input
vector, but when non-scan sequential elements are present, the test generator will produce
some tests with multiple input vectors to be applied in sequence with other activity such as
clock pulses.

If your test protocol can support a variety of test templates, and especially if the test generator
will produce tests with variable numbers of latch or primary input vectors, it is important to
introduce some flexibility to accommodate this variety of test templates. This flexibility is
supported in two ways. One way is to define each different sequence, or template, as a unique
test sequence and provide a list of test sequences to the test generation run.

When multiple test sequences are specified for a stored-pattern test generation run, each
generated test will be mapped, if possible, into one of the specified sequences. Encounter
Test has a two-tiered algorithm for choosing which user sequence to map the test into.

The first step of the algorithm determines whether each user sequence has the requisite
numbers of latch stims and Put_Stim_PI events, in accordance with the criteria explained
in the preceding section, “Mapping Automatic Vectors into User Sequences” on page 210.
Only those sequences which “match” the automatic test with respect to the number of latch
stims and the number of primary input stims (Put_Stim_PI vs. Stim_PI) are considered in
the second step.

The second step of the matching algorithm selects from those user sequences which can be
mapped into, the one which most closely resembles the automatic test, considering all other
factors, such as which clocks are being pulsed, the order of the clock pulses, type of scan
(skewed load, skewed unload), number and placement of Measure_PO events, etc. In this
selection, a high weight is given to the comparison of the measure latch events between the
user sequence and the test sequence. If the test generator produced a
Skewed_Scan_Unload event for an LSSD design, it is reasonable to assume that it has
stored some meaningful test information in the L1 latches; if the user sequence has a
Scan_Unload event (no preceding B clock) then the scan out will begin with a shift A clock,
destroying the test information in the L1 latches. In this case, a user sequence with a
Skewed_Scan_Unload event would get a much better “score” for matching than a user
sequence with a Scan_Unload event.

Control Statements in Sequence Definitions

Besides allowing multiple user sequences to be specified for a test generation run, Encounter
Test has additional flexibility in the sequence matching by means of “keyed data” statements
that can be placed in the sequence definition. See “Keyed Data” in the Test Pattern Data
Reference for additional information.

There are three types of sequence matching control statements, and two of these have
multiple variations.

STIM=DELETE

This statement is placed on a Scan_Load, Skewed_Scan_Load, or Put_Stim_PI
statement in a sequence definition and causes the vector to be removed from the sequence.
The purpose of such a statement is to allow the sequence to be matched with an automatic
test and yet eliminate the event from the sequence. This is best explained by an example.

In the hypothetical example of Figure 6-2, there is a choice of two system clocks, C2 and C3.
The automatic test generator picks clock C2 but the user wants to use C3. The C2 clock signal
is gated by a control primary input. If we assume that the test generator is sophisticated
enough to figure this out and place the gating signal in a separate Stim_PI event prior to
pulsing the C2 clock, the automatic test could have two Stim_PI events. The sequence
matching and mapping algorithms count each Stim_PI event in the automatic test as a
separate vector, and hence the user sequence definition must have two Put_Stim_PI
events. But let us say the user wants to leave the C2 clock gating primary input at its original
state from the first vector. This can be accomplished as shown in the figure, using the second
Put_Stim_PI event to “match” the second Stim_PI event in the automatic sequence, and
the STIM=DELETE statement to throw away this “vector” after the sequence matching has
been done.

Figure 6-2 Illustration of the STIM=DELETE control statement

[ Test_Sequence ;
[ Pattern 1;
Event 1 Stim_PI (): 1011001010001;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Stim_PI (): .....1.......;
Event 2 Pulse (): "C2"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Test_Sequence ;

[ Define_Sequence Jones (test) ;


[ Pattern 1;
Event 1 Put_Stim_PI (): ;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
[ Keyed_Data ;
STIM=DELETE
] Keyed_Data ;
Event 2 Pulse (): "C3"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Define_Sequence Jones ;

STIM=DELETE statements are valid on Put_Stim_PI, Scan_Load, and
Skewed_Scan_Load events, and are coded as shown in the example of Figure 6-2. As has
been explained, STIM=DELETE is used to cause the event to be deleted from the sequence
after the event has been used for purposes of sequence matching.

PI_STIMS/LATCH_STIMS=n

This is the second type of sequence matching control statement, and there are two
statements of this type:
PI_STIMS=n
LATCH_STIMS=n

where n is a positive integer.

These statements are interpreted and applied before sequence matching takes place. Their
primary purpose is to allow the same user test sequence definition to be “matched” with
different automatic tests that contain different numbers of primary input and/or latch vectors.
For example, an event to which a PI_STIMS=3 statement is attached is effectively removed
from the sequence definition if the automatic test sequence does not have at least three
Stim_PI events. Similarly, an event with an attached LATCH_STIMS=2 statement is effectively
removed from the sequence definition if the automatic sequence does not have at least two
Scan_Load and/or Skewed_Scan_Load events.

Again using a hypothetical example, Figure 6-3 shows a possible use for the PI_STIMS=n
statement. The user's sequence definition “matches” both automatic sequences 1 and 2, and
so can be used for both of these automatic tests. In the case of test sequence 1, the sequence
contains one Stim_PI event, so the second Put_Stim_PI event of the sequence definition
is removed for this usage as it specifies PI_STIMS=2. With the second Put_Stim_PI event
removed, the number of Put_Stim_PI events remaining (one) matches the number of
primary input vectors in the first test sequence.

In the case of test sequence 2, the sequence contains two Stim_PI events, so the
Put_Stim_PI event with the PI_STIMS=2 statement is used for the purpose of matching
with the automatic test sequence. The number of Put_Stim_PI events (two) matches the
number of primary input vectors in the second test sequence.

Figure 6-3 Illustration of the PI_STIMS=n control statement

[ Test_Sequence 1 ;
[ Pattern 1;
Event 1 Stim_PI (): 1001001000011;
Event 2 Measure_PO (): ;
] Pattern 1;
] Test_Sequence 1 ;

[ Test_Sequence 2 ;
[ Pattern 1;
Event 1 Stim_PI (): 0100101011100;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Stim_PI (): .....1.......;
Event 2 Pulse (): "C2"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Test_Sequence 2 ;

[ Define_Sequence Hamilton (test) ;


[ Pattern 1;
Event 1 Put_Stim_PI (): ;
Event 2 Measure_PO (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
[ Keyed_Data ;
PI_STIMS=2
STIM=DELETE
] Keyed_Data ;
Event 2 Pulse (): "C3"=-;
Event 3 Measure_PO (): ;
] Pattern 2;
] Define_Sequence Hamilton ;

In the example of Figure 6-3, the conditional Put_Stim_PI event is discarded after
sequence matching, but the conditional (i.e., the PI_STIMS=n statement) is useful also in
cases where the vector is to be kept, as illustrated in the next example.

The PI_STIMS=n and LATCH_STIMS=n statements can be attached to any event, and are
not limited to Stim_PI and stim latch events. In the example of Figure 6-4, an entire chunk
of events consisting of a stim, a measure and a pulse is made conditional based upon
whether the automatic test sequence contains one, two, or three Stim_PI events.

Figure 6-4 Illustration of the PI_STIMS=n control statement

[ Define_Sequence McPhee (test);


[ Pattern 1 (pattern_type = static);
Event 1.1 Scan_Load (): ;
] Pattern 1;
[ Pattern 2 (pattern_type = static);
Event 2.1 Put_Stim_PI (): ;
[ Keyed_Data ;
PI_STIMS=3
] Keyed_Data ;
Event 2.2 Measure_PO (): ;
[ Keyed_Data ;
PI_STIMS=3
] Keyed_Data ;
] Pattern 2;
[ Pattern 3 (pattern_type = static);
Event 3.1 Pulse ():
"Pin.f.l.ptnmatch.nl.C4"=+ ;
[ Keyed_Data ;
PI_STIMS=3
] Keyed_Data ;
] Pattern 3;
[ Pattern 4 (pattern_type = static);
Event 4.1 Put_Stim_PI (): ;
[ Keyed_Data ;
PI_STIMS=2
] Keyed_Data ;
Event 4.2 Measure_PO (): ;
[ Keyed_Data ;
PI_STIMS=2
] Keyed_Data ;
] Pattern 4;
[ Pattern 5 (pattern_type = static);
Event 5.1 Pulse ():
"Pin.f.l.ptnmatch.nl.C5"=+ ;
[ Keyed_Data ;
PI_STIMS=2
] Keyed_Data ;
] Pattern 5;
[ Pattern 6 (pattern_type = static);
Event 6.1 Put_Stim_PI (): ;
Event 6.2 Measure_PO (): ;
] Pattern 6;
[ Pattern 7 (pattern_type = static);
Event 7.1 Pulse ():
"Pin.f.l.ptnmatch.nl.C6"=+ ;
] Pattern 7;
[ Pattern 8 (pattern_type = static);
Event 8.1 Skewed_Scan_Unload (): ;
] Pattern 8;
] Define_Sequence McPhee ;

TG=keyed data

These statements control which patterns the test generator ignores. There are three
statements of this type:
TG=IGNORE
TG=IGNORE_FIRST
TG=IGNORE_LAST

The statement TG=IGNORE is used for a single pattern, while TG=IGNORE_FIRST and
TG=IGNORE_LAST statements allow multiple patterns to be ignored. To ignore multiple
patterns, specify TG=IGNORE_FIRST for the first pattern to be ignored, add other patterns
after this pattern, and specify TG=IGNORE_LAST for the last pattern to be ignored. All the
patterns starting from the one with TG=IGNORE_FIRST through the one with
TG=IGNORE_LAST will be ignored by the test generator.

Using a hypothetical example, the following snippet shows the possible use of the
TG=IGNORE statement in a user sequence:
[ Define_Sequence Jones (test);
[ Pattern 1;
Event 1 Scan_Load (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
] Pattern 2;
[ Pattern 3;
[Keyed_Data ;
TG=IGNORE
] Keyed_Data ;
Event 1 Pulse () : "TCK"=-;
] Pattern 3;
[ Pattern 4;
Event 1 Pulse () : "C3"=-;
Event 2 Measure_PO () : ;
] Pattern 4;
] Define_Sequence Jones ;

The test generator discards Event 1 in Pattern 3 because it has the TG=IGNORE statement
specified for it and considers the user sequence as follows:
[ Define_Sequence Jones (test) ;
[ Pattern 1;
Event 1 Scan_Load (): ;
] Pattern 1;
[ Pattern 2;
Event 1 Put_Stim_PI (): ;
] Pattern 2;
[ Pattern 4;
Event 1 Pulse () : "C3"=-;
Event 2 Measure_PO () : ;
] Pattern 4;
] Define_Sequence Jones ;
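
Returning to the multi-pattern variants, the following hypothetical sketch shows how TG=IGNORE_FIRST and TG=IGNORE_LAST could bracket a group of patterns so that the test generator ignores all of them. The pattern numbers and the TRST pin name are invented for illustration; TCK reuses the pin from the previous example.

[ Pattern 3;
[ Keyed_Data ;
TG=IGNORE_FIRST
] Keyed_Data ;
Event 1 Pulse () : "TCK"=-;
] Pattern 3;
[ Pattern 4;
[ Keyed_Data ;
TG=IGNORE_LAST
] Keyed_Data ;
Event 1 Pulse () : "TRST"=-;
] Pattern 4;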

Sequences with On-Product Clock Generation


There are some unique considerations involved when writing a test sequence if you have
sectioned off some logic, such as clock generation or BIST control, with cut points. Although
WRPT and BIST may be applied to a design with OPC logic (cut points), the treatment of
pseudo primary inputs is explained in terms of a stored-pattern test sequence definition.

The following figure shows a design with a clock generated by OPC logic.

Figure 6-5 GSD Circuit with Clock Generated by OPC Logic

[Figure not reproduced. Labels visible in the original: C1 (-EC), GSD Circuit, OPCG Logic, LATCH M, LATCH I, master, slave, C2 (+SC), opcgout. The OPCG logic is a frequency divider built from a master/slave latch pair clocked by primary input C2; it produces the derived clock opcgout, which drives the GSD circuit.]

An example of a static stored-pattern test sequence definition for the design of Figure 6-5 is
shown in Figure 6-6. For ease of reference the standard Encounter Test numbering
convention for patterns and events has not been followed here. Test data import ignores these
numbers anyway, so the example is valid.

Figure 6-6 A stored-pattern test sequence definition for a design with OPC logic

TBDpatt_Format (mode=node,model_entity_form=name);
# Mode Test sequence for opcgckt
# Created by TDA 11/08/96
# Normal Load and Unload
[ Define_Sequence TDA_opcgckt_test1 (test);
[ Pattern 1;
Event 1 Force ():
"Block.f.l.opcgckt.nl.opcg1.master"=0 ;
] Pattern 1;
[ Pattern 2;
Event 2 Scan_Load (): ;
] Pattern 2;
[ Pattern 3;
Event 3 Put_Stim_PI (): ;
Event 4 Measure_PO (): ;
] Pattern 3;
[ Pattern 4;
Event 5 Pulse ():
"Pin.f.l.opcgckt.nl.C2"=-;
] Pattern 4;
[ Pattern 5;
Event 6 Pulse ():
"Pin.f.l.opcgckt.nl.C2"=-;
Event 7 Pulse_Pseudo_PI ():
"opcgoutPPI"=+ ;
] Pattern 5;
[ Pattern 6;
Event 8 Scan_Unload (): ;
] Pattern 6;
] Define_Sequence TDA_opcgckt_test1;

Event 1 is a Force event which tells a simulator that the given net (usually a latch, as in the
present case) should be at the specified state. This event is needed here because most
simulators start out by assuming an unknown initial state in all latches, and this example
design does not have a homing sequence. In the present case, the lack of a homing sequence
is not a concern, since the static behavior of the design is the same no matter which state the
frequency divider starts in. The initial state affects the phase relationship between the input
clock pulses and the derived clock pulses on net opcgout. For static logic tests this does not
matter. No Force event is needed to initialize the other OPCG logic latch, because it is flushed
in the stability state and therefore will obtain its initial value through normal simulation.

The Force event could have been placed in a setup sequence, but since no setup sequence
was defined, it is more convenient, and does no harm to put it inside the test sequence as
shown.

Scan_Load (Event 2) is usually the first event in an automatic test sequence for a scan
design. If you have an LSSD design with A and B scan clocks, you will do well to define a
second sequence identical to this one except with a Skewed_Scan_Load in place of this
Scan_Load event, and supply both sequences to the test generator. Encounter Test will use
the sequence definition that most closely matches the generated test. The latch values from
the generated sequence are moved into the event in the sequence definition; if the
Scan_Load types do not match, the test coverage could be adversely affected.

Events 2 and 3 together comprise what is commonly thought of as the “test vector,” the latch
and primary input values required for the test. Event 3 is actually a placeholder telling
Encounter Test where to put the Stim_PI portion of the test vector. Besides acting as a
placeholder, the Put_Stim_PI event allows the specification of primary input values that will
override any values specified by the test generator. When the test sequence is constructed
from this sequence definition and the generated test vector, the Put_Stim_PI event will be
replaced by the Stim_PI event.

Event 4 is an instruction to measure the primary outputs. When creating its own test
sequences, Encounter Test has algorithms that determine the sequential relationship
between clocks, primary input stims, primary output measures, and scan operations.
However, when the user specifies the sequence, Encounter Test must be told where the stims
and measures go.

Events 5 and 6 produce two cycles of the clock primary input, which cause the frequency
divider (the OPC logic in this design) to produce a single pulse on the derived clock, opcgout,
which is identified as a “cut point” in the design source or mode definition file.

Event 7 is the pulse on the derived clock, which is referred to by the name of its corresponding
pseudo PI. For this example the name opcgoutPPI was chosen. This event is used when
verifying the sequence, and for Encounter Test simulation of the tests. Note that the
automatically generated sequences will be in terms of the pseudo PIs, so this event is also
used for sequence matching when multiple sequence definitions are given to the test
generation application.

Event 8, the final step in this sequence, tells Encounter Test to scan out the latch values. As
with the Scan_Load event, when you are using LSSD with A and B scan clocks, there are
two forms of this event, the other one being Skewed_Scan_Unload. The choice of which
measure latch event to use is related to the clocking, so if the wrong event is used, the test
coverage is likely to be dramatically reduced. As with the Scan_Load event (see the
discussion of Event 2 above), you may want to define two sequences, one with each measure
latch event. Combined with the two different stim latch events, you will often have four similar
sequence definitions for an LSSD design. In contrast, a single clock edge-triggered design
may require only a single static test sequence for good fault coverage.

Setup Sequences
Encounter Test automatically creates a setup sequence for WRPT and LBIST to hold the
weight information (for WRPT), to apply the optional initializing channel scan from PRPG (for
LBIST), and to apply any lineheld primary input values that are specified.

If you code your own setup sequence, Encounter Test will augment it with the above
information if needed. You would need to code a setup sequence only if there are additional
stimuli that need to be applied, or some of the “lineholds” are to pseudo primary inputs.
Neither of these reasons is likely to exist unless you are processing in a test mode with cut
points specified.

It should be helpful to think of the setup sequence as an initialization step for the WRPT or
LBIST test loop.

In some cases of processing with cut points, especially with an on-board BIST controller, it
may also be necessary to code a pseudo PI event (Stim_PPI) in the setup sequence just to
remind Encounter Test that the corresponding cut point nets were initialized to that state in
the modeinit sequence. For example, it is possible, especially with an on-product BIST
controller, that some control signal represented as a PPI is set in the mode initialization
sequence and then left in that state to begin the test sequence. If this PPI does not have a
stability value (that is, it is not attributed as a clock, TI, nor TC), Encounter Test will assume
the test sequence starts with the PPI at X unless told otherwise.

When a setup sequence is used, it is defined as type setup and then referenced in the test
sequence definition by the following construct:
[ Define_Sequence setup_seq_name (setup) ;
.
.
.
] Define_Sequence setup_seq_name;
[ Define_Sequence test_seq_name (test) ;
[ SetupSeq=setup_seq_name ];
.
.
.

If you import the setup sequence definition, but do not refer to it by the SetupSeq object
within a test sequence, Encounter Test will ignore it. The SetupSeq object is required within
the test sequence to tell Encounter Test to use the named sequence definition as the base
for the setup sequence that will precede the loop test sequence in the test generation output
file for WRPT or LBIST.
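
Combining the Stim_PPI reminder described above with the SetupSeq construct, a hypothetical sketch might look like the following. The sequence names, pattern contents, and the bist_enablePPI pseudo primary input are invented for illustration; only the overall structure follows the construct shown above.

[ Define_Sequence my_setup (setup) ;
[ Pattern 1;
Event 1 Stim_PPI (): "bist_enablePPI"=1;
] Pattern 1;
] Define_Sequence my_setup;
[ Define_Sequence my_test (test) ;
[ SetupSeq=my_setup ];
[ Pattern 1;
Event 1 Scan_Load (): ;
Event 2 Measure_PO (): ;
] Pattern 1;
] Define_Sequence my_test;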

Endup Sequences
At the end of a BIST test procedure, it may be necessary to de-activate a phase-locked-loop
or cycle the BIST controller into some special state, or quiesce the design in some other
fashion. If this is the case, you may be able to provide the stimuli to do this in the signature
observation sequence. On the other hand, you may choose to separate this operation from
the MISR observation sequence, or it may be necessary to apply this quiescing sequence
between two concatenated test sequence loops, without an intervening signature
observation.

The quiescing, or endup sequence can be coded as a separate sequence definition and
imported along with your test sequence definitions. Encounter Test will use the endup
sequence only if so directed by the presence of an EndupSeq object within a test sequence
definition. This is specified in a similar manner as a setup sequence:
[ Define_Sequence endup_seq_name (endup) ;
.
.
.
] Define_Sequence endup_seq_name;
[ Define_Sequence test_seq_name (test) ;
[ EndupSeq=endup_seq_name ];
.
.
.

This object appears before the first pattern of the test sequence definition, but it gets applied
at the end of the test sequence by means of an Apply:endup_seq_name; event that
Encounter Test will automatically insert after the loop.

Specifying Linehold Information in a Test Sequence


Linehold information is usually supplied to a test generation run through a linehold file,
specified as one of the inputs to the run. There are some cases where it is important that a
given test sequence always be used with some subset of invariant lineholds. A case in point
is linehold overrides of a fixed-value latch which is part of an LBIST controller or clock
generation logic. In this case, the latch is separated from the logic under test by cut points.
Assuming this lineheld fixed-value latch influences the value of the cut point nets, the
sequence verification program must be run with the same value in this latch as will be
specified when the sequence is used in test generation. To ensure that the test generation
run time override (linehold) value of this latch is the same as the value assumed by the
sequence verifier, the linehold for such a latch should be placed inside the test sequence.

We note that in the case of a “lineheld” primary input, the value can be specified directly by
means of a Stim_PI event in the sequence definition. Also, lineholds are specified because
the verification and the test generation/simulation are performed in two separate steps, and
the need for the verification step is unique to the use of cut points and pseudo primary inputs.
Therefore, it is recommended that the use of linehold specifications within a test sequence
definition be limited to fixed value latches in OPC logic, but Encounter Test places no such
restriction on its use, in case the user should have some reason to make more extensive use
of it.

The linehold object is placed before the first pattern in the sequence definition, with the
following TBDpatt syntax:
[ Define_Sequence Example_Name (test);
[ Lineholds;
Netname_A=1;
Netname_B=0;
] Lineholds;
[ Pattern ;
.
.
.

Lineholds specified in this manner through a sequence definition are limited to primary inputs
and FLH (fixed-value) latches.

Lineholds specified in a sequence definition override any conflicting linehold specified in the
model source, test mode definition, or linehold file.

This linehold object is valid in a test sequence only, not in the setup sequence.

Using Oscillator Pins in a Sequence


This will be explained by an example where the scan operation is under control of a free-
running oscillator. We assume an LBIST design, so that no external signals have to be
applied during the scan operation other than the oscillator waveform. The following clock
generation design is assumed:

Figure 6-7 Clock Generation Logic

The scan sequence definitions, which must be provided manually, are shown in Figure 6-8 on
page 229.

The cut point nets, A1_Clk, A2_Clk, B1_Clk, and B2_Clk of Figure 6-7 are defined by either
the model source or mode definition statements as pseudo primary inputs A_Clk and B_Clk.
Thus, the sequence definitions in Figure 6-8 make reference to these PPI (pseudo primary
input) names.

A shift gate signal named Scan_Enable, with a test function attribute of -SE, is assumed to
be gating the scan data path. This signal is not part of the clock circuitry of Figure 6-7.

The free-running oscillator, “Osc_In” in Figure 6-7, is assumed to have a test function of oTI,
indicating that it is “tied” to an oscillator. The mode initialization sequence must have applied
a Start_Osc event on this pin. Synchronization of the events in this sequence definition with
the oscillator is by means of Wait_Osc events.

The Wait_Osc event specifies how many oscillator cycles must have elapsed since the
previous Wait_Osc event. In our example, when the scan operation is not being executed,
the clock generation circuitry of Figure 6-7 is dormant, and has no effect upon the application
of the test patterns being applied between the scan operations. Thus, there need be no
Wait_Osc events in that portion of the test data between scan operations.

When the scan operation is started, the Scan_Enable signal is first set to its scan state of
0. Then, in preparation for turning on the Scan_Ctl signal to initiate the scan operation, a
Wait_Osc (Cycles=0) event is specified. This tells us that we must start paying attention to
the oscillator. “Paying attention” means simply to start counting oscillator pulses so that
subsequent events can be synchronized with the oscillator. Subsequent Wait_Osc events
will tell how many oscillator cycles should have elapsed at the given point in the test data.
Thus, the initial Wait_Osc event with Cycles=0 serves only as a signal to turn on one's
stopwatch, so to speak.

The next pattern turns on the Scan_Ctl input. When an oscillator is active (meaning that a
Wait_Osc has been encountered), any events following the Wait_Osc signal (even though
possibly in a different “pattern”) are assumed to be applied immediately within the first
oscillator cycle following the Wait_Osc event.

The scansequence, OPCKTscanSeq, is executed next. In this example, the scansequence
consists of a single pattern that defines two pseudo PI events. There are no external primary
input signals applied here, since the entire scan operation is effected by the train of pulses on
the oscillator after the Scan_Ctl signal was raised. This scansequence serves to tell
Encounter Test programs how the scan operation works with respect to the pseudo PIs, which
are treated like real primary inputs.

After the scansequence has completed (eight repetitions in our example), the scansectionexit
sequence, OPCKTscanSectExit, is executed. This sequence serves two purposes:
■ Provides a place to specify the Wait_Osc event which tells how many oscillator cycles
should elapse before the tester can assume that the scan operation is complete.
■ This is where the Scan_Ctl signal is returned to its normal state of 0.

Note from the timing diagram of Figure 6-7 that the scan operation automatically terminates
and the clock generation logic goes dormant four oscillator cycles after the raising of the
Scan_Ctl signal. Thus, the restoring of the Scan_Ctl signal to the 0 state does not have to
be timed, and can be done at any time following the four oscillator pulses and before it is time
for the next scan operation. The Wait_Osc (Cycles=4,Off) event not only causes the four
oscillator cycles to elapse before proceeding, but the “Off” parameter also signifies that
subsequent events are not being synchronized with the oscillator; the oscillator is considered
to be inactive (i.e. ignored but still running) at this point, and will remain inactive until the next
Wait_Osc event is encountered.

When an oscillator is active, the intervening events between Wait_Osc events are applied
as follows:
■ All external events (e.g., Stim_PI, Pulse, Measure_PO) are applied immediately.
■ All pseudo PI events are assumed to occur in their order of appearance in the sequence,
and be complete by the number of cycles in the following Wait_Osc event.
■ No other external activity occurs until the time specified in the following Wait_Osc event.

Encounter Test will not rearrange the order of the events in the sequence definitions. This
means that when composing the sequences, the user should adhere to the above guideline.

To understand how Encounter Test handles Wait_Osc events, it helps to think in terms of an
oscillator signal being either non-existent, running, or active. An oscillator comes into
existence when it is connected to a pin by a Start_Osc event. A Stop_Osc event effectively
kills the oscillator, returning the pin to a static level. The oscillator is said to be running as long
as it “exists”--that is, as long as it is connected to the pin. But connecting a pin to an oscillator
does not automatically mean that the design is responding to the oscillator. There may be
(and usually are) some enabling signals, such as Scan_Ctl in our example, that control the
effect of the oscillator upon the logic. You can think of the oscillator as being “active” whenever
it is causing design activity. While the oscillator is active, the design is usually running
autonomously, perhaps communicating to the outside world through asynchronous
interfaces.

Encounter Test does not understand asynchronous interfaces, and continually simulating a
free-running oscillator signal would be prohibitively expensive. Therefore, Encounter Test
supports oscillators only if they affect the design through cut points, and all Encounter Test
programs except for Verify OPC Sequences treat those cut points as primary inputs, ignoring
the oscillator completely. Still, consideration has to be given to the relationship between the
oscillator and other primary input stimuli so that the tester knows how to apply the patterns
and Verify On Product Clock Sequences knows how to simulate the sequence to assure that
it actually produces predictable results and that the cut point (pseudo PI) information is
correctly specified for the mainstream Encounter Test processes.

When the first Wait_Osc event is encountered, the specified pin must have previously been
connected to an oscillator by a Start_Osc event. The first Wait_Osc is a sign that the
oscillator is about to become active.

With the oscillator considered to be in an active condition, subsequent Wait_Osc events
specify the number of oscillator cycles since the previous Wait_Osc event that must transpire
before proceeding any further in the pattern sequence.

When the pattern sequence reaches a phase where the design will no longer be “listening”
to the oscillator signal, and primary input stimuli can be applied without any regard to the
oscillator, the Wait_Osc event provides a flag, Off. The off attribute on a Wait_Osc event is
a sign that the oscillator is no longer causing any significant design activity, and is considered
to be in an inactive state after the specified number of cycles.

If two or more oscillators are active simultaneously, Encounter Test assumes that they are
controlling independent portions of the design.

Figure 6-8 A scanop sequence definition using a free-running oscillator

TBDpatt_Format (mode=node, model_entity_form=name);

[ Define_Sequence OPCKTscanPre 1 (scanprecond);


[ Pattern 1.1 (pattern_type = static);
Event 1.1.1 Stim_PI (): "Pin.f.l.opckt.nl.Scan_Enable"=0;
Event 1.1.2 Wait_Osc (Cycles=0): "Pin.f.l.opckt.nl.Osc_In";
] Pattern 1.1;
[ Pattern 1.2 (pattern_type = static);
Event 1.2.1 Stim_PI (): "Pin.f.l.opckt.nl.Scan_Ctl"=1;
] Pattern 1.2;
] Define_Sequence OPCKTscanPre 1;

[ Define_Sequence OPCKTscanSeq 2 (scansequence) (repeat=8);


[ Pattern 2.1 (pattern_type = static);
Event 2.1.1 Pulse_PPI (): "A_Clk"=+;
Event 2.1.2 Pulse_PPI (): "B_Clk"=+;
] Pattern 2.1;
] Define_Sequence OPCKTscanSeq 2;

[ Define_Sequence OPCKTscanSectExit 3 (scansectionexit);


[ Pattern 3.1 (pattern_type = static);
Event 3.1.1 Wait_Osc (Cycles=4,Off): "Pin.f.l.opckt.nl.Osc_In";
] Pattern 3.1;
[ Pattern 3.2 (pattern_type = static);
Event 3.2.1 Stim_PI (): "Pin.f.l.opckt.nl.Scan_Ctl"=0;
] Pattern 3.2;
] Define_Sequence OPCKTscanSectExit 3;

[ Define_Sequence OPCKTscanSect 4 (scansection);


[ Pattern 4.1 (pattern_type = static);
Event 4.1.1 Apply (): OPCKTscanPre;
Event 4.1.2 Apply (): OPCKTscanSeq;
Event 4.1.3 Apply (): OPCKTscanSectExit;
]Pattern 4.1;
]Define_Sequence OPCKTscanSect 4;

[ Define_Sequence OPCKTscanOpSeq 5 (scanop);


[ Pattern 5.1 (pattern_type = static);
Event 5.1.1 Apply (): OPCKTscanSect;
] Pattern 5.1;
] Define_Sequence OPCKTscanOpSeq 5;

Importing Test Sequences


When specified by the user, test mode initialization and scan sequences must be provided as
input to Build Test Mode. Test and setup sequences must be imported with Read Sequence
Definition after the test mode has been built. When running the import test data function, give
it the name of the file which you created as described above. See “Reading Sequence
Definition Data (TBDseq)” on page 252 for further details on this procedure.

Ignoremeasures File
An ignoremeasures file is used to specify measure points to be ignored during test generation
or fault simulation. Measures are suppressed for all latches in the file by assigning a measure
X value instead of measure 1 or measure 0. Specify the keyword value
ignoremeasures=ignoremeasure_filename to utilize the ignoremeasures file.
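
For example, a sketch of a test generation invocation that uses such a file might look like the following. The create_logic_tests command name is an assumption for illustration; confirm which commands accept the ignoremeasures keyword in the Command Line Reference.

create_logic_tests workdir=<directory> testmode=<modename> experiment=<name> ignoremeasures=./ignore_list.txt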

The format of the file is a list of block or pin names, one name per line. The names may be
specified using either full proper form (Pin.f.l.topcell.nl.hier_name) or short form (hier_name).

You can add comments in the ignoremeasures file using any of the following characters:

#

//

/* <comment block> */

The following is an example of an ignoremeasures file:


#sample ignoremeasures file
Block.f.l.DLX_TOP.nl.DLX_CORE.C_REG.STORAGE.Q_N64_reg_0.I0.dff_primitive
DLX_CORE.C_REG.STORAGE.Q_N59_reg_5.I0.dff_primitive
Pin.f.l.DLX_TOP.DLX_CHIPTOP_DATA[31]
DLX_TOP.DLX_CHIPTOP_DATA[30]

The Scan_Unload and/or the Measure PO for the specified latches/POs will be set to X.

The following are potential results when resimulating with an ignoremeasures file:
■ Miscompare messages are produced since the miscompare checking is done before the
measures are X’d out.
■ If a measure is X’d out, the fault coverage is adjusted to remove the fault(s) detected by
that pattern.

Keepmeasures file
A keepmeasures file is used to specify measure points to be retained during test generation
or fault simulation. This file is useful if the number of measures to be ignored is greater than
the number to be kept.

All measures except those specified in the file are suppressed by assigning a measure X
value to all latches. The keepmeasures file is utilized by specifying the keyword value
keepmeasures=keepmeasure_filename.
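
Analogous to the ignoremeasures example above, a hypothetical invocation might be (command name assumed, as above):

create_logic_tests workdir=<directory> testmode=<modename> experiment=<name> keepmeasures=./keep_list.txt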

The format of the file is a list of block or pin names, one name per line. The names may be
specified using either full proper form (Pin.f.l.topcell.nl.hier_name) or short form (hier_name).

You can add comments in the keepmeasures file using any of the following characters:

#

//

/* <comment block> */

The following is an example of a keepmeasures file:


#sample keepmeasures file
Block.f.l.DLX_TOP.nl.DLX_CORE.C_REG.STORAGE.Q_N64_reg_0.I0.dff_primitive
DLX_CORE.C_REG.STORAGE.Q_N59_reg_5.I0.dff_primitive
Pin.f.l.DLX_TOP.DLX_CHIPTOP_DATA[31]
DLX_TOP.DLX_CHIPTOP_DATA[30]

The Scan_Unload and/or the Measure PO for the specified latches/POs are retained.


7
Utilities and Test Vector Data

This chapter discusses the commands and concepts related to test patterns and the ATPG
process.

The chapter covers the following topics:


■ Test Pattern Utilities
❑ “Committing Tests” on page 233
❑ “Performing on Uncommitted Tests and Committing Test Data” on page 234
❑ “Deleting Committed Tests” on page 235
❑ “Deleting a Range of Tests” on page 236
❑ “Deleting an Experiment” on page 238
■ “Encounter Test Vector Data” on page 238

Committing Tests
All test generation runs are made as uncommitted tests in a test mode. Commit Tests moves
the uncommitted test results into the committed vectors test data for a test mode. Refer to
“Performing on Uncommitted Tests and Committing Test Data” on page 234 for more detailed
information about the overall test generation processing methodology.

To perform Commit Tests using the graphical interface, refer to “Commit Tests” in the
Graphical User Interface Reference.

To perform Commit Tests using command line, refer to “commit_tests” in the Command Line
Reference.

The syntax for the commit_tests command is given below:


commit_tests workdir=<directory> testmode=<modename> inexperiment=<name>

where:

■ workdir = name of the working directory


■ testmode= name of the testmode
■ inexperiment= name of the experiment to save

The most commonly used keyword for the commit_tests command is:
■ force=no/yes - To force uncommitted tests to be saved even if potential errors have
been detected. Some potential errors are:
❑ TSV was not run or detected severe errors
❑ Date/time indicates that the patterns were created before a previous save

Refer to “commit_tests” in the Command Line Reference for more information on the
keyword.
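
As an illustration only (the working directory, testmode, and experiment names below are invented), a typical invocation might be:

commit_tests workdir=./my_workdir testmode=FULLSCAN inexperiment=logic1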

Prerequisite Tasks
Complete the following tasks before committing tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
4. Run ATPG and create a pattern to be saved. Refer to “Invoking ATPG” on page 35 for
more information.

Performing on Uncommitted Tests and Committing Test Data


Encounter Test creates test data in the form of uncommitted tests. You can inspect the
uncommitted test results to decide whether it is worth keeping or that it should be discarded.
If the test results from an uncommitted test are deemed worth saving, there are two means
available for continuing additional test generation from where an uncommitted test left off.
■ You can append to this uncommitted test when you run test generation again.
■ You can commit (save) the test data.
Note: A committed test or experiment may not be reprocessed.

If you use the uncommitted test as input to another uncommitted test (that is, append to it) and
the results of the second uncommitted test are not satisfactory, both uncommitted tests will have
to be thrown away since they will have been combined into a single uncommitted test.

By committing an uncommitted test, you append the results of the uncommitted test to a
“committed vectors” set of test data for the test mode. The committed vectors for a test mode
is the data deemed worth sending to manufacturing. After an uncommitted test has been
committed, it is removed from the system and can no longer be manipulated independent
from the other committed vectors data. For future reference, the names of the committed tests
are saved with the committed test patterns.

After test data from one or more uncommitted test has been committed to the committed
vectors set of test data, any subsequent test generation uncommitted tests will target only
those faults left untested by the committed test patterns. To get the most benefit from this, it
is advisable to achieve acceptable uncommitted test results before beginning a new
uncommitted test. While it is possible to run many different test generation uncommitted tests
in parallel, if they are committed there will be many unnecessary test patterns included in the
committed vectors test set since the uncommitted tests may have generated overlapping test
patterns that test the same faults.

When uncommitted test results are committed to the committed vectors for a test mode, the
commit process checks whether it is allowed to give test credit for faults in any other test
modes that have been defined. This processing is referred to as cross mode fault mark-off.
This is a mechanism that allows a fault that was detected in one test mode to be considered
already tested when a different test mode is processed in which that fault is also active.
Sometimes cross mode fault mark-off is explicitly disabled, such as when two test modes are
targeting the same faults, but for different testers (and different chip manufacturers).

Deleting Committed Tests


Use the Delete Committed Tests command to
■ Remove all committed and uncommitted tests for a testmode
■ Remove individual experiments from a testmode if the experiment does not have any
saved fault statistics.
■ Reset the fault status for the specified testmode.

To perform Delete Committed Tests using the graphical interface, refer to "Delete Committed
Tests" in the Graphical User Interface Reference.

To perform Delete Committed Tests using command line, refer to “delete_committed_tests” in the
Command Line Reference.

The syntax for the delete_committed_tests command is given below:


delete_committed_tests workdir=<directory> testmode=<modename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode

The most commonly used keywords for the delete_committed_tests are:


■ keepuncommitted=no - Remove uncommitted experiments
■ gmexperiment=<name> - Name of the good machine experiment to remove from
committed test data.

Refer to “delete_committed_tests” in the Command Line Reference for more information on these keywords.
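
As a hypothetical example (names invented), the following removes the committed tests for a test mode and also removes uncommitted experiments:

delete_committed_tests workdir=./my_workdir testmode=FULLSCAN keepuncommitted=no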

Prerequisite Tasks
Complete the following tasks before deleting committed tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.
4. Commit test data. Refer to “Committing Tests” on page 233 for more information.

Deleting a Range of Tests


Use the delete_testrange command to remove one or more test vectors or a range of
test vectors from an experiment before committing the test data. The command removes the
tests or the range of tests specified for the testrange keyword. Refer to delete_testrange
in the Command Line Reference for more information.

After removing the tests from the experiment, resimulate the new set of tests in the following
conditions:
■ If the tests in the experiment are order dependent

November 2010 236 Product Version 10.1.100


Automatic Test Pattern Generation User Guide
Utilities and Test Vector Data

■ If fault status exists for the experiment. Resimulation is required to recalculate and reset
the fault status for the remaining tests.
Refer to “Simulating Vectors” on page 258 for more information.

The syntax for the delete_testrange command is given below:


delete_testrange workdir=<directory> testmode=<modename> inexperiment=<name>
testrange=<odometer>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ inexperiment= name of experiment from which to delete patterns
■ testrange= odometer value of the test patterns to delete. The values can be:
❑ all - process all test sequences
❑ odometer - process a single experiment, test section, tester loop, procedure, or
test sequence in the TBDbin file (for example 1.2.1.3)
❑ testNumber - process a single test in the set of vectors by specifying the relative
test number (for example 12)
❑ begin:end - process range of test vectors. The couplet specifies begin and end
odometers (such as 1.2.1.3.1:1.2.1.3.15) or relative test numbers (such as 1:10)
❑ begin: - process the vectors starting at the beginning odometer or relative test
number to the end of the set of test vectors.
❑ :end - process the test vectors starting at the beginning of the set of test vectors
and ending at the specified odometer or relative test number
❑ : - process the entire range of the set of test vectors (the same as testrange = all)
❑ A zero (0) value in any odometer field specifies all valid entities at that level of the
TBD hierarchy. For example, 1.1.3.0.1 indicates the first test sequence in each test
procedure.

Refer to delete_testrange in the Command Line Reference for more information on these
keywords.
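
For example (the working directory, testmode, and experiment names are invented), the following removes the range of test sequences 1.2.1.3.1 through 1.2.1.3.15 from an uncommitted experiment:

delete_testrange workdir=./my_workdir testmode=FULLSCAN inexperiment=logic1 testrange=1.2.1.3.1:1.2.1.3.15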

Deleting an Experiment
Use the delete_tests command to remove all data associated with the specified uncommitted
experiment(s) from the working directory.

To perform Delete Tests using the graphical interface, refer to "Delete Tests" in the Graphical
User Interface Reference.

To perform Delete Tests using command line, refer to “delete_tests” in the Command Line
Reference.

The syntax for the delete_tests command is given below:


delete_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of experiment to delete
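
For example (names invented):

delete_tests workdir=./my_workdir testmode=FULLSCAN experiment=logic1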

Encounter Test Vector Data


Encounter Test creates, stores, and processes test data in a compact binary format known
as TBDbin. The test data in TBDbin is in a hierarchical form which consists of the following
entities:

Figure 7-1 Hierarchy of TBDbin Output

Experiment
    Define_Sequence
        Timing_Data
        Pattern
            Event
    Test_Section
        Tester_Loop
            Test_Procedure
                Test_Sequence
                    Pattern
                        Event

If the TBDbin contains signature-based TBD data, the TBDbin hierarchy consists of the
following entities:
Experiment
    Test_Section
        Tester_Loop
            Test_Procedure
                Signature interval
                    Iteration 1
                    Iteration 2
                    Iteration 3
                    Iteration 4
                    Iteration 5
                Signature interval 2
                    Iteration 1
                    Iteration 2
                    .
                    .
                    .

Although these elements are not numbered in the TBDbin, individual elements can be
referenced in Encounter Test tools by using a hierarchical numbering scheme (commonly
referred to as an odometer) whose fields represent levels in the hierarchy shown in Figure 7-1
on page 238. So, as an example, 1.2.3.4.5.6.7 would be uncommitted test 1, test_section 2,
tester_loop 3, test_procedure 4, test_sequence 5, pattern 6, event 7.

The following gives a brief description of each of these levels of hierarchy. Additional
information about this and other data contained in the TBDbin is included in “Test Pattern
Data Overview” in the Test Pattern Data Reference.

Refer to “Viewing Test Data” in the Graphical User Interface Reference for details on
viewing and analyzing test data using the graphical interface.

Experiment
The experiment groups a set of test_sections generated from “one” test generation run.
The experiment name is specified at the invocation of the test generation application. The
“one” run may actually be multiple invocations of test generation appended to the same
experiment name. Refer to “Experiment” in the Test Pattern Data Reference for details.

Define_Sequence
This is basically a template of a test sequence that is used within this experiment. There will
be one of these for each unique test sequence template used in the experiment. The
sequence definition includes timing information for delay testing and a description (template)
of the patterns and events. The Define_Sequence currently is found only within
experiments that contain timed dynamic tests. Refer to “TBDpatt and TBDseqPatt Format” in
the Test Pattern Data Reference.

Timing_Data
This defines a tester cycle and the specific points within the tester cycle when certain events
(defined in terms of the event type and signal or pin id) are to occur. Refer to “AC Test
Application Objects” in the Test Pattern Data Reference for a more detailed description of
timing data. The sequence definition may contain several Timing_Data objects, each one
defining a different timing specification for testing the product. The choice of which
Timing_Data object to use is controlled by the Test_Sequence that uses this sequence
definition.

Test_Section
The test_section groups a set of tester_loops within an experiment that have the
same tester setup attributes (such as tester termination), and the same types of test (see
“General Types of Tests” on page 30). Refer to “Test_Section” in the Test Pattern Data
Reference for details.

Tester_Loop
The tester_loop groups a set of test_procedures that is guaranteed to start with the
design in an unknown state. Thus, the tester_loops can be applied independently at the
tester. Refer to “Tester_Loop” in the Test Pattern Data Reference for details.

Test_Procedure
The test_procedure is a set of test_sequences. Test_Procedures are sometimes
able to be applied independently at the tester. If the test_procedures within a
tester_loop cannot be applied independently, an attribute to that effect is included in the
containing tester_loop. Refer to “Test_Procedure” in the Test Pattern Data Reference
for details.

There are attributes in the test_procedure to inform manufacturing about the
effectiveness of the test_sequences it contains. This is a statement of the number of static
faults, dynamic faults, and driver/receiver objectives that were detected with these
test_sequences and the test coverage calculated up to this point in the TBDbin.

Test_Sequence
A test_sequence is a set of test patterns that are geared toward detecting a specific set of
faults. The patterns are intended to be applied in the given order. Refer to “Test_Sequence”
in the Test Pattern Data Reference for details.

Pattern
A pattern is a set of events that are applied at the tester in the specific order specified. Refer
to “Pattern” in the Test Pattern Data Reference for details.

Event
An event is the actual stimulus or response data (or any other data for which ordering is
important). The data within the event is represented as a string of logic values called a vector.
The pin to which the value applies is determined by the relative position of the logic value in
the vector. Encounter Test maintains a vector correspondence list, that indicates the order of
the primary inputs, primary outputs and latches as they appear in vectors.

There is a file created in Encounter Test called TBDvect. This file contains the vector
correspondence information. It can be edited if you need to change the order of the PIs, POs,
and/or latches in the vectors. Refer to “Event” in the Test Pattern Data Reference.

The following sections give information on providing test data to manufacturing and analyzing
test data.

Test Data for Manufacturing


There is no one vector format accepted by all component manufacturers. Therefore,
Encounter Test provides these vector formats that can be used for interfacing with various
manufacturers.
1. TBDbin - the compact binary format used by Encounter Test applications can also be
used as a manufacturing interface. The TBDbin, together with various other Encounter
Test files, is packaged into an Encounter Test Manufacturing Data (TMD) file that is
accepted by manufacturing sites.
2. Waveform Generation Language (WGL)** - this format can be translated to many tester
formats using Fluence Technologies, Inc. WGL In-Convert applications. Also, some
manufacturers have their own translators for WGL.
See “WGL Pattern Data Format” in the Test Pattern Data Reference for more detail.

3. Verilog** - this format is used by many manufacturers to drive a “sign off” simulation of
the vectors against the component model prior to fabrication. It can also be translated into
many tester formats by the manufacturers.
See “Verilog Pattern Data Format” in the Test Pattern Data Reference for more detail
on Verilog.
4. Encounter Test Pattern Data (TBDpatt) - The TBDpatt that is output from Encounter
Test contains all the same “manufacturing data” as the TBDbin, WGL, and Verilog forms.
The TBDpatt is an ASCII form of the TBDbin.
See “TBDpatt and TBDseqPatt Format” in the Test Pattern Data Reference for
additional information.
5. STIL data - Encounter Test exports test vectors in STIL format, conforming to the IEEE
Standard 1450-1999 Standard Test Interface Language (STIL), Version 1.0 standard.
See “STIL Pattern Data Format” in the Test Pattern Data Reference for additional
details.
Note: See “Writing and Reporting Test Data” on page 169 for information about creating
(exporting) these test data forms.

Test Vector Forms

Test vectors may be created in any of the following forms:

Static

Static tests are structured to detect static (DC) defects. The detection of DC defects does not
require transitions, so the design is expected to settle completely before the next event is
applied.

Dynamic

Dynamic tests are structured to create a rapid sequence of events. These events are found
inside the dynamic pattern and are identified as release events (launch the transition),
propagate events (propagate the transition to the capture latch) and capture events (capture
the transition). Within the dynamic pattern the events are expected to be applied in rapid
succession. The speed is at the discretion of the manufacturing site, since there are no
timings to describe how fast they can be applied.

Dynamic Timed

Dynamic timed tests are dynamic tests that have associated timings. In this case the speed
with which the events are expected to be applied at the tester is explicitly stated in the timing
data.

Tests for Sorting Product

It is possible to create tests that can be used to sort the product by applying the test at several
rates of speed. Not all manufacturers support this, so you must contact your manufacturer
before sending them test data with several different sets of timing data.

The manufacturing process variation curve is a normal distribution. Sorting the tests is done
by selecting a point on the process variation curve through the setting of the coefficients (best,
nominal, worst) of a linear combination.


8
Reading Test Data and Sequence
Definitions

This chapter discusses the commands and concepts to read test patterns and test
sequence definitions into Encounter Test.

The chapter covers the following topics:


■ “Reading Test Data” on page 245
■ “Reading Sequence Definition Data (TBDseq)” on page 252

Reading Test Data


Test vectors from other formats can be read into Encounter Test. Test vectors are read and
stored into the Encounter Test format called TBDbin. You can then use these test vectors in
TBDbin format as input to any Encounter Test command that takes existing experiments.

You can read from the following types of formats:


■ Encounter Test Pattern Data (TBDpatt)
■ Standard Test Interface Language (STIL)
■ Extended Value Change Dump (EVCD) file

To read Encounter Test Pattern Data using the graphical interface, refer to “Read Vectors” in
the Graphical User Interface Reference.

To read Encounter Test Pattern Data using command lines, refer to “read_vectors” in the
Command Line Reference.

The syntax for the read_vectors command is given below:


read_vectors workdir=<directory> testmode=<modename> experiment=<name>
language=<type> importfile=<filename>

where:

■ workdir = name of the working directory


■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the output experiment resulting from simulation
■ language= type of patterns to be imported
■ importfile= location of patterns to be imported

The most commonly used keywords for the read_vectors command are:
■ language= stil|tbdpatt|evcd - Type of language in which the patterns are being
read
■ importfile=STDIN|<infilename> - Allows a filename or piping of data
■ uniformsequences=no|yes - STIL import option to create test procedures with
uniform clocking sequences. Default is no.
■ identifyscantest=no|yes - STIL import option to identify a scan integrity test, if
one exists.

Refer to “read_vectors” in the Command Line Reference for more information on these
keywords.
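For example, to read a STIL pattern file into an existing test mode, an invocation might look like the following; the working directory, test mode, experiment, and file names shown here are hypothetical:

read_vectors workdir=./testwork testmode=FULLSCAN experiment=imported_stil
language=stil importfile=vendor_patterns.stil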

Prerequisite Tasks
Complete the following tasks before running Read Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Have the test patterns in the supported format.

Read Vectors Restrictions


The following restrictions apply to conversion of all test data formats:
■ TBDpatt that contains Load or Unload events, most likely created by Convert Vectors
for Core Tests, cannot be read.

■ Conversion of vectors produced from designs with grandparent test modes is not
supported. Refer to "Building Multiple Test Modes" in the Modeling User Guide for
related information.

Reading Encounter Test Pattern Data (TBDpatt)


You can read TBDpatt files into Encounter Test. TBDpatt data files produced by writing test
data can be edited and resimulated by Encounter Test. This gives you the ability to perform
uncommitted test with test vectors as you work to achieve additional fault coverage on a
design. For information on how to create TBDpatt format data, refer to “Writing and
Reporting Test Data” on page 169.

TBDpatt is the recommended format for entering manually generated test vectors into
Encounter Test for the following reasons:
■ TBDpatt format translates directly into TBDbin format
The probability of information loss is less when translating between TBDpatt and
TBDbin formats.
■ TBDpatt format can be easily created
You can create a TBDpatt file for input to Encounter Test by first generating test vectors
(even just random ones) and then converting the resulting TBDbin file into a TBDpatt
file. You can then edit the TBDpatt file to reflect the desired primary input and latch
stimulus values.

To read Encounter Test Pattern Data using the graphical interface, refer to “Read Vectors” in
the Graphical User Interface Reference.

To read Encounter Test Pattern Data using command lines, refer to “read_vectors” in the
Command Line Reference.

Encounter Test Pattern Data Output Files


Encounter Test Pattern Data stores test vectors in an uncommitted vectors file with the format
TBDpatt.testmode.experiment. You must specify an uncommitted test name, which
will identify the read test vectors.

Reading Standard Test Interface Language (STIL)


Encounter Test reads data that conforms to the IEEE 1450-1999 STIL, Version 1.0 standard.
The TBDbin produced by STIL can be used by any Encounter Test simulation or conversion
tool that accepts TBDbin format.

Initial support for STIL that conforms to the IEEE 1450.1 standard is available. Refer to
“Support for STIL Standard 1450.1.”

Encounter Test reads, parses, and performs semantic checks on the constructs found in the
input STIL file in addition to validating structural references and test vector data.

For information on Encounter Test STIL pattern data, refer to the following sections of the Test
Pattern Data Reference.
■ “STIL Pattern Data Format”
■ Appendix E, “STIL Pattern Data Examples”

Support for STIL Standard 1450.1


Encounter Test currently supports reading the IEEE P1450.1 standard constructs as follows:
■ The WFCmap construct within the Signals Block
■ X (cross-reference) statements
■ UserKeywords statement extension
■ resource_id tags (1450.3)
■ Parse-only support for the following constructs:

ActiveScanChains, AllNames, CellIn, CellOut, Constant, Cycle, Data, Design, E,
Else, Enumeration, Enumerations, Environment, Equivalent, Extend, Fixed,
FileFormat, FileReference, FileType, FileVersion, Format, If, Independent,
InitialValue, Instance, Iteration, InheritEnvironment, InheritNameMap, Integer,
IntegerConstant, IntegerEnum, Lockstep, NameMaps, Offset, ParallelPatList,
PatSet, PatternOffset, Real, ScanChainGroups, ScanEnable, SignalVariable,
SignalVariableEnum, SyncStart, TagOffset, Type, Usage, Values, Variables,
Version, Wait, While

STIL Restrictions
It is strongly recommended that the STIL data you read be in an Encounter Test-
generated file, produced by “Writing Standard Test Interface Language (STIL)” on page 179.
The results for STIL data generated outside of Encounter Test are not guaranteed.

STIL scan protocol must match the scan procedures automatically generated by Encounter
Test. Custom Scan Protocol procedures are not supported. If you do use custom scan
protocol, we recommend that the read tests be simulated prior to sending data to the tester.
Refer to “Custom Scan Sequences” in the Modeling User Guide for more information.

Encounter Test does not support reading STIL data for designs with scan pipelines in
compression testmodes.

Identifying Scan Tests in STIL Vectors


A sequence is a scan test if the following conditions are present:
1. The first event in the sequence is the Scan_Load event.
2. The Scan_Load event loads all scan strings with alternating pairs of 1’s and 0’s. Note
that this may start at any point (for example, 00110011..., 011001100..., 11001100..., or
1011001100...).
3. There may be optional Stim_PI and Measure_PO events after the first event and before
the last event.
4. If there are any pulse events after the first event and before the last event, these pulse
events may only pulse clocks flagged as scan clocks.
5. The last event in the sequence is the Scan_Unload event. The Scan_Unload event
measures alternating pairs on each scan chain.

Identifying Mode Initialization Sequences in STIL Vectors


The read_vectors command, when working on STIL input data, compares the initial
patterns in the input STIL data against the mode initialization sequence as identified while
creating the test mode.

This comparison helps identify the mode initialization sequence even when the end of the
sequence is merged with the first real test pattern in the STIL data.
Note: The read_vectors command compares only the events directly controlled by the
tester. Internal events in the mode initialization sequence, such as the stimulation of
pseudo-primary inputs or the pulsing of pseudo-primary input clocks, will be ignored in this
comparison.

If the first few STIL vectors match the mode initialization sequence, then read_vectors
replaces the matching vectors with the mode initialization sequence identified for this test
mode.

This replacement enables the resulting TBDbin (binary form of the TBDpatt) to correctly identify
any internal events associated with the mode initialization. This enables read_vectors to
support the automatic creation of internal events for the mode initialization sequence, but not
for normal test patterns.

If the first few patterns in the STIL data do not match the mode initialization sequence
identified in the test mode, then the read_vectors command explicitly adds a mode
initialization sequence from the TBDseq file to the output patterns before converting any of
the STIL vectors. This makes sure that the mode initialization sequence occurs before the
patterns in the STIL data in the resulting test pattern file.

As the mode initialization sequence is taken from the TBDseq file, this comparison will also
correctly identify it as a mode initialization sequence in the resulting TBDbin file. When you
resimulate the patterns, the simulator will not insert another copy of the mode initialization
sequence.
Note: The read_vectors command does not identify a mode initialization sequence that
is functionally equivalent to the mode initialization sequence of the test mode, but does not
exactly match the test mode initialization sequence.

Reading Extended Value Change Dump (EVCD) File


NC-Verilog creates extended value change dump (EVCD) files that conform to the IEEE
1364-2001 Verilog standard. You can then read the EVCD file into Encounter Test for creating
Verilog test vectors, simulating and fault grading functional vectors, and detecting and
analyzing miscompares.

An EVCD file is an ASCII file that contains information about value changes on selected
variables in the design. The file contains header information, variable definitions, and the
value changes for all specified variables. Encounter Test accepts EVCD files through the
Read Vectors application. For more information on creation and content of the files, refer to
the following sections in Cadence® NC-Verilog® Simulator Help:
■ Generating a Value Change Dump (VCD) File
■ Generating an Extended Value Change Dump (EVCD) File for a Mixed-Language Design

■ Generating an Extended Value Change Dump (EVCD) File

Tip
It is recommended to follow these rules when reading EVCD:
❑ Ensure that the input data only changes when the clock(s) are OFF.
❑ Do not have the data and clock signals changing at the same time.
❑ Do not have overlapping clocks (unless they are correlated).

To read the generated EVCD file into Encounter Test using the graphical interface, refer to
“Read Vectors” in the Graphical User Interface Reference. Select EVCD for Input
Format.

To read the generated EVCD file into Encounter Test using command lines, refer to
“read_vectors” in the Command Line Reference. Specify language=evcd.

Tip
The TBDbin consists of a single Experiment, Test_Section, and Tester_Loop. The
type of Test_Section created in the TBDbin depends on the specified value for
testtype or Test section type if using the GUI. The default Test_Section type is
logic. Refer to “Encounter Test Vector Data” on page 238 for more information.
The termination keyword sets the tester termination value to be assumed for the
Test_Section. The default termination setting is none.
The inputthreshold keyword defines the input threshold. Signals less than or equal
to the specified threshold are interpreted as Z.
The outputthreshold keyword defines the output threshold. Signals less than or
equal to the specified threshold are interpreted as X.
The measurecyclesboundary keyword is used to insert a Measure_PO event on a
cycle boundary when no measure exists in the EVCD input.
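As an illustration only, an EVCD import that relies on the defaults described above might be invoked as follows; the directory, test mode, experiment, and file names are hypothetical:

read_vectors workdir=./testwork testmode=FULLSCAN experiment=func_evcd
language=evcd importfile=functional_run.evcd testtype=logic termination=none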

EVCD Restriction
Encounter Test supports only the single variable type of port. A message is issued if a port of
any other variable type is detected.

EVCD Output
EVCD values that refer to Primary Inputs are translated to Stim_PI or Pulse events.
Primary Output values are translated to Measure_PO events. Refer to “Event” in the Test
Pattern Data Reference for descriptions of these events.

Reading Sequence Definition Data (TBDseq)


Besides reading tests in the form for direct application, Encounter Test also supports reading
test sequence definitions. Sequence definitions are used in cases where the same input
sequence is used repetitively.

There are two broad categories of sequence definitions:


■ Test
This type of sequence defines the clocking template for a set of tests in terms of which
clocks are to be pulsed, and in what order. The test sequence also specifies any other
events that are to be applied, such as the scan operation and stims, on any pins that do
not vary across tests.
■ Others
A sequence definition of any type other than test is used for operations that prepare for
a test or read out the results of a test. There are many different such types, for example
scan preconditioning (scanprecond) and scansequence. Refer to Define_Sequence in
the Test Pattern Data Reference to know more about the sequence definition types
recognized by Encounter Test.

A sequence definition is similar to a subroutine in programming. It is not necessary for every
sequence definition to have a recognized type. User-written sequence definitions can, in
general, invoke other sequence definitions that might not have an identified 'type'.

After reading the test vector, you can use the Test Simulation application to simulate the
vectors. You can choose to perform good machine or fault machine simulation and forward or
reverse vector simulation.

To read Sequence Definition Data using the graphical interface, refer to “Read Sequence
Definition” in the Graphical User Interface Reference.

To read Sequence Definition Data using command lines, refer to “read_sequence_definition”
in the Command Line Reference.

The syntax for the read_sequence_definition command is given below:

read_sequence_definition workdir=<directory> testmode=<modename>
importfile=<filename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ importfile= location of sequences to import

The most commonly used keyword for the read_sequence_definition is:

importfile=STDIN|<infilename> - Allows a file name or piping of data

Refer to “read_sequence_definition” in the Command Line Reference for more information
on the keywords.
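For example, a sequence definition file might be read with a command such as the following; the directory, test mode, and file names are hypothetical:

read_sequence_definition workdir=./testwork testmode=FULLSCAN importfile=my_sequences.tbdseq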

Prerequisite Tasks
Complete the following tasks before running Test Simulation:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.

Sequence Definition Data Output Files


Sequence definition data produces the file TBDseq.testmode that contains the
initialization sequences.


9
Simulating and Compacting Vectors

Encounter Test has capabilities to simulate and manipulate experiments to reduce the
number of patterns, fault simulate patterns, calculate new values, and verify existing patterns.

This chapter discusses the following:


■ “Compacting Vectors” on page 255
■ “Simulating Vectors” on page 258
■ “Analyzing Vectors” on page 260
■ “Timing Vectors” on page 262
■ “Test Simulation Concepts” on page 263
❑ “Specifying a Relative Range of Test Sequences” on page 265
❑ “Fault Simulation of Functional Tests” on page 266
❑ “Zero vs Unit Delay Simulation with General Purpose Simulator” on page 268
❑ “Simulating OPC Logic” on page 269
❑ “InfiniteX Simulation” on page 270
❑ “Pessimistic Simulation of Latches and Flip-Flops” on page 271
❑ “Resolving Internal and External Logic Values Due to Termination” on page 272
❑ “Overall Simulation Restrictions” on page 272

Compacting Vectors
With limited tester resources, when you want to limit the number of ATPG patterns to be
loaded onto a tester, it is strongly recommended that you compact the vectors to produce a
more ideal set of patterns before limiting the patterns applied at the tester. Typically,
the test pattern set created during ATPG does not represent the most ideal order of patterns
to apply at a tester. For example, because of different clocking and random fault detection,
pattern #100 might test more faults than pattern #60. If you can only apply 70 patterns, you
would want pattern #100 to be applied because of the number of faults it detects.

Compact Vectors is designed to help order the patterns to allow the steepest test coverage
curve.

By default, Compact Vectors sorts an experiment from ATPG based on the number of faults
tested per pattern. The tool rounds the test coverage down to the nearest 0.05% and uses
this as the cut-off coverage number. If the experiment is based on dynamic patterns, both the
static and dynamic coverages are reduced by 0.05%, and both numbers need to be reached
before stopping. The coverage number is based on AT-Cov or adjusted fault coverage.

When the input consists of multiple experiments, test sections, or tester loops (refer to
“Encounter Test Vector Data” on page 238 for more information on these terms), Compact
Vectors tries to combine all the vectors into a single test section and tester loop. However, for
that to happen, all the following conditions must be met:
■ Test section type must be logic or WRP
■ Test types (static, dynamic) must be the same
■ Tester termination must be the same
■ Pin timings must be the same
■ Any linehold fixed-value latch (FLH) values must be consistent (that is, for each FLH, its
value must be the same for all experiments or test sections being combined)
■ There must be consistent usage (or non-usage) across the test sections of tester
PRPGs, tester signatures, product PRPGs, and product signatures

To perform Compact Vectors using the graphical interface, refer to "Compact Vectors" in the
Graphical User Interface Reference.

To perform Compact Vectors using command line, refer to “compact_vectors” in the
Command Line Reference.

The syntax for the compact_vectors command is given below:


compact_vectors workdir=<directory> testmode=<modename> inexperiment=<name>
experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to run re-order

■ experiment= output experiment name

The most commonly used keywords for the compact_vectors command are:
■ resimulate=yes|no - Set to no to not resimulate the result patterns. Default is yes
to resimulate the results.
■ reordercoverage=both|static|dynamic - Specify the fault types to drive sorting.
The default is based on the type of ATPG patterns that are being simulated.
■ maxcoveragestatic=# - Stop patterns at a specific static coverage number (for
example 99.00)
■ maxcoveragedynamic=# - Stop patterns at a specific dynamic coverage number
(for example 85.00)
■ numtests=# - Stop at a specific pattern count. This is an estimate as multiple patterns
are simulated in parallel, so the total final pattern count might be slightly higher (less than
64 away).

Refer to “compact_vectors” in the Command Line Reference for more information on these
keywords.
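For example, to re-order an ATPG experiment and stop at roughly 70 patterns (matching the example above), a hedged invocation might look like the following; the directory, test mode, and experiment names are hypothetical:

compact_vectors workdir=./testwork testmode=FULLSCAN inexperiment=logic_atpg
experiment=logic_atpg_compacted numtests=70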

Prerequisite Tasks
Complete the following tasks before running Compact Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have experiments to compact. Refer to “Invoking ATPG” on page 35 for
more information.

Output
The output includes the re-ordered experiment. Use this output experiment for exporting or
committing test data.

Output Message

The following are some messages that can be seen:

■ INFO (TWC-016)

A sample output log is given below:


-------------Total Fault Simulation Statistics----------
Tests Simulated..........................41   #Number of patterns simulated
Effective Tests..........................32   #Number of patterns kept
Detects..................................1455 #Fault detection information
3-STATE Contentions                           #Identify any contention found
Expect/Measure
Good Compares ...........................2035 #Good/Bad compares found
Miscompares .............................0

Simulating Vectors
Simulate Vectors reads an ASCII pattern set and simulates it with one command. Another
way to do this is to perform Read Vectors (refer to “Reading Encounter Test Pattern Data
(TBDpatt)” on page 247) and then Analyze Vectors (refer to “Analyzing Vectors” on
page 260).

To perform Test Simulation using the graphical interface, refer to “Simulate Vectors” in the
Graphical User Interface Reference.

To perform Test Simulation using command lines, refer to “simulate_vectors” in the
Command Line Reference.

The syntax for the simulate_vectors command is given below:


simulate_vectors workdir=<directory> testmode=<modename> experiment=<name>
language=<value> importfile=<filename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the output experiment from simulation
■ language= Type of patterns to import
■ importfile= Location of patterns to import

The most commonly used keywords for the simulate_vectors command are:

■ language= stil|tbdpatt|evcd - Type of language in which the patterns are being
read.
■ gmonly=no|yes - Perform good machine simulation (no fault mark off). Default is no.
■ simulation=gp|hsscan - Select the simulator to use. Refer to “Test Simulation
Concepts” on page 263 for more information.
■ delaymode=zero|unit - Select the simulation type. Only applicable while using
simulation=gp. Refer to “Zero vs Unit Delay Simulation with General Purpose
Simulator” on page 268 for more information.

Refer to “simulate_vectors” in the Command Line Reference for more information on these
keywords.
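For example, a TBDpatt pattern file might be simulated with the zero-delay General Purpose simulator as follows; the directory, test mode, experiment, and file names are hypothetical:

simulate_vectors workdir=./testwork testmode=FULLSCAN experiment=func_sim
language=tbdpatt importfile=edited_patterns.tbdpatt simulation=gp delaymode=zero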

Prerequisite Tasks
Complete the following tasks before running Simulate Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have existing experiments. Refer to “Invoking ATPG” on page 35 for
more information.
4. Have STIL or TBDpatt pattern files. For limitations on patterns, refer to “Reading
Encounter Test Pattern Data (TBDpatt)” on page 247 for more information.

Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.

Output
The output is a new experiment containing the simulated results of the input patterns. The
output also contains the new calculated values in case of miscompares.

Output Messages

Some of the messages generated by simulate_vectors are given below:


■ INFO (TIM-800)
■ INFO (TIM-805)

■ INFO (TWC-015)

A sample output is given below:


--------------Total Fault Simulation Statistics----------
Tests Simulated..........................41   #Number of patterns simulated
Effective Tests..........................32   #Number of patterns kept
Detects..................................1455 #Fault detection information
3-STATE Contentions                           #Identify any contention found
Expect/Measure
Good Compares ...........................2035 #Good/Bad compares found
Miscompares .............................0

Analyzing Vectors
Use Analyze Vectors to simulate an existing Encounter Test experiment. This can be useful
when cross checking simulation results, manipulating patterns, or creating scope sessions for
further debug.

To perform Test Simulation using the graphical interface, refer to “Analyze Vectors” in the
Graphical User Interface Reference.

To perform Test Simulation using command lines, refer to “analyze_vectors” in the Command
Line Reference.

The syntax for the analyze_vectors command is given below:


analyze_vectors workdir=<directory> testmode=<modename> inexperiment=<name>
experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to be simulated
■ experiment= name of the output experiment resulting from simulation

The commonly used keywords for the analyze_vectors command are:


■ gmonly=no|yes - Perform good machine simulation (no fault mark off). Default is no.
■ simulation=gp|hsscan - Select the simulator to use. Refer to “Test Simulation
Concepts” on page 263 for more information.

■ delaymode=zero|unit - Select the simulation type. Only applicable if using
simulation=gp. Refer to “Zero vs Unit Delay Simulation with General Purpose
Simulator” on page 268 for more information.
■ contentionreport= soft|hard|all|none - The type of contention to report on.
Default is soft.
■ watchnets=all|file|none|scan|trace - Specify to create SimVision
waveform files. Default is none. Requires simulation=gp.

Refer to “analyze_vectors” in the Command Line Reference for more information on these
keywords.
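For example, an existing experiment might be resimulated with the General Purpose simulator as follows; the directory, test mode, and experiment names are hypothetical:

analyze_vectors workdir=./testwork testmode=FULLSCAN inexperiment=imported_stil
experiment=imported_stil_resim simulation=gp delaymode=zero contentionreport=soft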

Prerequisite Tasks
Complete the following tasks before running Analyze Vectors:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Run ATPG or have experiments to analyze. Refer to “Invoking ATPG” on page 35 for
more information.

Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.

Output
The output is a new experiment containing the simulated results of the input patterns. The
output also contains the new calculated values in case of miscompares.

A sample output is given below:


--------------Total Fault Simulation Statistics----------
Tests Simulated..........................41   #Number of patterns simulated
Effective Tests..........................32   #Number of patterns kept
Detects..................................1455 #Fault detection information
3-STATE Contentions                           #Identify any contention found
Expect/Measure
Good Compares ...........................2035 #Good/Bad compares found
Miscompares .............................0

Timing Vectors
Use Time Vectors to time/re-time an existing Encounter Test experiment. This will time only
dynamic events found in the test patterns. This simulation is good machine only.

To perform Time Vectors using the graphical interface, refer to "Time Vectors" in the
Graphical User Interface Reference.

To perform Time Vectors using command lines, refer to “time_vectors” in the Command
Line Reference.

The syntax for the time_vectors command is given below:


time_vectors workdir=<directory> testmode=<modename> inexperiment=<name>
experiment=<name> delaymodel=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ inexperiment= name of the experiment to simulate
■ experiment= name of the output experiment resulting from simulation
■ delaymodel= name of the delay model to use

The most commonly used keywords for the time_vectors command are:
■ constraintcheck=yes|no - Ability to turn off constraint checks. Default is yes to
check constraints.
■ earlymodel/latemode - Ability to customize delay timings. Default is 0.0,1.0,0.0
for each keyword. Refer to “Process Variation” on page 129 for more information.
■ clockconstraints=<file name> - List of clocks and frequencies to perform
ATPG. Refer to “Clock Constraints File” on page 125 for more information.
■ printtimings=no|yes - Print timing summary for each clock

Refer to “time_vectors” in the Command Line Reference for more information on these
keywords.
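For example, a dynamic experiment might be re-timed against a clock constraints file as follows; the directory, test mode, experiment, delay model, and file names are hypothetical:

time_vectors workdir=./testwork testmode=FULLSCAN inexperiment=dynamic_atpg
experiment=dynamic_atpg_timed delaymodel=chip_delay clockconstraints=clocks.constraints
printtimings=yes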

Prerequisite Tasks
Complete the following tasks before running Time Vectors:

1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide.
2. Create a Test Mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Have an existing dynamic experiment.

Refer to “Overall Simulation Restrictions” on page 272 for general simulation restrictions.

Output
The output is a new experiment that is timed according to the new timings found in the clock
constraints file or the specified options.

Test Simulation Concepts


Encounter Test can be used to simulate test patterns that already exist. In order to simulate
the test patterns, they must be available in a TBDbin file format. If you have the TBDbin file
from a previous test generation uncommitted test, that file can be simulated by providing the
uncommitted test name to the Test Simulation application. If the test patterns to be simulated
are not currently available in a TBDbin file format, you can import the patterns into a TBDbin
format using the import test data tool.

For more information, refer to “Reading Test Data and Sequence Definitions” on page 245.

Once the test patterns have been imported, they can be simulated by Encounter Test. The
types of simulation available in Encounter Test are:
■ High Speed Scan-Based Simulation - high speed simulation for designs and patterns
that adhere to the scan design guidelines.
■ General Purpose Simulation - simulation for designs and patterns independent of
whether they adhere to the scan design guidelines.
■ Good Machine Delay Simulation - simulation for designs and patterns that have a delay
model to accurately reflect the timing in the design.

High Speed Scan-Based Simulation provides high-speed fault simulation for efficient test
pattern generation. It requires designs and input patterns to adhere to the LSSD and GSD
guidelines for Encounter Test. Input patterns must adhere to the following constraint:
■ Patterns cannot place a test inhibit (TI) function pin or a test constraint (TC) function pin
away from its stability value.

General Purpose Simulation provides a more flexible simulation capability than High Speed
Scan-Based Simulation. A design is not required to adhere to design guidelines. Since this is
a fully event-driven simulator, general sequential logic is handled. Also, this simulation does
not impose particular clock or control input sequencing constraints on the input patterns. This
type of simulation is used:
■ to supplement High Speed Scan-Based Simulation for parts of a design that are not
compatible with automatic test generation.
■ to provide a means to insert functional tests into the tester data stream and to perform
fault grading of functional tests.
■ to verify test patterns before they are sent to a design manufacturer.

General Purpose Simulation compares its measurement results to those previously achieved
and alerts you to any differences. For example, when a miscompare between an expected
result and a simulator predicted result is found, analysis is needed to determine why the
miscompare occurred. You can analyze test patterns using View Vectors. Select the
View Vectors icon on the Encounter Test schematic display to view simulation results and
provide visual insight into the test data.
Note: Although General Purpose Simulation provides a less constrained simulation
capability compared to High Speed Scan-Based Simulation, it does so at the price of
simulation speed. General Purpose Simulation will typically be at least an order of magnitude
slower than High Speed Scan-Based Simulation on the same design with the same patterns.
The key here is that General Purpose Simulation should only be used in those cases where
High Speed Scan-Based Simulation cannot be applied.

General Purpose Simulation provides some features for optimizing its performance under
varying conditions of design size, fault count, and computer resource availability. The primary
means for controlling the run time and memory usage during simulation is a technique known
as multi-pass fault simulation. This means that General Purpose Simulation can simulate a
given pattern set multiple times. For each pass, the unsimulated subset of the fault list is
chosen for simulation. This reduces the memory requirements of fault simulation.

There is no way to determine exactly how much memory is required and hence calculate the
ideal number of faults that should be simulated for a given pass. General Purpose Simulation
makes decisions for this number, but you can specify the Number of Faults per Pass and the
Maximum Number of Overflow Passes to control the multi-pass process.

Another feature of General Purpose Simulation that controls run time is fault dropping. As
simulation progresses, the simulator detects that certain faults are consuming an inordinate
amount of run time. When detected, these faults are dropped from the simulation for the
current run. Note that any faults so dropped are attempted for simulation on successive runs,
thus detection of the fault is still a possibility. The Machine Size sets the threshold at which
fault dropping occurs.

Good Machine Delay Simulation is similar to General Purpose Simulation in that it is an event
driven simulation. As the name implies, however, it does not perform fault simulation. Good
Machine Delay Simulation should be used when simulation of a design with delays and/or
time-based stimulus and measurements is required. Good Machine Delay Simulation cannot
be used for weighted-random (WRPT) or LBIST tests unless these tests are first converted
to stored pattern format using the Manipulate Tests tool.

Notes:
1. To perform Test Simulation using the graphical interface, refer to “Simulate Vectors” in the
Graphical User Interface Reference.
2. To perform Test Simulation using command lines, refer to “simulate_vectors” in the
Command Line Reference.
3. Also refer to “Simulating Vectors” on page 258.

Specifying a Relative Range of Test Sequences


Use testrange=<relative test sequence number> to specify a relative range
of test sequences. A relative test sequence number is an integer value that identifies the
position of the test sequence relative to the start of the pattern file. The first test sequence in
the file has a relative test sequence number of 1. The next test sequence has a relative test
sequence number of 2, and so on.

For example, specify write_vectors testrange=1:5 to report the first five test
sequences. The following is a sample output:
TEST SEQUENCE COVERAGE SUMMARY REPORT
Test |Static |Static |Dynamic |Dynamic |Sequence|Overlapped|Total |
Sequence |Total |Delta |Total |Delta |Cycle |Cycle |Cycle |
|Coverage|Coverage|Coverage|Coverage|Count |Count |Count |

1.1.1.1.1 |0.00 |0.00 |0.00 |0.00 |1 |0 |1 |


1.1.1.2.1 |31.83 |31.83 |0.00 |0.00 |183 |0 |184 |
1.2.1.1.1 |31.83 |0.00 |0.00 |0.00 |1 |0 |185 |
1.2.1.2.1 |44.79 |12.96 |0.00 |0.00 |174 |86 |359 |
1.2.1.2.2 |51.72 |6.93 |0.00 |0.00 |88 |86 |447 |
1.2.1.2.3 |55.43 |3.70 |0.00 |0.00 |88 |86 |535 |
1.2.1.2.4 |57.78 |2.35 |0.00 |0.00 |89 |0 |624 |

In the log given above:


■ 1.1.1.1.1 is the modeinit for scan test section

■ 1.1.1.2.1 (scan test) has the relative pattern 1


■ 1.2.1.1.1 is the modeinit for logic test section
■ 1.2.1.2.1 (first logic test) has the relative pattern 2
■ 1.2.1.2.2 has the relative pattern 3
■ 1.2.1.2.3 has the relative pattern 4
■ 1.2.1.2.4 has the relative pattern 5

You can also specify a combination of odometer value with relative numbers. Any testrange
entry in a comma separated list of testranges that contains a period is assumed to be an
odometer entry. Any entry without a period is assumed to be a relative test number.

For example, to extract the first 10 sequences along with the sequence 2.4.1.1.1, specify the
test range as follows:
testrange=1:10,2.4.1.1.1

Note: A relative test number cannot be used to identify a modeinit, scan or setup sequence.
These sequences must be identified using the odometer value.

To specify a range of experiments in the testrange keyword, use a period (.) between the
experiment numbers, such as shown below:
testrange=1.:2.

Fault Simulation of Functional Tests


Fault simulation of functional tests can be a challenging task. Depending on the
size of the design, the number of functional patterns, and number of faults to be simulated,
the simulation can take large amounts of CPU time, system memory and file space.

Encounter Test can read in the following functional pattern languages:


■ ASCII format of Encounter Test pattern data (TBDpatt)
■ STIL test patterns
■ Enhanced VCD (EVCD)

For more information, refer to “Reading Test Data and Sequence Definitions” on page 245.

You can import this input into Encounter Test and use it as input to the simulator. This
simulation capability provides a means of fault grading functional tests (or other manual
patterns) and inserting them into the tester data.

Recommendation for Fault Grading Functional Tests


If your design is fully synchronous, with well-analyzed clocking races, we recommend using
the zero-delay mode of the General Purpose simulator. There are, however, many factors to
consider, as described in “Functional Test Guidelines” on page 267. If those factors lead to
pitfalls or conditions that make zero-delay simulation inappropriate, the unit-delay mode can
be made to work with some modeling effort.

Fault simulation of functional tests is typically faster using zero-delay mode instead of unit-
delay mode, however in dealing with a large design size, a large number of patterns required
for a functional test, and a large number of faults remaining untested after scan chain tests,
it may not be feasible to perform full fault simulation within a reasonable time. Based on these
conditions, we recommend using the Random Fault Selection option (nrandsel) to fault
grade your functional tests. This will allow you to obtain an estimate of the combined fault
coverage of the scan-based ATPG tests plus the functional tests.

Functional Test Guidelines


In developing functional test cases, you must ensure that patterns can be simulated correctly
so simulation results will match the chip hardware results on the tester. This requires that the
functional test cases have these characteristics:
■ Functional Test Case Timing
A VERY slow clock rate must be used in order to allow each PI change to completely
propagate through the logic so the functional simulator results can match the Static
simulation results. A Static simulator assumes there is sufficient time between input
change events to allow all design change activity to settle before the next input change
event. Clock and Data input changes must be done at different, widely separated time
intervals so that no real-time race between clock and data occurs that would require
accurate design delay information to compute the hardware results.
■ Pulse Events
There must be only a single Clock PI (possibly correlated with another PI) in any Pulse
Event. When Encounter Test detects two or more pins pulsed in one event, the pins are
simulated simultaneously. This function is called a multipulse.
■ Clock PI Stim Events
If any PI that can act as a clock to change a latch state (System Clock, TCK, TRST,
System Reset, even a clock gate) is ever stimulated, this PI must be the only PI in this
Stim Event. Data PI changes must be done in separate Stim Events to prevent clock
versus data races.

Refer to “1149.1 Boundary Controls” in the Modeling User Guide for additional
information on these latch states.
Encounter Test detects when more than one clock is stimulated or pulsed in any one
event, and reports the clock nets that have been changed as a result. This function,
called multiclock, is not performed for init procedures, scan chain tests, and macro tests.
See “Tester_Loop” in the Test Pattern Data Reference for a description of init
procedure.
■ Bi-Directional I/Os
The functional test case should be written so as not to create three-state contentions.
Typically this would require the product drivers go to Z before any BIDI PI Stims to 1/0,
and BIDI PI Stims to Z should occur before the product drivers can go out of Z. These
rules can be difficult to follow while developing a functional test case because the chip
driver enable state must be known when working with BIDI Stims. Note that it is not
impossible to have a simultaneous clock event to enable or disable a chip driver and a
Stim Event on a BIDI PI (causing only a transient contention) because Clock and Data
PI Events must be separated to achieve race-free simulation results.

Zero vs Unit Delay Simulation with General Purpose Simulator

The choice between zero-delay or unit-delay modes depends on several factors that require
analysis and conclusion.

First is the issue of valid simulation results so that the tester does not fail a correctly designed
and manufactured chip. If the ATPG patterns or functional tests have been written correctly,
(refer to “Functional Test Guidelines” on page 267) and if there are no unpredictable races in
the design (that is, no severe TSV violations or data hold-time violations), either a zero-delay
or unit-delay mode of simulation can work if the models are developed correctly. A zero-delay
simulation predicts the clock always wins the clock/data race. That is, for clock gating or data
hold time races, zero-delay simulation will make calculations assuming that the clock goes
Off before new data arrives. For a unit-delay simulation model to work, the inherent races
among Flip-Flops, latches, and RAMs have to be accounted for in the models, taking clock
distribution unit-delay skews into consideration. If gate unit delays are not accurate
representations of clock distribution skew (for example, due to heavily loaded tests), you
might have to modify the models to make the simulation work.

In addition to balancing the clock distribution unit delays, another way to make the unit-delay
simulation model work properly is to insert gates (buffers) into the data inputs or outputs of
all latch flip-flops or RAM models. This delays the propagation of new data signals feeding to
opposite phase latches until their clocks can turn Off. The number of delay gates required in
the latch/flip-flop/RAM models depends on how well balanced the clock distribution tree
logic is, how much logic already exists between latches/flip-flops/RAMs in the design, and
whether you need to use the xwidth option to predict unknown values for some types of truly
unpredictable design races. This unit-delay simulation technique using the xwidth option
(Delay Uncertainty Specific Sim option on the graphical user interface) is a major reason
to select unit-delay simulation over zero-delay simulation.

Unit-delay simulation mode allows for more accurate simulation in the presence of structures
that present problems for zero-delay simulation, provided the model is unit-delay accurate.
For zero-delay simulation to function properly, the clock shaping logic must be identified to
the simulation engine to prevent the suppression of pulses. Because TSV is intended to
identify such errors, unit delay is used for rules checking, as the identification of TSV
violations is required for accurate simulation. For an accurate unit-delay model, zero-delay
simulation does not provide any advantage over unit-delay simulation and will yield incorrect
results in the presence of incorrectly modeled clock shaping networks.

Using the ZDLY Attribute


The ZDLY=YES property specifies that the block should be treated as zero delay when
simulating with the General Purpose Simulator in unit delay mode. The ZDLY property can be
specified only on primitives. This must be applied in the source Verilog or library files.

Simulating OPC Logic


Cut points are used to hide from Encounter Test sections of logic that are intractable to test
generation and other analysis functions. A common example of such intractable logic is an
LBIST controller when the design is being processed in a test mode where the LBIST
controller is active (running). Sequential clock generation macros also fall into this category.
Logic that is hidden, or blocked, by cut points is referred to as OPC (on-product control or
clock) logic. OPC logic is completely ignored by most Encounter Test application programs,
and Encounter Test relies upon user-specified sequences (modeinit, custom scan protocol,
user test sequences) to process designs that have cut points specified.

Confidence in Encounter Test data that is produced with cut points can be gained by use of
the Verify On-Product Clock Sequences tool (see “On Product Clock Sequence Verification”
in the Verification User Guide) if a simulation model (instead of a black box) is available for
the OPC logic. Additional confidence may be gained by resimulating the test data without the
use of cut points. You may be able to export the test data and re-import it to another test mode
that does not have cut points. However, in some cases this may not be possible. For example,
if the cut points are necessary for the definition of scan strings, and you define a test mode
without any cut points, the simulator would require an expanded form of the tests where each
scan operation appears as a long sequence of scan clock pulses. You may or may not want
to simulate the scan operations at this detailed level.

To avoid the necessity of defining additional test modes for simulation without cut points, and
to cover the case where the detailed simulation of scan operations is not needed, there is a
simulation option for ignoring cut points. Using this option, you can generate test data using
cut points and resimulate the tests in the same test mode, ignoring the cut points. Specify
useppis=no on the simulation (analyze_vectors) command line to use this feature.
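As a hedged illustration, resimulating an experiment while ignoring cut points might look like the following; the directory, test mode, and experiment names are hypothetical:

analyze_vectors workdir=./testwork testmode=LBIST_MODE inexperiment=lbist_reimport
experiment=lbist_verify useppis=no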

A good verification of an LBIST controller can be achieved by the following steps:


1. Generate LBIST tests (using cut points).
2. Export Encounter Test Pattern data, specifying the options to expand scan operations
and unravel LBIST test sequences.
3. Re-import the Encounter Test Pattern data to the same test mode.
4. Use the General Purpose simulator or the Good Machine Delay simulator (if you have a
delay model) to simulate the tests. Be sure to turn on the option to Compare at Measure
Commands. The simulator will let you know if there are any “miscompares” between this
simulation and the original test data generated by Step 1.

InfiniteX Simulation
InfiniteX simulation can be used to control how High Speed Scan Based Simulation simulates
StimPI and StimLatch events when race conditions have been identified by Verify Test
Structures. This mode of simulation employs a pessimistic approach to simulating race
conditions by introducing Xs into the simulation of a StimPI or StimLatch event to
pessimistically simulate the race condition. For example, if Verify Test Structures has
identified certain clock-data conditions, the simulator introduces an X into StimPI events
where the PI is changing state (0->X->1 as opposed to 0->1 directly).

Invoke InfiniteX simulation by either specifying a value for the infinitex keyword in the
command environment or by selecting High Speed Scan Based Simulation option to Use
pessimistic simulation on the GUI. The default for infinitex is controlled by whether TSV
was previously run. If TSV was not run, standard simulation is performed. If TSV was run, the
default behavior for infiniteX simulation is dictated by the following conditions:
■ InfiniteX simulation for latches is performed if:

❑ Verify Test Structures check Analyze potential clock signal races did not complete
or was not run
❑ Or, Analyze potential clock signal races was run and produced message TSV-059.
For additional information, refer to “Analyze Potential Clock Signal Races” in the Custom
Features Reference and “Analyze Potential Clock Signal Races” in the Verification
User Guide.
■ InfiniteX simulation for PIs is performed if:
❑ Verify Test Structures check Analyze test clocks' control of memory elements did not
complete or was not run
❑ Or, Analyze clocks' control of memory elements was run and produced message
TSV-008 or message TSV-310.
For additional information, refer to “Analyze test clocks’ control of memory elements” for
LSSD in the Custom Features Reference and “Analyze test clocks’ control of memory
elements” for GSD in the Verification User Guide.
Note: You can turn off the infiniteX simulation mode by specifying infinitex=none for the
create_logic_tests command.
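As an illustration only (assuming create_logic_tests accepts the same workdir, testmode, and experiment keywords used by the other commands in this guide, and using hypothetical names):

create_logic_tests workdir=./testwork testmode=FULLSCAN experiment=logic_atpg infinitex=none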

Pessimistic Simulation of Latches and Flip-Flops


Simulation of latches and flip-flops may be manipulated so that an X state appearing on a
clock input will cause the output of the latch or flip-flop to be X, regardless of the state of the
corresponding data pin and the current state of the memory element. This technique
facilitates creation of test data that will match a simulator which calculates X for a flip-flop
whose clock input is X.

Use the latchsimulation=optimistic|pessimistic keyword or option Latch/Flip-
flop output when clock at X on GUI Simulation forms to specify whether to leave the
output unchanged if it is the same as the data input state
(latchsimulation=optimistic) or to always set output to X
(latchsimulation=pessimistic), that is, to match the pessimistic flip-flop model.

All latches are simulated the same way, in that all are simulated either optimistically or
pessimistically based on the selected option.
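For example, assuming the latchsimulation keyword is accepted on the analyze_vectors command line, a pessimistic resimulation might be invoked as follows; the directory, test mode, and experiment names are hypothetical:

analyze_vectors workdir=./testwork testmode=FULLSCAN inexperiment=func_sim
experiment=func_sim_pessim latchsimulation=pessimistic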

The following are limitations associated with the pessimistic method of simulation:
■ Simulation activity that occurs during Build Test Mode or Verify Test Structures is not
affected. This may cause the following conditions:

❑ Miscompares as a result of simulation mismatches during the test mode initialization
sequence
❑ Miscompares due to the failure (of the verification simulator) to recognize a
data-muxed fixed-value latch whose clock is at X
■ Simulation performed during GUI analysis does not support this option.
■ Use of latchsimulation=pessimistic may reduce the fault coverage of a given set
of tests.

Resolving Internal and External Logic Values Due to Termination
During Test Generation and Simulation, the termination values used on pins and internal nets
are defined by the TDR and test simulation options.

For a detailed analysis of the interactions of the terms and parameters used to determine the
termination values during Test Generation and Simulation, refer to “Termination Values” in the
Modeling User Guide.

Refer to the following for additional related information:


■ “TESTER_DESCRIPTION_RULE” in the Modeling User Guide
■ “TDR_DEFINITION” in the Modeling User Guide

Overall Simulation Restrictions


Restrictions of Test Simulation using High Speed Scan-Based Simulation are:
■ Patterns cannot place a test inhibit (TI) function pin or a test constraint (TC) function pin
away from its stability value.
■ Patterns cannot turn more than one clock away from its stability value at the same time.

Restrictions of Test Simulation using General Purpose Simulation are:


■ Transition fault simulation is not supported.
■ Only the following Test_Section types are supported:
❑ logic
❑ flush


❑ scan
❑ macro
❑ driver_receiver
■ Supports only the test type of static.
■ Pattern elimination for three-state contention or ineffective tests is not supported.
■ Only test sections of Logic, IDDQ, Driver_Receiver, ICT (all types), and IOWRAP (all
types) can be sorted by sequence.
■ SimVision is not supported on 64-bit Linux platforms.
■ Simulation run time controls can be established from information stored in vector files
generated by Encounter ATPG, but a limitation applies to the detection of syntax errors
introduced by manually editing the simOptions keyed data in an Encounter Test vector
file: only invalid keyword values can be detected and reported. Erroneous keywords
themselves are not detected as syntax errors but are simply ignored. It is generally
recommended that "simOptions" keyed data not be modified.


10
Advanced ATPG Tests

This chapter discusses the commands and techniques to generate advanced ATPG test
patterns. These techniques are performed on designs at the same stage as the basic ATPG
tests. Some of these techniques require special fault models to be built.

The chapter covers the following topics:


■ “Create IDDq Tests” on page 275
■ “Create Random Tests” on page 279
■ “Create Exhaustive Tests” on page 281
■ “Create Core Tests” on page 283
■ “Create Embedded Test - MBIST” on page 283
■ “Create Parametric Tests” on page 284
■ “Create IO Wrap Tests” on page 285
■ “IEEE 1149.1 Test Generation” on page 286
■ “Reducing the Cost of Chip Test in Manufacturing” on page 298
■ “Parallel Processing” on page 300
■ “” on page 312

Create IDDq Tests


IDDq testing exploits the fact that a fully complementary CMOS device has a very low leakage
current when the device is in the quiescent (static) state. A defect that causes the CMOS
device to have a high leakage current can be detected by measuring the current (Idd) into the
Vdd bus. Many defects and faults are easier to test with IDDq than with conventional testing
which measures only the signal outputs; some faults that cannot be detected by the
conventional measures can be detected by IDDq testing. See “IDDq Test Status” in the
Modeling User Guide for more information.


A few issues must be considered when attempting to integrate IDDq testing into a high-
volume manufacturing test process:
■ Measurement of the supply current takes longer than measuring signal voltages. The
design has to be given time to settle into a quiescent state after the test is applied, and
the current measurement has to be averaged over some time interval (maybe over tests)
to eliminate noise in the measurement. Therefore, the test time per pattern is quite large
and so the number of current sensing tests must be kept small (considerably less than
one hundred for what most manufacturers would consider reasonable throughput in chip
testing).
■ Some designs may contain high steady-state current conditions that are to be avoided.
■ The techniques used to find the cutoff between good and faulty levels of IDDq current are
often ad hoc and empirical, based on experience gained through extensive
experimentation.

Despite the challenges associated with IDDq testing, the benefit of detecting faults which are
not detected by conventional voltage sensing techniques (for example, gate-oxide shorts) has
given IDDq testing a place in many chip manufacturers' final test processes.

A typical IDDq test is composed of the following sequence of events:


1. Apply the test stimuli - load scan chain latches and set primary inputs (PIs).
2. Apply the required stimuli to disable any current paths that could invalidate an IDDq
measurement.
3. Wait until the design has stabilized.
4. Measure the IDDq current.

The Encounter Test stored pattern test generation application can automatically generate
IDDq test patterns. Since application of IDDq patterns may take significantly longer than
normal patterns due to having to wait for the design activity to quiesce, Encounter Test
provides some options for helping to keep the size of the IDDq test pattern set small.

IDDq test vectors can be used to support extended voltage screen testing via the generation
of a scan chain unload (Scan_Unload event) immediately after each Measure_Current
event. The Scan_Unload event supports the existing Ignore Latch functions as found in the
existing ATPG pattern generation.

To perform Create IDDq tests using the graphical user interface, refer to “Create Iddq Tests”
in the Graphical User Interface Reference.

To perform Create IDDq tests using the command line, refer to “create_iddq_tests” in the
Command Line Reference.


The syntax for the create_iddq_tests command is given below:


create_iddq_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated

The most commonly used keywords for the create_iddq_tests command are:
■ compactioneffort=<value> - The amount of time and effort spent reducing the size
of the pattern set. Default is medium effort.
■ iddqeffort=<value> - The amount of time and effort for the test generator to create
tests for hard-to-test faults. Default is low effort.
■ iddqunload=no|yes - Whether to enable the measure of the scan flops. Default is no.
■ ignoremeasures=<filename> - List of flops to ignore during measures. Refer to
“Ignoremeasures File” on page 230 for more information.
■ iddqmaxpatterns=# - The maximum number of IDDq sequences. The default is the
number of sequences defined in the Tester Description Rule (TDR) used when creating
the testmode and can be overridden with this keyword. Refer to Tester Description Rule
(TDR) File Syntax in the Modeling User Guide for more information.

Refer to “create_iddq_tests” in the Command Line Reference for information on these keywords.
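The following is an illustrative invocation combining several of these keywords; the
directory, testmode, experiment, and keyword values are placeholders chosen only for the
example:

create_iddq_tests workdir=./testwork testmode=FULLSCAN experiment=iddq1 compactioneffort=high iddqunload=yes iddqmaxpatterns=50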

Prerequisite Tasks
Complete the following tasks before executing Create IDDq Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.


Output
Encounter Test stores the test patterns in the experiment specified by experiment=<name>.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

Debugging Low Coverage

If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.
Note: Deterministic Faults Analysis is not available for random pattern generation.

Iddq Compaction Effort


After generating each Iddq pattern, Encounter Test randomly fills and simulates it multiple
times to identify the best set of random filled data to save. This process is done for each
generated pattern, thus resulting in high simulation time.

With compactioneffort=medium, the test generator compacts tests to a certain point and
then randomly fills/simulates the patterns multiple times. With compactioneffort=high,
more tests are compacted into the patterns, thus causing a higher test coverage for a given
number of tests. If you do not set the pattern limit (which is 20 patterns by default), the end
result with compactioneffort=high will most likely be a higher coverage with a lower
number of patterns. This is because Iddq pattern generation is mostly driven by simulation
time, so fewer patterns simulated means less overall run time.

The test generator works harder, but because of the way Iddq patterns are simulated and
fault graded, simulation time still makes up the majority of the Iddq pattern generation task.


Create Random Tests


Create Random Tests is used to generate and simulate random patterns. Random tests
detect both static and dynamic faults.

To perform Create Random Tests using the graphical interface, refer to “Create Random
Tests” in the Graphical User Interface Reference.

To perform Create Random Tests using the command line, refer to create_random_tests in
the Command Line Reference.

The syntax for the create_random_tests command is given below:


create_random_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated

The most commonly used keywords for the create_random_tests command are:
■ maxrandpatterns=# - Specify the maximum number of random patterns to generate.
■ minrandpatterns=# - Specify the minimum number of random patterns to simulate.
■ detectthresholdstatic=# - Specify a decimal number (such as 0.1) as the
minimum percentage of static faults that must be detected in a given interval. Simulation
terminates for the current clocking sequence when this threshold is not met.

Refer to “create_random_tests” in the Command Line Reference for information on these keywords.
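The following is an illustrative invocation; all names and values are placeholders:

create_random_tests workdir=./testwork testmode=FULLSCAN experiment=random1 maxrandpatterns=10000 minrandpatterns=1000 detectthresholdstatic=0.1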

Prerequisite Tasks
Complete the following tasks before executing Create Random Tests:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.


3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.

Output
Encounter Test stores the test patterns in the experiment specified by experiment=<name>.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

A sample output log is given below:


****************************************************************************
----Stored Pattern Test Generation Final Statistics----
Testmode Statistics: FULLSCAN_TIMED
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 908 704 52 106 77.53 82.24
Total Dynamic 814 242 44 513 29.73 31.43

Global Statistics
#Faults #Tested #Redund #Untested %TCov %ATCov
Total Static 1022 737 52 187 72.11 75.98
Total Dynamic 1014 242 44 713 23.87 24.95
****************************************************************************
----Final Pattern Statistics----
Test Section Type # Test Sequences
----------------------------------------------------------
Logic 45
----------------------------------------------------------
Total 45

Debugging Low Coverage


Note: With random ATPG, if the design is not random-testable, low test coverage can be
expected.

If you do not achieve the desired ATPG coverage, check for the following problems:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures to identify internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan
chains.


Note: Deterministic Faults Analysis is not available for random pattern generation.

Create Exhaustive Tests


Create Exhaustive Tests applies test patterns to a design to identify faults that are resistant
to random patterns. An uncommitted test fault model file is created which contains the status
of the faults after simulation. Patterns are simulated until either a fault detection threshold is
met or the specified pattern limit is reached. The Encounter Test Log window automatically
displays the results when Random Pattern Fault Simulation is complete.

It is possible to control fault detection during Create Exhaustive Tests by inserting Keyed Data
into the test sequence. Create Exhaustive Tests checks each event for the existence of keyed
data, and if found, allows fault detection to begin with that event. The keyed data should only
be placed on a single event within a test sequence. If the keyed data exists on more than one
event, Create Exhaustive Tests begins fault detection on the last event having keyed data.

Additional information is available in:


■ “Keyed Data” in the Test Pattern Data Reference.
■ “Coding Test Sequences” on page 206

This scenario does not affect the measure points where Create Exhaustive Tests detects faults.
For example, if the test sequence contains a Scan_Unload event, the slave latches are still
the only latches used as the detect points.

You can perform Create Exhaustive Tests using only the command line. Refer to
“create_exhaustive_tests” in the Command Line Reference for more information.

The syntax for the create_exhaustive_tests command is given below:


create_exhaustive_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated

Prerequisite Tasks

Complete the following tasks before running Create Exhaustive Tests:


1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.

Output Files
Encounter Test stores the test patterns in the experiment specified by experiment=<name>.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

Restrictions

Restrictions of Create Exhaustive Tests are:


■ Invalid test data and reduced test coverage can result from designs that do not conform
to Encounter Test LSSD guidelines or GSD guidelines. The Test Structure Verification
application verifies that these guidelines are met.
■ Dynamic faults must be included in the test mode definition and the fault model.
■ The only supported values of the TDR PMU parameter are PMU=1 and unlimited PMUs.
A PMU value less than the number of data pins on the design is treated by Encounter
Test as PMU=1. A PMU value equal to or greater than the number of data pins is
equivalent to unlimited PMUs.
■ IDDq tests are not generated.
■ Driver and receiver faults are not processed.
■ Random patterns are not generated for a test mode having onboard PRPGs or MISRs.
■ Exhaustive simulation is supported for combinational designs with 24 or fewer input pins.


Create Core Tests


These tests are used to verify a macro device (for example, a RAM) embedded in the design.
Encounter Test uses isolation requirements and test patterns written specifically for the macro
device to map the test patterns for the macro to the tester-contactable I/O of the design.

Macro tests may be static, dynamic, or dynamic timed. See “Test Vector Forms” on page 242
for more information on these test formats. Also refer to Properties for Embedded Core Tests
in the Custom Features Reference for more information.

To perform Create Core Tests using the graphical interface, refer to “Create Core Tests” in the
Graphical User Interface Reference.

To perform Create Core Tests using the command line, refer to “create_core_tests” in
the Command Line Reference.

The syntax for the create_core_tests command is given below:


create_core_tests workdir=<directory> testmode=<modename> experiment=<name>
tdminput=<infilename> tdmpath=<path>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated
■ tdminput= Indicates the name of the input TBDbin file containing pre-existing test data
for the macro (core) being processed. This file specification is required only if you invoke
TCTmain for migrating pre-existing core tests from the core boundary to the package
boundary.
■ tdmpath= Indicates the directory path of input TBDbin files containing pre-existing test
data for the macro (core) being processed. The specification is a colon separated list of
directories to be searched, from left to right, for the input TBDtdm files. You can also set
this option using Setup Window in the graphical user interface.
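The following is a hypothetical invocation that migrates pre-existing core test data; all
names are placeholders, and tdmpath lists two example directories separated by a colon:

create_core_tests workdir=./testwork testmode=CORETEST experiment=core1 tdminput=ram_core_patterns tdmpath=./cores/ramA:./cores/ramB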

Create Embedded Test - MBIST


This is used for pattern generation and configuration for memory built-in self test (MBIST).
Refer to Create Embedded Test in the Memory Built-in Self Test Reference for more
information.


To perform Create MBIST patterns using the graphical interface, refer to "Create Embedded
Tests" in the Graphical User Interface Reference.

To perform Create MBIST patterns using command line, refer to


“create_embedded_test” in the Command Line Reference.

The syntax for the create_embedded_tests command is given below:


create_embedded_tests workdir=<directory>

where workdir is the name of the working directory.
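For example (the directory name is a placeholder):

create_embedded_tests workdir=./testwork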

Create Parametric Tests


Parametric tests exercise the off-chip drivers and on-chip receivers. For each off-chip driver,
objectives are added to the fault model for DRV1, DRV0, and if applicable, DRVZ.

For each on-chip receiver, objectives are added to the fault model for RCV1 and RCV0 at
each latch fed by the receiver. These tests are typically used to validate that the driver
produces the expected voltages and that the receiver responds at the expected thresholds.

These tests require a fault model that includes driver/receiver faults (build_faultmodel
includedrvrcvr=yes).
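A minimal sketch of this sequence follows; it assumes build_faultmodel accepts the same
workdir keyword as the other commands in this chapter, and all names are placeholders:

build_faultmodel workdir=./testwork includedrvrcvr=yes
create_parametric_tests workdir=./testwork testmode=FULLSCAN experiment=param1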

To perform Create Parametric Tests using the graphical interface, refer to "Create Parametric
Tests" in the Graphical User Interface Reference.

To perform Create Parametric Tests using command lines, refer to


"create_parametric_tests” in the Command Line Reference.

The syntax for the create_parametric_tests command is given below:


create_parametric_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG
■ experiment= name of the test that will be generated

Prerequisite Tasks

Complete the following tasks before running Create Parametric Tests:


1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.

Output Files
Encounter Test stores the test patterns in the experiment specified by experiment=<name>.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

Create IO Wrap Tests


These tests are produced to exercise the driver and receiver logic. The tests use the chip’s
internal logic to drive known values onto the pads and to observe these values through the
pad’s receivers. IO wrap tests may be static or dynamic. Static IO wrap tests produce a single
steady value on the pad. Dynamic IO wrap tests produce a transition on the pad.

These tests require a fault model that includes stuck driver and shorted net objects
(build_faultmodel sdtsnt=yes).
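A minimal sketch of this sequence follows; as with the parametric tests, it assumes
build_faultmodel accepts the workdir keyword, and all names are placeholders:

build_faultmodel workdir=./testwork sdtsnt=yes
create_iowrap_tests workdir=./testwork testmode=FULLSCAN experiment=iowrap1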

To perform Create I/O Wrap Tests using the command line, refer to “create_iowrap_tests” in
the Command Line Reference.

To perform Create I/O Wrap Tests using the graphical user interface, refer to Create IOWrap
Tests in the Graphical User Interface Reference.

The syntax for the create_iowrap_tests command is given below:


create_iowrap_tests workdir=<directory> testmode=<modename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG


■ experiment= name of the test that will be generated

Prerequisite Tasks

Complete the following tasks before running Create IO Wrap Tests:


1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in the Modeling User Guide for more information.
2. Create a Test Mode. See “Performing Build Test Mode” in the Modeling User Guide for
more information.
3. Build a fault model for the design. See “Building a Fault Model” in the Modeling User
Guide for more information.

Output Files
Encounter Test stores the test patterns in the experiment specified by experiment=<name>.

Command Output

The output log contains information about testmode, global coverage, and the number of
patterns used to generate those results.

IEEE 1149.1 Test Generation


Encounter Test supports two ways to efficiently do stored pattern test generation (SPTG) for
a design which implements the 1149.1 Boundary standard. Neither approach affects test
generation itself so much as the test mode initialization steps and the scan protocol
associated with test generation.
1. LSSD or GSD scan
For 1149.1 LSSD or GSD scan SPTG the design is brought to the Test-Logic-Reset state
of the TAP controller by the mode initialization sequence; all SPTG takes place with the
controller held in that state. All scanning operations are performed using the defined
LSSD or GSD scan chains, with the Test-Logic-Reset state maintained; the TAP port is
not involved in scanning.
When the TAP controller is in the Test-Logic-Reset state it is effectively disconnected
from the chip's system logic and does not interfere with normal chip operation. SPTG can
proceed without the sequential TG problem that would be incurred by having to


repeatedly manipulate the finite state machine of the TAP controller every time a test is
being developed.

Figure 10-1 Overview of Scan Structure for 1149.1 LSSD/GSD Scan SPTG

Figure 10-1 shows an overview of the scan structure used for 1149.1 LSSD or GSD scan
SPTG. Note the following important points:

a. The TAP controller is in the Test-Logic-Reset state at all times, both for test
generation and for scanning.

b. The scan chains used are all the defined LSSD or GSD scan chains; the 1149.1 TDI-
to-TDO scan chains are not used for scanning.


c. The scan protocol, which is automatically generated by Encounter Test, is the usual
A-E-B scan sequence, utilizing the LSSD or GSD scan clocks and gates.
2. TAP scan
For 1149.1 TAP scan SPTG the design is brought to a user-specified TAP controller state
by the mode initialization sequence, and all SPTG takes place with the controller held in
the specified state. This state is not maintained for the scan operation, however, which is
done exclusively by way of the TAP and therefore necessitates TAP controller state
changes. The test generation state desired is specified by the TAP_TG_STATE
parameter of the SCAN TYPE mode definition statement. This state may be Test-Logic-
Reset, the same as is automatically assumed for 1149.1 LSSD or GSD scan SPTG, or
may be Run-Test/Idle, Shift-DR, or Pause-DR.

Figure 10-2 Overview of Scan Structure for 1149.1 TAP Scan SPTG


Figure 10-2 shows an overview of the scan structure used for 1149.1 TAP scan SPTG. In
contrast with Figure 10-1 note the following points:

a. The TAP controller is in one of four supported states for test generation, selectable
by the user: Test-Logic-Reset, Run-Test/Idle, Shift-DR or Pause-DR. (The selected
TG state does not hold for the scanning operation.)

b. The scan chain used is whatever test data register is enabled between TDI and
TDO. Optionally, other defined scan chains (SI/SO) will also be used for scanning,
but only if scannable via the TAP scan protocol (see next).

c. The scan protocol, automatically generated by Encounter Test, conforms to the


1149.1 standard. That is, all scanning takes place purely by TMS and TCK
manipulation. No other defined scan clocks or scan enables are used.

Whether to do 1149.1 SPTG by LSSD or GSD scan or TAP scan is dependent on how your
scan chains are configured. If the tester on which the chip is to be tested is not too severely
pin limited then the best way to do stored pattern test generation is probably by LSSD or GSD
scan, fully utilizing all the normal scan capabilities of the chip. If, on the other hand, you are
dealing with a severely constrained tester pin interface, as might very well be the case, for
instance, with burn-in test, then you might want to consider TAP scan SPTG. With TAP scan
SPTG all scan operations are performed by the standard 1149.1 defined scan protocol. TCK
and TMS are manipulated to cause scan data to move along the TDI-to-TDO scan chain, as
well as possibly other defined SI-to-SO scan chains. So you are not limited to just one scan
chain under TAP scan SPTG, but it is important to realize that all scan chains that are defined
must be scannable in parallel and all by the same TCK-TMS scan protocol.

Configuring Scan Chains for TAP Scan SPTG


For this approach to 1149.1 SPTG it is necessary that your part's test aspects take two things
into consideration:
1. TCK must be distributed as a scan clock to all memory elements which are to be scanned
by the 1149.1 scan protocol.
2. For all those scannable memory elements that are to be scanned via the TDI-to-TDO
path there must exist scan path gating to redirect scanning away from the defined SI-SO
LSSD or GSD scan paths and to the TDI-TDO TAP scan path.

The way that these two things are accomplished is by defining a special instruction - call it
TAPSCAN - that when loaded into the 1149.1 instruction register (IR) will bring the necessary
clock and scan path gating into play. Figure 10-3, which describes TAP controller
operation, will aid in understanding why clock gating is necessary.


Figure 10-3 TAP Controller State Diagram


As seen from this figure, scanning of a test data register (TDR) must take place by exercising
the state transitions in the Select-DR-Scan column. Thus we have the following sequence of
state transitions for a TDR scan: Select-DR-Scan, Capture-DR, Shift-DR (repeat), Exit1-DR,
Update-DR. For this TDR scan protocol it is necessary to pass through the Capture-DR state,
in which data is parallel loaded into the TDR selected by the current instruction. Bearing in
mind that SPTG will already have captured test response data into this TDR prior to scanning,
you must not allow the Capture-DR state to capture new data, thus destroying the test
generation derived response. It is thus necessary that the use of TCK as a scan clock to the
TDR be gated off except when in the Shift-DR state. Figure 10-4 shows an example of how
this can be done for a MUXed scan design.

Figure 10-4 Example of TCK Clock Gating for TAP Scan SPTG


The TAPSCAN signal in this figure is hot (logic 1) when the instruction loaded into the IR is
TAPSCAN, the name we have arbitrarily assigned to the special instruction implemented in
support of TAP scan SPTG.

Gating of scan data must similarly use the ShiftDR TAP controller signal and the decode of
the TAPSCAN IR state. Figure 10-5 shows an example of how this might be implemented for
a MUXed scan design.

Figure 10-5 Example of scan chain Gating for TAP Scan SPTG

1149.1 SPTG Methodology


The preceding sections dealt with the concepts of implementing 1149.1 SPTG. We will now
discuss the methodology considerations directly related to this form of SPTG.


The first thing necessary for 1149.1 SPTG is that the test mode of interest be recognizable
as having 1149.1 characteristics. This means that it must have at least one each of the
following pins:
■ TEST_CLOCK Input
■ TEST_MODE_SELECT Input
■ TEST_DATA_INPUT
■ TEST_DATA_OUTPUT

Refer to “1149.1 Boundary Controls” in the Modeling User Guide for details on these pin
types.

Optionally, there may also be a Test Reset Input -TRST. There are three ways of identifying
these pins to Encounter Test - either in the model, in the BSDL or in Test Mode Define
ASSIGN statements. Conflicts between these three possible sources of information are
resolved by having the mode definition ASSIGN statement take precedence over the BSDL,
which in turn takes precedence over the model.

Given that the test mode being processed is 1149.1, then there are two components to the
test generation process:
1. Fault simulation of 1149.1 BSV verification sequences
Algorithmic SPTG will likely have some difficulty generating tests for faults associated
with the 1149.1 test logic (TAP controller, BYPASS register, etc.). The most efficient way
to cover these faults is by invoking the General Purpose Simulator to simulate the
verification sequences developed by 1149.1 Boundary Scan Verification (BSV). Here is
a sample mode definition which will serve both for BSV and invocation of the General
Purpose Simulator:
Tester_Description_Rule = tdr1
;

scan type = 1149.1
boundary=no
in = PI
out = PO
;

test_types none
;

faults static
;

2. Algorithmic SPTG


The determination of whether SPTG is to proceed using LSSD or GSD scan or TAP scan
is made by interrogating the mode definition TEST_TYPES and SCAN TYPE
statements:
❑ TEST_TYPES not NONE, SCAN TYPE = LSSD or GSD indicates LSSD or GSD
scan
❑ TEST_TYPES not NONE, SCAN TYPE = 1149.1 indicates TAP scan
Following are two sample mode definitions for 1149.1 SPTG, one for LSSD or GSD scan
and one for TAP scan:
/**************************************************************/
/*
/* Sample mode definition for 1149.1 LSSD or GSD scan SPTG
/*
/**************************************************************/

Tester_Description_Rule = tdr1
;

scan type = gsd
boundary=no
in = PI
out = PO
;

test_types static logic signatures no
;

faults static
;

/**********************************************************/
/*
/* Sample mode definition for 1149.1 TAP scan SPTG
/*
/**********************************************************/

Tester_Description_Rule = tdr1
;

scan type = 1149.1 instruction=10 tap_tg_state=rti
boundary=no
in = PI
out = PO
;

test_types static logic signatures no
;


faults static
;

Finite state machine (FSM) latches:


The mode initialization sequence for 1149.1 SPTG must bring the finite state machine
latches of the TAP controller to a known state. If a TRST pin is present then an
asynchronous reset accomplishes this quite readily. If, on the other hand, there is no
TRST pin, then it is necessary to initialize these latches by a synchronous reset employing
TCK and TMS. Standard three-valued simulation will not bring these latches to a known
state from an initial undefined state. Special provisions must be made by Test Mode
Define to identify these latches and then resort to a complex simulation process to get
them to their home state. You can aid in this process by using the FSM attribute, either in
the model source or the mode definition ASSIGN statement, to identify the finite state
machine latches of the TAP controller. Otherwise Test Mode Define will attempt to
automatically identify them. If you choose to identify these latches then the FSM attribute
is placed either on the output pin of the latch primitive or the output pin of a cell definition
or instance that contains the latch primitive. When placed on the cell definition or instance
then the attribute is associated with the latch which drives the pin carrying the attribute.

One other consideration applies with respect to mode definition, but only if TAP scan is called
for. The SCAN TYPE mode definition statement must specify both an INSTRUCTION and a
TAP_TG_STATE (see the example following these descriptions).
■ INSTRUCTION:
Specify the instruction to be loaded into the IR to select a test data register (TDR) for
scanning through the TAP. This instruction will configure the design under test so that
SPTG can work effectively in the TAP controller state designated by TAP_TG_STATE. It
not only gates the selected TDR for scanning but also causes the correct TCK gating to
be brought into effect for all those memory elements to be scanned. (An example of the
type of clock gating necessary is shown in Figure 10-4 on page 291.)
The instruction to be loaded is specified in one of two ways, either:
❑ bit_string
Specify the binary bit string to be loaded into the IR, with the bit closest to TDI being
the left-most bit and the bit closest to TDO being the right-most bit.
❑ instruction_name
Specify the name of the instruction to be extracted from the BSDL.
■ TAP_TG_STATE


This parameter is used to specify the TAP controller state in which test generation is to
be performed. Acceptable values:

a. RUN_TEST_IDLE (RTI)
TG is to take place in the Run-Test/Idle state of the TAP controller.

b. TEST_LOGIC_RESET (TLR)
TG is to take place in the Test-Logic-Reset state of the TAP controller.

c. SHIFT_DR (SDR)
TG is to take place in the Shift-DR state of the TAP controller.

d. PAUSE_DR (PDR)
TG is to take place in the Pause-DR state of the TAP controller.

e. CAPTURE_DR (CDR)
This option is intended for use only if you have implemented parallel scan capture
clocking via the CAPTURE_DR state. This is not a recommended way to implement
internal scan via an 1149.1 interface, but Encounter Test will support it in a limited
fashion.
For CAPTURE_DR 1149.1 test modes, Encounter Test generates a test sequence
definition that can be used when performing ATPG. When there are clocks defined
beyond TCK, an additional sequence definition is generated. It is permissible to copy
this additional sequence to use as a template for defining additional test sequences
for use during ATPG. Note that all such test sequences must include a single TCK
pulse and may optionally include as many pulses of other clocks as desired.
See TAP_TG_STATE, described under “SCAN” in the Modeling User Guide for
additional information.
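For example, a SCAN TYPE statement that selects the special instruction by name and requests
test generation in the Shift-DR state might look like the following sketch. It assumes the
instruction name form uses the same instruction= keyword as the bit-string form shown in
the earlier sample; TAPSCAN is the arbitrary instruction name used in this chapter:

scan type = 1149.1 instruction=TAPSCAN tap_tg_state=sdr
boundary=no
in = PI
out = PO
;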

From this point on, for both LSSD or GSD scan and TAP scan, the usual SPTG methodology
is followed for the 1149.1 test mode. Test Mode Define will automatically derive the mode
initialization sequence and scan protocol necessary for test generation.

Refer to “” on page 312 for a typical IEEE 1149.1 Test Generation task flow.

1149.1 Boundary Chain Test Generation


The following figure shows a typical processing flow for running Stored Pattern Test
Generation with an 1149.1 Boundary chip.


Figure 10-6 Encounter Test 1149.1 Boundary Chip Processing Flow

1. Build Model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.
2. Build Test Mode for 1149.1 Boundary
A sample of a mode definition file for this methodology follows:
TDR=bs_tdr_name
SCAN TYPE=1149.1 IN=PI OUT=PO;
TEST_TYPE=NONE;
FAULTS=NONE;
.
.
.

For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Verify 1149.1 Boundary
For complete information, refer to the “Verify 1149.1 Boundary” in the Verification User
Guide.


During the verification of the 1149.1 structures, Encounter Test creates an experiment
called 1149. This experiment can be fault graded to get coverage.
4. Analyze 1149.1 patterns
Fault grade the 1149 test patterns. Refer to “Analyzing Vectors” on page 260 for more
information.
5. Commit Results
Refer to “Committing Tests” on page 69 for more information.

Reducing the Cost of Chip Test in Manufacturing


As chip densities continue to grow, the number of test vectors required to test a chip also
tends to grow. In addition, data volume required to represent a single test also grows since
the number of scan bits grows along with chip complexity. This results in test data volumes
that grow at a faster than linear rate as compared to the number of gates in the design. The
increase in test data volume translates into an increase in the tester socket time required to
test a chip due to both the increase in scan bits (scan chain length) and the increase in the
number of tests.

Based on the preceding paragraph, the following assumptions are used to address the
reduction of chip manufacturing cost:
■ Tester time and test data volume are among the primary sources of test cost.
■ Any decrease in the time to apply patterns required to detect a faulty design translates
to cost savings.

Breaking the elements of cost into finer detail, it can also be assumed that a near-minimal set
of test vectors will take too long to apply and require too much buffer resource, resulting in the
conclusion that test vector cost should be reduced. The two aspects that contribute to the cost
of a test vector are:
■ stimulus data - the data stored on the tester which allows us to excite certain areas of the
design.
■ response data - data stored on the tester which allows us to detect when the design
under test responds differently than expected.

For stored pattern testing, the stimulus and response data comprise about 98% of the test
data for a large chip, with the remaining 2% attributed to clocking template and control
overhead. The response data can be as much as twice as large as the stimulus data
depending on the type of tester being used and how the data bits are encoded. However,


stimulus data can be more of a problem for highly sequential (partial scan) designs. For large
chips, most of the stimulus and response data is located in the scan events.

The Encounter Test Stored Pattern test methodology includes a method, based on the
creation and use of an On-Product MISR test mode, to reduce data volumes while supporting:
■ Both stored pattern and WRP test methods
■ Use of test reordering to improve fault coverage versus pattern counts
■ Diagnostics
■ Reduced pin count test methodology

Refer to “On-Product MISR (OPMISR) Test Mode” in the Modeling User Guide for
additional information.

Using On-Product MISR in Test Generation


A sample scenario for implementing the On-Product MISR test mode in the test generation
process:
■ Insert the MISR and associated muxing logic into the design.
■ For stored pattern test generation with an on-product MISR it is necessary to define both
an On-Product MISR test mode and a diagnostics test mode. The On-Product MISR test
mode definition refers to the diagnostics test mode by the DIAGNOSTIC_MODE
statement. See “On-Product MISR (OPMISR) Test Mode” and “DIAGNOSTIC_MODE” in
the Modeling User Guide for more information.
■ Process the design through Encounter Test to generate test patterns. Refer to “Static
Test Generation” on page 59 for details.

The resulting test data from Stored Pattern Test Generation will contain a Channel_Scan
event and optionally, Diagnostic_Scan_Unload events to represent the scan-out operation.
There will also be Product_MISR_Signature events to denote the expect data for each test.
See the “Event” section of the Test Pattern Data Reference for details on these event
types.

When a failing MISR signature is detected in the application of OP-MISR tests, diagnosis of
the failure proceeds by switching to the diagnostics mode. In this mode the design is
reconfigured so that channel latch values can be scanned out. Encounter Test automatically
generates the Diagnostics_Observe and Diagnostics_Return sequences for
purposes of getting to and from the diagnostic test mode. Refer to “Define_Sequence” types
in the Test Pattern Data Reference for details.


Using On-Product MISR, the test data file can realize a potential reduction of up to 90% for a
large design. For tests that include Diagnostic_Scan_Unload events for diagnostics,
there are corresponding increases to the test data volume. You can limit the amount of
diagnostic test data by specifying a fault coverage threshold (diagnosticmeasures) to limit
the volume of diagnostic measure data to retain.

On-Product MISR Restrictions


Following restrictions apply to any use of on-product MISRs without masking to prevent X
values from propagating into MISRs:
■ On-Product MISR is a signature based test methodology similar to LBIST or WRPT. The
designs must be configured to prevent X values from propagating into the signature
register. Specific options on the test generation process may need to be used to ensure
that X sources are prevented (for example, globalterm=none should not be specified
if the test mode has active (visible to a scan-measurable latch) 3-state bidirectional pins).
■ Multiple scan sections are not supported
■ Macro test patterns are not supported.
■ Interconnect Test is not supported.
■ AC Path Delay Test is not supported.
■ AC test constraint timings are not supported.

Parallel Processing
The ongoing quest to reduce test generation processing time has resulted in the introduction
of the data parallelism approach to the Encounter Test Generation and Fault Simulation
applications. Parallel Processing implements the concept of data parallelism, the
simultaneous execution of test generation and simulation applications on partitioned groups
of faults or patterns. This reduces the elapsed time of the application.

The data set is partitioned either dynamically (during the test generation phase of stored
pattern test generation) or statically (at the beginning of the simulation phase).

The set of hosts used for parallel processing can be designated using one of these methods:
■ Specifying a list of machines (available via either graphical user interface for the
application being run or via command line).


■ Using Load Sharing Facility (LSF)* (available command line only, see “Load Sharing
Facility (LSF) Support” on page 301).

Encounter Test supports parallel processing for Stored Pattern Test, Weighted Random
Pattern Test, Logic Built-In Self Test, and Test Simulation. Refer to the following for conceptual
information on these applications:
■ “Advanced ATPG Tests” on page 275
■ “LBIST Concepts” on page 323
■ “Test Simulation Concepts” on page 263
■ “Stored Pattern Test Generation Scenario with Parallel Processing” on page 309 for a
sample processing flow incorporating Parallel Processing Test Generation.

Refer to “Performing Test Generation/Fault Simulation Tasks Using Parallel Processing” on page 304 for additional details needed to set up and execute applications using parallel processing.

Load Sharing Facility (LSF) Support


Encounter Test supports the use of Load Sharing Facility (LSF) from Platform Computing
Corporation** to pick machines for a parallel run. LSF support is available in command line
mode only. Entering application_command -h on the command line displays the LSF
options for the application. Refer to subsequent sections for additional information.

Parallel Stored Pattern Test Generation


This function is currently available using command line only. Refer to “Parallel Processing
Keywords” in the Command Line Reference.

A controller process is started on the machine that the run is started from. Test generation
and simulation processes are started on the machines identified by the user. At any given
time, only one of the processes will be working. The controller process primarily performs test
generation and simulation processes. It performs test generation when it is not busy with
simulation and handling communication from the processes. The controller process is
responsible for generating the patterns and fault status file.

Test Generation Parallel Processing Flow

An overview of how parallel processing works for stored pattern test generation:


Figure 10-7 Test Generation Parallel Processing

Parallel Simulation of Patterns


Fault Simulation can be performed in parallel mode via graphical user interface for Test
Simulation and Manipulate Tests or in the command line environment.

For command line invocation, refer to “Parallel Processing Keywords” in the Command Line
Reference.

Parallel processing support is available for simulation of patterns. The following features are
supported:
■ High Speed Scan and General Purpose simulation (via command line
simulation=hsscan|gp)
■ Simulation of Scan Chain, Driver-Receiver, IDDq and Logic Tests.

To run parallel processing on these commands, add the following options:


■ To directly call certain machines -
[hosts=hostname1,hostname2,hostname3...]
■ To call an LSF queue - queuename=<string>


An additional option for use with queuename is Numslaves=#, which specifies the number
of ATPG slave processors on which to run parallel processing.
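For example, the following sketches show how a fault simulation run might be distributed;
analyze_vectors stands in for whichever simulation command you are running, and all
directory, host, queue, and experiment names are placeholders:

analyze_vectors workdir=./testwork testmode=FULLSCAN experiment=logic1 simulation=hsscan hosts=host1,host2,host3
analyze_vectors workdir=./testwork testmode=FULLSCAN experiment=logic1 simulation=hsscan queuename=batch_queue numslaves=4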

Currently, parallel simulation does not support signature-based patterns (WRPT and LBIST
patterns). The fault simulation phase of the algorithm is parallelized.

A controller process is started on the machine that the run is started from. Fault simulation
processes are started on the machines identified by the user. The controller process performs
good machine simulation and coordination of fault simulation processes. It also produces the
patterns and fault status files. The host used to start the application is used to run the
controller. The fault simulation phase is run in parallel on the hosts you select. Therefore, if n
hosts are selected, there are n simulation jobs running.

The Good Machine Simulation (in order to write patterns) on the originating host is performed
in parallel with the Fault Simulation running on the selected hosts. Therefore, for optimal
results, do not select the host that you originate the run from as one of the hosts to run Fault
Simulation on.

Optimum performance is achieved if the selected machines running in parallel have been
dedicated to your run.

In a parallel run, a greater number of patterns may be found effective as compared to a serial
run.

Fault Simulation Parallel Processing Flow

An overview of how parallel processing works for fault simulation:


Figure 10-8 Fault Simulation Parallel Processing Flow

Performing Test Generation/Fault Simulation Tasks Using Parallel Processing
Refer to “Parallel Processing” on page 300 for conceptual information.

To perform parallel processing using command line, refer to “Parallel Processing Keywords”
in the Command Line Reference.

Restrictions
For Logic Built-In Self Test restrictions, refer to “Restrictions” on page 330

Applicable to Parallel Processing:


■ If you are performing simultaneous parallel processing runs, each set of hosts identified
for the run can not include the same hosts. If a particular host is used for one parallel run,
that host may not be used in another simultaneous parallel run by the same user.
■ The resimulation phase of a parallel stored pattern test generation run is not parallelized.
■ Multiple test sequences are not supported within a single run for parallel LBIST
processing. However, multiple test sequences can be run in the parallel environment by


using the expert option parallelpattern=no. This option causes the algorithm to
revert to the parallel processing support available with previous releases.
■ The parallel processing architecture does not support dynamic distribution of faults. This
forces a user to run on dedicated machines in order to benefit from parallel processing.
It may also produce more patterns.
■ General Purpose Simulation is not supported for WRPT and LBIST when running in
parallel mode. High Speed Scan Based Simulation should be used.
■ LSF machines that accept more than one job can cause parallel LSF to terminate. The
workaround is to use the lsfexclusive option. If you specify lsfexclusive,
ensure your queue is configured to accept exclusive jobs.

Prerequisite Tasks for LSF


Refer to the following for application-specific prerequisite tasks:
■ For Logic Built-In Self Test, “Input Files” on page 330
■ For Test Simulation, “Prerequisite Tasks” on page 259

Ensure the following conditions are met in your execution environment prior to using LSF:
■ Ensure that the statements in your .profile or profile files for your shell (.kshrc,
.cshrc) are error-free; otherwise you will not be able to run in the parallel environment.
The statements for profile-related information must be syntactically correct, i.e.
invocations that do not exist or are typed incorrectly will prevent successful setup of the
parallel processing environment.
■ Ensure that you can rsh to the machines selected for the parallel run from the machine
that you originate your run from and be able to write to the directory that contains your
part. The following command executed from the machine that you plan to originate your
run from will validate this requirement:
rsh name_of_machine_selected touch directory_containing_part/name_of_some_file

You should also be able to rsh to the selected host and be able to read the directory (and
all its subdirectories) where the code resides. Execute the following command from the
machine you plan to originate your run from in order to verify this:
rsh name_of_machine_selected ls directory_where_code_is_installed

If these commands fail, please contact your system administrator.


■ If you are running across cells, either of the following is required:


❑ The version of rsh you use should inherit tokens obtained via the klog command
(versus one obtained as a result of login).
Notes:

a. If you are starting the run on AIX, verify that your PATH environment variable
ensures that the rsh command from the AFSRCMDS package will be picked up.
This version of rsh inherits tokens obtained via klog. This is usually installed in
/usr/afsrcmds/bin. In addition, ensure that the environment variable KAUTH is
set to yes.

b. KAUTH is available only from AIX.


❑ The .cshrc (for C shell users) and .profile (for Korn shell users) should
obtain tokens for all cells. A suggested method is the following:

a. Ensure that the directory containing your .profile or .cshrc has read and
lookup access.

b. Create an executable file in /tmp on all machines you will be using that obtains tokens
using the klog command.

c. Ensure that the file can be read and executed only by you.

d. Put a statement in the .profile or .cshrc to execute the file.


■ Ensure that any directories used by Encounter Test parallel processing (for example, the
part identification parameter WORKDIR) are accessible by all machines chosen to execute
in parallel. For example:
❑ Ensure the directory where the code is installed is accessible by all machines.
❑ All machines selected for the parallel run should be able to access the code
installation for their respective platforms.
❑ The directory specified by the TB_PERM_SPACE and TB_TEMP_SPACE environment
variables should be accessible by all machines chosen to execute in parallel. Refer
to “Common Advanced Keywords” in the Command Line Reference for additional
information.
❑ Any directories you have manually linked to the part directory should be accessible
by all machines chosen to execute in parallel.
■ Invoke the xhost command to enable machines participating in a parallel run to display
on the originating host. Examples of using xhost are:
❑ xhost + machine_name - enables the specified machine_name to display
on the machine that issues the xhost command.


❑ xhost + - enables all machines to display on the machine that issues the xhost
command.

Maintain awareness of the following:


■ Before the parallel run is started, the .profile is executed. Environment variables set
outside of the .profile are not inherited by the parallel run.
■ echo statements in the .profile, .kshrc, .cshrc or other files associated with the
execution of the shell will cause the parallel run to fail. If you are reading the TERM
variable or echoing output in these files, do so conditionally by wrapping the echo or
the reading of the TERM variable in a tty -s test (see the sketch following this list).
■ Any processes which expect a manual response during your logon will cause the parallel
processing logon processes to hang.
■ Processes such as automatic klogs can cause problems during a parallel run.
■ A process called lamd is created on every machine participating in the parallel run.
Please do not kill this process.
■ Do not remove these files in the /tmp directory:
❑ lam-userid@hostname
❑ lam-userid@hostname -s
❑ lam-userid@hostname -sio
■ If your parallel run hangs, you can execute the following commands on one of the
participating machines to terminate the run:
et -c
followed by:
TPChaltMPI hosts=name1,name2,...
where name1,name2,... are the names of the machines participating in the run.
■ Run time improvements can be guaranteed only if all machines participating in the
parallel run are dedicated to the parallel run. This is because the current algorithm does
not perform load balancing.
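The following is a minimal sketch of conditionally guarding an echo (or a read of the TERM
variable) in a Korn shell .profile so that it runs only when the shell is attached to a terminal;
the message text is only an example:

if tty -s
then
    echo "interactive login detected"
fi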

In addition to the previously stated prerequisites for test generation/simulation applications,
ensure the following conditions are met in your execution environment prior to using LSF for
parallel processing:
■ Ensure the machine you are launching from has an LSF license.


■ The PATH environment variable should be set up to find the bsub command. Specify this
setting in your .login file (C shell users) or .profile (Korn shell users); see the
example following this list.
■ VERY IMPORTANT: The user running the LSF job must be allowed to rsh into the
machines being used while the LSF job is running.
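For example, assuming LSF is installed under /usr/local/lsf (a hypothetical location; use
your site's actual installation path), the PATH could be extended as follows:

In .profile (Korn shell):
export PATH=/usr/local/lsf/bin:$PATH

In .login (C shell):
setenv PATH /usr/local/lsf/bin:${PATH}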

Refer to “Parallel Processing Keywords” in the Command Line Reference for additional
information.

Also note that numerous repositories of LSF information exist on the WWW.

Input Files
Refer to the following for application-specific input files:
■ For Logic Built-In Self Test, “Input Files” on page 330

Output Files
Refer to the following for application-specific output files:
■ For Logic Built-In Self Test, “Output” on page 331
■ For Test Simulation, “Output” on page 259

During parallel processing, interim output files are created for each process.

The naming convention for these interim files is:

■ faultStatus.mode.exper_name_prefix__n of N, where N is the total number of
hosts and n is 1 of N, 2 of N, and so on.

When parallel processing ends, these files are combined into a single faultStatus output
file.

Temporary Output Files

When using parallel processing, a file named TPCgetMachine.date.time is generated
in the home directory during the parallel run. Do not delete this file unless the GUI or
command line executable terminates abnormally. In the event the GUI does terminate
abnormally, delete this file manually.

Additional files in the home directory:


■ remotetb.pid.source_string
■ pid.myapp
■ pid.mynodes

These files appear in the /tmp directory:


■ lam-userid@hostname
■ lam-userid@hostname -s
■ lam-userid@hostname -sio
■ /tmp/lamstdout.pid.taskid - this file contains the output of the child
process.
pid is the process id of the child process.
taskid is a number from 1 to n, where n is the number of hosts selected.
Note: If you manually kill the run, you may have to manually delete these files.

Stored Pattern Test Generation Scenario with Parallel Processing


The following figure shows a typical processing flow for Test Generation incorporating Parallel
Processing.


Test Generation Process with Parallel Processing

1. Build Model
For complete information on Build Model, refer to “Performing Build Model” in the
Modeling User Guide.


2. Build Test Mode


A sample mode definition file for this methodology follows:
TDR=stored_pattern_tdr_name
SCAN TYPE=LSSD (or SCAN TYPE=GSD) BOUNDARY=NO IN=PI OUT=PO;
TEST_TYPES=STATIC LOGIC SIGNATURES=NO,STATIC MACRO,
IDDQ PROPAGATE CELL_BOUNDARY,DRIVER_RECEIVER,SCAN_CHAIN;
FAULTS=DYNAMIC;
TEST_FUNCTION_PIN_ATTRIBUTES=MLE_FLAG;
ASSIGN PIN XYZ=-SC;

For complete information, refer to “Performing Build Test Mode” in the Modeling User
Guide.
3. Analyze Test Structure

a. Logic Test Structure Verification (TSV)


For complete information, refer to “Logic Test Structure Verification (TSV)” in the
Verification User Guide.

b. Verify Core Isolation


For complete information, refer to “Verify Core Isolation” in the Verification User
Guide.
4. Build a Fault Model
For complete information, refer to “Building a Fault Model” in the Modeling User Guide.
5. Stored Pattern Test (Scan Chain, Logic, IDDq, Driver and Receiver, I/O Wrap)
For complete information, refer to “Advanced ATPG Tests” on page 275.
6. Analyze Untestable Faults
For complete information, refer to “Deterministic Fault Analysis” in the Verification User
Guide.
7. Commit Tests
For complete information, refer to “Utilities and Test Vector Data” on page 233.
8. Export Test Data.
For complete information, refer to “Writing and Reporting Test Data” on page 169.
Note: To create the most compact pattern set for manufacturing, resimulate the final parallel
patterns, or use uni-processor test generation for your final pass.


11
Test Pattern Analysis

This chapter discusses the tools and concepts that can be used to analyze test patterns
within Encounter Test.

The sections in the chapter include:


■ “Debugging Miscompares” on page 313
■ “Using Watch List” on page 315
■ “Viewing Test Data” on page 317
■ “SimVision Overview” on page 321
■ “Performing Test Pattern Analysis” on page 322

Test Pattern Analysis is the process of:


■ Taking data from manual patterns or a test generation process;
■ Simulating the patterns to determine if the expected responses match the actual values;
and then
■ Analyzing any miscompares to determine the cause of the problem.

Debugging Miscompares
Miscompares can be caused by various factors, some of which are given below:
■ The model is logically incorrect
■ The model does not account for the delay mode requirements of the simulator that
generated the original data (or the one that is being used for the re-simulation). For
example, if the cell description has some extra blocks to model some logical
configuration, the unit delay simulator may find that signals do not get through the logic
on time since it assigns a unit of delay to each primitive in the path. This might work better
with a zero delay simulation.


■ The input patterns are incorrect (either due to an error in the manually generated
patterns; or due to a problem with the original simulation).

The following are some recommended considerations while analyzing the patterns:
1. The first thing to consider is where the “expected responses” come from.
❑ If you are writing manual patterns, the expected responses are included in these
patterns as an Expect_Event (see “TBDpatt and TBDseqPatt Format” in the Test
Pattern Data Reference).
❑ If you are analyzing a TBD from an Encounter Test test generation process, the
simulation portion of that process creates measures to indicate that the tester
should measure a particular set of values on primary outputs and/or latches. When
you resimulate the test data, the simulator compares its results with the previous
results specified by measure events (Measure_PO and Scan_Unload).
2. The next thing to consider is the analysis technique you plan to use. This choice will
influence the type of simulation to use for the re-simulation. Using the Test Simulation
application, you may select a variety of simulation techniques. In addition, there is an
interactive simulation technique specifically aimed at test pattern analysis (and
diagnostics).

Some analysis techniques are given below:


1. Use the following process to view the miscomparing patterns in a waveform display:
❑ Create a Watch List containing all the nets, pins, and/or blocks you want to include
in the analysis. This Watch List can be created:
❍ Interactively through the Encounter Test View Schematic window or the Watch
List window (refer to Watch List in the Graphical User Interface Reference)
❍ Manually (refer to “Using Watch List” on page 315 for more information)
❑ Select either the General Purpose Simulation or the Good Machine Delay
Simulation options from the Simulate Vectors windows since they are the only ones
that support this type of analysis. Set the appropriate simulation run parameters to
specify the watch list you are using, the specific test data during which simulation
should be saved for viewing, and any other desired options.
❑ When the simulation ends, click either the View Waveforms icon or Windows -
Show - Waveforms. The Select Optional Transition File dialog is displayed.
❑ Select the transition file (.trn) for the test mode and experiment run at the time of
simulation and click the Open button. A SimVision window is displayed with a set of
signals that define the correspondence between the Encounter Test vector format
and the waveform window timeline.


❑ Use SimVision facilities to select and display design signals of interest. Refer to
“SimVision Overview” on page 321 for additional information.
2. To view the miscomparing patterns on the graphical view of the design, use View
Vectors to select a test sequence to analyze and invoke View Circuit Values to see the
values displayed on the View Schematic window.
See “Viewing Test Data” on page 317 and “General View Circuit Values Information” in
the Graphical User Interface Reference for more information.
3. If you prefer to view the patterns and the model and make your own correlation of the
data, or if you are limited to this choice by the data you are analyzing, then use the
following process:

a. Select your choice of simulation.

b. View the resulting patterns using View Vectors.

c. View the logic on the Encounter Test View Schematic window and manually
analyze the problem.

Using Watch List


You can use a watch list to create and customize a list of design objects for input to various
Encounter Test applications, either through the graphical interface or by specifying the list
manually. See “Watch List” in the Graphical User Interface Reference for details.


To manually create a watch list, use the following syntax and semantics:
■ Each line in the file is a statement, which can be of one of the following types:
Model Object Statement
Each Model Object Statement identifies one Encounter Test Model Object by type and
name. It also allows you to specify an alias name for the Object. The syntax of the
statement is the Model Object name optionally followed by the alias name. The Model
Object name can be in the full name or short name format. The simple name should have
a type specified before the name. The types are NET, PIN, or BLOCK. If a type is not
specified, then net is assumed first, then pin, and then block. The alias name can be any
combination of alphanumeric characters or the following special characters:
!#$%&()+,-.:<=>?@[]^_`|~.
Note: If an entry is a BLOCK, Encounter Test will create a watch list for all ports/pins on
that block. Refer to “expandnets” on page 316 for information on how to identify signals
within a BLOCK.
Facility Start Statement
The Facility Start statement marks the beginning of a group of Model Objects that will be
associated together. The statement syntax is the word FACILITY followed by white space,
followed by a facility name, and ending with an open brace '{'. The facility name must start
with an alphabetic character. It may contain alphanumeric characters or underscores. It
cannot end in an underscore nor have more than one underscore repeated. These are
the same rules as for identifiers in VHDL. The name is always folded to upper case and
is therefore case insensitive. When the same facility name appears more than once in
the file, only one facility by that name is created, containing all the Model Objects
associated with the facility. It is an error to nest facilities. Here is an example Facility Start
Statement:
FACILITY TARGET {

Facility End Statement


The Facility End Statement marks the end of the group of Model Objects defined by the
previous Facility Start Statement. It is simply a close brace ’}’ character as the first
character of a line. It is an error to have a Facility End without a corresponding Facility
Start. It is an error to nest facilities.
expandnets
Use this syntax to direct Encounter Test to record net switching for all nets within the
specified level of hierarchy. This facility must be named expandnets. For example,
block xyz
block abc
}


directs Encounter Test to record net switching activities for all nets within block xyz and
block abc.
Comments
The characters ’//’ and ’/*’ begin a comment. Comments are allowed at the end of a
statement or on a line by itself. Once a comment is started, it continues to the end of the
line.
■ An example of a watch list:
facility unit_a_buss_byte_0 {
"unit_a.buss_out[7]"
"unit_a.buss_out[6]"
"unit_a.buss_out[5]"
"unit_a.buss_out[4]"
"unit_a.buss_out[3]"
"unit_a.buss_out[2]"
"unit_a.buss_out[1]"
"unit_a.buss_out[0]"
}

facility expandnets {
block unit_b
block unit_c
}

"sys_clock"
"a_clock"
"b_clock"
"init_a.sys_clock"

There can be only one statement per line. Each statement must be on a single line.
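The Model Object Statement syntax described above also allows an explicit object type and
an alias. The following sketch uses hypothetical net and pin names; the aliases (BUSS7 and
ENABLE) are simply labels of your choosing:

NET "unit_a.buss_out[7]" BUSS7    // alias BUSS7 for this net
PIN "unit_a.enable_reg.d" ENABLE  // explicit PIN type with alias ENABLE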

Viewing Test Data


View the test pattern data created by Encounter Test by displaying the hierarchy of a Vectors
or TBDseq file. To access this tool, select Window - Show - Vectors from the main window.
Enter the information as described in “View Vectors Window” in the Graphical User
Interface Reference to display the patterns.

The Vectors hierarchy consists of the following entities:


Experiment
Test_Section
Tester_Loop
Test_Procedure
Test_Sequence
Pattern
Event

See “Encounter Test Vector Data” on page 238 for information about each level of the
hierarchy.


You can perform a variety of tasks using the hierarchy. Its capabilities include:
■ Moving interactively up and down a hierarchy.
■ Expanding and collapsing levels of a Vectors file.
■ Displaying a design view with simulation values.
■ Displaying Details about specific objects in a Vectors file. This consists of information
also available in a TBDpatt file, including data, an audit summary, and a listing of the
contents of the Vectors file for the selected object.
■ Creating a minimum Vectors file containing the minimal amount of data needed to
perform simulation.
■ Displaying Attributes of specific entities in a Vectors file.
■ Displaying simulation results requiring special analysis (for example, miscompare data).
■ Displaying failing patterns based on the currently selected failures. See Reading Failures
in the Diagnostics User Guide for more information.

For additional information, refer to:


■ “Viewing Test Data” in the Graphical User Interface Reference.
■ “Performing Test Pattern Analysis” on page 322.

Viewing Test Sequence Summary


The report_vector_summary program generates a summary of the test sequences
used during a test. The summary is in the form of a table containing the event types for each
test sequence and information on their effectiveness in detecting faults.

The vector information is collected and summarized according to test sequences having a
similar stream of patterns, events, and clocks.
stream as a template and displays it as a string of characters, with each character
representing a different event type.

If you set the vectors option to yes, the report displays the short form of vectors in the
summary. Set the key option to yes to print the description for each character used in the
template string.

The following is a sample vector summary generated for a test pattern by using the
report_vector_summary command with key=yes and vectors=yes:


Key for string format of Test Sequence report


-----------------------------------------------------
S - Stim PI
I - Stim PPI
s - Scan Load
a - Skewed Scan Load
P - Pulse Clock (P is followed by name of the clock is being pulsed)
C - Stim Clock
Q - Pulse PPI (P is followed by name of the clock is being pulsed)
M - Measure PO
m - Scan Unload
b - Skewed Scan Unload
c - Channel_Scan
D - Diagnostic Skewed Scan Unload
d - Diagnostic_Scan_Unload
F - Force
O - Product MISR Signature
X - Any other events

#Sequences %Det #Static #Dynamic 1st Seq Template


---------- ---- ------- -------- ------ --------
#1 1 51.4 338 0 1 scan
#2 23 47.9 315 0 2 sSP(CLOCK)Mm
#3 4 0.6 4 0 25 sSS(CLOCK)MM(CLOCK)
#4 1 0.2 1 0 29 sSMm
#5 2 0.0 0 0 0 init

----Template #1---- scan


Scan_Load
Stim_PI
Stim_PI
Stim_PI
Pulse(CLOCK)
Measure_PO
Stim_PI
Pulse(CLOCK)
Measure_PO
Stim_PI
Pulse(CLOCK)


Measure_PO
Stim_PI
Pulse(CLOCK)
Measure_PO
Stim_PI
Pulse(CLOCK)
Measure_PO
Scan_Unload
----Template #2---- logic
Scan_Load
Stim_PI
Pulse(CLOCK)
Measure_PO
Scan_Unload
----Template #3---- logic
Scan_Load
Stim_PI
Stim_Clock(CLOCK)
Measure_PO
Stim_Clock(CLOCK)
Scan_Unload
----Template #4---- logic
Scan_Load
Stim_PI
Measure_PO
Scan_Unload
----Template #5---- init
Stim_PI

The sample report contains three sections:


■ The first section is the key describing the format of the summary. It lists the characters
used in short form of the vectors in the summary and the event type that each character
represents.
■ The second section is the summary of test vectors. The summary contains the following
columns:
❑ #Sequences - shows the number of sequences with event stream matching the
template in the Template column
❑ %Det - shows the percentage of test coverage obtained by the sequences


❑ #Static - shows the number of static faults detected by the sequences


❑ #Dynamic - shows the number of dynamic faults detected by the sequences
❑ 1st Seq - shows the location of the first matching sequence in the pattern set
❑ Template - shows the string describing the template. For the init, scan, and flush
sequences, the summary lists the event type instead of the string. The Key section
describes the characters representing the template string.
■ The third section describes the vectors in each template listed in the summary.

Refer to report_vector_summary in the Command Line Reference for more information.
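As an illustration, an invocation might look like the following. Only the key and vectors
keywords are taken from the description above; the remaining keyword names and values
are assumptions and should be checked against the Command Line Reference:

report_vector_summary workdir=<directory> testmode=<modename> experiment=<name> key=yes vectors=yes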

SimVision Overview
You can use SimVision to analyze simulation results. Its capabilities include:
■ Displaying digital waveforms
■ Displaying values on nets during any time period in a simulation
■ Arranging signals (move, copy, delete, repeat, show, hide) in the window for easy
viewing, enabling better interpretation of results
■ Multiple waveform graphs per window
■ Multiple windows that allow you to organize data and to view multiple test data segments
■ Annotating waves with text
■ Performing logical and arithmetic operations on waveforms

A .dsn or .trn file can be input to SimVision for waveform analysis. The files can be created
by either running Encounter Test test generation or simulation applications or from View
Vectors by selecting a test section with scope data then clicking the custom mouse button to
invoke Create Waveforms for Pin Timing. Refer to the description for the View Vectors
“View Pull-down” in the Graphical User Interface Reference.

To view Encounter Test patterns in SimVision, perform the following steps:


1. Click the View Waveforms icon on the main toolbar, or start SimVision by entering
simvision at the command prompt.
2. Open a .trn file for the experiment on the resulting Select Optional Transition File
dialog.


The following is an alternative method:


1. Invoke SimVision using this syntax: simvision -input showall.sv.
showall.sv is a SimVision script file that contains:
❍ database open path/TBscope.testmode.experiment.trn
❍ browser new
❍ waveform new
❍ set waves [browser find -name *]
❍ waveform add -signals $waves

Refer to the SimVision User Guide for additional details.

Performing Test Pattern Analysis


For details on using the graphical interface for analyzing and viewing test data, refer to
“Viewing Test Data” in the Graphical User Interface Reference.
Note: You can perform Test Pattern Analysis only through the graphical user interface.

Prerequisite Tasks
■ Select an experiment.
■ Create a Vectors file by running a test generation or simulation application.

Input Files
An existing Encounter Test experiment, sequence definition files, and failure data (if existing).

Output Files
If Create Minimum Vectors is selected on the Test View window, a new Vectors file is
created as output.


12
Logic Built-In Self Test (LBIST) Generation

This chapter discusses the Encounter Test support for Logic Built-in Self Test (LBIST).

LBIST: An Overview
Logic Built-In Self Test (LBIST) is a mechanism built into a design that allows the design to
effectively test itself. Encounter Test supports the LBIST approach called STUMPS (Self Test
Using MISR and Parallel SRSG). A STUMPS design contains linear feedback shift registers
(LFSRs) which implement a Pseudo-Random Pattern Generator (PRPG), sometimes referred
to as a Shift Register Sequence Generator or SRSG, to generate the (random) test vectors to
be applied to a scan design. It also includes an LFSR which implements a Multiple Input
Signature Register (MISR) to collect the scanned response data.

Refer to “Modeling Self Test Structures” in the Custom Features Reference for additional
information.

This application can generate tests for either/both of the following test types based on user
controls:
■ scan chain tests - refer to “Scan Chain Tests” on page 30
■ logic tests - refer to “Logic Tests” on page 31
❑ Logic tests are done using manually entered test sequences.

LBIST Concepts
Logic Built-In Self Test (LBIST) is a mechanism implemented into a design to allow a design
to effectively test itself. LBIST is most often used after a design has been assembled into a
product for power-on test or field diagnosis. However, some or all of the on-board LBIST
circuitry may also be used when testing the design stand-alone using a general-purpose logic

tester. Design primary inputs and primary outputs may be connected to pseudo-random
pattern generators and signature analyzers or controlled by a boundary scan technique. The
tester may exercise considerable control over the execution of the test, unlike the situation in
a “pure” LBIST environment where the design is in near-total control. This is being pointed
out here to emphasize that:
■ Encounter Test does not require LBIST pattern generators and signature analyzers to
reside in a design.
■ Encounter Test assumes that the LBIST operations are controlled through primary input
signals.

Encounter Test supports the STUMPS style of LBIST. STUMPS is a compound acronym for
Self Test Using MISR and Parallel SRSG. MISR is an acronym for Multiple Input Signature
Register. SRSG stands for Shift Register Sequence Generator. In Encounter Test
documentation, we call an SRSG a pseudo-random pattern generator (PRPG).

STUMPS follows scan guidelines for either LSSD or GSD. STUMPS scan chains are often
called channels. The basic layout of the STUMPS registers is illustrated in Figure 12-1.

During a scan operation, the channels are completely “unloaded” into the MISR and
replenished with pseudo-random values from the PRPG. The scan operation consists of a
number of scan cycles equal to or larger than the number of bits in the longest channel. Each
scan cycle advances the PRPG by one step, the MISR by one step, and moves the data down
the channels by one bit position. Between scan operations, a normal LSSD or GSD test is
performed by pulsing the appropriate clocks that cause outputs of the combinational logic
between channels to be observed by the channel latches (and possibly the primary outputs).


Figure 12-1 A Simplified Example STUMPS Configuration


STUMPS requires a means of initializing the LFSRs and a means of reading out the contents
of the signature registers. The initializing sequence is supplied to Encounter Test by the
customer. Refer to “SEQUENCE_DEFINITION” in the Modeling User Guide for more
detailed information about initialization sequences. The only support for reading out the
signature registers provided by Encounter Test is to verify signature registers are observable
by a parent test mode's scan operation.

The specification of a parent mode for scanning also allows you to alter the initial state of
some latches from one LBIST test generation run to another. Having once verified that the
initialization sequence works, Encounter Test is able to re-specify the latch states within the
initialization sequence when the sequence is copied into the test pattern data. No fixed-value
latches are allowed to be changed in this manner, because altering their initial states would
invalidate the TSI and TSV results.

The control logic that steers the LBIST operation must be modeled so that to Encounter Test
it appears that all clock signals and any other dynamic signals emanate from primary inputs
or pseudo primary inputs; control signals that are constant throughout the LBIST operation
may be fed by either primary inputs or fixed-value latches.

Encounter Test provides limited support for the initialization of RAM by non-random patterns.
The test mode initialization sequence may initialize embedded RAM either by explicit, non-
randomized patterns or, if the RAM supports it, by the use of a reset signal. Encounter Test
determines through simulation which RAMs are initialized; any uninitialized RAMs are treated
as X-generators for LBIST, and therefore they must be blocked from affecting any observation
point.

Additional features of STUMPS processing include:


■ Fast Forward
This feature speeds up the application of tests by skipping tests that do not detect any
new faults. Since the determination of which tests detect new faults is based upon the
cycling of the PRPG through all the tests, the fast forward operation must be able to
preserve the original PRPG sequence while skipping some of the scan operations. To do
this, it makes the PRPG state at the beginning of each scan operation one shift
cycle advanced from the PRPG state at the beginning of the previous test. The LBIST
implementation of this (which is slightly different from WRPT Fast Forward) saves the
PRPG into a “shadow register” during the scan operation. At the initial fault simulation of
the LBIST patterns, it is assumed that the PRPG is saved after one cycle of the scan
operation. When the reduced test set is applied, the saving may actually occur after m+1
cycles, where m is the number of consecutive tests to be skipped. To apply the next
“effective” test, the PRPG is restored from the shadow register prior to the scan
operation. Thus, the state of the PRPG does not depend upon whether the previous
originally simulated scan operation was applied.


In the general case, you supply two input pattern sequences in support of fast forward:
A PRPGSAVE sequence that moves the PRPG seed into the shadow register and a
PRPGRESTORE sequence that moves the seed from the shadow register into the
PRPG.
Alternatively, Encounter Test supports a specific implementation of the PRPG save and
restore operations whereby they occur concurrently with a channel scan cycle under
control of PV (PRPG save) and PR (PRPG restore) signals. When this implementation
is used, these signals must be identified by test function pin attributes and the
PRPGSAVE and PRPGRESTORE sequences are implicitly defined. For convenience,
we refer to the support of user-specified PRPGSAVE and PRPGRESTORE sequences
as “fast forward with sequences” and we refer to the support of PV and PR pins as “fast
forward with pins”. If the PV and PR pins are defined and user sequences are supplied
also, then “fast forward with sequences” takes precedence; in this case the PV and PR
pins are held to their inactive states throughout the application of test data, much like TIs.
“Fast forward with sequences” assumes that the PRPGSAVE sequence does not disturb
any latches except the shadow register and that the PRPGRESTORE sequence does
not disturb any latches except the PRPG. This of course implies that these operations
can not be executed concurrently with a channel scan cycle, but must be inserted
(between scan cycles in the case of the PRPGSAVE sequence).
There is no corresponding shadow register for MISRs. Skipping some scan cycles
changes the signature, and there is no way to avoid this fact. When fast forward is used,
two sets of MISR signatures are computed.
The PRPG shadow register must, like the PRPG itself, not be contaminated during self
test operations. The PRPGs are checked to verify that in the test mode, they have no
other function. To ensure that the PRPG shadow register is not corruptible, it must be
modeled as fixed value latches, which means that the TI inputs for this test mode prohibit
the shadow register latches from being changed.
The PRPGSAVE and PRPGRESTORE sequences will necessarily violate some TI
inputs. This is okay because Encounter Test does not simulate these sequences except
when checking to ensure that they work properly. TSV ensures that the PRPGSAVE and
PRPGRESTORE sequences work and that they do not disrupt any latches other than the
PRPG and its shadow latches.
In “fast forward with pins” the channels and MISR shift one cycle concurrently with the
restore operation, and the restore is executed on the last cycle of the scan. This works
only if the PRPG is never observed between scans; either the existence of explicit A, B,
or E shift clocks in the test sequence or a path from the PRPG to anything other than a
channel input would make it infeasible to use fast forward with pins. This is because it
would then be impossible to predict the effect of some tests being skipped; the PRPG
state would depend upon how many subsequent tests are to be skipped, and this
information is not available until a fault simulation is performed, but the fault simulation is
not possible until the PRPG state is determined.
■ Channel input signal weighting
Channel input signal weighting improves the test coverage from LBIST. A multiple-input
AND or OR design is allowed to exist between the PRPG and a channel input. The signal
weight is determined by the selection of the logic function and the number of pseudo-
random inputs to it. This selection is made by control signals fed from primary inputs.
These controls are not allowed to change during the scan operation, and so every latch
within a given channel will be weighted the same, varying only as to polarity which
depends upon the number of inversions between the channel input and the latch. The
primary inputs that are used to control the channel input weight selection must be flagged
with the test function WS (for weight selection).
Weighting logic also may be fed by Test Inhibit (TI) or Fixed Value Linehold (FLH) latches.
FLH latches may be changed from one experiment to another, causing channels to be
weighted differently. The FLH latch values may be changed by adding them as lineholds
to the lineholds file.
■ Channel scan that considers the presence of a pipelined PRPG spreading network in
LBIST test modes
The pipelined spreading PRPG network between PRPGs and channel latches enables
simultaneous calculation of spreader function and weight function in one scan cycle. The
sequential logic in the PRPG spreading network can be processed by placing a Channel
Input (CHI) test function in the design source or the test mode definition file. Encounter
Test uses the CHI to identify PRPG spreading pipeline latches during creation of the
LBIST test mode.

Encounter Test LBIST does not require scan chains to be connected to primary pins or on-
board PRPG and MISR. For example, you may have some scan chains connected to scan
data primary inputs and scan data primary outputs, other scan chains connected to PRPG(s)
and MISR(s), other scan chains connected to scan data primary inputs and MISR(s), and
other scan chains connected to PRPG(s) and scan data primary outputs, all on the same part,
all in the same test mode. The scan data primary inputs and outputs, if used, must connect
to tester PRPGs and SISRs (single-input signature registers). Encounter Test assumes that
all the scan chains are scannable simultaneously, in parallel.

See “Task Flow for Logic Built-In Self Test (LBIST)” on page 335 for the processing flow.


Performing Logic Built-In Self Test (LBIST) Generation


To perform Logic Built-In Self Test (LBIST) using the graphical interface, refer to “Create
LBIST Tests” in the Graphical User Interface Reference.

To perform Logic Built-In Self Test (LBIST) using command lines, refer to “create_lbist_tests”
in the Command Line Reference.

An Encounter True-Time Advanced license is required to run Create LBIST Tests. Refer to
“Encounter Test and Diagnostics Product License Configuration” in What’s New for
Encounter ® Test and Diagnostics for details on the licensing structure.

The syntax for the create_lbist_tests command is given below:


create_lbist_tests workdir=<directory> testmode=<modename> testsequence=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the LBIST testmode
■ testsequence = name of the test sequence being used (optional but generally
specified)

The commonly used keywords for the create_lbist_tests command are given below:
■ extraprpgcycle=#
Indicates the number of times the PRPGs are shifted by clock or pulse events in the test
sequence. If using parallel simulation by setting forceparallelsim=yes, also set
extraprpgcycle to accurately simulate the test sequence.
■ extramisrcycle=#
Indicates the number of times the MISRs are shifted by clock or pulse events in the
test sequence. If using parallel simulation by setting forceparallelsim=yes, also set
extramisrcycle to accurately simulate the test sequence.
■ prpginitchannel=no|yes
Specifies whether the channels need to be initialized by the PRPG before the first test
sequence.
■ reportmisrmastersignatures=yes/reportprpgmastersignatures=yes
Reporting options to print signature and channel data at specific times.


Refer to “create_lbist_tests” in the Command Line Reference for information on all the
keywords available for the command.
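For illustration, a typical invocation combining the keywords above might look like the
following; the working directory, test mode, test sequence name, and keyword values are
example values only:

create_lbist_tests workdir=~/my_workdir testmode=LBIST testsequence=Universal_Test extraprpgcycle=1 extramisrcycle=1 prpginitchannel=yes reportmisrmastersignatures=yes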

Restrictions
■ General Purpose Simulation is not supported for LBIST when running in parallel mode.
High speed scan-based simulation should be used instead.
■ If stored-pattern (or OPMISR) tests are coexistent on the same TBDbin file with WRPT
or LBIST data, then resimulation of this TBDbin cannot be accomplished in a single Test
Simulation (analyze_vectors) pass. The selection of test types being processed is
controlled by the channelsim parameter. Channelsim=no (the default) will process
all stored-pattern (and OPMISR) tests; channelsim=yes will process all WRPT and
LBIST tests.
■ Support for multiple oscillators works only in cases where the oscillators are controlling
independent sections of logic that are not communicating with each other. In some
cases, it may be possible to use Encounter Test support of multiple oscillators if the two
domains are operating asynchronously and the communication is one-way only.
■ Within a test sequence definition, lineholds can be specified only on primary inputs and
fixed value latches (FLH).
■ The useppis=no option in the Good Machine Delay Simulator is not guaranteed to
work unless scan sequences have been fully expanded.

Input Files

User Input Files


■ Linehold file
This user-created file is an optional input to test generation. It specifies design PIs,
latches, and nets to be held to specific values for each test that is generated. See
“Linehold File” on page 197 for more information.
■ Testsequence
This file contains initialization sequences. This file needs to be read into Encounter Test
using the read_sequence_definition command.
■ Seed File
This user-created file specifies unique starting seeds per run for on-board PRPGs and
MISRs. For more information, refer to “Seed File” on page 331.


Output
The following is a sample output log displaying the coverage achieved for a specific clocking
sequence:
----------------- Sequence Usage Statistics ----------------------
Sequence_Name EffCycle TotCycle DC_% AC_%
==================================================================
<userSequence01> 10 512 69.03 0.00
<< cross reference for user defined sequences >> <userSequence01>:Test1
Starting Fault Imply processing... fault status may be updated.
------------------------------------------------------------------
LBIST Statistics
-----------------------------------------------------------------
Logic Test Results
Patterns simulated :512
Effective patterns :10
Static fault coverage (DC) :69.0323%
Dynamic fault coverage (AC) :0%

Seed File
The purpose of this file is to allow the specification of unique starting seeds per run for on-
board PRPGs and MISRs. This allows flexibility in specifying the initial values for floating
latches and scannable latches.

The file uses TBDpatt format statements as follows:

TBDpatt_Format (mode=type, model_entity_form=nameform);

Event Scan_Load(): latchvalues;

[ SeqDef=(seqname," ")] SeqDef ;

where,
type is node or vector
nameform is name, hier_index, or flat_index
latchvalues represents the specification of the latch initial values in either TBDpatt
node list format or vector format, as specified by the TBDpatt_Format statement.
The [SeqDef statement is optional, and is used primarily when multiple Scan_Load
events (seeds) are specified. seqname is the name of a test sequence definition that has
already been imported.


For more information about TBDpatt syntax, refer to the Test Pattern Data Reference
manual.

The specification of an initial value to any latch that is not scannable in the parent mode or is
a fixed value latch in the target mode is ignored. Encounter Test does not modify the initial
value of any latches that are not scannable in the parent mode, and overriding the initial value
of fixed values is not allowed because it invalidates processing done by Test Mode Build and
Test Structure Verification.

If so desired, multiple seeds (Scan_Load events) can be specified. When there are multiple
seeds, or when there is only one seed and it has an associated [SeqDef statement, each
seed is used as the starting PRPG state for only one test sequence. At the end of the
specified number of test iterations, a final signature is provided; if there are additional seeds,
then the design is reinitialized with the new PRPG seed and additional test iterations are
applied from that starting state using a different (or possibly the same) test sequence as
indicated by the associated [SeqDef. When multiple seeds are specified, any Scan_Load
event with no associated [SeqDef will be applied using the first test sequence named in the
create_wrp_tests testsequence parameter list.

Following is an example:

Figure 12-2 Example of a Seed File

TBDpatt_Format (mode=node, model_entity_form=name);


Event Scan_Load(): "Block.f.l.lfsr.nl.prpg.014" = 0
"Block.f.l.lfsr.nl.prpg.015" = 0
"Block.f.l.lfsr.nl.prpg.016" = 0
"Block.f.l.lfsr.nl.prpg.017" = 0
"Block.f.l.lfsr.nl.prpg.018" = 0
"Block.f.l.lfsr.nl.prpg.019" = 0
"Block.f.l.lfsr.nl.prpg.020" = 0
"Block.f.l.lfsr.nl.prpg.021" = 0
"Block.f.l.lfsr.nl.prpg.022" = 0
"Block.f.l.lfsr.nl.prpg.023" = 0
"Block.f.l.lfsr.nl.prpg.024" = 0;

A seed file can also have multiple seeds, as shown in the following example.


Figure 12-3 Example of a Multiple Seed File

TBDpatt_Format (mode=node, model_entity_form=name);

Event Scan_Load (): "Block.f.l.BF.nl.PRPG.PRPG0"=0


"Block.f.l.BF.nl.PRPG.PRPG1"=0
"Block.f.l.BF.nl.PRPG.PRPG2"=1
"Block.f.l.BF.nl.CH2.L2"=0
"Block.f.l.BF.nl.CH10.L2"=1
"Block.f.l.BF.nl.CH11.L2"=1
"Block.f.l.BF.nl.CH12.L2"=0
"Block.f.l.BF.nl.CH13.L2"=1
"Block.f.l.BF.nl.CH1.L2"=0
"Block.f.l.BF.nl.MISR.misr.MISR1"=1
"Block.f.l.BF.nl.MISR.misr.MISRO"=1 ;
[ SeqDef=(test1," ") ] SeqDef;
Event 1 Scan_Load (): "Block.f.l.BF.nl.PRPG.PRPG0"=0
"Block.f.l.BF.nl.PRPG.PRPG1"=1
"Block.f.l.BF.nl.PRPG.PRPG2"=0
"Block.f.l.BF.nl.CH2.L2"=0
"Block.f.l.BF.nl.CH10.L2"=1
"Block.f.l.BF.nl.CH11.L2"=1
"Block.f.l.BF.nl.CH12.L2"=0
"Block.f.l.BF.nl.CH13.L2"=1
"Block.f.l.BF.nl.CH1.L2"=0
"Block.f.l.BF.nl.MISR.misr.MISR1"=1
"Block.f.l.BF.nl.MISR.misr.MISRO"=1 ;
[ SeqDef=(test2," ") ] SeqDef;

Note that the " " associated with the SeqDef is enclose an optional time/date stamp. The
quotation marks are required even if you choose to leave out the date/time stamp.

Parallel LBIST
This function is currently available through the graphical user interface and the command
line. Optimum performance is achieved if the selected machines running in parallel have been
dedicated to your run. In a parallel run, a greater number of patterns will be found effective
as compared to a serial run.

For command line invocation, TWTmainpar -h displays the available options for specifying
a list of machines.
Note: For LBIST, General Purpose Simulation is not supported when running in parallel
mode. High Speed Scan Based Simulation should be used.


The host used to start the application acts as the coordinator for the parallel process. As a
default, the run is made in two phases. In the first phase, the faults are partitioned across the
selected hosts. The controller performs Good Machine Simulation only to write out the
patterns.

If, for any given simulation interval, the Good Machine Simulation time exceeds the fault
simulation time, the run is switched to the second phase. In the second phase, the patterns
are partitioned across all of the processes (including the controller) and each process works
on all of the faults.

Figure 12-4 on page 334 and Figure 12-5 on page 335 illustrate the parallel process.

Figure 12-4 LBIST Parallel Processing Flow, Phase 1


Figure 12-5 LBIST Parallel Processing Flow, Phase 2

Task Flow for Logic Built-In Self Test (LBIST)


The following figure shows a typical processing flow for running Logic Built-In Self Test.


Figure 12-6 Encounter Test Logic Built-In Self Test Processing Flow

1. Build an Encounter Test model. Refer to “Performing Build Model” in the Modeling User
Guide for more information.
2. Build a full scan test mode. Refer to “Performing Build Test Mode” in the Modeling User
Guide for more information.
A sample mode definition file is given below:


TDR=lbist_tdr_name
SCAN_TYPE=LSSD BOUNDARY=NO IN=PI OUT=PO;
TEST_TYPES=STATIC LOGIC SIGNATURES=NO,STATIC MACRO,SCAN_CHAIN;
TEST_FUNCTION_PIN_ATTRIBUTES=TF_FLAG;
.
.
.

3. It is recommended that you perform Test Structure Verification (TSV) to verify the
conformance of a design to the Encounter Test LSSD or GSD guidelines.
Nonconformance to these guidelines may result in poor test coverage or invalid test data.
Refer to “Verify Test Structures” in the Verification User Guide.
4. Generate initialization sequence.
The LBIST methodology requires a user-defined initialization sequence for the test mode
to initialize the on-board BIST elements such as PRPG, MISR, and fixed-value latches.
Encounter Test does not automatically identify how to get these latches initialized; in fact,
it is only the results of the initialization sequence that tell Encounter Test which state each
fixed-value latch is supposed to be set to. Figure “An Example Test Mode Initialization
Sequence in TBDpatt Format” in the Modeling User Guide shows an example mode
initialization sequence definition for an LBIST test mode.
If the LBIST testing uses fast forward with sequences, you must also generate the
PRPGSAVE and PRPGRESTORE sequences at this time. Include these sequence
definitions in the same file with the mode initialization sequence.
LBIST processing also requires sequence definitions to specify the order of the clocks
and other events which get applied for each test. Each test generation run may use a
different sequence of clocks. These test sequence definitions may be included in the
mode initialization sequence input file, or they may be imported separately. An example
is given below.
5. Build LBIST test mode. The SCAN_TYPE must be LSSD or GSD.
Following is a sample mode definition file for LBIST:
Tester_Description_Rule = dummy_tester;
scan type = gsd boundary=internal length=1024 in = on_board out = on_board;
test_types static logic signatures only shift_register;
faults static ;
comet opcglbist markoff LBIST stats_only ;
misr Net.f.l.top.nl.BIST_MODULE.BIST_MISR.MISRREG[1048]=(0,2,21,23,53);
misr Net.f.l.top.nl.BIST_MODULE.BIST_MISR.MISRREG[1089]=(0,2,21,23,41);
prpg Net.f.l.top.nl.BIST_MODULE.BIST_PRPG.PRPGREG[52]=(1,2,6,53);
prpg Net.f.l.top.nl.BIST_MODULE.BIST_PRPG.PRPGREG[105]=(1,2,6,53);

6. Perform the TSV checks for LBIST to verify conformance to the LBIST guidelines. Refer
to “Performing Verify Test Structures” in the Verification User Guide for more
information.


7. Build a fault model for the design. Refer to the “Building a Fault Model” in the Modeling
User Guide for more information.
8. To use user-defined clock sequences, read the test sequence definitions. See “Coding
Test Sequences” on page 206 for an explanation of how to manually create test (clock)
sequences.
For complete information, refer to “Reading Test Data and Sequence Definitions” on
page 245.
The following example uses internal clocking (cutpoints) for the LBIST sequence:
TBDpatt_Format (mode=node, model_entity_form=name);
[Define_Sequence Universal_Test (test);
[ Pattern 1.1; # Set Test Constraints to pre-scan values
Event 1.1.1 Stim_PPI ():"BIST_MODULE.BIST_SCAN"=0;
] Pattern 1.1;
[ Pattern 1.2;
Event 1.2.1 Pulse_PPI ():"BIST_MODULE.BIST_CASCADE[7]"=+;
] Pattern 1.2;
[ Pattern 1.3;
Event 1.3.1 Pulse_PPI ():"BIST_MODULE.BIST_CASCADE[14]"=+;
] Pattern 1.3;
[ Pattern 1.4; # Set Test Constraints to pre-scan values
Event 1.4.1 Stim_PPI ():"BIST_MODULE.BIST_SCAN"=1;
] Pattern 1.4;
[ Pattern 1.5;
Event Channel_Scan ();
] Pattern 1.5;
] Define_Sequence Universal_Test;

9. Create LBIST Tests


For complete information, refer to “LBIST Concepts” on page 323.
10. Commit Tests
For complete information, refer to “Utilities and Test Vector Data” on page 233.
11. Write Vectors
For complete information, refer to “Writing and Reporting Test Data” on page 169.


Debugging LBIST Structures


To ensure correct implementation of BIST, it is a common practice to simulate the BIST
controller and verify that the control signals for scanning and clocking are produced in the
proper sequence. However, if the BIST controller has been automatically inserted by
Encounter Test, you may consider this to be unnecessary. Regardless of the origin of the
BIST controller design and its level of integrity, other things may go wrong in the BIST
process. For example, some system clocks may be incorrectly generated or incorrectly wired
to their associated memory elements.

Encounter Test's Verify Test Structures tool is designed to identify many such design
problems so they can be eliminated before proceeding to test generation. Even so, it is
advisable to use your logic simulator of choice to simulate the BIST operation on your design
for at least a few test iterations (patterns) and compare the resulting signature with the
signature produced by Encounter Test's Logic Built-In Self Test generation tool for the same
number of test iterations. This simulation, along with the checking offered by Encounter Test
tools, provides high confidence that the signature is correct and that the test coverage
obtained from Encounter Test's fault simulator (if used) is valid.

When the signatures from a functional logic simulator and Encounter Test's LBIST tool do not
match, the reason will not be apparent. It can be tedious and technically challenging to
identify the corrective action required. The problem may be in the BIST logic, its
interconnection with the user logic, or in the Encounter Test controls. The purpose of this
section is to explain the use of signature debug features provided with Encounter Test's Logic
Built-In Self Test generation tool.

Prepare the Design


Follow the normal steps for running Logic Built-In Self Test generation, up to, but not including
the test generation step. Refer to “Task Flow for Logic Built-In Self Test (LBIST)” on page 335.

Check for Matching Signatures


It is not necessary to run the full number of test iterations to attain a high confidence that your
LBIST design is implemented properly and Encounter Test is processing it correctly. In fact,
the functional logic simulation run, against which you will compare Encounter Test's
signature, might be prohibitively expensive if you were to compare the final signatures after
several thousand test iterations. It is recommended that you run a few hundred or a few
thousand test iterations, or whatever amount is feasible with your functional logic simulator.


Submit a Logic Built-In Self Test generation run, specifying the chosen number of test
iterations (called “patterns” in the control parameters for the tool). You will need to obtain the
MISR signatures; this can be done in any of three ways (an example invocation follows this
list):
1. Request “scope” data from the test generation run: simulation=gp
watchpatterns=range watchnets=misrnetlist where range is one of the valid
watchpatterns options and misrnetlist is any valid watchnets option that
includes all the MISR positions.
2. Specify report=misrsignatures in the test generation run, or
3. After the test generation run, export the test data and look at the TBDpatt file.
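As an illustration of methods 1 and 2 combined, a debug run might be submitted as follows;
the keyword spellings are taken from the descriptions above and elsewhere in this chapter,
and the placeholder values should be replaced with your own (confirm the exact keywords in
the Command Line Reference):

create_lbist_tests workdir=<directory> testmode=<modename> testsequence=<name> simulation=gp watchpatterns=<range> watchnets=<misr watch list> report=misrsignatures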

In the first method, you will use View Vectors to look at the test generation results as signal
waveforms. Refer to “Test Data Display” in the Graphical User Interface Reference for
details on viewing signal waveforms. This may seem the most natural if you are used to this
common technique for debugging logic. However, you may find it more convenient to have the
MISR states in the form of bit strings when comparing the results with your functional logic
simulator.

In both cases, MISR signatures are produced at every “detection interval”.


Signatures are printed in hexadecimal, and are read from left to right. The leftmost bit in the
signature is the state of MISR register position 1. (The direction of the MISR shift is from low-
to high-numbered bits with feedback from the high-numbered bit(s) to the low-numbered bits.)
Signatures are padded on the right with zeroes to a four-byte boundary, so there are trailing
zeroes in most signatures which should be ignored.

The MISR latch values found in the signatures are manually compared with the results of the
functional logic simulator, often by reading a timing chart.

Find the First Failing Test


If the final signatures do not match, it is necessary to find the first mismatching, or “failing”,
signature. Make another debug run with the signature interval set to 1, so as to narrow
down the problem to the exact failing test iteration. If you find the output log is becoming
inconveniently large, you can restrict the number of “patterns” (test iterations) to stop after
the first known failing signature. Results of this run should narrow the problem down to the
first failing test iteration.

Diagnosing the Problem


The next step in finding the cause of mismatching signatures between the functional logic
simulator and Encounter Test depends upon the symptoms.


Verify Primary Input Waveforms

If the signatures match for the first signature interval and then fail later on, several potential
causes can be eliminated, and the problem is likely to be found in the system clocking, or
“release-capture” phase of the test. You may have to figure out which channel latch is failing
and backtrace from there to find the problem, but first you should look carefully at the clock
sequence that Encounter Test is simulating to make sure it agrees with the functional logic
simulation input. If there is no obvious discrepancy from looking at the TBDpatt form of the
sequence definition, use the waveform display to compare the waveforms with the timing
chart from the functional logic simulator. Start by comparing the waveforms at the clock
primary inputs. If this does not provide any clues, select some representative clock splitter
design and compare the waveforms at the clock splitter outputs.

Find a Failing Latch

Just as Encounter Test signatures can be observed either by looking at the signal waveforms
or by having them printed in the output log, the channel latch states can also be observed by
either of these methods. Using the response compaction of the MISR, you can find the failing
channel latch by means of a binary search instead of having to compare the simulation states
of thousands upon thousands of channel latches. The binary search technique requires that
the Encounter Test simulation of the channel scan be expanded in some fashion.

Scoping the Channel Scan

Using a waveform display is not the recommended way of finding the failing channel latch, but
some users may be more comfortable doing it this way. Once the failing channel latch is
identified, the waveform display may prove invaluable in the next step of the diagnosis of a
failing signature.

Another debug test generation run must be made to generate the waveform data for the
channel scan. From the command line, you would use create_lbist_tests
simulation=gp watchpatterns=n watchnets=scan watchscan=yes along with
any other create_lbist_tests parameters necessary for your design (such as
testsequence), where n is the number of the failing test iteration. Note that when a single
test iteration is specified on the watchpatterns parameter, create_lbist_tests will
expand the channel scan for that test iteration so the detailed waveforms can be generated.
Refer to “Test Data Display” in the Graphical User Interface Reference for details on
viewing signal waveforms.
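
For example, if the failing test iteration were 12 and the design used a test sequence named
my_lbist_seq (both placeholder values for illustration only), the debug run might look like:

create_lbist_tests simulation=gp watchpatterns=12 watchnets=scan watchscan=yes testsequence=my_lbist_seq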

Using the LBIST Debug Reporting Options

You may find it more convenient, when locating the failing latch, to use the debug reporting
options instead of the waveform display. Instead of, or in addition to, the BIST parameters
listed in the previous section, specify create_lbist_tests reportlatches=n1:n2
to obtain MISR signatures for each iteration of the test from iteration n1 through iteration n2.
Note that it is not necessary to specify simulation=gp to use the reportlatches
option. This reporting option also produces the states of all the channel latches in addition to
the MISR values. The information is printed to the log.

Once the failing test iteration is identified, if there appears to be a mismatch in the scan
operation, similar debug printout can be obtained for each scan cycle by specifying only a
single test iteration: create_lbist_tests reportlatches=n.
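
For example, with hypothetical iteration numbers, a range of iterations or a single iteration
might be requested as follows, along with the other parameters used for the original LBIST run:

create_lbist_tests reportlatches=10:12
create_lbist_tests reportlatches=11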

A
Using Contrib Scripts for Higher Test
Coverage

This appendix discusses the following scripts in the Encounter Test contrib directory that you
can use to achieve higher test coverage in ATPG.
■ Reset/Set Generation
❑ Prepare Reset Lineholds
❑ Create Resets Tests (static/delay)
■ Reporting Domain Coverages

Preparing Reset Lineholds


This command identifies potential set and reset clocks in a testmode. It is recommended that
the set/reset pins be held (linehold) or tied off during delay test ATPG. You do not need to run
this command if no set/reset clocks are present in the target testmode.

The syntax for the prepare_reset_lineholds command is given below:


prepare_reset_lineholds workdir=<directory> testmode=<modename>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for timed test
Note: This script is not formally supported.

Prerequisite Tasks
Complete the following tasks before executing prepare_reset_lineholds:


1. Build a design, testmode, and fault model. Refer to the Modeling User Guide for more
information.

After running prepare_reset_lineholds, the output log will look like the following:

Clock Pin        Index    Test Function
---------------  -------  -------------
SET              3        +SC
RESET            2        -SC

The script identifies the set/reset clocks for the design. If it successfully identifies the set/reset
pins, it creates a linehold file in testresults/reset_lineholds.<TESTMODE NAME>.
You should point your ATPG runs to this file (linehold=<filename>).
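
For example, assuming the testmode is named FULLSCAN (a placeholder) and using
create_logic_delay_tests only to illustrate where the linehold keyword is applied, a later
delay test run might reference the file as:

create_logic_delay_tests workdir=./testwork testmode=FULLSCAN experiment=delay1 linehold=testresults/reset_lineholds.FULLSCAN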

Creating Reset Delay Tests


This command generates a static or delay reset test targeting faults along the set/reset
paths. You do not need to run this command if no set/reset clocks are present in the target
testmode. After the scan patterns are created, they can be committed to save the patterns
and fault status for future ATPG runs.

The syntax for the create_reset_delay_tests command is given below:

Static:
create_reset_tests workdir=<directory> testmode=<modename> experiment=<name>

Dynamic:
create_reset_delay_tests workdir=<directory> testmode=<modename>
experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode
■ experiment= name of the test patterns
Note: This script is not formally supported.
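
For example, with placeholder directory, testmode, and experiment names, the static and
dynamic forms might be invoked as:

create_reset_tests workdir=./testwork testmode=FULLSCAN experiment=reset_static
create_reset_delay_tests workdir=./testwork testmode=FULLSCAN experiment=reset_delay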

Prerequisite Tasks
Complete the following tasks before executing create_reset_tests or create_reset_delay_tests:


1. Build a design, testmode, and fault model. Refer to the Modeling User Guide for more
information.

The output log contains a summary of the number of generated patterns and their
representative coverage. Both static and dynamic faults should be tested.
INFO (TDA-220):--- Tests ---Faults---- ATCov ------- CPU Time ------[end TDA_220]
INFO (TDA-220):Sim. Eff. Detected Tmode Global Sim. Total[end TDA_220]
INFO (TDA-220):2 2 17 2.09% 1.68% 00:00.01 00:00.03[end TDA_220]
INFO (TDA-220):4 4 17 4.18% 3.35% 00:00.01 00:00.03[end TDA_220]
INFO (TGR-101): Simulation of a logic test section with dynamic test type completed.
[end TGR_101]
Testmode Statistics: FULLSCAN

                 #Faults  #Tested  #Possibly  #Redund  #Untested  #PTB  #TestPTB  %TCov  %ATCov

Total               1722      116         20        0       1586    33         7   6.74    6.74
Total Static         908       82         20        0        806    18         7   9.03    9.03
Total Dynamic        814       34          0        0        780    15         0   4.18    4.18

Debugging No Coverage
If no ATPG coverage was achieved, check for the following:
■ Contention in the design - Look for TSV-193 and TSV-093 messages from
verify_test_structures for internal contention.
■ Broken scan chains - Analyze the verify_test_structures log for broken scan chains.

Reporting Domain Specific Fault Coverages


Use the report_domain_fault_statistics command and a clock constraint file to
report domain specific fault coverages. This command reports the domain coverage for all
clocking sequences in the clock constraint file.

The syntax for the report_domain_fault_statistics command is given below:


report_domain_fault_statistics workdir=<directory> testmode=<modename>
clockconstraints=<filename> experiment=<name>

where:
■ workdir = name of the working directory
■ testmode= name of the testmode for dynamic ATPG


■ experiment= name of the experiment from which to get fault status


■ clockconstraints=<file name> - List of clocks and frequencies. Refer to “Clock
Constraints File” on page 125 for more information.
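
For example, with placeholder directory, file, and experiment names:

report_domain_fault_statistics workdir=./testwork testmode=FULLSCAN clockconstraints=clock_domains.txt experiment=delay1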

Prerequisite Tasks
Complete the following tasks before reporting domain specific fault coverages:
1. Import a design into the Encounter Test model format. Refer to “Performing Build Model”
in Modeling User Guide.
2. Build Encounter Test Testmode. Refer to “Performing Build Test Model” in the Modeling
User Guide for more information.
3. Generate ATPG test patterns. Refer to “Invoking ATPG” on page 35 for more information.

Output Files
None.

Command Output

The following is a sample report for experimental test coverage based on clocking sequences.
Experiment Statistics: FULLSCAN.printCLKdom
                   #Faults  #Tested  #Possibly  #Redund  #Untested  #PTB  #TestPTB  %TCov  %ATCov  %PCov  %APCov  %PTBCov  %APTBCov

Total                 3152     1107         68        0       1977     0         0  35.12   35.12  37.28   37.28    35.12     35.12
Total Static          1584      690         47        0        847     0         0  43.56   43.56  46.53   46.53    43.56     43.56
Total Dynamic         1568      417         21        0       1130     0         0  26.59   26.59  27.93   27.93    26.59     26.59
Collapsed             1730      612         45        0       1073     0         0  35.38   35.38  37.98   37.98    35.38     35.38
Collapsed Static       928      411         33        0        484     0         0  44.29   44.29  47.84   47.84    44.29     44.29
Collapsed Dynamic      802      201         12        0        589     0         0  25.06   25.06  26.56   26.56    25.06     25.06

B
Three-State Contention Processing

This appendix describes the uses of Encounter Test test generation and simulation keywords
that control processing and reporting of three-state contention.

The following keywords are directly related to three-state contention:


■ contentionreport identifies the type of contention of interest and instructs the
simulator which type of contention to report via error messages.
■ contentionprevent indicates whether the test generator is to try to prevent the type
of contention of interest (based on contentionreport) from occurring in the
generated patterns. If contentionprevent=yes and the test generator produces a
pattern that it cannot protect from contention, it tries to generate a different pattern. With
contentionprevent=yes, fewer patterns (theoretically zero) are discarded by the
simulator, achieving higher coverage without having to perform additional passes.
■ contentionremove indicates whether the simulator is to discard patterns that cause
the type of contention of interest (based on contentionreport).
■ contentionmessage indicates how many contention messages are to be printed.

The following are some combinations of the keywords and resulting actions:
■ contentionreport=soft contentionmessage=all contentionremove=yes
contentionprevent=yes (all the defaults)
The test generator will prevent soft (known vs. X) and hard (0 vs. 1) contention from
occurring in the patterns it creates. If it cannot create a pattern for the fault without
causing contention, it will not create a pattern for that fault. There are no “contention
messages” issued from this process. Theoretically, the simulator will not find any
patterns to remove; but, just in case a burnable pattern gets through, the simulator is
there to ensure it does not get into the final set of vectors. If the test generator successfully
eliminates contention from all patterns, no contention messages are produced.
■ contentionreport=soft contentionmessage=all contentionremove=yes
contentionprevent=no


When specifying contentionprevent=no, the test generator will not ensure that the
pattern does not cause contention. The simulator will remove any patterns that would
cause soft or hard contention. Messages are produced, and you will see messages for
patterns discarded due to contention. The final vectors will not include burnable patterns.
■ contentionreport=soft contentionmessage=all contentionremove=no
contentionprevent=yes
The test generator will prevent hard and soft contention from occurring in the patterns it
creates. If it cannot create a pattern for the fault without causing contention, it will not
create a pattern for that fault. There are no "contention messages" issued from this
process. Theoretically, the simulator will not find any patterns to report; however, if a
burnable pattern gets through, the simulator will report it; but it will not remove it since
contentionremove=no was specified. If the test generator allows a burnable pattern
to get through, contention messages are produced and the final vectors will include the
burnable vectors.
■ contentionreport=soft contentionmessage=all contentionremove=no
contentionprevent=no
When specifying contentionprevent=no, the test generator will not ensure that the
pattern does not cause contention. The simulator will report the patterns that cause hard
or soft contention; however, because contentionremove=no was specified, the
simulator will not discard them. If you see any contention messages with these settings,
the final vectors include patterns that may burn.
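
For example, to let the test generator avoid contention while keeping the simulator as a safety
net (the first combination above), the keywords might be specified on an ATPG run as shown
below. The command name create_logic_tests and the workdir, testmode, and experiment
values are placeholders for whatever your flow uses; only the contention keywords are the
subject of this sketch:

create_logic_tests workdir=./testwork testmode=FULLSCAN experiment=logic1 contentionreport=soft contentionprevent=yes contentionremove=yes contentionmessage=all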

The following table lists contention keyword combinations and their respective results.
contention-  contention-  contention-  contention-  Contention       Sim Removes      TG Removes       Vectors in
report       remove       prevent      message      Messages in Log  Contention       Contention       Final
-----------  -----------  -----------  -----------  ---------------  ---------------  ---------------  ----------
hard, soft,  yes          yes          greater      only if sim      should not have  yes              no
all                                    than 0       removes          anything to
                                                    contention       remove
hard, soft,  yes          no           greater      yes              yes              no               no
all                                    than 0
hard, soft,  no           yes          greater      no               no               yes              should not be
all                                    than 0
hard, soft,  no           no           greater      yes              no               no               yes if reported
all                                    than 0
none         yes          yes          greater      no               no               hard only        should not be
                                       than 0                                         (contention-     hard contention
                                                                                      prevent=yes
                                                                                      wins over
                                                                                      contention-
                                                                                      report=none)
none         no           no           greater      no               no               no               could be: not
                                       than 0                                                          reported,
                                                                                                       prevented or
                                                                                                       removed

Note: If contentionmessage=0, no messages are printed, but the rest of the processing
is the same.


Index
Numerics
1149.1 test generation 286
1450.1 IEEE standard 248

A
analysis techniques
  for test patterns 314

C
circuit values, viewing 315
clock constraints file/clock domain file 125
clock sequences, delay test 146
cmos testing 32, 275
coding test sequences 206
commit test data 233, 234
commit tests
  overview 27, 233
concepts
  LBIST (logic built-in self test) 323
  stored pattern test generation 275
conditional events 213
constraints, dynamic 150
create exhaustive tests
  overview 281
  prerequisite tasks 281, 284, 286
  restrictions 282
customer service, contacting 23
customizing delay checks 149

D
debugging LBIST structures
  check for matching signatures 339
  diagnosing 340
  DUT preparation 339
  find a failing latch 341
  finding the first failing test 340
  overview 339
  reporting options 342
  scoping the channel scan 341
  verify primary input waveforms 341
DEFAULT statement rules 204
delay model
  concepts 85
delay model, creating
  input files 82
  output files 83
  perform 80
  prerequisite tasks 81, 99
delay model, overview 85
delay test
  characterization test 133
  delay defects 141
  delay path calculation 93
  dynamic constraints 150
  manufacturing delay test 73
  timed pattern failure analysis 151
  true-time test 37
  wire delays 91
delay test lite 71
design constraints file 100
DUT values 152
dynamic constraints 150
dynamic logic test 31

E
Encounter Test pattern data, exporting
  export files 191
  input files 191
  overview 190
Encounter Test pattern data, importing
  output files 247
  overview 247
endup sequences 223
environment variables
  TB_PERM_SPACE 306
evcd (extended value change dump) data, reading 250
events
  conditional 213
  removing 212
exporting test data
  Encounter Test pattern data 190
  printing structure-neutral TBDbin 194
  sequence definition data 192
  standard test interface language (STIL) 179
  test data migration (TDM) 193
  verilog format 183
  waveform generation language (WGL) 180
extended value change dump (evcd) data, reading 250

F
failure analysis, timed test patterns 151
fast forward 326
fault simulation 37
  concepts 25
  tasks 266
flush test 30

H
help, accessing 22
HOLD statement rules 204

I
iddq tests 32, 275
IEEE 1149.1 test generation
  methodology 292
  overview 286
  scan chains, configuring 289
ieee 1497 sdf constructs 87
ignoremeasures file 230
importing test data
  Encounter Test pattern data 247
  overview 245
  sequence definition data 252
  standard test interface language (STIL) 248
infiniteX simulation 270

K
keepmeasures file 230

L
LATCH_STIMS 213
LBIST (logic built-in self test)
  concepts 323
  debugging structures 339
  input files 330
  output files 331
  overview 323
  performing 329
  restrictions 330
  task flow 335
linehold file
  command line syntax 199
  general rules 198
  overview 197
  semantic rules 203
load sharing facility (lsf) 301
logic built-in self test (LBIST)
  concepts 323
  debugging structures 339
  input files 330
  overview 323
  performing 329
  restrictions 330
  task flow 335
logic test 31
lsf prerequisites 305

M
macro test 34, 283
manufacturing chip expense, reducing 298
multiclock 268
multipulse 267

N
NC-Sim keywords 184

O
odometer 239
on-product clock generation, sequences with 218
on-product misr
  restrictions 300
  scenario 299
oscillator pins in a sequence 224

P
parallel processing
  input files 308
  lsf (load sharing facility) support 301
  output files 308
  overview 300
  prerequisite tasks for LSF 305
  restrictions 304
  simulation 302
  simulation flow 303
  stored pattern test generation 309
  stored pattern test generation flow 301
path tests
  hazard-free 137
  non-robust 139
  overview 32, 137
  robust 138
pattern data, importing 247
pattern matching 209
perform uncommitted tests 234
PI_STIMS 213
Put_Stim_PI 209

R
RELEASE statement rules 204

S
scan chain test 30
scan chains for TAP scan chain test generation 289
SDC_statements 100
seed file 331
semantic rules
  linehold file 203
sequence
  with on-product clock generation 218
sequence definition data, exporting
sequence definition data, importing
  output files 253
  overview 252
sequence matching 209
sequences
  coding 206
  endup 223
  importing 229
  oscillator pins 224
  setup 222
  specifying lineholds 223
setup sequences 222
simulating existing patterns 263
simulating non-Encounter Test patterns 263
simulation 255
simvision, using 321
standard test interface language (STIL), exporting 179
standard test interface language (STIL), importing
  overview 248
  restrictions 249
static compaction 37
static logic test 31
STIL (standard test interface language), importing 248
STIM=DELETE 212
stims
  delete 212
stored pattern test generation
  concepts 36
  types of tests 30
structure-neutral TBDbin, printing
  output file 194
  overview 194
STUMPS
  circuit 323
  example 325

T
tap controller state diagram 289
task flows
  LBIST (logic built-in self test) 335
  stored pattern test generation 60
tb_verilog_scan_pin property 186
TBDbin, structure-neutral 194
TBDpatt format 263
  creating vector correspondence 188
TBDvect file 207
TDM (test data migration), exporting
  input files 194
  output files 194
  overview 193
termination values, resolving 272
Test 245
test data
  Define_Sequence 239
  Event 241
  Pattern 241
  Test_Procedure 240
  Test_Section 240
  Test_Sequence 241
  Tester_Loop 240
  Timing_Data 240
  uncommitted test 239
test data migration (TDM), exporting 193
test data, saving 233
test generation
  concepts 25
  costs, reducing 298
  performing 59
  task flows 60
test generation concepts
  overview 25
test pattern analysis
  input files 322
  output files 322
  performing 322
  prerequisite tasks 322
  simvision 321
  viewing test data 317
test pattern generation 36
test sequence
  coding 206
  defined 206
  how to code 206
test simulation
  concepts 263
  functional tests 266
  infiniteX 270
  OPC logic 269
test types
  descriptions 30
  iddq 32, 275
  logic 31
  macro 34, 283
  path 32, 137
  scan chain 30
test vectors, creating 242
three-state contention 347
time test pattern failure analysis 151
timings for vector export 172
true-time delay test
  at-speed 40
  designing 38
  faster than at-speed 40
  overview 37
  pre-analysis 38
true-time delay test, automated 39
true-time test 37
  test pattern generation 39

U
user sequences for stored patterns, matching 209
using Encounter Test
  online help 22

V
vector correspondence
  creating 188
  using 207
verilog format, exporting
  output files 183
  overview 183
  restrictions 183
viewing test data 317

W
watch list, overview and syntax 315
waveform generation language (WGL), exporting 180
WGL (waveform generation language), exporting
  output files 181
  overview 180
  restrictions 181
wire delays 97