
NIRMA UNIVERSITY JOURNAL OF ENGINEERING AND TECHNOLOGY, VOL. 1, NO. 1, JAN-JUN 2010

Survey of VLSI Test Data Compression Methods


Usha Mehta

Abstract—Test data compression has been an emerging need of the VLSI field and hence a hot topic of research for the last decade. Still, there is great need and scope for further reduction in test data volume. This reduction may be lossy for output-side test data but must be lossless for input-side test data. This paper summarizes the different methods applied for lossless compression of input-side test data, from simple code-based methods to combined/hybrid methods. The basic goal is to survey the current methodologies applied for test data compression and to prepare a platform for further development in this avenue.

Index Terms—VLSI, Testing, data compression

Usha Mehta is with the Department of Electronics & Communication, Institute of Technology, Nirma University, Ahmedabad-382481; e-mail: [email protected]

I. NEED OF TEST DATA COMPRESSION

As a result of the emergence of new fabrication technologies and design complexities, standard stuck-at scan tests are no longer sufficient. The number of tests and the corresponding data volume increase with each new fabrication process technology.

Fig. 1. Volume of Test Data

As fabrication technologies evolve, test application time and test data volume are drastically increasing just to maintain test quality requirements. New tests require: greater than 2X the test time to handle devices that double in gate count but maintain the same number of scan channels; 3X to 5X the number of patterns to support at-speed scan testing for the growing population of timing defects at 130-nm and smaller fabrication processes; and 5X the number of patterns to handle multiple-detect and new DFM-based fault models.

Thus, the starting point is 10X compression just to maintain tester throughput, 20X if new fault models are used, and 40X if the next design doubles in size. Reducing the block-level routing and top-level scan pins by 5X means that 5X more compression is needed on top of the existing compression. Supporting multisite testing or DFM-based fault models will triple the compression requirements at a minimum. A major benefit of compression is the reduction in test pin count, which is a major cost benefit in manufacturing. As a result, some companies are already looking for compression well beyond 100X tester cycle reduction [4][5].

Conventional external testing involves storing all test vectors and test responses on an external tester, that is, automatic test equipment (ATE). These testers have limited speed, memory, and I/O channels. The test data bandwidth between the tester and the chip is relatively small; in fact, it is often the bottleneck determining how fast the chip can be tested. Testing cannot proceed any faster than the amount of time required to transfer the test data:

Test time ≥ (amount of test data on tester) / (number of tester channels × tester clock rate) [1]

The resurgence of interest in test data compression has also led to new commercial tools that can provide over 10X compression for large industrial designs. For example, the OPMISR and SmartBIST [3] tools from IBM and the TestKompress tool from Mentor Graphics [2] reduce test data volume and testing time through the use of test data compression and on-chip decompression.

II. TEST DATA COMPRESSION TECHNIQUES

Fig. 2. Test data compression.
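To make the tester-bandwidth bound from Section I concrete, here is a small, illustrative calculation. The design figures (10 Gbit of test data, 8 tester channels, a 50 MHz tester clock) are hypothetical assumptions, not numbers from the paper:

```python
# Lower bound on test time from the relation in Section I:
#   test time >= (test data on tester) / (tester channels x tester clock rate)
# All design figures below are hypothetical, for illustration only.

def min_test_time(bits_on_tester: float, channels: int, clock_hz: float) -> float:
    """Minimum seconds needed just to transfer the test data."""
    return bits_on_tester / (channels * clock_hz)

baseline = min_test_time(10e9, 8, 50e6)        # 10 Gbit over 8 x 50 MHz -> 25.0 s
with_10x = min_test_time(10e9 / 10, 8, 50e6)   # 10X compression -> 2.5 s
print(baseline, with_10x)
```

The same relation shows why on-chip decompression helps even when tester memory is ample: only the compressed stimulus has to cross the limited tester-to-chip interface.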

As Figure 2 illustrates, test data compression involves adding some additional on-chip hardware before and after the scan chains. This additional hardware decompresses the test stimulus coming from the tester; it also compacts the response after the scan chains and before it goes to the tester. This permits storing the test data in compressed form on the tester. With test data compression, the tester still applies a precise deterministic (ATPG-generated) test set to the Circuit Under Test (CUT).

The advantage of test data compression is that it generates the complete set of patterns applied to the CUT with ATPG, and this set of test patterns is optimizable with respect to the desired fault coverage. Test data compression is also easier to adopt in industry because it is compatible with the conventional design rules and test generation flows for scan testing. Test data compression provides two benefits. First, it reduces the amount of data stored on the tester, which can extend the life of older testers that have limited memory. Second, and this is the more important benefit because it applies even to testers with plenty of memory, it can reduce the test time for a given test data bandwidth. Doing so typically involves having the decompressor expand the data from n tester channels to fill more than n scan chains. Increasing the number of scan chains shortens each scan chain, in turn reducing the number of clock cycles needed to shift in each test vector.

Test data compression must compress the test vectors losslessly (i.e., it must reproduce all the care bits after decompression) to preserve fault coverage. Test vectors are highly compressible because typically only 1% to 5% of their bits are specified (care) bits. The rest are don't-cares, which can take on any value with no impact on the fault coverage. A test cube is a deterministic test vector in which the bits that ATPG does not assign are left as don't-cares (i.e., the ATPG does not randomly fill the don't-cares). In addition to containing a very high percentage of don't-cares, test cubes also tend to be highly correlated because faults are structurally related in the circuit. Both of these factors are exploitable to achieve high amounts of compression. Recently, researchers have proposed a wide variety of techniques for test vector compression.

Test vector compression schemes fall broadly into three categories [1]:
1) Code-based schemes use data compression codes to encode test cubes.
2) Linear-decompression-based schemes decompress the data using only linear operations (that is, LFSRs and XOR networks).
3) Broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.

III. CODE BASED TEST DATA COMPRESSION TECHNIQUES

Code-based schemes use data compression codes to encode the test cubes. This involves partitioning the original data into symbols and then replacing each symbol with a code word to form the compressed data. To perform decompression, a decoder simply converts each code word in the compressed data back into the corresponding symbol.

The quantity of test data rapidly increases while, at the same time, the inner nodes of dense SoCs become less accessible from the external pins. The testing problem is further exacerbated by the use of intellectual property (IP) cores, since their structure is often hidden from the system integrator. In such cases, no modifications can be applied to the cores or their scan chains, and neither automatic test pattern generation nor fault simulation tools can be used. Only precomputed test sets are provided by the core vendors, to be applied to the cores during testing. In this case, a test data compression technique that is ATPG-independent and fault-simulation-independent is most preferable. Code-based test data compression satisfies both requirements: it applies directly to ready test patterns and requires neither ATPG nor fault simulation. Other advantages achievable in some cases are difference patterns and reordering of test patterns.

A few important factors to be considered with any compression technique are:
• The amount of compression possible.
• The area overhead of the decoding architecture. The on-chip decompression circuitry must be small so that it does not add significant area overhead. The properties of the code are chosen such that the decoder has a very small area and is guaranteed to be able to decode the test data as fast as the tester can transfer it.
• The reduction in test time. Transferring compressed test vectors takes less time than transferring the full vectors at a given bandwidth. However, in order to guarantee a reduction in the overall test time, the decompression process should not add much delay (which would subtract from the time saved in transferring the test data).
• The scalability of compression (does the compression technique work with various design sizes, with few or many scan channels, and with different types of designs?).
• Power dissipation, an important factor in today's chip design. Power dissipation in CMOS circuits is proportional to the switching activity in the circuit. During normal operation of a circuit, often a small number of flip-flops change value in each clock cycle. During test operation, however, large numbers of flip-flops switch, especially when test patterns are scanned into the scan chain. Compacting the test set often requires replacing (mapping) don't-cares with specified bits "0" or "1". This process may increase the switching activity of scan flip-flops and, eventually, the scan-in power dissipation. There are usually plenty of don't-cares in test patterns generated for scan; a test compression method should effectively use this opportunity for compression as well as power reduction.
• The robustness in the presence of X states (can the design
maintain compression while handling X states without losing coverage?).
• The ability to perform diagnostics of failures when applying compressed patterns.
• The type of decoder: data-independent or data-dependent. In the former category, the on-chip decoder or decompression program is universal, i.e., it is reusable for any test set. In contrast, the decoder of a data-dependent technique can only decompress a specific test vector. Data-dependent decoders have difficulties in terms of size and organization for improved compression and often require large on-chip memory [6]. Hence, data-independence is a preferable property.

The data compression codes are generally classified into four categories based on symbol size and codeword size:
1) Fixed-to-fixed coding schemes, where symbol size as well as codeword size is fixed (e.g., dictionary codes).
2) Fixed-to-variable coding schemes, where symbol size is fixed but codeword size is variable (e.g., Huffman codes).
3) Variable-to-fixed coding schemes, where symbol size is variable but codeword size is fixed (e.g., run-length codes).
4) Variable-to-variable coding schemes, where symbol size as well as codeword size is variable (e.g., Golomb codes).

Over the years, researchers have developed a large number of variants of the above schemes. The following survey covers most of these variants. Based on the basic schemes and their evolved variants, the techniques are broadly divided into four categories:
1) Run-Length Based Codes
2) Dictionary Based Codes
3) Statistical Codes
4) Constructive Codes

IV. LINEAR-DECOMPRESSION-BASED SCHEMES

A second category of compression techniques is based on using a linear decompressor. Any decompressor that consists of only wires, XOR gates, and flip-flops is a linear decompressor and has the property that its output space (the space of all possible vectors that it can generate) is a linear subspace spanned by a Boolean matrix. A linear decompressor can generate test vector Y if and only if there exists a solution to the system of linear equations AX = Y, where A is the characteristic matrix for the linear decompressor and X is a set of free variables shifted in from the tester (you can think of every bit on the tester as a free variable assigned as either 0 or 1). The characteristic matrix for a linear decompressor is obtainable from symbolic simulation of the linear decompressor, in which a symbol represents each free variable from the tester. Encoding a test cube using a linear decompressor requires solving a system of linear equations, consisting of one equation for each specified bit, to find the free-variable assignments needed to generate the test cube. If no solution exists, then the test cube is unencodable (that is, it does not exist in the output space of the linear decompressor). In this method, it is difficult to encode a test cube that has more specified bits than the number of free variables available to encode it. However, for linear decompressors that have diverse linear equations (such as an LFSR with a primitive polynomial), if the number of free variables is sufficiently larger than the number of specified bits, the probability of not being able to encode the test cube becomes negligibly small. For an LFSR with a primitive polynomial, research showed that if the number of free variables is 20 more than the number of specified bits, then the probability of not finding a solution is less than 10⁻⁶.

Researchers have proposed several linear decompression schemes, which are either combinational or sequential.

V. BROADCAST-SCAN-BASED SCHEMES

A third category of techniques is based on the idea of broadcasting the same value to multiple scan chains (a single tester channel drives multiple scan chains). This is actually a special degenerate case of linear decompression in which the decompressor consists of only fan-out wires. Given a particular test cube, the probability of encoding it with a linear decompressor that uses XORs is higher, because such a decompressor has a more diverse output space with fewer linear dependencies than a fan-out network. However, the fact that faults can be detected by many different test cubes provides an additional degree of freedom. An LFSR must be at least as large as the number of specified bits in the test cube; one way around this is to decompress only a scan window (a limited number of scan slices) per seed.

VI. CONCLUSIONS

This paper covers an overview of the wide variety of test data compression techniques proposed by researchers. The study draws the conclusion that, as design complexity and hence test data volume continue to grow, test data compression will be in major demand to reduce test time and test cost. The data-independent compression methods, i.e., the code-based schemes, can remain the most attractive in future avenues as well. Hybrid methods combining a code-based scheme with another scheme, such as a linear decompressor or a broadcast-based scheme, can be further explored.

REFERENCES

[1] Nur A. Touba, "Survey of Test Vector Compression Techniques", IEEE Design & Test of Computers, 2006.
[2] A. Chandra, K. Chakrabarty, "Efficient test data compression and decompression for system-on-a-chip using internal scan chains and Golomb coding", DATE '01: Proceedings of the Conference on Design, Automation and Test in Europe, March 2001.
[3] Anshuman Chandra, Krishnendu Chakrabarty, "Test Data Compression for System-on-a-Chip Using Golomb Codes", VTS '00: Proceedings of the 18th IEEE VLSI Test Symposium, 2000.
[4] Anshuman Chandra, Krishnendu Chakrabarty, "Frequency-Directed Run-Length (FDR) Codes with Application to System-on-a-Chip Test Data Compression", Proceedings of the 19th IEEE VLSI Test Symposium, March 2001.
[5] Anshuman Chandra, Krishnendu Chakrabarty, "Test Data Compression and Test Resource Partitioning for System-on-Chip Using Frequency-Directed Run-Length (FDR) Codes", IEEE Transactions on Computers, Volume 52, Issue 8, August 2003.
[6] Xrysovalantis Kavousianos, Emmanouil Kalligeros, Dimitris Nikolos, "Multilevel Huffman test-data compression for IP cores with multiple scan chains", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 16, Issue 7, July 2008.

Usha Mehta received the B.E. degree in electronics engineering from Gujarat University and the Master's degree in VLSI Design from Nirma University. Currently she is an Associate Professor in the Electronics and Communication Engineering Department, Institute of Technology, Nirma University. Her research interests include ATPG, test data compression and DFT. She is a member of the IEEE, IETE, VSI and ISTE.
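To ground the variable-to-variable category from Section III, here is a minimal sketch of Golomb coding of 0-run lengths, in the spirit of the scheme surveyed from [3]. The group size m = 4, the function name, and the example bit stream are illustrative assumptions; details such as don't-care mapping and trailing-run handling are deliberately omitted:

```python
# Golomb coding of 0-run lengths (a variable-to-variable code, Section III).
# Simplified sketch: each run of 0s terminated by a 1 becomes (run // m) ones,
# a separator 0, and (run % m) as a log2(m)-bit binary tail (m a power of 2).

def golomb_encode_runs(bits: str, m: int = 4) -> str:
    """Encode a fully specified bit stream as Golomb codes of its 0-runs."""
    tail_len = m.bit_length() - 1      # log2(m) for a power of two
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1                   # extend the current run of 0s
        else:                          # a 1 terminates the run
            out.append("1" * (run // m) + "0" + format(run % m, f"0{tail_len}b"))
            run = 0
    return "".join(out)

# Runs of length 6 and 2 encode as "10"+"10" and "0"+"10": 10 bits -> 7 bits.
print(golomb_encode_runs("0000001001", m=4))   # -> 1010010
```

Because test cubes are dominated by don't-cares that can be mapped to 0, such long 0-runs are common, which is exactly what makes this family of codes effective on scan test data.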
