Survey of VLSI Test Data Compression Methods: Usha Mehta
Abstract—Test data compression has been an emerging need of the VLSI field and hence a hot topic of research for the last decade. Still, there is a great need and scope for further reduction in test data volume. This reduction may be lossy for output-side test data but must be lossless for input-side test data. This paper summarizes the different methods applied for lossless compression of the input-side test data, starting from simple code-based methods and moving to combined/hybrid methods. The basic goal here is to survey the current methodologies applied for test data compression and to prepare a platform for further development in this avenue.

Index Terms—VLSI, Testing, data compression
I. NEED OF TEST DATA COMPRESSION

As a result of the emergence of new fabrication technologies and design complexities, standard stuck-at scan tests are no longer sufficient. The number of tests and the corresponding data volume increase with each new fabrication process technology.

If a design requires 20X compression today, that requirement becomes 40X if the next design doubles in size. If you consider reducing the block-level routing and top-level scan pins by 5X, that means you need 5X more compression on top of the existing compression. Supporting multisite testing or DFM-based fault models will triple the compression requirements at a minimum. A major benefit of compression is that it reduces the test pin count, which is a major cost benefit at manufacturing. As a result, some companies are already looking for compression well beyond 100X tester cycle reduction [4][5].

Conventional external testing involves storing all test vectors and test responses on an external tester, that is, automatic test equipment (ATE). But these testers have limited speed, memory, and I/O channels. The test data bandwidth between the tester and the chip is relatively small; in fact, it is often the bottleneck determining how fast the chip can be tested. Testing cannot proceed any faster than the amount of time required to transfer the test data:

    test time >= (amount of test data on tester) / (test data bandwidth),

where the bandwidth is the product of the number of tester channels and the tester clock rate.
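For illustration, a minimal Python sketch of this bound (all numbers here are assumed for the example; none come from the surveyed work):

# Minimum test time is bounded by data volume over bandwidth.
test_data_bits = 10e9   # stimulus + response volume on the tester (assumed)
channels = 8            # tester channels driving the chip (assumed)
clock_hz = 50e6         # tester shift clock rate (assumed)

bandwidth = channels * clock_hz          # bits per second
test_time = test_data_bits / bandwidth   # lower bound, in seconds
print(f"minimum test time: {test_time:.1f} s")           # 25.0 s
print(f"with 10X compression: {test_time / 10:.1f} s")   # 2.5 s

Since the bound scales directly with the stored data volume, any reduction in test data translates one-for-one into tester time savings.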
The quantity of test data rapidly increases while, at the same time, the inner nodes of dense SoCs become less accessible from the external pins. The testing problem is further exacerbated by the use of intellectual property (IP) cores, since their structure is often hidden from the system integrator. In such cases, no modifications can be applied to the cores or their scan chains, and neither automatic test pattern generation (ATPG) nor fault simulation tools can be used. Only precomputed test sets are provided by the core vendors, and these must be applied to the cores during testing. In this case, a test data compression technique that is independent of both ATPG and fault simulation is most preferable. Code-based test data compression satisfies both requirements: it applies directly to ready-made test patterns and requires neither ATPG nor fault simulation. Further advantages that can be achieved in some of the cases are difference patterns and reordering of test patterns.

As Figure 2 illustrates, test data compression involves adding some additional on-chip hardware before and after the scan chains. This additional hardware decompresses the test stimulus coming from the tester; it also compacts the response after the scan chains and before it goes to the tester. This permits storing the test data in a compressed form on the tester. With test data compression, the tester still applies a precise deterministic (ATPG-generated) test set to the Circuit Under Test (CUT).

The advantage of test data compression is that the complete set of patterns applied to the CUT is generated with ATPG, and this set of test patterns is optimizable with respect to the desired fault coverage. Test data compression is also easier to adopt in industry because it is compatible with the conventional design rules and test generation flows for scan testing. Test data compression provides two benefits. First, it reduces the amount of data stored on the tester, which can extend the life of older testers that have limited memory. Second (and this is the more important benefit, which applies even for testers with plenty of memory), it can reduce the test time for a given test data bandwidth. Doing so typically involves having the decompressor expand the data from n tester channels to fill more than n scan chains. Increasing the number of scan chains shortens each scan chain, in turn reducing the number of clock cycles needed to shift in each test vector.
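A back-of-the-envelope sketch of this effect (the flip-flop and pattern counts below are assumed for illustration):

# Shift cycles per pattern equal the length of the longest scan chain,
# so splitting the same flip-flops across more internal chains cuts
# the cycles needed to load each test vector.
scan_flops = 100_000
patterns = 1_000

for chains in (8, 64, 256):  # 8 = raw tester channels; more needs a decompressor
    chain_len = -(-scan_flops // chains)   # ceiling division
    print(f"{chains:3d} chains: {chain_len:6d} cycles/pattern, "
          f"{chain_len * patterns:10d} cycles total")

Going from 8 chains driven directly to 256 internal chains fed through a decompressor cuts the shift cycles per pattern from 12,500 to 391 in this example.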
Test data compression must compress the test vectors losslessly (i.e. it must reproduce all the care bits after decompression) to preserve fault coverage. Test vectors are highly compressible because typically only 1% to 5% of their bits are specified (care) bits. The rest are don't-cares, which can take on any value with no impact on the fault coverage. A test cube is a deterministic test vector in which the bits that ATPG does not assign are left as don't-cares (i.e. the ATPG does not randomly fill the don't-cares). In addition to containing a very high percentage of don't-cares, test cubes also tend to be highly correlated because faults are structurally related in the circuit. Both of these factors are exploitable to achieve high amounts of compression. Recently, researchers have proposed a wide variety of techniques for test vector compression.

A few important factors to be considered with any compression technique are:

• The amount of compression possible.
• The area overhead of the decoding architecture. The on-chip decompression circuitry must be small so that it does not add significant area overhead. The properties of the code are chosen such that the decoder has a very small area and is guaranteed to be able to decode the test data as fast as the tester can transfer it.
• The reduction in test time. Transferring compressed test vectors takes less time than transferring the full vectors at a given bandwidth. However, in order to guarantee a reduction in the overall test time, the decompression process should not add much delay (which would subtract from the time saved in transferring the test data).
• The scalability of compression: does the compression technique work with various design sizes, with few or many scan channels, and with different types of designs?
• Power dissipation, an important factor in today's chip design. Power dissipation in CMOS circuits is proportional to the switching activity in the circuit. During normal operation of a circuit, often only a small number of flip-flops change value in each clock cycle. During test operation, however, large numbers of flip-flops switch, especially when test patterns are scanned into the scan chains. Compacting the test set often requires replacing (mapping) don't-cares with specified bits "0" or "1". This process may increase the switching activity of the scan flip-flops and eventually the scan-in power dissipation. There are usually plenty of don't-cares in the test patterns generated for scan, and a test compression method should effectively use this opportunity for compression as well as power reduction (see the sketch after this list).
• The robustness in the presence of X states: can the design maintain compression while handling X states without losing coverage?
• The ability to perform diagnosis of failures when applying compressed patterns.
• The type of decoder: data-independent or data-dependent. In the former category, the on-chip decoder or decompression program is universal, i.e., it is reusable for any test set. In contrast, the decoder of a data-dependent technique can only decompress a specific test vector. Data-dependent decoders have difficulties in terms of size and organization for improved compression and often require large on-chip memory [6]. Hence, data independence is a preferable property.
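As promised in the power-dissipation item above, a small sketch of how don't-care filling changes scan-in switching activity (the 16-bit cube and the two fill strategies are illustrative assumptions, not methods from the surveyed papers):

import random

def transitions(vector: str) -> int:
    # Adjacent 0<->1 flips in the scan-in stream, a common proxy for
    # shift-mode switching activity.
    return sum(a != b for a, b in zip(vector, vector[1:]))

cube = "1XXXX0XXXX1XXXX0"   # hypothetical test cube, X = don't-care
zero_fill = cube.replace("X", "0")
rand_fill = "".join(random.choice("01") if b == "X" else b for b in cube)

print(zero_fill, transitions(zero_fill))   # 0-fill: 3 transitions
print(rand_fill, transitions(rand_fill))   # random fill: usually far more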
Test vector compression schemes fall broadly into three categories [1]:
1) Code-based schemes use data compression codes to encode test cubes.
2) Linear-decompression-based schemes decompress the data using only linear operations (that is, LFSRs and XOR networks).
3) Broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.

III. CODE BASED TEST DATA COMPRESSION TECHNIQUES

Code-based schemes use data compression codes to encode the test cubes. This involves partitioning the original data into symbols and then replacing each symbol with a code word to form the compressed data. To perform decompression, a decoder simply converts each code word in the compressed data back into the corresponding symbol.
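A minimal sketch of this symbol-to-code-word round trip, using a toy fixed-to-variable code (the code table below is invented for illustration; it is not one of the published codes covered by this survey):

# Toy fixed-to-variable code: 4-bit symbols map to prefix-free code
# words, and the symbols that dominate test data after X-filling
# (all-0, all-1) get the shortest code words.
code = {"0000": "0", "1111": "10", "0001": "110", "1000": "111"}
decode = {cw: sym for sym, cw in code.items()}

def compress(bits: str) -> str:
    return "".join(code[bits[i:i + 4]] for i in range(0, len(bits), 4))

def decompress(stream: str) -> str:
    out, cw = [], ""
    for bit in stream:        # the on-chip decoder does the same job,
        cw += bit             # recognizing one code word at a time
        if cw in decode:
            out.append(decode[cw])
            cw = ""
    return "".join(out)

data = "0000000011110001"
packed = compress(data)                        # "0010110"
assert decompress(packed) == data              # lossless round trip
print(len(data), "->", len(packed), "bits")    # 16 -> 7 bits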
The data compression codes are generally classified into four categories based on symbol size and codeword size:
1) Fixed-to-fixed coding schemes, where the symbol size as well as the codeword size is fixed, like dictionary codes.
2) Fixed-to-variable coding schemes, where the symbol size is fixed but the codeword size is variable, like Huffman codes.
3) Variable-to-fixed coding schemes, where the symbol size is variable but the codeword size is fixed, like run-length codes.
4) Variable-to-variable coding schemes, where the symbol size as well as the codeword size is variable, like Golomb codes (see the sketch below).
Over the years, researchers have developed a large number of variants of the above schemes. The following survey covers most of these variants. Based on the basic schemes and their evolved variants, the techniques are broadly divided into four different categories.
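As an illustration of the variable-to-variable case, the following sketch Golomb-encodes runs of 0s terminated by a 1, in the spirit of run-length-based schemes such as [5] (the group size and the input stream are assumptions for the example):

import math

M = 4                      # Golomb group size (a power of 2), assumed
TAIL = int(math.log2(M))   # tail bits per code word

def golomb_encode(bits: str) -> str:
    # Each run of 0s ending in a '1' becomes a unary quotient prefix
    # ('1' * q followed by '0') plus the remainder in TAIL binary bits.
    # Assumes the stream ends with a '1'.
    out = []
    for run in bits.split("1")[:-1]:
        q, r = divmod(len(run), M)
        out.append("1" * q + "0" + format(r, f"0{TAIL}b"))
    return "".join(out)

stream = "0000000100000000000010001"   # long 0-runs, sparse 1s
packed = golomb_encode(stream)
print(len(stream), "->", len(packed), "bits:", packed)   # 25 -> 12 bits

Longer runs, which dominate 0-filled test sets, receive proportionally shorter code words, which is where the compression comes from; the corresponding on-chip decoder can be as simple as a small finite-state machine with a counter.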
IV. LINEAR-DECOMPRESSION-BASED SCHEMES

Encoding a test cube with a linear decompressor involves solving a system of linear equations, one per specified bit, to find the free-variable assignments needed to generate the test cube. If no solution exists, then the test cube is unencodable (that is, it does not exist in the output space of the linear decompressor). In this method, it is difficult to encode a test cube that has more specified bits than the number of free variables available to encode it. However, for linear decompressors that have diverse linear equations (such as an LFSR with a primitive polynomial), if the number of free variables is sufficiently larger than the number of specified bits, the probability of not being able to encode the test cube becomes negligibly small. For an LFSR with a primitive polynomial, research showed that if the number of free variables is 20 more than the number of specified bits, then the probability of not finding a solution is less than 10^-6.

Researchers have proposed several linear decompression schemes, which are either combinational or sequential. With sequential (reseeding-based) schemes, the LFSR must be at least as large as the number of specified bits in the test cube; one way around this is to decompress only a scan window (a limited number of scan slices) per seed.
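A minimal sketch of this encoding step, using a hypothetical 8-bit LFSR (the feedback taps and the test cube are invented for the example; real schemes use much larger LFSRs with primitive polynomials). Each output bit is a GF(2) linear combination of the seed bits, so encoding reduces to Gaussian elimination over GF(2):

NBITS = 8
TAPS = (0, 2, 3, 4)   # hypothetical feedback taps

def output_equations(n):
    # Track each LFSR stage symbolically as a bitmask over the seed
    # variables; the output stage's mask is one equation per scan bit.
    state = [1 << i for i in range(NBITS)]
    eqs = []
    for _ in range(n):
        eqs.append(state[-1])
        fb = 0
        for t in TAPS:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return eqs

def solve_seed(cube):
    # Gauss-Jordan elimination over GF(2); free variables default to 0.
    eqs, pivots = output_equations(len(cube)), {}
    for i, c in enumerate(cube):
        if c == "X":
            continue
        mask, rhs = eqs[i], int(c)
        for p, (pm, pr) in pivots.items():
            if (mask >> p) & 1:
                mask, rhs = mask ^ pm, rhs ^ pr
        if mask == 0:
            if rhs:
                return None   # inconsistent system: cube is unencodable
            continue
        p = mask.bit_length() - 1
        for q, (qm, qr) in list(pivots.items()):
            if (qm >> p) & 1:
                pivots[q] = (qm ^ mask, qr ^ rhs)
        pivots[p] = (mask, rhs)
    return sum(rhs << p for p, (_, rhs) in pivots.items())

def expand(seed, n):
    # The on-chip side: load the seed, shift, and feed the scan chain.
    state = [(seed >> i) & 1 for i in range(NBITS)]
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in TAPS:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return "".join(map(str, out))

cube = "1XX0XXX1XX0XXXX1XXXX"   # 5 specified bits, 8 free variables
seed = solve_seed(cube)
assert seed is not None
assert all(c in ("X", e) for c, e in zip(cube, expand(seed, len(cube))))
print(f"seed {seed:08b} reproduces every care bit of {cube}")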
V. BROADCAST-SCAN-BASED SCHEMES

A third category of techniques is based on the idea of broadcasting the same value to multiple scan chains; that is, a single tester channel drives multiple scan chains. This is actually a special degenerate case of linear decompression in which the decompressor consists of only fan-out wires. Given a particular test cube, the probability of encoding it with a linear decompressor that uses XORs is higher, because such a decompressor has a more diverse output space with fewer linear dependencies than a fan-out network. However, the fact that each fault can be detected by many different test cubes provides an additional degree of freedom.
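A minimal sketch of the broadcast constraint (the four per-chain cubes are assumed for the example): since every chain receives the same stream, the per-chain test cubes must agree wherever more than one of them specifies a bit.

def broadcast_merge(cubes):
    # Merge per-chain test cubes into the single broadcast stream;
    # bits shifted in the same cycle must not conflict across chains.
    stream = []
    for column in zip(*cubes):
        care = {b for b in column if b != "X"}
        if len(care) > 1:
            return None   # a 0/1 conflict: not broadcastable
        stream.append(care.pop() if care else "0")   # 0-fill don't-cares
    return "".join(stream)

print(broadcast_merge(["1XXX0X", "1X0X0X", "XXXXXX", "1XXX0X"]))  # "100000"
print(broadcast_merge(["1XXXXX", "0XXXXX"]))                      # None

This is where the extra degree of freedom mentioned above helps: among the many test cubes that detect a given fault, ATPG can pick one that is compatible with the cubes of the other chains sharing the same channel.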
[5] Anshuman Chandra and Krishnendu Chakrabarty, "Test data compression and test resource partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes," IEEE Transactions on Computers, vol. 52, no. 8, August 2003.
[6] Xrysovalantis Kavousianos, Emmanouil Kalligeros, and Dimitris Nikolos, "Multilevel Huffman test-data compression for IP cores with multiple scan chains," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 16, no. 7, July 2008.

Usha Mehta received the B.E. degree in electronics engineering from Gujarat University and the Master's degree in VLSI Design from Nirma University. She is currently an Associate Professor in the Electronics and Communication Engineering Department, Institute of Technology, Nirma University. Her research interests include ATPG, test data compression, and DFT. She is a member of the IEEE, IETE, VSI, and ISTE.