FAILURE ANALYSIS OF INTEGRATED CIRCUITS: TOOLS AND TECHNIQUES

edited by Lawrence C. Wagner, Ph.D.
Texas Instruments Incorporated

KLUWER ACADEMIC PUBLISHERS
Boston / Dordrecht / London

Distributors for North, Central and South America: Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, Massachusetts 02061 USA. Tel: 781-871-6600. Fax: 781-871-6528. E-mail: [email protected]

Distributors for all other countries: Kluwer Academic Publishers Group, Distribution Centre, Post Office Box 322, 3300 AH Dordrecht, THE NETHERLANDS. Tel: 31 78 6392 392. Fax: 31 78 6546 474. E-mail: [email protected]
Electronic Services: http://www.wkap.nl

Library of Congress Cataloging-in-Publication Data: a C.I.P. Catalogue record for this book is available from the Library of Congress.

Copyright © 1999 by Kluwer Academic Publishers. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher, Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, Massachusetts 02061.

Printed on acid-free paper. Printed in the United States of America.

TABLE OF CONTENTS

Preface
Acknowledgements

1 Introduction
  Lawrence C. Wagner
1.1 Electrical Characterization
1.2 Die Exposure
1.3 Fail Site Isolation
1.4 Package Analysis
1.5 Physical and Chemical Analysis
1.6 Diagnostic Activities
1.7 Root Cause and Corrective Action
1.8 Conclusion

2 Electrical Characterization
  Steven Frank, Wilson Tan and John F. West
2.1 Electrical Characterization
2.2 Curve Tracing
2.3 Electrical Characterization of State Dependent Logic Failures
2.4 Memory Functional Failures
2.5 Challenges of Analog Circuit Fault Isolation and Analog Building Blocks
2.6 Future Challenges for Circuit Characterization

3 Package Analysis: SAM and X-ray
  Thomas M. Moore and Cheryl Hartfield
3.1 The Scanning Acoustic Microscope
3.2 The Real-Time X-Ray Inspection System
3.3 Application Examples
3.4 Summary and Trends in Nondestructive Inspection

4 Die Exposure
  Phuc D. Ngo
4.1 Delidding Cavity Packages
4.2 Decapsulation of Plastic Packages
4.3 Alternative Decapsulation Methods
4.4 Backside Preparation for Characterization and Analysis
4.5 Future Requirements

5 Global Failure Site Isolation: Thermal Techniques
  Daniel L. Barton
5.1 Blackbody Radiation and Infrared Thermography
5.2 Liquid Crystals
5.3 Fluorescent Microthermal Imaging
5.4 Conclusion

6 Failure Site Isolation: Photon Emission Microscopy, Optical/Electron Beam Techniques
  Edward I. Cole and Daniel L. Barton
6.1 Photon Emission Microscopy
6.2 Active Photon Probing
6.3 Active Electron Beam Probing
6.4 Future Developments for Photon and Electron Based Failure Analysis

7 Probing Technology for IC Diagnosis
  Christopher G. Talbot
7.1 Probing Applications and Key Probing Technologies
7.2 Mechanical Probing
7.3 E-beam Probing
7.4 Navigation and Stage Requirements
7.5 FIB for Probing and Prototype Repair
7.6 Backside Probing for Flip Chip IC

8 Deprocessing
  Daniel Yim
8.1 IC Wafer Fabrication
8.2 Deprocessing Methods
8.3 New Challenges

9 Cross-section Analysis
  Tim Haddock and Scott Boddicker
9.1 Packaged Device Sectioning Techniques
9.2 Wafer Cleaving
9.3 Die Polishing Techniques
9.4 Cross Section Decoration: Staining
9.5 Focused Ion Beam (FIB) Techniques
9.6 Sectioning Techniques for TEM Imaging
9.7 Future Issues

10 Inspection Techniques
  Lawrence C. Wagner
10.1 Microscopy
10.2 Optical Microscopy
10.3 Scanning Electron Microscopy
10.4 Focused Ion Beam Imaging
10.5 Transmission Electron Microscopy
10.6 Scanning Probe Microscopy
10.7 Future Considerations

11 Chemical Analysis
  Lawrence C. Wagner
11.1 Incident Radiation Sources
11.2 Radiation-Sample Interaction
11.3 Radiation Flux
11.4 Detectors
11.5 Common Analysis Techniques
11.6 Microspot FTIR
11.7 Other Techniques
11.8 Conclusion

12 Energy Dispersive Spectroscopy
  Phuc D. Ngo
12.1 Characteristic X-Ray Process and Detection
12.2 Quantitative Analysis
12.3 Sample Considerations
12.4 EDS Applications
12.5 Future Considerations

13 Auger Electron Spectroscopy
  Robert K. Lowry
13.1 The Auger Electron Process
13.2 AES Instrumentation and Characteristics
13.3 AES Data Collection and Analysis
13.4 Specimen, Material, and AES Operational Concerns
13.5 AES in Failure Analysis
13.6 Conclusion

14 Secondary Ion Mass Spectrometry, SIMS
  Keenan Evans
14.1 Basic SIMS Theory and Instrumentation
14.2 Operational Modes, Artifacts, and Quantification
14.3 Magnetic Sector SIMS Applications
14.4 Quadrupole SIMS Applications
14.5 Time-of-Flight SIMS Applications
14.6 Future SIMS Issues

15 Failure Analysis Future Requirements
  David P. Vallett
15.1 IC Technology Trends Driving Failure Analysis
15.2 Global Failure Site Isolation
15.3 Development in Probing
15.4 Deprocessing Difficulties
15.5 Defect Inspection — A Time vs. Resolution Tradeoff
15.6 Failure Analysis Alternatives
15.7 Beyond the Roadmap

Index

PREFACE

This book is intended as a guide for those diagnosing problems in integrated circuits. The process of selecting the tools and techniques to apply to a specific problem can be bewildering and inefficient. As shown throughout the book, there is a wide variety of tools and techniques employed in the diagnosis or failure analysis of semiconductors. It is not practical to attempt to apply all of them to any specific failure. Some are mutually exclusive: for example, it is not practical to attempt to perform TEM on and deprocess the exact same device. Some techniques are targeted at specific failure mechanisms; hence, it is not practical to apply them unless there is an indication that the particular failure mechanism is possible.
Further, the cost and effort to apply all of the available techniques make it impractical. This book provides a basic understanding of how each technique works. It is, however, not intended to provide mathematical detail. Rather, it provides the qualitative understanding generally required in making intelligent tool choices. The book discusses the shortcomings and limitations of each technique as well as their strengths, and typical applications are used to illustrate those strengths. The book is also not intended to provide recipes for executing the techniques. Those recipes are very dependent on the semiconductor manufacturing process and the specific tool manufacturer; the diversity of semiconductor processes and tools makes it impractical to attempt that in a single volume.

As well as understanding how and when to apply each technique, it is important to understand how the techniques fit together. It is altogether too easy to become tied up in attempting to make one technique work on a specific failure. The diagnosis is really part of a much bigger continuous improvement process. That process and the integration of the tools are presented in the first chapter.

The semiconductor industry is propelled by the rapid pace of technological improvements. The trend is towards more complex, faster, denser devices with smaller features and more layers. None of these trends makes diagnosis of problems easier; thus, the technology of failure analysis must strive to keep pace. The efforts of failure analysis to keep pace are described in the final chapter.

ACKNOWLEDGEMENTS

I would like to thank all of the chapter authors who contributed so much of their time and effort to making this book possible.
I would also like to thank the people who helped with chapter reading, including Richard Clark of Intel, Dave Vallett of IBM, and Ken Butler, Hal Edwards, Walter Duncan, Gordon Pollack, Tim Haddock, John West, John Gertas, and Monte Douglas from Texas Instruments. I would especially like to thank Randy Harris for his support in completing this book.

INTRODUCTION

Lawrence C. Wagner
Texas Instruments Incorporated

Failure analysis can be defined as a diagnostic process for determining the cause of a failure. It has broad applications in various industries but is particularly important in the integrated circuit industry. Within the semiconductor industry, failure analysis is a term broadly applied to a number of diagnostic activities. These activities are geared towards supporting the determination of a "root cause" of failure, which supports process improvements impacting product yield, quality and reliability. In its most narrow usage, failure analysis is the analysis of completed semiconductor devices which have failed. These consist primarily of customer returns, qualification failures and engineering evaluations. In the broadest sense, failure analysis includes a diverse range of diagnostic activities, including support of process development, design debug, and yield improvement activities in both the wafer fab and the assembly site.

Despite the broad range of diagnostic activities, a relatively common set of tools and techniques has emerged to support them. Nearly all of this extremely diverse group of tools and techniques is used in the failure analysis of completed devices. Because of this, the failure analysis of completed devices provides an excellent perspective from which to approach a discussion of these tools and techniques. The tools and techniques can be grouped into several critical subprocesses, which form a generic failure analysis process flow.
This is illustrated in Figure 1.1 as a flow chart. This flow for failure analysis of completed devices includes all of the major elements of the broad range of diagnostic activity which will be discussed below. It is important to understand that the flow chart in Figure 1.1 is a grossly simplified process flow; there are in fact countless branches and decisions in a complete failure analysis process description. In many ways, the process can be viewed as two primary parts: the electrical isolation of the failure and the physical cause analysis, as shown in Figure 1.1. The electrical isolation can be viewed as a narrowing of the scope of investigation by determining a more precise electrical cause of failure. For example, a precise electrical cause of failure might be a short between two nets or signal interconnects. The second part of the process is the determination of the physical cause of the failure. This is the process of uncovering what has physically caused the electrical failure. For the example above, the physical cause of the failure might be a stainless steel particle which shorted the two interconnects.

Figure 1.1. Typical failure analysis process flows for die related and package related defects are shown for comparison. Each flow can be broken down into an electrical cause of failure determination and a physical cause of failure determination.

As pointed out above, the failure analysis process is in fact part of a larger product or manufacturing improvement process. The input to the failure analysis process is failed devices. In some cases, very specific devices such as qualification failures must be successfully analyzed.
In other cases, such as yield analysis, where it is not feasible to analyze every failed device, a selection process must occur. The selection is usually made in order to make the biggest possible improvement in the manufacturing process. This typically consists of a Pareto ranking of the defect characteristics observed. The goal of the failure analysis process is to enable an improvement process. Hence, the results of the failure analysis, the identification of a physical cause of failure, must be useful for correcting a manufacturing problem. This drives a need for "root cause" identification, which frequently goes beyond the physical cause of the failure. The root cause for the particle cited in the example above might be a poorly designed wafer loader for the interconnect deposition tool which results in friction and particle generation.

In simpler terms, the isolation process can be viewed as determining where to look for the defect or anomaly. The physical analysis process is one of exploration and information gathering about the failure mechanism or physical cause of the failure. The corrective action process uses the physical analysis to understand how the defect or anomaly was generated in the manufacture of the device and how to eliminate the "root cause" of failure.

1.1 ELECTRICAL CHARACTERIZATION

Electrical testing plays a very significant role in failure analysis and is the starting point for every analysis. The initial electrical verification process provides a general understanding of how the device is failing electrically. This is generally in the form of a datalog: the measured continuity, parametric, and functional outputs of the device from production test equipment. This guides the further course of action, and the results may drive the need for more detailed electrical characterization. For example, electrical leakage identified at verification may lead to detailed I-V (current-voltage) characterization on a curve tracer or parametric analyzer.
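The curve-tracer-style I-V screening just described can be illustrated with a short sketch. Everything in it is hypothetical: the ideal-diode-plus-leakage model, the 1 µA leakage limit, and the pin data are invented for illustration and are not taken from this book.

```python
# Hypothetical sketch of curve-tracer-style I-V screening.
# The diode model, leakage limit and sweep range are illustrative only.
import math

def diode_current(v, i_s=1e-14, n=1.0, v_t=0.02585, i_leak=0.0):
    """Ideal-diode equation plus a parallel ohmic leakage term (amps)."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0) + i_leak * v

def classify_pin(iv_points, v_test=-5.0, leak_limit=1e-6):
    """Flag a pin whose measured current at v_test exceeds the leakage limit."""
    i_at_test = min(iv_points, key=lambda p: abs(p[0] - v_test))[1]
    return "LEAKY" if abs(i_at_test) > leak_limit else "PASS"

# Sweep -5 V to +0.7 V in 0.1 V steps, as a curve tracer would.
sweep = [-5.0 + 0.1 * k for k in range(58)]
good_pin = [(v, diode_current(v)) for v in sweep]
leaky_pin = [(v, diode_current(v, i_leak=5e-6)) for v in sweep]  # 5 uA/V leakage path

print(classify_pin(good_pin), classify_pin(leaky_pin))
```

On real equipment the same decision is usually made by comparing the measured curve of a failing pin against that of a known-good reference device rather than against a model.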
For functional failures, further observation may lead to Schmoo plotting (characterization as a function of temperature, power supply voltage or frequency) or scan testing to better understand the characteristics of the failure. This further electrical characterization may provide partial or even complete failure site identification. In highly structured devices such as memories, electrical testing may conclusively identify a failure site as a single memory bit which covers less than a square micron. When Design for Test features such as SCAN are present in logic, they can frequently be used to isolate failures to a single circuit node or net. The initial electrical characterization can also provide indications of a package-related problem; in such cases, non-destructive analysis of the package may take place before any subsequent activity.

Electrical testing is also required for fail site isolation. While fail site isolation techniques are extremely diverse, they do have two common requirements. They require that the failing device be placed into the failing electrical condition; an understanding of how to do so is achieved during the electrical characterization. They also require access to the die, as discussed below.

1.2 DIE EXPOSURE

Die exposure is required to perform fail site isolation techniques, since packaging materials are opaque and direct access to the die is needed. Ceramic packages are normally mechanically opened, while plastic encapsulated devices are subjected to a decapsulation process, most commonly jet etch decapsulation. A critical requirement of these processes is to maintain the electrical integrity of the device: it is essential that the device continue to fail electrically in the same manner as prior to die exposure.
In addition, the external pins and their connections to the die must remain intact for convenient biasing during failure site isolation.

While in most cases the active surface of the die is exposed, for some technologies it is not feasible to expose the active face and maintain electrical functionality. This is observed in most flip-chip devices and in some memory devices where bond wires obscure much of the die surface. One alternative is to expose and polish the back surface of the die and use backside failure site isolation techniques. A second alternative is to remove the die completely from the package. This removes the usual paths for electrical stimulation, which is required for failure site isolation; it can be remedied by repackaging (placing the device in an alternative package better suited to failure analysis and rewiring) or by using a probe card.

1.3 FAIL SITE ISOLATION

Fail site isolation consists of a broad range of techniques for narrowing the focus of diagnostic investigation. Modern IC's may have literally many millions of transistors and interconnects, and it has become impossible to perform analysis on IC's without narrowing the scope of the failure analysis from the millions of elements down to a very few. In general, techniques can be classified as global techniques or probe techniques. Global techniques attempt to identify secondary effects of the failure. Thermal detection techniques attempt to identify heat generated at a failure site. Photon emission microscopy similarly identifies anomalous light emission, for example as a result of electron-hole recombination. A range of other techniques which identify various carrier-generated or thermal events are also effective methods of quickly performing failure site isolation. Probe techniques are those which make measurements of electrical signals within an IC. These techniques can be viewed as direct trouble-shooting of the device by direct measurement of circuit performance.
If these techniques are applied without some fail site isolation from electrical testing, the process of fail site isolation by probing becomes a long and tedious one.

1.4 PACKAGE ANALYSIS

Package analysis is focused on non-destructive techniques for analyzing the structures within an IC package. Electrical data can indicate a possible package related problem, e.g. opens, shorts or leakages. In such cases, it is helpful to assess the physical condition of the package and its interior prior to any destructive analysis. These techniques also initiate the physical cause analysis for open and short failures. Powerful probes such as X-radiography and the Scanning Acoustic Microscope (SAM) provide particularly useful insights into the integrity of a package structure.

1.5 PHYSICAL AND CHEMICAL ANALYSIS

The task of physical and chemical analysis supports the determination of the physical cause of failure. Once the failure site isolation is complete, analysis must continue to identify the physical cause of failure. There are generally three types of tools employed: sample preparation, physical observation and chemical analysis. Sample preparation techniques are used to remove materials to access the failure site; these include such processes as deprocessing, parallel polishing and cross sectioning. Physical observation is carried out using various types of microscopes: optical microscopes, scanning electron microscopes, transmission electron microscopes and others. Chemical analysis is performed on defects where it will provide valuable information in the "root cause" determination.

1.6 DIAGNOSTIC ACTIVITIES

In many ways, the diagnostic process described above has been the standard flow for failure analysis for several decades. The remarkable progress of the semiconductor industry in terms of continuously improved technology at lower cost has been well documented.
Many of the industry trends which have driven this progress have had a major impact on failure analysis. These include greater device complexity, smaller feature sizes, more levels of interconnect, lower power supply voltages, and package evolution from through hole to surface mount and ultimately to chip-scale packaging. These changes have made the tools required to perform the failure analysis subprocesses much more sophisticated and expensive. They have also driven the development of a very diverse set of tools and techniques, because many of the tools work extremely well for one class of defects but are ineffective for others. For example, some global failure site isolation techniques perform very well for open circuits but provide no benefit on leakage failures. At times, specific tool development has come about because of a specific change in the industry. The use of double level metallization was, in large part, responsible for the development of global failure site isolation tools such as liquid crystal and photon emission microscopy. Similarly, surface mount technology drove the development of the SAM in order to detect and understand delamination observed during customer assembly of surface mount devices. As the requirements for diagnostic activity increased, many of the tools became specialized to specific applications. For example, in-line tools such as Scanning Electron Microscopes (SEM) and optical microscopes have become highly automated, with wafer handling capability and clean room compatibility, in order to support wafer fab yield analysis. Similarly, SAM applications have been adapted to allow 100% inspection of devices following assembly in order to assure adhesion of packaging materials to the die. Just as the toolset has expanded, the application of the failure analysis methodology has also expanded and become customized to an assortment of diagnostic activities.
Historically, these tasks were consolidated in the failure analysis laboratories. However, as the importance of the various diagnostic activities became understood and the tools became customized, these activities became separate functions. The subprocesses are used in whole or in part in various diagnostic applications: process development, wafer fab yield analysis, design debug, and assembly yield analysis, as well as qualification/customer return failure analysis. The application of the various failure analysis techniques can be viewed as a matrix of technique versus diagnostic activity, as shown in Table 1.1.

Table 1.1. Diagnostic activities are compared to the failure analysis tools used. Electrical characterization is a key element of all diagnostic activities.

These varied applications of failure analysis processes all feed back into the continuous improvement program of the IC manufacturer.

1.6.1 Process Development and Yield Analysis

In general, the diagnostic activities in the wafer fab can be broken down into three categories. The first category is defect or particle reduction. This is an important element of both process development and ongoing yield improvement. Particle detection tools are used to identify the number, size and location of particles generated at various wafer fab process steps. Physical and chemical analysis is performed both in-line and in wafer fab support areas to determine the physical properties and composition of these particles. However, not all particles generate electrical failures.
It is important to be able to understand where particle generation is resulting in electrical failures. Much of this process is performed using memory devices or memory embedded in logic. Memory is highly structured, and this allows failure site identification from test data only. In the case of single bit failures, the electrical isolation from test includes identification of the specific memory cell which fails. Added electrical characterization of the failing bit can identify the specific element of the memory bit that has failed.

The second category of wafer fab diagnostics is parametric analysis. During process development, unexpected electrical performance must be characterized and the physical causes understood. This characterization includes both mean value analysis and distribution analysis. During ongoing manufacture, it is essential to track variations in electrical performance and tighten the distribution of key parameters such as drive current, contact resistance, and leakages. These are frequently impacted by process variations which are not particle related. Where these generic problems exist, test structure analysis should be able to quickly identify the parametric problem, and physical analysis and possibly chemical analysis are used on the test structure.

The final category of diagnostic activity in the wafer fab is the analysis of problems that cannot be modeled with the observed defect densities and observed parametric distributions. In the first two categories, the failure site isolation is performed predominantly from the electrical characterization, either to a memory bit location or to a parametric test structure. This provides a fast and efficient method for providing yield improvement information. The diagnosis of non-modeled failures, on the other hand, generally utilizes the full range of failure site isolation tools described above, including both global and probing techniques.
Unmodeled yield loss is frequently due to a subtle design-to-process interaction, typically in the form of a unique geometric feature.

The diagnostic activities discussed above provide much of the basis for the ongoing yield, quality and reliability improvements that have become an expected part of the semiconductor business. The interrelationship between yield, quality and reliability has been intuitively understood for many years; this understanding has recently become more formalized.¹ In addition to the quality and reliability implications, significant profit margin improvement can be achieved through the improvement in yield. For these reasons, much of the increase in diagnostic activity is expected to focus on this area, as it has for several years.

1.6.2 Design Debug

Design debug is critical to timely product introductions. When designs do not function as intended when implemented into silicon, it is critical to quickly understand why the device is not functioning properly. It is also critical to assure that all of the defects in the design are corrected in a single pass. Design debug is generally performed through the use of probe measurements on the circuit, circuit analysis and simulations. In general, design debug predominantly uses the probe failure site isolation methods from failure analysis, although photon emission microscopy can also be useful in the identification of some design defects.

Figure 1.2. Typical feedback loops for customer return and yield analysis on a wafer fab related problem.

A significant trend in the semiconductor industry has been the decrease in the life cycle of IC products. This puts a great deal of pressure on first pass success in the design process. When the first implementation of a design in silicon does not function properly, recovery is critical to the life of the product.
Long delays in debug can now easily cause a product to entirely miss its lifetime window.

1.6.3 Assembly Yield Analysis

Assembly yield analysis is a critical part of the ongoing cost reductions in the semiconductor industry. It has an impact on quality and reliability similar to that noted above for wafer fab yield. Assembly yield analysis is generally focused on defects generated in the various assembly processes. Since the purpose of packaging is to provide connectivity between the IC and the external pins, most of the failures associated with the assembly process are opens, shorts and pin-to-pin leakages. Most of these failures are attacked through the use of non-destructive package analysis, predominantly X-ray and SAM. This inspection process is followed up with physical and chemical analysis. Just as wafer fab yield is key to semiconductor profitability, packaging yield is also critical. Profits are impacted by the loss of good IC dice. They are also impacted by the package costs for any packages which are assembled but cannot be sold.

1.6.4 Qualification and Customer Return FA

The failure analysis of qualification failures encompasses the complete process flow as described above. In many ways, it is the most difficult and diverse of the diagnostic activities, since all of the tools must be used at times. In addition, there is a great deal more focus on successful analysis of specific failures. In yield analysis and process development, the goal is typically to understand the statistically significant portions of the yield fall-out; thus, unique failures are frequently not as critical. In design debug and unmodeled yield loss, the resolution of a specific issue is very critical, but typically a large number of non-functioning devices are available.
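Qualification relies on accelerated stress tests such as those listed in Table 1.2. As an illustration of how such stresses relate to use conditions, the sketch below computes a temperature acceleration factor with the Arrhenius model commonly applied to high temperature operating life data. The 0.7 eV activation energy and the temperatures are assumed, illustrative values, not figures from this book.

```python
# Sketch of extrapolating an accelerated temperature stress to use conditions
# with the Arrhenius model. The 0.7 eV activation energy is an assumed,
# mechanism-dependent value chosen only for illustration.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Acceleration factor of the stress temperature relative to use temperature."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# A 1000-hour life test at 125 C, extrapolated to 55 C operation:
af = arrhenius_af(55.0, 125.0)
print(f"acceleration factor ~ {af:.0f}; 1000 h of stress ~ {1000.0 * af:.0f} h of use")
```

Real qualification planning uses measured, mechanism-specific activation energies, and humidity or voltage stresses are modeled with different acceleration laws entirely.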
Stress | Typical Stress Test | Purpose | Failure Mechanism(s)
High temperature, high electrical bias | High Temperature Operating Life Test | Wafer fab reliability | Oxide and metal wearout; defect related failures
Low temperature, high electrical bias | Low Temperature Operating Life Test | Hot carrier | Hot carrier
High/low temperature | Temperature Cycle | Package reliability | Mechanical stress
High/low temperature, rapid temperature change | Thermal Shock | Package reliability | Mechanical stress
High temperature, high humidity, electrical bias | 85/85 | Package reliability | Corrosion, delamination
Higher temperature and humidity than 85/85 | Highly Accelerated Stress Test (HAST) | Package reliability | Corrosion, delamination
High temperature, high humidity | Autoclave | Package reliability | Corrosion, delamination
Simulated ESD events | Electrostatic Discharge | Sensitivity to ESD | EOS, ESD

Table 1.2. Typical accelerated stress tests are identified with goals.

Customer return analysis was initially used as a product improvement feedback loop. Several factors have slowly eroded that role. The most significant is that this feedback loop is much too long. Figure 1.2 shows a comparison of the feedback loops for customer return failure analysis and yield analysis. The comparison of the length of the customer return feedback loop to the length of the product lifecycle is also significant: problems need to be addressed much earlier in the process in order to reduce customer exposure to problems which occur. The processes, primarily yield analysis, that provide shorter feedback loops have become more effective. Ongoing quality and reliability improvements, yield improvement activities, Wafer Level Reliability efforts, and "Maverick" lot reduction activities have provided shorter and more effective feedback loops into the semiconductor manufacturing processes. These activities are expected to reduce the number of serious IC problems to which customers are exposed.
Customer return analysis remains critical to support customer business decisions, including purges, product recalls, and line shutdowns, when serious problems do occur.

Qualification failures are also critical to the IC business. Qualification consists of a series of accelerated stress tests which are performed on statistically selected samples from a new device type, new wafer fab process, or new package. These tests are generally geared towards the detection of known wear-out mechanisms in IC's, as shown in Table 1.2. However, they also provide insight into anticipated use-condition failure rates. When a product or process fails qualification, serious delays in the introduction of the product or process occur. When failures occur, failure analysis is essential to help identify the "root cause" of the failures. Correction of these "root causes" makes qualification on a second pass more certain. In addition, an understanding of the "root causes" may lead to an acceptable screen, e.g. burn-in, for devices already manufactured.

In addition to qualification failures, other failures which occur on accelerated stress testing must also be analyzed. While the same stresses as used in qualification are employed, many evaluations and process monitors are performed in addition to qualifications. These provide feedback for ongoing reliability improvements.

1.7 ROOT CAUSE AND CORRECTIVE ACTION

As the diversity and complexity of the failure analysis toolset has increased, the cost has risen as well. With the increased cost, more emphasis is being placed on understanding how a return on investment is generated. The return on investment comes through the improvements made during the corrective action process. While this book deals primarily with the tools and techniques of failure analysis, this process cannot be considered complete without a "root cause" determination and corrective action.
With the challenge of understanding the array of available tools and techniques for failure analysis, most analysts will have only a general understanding of specific semiconductor manufacturing processes. Since it is not generally possible for the analyst to also have a detailed understanding of the individual semiconductor processes, the task of using the failure analysis results to determine root cause and to generate corrective action lies predominantly with the process engineer or IC designer. The value of the failure analysis is therefore in the use of those results rather than in their generation.

1.8 CONCLUSION

Failure analysis is a difficult task, requiring a diverse set of tools and techniques. In fact, failure analysis itself is a diverse set of applications of this toolset. A critical part of success in failure analysis is tool and technique selection. This is a constantly evolving decision based on several factors. These factors include such things as likely causes of failures based on device history. They also include the probability of success using a particular tool, given a particular electrical signature. Intangible factors such as recent successes or failures with a tool also tend to impact the selection process. This book will provide an understanding of how the various tools and techniques work. Examples will be used to assist in understanding how these tools and techniques can be applied.

REFERENCE

1. Van der Pol JA, Kuper FG, Ooms ER. Proceedings 7th European Symposium for Reliability of Electron Devices, Failure Physics and Analysis (Pergamon Press), 1996, 1603.

ELECTRICAL CHARACTERIZATION

Steven Frank
Texas Instruments Incorporated

Wilson Tan
Micron Semiconductor Asia Pte Ltd

John F. West
Cyrix Corporation

This chapter will present the concepts of electrical characterization for the failure analysis of IC's. Electrical characterization plays a key role in the success of failure analysis.
It provides the starting point for the complex process of narrowing the scope of analysis from an entire IC to a single bond wire, signal net, transistor, or defective component. In some cases, particularly in single bit memory failures, electrical characterization can provide very detailed and precise information about the physical location of the failure. In other cases, a general area of the IC may be indicated. In all cases, electrical characterization provides the failing electrical conditions needed to perform the physical failure site isolation described in later chapters.

While most failure analysis tools and techniques bridge the full range of IC's, significant differences exist in the methodology and utilization of electrical characterization tools for different types of IC's. In general, we can view the differences in terms of three broad categories of IC's: logic, analog, and memory. These distinctions may appear blurred as we approach "system on a chip" devices, which contain all of these elements. Even when merged, however, there are elements of electrical characterization for these three types of circuits that carry through to the component blocks.

In today's world, logic IC's encompass a broad range of semiconductor devices. The most common are microprocessors, microcontrollers, and digital signal processors. Other logic devices range from "glue" logic chips, which may consist of only a few gates, to Application Specific IC's (ASIC's), which may contain hundreds of thousands of gates. ASIC's have become very complex and may contain embedded blocks of memory and analog functions in addition to digital logic. Determining the electrical cause of failure in one of these devices is not a trivial task, and electrical characterization provides the first valuable clues in the failure site isolation process.
Memory, whether embedded in a logic device or stand-alone, provides the most highly structured of all IC's. These highly structured circuits are very observable and can typically be characterized very precisely. In logic devices with embedded memory, the memory blocks are commonly used in wafer fab yield analysis to drive defect detection because of the ease of failure site isolation. However, logic content in memory devices is rapidly growing to support their interaction with processors. This results in a need to use more "logic-like" failure analysis techniques for memories.

Even as the world rapidly becomes digital, interface to analog inputs and outputs remains a significant requirement. The increase in mixed signal (analog and digital) integrated circuits reflects this. Characterization of analog IC's and analog circuit blocks is particularly challenging because failures tend to occur with small deviations in voltage and current.

The topics discussed in this chapter include basic equipment and techniques used for circuit characterization, electrical characterization for failure site isolation, and future challenges for circuit failure isolation.

2.1 ELECTRICAL CHARACTERIZATION

Electrical failures, in general, can be divided into three areas: continuity, parametric, and functional failures. Each of these types of electrical failures has its own requirements for characterization. Continuity failures are the easiest to characterize and only require a simple measurement for opens and shorts on the external device pins. A short may also be further characterized as to its measured resistance. Parametric failures require more complex measurements for characterization. These might include input or output voltage levels, power supply current, bias currents, offset voltages, overcurrent shutdown levels, frequency response, or other parameters specific for a given device.
Part of the complexity of characterizing parametric fails is that each circuit will have its own distinct parameters, with each parameter specified to fall within a range of acceptable values. While most parameters for devices tend to be common across product families, each family of devices has parameters unique to that family. For example, op-amps will have different specified parameters than an analog to digital converter might have. Parametric characterization is also used to measure individual component parameters such as bipolar transistor DC current gain, MOS transistor threshold voltage, capacitor matching, or resistor matching.

Functional failure characterization is performed by inputting a known stimulus and measuring the resulting outputs. If the measured output states correspond to the expected output states, the part is said to be functional. If not, the part is failing functionally. Functional testing over a range of supply voltages or temperatures may give clues to help isolate failures (known as shmoo plotting). At times, the distinction between parametric and functional fails can blur. For example, the power supply current parameter, commonly referred to as IDDQ in CMOS devices, is unique in that it may require creating a very specific functional state in order to measure the leakage current. If the failure is functional state independent or the state is easily achieved, IDDQ failures are more like parametric failures. When the electrical stimulus becomes very complex, their characterization is more like that of functional failures.

2.1.1 Tools for Electrical Characterization

The most basic tools for electrical characterization are those used for continuity testing and basic parametric analysis. For these tasks, a curve tracer is an essential tool. A curve tracer applies a variable voltage (V) to the device under test (DUT) and displays the resulting current flow (I) in an X-Y plot.
It is used to examine the I-V characteristics of device I/O (input/output) pins and can also be used to perform parametric characterization of internal resistors, diodes, and transistors. For example, the curve tracer can be used to measure the DC current gain of bipolar transistors as well as act as a load to measure output parameters such as VOH, VOL, and VSAT. The use of specially constructed switching boxes for low to medium pin count devices facilitates measurement from various pins. A manual curve tracer has the advantages of high available output current, versatility, and ease of use. It does, however, have disadvantages that severely limit its usefulness for I/O pin characterization of high pin count devices. A manual curve tracer has no facility to fixture high pin count devices. It also lacks the ability to store I-V curves for a good-bad comparison.

A computer controlled curve tracer overcomes the limitations of the manual curve tracer for high pin count devices and has become a standard tool for I/O electrical characterization. The computer controlled curve tracer standardizes fixturing with a standard zero insertion force (ZIF) socket (e.g. 21 x 21 PGA) that is connected to analog bus lines via an electrically addressable switching matrix. The computer controls the application of the stimulus to each pin and records the measured data for display or comparison to previously stored data. The analog busses are configurable so that the system can be set up for standard curve trace analysis, powered curve trace analysis, or latch-up testing.

The tools required for electrical characterization of parametric failures can vary greatly depending on the type of parameter that failed. A semiconductor parameter analyzer is a useful tool for characterizing diodes and transistors and performing circuit level parametric testing, which includes DC functional testing of analog circuits such as voltage references, regulators, and op-amps.
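The good-bad I-V comparison that a computer controlled curve tracer automates can be sketched in a few lines: flag every voltage point where the measured pin current deviates from a stored known-good curve by more than a tolerance. The curves, tolerances, and the leakage signature below are hypothetical.

```python
def compare_iv(reference, measured, abs_tol=1e-6, rel_tol=0.10):
    """Flag voltage points where the measured current deviates from the
    stored known-good curve by more than an absolute plus relative tolerance.
    Both curves are (voltage, current) lists taken on the same voltage grid."""
    deviations = []
    for (v, i_ref), (_, i_meas) in zip(reference, measured):
        limit = abs_tol + rel_tol * abs(i_ref)
        if abs(i_meas - i_ref) > limit:
            deviations.append((v, i_ref, i_meas))
    return deviations

# Known-good pin: the ESD diode to ground conducts below about -0.6 V;
# elsewhere the unpowered pin draws essentially no current.
good = [(-1.0, -4.0e-3), (-0.5, -1.0e-9), (0.0, 0.0), (0.5, 1.0e-9)]
# Suspect pin: ohmic leakage superimposed on the same diode curve.
bad = [(-1.0, -4.1e-3), (-0.5, -5.0e-5), (0.0, 0.0), (0.5, 5.0e-5)]

for v, i_ref, i_meas in compare_iv(good, bad):
    print(f"V={v:+.1f} V: expected {i_ref:.1e} A, measured {i_meas:.1e} A")
```

Because every signal pin is expected to show the same characteristic, a single reference curve serves for the whole device, which is what makes the automated overlay comparison so effective.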
A semiconductor parameter analyzer is an instrument made up of programmable I-V source/monitor units that can be independently configured in a variety of ways. Semiconductor parameter analyzers can measure very low levels of current (fA resolution) and voltage (μV resolution), making them very useful for a broad range of measurement and characterization functions.

Logic device parametric failures fall into two categories: logic-state independent and logic-state dependent. Logic-state independent failures are those which can be directly observed from the external device pins regardless of the state of the other pins. In some cases, such as VOH, the characterization of the failure requires placing the output into a high state through the logic of the internal circuitry. The requirements to achieve a high state may vary from a simple DC level on a set of input pins to a complex set of functional test vectors. The simpler device failures are normally analyzed in a lab using bench-test equipment such as a switch box, stimulated with power supplies, pattern generators, curve tracers, and parameter analyzers. Complex devices normally cannot be characterized on a bench setup due to the large number of pins which must be controlled simultaneously. They require a higher level of test equipment for the electrical stimulus. In this regard, state dependent parametric failures can become more like functional failures in terms of the electrical stimulus required to create them.

The tools required to characterize functional failures will also depend on the type of failure encountered. Functional failures on simple devices can be analyzed using bench test electronics such as power supplies, function generators, pattern generators, and logic analyzers. As devices become more complex, the number of vectors required to achieve the desired state (vector depth) becomes much higher and the timing of the inputs becomes more critical.
For many years, failure analysis performed this function with a class of testers frequently categorized as ASIC verification testers. These testers filled a gap between bench pattern generators/logic analyzers and production test equipment. Compared to production versions, these testers were of lower cost. Further, conversion of test patterns for low pin count devices was relatively easy but did not assure correlation to production ATE (Automatic Test Equipment). As the number of I/O pins and the timing complexity of devices have increased steadily, this correlation with production ATE has become a significant issue. As pin counts in all areas continue to increase, the use of ATE as a tool for electrical characterization is expected to increase. Factors such as an increased number of power supplies, increased timing complexity, analog content, and vector depth make the ASIC verification tester less suitable for failure analysis purposes.

The ATE also provides a software toolkit for debug and failure analysis through its graphical user interface. The most common debug tools available are the wave, vector or pattern, and shmoo tools. The wave (or scope) tool is a digitizing sampling oscilloscope that displays both input and output waveforms of the device. For outputs, the true waveform can be displayed along with trip levels and output strobe markers. The vector or pattern tool displays the inputs and outputs for the entire vector set. All failing vectors and the failing pins within the vector are highlighted. The most useful characterization tool is the shmoo tool. One, two, or three dimensional shmoo plots allow multiple parameters to be simultaneously varied in order to determine the conditions under which the device passes and fails. While using these tools, device timing and logic level definitions can be modified "on the fly," eliminating the need to recompile vectors and the test program. The use of ATE for failure analysis provides several distinct advantages.
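The shmoo concept can be illustrated with a short sketch that sweeps supply voltage against cycle time and marks each point pass or fail. The pass criterion in `device_passes` is a hypothetical stand-in for a real functional test on the ATE, as are the sweep ranges.

```python
def device_passes(vdd, period_ns):
    """Hypothetical speed path: the required cycle time shrinks as VDD rises."""
    required_ns = 40.0 / vdd
    return period_ns >= required_ns

vdd_steps = [2.5 + 0.25 * i for i in range(7)]     # 2.50 V .. 4.00 V
period_steps = [10 + 2 * i for i in range(6)]      # 10 ns .. 20 ns

# Print a text shmoo: '*' = pass, '.' = fail.
print("           " + "".join(f"{p:4d}" for p in period_steps) + "  (period, ns)")
for vdd in vdd_steps:
    row = "".join(f"{'*' if device_passes(vdd, p) else '.':>4}" for p in period_steps)
    print(f"VDD={vdd:4.2f} V {row}")
```

A plot like this makes the pass/fail boundary visible at a glance; a diagonal boundary such as the one produced here is the classic signature of a voltage-dependent speed path.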
Since the tester is used in production, verified test programs already exist, and no correlation issues should occur. In addition to having existing test programs, test hardware is also readily available. Typically, a failure analysis compatible test socket is the only added hardware cost. The tester has been matched to the product to assure that it is capable of adequately testing the device. The tester cost of ownership is also significantly reduced. Since the tester is not dedicated to failure analysis, it can be utilized for production or engineering when not employed for failure analysis. Cycle time is also positively impacted. Since the hardware and test programs are debugged for production, little or no effort is expended in correlation and debug activities. In addition, the production testers are typically well maintained and calibrated for production activity.

2.1.2 Electrical Characterization for Fault Isolation

Failure analysis normally begins with an examination of the datalog submitted with the failing unit. A datalog is the measured results of the production test program. The datalog submitted with a failing unit should be a full datalog, i.e. a listing of the results of all of the production tests. Most production test programs stop at the first indication of failure, and this feature must be overridden to generate a full datalog. It may also include data for device operation at various temperatures and operating voltages. Interpretation of this data is a key factor in efficient failure analysis. By examining all the fails, the analyst can use a heuristic approach to determine if one failure mode (an observed electrical failure) is dominant and causes the others to occur. It is common for a failure to have a large number of failures documented in a datalog.
For example, a device fails functional testing across the VDD range and an input leakage high (IIH) test on one input pin with milliamperes of leakage (a normal reading is on the order of nanoamperes). In this case, the leakage is likely the cause of the functional fail because it is preventing the input buffer from reaching the desired input level with the appropriate timing. This prevents the pin from switching properly. Analyzing the failure as a functional fail would add significantly to the complexity of the analysis. In general, the failure mode should be expressed in the simplest terms for failure analysis. If many different tests in the test program are failing, the sequence of failure mode types in order of ease for failure analysis is: continuity, parametric, functional.

Continuity failures typically cause a large number of failure modes. A device failing with an open input will drive many additional failures, functional as well as IDDQ, due to improper conditioning and floating gates, etc. Opens on outputs will cause functional and VOUT parametric fails. Shorts create both functional and IDDQ failures, as well as leakage and VOUT parametric failures. Power supply shorts are typically seen as IDDQ and functional fails.

Parametric failures can also result in functional failure modes. For example, input leakage may cause IDDQ to fail as well as slowing the input enough to cause at-speed functional failures. Conversely, functional failures can lead to parametric failures. Many output parameters are dependent on creating a particular state on that output. Thus, a true functional fail may fail VOUT and IOZ tests in addition to the functional patterns. The difference can usually be established by determining the state of the output at the time the parametric test is performed. Thus, if the output is in the correct state, VOUT and IOZ failures can be real.
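The triage heuristic described above (pursue continuity fails first, then parametric, then functional) can be sketched as a simple sort over the failing tests pulled from a full datalog. The test names and categories below are hypothetical.

```python
# Ease-of-analysis ordering from the text: continuity, then parametric,
# then functional.
EASE_ORDER = {"continuity": 0, "parametric": 1, "functional": 2}

def triage(failing_tests):
    """failing_tests: list of (test_name, category) tuples from a full datalog.
    Returns the fails sorted from easiest to hardest to pursue."""
    return sorted(failing_tests, key=lambda t: EASE_ORDER[t[1]])

# Hypothetical full-datalog fails for the example above: a functional fail
# accompanied by a gross input leakage fail on one pin.
datalog_fails = [
    ("functional_pattern_3", "functional"),
    ("IIH_pin_17", "parametric"),
    ("IDDQ_state_4", "parametric"),
]

ordered = triage(datalog_fails)
print("Start analysis with:", ordered[0][0])
```

Here the sort puts the input leakage fail first, mirroring the reasoning in the example: the leakage is the likely root of the functional fails, and characterizing it is far simpler than debugging the functional patterns.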
In general, a thorough evaluation of the datalog will allow the reduction of the analysis to that of a continuity failure or a simple parametric analysis where possible. In addition to ascertaining which test is failing, it is important to understand the magnitude of the failure. This drives subsequent failure site isolation. For example, the approach on a 1 milliampere IDDQ failure is likely to differ from that on a 1 microampere IDDQ failure. If available, a datalog should also be acquired from a passing or correlation device. This will give the analyst a good indication of what to expect for typical parametric values.

2.2 CURVE TRACING

Curve tracing is an important technique that is used to measure the electrical characteristics of the I/O structures on integrated circuits. A curve tracer or parametric analyzer is used to apply a variable voltage to a device pin and display the resulting current flow as a function of the voltage. This I-V curve is a characteristic curve, and any deviations from an ideal I-V curve will give clues to possible failure mechanisms. Curve tracing is also useful in characterizing internal device components, such as resistors, diodes, and transistors. This section presents the use of curve tracers and parametric analyzers in the characterization of diodes, transistors, and I/O structures.

2.2.1 Diode Characterization

The most basic building block for an integrated circuit is the PN junction. The PN junction forms the basis of diodes and transistors, and an understanding of its characterization and possible failure mechanisms is crucial to understanding structures that are more complex. A common measurement for a PN junction diode is the current through the diode as a function of the applied voltage (the positive terminal is taken to be the P side). Figure 2.1 shows the I-V curve for a PN junction diode and identifies its three regions of operation. The forward bias region occurs when the applied voltage is greater than zero.
In this region, the current flow through the diode is an exponential function of the applied voltage. The second region of operation is the reverse bias region. Here, the applied voltage is less than zero and the current through the diode is a small constant value (essentially zero) called the reverse saturation current. The third region of operation is the reverse bias breakdown region. In this region, the breakdown voltage of the PN junction is reached and the current through the junction increases rapidly due to avalanche multiplication. This current needs to be limited by an external resistor or permanent damage to the junction will result.

Figure 2.1. I-V characteristics of a PN junction showing the three regions of operation.

Measured deviations from the normal I-V curve of a diode can provide an indication of how the diode may have failed. For example, a low value resistance shunt in the I-V measurement can indicate a catastrophic and irreversible degradation of the junction. This type of degradation is commonly caused by an electrical overstress (EOS) condition. Figure 2.2 shows an I-V curve of a damaged PN junction whose electrical properties were equivalent to a resistor in parallel with the junction. If the damage to the junction were more severe, a short would have resulted.

Figure 2.2. An example of a failed PN junction that is electrically equivalent to a resistor in parallel with the junction is shown.

Characterization of a PN junction in its three regions of operation can provide helpful clues to possible failure mechanisms. In the forward bias region, a parameter called the diode ideality factor, n, can be used to detect junctions that have been electrically overstressed. It can be extracted from a modified I-V plot where the current through the diode is plotted on a logarithmic scale.

Figure 2.3. Log10(I) vs. V for a discrete diode. The ideality factor is calculated from the slope of the line.
This can easily be done using a parameter analyzer. Recall that in the forward bias region the current through a diode is an exponential function of the applied voltage. Therefore, if the current through the diode is plotted on a logarithmic scale, the resulting curve should be linear with a slope m. The ideality factor is then proportional to (1/m). Figure 2.3 shows how the ideality factor is measured. The ideality factor typically ranges from 1 (for integrated diodes) to 2 (for discrete diodes), and a measured value greater than 2 indicates that the junction has been electrically over-stressed.

In the reverse bias region, the saturation current can provide helpful clues to possible failure mechanisms. Since the value of the saturation current is so low, it is very sensitive to damage that results in low levels of leakage current, such as electrostatic discharge (ESD) damage. This type of damage is easily isolated and identified. The reverse saturation current is also sensitive to surface conduction, and therefore a high reverse saturation current may be indicative of ionic contamination.

Finally, the reverse breakdown voltage can provide helpful clues about possible failure mechanisms. First, the value of the breakdown voltage is dependent on process parameters, and a low breakdown voltage may point to a process anomaly. However, it also may be due to an electrical overstress condition that has damaged the junction. An erratic and unstable reverse breakdown I-V curve may indicate a cracked die. Finally, characterizing the reverse bias breakdown voltage can uncover a phenomenon called "walkout". Walkout is a term used to describe the upward drift of the reverse bias breakdown voltage with increasing applied current. Walkout results from surface avalanche and is due to charge injection into the oxide above the silicon surface.
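The ideality factor extraction described above can be sketched numerically: fit the slope m of ln(I) versus V in the forward bias region, then compute n = (q/kT)/m. The I-V points below are synthetic, generated with n = 2.0 and an assumed saturation current rather than measured on a real diode.

```python
import math

Q_OVER_KT = 1.0 / 0.02585           # q/kT at room temperature, in 1/V
IS = 1e-14                          # assumed saturation current, A

def diode_current(v, n):
    """Ideal diode law in the forward bias region (synthetic data source)."""
    return IS * math.exp(v * Q_OVER_KT / n)

# Synthetic forward-bias I-V points, 0.40 V to 0.70 V in 50 mV steps.
points = [(v / 100.0, diode_current(v / 100.0, 2.0)) for v in range(40, 71, 5)]

# Least-squares slope m of ln(I) vs V.
vs = [v for v, _ in points]
lns = [math.log(i) for _, i in points]
v_mean, ln_mean = sum(vs) / len(vs), sum(lns) / len(lns)
m = sum((v - v_mean) * (l - ln_mean) for v, l in zip(vs, lns)) / \
    sum((v - v_mean) ** 2 for v in vs)

n = Q_OVER_KT / m                   # ideality factor from the slope
print(f"extracted ideality factor n = {n:.2f}")
```

With real parameter analyzer data the fit would be restricted to the cleanly exponential part of the curve, away from the series-resistance roll-off at high currents.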
2.2.2 Transistor Characterization

This section discusses the characterization of bipolar and MOS transistors with emphasis on parameters that are useful in detecting failure mechanisms. In bipolar transistors, the basic PN junction characterization techniques outlined in the previous section are used in characterizing the base-emitter (B-E), base-collector (B-C), and collector-emitter (C-E) junctions. In addition to this basic PN junction characterization, some commonly measured parameters of a bipolar transistor are: the characteristic curves, the gain, and for power transistors, VCE(SAT), which is the collector to emitter voltage when the transistor is in saturation.

Figure 2.4. Characteristic curves for an NPN transistor showing the three regions of operation.

The characteristic curves for a bipolar transistor are a family of plots of the collector current (IC) versus the collector-emitter voltage (VCE) for a series of base currents (IB). An example for an NPN transistor can be found in Figure 2.4. Transistor parameters such as the gain (β), the Early voltage (VA), and the collector-emitter breakdown voltage (BVCEO) can be determined from these characteristic curves. Each of these parameters can provide helpful clues to possible failure mechanisms. The Early voltage is inversely related to the variation of IC with VCE, which is merely the slope of the characteristic curve in the active region. A low value for the Early voltage means that the collector current of that particular transistor is dependent on the value of the collector voltage. This is electrically equivalent to having a resistor from collector to emitter. The collector to emitter breakdown voltage (BVCEO) is analogous to the PN junction reverse breakdown voltage discussed earlier. The failure mechanisms found in the PN junction analysis section also apply here.
The DC current gain (β) is the second major parameter measured in conjunction with failure analysis of the bipolar transistor. β is defined as the ratio of the collector current to the base current and is measured in a variety of ways. In the preceding section, it was stated that the gain of a transistor could be found from its characteristic curves. It also can be measured directly on a parameter analyzer by configuring it to sweep the base current while measuring the collector current. The measurement of β for a transistor can provide helpful clues about possible failure mechanisms. Degradation of β at low collector currents is an indication that the transistor has been electrically stressed, typically by biasing the base-emitter junction into avalanche. The degradation is attributed to surface recombination within the B-E depletion region at the oxide interface. Other characteristics of this failure mechanism that can be measured are: the transistor will have normal β at high collector currents, the transistor will have normal B-E diode characteristics, and the gain will recover after a high temperature bake. In analog circuits, β degradation adversely affects input bias currents and input offset voltages.

Figure 2.5. Variation of β with IC before and after reverse bias B-E avalanche is illustrated. Pre-stress, post-stress, and bake-recovered (125°C) gain curves are shown.

Figure 2.5 graphically shows the effect of reverse bias B-E avalanche on
In output 10 structures, Vousan is the voltage measured at the output pin when the output power transistor is fully or 88 eotte? 25 conducting. ‘The parameter is particularly sensitive to series resistance values introduced by assembly: namely ball bond and stitch bond resistance. CMOS processes haye become dominant in the industry, in not only digital circuits, but analog circuits as well. ‘The following sections will discuss the characterization of MOS transistors, with emphasis on parameters that are useful in detecting failure mechanisms. Basic PN junction characterization techniques can be ‘used for the drain, source, and substrate junctions. In addition to this, some commonly measured parameters of a MOS transistor are: the characteristic curves, ‘gate Ieakage measurements, the threshold voltage (Vz), and for power transistors, Rpscony Which is the ratio of the drain voltage to drain current of the MOS transistor when the transistor is_——_ fully conducting. ‘The MOS characteristic curve is Vo wots) similar to the bipolar curve and is an important source of information. The characteristic curves for an NMOS transistor is shown in Figure 2.6 and arc a family of plots of the drain current (Ip) versus the drain Figure 2.6. Characteristic curves for a NMOS: transistor show tlie regions of operation. om Pest srs, 002 Figure 2.7. The Vz shift is shown for a NMOS transistor before and after a gate stress illustrating variation in Vz due to charge injection into the gate oxide (65 WA of current gate current for 2 seconds with the drain and source grounded).Frank, Tan, West 23 voltage (Vp) for a series of fixed gate voltage (Vos). The possible failure mechanisms detected with the MOS characteristic curve are similar to those found with the bipolar curves. A MOS transistor has a parameter that is analogous to the Early voltage and a low value on this parameter is electrically equivalent to a resistor from drain to source. 
‘The second common parameter measured on MOS transistors is gate leakage. Gate leakage in a MOS in transistor is current flow between the gate and either the source or drain. Normally, there is no current flow into the gate and leakage can result in device failures. Leakage can result in /O parametric leakage failures, power supply current failures, input bias current failures and in some cases, functional failures. Gate leakage measurements are a simple procedure using a curve tracer or semiconductor parameter analyzer. A standard I-V measurement set-up is all that is required to measure and characterize gate leakage, Failure mechanisms responsible for gate leakage include gate oxide defects, electrical overstress damage to the oxide, and certain process defects. The third common parameter measured on MOS transistors is the transistor threshold voltage. The threshold voltage for a MOS transistor is simply the gate voltage corresponding to the onset of strong inversion in the channel. ‘There are several different ways to measure the threshold voltage, but a commonly used method is to plot to square root of the drain current versus the gate to source voltage", With this kind of plot, the curve will be a straight line and the threshold voltage will be the X-intercept. Figure 2.7 shows how the threshold voltage is found using this method. Figure 2.7 also shows one of the failure mechanisms associated with the threshold voltage shifts. Shifts in threshold voltages will have a detrimental effect on circuits that require a high degree of transistor matching for proper operation. Shifts in threshold voltages occur in a variety of situations but the basic mechanism is the same - unwanted charge in or near the gate oxide". Two of the common reasons for threshold voltage shifts are ionic contamination and charge injection into the gate oxide caused by an overstress condition or hot channel carriers. 
Finally, RDS(on) is a parameter commonly measured for MOS power transistors. RDS(on) is analogous to VCE(sat) in bipolar transistors and is equal to the ratio of the drain voltage to the drain current. Like VCE(sat) for bipolar transistors, it is highly sensitive to series resistance and assembly related issues.

2.2.3 I/O Characterization

I/O characterization is primarily performed by curve tracing each pin of an unpowered device or performing powered curve tracing to measure the input current parameters IIH and IIL. IIH is the current into the pin when a logic 'high' voltage is present. IIL is the current out of the pin when a logic 'low' voltage is present. Figure 2.8 shows a simplified schematic of a typical I/O pin. The ESD protection circuitry presents the electrical equivalent of two diodes at the external pin. One diode is reverse biased to VDD and one diode is reverse biased to ground under normal operating conditions. This makes I/O characterization similar to diode characterization.

Figure 2.8. Typical I/O pin equivalent circuit is shown.

One standard approach to I/O characterization is to curve trace each pin with respect to ground, power supplies, and adjacent pins. In automated curve tracers, the plots for each pin to ground or a power supply can be acquired and overlaid. Since all signal pins have essentially the same expected characteristic, any deviation is readily recognized. Continuity failures and most I/O related failures lead to an anomaly in the I-V curves on these pins. Curve tracing can be used to characterize output parametrics if the device can be powered into the desired state, tri-state (Z) or valid output (L or H) condition. For example, tri-state leakage (IOZ) testing with a curve tracer is simply a curve trace of the I/O while the pin is in the Z state (commonly referred to as powered curve tracing).
The resultant curve trace should indicate infinite impedance between the regions in which the ESD diodes are forward biased. If a basic curve trace does not indicate leakage on pins which fail output parameters such as VOH (voltage output high) and VOL (voltage output low), a different approach is used to detect abnormal series resistance on the failing pin. The device is powered to place the failing output pin in the correct logic state (L or H). The failing pin is loaded with the appropriate current source value (e.g. 2 mA, 8 mA) and the output voltage is measured using a curve tracer or bench test setup. For example, a 5 volt device with an 8 mA output must exhibit a loaded VOH greater than 3.7 volts. By sourcing 8 mA to ground on the failing pin, an abnormal series resistance is indicated if the voltage measured at the failing pin is below 3.7 volts. In a similar manner, VOL failures are characterized by sinking the rated current into the failing pin and observing a voltage above the minimum specification limit.

An alternate procedure for curve tracing pins is to ground every pin on the device and then remove one pin at a time to curve trace it with respect to ground, typically with a voltage range of -2 to 2 volts. The current is clamped at a level so as not to aggravate any damage that might be present. The measured I-V curve using this approach will be two forward biased diodes, one in the negative direction corresponding to the diode between the I/O pin and ground, and one in the positive direction corresponding to the diode between the I/O pin and VCC. This procedure has the advantage of reducing the number of curves needed for ground and power supply measurements.

2.3 ELECTRICAL CHARACTERIZATION OF STATE DEPENDENT LOGIC FAILURES

As stated in the introduction, logic failures can be divided into two categories, state dependent and state independent failures.
State independent failures are typically analyzed as continuity or parametric failures as described in a previous section of this chapter. Since no specific logic state is required to create the failing condition, electrical stimulation for failure analysis is relatively easy to set up. State dependent failures are those in which the device needs to be conditioned into a particular logic state to generate the failure. In general, these include most IDDQ failures, functional failures, and certain output parametric measurements. (Many output parametric failures require creating a particular state condition on the output. Since the input conditions to achieve this logic state may be quite complex, they must be treated as state dependent failures from an electrical stimulus perspective but behave more like parametric failures during failure site isolation.)

While most logic state independent failures are readily observable from a powered or unpowered I-V characterization of the external pins of the device, logic state dependent failures typically manifest themselves within the core of complex logic devices and are much more difficult to stimulate. The failure modes which predominantly fall into this category are IDDQ and functional. Since these types of failures are the most challenging to characterize and isolate, it is important to take advantage of test information and testability features of the device such as IDDQ and scan. Without testability features, IDDQ and scan being the most common, the best case is that the failing vectors indicate only a circuit block in which to focus failure site isolation activities, making failure site isolation much more difficult. IDDQ is a particularly powerful tool for failure analysis as well as device testing of CMOS logic with low power consumption. Scan provides a method for improving both the controllability and observability of internal circuit nodes.
Both controllability and observability are important factors for failure site isolation.

2.3.1 IDDQ Testing

IDDQ is a particularly important parameter for failure analysis. This leakage current, which is frequently present on devices categorized as functional failures, can allow failures to be isolated by global techniques rather than probing techniques. IDDQ is defined as the current flowing from the power supply to ground when the device is in a quiescent state. In a quiescent or static state, N and P channel transistors are either "on" or "off", and in the absence of through-current devices or defects, there is nominally no active current. Transistors which are "on" are in saturation and driving either a logic "0" or a logic "1". In the IDDQ test methodology, vectors are executed to carefully chosen locations and halted at a parametric measurement stopping place (PM stop) to measure various parameters such as the output parameters and IDDQ. PM stops are normally selected to provide low IDDQ. This is in fact the case for most CMOS static logic. High IDDQ can result from circuits, such as dynamic logic and memory, which do not operate at zero nominal standby current. Sub-threshold (off-state) transistor leakage is the primary contributor to the background leakage current. Outside of the very deep submicron regime, this leakage is extremely low compared to the leakage level required to cause a circuit failure. As sub-threshold leakage increases in the deep submicron regime, IDDQ testability will become limited due to the high level of background current. IDDQ is particularly strongly impacted by sub-threshold leakage because normal IDDQ represents the sum of the sub-threshold leakage of every transistor on an IC. From a production test program perspective, many IDDQ PM stop measurements are required to detect all possible faults.
With each additional PM stop location within the vectors, a higher percentage of nodes within the device are toggled and checked for a leakage path from VDD to ground. On the other hand, PM stops consume a significant amount of test time, limiting the number of stops used in production test programs. For failure analysis applications, the test time limitation is not as relevant and IDDQ can be measured at more points. For example, IDDQ measurements could be made at a failing vector and at vectors immediately preceding the failing vector. In addition, IDDQ can be characterized in more detail as a function of VDD, time and temperature for a failure analysis test program. Logic devices can have many power supplies. It is important to understand which power supply exhibits IDDQ leakage. This can eliminate large parts of the circuit as possible failure sites since different power supplies are commonly used in different sections of the logic device. Correlation units are an important part of IDDQ failure analysis. For devices which do not exhibit low background IDDQ, the correlation units are used to identify the location of the stand-by current so that it is not incorrectly identified as a failure site. Correlation units are also important in the understanding of IDDQ as a function of VDD, time and temperature. If some of the PM stops are passing, they can often be used for correlation purposes.

Figure 2.9. The shape of the curve for IDD vs VDD can indicate likely failure mechanisms.

2.3.2 Voltage Dependency of IDDQ

The most valuable characterization method for IDDQ leakage failures is the generation of plots of IDD versus VDD. In most CMOS technologies, the defect leakage current is typically several orders of magnitude greater than the background leakage (i.e. assuming negligible transistor sub-threshold leakage). A good device should have little or no IDD current until VDD approaches the BVDSS (reverse breakdown voltage) of the CMOS transistors.
With background low, the IDD versus VDD plot is essentially a powered curve trace of the defective area of the IC. This can provide significant information about the nature of the defect (see Figure 2.9). A linear characteristic suggests an ohmic defect such as a metal bridge caused by particles or incomplete metal etching. A saturating IDD versus VDD characteristic suggests that a transistor which is supposed to be "off" is actually "on" and is sourcing current between VDD and ground. This forms a broad class of failure mechanisms. A shallow slope may indicate diffusion spacing violations, oxide microcracking, or filaments. Other curves vary as the square of the VDD voltage, indicating a transistor current. These curves are used to determine which failure isolation technique needs to be used in the resolution of the problem.

Since the IDDQ current is state dependent, the IDDQ characterization must be taken by adjusting voltages on the device, which must remain in the defined logic state. If VDD is reduced below the minimum operating voltage of the functional patterns, the device may lose its state. Similarly, if the input voltages are varied below the VT of the input buffer, the logic level of some of the internal nodes may change to a different state, effectively creating a different IDDQ PM stop. Fortunately, the tools available on the ATE make this characterization a fairly easy process. The measurements can be recorded manually by using software tools, or they can be programmed into the test program so the IDD versus VDD results can be automatically printed or plotted. The execution of the test program is halted at the failing IDDQ PM stop, maintaining the power supply settings and input drive stimulus to the device.

Figure 2.10. The temperature dependence of IDDQ can indicate a stuck-at or leakage mechanism.
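The curve-shape reasoning above (linear suggests an ohmic bridge, an early plateau suggests an "on" transistor, a square-law rise suggests transistor-like conduction) can be turned into a rough classifier over measured IDD-VDD points. This is a sketch under idealized defect models, not an ATE feature; real sweeps are noisier and may mix mechanisms.

```python
# Classify an IDD-vs-VDD sweep by comparing three simple defect models.
# Hypothetical helper, not a tester API: points are sorted (VDD, IDD) pairs.
def classify_idd_curve(points):
    top_v, top_i = points[-1]
    # linear: IDD proportional to VDD (ohmic bridge, e.g. metal short)
    lin_err = max(abs(i - top_i * v / top_v) for v, i in points)
    # saturating: IDD flattens early ("on" transistor sourcing current)
    sat_err = max(abs(i - top_i) for v, i in points if v >= 0.3 * top_v)
    # quadratic: IDD follows VDD^2 (transistor-like conduction)
    quad_err = max(abs(i - top_i * (v / top_v) ** 2) for v, i in points)
    errs = {"ohmic": lin_err, "on-transistor": sat_err, "quadratic": quad_err}
    return min(errs, key=errs.get)

ohmic = [(v / 10, 1e-4 * v / 10) for v in range(1, 51)]
print(classify_idd_curve(ohmic))   # a resistive bridge tracks VDD linearly
```

The labels map back to the isolation choices in the text: an "ohmic" verdict points toward bridging defects, while "on-transistor" or "quadratic" verdicts point toward a mis-biased or floating gate.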
While halted at the failing PM stop, the VDD voltage is varied across the operating range of the device (the minimum and maximum operating VDD of the device are measured using the functional debug tools on the failing device or a correlation device) and the IDDQ value is recorded as a function of VDD. Note that the input drive levels must vary with VDD to prevent forward biasing the VDD diodes used in ESD protection structures.

2.3.3 Temperature Dependence of IDDQ

Recording the response of the power supply current across the minimum and maximum temperature range of the device also provides valuable information about an IDDQ failure. IDDQ for each failing PM stop is measured over a range of temperatures using standard ATE temperature tools. IDDQ for a stuck fault defect is expected to decrease with increasing temperature due to the reduction in transistor saturation current at elevated temperatures. IDDQ due to junction leakage is expected to increase rapidly as the temperature is increased. Sub-threshold current on narrow gate devices typically doubles for every 10 degrees Celsius increase. These typical temperature behaviors are illustrated in Figure 2.10. Thus the temperature dependence of IDDQ can provide insight into likely failure mechanisms.

2.3.4 Time Dependent IDDQ

The test program usually makes an IDDQ measurement within 50 ms after halting the vectors at the specified PM stop. The measurement is delayed in order to allow a stable measurement of the anticipated low current. Most defects have no time dependency and the current should stabilize at a given VDD very quickly. However, floating gates generate a time-dependent IDDQ. If a gate initially floats to a voltage of one-half of VDD, both n-channel and p-channel transistors in standard logic will be partially on and conduct current from VDD to ground.
Eventually, the floating node will charge to a logic state and the IDDQ current will subside. This behavior can be generated by a design defect or by a via, contact or metallization line which is very resistive or open. For high resistances, the time for the node to charge to its proper level will typically be long enough for the production test program to record an elevated IDDQ current. In Figure 2.11, a plot of IDDQ versus time for a device having a resistive via is shown. The data was collected on an ATE by inserting a looping procedure into the program and adding a specified wait time before the next measurement was made. If a device displays this IDDQ instability, the IDDQ PM stop vectors can be executed in a tight loop during failure isolation in conjunction with a photon emission microscope to isolate the failure.

Figure 2.11. A typical time dependent IDDQ is shown.

2.3.5 Functional Pattern Failures and Structured Design for Test (DFT)

Traditionally, a failure in the core of the device has been detected with functional vectors. These vectors can be created with an ATPG (Automatic Test Program Generation) synthesis tool or modeled on the circuit application. Measurement of the quality of the functional patterns is based on the percentage of nets within the design for which toggling between a logic "0" and a logic "1" (based on a stuck-at fault model) can be detected. Functional patterns serve various functions. "Loose functional" patterns are typically run at low frequency and are intended to verify gross functionality. "At-speed" vectors typically test the chip's application vectors at the maximum frequency intended for operation. "Delay fault" vectors guarantee critical timing to components external to the IC, making a measurement between various input and output timing edges. These vectors are intended to verify critical speed paths in the circuit.
In some cases, specific functional vectors are tailored to test a particular block of circuitry. Thus specific failing vectors can potentially isolate the failure to a particular area of the IC. In the worst case, functional tests define only the failing output from which to backtrace the failure.

As designs become more complex, it becomes much more difficult to verify with electrical test that logic ICs are completely functional. DFT methodologies are typically required to break the IC into more manageable blocks of circuitry. In order to test devices in this way, the inputs to the reduced circuits must be controllable and the outputs of the blocks observable. Scan is the most common method used to improve the observability and controllability of a logic design. Using scan insertion software tools, existing flip-flops are modified to have a "test mode" of operation, permitting serial access to the flip-flops. ATPG software tools generate vector sets with a high fault grade. Scan vectors allow the internal logic to quickly obtain a known state. In this manner, a device can be tested with many fewer vectors than conventional functional vectors. In addition to improving test coverage, scan can be successfully used to isolate failures, using the circuit controllability and observability. A single or a combination of passing and failing scan vectors can be correlated back to specific nodes within the design. Characterization of these failures often requires running a complete set of scan vectors, rather than exiting the vector set on the first fail as in the production test program. Some ATPG tools include software diagnosis. The diagnostic software predicts nodes or nets within the die that may have a stuck fault. In ideal situations, faults can be isolated to a single node or set of equivalent nodes.
In any case, it significantly reduces the area of the die to be considered in failure site isolation. In order to run diagnosis and to predict failing nets within a scan design, the scan chain must obviously be intact and operational. Nearly all IC designs will implement a scan flush or scan check vector to verify scan chain operation. Defects can occur in the scan chains as well as in the functional logic circuit. Since the scan chain is serially connected flip-flops, the chain can be checked with a relatively simple binary search to find the broken chain using probes.

2.3.6 Functional Failure Characterization

A hard functional failure is defined as a device which fails consistently over different voltages, temperatures, frequencies and input timing ranges. These typically are the result of defects that create true stuck-at faults within the device. A soft functional failure may change with voltage, temperature, frequency or input timing. The device may fail a vector set at one VDD or temperature, but may either pass or change cycles (vector depth) and pins at a different voltage or temperature. Soft failures are often due to an internal circuit node which is not switching with the correct timing. This can occur due to a delay introduced into a signal path or a leakage which slows a rise or fall time. Characterization of the dependency of soft failures can provide useful insight into the likely failure mechanisms.

Figure 2.12. An ideal shmoo plot exhibits operating margin well beyond the required window defined by process and design (above left). A device with a high VCC/low frequency problem is shown (above). A device with a low VCC/high frequency problem is also shown (left).

The characterization of soft failures is best displayed by the use of the shmoo tool. The shmoo tool graphically depicts the passing and failing regions as parameters are varied together and against each other.
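A shmoo is simply a pass/fail grid over two swept parameters. The sketch below is a minimal text-mode version; the pass criterion is a hypothetical speed-path model standing in for the actual device-under-test call, which on a real ATE would be a functional pattern execution.

```python
# Text-mode shmoo: sweep VDD against frequency and mark pass/fail.
# passes() is a stand-in for a real tester call; it models a speed
# path that needs more supply voltage at higher frequency.
def passes(vdd, freq_mhz):
    return vdd >= 2.0 + freq_mhz / 100.0   # hypothetical speed-path model

def shmoo(vdds, freqs):
    rows = []
    for vdd in reversed(vdds):             # highest VDD on the top row
        row = "".join("+" if passes(vdd, f) else "." for f in freqs)
        rows.append(f"{vdd:4.1f} V |{row}|")
    return "\n".join(rows)

print(shmoo([2.0, 2.5, 3.0, 3.5, 4.0], [25, 50, 75, 100, 125, 150]))
```

The diagonal pass/fail boundary this produces is the classic low-voltage/high-frequency signature discussed next; an ohmic or timing defect shifts or tilts that boundary relative to a correlation unit.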
In Figure 2.12, several shmoo plots of VDD vs frequency are shown. The rectangle represents the design and process window for a device of a given technology. Marginality at a corner or edge of this window can be explained by common test and silicon failure mechanisms. For example, a device with a high voltage and low frequency problem (Figure 2.12b) might have test program timing issues, proximity design rule violations, or leakage (BVDSS). Similarly, a device with low voltage and high frequency problems (Figure 2.12c) might be failing due to an undersized transistor, a speed path, tester input/output timing, resistive interconnects, or overall transistor drive current.

Generally, temperature effects on device functionality vary inversely with the effects of VDD voltage. Either increasing the voltage or decreasing the temperature will increase the speed of the device, while decreasing VDD or increasing temperature will slow the device down. In most failure site isolation tools, it is difficult to control the temperature of the device, particularly to low temperatures, due to fixturing issues. Therefore, a temperature dependent functional failure should frequently be recharacterized as a VDD dependent failure. This may include operating the device outside of the specified voltage range. In such cases, characterization of a correlation unit is important to verify the voltage range over which the device can normally be operated. Thus, it may be possible to reduce VDD below the minimum specification limit to duplicate a high temperature failure or increase VDD above the maximum to duplicate a low temperature failure. If the failing information from the datalog is the same in both situations, failure site isolation can be performed at room temperature.

2.4 MEMORY FUNCTIONAL FAILURES

Electrical characterization or Electrical Failure Analysis (EFA) of memories is an important part of the overall failure analysis process.
Increases in the structural complexity of memories have made their physical analysis and fault isolation more difficult and challenging. Electrical analysis can be used to characterize failures and provide an understanding of failing characteristics pertinent to each failure. During failure analysis, electrical characterization also strives to "fingerprint" new failure modes and mechanisms for recognition of future occurrences. Testing of a memory can be broken down into two components, array and periphery. The periphery is largely logic, and analysis of functional failures in the periphery is very similar to logic functional failure analysis as discussed above. As with logic functional failure analysis, IDDQ plays a critical role in physical failure site isolation. Partial and full array failures are also commonly analyzed as logic functional or parametric failures. However, since the array is very highly structured, very detailed information can be obtained about array failures from electrical characterization. This type of analysis relies on the built-in electrical functionality of the IC chip as a detector and sensor. The temperature and voltage dependence of failures are also useful since the voltage and temperature dependence of the detector, the device itself, is very well understood.

A defect in a DRAM memory cell array can manifest itself in a limited number of ways. These include (a) individual cell failure, (b) cell-to-wordline, (c) bitline-to-bitline, (d) wordline-to-wordline, (e) cell-to-cell, (f) cell-to-bitline, (g) wordline-to-bitline and (h) cell-to-substrate, per Figure 2.13 below. A good understanding of the test programming language and the internal memory cell architecture are key factors in the successful implementation of EFA with test pattern algorithms to narrow down to the most probable failure mechanism.

Figure 2.13.
The possible failure manifestations in a DRAM array are illustrated.

2.4.1 EFA Test Program

Because of the highly structured array, EFA is able to provide very detailed information about a memory device. EFA programs are customized for failure analysis and differ from production test programs. They progressively check the functionality of the device so that at each stage the program verifies the functionality of a portion of the circuit that can then be used to test other circuit areas. EFA programs are generally longer and include more algorithms than a production test program. A typical EFA program for DRAMs is summarized in Table 2.1. It is important to understand that the testing must be based on physical locations rather than address locations. This is an important consideration in many of the test algorithms, such as the adjacent row and column tests. EFA must take into account any redundant rows or columns that have been used during laser repair to understand the physical layout.

2.4.2 Single Bit Failures

Single bit failures form a large and very significant part of the failure distribution for memory devices. Single bit failures can represent stuck-at faults or an inability to maintain a particular state. A Write Immediate Read (WIR) test is used to test the most basic operation of a single cell, i.e. to store and read a "0" and a "1". A hard failure can usually be differentiated very readily from a marginal failure. For example, most DRAM designs incorporate a DFT feature to bias the cell top plate voltage "low" or "high". This allows cell dielectric failures to be differentiated from cell access gate failures. This test is also useful for detection of single cell stuck-at faults. A Single Bit Pause test is used to assess the ability of the capacitor to retain its stored charge. Because of the reverse-biased junction leakage current of the 1-T cell, the amount of stored charge (on the order of 10^6 electrons) will decrease with time.
A normal cell will exhibit a reasonable refresh time of 64 ms or greater, but a defective one can range from below 64 ms down to 0 ms.

TYPE OF TEST                          FAULT ISOLATION/PURPOSE
Internal voltage monitor              Internal regulator functionality
VBB pump check                        Leakage to substrate
Redundancy check                      Information on repaired row/column
Vow shmoo                             Single bit to row/column
DCR characterization                  Margin for sensing
Write Immediate Read                  How gross is the failure
Whole Array Disturb by Row/Column     Other row/column fail
Row/Column Pause                      Worst bits on row/column
Adjacent Row/Column Disturb           Leakage from adjacent row/column
Open Address Pin characterization     Internal address malfunction
STIM level shmoo                      STIM failure
Field Plate Program                   Leakage paths
Failure Distribution                  Package stress/characterization

Table 2.1. Details of a typical DRAM EFA test program are shown.

A Single Bit Disturb test (GALPAT, or galloping pattern) is used to detect neighborhood pattern sensitive faults. An array matrix of 8x8 cells is used to determine if a write operation on a nearby cell can change the contents of a base cell (the cell under test) while the remaining cells and base cell contain a certain pattern. Each base cell must be read in state "0" and in state "1" for all possible changes in the neighborhood pattern. The neighborhood cells must be in physical proximity of the base cell, rather than based on bit numbers, because they are the most likely to influence the base cell and induce failure. This test is also capable of detecting address faults, stuck-at faults, transition faults, and coupling faults.

The various possible leakage components within a DRAM cell are limited by the physical construction features of the cell. For single bit failures, differentiation between the various leakage paths can be easily achieved by varying the supply voltages. A simple Write-Immediate-Read (WIR) algorithm for a failing cell is developed to vary the cell top plate and device operating voltages.
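The GALPAT disturb test above can be sketched in a few lines. This is a simulation sketch, not production test code: the DRAM is modeled as a plain Python grid, the read-back is simplified to the base cell only, and the `coupled` argument is a hypothetical hook that injects a coupling fault so the detector can be exercised.

```python
# GALPAT sketch over an 8x8 physical neighborhood: write a background,
# "gallop" a disturb write across each neighbor, and re-read the base
# cell after every disturb, for both background states.
def galpat_base_cell(base, size=8, coupled=None):
    for background in (0, 1):
        array = [[background] * size for _ in range(size)]
        br, bc = base
        for r in range(size):
            for c in range(size):
                if (r, c) == base:
                    continue
                array[r][c] = 1 - background           # disturb write
                if coupled == ((r, c), base):          # injected fault:
                    array[br][bc] = 1 - background     # neighbor flips base
                if array[br][bc] != background:
                    return False                       # base cell disturbed
                array[r][c] = background               # restore neighbor
    return True

print(galpat_base_cell((3, 3)))                            # fault-free: True
print(galpat_base_cell((3, 3), coupled=((3, 4), (3, 3))))  # coupling: False
```

As the text stresses, on silicon the 8x8 neighborhood must be chosen by physical adjacency, after accounting for any laser-repaired redundant rows or columns, not by logical bit numbers.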
In the case of single bit failures, failure site isolation is readily achieved down to the exact nature of the leakage path through EFA.

2.4.3 IDDQ Testing

IDDQ testing has been used to detect defects which functional testing was unable to detect. A current mirror can be used to measure the current drawn by the device during a specific test cycle. This can be either the dynamic or the quiescent portion of the cycle. The tester is modified to read IDDQ values into a bitmap. In this way, an IDDQ threshold value can be set and IDDQ values exceeding that threshold can be detected for every read through the memory array. Individual bits, rows, and columns which generate abnormal IDDQ values can thus be detected.

2.4.4 Other DRAM Characterization Features

DRAMs utilize a number of internal DC voltages, such as the substrate bias (VBB), the cell plate voltage, and internal reference and regulator voltages. These internal voltages are monitored through DFT tests on some devices or by bond pad probing on others. In addition, a different voltage can be forced on the power supply through the respective probe pads. This allows testing with different internal voltages on the chip. For example, forcing the cell plate voltage enables detailed pause characteristics to be measured. Similarly, VBB can be forced to detect substrate leakage paths.

Curve tracing between VDD and ground in a power down mode is useful for assessing the gross functionality of a DRAM. As the voltage is slowly ramped, current spikes indicate an internal state transition. This type of current spike can be used for subsequent global failure site isolation techniques.

The use of I-V analysis allows us to apply the right excitation techniques to other fault isolation techniques to enhance their effectiveness. The sequence in which tests are run is critical if we are to interpret the results of the test program. We need to be aware of interaction effects, especially when dealing with failures.
The current limits set in tests must be carefully chosen and balanced between sensitivity and other effects. Electrical Failure Analysis is only one of the complement of tools used in the fault diagnosis process.

2.5 CHALLENGES OF ANALOG CIRCUIT FAULT ISOLATION AND ANALOG BUILDING BLOCKS

Electrical characterization of analog circuits for fault isolation poses unique challenges. Digital logic has several properties that make it well suited for testing and failure site isolation. Since the possible states of the system are limited to two values, 0 and 1, modeling failures is straightforward. This enables design for test (DFT) methodologies to be used that facilitate the diagnosis and isolation of failures. Low power (quiescent) states in logic devices also facilitate the use of IDDQ test methodologies, which are useful as a global failure site isolation tool.

Analog circuits, on the other hand, have properties that make failure site isolation difficult. The outputs of analog circuits typically take on a continuous range of values and frequently have nonlinear transfer functions. This makes it difficult to model faults and facilitate DFT strategies which can assist in failure diagnosis and isolation. In addition, analog circuits in general do not have power down states to support IDDQ testing. Another significant challenge is that analog circuit failures are frequently associated with subtle wafer fab process variations, component matching sensitivity, or other design layout sensitivities. This results in failures that require extensive mechanical probing to isolate.

Analog circuits have a wide range of complexity but are generally composed of smaller building blocks such as op-amps, voltage references, and current sources. These building blocks can be combined, often with logic, to provide more complex analog and mixed signal devices.
The following sections describe the characterization of several analog building blocks: voltage references and regulators, current sources, op-amps, and data conversion blocks.

2.5.1 Voltage References and Regulators

The objective of a voltage reference is to supply a known voltage that is stable over temperature and power supply variations. Figure 2.14 shows a block diagram of a bandgap reference circuit. It shows that the reference voltage is derived from the sum of a base-emitter voltage with the thermal voltage, kT/q. A base-emitter voltage has a negative temperature coefficient and the thermal voltage has a positive temperature coefficient, so that with a proper choice of constant, the output will have a zero temperature coefficient. Failure modes associated with voltage references generally fall into three categories: no output voltage, a stable output voltage that has the wrong value, or an output voltage that does not track correctly over temperature. Voltage references are feedback devices and will normally have two stable states, one at the correct output voltage and one at an incorrect output voltage, usually zero. Because of the existence of two stable states in voltage references, a start-up circuit is generally employed so that the reference will power up in the correct state.

For voltage references with little or no output, the following characterization procedure can be used. If no leakage current is observed, the start-up circuitry is characterized to determine if it is operating properly. If it is not possible, due to the reference circuit architecture, to isolate the start-up circuit for characterization, node voltages are measured and compared to a known good unit.

Figure 2.14. Block diagram of a bandgap voltage reference.

Voltage references that have an incorrect output voltage or do not track correctly over temperature are more difficult.
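The zero-temperature-coefficient argument above can be checked numerically. This sketch uses representative assumed values (a dVBE/dT of about -2 mV/K and a VBE of 0.65 V at 300 K, neither taken from the text); the thermal voltage kT/q contributes +k/q volts per kelvin for each unit of the design constant K.

```python
# Bandgap reference: VREF = VBE + K * (kT/q). Choose K so the positive
# thermal-voltage coefficient cancels the negative VBE coefficient.
# DVBE_DT and VBE_300K are assumed, textbook-style values.
K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602177e-19      # electron charge, C
DVBE_DT = -2.0e-3       # assumed VBE temperature coefficient, V/K
VBE_300K = 0.65         # assumed VBE at 300 K, V

K = -DVBE_DT / (K_B / Q_E)              # multiplier that zeroes the net TC
vref = VBE_300K + K * (K_B / Q_E) * 300.0
print(f"K = {K:.1f}, VREF = {vref:.3f} V")
```

With these assumptions the result lands near 1.25 V, close to the silicon bandgap voltage, which is why references built this way are called bandgap references.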
Resistors on voltage references are typically trimmed at wafer probe in order to set the correct reference voltage. They are commonly trimmed using fuses or Zener diodes. The first step, then, would be to characterize the trims over temperature to assess their stability. Frequently a difference in the base-emitter voltages of two transistors (ΔVBE) is set up in the reference by a ratio of resistor values. If one of these resistors is grossly off, say by an open contact, the reference voltage will be off as well. A high input offset voltage in the summer circuitry will also affect the reference voltage.

Voltage regulators use a voltage reference to produce a stable dc output voltage and maintain this voltage over a wide range of load currents and input voltages. Figure 2.15 is a block diagram of a series pass regulator showing the major components. A sampling network monitors the output voltage. The sampled output voltage is compared to a stable reference voltage and produces an error signal. This error signal is used to control an element that converts the input voltage to the output voltage over variable load conditions. Additional protection circuitry can include overcurrent and thermal overload shutdown circuits, but these are not shown in the block diagram.

Figure 2.15. Block diagram of a series pass voltage regulator.

A typical failure for voltage regulators is an incorrect output voltage. If the device is not regulating, the output voltage is a function of the input voltage. This type of failure mode is usually associated with a degradation of the control element. For this type of failure mode, curve trace analysis from the input to ground and from the input to the output is often sufficient to characterize the failure.

A second functional failure commonly seen is regulation at the wrong output voltage.
This means that the output voltage is fixed and independent of the input voltage, but the output voltage is not the expected value. The location of the error can be the voltage reference, the sampling element, or the error amplifier. Isolation of failures in the voltage reference has already been discussed above. Common sampling elements consist of a resistor divider network (Figure 2.15) with the feedback voltage defined by a ratio of resistor values. The resistors are measured in order to isolate failures such as resistive contacts. The last potential source of error involves the errors associated with the error amplifier. These errors are frequently due to low gain, high input offset voltage, or poor common mode rejection ratio.

2.5.2 Current Sources

A current source is another basic analog circuit building block. As the name implies, a current source is used to generate a known current that is used to bias the various circuits on an integrated circuit. Current sources are also used as active loads for amplifiers to help increase the voltage gain. Characterization of current sources generally consists of measuring the reference current. If this is incorrect, the mismatch between the transistors is characterized.

2.5.3 Op-amps

Op-amps are differential amplifiers that are widely used as stand-alone circuits or as blocks in larger circuits. Functional failures on op-amps include units whose outputs do not respond to any input stimulus, generally stuck at VDD or ground, and units that oscillate. Units that oscillate generally do so when the op-amp is configured as a unity gain amplifier. For op-amps whose outputs do not respond to any input stimulus, the ICC current with the device in the failing state is compared to that of a known good device. A significantly lower ICC reading may indicate that an internal node is open, and a significantly higher ICC may indicate that an internal node is shorted.
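The ICC comparison to a known good device described above amounts to a simple decision rule; a minimal sketch, in which the 2x ratio threshold and the `classify_icc` helper are illustrative assumptions rather than anything prescribed by the text:

```python
# Hedged sketch of comparing a failing op-amp's supply current (ICC) against a
# known good device. The 2x threshold is illustrative, not a recommended value.

def classify_icc(icc_fail_mA: float, icc_good_mA: float, ratio: float = 2.0) -> str:
    """Return a coarse failure hypothesis from the ICC signature."""
    if icc_fail_mA > ratio * icc_good_mA:
        return "possible internal short (ICC high)"
    if icc_fail_mA < icc_good_mA / ratio:
        return "possible internal open (ICC low)"
    return "ICC comparable; proceed to internal node probing"

print(classify_icc(9.5, 1.2))   # much higher than the good unit -> short hypothesis
print(classify_icc(0.1, 1.2))   # much lower -> open hypothesis
print(classify_icc(1.1, 1.2))   # inconclusive -> probing
```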
If the ICC variation is not effective, internal node probing (with comparisons to a good device) is required to isolate the failure.

Oscillation at unity gain in op-amps configured as an amplifier is primarily due to problems in the internal compensation network. The compensation network generally consists of a capacitor and a resistor. If the op-amp operates correctly when configured at higher gain, then a good characterization technique for this type of failure is to plot the gain and phase vs. frequency when configured with higher gain. The gain and phase can give clues to what could be wrong with the compensation network. Resistance value measurements and capacitor leakage measurements usually isolate the defective component.

There are several parametric failure modes that can occur in op-amps. These failure modes include failures of the input offset voltage, input bias current, or maximum power supply current tests. Input offset voltage and input bias current failures are the most common failure modes analyzed.

Input bias current failures can be a result of several failure mechanisms. Since the requirements for input leakage are extremely high, failures can occur due to very low current levels. Some of the leakage failures will be resolved by standard global failure site isolation techniques. For example, photon emission microscopy will generally isolate gate leakage on the input transistors of MOS op-amps. If global techniques are not effective, the elements of the input structure are separated using focused ion beam milling or laser cutting and probed to isolate the defective element. This is often required in the case of damaged input protection structures. In bipolar op-amps, excess base current on the input transistors is the common cause of input bias current failures. This excess current can be a result of junction damage due to an overstress condition, or it may be the extra base current required of a transistor with low gain.
For both the bipolar and MOS cases, the input offset voltage is a function of the mismatch in the components that make up the input differential pair. The characterization of these failures will then consist of measuring the relative mismatches in the respective components.

2.5.4 Data Converters

Data converters are the interface between the analog and digital worlds. They consist of analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). An ADC takes an analog signal and converts it into a discrete-time digital signal that can be processed by logic circuits or digital signal processors (Figure 2.16). A DAC does the reverse, taking a digital signal and converting it back into a continuous-time analog signal. Data converters are more complex than the analog blocks described so far, but their importance as building blocks for more complex mixed-signal devices warrants their inclusion here. There are many parameters associated with data converters and many different architectures from which to choose. However, the specification they all have in common, and one of the failure modes for this class of part, is linearity. For an ADC, the digital code output should be a linear function of the applied analog voltage. Likewise for a DAC, the output analog voltage should be a linear function of the input digital code. Characterization of linearity is simply a measurement of the transfer function of the given converter.

Linearity failures can be caused by several defects. If an internal node is shorted, the unit will likely fail functionally as well as numerous parametric tests, including linearity. If the data converter has an internal voltage reference that is not working, the unit may fail the linearity test. In this case the characterization techniques described for voltage regulators should be used.
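Measuring the transfer function is commonly reduced to differential and integral nonlinearity (DNL/INL) figures. A minimal sketch for an ADC follows; the 3-bit resolution and the transition voltages are hypothetical values chosen only to exercise the arithmetic, not data from this chapter.

```python
# Sketch of ADC linearity characterization: DNL/INL in LSB units computed from
# measured code transition levels. The transition data below are hypothetical.

def dnl_inl(transitions, vref, nbits):
    """DNL[i] = (width of code i) / LSB - 1; INL is the running sum of DNL."""
    lsb = vref / 2**nbits
    widths = [t2 - t1 for t1, t2 in zip(transitions, transitions[1:])]
    dnl = [w / lsb - 1.0 for w in widths]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

# Hypothetical transition voltages for a 3-bit, 1 V full-scale ADC,
# with one slightly wide and one slightly narrow code near mid-scale.
meas = [0.125, 0.250, 0.375, 0.50, 0.66, 0.75, 0.875]
dnl, inl = dnl_inl(meas, vref=1.0, nbits=3)
print([round(d, 2) for d in dnl])  # nonzero entries flag the mismatched codes
```

A grossly mismatched capacitor or resistor in the converter shows up directly as an outlier in the DNL profile, pointing at the defective element.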
If the data converter is functional and is still failing linearity, a likely failure mechanism is mismatched components (capacitors or resistors, depending on the type of converter). For these types of failures, the matching characteristics of the critical components need to be characterized.

Figure 2.16. Block diagram shows that the ADC and DAC link the analog and digital domains.

2.6 FUTURE CHALLENGES FOR CIRCUIT CHARACTERIZATION

Analog characterization is difficult and will become more difficult due to shrinking feature sizes, multiple metal layers, and increased device complexity. Analog characterization is heavily dependent on mechanical probing for precise DC measurements, but shrinking device features and multiple metal layers will make mechanical probing extremely difficult without probe point creation. Also, devices will contain more analog blocks mixed with increasing levels of both logic and memory, and the lack of well-defined testability methods will severely hamper analog analysis. Testability consists of both the controllability and the observability of the IC, and observability is particularly critical for failure analysis. The ability to identify and perform initial electrical characterization of failing analog blocks will be dependent on their testability. The success of characterization of analog circuits in large mixed-signal devices will depend on the development of fault models, testing strategies, and software tools to extract critical electrical characterization information from the test data.

Voltage measurement-based test methods using stuck-at fault models have performed well to date for logic and memory. However, detection of faults due to bridging and other physical defects that do not map directly onto stuck-at levels is expected to become more important. Probabilistic methods can be used to estimate whether an indeterminate logic value will be recognized and propagated as logic 0 or logic 1.
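The stuck-at model referenced above can be illustrated with a toy two-gate circuit. The netlist and vectors below are hypothetical, intended only to show why one test vector detects a fault that another vector masks.

```python
# Toy illustration of the stuck-at fault model: an AND gate feeding an OR gate,
# with an optional stuck-at fault injected on the internal net. Hypothetical netlist.

def circuit(a, b, c, internal_stuck_at=None):
    n1 = a & b                      # internal net between the two gates
    if internal_stuck_at is not None:
        n1 = internal_stuck_at      # fault injection overrides the net value
    return n1 | c                   # primary output

# Vector (a=1, b=1, c=0) sets the internal net to 1 and, with c=0, lets its value
# reach the output, so it detects n1 stuck-at-0:
good = circuit(1, 1, 0)
faulty = circuit(1, 1, 0, internal_stuck_at=0)
print(good, faulty)  # the outputs differ, so the fault is detected

# With c=1 the OR gate masks the internal net, so the same fault escapes detection.
```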
Improvements in test modeling are expected to play a significant role in the testing of memory and logic devices. These improvements are targeted at circuit controllability and observability. This process should also support more effective failure localization through testing. With shrinking geometries and thinner gate dielectrics, bridging and gate-oxide shorts have become the more common defects, and IDDQ testing is currently the most effective methodology for detecting these defects and performing physical failure site isolation. Incorporating IDDQ testing can bridge the current deficiency of performing logic testing alone by detecting defective devices that pass the functional or logic test. Hence, by combining logic and IDDQ testing, a very effective method to detect logical faults and physical defects in any circuit can be achieved. Logic and memory DFT methodologies are driven by the opposing needs to improve test coverage and reduce test cost. Scan is likely to grow in importance to increase the test coverage of devices. Using BIST in logic to read the scan results is expected to help drive down the overall test costs. Similarly, BIST for memory is expected to drive down test costs.

REFERENCES

1 Uebel, Wilson D. Curve Tracer Applications and Hints for Failure Analysis. In Microelectronics Failure Analysis Desk Reference, 3rd Edition. Metals Park: ASM International, 1993.
2 Stabe D, Appleman D. Failure Analysis of Complex and High Pin Count Devices Using Computer Aided Electrical Characterization. Proceedings International Symposium for Testing and Failure Analysis, 1989, 261.
3 Appleman D, Wong F. Computerized Analysis and Comparison of IC Curve Trace Data and Other Device Characteristics. Proceedings International Symposium for Testing and Failure Analysis, 1990, 271.
4 Lycoudes N, Childers C. Semiconductor Instability Failure Mechanisms Review. IEEE Transactions on Reliability, 1980, R-29 (3), 237.
5 Sze SM. Physics of Semiconductor Devices. John Wiley & Sons, 1981.
6 Efland T. Lateral DMOS Structure Development for Advanced Power Technologies. Texas Instruments Technical Journal, March-April 1994, 10.
7 Collins R. Excess Current Generation Due to Reverse Bias P-N Junction Stress. Applied Physics Letters, 1968, 13, 264.
8 Collins DR. hFE Degradation Due to Reverse Bias Emitter Base Junction Stress. IEEE Transactions on Electron Devices, 1969, ED-16 (4), 403.
9 McDonald BA. Avalanche Degradation of hFE. IEEE Transactions on Electron Devices, 1970, ED-17 (10), 871.
10 Schroder DK. Semiconductor Material and Device Characterization. John Wiley & Sons, 1990.
11 Amerasekera A, Najm F. Failure Mechanisms in Semiconductor Devices. John Wiley & Sons, 1997.
12 Hawkins C, Soden J, Fritzemeier R, Horning L. Quiescent Power Supply Current Measurement for CMOS ICs. IEEE Transactions on Industrial Electronics, 1989, 36 (2), 211.
13 Williams T, Kapur R, Mercer M, Dennard R, Maly W. IDDQ Testing for High Performance CMOS - The Next Ten Years. Proceedings European Design & Test Conference, 1996, 578.
14 Mentor Graphics. Understanding DFT Basics. ASIC/IC Design-for-Test Process Guide, V8.5_1, 1995, 21.
15 Butler KM, Johnson K, Platt J, Jones A, Saxena J. Automated Diagnosis in Testing and Failure Analysis. IEEE Design & Test, 1997, 14 (3), 83.
16 Platt J, Butler KM, Venkataraman S, Hetherington G, Lorig G. Fault Diagnosis of the TMS320C80 (MVP) Using FastScan. Proceedings International Symposium for Testing and Failure Analysis, 1996, 127.
17 Tan W, Chan A, Lam D, Swee YK. Electrical Failure Analysis in High Density DRAMs. IEEE International Workshop on Memory Technology, Design and Test, 1994, 26.
18 Lam D, Swee YK. Effective Test for Memories Based on Fault Models for Low PPM Defects. IEEE International Workshop on Memory Technology, Design and Test, 1993.
19 Van de Goor AJ. Testing Semiconductor Memories: Theory and Practice.
West Sussex, UK: John Wiley, 1991.
20 Lam D, Durai E, Swee YK. Implementation of IDDQ Testing for DRAMs. 2nd Memory Packaging and Test Conference, TI Singapore Internal Publication, July 1997.
21 Salama A, Starzyk J, Bandler J. A Unified Decomposition Approach for Fault Location in Large Analog Circuits. IEEE Transactions on Circuits and Systems, 1984, CAS-31 (7), 609.
22 Milor L, Visvanathan V. Detection of Catastrophic Faults in Analog Integrated Circuits. IEEE Transactions on Computer-Aided Design, 1989, 8 (2), 114.
23 Hamida NB, Kaminska B. Multiple Fault Analog Circuit Testing by Sensitivity Analysis. Analog Integrated Circuits and Signal Processing, 1993, 4, 231.
24 Prasad VC, Babu NSC. On Minimal Set of Test Nodes for Fault Dictionary of Analog Circuit Fault Diagnosis. Journal of Electronic Testing: Theory and Applications, 1995, 7, 255.
25 Chao Y, Lin HJ, Milor L. Optimal Testing of VLSI Analog Circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1997, 16 (1), 58.
26 Gray P, Meyer R. Analysis and Design of Analog Integrated Circuits. John Wiley & Sons, 1993.
27 Brokaw P. A Simple Three-Terminal IC Bandgap Reference. IEEE Journal of Solid-State Circuits, 1974, SC-9 (6), 388.
28 Michejda J, Kim S. A Precision CMOS Bandgap Reference. IEEE Journal of Solid-State Circuits, 1984, SC-19 (6), 1014.
29 Song B, Gray P. A Precision Curvature-Compensated CMOS Bandgap Reference. IEEE Journal of Solid-State Circuits, 1983, SC-18 (6), 643.
30 Widlar R. New Developments in IC Voltage Regulators. IEEE Journal of Solid-State Circuits, 1971, SC-6 (1), 2.
31 Saxena J, Butler KM, Balachandran H, Lavo DB, Chess B, Larrabee T, Ferguson FJ. Proceedings IEEE International Test Conference, 1998.
32 Lavo DB, Chess B, Larrabee T, Ferguson FJ, Saxena J, Butler KM. Bridging Fault Diagnosis in the Absence of Physical Information. Proceedings IEEE International Test Conference, 1997, 887.
33 Hii F, Powell T, Cline D.
A Built-In Self Test Scheme for 256 Meg SDRAM. IEEE International Workshop on Memory Technology, Design and Test, 1996, 15.

PACKAGE ANALYSIS: SAM AND X-RAY

Thomas M. Moore
Cheryl D. Hartfield
Texas Instruments Incorporated

The detection of package-related defects is an essential part of failure analysis. Nondestructive evaluations play a critical role in understanding the location and causes of assembly-related failures. At times, these techniques provide a complete understanding of a failure. At other times, destructive techniques must also be employed to understand the failure. The destructive techniques, typically decapsulation and cross-sectioning, are often mutually exclusive, so that they cannot be applied in series. In addition to guiding the selection of destructive procedures to complete the analysis, the nondestructive techniques provide an indication of areas to be exposed by decapsulation or cross-section.

Scanning acoustic microscopy (SAM) and real-time x-ray radiography (RTX) are the primary techniques for the nondestructive imaging of the internal features of IC packages. SAM is based on the focusing of an acoustic pulse at an interface within the package. Image contrast in the reflection (pulse-echo) mode is mostly due to the acoustic reflectivity at interfaces. The ideal spot size is diffraction limited, and incident inspection wavelengths typically range from 6 to 150 µm in water (10 to 250 MHz). However, focusing aberrations, scattering, and frequency-dependent attenuation result in a practical spot size limit in the range of 50 to 250 µm, depending on the type of package. Because sound is a matter wave, SAM is sensitive to cracks and delaminations within the package.

The wavelength of x-ray radiation is on the same order as interatomic spacings, so x-rays cannot be focused with a lens or reflected at internal interfaces (except at a glancing angle or as diffracted beams).
RTX image contrast is based on the attenuation of an unfocused beam from a point source. The typical spatial resolution is better than 10 µm. RTX is sensitive to more strongly attenuating materials such as metal leads, eutectic die attach material, and Au bond wires. SAM and RTX are complementary rather than competitive in their capabilities. They provide different perspectives of the same package, and each has its relative strengths and weaknesses. The instrumentation and capabilities of these techniques are presented, and practical examples in IC package inspection are discussed.

3.1 THE SCANNING ACOUSTIC MICROSCOPE

In pulse-echo SAM, the same transducer both transmits the incident pulses and receives the returning echoes as it is scanned in an image raster pattern. The sample and transducer are acoustically coupled by a water bath. Acoustic pulses are focused to a spot within the IC package (Figure 3.1) with a lens. Broad-band pulses are used to enable the differentiation of closely spaced layers. The echo signal is analyzed, and characteristics of the signal, such as amplitude, phase, and depth, are used to form images of internal structures and defects. The transducer is precisely scanned in a plane parallel to the surface of the package to produce an image at a fixed depth (C-scan image). Alternately, the equivalent of a nondestructive cross-section image can be created by scanning the transducer in a line and displaying the reflected amplitude vs. depth (B-scan image). The pulse repetition rate is typically limited to 10 kHz due to decay of the reverberations that occur between the transducer and sample. Typical scan times for a package are less than 1 minute for a 256 by 256 pixel image.

The application of acoustic microscopy to IC package development was driven by the industry-wide conversion to surface mount designs in the 1980's.
Prior to this development, x-ray radiography (both film-based and real-time) was the primary method for nondestructive inspection of IC packages. IC packaging migrated from relatively small dies in robust, through-hole, typically dual in-line (DIP) packages, to larger, more complex devices in thin surface mount packages. Unlike wave solder assembly, surface mount assembly (vapor phase or infrared solder reflow) exposes the package body to a very rapid ramp to a higher temperature. This can result in the development of moisture/thermal-induced stresses sufficient to cause internal delamination and package cracking. The increased die size in many surface mount products results in higher stresses during temperature cycling. These stresses occur due to the mismatch in the coefficient of thermal expansion (CTE) between the die and packaging materials. Reliability studies incorporating SAM inspection helped to clarify the moisture sensitivity issue, identifying delamination, and not package cracking, as the primary cause of electrical failure during temperature cycling. These studies correlate the delamination revealed by nondestructive SAM inspection to the results of electrical testing and destructive physical analysis on both board assembly failures and reliability test failures.

Figure 3.1. Inspection of IC packages with pulse-echo acoustic microscopy.

The more recent development of multi-layer substrate packages such as plastic ball grid arrays (BGAs) has resulted in the use of through-transmission SAM inspection of these substrates. The high attenuation in the organic materials and the fine layer spacing in these packages can make echo identification challenging in the pulse-echo mode of SAM inspection. Through-transmission inspection makes use of a second transducer (receiver only) which is scanned with the transmitter.
Through-transmission SAM enables rapid screening of organic BGA substrates at the sacrifice of the depth and phase information provided by pulse-echo SAM.

3.1.1 Image Contrast in the SAM

Image contrast in pulse-echo SAM inspection of IC packages is due primarily to the reflectivity of internal interfaces. The typical molded package provides a featureless front surface which is parallel to the desired image plane, and the interfaces of interest lie at approximately the same depth. Thus, front surface morphology and absorption contrast due to path length differences do not dominate the image contrast. In through-transmission SAM, both interface reflection and attenuation losses contribute to the image contrast.

Sound is a matter wave. Matter waves depend on the vibration of molecules, atoms, and electrons for propagation (in the same manner as heat conduction). Sound in the frequency range used for SAM has wavelengths similar to infrared radiation. The reflection and transmission of sound at interfaces can be described by geometric optics. However, there are some interesting differences between the behavior of light and sound at an interface. Unlike the refraction of light at the interface between isotropic media, sound produces two reflected and refracted waves in isotropic elastic solids (birefringence). The faster of these is a longitudinal wave, in which the direction of particle motion is parallel to the direction of propagation. The other is a shear wave, in which particle motion is perpendicular to the direction of the wave. Although shear wave imaging offers unique capabilities, this discussion will be limited to the primary imaging mode, which uses longitudinal waves.

Also, in optical systems, light travels faster through the air than through the lens. In acousto-optics, this situation is reversed. Sound typically travels faster through a solid lens than through the water couplant.
So even though reflected and refracted wave directions in both systems are described by Snell's law, the sign of the radius of curvature of an acoustic lens will be opposite that of the corresponding optical lens (i.e., concave instead of convex).

At internal interfaces, a fraction of the incident acoustic energy is reflected and the remainder is transmitted. Figure 3.2 illustrates the example of a plane wave travelling through an ideal elastic solid (left) and impinging at normal incidence onto a planar interface with another ideal elastic solid (right). The incident plane wave has the sinusoidal acoustic pressure amplitude Pi, and the reflected and transmitted pressure amplitudes are PR and PT, respectively. The boundary conditions at the interface state that the acoustic pressures and particle velocities must be equal in both materials. Thus, the frequency remains unchanged across the interface, and the reflectivity (R) and transmissivity (T) of the interface can be described by the equations below:

R = PR / Pi = (Z2 - Z1) / (Z2 + Z1)

T = PT / Pi = 2 Z2 / (Z2 + Z1)

Figure 3.2. Reflection at an ideal planar interface is illustrated.

The acoustic impedances, Zi, represent the ratios of the acoustic pressures to the particle velocities per unit area in each material. The acoustic impedances can be derived in the above example from the product of the density (ρi) and the speed of sound (vi):

Zi = ρi vi

Table 3.1 lists the acoustic parameters for some package materials. Note that at a bonded interface between plastic mold compound and Si, for example, R is positive and roughly 52%. However, at a delamination or package crack (which is represented by an interface between mold compound and air), 100% of the amplitude is ideally reflected (no transmission), and the phase of the reflected pulse is inverted relative to the incident pulse. Ideally, both amplitude and phase inversion detection in the reflected signal can be used to identify internal delaminations.
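These reflection relations can be checked numerically. The sketch below assumes approximate impedance values in the units of Table 3.1 (1e5 g/cm^2/sec) and reproduces both the roughly 52% bonded mold/Si reflection and the inverted, near-total reflection at a mold/air delamination.

```python
# Numerical check of R = (Z2 - Z1) / (Z2 + Z1) for two interfaces of interest.
# Impedances are approximate, in units of 1e5 g/cm^2/sec.

def reflectivity(z1, z2):
    """Pressure reflection coefficient at normal incidence, wave going from 1 to 2."""
    return (z2 - z1) / (z2 + z1)

Z_MOLD = 6.3       # plastic mold compound
Z_SI = 19.6        # silicon (approximate)
Z_AIR = 0.00041    # air

r_bonded = reflectivity(Z_MOLD, Z_SI)   # positive, ~0.51: the "roughly 52%" bonded case
r_delam = reflectivity(Z_MOLD, Z_AIR)   # ~ -1.0: near-total reflection, phase inverted
print(f"bonded: R = {r_bonded:+.2f}, delaminated: R = {r_delam:+.2f}")
```

The sign flip at the mold/air interface is the basis of the phase inversion detection discussed above.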
Losses due to frequency-dependent attenuation can be high in some packaging materials such as mold compounds (especially those with rubberized filler), adhesive die attach materials, and organic substrates. Such attenuation losses in plastic packages, and other factors such as interface roughness, can often obscure the amplitude difference between reflections from delaminated and bonded interfaces. In these cases, phase inversion detection is an important tool to assist in the identification of delaminations and cracks in plastic packages.

The speed of sound in materials is typically less than 13 km/sec. This is roughly four orders of magnitude less than the speed of light. The time delay between echoes returning from a typical molded package can be easily measured electronically, and images with three-dimensional information can be displayed. This is a unique advantage of a pulse-echo acoustic technique and has been useful, for example, in determining the mechanism for package crack formation.

Table 3.1. Acoustic parameters of packaging materials are shown.

Material      v (m/sec)   ρ (g/cc)   Z (10^5 g/cm^2-sec)
Al2O3         10,400      3.9        40.6
Si            8,430       2.33       19.6
Mold Comp.    3,500       1.8        6.3
Water         1,480       1.0        1.5
Air           343         0.0012     0.00041

3.1.2 Image Resolution in the SAM

The spot size obtainable with a spherical lens (optical or acoustic) is limited by diffraction effects. Using the Rayleigh criterion, the lateral resolution, d, in a pulse-echo system is given by:

d = 1.02 λ F/D

where λ is the acoustic wavelength, F is the focal length, and D is the diameter of the lens. The value of F/D ranges from 2 to 4 for typical transducers used for subsurface inspection in IC packages.
So, practically speaking, the best resolution obtainable with these transducers is roughly two times the wavelength. At a center frequency of 75 MHz, the wavelength in water is approximately 20 µm, and the expected lateral resolution is roughly 40-80 µm. However, attenuation in a package acts as a low-pass filter and shifts the center of the pulse frequency distribution to a lower frequency. Focusing aberrations and scattering also degrade the spot size. This results in an increase in the minimum spot size. In practice, spot sizes are typically in the range of 50-250 µm, depending on the incident pulse frequency and the package materials and thickness. Although transducer performance is the primary component affecting resolution, at pulse frequencies above ~100 MHz it is the total integrated system performance that determines resolution, pulse width, and efficiency. In addition to transducer performance, important system parameters include pulser and receiver bandwidth, cable effects, and noise suppression.

Depth (or axial) resolution is important for distinguishing reflections from closely spaced layers within the package. Depth resolution in the time domain is determined by pulse duration as well as frequency. The inherent decay time of the transducer, the focusing properties of the lens, and frequency-dependent attenuation all contribute to pulse duration. A typical pulse duration for a broad-band transducer is two periods at the pulse center frequency. This effect creates what has been termed the "dead zone" below an interface. For example, for a duration of two periods at the center frequency, reflections from one interface may interfere with the reflection from an earlier interface and make detection difficult, especially if the signal amplitude is diminished by losses at the earlier interface. Real-time frequency domain analysis techniques may become practical for reducing this effect.
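The lateral resolution numbers quoted above can be reproduced from the Rayleigh-criterion relation d ≈ 1.02 λ F/D; the short sketch below assumes the water sound speed from Table 3.1.

```python
# Diffraction-limited lateral resolution at a 75 MHz center frequency in water,
# for the F/D range of 2-4 quoted for typical subsurface-inspection transducers.

V_WATER = 1480.0   # m/s, speed of sound in water (Table 3.1)
freq = 75e6        # Hz

wavelength_um = V_WATER / freq * 1e6   # ~20 um, as stated in the text

for f_over_d in (2.0, 4.0):
    d_um = 1.02 * wavelength_um * f_over_d
    print(f"F/D = {f_over_d:.0f}: d ~ {d_um:.0f} um")  # spans roughly 40-80 um
```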
3.2 THE REAL-TIME X-RAY INSPECTION SYSTEM

The microfocus RTX system consists of three major components: the microfocus x-ray source, the sample holder, and the image intensifier. Figure 3.4 shows a schematic of the geometry of the system. In the microfocus x-ray source, an electron beam (up to 360 keV in energy) is focused to a small spot on a replaceable target. The size of this focal spot determines the "geometric unsharpness" in the image. The sample holder allows for orientation of the IC package in 5 axes (x, y, z, rotation, and tilt) and continuous adjustment of the magnification. The projected image of the package (shadowgraph) is detected with an image intensifier tube, which consists of a fluorescent screen to convert the x-ray intensity image into light, and a camera or photomultiplier array to amplify the light image and convert it into digital format. Radiation shielding and safety interlocks are always an important aspect of x-ray radiography.

Real-time x-ray inspection systems are the successors of early x-ray fluoroscopes, in which the transmitted x-ray image was projected onto a fluorescent screen. RTX systems with image intensifier tubes have been available since the 1950's. Dramatic improvements in the intensifier tubes in the 1970's expanded the application of RTX to the inspection of materials with higher attenuation (higher energy x-rays). Current RTX systems offer high sensitivity (1% change in attenuation), a broad field of view, a spatial resolution better than 10 µm, flexible sample orientation, and sophisticated post-acquisition image processing. Real-time frame averaging improves the effective signal-to-noise ratio, which results in images with better contrast and resolution. The post-processing options include automated defect recognition features.

Figure 3.4. Microfocus RTX system schematic. Magnification: m = Z_det / Z_object.

RTX offers several advantages over film radiography.
The ability to access the image almost immediately after acquisition is a major advantage. The flexibility to reorient the IC package and continuously change the magnification during inspection reduces the time required to obtain results and reveals defects that might otherwise go unnoticed. Image processing capabilities available for post-processing facilitate defect identification and enable RTX to compete with film radiography for image quality.

3.2.1 RTX Image Contrast and Resolution

Because the x-ray wavelength is similar to interatomic spacings, the x-ray photons interact with matter on an atomic scale. The primary interaction processes in the typical x-ray energy range for IC package inspection (<360 keV) are Rayleigh scattering, the photoelectric effect, and Compton scattering. Reflection and refraction at interfaces are replaced by diffraction at specific angles determined by the wavelength and the crystal structure of the diffracting material (Bragg's law). Thus, x-rays cannot be focused with conventional lenses. Reflection at a glancing angle (such as total internal reflection in a fiber) is possible but is very inefficient. Therefore, RTX image contrast is due primarily to differential x-ray attenuation in the sample. The attenuation is a function of the x-ray wavelength and the atomic number and thickness of the absorbing material. The reduction in x-ray intensity due to attenuation is a function of the linear absorption coefficient (µ) and the sample thickness (t) and can be expressed as:

I / I0 = exp(-µt)

The overall attenuation in a material is often expressed as the mass absorption coefficient (µ/ρ), where ρ is the material density. For example, denser materials such as metals (Cu lead frames and Au bond wires) have much higher mass absorption coefficients than less dense materials such as Si (dies), Al (bond wires), and plastic mold compound.
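The exponential attenuation law above can be exercised numerically. The mass absorption coefficients and thicknesses in this sketch are rough, illustrative values (order-of-magnitude figures near 100 keV), not data from this chapter; real work should use standard x-ray attenuation tables.

```python
# Sketch of the attenuation law I/I0 = exp(-mu*t), written in terms of the mass
# absorption coefficient mu/rho. Coefficients and thicknesses below are rough,
# illustrative assumptions only.

import math

def transmission(mu_over_rho_cm2_g, rho_g_cm3, thickness_cm):
    """Fraction of x-ray intensity transmitted: I/I0 = exp(-(mu/rho)*rho*t)."""
    return math.exp(-mu_over_rho_cm2_g * rho_g_cm3 * thickness_cm)

t_si = transmission(0.18, 2.33, 0.04)    # ~0.4 mm Si die: nearly transparent
t_cu = transmission(0.46, 8.96, 0.02)    # ~0.2 mm Cu lead: noticeably absorbing
t_au = transmission(5.2, 19.3, 0.0025)   # 25 um Au wire: strongly absorbing
print(f"Si {t_si:.2f}, Cu {t_cu:.2f}, Au {t_au:.2f}")
```

The ordering (Si most transparent, Au most absorbing) matches the contrast described in the text: leads and bond wires dominate the RTX image while the die is relatively transparent.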
The spatial resolution in RTX images is primarily determined by the focal spot size of the microfocus source due to the geometric unsharpness factor described in Figure 3.4. Microfocus sources with reflection targets are available which provide a focal spot size of 5 μm. Thin targets (called transmission targets) offer even finer focal spot sizes (down to 1 μm).

3.3 APPLICATION EXAMPLES

Since SAM and RTX both provide non-destructive imaging of internal package structures, there can be confusion in choosing which of these two techniques is appropriate for identification of specific types of suspected defects. Each technique has strengths and weaknesses in the detection of typical assembly related failures such as delaminations and package cracks, voids in flip chip solder bumps, and die attach voids. The following examples illustrate practical applications of these techniques and the appropriateness of each technique for the identification of specific types of defects. They also illustrate the complementary capabilities of SAM and RTX in cases where a specific failure mechanism is not suspected.

Figure 3.5 shows RTX and SAM images of the same plastic quad flat pack (PQFP) package. The contrast mechanisms at work in these two images are significantly different. The RTX image is a record of the relative attenuation of a broad beam of x-rays transmitted through the sample, while the SAM image shows the return echo amplitude of a focused acoustic spot that is mechanically translated to form an image. The acoustic echo signal has been gated at the appropriate time to selectively image the interfaces between the mold compound and the die and lead frame. The edges of the die are evident in the SAM image, and the bright corners of the die (arrow) are delaminations between the mold compound and the die surface. The RTX does not detect the delamination due to the relatively low attenuation of x-rays in air.
The Au bond wires (25 μm) are not resolved in the SAM image, and the overall spatial resolution is obviously better in the RTX image.

Figure 3.5. A plastic quad flat package (PQFP). A) SAM image (image area = 20 x 20 mm), and B) RTX image of the upper right corner of the same package.

The RTX image shows high attenuation at the Cu lead frame and Au bond wires. The low attenuation through Si makes the die relatively transparent in the RTX image. The area covered by the Ag-filled die attach is apparent and represents the area of coverage, but not necessarily the area bonded. As discussed in the following paragraph, SAM images can show the total bonded area at the interface between the die and the die attach material when the die attach reflection is selected for imaging.

Figure 3.6 shows an RTX image as well as two SAM images of the same thin quad flat pack (TQFP) package. In this case, the outline of the die is faint but discernible in the RTX image (arrows). The RTX image shows fairly good coverage of the die attach, but it is heavily voided. Studies have documented that the RTX image of the die attach area remains the same during temperature cycling, while periodic SAM images show progressive damage and the true percent area bonded. In the TQFP SAM image shown here, the die attach is poorly bonded (Figure 3.6.C). The phase inversion image acquired by SAM (Figure 3.6.B) shows the presence of delaminations on the die pad and die surface. These are not detected by RTX. This is a good example of the complementary relationship between the two techniques: each is capable of providing information not revealed by the other technique.

The attenuation of x-rays in a sample can be so strong that limited information is available.
Cracks in the ceramic portion of a microwave housing were detected only by dye penetrant methods. Due to the 5 mm thick metal ring frame connected to the ceramic, the cracks were not detected by RTX inspection (area 1 in Figure 3.7). For the same reason, a void or delamination in the solder beneath the ring frame was also not detected by RTX (Figure 3.7, area 2). Areas 3, 4, and 5 in Figure 3.7 again display the strength of RTX for showing die attach coverage and the strength of SAM for showing true percent area bonded.

Figure 3.6. Images of a TQFP are shown: A) RTX image (arrow heads point to outline of die), B) SAM image at die surface (all white areas are delaminations), C) acoustic image at die attach (only darker regions (arrow) are bonded). (Image area 9 x 9 mm)

Figure 3.7. Housing comprised of a thick metal ring frame (5 mm) bonded to a ceramic substrate with solder. A) RTX image, B) SAM image. Five areas of interest are marked on each image for comparison. (Image area: 38 x 18 mm)

The fine layer spacing and high acoustic attenuation in multi-layer organic BGA substrates makes echo identification difficult. Through-transmission SAM inspection provides a method for rapidly screening these substrates for delaminations. Figure 3.8 shows the comparison of the RTX and through-transmission SAM images of a 272-pin plastic BGA package. The circled area is a delamination between the die and die attach material that was detected by SAM (delaminations appear dark in through-transmission and bright in pulse-echo SAM). The RTX image indicates cracks in the die attach, probably caused by shrinkage during curing, that were not resolvable with SAM.
This apparent contradiction is explained by the fact that the cracks in the Ag-filled epoxy die attach extend completely through the die attach layer, while the die attach delamination detected by through-transmission SAM is a relatively thinner air layer that presents a negligible increase in total x-ray attenuation.

Figure 3.8. Images from the die area of a plastic 272-pin BGA. A) SAM image in through-transmission mode, and B) RTX image. (1 mm BGA ball pitch)

Flip chip underfill is an organic adhesive that has a relatively low x-ray mass absorption coefficient. The RTX image of an assembled flip chip device in Figure 3.9.B reveals patterned metal in the multi-layer substrate and the eutectic solder bumps, but no defects are apparent. However, because sound is a matter wave, the SAM image clearly shows underfill voids and delamination (bright areas). The RTX image in Figure 3.10 demonstrates the effectiveness of x-ray inspection for detecting small voids in eutectic flip chip bumps ~150 μm in diameter. Detecting these solder voids with SAM is difficult because of the resolution required (spatial and axial). This capability will be increasingly important as the size of flip chip bumps shrinks to accommodate finer pitch interconnect.

Figure 3.9. Images of a flip chip. A) SAM image gated at the underfill layer, and B) RTX image. (250 μm flip chip bump pitch)

Figure 3.11. Images from a mounted but non-underfilled flip chip. A) RTX image, B) SAM image. Arrowheads show same positions in both images where an "anomalous" bright reflection is detected acoustically. Circles in the SAM image show where solder bridges occur.

The final example in this section is an analysis that required the capabilities of both SAM and RTX to understand the nature of the defect. The RTX image in Figure 3.11.A shows solder bridging in an organic substrate flip chip package that has not been underfilled. The SAM image (Figure 3.11.B)
of the same location, gated at the device surface, shows bright contrast at specific solder bump locations. Bright reflections in SAM normally indicate cracks or delaminations, which in this case would be an electrically open bump. However, in the plan view x-ray image (Figure 3.11.A) no evidence of open bumps is indicated.

Figure 3.12 shows the RTX image of the same location as in Figure 3.11, but at an oblique angle. This image reveals that the solder bridges (Figure 3.11.A) are confined to the substrate. Furthermore, the tilted view RTX image identifies the bright spots in the SAM image (Figure 3.11.B) as pads on the die that are not connected to the corresponding pads on the substrate. Therefore, the plan view SAM image in Figure 3.11.B detects these "non-wets", whereas the plan view RTX inspection did not. However, the tilted view RTX image was required to understand the contrast in the SAM image. Without the discrepancy between the plan view RTX and SAM images, a high resolution RTX inspection on the tilted sample would not have been performed. Because the die was not underfilled in this case, the air gap between the substrate and the die prevented the SAM from detecting the solder bridging on the substrate.

Figure 3.12. X-ray image shows the same flip chip as Figure 3.11, taken at an oblique angle. Arrows correlate to positions marked by arrowheads in Figure 3.11.

3.4 SUMMARY AND TRENDS IN NONDESTRUCTIVE INSPECTION

The examples presented in this section demonstrate the effectiveness of SAM and RTX for nondestructively detecting defects in packaged ICs. Both techniques provide images that are similar in many regards, but that have significant differences due to the contrast mechanisms at work. Image contrast in RTX images is based on the relative density of the materials in the package, while the pulse-echo SAM is sensitive to changes in the mechanical properties at interfaces.
Although the attenuation of sound and x-rays as a function of depth can be described by very similar equations, it is often the case that materials that transmit sound readily are highly attenuating to x-rays, and vice versa. For example, Cu heat slugs are highly attenuating to x-rays, but have very low attenuation to sound. In another comparison, the superior spatial resolution of RTX is needed to detect wire sweep and voids in flip chip solder bumps, but the sensitivity of the sound wave to changes in mechanical interface properties is required to image package cracks and delaminations. By comparing the information provided by these two techniques, one obtains a broader perspective of a packaging issue than could be obtained from either technique by itself.

The trend in pulse-echo SAM development is toward higher pulse center frequencies for better spatial and depth resolution. This is important for inspection of fine pitch flip chip interconnects, for example. However, frequency-dependent attenuation in the water path may set a practical upper limit on pulse frequency. Until this time, the analysis of the acoustic echo pulse has been done in the time domain. It is reasonable to anticipate the development of real-time frequency domain analysis capabilities to improve depth resolution for SAM inspection of advanced packages. Also, the image acquisition time in the SAM (less than 1 min.) can be greatly improved to the level of real-time by the development of two-dimensional arrays for acoustic imaging.

RTX technology continues to produce improvements in source spot size and brightness. Replaceable thin transmission targets produce roughly a 5X reduction in source spot size by eliminating the large x-ray production volume that results from the interaction of the electron beam with a bulk target in the microfocus source.
Continuing developments in post-acquisition image processing (post-processing) features extend the utility of RTX for defect detection.

Emerging nondestructive imaging techniques are being developed that may impact IC package inspection in the future. X-ray laminography uses a steerable x-ray source coupled with a rotating detector to produce cross section images within the sample. Because the practical spatial resolution is limited to roughly 125 μm, this technique is now seeing applications primarily in solder joint and printed circuit board inspection. Imaging with far-infrared terahertz waves offers the potential for 100% inspection of packages without ionizing radiation or the need for a water couplant. Currently, the spatial resolution offered by THz imaging is roughly 250 μm, and instrument cost and availability are still an issue.

ACKNOWLEDGEMENTS

The authors wish to acknowledge Jay Adams of CR Technology for the x-ray images of the flip chips.

REFERENCES

1 Lin R, Blackshear E, Serisky P. Moisture Induced Package Cracking in Plastic Encapsulated Surface Mount Components During Solder Reflow Process. Proceedings International Reliability Physics Symposium, 1988, 83.
2 Moore TM. Identification of Package Defects in Plastic-Packaged Surface Mount ICs by Scanning Acoustic Microscopy. Proceedings International Symposium for Testing and Failure Analysis, 1989, 61.
3 Kuroki S, Ooia K. High-Reliability Epoxy Molding Compound for Surface-Mounted Devices. Proceedings Electronic Components Conference, 1989, 885.
4 Nishimura A, Kawai S, Murakami G. Effect of Lead Frame Material on Plastic-Encapsulated IC Package Cracking Under Temperature Cycling. Proceedings Electronic Components Conference.
5 Van der Wijk A, van Doorselaer K. Nondestructive Failure Analysis of ICs Using Scanning Acoustic Tomography (SCAT) and High Resolution X-ray Microscopy (HRXM). Proceedings International Symposium for Testing and Failure Analysis, 1989, 69.
6 Moore TM, McKenna R, Kelsall SJ. The Application of Scanning Acoustic Microscopy to Control Moisture/Thermal-Induced Package Defects. Proceedings International Symposium for Testing and Failure Analysis, 1990, 251.
7 Moore TM, McKenna R, Kelsall SJ. Correlation of Surface Mount Plastic Package Reliability Testing to Nondestructive Inspection by Scanning Acoustic Microscopy. Proceedings International Reliability Physics Symposium, 1991, 160.
8 Moore TM, Kelsall SJ, McKenna RG. "Moisture Sensitivity of Plastic Packages". In Characterization of Electronic Packaging Materials, Moore TM, McKenna RG, ed. New York: Butterworth-Heinemann, 1993.
9 Van Gestel R, de Zeeuw K, van Gemert L, Bagerman E. Comparison of Delamination Effects Between Temperature-Cycling Test and Highly Accelerated Stress Test in Plastic-Packaged Devices. Proceedings International Symposium for Testing and Failure Analysis, 1992, 177.
10 Moore TM, Kelsall SJ. Impact of Delamination on Stress-Induced and Contamination-Related Failure in Surface Mount ICs. Proceedings International Reliability Physics Symposium, 1992, 169.
11 Shook RL. Moisture Sensitivity Characterization of Plastic Surface Mount Devices Using Scanning Acoustic Microscopy. Proceedings International Reliability Physics Symposium, 1992, 157.
12 Van Doorselaer K, Moore TM, Tiziani R, Baelde W. Evaluation of Methods for Delamination Detection by Acoustic Microscopy in Plastic-Packaged Integrated Circuits. Proceedings International Symposium for Testing and Failure Analysis, 1992, 425.
13 Moore TM. The Impact of Acoustic Microscopy on the Development of Advanced IC Packages. Proceedings International Workshop on Semiconductor Characterization: Present Status and Future Needs, NIST, 1995, 202.
14 Moore TM, Hartfield CD. Through-Transmission Acoustic Inspection of Ball Grid Array (BGA) Packages. Proceedings International Symposium for Testing and Failure Analysis, 1997, 197.
15 Moore TM, Drescher-Krasicka E. Comparison Between Images of Damage and Internal Stress in IC Packages by Acoustic Microscopy. Proceedings International Workshop on Moisture in Microelectronics, NIST, 1996.
16 Drescher-Krasicka E, Willis JR. Mapping Stresses with Ultrasound. Nature, 1996, 384, 52.
17 Szilard J. Ultrasonic Testing. New York: John Wiley and Sons, 1982.
18 Kinsler LE, Frey AR, Coppens AB, Sanders JV. Fundamentals of Acoustics, p. 125. New York: John Wiley and Sons, 1982.
19 Krautkramer J, Krautkramer H. Ultrasonic Testing of Materials, 3rd ed. Springer-Verlag, 1985.
20 Briggs A. An Introduction to Scanning Acoustic Microscopy. Royal Microscopical Society/Oxford University Press, 1985.
21 Briggs A. Acoustic Microscopy. New York: Oxford University Press, 1992.
22 Moore TM, Hartfield CD. Trends in Nondestructive Imaging of IC Packages. Proceedings Characterization and Metrology for ULSI Technology, NIST, 1998.
23 Halmshaw R. "Radiological Methods". In Nondestructive Testing, Honeycombe RWK, Hancock P, ed. London: Edward Arnold Publishers, 1987.
24 Bray DE, Stanley RK. "Radiographic Techniques in Nondestructive Evaluation". In Nondestructive Evaluation: A Tool in Design, Manufacturing and Service, Holman JP, ed. New York: McGraw-Hill, 1989.
25 ASM Committee on Radiographic Inspection. Radiographic Inspection. In Metals Handbook (9th ed.), Vol. 17. Metals Park, OH: ASM International, 1989.
26 Colangelo J. "Advanced Radiographic Techniques in Failure Analysis". In Microelectronics Failure Analysis Desk Reference (3rd ed.), Lee TW, Pabbisetty SV, ed. Materials Park, Ohio: ASM International, 1994.
27 Moore TM, Frank K. Experience with Nondestructive Acoustic Inspection of Power ICs. Proceedings Electronic Components and Technology Conference, 1995, 305.
28 Adams J. "X-ray Laminography". In Characterization of Electronic Packaging Materials, Moore TM, McKenna RG, ed. New York: Butterworth-Heinemann, 1993.
29 Mittleman DM, Jacobsen RH, Nuss MC. T-Ray Imaging. IEEE Journal of Selected Topics in Quantum Electronics, 1996, 2, 679.

DIE EXPOSURE

Phuc D. Ngo
ST Microelectronics

If the physical cause of failure is on the die or elsewhere inside of the package, the first destructive failure analysis process is exposing the die and bonding. In general, these processes are intended to maintain the electrical characteristics of the device. This has become particularly critical as the number of pins and the speed of the devices have increased. Maintaining an electrical signature of the failure for failure isolation is critical to success. Increases in device complexity and speed make it increasingly difficult to maintain electrical characteristics except through existing package connections.

For cavity packages, exposure of the die typically consists of mechanically removing the lid, commonly called delidding. For plastic packages, removal of the mold compound covering the die and bond wires is called decapsulation. While historically exposing the top of the die has been desired, more recent emphasis has been placed on exposing the back of the die. This trend is being driven by the increase in flip-chip mounting. In addition to the challenges posed by flip-chip technology, the increased number of layers of metallization has, in many cases, made it easier to isolate failures from the back of the die rather than the front. Backside preparation consists of exposing the back of the die through removal of materials from the back of the chip, which can range from a heat sink to plastic material. After the bulk of the material is removed, the silicon surface is generally polished to facilitate IR light transmission.

In some cases where the failure mechanism is anticipated and chemical analysis is critical, mechanical opening without maintaining electrical connections may be preferred. Metallization corrosion failures are the most typical examples.
4.1 DELIDDING CAVITY PACKAGES

Since the die and wires in a cavity package are not connected to the lid, mechanical removal of the lid will generally expose the die. Grinding away the lid is one approach which has been used successfully. In addition, the lid seal (the material holding the lid to the package header) can be melted or cracked in order to remove the lid. Grinding is used predominantly on ceramic packages with ceramic lids. In many ceramic lid packages, the lid seal also contains the pins, as shown in Figure 4.1 for a C-DIP package. Cracking this seal can result in damage to the pins and loss of electrical connections. In these cases, grinding provides a more reliable method of maintaining connectivity. The easiest method is to use a diamond impregnated grinding wheel having coarse grit. Figure 4.1 shows a Cerdip prepared with this technique.

Figure 4.1. A cross-section drawing of a Cerdip package cavity before (left) and after grinding the lid off is shown. An optical photograph of a Cerdip after grinding on the lid is shown at the right.

Cracking the lid seal is also possible with most cavity packages. Most techniques employ either one or two knife edges. For ceramic seals, a pair of knife-edges is commonly used to crack the seal. Typically, the knife-edges are put together in an assembly with a pedestal, which helps to maintain the device in a flat position between the knife-edges as shown in Figure 4.2. This type of assembly provides a very quick method for delidding. While most packages can be successfully cracked open without losing electrical connections, grinding is more reliable.

Metal lid packages tend to be somewhat easier to manage since the external pins cannot be part of the lid seal as with C-DIP packages. There are generally two approaches used. The primary approach is to crack the lid off.
One approach is to use a single knife-edge: the blade is placed in position to initiate a crack in the corner of the lid seal as shown in Figure 4.3. Typically a small hammer is used to drive the blade. Fixtures have also been developed to simplify this process, particularly for cavity down packages where the pins can interfere with placing the knife-edge. A second approach is to melt the lid seal, which is typically a eutectic alloy of gold and tin. Heat is applied to the lid until the alloy melts and the lid can be peeled up.

Figure 4.2. The diagram shows how a two blade system is used to crack open a Cerdip package.

A few ICs, particularly analog devices, are packaged in cans. For these packages, the lid can be cut off. The tool used is somewhat akin to a copper-tubing cutter, as shown in Figure 4.4. The device is rotated with the roller while forcing it against the knife-edge.

4.2 DECAPSULATION OF PLASTIC PACKAGES

Early acid decapsulation methods were extremely varied. In some cases, the entire package was dissolved away, leaving behind the die and lead frame. A variation on this was to solder the lead frame to a paper clip in order to maintain the position of the bond wires. Obviously, neither of these techniques was very effective in maintaining electrical connections. With some variations, the industry migrated to a technique where a cavity was formed in the top of the package by milling. The device was placed on a heater block and a decapsulating acid (predominantly fuming nitric acid or fuming sulfuric acid) was dropped into the cavity. The acid was dumped and replenished until the die was exposed. This approach was reasonably successful for many years.

Figure 4.3. The procedure for cracking a metal lid seal is shown in top and side views.

Jet etch decapsulation was developed largely to provide a more efficient method of decapsulation.
A low-level vacuum is used to create a jet of hot decapsulating acid and to hold the device in place as shown in Figure 4.5. However, as mold compound has become more difficult to decapsulate, jet etch has become a requirement for decapsulation. The jet etch provides several important advantages. The acid can be heated to a high temperature without excessively heating the device. In general, higher decapsulation temperatures are required for current devices. In addition, the time to decapsulate can be controlled by the temperature. If the decapsulation time is too short, it is difficult to control the extent of etching. If the time is too long, acid tends to be adsorbed into the mold compound, causing swelling and ultimately damage to the bonding. Fresh acid impinges on the device surface continuously until the die is exposed. This appears to prevent passivation of the etched surface. This passivation process can occur during the other approaches as the acid reacts and as clean-ups are performed for intermediate inspections.

One approach to facilitating the process of decapsulation is to reduce the amount of material which needs to be removed. If much of the mold compound is removed mechanically, the remaining mold compound can be removed much more quickly and efficiently. This applies equally well to the other decapsulation techniques, which will be discussed.

Figure 4.4. Delidding a metal can device is normally performed on a tool similar to a pipe cutter.

4.3 ALTERNATIVE DECAPSULATION METHODS

Acid decapsulation is the dominant method for die exposure in wire-bonded plastic packages. However, other techniques have been developed and can be useful in some situations. Plasma, laser assisted, and thermo-mechanical decapsulation methods have been employed.

4.3.1 Plasma Decapsulation

Plasma decapsulation of mold compounds is possible using a primarily oxygen plasma. This attacks the organic components of the mold compound.
However, the filler material (typically silicon dioxide, included to provide a thermal coefficient of expansion closer to silicon) is the primary component of the mold compound, and it is not etched in the oxygen plasma. The typical ashing (a common term for plasma removal of organic materials) plasma is generated using oxygen mixed with less than ten percent CF4. Hence the filler material must be removed separately. Plasma decapsulation is, therefore, typically an iterative process of ashing the polymer and mechanically removing the filler material. Since this is a time consuming process, removal of the bulk of the mold compound over the die can significantly decrease the time for decapsulation. The plasma reactions also generate significant heating of the device. This heating must be controlled in order to prevent damage to the device. On the other hand, the heating helps to speed up the plasma reaction and should not be totally eliminated. The plasma can also generate significant charging, making it important to ground all of the pins. Plasma decapsulation has not been widely used because it is time consuming to alternate between the filler and resin removal cycles. Alternative methods are preferred due to cycle time and throughput constraints.

4.3.2 Laser Decapsulation

As mold compounds have become more difficult to decapsulate, interest has grown in laser decapsulation. In the laser decapsulation process, the mold compound is precisely ablated. The benefits of laser milling include avoiding the stresses induced by mechanical drills and the temperature cycling of lengthy wet etches. The laser milling can be performed such that a few mils of mold compound remain above the chip, maintaining its electrical integrity. The remaining mold compound can be removed quickly with the standard wet etches. This technique becomes particularly applicable as decapsulation of newer mold compounds exceeds the capability of most jet etchers.
4.3.3 Thermomechanical Decapsulation

When failures are expected to be due to corrosion, analysis of the corrosion residue is a critical part of the analysis. Analysis of this residue provides a "fingerprint" of the source of the ionic contamination which contributed to the corrosion. Other types of failures where chemical analysis is critical, e.g. bond adhesion failures, can also be decapsulated thermomechanically. Wet chemical decapsulation procedures typically remove corrosion products whose analysis is critical to the failure analysis process. Thermomechanical techniques have been devised to break open the IC package without removing these corrosion products. Analysis of lead frame segments from a thermomechanically decapsulated device may also assist in identifying the migration path of contamination.

The thermomechanical decapsulation techniques are extremely varied. They generally include one or more of three elements: some reduction of package size by grinding, heating the mold compound, and exertion of a mechanical force to crack the package or separate materials.

Figure 4.5. The basic element of jet etch decapsulation is a container of heated acid from which a jet is created by a vacuum.

One technique currently employed is to crack a heated device along the upper lead frame to mold compound interface. This approach is very similar to the technique described above for ceramic packages (Figure 4.2) except that a heated block replaces the pedestal. If the die surface is not exposed by the fracture, the top of the device is heated until the mold compound softens and the die can be lifted out with tweezers. In an earlier version, the backside was ground away to expose the die. The device was heated and the die lifted or pried out of the softened mold compound. With these approaches, the lead frame elements can be extracted from the mold compound as well as the die.
A third approach has been to heat the device until it begins to smoke and twist the package with pliers. Innumerable other variations have been employed with varied success rates. Techniques may often be selected based on the sample size and required success rate. The primary disadvantage of this approach is the loss of electrical continuity, which makes it impractical for most decapsulation requirements.

4.3.4 Repackaging/Package Rework

Repackaging has been used for devices which, for one reason or another, cannot be decapsulated or delidded with the die clearly exposed. A good example is current DRAM packaging, where the leadframe is positioned on top of the die and the bond pads typically reside along the middle of the array. In order to achieve visibility of half of the die, the die is removed from the package and is wire bonded in a new package (see Figure 4.6). Die removal with the standard wet etches can sometimes damage the bond pad metallization. This and some other situations call for the polishing technique, in which the top of the package is polished to the die surface. Dry etching is then employed to remove polyimide and/or passivation. This leaves the sample in a condition to be tested and analyzed with probe cards or bonded into a package.

Figure 4.6. EDO memory rebonded in different configurations (optical image) allows for analysis and characterization of specific circuits. Samples courtesy of Craig Salling of Texas Instruments Incorporated.

Repackaging is also often needed for Tape Automated Bonding (TAB) assembly failures. Once assembled, these devices no longer have leads for testing. These failures must be re-packaged before they can be tested and failure analyzed. Many packages also require rework to prepare them for testing and analysis. Ball Grid Array (BGA) failures, for example, typically require "rebumping" or "reballing" after they have been removed from a board.
Solder bumps on BGA packages can also be damaged during the decapsulation process. Malformed solder bumps are commonly reworked before the device is tested in a socket. The process involves removing and replacing all bumps in the array, with interim steps of cleansing with solvents.

4.4 BACKSIDE PREPARATION FOR CHARACTERIZATION AND ANALYSIS

Several developments have created interest in accessing the device from the backside through the substrate. One is the growing application of flip-chip technology. Maintaining the ability to electrically stimulate an IC at speed is very difficult to achieve on high-speed ICs except through the normal interconnection environment. The second development is the rapid increase in the number of layers of metallization on a device. This blocks access to many areas of the device for conventional failure site isolation. Another case for which backside analysis is important is when design requirements cover the entire device with a power plane.

Accessing the backside consists of three processes: removal of any heat sink or package material below the die, thinning and polishing the silicon, and applying an anti-reflective coating. The removal of heat sinks and other materials from the back can be performed in a number of ways. The most common procedure is to use a milling machine with carbon steel bits to grind the material away. This technique sometimes employs a continual water rinse to reduce heating and wear on the bit. In the case of ceramics, diamond encrusted bits are used. Computerized milling machines are commercially available for this application. Once the backside of the die is exposed, a polishing compound is applied to the silicon substrate. Dremel-like tools are sometimes used to do the polishing. In most cases, a rotary polisher with some random motion is applied to maintain planarity. Iterative steps of inspecting the chip with an IR microscope indicate when the active circuits are visible and suitable for analysis.
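Because the removal rate for a given polishing grit is fairly consistent, the thinning step described above reduces to simple removal-rate arithmetic. The thicknesses and rate in this sketch are invented placeholders, not process numbers from the text.

```python
# Hypothetical removal-rate arithmetic for backside thinning; the rate and
# thicknesses below are invented placeholders for illustration only.

def polish_time_min(initial_um: float, target_um: float, rate_um_per_min: float) -> float:
    """Time to thin the substrate from initial to target thickness at a constant rate."""
    if target_um > initial_um:
        raise ValueError("target thickness exceeds initial thickness")
    return (initial_um - target_um) / rate_um_per_min

t = polish_time_min(625.0, 100.0, 15.0)  # e.g. 625 um substrate thinned to ~100 um
```

In practice such a timed estimate would still be checked against iterative IR inspections, since the estimate only holds while the rate stays constant for the grit in use.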
Timed polishing can also be used, since the removal rate is fairly consistent for a given grit of polishing compound. Most of the tools used for subsequent analysis benefit from removing as much silicon as possible. Removal down to a substrate thickness on the order of 100 microns has proven to be reasonably achievable. Techniques that require greater thinning are expected to be performed locally, using methods such as focused ion beam milling, chemically enhanced laser etching, or mechanical methods.

An anti-reflective coating is typically required when the analytical technique employs an incident IR beam probe to extract data from the device. The coating minimizes beam reflection and refraction from the backside surface at the point of entry and, in some cases, also aids image resolution. The anti-reflective coating helps to maintain a high IR light intensity and an improved signal-to-noise ratio.

In the event the die is in a cavity package or the device has been decapsulated, an epoxy must be used to fill the cavity to maintain the chip's stability during backside polishing (see Figure 4.7). This epoxy must have properties that will not alter the bonding as it cures. Backside preparations are typically more complex than topside preparations.

Figure 4.7. Optical images show the topside and backside of a Cerdip cavity package which has been prepared for backside emission microscopy. The blurred topside view is a result of the epoxy used to fill the cavity.

4.5 FUTURE REQUIREMENTS

Access to the die is a fundamental requirement for physical failure site isolation techniques. As flip-chip devices become more common, improvements in backside sample preparation will be required. In addition, as mold compounds continue to become more difficult to remove, more innovative approaches such as laser