
UNIT V ASIC DESIGN AND TESTING 9

Introduction to wafer to chip fabrication process flow. Microchip design process & issues
in test and verification of complex chips, embedded cores and SOCs, Fault models, Test
coding. ASIC Design Flow, Introduction to ASICs, Introduction to test benches, Writing
test benches in Verilog HDL, Automatic test pattern generation, Design for testability, Scan
design: Test interface and boundary scan.

I. Introduction to wafer to chip fabrication process flow:

• Wafer to chip fabrication, also known as semiconductor manufacturing, is the process of transforming a silicon wafer into individual semiconductor chips or integrated circuits (ICs).
Here's an overview of the typical wafer to chip fabrication process flow:

1. Wafer Ingot Growth: The process begins with the growth of a silicon ingot. The silicon ingot
is sliced into thin, circular wafers using a diamond-tipped saw. These wafers serve as the base
material for manufacturing chips.
2. Wafer Cleaning: The wafers undergo rigorous cleaning processes to remove any
contaminants or particles that might have accumulated during handling or previous steps.
Cleanliness is crucial to ensure defect-free manufacturing.
3. Oxidation: The wafers are exposed to high-temperature oxygen or steam to create a thin layer
of silicon dioxide (SiO2) on their surface. This layer serves as an insulating material and also
provides a base for subsequent processes.
4. Photolithography: In this step, a photoresist material is applied to the wafer's surface. Light
is then shone through a photomask that contains the pattern of the desired circuit. The photoresist
is exposed to this patterned light, creating a mask on the wafer. This process defines the circuit
pattern for the subsequent steps.
5. Etching: The exposed parts of the wafer's surface are either removed or modified using
chemical or physical etching processes. This step transfers the pattern from the photomask onto
the wafer, defining the circuit layout.
6. Doping: Dopants (impurity atoms) are selectively introduced into specific areas of the wafer
to modify its electrical properties. This process creates regions with either excess or deficient
electrons, forming the various components of transistors (source, drain, gate, etc.).
7. Thin Film Deposition: Thin films of various materials, such as metal, polysilicon, or
insulators, are deposited onto the wafer surface using techniques like chemical vapor deposition
(CVD) or physical vapor deposition (PVD). These films serve as conductors or insulators in the
circuit.
8. Chemical Mechanical Polishing (CMP): CMP is used to polish the wafer's surface, making it smooth and even. This is essential for accurate layering and subsequent processing steps.
9. Annealing: The wafer is heated in a controlled environment to activate dopants, repair crystal
damage, and improve the electrical properties of the fabricated components.
10. Testing: Throughout the process, various tests are conducted to ensure the quality of the
chips being manufactured. These tests help identify defects and ensure that the chips meet the
required specifications.
11. Packaging: Once all the chips on the wafer are deemed functional, they are separated and
assembled into their respective packages. The packages provide protection and electrical
connections to the chips, enabling them to be mounted on printed circuit boards (PCBs).
12. Final Testing: After packaging, the chips undergo final testing to verify their functionality
and performance. Defective chips are discarded, and only fully functional chips are sent for
distribution and integration into electronic devices.

II. Microchip design process & issues in test and verification of complex chips:

Microchip design process:


The microchip design process involves several stages from conceptualization to production. Here
is an overview of the typical steps involved:
1. Specification: In this stage, the requirements and functionality of the microchip are defined.
Designers work closely with stakeholders to understand the application and performance targets.
2. Architecture Design: The chip's high-level architecture is developed, including the selection
of components, interconnections, and overall system design. This stage focuses on defining the
chip's functionality and how different components will interact.
3. RTL Design: The Register Transfer Level (RTL) design is created, describing the chip's
behavior using hardware description languages like Verilog or VHDL. RTL design forms the
basis for later stages.
4. Functional Verification: The RTL design is extensively tested to ensure it behaves as
intended. Various verification techniques, such as simulation, formal verification, and hardware
emulation, are employed to catch design bugs and issues.
5. Synthesis and Physical Design: The RTL code is synthesized into a gate-level netlist, which
represents the chip's physical implementation. The physical design phase involves floor
planning, placement, routing, and optimization to meet timing and area constraints.
6. Design for Testability (DFT): Techniques like scan chains, built-in self-test (BIST)
structures, and boundary scan are added to make the chip more testable during manufacturing
and in the field.
7. Manufacturing: The final design is sent to a semiconductor foundry for fabrication. This
process involves photolithography and other steps to create the actual silicon chip.
8. Testing and Quality Assurance: After manufacturing, the chips undergo various testing
methodologies to ensure they meet the desired specifications and are free from defects.

Issues in Test and Verification of Complex Chips:

1. Complexity: As chips and systems-on-chip (SoCs) become more complex, the verification
effort increases exponentially. Ensuring all possible scenarios and corner cases are covered in
testing becomes challenging.
2. Verification Time and Cost: With the growing complexity, the time and cost required for
functional verification can become substantial.
3. Integration Testing: Integrating various IP cores and subsystems onto a single chip or SoC
introduces new challenges in testing the interactions between these components.
4. Power and Clock Domains: Handling multiple power domains and clock domains in a chip
requires careful verification to ensure proper functionality and minimize power consumption.
5. Performance Verification: Ensuring that the chip operates at the desired performance levels
under all conditions and workloads is crucial, especially for high-performance chips.
6. Test Generation: Generating effective and efficient test patterns to cover various fault models
is a non-trivial task, especially for complex designs.
7. Debugging: Identifying and debugging issues in large and complex designs can be time-
consuming and requires advanced debugging techniques.

III. Embedded cores and SOCs:


Embedded SoC – System on Chip.
• An SoC is essentially an integrated circuit (IC) that takes a single platform and integrates an entire electronic or computer system onto it.
• It is, exactly as its name suggests, an entire system on a single chip.
• Unlike a system that assembles several chips and components onto a circuit board, the SoC fabricates all the necessary circuits into one unit.
• The components that an SoC generally incorporates include a central processing unit (CPU), input and output ports (I/O ports), internal memory, and analog input and output blocks, among other things.
• Depending on the kind of system that has been reduced to the size of a chip, it can perform a
variety of functions including signal processing, wireless communication, artificial
intelligence and more.
An SoC usually contains various components such as:
• Operating System
• Utility software applications
• Voltage Regulators and power management circuits
• Timing sources such as phase lock loop control systems or oscillators
• A Microprocessor, Microcontroller or Digital Signal Processor
• Peripherals such as real-time clocks, counter timers and power-on-reset generators
• External interfaces such as USB, FireWire, Ethernet, Universal Asynchronous Receiver-
Transmitter or Serial Peripheral Interface Bus.
• Analog Interfaces such as DAC and ADC
• RAM and ROM
SoC Building Blocks:

• To begin with, a system on chip must have a processor at its core which will define its
functions.
• Normally, an SoC has multiple processor cores.
• It can be a microcontroller, a microprocessor, a digital signal processor, or an application
specific instruction set processor.
• Secondly, the chip must have its memories which will allow it to perform computation. It
may have RAM, ROM, EEPROM, or even a flash memory.
• An SoC must possess external interfaces, which help it comply with industry-standard communication protocols such as USB, Ethernet, and HDMI.
• It can also incorporate wireless technology and involve protocols pertaining to WiFi and
Bluetooth.
• It will also need a GPU, or Graphics Processing Unit, in order to help visualize the interface.
• Other components an SoC may include are voltage regulators, phase-locked loop control systems and oscillators, clocks and timers, analog-to-digital and digital-to-analog converters, etc.
• An internal interface bus or network connects all the individual blocks.
• Ultimately, the elements incorporated in an SoC correspond to the function it is supposed to perform.

Advantages of SoC:
• Power saving, space saving and cost reduction.
• SoCs are also much more efficient as systems because their performance is maximized per watt.
• Minimized latency.
• An SoC has greater design security.

Applications of SoC:

1. Speech Signal Processing


2. Image and Video Signal Processing
3. USB PC Interface
4. Computer Peripherals – Printer Controller, LCD Monitor Controller, DVD Controller, etc.
5. Data Communication
6. Wire Line Communication – e.g., Gigabit Ethernet
7. Wireless Communication – Bluetooth, WLAN, 2G/3G/4G/5G, WiMAX, etc.

IV. Fault models:


A model for how faults occur and their impact on circuits is called a fault model. A fault model for every circuit is proposed before actual testing.
Two popular fault models are,
i. Stuck-at faults
ii. Stuck-open or stuck-short faults.
The stuck-at fault model is the most popular and simplest model. The stuck-open or stuck-short model is closer to the real behavior of the circuit, but it is more difficult to incorporate.
i. Stuck-at Faults:
• The stuck-at fault model assigns a fixed value (0 or 1) to a signal line in the circuit, which is an input or output of a gate or flip-flop. The most popular form is the single stuck-at fault, with two faults per line (stuck-at-1 and stuck-at-0).
• The properties that a fault model must possess are:
1. The model must correspond to real faults.
2. The model must have adequate granularity.
3. The model must be accountable.
4. The model must be automated.
• Although the stuck-at fault model corresponds to real faults, it does not represent all possible faults.
• Granularity means the resolution at which a model represents faults. A fault model with fine granularity is more useful than a model with coarse granularity.
• The fault models can be deduced by using basic circuits such as AND, OR, Inverter and
tri-state buffer.
• Fig. 4.4.2 shows how S-A-0 or S-A-1 fault occurs.
• Stuck-at-faults mostly occur due to shorting of gate oxide (NMOS gate to GND or PMOS
gate to VDD) or metal-to-metal shorts.
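In simulation, a single stuck-at fault can be injected on an internal net and its effect observed at the outputs. The following is a minimal sketch using Verilog's force/release; the module some_gate and its internal net n1 are hypothetical names introduced only for illustration.

module tb_stuck_at;
  reg  a, b;
  wire y;

  // hypothetical 2-input device under test with an internal net n1
  some_gate dut (.a(a), .b(b), .y(y));

  initial begin
    force dut.n1 = 1'b0;      // inject an n1 stuck-at-0 fault
    a = 1; b = 1; #10;
    $display("with n1 stuck-at-0: a=%b b=%b y=%b", a, b, y);
    release dut.n1;           // remove the injected fault
  end
endmodule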
ii. Stuck-Open and Stuck-Short Faults:
• Stuck-open and stuck-short faults are usually referred to as transistor faults. Physical faults that occur at the manufacturing level are called defects. The electrical or logic-level faults caused by physical defects are referred to as defect-oriented faults, such as open links, improper semiconductor doping, bridging faults, etc.
• Consider a MOS transistor modelled as an ideal switch. The defect may be the switch being permanently open or permanently shorted. This fault model involves just one transistor being stuck-open or stuck-short. Fig. 4.4.3 shows a CMOS NOR logic circuit to illustrate this model.

• Q1 and Q2 are PMOS transistors and Q3 and Q4 are NMOS transistors. When the gate inputs A and B are '0', transistors Q1 and Q2 conduct and Q3 and Q4 are open. Therefore, when A = B = 0, output C is connected to VDD. Similarly, when A = B = 1, output C is connected to ground, i.e. 0.
• Suppose the fault is Q1 stuck-open. If A = B = 0, then Q1 and Q2 conduct in the fault-free circuit, but only Q2 conducts in the faulty circuit; Q3 and Q4 are open in both circuits. Hence output C is '1' in the good circuit but is floating (neither VDD nor ground) in the faulty circuit.
• The good and faulty states of output C are denoted by Z and Z̄, respectively.
• The output node C has parasitic capacitance. For detecting the fault, it should be ensured that the value of Z is 0. This can be done by preceding the test vector A = B = 0 with an initializing vector A = 1, B = 0. This sets output node C to 0 in the faulty circuit by discharging the node capacitance to ground potential.
• To complete the test, the input is then changed from 10 to 00. This produces an output transition 0 → 1 in the good circuit and 0 → 0 in the faulty circuit.
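The two-vector test described above can be illustrated at switch level. Below is a minimal, assumed model of the NOR gate of Fig. 4.4.3 (module and transistor names are illustrative); the FAULTY parameter simply omits Q1 to model Q1 stuck-open, and the trireg net models the charge stored on node C.

module nor2_switch #(parameter FAULTY = 0) (input A, B, output C);
  supply1 vdd;
  supply0 gnd;
  wire    n1;
  trireg  Cnode;               // retains charge when no path conducts

  // Pull-up network: Q1 and Q2 in series (a PMOS conducts on a 0 gate input)
  generate
    if (!FAULTY) begin : g_q1
      pmos Q1 (n1, vdd, A);
    end                        // when FAULTY, Q1 is absent (permanently open)
  endgenerate
  pmos Q2 (Cnode, n1, B);

  // Pull-down network: Q3 and Q4 in parallel (an NMOS conducts on a 1 gate input)
  nmos Q3 (Cnode, gnd, A);
  nmos Q4 (Cnode, gnd, B);

  assign C = Cnode;
endmodule

module tb_stuck_open;
  reg  A, B;
  wire C_good, C_bad;

  nor2_switch #(.FAULTY(0)) good (.A(A), .B(B), .C(C_good));
  nor2_switch #(.FAULTY(1)) bad  (.A(A), .B(B), .C(C_bad));

  initial begin
    // Initializing vector A=1, B=0 discharges node C to 0 in both circuits
    A = 1; B = 0; #10;
    // Test vector A=0, B=0: the good circuit pulls C to 1; the faulty
    // circuit has no pull-up path, so C keeps its stored 0
    A = 0; B = 0; #10;
    $display("good C = %b, faulty C = %b", C_good, C_bad);
  end
endmodule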

V. Test coding:
Test coding involves writing test patterns to test the functionality and detect faults in a chip.
Various methods and languages can be used for test coding, such as:
1. ATPG (Automatic Test Pattern Generation): ATPG tools automatically generate test
patterns based on fault models.
2. BIST (Built-In Self-Test): BIST structures are embedded within the chip to facilitate self-
testing.
3. Scan Chains: These enable efficient testing by serially scanning in test data and capturing
results.
4. Test benches: Test benches are used for simulation-based verification, where test stimuli are
applied to the design, and responses are analyzed.
5. High-Level Test Languages: Some specialized languages and tools are used for high-level
test descriptions, which can be automatically converted to lower-level test patterns.
VI. ASIC Design Flow:
The ASIC design flow describes the sequence of steps to be followed in ASIC design. These steps are given below:
Steps:
1. Design Entry:
Enter the design into ASIC design system using VHDL (or) Verilog.
2. Logic Synthesis:
Create netlist using VHDL (or) Verilog tool.
Netlist is a description of logic cells and their connections.
3. System Partitioning:
A large system is divided into ASIC sized pieces.
4. Pre layout Simulation:
It is used to check whether the design functions (works) correctly.
5. Floor Planning:
In this step, the blocks of netlist are arranged on the chip.
6. Placement:
It is used to decide the location of cells in a block.
7. Routing:
It makes the connections between cells and blocks.
8. Extraction:
It is used to find the resistance and capacitance of the interconnect.
9. Post layout Simulation:
It checks whether the design still works with the added loads of the interconnect.
Step 1 to Step 4 are known as logic design.
Step 5 to Step 9 are known as physical design.

VII. Introduction to ASICs:

ASIC (Application Specific Integrated Circuit) is an Integrated Circuit (IC) designed to perform a specific function for a specific application.
• Levels of integration:
The levels of integration are:
✓ SSI - Small scale integration
✓ MSI - Medium scale integration
✓ LSI - Large scale integration
✓ VLSI - Very large scale integration
✓ ULSI - Ultra large scale integration
• Implementation technology
• The implementation technologies used in ASIC are:
✓ TTL – Transistor Transistor Logic
✓ ECL – Emitter Coupled Logic
✓ MOS – Metal Oxide Semiconductor (NMOS, CMOS)
Types of ASICs:
• The ASICs are classified as follows:
I. Full-Custom ASICs
II. Semi-custom ASICs
a. Standard-Cell–Based ASICs (CBIC)
b. Gate-Array–Based ASICs (MPGA)
i. Channeled Gate Array
ii. Channelless Gate Array
iii. Structured Gate Array
I. Full-Custom ASICs:
In a full-custom ASIC, the engineer designs all the logic cells in the IC, so this technique is known as the full-custom ASIC technique.
The engineer may use mixed analog and digital techniques to manufacture the IC. All the logic cells are specifically designed for one ASIC.
Uses of bipolar technology:
• The characteristics of bipolar components in the same IC are matched very well.
• But the characteristics of components in different ICs are not matched as well.
Uses of CMOS:
• This is widely used technology to manufacture IC.
• Analog and digital functions can be mixed and integrated in the same IC, for which CMOS technology is well suited.
• Designers give importance to performance.
• When large volume is manufactured, overall cost will be reduced.
• In super computer, quality is important so this design is implemented.
• All mask layers are customized in a full-custom ASIC
• Generally, the designer lays out all cells by hand
• Some automatic placement and routing may be done
• Critical (timing) paths are usually laid out completely by hand
• Full-custom design offers the highest performance and lowest part cost (smallest die
size) for a given design.
• The disadvantages of full-custom design include increased design time, complexity,
design expense, and highest risk.
• Microprocessors (strategic silicon) were exclusively full-custom, but designers are
increasingly turning to semicustom ASIC techniques in this area as well.

II. Semi-custom ASICs – Design:

a. Standard cell based design:


• Standard cells refer to logic elements such as AND gates, OR gates, multiplexers, flip-flops, NOR gates, etc.
• Standard cells can be used with larger predefined cells.
• This approach standardizes the design entry level at the logic gate.
• A design is generated automatically from an HDL description.
• Then the layout is created. In standard-cell design, cells are placed in rows, and the rows are separated by routing channels.
• All cells in the library have identical heights; the widths of the cells can vary to accommodate variations in complexity between cells.
• A substantial fraction of the area is allotted for signal routing.
• The minimization of interconnect overhead is the most important goal of standard-cell placement and routing tools. It is aided by feedthrough cells.
• By using feedthrough cells, cells in different rows can be connected through vertical routing, so wire length is reduced.
Semi-custom ASICs – CBIC:
• CBIC means Cell Based ASICs.
• All the mask layers of CBIC are customized.
• It allows megacells (SRAM, MPEG decoder, etc.) to be placed in the same IC with standard cells (adders, gates, etc.).
• Mega cells are supplied by ASIC Company.
• Data path logic means the logic that operates on multiple signals across a data bus.
• Some of the ASIC library companies provide data path compiler which automatically
generate data path logic.
• Data path library contains cells like adders, multiplexer, simple ALUs.
• ASIC Library Company provide data book which has functional description.

Features:
• It is a cell-based ASIC (CBIC, pronounced "sea-bick").
• It has standard cells. A standard cell is a logic element built using CMOS technology.
• Possibly megacells, megafunctions, full-custom blocks, system-level macros (SLMs), fixed blocks, cores, or Functional Standard Blocks (FSBs)
• All mask layers are customized - transistors and interconnect
• Automated buffer sizing, placement and routing; custom blocks can be embedded.
• A “wall” of standard cells forms a flexible block.
b. Gate Array Based ASICs:

• Gate array is known as GA.


• In GA based ASIC, the transistors are predefined on the silicon wafer.
• Base array: the predefined pattern of transistors on a gate array is known as the base array.
• Base cell: the small element which is replicated to make the base array is known as
base cell or primitive cell.
• Masked Gate array: Interconnect is defined by using top few layers of metal.
• This type of gate array is known as masked gate array.
• Gate array library is provided by ASIC Company.
• The designer can choose the predefined logic cells from a gate array library. These
logic cells are known as Macros.
• The cell layout is the same for each logic cell, but the interconnect is customized.
• It is also called a pre-diffused array because the transistors are diffused first.
Types of MPGAs (Mask Programmable Gate Arrays):
✓ Channeled Gate Array
✓ Channel less Gate Array
✓ Structured Gate Array

(i) Channeled Gate Array:


• It is similar to CBIC (cell based ASIC).
• In the both types, rows of cells are separated by channels. These
channels are used for interconnect.
• Space between rows of cells is fixed in a channeled gate array. But space
between rows of cells may be adjusted in a CBIC.
Features:
✓ Only interconnect is customized.
✓ The interconnect uses predefined spaces between rows of base cells.
✓ Manufacturing lead time is between two days and two weeks.

Figure: Channeled Gate Array


(ii) Channel less Gate Array:
• The channelless gate array is also called a channel-free GA.
• In this array, there is no predefined space between rows for routing.
• Top few layers are used for defining interconnect connections.
• There are no predefined areas set aside for routing - routing is over the top of
the gate-array devices.
• Achievable logic density is higher than for channeled gate arrays.
• Each logic cell or macro in a gate-array library is predesigned using fixed
tiles of transistors known as the gate-array base cell (or just base cell).

Figure: Channel less Gate Array


• Channeled and channelless gate arrays may use either gate isolation or oxide isolation.
• The transistors on a gate array are isolated from one another either with thick field oxide or by using other transistors that are wired permanently off.
(iii) Structured Gate Array:
• The structured gate array is also called an embedded gate array, master slice, or master image gate array.
• It combines some of the features of CBIC and Masked gate array (MGA).
• In this array, some of the area is used for implementation of specially designed
embedded block.
• Embedded area either can contain a different base cell that is more suitable for
building memory cells, or a complete circuit block, such as a microcontroller.
Special features:
o Only the interconnect is customized
o Custom block can be embedded
o Manufacturing lead time is 2 days to 2 weeks
o Area efficiency is increased
o Performance is increased with low cost
Disadvantage: the embedded function is fixed.

• For example, if the embedded block has a 32K-bit memory but the customer needs only 16K bits, the remaining 16K bits of memory are wasted.
VIII. Introduction to test benches:
A test bench is a model that is used to exercise and verify the correctness of a hardware model.
It is used for:
• Generating stimulus for simulation
• Applying the stimulus to the entity under test and to collect the output.
• Comparing obtained output with expected output.
Verilog models are tested through simulation. For small designs, it may be practical to manually
apply inputs to a simulator and visually check for the correct outputs. For larger designs, this
procedure is usually automated with a test bench.
The test bench uses nonsynthesizable system calls to read a file, apply the test vectors to the
device under test (DUT), check the results, and report.
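A minimal sketch of such a file-driven test bench is given below, assuming a 2-input, 1-output DUT named dut_and and a vector file vectors.txt (both hypothetical); each line of the file holds the inputs a, b and the expected output.

module tb_file_driven;
  reg  [2:0] vectors [0:15];   // each entry packs {a, b, expected_y}
  reg        a, b, expected;
  wire       y;
  integer    i, errors = 0;

  dut_and dut (.a(a), .b(b), .y(y));   // hypothetical device under test

  initial begin
    $readmemb("vectors.txt", vectors); // non-synthesizable system task
    for (i = 0; i < 16; i = i + 1) begin
      {a, b, expected} = vectors[i];
      #10;                             // allow outputs to settle
      if (y !== expected) begin
        errors = errors + 1;
        $display("vector %0d failed: a=%b b=%b y=%b expected=%b",
                 i, a, b, y, expected);
      end
    end
    $display("%0d errors detected", errors);
    $finish;
  end
endmodule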
The test bench is used to verify the functionality of the design. It allows the designer to verify the functionality of the design at each step in the HDL synthesis-based methodology. When the designer makes a small change to fix an error, the change can be tested to make sure that it didn't affect other parts of the design.

As mentioned in the block diagram, a test bench is at the highest level in the hierarchy of the
design. The test bench instantiates the design under test (DUT). The test bench provides the
necessary input stimulus to the DUT and examines the output from the DUT.
The test bench format is given below.
entity testbench is
end;

architecture tb of testbench is
  component test
    port (port names and modes);
  end component;
  -- local signal declarations
begin
  -- instantiate the component under test
  DUT : test port map (port associations);
end tb;
Waveform Generation
Two methods can be followed to generate stimulus values.
✓ To create waveforms and apply stimulus at discrete time intervals.
✓ To generate stimulus based on the state of the entity or output of the entity.
Two types of waveforms can be obtained namely repetitive pattern and vector pattern.
Repetitive Pattern
If clk <= not clk after 10 ns; then the following waveform is created. Here the ON period and OFF period are the same.

The same waveform can also be generated with the following process:

process
  constant OFF_PERIOD : TIME := 10 ns;
  constant ON_PERIOD  : TIME := 5 ns;
begin
  wait for OFF_PERIOD;
  clk <= '1';
  wait for ON_PERIOD;
  clk <= '0';
end process;
If we want to stop the process from generating more events, then we can use a wait statement.

Example:
if NOW > 100 ns then
  wait;
end if;
clk <= '0', '1' after 50 ns,
       '0' after 100 ns, '1' after 150 ns;
For this format, the corresponding waveform is given below.

Non-Repetitive Waveform
A non-repetitive waveform can be generated by using the following statement.
clk <= '0', '1' after 50 ns, '0' after 80 ns, '1' after 100 ns;
Its waveform is given below.

IX. Writing test benches in Verilog HDL:


Test bench for a counter (the design under test, shown here in VHDL):

library ieee;
use ieee.std_logic_1164.all;

entity counter is
  port (reset, clk : in std_logic;
        count : out integer range 0 to 10);
end counter;

architecture counter of counter is
begin
  process (reset, clk)
    variable counting : integer range 0 to 10;
  begin
    if reset = '1' then
      counting := 0;
    elsif clk'event and clk = '1' then
      if counting = 10 then
        counting := 0;
      else
        counting := counting + 1;
      end if;
    end if;
    count <= counting;
  end process;
end counter;
Test bench creation:

library ieee;
use ieee.std_logic_1164.all;

entity testbench is
end testbench;

architecture testbench of testbench is
  signal reset, clk : std_logic := '0';
  signal count : integer range 0 to 10;

  component counter
    port (reset : in std_logic;
          clk   : in std_logic;
          count : out integer range 0 to 10);
  end component;
begin
  Inst_counter : counter port map (
    reset => reset,
    clk   => clk,
    count => count
  );

  stimulus : process
  begin
    reset <= '1';
    wait for 50 ns;
    reset <= '0';
    wait;
  end process;

  clock_gen : process
  begin
    clk <= '0';
    wait for 50 ns;
    clk <= '1';
    wait for 50 ns;
  end process;
end testbench;
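Since this section is titled "Writing test benches in Verilog HDL", an equivalent sketch of the counter and its test bench in Verilog is given below (module and signal names are illustrative; the timing follows the VHDL version above).

module counter (
  input            reset, clk,
  output reg [3:0] count
);
  always @(posedge clk or posedge reset)
    if (reset)            count <= 0;
    else if (count == 10) count <= 0;
    else                  count <= count + 1;
endmodule

module tb_counter;
  reg        reset, clk;
  wire [3:0] count;

  counter dut (.reset(reset), .clk(clk), .count(count));

  // clock generation: 50 ns low, 50 ns high
  initial clk = 0;
  always #50 clk = ~clk;

  // stimulus: assert reset, release it, run long enough to see the wrap
  initial begin
    reset = 1;
    #100 reset = 0;
    #2000 $finish;
  end

  initial
    $monitor("time=%0t reset=%b count=%0d", $time, reset, count);
endmodule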

X. Automatic test pattern generation:

A physical fault can be transformed into a logical fault model that allows one to develop sets of
test vectors. Many techniques have been developed for testing CMOS VLSI chips that use
common circuit design styles. Most Automatic Test Pattern Generation (ATPG) approaches have
been based on simulation. A five-valued logic form is commonly used to implement test generation algorithms (more advanced algorithms use up to 10-valued logic). This consists of the states 1, 0, D, D̄, and X, where 0 and 1 represent logical zero and logical one respectively, D represents a logic 1 in a good machine and a logic 0 in a faulty machine, and D̄ represents a logic 0 in a good machine and a logic 1 in a faulty machine. X represents the don't-care state.
Suppose we want to test a gate that is embedded in a large logic circuit. We can use the existing circuitry to create a specific path from the location of the gate being checked to an observable output. This technique is known as path sensitization, and the process of creating the path is known as propagation.

In Fig. 5.5, we want to find the inputs to test for an SA0 fault at c. Path sensitization is performed using the two steps given below.

Step 1: Forward Drive
Note: if we want to test any node for an SA0 fault, that node must be set to 1.
We want to test for an SA0 fault, so c should be equal to 1 (c = 1). If c = 1, then we want to get the output of A2 as 1 (propagate c through A2).
With c = 1, we want Z2 = 1, so b = 1.
To get output y, Z2 = 1 and Z1 = 0; this is propagated through OR gate 1 (Fig. 5.5).

Step 2: Backward Trace
c = 1 and b = 0.
If b = 0, then in gate A1, a = 0 or 1. The test vector for the SA0 fault is given below:
{a, b, c} = {x, 0, 1}
x is don't care and may be 0 or 1.

Test the h-node for an SA0 fault. The sensitization steps are given below.
Steps:

We want to test the h-node for an SA0 fault, so h = 1 (h should be equal to 1).
We want to get y = 1 (to propagate h); refer to Fig. 5.6.
So, e = 1 (since y = e·h).
If h = 1, then f = 1 and g = 1 (since h = f·g).
If f = 1, then a = 1 and b = 1 (since f = a·b).
If g = 1, then c = 1 and d = 0, or c = 0 and d = 1.
Now, we can write the test vector {a, b, c, d, e} = {1, 1, 1, 0, 1} or {1, 1, 0, 1, 1}.

Example 3:
If we want to check the h-node for an SA1 fault, the steps are given below, for the same Fig. 5.6.
Steps:
h = 0 (h should be equal to 0 for testing an SA1 fault).
y = 0 in the good circuit (since y = h·e), and e = 1 so that h propagates to y.
If h = 0, then f = 0 or g = 0 (since h = f·g).
If f = 0, then c and d are don't care (c = x, d = x); f = 0 requires a = 0 and b = 1, or a = 1 and b = 0, or a = 0 and b = 0.
If f = 1, then a = 1 and b = 1, and g must be 0, so c = 0 and d = 0.
Now, we can write test vector set as
{a, b, c, d, e} = {0, 1, x, x, 1}
(or)
{1, 0, x, x, 1}
(or)
{0, 0, x, x, 1}
(or)
{1, 1, 0, 0, 1}
D Algorithm

This figure is the same as Fig. 5.6, but in Fig. 5.7 a D value is introduced on the h-node, which is the node to be tested.

D Algorithm - objectives:
1. Propagate the D on the node to one or more primary outputs (y in Fig. 5.7).
2. Set node h to state D via a set of primary inputs (a, b, c, d, e).

The D-algorithm is also known as DALG. It starts by propagating the D value on an internal node (h) to a primary output. This is known as the propagation phase.
Testing node-h:
We will get test vector
{a, b, c, d, e} = {0, 1, x, x, 1}
(or)
{1, 0, x, x, 1}
(or)
{0, 0, x, x, 1}
(or)
{1, 1, 0, 0, 1} (Refer path sensitization)
Testing node f:
Test for SA0: set f = 1.
To propagate f, we want h = 1; since h = f·g, we need g = 1.
If f = 1, then a = 1 and b = 1 (since f = a·b).
If g = 1, then c = 1 and d = 0, or c = 0 and d = 1.
e = 1 to propagate the h-node to y.
Now, we can write the test vector for f (SA0 fault):
{a, b, c, d, e} = {1, 1, 1, 0, 1} or {1, 1, 0, 1, 1}

Testing f-node for SA1 fault:


For an SA1 fault, f = 0 (condition); then, as in the previous method, we get
{a, b, c, d, e} = {0, 0, 0, 1, 1} or {0, 0, 1, 0, 1}
To test the g-node:
For an SA0 fault, we get
{a, b, c, d, e} = {1, 1, 0, 1, 1} or {1, 1, 1, 0, 1} or {1, 1, 1, 1, 1}
For an SA1 fault,
{a, b, c, d, e} = {1, 1, 0, 0, 1}
Now, we want to collapse the vectors into the least set that covers all nodes. That set is
{a, b, c, d, e} = {1, 1, 0, 1, 1}, {0, 0, 1, 0, 1}, {1, 1, 0, 0, 1}
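These vectors can be checked by simulating a good and a faulty copy of the circuit side by side. The sketch below assumes the circuit of Fig. 5.6 is f = a·b, g = c + d (OR), h = f·g, y = h·e (the figure itself is not reproduced here, so this structure is inferred from the derivations above); the parameter H_SA0 forces h to 0 to model the h stuck-at-0 fault.

module fig5_6 #(parameter H_SA0 = 0) (input a, b, c, d, e, output y);
  wire f = a & b;
  wire g = c | d;
  wire h = H_SA0 ? 1'b0 : (f & g);   // stuck-at-0 injected when H_SA0 = 1
  assign y = h & e;
endmodule

module tb_atpg;
  reg a, b, c, d, e;
  wire y_good, y_faulty;

  fig5_6 #(.H_SA0(0)) good_ckt   (a, b, c, d, e, y_good);
  fig5_6 #(.H_SA0(1)) faulty_ckt (a, b, c, d, e, y_faulty);

  initial begin
    // test vector {a,b,c,d,e} = {1,1,1,0,1} for h stuck-at-0
    {a, b, c, d, e} = 5'b11101; #10;
    $display("11101: good y=%b faulty y=%b (differ => fault detected)",
             y_good, y_faulty);
    // alternative vector {1,1,0,1,1}
    {a, b, c, d, e} = 5'b11011; #10;
    $display("11011: good y=%b faulty y=%b", y_good, y_faulty);
  end
endmodule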

XI. Design for testability:


The keys to designing circuits that are testable are:
• controllability
• observability
i. Controllability is the ability to set (to 1) and reset (to 0) every node internal to the circuit.
ii. Observability is the ability to observe, either directly or indirectly, the state of any node in the
circuit.
iii. Good observability and controllability reduce the cost of manufacturing test because they
allow high fault coverage with relatively few test vectors.
These techniques may be categorized as:
1. Ad hoc testing
2. Scan-based approaches
3. Built-in self-test (BIST)
4. IDDQ Testing
5. Design for Manufacturability

Application of these techniques:


• Random logic (multilevel standard cell, two-level PLA)
• Regular logic arrays (data paths)
• Memories (RAM, ROM, CAM)

1. Ad hoc testing:
Ad hoc test techniques, are collections of ideas aimed at reducing the combinational explosion of
testing. It is only useful for small designs ATPG, and BIST are not available. A complete scan-
based testing methodology is recommended for all digital circuits. Common techniques for ad
hoc testing involve:
• Partitioning large sequential circuits
• Adding test points
• Adding multiplexers
• Providing for easy state reset

A technique classified in this category is the use of the bus in a bus-oriented system for test purposes. Each register is made loadable from the bus and capable of being driven onto the bus, so the internal logic values that exist on a data bus can be enabled onto the bus for testing purposes.
Frequently, multiplexers can be used to provide alternative signal paths during testing. In
CMOS, transmission gate multiplexers provide low area and delay overhead.
Any design should always have a method of resetting the internal state of the chip within a single cycle, or at most a few cycles. Apart from making testing easier, this also makes simulation faster, as only a few cycles are required to initialize the chip.

2. Scan-based approaches:
The scan-design strategy for testing is to provide observability and controllability at each register. In designs with scan, the registers operate in one of two modes. In normal mode, they behave as expected. In scan mode, they are connected to form a giant shift register called a scan chain spanning the whole chip. By applying N clock pulses in scan mode, all N bits of state in the system can be shifted out and new N bits of state can be shifted in. Therefore, scan mode gives easy observability and controllability of every register in the system. The scan register is a D flip-flop
preceded by a multiplexer.
• When the SCAN signal is deasserted, the register behaves as a conventional register,
storing data on the D input.
• When SCAN is asserted, the data is loaded from the SI pin, which is connected in shift
register fashion to the previous register Q output in the scan chain.
Modern scan is based on the use of scan registers, as shown in Figure 12.13.
For the circuit shown, to load the scan chain, SCAN is asserted and CLK is pulsed eight times to
load the first two ranks of 4-bit registers with data. SCAN is deasserted and CLK is asserted for
one cycle to operate the circuit normally with predefined inputs. SCAN is then reasserted and
CLK asserted eight times to read the stored data out. At the same time, the new register contents
can be shifted in for the next test.
Testing proceeds in this manner of serially clocking the data through the scan register to the right
point in the circuit, running a single system clock cycle and serially clocking the data out for
observation. In this scheme, every input to the combinational block can be controlled and every
output can be observed. In addition, running a random pattern of 1's and 0's through the scan
chain can test the chain itself.
Test generation for this type of test architecture can be highly automated. ATPG techniques can be used for the combinational blocks and, as mentioned, the scan chain is easily tested.
Disadvantage:
The area and delay impact of the extra multiplexer in the scan register.
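A minimal sketch of the mux-D scan register described above, and of four such cells chained into a small scan chain, is given below (the signal names SCAN, SI and SO follow the text; everything else is illustrative).

module scan_ff (
  input  wire clk,
  input  wire scan,   // SCAN: 0 = normal mode, 1 = scan (shift) mode
  input  wire d,      // functional D input
  input  wire si,     // scan input, driven by the previous register's Q
  output reg  q
);
  always @(posedge clk)
    q <= scan ? si : d;          // multiplexer in front of a D flip-flop
endmodule

module scan_chain4 (
  input  wire       clk, scan,
  input  wire [3:0] d,
  input  wire       si,          // chain scan-in
  output wire [3:0] q,
  output wire       so           // chain scan-out
);
  scan_ff b0 (.clk(clk), .scan(scan), .d(d[0]), .si(si),   .q(q[0]));
  scan_ff b1 (.clk(clk), .scan(scan), .d(d[1]), .si(q[0]), .q(q[1]));
  scan_ff b2 (.clk(clk), .scan(scan), .d(d[2]), .si(q[1]), .q(q[2]));
  scan_ff b3 (.clk(clk), .scan(scan), .d(d[3]), .si(q[2]), .q(q[3]));
  assign so = q[3];
endmodule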

i. Level Sensitive Scan Design (LSSD):


This approach is based on two concepts. First, that the circuit is level sensitive. The second is
that each register may be converted to a serial shift register. A logic system is level-sensitive if,
and only if, the steady state response to any allowed input state change is independent of the
circuit and wire delays within the system.
Figure 14.22(a) shows a typical LSSD scan system. An expanded view is shown in Figure 14.22(b). The first rank of SRLs has inputs driven from a preceding stage and has outputs QA1, QA2 and QA3. These outputs feed a block of combinational logic. The output of this logic block feeds a second rank of SRLs with outputs QB1, QB2 and QB3.
Initially, the shift-clock and C2 are clocked three times to shift data into the first rank of SRLs (QA1 to QA3). C1 is asserted and then C2 is asserted, clocking the output of the logic block into the second rank of SRLs (QB1 to QB3). Shift-clock and C2 are then clocked three times to shift QB1, QB2 and QB3 out via the serial-data out line. Testing proceeds in this manner of serially clocking the data through the SRLs to the right point in the circuit, running a single 'system'
clock cycle and serially clocking the data out for observation. In this scheme, every input to the
combinational block may be controlled and every output may be observed. In addition, running a
serial sequence of 1s and 0s (such as 110010) through the SRLs can test them.

ii. Serial scan:


It has become difficult to provide a hazard-free latching scheme with level-sensitive scan. Faster clock speeds and the drive for smaller overhead in the registers have led to simplification of the SRL. A schematic for a commonly used CMOS edge-sensitive scan register is shown in Figure 14.23.
A MUX is added before the master latch in a conventional D register. TE is the test enable pin,
and TI is the test input pin. When TE is enabled, TI is clocked into the register by the rising
edge of clock.

iii. Parallel scan:


An extension of serial scan is called random-access or parallel scan. The basic structure of
parallel scan is shown in Figure 14.26.

Each register in the design is arranged on an imaginary (or real) grid where registers on common
rows receive common data lines and registers in common columns receive common read- and
write-control signals. In the figure, an array of 2-by-2 registers is shown. The D and Q signals
of the registers are connected to the normal circuit connections. Any register output may be
observed by enabling the appropriate column read line and setting the appropriate address on an
output data multiplexer. Similarly, data may be written to any register.

3. Built-in self-test (BIST):


One method of testing a module is to use signature analysis or cyclic redundancy checking.
This involves using a pseudo-random sequence generator (PRSG) to produce the input signals
for a section of combinational circuitry and a signature analyzer to observe the output signals.
A PRSG is defined by a polynomial of some length n. It is constructed from a linear feedback shift register (LFSR), which in turn is made of n flip-flops connected in a serial fashion.
A signature analyzer receives successive outputs of a combinational logic block and produces a
syndrome that is a function of these outputs. The syndrome is reset to 0, and then XORed with
the output on each cycle.
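A minimal sketch of a 4-bit PRSG (LFSR) and a matching serial signature register is given below; the feedback polynomial x^4 + x^3 + 1 and the single-input form of the signature analyzer are assumptions made for illustration only.

module lfsr4 (
  input  wire       clk, reset,
  output reg  [3:0] q
);
  always @(posedge clk)
    if (reset) q <= 4'b0001;                 // any non-zero seed
    else       q <= {q[2:0], q[3] ^ q[2]};   // shift with XOR feedback
endmodule

module signature_reg4 (
  input  wire       clk, reset,
  input  wire       din,        // successive outputs of the logic under test
  output reg  [3:0] syndrome
);
  always @(posedge clk)
    if (reset) syndrome <= 4'b0000;                           // syndrome reset to 0
    else       syndrome <= {syndrome[2:0],
                            syndrome[3] ^ syndrome[2] ^ din}; // fold output into state
endmodule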
i. BILBO:
The combination of signature analysis and the scan technique creates a structure known as BILBO (Built-In Logic Block Observation), used for BIST (Built-In Self-Test). The 3-bit
BILBO register shown in Figure 12.22 is a scannable, resettable register that also can serve as a
pattern generator and signature analyzer.
C[1:0] specifies the mode of operation. In the reset mode (10), all the flip-flops are synchronously initialized to 0. In normal mode (11), the flip-flops behave normally with their D input and Q output. In scan mode (00), the flip-flops are configured as a 3-bit shift register between SI and SO. Note that there is an inversion between each stage. In test mode (01), the register behaves as a pseudo-random sequence generator or signature analyzer. If all the D inputs are held low, the Q outputs loop through a pseudo-random bit sequence, which can serve as the input to the combinational logic. If the D inputs are taken from the combinational logic output, they are XORed with the existing state to produce the syndrome. In summary, BIST is performed by first
resetting the syndrome in the output register. Then both registers are placed in the test mode to
produce the pseudo-random inputs and calculate the syndrome. Finally, the syndrome is shifted
out through the scan chain.

ii. Memory Self-test:


Testing large memories on a production tester can be expensive because they contain so many bits and thus require so many test vectors. Embedding self-test circuits with the memories can reduce the number of external test vectors that have to be run. A typical read/write memory (RAM) test program for a memory with M addresses might be as follows:
FOR i = 0 to M-1: write(~data)
FOR i = 0 to M-1: read(~data) then write(data)
FOR i = 0 to M-1: read(data) then write(~data)
FOR i = M-1 to 0: read(~data) then write(data)
FOR i = M-1 to 0: read(data) then write(~data)
where data is 1 and ~data is 0 for a single-bit memory or a selected set of patterns for an n-bit
word. For an 8-bit memory, data might be x00, x55, xAA, and xFF. These patterns test writing
all zeroes, all ones, and alternating ones and zeroes. An address counter, some multiplexers, and
a simple state machine result in a low-overhead self-test structure for read/write memories.
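A minimal behavioural sketch of this kind of march-style test is given below; the memory is modelled as a simple Verilog array inside the test bench, and the size, data pattern and loop structure are illustrative assumptions rather than the exact program listed above.

module ram_march_tb;
  localparam M = 256;
  reg [7:0] mem [0:M-1];        // behavioural stand-in for the RAM under test
  reg [7:0] data = 8'h55;       // alternating ones and zeroes pattern
  integer   i, errors = 0;

  initial begin
    for (i = 0; i < M; i = i + 1) mem[i] = ~data;            // write ~data
    for (i = 0; i < M; i = i + 1) begin                      // read ~data, write data
      if (mem[i] !== ~data) errors = errors + 1;
      mem[i] = data;
    end
    for (i = 0; i < M; i = i + 1) begin                      // read data, write ~data
      if (mem[i] !== data) errors = errors + 1;
      mem[i] = ~data;
    end
    for (i = M - 1; i >= 0; i = i - 1) begin                 // descending: read ~data, write data
      if (mem[i] !== ~data) errors = errors + 1;
      mem[i] = data;
    end
    $display("RAM march test completed with %0d errors", errors);
  end
endmodule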
Advantages of BIST:
1. Low cost.
2. High quality testing.
3. Faster fault detection.
4. Ease of diagnostics.
5. Reduced maintenance and repair costs.

4. IDDQ Testing:
When the signal inputs are stable (not switching), the quiescent leakage current IDDQ can be measured. This is illustrated in Figure 14.30.

Every chip design is found to have a range of 'normal' levels. IDDQ testing is based on the assumption that an abnormal reading of the leakage current indicates a problem on the chip. IDDQ testing is usually performed at the beginning of the testing cycle. If a die fails, it is rejected and no further tests are performed.
The components of a basic IDDQ measurement system are shown in Figure 14.31.

The test chip is modelled as being in parallel with the testing capacitance, Ctest. A power supply
with a value VDD is connected to the chip by a switch that is momentarily closed at time t = 0.
The current IDD is monitored by a buffer (a unity-gain amplifier) and gives the output voltage.
5. Design for Manufacturability:
Circuits can be optimized for manufacturability to increase their yield. This can be done in a
number of different ways.
i. Physical:
At the physical level (i.e., mask level), the yield and hence manufacturability can be improved
by reducing the effect of process defects.
• Increase the spacing between wires where possible; this reduces the chance of a defect causing a short circuit.
• Increase the overlap of layers around contacts and vias; this reduces the chance that a misalignment will cause an aberration in the contact structure.
• Increase the number of vias at wire intersections beyond one if possible; this reduces the chance of a defect causing an open circuit.
Increasingly, design tools are dealing with these kinds of optimizations automatically.
ii. Redundancy:
Redundant structures can be used to compensate for defective components on a chip. For
example, memory arrays are commonly built with extra rows.
iii. Power:
Elevated power can cause failure due to excess current in wires, which in turn can cause metal
migration failures. In addition, high-power devices raise the die temperature, degrading device
performance and, over time, causing device parameter shifts.
iv. Process Spread:
Process simulations can be carried out at different process corners. Monte Carlo analysis provides better modeling of process spread and can help with centering a design within the process variations.
v. Yield Analysis:
When a chip has poor yield or will be manufactured in high volume, dice that fail manufacturing
test can be taken to a laboratory for yield analysis to locate the root cause of the failure. If
particular structures are determined to have caused many of the failures, the layout of the
structures can be redesigned.
XII. Scan design: Test interface and boundary scan:

The IEEE 1149 Boundary scan architecture is shown in Figure 14.33. It provides a standardized
serial scan path through the I/O pins of an IC. At the board level, ICs obeying the standard may
be connected in a variety of series and parallel combinations to enable testing of a complete
board or, possibly, a collection of boards. The standard allows for the following types of tests to be
run in a certified testing framework:
• Connectivity tests between components.
• Sampling and setting chip I/Os.
• Distribution and collection of self-test or built-in test results

i. Test Access Port (TAP):


It represents the interface that needs to be included in an IC to make it capable of being included
in a Boundary Scan Architecture. The port has four or five single-bit connections, as follows:
• TCK (The Test Clock Input)
-used to clock tests into and out of chips.
• TMS (The Test Mode Select)
-used to control test operations.
• TDI (The Test Data Input)
-used to input test data to a chip.
• TDO (The Test Data Output)
-used to output test data from a chip.
• TRST (The Test Reset signal)
-an optional signal
-used to asynchronously reset the TAP controller
-also used if a power-up reset signal is not available in the chip being tested.
The TDO signal is defined as a tristate signal that is only driven when the TAP controller is
outputting test data.
ii. Test architecture:
The basic test architecture that must be implemented on a chip is shown in Figure 14.34.

• the TAP interface pins


• a set of test-data registers to collect data from the chip
• an instruction register to enable test inputs to be applied to the chip
• a TAP controller, which interprets test instructions and controls the flow of data into and
out of the TAP
The data that is input via the TDI port may be fed to one or more test data registers or an
instruction register. An output MUX selects between the instruction register and the data
registers to be output to the tristate TDO pin.

iii. TAP controller:

The TAP Controller is a 16-state FSM that proceeds from state to state based on the TCK and
TMS signals. It provides signals that control the test data registers, and the instruction register.
These include serial-shift clocks and update clocks. The state diagram is shown in Figure 14.35.
The value adjacent to each state transition is that of the TMS signal at the rising edge of TCK.
Starting initially in the Test-logic-Reset state, a low on TMS transitions the FSM to the Run-
Test/Idle mode. Holding TMS high for the next three TCK cycles places the FSM in the select-
DR-scan, select-IR-scan, and finally capture-IR mode. In this mode, two bits are input to the TDI
port and shifted into the instruction register.
Asserting TMS for a cycle allows the instruction register to pause while serially loading to allow
tests to be carried out. Asserting TMS for two cycles allows the FSM to enter the Exit-2-IR mode
on exit from the pause-IR state and then to enter the Update-IR mode where the Instruction
Register is updated with the new IR value. Similar sequencing is used to load the data registers.
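A compact sketch of this 16-state FSM in Verilog is given below; the state names follow the standard, while the state encoding, reset style and port list are illustrative assumptions.

module tap_fsm (
  input  wire       tck, trst_n, tms,
  output reg  [3:0] state
);
  localparam TEST_LOGIC_RESET = 4'd0,  RUN_TEST_IDLE = 4'd1,
             SELECT_DR        = 4'd2,  CAPTURE_DR    = 4'd3,
             SHIFT_DR         = 4'd4,  EXIT1_DR      = 4'd5,
             PAUSE_DR         = 4'd6,  EXIT2_DR      = 4'd7,
             UPDATE_DR        = 4'd8,  SELECT_IR     = 4'd9,
             CAPTURE_IR       = 4'd10, SHIFT_IR      = 4'd11,
             EXIT1_IR         = 4'd12, PAUSE_IR      = 4'd13,
             EXIT2_IR         = 4'd14, UPDATE_IR     = 4'd15;

  always @(posedge tck or negedge trst_n)
    if (!trst_n) state <= TEST_LOGIC_RESET;      // optional TRST forces reset
    else case (state)
      TEST_LOGIC_RESET: state <= tms ? TEST_LOGIC_RESET : RUN_TEST_IDLE;
      RUN_TEST_IDLE:    state <= tms ? SELECT_DR        : RUN_TEST_IDLE;
      SELECT_DR:        state <= tms ? SELECT_IR        : CAPTURE_DR;
      CAPTURE_DR:       state <= tms ? EXIT1_DR         : SHIFT_DR;
      SHIFT_DR:         state <= tms ? EXIT1_DR         : SHIFT_DR;
      EXIT1_DR:         state <= tms ? UPDATE_DR        : PAUSE_DR;
      PAUSE_DR:         state <= tms ? EXIT2_DR         : PAUSE_DR;
      EXIT2_DR:         state <= tms ? UPDATE_DR        : SHIFT_DR;
      UPDATE_DR:        state <= tms ? SELECT_DR        : RUN_TEST_IDLE;
      SELECT_IR:        state <= tms ? TEST_LOGIC_RESET : CAPTURE_IR;
      CAPTURE_IR:       state <= tms ? EXIT1_IR         : SHIFT_IR;
      SHIFT_IR:         state <= tms ? EXIT1_IR         : SHIFT_IR;
      EXIT1_IR:         state <= tms ? UPDATE_IR        : PAUSE_IR;
      PAUSE_IR:         state <= tms ? EXIT2_IR         : PAUSE_IR;
      EXIT2_IR:         state <= tms ? UPDATE_IR        : SHIFT_IR;
      UPDATE_IR:        state <= tms ? SELECT_DR        : RUN_TEST_IDLE;
    endcase
endmodule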

iv. Instruction Register (IR):


The Instruction Register has to be at least two bits long, and logic detecting the state of the
instruction register has to decode at least three instructions which are as follows:
• BYPASS
-This instruction is represented by an IR having all 0s in it.
-used to bypass any serial-data registers in a chip with a 1-bit register
-This allows specific chips to be tested in a serial-scan chain without having to shift through the
accumulated SR stages in all the chips.
• EXTEST
-This instruction allows for the testing of off-chip circuitry and is represented by all 1s in the IR.
• SAMPLE/PRELOAD
-This instruction places the boundary-scan registers (i.e., at the chip's I/O pins) in the DR chain,
and samples or preloads chips I/Os.
In addition to these instructions, the following are also recommended:
• INTEST
-This instruction allows for single-step testing of internal circuitry via the boundary-scan registers.
• RUNBIST
-This instruction is used to run internal self-testing procedures within a chip.
A typical IR bit is shown in Figure 14.36.

v. Test-Data Registers (DRs):

The test-data registers are used to set the inputs of modules to be tested, and to collect the results
of running tests. The simplest data-register configuration would be a boundary-scan register
(passing through all I/O pads) and a bypass register (1-bit long). Figure 14.37 shows a
generalized view of the data registers where one internal data register has been added. A
multiplexer under the control of the TAP controller selects which particular data register is
routed to the TDO pin.
vi. Boundary Scan Registers:

The boundary scan register is a special case of a data register. It allows circuit-board
interconnections to be tested, external components to be tested, and the state of chip digital I/Os to be
sampled. Apart from the bypass register, it is the only data register required in a Boundary Scan
compliant part.

A single structure, in addition to the existing I/O circuitry, can be used for all I/O pad types, depending on the connections made on the cell. It consists of two multiplexers and two edge-triggered registers.

Figure 14.38(a) shows this cell used as an input pad. Two register bits allow the serial shifting of
data through the boundary-scan chain and the local storage of a data bit. This data bit may be
directed to internal circuitry in the INTEST or RUNBIST modes (mode 1). When mode = 0, the
cell is in EXTEST or SAMPLE/PRELOAD mode. A further multiplexer under the control of
shift DR controls the serial/parallel nature of the cell. The signals clock DR and update DR generated by the TAP controller load the serial and parallel registers, respectively.
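A minimal sketch of such an input cell (two multiplexers and two edge-triggered registers) is given below; the signal names mode, shift_dr, clock_dr and update_dr follow the text, while the exact polarities and port structure are assumptions made for illustration.

module bs_input_cell (
  input  wire pad_in,     // signal arriving from the input pad
  input  wire scan_in,    // serial data from the previous boundary-scan cell
  input  wire mode,       // 1 = drive the core from the update register (INTEST/RUNBIST)
  input  wire shift_dr,   // 1 = shift serially, 0 = capture pad_in
  input  wire clock_dr,   // clock for the shift (capture) register
  input  wire update_dr,  // clock for the parallel (update) register
  output reg  scan_out,   // serial data to the next boundary-scan cell
  output wire core_in     // signal delivered to the internal circuitry
);
  reg update_reg;

  always @(posedge clock_dr)
    scan_out <= shift_dr ? scan_in : pad_in;   // MUX 1 + shift-register bit

  always @(posedge update_dr)
    update_reg <= scan_out;                    // parallel/update register bit

  assign core_in = mode ? update_reg : pad_in; // MUX 2 selects test or normal path
endmodule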

An output cell is shown in Figure 14.38(b). When mode=1, the cell is in EXTEST, INTEST, or
RUNBIST modes, communicating the internal data to the output pad. When mode = 0, the cell is
in the SAMPLE/PRELOAD mode. Two output cells may be combined to form a tristate
boundary-scan cell, as in Figure 14.39.
The output signal and tristate-enable each have their own MUXes and registers. The Mode
control is the same for the output-cell example. Finally, a bidirectional pin combines an input
and tristate cell as in Figure 14.40.
