Module 3

Memory

System timing considerations:


• A two-phase non-overlapping clock signal is assumed to be available, and this clock is used
throughout the system.
• Clock phase φ1 leads φ2.
• Bits to be stored are written to registers and subsystems on φ1.
• Bits or data written are assumed to have settled before φ2.
• The φ2 signal is used to refresh data.
• Delays are assumed to be less than the intervals between the leading edges of φ1 and φ2.
• Bits or data may be read on the next φ1.
• There must be at least one clocked storage element in series with every closed-loop signal path.

3T dynamic RAM cell:


Circuit diagram

Working
Write operation:
• To perform a write operation, WR is made high and RD is kept low; transistor T1 turns ON and the data (bit
level) on the bus is stored on Cg (the gate capacitance) of T2 through transistor T1.
• Once the data is stored, RD and WR are made low (i.e. RD = WR = low).
Read operation:
• To perform a read operation, RD is made high and WR is kept low.
• The bus will be pulled down to ground through transistors T3 and T2 if a 1 was stored.
• Otherwise transistor T2 will be non-conducting and the bus will remain HIGH due to the pull-up
transistor connected at the top of the bus.
• Note that the complement of the stored bit is read onto the bus.
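As a behavioural sketch only (not a circuit simulation), the read/write protocol above can be captured in a few lines of Python; the class and attribute names are illustrative.

```python
# Behavioural sketch of the 3T cell: cg models the charge held on the
# gate capacitance of T2, which is what actually stores the bit.

class ThreeTCell:
    def __init__(self):
        self.cg = 0

    def write(self, bus_bit):
        # WR high, RD low: T1 conducts and the bus level charges Cg of T2.
        self.cg = bus_bit

    def read(self):
        # RD high, WR low: a stored 1 turns T2 on, so the bus is pulled
        # low through T3 and T2; otherwise the bus pull-up keeps it high.
        return 0 if self.cg == 1 else 1

cell = ThreeTCell()
cell.write(1)
assert cell.read() == 0   # the complement of the stored bit appears on the bus
```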
Area:
• From the layout it can be seen that an area of more than 500λ² is required to store each bit.
• Thus, for 5 µm technology, λ = 2.5 µm, giving Area/bit ≈ 3000 µm².
• A 4 mm × 4 mm chip can therefore accommodate > 4.8K bits.
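The area figures can be checked directly; a quick sketch of the arithmetic, using the values quoted above:

```python
lam = 2.5                        # lambda in micrometres for 5 um technology
area_per_bit = 500 * lam**2      # 500 lambda^2 -> 3125 um^2 (~3000 um^2 quoted)
chip_area = 4000 * 4000          # 4 mm x 4 mm chip, in um^2
print(chip_area / area_per_bit)  # 5120 raw cells, consistent with "> 4.8K bits"
```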

Dissipation:
• Static dissipation is nil.
• Dissipation depends on the bus pull-up, the duration of the RD signal, and the switching frequency.

Volatility:
• The cell is dynamic: data is retained only as long as charge remains on Cg of T2.

1T dynamic memory cell:


Working
Write operation:
• Row select (RS) is made high.
• Capacitor Cm is charged through read/write line based on the data to be stored.
Read operation:
• Data is read by detecting the charge on capacitor Cm with row select (RS) high.

The cell arrangement is a bit complex, hence a few solutions are stated below.


• Solution: extend the diffusion area of the source of the pass transistor; however, the resulting Cd is still much
smaller than the gate-to-channel capacitance Cg.
• Another solution: create a significant capacitance using a poly plate over the diffusion area.
• Even with all this, careful design is necessary to achieve consistent readability.

Area:
• From the layout it can be seen that an area of 200λ² is required to store each bit.
• Thus, for 5 µm technology, λ = 2.5 µm, giving Area/bit = 1250 µm².
• A 4 mm × 4 mm chip can therefore accommodate about 12K bits.

Dissipation:
• No static power is consumed, but an allowance must be made for switching energy during read/write.

Volatility:
• The charge stored on capacitor Cm depletes due to leakage current, so the data must be
refreshed periodically, every 1 ms or even less.
Pseudo-static RAM / register cell:
Working
• Dynamic RAM needs to be refreshed periodically and is hence not convenient, so a static RAM cell is
designed to hold data indefinitely.
• One way to design an SRAM cell is to connect two inverter stages with feedback.
• φ2 is used to refresh the data, and φ1 is synchronized with the read and write operations.

Write operation:
• WR occurs in synchronism with φ1 of the clock.
• When WR is made high, transistor T1 turns ON while T2 and T3 are OFF; T1 acts as a short circuit, so
the bit on the bus is stored on the gate capacitance Cg of inverter 1 and its complement is stored
on the gate capacitance of inverter 2.
• The output of inverter 2 gives the true output.
Hold operation:
• During every φ2 of the clock, the stored bit is refreshed through the gated feedback path from the output of
inverter 2 to the input of inverter 1.
• Thus, the bit is held as long as φ2 of the clock is present.
Read operation:
• RD occurs in synchronism with φ1 of the clock.
• When RD is made high, transistor T2 turns ON while T1 and T3 are OFF; T2 acts as a short circuit and
the data at the inverter 2 output is read onto the bus.
Area
• From the layout it can be seen that an area of 59λ × 45λ (more than 2655λ²) is required to store each bit in the
CMOS design.
• With a single bus and a more compact layout, the area required can be reduced to about 1750λ².
• Thus, for 5 µm technology, λ = 2.5 µm, giving Area/bit ≈ 10000 µm².
• A 4 mm × 4 mm chip can therefore store approximately 1.4K bits.
Dissipation
• The static memory cell uses two inverters, one with an 8:1 ratio and the other with a 4:1 ratio.
• Power dissipation depends on the current drawn.
• Assume inverter 1 presents a resistance of 90 kΩ and inverter 2 a resistance of 50 kΩ. Then

Average current = 0.5 × (5 V/90 kΩ + 5 V/50 kΩ) ≈ 80 µA

• Thus the dissipation per bit stored = 80 µA × 5 V = 400 µW.
• Thus 1.4K bits on a single chip would dissipate 560 mW.
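The same numbers as checkable arithmetic:

```python
i_avg = 0.5 * (5 / 90e3 + 5 / 50e3)  # average current drawn by the two inverters
p_bit = i_avg * 5                    # dissipation per stored bit, ~400 uW
p_chip = 1400 * p_bit                # 1.4K bits -> ~0.55 W (~560 mW with i_avg
print(i_avg, p_bit, p_chip)          # rounded up to 80 uA, as quoted above)
```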
Volatility
• The cell holds its data indefinitely, provided the φ2 clock signal is present.
4T/6T memory cell:
Working
• Uses two bus lines per bit, carrying bit and bit'.
• Both bus lines are precharged to logic 1 before a read or write operation.

Write operation
• Both the bit and bit' lines are precharged to VDD with clock φ1 via transistors T5 and T6.
• The column select line is activated along with φ2.
• Based on the data on the I/O bus, either the bit or the bit' line is discharged along the I/O line.
• Row select signals are then activated, and the states on the bit lines are stored as charge on the gate
capacitances of T1 and T2 via T3 and T4.
Read operation
• The bit and bit' lines are again precharged to VDD via T5 and T6 during φ1.
• If a 1 has been stored, T2 turns ON and T1 turns OFF.
• The bit' line is then discharged to VSS via T2.
• Each cell of the RAM array is of minimum size, and hence its transistors are incapable of sinking large
charges quickly.
• A sense amplifier is therefore formed from the arrangement of T1, T2, T3, and T4, which forms a flip-flop
circuit.
• While "sense" is inactive, the bit-line states are reflected on the gate capacitances of T1 and
T3 with respect to VDD; this causes one of the transistors to turn ON and the other to turn OFF.
• When sense is enabled, current flows from VDD through the ON transistor and helps to maintain the
state of the bit lines.
• The sense amplifier performs two functions:
1. Rewriting the data after reading, i.e. refreshing the memory cell so that it holds the data without
signal degradation.
2. Predetermining the state of the data lines.
JK flip flop:
• It is a memory element and a widely used arrangement for static memory elements.
• Other flip-flop arrangements, such as the T and D flip-flops, can be obtained from the JK.
• The flip-flop has clocked J and K inputs along with an asynchronous clear, and has outputs Q and Q̅.
• The J and K inputs are read on the rising edge of the clock signal, and the data is passed to the output on the
falling edge of the clock.
Note: here the JK is implemented in a master-slave configuration in order to avoid the race-around condition.

Gate Implementation of flip-flops:


• The expressions for the flip-flops can be implemented using either NAND or NOR logic.
• If a NAND arrangement is used (the input transistors of a NAND are connected in series), it takes a large
area. It is also seen that the overall delay increases when transistors are connected in series, and in practice not more
than four transistors should be connected in series; the number can be increased by inserting buffers
between each group of four. It is also seen that the performance of NAND is slower than that of
NOR.
• In NOR logic the implementation can be done more easily, as the input transistors are connected in parallel.
D and T flip-flop circuit:
• A D flip-flop can be formed from a JK by connecting an inverter between the J and K inputs.
• A T flip-flop can readily be formed from a JK by tying J and K together to form the T input. This is shown in the Fig.
Introduction
➢ Tests fall into three main categories:
1. functionality tests or logic verification
2. silicon debug or chip debug
3. manufacturing tests.

functionality tests or logic verification:

• These tests verify that the chip performs its intended or required function.
• These tests, called functionality tests or logic verification, are run before tapeout to verify the functionality of
the circuit.

silicon debug or chip debug:


• These tests are run on the first batch of chips that return from fabrication.
• These tests confirm that the chip operates as it was intended and help debug any discrepancies.
• These tests are much more extensive than the logic verification tests because the chip can be tested at
full speed in a system.
• For example, a new microprocessor can be placed in a prototype motherboard to try to boot the
operating system.

manufacturing tests:
• The third set of tests verifies that every transistor, gate, and storage element in the chip functions
correctly.
• These tests are conducted on each manufactured chip before shipping to the customer to verify that the
silicon is completely undamaged.
• Manufacturing tests can in principle be used for all three steps, but it is often better to use one set of tests to chase
down logic bugs and a separate set optimized to catch manufacturing defects.

➢ The yield of a particular IC is the number of good die divided by the total number of die per wafer. Because of
the complexity of the manufacturing process, not all die on a wafer function correctly.
➢ Dust particles and small imperfections in material or photomasking can result in bridged connections or
missing features. These imperfections result in what is termed a fault.
➢ The goal of a manufacturing test procedure is to determine which die are good and should be shipped to
customers.

Testing a die (chip) can occur at the following levels:


❖ Wafer level
❖ Packaged chip level
❖ Board level
❖ System level
❖ Field level
➢ By detecting a malfunctioning chip early, the manufacturing cost can be kept low.
➢ the approximate cost to a company of detecting a fault at the various levels is at least
❖ Wafer $0.01–$0.10
❖ Packaged chip $0.10–$1
❖ Board $1–$10
❖ System $10–$100
❖ Field $100–$1000
➢ If testing is not considered in advance, the manufacturing test can be extremely time consuming and hence
expensive.
➢ Some chips have even proved impossible to debug because designers have so little visibility into the
internal operation.

Logic Verification:
• Verification tests are usually the first tests constructed by the designer as part of the design process.
• Verification tests are necessary to prove that a synthesized gate description is functionally equivalent to
the source RTL, and that the RTL is equivalent to the design specification at a higher behavioural or
specification level of abstraction.
• The behavioral specification might be a verbal description, a plain-language textual specification, a
description in some high-level computer language such as C, a program in a system-modelling language
such as SystemC, a hardware description language such as VHDL or Verilog, or simply a table of inputs
and required outputs.

• Often, designers will have a golden model in one of the previously mentioned formats and this becomes
the reference against which all other representations are checked.
• You can check functional equivalence through simulation at various levels of the design hierarchy.
• If the description is at the RTL level, the behaviour at the system level may be fully verifiable.
• The best advice with respect to writing functional tests is to simulate as closely as possible the way in
which the chip or system will be used in the real world. Often, this is impractical due to slow simulation
times and extremely long verification sequences.
• One approach is to move up the simulation hierarchy as modules become verified at lower levels.
• Verification at the top chip level using an FPGA emulator offers several advantages over simulation of
the final chip implementation.
• The emulation speed can be near real time, if not actually real time. This means that actual analog signals can be
interfaced with the chip.
• In most projects, the amount of verification effort greatly exceeds the design effort.

Logic Verification Principles:


• Figure 15.6(a) shows a combinational circuit with N inputs.
• To test this circuit exhaustively, a sequence of 2^N inputs (or test vectors) must be applied.
• This combinational circuit is converted to a sequential circuit by the addition of M registers, as shown in
Figure 15.6(b).

• The output state of the sequential circuit is determined by the present inputs and the previous output
state.

• A minimum of 2^(N+M) test vectors must be applied to exhaustively test the circuit.

• Exhaustive testing is infeasible for most systems; fortunately, the number of potentially faulty nodes
on a chip is much smaller than the number of states, so far fewer well-chosen vectors can suffice.

• The verification engineer must cleverly devise test vectors that detect any (or nearly any) defective node
without requiring so many patterns.
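To see why 2^(N+M) vectors quickly becomes impractical, a tiny illustration (N and M here are arbitrary assumed values):

```python
N, M = 25, 50            # assumed input and state-register counts
vectors = 2 ** (N + M)   # exhaustive test-vector count
print(f"{vectors:.2e}")  # ~3.78e+22 vectors -- far beyond any test budget
```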

1. Test Vectors:
• Test vectors are a set of patterns applied to inputs and a set of expected outputs.
• Both logic verification and manufacturing test require a good set of test vectors.
• The set should be large enough to catch all the logic errors and manufacturing defects, yet small
enough to keep test time (and cost) reasonable.
• Directed and random vectors are the most common types.
• Directed vectors are selected by an engineer who is knowledgeable about the system. Their
purpose is to cover the corner cases where the system might be most likely to malfunction.
• For example, in a 32-bit Datapath, likely corner cases include the following:
0x00000000 All zeros
0xFFFFFFFF All ones
0x00000001 One in the lsb
0x80000000 One in the msb
0x55555555 Alternating 0’s and 1’s
0xAAAAAAAA Alternating 1’s and 0’s
0x7A39D281 A random value
• Directed vectors are an efficient way to catch the most obvious design errors and a good logic
designer will always run a set of directed tests on a new piece of RTL to ensure a minimum level
of quality.
• Applying a large number of random or semirandom vectors is a surprisingly good way to detect
more subtle errors.
• The effectiveness of the set of vectors is measured by the fault coverage.
• Automatic test pattern generation tools are good at producing high fault coverage for
manufacturing test
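A sketch of how such a vector set might be assembled, combining the directed corner cases listed above with seeded pseudo-random values (the function name is illustrative):

```python
import random

# Directed corner-case vectors for a 32-bit datapath (from the list above).
DIRECTED = [0x00000000, 0xFFFFFFFF, 0x00000001, 0x80000000,
            0x55555555, 0xAAAAAAAA, 0x7A39D281]

def make_vectors(n_random, seed=0):
    # Seeded so that every regression run applies the same "random" vectors.
    rng = random.Random(seed)
    return DIRECTED + [rng.getrandbits(32) for _ in range(n_random)]

vectors = make_vectors(1000)
```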
2. Testbenches and Harnesses:
• A verification test bench or harness is a piece of HDL code that is placed as a wrapper around a
core piece of HDL to apply and check test vectors.
• In the simplest test bench, input vectors are applied to the module under test and at each cycle,
the outputs are examined to determine whether they match with a predefined expected data
set.
• The expected outputs can be derived from the golden model and saved as a file or the value can
be computed on the fly.
• Simulators usually provide settable break points and single or multiple stepping abilities to allow
the designer to step through a test sequence while debugging discrepancies.
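A minimal harness in this spirit, sketched in Python rather than HDL: vectors are applied to the module under test and each output is compared against a golden model computed on the fly. `dut` and `golden_model` are placeholders for real implementations.

```python
def run_testbench(dut, golden_model, vectors):
    failures = []
    for cycle, vin in enumerate(vectors):
        expected = golden_model(vin)   # expected data, computed on the fly
        actual = dut(vin)
        if actual != expected:         # log every cycle that mismatches
            failures.append((cycle, vin, expected, actual))
    return failures

# Example: a toy 32-bit incrementer checked against its specification.
golden = lambda x: (x + 1) & 0xFFFFFFFF
buggy_dut = lambda x: (x + 1) & 0x7FFFFFFF   # drops the msb -- a planted bug
print(run_testbench(buggy_dut, golden, [0x7FFFFFFF, 0xFFFFFFFF]))
```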
3. Regression Testing:
• Regression testing involves performing a large set of simulations to automatically verify that no
functionality has inadvertently changed in a module or set of modules.
• During a design, it is common practice to run a regression script every night after design
activities have concluded to check that bug fixes or feature enhancements have not broken
completed modules.
4. Version Control:
• Combined with regression testing is the use of versioning, that is, the orderly management of
different design iterations.
• Unix/Linux tools such as CVS or Subversion are useful for this.
5. Bug Tracking:
• Another important tool to use during verification (and in fact the whole design cycle) is a bug-
tracking system.
• Bug-tracking systems such as the Unix/Linux based GNATS allow the management of a wide
variety of bugs.
• In these systems, each bug is entered, and the location, nature, and severity of the bug are noted.
• The discoverer of the bug is noted, along with the person deemed responsible for fixing it.
Manufacturing Test Principles:
• Integrated circuits have a yield of less than 100%.
• The purpose of manufacturing test is to screen out most of the defective parts before they are shipped
to customers.
• Typical commercial products target a defect rate of 350–1000 defects per million (DPM) chips shipped.
• The customer then assembles systems from the chips, tests the systems, and discards or repairs
defective systems.

• Fault Models:
• A fault model is a model of how faults occur and of their impact on circuits.

• The Stuck-At model checks for single or multiple stuck-at faults on a gate-level/register-level
netlist.

• The switch-level fault model checks for stuck-short or stuck-open faults at the transistor level.
• The most popular model is the Stuck-At model.
Stuck-At Faults:
• In the Stuck-At model, a line in the circuit that is permanently stuck at logic 0 is said to be stuck
at zero (Stuck-At-0, S-A-0), and a line permanently stuck at logic 1 is said to be stuck at one (Stuck-At-1, S-
A-1).
• Figure 15.11 illustrates how an S-A-0 or S-A-1 fault might occur.
• A stuck-at fault can be at the input or output of a gate/module.
• These faults most frequently occur due to some physical failure such as a gate oxide short (the
nMOS gate to GND or the pMOS gate to VDD) or a metal-to-metal short.
Switch-level fault model:
There are two types of switch-level faults:
i) Stuck-open fault: a transistor never turns ON, i.e. its source and drain are never connected.
ii) Stuck-short fault: a transistor is always ON irrespective of its gate voltage, i.e. its source and drain
are always connected.
The stuck-at fault model cannot detect switch-level faults, because the circuit behaves differently under these
faults.
Stuck-open fault:
• This is illustrated in Figure 15.13 for the case of a 2-input NOR gate in which one of the
transistors is permanently non-conducting.
• If nMOS transistor A is stuck open, the gate output may depend on its previous state, i.e. it is
possible for a fault to convert a combinational circuit into a sequential circuit:

Z = A̅·B̅ + Z′

where Z′ is the previous state of the gate.

• Typically, two test vectors applied in sequence (a 2-pattern test) are required to
detect this fault.
• Consider transistor T1 stuck open.
• When A=B=0, the output F=1 in the absence of the fault.
• In the presence of the fault, the output floats, and the voltage on F depends on the charge stored
in the load capacitance.
• We apply two patterns: a) A=1, B=0 to initialize F=0;
b) A=0, B=0 to find the fault.
• Hence, a pair of test vectors is required at the input to find the fault.
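A sketch of this 2-pattern test, assuming T1 is one of the series pull-up (pMOS) transistors so that the fault shows at A=B=0; a floating output is modelled as holding its previous state:

```python
def nor_t1_stuck_open(a, b, prev):
    if a == 1 or b == 1:
        return 0          # the nMOS pull-down network still works
    return prev           # a = b = 0: the broken pull-up leaves F floating

f = nor_t1_stuck_open(1, 0, prev=1)  # pattern (a) initialises F = 0
f = nor_t1_stuck_open(0, 0, prev=f)  # pattern (b): a good gate would give F = 1
print(f)                             # 0 -> the discrepancy exposes the fault
```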

Stuck-short fault:
• A transistor is permanently shorted in the presence of this fault.

• Checking the output may not be sufficient.

• Both the pull-up and pull-down networks may conduct simultaneously, causing the output to
settle at some indeterminate level.

• A high current flows from VDD to GND.

• To detect this type of fault we need to monitor the current flowing; this is called IDDQ testing.
• Consider transistor T1 stuck short.
• When A=1, B=0, the output F=0 in the absence of the fault.
• In the presence of the fault, transistors T1, T2, and T3 are all ON, so there is a short circuit between
VDD and GND.
Observability:
• Observability is the ability to observe, either directly or indirectly, the state of any node in the
circuit.
• This is relevant when you want to measure the output of a gate within a larger circuit to check that
it operates correctly.
• The aim of good chip designers is to have easily observed gate outputs, given that only a limited number of
nodes can be directly observed.
Controllability:
• Controllability is the ability to set (to 1) and reset (to 0) every node internal to the circuit.
• An easily controllable node would be directly settable via an input pad.
• A node with little controllability, such as the most significant bit of a counter, might require many
hundreds or thousands of cycles to get it into the right state.
• The aim of good chip designers is to make all nodes easily controllable.
• Making all flip-flops resettable via a global reset signal is one step toward good controllability
Repeatability:
• The repeatability of a system is the ability to produce the same outputs given the same inputs.
• Combinational logic and synchronous sequential logic are always repeatable when they are functioning
correctly.
• Asynchronous sequential circuits are nondeterministic.
Survivability:
• The survivability of a system is its ability to continue functioning after a fault. For example, redundant
rows and columns in memories and spare cores provide survivability in the event of
manufacturing defects.
• Some survivability features are invoked automatically by the hardware, while others are activated
by blowing fuses after manufacturing test.
Fault Coverage:
• Fault coverage measures, for the vectors applied, what percentage of the chip's internal nodes were
checked.
• Each circuit node is taken in sequence and held to 0 (S-A-0), and the circuit is simulated with the
test vectors, comparing the chip outputs with those of a known good machine.
• When a discrepancy is detected between the faulty machine and the good machine, the fault is
marked as detected and the simulation is stopped. This is then repeated with the node set to 1 (S-
A-1).
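The procedure above amounts to single-stuck-at fault simulation. A minimal sketch on a toy two-gate netlist (the netlist format and example circuit are illustrative, not a real ATPG flow):

```python
from itertools import product

def simulate(netlist, assignment, fault=None):
    values = dict(assignment)
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]        # stuck-at fault on a primary input
    for node, (op, args) in netlist:
        bits = [values[x] for x in args]
        values[node] = int({"AND": all, "OR": any}[op](bits))
        if fault and fault[0] == node:
            values[node] = fault[1]        # stuck-at fault on an internal node
    return values

# Toy netlist: y = (a AND b) OR c, exercised with exhaustive vectors.
netlist = [("n1", ("AND", ["a", "b"])), ("y", ("OR", ["n1", "c"]))]
vectors = [dict(zip("abc", v)) for v in product([0, 1], repeat=3)]

faults = [(node, sa) for node in ["a", "b", "c", "n1", "y"] for sa in (0, 1)]
detected = set()
for vec in vectors:
    good_y = simulate(netlist, vec)["y"]              # known good machine
    for f in faults:
        if simulate(netlist, vec, fault=f)["y"] != good_y:
            detected.add(f)                           # fault marked as detected

print(f"fault coverage: {100 * len(detected) / len(faults):.0f}%")  # 100%
```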
Automatic Test Pattern Generation (ATPG):
• To relieve the designer of the burden of generating tests, Automatic Test Pattern Generation
(ATPG) methods have been invented.
• Commercial ATPG tools can achieve excellent fault coverage.
• However, they are computation-intensive and often must be run on servers or computers with many parallel
processors.
Delay Fault Testing:
• Failures that occur in CMOS can leave the functionality of the circuit untouched but affect its
timing.
• If an open circuit occurs in one of the nMOS transistor source connections to GND, the gate
would still function, but with an increased falling propagation delay tpdf.
• Delay faults may be caused by crosstalk; they can also occur more often in SOI logic
through the history effect.
• Software has been developed to model the effect of delay faults.

Design for Testability


• The keys to designing circuits that are testable are controllability and observability.
• Controllability is the ability to set (to 1) and reset (to 0) every node internal to the circuit.
• Observability is the ability to observe, either directly or indirectly, the state of any node in the circuit.
• Good observability and controllability reduce the cost of manufacturing testing because they allow high
fault coverage with relatively few test vectors.
There are three main approaches to what is commonly called Design for Testability:
a. Ad hoc testing
b. Scan-based approaches
c. Built-in self-test (BIST)
Ad Hoc Testing:
Good design practices, framed as guidelines by designers who have learned through experience, are
listed below.
• Avoid asynchronous feedback, because the inputs will be continuously changing.
• Avoid delay-dependent logic.
• Avoid monostable and self-resetting logic, because any internal reset will be difficult to test.
• Avoid gated clocks, because a problem in the gating logic will be difficult to test.
• Avoid redundant gates, because if the logic is not minimized the number of faults might increase.
• Avoid high fan-in/fan-out combinations, because they reduce reliability and performance.
• Make flip-flops initializable (flip-flops should be initializable externally with little effort).
• Separate digital and analog circuits to avoid unpredictable behaviour.
• Provide test control for difficult-to-control signals, for example by using a MUX.
• Buses can be useful.
• Consider ATE (automatic test equipment) requirements.

Scan Design:
• The scan-design strategy for testing has evolved to provide observability and controllability at each
register.
• In designs with scan, the registers operate in one of two modes i.e. normal mode and scan mode.
• In normal mode, they behave as expected.
• In scan mode, they are connected to form a giant shift register called a scan chain.
• By applying N clock pulses in scan mode, all N bits of state in the system can be shifted out and new N bits
of state can be shifted in. Therefore, scan mode gives easy observability and controllability of every
register in the system.
• Under scan design we shall study serial and parallel scan chains.
Serial scan:
• Modern scan is based on the use of scan registers, as shown in Figure 15.16.
• The scan register is a D flip-flop preceded by a multiplexer.
• When the SCAN signal is deasserted, the register behaves as a conventional register, storing data on
the D input.
• When SCAN is asserted, the data is loaded from the SI pin.
• To load the scan chain, SCAN is asserted and CLK is pulsed to fill the registers with the scan
data provided on the SI pin.
• SCAN is then deasserted and CLK is pulsed for one cycle to operate the circuit normally with the predefined
inputs.
• SCAN is then reasserted and CLK asserted to read the stored data out. At the same time, the new register
contents can be shifted in for the next test.
• In this scheme, every input to the combinational block can be controlled and every output can be
observed.
• Test generation for this type of test architecture can be highly automated by using ATPG.
• The prime disadvantage is the area and delay impact of the extra multiplexer in the scan register.
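A behavioural sketch of the serial-scan protocol: shift a pattern in, run one normal cycle, shift the response out. The "combinational logic" here is just a placeholder inverter, and the names are illustrative.

```python
class ScanChain:
    def __init__(self, n):
        self.state = [0] * n               # one bit per scan register

    def clock(self, scan, si=0, d=None):
        if scan:                           # scan mode: shift by one position
            so = self.state[-1]            # SO: bit falling off the chain end
            self.state = [si] + self.state[:-1]
            return so
        self.state = list(d)               # normal mode: capture the D inputs
        return None

chain = ScanChain(4)
for bit in [1, 0, 1, 1]:                   # shift a test pattern in (4 pulses)
    chain.clock(scan=True, si=bit)
chain.clock(scan=False, d=[b ^ 1 for b in chain.state])  # one normal cycle
out = [chain.clock(scan=True, si=0) for _ in range(4)]   # shift the response out
print(out)
```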
Parallel Scan:
• Serial scan chains can become quite long, and the loading and unloading can dominate testing time.
• A simple improvement is to split the chain into smaller segments. This can be done on a module-by-module
basis or by limiting chains to some specified scan length; taken to its limit, this becomes random access scan.
• The basic idea is shown in Figure 15.17.
• The figure shows a two-by-two register section.
• Each register receives a column (column<m>) and row (row<n>) access signal along with a row data
line (data<n>).
• A global write signal (write) is connected to all registers.
• By asserting the row and column access signals in conjunction with the write signal, any register can
be read or written in exactly the same way as a conventional RAM.
Signature analysis / cyclic redundancy checking:
• One method of testing a module is to use signature analysis or cyclic redundancy checking.
• This involves using a pseudo-random sequence generator (PRSG) to produce the input signals for a section of
combinational circuitry and a signature analyzer to observe the output signals.

• A PRSG of length n is constructed from a linear feedback shift register (LFSR), which in turn is made of n flip-
flops connected in a serial fashion, as shown in Figure 15.19(a).
• The XOR of particular outputs is fed back to the input of the LFSR.
• An n-bit LFSR will cycle through 2^n − 1 states before repeating the sequence.
• The characteristic polynomial indicates which bits are fed back.
• A complete feedback shift register (CFSR), shown in Figure 15.19(b), includes the zero state that may be
required in some test situations.
• An n-bit LFSR is converted to an n-bit CFSR by adding an (n − 1)-input NOR gate connected to all but the
last bit.
• When in state 0…01, the next state is 0…00. When in state 0…00, the next state is 10…0. Otherwise, the
sequence is the same.
• A signature analyzer receives successive outputs of a combinational logic block and produces a syndrome
that is a function of these outputs.
• The syndrome is reset to 0, and then XORed with the output on each cycle.
• At the end of a test sequence, the LFSR contains the syndrome that is a function of all previous outputs.
• This can be compared with the correct syndrome (derived by running a test program on the good logic)
to determine whether the circuit is good or bad.
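A sketch of a 3-bit LFSR used both as a PRSG and, with a data input XORed into the feedback, as a signature analyser. The tap positions assumed here (stages 2 and 3, characteristic polynomial x^3 + x^2 + 1) give a maximal-length sequence but need not match the taps in Figure 15.19.

```python
def lfsr_step(state, data_in=0):
    # XOR of the tapped bits, optionally folded with an incoming data bit.
    feedback = state[1] ^ state[2] ^ data_in
    return [feedback] + state[:-1]

# As a PRSG: a 3-bit LFSR cycles through 2**3 - 1 = 7 nonzero states.
state = [1, 0, 0]
for _ in range(7):
    print(state)
    state = lfsr_step(state)            # returns to [1, 0, 0] after 7 steps

# As a signature analyser: fold a stream of circuit outputs into a syndrome.
syndrome = [0, 0, 0]                    # the syndrome is reset to 0
for bit in [1, 0, 1, 1, 0, 0, 1]:       # outputs of the block under test
    syndrome = lfsr_step(syndrome, bit) # XORed with the output on each cycle
print("syndrome:", syndrome)            # compare against the known-good syndrome
```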
Built-In Self-Test (BIST)
• The combination of signature analysis and the scan technique creates a structure known as BIST—for Built-
In Self-Test or BILBO—for Built-In Logic Block Observation.
• The 3-bit BIST register shown in Figure 15.20 is a scannable, resettable register that can also serve as a
pattern generator and signature analyser; C[1:0] specifies the mode of operation.
• In reset mode (10), all the flip-flops are synchronously initialized to 0.
• In normal mode (11), the flip-flops behave normally with their D input and Q output.
• In scan mode (00), the flip-flops are configured as a 3-bit shift register between SI and SO.
• In test mode (01), the register behaves as a pseudo-random sequence generator or signature analyser.
• If all the D inputs are held low, the Q outputs loop through a pseudo-random bit sequence, which can
serve as the input to the combinational logic.
• If the D inputs are taken from the combinational logic output, they are mixed with the existing state to
produce the syndrome.

Scannable Register Design:


• An ordinary flip-flop can be made scannable by adding a multiplexer on the data input, as shown in Figure
15.18(a).
• Figure 15.18(b) shows a circuit design for such a scan register using a transmission-gate multiplexer.
• The setup time increases by the delay of the extra transmission gate in series with the D input as compared
to the ordinary static flip-flop shown in Figure 10.19(b).
• Figure 15.18(c) shows a circuit using clock gating to obtain nearly the same setup time as the ordinary
flip-flop.
• In either design, if a clock enable is used to stop the clock to unused portions of the chip, care must be
taken that Φ always toggles during scan mode.
IDDQ Testing:
• A method of testing for bridging faults is called IDDQ testing (VDD supply current, Quiescent) or supply current
monitoring.
• When a CMOS logic gate is not switching, it draws no DC current (except for leakage).
• When a bridging (short-circuit) fault occurs, a measurable DC current IDD will flow for some
combination of input conditions.
• Testing for short-circuit faults is done by applying the normal vectors, allowing the signals to settle, and
then measuring IDD.
• Any circuits that draw DC power, such as pseudo-nMOS gates or analog circuits, have to be disabled.
• Because current measurement is slow, the tests must be run more slowly than normal (on the order of
1 ms per vector), which increases the test time.
• IDDQ testing can be performed externally to the chip by measuring the current drawn on the VDD line, or
internally using specially constructed test circuits.
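A sketch of the measurement loop, with `apply_vector` and `measure_idd` standing in for real tester interfaces and the current threshold an assumed value:

```python
import time

IDDQ_LIMIT_UA = 1.0                        # assumed pass/fail threshold, in uA

def iddq_test(apply_vector, measure_idd, vectors, settle_ms=1.0):
    failing = []
    for vec in vectors:
        apply_vector(vec)                  # drive the normal test vector
        time.sleep(settle_ms / 1000)       # let all nodes settle; this is why
                                           # IDDQ tests run slowly (~1 ms/vector)
        if measure_idd() > IDDQ_LIMIT_UA:  # a bridging fault draws DC current
            failing.append(vec)
    return failing

# Fake tester interfaces for illustration: vector 0b10 "has" a bridging fault.
current = {0b00: 0.01, 0b01: 0.02, 0b10: 250.0, 0b11: 0.01}
applied = []
print(iddq_test(applied.append, lambda: current[applied[-1]],
                [0b00, 0b01, 0b10, 0b11]))   # -> [2], i.e. vector 0b10 fails
```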

Design for Manufacturability:


Circuits can be optimized for manufacturability to increase their yield. This can be done in a number of different
ways.
Physical:
• At the physical level (i.e., mask level), the yield can be improved by following the design rules.
The following list is representative:
✓ Increase the spacing between wires where possible––this reduces the chance of a defect causing
a short circuit.
✓ Increase the overlap of layers around contacts and vias––this reduces the chance that a
misalignment will cause variation in the contact structure.
✓ Increase the number of vias at wire intersections beyond one if possible––this reduces the chance
of a defect causing an open circuit.
Redundancy:
• Redundant structures can be used to compensate for defective components on a chip. For example,
memory arrays are commonly built with extra rows. During manufacturing test, if one of the words is
found to be defective, the memory can be reconfigured to access the spare row instead.
• Laser-cut wires or electrically programmable fuses can be used for configuration.
Power:
• Excessive power can cause failure due to excess current in wires, which in turn can cause metal-migration failures.
• High-power devices raise the die temperature, degrading device performance and, over time, causing
device parameter shifts.
Process Spread:
• Monte Carlo analysis can provide better modeling for process spread and can help with centering a design
within the process variations.
Yield Analysis:
• When a chip has poor yield or will be manufactured in high volume, dice that fail manufacturing test can
be taken to a laboratory for yield analysis to locate the root cause of the failure.
• If particular structures are determined to have caused many of the failures, the layout of the structures
can be redesigned.
