Module 3
Three-transistor dynamic RAM cell:
Working
Write operation:
• To perform the write operation, WR is made high and RD is kept low; transistor T1 turns ON and the data (bit level) on the bus is stored on Cg (the gate capacitance of T2) through transistor T1.
• Once the data is stored, both RD and WR are made low (i.e. RD = WR = 0).
Read operation:
• To perform read operation RD is made high and WR is kept low.
• The bus will be pulled down to ground through transistors T3 and T2 if a 1 was stored.
• Otherwise, transistor T2 will be non-conducting and the bus will remain HIGH due to the pull-up transistor connected at the top of the bus.
• Note that the complement of the stored bit is read onto the bus.
Area:
• From the layout it can be seen that an area of more than 500λ² is required to store each bit.
• Thus, if we consider a 5 µm technology, λ = 2.5 µm:
Area/bit ≈ 3000 µm²
• Thus, a 4 mm × 4 mm chip can accommodate more than 4.8K bits.
Dissipation:
• Static dissipation is nil.
• Dynamic dissipation depends on the bus pull-up, the duration of the RD signal, and the switching frequency.
Volatility:
• The cell is dynamic; data is retained only as long as charge remains on Cg of T2.
One-transistor dynamic memory cell:
Area:
• From the layout it can be seen that an area of about 200λ² is required to store each bit.
• Thus, if we consider a 5 µm technology, λ = 2.5 µm:
Area/bit = 1250 µm²
• Thus, a 4 mm × 4 mm chip can accommodate about 12K bits.
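As a quick check of the figures quoted above for the two dynamic cells, here is a minimal Python sketch. The cell areas in λ², the 5 µm technology (λ = 2.5 µm) and the 4 mm × 4 mm chip come from the text; routing and peripheral overhead are ignored, so the raw bit counts land slightly above the usable capacities quoted.

```python
# Quick arithmetic check of the area-per-bit figures quoted above.
LAMBDA_UM = 2.5                 # lambda for a 5 um technology, in micrometres
CHIP_AREA_UM2 = 4000 * 4000     # 4 mm x 4 mm chip, in square micrometres

cells = {
    "3T dynamic cell": 500,     # area per bit in lambda^2 (from the text)
    "1T dynamic cell": 200,
}

for name, area_lambda2 in cells.items():
    area_um2 = area_lambda2 * LAMBDA_UM ** 2   # lambda^2 -> um^2
    raw_bits = CHIP_AREA_UM2 / area_um2        # ignores routing/periphery overhead
    print(f"{name}: {area_um2:.0f} um^2/bit, ~{raw_bits / 1000:.1f}K bits raw")
# 3T: 3125 um^2/bit, ~5.1K bits raw (text quotes >4.8K after overhead)
# 1T: 1250 um^2/bit, ~12.8K bits raw (text quotes ~12K)
```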
Dissipation:
• No static power, but there must be an allowance for switching energy during read/write.
Volatility:
• The charge stored on capacitor Cm will deplete due to leakage current; thus the data must be refreshed periodically, typically every 1 ms or less.
Pseudo-static RAM / register cell:
Working
• Dynamic RAM needs to be refreshed periodically and is hence not convenient, so a static RAM cell is needed to hold data indefinitely.
• One way to design an SRAM cell is to connect two inverter stages with a feedback path.
• φ2 is used to refresh the data and φ1 is synchronised with the read and write operations.
Write operation:
• WR occurs in synchronism with φ1 of the clock.
• When WR is made high, transistor T1 turns ON while T2 and T3 are OFF; T1 therefore acts as a short circuit, and the bit on the bus is stored on the gate capacitance Cg of inverter 1 while its complement is stored on the gate capacitance of inverter 2.
• The output of inverter 2 gives the true output.
Hold operation:
• During every φ2 of the clock, the stored bit is refreshed through the gated feedback path from the output of inverter 2 to the input of inverter 1.
• Thus, the bit will be held as long as φ2 of the clock is applied.
Read operation:
• RD occurs in synchronism with φ1 of the clock.
• When RD is made high, transistor T2 turns ON while T1 and T3 are OFF; T2 therefore acts as a short circuit and the data at the inverter 2 output is read onto the bus.
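The write/hold/read sequence described above can be summarised in a small behavioural sketch. This is a minimal illustrative Python model, not the circuit itself: the method names phi1 and phi2, and the representation of the two inverter gate capacitances as plain variables, are assumptions made for the example.

```python
class PseudoStaticCell:
    """Behavioural sketch of the pseudo-static RAM / register cell.

    Write and read are synchronised with phi1; the gated feedback path
    refreshes the stored bit on every phi2 (illustrative model only).
    """

    def __init__(self):
        self.cg1 = 0  # gate capacitance of inverter 1 (stores the bit)
        self.cg2 = 1  # gate capacitance of inverter 2 (stores the complement)

    def phi1(self, wr=False, rd=False, bus=None):
        if wr:                  # WR high: T1 conducts, the bus bit is stored on Cg1
            self.cg1 = bus
            self.cg2 = 1 - bus  # inverter 1 drives the complement onto Cg2
        if rd:                  # RD high: T2 conducts, inverter 2 output is read
            return 1 - self.cg2  # output of inverter 2 is the true stored bit
        return None

    def phi2(self):
        # Gated feedback: the output of inverter 2 refreshes the input of inverter 1.
        self.cg1 = 1 - self.cg2
        self.cg2 = 1 - self.cg1

cell = PseudoStaticCell()
cell.phi1(wr=True, bus=1)   # write a 1 during phi1
cell.phi2()                 # hold/refresh during phi2
print(cell.phi1(rd=True))   # read during phi1 -> 1
```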
Area
• From the layout it can be seen that an area of 59λ × 45λ, i.e. about 2655λ², is used to store each bit in the CMOS design.
• If a single bus and a more compact layout are used, the area required can be reduced to about 1750λ².
• Thus, if we consider a 5 µm technology, λ = 2.5 µm:
Area/bit ≈ 10000 µm²
• Thus, a 4 mm × 4 mm chip can store approximately 1.4K bits.
Dissipation
• The static memory cell uses two inverters, one with an 8:1 ratio and the other with a 4:1 ratio.
• Power dissipation depends on the current drawn.
• Let us assume inverter 1 has a resistance of 90 kΩ and inverter 2 a resistance of 50 kΩ:
Average current = 0.5 × (5 V / 90 kΩ + 5 V / 50 kΩ) ≈ 80 µA
• Thus, dissipation per bit stored = 80 µA × 5 V = 400 µW.
• Thus, 1.4K bits on a single chip would dissipate about 560 mW.
Volatility
• The cell is non-volatile provided that the φ2 clock signal is present.
4T/6T memory cell:
Working
• Uses two bus lines per bit to store bit and bit'.
• Both bus lines are precharged to logic 1 before a read or write operation.
Write operation
• Both the bit and bit' buses are precharged to VDD during φ1 via transistors T5 & T6.
• The column-select line is activated along with φ2.
• Based on the data on the I/O bus, either the bit or the bit' line is discharged through the I/O line.
• The row-select signals are then activated and the states on the bit lines are stored as charge on the gate capacitances of T1 & T2 via T3 & T4.
Read operation
• The bit and bit' lines are again precharged to VDD via T5 & T6 during φ1.
• If a 1 has been stored, T2 turns ON and T1 turns OFF.
• The bit' line will be discharged to VSS via T2.
• Each cell of the RAM array is of minimum size, and hence its transistors are incapable of sinking large charges quickly; a sense amplifier is therefore used on the bit lines.
• The sense amplifier is formed from an arrangement of transistors T1, T2, T3 and T4, which forms a flip-flop circuit.
• If the sense signal is inactive, the bit-line states are reflected in the gate capacitances of T1 and T3 (with respect to VDD); this causes one of the transistors to turn ON and the other to turn OFF.
• When the sense signal is enabled, current flows from VDD through the ON transistor and helps to maintain the state of the bit line.
• The sense amplifier performs two functions:
1. Rewriting the data after reading, i.e. refreshing the memory cell so that it holds the data without signal degradation.
2. Predetermining the state of the data lines.
JK flip flop:
• The JK flip-flop is a memory element and is a widely used arrangement for a static memory element.
• Other flip-flop arrangements, such as the T and D flip-flops, can also be obtained from the JK flip-flop.
• The flip-flop has clocked inputs J and K along with an asynchronous clear, and has outputs Q and Q̅.
• The inputs J and K are read on the rising edge of the clock signal, and the data is passed to the output on the falling edge of the clock.
Note: here the JK flip-flop is implemented in a master-slave configuration in order to avoid the race-around condition.
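As an illustration of the master-slave behaviour (J and K sampled on the rising edge, Q changing on the falling edge, asynchronous clear overriding both), here is a minimal behavioural Python sketch; the class and method names are ours and not part of the notes.

```python
class MasterSlaveJK:
    """Behavioural sketch of a master-slave JK flip-flop.

    J and K are sampled into the master stage on the rising clock edge and
    transferred to the slave (the visible output Q) on the falling edge,
    which avoids the race-around condition of a level-triggered JK.
    """

    def __init__(self):
        self.master = 0
        self.q = 0              # slave output Q; Q-bar is simply 1 - q

    def clear(self):
        # Asynchronous clear: forces both stages to 0 regardless of the clock.
        self.master = 0
        self.q = 0

    def rising_edge(self, j, k):
        if j and k:             # toggle
            self.master = 1 - self.q
        elif j:                 # set
            self.master = 1
        elif k:                 # reset
            self.master = 0
        else:                   # hold
            self.master = self.q

    def falling_edge(self):
        self.q = self.master    # slave copies the master
        return self.q

ff = MasterSlaveJK()
ff.rising_edge(j=1, k=1)        # toggle request captured by the master
print(ff.falling_edge())        # Q changes only on the falling edge -> 1
```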
Functionality tests:
• These tests verify that the chip performs its intended or required function.
• These tests, called functionality tests or logic verification, are run before tapeout to verify the functionality of the circuit.
Manufacturing tests:
• The third set of tests verifies that every transistor, gate, and storage element in the chip functions correctly.
• These tests are conducted on each manufactured chip before shipping to the customer to verify that the silicon is completely undamaged.
• Manufacturing tests can in principle be used for all three steps, but it is often better to use one set of tests to chase down logic bugs and a separate set optimized to catch manufacturing defects.
➢ The yield of a particular IC is the number of good die divided by the total number of die per wafer. Because of the complexity of the manufacturing process, not all die on a wafer function correctly.
➢ Dust particles and small imperfections in material or photomasking can result in bridged connections or
missing features. These imperfections result in what is termed a fault.
➢ The goal of a manufacturing test procedure is to determine which die are good and should be shipped to
customers.
Logic Verification:
• Verification tests are usually the first tests a designer constructs as part of the design process.
• A verification test is necessary to prove that a synthesized gate-level description is functionally equivalent to the source RTL, and that the RTL is in turn equivalent to the design specification at a higher behavioural or specification level of abstraction.
• The behavioral specification might be a verbal description, a plain language textual specification, a
description in some high-level computer language such as C, a program in a system-modelling language
such as System C, or a hardware description language such as VHDL or Verilog, or simply a table of inputs
and required outputs.
• Often, designers will have a golden model in one of the previously mentioned formats and this becomes
the reference against which all other representations are checked.
• You can check functional equivalence through simulation at various levels of the design hierarchy.
• If the description is at the RTL level, the behaviour at the system level may be verifiable in full.
• The best advice with respect to writing functional tests is to simulate as closely as possible the way in
which the chip or system will be used in the real world. Often, this is impractical due to slow simulation
times and extremely long verification sequences.
• One approach is to move up the simulation hierarchy as modules become verified at lower levels.
• Verification at the top chip level using an FPGA emulator offers several advantages over simulation of the final chip implementation.
• The emulation speed can be near, if not at, real time; this means that actual (analog) signals can be interfaced with the emulated chip.
• In most projects, the amount of verification effort greatly exceeds the design effort.
• The output state of a sequential circuit is determined by the present inputs and the previous output state.
• A minimum of 2^(N+M) test vectors must be applied to exhaustively test a circuit with N inputs and M state bits; for example, N = 25 inputs and M = 35 flip-flops already require 2^60 ≈ 10^18 vectors.
• Exhaustive testing is therefore infeasible for most systems. Fortunately, the number of potentially non-functional nodes on a chip is much smaller than the number of states.
• The verification engineer must cleverly devise test vectors that detect any (or nearly any) defective node without requiring so many patterns.
1. Test Vectors:
• Test vectors are a set of patterns applied to inputs and a set of expected outputs.
• Both logic verification and manufacturing test require a good set of test vectors.
• The set should be large enough to catch all the logic errors and manufacturing defects, yet small
enough to keep test time (and cost) reasonable.
• Directed and random vectors are the most common types.
• Directed vectors are selected by an engineer who is knowledgeable about the system. Their
purpose is to cover the corner cases where the system might be most likely to malfunction.
• For example, in a 32-bit datapath, likely corner cases include the following:
0x00000000 All zeros
0xFFFFFFFF All ones
0x00000001 One in the lsb
0x80000000 One in the msb
0x55555555 Alternating 0’s and 1’s
0xAAAAAAAA Alternating 1’s and 0’s
0x7A39D281 A random value
• Directed vectors are an efficient way to catch the most obvious design errors and a good logic
designer will always run a set of directed tests on a new piece of RTL to ensure a minimum level
of quality.
• Applying a large number of random or semirandom vectors is a surprisingly good way to detect
more subtle errors.
• The effectiveness of the set of vectors is measured by the fault coverage.
• Automatic test pattern generation (ATPG) tools are good at producing high fault coverage for manufacturing tests.
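A minimal sketch of how the directed corner-case vectors listed above might be combined with a set of reproducible random vectors for a 32-bit datapath; the function name and seed handling are assumptions made for the example.

```python
import random

# Directed corner-case vectors for a 32-bit datapath (from the list above).
DIRECTED_VECTORS = [
    0x00000000,  # all zeros
    0xFFFFFFFF,  # all ones
    0x00000001,  # one in the lsb
    0x80000000,  # one in the msb
    0x55555555,  # alternating 0s and 1s
    0xAAAAAAAA,  # alternating 1s and 0s
    0x7A39D281,  # a random value
]

def random_vectors(count, width=32, seed=0):
    """Semi-random vectors to catch subtler errors (reproducible via the seed)."""
    rng = random.Random(seed)
    return [rng.getrandbits(width) for _ in range(count)]

test_vectors = DIRECTED_VECTORS + random_vectors(100)
print(f"{len(test_vectors)} vectors, first random one: {test_vectors[7]:#010x}")
```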
2. Testbenches and Harnesses:
• A verification test bench or harness is a piece of HDL code that is placed as a wrapper around a
core piece of HDL to apply and check test vectors.
• In the simplest test bench, input vectors are applied to the module under test and, at each cycle, the outputs are examined to determine whether they match a predefined expected data set.
• The expected outputs can be derived from the golden model and saved as a file or the value can
be computed on the fly.
• Simulators usually provide settable break points and single or multiple stepping abilities to allow
the designer to step through a test sequence while debugging discrepancies.
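In software form, the simplest test bench described above reduces to a loop that applies each vector and compares the output of the module under test with the golden model. The sketch below is only an illustration: dut and golden_model are stand-in callables, and in a real flow the expected outputs might instead be read from a file.

```python
def run_testbench(dut, golden_model, vectors):
    """Apply each input vector and compare the DUT output with the golden model.

    `dut` and `golden_model` are callables taking one input vector per cycle.
    """
    failures = []
    for cycle, vec in enumerate(vectors):
        expected = golden_model(vec)
        actual = dut(vec)
        if actual != expected:
            failures.append((cycle, vec, expected, actual))
    return failures

# Toy example: the "golden model" increments, the buggy DUT drops the top bit.
golden = lambda x: (x + 1) & 0xFFFFFFFF
buggy_dut = lambda x: (x + 1) & 0x7FFFFFFF
mismatches = run_testbench(buggy_dut, golden, [0x7FFFFFFF, 0xFFFFFFFF, 0x00000001])
for cycle, vec, exp, act in mismatches:
    print(f"cycle {cycle}: in={vec:#010x} expected={exp:#010x} got={act:#010x}")
```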
3. Regression Testing:
• Regression testing involves performing a large set of simulations to automatically verify that no
functionality has inadvertently changed in a module or set of modules.
• During a design, it is common practice to run a regression script every night after design
activities have concluded to check that bug fixes or feature enhancements have not broken
completed modules.
4. Version Control:
• Combined with regression testing is the use of versioning, that is, the orderly management of
different design iterations.
• Unix/Linux tools such as CVS or Subversion are useful for this.
5. Bug Tracking:
• Another important tool to use during verification (and in fact the whole design cycle) is a bug-
tracking system.
• Bug-tracking systems such as the Unix/Linux based GNATS allow the management of a wide
variety of bugs.
• In these systems, each bug is entered and the location, nature, and severity of the bug noted.
• The bug discoverer is noted, along with the perceived person responsible for fixing the bug.
Manufacturing Test Principles:
• Integrated circuits have a yield of less than 100%.
• The purpose of manufacturing test is to screen out most of the defective parts before they are shipped
to customers.
• Typical commercial products target a defect rate of 350–1000 defects per million (DPM) chips shipped.
• The customer then assembles systems from the chips, tests the systems, and discards or repairs
defective systems.
Fault Models:
• A fault model is a model of how faults occur and of their impact on circuits.
• The stuck-at model checks for single or multiple stuck-at faults in a gate-level or register-level netlist.
• The switch-level fault model checks for stuck-short or stuck-open faults at the transistor level.
• The most popular model is the stuck-at model.
Stuck-At Faults:
• In the stuck-at model, a line in the circuit that is permanently stuck at logic 0 is said to be stuck-at-0 (S-A-0), while a line permanently stuck at logic 1 is said to be stuck-at-1 (S-A-1).
• Figure 15.11 illustrates how an S-A-0 or S-A-1 fault might occur.
• A stuck-at fault can be at the input or output of a gate or module.
• These faults most frequently occur due to some physical failure such as a gate-oxide short (the nMOS gate to GND or the pMOS gate to VDD) or a metal-to-metal short.
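To make the stuck-at model concrete, the following sketch fault-simulates a tiny invented netlist: each candidate fault forces one node to 0 or 1, and a vector detects the fault if the output differs from the fault-free circuit. The netlist, node names and vector are assumptions made purely for illustration.

```python
def circuit(a, b, c, fault=None):
    """Tiny example netlist: n1 = NAND(a, b); y = NAND(n1, c).

    `fault` is a (node_name, stuck_value) pair, or None for the good circuit.
    """
    def node(name, value):
        # Stuck-at injection: override the value of the faulty node.
        if fault is not None and fault[0] == name:
            return fault[1]
        return value

    a = node("a", a)
    b = node("b", b)
    c = node("c", c)
    n1 = node("n1", 1 - (a & b))
    y = node("y", 1 - (n1 & c))
    return y

vector = (1, 1, 1)                      # apply a = b = c = 1
good_output = circuit(*vector)
for name in ("a", "b", "c", "n1", "y"):
    for stuck in (0, 1):
        if circuit(*vector, fault=(name, stuck)) != good_output:
            print(f"vector {vector} detects {name} stuck-at-{stuck}")
```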
Switch-level fault model:
There are two types of switch-level faults:
i) Stuck-open fault: a transistor never turns ON, i.e. its source and drain are never connected.
ii) Stuck-short fault: a transistor is always ON irrespective of its gate voltage, i.e. its source and drain are always connected.
The stuck-at fault model cannot detect switch-level faults because the circuit behaves differently in the presence of these faults.
Stuck-open fault:
• This is illustrated in Figure 15.13 for the case of a 2-input NOR gate in which one of the transistors is permanently non-conducting.
• If nMOS transistor A is stuck open, the gate output may depend on its previous state, i.e. it is possible for the fault to convert a combinational circuit into a sequential circuit:
Z = A̅·B̅ + Z′
where Z′ is the previous state of the gate.
• Typically, two test vectors applied in sequence (a 2-pattern test) are required to detect this fault.
• Consider transistor T1 stuck open.
• When A = B = 0, the output F = 1 in the absence of the fault.
• In the presence of the fault, the output floats and the voltage on F depends on the charge stored in the load capacitance.
• We apply two patterns: a) A = 1, B = 0 to initialize F = 0; b) A = 0, B = 0 to detect the fault.
• Hence, a pair of test vectors is required at the input to detect the fault.
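A behavioural sketch of the two-pattern test just described. The NOR gate is modelled at switch level so that the output retains its previous value when neither network conducts; consistent with the vectors in the text, T1 is taken here to be the pull-up transistor driven by A (an assumption made for the example).

```python
class NorGate:
    """2-input CMOS NOR modelled at switch level, with an optional
    stuck-open fault on the pull-up transistor driven by A (called T1 here)."""

    def __init__(self, t1_stuck_open=False):
        self.t1_stuck_open = t1_stuck_open
        self.out = 0                  # previous output (charge on the load)

    def evaluate(self, a, b):
        pull_down = (a == 1) or (b == 1)                             # parallel nMOS pair
        pull_up = (a == 0) and (b == 0) and not self.t1_stuck_open   # series pMOS pair
        if pull_down:
            self.out = 0
        elif pull_up:
            self.out = 1
        # else: the output floats and retains its previous value (stored charge)
        return self.out

good, faulty = NorGate(), NorGate(t1_stuck_open=True)
patterns = [(1, 0),   # pattern a): initialise F = 0 in both circuits
            (0, 0)]   # pattern b): good gate pulls F to 1; faulty gate floats at 0
for a, b in patterns:
    print(f"A={a} B={b}  good F={good.evaluate(a, b)}  faulty F={faulty.evaluate(a, b)}")
```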
Stuck-short fault:
• In the presence of this fault, a transistor is permanently shorted, i.e. always conducting.
• Both the pull-up and pull-down networks may then conduct simultaneously, causing the output to reach some indeterminate level.
Delay faults:
• Delay faults may be caused by crosstalk; they can also occur more often in SOI logic through the history effect.
• Software has been developed to model the effect of delay faults.
Scan Design:
• The scan-design strategy for testing has evolved to provide observability and controllability at each
register.
• In designs with scan, the registers operate in one of two modes i.e. normal mode and scan mode.
• In normal mode, they behave as expected.
• In scan mode, they are connected to form a giant shift register called a scan chain.
• By applying N clock pulses in scan mode, all N bits of state in the system can be shifted out and new N bits
of state can be shifted in. Therefore, scan mode gives easy observability and controllability of every
register in the system.
• Under scan design we shall look at both serial and parallel scan chains.
Serial scan:
• Modern scan is based on the use of scan registers, as shown in Figure 15.16.
• The scan register is a D flip-flop preceded by a multiplexer.
• When the SCAN signal is deasserted, the register behaves as a conventional register, storing data on
the D input.
• When SCAN is asserted, the data is loaded from the SI pin.
• To load the scan chain, SCAN is asserted and CLK is pulsed to load the registers with the scan data provided on the SI pin.
• SCAN is then deasserted and CLK is asserted for one cycle to operate the circuit normally with the predefined inputs.
• SCAN is then reasserted and CLK is pulsed to read the stored data out. At the same time, the new register contents can be shifted in for the next test.
• In this scheme, every input to the combinational block can be controlled and every output can be
observed.
• Test generation for this type of test architecture can be highly automated by using ATPG.
• The prime disadvantage is the area and delay impact of the extra multiplexer in the scan register.
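A behavioural sketch of the serial-scan procedure (shift the test state in, apply one normal clock, shift the response out); the register model, signal names and the stand-in combinational block are assumptions made for illustration.

```python
class ScanRegister:
    """D flip-flop preceded by a multiplexer: loads D when SCAN is deasserted,
    loads SI (the scan input) when SCAN is asserted."""

    def __init__(self):
        self.q = 0

    def clock(self, d, si, scan):
        self.q = si if scan else d
        return self.q

def scan_test(chain, scan_in_bits, comb_logic):
    """Shift a test state in, apply one normal clock, then return the
    captured state ready to be shifted out (behavioural model only)."""
    # 1) SCAN asserted: N clock pulses shift the new state in through SI.
    for bit in scan_in_bits:
        prev_q = [reg.q for reg in chain]          # values before the clock edge
        chain[0].clock(d=0, si=bit, scan=True)
        for i in range(1, len(chain)):
            chain[i].clock(d=0, si=prev_q[i - 1], scan=True)
    # 2) SCAN deasserted: one clock captures the combinational outputs.
    outputs = comb_logic([reg.q for reg in chain])
    for reg, d in zip(chain, outputs):
        reg.clock(d=d, si=0, scan=False)
    # 3) SCAN reasserted again would shift this captured response out.
    return [reg.q for reg in chain]

rotate_left = lambda bits: bits[1:] + bits[:1]     # stand-in combinational block
chain = [ScanRegister() for _ in range(4)]
print(scan_test(chain, scan_in_bits=[1, 0, 0, 0], comb_logic=rotate_left))
```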
Parallel Scan:
• Serial scan chains can become quite long, and the loading and unloading can dominate testing time.
• A simple idea is to split the chain into smaller segments, either on a module-by-module basis or by limiting chains to some specified scan length; in the limit, where each register is individually addressable, the approach is called random access scan.
• The basic idea is shown in Figure 15.17.
• The figure shows a two-by-two register section.
• Each register receives a column (column<m>) and row (row<n>) access signal along with a row data
line (data<n>).
• A global write signal (write) is connected to all registers.
• By asserting the row and column access signals in conjunction with the write signal, any register can be read or written in exactly the same way as a conventional RAM.
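A behavioural sketch of the row/column addressing just described, treating the register section like a small RAM; the class and signal names are assumptions made for illustration.

```python
class RandomAccessScan:
    """Behavioural sketch of the register section: any register is selected by
    its row and column lines and written with a global write signal, much like
    a conventional RAM (names invented for the example)."""

    def __init__(self, rows, cols):
        self.regs = [[0] * cols for _ in range(rows)]

    def access(self, row, col, write=False, data=None):
        if write:
            self.regs[row][col] = data      # row + column select, write asserted
        return self.regs[row][col]          # otherwise the register is read

section = RandomAccessScan(2, 2)            # the two-by-two section of Fig. 15.17
section.access(1, 0, write=True, data=1)
print(section.access(1, 0))                 # -> 1
```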
Signature analysis or cyclic redundancy checking:
• One method of testing a module is to use signature analysis or cyclic redundancy checking.
• This involves using a pseudo-random sequence generator (PRSG) to produce the input signals for a section of
combinational circuitry and a signature analyzer to observe the output signals.
• A PRSG of length n is constructed from a linear feedback shift register (LFSR), which in turn is made of n flip-
flops connected in a serial fashion, as shown in Figure 15.19(a).
• The XOR of particular outputs is fed back to the input of the LFSR.
• An n-bit LFSR will cycle through 2^n − 1 states before repeating the sequence.
• The characteristic polynomial indicates which bits are fed back.
• A complete feedback shift register (CFSR), shown in Figure 15.19(b), includes the zero state that may be
required in some test situations.
• An n-bit LFSR is converted to an n-bit CFSR by adding an n – 1 input NOR gate connected to all but the
last bit.
• When in state 0…01, the next state is 0…00. When in state 0…00, the next state is 10…0. Otherwise, the
sequence is the same.
• A signature analyzer receives successive outputs of a combinational logic block and produces a syndrome
that is a function of these outputs.
• The syndrome is reset to 0, and then XORed with the output on each cycle.
• At the end of a test sequence, the LFSR contains the syndrome that is a function of all previous outputs.
• This can be compared with the correct syndrome (derived by running a test program on the good logic)
to determine whether the circuit is good or bad.
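A minimal Python sketch of both halves of the scheme: an n-bit LFSR used as a pseudo-random sequence generator, and a signature analyser that folds each circuit output into an evolving syndrome. The tap positions, the toy combinational block and the simplified shift-and-XOR compactor (used here in place of a full LFSR-based analyser) are assumptions made for illustration.

```python
def lfsr_sequence(width, taps, seed=1, steps=None):
    """Generate states of a simple Fibonacci-style LFSR.

    `taps` are the bit positions XORed together to form the feedback bit;
    with a primitive polynomial the LFSR cycles through 2**width - 1 states.
    """
    state = seed
    for _ in range(steps if steps is not None else 2 ** width - 1):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def signature(outputs, width=4):
    """Signature analyser: start from 0 and fold each output into the syndrome."""
    syndrome = 0
    for out in outputs:
        syndrome = ((syndrome << 1) & ((1 << width) - 1)) ^ out
    return syndrome

# 4-bit LFSR with taps chosen to give a maximal-length (15-state) sequence.
states = list(lfsr_sequence(4, taps=[3, 2]))
print(len(set(states)))                     # -> 15 distinct non-zero states

# Feed the pseudo-random patterns into a toy combinational block and compress
# its responses into a syndrome; a faulty block would give a different value.
good_block = lambda x: (x ^ (x >> 1)) & 1
print(signature(good_block(s) for s in states))
```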
Built-In Self-Test (BIST)
• The combination of signature analysis and the scan technique creates a structure known as BIST—for Built-
In Self-Test or BILBO—for Built-In Logic Block Observation.
• The 3-bit BIST register shown in Figure 15.20 is a scannable, resettable register that can also serve as a pattern generator and signature analyser; C[1:0] specifies the mode of operation.
• In the reset mode (10), all the flip-flops are synchronously initialized to 0.
• In normal mode (11), the flip-flops behave normally with their D input and Q output.
• In scan mode (00), the flip-flops are configured as a 3-bit shift register between SI and SO.
• In test mode (01), the register behaves as a pseudo-random sequence generator or signature analyser.
• If all the D inputs are held low, the Q outputs loop through a pseudo-random bit sequence, which can
serve as the input to the combinational logic.
• If the D inputs are taken from the combinational logic output, they are mixed with the existing state to
produce the syndrome.
IDDQ testing:
• IDDQ testing measures the quiescent supply current, so any circuits that draw DC current, such as pseudo-nMOS gates or analog circuits, have to be disabled.
• As current measurement is slow, the tests must be run more slowly than normal (of the order of 1 ms per vector), which increases the test time.
• IDDQ testing can be completed externally to the chip by measuring the current drawn on the VDD line or
internally using specially constructed test circuits.