
1 a) With a neat diagram, explain ASIC design flow

The sequence of steps to design an ASIC is known as the design flow. The various steps involved in the ASIC design flow are given below.

1. Design entry: Design entry is the stage where the microarchitecture is implemented in a hardware description language (HDL) such as VHDL, Verilog, or SystemVerilog.

In the early days, a schematic editor was used for design entry, where designers instantiated gates. The increased complexity of current designs requires the use of HDLs to gain productivity. Another advantage is that HDLs are independent of process technology and hence can be reused over time.

2. Logic synthesis: Use an HDL (VHDL or Verilog) and a logic synthesis tool to produce a netlist, a description of the logic cells and their connections.

3. System partitioning: Divide a large system into ASIC-sized pieces.

4. Pre-layout simulation: Check to see if the design functions correctly.

5. Floor planning: Arrange the blocks of the netlist on the chip.


6. Placement: Decide the locations of cells in a block.

7. Routing: Make the connections between cells and blocks.

8. Extraction: Determine the resistance and capacitance of the interconnect.

9. Postlayout simulation: Check that the design still works with the added loads of the interconnect.

Steps 1–4 are part of logical design, and steps 5–9 are part of physical design. There is some overlap. For example, system partitioning might be considered as either logical or physical design. To put it another way, when we are performing system partitioning we have to consider both logical and physical factors. Chapters 9–14 of the textbook are largely about logical design and Chapters 15–17 largely about physical design.

1 c) Explain the following with relevant diagrams


a. Standard Cell based ASIC

A cell-based ASIC (cell-based IC, or CBIC pronounced sea-bick) uses predesigned logic cells
(AND gates, OR gates, multiplexers, and flip-flops, for example) known as standard cells.
• One can apply the term CBIC to any IC that uses cells, but it is generally accepted that a
cell-based ASIC or CBIC means a standard-cell based ASIC.
• The standard-cell areas (also called flexible blocks) in a CBIC are built of rows of standard cells, like a wall built of bricks. The standard-cell areas may be used in combination with larger predesigned cells such as microcontrollers or even microprocessors, known as megacells. Megacells are also called megafunctions, full-custom blocks, system-level macros (SLMs), fixed blocks, cores, or Functional Standard Blocks (FSBs).
• The figure shows a cell-based ASIC (CBIC) die with a single standard-cell area (a flexible block) together with four fixed blocks.
• The ASIC designer defines only the placement of the standard cells and the interconnect in
a CBIC. However, the standard cells can be placed anywhere on the silicon; this means that
all the mask layers of a CBIC are customized and are unique to a particular customer.
• The advantage of CBICs is that designers save time and money and reduce risk by using a predesigned, pretested, and precharacterized standard-cell library.
• In addition, each standard cell can be optimized individually: during the design of the cell library, every transistor in every standard cell can be chosen to maximize speed or minimize area.
• The disadvantages are the time or expense of designing or buying the standard-cell library
and the time needed to fabricate all layers of the ASIC for each new design.

b. Gate array based ASIC

• In a gate array (sometimes abbreviated GA) or gate-array based ASIC the transistors are
predefined on the silicon wafer.
• The predefined pattern of transistors on a gate array is the base array, and the smallest element that is replicated to make the base array is the base cell (sometimes called a primitive cell).

• Only the top few layers of metal, which define the interconnect between transistors, are defined by the designer using custom masks. To distinguish this type of gate array from other types of gate array, it is often called a masked gate array (MGA).
• The designer chooses from a gate-array library of predesigned and precharacterized logic cells.
• The logic cells in a gate-array library are often called macros. The reason for this is that the base-cell layout is the same for each logic cell, and only the interconnect (inside cells and between cells) is customized, which is similar to a software macro.

➢ Types of MGA or gate-array based ASICs

There are three types of gate-array based ASICs:

➢ Channeled gate arrays
➢ Channelless gate arrays
➢ Structured gate arrays

➢ Channeled gate arrays


• The channeled gate array was the first to be developed. In a channeled gate array, space is left between the rows of transistors for wiring.
• A channeled gate array is similar to a CBIC. Both use rows of cells separated by channels used for interconnect. One difference is that the space for interconnect between rows of cells is fixed in height in a channeled gate array, whereas the space between rows of cells may be adjusted in a CBIC.

➢ Channelless Gate Array

The channelless gate-array architecture is now more widely used. The routing on a channelless gate array uses rows of unused transistors.

• The key difference between a channelless gate array and a channeled gate array is that there are no predefined areas set aside for routing between cells on a channelless gate array. Instead we route over the top of the gate-array devices. We can do this because we customize the contact layer that defines the connections between metal 1, the first layer of metal, and the transistors.

• Features of Channelless Gate Arrays

• Only the interconnect is customized.
• The interconnect uses predefined spaces between rows of base cells.
• Manufacturing lead time is around two days to two weeks.
• When we use an area of transistors for routing in a channelless array, we do not make any contacts to the devices lying underneath; we simply leave the transistors unused.
➢ Structured Gate Array

This design combines some of the features of CBICs and MGAs. It is also known as an embedded gate array or structured gate array (also called a master slice or master image).

• One of the limitations of the MGA is the fixed gate-array base cell. This makes the implementation of memory, for example, difficult and inefficient.

• In an embedded gate array some of the IC area is set aside and dedicated to a specific function. This embedded area can either contain a different base cell that is more suitable for building memory cells, or it can contain a complete circuit block, such as a microcontroller.

➢ Features of Structured Gate Array


• Only the interconnect is customized.
• Custom blocks (the same for each design) can be embedded.
• Manufacturing lead time is between two days and two weeks.
• An embedded gate array gives the improved area efficiency and increased performance of a CBIC but with the lower cost and faster turnaround of an MGA.
• The disadvantage of an embedded gate array is that the embedded function is fixed.

2 a) Explain the working of a. Carry skip and b. Carry bypass adders with necessary
diagrams
➢ Carry Skip / Bypass Adder.
The problem with an RCA is that every stage has to wait to make its carry decision, C[ i ], until
the previous stage has calculated C[ i – 1]. If we examine the propagate signals we can bypass
this critical path. Thus, for example, to bypass the carries for bits 4–7 (stages 5–8) of an adder we can compute BYPASS = P[4]·P[5]·P[6]·P[7] and then use a MUX as follows:

C[7] = (G[7] + P[7]·C[6])·BYPASS' + C[3]·BYPASS (2.54)

Adders based on this principle are called carry-bypass adders ( CBA ) [Sato et al., 1992]. Large,
custom adders employ Manchester-carry chains to compute the carries and the bypass operation
using TGs or just pass transistors [Weste and Eshraghian, 1993, pp. 530–531]. These types of carry
chains may be part of a predesigned ASIC adder cell, but are not used by ASIC designers.

Instead of checking the propagate signals we can check the inputs. For example we can compute SKIP = (A[i–1] ⊕ B[i–1]) + (A[i] ⊕ B[i]) and then use a 2:1 MUX to select C[i]. Thus,

CSKIP[i] = (G[i] + P[i]·C[i–1])·SKIP' + C[i–2]·SKIP (2.55)

This is a carry-skip adder [Keutzer, Malik, and Saldanha, 1991; Lehman, 1961]. Carry-bypass and
carry-skip adders may include redundant logic (since the carry is computed in two different ways—
we just take the first signal to arrive). We must be careful that the redundant logic is not optimized
away during logic synthesis.
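As a rough illustration (not from the textbook), the bypass MUX of Eq. 2.54 can be sketched in Verilog as below; the module and signal names (g4–g7, p4–p7, c3, c7) are assumed for this example only.

module carry_bypass4 (input g4, g5, g6, g7,      // generate signals for bits 4-7
                      input p4, p5, p6, p7,      // propagate signals for bits 4-7
                      input c3,                  // carry into the group, C[3]
                      output c7);                // carry out of the group, C[7]
  wire bypass = p4 & p5 & p6 & p7;               // BYPASS = P[4].P[5].P[6].P[7]
  wire c4 = g4 | (p4 & c3);                      // ripple carries inside the group
  wire c5 = g5 | (p5 & c4);
  wire c6 = g6 | (p6 & c5);
  wire c7_ripple = g7 | (p7 & c6);
  assign c7 = bypass ? c3 : c7_ripple;           // MUX: skip the group when all bits propagate
endmodule

When BYPASS is false the carry ripples through the group exactly as in an RCA; when all four propagate signals are true, C[3] is steered straight to C[7] through the MUX.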

2 c) Explain I/O cells with a neat diagram


The figure shows a three-state bidirectional output buffer (Tri-State® is a registered trademark of National Semiconductor). When the output enable (OE) signal is high, the circuit functions as a noninverting buffer driving the value of DATAin onto the I/O pad. When OE is low, the output transistors, or drivers, M1 and M2, are disconnected. This allows multiple drivers to be connected on a bus. It is up to the designer to make sure that a bus never has two drivers active at the same time, a problem known as contention.

A three-state bidirectional output buffer. When the output enable, OE, is '1' the output section is
enabled and drives the I/O pad. When OE is '0' the output buffer is placed in a high- impedance
state.
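A minimal behavioral sketch of such a bidirectional buffer is given below; the port names (pad, data_in, data_out, oe) are assumed, and the drivers M1 and M2 of the figure are abstracted into a single conditional assignment.

module io_cell (inout  pad,            // the bonding pad
                input  data_in,        // DATAin, value to drive onto the pad
                input  oe,             // output enable
                output data_out);      // value received from the pad
  // When OE is 1 the output section drives the pad; when OE is 0 the driver is
  // put in a high-impedance state so other devices may drive the shared bus.
  assign pad = oe ? data_in : 1'bz;
  assign data_out = pad;               // the input path is always active
endmodule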

2 b) Define datapath and explain any 8 datapath elements with neat diagrams

The figure shows symbols for some other datapath elements. The combinational datapath cells, NAND, NOR, and so on, and the sequential datapath cells (flip-flops and latches) have standard-cell equivalents and function identically. A bold outline (1 point) is used for datapath cells instead of the regular (0.5 point) line used for scalar symbols. We call a set of identical cells a vector of datapath elements, in the same way that a bold symbol, A, represents a vector and A represents a scalar.
Symbols for datapath elements. (a) An array or vector of flip-flops (a register). (b) A two-input NAND
cell with databus inputs. (c) A two-input NAND cell with a control input. (d) A buswide MUX.

(e) An incrementer/decrementer. (f) An all-zeros detector. (g) An all-ones detector. (h) An adder/subtracter.

1. A subtracter is similar to an adder, except in a full subtracter we have a borrow-in signal, BIN; a borrow-out signal, BOUT; and a difference signal, DIFF:

DIFF = A ⊕ NOT(B) ⊕ NOT( BIN)

NOT(BOUT) = A · NOT(B) + A · NOT(BIN) + NOT(B) · NOT(BIN)

These equations are the same as those for the FA except that the B input is inverted and the sense of
the carry chain is inverted. To build a subtracter that calculates (A – B) we invert the entire B input
bus and connect the BIN[0] input to VDD (not to VSS as we did for CIN[0] in an adder). As an
example, to subtract B = '0011' from A = '1001' we calculate '1001' + '1100' + '1' = '0110'. As with an
adder, the true overflow is XOR(BOUT[MSB], BOUT[MSB – 1]).

2. An adder/subtracter has a control signal that gates the A input with an exclusive-OR cell (forming
a programmable inversion) to switch between an adder or subtracter. Some adder/subtracters gate
both inputs to allow us to compute (–A – B). We must be careful to connect the input to the LSB of
the carry chain (CIN[0] or BIN[0]) when changing between addition (connect to VSS) and subtraction
(connect to VDD).
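The following sketch (interface assumed, not from the textbook) follows the A – B convention of the subtracter description above: a control signal forms the programmable inversion of the B bus with XOR cells and also feeds the LSB of the carry chain.

module addsub #(parameter N = 4)
               (input  [N-1:0] a, b,
                input          sub,        // 0: Z = A + B, 1: Z = A - B
                output [N-1:0] z,
                output         cout);
  wire [N-1:0] b_gated = b ^ {N{sub}};     // XOR cells form the programmable inversion
  assign {cout, z} = a + b_gated + sub;    // carry-in: VSS (0) for add, VDD (1) for subtract
endmodule

With a = 4'b1001, b = 4'b0011, and sub = 1 this computes '1001' + '1100' + '1' = '0110', matching the worked example above.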

3. A barrel shifter rotates or shifts an input bus by a specified amount. For example, if we have an eight-input barrel shifter with input '1111 0000' and we specify a shift of '0001 0000' (3, coded by bit position) the right-shifted 8-bit output is '0001 1110'. A barrel shifter may rotate left or right (or
switch between the two under a separate control). A barrel shifter may also have an

output width that is smaller than the input. To use a simple example, we may have an 8-bit input and
a 4-bit output.

This situation is equivalent to having a barrel shifter with two 4-bit inputs and a 4-bit output.
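A rough sketch of an 8-bit right-rotating barrel shifter is shown below; here the rotate amount is binary encoded (3 bits) rather than coded by bit position as in the example above, and all names are assumed.

module barrel_rot8 (input  [7:0] a,
                    input  [2:0] amount,   // rotate right by 0 to 7 places
                    output [7:0] z);
  wire [15:0] dbl = {a, a};                // duplicate the input ...
  assign z = dbl >> amount;                // ... so the low 8 bits of the shift are a rotation
endmodule

With a = '1111 0000' and amount = 3 the output is '0001 1110', the value quoted above.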

4. A leading-one detector is used with a normalizing (left-shift) barrel shifter to align mantissas in floating-point numbers. The input is an n-bit bus A; the output is an n-bit bus, S, with a single '1' in the bit position corresponding to the most significant '1' in the input. Thus, for example, if the input is A = '0000 0101' the leading-one detector output is S = '0000 0100', indicating the leading one in A is in bit position 2 (bit 7 is the MSB, bit zero is the LSB).

5. The output of a priority encoder is the binary-encoded position of the leading one in an input. For example, with an input A = '0000 0101' the leading 1 is in bit position 2 (MSB is bit position 7), so the output of a 4-bit priority encoder would be Z = '0010' (2). In some cell libraries the encoding is reversed so that the MSB has an output code of zero; in this case Z = '0101' (5). This second, reversed, encoding scheme is useful in floating-point arithmetic. If A is a mantissa and we normalize A to '1010 0000' we have to subtract 5 from the exponent; this exponent correction is equal to the output of the priority encoder.
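A combined sketch of a leading-one detector and priority encoder (names assumed, bit 7 = MSB as above) might look like this:

module lead_one8 (input  [7:0] a,
                  output reg [7:0] s,   // one-hot position of the most significant '1'
                  output reg [2:0] z);  // binary-encoded position (priority encoder output)
  integer i;
  always @* begin
    s = 8'b0;
    z = 3'b0;
    for (i = 7; i >= 0; i = i - 1)      // scan from the MSB down, keep the first '1' found
      if (a[i] && s == 8'b0) begin
        s = 8'b1 << i;                  // e.g. a = '0000 0101' gives s = '0000 0100'
        z = i[2:0];                     //                      and z = 2
      end
  end
endmodule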
6. An accumulator is an adder/subtracter and a register. Sometimes these are combined with a multiplier to form a multiplier–accumulator (MAC). An incrementer adds 1 to the input bus, Z = A + 1, so we can use this function, together with a register, to negate a two's complement number, for example. The implementation is Z[i] = XOR(A[i], CIN[i]) and COUT[i] = AND(A[i], CIN[i]). The carry-in control input, CIN[0], thus acts as an enable: if it is set to '0' the output is the same as the input.

An alternative implementation uses inverting (NAND/NOR) stages in the carry chain; the even-numbered stages compute

Z[i (even)] = XOR(A[i], CIN[i]) and COUT[i (even)] = NAND(A[i], CIN[i]).

This inverts COUT, so that in the following stage we must invert it again. If we push an inverting bubble to the input CIN we find that

Z[i (odd)] = XNOR(A[i], CIN[i]) and COUT[i (odd)] = NOR(NOT(A[i]), CIN[i]).
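A direct (non-inverting) sketch of the incrementer equations above, with CIN[0] acting as the enable, is given below; the generate-loop style is just one assumed way to write it.

module incr #(parameter N = 8)
             (input  [N-1:0] a,
              input          cin0,       // enable: 1 gives Z = A + 1, 0 gives Z = A
              output [N-1:0] z);
  wire [N:0] c;
  assign c[0] = cin0;
  genvar i;
  generate
    for (i = 0; i < N; i = i + 1) begin : stage
      assign z[i]   = a[i] ^ c[i];       // Z[i]    = XOR(A[i], CIN[i])
      assign c[i+1] = a[i] & c[i];       // COUT[i] = AND(A[i], CIN[i])
    end
  endgenerate
endmodule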

7. A register file is a bank of flip-flops arranged across the bus; sometimes these have the option of multiple ports (multiport register files) for read and write. Normally these register files are the densest logic and the hardest to fit in a datapath. For large register files it may be more appropriate to use a multiport memory. We can add control logic to a register file to create a first-in first-out register (FIFO) or a last-in first-out register (LIFO).

3 a) Explain the following: goals and objectives of 1. Floor planning, 2. Placement, and 3. Routing

1. Floor planning
2. Placement
3. Routing
3b)Explain the global routing methods in an ASIC physical design

Global routing methods:

1. Sequential routing: Take each net in turn and calculate the shortest path using tree algorithms, with the added restriction of using the available channels (each net, i.e., each connection between pins or components, is processed one by one).

2. Order-independent routing: Route each net ignoring how crowded the channels are (this approach routes each net without considering the current occupancy or congestion of the channels).

3. Order-dependent routing: The routing is sequential, but the nets are processed in a chosen order.

• In order-dependent routing, the sequence in which nets are processed affects the final routing result.

• The quality of routing can vary significantly depending on this order; thus, careful selection of the routing order is crucial.

4. Hierarchical routing: Routing that handles all nets at a particular level at once.

• This method is often more efficient for larger or complex designs since it divides the routing problem into manageable chunks, improving scalability and performance.
Physical design flow


3c) Explain the following in ASIC floor plan 1. Clock planning 2. Power planning in brief

1. Clock planning
2. Power planning in brief

4a) Explain the following placement algorithms: 1. Min-cut placement 2. Iterative placement
improvement

1. Min-cut placement
2. Iterative placement improvement

4b) Explain global routing between blocks with neat diagram


Back Annotation:

• Once a circuit layout is completed, the actual lengths of interconnections and the parasitics of routing channels can be extracted and annotated back into the timing simulation.

• This provides a more accurate picture of how the design will perform after fabrication.

• It is a critical step for validating that the design meets the required specifications before moving on to manufacturing.

4c) Explain the concept of measurement of delay in floor planning


5a) Explain the verification process of SystemVerilog
5b) Explain the different types of array methods used in unpacked arrays
• Dynamic Arrays
The basic Verilog array type shown so far is known as a fixed-size array, as its size is set at compile time. But
what if you do not know the size of the
array until run-time? You may choose the number of transactions randomly between 1000 and 100,000, but
you do not want to use a fixed-size array that would be half empty. SystemVerilog provides a dynamic array
that can be allocated and resized during simulation. A dynamic array is declared with empty word subscripts
[]. This means that you do not want to give an array size at compile time; instead, you specify
it at run-time. The array is initially empty, so you must call the new[] operator to allocate space, passing in the
number of entries in the square brackets. If you pass the name of an array to the new[] operator, the values are
copied into the new elements.
Example 2-15 Using dynamic arrays
int dyn[], d2[]; // Empty dynamic arrays
initial begin
dyn = new[5]; // Allocate 5 elements
foreach (dyn[j])
dyn[j] = j; // Initialize the elements
d2 = dyn; // Copy a dynamic array
d2[0] = 5; // Modify the copy
$display(dyn[0],d2[0]); // See both values (0 & 5)
dyn = new[20](dyn); // Expand and copy
dyn = new[100]; // Allocate 100 new integers
// Old values are lost
dyn.delete; // Delete all elements
end
The $size function returns the size of a fixed-size or dynamic array. Dynamic arrays have several specialized routines, such as delete and size. The latter function returns the size, but does not work with fixed-size arrays. If you want to declare a constant array of values but do not want to bother counting the number of elements, use a dynamic array with an array literal.
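For instance (a small assumed example), the element count is inferred from the literal:

int mask[] = '{8'h01, 8'h03, 8'h07, 8'h0F}; // dynamic array sized (4 entries) by the array literal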
• Queues

SystemVerilog introduces a new data type, the queue, which provides easy searching and sorting in a structure
that is as fast as a fixed-size array but as versatile as a linked list. Like a dynamic array, queues can grow and
shrink, but with a queue you can easily add and remove elements anywhere. Example 2-17 adds and removes
values from a queue.
Example 2-17 Queue operations
int j = 1,
b[$] = {3,4},
q[$] = {0,2,5}; // {0,2,5} Initial queue
initial begin
q.insert(1, j); // {0,1,2,5} Insert 1 before 2
q.insert(3, b); // {0,1,2,3,4,5} Insert whole q.
q.delete(1); // {0,2,3,4,5} Delete elem. #1
// The rest of these are fast
q.push_front(6); // {6,0,2,3,4,5} Insert at front
j = q.pop_back; // {6,0,2,3,4} j = 5
q.push_back(8); // {6,0,2,3,4,8} Insert at back
j = q.pop_front; // {0,2,3,4,8} j = 6
foreach (q[i])
$display(q[i]);
end
When you create a queue, SystemVerilog actually allocates extra space so you can quickly add extra elements.
Note that you do not need to call the new[] operator for a queue. If you add enough elements so that the queue
runs out of space, SystemVerilog automatically allocates additional space. As a result, you can grow and
shrink a queue without the performance penalty of a dynamic array. It is very efficient to push and pop
elements from the front and back of a queue, taking a fixed amount of time no matter how large the queue.
Adding and deleting elements in the middle is slower, especially for larger queues, as
SystemVerilog has to shift up to half of the elements. You can copy the contents of a fixed or dynamic array
into a queue.
• Associative Arrays
Dynamic arrays are good if you want to occasionally create a large array, but what if you want something
really huge? Perhaps you are modeling a processor that has a multi-gigabyte address range. During a typical
test, the processor may only touch a few hundred or thousand memory locations containing
executable code and data, so allocating and initializing gigabytes of storage is wasteful.
SystemVerilog offers associative arrays that store entries in a sparse matrix. This means that while you can
address a very large address space, SystemVerilog only allocates memory for an element when you write to it.
For example, an associative array might hold entries only at the widely scattered indexes 0–3, 42, 1000, 4521, and 200,000. The memory used to store these is far less than would be needed to store a fixed or dynamic array with 200,000 entries.
Example 2-18 Declaring, initializing, and using associative arrays
initial begin
logic [63:0] assoc[*], idx = 1;
// Initialize widely scattered values
repeat (64) begin
assoc[idx] = idx;
idx = idx << 1;
end
// Step through all index values with foreach
foreach (assoc[i])
$display("assoc[%h] = %h", i, assoc[i]);
// Step through all index values with functions
if (assoc.first(idx))
begin // Get first index
do
$display("assoc[%h]=%h", idx, assoc[idx]);
while (assoc.next(idx)); // Get next index
end
// Find and delete the first element
assoc.first(idx);
assoc.delete(idx);
end

5c) Write a short note on built-in data types of SystemVerilog with examples

Built-in Data Types


Verilog-1995 has two basic data types: variables (reg) and nets, which hold four-state values: 0, 1, Z, and X. RTL code uses variables to store combinational and sequential values. Variables can be unsigned single- or multi-bit (reg [7:0] m), signed 32-bit (integer), unsigned 64-bit (time), or floating-point (real). Variables can be grouped together into arrays that have a fixed size. All storage is static, meaning that all variables are alive for the entire simulation and routines cannot use a stack to hold arguments and local values. A net is used to connect parts of a design such as gate primitives and module instances. Nets come in many flavors, but most designers use scalar and vector wires to connect together the ports of design blocks. SystemVerilog adds many new data types to help both hardware designers and verification engineers.
Example 2-1 Using the logic type
module logic_data_type(input logic rst_h);
parameter CYCLE = 20;
logic q, q_l, d, clk, rst_l;
initial begin
clk <= 0; // Procedural assignment
forever #(CYCLE/2) clk = ~clk;
end
assign rst_l = ~rst_h; // Continuous assignment
not n1(q_l, q); // q_l is driven by gate
my_dff d1(q, d, clk, rst_l); // d is driven by module
endmodule

6b) Explain the various testbench components

Testbench Components
In simulation, the testbench wraps around the DUT, just as a hardware tester connects to a physical chip.
Both the testbench and tester provide stimulus and capture responses. The difference between them is that
your testbench needs to work over a wide range of levels of abstraction, creating transactions and
sequences, which are eventually transformed into bit vectors. A tester just works at the bit level.

The testbench block is made of many bus functional models (BFMs), which you can think of as testbench components: to the DUT they look like real components, but they are part of the testbench, not the RTL. If the real device connects to AMBA, USB, PCI, and SPI buses, you have to build equivalent components in your testbench that can generate stimulus and check the response. These are not detailed, synthesizable models but rather high-level transactors that obey the protocol yet execute more quickly. If you are prototyping using FPGAs or emulation, however, the BFMs do need to be synthesizable.
Layered Testbench
A key concept for any modern verification methodology is the layered testbench. While this process may
seem to make the testbench more complex, it actually helps to make your task easier by dividing the code
into smaller pieces that can be developed separately. Don’t try to write a single routine that can randomly
generate all types of stimulus, both legal and illegal, plus inject errors with a multi-layer protocol. The routine
quickly becomes complex and unmaintainable.
Flat testbench
When you first learned Verilog and started writing tests, they probably looked like the following low-level
code that does a simplified APB (AMBA Peripheral Bus) Write. (VHDL users may have written similar code.)
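The low-level code referred to above is not reproduced in these notes; the fragment below is only a guess at its shape (APB signal names psel, penable, pwrite, paddr, pwdata are assumed), showing a flat testbench that wiggles pins directly with no transactions or layers.

module flat_apb_tb;
  logic        pclk = 0, psel = 0, penable = 0, pwrite = 0;
  logic [31:0] paddr, pwdata;

  always #5 pclk = ~pclk;                     // free-running clock

  task automatic apb_write(input logic [31:0] addr, data);
    @(posedge pclk);
    psel    <= 1; pwrite <= 1;                // setup phase
    paddr   <= addr; pwdata <= data;
    @(posedge pclk);
    penable <= 1;                             // enable phase
    @(posedge pclk);
    psel    <= 0; penable <= 0;               // back to idle
  endtask

  initial begin
    apb_write(32'h0000_0010, 32'h1234_5678);  // one hard-coded write, typical of a flat test
    $finish;
  end
endmodule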

6c) Explain constants and strings in SystemVerilog with examples

Strings
• The SystemVerilog string type holds variable-length strings. An individual character is of type byte. The elements of a string of length N are numbered 0 to N-1.
• Note that, unlike C, there is no null character at the end of a string, and any attempt to use the character "\0" is ignored.
• Strings use dynamic memory allocation, so you do not have to worry about running out of space to store the string.
• The example below shows various string operations.
• The function getc(N) returns the byte at location N, while toupper returns an upper-case copy of the string and tolower returns a lowercase copy.
• The curly braces {} are used for concatenation.
• The task putc(M) writes a byte into a string at location M, which must be between 0 and the length as given by len.
• The substr(start,end) function extracts characters from location start to end.
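The question also asks about constants: a const variable (or a parameter) cannot be modified after initialization. The example below is a small sketch of both the const qualifier and the string routines listed above.

module string_demo;
  const int MAX_LEN = 8;            // constant: any later assignment is an error
  string s;
  initial begin
    s = "IEEE ";
    $display(s.getc(0));            // 73, the byte 'I' at location 0
    $display(s.toupper());          // "IEEE "
    $display(s.tolower());          // "ieee "
    s.putc(s.len()-1, "-");         // overwrite the trailing blank: "IEEE-"
    s = {s, "P1800"};               // concatenation with {}: "IEEE-P1800"
    $display(s.substr(2, 5));       // "EE-P", locations 2 through 5
    $display(MAX_LEN);
  end
endmodule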


9a) Write about the common randomization problems in SystemVerilog

The random stimulus generation in SystemVerilog is most useful when used with OOP. You first create a class to hold a group of related random variables, and then have the random solver fill them with random values. You can create constraints to limit the random values to legal values, or to test-specific features. Note that you can randomize individual variables, but this case is the least interesting. True constrained-random stimulus is created at the transaction level, not one value at a time.
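A minimal sketch of this style (class, variable, and constraint names assumed) is shown below: the class groups the related random variables, and randomize() asks the solver to fill them in subject to the constraints.

class Transaction;
  rand bit [31:0] addr;
  rand bit  [7:0] len;
  constraint c_legal { addr[1:0] == 0;          // keep addresses word aligned
                       len inside {[1:64]}; }   // limit bursts to legal lengths
endclass

module rand_demo;
  Transaction tr = new();
  initial begin
    repeat (3) begin
      if (!tr.randomize())                      // the solver fills addr and len
        $display("randomize() failed");
      $display("addr=%h len=%0d", tr.addr, tr.len);
    end
  end
endmodule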

9b) List the various coverage types in SystemVerilog and explain them
➢ Code Coverage
The easiest way to measure verification progress is with code coverage. Here you are measuring how many lines of code have been executed (line coverage), which paths through the code and expressions have been executed (path coverage), which single-bit variables have had the values 0 or 1 (toggle coverage), and which states and transitions in a state machine have been visited (FSM coverage). You don't have to write any extra HDL code. The tool instruments your design automatically by analyzing the source code and adding hidden code to gather statistics. You then run all your tests, and the code coverage tool creates a database. Many simulators include a code coverage tool. A postprocessing tool converts the database into a readable form.

The end result is a measure of how much your tests exercise the design code. Note that you are primarily concerned with analyzing the design code, not the testbench. Untested design code could conceal a hardware bug, or may be just redundant code. Code coverage measures how thoroughly your tests exercised the "implementation" of the design specification, and not the verification plan. Just because your tests have reached 100% code coverage, your job is not done. What if you made a mistake that your test didn't catch? Worse yet, what if your implementation is missing a feature? The following module is for a D flip-flop. Can you see the mistake?
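The flip-flop module itself is not included in these notes; the sketch below (assumed code, not the book's exact example) shows the kind of mistake being hinted at: reset_l is in the sensitivity list but never used, so the reset feature is simply missing, yet line and toggle coverage can still reach 100%.

module dff (output reg q, q_l,
            input      clk, d, reset_l);
  always @(posedge clk or negedge reset_l) begin
    q   <= d;        // mistake: there is no "if (!reset_l)" branch, so q is never reset
    q_l <= ~d;
  end
endmodule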
➢ Functional Coverage
The goal of verification is to ensure that a design behaves correctly in its real environment, be that an
MP3 player, network router, or cell phone. The design specification details how the device should
operate, whereas the verification plan lists how that functionality is to be stimulated, verified, and
measured. When you gather measurements on what functions were covered, you are performing
"design" coverage. For example, the verification plan for a D-flip flop would mention not only its
data storage but also how it resets to a known state. Until your test checks both these design features,
you will not have 100% functional coverage. Functional coverage is tied to the design intent and is
sometimes called "specification coverage," while code coverage measures the design implementation.
Consider what happens if a block of code is missing from the design. Code coverage cannot catch
this mistake, but functional coverage can.
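In SystemVerilog, functional coverage points from the verification plan are usually written as covergroups. A small assumed sketch for the D flip-flop discussed above, covering both the data-storage and reset features, could look like this:

module dff_cov (input logic clk, d, reset_l);
  covergroup cg @(posedge clk);
    cp_data  : coverpoint d;              // were both 0 and 1 ever stored?
    cp_reset : coverpoint reset_l {
      bins asserted   = {0};              // was the active-low reset ever applied?
      bins deasserted = {1};
    }
  endgroup
  cg cov = new();                         // instantiate the cover group so sampling starts
endmodule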
➢ Bug Rate
An indirect way to measure coverage is to look at the rate at which fresh bugs are found. You should
keep track of how many bugs you found each week, over the life of a project. At the start, you may find
many bugs through inspection as you create the testbench. As you read the design spec, you may find
inconsistencies, which hopefully are fixed before the RTL is written. Once the testbench is up and
running, a torrent of bugs is found as you check each module in the system. The bug rate drops, hopefully
to zero, as the design nears tape-out. However, you are not yet done. Every time the rate sags, it is time to find different ways to create corner cases.

9c) Write a short note on measuring coverage statistics during simulation


Measuring Coverage Statistics During Simulation

You can query the level of functional coverage on the fly during simulation. This allows you to check whether you have reached your coverage goals, and possibly to control a random test. At the global level, you can get the total coverage of all cover groups with $get_coverage, which returns a real number between 0 and 100. This system task looks across all cover groups. You can narrow down your measurements with the get_coverage() and get_inst_coverage() methods. The first function works with both cover group names and instances to give coverage across all instances of a cover group, for example, CoverGroup::get_coverage() or cgInst.get_coverage(). The second function returns coverage for a specific cover group instance, for example cgInst.get_inst_coverage(). You need to specify option.per_instance = 1 if you want to gather per-instance coverage.

The most practical use for these functions is to monitor coverage over a long test. If the coverage level does not advance after a given number of transactions or cycles, the test should stop. Hopefully, another seed or test will increase the coverage. While it would be nice to have a test that can perform some sophisticated actions based on functional coverage results, it is very hard to write this sort of test. Each test + random seed pair may uncover new functionality, but it may take many runs to reach a goal. If a test finds that it has not reached 100% coverage, what should it do? Run for more cycles? How many more? Should it change the stimulus being generated? How can you correlate a change in the input with the level of functional coverage? The one reliable thing to change is the random seed, which you should only do once per simulation. Otherwise, how can you reproduce a design bug if the stimulus depends on multiple random seeds?

You can query the functional coverage statistics if you want to create your own coverage database. Verification teams have built their own SQL databases that are fed functional coverage data from simulation. This setup allows them greater control over the data, but requires a lot of work outside of creating tests. Some formal verification tools can extract the state of a design and then create input stimulus to reach all possible states. Don't try to duplicate this in your testbench!
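A short assumed sketch of querying coverage on the fly is shown below; $get_coverage and the covergroup's built-in sample()/get_coverage() methods are real constructs, while the surrounding test, names, and 95% goal are invented for illustration.

module cov_query_demo;
  bit [2:0] val;
  covergroup cg;
    cp_val : coverpoint val;
  endgroup
  cg cov = new();

  initial begin
    repeat (1000) begin
      val = $urandom();
      cov.sample();
      if ($get_coverage() >= 95.0) begin        // total coverage over all cover groups (0 to 100)
        $display("coverage goal reached: %f", $get_coverage());
        break;                                   // stop the test early once the goal is met
      end
    end
  end
endmodule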
