
Coverage types:

Coverage is a generic term for measuring progress to complete design verification. The
coverage tools gather information during a simulation and then postprocess it to produce a
coverage report. You can use this report to look for coverage holes and then modify existing
tests or create new ones to fill the holes. This iterative process continues until you are
satisfied with the coverage level.

Code Coverage:

The easiest way to measure verification progress is with code coverage. Here you are
measuring how many lines of code have been executed (line coverage), which paths through
the code and expressions have been executed (path coverage), which single-bit variables have
had the values 0 or 1 (toggle coverage), and which states and transitions in a state machine
have been visited (FSM coverage). You don't have to write any extra HDL code. The tool
instruments your design automatically by analyzing the source code and adding hidden code
to gather statistics. You then run all your tests, and the code coverage tool creates a
database.

Many simulators include a code coverage tool. A postprocessing tool converts the database
into a readable form. The end result is a measure of how much your tests exercise the design
code. Note that you are primarily concerned with analyzing the design code, not the
testbench. Untested design code could conceal a hardware bug, or may be just redundant
code.

Code coverage measures how thoroughly your tests exercise the "implementation" of the
design specification, not the verification plan. Even if your tests reach 100%
code coverage, your job is not done. What if you made a mistake that your test didn't catch?
Worse yet, what if your implementation is missing a feature? The following module is for a D
flip-flop. Can you see the mistake?

Incomplete D flip-flop model missing a path

module dff(output logic q, q_1,
           input  logic clk, d, reset_1);
  always @(posedge clk or negedge reset_1) begin
    q   <= d;
    q_1 <= !d;
  end
endmodule
The reset logic was accidentally left out. A code coverage tool would report that every line had
been exercised, yet the model was not implemented correctly.
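For reference, a sketch of the corrected model restores the reset branch that the sensitivity list implies; the reset values chosen here are assumptions, as the original text does not give them.

```systemverilog
// Corrected sketch: the asynchronous active-low reset path is restored
// so the sensitivity list and the body of the always block agree.
// Reset values (q <= 0, q_1 <= 1) are assumed, not from the original.
module dff(output logic q, q_1,
           input  logic clk, d, reset_1);
  always @(posedge clk or negedge reset_1)
    if (!reset_1) begin
      q   <= 0;
      q_1 <= 1;
    end
    else begin
      q   <= d;
      q_1 <= !d;
    end
endmodule
```

With the reset branch present, toggle and branch coverage would now expose tests that never exercise reset, which the buggy version could not reveal.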

Functional Coverage:

Functional coverage is tied to the design intent and is sometimes called "specification
coverage," while code coverage measures the design implementation. Consider what happens
if a block of code is missing from the design. Code coverage cannot catch this mistake, but
functional coverage can.
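The mechanism behind functional coverage in SystemVerilog is the covergroup, which samples values and records which user-defined bins were hit. The following is a minimal illustrative sketch; the transaction fields, bin names, and ranges are assumptions, not from the original text.

```systemverilog
// Hypothetical functional coverage model for a simple bus transaction.
class Transaction;
  rand bit [1:0] kind;   // 0 = read, 1 = write, 2 = idle (assumed encoding)
  rand bit [7:0] addr;
endclass

class Coverage;
  Transaction tr;

  covergroup cg;
    kind_cp: coverpoint tr.kind {
      bins read  = {0};
      bins write = {1};
      bins idle  = {2};
    }
    addr_cp: coverpoint tr.addr {
      bins low  = {[0:127]};
      bins high = {[128:255]};
    }
    // Cross coverage: every kind must be seen in every address range
    kind_x_addr: cross kind_cp, addr_cp;
  endgroup

  function new();
    cg = new();
  endfunction

  function void sample(Transaction t);
    tr = t;
    cg.sample();
  endfunction
endclass
```

Because the bins come from the verification plan rather than the RTL, a missing write path to high addresses shows up as an empty bin even when every line of the implementation has been executed.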

Bug Rate:

An indirect way to measure coverage is to look at the rate at which fresh bugs are found. You
should keep track of how many bugs you found each week, over the life of a project. At the
start, you may find many bugs through inspection as you create the testbench. As you read
the design spec, you may find inconsistencies, which hopefully are fixed before the RTL is
written. Once the testbench is up and running, a torrent of bugs is found as you check each
module in the system. The bug rate drops, hopefully to zero, as the design nears tape-out.
However, you are not yet done. Every time the rate sags, it is time to find different ways to
create corner cases.

The bug rate can vary per week based on many factors such as project phases, recent design
changes, blocks being integrated, personnel changes, and even vacation schedules.
Unexpected changes in the rate could signal a potential problem. As shown in Figure 9-3, it is
not uncommon to keep finding bugs even after tape-out and even after the design ships to
customers.
Assertion Coverage:

Assertions are pieces of declarative code that check the relationships between design signals,
either once or over a period of time. These can be simulated along with the design and
testbench, or proven by formal tools. Sometimes you can write the equivalent check using
System verilog procedural code, but many assertions are more easily expressed using System
verilog Assertions (SVA).

Assertions can have local variables and perform simple data checking. The most familiar
assertions look for errors such as two signals that should be mutually exclusive or a request
that was never followed by a grant. These error checks should stop the simulation as soon as
they detect a problem. Assertions can also check arbitration algorithms, FIFOs, and other
hardware. These are coded with the assert property statement.
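The two familiar checks mentioned above can be sketched with assert property as follows; the signal names (req, grant, sel_a, sel_b) and the 1-to-3-cycle window are illustrative assumptions.

```systemverilog
// Hypothetical checker module bound to a design under test.
module arb_check(input logic clk, req, grant, sel_a, sel_b);

  // Request must be followed by a grant within 1 to 3 cycles
  // (the window is an assumption for illustration).
  property req_then_grant;
    @(posedge clk) req |-> ##[1:3] grant;
  endproperty
  assert property (req_then_grant)
    else $error("grant did not follow req within 3 cycles");

  // Two select signals that should be mutually exclusive.
  assert property (@(posedge clk) !(sel_a && sel_b))
    else $error("sel_a and sel_b asserted simultaneously");

endmodule
```

Each assert property both stops the simulation on failure and contributes to assertion coverage, since the tool records how often each assertion was attempted and passed.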
