
MODULE 3

Verification Guidelines and Data Types


Module 3
• Verification Guidelines: The verification process, basic test bench
functionality, directed testing, methodology basics, constrained
random stimulus, randomization, functional coverage, test bench
components, layered testbench.
• Data Types: Built in Data types, fixed and dynamic arrays, Queues,
associative arrays, linked lists, array methods, choosing a type,
creating new types with typedef, creating user defined structures,
type conversion, Enumerated types, constants and strings, Expression
width (Text Book 2)
Introduction
• Verification is the process of testing a design to confirm that it behaves as intended, before the complete system is manufactured.
• This module deals with a set of guidelines and coding styles for designing and constructing a testbench that meets particular needs.
• Features of the SystemVerilog Hardware Verification Language (HVL):
• Constrained-random stimulus generation
• Functional coverage
• Higher-level structures, especially OOP
• Multi-threading and interprocess communication
• Support for HDL types such as 4-state values (0, 1, X, Z)
Verification process
• The goal of hardware design is to create a device that performs a
particular task.
• The purpose of the verification engineer is to make sure the device can accomplish that task successfully; that is, that the design is an accurate representation of the specification.
• The process of verification parallels the design creation process.
• A designer reads the hardware specification for a block, interprets the
human language description, and creates the corresponding logic in a
machine-readable form, usually RTL code.
• To do this, he or she needs to understand the input format, the
transformation function, and the format of the output.
• As a verification engineer, you must also read the hardware
specification, create the verification plan, and then follow it to build
tests showing the RTL code correctly implements the features.
• What types of bugs are lurking in the design?
• The easiest ones to detect are at the block level, in modules created
by a single person.
• After the block level, the next place to look for discrepancies is at
boundaries between the blocks.
• The first designer builds a bus driver with one view of the
specification, while a second builds a receiver with a slightly different
view.
• Your job is to find the disputed areas of logic and maybe even help
reconcile these two different views.
• To simulate a single design block, you need to create tests that
generate stimuli from all the surrounding blocks.
• As you start to integrate design blocks, they can stimulate each other,
reducing your workload. These multiple block simulations may
uncover more bugs, but they also run slower.
• At the highest level of the DUT, the entire system is tested, but the
simulation performance is greatly reduced. Your tests should strive to
have all blocks performing interesting activities concurrently.
• Once you have verified that the DUT performs its designated
functions correctly, you need to see how it operates when there are
errors. Can the design handle a partial transaction, or one with
corrupted data or control fields?
• As the design abstraction gets higher, so does the verification
challenge. You can never prove there are no bugs left, so you need to
constantly come up with new verification tactics.
Basic Test bench functionality
• The purpose of a testbench is to determine the correctness of the
Design Under Test (DUT).
• This is accomplished by the following steps.
• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals
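
Below is a minimal sketch of these five steps; the DUT is modeled by a simple pass-through assignment, and names such as in_data and out_data are illustrative, not from any real design.

```systemverilog
// Minimal sketch of the five steps. The DUT is modeled by a simple
// pass-through assignment; in_data/out_data are illustrative names.
module tb;
  logic clk = 0;
  logic [7:0] in_data, out_data;
  int errors = 0;

  always #5 clk = ~clk;               // free-running clock
  assign out_data = in_data;          // stand-in for a real DUT instance

  initial begin
    repeat (10) begin
      in_data = $urandom;             // 1. generate stimulus
      @(posedge clk);                 // 2. apply stimulus to the DUT
      #1;                             // 3. capture the response
      if (out_data !== in_data) begin // 4. check for correctness
        errors++;
        $error("Mismatch at %0t", $time);
      end
    end
    $display("%0d errors found", errors); // 5. measure progress vs. goals
    $finish;
  end
endmodule
```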
Directed Testing
• Traditionally, when faced with the task of verifying the correctness of a
design, you may have used directed tests.
• Using this approach, you look at the hardware specification and write a
verification plan with a list of tests, each of which concentrated on a set of
related features.
• Armed with this plan, you write stimulus vectors that exercise these
features in the DUT.
• You then simulate the DUT with these vectors and manually review the
resulting log files and waveforms to make sure the design does what you
expect.
• Once the test works correctly, you check off the test in the verification plan
and move to the next.
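
As a sketch, a directed test hard-codes a handful of stimulus vectors; the signal names below (wr_en, addr, wdata) are hypothetical.

```systemverilog
// Sketch of a directed test: hand-written vectors exercise one feature,
// and the results are reviewed manually in log files and waveforms.
// wr_en, addr, and wdata are illustrative names, not from a real DUT.
module directed_test;
  logic clk = 0, wr_en;
  logic [7:0] addr, wdata;
  always #5 clk = ~clk;

  initial begin
    @(posedge clk); wr_en <= 1; addr <= 8'h10; wdata <= 8'hAA;
    @(posedge clk); wr_en <= 0;
    @(posedge clk); wr_en <= 1; addr <= 8'h11; wdata <= 8'h55;
    @(posedge clk); wr_en <= 0;
    $finish;
  end
endmodule
```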
• What if you do not have the necessary time or resources to carry out
the directed testing approach?
• While you may always be making forward progress with directed tests, the slope of coverage over time remains the same.
• When the design complexity doubles, it takes twice as long to
complete or requires twice as many people.
• Neither of these situations is desirable.
• You need a methodology that finds bugs faster in order to reach the
goal of 100% coverage.
Methodology Basics
• Constrained-random stimulus
• Functional coverage
• Layered testbench using transactors
• Common testbench for all tests
• Test-specific code kept separate from testbench
• Random stimulus is crucial for exercising complex designs.
• A directed test finds the bugs you expect to be in the design, while a
random test can find bugs you never anticipated.
• When using random stimulus, you need functional coverage to
measure verification progress.
• Furthermore, once you start using automatically generated stimulus,
you need an automated way to predict the results, generally a
scoreboard or reference model.
• Building the testbench infrastructure, including self-prediction, takes
a significant amount of work.
• A layered testbench helps you control the complexity by breaking the
problem into manageable pieces. Transactors provide a useful pattern
for building these pieces.
• With appropriate planning, you can build a testbench infrastructure
that can be shared by all tests and does not have to be continually
modified.
• Code specific to a single test must be kept separate from the
testbench so it does not complicate the infrastructure.
Constrained Random Stimulus
• While you want the simulator to generate the stimulus, you don’t
want totally random values.
• You use the SystemVerilog language to describe the format of the
stimulus (“address is 32-bits, opcode is X, Y, or Z, length < 32 bytes”),
and the simulator picks values that meet the constraints.
• Constraining the random values so they form relevant stimuli is covered later. These values are sent into the design, and also into a high-level model that predicts what the result should be.
• The design’s actual output is compared with the predicted output.
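
A sketch of how the constraints quoted above (32-bit address, opcode X/Y/Z, length < 32 bytes) might be written as a SystemVerilog class; the class and enum names are illustrative.

```systemverilog
// Sketch of the quoted constraints as a random transaction class.
// Transaction and opcode_e are illustrative names.
typedef enum {X_OP, Y_OP, Z_OP} opcode_e;

class Transaction;
  rand bit [31:0] addr;               // "address is 32 bits"
  rand opcode_e   opcode;             // "opcode is X, Y, or Z"
  rand bit [5:0]  length;
  constraint c_len { length < 32; }   // "length < 32 bytes"
endclass

module rand_test;
  initial begin
    Transaction tr = new();
    repeat (5) begin
      if (!tr.randomize()) $error("randomize failed");
      $display("addr=%h opcode=%s len=%0d",
               tr.addr, tr.opcode.name(), tr.length);
    end
  end
endmodule
```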
• Figure 1-4 shows the coverage for constrained-random tests over the total design space.
• First, notice that a random test often covers a wider space than a directed one. This extra coverage may overlap other tests, or may explore new areas that you did not anticipate.
• If these new areas find a bug, you are in luck! If a new area is not legal, you need to write more constraints to keep away from it.
• Lastly, you may still have to write a few directed tests to find cases not covered by any other constrained-random tests.
Randomization
• When you think of randomizing the stimulus to a design, the first thing that
you might think of is the data fields.
• This stimulus is the easiest to create – just call $random.
• The problem is that this gives a very low payback in terms of bugs found.
• The primary types of bugs found with random data are data path errors,
perhaps with bit-level mistakes.
• You need to find bugs in the control logic.
• You need to think broadly about all design input, such as the following.
• Device configuration
• Environment configuration
• Input data
• Protocol exceptions
• Delays
• Errors and violations
Device and environment configuration
• What is the most common reason why bugs are missed during testing
of the RTL design? Not enough different configurations are tried.
Most tests just use the design as it comes out of reset, or apply a
fixed set of initialization vectors to put it into a known state.
• In the real world, your device operates in an environment
containing other components. When you are verifying the DUT, it is
connected to a testbench that mimics this environment.
• You should randomize the entire environment configuration,
including the length of the simulation, number of devices, and how
they are configured.
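
A sketch of a randomized environment configuration class; the field names and ranges shown are hypothetical.

```systemverilog
// Sketch of a randomized environment configuration; the field names
// and ranges are hypothetical.
class Config;
  rand int unsigned num_devices;      // how many devices to model
  rand int unsigned run_cycles;       // length of the simulation
  constraint c_sane {
    num_devices inside {[1:8]};
    run_cycles  inside {[1000:100000]};
  }
endclass

// Typical use: randomize once at time zero, then build the environment
// around the chosen values:
//   Config cfg = new();
//   if (!cfg.randomize()) $fatal(1, "config randomize failed");
```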
Input data
• When you read about random stimulus, you probably thought of
taking a transaction such as a bus write or ATM cell and filling the
data fields with random values.
• You need to anticipate any layered protocols and error injection, plus
scoreboarding and functional coverage.
Protocol exceptions, errors, and violations
• There are few things more frustrating than when a device such as a
PC or cell phone locks up. Many times, the only cure is to shut it
down and restart. Chances are that deep inside the product there is a
piece of logic that experienced some sort of error condition and could
not recover, and thus stopped the device from working correctly.
• If something can go wrong in the real hardware, you should try to
simulate it.
• Look at all the errors that can occur. What happens if a bus transaction does not complete? What if an invalid operation is encountered? Does the design specification state that two signals are mutually exclusive?
• Add checker code to look for these violations. Your code should at
least print a warning message when this occurs, and preferably
generate an error and wind down the test.
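
For example, a checker for two mutually exclusive signals can be a small concurrent assertion; the signal names grant_a and grant_b are hypothetical.

```systemverilog
// Sketch of a checker for two signals the specification says are
// mutually exclusive; grant_a and grant_b are hypothetical names.
module mutex_checker (input logic clk, grant_a, grant_b);
  property p_mutex;
    @(posedge clk) !(grant_a && grant_b);
  endproperty
  assert property (p_mutex)
    else $error("Violation: grant_a and grant_b active together");
endmodule
```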
Delays and synchronization
• How fast should your testbench send in stimulus? Always use
constrained-random delays to help catch protocol bugs.
• A test that uses the shortest delays runs the fastest, but it won’t
create all possible stimulus. You can create a testbench that talks to
another block at the fastest rate, but subtle bugs are often revealed
when intermittent delays are introduced.
• A block may function correctly for all possible permutations of
stimulus from a single interface, but subtle errors may occur when
data is flowing into multiple inputs.
• Try to coordinate the various drivers so they can communicate at
different relative timing. What if the inputs arrive at the fastest
possible rate, but the output is being throttled back to a slower rate?
What if stimulus arrives at multiple inputs concurrently? What if it is
staggered with different delays?
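
A sketch of a stimulus loop with a constrained-random gap between items; the 0-5 cycle range is arbitrary.

```systemverilog
// Sketch: insert a constrained-random gap (0-5 cycles, an arbitrary
// range) between items instead of driving back-to-back stimulus.
module delay_demo;
  logic clk = 0;
  always #5 clk = ~clk;

  initial begin
    repeat (10) begin
      int gap = $urandom_range(0, 5);  // 0 = fastest rate; >0 = throttled
      repeat (gap) @(posedge clk);
      @(posedge clk);                  // placeholder for sending one item
      $display("@%0t sent item after %0d idle cycles", $time, gap);
    end
    $finish;
  end
endmodule
```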
Parallel random testing
• How should you run the tests?
• A directed test has a testbench that produces a unique set of
stimulus and response vectors.
• To change the stimulus, you need to change the test.
• A random test consists of the testbench code plus a random seed.
• If you run the same test 50 times, each with a unique seed, you will
get 50 different sets of stimuli.
• Running with multiple seeds broadens the coverage of your test.
• Each job creates a set of output files such as log files and functional
coverage data.
• You can run each job in a different directory, or you can try to give a
unique name to each file.
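
One possible convention, sketched below, is to pass the seed as a plusarg (the name +seed= is arbitrary) and print it so any failing run can be reproduced.

```systemverilog
// Sketch: read a seed from the command line (+seed= is an arbitrary
// plusarg name) and print it so any failing run can be reproduced.
module seed_ctrl;
  int seed;
  initial begin
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                        // default when no seed is given
    $display("Running with seed %0d", seed);
    process::self().srandom(seed);     // seed this thread's RNG
  end
endmodule
```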
Functional Coverage
• The process of measuring and using functional coverage consists of
several steps.
• First, you add code to the testbench to monitor the stimulus going
into the device, and its reaction and response, to determine what
functionality has been exercised.
• Next, the data from one or more simulations is combined into a
report.
• Lastly, you need to analyze the results and determine how to create
new stimulus to reach untested conditions and logic.
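
A sketch of a covergroup that performs this monitoring; the enum and field widths mirror the hypothetical transaction sketched earlier, redeclared here so the sketch stands alone.

```systemverilog
// Sketch of a covergroup monitoring stimulus; the opcode_e enum and
// field widths mirror the hypothetical transaction sketched earlier.
typedef enum {X_OP, Y_OP, Z_OP} opcode_e;

class Coverage;
  opcode_e  opcode;
  bit [5:0] length;

  covergroup cg;
    coverpoint opcode;                         // one bin per opcode
    coverpoint length { bins small = {[0:7]};
                        bins large = {[8:31]}; }
  endgroup

  function new();
    cg = new();                                // build the covergroup
  endfunction

  function void sample(opcode_e op, bit [5:0] len);
    opcode = op;  length = len;
    cg.sample();                               // record this observation
  endfunction
endclass
```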
Feedback from functional coverage to stimulus
• A random test evolves using feedback. The initial test can be run with
many different seeds, creating many unique input sequences.
• Eventually the test, even with a new seed, is unlikely to generate stimulus that reaches new areas of the design space.
• As the functional coverage asymptotically approaches its limit, you
need to change the test to find new approaches to reach uncovered
areas of the design. This is known as “coverage-driven verification.”
• It aims to generate tests that cover as many functional areas of the design as possible. If certain areas remain uncovered after running the random test with different seeds, the testbench adapts by focusing on those areas.
Test bench Components
• In simulation, the testbench wraps around the DUT, just as a
hardware tester connects to a physical chip.
• Both the testbench and tester provide stimulus and capture
responses.
• The difference between them is that your testbench needs to work
over a wide range of levels of abstraction, creating transactions and
sequences, which are eventually transformed into bit vectors.
• A tester just works at the bit level.
Layered Test bench
• A key concept for any modern verification methodology is the layered
testbench.
• While this process may seem to make the testbench more complex, it
actually helps to make your task easier by dividing the code into
smaller pieces that can be developed separately.
• Don’t try to write a single routine that can randomly generate all
types of stimulus, both legal and illegal, plus inject errors with a multi-
layer protocol.
Types
• Flat test bench
• The signal and command layers
• The functional layer
• The scenario layer
• The test layer and functional coverage
Data Types
• SystemVerilog offers many improved data structures compared with Verilog.
• Some of these were created for designers but are also useful for testbenches.
• The types covered here are:
• Built-in data types
• Fixed-size arrays
• Packed arrays
• Dynamic arrays
• Queues
• Associative arrays
• Array methods
Built in Data Types
• The logic type: a 4-state type (0, 1, X, Z) that can be used almost anywhere a Verilog reg or wire was used, provided the signal has a single driver.
• Two-state types: SystemVerilog introduces several two-state data types to improve simulator performance and reduce memory usage compared with four-state types.
• The simplest type is bit, which is always unsigned. There are four signed two-state types: byte, shortint, int, and longint.
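
A short sketch declaring each of these types:

```systemverilog
// Sketch: declaring the built-in types.
module types_demo;
  logic [7:0] bus;       // 4-state (0,1,X,Z)
  bit   [7:0] flags;     // 2-state, unsigned
  byte        b;         // 8-bit  signed 2-state
  shortint    s;         // 16-bit signed 2-state
  int         i;         // 32-bit signed 2-state
  longint     l;         // 64-bit signed 2-state

  initial begin
    $display("bus = %b", bus);   // 4-state variables start at X
    $display("i   = %0d", i);    // 2-state variables start at 0
  end
endmodule
```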
Fixed-Size Arrays
• Declaring and initializing fixed-size arrays:
• The array literal:
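
A sketch of the declarations and literals described above:

```systemverilog
// Sketch: declaring and initializing fixed-size arrays.
module fixed_demo;
  int lo_hi[0:15];               // explicit range: 16 ints
  int c_style[16];               // shorthand for [0:15]
  int init[4] = '{0, 1, 2, 3};   // array literal
  int rep[4]  = '{4{8}};         // replication: four copies of 8
  int dflt[4] = '{default: -1};  // one default for every element

  initial foreach (init[i]) $display("init[%0d] = %0d", i, init[i]);
endmodule
```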
Basic Array Operations
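A sketch of common operations (copy, compare, loop) on fixed-size arrays:

```systemverilog
// Sketch: basic operations on fixed-size arrays.
module array_ops;
  int src[5] = '{1, 2, 3, 4, 5};
  int dst[5];

  initial begin
    dst = src;                           // aggregate copy
    if (dst == src) $display("equal");   // aggregate compare
    foreach (src[i])                     // foreach walks every index
      $display("src[%0d] = %0d", i, src[i]);
    for (int i = 0; i < $size(src); i++) // $size gives the element count
      dst[i] = src[i] * 2;
  end
endmodule
```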
Dynamic Arrays
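Dynamic arrays are declared with empty brackets and sized at run time with new[]. A minimal sketch:

```systemverilog
// Sketch: dynamic arrays are sized at run time with new[].
module dyn_demo;
  int dyn[];                     // empty until new[] is called
  initial begin
    dyn = new[5];                // allocate 5 elements, all zero
    foreach (dyn[i]) dyn[i] = i;
    dyn = new[10](dyn);          // grow to 10, copying the old values
    $display("size = %0d", dyn.size());
    dyn.delete();                // free the whole array
  end
endmodule
```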
Queues
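Queues combine the best of arrays and linked lists: you can add or remove elements at either end in constant time. A minimal sketch:

```systemverilog
// Sketch: queues allow constant-time insertion/removal at either end.
module queue_demo;
  int q[$] = {1, 2, 3};          // queue literal (no apostrophe)
  initial begin
    q.push_back(4);              // {1, 2, 3, 4}
    q.push_front(0);             // {0, 1, 2, 3, 4}
    q.insert(2, 99);             // {0, 1, 99, 2, 3, 4}
    $display("front=%0d back=%0d size=%0d",
             q.pop_front(), q.pop_back(), q.size());
  end
endmodule
```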
Associative Arrays
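Associative arrays store entries sparsely and can be indexed by any type, such as a string. A minimal sketch:

```systemverilog
// Sketch: associative arrays hold sparse data, indexed here by string.
module assoc_demo;
  int scores[string];
  initial begin
    scores["alice"] = 90;
    scores["bob"]   = 75;
    if (scores.exists("alice"))        // test before reading
      $display("alice: %0d", scores["alice"]);
    foreach (scores[name])             // iterate existing keys only
      $display("%s -> %0d", name, scores[name]);
    scores.delete("bob");              // remove a single entry
  end
endmodule
```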
Array Methods
• Array reduction methods:
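
Reduction methods combine all elements of an array into a single value. A minimal sketch:

```systemverilog
// Sketch: reduction methods combine all elements into one value.
module reduce_demo;
  byte b[4] = '{1, 2, 3, 4};
  initial begin
    $display("sum     = %0d", b.sum());      // 1+2+3+4 = 10
    $display("product = %0d", b.product());  // 24
    $display("xor     = %0d", b.xor());      // 1^2^3^4 = 4
    // max() is a locator method; it returns a one-element queue: '{4}
    $display("max     = %p",  b.max());
  end
endmodule
```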
