Verification
Anoushka Tripathi
Verification can feel daunting, but think of it like building a house. Would you start picking out
curtains and furniture before understanding how the house will be used? Of course not. You’d
begin by asking critical questions: How many bedrooms are needed? Will the kitchen be a
central hub, or is a home theater more important? Is the budget for a mansion or a modest
home? The answers guide the structure of the house.
Verification works the same way. Before diving into SystemVerilog or its technical details, step
back and think about what you're trying to achieve with your testbench.
Foundations of Verification:
• Purpose: Ensure the design matches its specification and functions as intended.
• Approach: Start with a plan, just like an architect starts with blueprints.
The design process is creative, but it can introduce discrepancies (bugs). A designer might
translate a specification into RTL (Register-Transfer Level) code with assumptions or
interpretations. Your job as a verification engineer is to independently interpret the same
specifications and test the design to ensure accuracy.
Contrary to popular belief, finding bugs is good. Each bug you catch before production saves
money, time, and reputation, so treat bugs as opportunities. Testing usually starts small and
widens in scope:
• Block-level tests are straightforward but crucial for catching basic errors.
• System-level tests mimic real-world scenarios and uncover complex timing and integration
issues.
As the scope increases, simulations slow down, but they become more realistic and valuable.
SystemVerilog elevates your ability to test designs. It isn’t just an HDL (Hardware Description
Language); it is also a Hardware Verification Language (HVL) with features built for testbenches,
such as constrained-random stimulus, functional coverage, object-oriented programming, and
assertions. With these tools, you can create testbenches that are smarter and more efficient:
• Incorporate techniques like directed tests, random testing, assertions, and error
injection.
Example: For a network router, include tests for packet routing under normal, stressed, and
erroneous conditions.
The Verification Methodology Manual (VMM)
The Verification Methodology Manual (VMM) is like a toolbox for verifying designs effectively. It
provides:
• Base Classes: Ready-made templates for handling data, test environments, and
utilities.
• Log Management and Communication Tools: Help organize logs and manage how
different parts of the testbench interact.
However, VMM might be overkill for small projects. In such cases, a simpler, custom approach
suffices, but keep the bigger picture in mind: your block will integrate into larger systems.
This book introduces you to SystemVerilog concepts and how you can build the components of
VMM yourself. Learning these techniques gives you a deeper understanding of the logic behind
verification.
A testbench is like a detective investigating whether a design works correctly. Here’s how it
works:
1. Generate Stimulus: Create the input signals that will exercise the design.
2. Apply Stimulus: Send these signals to the Design Under Test (DUT).
3. Capture Response: Collect the outputs the DUT produces.
4. Check for Correctness: Compare the response with the expected behavior.
The way you design your testbench determines how these steps are automated or handled
manually.
The traditional directed-testing approach works through the features one at a time:
1. Create a Verification Plan: Write a list of tests to check different features of the DUT.
2. Write Test Vectors: Manually create input scenarios for specific features.
3. Run Simulations: Simulate the DUT with these inputs and observe the behavior.
4. Check Results: Review log files or waveforms to ensure everything works as expected.
Once a feature is verified, you move to the next test. It’s a straightforward and incremental
method, popular because it shows quick progress.
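As a sketch, a directed test for a hypothetical 4-bit adder (the module name adder and its ports
are assumptions) simply hard-codes the vectors called out in the verification plan and checks
each one:

    // Directed-test sketch: hand-written vectors for a hypothetical 4-bit adder.
    module directed_tb;
      logic [3:0] a, b;
      logic [4:0] sum;

      adder dut (.a(a), .b(b), .sum(sum));  // assumed combinational adder

      task run_vector(input logic [3:0] ta, tb, input logic [4:0] expected);
        a = ta;  b = tb;  #1;               // apply inputs and let them settle
        if (sum !== expected)
          $error("a=%0d b=%0d: got %0d, expected %0d", ta, tb, sum, expected);
      endtask

      initial begin
        // Each call targets one feature from the verification plan
        run_vector(4'd0,  4'd0,  5'd0);     // zero inputs
        run_vector(4'd15, 4'd1,  5'd16);    // carry out of the MSB
        run_vector(4'd15, 4'd15, 5'd30);    // maximum operands
        $display("Directed tests complete");
        $finish;
      end
    endmodule

Every new feature means another hand-written vector, which is exactly the effort problem
described next.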
Directed testing is great for small designs, but as designs grow in complexity, it can become a
bottleneck:
• High Effort: Twice the complexity means twice the time or double the team size.
This figure shows how directed tests incrementally cover the features in the verification
plan. Each test targets a very specific set of design elements. Given enough time, you could
write all the tests needed for 100% coverage of the entire verification plan. But what if you do
not have the time or resources to carry out the directed testing approach? While you may
always be making forward progress, the slope remains the same: when the design complexity
doubles, it takes twice as long to complete or requires twice as many people to implement.
Neither situation is desirable. You need a methodology that finds bugs faster in order to reach
the goal of 100% coverage.
Directed tests provide steady progress, but they don’t scale well for large or complex designs.
This is where advanced methodologies, like constrained-random testing, come into play to
cover more ground faster.
Directed verification has limitations that can affect its effectiveness in identifying all potential
design bugs:
• Limited test coverage: Directed verification is based on pre-defined test cases that are
designed to test specific functionalities or scenarios. This means that it may not cover
all possible scenarios, which can lead to undetected bugs.
• Limited scalability: Directed verification may not scale well for larger and more
complex designs, as it can become increasingly difficult to create enough test cases to
cover all possible scenarios.
• Time-consuming test case creation: Creating directed test cases can be time-
consuming, as each test case must be carefully designed to exercise a specific
functionality or scenario.
Modern testbenches follow these principles to improve efficiency and bug detection:
1. Constrained-Random Stimulus:
Instead of manually writing inputs (like in directed tests), random values are generated.
These values are controlled with constraints to ensure they’re meaningful.
2. Functional Coverage:
Measures how much of the design has been tested. This tells you if your random tests
are thorough or if there are gaps.
• Directed Tests: Test specific features you know about, but may miss unexpected bugs.
• With random stimulus, you need automated checks (like a scoreboard) to predict the
correct outputs, since manually verifying random outputs isn’t practical (see the sketch
below).
Example: A layered testbench helps manage this complexity by breaking the testbench into
small, focused components.
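As a rough sketch of the automated checking mentioned above (the transaction fields and the
reference model are illustrative assumptions, not a prescribed structure):

    // Scoreboard sketch: predict the expected result, then compare against the DUT.
    class packet;
      rand bit [7:0] addr;
      rand bit [7:0] data;
    endclass

    class scoreboard;
      bit [7:0] expected[$];                  // queue of predicted results

      // Reference model: predict what the DUT should produce for this input
      function void write_input(packet p);
        expected.push_back(p.data ^ p.addr);  // hypothetical transform the DUT performs
      endfunction

      // Compare an observed DUT output against the oldest prediction
      function void write_output(bit [7:0] observed);
        bit [7:0] exp;
        if (expected.size() == 0) begin
          $error("Unexpected output 0x%0h with nothing predicted", observed);
          return;
        end
        exp = expected.pop_front();
        if (observed !== exp)
          $error("Mismatch: got 0x%0h, expected 0x%0h", observed, exp);
      endfunction
    endclass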
• Initial Delay: Building a random testbench takes more time upfront compared to
directed tests. This may worry managers since no tests are run at the beginning.
• High Payback: Once the random testbench is ready, it finds bugs faster and more
efficiently than writing individual directed tests for every scenario.
Result: A single random testbench can cover a lot more scenarios than multiple directed tests
with less effort over time.
• What It Does: Generates inputs that are random, but within specific rules.
• Why Constrain It?: Totally random values might be invalid (e.g., an address that's out of
range). Constraints ensure the inputs make sense.
Example: Consider a 4-bit adder that adds two inputs, A and B, and produces a 5-bit output, C.
We want to use constrained-random verification (CRV) to generate a set of test cases that cover
a wide range of scenarios and satisfy the following constraints:
• The input values A and B should be within the range of 0 to 15 (4-bit numbers).
• The output value C should be within the range of 0 to 31 (5-bit numbers).
• The adder should operate correctly for both signed and unsigned inputs.
• The adder should operate correctly for all possible combinations of A and B.
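A minimal constrained-random sketch for this scenario might look like the following. The class
and signal names are assumptions, and the explicit range constraint is redundant for 4-bit
fields but shows the style:

    // Constrained-random stimulus sketch for the 4-bit adder scenario above.
    class adder_txn;
      rand bit [3:0] a, b;        // 4-bit operands, inherently 0..15
      rand bit       use_signed;  // randomly exercise signed and unsigned modes

      constraint legal_range { a inside {[0:15]}; b inside {[0:15]}; }
    endclass

    module adder_crv_tb;
      initial begin
        adder_txn t = new();
        repeat (100) begin
          if (!t.randomize()) $fatal(1, "randomize() failed");
          // Drive a, b (and use_signed) into the DUT here, then check that
          // the 5-bit result C equals A + B and stays within 0..31.
        end
      end
    endmodule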
This figure illustrates the difference in coverage over time between constrained-random
tests and directed tests:
• Random Tests: Initially cover a broad area of the design space, sometimes hitting
unexpected or unintended areas. This can be beneficial if it uncovers a bug that a
directed test might miss. However, these tests can also overlap with each other or
explore areas that aren't relevant, so additional constraints may be needed to avoid
illegal cases.
• Directed Tests: These are more focused and specifically designed to hit certain edge
cases that the random tests might miss. Directed tests are often written to target
particular scenarios in the design that are less likely to be covered by random testing
alone.
Over time, random tests cover the majority of the design, but directed tests are
necessary to fill the gaps, ensuring full coverage.
Coverage Convergence
This figure shows a typical process of how coverage improves as you run constrained-
random tests with different seeds and make modifications to improve the tests:
1. Run Random Tests with Many Seeds: Each seed produces different stimulus, so repeated
runs steadily increase functional coverage.
2. Identify Holes in Coverage: After running the random tests, functional coverage reports
will show which areas of the design were not sufficiently covered. These areas are
represented by question marks in the figure (i.e., "New area?" or "Test overlap?").
3. Directed Test Cases: For the remaining uncovered areas, write directed tests that
specifically target those gaps.
4. Add Constraints or Minimal Code Changes: If the random tests are hitting illegal
areas, add constraints to prevent the generation of illegal stimuli. In some cases, you
may also need to make minimal changes to the testbench or the design under test (DUT)
to properly explore these uncovered areas.
5. Converge Towards Complete Coverage: Over time, as you refine the random tests, add
directed tests, and improve the constraints, you converge towards full functional
coverage of the design.
Key Takeaways:
• Constrained-Random Testing provides broad coverage but may leave some gaps or
overlap certain areas.
• Directed Tests are written to target areas that constrained-random tests are unlikely to
hit.
• Coverage Reports help identify these gaps, and through iterations (e.g., adding
constraints, writing directed tests), you can achieve complete coverage.
• Random tests cover more scenarios and may find unexpected bugs.
• Run random tests with different seeds (starting points for randomness).
Example: Start with broad random tests. If you find a gap (e.g., a feature wasn’t tested), add
constraints to focus on that feature.
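As a sketch (the transaction fields, bins, and derived class below are illustrative assumptions),
measuring functional coverage and then steering stimulus toward a reported hole might look
like this:

    // Sketch: measure functional coverage, then target a hole the report exposes.
    class bus_txn;
      rand bit [7:0] addr;
      rand bit       is_write;

      covergroup cg;
        coverpoint addr { bins low  = {[0:63]};
                          bins mid  = {[64:191]};
                          bins high = {[192:255]}; }
        coverpoint is_write;
        cross addr, is_write;   // the report shows which combinations were hit
      endgroup

      function new();
        cg = new();
      endfunction
    endclass

    // If the report shows, say, that high addresses were never written, an added
    // constraint (here in a derived class) focuses new seeds on that hole:
    class high_addr_txn extends bus_txn;
      constraint fill_hole { addr inside {[192:255]}; is_write == 1; }
    endclass

Calling cg.sample() after each randomize() records what was generated, and the resulting
coverage report is what identifies the gaps.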
When randomizing the inputs for testing a design, it’s not just about randomly generating data.
To uncover deeper bugs, particularly in control logic, you need to think more broadly and
include the following aspects:
1. Device Configuration:
o Configure the device in as many legal ways as possible, mimicking how it might
operate in real-world scenarios.
o Example: For a multiplexer with 2000 input channels and 12 output channels,
test different ways of connecting and configuring these channels (not just a
few fixed setups).
2. Environment Configuration:
o Randomize the simulation environment, like:
▪ Number of devices connected.
▪ Their configurations (e.g., master/slave roles).
▪ Length of the simulation or timing delays.
o Example: For a chip connecting multiple PCI buses, randomly choose the
number of buses (1–4), devices per bus (1–8), and parameters (like memory
addresses). Use functional coverage to track which configurations are tested
(see the sketch after this list).
3. Input Data:
o Randomly fill transaction fields, such as bus write operations or packet
headers, while ensuring they conform to constraints (e.g., valid address
ranges).
o Include layered protocols, error scenarios, and data verification using tools like
scoreboards.
4. Protocol Exceptions, Errors, and Violations:
o Test edge cases where the device could fail, and confirm that errors are
detected, reported, and handled gracefully.
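A sketch of the randomized environment configuration from item 2 above (the class and field
names, and the address alignment, are assumptions):

    // Sketch: randomizing the environment configuration (buses, devices, addresses).
    class env_cfg;
      rand int unsigned num_buses;
      rand int unsigned devices_per_bus[];    // one entry per bus
      rand bit [31:0]   base_addr[];          // one base address per bus

      constraint legal {
        num_buses inside {[1:4]};
        devices_per_bus.size() == num_buses;
        base_addr.size()       == num_buses;
        foreach (devices_per_bus[i]) devices_per_bus[i] inside {[1:8]};
        foreach (base_addr[i])       base_addr[i][11:0] == 0;  // 4 KB aligned (an assumption)
      }
    endclass

    module cfg_tb;
      initial begin
        env_cfg cfg = new();
        if (!cfg.randomize()) $fatal(1, "configuration randomize() failed");
        foreach (cfg.devices_per_bus[i])
          $display("bus %0d: %0d devices at 0x%0h", i, cfg.devices_per_bus[i], cfg.base_addr[i]);
      end
    endmodule

Functional coverage on num_buses and devices_per_bus would then confirm which
configurations were actually exercised.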
• Directed tests focus on specific scenarios but may miss rare bugs. Randomization
allows for exploration of unexpected areas.
• Example: If a PC is only ever tested with its default settings, bugs that occur in
customized configurations go unnoticed. Testing real-world variations ensures robustness.
• Before Randomization:
A designer tests a few predefined configurations of a device, potentially missing bugs
in untested setups.
• After Randomization:
o Parameters for configuration (like channel mappings) are randomized within
legal constraints.
o This approach ensures that more combinations are covered without writing
extensive manual test cases.
o Functional Coverage is used to verify that all desired scenarios are tested.
When designing and testing complex systems, timing plays a critical role in finding subtle bugs
that may not appear during straightforward or fast-paced testing. Here's how delays and
synchronization are handled effectively:
• Constrained-Random Delays:
Use delays that vary randomly (within constraints) instead of fixed delays. This approach
helps uncover protocol bugs.
o Fastest Rate: Running the test at maximum speed won't explore all possible
timing scenarios.
o Intermittent Delays: Introducing delays can reveal subtle timing issues that
occur when data isn't perfectly synchronized.
o Subtle errors may occur when multiple input transactions overlap or arrive out of
sync.
o Experiment with different relative timings between input streams, from back-to-back
transactions to long idle gaps, as in the sketch below.
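A small sketch of a constrained-random gap between transactions (the distribution weights
and the driver context are assumptions):

    // Sketch: constrained-random idle gap between transactions.
    class delay_gen;
      rand int unsigned gap;      // idle cycles between transactions

      constraint gap_mix {
        gap dist { 0      :/ 40,  // back-to-back traffic about 40% of the time
                   [1:3]  :/ 40,  // short gaps
                   [4:20] :/ 20 };// occasional long pauses
      }
    endclass

    // In a driver loop (illustrative):
    //   d = new();
    //   void'(d.randomize());
    //   repeat (d.gap) @(posedge clk);
    //   drive_one_transaction();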
• Random Seeds:
Random tests rely on a random seed to generate unique stimuli. Changing the seed
changes the stimulus, enabling broader coverage without altering the testbench.
• Unique Seeds:
o If many simulations launch at the same moment (for example, on a compute
farm), seeding runs with the time of day alone can produce duplicate seeds.
o Better Approaches: Combine the time with other run-specific values so that
every job gets a distinct seed.
o Example: If two jobs start at the same time on different CPUs, adding the
process ID ensures they use unique seeds.
o Separate Directories:
Create a unique directory for each simulation run.
o Unique Filenames:
Alternatively, append the random seed to filenames to differentiate them.
o Example: A simulation with seed 1234 could output files named log_1234.txt or
coverage_1234.dat.
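A sketch of stamping the seed into a log filename. It assumes the seed is also passed to the
testbench as a plusarg such as +seed=1234; the plusarg name and module name are
assumptions:

    // Sketch: embed the run's random seed in output filenames so parallel runs don't collide.
    module seed_log;
      int unsigned seed = 0;
      int          fd;
      string       logname;

      initial begin
        void'($value$plusargs("seed=%d", seed));  // read the seed from the command line
        logname = $sformatf("log_%0d.txt", seed); // e.g. log_1234.txt
        fd = $fopen(logname, "w");
        $fdisplay(fd, "Run with seed %0d", seed);
        $fclose(fd);
      end
    endmodule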