
Understanding Verification: From Basics to Advance

Anoushka Tripathi

Verification can feel daunting, but think of it like building a house. Would you start picking out
curtains and furniture before understanding how the house will be used? Of course not. You’d
begin by asking critical questions: How many bedrooms are needed? Will the kitchen be a
central hub, or is a home theater more important? Is the budget for a mansion or a modest
home? The answers guide the structure of the house.

Verification works the same way. Before diving into SystemVerilog or its technical details, step
back and think about what you're trying to achieve with your testbench.

Foundations of Verification:

• Purpose: Ensure the design matches its specification and functions as intended.

• Approach: Start with a plan, just like an architect starts with blueprints.

1. Why Verification Matters

The design process is creative, but it can introduce discrepancies (bugs). A designer might
translate a specification into RTL (Register-Transfer Level) code with assumptions or
interpretations. Your job as a verification engineer is to independently interpret the same
specifications and test the design to ensure accuracy.

Think of it as a second opinion:

• Designers might misinterpret or miss details.

• Verification ensures the design behaves as expected in all conditions.

2. Bugs: Your Friends in Disguise

Contrary to popular belief, finding bugs is good. Each bug you catch before production saves
money, time, and reputation. Treat bugs as opportunities:

• Be devious: Push the design to its limits with unexpected scenarios.

• Track everything: Keep records of every bug, as patterns might emerge.

• Celebrate success: Finding a bug now prevents a future disaster.

3. The Verification Hierarchy

Verification happens at different levels:

Block Level (Single Module):

• Test individual components.

• Example: Does the ALU add numbers correctly?

• These tests are straightforward but crucial for catching basic errors.

Boundary Level (Between Modules):



• Look for inconsistencies when modules interact.

• Example: Do the bus driver and the receiver agree on the protocol?

• Bugs here often arise from different interpretations of the specification.

System Level (Entire DUT):

• Test how the whole design works together.

• Example: Does a smartphone handle a call while downloading a file?

• These tests mimic real-world scenarios and uncover complex timing and integration
issues.

As the scope increases, simulations slow down, but they become more realistic and valuable.

4. What Makes SystemVerilog Special?

SystemVerilog elevates your ability to test designs. It isn’t just an HDL (Hardware Description
Language); it’s a Hardware Verification Language (HVL) with unique features:

• Constrained-random stimulus generation: Automate generating test scenarios.

• Functional coverage: Measure what’s been tested and identify gaps.

• Object-oriented programming (OOP): Organize and reuse code efficiently.

• Multithreading and IPC: Simulate concurrent operations for real-world accuracy.

With these tools, you can create testbenches that are smarter and more efficient.
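
As a small taste of these features, here is a hedged sketch of a transaction class that combines constrained-random stimulus, functional coverage, and OOP in a few lines. The class and field names (bus_txn, addr, kind) are purely illustrative, not taken from any particular design:

// Illustrative only: a transaction class mixing OOP, constrained-random
// stimulus, and functional coverage in one place.
class bus_txn;
  typedef enum {READ, WRITE} kind_e;

  rand kind_e     kind;           // randomized operation type
  rand bit [31:0] addr;           // randomized address
  rand bit [31:0] data;

  // Constraint: keep addresses inside a (hypothetical) legal window
  constraint c_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }

  // Functional coverage: which operations and address regions were exercised?
  covergroup cg;
    coverpoint kind;
    coverpoint addr[15:12];       // coarse address bins within the legal window
  endgroup

  function new();
    cg = new();                   // embedded covergroups are built in the constructor
  endfunction
endclass

Multithreading and inter-process communication show up later, when generators, drivers, and monitors run as parallel processes connected by mailboxes and events.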

5. Planning Your Verification Strategy

Your verification plan is the cornerstone of your efforts. It should:

• Tie closely to the hardware specification.

• List all features to test and how to test them.

• Incorporate techniques like directed tests, random testing, assertions, and error
injection.

Example: For a network router, include tests for packet routing under normal, stressed, and
erroneous conditions.

The Evolution of UVM

• Reuse Methodology (2002) - eRM: An early methodology focused on reusable verification components.

• Reuse Verification Methodology (2003) - RVM: Focused on reusability within verification processes.

• Advanced Verification Methodology (2004) - AVM [Mentor Graphics]: Introduced more advanced verification techniques.

• Verification Methodology Manual (2004) - VMM [Synopsys and ARM]: Provided comprehensive guidelines for verification.

• Universal Reuse Methodology (2006) - URM: Expanded on eRM, emphasizing universal applicability.

• Open Verification Methodology (2008) - OVM [Mentor Graphics & Cadence]: A significant step towards standardization, combining efforts from major EDA tool providers.

• Universal Verification Methodology (2011) - UVM: The culmination of these methodologies, combining OVM and VMM, and maintained by Accellera, a consortium of tool companies including Mentor Graphics, Synopsys, and Cadence.

6. Why Use Methodologies Like VMM?

The Verification Methodology Manual (VMM) provides a structured approach to verification, especially for complex designs. It:

• Builds on best practices from industry experts.

• Ensures reusable, modular, and scalable testbenches.

However, VMM might be overkill for small projects. In such cases, a simpler, custom approach
suffices, but keep in mind the bigger picture—your block will integrate into larger systems.

What is VMM and Why Is It Useful?

The Verification Methodology Manual (VMM) is like a toolbox for verifying designs effectively. It
provides:

• Base Classes: Ready-made templates for handling data, test environments, and
utilities.

• Log Management and Communication Tools: Helps in organizing logs and managing
how different parts of the testbench interact.

This book introduces you to SystemVerilog concepts and how you can build the components of
VMM yourself. Learning these techniques gives you a deeper understanding of the logic behind
verification.

What Does a Testbench Do?

A testbench is like a detective investigating whether a design works correctly. Here’s how it
works:

1. Generate Stimulus: Create test signals to mimic real-world scenarios.



2. Apply Stimulus: Send these signals to the Design Under Test (DUT).

3. Capture the Response: Record how the DUT reacts.

4. Check for Correctness: Compare the response with the expected behavior.

5. Track Verification Goals: Measure how much of the testing is complete.

The way you design your testbench determines how these steps are automated or handled
manually.
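
To make these steps concrete, here is a minimal sketch of a self-checking testbench. It assumes a hypothetical DUT called simple_dut with a clock, a valid strobe, and 8-bit din/dout ports that (for illustration only) inverts its input:

// Minimal self-checking testbench sketch (DUT and signal names are hypothetical)
module tb;
  logic       clk = 0;
  logic       valid;
  logic [7:0] din, dout;

  always #5 clk = ~clk;                       // free-running clock

  simple_dut dut (.clk(clk), .valid(valid), .din(din), .dout(dout)); // hypothetical DUT (not shown)

  initial begin
    repeat (10) begin
      din   = $urandom_range(0, 255);         // 1. generate stimulus
      valid = 1'b1;
      @(posedge clk);                         // 2. apply stimulus
      @(posedge clk);                         // 3. capture the response
      if (dout !== ~din)                      // 4. check correctness (assumes the DUT inverts din)
        $error("Mismatch: din=%0h dout=%0h", din, dout);
    end
    $display("Test complete");                // 5. report progress toward verification goals
    $finish;
  end
endmodule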

What is Directed Testing?

Directed Testing is a traditional approach to verify designs, step by step:

1. Create a Verification Plan: Write a list of tests to check different features of the DUT.

2. Write Test Vectors: Manually create input scenarios for specific features.

3. Run Simulations: Simulate the DUT with these inputs and observe the behavior.

4. Check Results: Review log files or waveforms to ensure everything works as expected.

Once a feature is verified, you move to the next test. It’s a straightforward and incremental
method, popular because it shows quick progress.
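
For instance, a directed test for a single feature often boils down to a handful of hand-written vectors with known expected results. The sketch below assumes a hypothetical 8-bit ALU with alu_a, alu_b, and alu_y signals; the actual DUT hookup is omitted:

// Directed test sketch for one planned feature: 8-bit ALU addition
// (the alu_* signals and the DUT instance are hypothetical)
module directed_add_test;
  logic       clk = 0;
  logic [7:0] alu_a, alu_b, alu_y;

  always #5 clk = ~clk;
  // alu dut (.a(alu_a), .b(alu_b), .y(alu_y));   // hypothetical DUT hookup

  task check_add(input [7:0] a, b, expected);
    alu_a = a; alu_b = b;
    @(posedge clk);
    if (alu_y !== expected)
      $error("ADD failed: %0d + %0d gave %0d, expected %0d", a, b, alu_y, expected);
  endtask

  initial begin
    // Each hand-written vector targets one case from the verification plan
    check_add(8'd0,   8'd0, 8'd0);     // zero plus zero
    check_add(8'd1,   8'd1, 8'd2);     // smallest non-zero operands
    check_add(8'd255, 8'd1, 8'd0);     // wrap-around case
    $finish;
  end
endmodule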

Challenges with Directed Testing

Directed testing is great for small designs, but as designs grow in complexity, it can become a
bottleneck:

• Slow Progress: Each feature requires a new, detailed test.



• High Effort: Twice the complexity means twice the time or double the team size.

This figure shows how directed tests incrementally cover the features in the verification plan. Each test targets a very specific set of design elements. Given enough time, you could write all the tests needed for 100% coverage of the entire verification plan. But what if you do not have the time or resources to carry out the directed testing approach? While you may always be making forward progress, the slope stays the same: when the design complexity doubles, it takes twice as long to complete or requires twice as many people to implement. Neither situation is desirable. You need a methodology that finds bugs faster in order to reach the goal of 100% coverage.

Directed tests provide steady progress, but they don’t scale well for large or complex designs.
This is where advanced methodologies, like constrained-random testing, come into play to
cover more ground faster.

Directed verification has some limitations that can affect its effectiveness in identifying all
potential design bugs. Here are some of the limitations:

• Limited test coverage: Directed verification is based on pre-defined test cases that are designed to test specific functionalities or scenarios. This means that it may not cover all possible scenarios, which can lead to undetected bugs.

• Bias towards the designer's assumptions: Directed verification is based on the designer's knowledge of the design specification and intended behavior. This can lead to test cases that are biased towards the designer's assumptions, which may not always be accurate.

• Difficulty in detecting complex bugs: Directed verification may not be effective in detecting complex bugs that require multiple functionalities to interact in specific ways.

• Limited scalability: Directed verification may not scale well for larger and more complex designs, as it can become increasingly difficult to create enough test cases to cover all possible scenarios.

• Time-consuming test case creation: Creating directed test cases can be time-consuming, as each test case must be carefully designed to exercise a specific functionality or scenario.

Methodology Basics and Constrained-Random Testing

1.5 Methodology Basics

Modern testbenches follow these principles to improve efficiency and bug detection:

1. Constrained-Random Stimulus:
Instead of manually writing inputs (like in directed tests), random values are generated.
These values are controlled with constraints to ensure they’re meaningful.

2. Functional Coverage:
Measures how much of the design has been tested. This tells you if your random tests
are thorough or if there are gaps.

3. Layered Testbench with Transactors:
The testbench is divided into layers, each handling a specific part of the process, like sending data or verifying outputs. Transactors act like translators between these layers, managing data flow.

4. Common Testbench for All Tests:
One testbench infrastructure is used for all tests. Only specific parts of the test (like unique scenarios) are added separately to keep the base testbench clean and reusable.

5. Separate Test-Specific Code:
Code for individual test cases is kept separate from the core testbench to avoid unnecessary complexity.

Why Use These Principles?

• Directed Tests: Test specific features you know about, but may miss unexpected bugs.

• Random Tests: Help find unexpected bugs by exploring more scenarios.

• With random stimulus, you need automated checks (like a scoreboard) to predict the
correct outputs, since manually verifying random outputs isn’t practical.

Example: A layered testbench helps manage this complexity by breaking the testbench into
small, focused components.
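
A minimal sketch of two such layers, connected by a mailbox, might look like this (all class and signal names are illustrative):

// Sketch of two testbench layers joined by a mailbox. The generator creates
// random transactions; the driver ("transactor") turns them into pin activity.
class packet;
  rand bit [7:0] payload;
endclass

class generator;
  mailbox #(packet) gen2drv;
  function new(mailbox #(packet) mbx); gen2drv = mbx; endfunction

  task run(int n);
    packet p;
    repeat (n) begin
      p = new();
      assert(p.randomize());        // constrained-random payload
      gen2drv.put(p);               // hand the transaction to the next layer
    end
  endtask
endclass

class driver;
  mailbox #(packet) gen2drv;
  function new(mailbox #(packet) mbx); gen2drv = mbx; endfunction

  task run();
    packet p;
    forever begin
      gen2drv.get(p);               // receive from the layer above
      // drive p.payload onto the DUT interface here (omitted)
    end
  endtask
endclass

In a complete environment, the generator and driver run() tasks would be forked in parallel, and a scoreboard layer would compare the DUT's outputs against the same stream of packets.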

Challenges and Payoff:

• Initial Delay: Building a random testbench takes more time upfront compared to
directed tests. This may worry managers since no tests are run at the beginning.

• High Payback: Once the random testbench is ready, it finds bugs faster and more
efficiently than writing individual directed tests for every scenario.

Result: A single random testbench can cover a lot more scenarios than multiple directed tests
with less effort over time.

1.6 Constrained-Random Stimulus

• What It Does: Generates inputs that are random, but within specific rules.

• Why Constrain It?: Totally random values might be invalid (e.g., an address that's out of
range). Constraints ensure the inputs make sense.

• Constrained-Random Verification (CRV) is a technique for generating randomized test cases with specific constraints to ensure that the generated input stimuli meet certain design requirements.
• In CRV, a set of constraints is defined that captures the requirements of the design, such as data ranges, timing requirements, and interface protocols. The testbench then generates input stimuli that satisfy these constraints. The generated test cases can then be used to verify the design's functionality and performance.
• CRV is a popular verification technique because it can generate a large number of
randomized test cases that cover a wide range of scenarios. By using CRV, a
verification engineer can quickly identify potential design bugs that may not be found
using other verification techniques.
• One of the major advantages of CRV is its scalability. It can be used to verify designs
of any size and complexity, and can generate millions of test cases with relative ease.
Additionally, CRV allows for quick iteration and modification of test cases, which can
accelerate the verification process.
• However, CRV also has some limitations. The generated test cases may not cover all
possible scenarios, and some bugs may still go undetected. Additionally, creating
effective constraints can be challenging, especially for complex designs. Finally,
debugging failed test cases can be difficult, as the root cause of the failure may not
be immediately apparent.

Consider a 4-bit adder that adds two inputs, A and B, and produces a 5-bit output, C (the sum including the carry). We want to use CRV to generate a set of test cases that cover a wide range of scenarios and satisfy the following constraints:

• The input values A and B should be within the range of 0 to 15 (4-bit numbers).
• The output value C should be within the range of 0 to 31 (5-bit numbers).
• The adder should operate correctly for both signed and unsigned inputs.
• The adder should operate correctly for all possible combinations of A and B.
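
A hedged sketch of how this might be expressed in SystemVerilog follows. The class and module names are illustrative, the signed/unsigned mode is modelled as a simple random flag, and the DUT hookup is omitted:

// CRV sketch for the 4-bit adder (class and module names are illustrative)
class adder_txn;
  rand bit [3:0] A, B;        // 4-bit operands, so 0 to 15 by construction
  rand bit       is_signed;   // randomly pick signed or unsigned interpretation

  // Redundant for 4-bit fields, but states the specification's range explicitly
  constraint c_range { A inside {[0:15]}; B inside {[0:15]}; }
endclass

module adder_crv_test;
  initial begin
    adder_txn t = new();
    bit [4:0] expected;
    repeat (100) begin
      assert(t.randomize());
      expected = t.A + t.B;   // reference model: the 5-bit sum is always 0 to 31
      // drive t.A, t.B (and t.is_signed) into the DUT and compare its C output
      // against 'expected' here; the DUT hookup is omitted in this sketch
      $display("A=%0d B=%0d expected C=%0d", t.A, t.B, expected);
    end
    $finish;
  end
endmodule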

Constrained-Random Test Coverage

This figure illustrates the difference in coverage over time between constrained-random
tests and directed tests:

• Random Tests: Initially cover a broad area of the design space, sometimes hitting
unexpected or unintended areas. This can be beneficial if it uncovers a bug that a
directed test might miss. However, these tests can also overlap with each other or
explore areas that aren't relevant, so additional constraints may be needed to avoid
illegal cases.

• Directed Tests: These are more focused and specifically designed to hit certain edge
cases that the random tests might miss. Directed tests are often written to target
particular scenarios in the design that are less likely to be covered by random testing
alone.

Over time, random tests cover the majority of the design, but directed tests are
necessary to fill the gaps, ensuring full coverage.

Coverage Convergence

This figure shows a typical process of how coverage improves as you run constrained-
random tests with different seeds and make modifications to improve the tests:

1. Start with Constrained-Random Tests: The first step is to run constrained-random tests. These tests, when executed with different random seeds, explore various parts of the design.

2. Identify Holes in Coverage: After running the random tests, functional coverage reports
will show which areas of the design were not sufficiently covered. These areas are
represented by question marks in the figure (i.e., "New area?" or "Test overlap?").

3. Directed Test Cases: For the remaining uncovered areas, write directed tests that
specifically target those gaps.

4. Add Constraints or Minimal Code Changes: If the random tests are hitting illegal
areas, add constraints to prevent the generation of illegal stimuli. In some cases, you
may also need to make minimal changes to the testbench or the design under test (DUT)
to properly explore these uncovered areas.

5. Converge Towards Complete Coverage: Over time, as you refine the random tests, add
directed tests, and improve the constraints, you converge towards full functional
coverage of the design.

Key Takeaways:

• Constrained-Random Testing provides broad coverage but may leave some gaps or
overlap certain areas.

• Directed Tests are written to target areas that constrained-random tests are unlikely to
hit.

• Coverage Reports help identify these gaps, and through iterations (e.g., adding
constraints, writing directed tests), you can achieve complete coverage.

Coverage with Constrained-Random Testing:



• Random vs. Directed Testing:

o Random tests cover more scenarios and may find unexpected bugs.

o Directed tests focus on specific, known features.

• How to Improve Coverage:

o Run random tests with different seeds (starting points for randomness).

o Identify coverage gaps using functional coverage reports.

o Add constraints or directed tests to target missing areas.

Example: Start with broad random tests. If you find a gap (e.g., a feature wasn’t tested), add
constraints to focus on that feature.
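
As a small illustration (with made-up names), suppose the coverage report shows that long burst lengths were never generated; a derived test class can add a constraint that biases stimulus toward the hole:

// Sketch: closing a coverage hole found in a report (all names are illustrative)
class mem_txn;
  rand bit [3:0] burst_len;

  covergroup cg;
    coverpoint burst_len {
      bins short_burst = {[0:7]};
      bins long_burst  = {[8:15]};   // suppose the report shows this bin was never hit
    }
  endgroup

  function new();
    cg = new();
  endfunction
endclass

// A test aimed at the hole: a derived transaction biases stimulus toward long bursts
class mem_txn_long extends mem_txn;
  constraint c_focus { burst_len dist { [0:7] := 1, [8:15] := 4 }; }
endclass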

What Should You Randomize?

When randomizing the inputs for testing a design, it’s not just about randomly generating data.
To uncover deeper bugs, particularly in control logic, you need to think more broadly and
include the following aspects:

1. Broader Randomization Categories

1. Device Configuration:
o Configure the device in as many legal ways as possible, mimicking how it might
operate in real-world scenarios.
o Example: For a multiplexer with 2000 input channels and 12 output channels,
test different ways of connecting and configuring these channels (not just a
few fixed setups).
2. Environment Configuration:
o Randomize the simulation environment, like:
▪ Number of devices connected.
▪ Their configurations (e.g., master/slave roles).
▪ Length of the simulation or timing delays.
o Example: For a chip connecting multiple PCI buses, randomly choose the
number of buses (1–4), devices per bus (1–8), and parameters (like memory
addresses). Use functional coverage to track which configurations are tested.
3. Input Data:
o Randomly fill transaction fields, such as bus write operations or packet
headers, while ensuring they conform to constraints (e.g., valid address
ranges).
o Include layered protocols, error scenarios, and data verification using tools like
scoreboards.
4. Protocol Exceptions, Errors, and Violations:
o Test edge cases where the device could fail, such as:
▪ Transactions not completing.
▪ Invalid operations.
▪ Situations where two signals that should never be active together are driven simultaneously.
o Example: If two mutually exclusive signals are active, ensure the device either handles the error gracefully or logs the issue.
5. Delays:
o Introduce timing variations and delays to mimic real-world scenarios where inputs might not arrive perfectly synchronized.

2. Why Broad Randomization is Important

• Directed tests focus on specific scenarios but may miss rare bugs. Randomization
allows for exploration of unexpected areas.
• Example: If a PC only boots with default settings, it might miss bugs that occur in
customized configurations. Testing real-world variations ensures robustness.

3. Example: Randomizing Configurations

• Before Randomization:
A designer tests a few predefined configurations of a device, potentially missing bugs
in untested setups.
• After Randomization:
o Parameters for configuration (like channel mappings) are randomized within
legal constraints.
o This approach ensures that more combinations are covered without writing
extensive manual test cases.
o Functional Coverage is used to verify that all desired scenarios are tested.
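
Continuing the PCI example above, such an environment configuration could be captured in a small randomizable class (a sketch; the field names are illustrative):

// Sketch: randomizing the environment configuration (field names are illustrative)
class env_cfg;
  rand int unsigned num_buses;          // how many PCI buses to model
  rand int unsigned devs_per_bus[];     // number of devices on each bus

  constraint c_buses { num_buses inside {[1:4]}; }
  constraint c_devs  {
    devs_per_bus.size() == num_buses;
    foreach (devs_per_bus[i]) devs_per_bus[i] inside {[1:8]};
  }
endclass

// Usage: randomize once per test, then build the environment from the result
// env_cfg cfg = new();
// assert(cfg.randomize());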

4. Preventing Errors in Testing

• Randomization might inadvertently generate invalid configurations. Use constraints to ensure that only valid setups are tested.

• Use assertions to catch illegal conditions (e.g., two signals active together); see the sketch after this list.

• Example: If a bus transaction fails, the design should recover gracefully rather than crashing or locking up.
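
As mentioned in the list above, a concurrent assertion is a natural way to catch two mutually exclusive signals being driven together. A sketch, with hypothetical signal names:

// Sketch: catch an illegal condition close to its source (signal names are hypothetical)
module mutex_check (input logic clk, grant_a, grant_b);
  // grant_a and grant_b must never be asserted in the same cycle
  assert property (@(posedge clk) !(grant_a && grant_b))
    else $error("grant_a and grant_b active together at time %0t", $time);
endmodule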

5. Tools and Best Practices



• Scoreboards: Verify that actual outputs match expected results.


• Assertions: Catch errors close to the source to make debugging easier.
• Error Handling: Ensure the design can recover from simulated errors without manual
intervention.

Delays and Synchronization

When designing and testing complex systems, timing plays a critical role in finding subtle bugs
that may not appear during straightforward or fast-paced testing. Here's how delays and
synchronization are handled effectively:

1. Importance of Delays in Testing

• Constrained-Random Delays:
Use delays that vary randomly (within constraints) instead of fixed delays. This approach
helps uncover protocol bugs.

o Fastest Rate: Running the test at maximum speed won't explore all possible
timing scenarios.

o Intermittent Delays: Introducing delays can reveal subtle timing issues that
occur when data isn't perfectly synchronized.

• Testing with Multiple Inputs:

o Subtle errors may occur when multiple input transactions overlap or arrive out of
sync.

o Experiment with:

▪ Inputs arriving at maximum speed while outputs are slowed down.

▪ Inputs arriving simultaneously versus staggered with varying delays.

• Use Functional Coverage:
This measures how well your randomized testing explores timing combinations. It ensures that a wide range of delay scenarios is tested.
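
One common way to express such delays (a sketch, with illustrative weights) is a random gap field whose distribution favours back-to-back traffic but still injects occasional long stalls:

// Sketch: constrained-random inter-transaction delay (the weights are illustrative)
class delay_gen;
  rand int unsigned gap;   // idle cycles between transactions

  constraint c_gap {
    gap dist { 0       := 60,    // mostly back-to-back, maximum-rate traffic
               [1:5]   := 30,    // short stalls
               [6:100] := 10 };  // occasional long gaps
  }
endclass

// In a driver: wait 'gap' clock cycles before starting the next transaction
// assert(dly.randomize());
// repeat (dly.gap) @(posedge clk);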

2. Parallel Random Testing

• Random Seeds:
Random tests rely on a random seed to generate unique stimuli. Changing the seed
changes the stimulus, enabling broader coverage without altering the testbench.

o Running Multiple Tests:
If you run the same test 50 times with different seeds, you'll get 50 unique sets of stimuli, significantly broadening your coverage.

• Unique Seeds:

o Avoid duplicate seeds to ensure each test generates unique stimuli.

o Better Approaches:

▪ Combine the time of day with the processor name.

▪ For multi-CPU systems, also include the process ID.

o Example: If two jobs start at the same time on different CPUs, adding the process ID ensures they use unique seeds.
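
Exactly how the seed reaches the simulation is tool-specific, so the sketch below simply passes it as a user-defined plusarg and seeds the root process explicitly (the plusarg name +seed= is our own choice, not a standard switch):

// Sketch: accept a seed on the command line and apply it explicitly
// Run with, e.g., +seed=1234
module seed_ctrl;
  int unsigned seed;

  initial begin
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                        // fall back to a fixed default
    process::self().srandom(seed);     // seed this process's random stream
    $display("Running with seed %0d", seed);
  end
endmodule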

3. Organizing Test Outputs

• Managing Simulation Output Files:
Each test run generates files like logs and coverage data. Proper organization is essential when running multiple simulations.

o Separate Directories:
Create a unique directory for each simulation run.

o Unique Filenames:
Alternatively, append the random seed to filenames to differentiate them.

o Example: A simulation with seed 1234 could output files named log_1234.txt or
coverage_1234.dat.
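
Inside the testbench, the same seed value can be stitched into output filenames, for example (a sketch that reuses the hypothetical seed variable from the previous example):

// Sketch: tag per-run output files with the random seed
module log_files;
  int unsigned seed = 1234;                      // in practice, taken from the seed plusarg
  string       log_name;
  int          log_fd;

  initial begin
    log_name = $sformatf("log_%0d.txt", seed);   // e.g. log_1234.txt
    log_fd   = $fopen(log_name, "w");
    $fdisplay(log_fd, "Simulation started with seed %0d", seed);
    $fclose(log_fd);
  end
endmodule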
