
ADVANCED VLSI 21EC71

MODULE 3(a)

VERIFICATION GUIDELINES

Effective SystemVerilog testbenches for hardware design verification are introduced through the analogy of building a house, emphasizing the importance of planning and understanding requirements before diving into details.

1. Planning is Crucial: Just as building a house starts with understanding the owners'
needs and budget, creating a testbench begins by defining how the design will be
verified.
o Understand the goals (e.g., what features to test).
o Plan the testbench structure based on these needs.
2. Common Testbench Elements: All testbenches share a basic structure:
o Stimulus generation: Creating inputs for the design.
o Response checking: Validating the outputs against expected behavior.
3. Verification Methodologies: The chapter introduces guidelines for structuring
testbenches, borrowing concepts from methodologies like VMM, UVM, and OVM but
without relying on their base classes.
4. Finding Bugs: Verification engineers are encouraged to embrace bugs as opportunities.
o Each bug found early saves significant cost compared to discovering it later in
the design lifecycle.
o The goal is to test rigorously to uncover as many issues as possible.
5. SystemVerilog as a Hardware Verification Language (HVL): The passage highlights
features that make SystemVerilog suitable for verification:
o Constrained-random stimulus generation: To test various scenarios
systematically.
o Functional coverage: Measuring which parts of the design are tested.
o Object-oriented programming (OOP): Allowing modular, reusable code.
o Multi-threading & IPC: To manage complex interactions.
o Event simulator integration: For precise control of design under test.
6. Advantages of HVLs over HDLs:
SystemVerilog enables higher abstraction levels for testbench creation, offering better
efficiency and effectiveness compared to traditional HDLs like Verilog or VHDL.

1.1 The Verification Process

 The primary goal of hardware verification is to ensure that a design accurately
implements its specification and can perform its intended task, such as operating a
network router or radar processor. While finding bugs is a part of this process, the
ultimate aim is confirming the design's correctness and functionality according to
its specification. Identifying discrepancies, or bugs, highlights areas where the
design fails to meet the specification.
 Verification mirrors the design creation process. Designers translate specifications
into logic in a machine-readable form, such as RTL code. This process often
involves ambiguity due to unclear, incomplete, or contradictory specifications.
Verification engineers address these challenges by independently interpreting the
same specifications, creating a verification plan, and developing tests to validate
that the RTL code aligns with the intended design.


 Verification requires a deep understanding of the design's intent and careful
consideration of corner cases that designers might overlook. By performing an
independent interpretation of the specifications, verification engineers add a layer
of redundancy, ensuring that tests rigorously evaluate the design's accuracy and
completeness.

1.1.1 Testing at Different Levels

The different types of bugs that may exist in a design, and how they can be identified through
verification at various levels, are described below:

1. Block-Level Bugs: These are the simplest to find, as they exist within individual
modules created by a single designer. Examples include verifying arithmetic operations
in an ALU or ensuring data packets move correctly through a network switch. Directed
tests can easily uncover such bugs because they focus on specific, isolated functionality.
2. Integration-Level Bugs: These arise at the boundaries between blocks when multiple
designers interpret specifications differently. Issues can emerge in protocols, timing, or
signal interactions between modules, requiring the verification engineer to identify
discrepancies and potentially resolve conflicting interpretations.
3. System-Level Bugs: At the top level, the entire Design Under Test (DUT) is simulated,
often with all components operating simultaneously. This testing is comprehensive but
slower, and it uncovers more complex issues like timing and data alignment problems.
Testing real-world use cases, such as multitasking operations on a device, ensures the
design handles realistic user scenarios effectively.

Block Simulation Challenges: Simulating a single block requires creating stimuli for
interactions with surrounding blocks, which can be complex and code-intensive. While
these simulations are fast, they might also expose bugs in both the design and the
testbench. As multiple blocks are integrated, their interactions reduce the need for
external stimuli but slow down simulations and complicate debugging.

Error Injection and Recovery: Beyond verifying correct functionality, it's crucial to
test how the design handles errors like corrupted data or incomplete transactions.
Designing tests for error recovery is one of the most challenging aspects of verification.

System-Level Complexities: At higher abstraction levels, verification involves
analyzing aggregate behaviors, such as prioritizing data streams in an ATM router. The
focus shifts from individual transactions to ensuring overall system correctness and
efficiency across multiple scenarios.

Continuous Verification: Since it's impossible to prove that all bugs are eliminated,
verification must remain an ongoing, iterative process with new strategies constantly
developed to uncover potential issues.

1.1.2 The Verification Plan

A verification plan is created based on the hardware specification and outlines the features to
be tested along with the methods to be used. These methods may include directed and random
testing, assertions, hardware/software co-verification, emulation, formal proofs, and
verification IP.


1.2 The Verification Methodology Manual (VMM)

The Verification Methodology Manual (VMM) originated from work by Janick Bergeron and
colleagues at Qualis Design. Their efforts began with industry-standard practices, refined over
years of experience, initially for the OpenVera language and later extended to SystemVerilog
in 2005. VMM, alongside its predecessor, the Reference Verification Methodology (RVM),
has been instrumental in verifying complex hardware designs across diverse domains,
including networking devices and processors.

The book introduces foundational principles shared by VMM, the Universal Verification
Methodology (UVM), and the Open Verification Methodology (OVM), all of which
incorporate similar verification techniques. It is tailored for readers new to verification, object-
oriented programming (OOP), or constrained-random testing (CRT), providing a stepping
stone toward more advanced methodologies like UVM and VMM.

While UVM and VMM are suited for complex designs, such as large systems with multiple
protocols and intricate error-handling requirements, this book focuses on smaller-scale
applications. It emphasizes creating reusable, modular code that aligns with broader system
requirements. Additionally, it offers insight into the underlying construction of base classes
and utilities used in advanced methodologies, helping readers grasp the foundational concepts
needed for effective verification.

1.3 Basic Testbench Functionality

The purpose of a testbench is to determine the correctness of the DUT. This is accomplished
by the following steps, illustrated by the sketch after this list.
• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals
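
A minimal sketch of these five steps, assuming a hypothetical pass-through DUT (the module name, ports, and expected behavior below are invented purely for illustration):

// Hypothetical pass-through DUT, included only to make the sketch complete.
module dut (input logic [7:0] in, output logic [7:0] out);
  assign out = in;
endmodule

module test;
  logic [7:0] stim, response;

  dut d0 (.in(stim), .out(response));

  initial begin
    repeat (100) begin
      stim = $urandom();            // 1. Generate stimulus
      #1;                           // 2. Apply stimulus to the DUT
      if (response !== stim)        // 3 & 4. Capture the response, check correctness
        $error("expected %0h, got %0h", stim, response);
      // 5. Progress would be measured with functional coverage (Section 1.8)
    end
    $finish;
  end
endmodule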

1.4 Directed Testing

Traditionally, design verification relied on directed testing, where specific tests are written to
target individual features based on the hardware specification and a verification plan. Test
stimuli are created to exercise these features, and the design is simulated to validate its behavior
against expectations. Test results are manually reviewed, and once verified, the test is marked
complete, progressing incrementally through the plan.
This method shows steady progress, which appeals to project managers, and requires minimal
infrastructure, since each test is written directly from the verification plan. While sufficient for many designs given
enough time and resources, directed testing becomes impractical as design complexity grows.
Doubling the complexity either doubles the time needed or requires significantly more
resources, neither of which is feasible.
The challenge lies in achieving 100% coverage efficiently. Directed testing progresses linearly,
making it unsuitable for verifying large design spaces with numerous features and potential
bugs. For example, verifying all input combinations for a complex design, such as a 32-bit
adder, would be computationally prohibitive. A more effective methodology is needed to
accelerate bug discovery and achieve comprehensive coverage.


1.5 Methodology Basics


The following principles are used:
• Constrained-random stimulus
• Functional coverage
• Layered testbench using transactors
• Common testbench for all tests
• Test case-specific code kept separate from testbench

Random stimulus is vital for testing complex designs, as it can uncover unexpected bugs that
directed tests might miss. Directed tests target known issues, while random tests explore
broader scenarios. To use random stimuli effectively, functional coverage is essential for
tracking verification progress, and automated result prediction, typically through a scoreboard
or reference model, is required. Building a robust testbench infrastructure—including self-
checking mechanisms—requires significant upfront effort but provides long-term benefits.


A layered testbench approach simplifies the process by dividing the system into manageable
components. Transactors serve as reusable building blocks, allowing tests to shape stimuli and
introduce disturbances without altering the core infrastructure. Test-specific code is kept
separate to maintain the testbench's reusability and clarity.

Although constructing a random testbench takes more time initially, especially for the self-
checking features, the investment pays off. Once in place, it supports multiple tests with
minimal additional effort. Random tests typically consist of concise code for constraining
stimuli and generating specific scenarios, making them more efficient than directed tests, which
must be created individually.

As testing progresses, adjustments to random constraints can uncover new bugs. While the
final bugs may require directed tests, the majority are found through random testing. A random
testbench can be adapted for directed testing if needed, but a directed testbench cannot achieve
the versatility of a random one. Despite the upfront challenges, this approach accelerates bug
discovery and enhances overall verification efficiency.

1.6 Constrained-Random Stimulus

In constrained-random testing, the goal is for the simulator to generate stimulus based on
specific constraints, not completely random values. Using SystemVerilog, you define the
format of the stimulus, such as the size of an address or the types of operations (e.g., ADD,
SUB, STORE), and the simulator ensures that the generated values adhere to these constraints.
This process allows for testing that remains relevant to the design while covering a broader
space than directed tests, which typically focus on predefined scenarios.
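
As a sketch of what such constraints look like in SystemVerilog (the opcode list, address width, and constraint values below are illustrative assumptions, not taken from a particular design):

typedef enum {ADD, SUB, STORE, LOAD} opcode_e;

class Transaction;
  rand opcode_e   opcode;
  rand bit [15:0] addr;
  rand bit [31:0] data;

  // Keep generated values inside the legal design space
  constraint c_addr { addr < 16'h4000; }
  // Weight arithmetic operations more heavily than memory operations
  constraint c_op   { opcode dist {ADD := 4, SUB := 4, STORE := 1, LOAD := 1}; }
endclass

module tb;
  initial begin
    Transaction tr = new();
    repeat (5) begin
      if (!tr.randomize()) $fatal(1, "randomize() failed");
      $display("op=%s addr=%0h data=%0h", tr.opcode.name(), tr.addr, tr.data);
    end
  end
endmodule

Each call to randomize() produces a new set of values, all of which satisfy every active constraint.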

The results from the design’s actual output are compared against those predicted by a high-
level model, enabling functional verification. Constrained-random tests often explore new
areas of the design, potentially uncovering bugs in unexpected places. If a new area reveals an
issue, it’s beneficial, but if it generates illegal functionality, additional constraints must be
added to prevent such cases. In some instances, directed tests may still be needed to cover
specific cases not addressed by the random tests.

To achieve complete coverage, it's important to run constrained-random tests with many different
seeds so that they explore different parts of the design space. This
approach, while requiring more initial setup, ultimately provides broader test coverage and
helps identify issues that might otherwise be overlooked.


1.7 What Should You Randomize?


When you think of randomizing the stimulus to a design, you might first pick the data fields.
These values are the easiest to create: just call $random. The problem is that this choice
gives a very low payback in terms of bugs found. The primary types of bugs found with random
data are data path errors, perhaps with bit-level mistakes. You need to find bugs in the control
logic, the source of the most devious problems.
Think broadly about all design inputs, such as the following (a sketch of a class that randomizes several of these appears after the list).
• Device configuration
• Environment configuration
• Input data
• Protocol exceptions
• Errors and violations
• Delays
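
A sketch of a transaction class that randomizes several of these categories at once (every field name and constraint here is hypothetical):

class BusTran;
  rand bit [31:0] data;           // input data
  rand bit [3:0]  src_port;       // device configuration
  rand int        idle_cycles;    // delays between transactions
  rand bit        inject_crc_err; // errors and violations

  constraint c_delay { idle_cycles inside {[0:10]}; }
  // Inject an error in roughly 5% of transactions
  constraint c_err   { inject_crc_err dist {0 := 95, 1 := 5}; }
endclass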

1.7.1 Device and Environment Configuration

The most common reason bugs are missed in RTL design testing is the lack of sufficient
configuration variation. Many tests focus on the design’s default state after reset or apply a
fixed set of initialization vectors. This approach is similar to testing an operating system right
after installation, without considering how real-world usage can affect performance or cause
issues.

In actual use, the design's configuration changes as it operates. For example, a time-division
multiplexor switch with thousands of input channels may have numerous valid configurations,
with each input potentially divided into multiple channels. While standard configurations are
commonly used, a large number of combinations are possible, and missing these configurations
during testing can lead to bugs. One engineer faced this challenge and was limited by the
complexity of manually writing testbench code for each possible configuration. By introducing
randomized channel parameters, they were able to test a broader range of configurations and
uncover bugs related to configuration.


In real-world environments, the device under test (DUT) interacts with other components.
Thus, the entire environment, including simulation length, number of devices, and their
configurations, should be randomized. Constraints are necessary to ensure that the
configurations are legal. For example, when verifying an I/O switch chip that connects multiple
PCI buses to a memory bus, engineers randomized the number of PCI buses, the number of
devices on each bus, and the specific parameters for each device. They used functional
coverage to track the combinations tested and ensure comprehensive coverage. Other
parameters, such as test length, error injection, and delay modes, should also be considered for
thorough testing.
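
A sketch of how such an environment configuration might be randomized with legality constraints (the class, its fields, and the numeric bounds are illustrative assumptions):

class EnvConfig;
  rand int unsigned num_buses;       // number of PCI buses
  rand int unsigned devs_per_bus[];  // devices on each bus
  rand int unsigned test_length;     // transactions to run

  constraint c_buses  { num_buses inside {[1:4]}; }
  constraint c_size   { devs_per_bus.size() == num_buses; }
  constraint c_devs   { foreach (devs_per_bus[i]) devs_per_bus[i] inside {[1:8]}; }
  constraint c_length { test_length inside {[100:10_000]}; }
endclass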

1.7.2 Input Data

When considering random stimulus, one might think of generating transactions, like a bus write
or ATM cell, and filling in the data fields with random values. While this approach is relatively
simple, it requires careful preparation of transaction classes. Additionally, you must account
for layered protocols, error injection, scoreboarding, and functional coverage to ensure
thorough testing and accurate results.

1.7.3 Protocol Exceptions, Errors, and Violations

One of the most frustrating issues in hardware design is when a device, like a PC or cell phone,
locks up due to an unhandled error. This usually happens because some piece of logic
encounters an error condition it cannot recover from, causing the device to stop functioning.
To prevent such issues in your own hardware, simulate potential error scenarios. Consider what
happens when a bus transaction fails, when an invalid operation occurs, or when mutually
exclusive signals are driven simultaneously.

In addition to provoking errors, it's important to detect them. For example, if mutually exclusive
signals are violated, your testbench should include checkers that identify these violations,
printing warning messages or, ideally, generating an error that halts the simulation. This
approach makes it easier to catch issues early in the testing process, rather than spending hours
tracing malfunctions back to their source. Implementing assertions in your testbench and design
code is key for this. However, be sure to have the option to disable error-stopping code for
testing error handling scenarios.
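
A sketch of such a checker as a concurrent assertion (the clock and grant signal names are invented for illustration):

module mutex_check (input logic clk, grant_a, grant_b);
  // Fails if both grants are ever active in the same cycle
  property p_mutex;
    @(posedge clk) !(grant_a && grant_b);
  endproperty

  assert property (p_mutex)
    else $error("grant_a and grant_b asserted simultaneously");
endmodule

When a test deliberately injects such a violation to exercise error handling, the assertion can be turned off at run time, for example with the standard $assertoff system task.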

1.7.4 Delays and Synchronization

The speed at which your testbench sends stimulus should vary with random delays to help
uncover protocol bugs. While writing a test with the shortest delays is simple, it may not create
all possible stimulus combinations, and subtle boundary condition bugs are often exposed when
more realistic delays are used.

Even if a block functions correctly with all possible stimulus permutations from a single
interface, issues may arise when transactions flow into multiple inputs. It's important to test
different timing scenarios, such as when inputs arrive at the fastest rate while the output is
throttled, or when multiple inputs receive stimulus concurrently or with staggered delays.
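
A sketch of a driver loop with a randomized gap between transactions (vif, a virtual interface with a clk signal, and send_tran() are assumed to exist elsewhere in the testbench):

task automatic run_driver();
  int gap;
  forever begin
    gap = $urandom_range(0, 20);     // 0 = back-to-back, larger = throttled
    repeat (gap) @(posedge vif.clk);
    send_tran();                     // drive one transaction
  end
endtask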


1.7.5 Parallel Random Testing

When running tests, directed tests produce a fixed set of stimulus and response vectors,
meaning you must modify the test to change the stimulus. In contrast, random tests involve a
testbench and a random seed, and running the same test with different seeds generates distinct
sets of stimuli, broadening test coverage and making better use of your efforts.

Each simulation needs a unique seed. While some people use the time of day, this can still lead
to duplicates, especially in a batch queuing system across a CPU farm. To avoid this, combine
the processor name and process ID with the seed. This ensures that even if multiple jobs start
at the same time, they will each receive a unique seed and generate different stimulus sets.
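
The seed itself is typically composed by a wrapper script and passed to the simulation on the command line. A sketch of the SystemVerilog side, assuming the script supplies a +seed plusarg built from the hostname and process ID:

module seed_demo;
  initial begin
    int seed;
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                       // fallback when no seed is supplied
    process::self().srandom(seed);    // seed this thread's random generator
    $display("Running with seed %0d", seed);
  end
endmodule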

1.8 Functional Coverage

It can take too long to reach all possible states, and some states may remain unreachable, even
with unlimited simulation time. To effectively track progress, you need to measure what has
been verified.

Functional coverage involves a few steps. First, you add code to the testbench to monitor the
stimulus sent to the device and its response, identifying which functionality has been tested.
Run multiple simulations with different seeds, then merge the results into a report. Finally,
analyze the report to determine which areas remain untested and create new stimulus to cover
these gaps.
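
A sketch of such monitoring code as a covergroup (field names follow the illustrative Transaction class from Section 1.6; the bins are arbitrary):

class Coverage;
  Transaction tr;   // most recently sampled transaction

  covergroup cg;
    cov_op   : coverpoint tr.opcode;
    cov_addr : coverpoint tr.addr {
      bins lo = {[16'h0000:16'h1FFF]};
      bins hi = {[16'h2000:16'h3FFF]};
    }
    op_x_addr : cross cov_op, cov_addr;   // which ops hit which regions?
  endgroup

  function new();
    cg = new();   // an embedded covergroup is constructed in new()
  endfunction

  function void sample(Transaction t);
    tr = t;
    cg.sample();
  endfunction
endclass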

1.8.1 Feedback from Functional Coverage to Stimulus

A random test evolves using feedback, where multiple simulations with different seeds
generate unique input sequences. However, as the test progresses, it becomes less likely to
cover unvisited areas of the design space. As functional coverage approaches its limit, you need
to adjust the test to explore new areas, a process called "coverage-driven verification."

In an ideal scenario, a smart testbench could automatically adjust to cover uncovered areas. For
example, in a previous job, I wrote a test to generate every bus transaction for a processor,
ensuring all terminators fired at the right cycles. After the processor's timing changed, I had to
reanalyze and adjust the test, which was time-consuming. A more efficient approach uses
random transactions and terminators, and as the test runs, coverage improves. The test can be
made flexible enough to adapt to timing changes by adjusting constraint weights dynamically.

However, this type of dynamic feedback is rare in practice. In most cases, you need to manually
analyze functional coverage reports and adjust random constraints. Formal analysis tools like
Magellan use feedback, analyzing the design to find reachable states and calculating the
stimulus required to cover them, but such techniques are not typically used for constrained-
random stimulus.


1.9 Testbench Components

In simulation, the testbench acts as a wrapper around the DUT (Design Under Test), similar to
how a hardware tester connects to a physical chip. Both the testbench and tester provide
stimulus and capture responses. However, the key difference is that the testbench operates over
various levels of abstraction, generating transactions and sequences that are ultimately
converted into bit vectors, whereas a hardware tester works directly at the bit level.

The testbench consists of several Bus Functional Models (BFMs), which are testbench
components that simulate real components for the DUT. These BFMs generate stimulus and
check the responses, mimicking the behavior of buses like AMBA, USB, PCI, and SPI. They
are high-level models that follow the protocol but are not detailed or synthesizable, allowing
for faster execution. In cases where FPGA or emulation prototyping is used, the BFMs need to
be synthesizable for hardware integration.

1.10 Layered Testbench

A layered testbench is a fundamental concept in modern verification methodologies. While it
may initially seem to add complexity, it actually simplifies the process by breaking the
testbench into smaller, manageable pieces that can be developed independently. Rather than
creating a single complex routine to generate all types of stimuli, handle errors, and manage
multi-layer protocols, the layered approach prevents the testbench from becoming overly
complicated and unmaintainable. Additionally, this structure promotes the reuse and
encapsulation of Verification IP (VIP), leveraging Object-Oriented Programming (OOP)
principles.


1.10.1 A Flat Testbench

A flat testbench refers to a simple, low-level approach to writing test code. When first learning
Verilog or VHDL, test code often resembles the basic example in Sample 1.1, which
demonstrates a simplified APB (AMBA Peripheral Bus) write. This style is easy to start with
but becomes harder to manage as the design grows.
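
The original sample is not reproduced in these notes; a sketch in the same spirit, driving a single APB write at the signal level, is shown below (signal names follow common APB usage):

module flat_tb;
  logic        PClk = 0, PSel, PEnable, PWrite;
  logic [7:0]  PAddr;
  logic [15:0] PWData;

  always #5 PClk = ~PClk;

  initial begin
    PSel = 0; PEnable = 0; PWrite = 0;
    @(posedge PClk);
    PAddr   = 8'h11;      // setup phase: address, data, direction, select
    PWData  = 16'h0012;
    PWrite  = 1;
    PSel    = 1;
    @(posedge PClk);
    PEnable = 1;          // access phase
    @(posedge PClk);
    PSel    = 0;          // back to idle
    PEnable = 0;
    $finish;
  end
endmodule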

Since the code is very repetitive, tasks can be created for common operations such as a bus
write, as shown in Sample 1.2.
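
A sketch of such a task, in the spirit of Sample 1.2, using the same signal names as above:

task write(input [7:0] addr, input [15:0] data);
  @(posedge PClk);
  PAddr   = addr;
  PWData  = data;
  PWrite  = 1;
  PSel    = 1;
  @(posedge PClk);
  PEnable = 1;
  @(posedge PClk);
  PSel    = 0;
  PEnable = 0;
endtask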


Hence, the testbench becomes simpler, as shown in Sample 1.3.
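
In the spirit of Sample 1.3, the test body collapses to its intent (addresses and data values are illustrative):

initial begin
  write(8'h11, 16'h0012);
  write(8'h12, 16'h0034);
  $finish;
end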

By taking the common actions (such as reset, bus reads, and writes) and putting them in a
routine, you become more efficient and make fewer mistakes. This creation of the physical and
command layers is the first step toward a layered testbench.


1.10.2 The Signal and Command Layers


In a layered testbench, the lowest level is the signal layer, which includes the design under test
(DUT) and its connecting signals. Above this is the command layer, where drivers send
individual commands, like bus read or write, to the DUT's inputs. The DUT's outputs are
monitored by a component that captures signal transitions and organizes them into commands.
Assertions also operate at this layer, monitoring individual signals and tracking changes across
the entire command.
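
A sketch of these two layers, assuming invented signal and class names: an interface bundles the DUT signals (signal layer), and a driver class turns commands into pin activity (command layer):

interface bus_if (input logic clk);
  logic        sel, enable, write;
  logic [7:0]  addr;
  logic [15:0] wdata, rdata;
endinterface

class Driver;
  virtual bus_if vif;   // handle into the signal layer

  function new(virtual bus_if vif);
    this.vif = vif;
  endfunction

  // One command: a bus write
  task write_cmd(input [7:0] addr, input [15:0] data);
    @(posedge vif.clk);
    vif.addr  <= addr;  vif.wdata  <= data;
    vif.write <= 1;     vif.sel    <= 1;
    @(posedge vif.clk);
    vif.enable <= 1;
    @(posedge vif.clk);
    vif.sel    <= 0;    vif.enable <= 0;
  endtask
endclass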

1.10.3 The Functional Layer


In the layered testbench, the functional layer is added above the command layer. This layer
includes the agent (or transactor in VMM), which takes high-level transactions, like DMA read
or write, and breaks them down into individual commands. These commands are sent to a
scoreboard, which predicts the expected results of the transaction. A checker then compares
the commands from the monitor with the predicted results in the scoreboard.
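
A sketch of the predict-then-compare flow (the Transaction type follows the earlier sketch; the method names are invented):

class Scoreboard;
  Transaction expected[$];   // queue of predicted results

  function void predict(Transaction tr);
    expected.push_back(tr);  // output of the reference model
  endfunction

  function void check_actual(Transaction seen);
    Transaction exp;
    if (expected.size() == 0) begin
      $error("DUT produced a transaction with no prediction");
      return;
    end
    exp = expected.pop_front();
    if (seen.data !== exp.data)
      $error("Mismatch: expected %0h, got %0h", exp.data, seen.data);
  endfunction
endclass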

1.10.4 The Scenario Layer


The scenario layer in the testbench is driven by the generator, which orchestrates various steps
of an operation. For example, in a scenario like downloading a music file on an MP3 player,
the process involves multiple actions such as reading and writing control registers, performing
DMA transfers, and executing additional read/write operations. The scenario layer coordinates
these steps with constrained-random values for parameters like track size and memory location,
ensuring that the device performs its intended task, such as playing music or responding to user
input.


The testbench blocks within the dashed line in the figure are developed at the start of the
project. While they may evolve and new functionalities can be added during development,
these blocks should remain unchanged for individual tests. To allow tests to modify the
behavior of these blocks without rewriting them, "hooks" are created using factory patterns and
callbacks. This approach ensures flexibility while maintaining the integrity of the core
testbench components.
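
A sketch of the callback half of this idea (class and method names are invented): the driver invokes any registered callback objects before transmitting, so a test can inject errors without editing the driver:

virtual class DriverCbs;   // base class a test extends
  virtual task pre_tx(ref Transaction tr); endtask
endclass

class CbDriver;
  DriverCbs cbs[$];        // callbacks registered by the test

  task send(Transaction tr);
    foreach (cbs[i]) cbs[i].pre_tx(tr);   // test-specific hook
    // ... drive tr onto the bus ...
  endtask
endclass

// A test adds behavior by extending the callback, not the driver:
class CorruptCbs extends DriverCbs;
  virtual task pre_tx(ref Transaction tr);
    tr.data ^= 32'h1;      // flip a bit to inject an error
  endtask
endclass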

1.10.5 The Test Layer and Functional Coverage

The top-level testbench, represented by the test layer, coordinates the various efforts within the
verification environment. It serves as a conductor, guiding the activities of other components
without directly interacting with them. This layer handles stimulus generation through
constraints and uses functional coverage to measure the progress in meeting verification
requirements. While the functional coverage code evolves throughout the project, it remains
separate from the core environment.

In a constrained-random environment, directed tests can be inserted into the random sequence
or run in parallel, combining both methods to expose bugs that might not have been considered.
The complexity of the testbench depends on the DUT; a simpler design may require fewer
layers, whereas more complex designs with multiple protocol layers necessitate additional
testbench layers. The structure of the testbench can vary based on specific needs and should
adapt to the design's requirements, with flexibility to incorporate physical error handling when
needed.
