MODULE 3a Notes
MODULE 3(a)
VERIFICATION GUIDELINES
1. Planning is Crucial: Just as building a house starts with understanding the owners'
needs and budget, creating a testbench begins by defining how the design will be
verified.
o Understand the goals (e.g., what features to test).
o Plan the testbench structure based on these needs.
2. Common Testbench Elements: All testbenches share a basic structure:
o Stimulus generation: Creating inputs for the design.
o Response checking: Validating the outputs against expected behavior.
3. Verification Methodologies: The chapter introduces guidelines for structuring
testbenches, borrowing concepts from methodologies like VMM, UVM, and OVM but
without relying on their base classes.
4. Finding Bugs: Verification engineers are encouraged to embrace bugs as opportunities.
o Each bug found early saves significant cost compared to discovering it later in
the design lifecycle.
o The goal is to test rigorously to uncover as many issues as possible.
5. SystemVerilog as a Hardware Verification Language (HVL): The passage highlights
features that make SystemVerilog suitable for verification:
o Constrained-random stimulus generation: To test various scenarios
systematically.
o Functional coverage: Measuring which parts of the design are tested.
o Object-oriented programming (OOP): Allowing modular, reusable code.
o Multi-threading & IPC: To manage complex interactions between testbench
components (see the sketch after this list).
o Event simulator integration: For precise control of the design under test.
6. Advantages of HVLs over HDLs:
SystemVerilog enables higher abstraction levels for testbench creation, offering better
efficiency and effectiveness compared to traditional HDLs like Verilog or VHDL.
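To make the multi-threading and IPC features concrete, here is a minimal SystemVerilog sketch (all class, task, and variable names are illustrative, not from any specific methodology): a generator thread passes randomized packets to a driver thread through a mailbox, and both threads are launched with fork...join.

class packet;
  rand bit [7:0] data;
endclass

module ipc_demo;
  mailbox #(packet) gen2drv;

  // Generator thread: create random packets and hand them off
  task automatic generator();
    repeat (4) begin
      packet p = new();
      void'(p.randomize());
      gen2drv.put(p);
    end
  endtask

  // Driver thread: block on the mailbox and consume each packet
  task automatic driver();
    packet p;
    repeat (4) begin
      gen2drv.get(p);
      $display("[%0t] driving data=%0h", $time, p.data);
    end
  endtask

  initial begin
    gen2drv = new();
    fork                 // run both threads concurrently
      generator();
      driver();
    join
  end
endmodule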
The different types of bugs that may exist in a design, and how they can be identified through
verification at various levels, are described below:
1. Block-Level Bugs: These are the simplest to find, as they exist within individual
modules created by a single designer. Examples include verifying arithmetic operations
in an ALU or ensuring data packets move correctly through a network switch. Directed
tests can easily uncover such bugs because they focus on specific, isolated functionality.
2. Integration-Level Bugs: These arise at the boundaries between blocks when multiple
designers interpret specifications differently. Issues can emerge in protocols, timing, or
signal interactions between modules, requiring the verification engineer to identify
discrepancies and potentially resolve conflicting interpretations.
3. System-Level Bugs: At the top level, the entire Design Under Test (DUT) is simulated,
often with all components operating simultaneously. This testing is comprehensive but
slower, and it uncovers more complex issues like timing and data alignment problems.
Testing real-world use cases, such as multitasking operations on a device, ensures the
design handles realistic user scenarios effectively.
Block Simulation Challenges: Simulating a single block requires creating stimuli for
interactions with surrounding blocks, which can be complex and code-intensive. While
these simulations are fast, they might also expose bugs in both the design and the
testbench. As multiple blocks are integrated, their interactions reduce the need for
external stimuli but slow down simulations and complicate debugging.
Error Injection and Recovery: Beyond verifying correct functionality, it's crucial to
test how the design handles errors like corrupted data or incomplete transactions.
Designing tests for error recovery is one of the most challenging aspects of verification.
Continuous Verification: Since it's impossible to prove that all bugs are eliminated,
verification must remain an ongoing, iterative process with new strategies constantly
developed to uncover potential issues.
A verification plan is created based on the hardware specification and outlines the features to
be tested along with the methods to be used. These methods may include directed and random
testing, assertions, hardware/software co-verification, emulation, formal proofs, and
verification IP.
The Verification Methodology Manual (VMM) originated from work by Janick Bergeron and
colleagues at Qualis Design. Their efforts began with industry-standard practices, refined over
years of experience, initially for the OpenVera language and later extended to SystemVerilog
in 2005. VMM, alongside its predecessor, the Reference Verification Methodology (RVM),
has been instrumental in verifying complex hardware designs across diverse domains,
including networking devices and processors.
The book introduces foundational principles shared by VMM, the Universal Verification
Methodology (UVM), and the Open Verification Methodology (OVM), all of which
incorporate similar verification techniques. It is tailored for readers new to verification, object-
oriented programming (OOP), or constrained-random testing (CRT), providing a stepping
stone toward more advanced methodologies like UVM and VMM.
While UVM and VMM are suited for complex designs, such as large systems with multiple
protocols and intricate error-handling requirements, this book focuses on smaller-scale
applications. It emphasizes creating reusable, modular code that aligns with broader system
requirements. Additionally, it offers insight into the underlying construction of base classes
and utilities used in advanced methodologies, helping readers grasp the foundational concepts
needed for effective verification.
The purpose of a testbench is to determine the correctness of the DUT. This is accomplished
by the following steps.
• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals
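As a concrete, hypothetical illustration of these steps, the sketch below drives random inputs into a trivial adder, captures its output, and checks it against the expected sum. The DUT and its port names are assumptions made up for the example.

module adder(input logic [7:0] a, b, output logic [8:0] sum);
  assign sum = a + b;
endmodule

module tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));

  initial begin
    repeat (100) begin
      a = $urandom();             // generate stimulus
      b = $urandom();
      #1;                         // apply it and let the DUT settle
      if (sum !== a + b)          // capture the response and check it
        $error("mismatch: %0d + %0d = %0d", a, b, sum);
    end
    $display("adder test done");  // progress would normally be tracked with coverage
    $finish;
  end
endmodule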
Traditionally, design verification relied on directed testing, where specific tests are written to
target individual features based on the hardware specification and a verification plan. Test
stimuli are created to exercise these features, and the design is simulated to validate its behavior
against expectations. Test results are manually reviewed, and once verified, the test is marked
complete, progressing incrementally through the plan.
This method shows steady progress, which appeals to project managers, and requires minimal
infrastructure, since each test is written directly from the verification plan. While sufficient for many designs given
enough time and resources, directed testing becomes impractical as design complexity grows.
Doubling the complexity either doubles the time needed or requires significantly more
resources, neither of which is feasible.
The challenge lies in achieving 100% coverage efficiently. Directed testing progresses linearly,
making it unsuitable for verifying large design spaces with numerous features and potential
bugs. For example, verifying all input combinations for a complex design, such as a 32-bit
adder, would be computationally prohibitive. A more effective methodology is needed to
accelerate bug discovery and achieve comprehensive coverage.
Random stimulus is vital for testing complex designs, as it can uncover unexpected bugs that
directed tests might miss. Directed tests target known issues, while random tests explore
broader scenarios. To use random stimuli effectively, functional coverage is essential for
tracking verification progress, and automated result prediction, typically through a scoreboard
or reference model, is required. Building a robust testbench infrastructure—including self-
checking mechanisms—requires significant upfront effort but provides long-term benefits.
A layered testbench approach simplifies the process by dividing the system into manageable
components. Transactors serve as reusable building blocks, allowing tests to shape stimuli and
introduce disturbances without altering the core infrastructure. Test-specific code is kept
separate to maintain the testbench's reusability and clarity.
Although constructing a random testbench takes more time initially, especially for the self-
checking features, the investment pays off. Once in place, it supports multiple tests with
minimal additional effort. Random tests typically consist of concise code for constraining
stimuli and generating specific scenarios, making them more efficient than directed tests, which
must be created individually.
As testing progresses, adjustments to random constraints can uncover new bugs. While the
final bugs may require directed tests, the majority are found through random testing. A random
testbench can be adapted for directed testing if needed, but a directed testbench cannot achieve
the versatility of a random one. Despite the upfront challenges, this approach accelerates bug
discovery and enhances overall verification efficiency.
In constrained-random testing, the goal is for the simulator to generate stimulus based on
specific constraints, not completely random values. Using SystemVerilog, you define the
format of the stimulus, such as the size of an address or the types of operations (e.g., ADD,
SUB, STORE), and the simulator ensures that the generated values adhere to these constraints.
This process allows for testing that remains relevant to the design while covering a broader
space than directed tests, which typically focus on predefined scenarios.
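A minimal sketch of what such constrained-random stimulus might look like in SystemVerilog is shown below; the class name, field widths, address range, and operation weights are all assumptions chosen for illustration.

typedef enum {ADD, SUB, STORE} op_e;

class instruction;
  rand op_e       op;
  rand bit [31:0] addr;

  // Keep addresses word-aligned and inside a 1 MB region
  constraint legal_addr { addr[1:0] == 2'b00; addr < 32'h0010_0000; }

  // Bias the mix of operations: stores are less frequent
  constraint op_mix { op dist {ADD := 4, SUB := 4, STORE := 2}; }
endclass

module tb_cr;
  initial begin
    instruction instr = new();
    repeat (10) begin
      void'(instr.randomize());
      $display("op=%s addr=%h", instr.op.name(), instr.addr);
    end
  end
endmodule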
The results from the design’s actual output are compared against those predicted by a high-
level model, enabling functional verification. Constrained-random tests often explore new
areas of the design, potentially uncovering bugs in unexpected places. If a new area reveals an
issue, it’s beneficial, but if it generates illegal functionality, additional constraints must be
added to prevent such cases. In some instances, directed tests may still be needed to cover
specific cases not addressed by the random tests.
To achieve complete coverage, it’s important to run constrained-random tests with various
seeds to explore different parts of the design space, as shown in the coverage diagram. This
approach, while requiring more initial setup, ultimately provides broader test coverage and
helps identify issues that might otherwise be overlooked.
The most common reason bugs are missed in RTL design testing is the lack of sufficient
configuration variation. Many tests focus on the design’s default state after reset or apply a
fixed set of initialization vectors. This approach is similar to testing an operating system right
after installation, without considering how real-world usage can affect performance or cause
issues.
In actual use, the design's configuration changes as it operates. For example, a time-division
multiplexor switch with thousands of input channels may have numerous valid configurations,
with each input potentially divided into multiple channels. While standard configurations are
commonly used, a large number of combinations are possible, and missing these configurations
during testing can lead to bugs. One engineer faced this challenge and was limited by the
complexity of manually writing testbench code for each possible configuration. By introducing
randomized channel parameters, they were able to test a broader range of configurations and
uncover bugs related to configuration.
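Below is a hedged sketch of how such configuration randomization might be coded; the class name, the limit of 2000 inputs, and the 1-4 sub-channels per input are invented numbers, not the actual switch described above.

class switch_cfg;
  rand int unsigned num_inputs;
  rand bit [2:0]    channels_per_input[];

  constraint legal {
    num_inputs inside {[1:2000]};
    channels_per_input.size() == num_inputs;
    foreach (channels_per_input[i])
      channels_per_input[i] inside {[1:4]};  // each input split into 1-4 channels
  }
endclass

// Usage: randomize once per test, then build the environment from the result
// switch_cfg cfg = new();
// void'(cfg.randomize());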
In real-world environments, the device under test (DUT) interacts with other components.
Thus, the entire environment, including simulation length, number of devices, and their
configurations, should be randomized. Constraints are necessary to ensure that the
configurations are legal. For example, when verifying an I/O switch chip that connects multiple
PCI buses to a memory bus, engineers randomized the number of PCI buses, the number of
devices on each bus, and the specific parameters for each device. They used functional
coverage to track the combinations tested and ensure comprehensive coverage. Other
parameters, such as test length, error injection, and delay modes, should also be considered for
thorough testing.
When considering random stimulus, one might think of generating transactions, like a bus write
or ATM cell, and filling in the data fields with random values. While this approach is relatively
simple, it requires careful preparation of transaction classes. Additionally, you must account
for layered protocols, error injection, scoreboarding, and functional coverage to ensure
thorough testing and accurate results.
One of the most frustrating issues in hardware design is when a device, like a PC or cell phone,
locks up due to an unhandled error. This usually happens because some piece of logic
encounters an error condition it cannot recover from, causing the device to stop functioning.
To prevent such issues in your own hardware, simulate potential error scenarios. Consider what
happens when a bus transaction fails, when an invalid operation occurs, or when mutually
exclusive signals are driven simultaneously.
In addition to provoking errors, it's important to detect them. For example, if mutually exclusive
signals are violated, your testbench should include checkers that identify these violations,
printing warning messages or, ideally, generating an error that halts the simulation. This
approach makes it easier to catch issues early in the testing process, rather than spending hours
tracing malfunctions back to their source. Implementing assertions in your testbench and design
code is key for this. However, be sure to have the option to disable error-stopping code for
testing error handling scenarios.
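The sketch below shows one way such a checker might be written with an assertion; the signal names ready_a and ready_b are assumptions standing in for whatever pair of signals must be mutually exclusive in the design.

module mutex_checker(input logic clk, input logic ready_a, ready_b);
  property p_mutex;
    @(posedge clk) !(ready_a && ready_b);   // never both high on a clock edge
  endproperty

  a_mutex: assert property (p_mutex)
    else $error("ready_a and ready_b asserted together at %0t", $time);
endmodule

// During error-injection tests, the check can be disabled with $assertoff
// so the deliberately illegal stimulus does not stop the simulation.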
The speed at which your testbench sends stimulus should vary with random delays to help
uncover protocol bugs. While writing a test with the shortest delays is simple, it may not create
all possible stimulus combinations, and subtle boundary condition bugs are often exposed when
more realistic delays are used.
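A small sketch of randomized inter-transaction delay is shown below; the distribution weights are arbitrary assumptions and would be tuned per design.

class delay_gen;
  rand int unsigned cycles;
  // Mostly back-to-back, occasionally a short gap, rarely a long one
  constraint c_delay {
    cycles dist {0 := 70, [1:3] := 20, [4:20] := 10};
  }
endclass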
Even if a block functions correctly with all possible stimulus permutations from a single
interface, issues may arise when transactions flow into multiple inputs. It's important to test
different timing scenarios, such as when inputs arrive at the fastest rate while the output is
throttled, or when multiple inputs receive stimulus concurrently or with staggered delays.
When running tests, directed tests produce a fixed set of stimulus and response vectors,
meaning you must modify the test to change the stimulus. In contrast, random tests involve a
testbench and a random seed, and running the same test with different seeds generates distinct
sets of stimuli, broadening test coverage and making better use of your efforts.
Each simulation needs a unique seed. While some people use the time of day, this can still lead
to duplicates, especially in a batch queuing system across a CPU farm. To avoid this, combine
the processor name and process ID with the seed. This ensures that even if multiple jobs start
at the same time, they will each receive a unique seed and generate different stimulus sets.
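Inside the testbench, a per-run seed produced by the run script could be consumed roughly as sketched below; the +seed plusarg name is an assumption, and the host-name/process-ID combination itself would normally be computed by the script that launches the simulation.

module seed_setup;
  initial begin
    int unsigned seed;
    if ($value$plusargs("seed=%d", seed))
      process::self().srandom(seed);   // seed the root thread's RNG
    $display("running with seed=%0d", seed);
  end
endmodule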
Even with random stimulus, it can take too long to reach all possible states, and some states may remain unreachable, even
with unlimited simulation time. To effectively track progress, you need to measure what has
been verified.
Functional coverage involves a few steps. First, you add code to the testbench to monitor the
stimulus sent to the device and its response, identifying which functionality has been tested.
Run multiple simulations with different seeds, then merge the results into a report. Finally,
analyze the report to determine which areas remain untested and create new stimulus to cover
these gaps.
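The monitoring step might look roughly like the covergroup sketch below, which samples each generated transaction so the merged report shows which operations and address regions were exercised. The operation enum and bin choices are the same hypothetical ones used in the earlier constrained-random sketch.

typedef enum {ADD, SUB, STORE} op_e;   // same hypothetical ops as before

class stim_coverage;
  op_e       op;
  bit [31:0] addr;

  covergroup cg;
    cp_op     : coverpoint op;
    cp_addr   : coverpoint addr[19:16];  // coarse address regions
    op_x_addr : cross cp_op, cp_addr;    // every op in every region
  endgroup

  function new();
    cg = new();
  endfunction

  // Call once per generated transaction
  function void sample_tr(op_e o, bit [31:0] a);
    op   = o;
    addr = a;
    cg.sample();
  endfunction
endclass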
A random test evolves using feedback, where multiple simulations with different seeds
generate unique input sequences. However, as the test progresses, it becomes less likely to
cover unvisited areas of the design space. As functional coverage approaches its limit, you need
to adjust the test to explore new areas, a process called "coverage-driven verification."
In an ideal scenario, a smart testbench could automatically adjust to cover uncovered areas. For
example, in a previous job, I wrote a test to generate every bus transaction for a processor,
ensuring all terminators fired at the right cycles. After the processor's timing changed, I had to
reanalyze and adjust the test, which was time-consuming. A more efficient approach uses
random transactions and terminators, and as the test runs, coverage improves. The test can be
made flexible enough to adapt to timing changes by adjusting constraint weights dynamically.
However, this type of dynamic feedback is rare in practice. In most cases, you need to manually
analyze functional coverage reports and adjust random constraints. Formal analysis tools like
Magellan use feedback, analyzing the design to find reachable states and calculating the
stimulus required to cover them, but such techniques are not typically used for constrained-
random stimulus.
In simulation, the testbench acts as a wrapper around the DUT, similar to
how a hardware tester connects to a physical chip. Both the testbench and tester provide
stimulus and capture responses. However, the key difference is that the testbench operates over
various levels of abstraction, generating transactions and sequences that are ultimately
converted into bit vectors, whereas a hardware tester works directly at the bit level.
The testbench consists of several Bus Functional Models (BFMs), which are testbench
components that simulate real components for the DUT. These BFMs generate stimulus and
check the responses, mimicking the behavior of buses like AMBA, USB, PCI, and SPI. They
are high-level models that follow the protocol but are not detailed or synthesizable, allowing
for faster execution. In cases where FPGA or emulation prototyping is used, the BFMs need to
be synthesizable for hardware integration.
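A bare-bones sketch of such a BFM is shown below: a class-based driver that wiggles the pins of a simple ready/valid interface. The interface and signal names are invented for illustration and do not correspond to any real standard bus.

interface simple_bus_if(input logic clk);
  logic        valid;
  logic        ready;
  logic [31:0] addr, data;
endinterface

class simple_bus_bfm;
  virtual simple_bus_if vif;

  function new(virtual simple_bus_if vif);
    this.vif = vif;
  endfunction

  // Drive one write transaction at the pin level
  task write(bit [31:0] addr, data);
    @(posedge vif.clk);
    vif.valid <= 1'b1;
    vif.addr  <= addr;
    vif.data  <= data;
    do @(posedge vif.clk); while (!vif.ready);
    vif.valid <= 1'b0;
  endtask
endclass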
A flat testbench refers to a simple, low-level approach to writing test code. When first learning
Verilog or VHDL, the test code often resembled the basic, straightforward example in Sample
1.1, which demonstrates a simplified APB (AMBA Peripheral Bus) Write. This type of
testbench is simple to write at first but becomes increasingly complex and harder to manage as the
design grows.
Since the code is very repetitive, tasks can be created for common operations such as a bus
write, as shown in Sample 1.2.
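Sample 1.2 itself is not reproduced in these notes, but a representative sketch of the idea, a flat testbench module with the repetitive APB-style write sequence wrapped in a task, might look like the following (signal names follow common APB conventions; timing details are simplified).

module flat_tb;
  logic        PClk = 0, PSel, PEnable, PWrite;
  logic [31:0] PAddr, PWData;

  always #5 PClk = ~PClk;

  // One bus write, reusable from anywhere in the test
  task write(input bit [31:0] addr, data);
    @(posedge PClk);
    PSel    <= 1'b1;  PWrite <= 1'b1;   // setup phase
    PAddr   <= addr;  PWData <= data;
    PEnable <= 1'b0;
    @(posedge PClk);
    PEnable <= 1'b1;                     // access phase
    @(posedge PClk);
    PSel    <= 1'b0;  PEnable <= 1'b0;   // return to idle
  endtask

  initial begin
    write(32'h0000_0010, 32'hDEAD_BEEF);
    write(32'h0000_0014, 32'h0000_5555);
    $finish;
  end
endmodule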
By taking the common actions (such as reset, bus reads and writes) and putting them in a
routine, you became more efficient and made fewer mistakes. This creation of the physical and
command layers is the first step to a layered testbench.
The testbench blocks within the dashed line in the figure are developed at the start of the
project. While they may evolve and new functionalities can be added during development,
these blocks should remain unchanged for individual tests. To allow tests to modify the
behavior of these blocks without rewriting them, "hooks" are created using factory patterns and
callbacks. This approach ensures flexibility while maintaining the integrity of the core
testbench components.
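A hedged sketch of the callback idea is shown below: the driver invokes a virtual method before each transaction, so a test can inject errors or gather data by extending the callback class rather than editing the driver. The class and method names are assumptions, and "instruction" is the hypothetical transaction class from the earlier constrained-random sketch.

class driver_cbs;
  // Default hook does nothing; tests override it
  virtual task pre_tx(instruction tr);
  endtask
endclass

class driver;
  driver_cbs cbs[$];   // tests push their callback objects here

  task send(instruction tr);
    foreach (cbs[i]) cbs[i].pre_tx(tr);   // hook for test-specific behavior
    // ... drive tr onto the bus ...
  endtask
endclass

// A test-specific callback that corrupts roughly one address in ten
class corrupt_addr_cb extends driver_cbs;
  virtual task pre_tx(instruction tr);
    if ($urandom_range(9) == 0) tr.addr = ~tr.addr;
  endtask
endclass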
The top-level testbench, represented by the test layer, coordinates the various efforts within the
verification environment. It serves as a conductor, guiding the activities of other components
without directly interacting with them. This layer handles stimulus generation through
constraints and uses functional coverage to measure the progress in meeting verification
requirements. While the functional coverage code evolves throughout the project, it remains
separate from the core environment.
In a constrained-random environment, directed tests can be inserted into the random sequence
or run in parallel, combining both methods to expose bugs that might not have been considered.
The complexity of the testbench depends on the DUT; a simpler design may require fewer
layers, whereas more complex designs with multiple protocol layers necessitate additional
testbench layers. The structure of the testbench can vary based on specific needs and should
adapt to the design's requirements, with flexibility to incorporate physical error handling when
needed.