Directed Testing
1. Definition: Directed testing involves creating specific test cases based on a hardware
specification to verify a design's correctness.
2. Process:
o Write a verification plan listing tests for related features.
o Create stimulus vectors to exercise these features in the design under test
(DUT).
o Simulate the DUT with these vectors.
o Manually review log files and waveforms to confirm expected behavior.
3. Progress:
o Incremental and steady, suitable for tracking progress.
o Managers favor it due to its measurable headway and immediate results.
4. Benefits:
o Minimal infrastructure needed to start.
o Effective for verifying designs if time and staffing are sufficient.
5. Challenges:
o Requires significant time and resources to cover 100% of the verification plan.
o Effort grows linearly with design complexity: doubling the complexity of the design doubles the time or workforce needed.
6. Scalability Issues:
o Directed testing struggles with complex designs due to the time and
manpower required.
o Faster bug detection methodologies are needed for higher efficiency and full
coverage.
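For reference, a directed test in this style reduces to hand-written stimulus plus manual checking, roughly as in the sketch below (a minimal sketch; the module, clock, and signal names are illustrative assumptions, not tied to any particular DUT):

module directed_test;
  logic        clk = 0;
  logic        valid;
  logic [31:0] addr, data;

  always #5 clk = ~clk;              // free-running clock for the test

  // The DUT would be instantiated here and connected to these signals.

  // One hand-written stimulus vector per feature in the verification plan
  initial begin
    valid = 0;
    @(posedge clk);
    addr  = 32'h0000_0010;           // fixed address chosen from the spec
    data  = 32'hDEAD_BEEF;           // fixed data value
    valid = 1;
    @(posedge clk);
    valid = 0;
    repeat (10) @(posedge clk);      // give the DUT time to respond
    $display("%0t: review the log and waveforms against the spec", $time);
    $finish;
  end
endmodule

Every new feature needs another block like this, which is why the effort grows with the design.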
Methodology Basics
o Principles:
o Constrained-random stimulus
o Functional coverage
o Layered testbench using transactors
o Common testbench for all tests
o Test-specific code kept separate from the testbench
o Key Points:
o Random stimulus is crucial for testing complex designs:
o Directed tests find expected bugs.
o Random tests find unanticipated bugs.
o Automatically generated stimulus requires an automated way to predict results:
o Typically done using a scoreboard or reference model.
o Building testbench infrastructure takes significant effort:
o Self-predicting mechanisms add complexity.
o A layered testbench divides the problem into manageable pieces.
o Transactors are useful for creating these manageable testbench
pieces.
o A well-planned testbench infrastructure can be shared across all tests:
o Includes “hooks” for test-specific actions (e.g., shaping stimulus, injecting disturbances); see the sketch after this list.
o Test-specific code must remain separate to avoid complicating the
testbench.
o Building this testbench style takes longer initially compared to
traditional directed testbenches:
o Self-checking portions introduce a delay before the first test can
run.
o Managers should be aware of and plan for this delay.
o Payoff of the initial effort:
o Shared testbench reduces redundancy.
o Random tests require minimal code to constrain stimulus and
introduce exceptions.
o Bugs are found faster than with directed tests.
o As discovery slows, new random constraints can target unexplored
areas.
o Final bugs may still require directed testing, but most are caught
by random tests.
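A minimal sketch of the “shared testbench with hooks” idea, assuming a simple transaction class and a mailbox between layers (all names are illustrative, not the book's code):

class Transaction;
  rand bit [31:0] addr;
  rand bit [7:0]  data;
endclass

class Generator;
  mailbox #(Transaction) gen2drv;        // channel to the driver layer

  function new(mailbox #(Transaction) gen2drv);
    this.gen2drv = gen2drv;
  endfunction

  // Hook: empty in the shared testbench; an individual test extends
  // Generator and overrides this to shape stimulus or inject disturbances.
  virtual task pre_send(Transaction tr);
  endtask

  task run(int n);
    Transaction tr;
    repeat (n) begin
      tr = new();
      assert(tr.randomize());
      pre_send(tr);                      // test-specific code plugs in here
      gen2drv.put(tr);
    end
  endtask
endclass

The test-specific override lives in the test file, so the shared Generator never accumulates per-test logic.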
Constrained-Random Stimulus
While you want the simulator to generate the stimulus, you don’t want
totally random values.
Use the SystemVerilog language to describe the format of the stimulus
(e.g., “address is 32-bits, opcode is X, Y, or Z, length < 32 bytes”).
The simulator picks values that meet the constraints.
Constraining random values to create relevant stimuli is covered in
Chapter 6.
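A sketch of that constraint format in SystemVerilog, using the example quoted above (class, module, and field names are assumptions for illustration):

typedef enum {X, Y, Z} opcode_t;

class Stimulus;
  rand bit [31:0] addr;                  // "address is 32 bits"
  rand opcode_t   opcode;                // enum limits opcode to X, Y, or Z
  rand bit [7:0]  length;
  constraint c_length { length < 32; }   // "length < 32 bytes"
endclass

module rand_demo;
  initial begin
    Stimulus s = new();
    repeat (5) begin
      assert(s.randomize());             // the solver picks values meeting the constraints
      $display("addr=%h opcode=%s length=%0d", s.addr, s.opcode.name(), s.length);
    end
  end
endmodule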
Stimulus values are sent into:
o The design under test (DUT).
o A high-level model that predicts the expected results.
The design’s actual output is compared with the predicted output.
Coverage Details:
o Figure 1-4 shows constrained-random test coverage over the total
design space.
o Random tests often cover a wider space than directed tests.
o Extra coverage may:
Overlap with other tests.
Explore new, unexpected areas (these may expose bugs, or may require new constraints if the stimulus is illegal).
Steps to Achieve Complete Coverage (Figure 1-5):
1. Start with basic constrained-random tests.
2. Run tests with many different seeds.
3. Review functional coverage reports and identify gaps.
4. Make minimal code changes (e.g., add constraints, inject errors, or
delays).
5. Spend most of your time refining the random tests.
6. Write directed tests only for features unlikely to be reached by
random tests.
A few directed tests may still be required to find cases not covered by
constrained-random tests.
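Building on the Stimulus sketch above, step 4 usually amounts to a few extra lines of constraint rather than a new test; for example (the coverage hole being targeted is hypothetical):

// A new test layers a constraint onto the shared stimulus class...
class CornerStimulus extends Stimulus;
  constraint c_corner { opcode == Z; length inside {[28:31]}; }
endclass

// ...or applies it inline for a single randomize call:
//   assert(s.randomize() with { opcode == Z; length > 28; });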
What Should You Randomize?
General Randomization
Randomizing design stimulus can extend beyond simple data fields.
Common design inputs to randomize:
o Device configuration
o Environment configuration
o Input data
o Protocol exceptions
o Delays
o Errors and violations
1. Device and Environment Configuration
Many bugs are missed because tests lack diverse configurations.
Typical issue:
o Testing designs only after reset or with fixed initialization vectors.
o Example: Testing a PC’s OS right after installation without applications.
Real-world configurations evolve and become more random with use.
Example: Time-Division Multiplexor Switch:
o 2000 input channels mapped to 12 output channels.
o Huge set of possible configurations, beyond standard ones.
o Solution: Automate configuration randomization for broader coverage.
Randomize the entire environment configuration, including:
o Simulation length.
o Number of devices.
o Device setup.
Use constraints to ensure legal configurations.
Example: I/O Switch Chip:
o Randomly configure PCI buses (1–4) and devices (1–8) with various
parameters.
o Track combinations with functional coverage.
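A sketch of randomizing the environment configuration for that I/O switch example, with constraints keeping it legal (field names and the simulation-length range are assumptions):

class SwitchConfig;
  rand int unsigned num_pci_buses;
  rand int unsigned devices_per_bus[];     // one entry per bus
  rand int unsigned sim_length_cycles;

  constraint c_legal {
    num_pci_buses inside {[1:4]};                        // 1-4 PCI buses
    devices_per_bus.size() == num_pci_buses;
    foreach (devices_per_bus[i])
      devices_per_bus[i] inside {[1:8]};                 // 1-8 devices per bus
    sim_length_cycles inside {[1000:100000]};            // randomize run length too
  }
endclass

Functional coverage on these fields then shows which configurations have actually been exercised.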
2. Input Data
Randomly filling transaction data fields is the easy part, once transaction classes are prepared.
The harder part is planning ahead for:
o Layered protocols.
o Error injection.
o Scoreboarding.
o Functional coverage.
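A sketch of a transaction class that anticipates error injection and scoreboarding up front (field names and the error weighting are assumptions):

class Packet;
  rand bit [7:0] payload[];
  rand bit       corrupt_crc;              // protocol-exception hook

  constraint c_size  { payload.size() inside {[1:64]}; }
  constraint c_error { corrupt_crc dist {0 := 95, 1 := 5}; }   // errors stay rare

  // Prediction used by the scoreboard to check the DUT's output
  function bit [7:0] expected_checksum();
    bit [7:0] sum = 0;
    foreach (payload[i]) sum += payload[i];
    return sum;
  endfunction
endclass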
3. Protocol Exceptions, Errors, and Violations
Simulate real-world error conditions to ensure the design recovers properly.
Examples:
o Incomplete bus transactions.
o Invalid operations.
o Mutually exclusive signals driven simultaneously.
Add checkers to detect violations:
o Print warnings.
o Generate errors and wind down tests.
Use assertions to catch issues early, but allow disabling for error-handling tests.
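A sketch of such a checker: an assertion flagging mutually exclusive signals driven together, with a switch so error-handling tests can turn it off (signal names are assumed):

module handshake_checker (input logic clk, rd_en, wr_en);
  bit checks_enabled = 1;                  // error-injection tests clear this

  property p_exclusive;
    @(posedge clk) disable iff (!checks_enabled)
      !(rd_en && wr_en);                   // never read and write at the same time
  endproperty

  assert property (p_exclusive)
    else $error("rd_en and wr_en asserted together at %0t", $time);
endmodule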
4. Delays and Synchronization
Use constrained-random delays to help find protocol bugs.
o Tests with the fastest stimulus run quickly but may miss edge cases.
o Introducing occasional, intermittent delays uncovers subtle bugs.
Explore scenarios:
o Stimulus arriving at different rates or concurrently.
o Output throttling.
o Staggered stimulus with varying delays.
Measure coverage of delay combinations using functional coverage.
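A sketch of constrained-random delays: mostly back-to-back traffic, with occasional longer gaps to expose timing-related bugs (the weights are illustrative):

class DelayGen;
  rand int unsigned idle_cycles;
  constraint c_delay {
    idle_cycles dist { 0 := 70, [1:5] :/ 20, [6:50] :/ 10 };
  }
endclass

// Usage between transactions:
//   assert(dly.randomize());
//   repeat (dly.idle_cycles) @(posedge clk);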
5. Parallel Random Testing
A directed test produces fixed stimulus and response vectors.
Random tests rely on a testbench and a random seed.
o Running with multiple unique seeds expands coverage.
Ensure unique seeds:
o Combine time of day, processor name, and process ID to avoid duplicates.
Manage multiple simulations:
o Organize output files per job with unique directories or file names.
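A sketch of the seed-handling side in SystemVerilog: a launch script would form a unique value from the time of day, host name, and process ID, then pass it in as a plusarg; the testbench reads it and reseeds the root random process (the +seed plusarg name is an assumption; many simulators also provide their own seed switches):

module seed_ctrl;
  int unsigned seed;
  initial begin
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                            // fixed default for reproducibility
    $display("Running with random seed %0d", seed);
    process::self().srandom(seed);         // reseed this simulation's RNG
  end
endmodule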
1.10 Functional Coverage
1. Random Stimuli and Input Space:
o Stimuli can randomly walk through the input space but may miss some states.
2. Unreachable States:
o Some states remain untested even with unlimited simulation time.
3. Need for Measurement:
o Measuring verified functionality is necessary to meet the verification plan.
4. Steps in Functional Coverage:
o Monitoring Stimuli: Add code to the testbench to track inputs and responses.
o Data Reporting: Combine data from simulations into a report.
o Analysis and Stimulus Generation: Analyze reports and create new tests for
untested conditions.
5. SystemVerilog Functional Coverage:
o Detailed in Chapter 9 of the reference material.
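Building on the Stimulus sketch from the constrained-random section, a minimal functional-coverage sketch (the bin choices are illustrative):

class StimCoverage;
  opcode_t  op;
  bit [7:0] len;

  covergroup cg;
    coverpoint op;                                     // which opcodes were seen
    coverpoint len { bins small = {[0:7]};             // coarse length bins
                     bins large = {[8:31]}; }
    cross op, len;                                     // opcode/length combinations
  endgroup

  function new();
    cg = new();                                        // covergroup must be constructed
  endfunction

  function void sample(Stimulus s);
    op  = s.opcode;
    len = s.length;
    cg.sample();                                       // record one observation
  endfunction
endclass

Coverage data collected from runs with many seeds is then merged and reviewed for the gaps described above.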