SV Session 7
Introduction To SystemVerilog
Prepared by Ahmed Eissa
[email protected]
CONTENTS
• Parameterized Classes
• Introduction to design patterns
• SV scheduling semantics
• Virtual interfaces
CONSTRAINED RANDOM COVERAGE DRIVEN VERIFICATION
[Figure: stimulus spectrum, from directed to constrained random to pure random]
• Directed stimulus:
– With directed testbenches,
individual features are verified
using individual testbenches.
– The stimulus is manually crafted
to exercise that feature.
– The response is verified against
the symptoms that would
appear should the feature not
be correctly implemented.
• Directed stimulus:
– You need to write tests that cover all the features and find the bugs
– You will only discover bugs that you expect
– Poor scalability
– Poor maintainability
– Use for a small number of testcases
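As a sketch of what a directed test looks like (signal names and values here are invented for illustration, and the DUT instantiation is omitted), each test hand-crafts stimulus for exactly one feature and checks one expected response:

```systemverilog
// Hypothetical directed test: one manually crafted scenario per test.
module directed_test;
  logic clk = 0;
  logic [7:0] addr, wdata;
  always #5 clk = ~clk;

  initial begin
    // Manually crafted stimulus: exercise exactly one feature
    @(posedge clk); addr = 8'h10; wdata = 8'hAA;  // write 0xAA to reg 0x10
    @(posedge clk); addr = 8'h10;                 // read the same register
    // The DUT's response would be checked here against the single
    // expected value for this scenario.
    $finish;
  end
endmodule
```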
• Pure random:
– Randomize stimulus with no constraints
– May generate corner testcases that wouldn’t normally be tested
– Usually generates highly redundant scenarios
– May generate illegal stimulus
– Unreasonable for large, complex designs
– How to measure verification progress?
• Constrained random stimulus:
– Steer randomization toward interesting scenarios with constraints and probabilities
– Will create conditions that you have not thought of when writing your verification plan
– Creates unexpected conditions and hits corner cases
– Reduces the bias introduced by the verification engineer when coding directed testbenches
– Takes longer to ramp up compared to directed testcases
– Better maintainability and scalability
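A minimal sketch of a constrained-random transaction (the field names, ranges, and distribution weights are assumptions for illustration):

```systemverilog
// Constrained-random packet: constraints steer the solver toward
// interesting scenarios instead of uniform noise.
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand int unsigned delay;

  // Keep addresses in the legal half of the map (invented range)
  constraint addr_range { addr inside {[8'h00:8'h7F]}; }
  // Weight delays: mostly back-to-back, occasionally long gaps
  constraint delay_dist { delay dist { 0 := 60, [1:10] := 30, [11:100] := 10 }; }
endclass

module tb;
  initial begin
    packet pkt = new();
    repeat (100)
      if (!pkt.randomize())   // randomize() returns 0 on failure
        $error("randomization failed");
  end
endmodule
```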
Challenge:
How to measure verification progress?
Solution:
Coverage
• The solution is to measure
progress against functional
coverage points that will identify
whether a feature has been
exercised.
• The objective becomes filling a
functional coverage model of
your design rather than writing
a series of testcases.
• You could fill this coverage
model using large directed
testbenches or a random
testbench
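A minimal functional coverage model can be sketched as follows (the class, coverpoints, and bins are assumptions for illustration); progress is then measured by filling these bins rather than by counting testcases:

```systemverilog
// Sketch: sample a transaction into a covergroup after each
// randomization; the bins define what "feature exercised" means.
class packet;
  rand bit [7:0] addr;
  rand bit       rw;    // 0 = read, 1 = write
endclass

module cov_tb;
  packet pkt = new();

  covergroup pkt_cg;
    cp_addr : coverpoint pkt.addr {
      bins low  = {[0:127]};
      bins high = {[128:255]};
    }
    cp_rw : coverpoint pkt.rw;
    // Has every address range seen both reads and writes?
    addr_x_rw : cross cp_addr, cp_rw;
  endgroup

  pkt_cg cg = new();

  initial
    repeat (1000) begin
      void'(pkt.randomize());
      cg.sample();
    end
endmodule
```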
• Notice that a random test often
covers a wider space than a
directed one
• This extra coverage may overlap
other tests, or may explore new
areas that you did not anticipate.
• You need to write more
constraints to keep random
generation from creating illegal
design functionality
RANDOMIZATION IN SYSTEMVERILOG
Solution:
Randomization in SystemVerilog is based on a PRNG (pseudo-random number generator)
• The process of solving constraint expressions
is handled by the SystemVerilog constraint
solver.
• If you give a SystemVerilog simulator the same seed and the same
testbench, it should always produce the same results.
Random stability: “I made a minimal change in the DUT/TB and now the output is different. How did that happen?”
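SystemVerilog gives each object and thread its own random stream, seeded hierarchically, so adding or removing an unrelated object can shift every stream after it. One common way to make a sequence robust against such edits is to re-seed the object explicitly (the class and seed below are invented for illustration):

```systemverilog
// Random stability sketch: each class object owns an RNG stream.
// Calling srandom() on the object decouples its sequence from
// seeding changes elsewhere in the testbench.
class packet;
  rand bit [7:0] addr;
endclass

module stability_tb;
  initial begin
    packet pkt = new();
    pkt.srandom(42);          // pin this object's stream to a fixed seed
    void'(pkt.randomize());
    $display("addr = %0h", pkt.addr);
  end
endmodule
```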
module rand_methods;
  initial begin
    // Assumes a packet class with rand fields is declared elsewhere
    packet pkt;
    pkt = new();
    // randomize() returns 0 on failure, so check the result
    if (!pkt.randomize())
      $error("randomization failed");
  end
endmodule
//disabling constraint
pkt.addr_range.constraint_mode(0);
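In context, a minimal sketch of disabling and re-enabling a named constraint (the packet class, field name, and range here are assumptions for illustration):

```systemverilog
// constraint_mode() toggles a named constraint on a per-object basis.
class packet;
  rand bit [7:0] addr;
  constraint addr_range { addr inside {[8'h10:8'h1F]}; }
endclass

module tb;
  initial begin
    packet pkt = new();
    void'(pkt.randomize());             // addr constrained to 0x10-0x1F
    pkt.addr_range.constraint_mode(0);  // disable the constraint
    void'(pkt.randomize());             // addr may now be any 8-bit value
    pkt.addr_range.constraint_mode(1);  // re-enable it
  end
endmodule
```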
• You need to think broadly about all design inputs, such as the following:
– Device configuration
– Environment configuration
– Primary input data
– Encapsulated input data
– Protocol exceptions
– Delays
COVERAGE
Challenge:
How to measure verification progress?
Solution:
Coverage
• Code coverage
– Automatically extracted from the design code
– Code coverage measures how much of the design code is exercised.
– Tool generated
– Usually used to identify holes in the verification that has been performed
– sometimes called structural coverage
– Has many types
▪ Statement coverage: Have all lines of code been executed?
▪ Branch coverage: Have all branches in the code been taken?
▪ Expression coverage: Have all expressions that could affect a branch been executed?
▪ State / Transition coverage: Have all states of a state machine been active and have all
transitions between states been traversed?
▪ Toggle coverage: Have all variables, or all bits of all variables, changed state and/or been
through all transitions?
CODE COVERAGE: STATEMENT
COVERAGE
• Statement coverage can also
be called block coverage or
line coverage, where a block is
a sequence of statements that
are executed if a single
statement is executed.
CODE COVERAGE: EXPRESSION
COVERAGE
• Expression coverage measures
the various ways the decision
expressions in the code can be
evaluated
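For example (the module and signal names below are invented for illustration), branch coverage only needs the `if` to be both taken and not taken, while expression coverage tracks which operand combinations caused each outcome:

```systemverilog
// Expression coverage example: for the condition (en && ready),
// expression coverage asks whether combinations such as
// (en=1, ready=1), (en=1, ready=0), and (en=0, ready=1) were all seen,
// not just whether the branch itself was taken.
module expr_cov_demo (
  input  logic       clk, en, ready,
  input  logic [7:0] d,
  output logic [7:0] q
);
  always_ff @(posedge clk)
    if (en && ready)
      q <= d;
endmodule
```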
CODE COVERAGE: FSM COVERAGE
• If the functional coverage is high but the code coverage is low, what does that mean?
– Your tests are not exercising the full design
– Perhaps your coverage model is inadequate
– it may be time to go back to the hardware specifications and update your verification plan. Then
you need to add more functional coverage points to locate untested functionality.
– The design code may contain dead code
▪ Dead code is code that can’t be exercised such as a default case inside a full case
▪ You may need to add coverage exclusions
• If the functional coverage is low but the code coverage is high, what does that mean?
– Your testbench is giving the design a good workout, but you are unable to put it into all the
interesting states.
– Your design itself might be missing some features
FUNCTIONAL COVERAGE