
An Introduction to SystemVerilog
What is SystemVerilog?

 SystemVerilog is a hardware description and verification
language (HDVL)
 It is an extensive set of enhancements to the IEEE 1364
Verilog-2001 standard
 It has features inherited from Verilog HDL, VHDL, C, and C++
 SystemVerilog is a superset of Verilog: it supports all
Verilog features plus additional ones
 These additional features are discussed in the following
sections
Why SystemVerilog?

 Constrained randomization
 Easy C model integration
 OOP support
 New data types, e.g. logic
 Assertions
 Coverage support
 Narrows the gap between design and verification engineers

SystemVerilog Intent

Verilog:
 Design entry
 Module level verification
 Gate level simulations

SystemVerilog:
 Module level design
 System level verification
 A unified language to span almost the entire SoC design flow
Relaxed data type rules

Verilog:
 Strict about the usage of the wire and reg data types
 Variable types are 4-state: 0, 1, X, Z

SystemVerilog:
 The logic data type can be used, so there is no need to
worry about reg vs. wire
 2-state data types are added, with only the 0 and 1 states
 2-state variables can be used in test benches, where X and
Z are not required
 2-state variables in an RTL model may enable simulators to
be more efficient (see the sketch below)
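
A minimal sketch contrasting the two kinds of types (module
and variable names are illustrative):

module dtypes_demo;
  logic [7:0] bus;    // 4-state (0,1,X,Z); usable where reg or wire was needed
  bit   [7:0] count;  // 2-state (0,1); initializes to 0, never X or Z
  int         i;      // 2-state 32-bit signed integer

  initial begin
    $display("bus   = %b", bus);    // xxxxxxxx - uninitialized 4-state
    $display("count = %b", count);  // 00000000 - 2-state starts at 0
  end
endmodule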
Memory Management

Verilog:
 Memories are static in nature
 Example: reg [7:0] X [0:127]; always occupies 128 bytes of
memory

SystemVerilog:
 Memories can be dynamic in nature, allocated at runtime
 Better memory management through types such as queues
 Example: logic [3:0] length[$]; declares an empty queue of
4-bit logic values with unbounded size (see the sketch below)
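
As an illustration, a small sketch of the queue from the
example above and a few of its built-in methods:

module queue_demo;
  logic [3:0] length[$];  // empty queue, unbounded size

  initial begin
    length.push_back(4'h3);    // append at the tail
    length.push_back(4'h7);
    length.push_front(4'h1);   // insert at the head
    $display("size   = %0d", length.size());       // 3
    $display("popped = %0h", length.pop_front());  // 1
  end
endmodule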
Complexity

Verilog:
 Complex designs require a large amount of RTL code
 Verification code grows along with it to test these designs
 Extra time is spent writing and maintaining both

SystemVerilog:
 Less RTL and verification code
 Less code, hence fewer bugs
 More readable
 Higher level of abstraction due to algorithmic features
(inherited from C++)
Hardware specific procedures

Verilog:
 Uses the general-purpose always procedure to represent
sequential, combinational, and latched logic

SystemVerilog:
 Adds three specialized procedures (sketched below):
 always_ff - sequential logic
 always_comb - combinational logic
 always_latch - latched logic
Port connections

Verilog:
 Ports are connected using either named or positional
instantiation

SystemVerilog:
 Ports can also be connected using Design DUT(.*); which
connects every port to the variable or net with the same
name as the port (see the sketch below)
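
A short sketch, assuming a module named design whose ports
match the local signal names exactly:

module top;
  logic clk, rst_n;
  logic [7:0] data_in, data_out;

  // Implicit connection: each port is wired to the signal
  // of the same name in the enclosing scope
  design DUT (.*);

  // Equivalent Verilog-2001 named instantiation:
  // design DUT (.clk(clk), .rst_n(rst_n),
  //             .data_in(data_in), .data_out(data_out));
endmodule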
Synthesis support

Verilog:
 Extensive support for Verilog-2001 in simulation and
synthesis tools

SystemVerilog:
 Synthesis tool support for SystemVerilog is limited

"This is a major drawback which is holding people back from
accepting SystemVerilog as a design language"
Specific features of HVL

i) Constrained-random stimulus generation
ii) Functional coverage
iii) Higher-level structures, especially Object-Oriented
Programming
iv) Multi-threading and interprocess communication
v) Support for HDL types such as Verilog's 4-state values
vi) Tight integration with the event simulator for control
of the design
Verification Process
 Finding bugs
 Accomplishing the work successfully
 Checking against the hardware specification
 Simulating multiple blocks together
 Testing the entire system
 Injecting errors and handling the problems
Verification plan
• Directed or random testing
• Assertions
• Hardware/software co-verification
• Emulation
• Formal proofs
• Use of verification IP
Basic Test bench Functionality
The purpose of a test bench is to determine the correctness
of the design under test (DUT). This is accomplished by
the following steps.
• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification
goals
Directed Testing
 Start from the hardware specification and a verification
plan with a list of tests.
 Write stimulus vectors based on the DUT.
 As design complexity increases, testing time increases
as well.
Directed Test coverage
• The figure above shows the total design space and the
features that get covered by directed testcases.
• In this space are many features, some of which have bugs.
• You need to write tests that cover all the features and
find the bugs.
Verification Methodology basics
• Constrained-random stimulus
• Functional coverage
• Layered testbench using transactors
• Common testbench for all tests
• Test-specific code kept separate from
testbench
Verification Methodology basics
• Random stimulus is crucial for exercising
complex designs.
• A directed test finds the bugs you expect to
be in the design, while a random test can
find bugs you never anticipated
Constrained-random test progress
• When the simulator generates the stimulus, you don't want
totally random values. Instead, use the SystemVerilog
language to describe the format of the stimulus ("address
is 32-bits, opcode is X, Y, or Z, length < 32 bytes"); see
the sketch after this list.
• The simulator picks values that meet the constraints.
• Constraining the random values turns them into relevant
stimuli.
• These values are sent into the design, and also into a
high-level model that predicts what the result should be.
• The design's actual output is compared with the predicted
output.
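
A minimal sketch of the quoted stimulus format as a
constrained class (the class and field names are
illustrative):

class BusTran;
  typedef enum {X, Y, Z} op_e;
  rand bit [31:0] addr;       // "address is 32-bits"
  rand op_e       opcode;     // "opcode is X, Y, or Z"
  rand bit [7:0]  length;
  constraint c_len { length inside {[1:31]}; }  // "length < 32 bytes"
endclass

module tb;
  initial begin
    BusTran tr = new();
    repeat (5) begin
      if (!tr.randomize()) $fatal(1, "randomize failed");
      $display("addr=%h opcode=%s length=%0d",
               tr.addr, tr.opcode.name(), tr.length);
    end
  end
endmodule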
Constrained-random test progress
• Every test you create shares this common testbench, as
opposed to directed tests where each is written from
scratch.
• Each random test contains a few dozen lines of code to
constrain the stimulus in a certain direction and cause any
desired exceptions, such as creating a protocol violation.
The result is that your single constrained-random testbench
is now finding bugs faster than the many directed ones.
• As the rate of discovery begins to drop off, you can create
new random constraints to explore new areas. The last few
bugs may only be found with directed tests, but the vast
majority of bugs will be found with random tests.
Constrained-Random Stimulus
• Start at the upper left with basic constrained-random tests.
• Run them with many different seeds.
• When you look at the functional coverage reports, find the
holes where coverage is missing.
• Then make minimal code changes, perhaps adding new
constraints, or injecting errors or delays into the DUT.
• Spend most of your time in this outer loop, writing
directed tests only for the few features that are very
unlikely to be reached by random tests.
What should you Randomize?
When you think of randomizing the stimulus to a design, the
first thing that you might think of is the data fields. This
stimulus is the easiest to create; just call $random. But you
need to think broadly about all design input, such as the
following.
• Device configuration
• Environment configuration
• Input data
• Protocol exceptions
• Delays
• Errors and violations
Device and Environment configuration
• What is the most common reason why bugs are missed
during testing of the RTL design?
• Not enough different configurations are tried. Most tests
just use the design as it comes out of reset, or apply a fixed
set of initialization vectors to put it into a known state.
• This is like testing a PC's operating system right after
it has been installed, without any applications. Of course
the performance is fine, and there aren't any crashes.
Device and Environment configuration
• In the real world, your device operates in an environment
containing other components.
• When you are verifying the DUT, it is connected to a
testbench that mimics this environment.
• You should randomize the entire environment configuration,
including the length of the simulation, the number of
devices, and how they are configured. Of course you need to
create constraints to make sure the configuration is legal.
Input Data
• When you read about random stimulus, you probably thought
of taking a transaction such as a bus write or ATM cell and
filling the data fields with random values.
• This approach is fairly straightforward as long as you
carefully prepare your transaction classes.
• You need to anticipate any layered protocols and error
injection, plus scoreboarding and functional coverage.
Protocol exceptions, errors, and violations
• If something can go wrong in the real hardware, try to
simulate it.
• Look at all the errors that can occur. What happens if a
bus transaction does not complete? What if an invalid
operation is encountered?
• Does the design specification state that two signals are
mutually exclusive? Drive them both and make sure the
device continues to operate.
Delays and synchronization
• How fast should your testbench send in stimulus? Always
use constrained-random delays to help catch protocol bugs.
• You can create a testbench that talks to another block at
the fastest rate, but subtle bugs are often revealed when
intermittent delays are introduced.
• A block may function correctly for all possible
permutations of stimulus from a single interface, but subtle
errors may occur when data is flowing into multiple inputs.
• Try to coordinate the various drivers so they can
communicate at different relative timing.
Parallel random testing
• If you run the same test 50 times, each with a unique seed,
you will get 50 different sets of stimuli. Running with
multiple seeds broadens the coverage of your test and
leverages your work.
• You need to choose a unique seed for each simulation.
Some people use the time of day, but that can still cause
duplicates.
• What if you are using a batch queuing system across a CPU
farm and tell it to start 10 jobs at midnight? Multiple jobs
could start at the same time but on different computers, so
the time of day alone is not a reliable seed (see the sketch
below).
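
One way to sketch per-job seeding is to pass the seed in on
the command line; the +seed= plusarg name here is an
assumption, and most simulators also provide a native seed
switch:

module tb;
  int unsigned seed;
  initial begin
    if (!$value$plusargs("seed=%d", seed))  // e.g. +seed=12345
      seed = 1;                             // fallback for interactive runs
    process::self().srandom(seed);          // seed this thread's RNG
    $display("seed=%0d first value=%0d", seed, $urandom_range(99));
  end
endmodule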
Functional Coverage
• Functional coverage is a user-defined metric that measures how
much of the design specification has been exercised in verification.
• Test bench visits some areas often, but takes too long to reach all
possible states. Unreachable states will never be visited, even given
unlimited simulation time.
• Need to measure what has been verified in order to check off items in
your verification plan.
• The process of measuring and using functional coverage consists of
several steps. First, you add code to the test bench to monitor the
stimulus going into the device, and its reaction and response, to
determine what functionality has been exercised.
• Next, the data from one or more simulations is combined
into a report. Lastly, you analyze the results and determine
how to create new stimulus to reach untested conditions and
logic (a covergroup sketch follows).
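
A minimal sketch of that monitoring code, assuming a 2-bit
opcode and a byte-wide address (names are illustrative):

module cov_demo;
  bit clk;
  logic [1:0] opcode;
  logic [7:0] addr;

  covergroup op_cg @(posedge clk);
    coverpoint opcode;                // automatic bin per opcode value
    addr_cp: coverpoint addr {
      bins low  = {[0:127]};
      bins high = {[128:255]};
    }
    cross opcode, addr_cp;            // which opcodes hit which range
  endgroup

  op_cg cg = new();

  always #5 clk = ~clk;
  initial begin
    repeat (20) @(negedge clk) begin  // stand-in for real stimulus
      opcode = $urandom;
      addr   = $urandom;
    end
    $display("coverage = %0.2f%%", cg.get_coverage());
    $finish;
  end
endmodule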
Feedback from functional coverage to stimulus
• As the functional coverage asymptotically approaches its
limit, you need to change the test to find new approaches
to reach uncovered areas of the design. This is known as
“coverage-driven verification.”
• A more productive testing strategy uses random
transactions and terminators. The longer you run it, the
higher the coverage.
• As a bonus, the test could be made flexible enough to
create valid stimuli even if the design’s timing changed.
You could add a feedback loop that would look at the
stimulus created so far (generated all write cycles yet?) and
change the constraint weights (drop write weight to zero).
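
One way to sketch that weight feedback is a dist constraint
whose weight is an ordinary state variable (all names are
illustrative):

class Cmd;
  rand bit write;       // 1 = write, 0 = read
  int wr_weight = 50;   // state variable, tuned by the feedback loop

  constraint c_dir { write dist { 1 := wr_weight, 0 := 50 }; }
endclass

module tb;
  initial begin
    Cmd c = new();
    void'(c.randomize());   // mix of reads and writes
    c.wr_weight = 0;        // coverage shows all write cycles generated
    void'(c.randomize());   // from now on, only reads
  end
endmodule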
Test bench Environment
 In simulation, the test bench wraps around the DUT, just
as a hardware tester connects to a physical chip.
 Both the test bench and the tester provide stimulus and
capture responses. The difference between them is that your
test bench needs to work over a wide range of levels of
abstraction.
Test bench Components
 What goes into that test bench block? It is made of many
bus functional models (BFMs).
 If the real device connects to AMBA, USB, PCI, and SPI
buses, you have to build equivalent components in your test
bench that can generate stimulus and check the response.
 These are not detailed, synthesizable models but rather
high-level transactors.
Layered Test bench
• A key concept for any modern verification methodology is
the layered test bench.
• While this process may seem to make the test bench more
complex, it actually helps to make your task easier by
dividing the code into smaller pieces that can be developed
separately.
• Don't try to write a single routine that can randomly
generate all types of stimulus, both legal and illegal.

Flat Test bench:
When you first learned Verilog and started writing tests,
they probably looked like the following low-level code that
does a simplified APB (AMBA Peripheral Bus) write.
Flat Test bench Example
Example 1-1 Driving the APB pins

module test(PAddr, PWrite, PSel, PWData, PEnable, Rst, clk);
  // Port declarations omitted...
  initial begin
    // Drive reset
    Rst <= 0;
    #100 Rst <= 1;
    // Drive control bus
    @(posedge clk)
    PAddr  <= 16'h50;
    PWData <= 32'h50;
    PWrite <= 1'b1;
    PSel   <= 1'b1;
    // Toggle PEnable
    @(posedge clk)
    PEnable <= 1'b1;
    @(posedge clk)
    PEnable <= 1'b0;
    // Check the result
    if (top.mem.memory[16'h50] == 32'h50)
      $display("Success");
    else
      $display("Error, wrong value in memory");
    $finish;
  end
endmodule
Assertions in SystemVerilog
 An assertion is a check embedded in the design or bound to
a design unit during simulation.
 Warnings or errors are generated on the failure of a
specific condition or sequence of events.
 Assertions are used to check the occurrence of a specific
condition or sequence of events (see the sketch below).
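
A minimal sketch of an immediate and a concurrent assertion,
assuming illustrative arbiter signals; such a checker module
can be attached to a design unit with bind:

module arb_checks (input logic clk, rst_n, req, gnt, sel_a, sel_b);

  // Immediate assertion: checked whenever the block executes
  always_comb
    assert (!(sel_a && sel_b))
      else $error("sel_a and sel_b must be mutually exclusive");

  // Concurrent assertion: a request must be granted in 1 to 3 cycles
  property p_req_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:3] gnt;
  endproperty

  a_req_gnt: assert property (p_req_gnt)
    else $error("req not granted within 3 cycles");
endmodule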
Signal and command layers
• At the bottom is the signal layer that contains the design under test and the
signals that connect it to the test bench.
• The next level up is the command layer. The DUT’s inputs are driven by
the driver that runs single commands such as bus read or write.
 The DUT’s output drives the monitor that takes signal transitions and groups them
together into commands.
 Assertions also cross the command/signal layer, as they look at individual signals
but look for changes across an entire command.
The functional layer
• The functional layer feeds the command layer. The agent block (called the
transactor in the VMM) receives higher-level transactions such as DMA
read or write and breaks them into individual commands.
• These commands are also sent to the scoreboard that predicts the results
of the transaction.
• The checker compares the commands from the monitor with those in the
scoreboard.
The scenario layer
• The functional layer is driven by the generator in the scenario layer.
• What is a scenario? Remember that your job as a verification engineer is to
make sure that this device accomplishes its intended task.
• An example device is an MP3 player that can concurrently play music from
its storage, download new music from a host, and respond to input from the
user, such as volume and track controls.
• Each of these operations is a scenario. Downloading a music file takes several
steps, such as control register reads and writes to set up the operation.
Full test bench with all layers
• The test contains the constraints to create the stimulus.
• A complicated design requires a sophisticated test bench.
• You always need the test layer. For a simple design, the
scenario layer may be so simple that you can merge it with
the agent.
• When estimating the effort to test a design, don't count
the number of gates; count the number of designers.
Building a Layered Testbench
• Now it is time to take the previous diagrams and learn how
to map the components into SystemVerilog constructs.

Creating a simple driver
The driver receives commands from the agent, may inject
errors or add delays, and then breaks the command down into
individual signal changes such as bus requests, handshakes,
etc. The general term for such a test bench block is a
"transactor," which, at its core, is a loop.

Transactor: a generic term for any verification component
that actively interacts with the DUT interfaces (e.g. a
driver or responder).
Transactor Example

task run();
  done = 0;
  while (!done) begin
    // Get the next transaction
    // Make transformations
    // Send out transactions
  end
endtask
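
A slightly more concrete sketch of that loop, assuming the
agent hands transactions to the driver through a mailbox
(all names are illustrative):

class Transaction;
  rand bit [31:0] addr, data;
endclass

class Driver;
  mailbox #(Transaction) gen2drv;   // filled by the agent

  function new(mailbox #(Transaction) gen2drv);
    this.gen2drv = gen2drv;
  endfunction

  task run();
    Transaction tr;
    forever begin
      gen2drv.get(tr);   // get the next transaction (blocks if empty)
      // make transformations: inject errors or add delays here
      drive(tr);         // send out as individual signal changes
    end
  endtask

  task drive(Transaction tr);
    // wiggle the DUT pins, e.g. through a virtual interface
  endtask
endclass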
Simulation Environment Phases
• Until now we have been learning what parts make up the
environment. When do these parts execute?
• We want to clearly define the phases to coordinate the
test bench, so that all the code for a project works
together.
• The three primary phases are Build, Run, and Wrap-up.
Each is divided into smaller steps (a class skeleton is
sketched below).
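
A skeleton sketch of an environment class mirroring the
three phases (the method bodies are placeholders):

class Environment;
  function void build();
    // generate the configuration, then allocate and connect
    // generators, drivers, monitors, and the scoreboard
  endfunction

  task run();
    // start the components, then wait for the test to complete
  endtask

  task wrap_up();
    // sweep for lost data and report pass or fail
  endtask
endclass

module tb;
  initial begin
    Environment env = new();
    env.build();
    env.run();
    env.wrap_up();
  end
endmodule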
Building phase in simulation environment
• The Build phase is divided into the following steps:
• Generate configuration: randomize the configuration of the
DUT and the surrounding environment.
• Build environment: allocate and connect the test bench
components based on the configuration. A testbench component
is one that exists only in the test bench, as opposed to
physical components in the design that are built with RTL
code. For example, if the configuration chose three bus
drivers, the testbench would allocate and initialize them in
this step.
• Reset the DUT, then configure it based on the generated
configuration from the first step.
Run phase in simulation environment
• The Run phase is where the test actually runs. It has the
following steps:
• Start environment: run the test bench components such as
BFMs and stimulus generators.
• Run the test: start the test and then wait for it to
complete. It is easy to tell when a directed test has
completed, but doing so can be complex for a random test.
• You can use the test bench layers as a guide: starting
from the top, wait for a layer to drain all the inputs from
the previous layer (if any), wait for the current layer to
become idle, and then wait for the next lower layer.
• You should also use time-out checkers to ensure that the
DUT or test bench does not lock up (see the sketch below).
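
A minimal sketch of such a time-out checker using
fork...join_any (the limit and the stand-in test task are
assumptions):

module tb;
  task run_test();
    #500 $display("test finished");  // stand-in for the real test
  endtask

  task run_with_timeout(int unsigned limit);
    fork
      run_test();                    // the actual test
      begin
        #limit;
        $fatal(1, "Timeout: DUT or test bench locked up");
      end
    join_any
    disable fork;                    // kill the losing branch
  endtask

  initial run_with_timeout(1000);
endmodule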
Wrap-up phase in simulation environment
• The Wrap-up phase has two steps:
• Sweep: After the lowest layer completes, you need to wait
for the final transactions to drain out of the DUT.
• Report: Once the DUT is idle, sweep the testbench for lost
data. Sometimes the scoreboard holds transactions that
never came out.
• With this information you can create the final report on
whether the test passed or failed. If it failed, be sure to
delete any functional coverage data, as it may not be
correct.
