
VERIFICATION AND SCRIPTING LANGUAGES FOR VLSI DESIGN
Dr. Sk. Shoukat Vali
Asst. Professor, ECE, VNRVJIET
What is verification?
• Verification is a process that ensures conformance of a design to
some predefined set of expectations
• In case of digital design verification, the expectations are defined by
the specifications
• Verification can be described as the reverse process of design, in that
it starts from the implementation and confirms that the specifications
are met
• The goal of verification is to demonstrate the functional correctness of a design: to find design errors and to show that the design implements the specification
• Design verification is the most important aspect of the product development process, consuming as much as 80% of the total product development time.
• The intent is to verify that the design meets the system requirements
and specifications
VLSI Design Flow
• The VLSI design cycle starts with a formal specification of a VLSI chip, follows a series of steps, and eventually produces a packaged chip
Fig. Front-end design flow
HDL - Hardware Description Language
• It is used to design digital logic, e.g., VHDL, Verilog.
• HDL is used for RTL design.
HVL - Hardware Verification Language
• It is used to functionally verify the digital logic designed using an HDL, e.g., SystemC, SystemVerilog.
• HVL is used for RTL verification (random verification).
• An HDL such as Verilog can also be used for verification, but it lacks high-level constructs such as classes and constrained-random structures; that is why an HVL is preferred for verification, as it gives more freedom.
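As a hedged illustration (the class and signal names here are made up, not from the text), this is the kind of SystemVerilog construct plain Verilog lacks: a transaction class with random variables and a constraint.

  // Hypothetical bus transaction: classes, rand variables, and
  // constraints are HVL features that plain Verilog does not offer
  class BusTran;
    rand bit [31:0] addr;
    rand bit [31:0] data;
    constraint addr_range { addr inside {[32'h1000:32'h1FFF]}; }
  endclass

  module tb;
    initial begin
      BusTran tr = new();
      repeat (5) begin
        assert (tr.randomize()) else $error("randomize failed");
        $display("addr=%h data=%h", tr.addr, tr.data);
      end
    end
  endmodule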
Scripting language
• It is a programming language designed for integrating with and communicating between other programming languages.
• Some of the most widely used scripting languages are JavaScript, VBScript (Visual Basic Script), PHP, Perl, Python, Ruby, ASP, and Tcl.
• Since a scripting language is normally used in conjunction with
another programming language, they are often found alongside
HTML, Java or C++.
• One common distinction between a scripting language and a
language used for writing entire applications is that, while a
programming language is typically compiled first before being allowed
to run, scripting languages are interpreted from source code or
bytecode one command at a time.
Scripting Languages
• In the new competitive generation of chip design, time is a critical parameter and the complexity of designs is increasing exponentially.
• It is also observed that verification is considered the longest pole, taking nearly 70% of the chip design life cycle.
• Hence any opportunity to automate a task that is repeated more than once is of great importance for improving verification productivity.
• This is where "scripting" skills are highly valuable for any design or verification engineer.
UNIT – I
Introduction to Verification Methodology
• Basic Testbench Functionality
• Directed Testing
• Methodology Basics
• Functional Coverage
• Testbench Components
• Layered Testbench
• Building a Layered Testbench
• Simulation Environment
The Verification Plan
• The verification plan is derived from the hardware specification
• Contains a description of what features need to be exercised and the
techniques to be used
• These steps may include directed or random testing, assertions,
HW/SW co-verification, emulation, formal proofs, and use of
verification IP.
The Verification Process
• The process of verification parallels the design creation process
• A designer reads the hardware specification for a block, interprets the
human language description, and creates the corresponding logic in a
machine-readable form, usually RTL code
• A verification engineer must also read the hardware specification, create the verification plan, and then follow it to build tests showing that the RTL code correctly implements the features
• Hence a verification engineer needs to understand the design and its intent, and to consider all the corner cases that the designer might not have thought about
Basic Testbench Functionality
The purpose of a testbench is to determine the correctness of the DUT (device under test). This is accomplished by the following steps:
• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals
Some steps are accomplished automatically by the testbench, while others are
manually determined.
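As a minimal sketch of these steps (the 8-bit adder DUT and its port names are assumptions, not from the text):

  module adder (input logic [7:0] a, b, output logic [8:0] sum);
    assign sum = a + b;
  endmodule

  module tb;
    logic [7:0] a, b;
    logic [8:0] sum;
    adder dut (.a(a), .b(b), .sum(sum));   // the DUT
    initial begin
      repeat (10) begin
        a = $urandom; b = $urandom;        // generate and apply stimulus
        #1;                                // let the DUT respond
        if (sum !== a + b)                 // capture the response and check it
          $error("%0d + %0d != %0d", a, b, sum);
      end
      // measuring progress against verification goals would use
      // functional coverage, discussed later
    end
  endmodule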
Directed Testing
• The traditional way of verifying the correctness of a design
• In this approach, you write a verification plan from the hardware specification, with a list of tests, each of which concentrates on a set of related features.
• Then write stimulus vectors that exercise these features in the DUT.
• Simulate the DUT with these vectors and manually review the resulting log files
and waveforms to make sure the design does what you expect.
• Once the test works correctly, you check it off in the verification plan and move to
the next one.
• It produces almost immediate results, since little infrastructure is needed when
you are guiding the creation of every stimulus vector.
• With ample time and staffing, directed testing is sufficient to verify many
designs.
• If you had enough time, you could write all the tests needed for 100% coverage of
the entire verification plan.
• When the design complexity doubles, it takes twice as long to complete or
requires twice as many people to implement it.

Fig. 1. Directed test progress over time
Fig. 2. Directed test coverage
• We need a methodology that finds bugs faster in order to reach the
goal of 100% coverage
Methodology Basics
The following principles are used in this methodology:
1. Constrained-random stimulus
2. Functional coverage
3. Layered testbench using transactors
4. Common testbench for all tests
5. Test-case-specific code kept separate from the testbench
• All these principles are related
• Random stimulus is crucial for exercising complex designs
• A directed test finds the bugs you expect to be in the design, whereas a random
test can find bugs you never anticipated
• When using random stimuli, you need functional coverage to measure verification
progress
• Building the testbench infrastructure, including self-prediction, takes a significant
amount of work.
• A layered testbench helps you control the complexity by breaking the problem
into manageable pieces. Transactors provide a useful pattern for building these
pieces.
• With appropriate planning, you can build a testbench infrastructure that can be
shared by all tests and does not have to be continually modified.
• But the code specific to a single test must be kept separate from the testbench to
prevent it from complicating the infrastructure.
• Building this style of testbench takes longer than a traditional directed testbench
• As a result, there may be a significant delay before the first test can be run
• Every random test you create shares this common testbench, as opposed to
directed tests where each is written from scratch.
• Each random test contains a few dozen lines of code to constrain the
stimulus in a certain direction and cause any desired exceptions, such
as creating a protocol violation.
• So a single constrained-random testbench can find bugs faster than many directed ones.
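For instance (a hedged sketch; the class names are illustrative), a test can steer the shared stimulus with just a few constraint lines:

  // Shared transaction class from the common testbench
  class BusTran;
    rand bit [31:0] addr;
    rand bit        write;
  endclass

  // Test-specific code: a few lines that steer stimulus toward one corner
  class LowAddrWrites extends BusTran;
    constraint steer { write == 1; addr < 32'h100; }
  endclass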

Fig. Constrained-random test progress over time vs. directed testing
Functional Coverage
• Functional coverage is a measure of what functionalities/features of the design
have been exercised by the tests
• This can be useful in constrained random verification (CRV) to know what
features have been covered by a set of tests in a regression
• The process of measuring and using functional coverage consists of several steps.
• First, you add code to the testbench to monitor the stimulus going into the
device, and its reaction and response, to determine what functionality has been
exercised
• Run several simulations, each with a different seed
• Next, merge the results from these simulations into a report
• Lastly, analyze the results and determine how to create new stimulus to reach
untested conditions and logic
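A minimal SystemVerilog covergroup sketch for the monitoring step (the clk and opcode signals are assumptions):

  module cov_ex (input logic clk, input logic [1:0] opcode);
    covergroup op_cg @(posedge clk);   // sample on every clock edge
      coverpoint opcode {
        bins add   = {2'b00};
        bins sub   = {2'b01};
        bins load  = {2'b10};
        bins store = {2'b11};
      }
    endgroup
    op_cg cg = new();                  // instantiate so sampling starts
  endmodule

Each simulation run, with its own seed, updates these bins; the per-run coverage databases are then merged into a single report.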
Feedback from Functional Coverage to Stimulus
• A random test evolves using feedback
• The initial test can be run with many different seeds, thus creating many unique
input sequences
• As the functional coverage asymptotically approaches its limit, you need to
change the test to find new approaches to reach uncovered areas of the design.
This is known as “coverage-driven verification”

Fig. Test progress with and without feedback
• The test can be made flexible enough to create valid stimuli even if the design's timing changes
• This can be achieved by adding a feedback loop that examines the stimulus created so far and then changes the constraint weights
• This improvement would greatly reduce the time needed to reach full coverage, with little manual intervention.
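One possible mechanism for changing the weights, shown as a sketch, is SystemVerilog's dist operator; the weights below are illustrative knobs a feedback script could rewrite between runs:

  class PktTran;
    rand bit [1:0] kind;
    // weighted distribution: raise the weights of under-covered kinds
    constraint kind_weights {
      kind dist { 0 := 1, 1 := 1, 2 := 4, 3 := 4 };
    }
  endclass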
Testbench Components
• Interface
• Driver
• Monitor
• Generator
• Scoreboard
• Environment
• Test

Fig. The testbench – design environment
Fig. Testbench components
Interface
• It is a container in which all the I/O ports can be placed
• Contains design signals that can be driven or monitored
• The design can be driven with these values through the interface
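As a hedged sketch (signal names assumed), an interface bundling a DUT's bus pins might look like:

  interface bus_if (input logic clk);
    logic        valid;
    logic [31:0] addr;
    logic [31:0] data;
  endinterface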
Driver
• Drives the generated stimulus to the design
• The DUT’s inputs are driven by the driver that runs single commands,
such as bus read or write
• It can do pin wiggling (changing the voltage level on a pin) on the DUT through the interface
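A driver sketch that uses the bus_if above through a virtual interface handle (all names are illustrative):

  class Driver;
    virtual bus_if vif;                  // handle to the DUT's pins

    // wiggle the pins for one bus write command
    task write_cmd(bit [31:0] addr, bit [31:0] data);
      @(posedge vif.clk);
      vif.valid <= 1;
      vif.addr  <= addr;
      vif.data  <= data;
      @(posedge vif.clk);
      vif.valid <= 0;
    endtask
  endclass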
Monitor
• The monitor picks up the data processed by the DUT and converts it into a data object
• Monitors the design's input and output ports to capture design activity
Generator
• Generates different types of input stimulus
Scoreboard
• Checks output from the design with expected behavior
• The Scoreboard can have a reference model that behaves the same way as the
DUT
• This model reflects the expected behavior of the DUT.
• Input sent to the DUT is also sent to this reference model.
• So if the DUT has a functional problem, then the output from the DUT will not
match the output from our reference model.
• So comparison of outputs from the design and the reference model will tell us if
there is a functional defect in the design
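A scoreboard sketch of this compare flow (the reference model here is a deliberately trivial placeholder, not the author's):

  class Scoreboard;
    bit [31:0] expected_q [$];            // predictions from the reference model

    // reference model: predict the DUT output for each input
    function void predict(bit [31:0] in);
      expected_q.push_back(in + 1);       // placeholder: assumes the DUT adds 1
    endfunction

    // compare an actual DUT output against the prediction
    function void check(bit [31:0] actual);
      bit [31:0] expected = expected_q.pop_front();
      if (actual !== expected)
        $error("mismatch: got %h, expected %h", actual, expected);
    endfunction
  endclass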
Environment
• Contains all the verification components mentioned
Test
• The test will instantiate an object of the environment and configure it
the way the test wants to
• The design may need thousands of tests and it is not feasible to make
direct changes to the environment for each test.
• Instead we want certain knobs/parameters in the environment that
can be adjusted for each test.
• So the test will have greater control over stimulus generation and will be more effective.
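A sketch of such knobs (the environment fields and values are assumptions):

  class Env;
    int unsigned num_trans     = 100;   // knob: how many transactions to run
    bit          enable_errors = 0;     // knob: inject protocol errors?
  endclass

  program test;
    Env env = new();
    initial begin
      env.num_trans     = 500;          // the test turns the knobs...
      env.enable_errors = 1;            // ...without editing the environment code
    end
  endprogram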
Layered Testbench
• A testbench in SystemVerilog is layered because the process of verification is distributed into segments, each performing a different task
• It helps to make the task easier by dividing the code into smaller
pieces that can be developed separately
• A layered approach allows reuse and encapsulation (the action of
enclosing something) of Verification IP (VIP) which are OOP (Object
Oriented Programming) concepts
• By taking the common actions (such as reset, bus reads and writes) and putting them in a routine, we become more efficient and make fewer mistakes, as sketched below.
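For example (a hedged sketch reusing the bus_if from earlier), a common reset action wrapped in a routine that every test can call:

  task automatic reset_dut(virtual bus_if vif);
    vif.valid <= 0;                   // drive pins to a known state
    repeat (4) @(posedge vif.clk);    // hold for a few cycles
  endtask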
Signal Layers
• Signal layer contains the DUT and the signals that connect it to the testbench
Command Layers
• The DUT’s inputs are driven by the driver that runs single commands, such as bus
read or write
• The DUT’s output drives the monitor that takes signal transitions and groups them
together into commands
• Assertions also cross the command/signal layer, as they look at individual signals and also at changes across an entire command.
The Functional Layer
• The functional layer feeds down into the command layer
• The agent block receives higher-level transactions such as DMA read or write and
breaks them into individual commands or transactions.
• These commands are also sent to the scoreboard that predicts the results of the
transaction.
• The checker compares the commands from the monitor with those in the
scoreboard.
The Scenario Layer
• The functional layer is driven by the generator in the scenario layer
• The scenario layer of the testbench provides the required constrained-random values for the parameters of every scenario
The Test Layer and Functional Coverage
• The test layer contains the constraints to create the stimulus
• Functional coverage measures the progress of all tests in fulfilling the
verification plan requirements
• The functional coverage code changes through the project as various criteria are completed. This code is constantly being modified, and thus it is not part of the environment
Fig. Full testbench with all layers
