Verification Academy: Cookbook
Online Methodology Documentation from the
Mentor Graphics Verification Methodology Team
Contact [email protected]
http://verificationacademy.com
Table of Contents
Articles
Introduction 0
Cookbook/Acknowledgements 0
Testbench Architecture 1
Ovm/Testbench 1
Ovm/Testbench/Build 6
Ovm/Testbench/Blocklevel 15
Ovm/Testbench/IntegrationLevel 26
Ovm/Component 36
Ovm/Agent 39
Ovm/Phasing 45
Ovm/Factory 48
Ovm/UsingFactoryOverrides 51
Ovm/SystemVerilogPackages 54
Sequences 192
Ovm/Sequences 192
Ovm/Sequences/Items 197
Ovm/Transaction/Methods 199
Ovm/Sequences/API 204
Ovm/Connect/Sequencer 208
Ovm/Driver/Sequence API 210
Ovm/Sequences/Generation 216
Ovm/Sequences/Overrides 224
Ovm/Sequences/Virtual 226
Ovm/Sequences/VirtualSequencer 233
Ovm/Sequences/Hierarchy 239
Ovm/Driver/Use Models 244
Ovm/Driver/Unidirectional 245
Ovm/Driver/Bidirectional 248
Ovm/Driver/Pipelined 253
Ovm/Sequences/Arbitration 265
Ovm/Sequences/Priority 274
Ovm/Sequences/LockGrab 275
Ovm/Sequences/Slave 282
Ovm/Stimulus/Signal Wait 288
Ovm/Stimulus/Interrupts 293
Ovm/Sequences/Stopping 300
Ovm/Sequences/Layering 301
Datestamp:
- This document is a snapshot of dynamic content from the Online Methodology Cookbook
Introduction
Cookbook/Acknowledgements
UVM/OVM Cookbook Authors:
• Gordon Allan
• Mike Baird
• Rich Edelman
• Adam Erickson
• Michael Horn
• Mark Peryer
• Adam Rose
• Kurt Schwartz
We acknowledge the valuable contributions of all our extended team of contributors and reviewers, and those who help
deploy our methodology ideas to our customers, including: Alain Gonier, Allan Crone, Bahaa Osman, Dave Rich, Eric
Horton, Gehan Mostafa, Graeme Jessiman, Hans van der Schoot, Hager Fathy, Jennifer Adams, John Carroll, John
Amouroux, Jason Polychronopoulos, John Stickley, Nigel Elliot, Peet James, Ray Salemi, Shashi Bhutada, Tim
Corcoran, and Tom Fitzpatrick.
Testbench Architecture
Ovm/Testbench
This chapter covers the basics of OVM testbench architecture and construction, and leads into other chapters
covering each of the constituent parts of a typical OVM testbench.
Topic Overview
//
// Example class that contains a message and some convenience methods
//
class example;
  string message;

  function void set_message(string msg);
    message = msg;
  endfunction: set_message

  function void print();
    $display("%s", message);
  endfunction: print
endclass: example

//
// Module that uses the class - the class is constructed, used and dereferenced
// in the initial block, after the simulation starts
//
module tb;
  example C; // Class handle, initially null

  initial begin
    C = new(); // Handle points to an example object in memory
    C.set_message("This object has been created");
    #10;
    C.print();
    C = null; // C has been dereferenced, the object can be garbage collected
  end
endmodule: tb
The OVM Package
The OVM package contains a class library that comprises three main types of classes: ovm_components, which are used
to construct a class based hierarchical testbench structure; ovm_objects, which are used as data structures for
configuration of the testbench; and ovm_transactions, which are used in stimulus generation and analysis.
An OVM testbench will always have a top level module which contains the DUT and the testbench connections to it. The
process of connecting a DUT to an OVM class based testbench is described in the article on DUT-testbench
connections.
The top level module will also contain an initial block with a call to the OVM run_test() method.
This method starts the execution of the OVM phases, which control the order in which the testbench is built, stimulus
is generated, and the results of the simulation are reported.
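As a sketch, the top level initial block therefore looks like this (the full block level example appears later in this chapter; the DUT, interfaces and clock generation are omitted here):

module top_tb;
  // DUT, SystemVerilog interfaces and clock generation omitted in this sketch

  initial begin
    // Virtual interface handles are placed into the configuration space here
    run_test(); // Starts OVM phasing; the test name comes from the argument or +OVM_TESTNAME
  end
endmodule: top_tb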
The Agent
• A Sequencer - The role of the sequencer is to route sequence_items from the sequence where they are generated to the
driver, and to route any responses back.
• A Driver - The driver converts sequence_items received from the sequencer into pin level activity on the DUT interface.
• A Monitor - The monitor observes pin level activity and converts its observations into sequence_items which are sent
to components such as scoreboards which use them to analyse what is happening in the testbench.
• Configuration object - A container object, used to pass information to the agent which affects what it does and how
it is built and connected.
Each agent should have a configuration object. This will contain a reference to the virtual interface which the driver and
the monitor use to access pin level signals. The configuration object will also contain other data members which
control which of the agent's sub-components are built, and it may also contain information that affects the behaviour of
the agent's components (e.g. error injection, or support for a protocol variant).
The agent configuration object contains an active bit which is used to select whether the agent is active (the driver and
sequencer are built) or passive (they are not required). It may also contain other fields which control whether other
sub-component classes such as functional coverage monitors or scoreboards get built or not.
Other classes that might be included in an agent package:
• Functional coverage monitor - to collect protocol specific functional coverage information
• Scoreboard - usually of limited use
• A responder - A driver that responds to bus events rather than creating them (i.e. a slave version of the driver rather
than a master version).
• (API) Sequences - Utility sequences likely to be of general use, often implementing an API layer for the driver.
The env
The environment, or env, is a container component for grouping together sub-components orientated around a block, or
around a collection of blocks at higher levels of integration.
• Scoreboards - A scoreboard is an analysis component that checks that the DUT is
behaving correctly. OVM scoreboards use analysis transactions from the monitors implemented inside agents. A
scoreboard will usually compare transactions from at least two agents, which is why it is usually present in the env.
• Predictors - A predictor is a component that computes the response expected from the stimulus; it is generally used in
conjunction with other components such as the scoreboard.
• Functional Coverage Monitors - A functional coverage monitor is an analysis component containing one or more
covergroups which are used to gather functional coverage information relating to what has happened in a testbench
during a test case. A functional coverage monitor is usually specific to a DUT.
The diagram shows a block level testbench which consists of a series of tests which build an env which contains several
analysis components and two agents.
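As a sketch of how these analysis components are wired up, the env's connect method hooks the agent analysis ports to the scoreboard and functional coverage monitor (the component and export names here are illustrative; the actual env for the SPI example is shown later in this chapter):

// Illustrative analysis connections made in an env's connect method
function void my_env::connect();
  m_apb_agent.ap.connect(m_scoreboard.apb_export);         // Monitor transactions into the scoreboard
  m_spi_agent.ap.connect(m_scoreboard.spi_export);
  m_spi_agent.ap.connect(m_fcov_monitor.analysis_export);  // And into the functional coverage monitor
endfunction: connect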
Ovm/Testbench/Build
The first phase of an OVM testbench is the build phase. During this phase the ovm_component classes that make up the
testbench hierarchy are constructed into objects. The construction process works top-down, with each level of the
hierarchy being constructed before the next level is configured and constructed. This approach to construction is
referred to as deferred construction.
Factory Overrides
The OVM factory allows an OVM class to be substituted with another derived class at the point of construction. This
facility can be useful for changing or updating component behaviour or for extending a configuration object. The
factory override must be specified before the target object is constructed, so it is convenient to do it at the start of the
build process.
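As a sketch, an override placed at the start of a test build method might look like this (spi_agent_config is from this example; extended_spi_agent_config is a hypothetical extension of it, registered with the factory):

function void my_test::build();
  // Substitute the extended configuration class before any objects are created:
  spi_agent_config::type_id::set_type_override(extended_spi_agent_config::get_type());
  // Factory creation after this point returns an extended_spi_agent_config:
  m_spi_cfg = spi_agent_config::type_id::create("m_spi_cfg");
endfunction: build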
Sub-Component Configuration Objects
Each collective component such as an agent or an env should have a configuration object which defines their structure
and behaviour. These configuration objects should be created in the test build method and configured according to the
requirements of the test case. If the configuration of the sub-component is either complex or is likely to change then it is
worth adding a function call to take care of the configuration, since this can be overloaded in test cases extending from
the base test class.
`ifndef SPI_TEST_BASE
`define SPI_TEST_BASE
//
// Class Description:
//
//
class spi_test_base extends ovm_test;

`ovm_component_utils(spi_test_base)

//------------------------------------------
// Data Members
//------------------------------------------

//------------------------------------------
// Component Members
//------------------------------------------
// The environment class
spi_env m_env;
// Configuration objects
spi_env_config m_env_cfg;
apb_agent_config m_apb_cfg;
spi_agent_config m_spi_cfg;

//------------------------------------------
// Methods
//------------------------------------------
extern function new(string name = "spi_test_base", ovm_component parent = null);
extern function void build();
extern function void configure_env(spi_env_config cfg);
extern function void configure_apb_agent(apb_agent_config cfg);

endclass: spi_test_base
function void spi_test_base::build();
  // Create env configuration object
  m_env_cfg = spi_env_config::type_id::create("m_env_cfg");
  // Call function to configure the env
  configure_env(m_env_cfg);
  // Create apb agent configuration object
  m_apb_cfg = apb_agent_config::type_id::create("m_apb_cfg");
  // Call function to configure the apb_agent
  configure_apb_agent(m_apb_cfg);
  // More to follow
endfunction: build
//
// Convenience function to configure the env
//
// This can be overloaded by extensions to this base class
function void spi_test_base::configure_env(spi_env_config cfg);
cfg.has_functional_coverage = 1;
cfg.has_reg_scoreboard = 0;
cfg.has_spi_scoreboard = 1;
endfunction: configure_env
//
// Convenience function to configure the apb agent
//
// This can be overloaded by extensions to this base class
function void spi_test_base::configure_apb_agent(apb_agent_config cfg);
cfg.active = OVM_ACTIVE;
cfg.has_functional_coverage = 0;
cfg.has_scoreboard = 0;
endfunction: configure_apb_agent
`endif // SPI_TEST_BASE
// The build method from before, adding the apb agent virtual interface assignment
// Build the env, create the env configuration
// including any sub configurations and assigning virtual interfaces
function void spi_test_base::build();
// Create env configuration object
m_env_cfg = spi_env_config::type_id::create("m_env_cfg");
// Call function to configure the env
configure_env(m_env_cfg);
// Create apb agent configuration object
m_apb_cfg = apb_agent_config::type_id::create("m_apb_cfg");
// Call function to configure the apb_agent
configure_apb_agent(m_apb_cfg);
// Adding the apb virtual interface:
m_apb_cfg.APB = ovm_container #(virtual apb_if)::get_value_from_config(this, "APB_vif");
// More to follow
endfunction: build
//
// Configuration object for the spi_env:
//
`ifndef SPI_ENV_CONFIG
`define SPI_ENV_CONFIG
//
// Class Description:
//
//
class spi_env_config extends ovm_object;
//------------------------------------------
// Data Members
//------------------------------------------
// Whether env analysis components are used:
bit has_functional_coverage = 1;
bit has_reg_scoreboard = 0;
bit has_spi_scoreboard = 1;
//------------------------------------------
// Methods
//------------------------------------------
endclass: spi_env_config
`endif // SPI_ENV_CONFIG
//
// Inside the spi_test_base class, the agent config handles are assigned:
//
// The build method from before, adding the apb agent virtual interface assignment
// Build the env, create the env configuration
// including any sub configurations and assigning virtual interfaces
function void spi_test_base::build();
// Create env configuration object
m_env_cfg = spi_env_config::type_id::create("m_env_cfg");
// Call function to configure the env
configure_env(m_env_cfg);
// Create apb agent configuration object
m_apb_cfg = apb_agent_config::type_id::create("m_apb_cfg");
// Call function to configure the apb_agent
configure_apb_agent(m_apb_cfg);
// Adding the apb virtual interface:
m_apb_cfg.APB = ovm_container #(virtual apb_if)::get_value_from_config(this, "APB_vif");
// Assign the apb_agent config handle inside the env_config:
m_env_cfg.m_apb_agent_cfg = m_apb_cfg;
// Repeated for the spi configuration object
m_spi_cfg = spi_agent_config::type_id::create("m_spicfg");
configure_spi_agent(m_spi_cfg);
m_spi_cfg.SPI = ovm_container #(virtual spi_if)::get_value_from_config(this, "SPIvif");
m_env_cfg.m_spi_agent_cfg = m_spi_cfg;
// Now env config is complete set it into config space:
set_config_object("*", "spi_env_config", m_env_cfg, 0);
// Now we are ready to build the spi_env:
m_env = spi_env::type_id::create("m_env", this);
endfunction: build
Examples
The build process is best illustrated by looking at some examples of how different types of component hierarchy
are built up:
A block level testbench containing an agent
An integration level testbench
Ovm/Testbench/Blocklevel
As an example of a block level test bench, we are going to consider a test bench built to verify a SPI Master DUT. In
this case, the OVM environment has two agents - an APB agent to handle bus transfers on its APB slave port, and a SPI
agent to handle SPI protocol transfers on its SPI port. The structure of the overall OVM verification environment is as
illustrated in the block diagram. We shall go through each layer of the test bench and describe how it is put together
from the top down.
(Figure: block level OVM testbench hierarchy)
The Test Bench Module
The top level test bench module is used to encapsulate the SPI Master DUT and connect it to the apb_if and spi_if
SystemVerilog interfaces. There is also an initial block which generates a clock and a reset signal for the APB interface.
In the initial block of the test bench, handles for the APB, SPI and INTR (interrupt) virtual interfaces are put into the
OVM top configuration space using the set_value_in_global_config() method in the ovm_container. Then the
run_test() method is called - this causes the specified test to be constructed and the processing of the OVM phases to
start.
module top_tb;
`include "timescale.v"
import ovm_pkg::*;
import ovm_container_pkg::*;
import spi_test_lib_pkg::*;
// APB clock and reset signals:
logic PCLK;
logic PRESETn;

//
// Instantiate the interfaces:
//
apb_if APB(PCLK, PRESETn); // APB interface
spi_if SPI(); // SPI Interface
intr_if INTR(); // Interrupt
// DUT
spi_top DUT(
// APB Interface:
.PCLK(PCLK),
.PRESETN(PRESETn),
.PSEL(APB.PSEL[0]),
.PADDR(APB.PADDR[4:0]),
.PWDATA(APB.PWDATA),
.PRDATA(APB.PRDATA),
.PENABLE(APB.PENABLE),
.PREADY(APB.PREADY),
.PSLVERR(),
.PWRITE(APB.PWRITE),
// Interrupt output
.IRQ(INTR.IRQ),
// SPI signals
.ss_pad_o(SPI.cs),
.sclk_pad_o(SPI.clk),
.mosi_pad_o(SPI.mosi),
.miso_pad_i(SPI.miso)
);
//
// Clock and reset initial block:
//
initial begin
PCLK = 0;
PRESETn = 0;
repeat(8) begin
  #10ns PCLK = ~PCLK;
end
PRESETn = 1;
forever begin
  #10ns PCLK = ~PCLK;
end
end

//
// Put the virtual interface handles into the OVM top level configuration space
// and start the test. The string keys are assumed to match the names used by
// the test when it retrieves the handles (e.g. "APB_vif"):
//
initial begin
  ovm_container #(virtual apb_if)::set_value_in_global_config("APB_vif", APB);
  ovm_container #(virtual spi_if)::set_value_in_global_config("SPIvif", SPI);
  ovm_container #(virtual intr_if)::set_value_in_global_config("INTR_vif", INTR);
  run_test();
end

endmodule: top_tb
The Test
The next phase in the OVM construction process is the build phase. For the SPI block level example this means building
the spi_env component, having first created and prepared all of the configuration objects that are going to be used by the
environment. The configuration and build process is likely to be common to most test cases, so it is usually good practice
to create a test base class that can be extended to create specific tests.
In the SPI example, the configuration object for the spi_env contains handles for the SPI and APB configuration objects,
which allows the env configuration object to be used to pass all of the configuration objects to the env. The build
method in the spi_env is then responsible for passing on these sub-configurations. This "Russian Doll" approach to
nesting configurations is used since it scales to many levels of hierarchy.
Before the configuration objects for the agents are assigned to their handles in the env configuration object, they are
constructed, have their virtual interfaces assigned using the ovm_container get_value_from_config() method, and are
then configured. The APB agent may well be configured differently between test cases, so its configuration process has
been split out into a separate virtual method in the base class; this allows inheriting test classes to overload this method
and configure the APB agent differently.
The following code is for the spi_test_base class:
`ifndef SPI_TEST_BASE
`define SPI_TEST_BASE
//
// Class Description:
//
//
class spi_test_base extends ovm_test;

`ovm_component_utils(spi_test_base)

//------------------------------------------
// Data Members
//------------------------------------------

//------------------------------------------
// Component Members
//------------------------------------------
spi_env m_env;
// Configuration objects
spi_env_config m_env_cfg;
apb_agent_config m_apb_cfg;
spi_agent_config m_spi_cfg;
// Register map
spi_register_map spi_rm;

//------------------------------------------
// Methods
//------------------------------------------
extern function new(string name = "spi_test_base", ovm_component parent = null);
extern function void build();
extern virtual function void configure_apb_agent(apb_agent_config cfg);

endclass: spi_test_base
function spi_test_base::new(string name = "spi_test_base", ovm_component parent = null);
  super.new(name, parent);
endfunction
function void spi_test_base::build();
  m_env_cfg = spi_env_config::type_id::create("m_env_cfg");
  // Register map - Keep reg_map a generic name for vertical reuse reasons
  m_env_cfg.spi_rm = spi_rm;
  m_apb_cfg = apb_agent_config::type_id::create("m_apb_cfg");
  configure_apb_agent(m_apb_cfg);
  m_env_cfg.m_apb_agent_cfg = m_apb_cfg;
  m_spi_cfg = spi_agent_config::type_id::create("m_spi_cfg");
  m_spi_cfg.has_functional_coverage = 0;
  m_env_cfg.m_spi_agent_cfg = m_spi_cfg;
  register_adapter_base::type_id::set_inst_override(apb_register_adapter::get_type(), "spi_bus.adapter");
  // Set the completed env configuration into config space and build the env:
  set_config_object("*", "spi_env_config", m_env_cfg, 0);
  m_env = spi_env::type_id::create("m_env", this);
endfunction: build
//
// Convenience function to configure the apb agent
//
function void spi_test_base::configure_apb_agent(apb_agent_config cfg);
  cfg.active = OVM_ACTIVE;
cfg.has_functional_coverage = 0;
cfg.has_scoreboard = 0;
cfg.no_select_lines = 1;
cfg.start_address[0] = 32'h0;
cfg.range[0] = 32'h18;
endfunction: configure_apb_agent
`endif // SPI_TEST_BASE
To create a specific test case, the spi_test_base class is extended. This allows the test writer to take advantage of the
configuration and build process defined in the parent class, so that only a run method needs to be added. In the
following (simplistic and to be updated) example, the run method creates two sequences and starts them on the
sequencer in the APB agent. All of the configuration process is carried out by the super.build() call in the build
method.
`ifndef SPI_TEST
`define SPI_TEST
//
// Class Description:
//
//
class spi_test extends spi_test_base;

`ovm_component_utils(spi_test)

//------------------------------------------
// Methods
//------------------------------------------
extern task run;

endclass: spi_test
task spi_test::run;
check_reset_seq reset_test_seq = check_reset_seq::type_id::create("reset_test_seq");
send_spi_char_seq spi_char_seq = send_spi_char_seq::type_id::create("spi_char_seq");
reset_test_seq.start(m_env.m_apb_agent.m_sequencer);
spi_char_seq.start(m_env.m_apb_agent.m_sequencer);
#100ns;
global_stop_request();
endtask: run
`endif // SPI_TEST
The env
The next level in the SPI OVM environment is the spi_env. This class contains a number of sub-components, namely the
SPI and APB agents, a scoreboard, and a functional coverage monitor. Which of these sub-components gets built is
determined by variables in the spi_env configuration object.
In this case, the spi_env configuration object also contains a virtual interface and a method for detecting an interrupt.
This will be used by sequences running on the agent sequencers. The contents of the spi_env_config class are as follows:
`ifndef SPI_ENV_CONFIG
`define SPI_ENV_CONFIG
//
// Class Description:
//
//
class spi_env_config extends ovm_object;
//------------------------------------------
// Data Members
//------------------------------------------
// Interrupt virtual interface - used by the interrupt convenience methods:
virtual intr_if INTR;
// Whether env analysis components are used:
bit has_functional_coverage = 0;
bit has_spi_functional_coverage = 1;
bit has_reg_scoreboard = 0;
bit has_spi_scoreboard = 1;
// Whether the various agents are used:
bit has_apb_agent = 1;
bit has_spi_agent = 1;
// Configurations for the sub_components
apb_agent_config m_apb_agent_cfg;
spi_agent_config m_spi_agent_cfg;
// SPI Register model
ovm_register_map spi_rm;
//------------------------------------------
// Methods
//------------------------------------------
extern static function spi_env_config get_config( ovm_component c);
extern task wait_for_interrupt;
extern function bit is_interrupt_cleared;
// Standard OVM Methods:
extern function new(string name = "spi_env_config");
endclass: spi_env_config
//
// Function: get_config
//
// This method gets the my_config associated with component c. We check for
// the two kinds of error which may occur with this kind of
// operation.
//
function spi_env_config spi_env_config::get_config( ovm_component c );
  ovm_object o;
  spi_env_config t;

  // Check that the object is present and that it is of the correct type:
  if( !c.get_config_object( "spi_env_config", o, 0 ) )
    c.ovm_report_fatal("GET_CONFIG", "Cannot find spi_env_config in the configuration database");
  if( !$cast( t, o ) )
    c.ovm_report_fatal("GET_CONFIG", "Object found under spi_env_config is not of type spi_env_config");
  return t;
endfunction
// This task is a convenience method for sequences waiting for the interrupt
// signal
task spi_env_config::wait_for_interrupt;
@(posedge INTR.IRQ);
endtask: wait_for_interrupt
`endif // SPI_ENV_CONFIG
In this example, there are build configuration field bits for each sub-component. This gives the env the ultimate
flexibility for reuse.
During the spi_env's build phase, a handle to the spi_env_config is retrieved from the test's configuration table using
get_config(). Then the build process tests the various has_<sub_component> fields in the configuration object to
determine whether to build a sub-component. In the case of the APB and SPI agents, there is an additional step, which is
to unpack the configuration objects for each of the agents from the env's configuration object and then to set the
agent configuration objects in the env's configuration table after any local modification.
In the connect phase, the spi_env configuration object is again used to determine which TLM connections to make.
`ifndef SPI_ENV
`define SPI_ENV
//
// Class Description:
//
//
class spi_env extends ovm_env;

`ovm_component_utils(spi_env)

//------------------------------------------
// Data Members
//------------------------------------------
spi_env_config m_cfg;

//------------------------------------------
// Component Members
//------------------------------------------
apb_agent m_apb_agent;
spi_agent m_spi_agent;

//------------------------------------------
// Methods
//------------------------------------------

endclass:spi_env

function spi_env::new(string name = "spi_env", ovm_component parent = null);
  super.new(name, parent);
endfunction
`endif // SPI_ENV
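The build and connect methods follow the pattern described above. The following is a minimal sketch: the member and flag names follow the configuration object shown earlier, and the scoreboard and functional coverage monitor are built and connected in the same conditional way (their detail is omitted here):

// Sketch of the spi_env build and connect methods
function void spi_env::build();
  m_cfg = spi_env_config::get_config(this); // Get the env configuration object
  if(m_cfg.has_apb_agent) begin
    set_config_object("m_apb_agent*", "apb_agent_config", m_cfg.m_apb_agent_cfg, 0);
    m_apb_agent = apb_agent::type_id::create("m_apb_agent", this);
  end
  if(m_cfg.has_spi_agent) begin
    set_config_object("m_spi_agent*", "spi_agent_config", m_cfg.m_spi_agent_cfg, 0);
    m_spi_agent = spi_agent::type_id::create("m_spi_agent", this);
  end
endfunction: build

function void spi_env::connect();
  // TLM connections to the scoreboard and functional coverage monitor are made
  // here, guarded by the has_* fields in m_cfg
endfunction: connect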
The Agents
Since the OVM build process is top down, the SPI and APB agents are constructed next. The article on the agent build
process describes how the APB agent is configured and built, the SPI agent follows the same process.
The components within the agents are at the bottom of the test bench hierarchy, so the build process terminates there.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Testbench/IntegrationLevel
This test bench example is one that takes two block level verification environments and shows how they can be reused
at a higher level of integration. The principles that are illustrated in the example are applicable to repeated rounds of
vertical reuse.
The example takes the SPI block level example and integrates it with another block level verification environment for a
GPIO DUT. The hardware for the two blocks has been integrated into a Peripheral Sub-System (PSS) which uses an
AHB to APB bus bridge to interface with the APB interfaces on the SPI and GPIO blocks. The environments from the
block level are encapsulated by the pss_env, which also includes an AHB agent to drive the exposed AHB bus interface.
In this configuration, the block level APB bus interfaces are no longer exposed, and so the APB agents are put into
passive mode to monitor the APB traffic. The stimulus needs to drive the AHB interface, and register layering enables
reuse of block level stimulus at the integration level.
We shall now go through the test bench and the build process from the top down, starting with the top level test
bench module.
module top_tb;
import ovm_pkg::*;
import ovm_container_pkg::*;
import pss_test_lib_pkg::*;
//
logic HCLK;
logic HRESETn;
//
// Instantiate the interfaces:
//
apb_if APB(HCLK, HRESETn); // APB interface - shared between passive agents
ahb_if AHB(HCLK, HRESETn); // AHB interface
spi_if SPI(); // SPI Interface
intr_if INTR(); // Interrupt
gpio_if GPO();
gpio_if GPI();
gpio_if GPOE();
icpit_if ICPIT();
serial_if UART_RX();
serial_if UART_TX();
modem_if MODEM();
// Binder
binder probe();
// DUT Wrapper:
pss_wrapper wrapper(.ahb(AHB),
.spi(SPI),
.gpi(GPI),
.gpo(GPO),
.gpoe(GPOE),
.icpit(ICPIT),
.uart_rx(UART_RX),
.uart_tx(UART_TX),
.modem(MODEM));
//
// Clock and reset initial block:
//
initial begin
HCLK = 0;
HRESETn = 0;
repeat(8) begin
#10ns HCLK = ~HCLK;
end
HRESETn = 1;
forever begin
#10ns HCLK = ~HCLK;
end
end
// Clock assignments:
assign GPO.clk = HCLK;
assign GPOE.clk = HCLK;
assign GPI.clk = HCLK;
assign ICPIT.PCLK = HCLK;
endmodule: top_tb
The Test
Like the block level test, the integration level test should have the common build and configuration process captured in a
base class that subsequent test cases can inherit from. As can be seen from the example, there is more configuration to do
and so the need becomes more compelling.
The configuration object for the pss_env contains handles for the configuration objects for the spi_env and the gpio_env.
In turn, the sub-env configuration objects contain handles for their agent sub-component configuration objects. The
pss_env is responsible for unnesting the spi_env and gpio_env configuration objects and setting them in its configuration
table, making any local changes necessary. In turn the spi_env and the gpio_env put their agent configurations into their
configuration table.
The pss test base class is as follows:
`ifndef PSS_TEST_BASE
`define PSS_TEST_BASE
//
// Class Description:
//
//
class pss_test_base extends ovm_test;

`ovm_component_utils(pss_test_base)
//------------------------------------------
// Data Members
//------------------------------------------
//------------------------------------------
// Component Members
//------------------------------------------
pss_env m_env;
// Configuration objects
pss_env_config m_env_cfg;
spi_env_config m_spi_env_cfg;
gpio_env_config m_gpio_env_cfg;
//uart_env_config m_uart_env_cfg;
apb_agent_config m_spi_apb_agent_cfg;
apb_agent_config m_gpio_apb_agent_cfg;
ahb_agent_config m_ahb_agent_cfg;
spi_agent_config m_spi_agent_cfg;
gpio_agent_config m_GPO_agent_cfg;
gpio_agent_config m_GPI_agent_cfg;
gpio_agent_config m_GPOE_agent_cfg;
// Register map
pss_register_map pss_rm;
//------------------------------------------
// Methods
//------------------------------------------
extern function new(string name = "pss_test_base", ovm_component parent = null);
extern function void build();
extern virtual function void configure_apb_agent(apb_agent_config cfg, int index, logic[31:0] start_address, logic[31:0] range);
extern task run;

endclass: pss_test_base

function pss_test_base::new(string name = "pss_test_base", ovm_component parent = null);
  super.new(name, parent);
endfunction
function void pss_test_base::build();
  m_env_cfg = pss_env_config::type_id::create("m_env_cfg");
// Register map - Keep reg_map a generic name for vertical reuse reasons
m_env_cfg.pss_rm = pss_rm;
m_spi_env_cfg = spi_env_config::type_id::create("m_spi_env_cfg");
m_spi_env_cfg.spi_rm = pss_rm;
m_spi_env_cfg.has_apb_agent = 1;
m_spi_apb_agent_cfg = apb_agent_config::type_id::create("m_spi_apb_agent_cfg");
m_spi_env_cfg.m_apb_agent_cfg = m_spi_apb_agent_cfg;
// SPI agent:
m_spi_agent_cfg = spi_agent_config::type_id::create("m_spi_agent_cfg");
m_spi_env_cfg.m_spi_agent_cfg = m_spi_agent_cfg;
m_env_cfg.m_spi_env_cfg = m_spi_env_cfg;
m_gpio_env_cfg = gpio_env_config::type_id::create("m_gpio_env_cfg");
m_gpio_env_cfg.gpio_rm = pss_rm;
m_gpio_apb_agent_cfg = apb_agent_config::type_id::create("m_gpio_apb_agent_cfg");
m_gpio_env_cfg.m_apb_agent_cfg = m_gpio_apb_agent_cfg;
// GPO agent
m_GPO_agent_cfg = gpio_agent_config::type_id::create("m_GPO_agent_cfg");
m_gpio_env_cfg.m_GPO_agent_cfg = m_GPO_agent_cfg;
// GPOE agent
m_GPOE_agent_cfg = gpio_agent_config::type_id::create("m_GPOE_agent_cfg");
m_gpio_env_cfg.m_GPOE_agent_cfg = m_GPOE_agent_cfg;
m_GPI_agent_cfg = gpio_agent_config::type_id::create("m_GPI_agent_cfg");
m_gpio_env_cfg.m_GPI_agent_cfg = m_GPI_agent_cfg;
m_gpio_env_cfg.has_AUX_agent = 0;
m_gpio_env_cfg.has_functional_coverage = 1;
m_gpio_env_cfg.has_reg_scoreboard = 0;
m_gpio_env_cfg.has_out_scoreboard = 1;
m_gpio_env_cfg.has_in_scoreboard = 1;
m_env_cfg.m_gpio_env_cfg = m_gpio_env_cfg;
// AHB Agent
m_ahb_agent_cfg = ahb_agent_config::type_id::create("m_ahb_agent_cfg");
m_env_cfg.m_ahb_agent_cfg = m_ahb_agent_cfg;
register_adapter_base::type_id::set_inst_override(ahb_register_adapter::get_type(), "spi_bus.adapter");
register_adapter_base::type_id::set_inst_override(ahb_register_adapter::get_type(), "gpio_bus.adapter");
// Set the completed env configuration into config space and build the env:
set_config_object("*", "pss_env_config", m_env_cfg, 0);
m_env = pss_env::type_id::create("m_env", this);
endfunction: build
//
//
function void pss_test_base::configure_apb_agent(apb_agent_config cfg, int index, logic[31:0] start_address, logic[31:0] range);
cfg.active = OVM_PASSIVE;
cfg.has_functional_coverage = 0;
cfg.has_scoreboard = 0;
cfg.no_select_lines = 1;
cfg.apb_index = index;
cfg.start_address[0] = start_address;
cfg.range[0] = range;
endfunction: configure_apb_agent
task pss_test_base::run;
endtask: run
`endif // PSS_TEST_BASE
Again, a test case that extends this base class would populate its run method to define a virtual sequence to be run on
the virtual sequencer in the env. If there is non-default configuration to be done, then this can be achieved by
overloading the build method or any of the configuration methods.
`ifndef PSS_TEST
`define PSS_TEST
//
// Class Description:
//
//
class pss_test extends pss_test_base;

`ovm_component_utils(pss_test)

//------------------------------------------
// Methods
//------------------------------------------

endclass: pss_test

// Configuration is inherited via the base class build method:
function void pss_test::build();
  super.build();
endfunction: build
task pss_test::run;
  init_vseq(t_seq);
  repeat(10) begin
    t_seq.start(null);
  end
  global_stop_request();
endtask: run
`endif // PSS_TEST
`ifndef PSS_ENV
`define PSS_ENV
//
// Class Description:
//
//
class pss_env extends ovm_env;
//------------------------------------------
// Data Members
//------------------------------------------
pss_env_config m_cfg;
//------------------------------------------
// Sub Components
//------------------------------------------
spi_env m_spi_env;
gpio_env m_gpio_env;
ahb_agent m_ahb_agent;
//------------------------------------------
// Methods
//------------------------------------------
endclass: pss_env
`endif // PSS_ENV
Ovm/Component
An OVM testbench is built from component objects extended from the ovm_component base class. When an
ovm_component object is created, it becomes part of the testbench hierarchy which remains in place for the duration of
the simulation. This contrasts with the sequence branch of the OVM class hierarchy, where objects are transient - they
are created, used and then garbage collected when dereferenced.
The ovm_component static hierarchy is used by the reporting infrastructure for printing out the scope of the component
creating a report message, by the configuration process to determine which components can access a configuration
object, and by the factory for instance based factory overrides. This static hierarchy is represented by a linked list which
is built up as each component is created; a component's location in the hierarchy is determined by the name and parent
arguments passed to its create method.
For instance, in the code fragment below, an apb_agent component is being created within the spi_env, which in turn is
created inside a test as m_env. The hierarchical path to the agent will be "ovm_test_top.m_env.m_apb_agent" and any
references to it would need to use this string.
//
// Hierarchical name example
//
class spi_env extends ovm_env;

  apb_agent m_apb_agent;
  //....

  function void build();
    // ...
    //
    // The spi_env has a hierarchical path string "ovm_test_top.m_env"; this is concatenated
    // with the name string to arrive at "ovm_test_top.m_env.m_apb_agent" as the
    // hierarchical reference string for the apb_agent
    //
    m_apb_agent = apb_agent::type_id::create("m_apb_agent", this);
    // ...
  endfunction: build

  // ....
endclass: spi_env
The ovm_component class inherits from the ovm_report_object which contains the functionality required to support the
OVM messaging infrastructure. The reporting process uses the component static hierarchy to add the scope of a
component to the report message string.
The ovm_component base class template has a virtual method for each of the OVM phases and these are populated as
required, if a phase level virtual method is not implemented then the component does not participate in that phase.
Also built into the ovm_component base class is support for a configuration table which is used to store configuration
objects which are relevant to a components child nodes in the testbench hierarchy. Again, this static hierarchy is used as
part of the path mechanism to control which components are able to access a configuration object.
In order to provide flexibility in configuration and to allow the OVM testbench hierarchy to be built in an intelligent
way, ovm_components are registered with the OVM factory. When an OVM component is created during the build
phase, the factory is used to construct the component object. Using the factory allows a component to be swapped for
one of a derived type using a factory override; this can be a useful technique for changing the functionality of a
testbench without having to recompile. There are a number of coding conventions that are required for the
implementation to work and these are outlined in the article on the Factory.
The OVM package contains a number of extensions to the ovm_component for common testbench components. Most of
these extensions are very thin, i.e. they are literally just an extension of the ovm_component with a new name space,
which means that an ovm_component could be used in their stead. However, they can help with self-documentation
since they indicate what type of component the class represents. There are also analysis tools available which use these
base classes as clues to help them build up a picture of the testbench hierarchy. A number of the extended components
instantiate sub-components and are added value building blocks. The following table summarises the available
ovm_component derived classes.
Derived class   Purpose                                                                                                Instantiates sub-components?
ovm_driver      Adds sequence communication sub-components, used with the ovm_sequencer                               Yes
ovm_sequencer   Adds sequence communication sub-components, used with the ovm_driver                                  Yes
ovm_env         Container for the verification components surrounding a DUT, or other envs surrounding a (sub)system  No
Ovm/Agent
An OVM agent can be thought of as a verification component kit for a specific logical interface. The agent is developed
as a package that includes a SystemVerilog interface for connecting to the signal pins of a DUT, and a SystemVerilog
package that includes the classes that make up the overall agent component. The agent class itself is a top level
container class for a driver, a sequencer and a monitor, plus any other verification components such as functional
coverage monitors or scoreboards. The agent also has an analysis port which is connected to the analysis port on the
monitor, making it possible for a user to connect external analysis components to the agent without having to make a
connection to the monitor inside it.
(Figure: active OVM agent structure)
package apb_agent_pkg;

import ovm_pkg::*;
`include "ovm_macros.svh"
`include "apb_seq_item.svh"
`include "apb_agent_config.svh"
`include "apb_driver.svh"
`include "apb_coverage_monitor.svh"
`include "apb_monitor.svh"
`include "apb_sequencer.svh"
`include "apb_agent.svh"
// Utility Sequences
`include "apb_seq.svh"
endpackage: apb_agent_pkg
`ifndef APB_AGENT_CONFIG
`define APB_AGENT_CONFIG
//
// Class Description:
//
//
class apb_agent_config extends ovm_object;
// Virtual Interface
virtual apb_if APB;
//------------------------------------------
// Data Members
//------------------------------------------
// Is the agent active or passive
ovm_active_passive_enum active = OVM_ACTIVE;
// Include the APB functional coverage monitor
bit has_functional_coverage = 0;
// Include the APB RAM based scoreboard
bit has_scoreboard = 0;
//
// Address decode for the select lines:
int no_select_lines = 1;
logic[31:0] start_address[15:0];
logic[31:0] range[15:0];
//------------------------------------------
// Methods
//------------------------------------------
extern static function apb_agent_config get_config( ovm_component c );
// Standard OVM Methods:
extern function new(string name = "apb_agent_config");
endclass: apb_agent_config
//
// Function: get_config
//
// This method gets the my_config associated with component c. We check for
// the two kinds of error which may occur with this kind of
// operation.
//
function apb_agent_config apb_agent_config::get_config( ovm_component c );
  ovm_object o;
  apb_agent_config t;

  // Check that the object is present and that it is of the correct type:
  if( !c.get_config_object( "apb_agent_config", o, 0 ) )
    c.ovm_report_fatal("GET_CONFIG", "Cannot find apb_agent_config in the configuration database");
  if( !$cast( t, o ) )
    c.ovm_report_fatal("GET_CONFIG", "Object found under apb_agent_config is not of type apb_agent_config");
  return t;
endfunction
`endif // APB_AGENT_CONFIG
`ifndef APB_AGENT
`define APB_AGENT
//
// Class Description:
//
//
class apb_agent extends ovm_component;
//------------------------------------------
// Data Members
//------------------------------------------
apb_agent_config m_cfg;
//------------------------------------------
// Component Members
//------------------------------------------
ovm_analysis_port #(apb_seq_item) ap;
apb_monitor m_monitor;
apb_sequencer m_sequencer;
apb_driver m_driver;
apb_coverage_monitor m_fcov_monitor;
//------------------------------------------
// Methods
//------------------------------------------
extern function new(string name = "apb_agent", ovm_component parent = null);
extern function void build();
extern function void connect();

endclass: apb_agent

`endif // APB_AGENT
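The build and connect bodies are not shown above; they typically follow the pattern below. This is a minimal sketch which assumes that the monitor and driver each have an APB virtual interface member and that the coverage monitor has an analysis export; the real implementations may differ in detail:

// Sketch of the apb_agent build and connect methods
function void apb_agent::build();
  m_cfg = apb_agent_config::get_config(this);             // Agent configuration object
  m_monitor = apb_monitor::type_id::create("m_monitor", this);
  if(m_cfg.active == OVM_ACTIVE) begin                    // Driver and sequencer only when active
    m_driver = apb_driver::type_id::create("m_driver", this);
    m_sequencer = apb_sequencer::type_id::create("m_sequencer", this);
  end
  if(m_cfg.has_functional_coverage) begin
    m_fcov_monitor = apb_coverage_monitor::type_id::create("m_fcov_monitor", this);
  end
endfunction: build

function void apb_agent::connect();
  m_monitor.APB = m_cfg.APB;                              // Pass the virtual interface down
  ap = m_monitor.ap;                                      // Agent analysis port is the monitor's
  if(m_cfg.active == OVM_ACTIVE) begin
    m_driver.seq_item_port.connect(m_sequencer.seq_item_export);
    m_driver.APB = m_cfg.APB;
  end
  if(m_cfg.has_functional_coverage) begin
    m_monitor.ap.connect(m_fcov_monitor.analysis_export);
  end
endfunction: connect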
The build process for the APB agent can be followed in the block level testbench example:
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Phasing
In order to have a consistent execution flow, the OVM uses phases which are ordered to allow the testbench component
objects to stay in step as the testbench is built, configured and connected. Once the testbench hierarchy is available, the
simulation run phase is executed, after which the report phases occur. The defined phases allow OVM verification
components developed by different teams to be mixed freely, since it is clear what happens in each phase.
build
In an OVM testbench only the test or root node component is constructed directly using the new method. After that, the
rest of the testbench hierarchy is built top-down during the build phase. Construction is deferred so that the structure and
configuration of each level of the component hierarchy can be controlled by the level above. During the build method
components are indirectly constructed through a factory based creation process.
connect
Once the testbench component hierarchy has been put in place during the build method, the connect phase begins. The
connect phase works from the bottom up and is used to make TLM connections between components or to make
references to testbench resources.
end_of_elaboration
This phase can be used to make any final enhancements to the environment after it has been built and its inter-component
connections made.
start_of_simulation
The start_of_simulation phase occurs just before the run phase. It may be a convenient point in the OVM phases to print
banner information or testbench configuration status information.
extract
The extract phase is intended to be used to extract test results and statistics together with functional coverage information
from different components in the testbench such as scoreboards and functional coverage monitors.
check
During the check phase, data collected during the previous extract phase is checked and the overall result of the testbench
is calculated.
report
The report phase is the final phase and is used to report the results of the test case, this is either via messages to the
simulator transcript or by writing to files.
When the report phase has completed, the OVM testbench terminates and, by default, makes a $finish system call.
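As an illustration of how a component opts into these phases, it simply overrides the corresponding virtual methods of ovm_component. The following is a minimal sketch with a hypothetical component; phases that are not implemented are simply skipped for that component:

class my_component extends ovm_component;

  `ovm_component_utils(my_component)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();     // Top-down: construct sub-components via the factory
  endfunction: build

  function void connect();   // Bottom-up: make TLM connections
  endfunction: connect

  task run;                  // Time consuming test activity
    #100ns;
  endtask: run

endclass: my_component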
Phase               Order      Type      Description
run_test() -> new   -          function  Pseudo phase - used to construct the top level component, usually the test
build               Top-Down   function  Component hierarchy deferred construction and configuration phase
connect             Bottom-Up  function  Used to make TLM and other connections once components are in place
run                 Bottom-Up  task      Time consuming phase where the work of the test case is done
extract             Bottom-Up  function  Used for data extraction from analysis components
check               Bottom-Up  function  Used to make checks to ensure that the simulation has completed with no errors
report              Bottom-Up  function  Used to report the results of the simulation and any statistics collected
Deprecated Phases
There are a number of phases which are a legacy from the AVM, URM and earlier versions of the OVM. These phases
are still supported in the current version of the OVM, but may disappear in future versions and will not be supported in
the UVM. Users should avoid using the deprecated phases, and they should consider porting any verification components
using them to align with the supported set of OVM phases.
The deprecated phases and their recommended replacement phases are shown in the following table:
Deprecated phase     Replacement phase
post_new             build
export_connections   connect
import_connections   connect
pre_run              start_of_simulation
configure            end_of_elaboration
Ovm/Factory
The OVM Factory
The purpose of the OVM factory is to allow an object of one type to be substituted with an object of a derived type
without having to change the structure of the testbench or edit the testbench code. The mechanism used is referred to as
an override, and the override can be by instance or type. This functionality is useful for changing sequence functionality
or for changing one version of a component for another. Any components which are to be swapped must be
polymorphically compatible. This includes having all of the same TLM interface handles, and any TLM objects must be
created by the new replacement component. Additionally, in order to take advantage of the factory, certain coding
conventions need to be followed.
class my_component extends ovm_component;

// Wrapper class around the component class that is used within the factory
typedef ovm_component_registry #(my_component, "my_component") type_id;
...
endclass: my_component
The registration code has a regular pattern and can be safely generated with one of a set of four factory registration
macros:
// For a component:
`ovm_component_utils(my_component)

// For a parameterised component:
`ovm_component_param_utils(this_t)

// For an object, i.e. a class derived from ovm_object, ovm_transaction,
// ovm_sequence_item or ovm_sequence:
class my_item extends ovm_sequence_item;
`ovm_object_utils(my_item)

// For a parameterised object:
`ovm_object_param_utils(this_t)
// Component handles - the objects are constructed via <type>::type_id::create():
my_component m_my_component;
my_param_component #(.ADDR_WIDTH(32), .DATA_WIDTH(32)) m_my_p_component;

// Sequence handles, created via the factory at run time:
task run;
  my_seq test_seq = my_seq::type_id::create("test_seq");
  my_param_seq #(.ADDR_WIDTH(32), .DATA_WIDTH(32)) p_test_seq = my_param_seq #(.ADDR_WIDTH(32), .DATA_WIDTH(32))::type_id::create("p_test_seq");
endtask: run
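Putting the two together, components are created via the factory during the build phase, passing a name string and a parent handle. The sketch below assumes the component handles declared above belong to an enclosing env:

// Sketch of factory based component creation in a build method
function void my_env::build();
  m_my_component = my_component::type_id::create("m_my_component", this);
  m_my_p_component = my_param_component #(.ADDR_WIDTH(32), .DATA_WIDTH(32))::type_id::create("m_my_p_component", this);
endfunction: build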
Ovm/UsingFactoryOverrides
The OVM factory allows a class to be substituted with another class of a derived type when it is constructed. This can
be useful for changing the behaviour of a testbench by substituting one class for another without having to edit or
re-compile the testbench code. In order for the factory override process to work, there are a number of coding
convention pre-requisites that need to be followed; these are explained in the article on the OVM factory.
The OVM factory can be thought of as a lookup table. When "normal" component construction takes place using the
<type>::type_id::create("<name>", <parent>) approach, what happens is that the type_id is used to pick the factory
component wrapper for the class, construct its contents and pass the resultant handle back again. A factory override
changes the way in which the lookup happens, so that looking up the original type_id results in a different type_id being
used, and consequently a handle to a different type of constructed object being returned. This technique relies on
polymorphism, in other words the ability to refer to derived types using a base type handle. In practice, an override will
only work when a parent class is overridden by one of its descendants in the class extension hierarchy.
Component Overrides
There are two types of component overrides in the OVM - type overrides and instance overrides.
class colour extends ovm_component;
`ovm_component_utils(colour)
// etc
endclass: colour

class red extends colour;
`ovm_component_utils(red)
//etc
endclass: red

//
// A type override replaces all factory creations of colour with red:
//
// colour::type_id::set_type_override(red::get_type());
//
// This means that the following creation line returns a red, rather than a colour
pixel = colour::type_id::create("pixel", this);
Parameterised component classes can also be overridden, but care must be taken to ensure that the overriding class has
the same parameter values as the class that is being overridden, otherwise they are not considered to be of related types:

class bus_driver #(int BUS_WIDTH = 32) extends ovm_component;
`ovm_component_param_utils(bus_driver #(BUS_WIDTH))
// etc
endclass: bus_driver

class bus_conductor #(int BUS_WIDTH = 32) extends bus_driver #(BUS_WIDTH);
`ovm_component_param_utils(bus_conductor #(BUS_WIDTH))
// etc
endclass: bus_conductor

// An override of bus_driver #(32) by bus_conductor #(32) only applies to #(32) instances.
// Creating a #(16) bus_driver results in a #(16) bus_driver handle being returned, because
// the parameter values do not match the override.
// Similarly, if a type override has non-matching parameters, then it will fail and return
// the original type
// --------------------------------------------
// --------------------------------------------
//
//
// <original_type>::type_id::set_inst_override(<substitute_type>::get_type(), <path_string>);
//
colour::type_id::set_inst_override(red::get_type(), "top.env.raster.spot");
// And again for a parameterised type, the parameter values must match
Object Overrides
Objects or sequence related objects are generally only used with type overrides since the instance override approach
relates to a position in the OVM testbench component hierarchy which objects do not take part in. However there is a
coding trick which can be used to override specific "instances" of an object and this is explained in the article on
overriding sequences.
The code for an object override follows the same form as the component override.
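As a sketch, a sequence_item type override looks like this (bus_seq_item and its error_seq_item extension are hypothetical classes, assumed to be registered with `ovm_object_utils):

// Replace all factory creations of bus_seq_item with error_seq_item:
bus_seq_item::type_id::set_type_override(error_seq_item::get_type());

// Subsequent factory creation inside a sequence now returns an error_seq_item:
// req = bus_seq_item::type_id::create("req");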
Ovm/SystemVerilogPackages
A package is a SystemVerilog language construct that enables related declarations and definitions to be grouped together
in a package namespace. A package might contain type definitions, constant declarations, functions and class templates.
To use a package within a scope, it must be imported, after which its contents can be referenced.
The package is a useful means to organise code and to make sure that references to types, classes etc are consistent. The
OVM base class library is contained within one package called the "ovm_pkg". When developing OVM testbenches
packages should be used to collect together and organise the various class definitions that are developed to implement
agents, envs, sequence libraries, test libraries etc.
Imports from other packages should be declared at the head of the package
A package's content may need to refer to the contents of another package; in this case the external packages should be
imported at the start of the package code body. Individual files, such as class templates, that may be included should not
do separate imports.
Justification: Grouping all the imports in one place makes it clear what the dependencies of the package are. Placing
imports in other parts of the package or inside included files can cause ordering problems and potential type clashes.
All the files used by a package should be collected together in one directory
All of the files to be included in a package should be collected together in a single directory. This is particularly
important for agents where the agent directory structure needs to be a complete stand-alone package.
Justification: This makes compilation easier since there is only one include directory, it also helps with reuse since all
the files for a package can be collected together easily.
Below is an example of a package file for an OVM env. This env contains two agents (spi and apb) and a register model
and these are imported as sub-packages. The class templates relevant to the env are `included:
// Note that this code is contained in a file called spi_env_pkg.sv
//
// In Questa it would be compiled using:
// vlog +incdir+$OVM_HOME/src+<path_to_spi_env> <path_to_spi_env>/spi_env_pkg.sv
//
//
// Package Description:
//
package spi_env_pkg;

// Standard OVM import and macro include:
import ovm_pkg::*;
`include "ovm_macros.svh"

// Sub-package imports - the agents used by the env:
import apb_agent_pkg::*;
import spi_agent_pkg::*;
// The register model sub-package is also imported here

// Includes:
`include "spi_env_config.svh"
`include "spi_virtual_sequencer.svh"
`include "spi_env.svh"
endpackage: spi_env_pkg
Package Scopes
Something that often confuses users is that the SystemVerilog package is a scope. This means that everything declared
within a package, and the contents of other packages imported into a package are only visible within the scope of a
package. If a package is imported into another scope (i.e. another package or a module) then only the contents of the
package are visible and not the contents of any packages that it imported. If the content of these other packages are
needed in the new scope, then they need to be imported separately.
//
// Package Scope Example
// ----------------------------------------------------
//
package spi_test_pkg;

import ovm_pkg::*;
// The env package must be imported here even though it is imported by other
// packages - package imports are not visible across package boundaries:
import spi_env_pkg::*;
// Other imports
// Other `includes
`include "spi_test_base.svh"
endpackage: spi_test_pkg
Ovm/Connections
Learn about DUT Interface Connections, techniques for hookup and reuse
Topic Overview
Introduction
The Device Under Test (DUT) is typically a Verilog module or a VHDL entity/architecture while the testbench is
composed of SystemVerilog class objects.
There are a number of factors to consider in DUT - Testbench (TB) connection and communication: module instance to
class object communication mechanisms, configuration of the DUT, reuse, emulation, black box/white box testing and so
forth. There are quite a number of different approaches and solutions for managing the different pieces of this puzzle.
The challenge is to manage it in a way that addresses all these different factors.
DUT-TB Communication
The DUT and testbench belong to two different SystemVerilog instance worlds. The DUT belongs to the static instance
world while the testbench belongs to the dynamic instance world. Because of this, the DUT's ports cannot be connected
directly to the testbench class objects, so a different SystemVerilog means of communication - the virtual
interface - is used.
The DUT's ports are connected to an instance of an interface. The Testbench communicates with the DUT through the
interface instance. Using a virtual interface as a reference or handle to the interface instance, the testbench can access the
tasks, functions, ports, and internal variables of the SystemVerilog interface. As the interface instance is connected to the
DUT pins, the testbench can monitor and control the DUT pins indirectly through the interface elements.
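As a sketch of this indirection, a driver holds a virtual interface handle which is assigned from its configuration object, and then accesses the DUT pins through it (this uses the APB agent of this cookbook as an illustration, with the protocol detail elided):

class apb_driver extends ovm_driver #(apb_seq_item);

  `ovm_component_utils(apb_driver)

  virtual apb_if APB; // Handle to the apb_if instance wired to the DUT pins

  task run;
    forever begin
      @(posedge APB.PCLK);   // Pin level access via the virtual interface
      // ... drive APB.PADDR, APB.PWDATA etc. here ...
    end
  endtask: run

endclass: apb_driver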
Sometimes a virtual interface approach cannot be used, in which case there is an alternative approach to DUT-TB
communication, referred to as the abstract/concrete class approach, that may be used. However, as long as it can be
used, the virtual interface is the preferred and recommended approach.
Regardless of which approach is used, instance information must be passed from the DUT to the testbench.
When using virtual interfaces the location of the interface instance is supplied to the testbench so its virtual interface
properties may be set to point to the interface instance. The recommended approach for passing this information to the
testbench is to use either the configuration database using ovm_container or to use a package.
The test class in the testbench receives the information on the location of the interface instance. After receiving this
information it supplies this information to the agent transactors that actually need the information. The test class does this
by placing the information in a configuration object which is provided to the appropriate agent.
More detailed discussion and examples of passing virtual interface information to the testbench from the DUT and on
setting virtual interfaces for DUT-TB communication is in the article on virtual interfaces.
DUT-TB Configuration
Parameterized Tests
Another approach to passing parameters into the testbench is to parameterize the top level class in the testbench which is
typically the test. There are a number of issues with parameterized tests that are discussed along with solutions. Note:
this article is not available to publish at this time so a link is not made.
Encapsulation
A typical DUT-TB setup has a top level SystemVerilog module that is a container for both the testbench and the DUT
with its associated connection and support logic (such as clock generation). This style of setup is referred to as a single top.
The top level module can become messy, complicated and hard to manage. When this occurs it is recommended to group
items by encapsulating them inside wrapper modules. Encapsulation also provides modularity for swapping and for
reuse. Several different levels of encapsulation may be considered and are discussed below.
Dual Top
A level of encapsulation where two top modules are used is called dual top. One of the top modules is a DUT wrapper
module that includes the DUT, interfaces, protocol modules, clock generation logic, DUT wires, registers etc. The other
top module is a wrapper module which creates the testbench. When emulation is a consideration Dual top is a necessity.
The DUT wrapper is the stuff that goes in the emulator. The testbench wrapper module stays running in the simulator. If
the testbench is only going to be used in simulation dual top is not necessary but may however still provide a useful level
of encapsulation for modularity, reuse etc.
The passing of information from the DUT to the testbench is the same as described earlier. A more detailed explanation
and example is in the article Dual Top.
Protocol Modules
When emulation is a consideration another level of encapsulation called protocol modules is necessary to isolate the
changes that occur in the agent and interface in moving between simulation and emulation. Protocol modules are wrapper
modules that encapsulate a DUT interface, associated assertions, QVL instances (which are not allowed inside an
interface), and so forth.
If the testbench is only going to be used in simulation protocol modules are not necessary. They may however still
provide a useful level of encapsulation for modularity, reuse etc.
Blackbox testing
Blackbox testing of the DUT is a method of testing that tests the functionality of the DUT at the interface or pins of the
DUT without specific knowledge of or access to the DUT's internal structure. The writer of the test selects valid and
invalid inputs and determines the correct response. Black box access to the DUT is provided typically by a virtual
interface connection to an interface instance connected to the pins of the DUT.
Whitebox testing
Whitebox testing of the DUT is a method of testing that tests the internal workings of the DUT. Access to the DUT's
internal structure is required. Providing this access affects the structure of the DUT-TB communication and must be
taken into account if white box testing is a requirement.
Ovm/SVCreationOrder
SystemVerilog Instance Worlds
When generating an OVM testbench and in particular the DUT - testbench communication it is helpful to understand the
differences between the two different "instance worlds" of SystemVerilog and the order in which things are created.
Order of Creation
The components of the two instance worlds are created in this order:
During Elaboration:
1. Component instances of the static world
2. Static methods and static properties of classes
During run-time:
1. Class instances
Connect/SystemVerilogTechniques
Introduction and Recommendations
SystemVerilog provides in general four different means of communication or connection between instances: ports,
pointers, Verilog hierarchical paths, and shared variables. For class based testbenches ports may not be used.
Hierarchical paths are not recommended. Pointers are the common means used. Shared variables may be used in limited
areas.
Ports
Ports are connections between members of the Static Instance World such as module and interface instances. Therefore
they may not be used in classes which are part of the Dynamic Instance World.
UVM provides a notion of ports such as uvm_tlm_put_port etc. These are not SystemVerilog ports but rather are
wrapper classes around pointers. Hence a UVM TLM port is a pointer based communication scheme dressed up like
ports to look familiar to Verilog and VHDL engineers.
Handles
A class handle is what points to a class object (instance). It is called a handle to differentiate from a pointer. A handle is
what is considered a safe-pointer because of the restrictive rules of use compared to pointers in other languages such as
C.
A virtual interface is a variable that represents an interface instance. It may be thought of as a handle to an interface
instance.
Shared Variables
Shared variables are sometimes referred to as global variables although generally speaking they are not truly global in
scope. A shared variable is a variable declared in a scope that may be referenced by other scopes. In shared variable
behavior, the variable may be read and or written in these other scopes. The two most common examples of shared
variables used in testbenches are variables declared in packages and static property declarations of classes.
In packages a variable may be declared such as an int or virtual interface. These variables may be referenced (i.e. both
read and written) within other scopes such as classes or modules either by a fully resolved name (
package_name::variable_name ) or by an import.
Static property declarations of classes may be referenced by a fully resolved name (class_name::static_property_name).
Often a static method of a class may be provided for accessing the static property.
It is recommended that shared variables only be used for initialization or status type communication where there is a
single writer.
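A common example is a package variable used to hand a virtual interface from the top level module to the testbench. The sketch below uses hypothetical names:

package tb_vif_pkg;
  // Written once by the top level module, read by the test during build:
  virtual apb_if apb_vif;
endpackage: tb_vif_pkg

// In the top level module:   tb_vif_pkg::apb_vif = APB;
// In the test build method:  m_apb_cfg.APB = tb_vif_pkg::apb_vif;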
Ovm/ParameterizedTests
Introduction
When configuring a test environment, there are two situations where SystemVerilog parameters are the only option
available - type parameters and parameters used to specify bit vector sizes. Due to the nature of SystemVerilog
parameters, the latest time that these values can be set is at elaboration time, which is usually at the point where you
invoke the simulator (See regression test performance below).
DVCon Paper
The information in this article was also presented at DVCon 2011 with Xilinx. The DVCon paper is available for
download (Parameters And OVM - Can't They Just Get Along? [1]). The material in this article is a result of a
collaboration between Mentor and Xilinx.
Parameterized classes use the `ovm_component_param_utils and `ovm_object_param_utils macros to register with the
factory. There are actually two factories, however - one string-based and one type-based. The param_utils macros only
register with the type-based factory.
Occasionally, you might want to use the string-based factory to create a component or object. The most common case
where the string-based factory is used is during the call to run_test(). run_test() uses either its string argument or the
string value from the OVM_TESTNAME plusarg to request a component from the string-based factory.
Since a parameterized component does not register with the string-based factory by default, you will need to create a
string-based registration for your top-level test classes so that they can be instantiated by run_test().
To accomplish this, you need to manually implement the actions that the param_utils macro performs.
For example, given a parameterized test class named alu_basic_test #(DATA_WIDTH), the macro call
`ovm_component_param_utils(alu_basic_test #(DATA_WIDTH)) would expand to:
typedef ovm_component_registry #(alu_basic_test #(DATA_WIDTH)) type_id;
The typedef in the code above creates a specialization of the ovm_component_registry type, but that type takes two
parameter arguments - the first is the type being registered (alu_basic_test #(DATA_WIDTH) in this case) with the
type-based factory, and the second is the string name that will be used to uniquely identify that type in the string-based
registry. Since the param_utils macro does not provide a value for the second parameter, it defaults to the null string and
no string-based registration is performed.
To create a string-based registration, you need to provide a string for the second parameter argument that will be unique
for each specialization of the test class. You can rewrite the typedef to look like:
typedef ovm_component_registry #(alu_basic_test #(DATA_WIDTH), "basic_test1") type_id;
In addition, you would need to declare a "dummy" specialization of the parameterized test class so that the string name
specified above is tied to the particular parameter values.
module testbench #(parameter int DATA_WIDTH = 8); // default value shown for illustration
  initial begin
    run_test("basic_test1");
  end
endmodule
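For example, a minimal sketch (the string names and widths are illustrative) of explicit string-based registrations for two specializations of the test:
// One string-based registration per parameter specialization of interest
typedef ovm_component_registry #(alu_basic_test #(8),  "alu_basic_test_8")  alu_basic_test_8_id;
typedef ovm_component_registry #(alu_basic_test #(16), "alu_basic_test_16") alu_basic_test_16_id;
// run_test("alu_basic_test_8") can now find the 8-bit specialization by name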
Note: instead of a name like "basic_test1", you could use the macro described below to generate a string name like
"basic_test_#(8)" with the actual parameter values as part of the string.
In order to increase simulation performance, QuestaSim performs some elaboration tasks, including specifying top-level
parameters, in a separate optimization step via the vopt tool. This tool takes top-level parameters and "bakes" them into
the design in order to take full advantage of the design structure for optimization.
Unfortunately, this means that if you want to change the values of these parameters, it requires a re-optimization of the
entire design before invocation of the simulator. This could have a significant impact on regression test performance
where many test runs are made with different parameter values. To avoid this, you can tell vopt that certain parameters
should be considered "floating", and will be specified later, by using the command-line options +floatparameters (for
Verilog) and +floatgenerics (for VHDL).
Once the parameters have been specified as floating, you can use the -g option in vsim to set the parameter value.
Subsequent changes to the parameter value only require re-invocation of vsim with a new value and do not require a
re-optimization of the design.
The trade-off, however, is that these parameter values will not be used to help optimize the design for the best run-time
performance. So, it is recommended that you use this technique sparingly, only when necessary (e.g. when the time cost
of optimization is a measurable percentage of the total simulation run time). If necessary, you can separately
pre-optimize the design with several parameter values, then select the optimization to use at run time.
These macros are in keeping with the reuse philosophy of minimizing areas of change. By using the macros, there is one
well-defined place to make changes to parameter lists.
Ovm/Connect/Virtual Interface
Virtual Interfaces
A virtual interface is a dynamic variable that contains a reference to a static interface instance. For all intents and
purposes, it can be thought of as a handle or reference to a SystemVerilog interface instance. Note that the term
"virtual" here is not used in the same sense as it is conventionally used in object-oriented programming; rather, it is simply
what the IEEE 1800 committee chose to call these references.
An example DUT (WISHBONE bus slave memory in diagram) has the following ports:
module wb_slave_mem #(parameter mem_size = 13)
(clk, rst, adr, din, dout, cyc, stb, sel, we, ack, err, rty);
...
endmodule
In the WISHBONE bus environment there are a number of parameters that are shared between the DUT and the
testbench. They are defined in a test parameters package (test_params_pkg) shown below. Of interest here are the
mem_slave_size and mem_slave_wb_id parameters. The mem_slave_size parameter is used to set the size of the slave
memory device. The WISHBONE bus has both masters and slaves, each having master and slave ids respectively. The
mem_slave_wb_id parameter is used to set the WISHBONE slave id of the slave memory.
package test_params_pkg;
import ovm_pkg::*;
endpackage
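The package contents are not shown above; a minimal sketch of what it might contain is below, where the values are assumptions chosen to match the 1 Mbyte slave memory and slave slot 0 used in this example:
package test_params_pkg;
  import ovm_pkg::*;
  // Shared DUT/testbench parameters (values are illustrative)
  parameter int mem_slave_size  = 18; // 2**18 words of 32 bits -> 1 Mbyte slave memory
  parameter int mem_slave_wb_id = 0;  // WISHBONE slave slot used by the slave memory
endpackage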
A WISHBONE bus interconnect interface to connect to this DUT is below. This interconnect supports up to 8 masters
and 8 slaves. Not shown here are the arbitration, clock, reset and slave decode logic; only the interconnect
variables are shown. A link to the full source is further down in this article.
// Wishbone bus system interconnect (syscon)
// for multiple master, multiple slave bus
// max 8 masters and 8 slaves
interface wishbone_bus_syscon_if #(int num_masters = 8, int num_slaves = 8,
int data_width = 32, int addr_width = 32) ();
To connect the interface to the DUT a hierarchical connection from the pins of the DUT to the variables in the interfaces
is made as shown below. Note that the mem_slave_wb_id parameter from the test_params_pkg is used as a slave "slot
id" to connect the slave memory to the correct signals in the interface.
module top_mac;
import ovm_pkg::*;
import tests_pkg::*;
import ovm_container_pkg::*;
import test_params_pkg::*;
//-----------------------------------
// WISHBONE 0, slave 0: 000000 - 0fffff
// this is 1 Mbytes of memory
wb_slave_mem #(mem_slave_size) wb_s_0 (
// inputs
.clk ( wb_bus_if.clk ),
.rst ( wb_bus_if.rst ),
.adr ( wb_bus_if.s_addr ),
.din ( wb_bus_if.s_wdata ),
.cyc ( wb_bus_if.s_cyc ),
.stb ( wb_bus_if.s_stb[mem_slave_wb_id] ),
.sel ( wb_bus_if.s_sel[3:0] ),
.we ( wb_bus_if.s_we ),
// outputs
.dout( wb_bus_if.s_rdata[mem_slave_wb_id] ),
.ack ( wb_bus_if.s_ack[mem_slave_wb_id] ),
.err ( wb_bus_if.s_err[mem_slave_wb_id] ),
.rty ( wb_bus_if.s_rty[mem_slave_wb_id] )
);
...
endmodule
In the testbench access to the DUT is typically required in transactors such as drivers and monitors that reside in an
agent. Assume in the code example below that the virtual interface property m_v_wb_bus_if points to the instance of the
wishbone_bus_syscon_if connected to the DUT (the next section discusses setting the virtual interface property). Then in
a WISHBONE bus driver the code might look like this. Note the use of the virtual interface property to access the
interface variables:
class wb_m_bus_driver extends ovm_driver #(wb_txn, wb_txn);
...
ovm_analysis_port #(wb_txn) wb_drv_ap;
virtual wishbone_bus_syscon_if m_v_wb_bus_if; // Virtual Interface
bit [2:0] m_id; // Wishbone bus master ID
wb_config m_config;
...
m_v_wb_bus_if.m_cyc[m_id] = 1;
m_v_wb_bus_if.m_stb[m_id] = 1;
@ (posedge m_v_wb_bus_if.clk)
while (!(m_v_wb_bus_if.m_ack[m_id] & m_v_wb_bus_if.gnt[m_id])) @ (posedge m_v_wb_bus_if.clk);
req_txn.adr = req_txn.adr + 4; // byte address so increment by 4 for word addr
end
`ovm_info($sformatf("WB_M_DRVR_%0d",m_id),
$sformatf("req_txn: %s",orig_req_txn.convert2string()),
351 )
wb_drv_ap.write(orig_req_txn); //broadcast original transaction
m_v_wb_bus_if.m_cyc[m_id] = 0;
m_v_wb_bus_if.m_stb[m_id] = 0;
endtask
...
endclass
The questions may be asked: "Why not have the agents get the connection information directly from the DUT? Why
have it distributed by the test? Doesn't that seem more complicated and extra work?"
Having the agents get the information directly effectively hard codes information about the DUT in the agents or
transactors and reduces scalability and reuse. If a change is made in the DUT configuration it is likely that a change
would be required in the agent. One may think of the DUT connection and configuration information as a "pool" of
information provided by the DUT to the testbench. In the recommended approach the test class gets information out of
this pool and distributes it to the correct agents. If the information pool changes then appropriate changes are made in
one location - the test. The agents are not affected because they get their information in the same manner - from the test.
If instead the agents each get information directly from the pool they need to know which information to fetch. If the
information pool changes then changes would need to be made in the agents.
There are two approaches to passing the location of the interface instance to the test class. The recommended approach
is the first listed here, which is to use ovm_container.
endmodule
In the test class the appropriate virtual interface is extracted and assigned to the appropriate wishbone configuration
object. Wishbone environment 0 is then connected to wishbone bus wrapper 0.
class test_mac_simple_duplex extends ovm_test;
...
...
wb_config_0 = new();
wb_config_0.m_wb_id = 0; // WISHBONE 0
// Get the virtual interface handle that was set in the top module or protocol module
wb_config_0.m_v_wb_bus_bfm_if =
...
wb_config_1 = new();
wb_config_1.m_wb_id = 1; // WISHBONE 1
wb_config_1.m_v_wb_bus_bfm_if =
...
endfunction
...
endclass
// DUT instance
alu_rtl alu (
.val1(a_if.val1),
.val2(a_if.val2),
.mode(a_if.mode),
.clk(a_if.clk),
.valid_i(a_if.valid_i),
.valid_o(a_if.valid_o),
.result(a_if.result)
);
initial begin
// Each virtual interface must have a unique name, so use $sformatf
ovm_container #(virtual alu_if)::set_value_in_global_config($sformatf("ALU_IF_%0d",i),a_if);
end
end
Ovm/VirtInterfaceConfigContainer
Setting Virtual Interface Properties in the Testbench with the Configuration Database using
ovm_container
This is the recommended approach for assigning the actual interface reference to the virtual interface handles inside the
testbench. In general, this approach has three steps.
1. Use ovm_container as a means to put a virtual interface, that points to a specific interface instance, into the
configuration database.
2. The test class fetches the virtual interface from the configuration database and places it in a configuration object that
is made available for the particular components (agents, drivers, monitors etc.) that communicate with the DUT
through that particular interface.
3. The component that actually accesses the DUT via the virtual interface sets its virtual interface property from the
virtual interface in the supplied configuration object.
There is a discussion here as to why one would take the approach of the test class fetching and distributing the
information to the agents and transactors instead of having the transactors or agents fetch the data directly.
This approach supports scalability and reuse:
• Since the transactor receives the interface instance information from the configuration object it is not affected by
changes in the DUT configuration.
• If you are using emulation, this method works with protocol modules in the "Dual Top" methodology.
module top_mac;
...
wishbone_bus_syscon_if wb_bus_if();
...
initial begin
end
endmodule
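The initial block above would contain the ovm_container call that puts the interface instance into the global configuration space; a minimal sketch (the string key "WB_BUS_IF_0" is illustrative):
initial begin
  // Wrap the wb_bus_if instance and place it into the global configuration space
  ovm_container #(virtual wishbone_bus_syscon_if)::set_value_in_global_config(
    "WB_BUS_IF_0", wb_bus_if);
end
The test class then fetches the value back out of the configuration database using the container's static get method and assigns it to the configuration object member, as sketched in the test class fragment below.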
...
...
wb_config_0 = new();
wb_config_0.v_wb_bus_if =
endfunction
super.build();
set_wishbone_config_params();
...
endfunction
...
endclass
Ovm/Connect/VirtInterfacePackage
Setting Virtual Interface Properties in the Testbench with Packages
An easy way of assigning the actual interface reference to the virtual interface handles inside the testbench is by creating
a virtual interface variable in a package. This method has the advantage of simplicity. Because of its disadvantages,
however, this approach should only be considered for relatively simple designs that do not have parameterized interfaces
or multiple instances of an interface; it is not recommended for general use.
It has the following disadvantages that limit reusability:
• Parameterized interfaces cannot be declared in the package with generic parameter values - they must use actual
values. Any changes to parameter values would then force a recompilation.
• It introduces an additional dependency on an external variable. So, for example, any change to the virtual interface
name would require changes in any components that referenced the variable.
endpackage
In the top level module, just assign the actual interface instance to the package variable:
module top_mac;
...
// WISHBONE interface instance
wishbone_bus_syscon_if wb_bus_if();
...
initial begin
//set virtual interface to wishbone bus
wishbone_pkg::v_wb_bus_if = wb_bus_if;
...
end
endmodule
Any component that uses the virtual interface should create a local handle and assign the package variable to the local
handle in the connect() method.
// wishbone master driver
class wb_m_bus_driver extends ovm_driver #(wb_txn, wb_txn);
...
endclass
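A minimal sketch of the relevant part of such a driver is shown below (only the virtual interface handling is shown; the rest of the driver is omitted):
class wb_m_bus_driver extends ovm_driver #(wb_txn, wb_txn);
  // Local virtual interface handle used by the rest of the driver code
  virtual wishbone_bus_syscon_if m_v_wb_bus_if;
  ...
  function void connect();
    // Assign the package variable to the local handle
    m_v_wb_bus_if = wishbone_pkg::v_wb_bus_if;
  endfunction
endclass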
Strictly speaking, the use of a local virtual interface handle is not necessary, since the package variable is visible, but this
step makes the code more reusable. For example, if the package variable name changes, there is only one line in the
driver that would need to change.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Connect/VirtInterfaceConfigPkg
Setting Virtual Interface Properties in the Testbench using a Package
An alternative to using ovm_container to provide virtual interface information to the test class is to use a package. The
recommended approach is to use the test parameters package. An alternate approach, which is not recommended but is
in industry use, is to use the agent package. This article will focus on the recommended approach.
...
endpackage
initial begin
//set WISHBONE virtual interface in test_params_pkg
v_wb_bus_if = wb_bus_if;
endmodule
// Set WISHBONE bus virtual interface in config obj to virtual interface in test_params_pkg
wb_config_0.v_wb_bus_if = v_wb_bus_if;
...
Ovm/Connect/TwoKingdomsFactory
Abstract/base Class
The Abstract/base class is defined as part of the agent in the testbench. In this example it is a base class driver and
includes ports for connection to the rest of the wishbone bus agent.
Wrapper module
A wrapper module is created that includes instances of the BFM and the DUT. The concrete class is defined inside the
module so its scope will be the wrapper module.
An instance of the concrete class is created inside of the wrapper module. A handle to this instance is placed inside the
configuration database using ovm_container.
...
endtask
endmodule
Below is the code for the WISHBONE bus wrapper module. Note the instances of the BFM
(wishbone_bus_syscon_bfm), the slave memory (wb_slave_mem) and the Ethernet MAC (eth_top). The MAC chip also
has a Media Independent Interface (MII) besides the WISHBONE interface that is not shown or discussed. There are
actually two concrete classes defined in this bus wrapper - a driver and a monitor but only the driver
(wb_bus_bfm_driver_c) is shown and discussed.
module wb_bus_wrapper #(int WB_ID = 0);
...
// WISHBONE BFM instance
// Supports up to 8 masters and up to 8 slaves
wishbone_bus_syscon_bfm wb_bfm();
// MAC 0
// It is WISHBONE slave 1: address range 100000 - 100fff
// It is WISHBONE Master 0
eth_top mac_0 ( ... );
If for example the concrete driver class receives a WISHBONE write transaction it calls its local wb_write_cycle() task
which in turn calls the wb_write_cycle() task inside the BFM.
task run();
wb_txn req_txn;
forever begin
seq_item_port.get(req_txn); // get transaction
@ ( posedge wb_bfm.clk) #1; // sync to clock edge + 1 time step
case(req_txn.txn_type) //what type of transaction?
NONE: `ovm_info($sformatf("WB_M_DRVR_%0d",m_id),
$sformatf("wb_txn %0d the wb_txn_type was type NONE",
req_txn.get_transaction_id()),OVM_LOW )
WRITE: wb_write_cycle(req_txn);
READ: wb_read_cycle(req_txn);
RMW: wb_rmw_cycle(req_txn);
WAIT_IRQ: fork wb_irq(req_txn); join_none
default: `ovm_error($sformatf("WB_M_DRVR_%0d",m_id),
$sformatf("wb_txn %0d the wb_txn_type was type illegal",
req_txn.get_transaction_id()) )
endcase
end
endtask
// Methods
// calls corresponding BFM methods
//WRITE 1 or more write cycles
task wb_write_cycle(wb_txn req_txn);
wb_txn orig_req_txn;
$cast(orig_req_txn, req_txn.clone()); //save off copy of original req transaction
wb_bfm.wb_write_cycle(req_txn, m_id);
wb_drv_ap.write(orig_req_txn); //broadcast original transaction
endtask
...
endclass
...
endmodule
In the wishbone wrapper an instance override of the derived concrete class driver is created for the base class driver.
Note that the code in this example is set up to handle multiple instances of the wishbone bus wrapper (and hence multiple
DUTs), which is the reason the WB_ID parameter is used to uniquify the instance overrides. This parameter is set
in the top_mac module.
module top_mac;
endmodule
initial begin
//set inst override of concrete bfm driver for base bfm driver
wb_bus_bfm_driver_base::type_id::set_inst_override(
wb_bus_bfm_driver_c #(WB_ID)::get_type(),
$sformatf("*env_%0d*", WB_ID));
...
end
endmodule
In the wishbone agent in the testbench a base class driver handle is declared. When it is created by the factory the
override will take effect and a derived concrete class driver object will be created instead. This object has as its Verilog
scope the wishbone bus wrapper and its Verilog path would indicate it is inside the wishbone bus wrapper instance
created inside of top_mac (wb_bus_0). From the OVM perspective the object is an OVM hierarchical child of the
wishbone agent and its ports are connected to the ports in the wishbone agent.
class wb_master_agent extends ovm_agent;
...
//ports
ovm_analysis_port #(wb_txn) wb_agent_drv_ap;
...
// components
wb_bus_bfm_driver_base wb_drv;
wb_config m_config;
...
function void build();
super.build();
m_config = wb_config::get_config(this); // get config object
//ports
wb_agent_drv_ap = new("wb_agent_drv_ap", this);
...
//components
wb_drv = wb_bus_bfm_driver_base::type_id::create("wb_drv", this); // driver
...
endfunction
Ovm/DualTop
Typically a DUT-TB setup has a single SystemVerilog module as the top level. This top level module contains the DUT
and its associated interfaces, protocol modules, connection and support logic. It also contains the code to create the
testbench. All of this can get messy and hard to manage. An alternative is to encapsulate
everything associated directly with the DUT in a wrapper module, and to place the code that creates the testbench in its
own module. Verilog allows more than one top level module in a simulation. Neither of these two modules is
instantiated; both are treated as top level modules. This arrangement is referred to as dual top.
Dual top is a necessity for emulation: the DUT wrapper contains everything that goes into the emulator, while the other
top module containing the testbench stays running in the simulator. If the testbench is only going to be used in simulation,
dual top is not necessary but may still provide a useful level of encapsulation for modularity, reuse etc.
Communicating the virtual interface connection between the DUT wrapper module and the testbench is done using the
configuration database with ovm_container approach.
import ovm_pkg::*;
import ovm_container_pkg::*;
import test_params_pkg::*;
wishbone_bus_syscon_if wb_bus_if();
//-----------------------------------
...
);
//-----------------------------------
// MAC 0
// It is WISHBONE Master 0
eth_top mac_0 (
...
);
...
);
initial
endmodule
initial
run_test("test_mac_simple_duplex"); // create and start running test
endmodule
Ovm/VirtInterfaceFunctionCallChain
Function Call Chaining
It is unfortunate that this approach to assigning the actual interface reference to the virtual interface handles inside the
testbench is the one used in the xbus example that is prevalent in the OVM user's guide. Many users naturally
assume that this is the recommended method because of its use in the example. This approach is not recommended.
It involves creating a function (called assign_vi in the example) that takes a virtual interface handle as an argument, and
calls an equivalent function (also named assign_vi) on one or more child components. This is repeated down the
hierarchy until a leaf component is reached. Any components that need the virtual interface declare a local handle and
assign the function argument to the local handle.
In the connect() function of test env:
xbus0.assign_vi(xbus_tb_top.xi0);
In the agent:
function void assign_vi(virtual interface xbus_if xmi);
monitor.assign_vi(xmi);
if (is_active == OVM_ACTIVE) begin
sequencer.assign_vi(xmi);
driver.assign_vi(xmi);
end
endfunction : assign_vi
In the monitor:
function void assign_vi(virtual interface xbus_if xmi);
this.xmi = xmi;
endfunction
There are two main reasons why this method should not be used.
• It is not reusable - If the test environment hierarchy changes, these functions must be updated
• Unnecessary extra work - To reach leaf components in the environment, you must pass the virtual interface handle
down through intermediate levels of the hierarchy that have no use for the virtual interface. Also, to make this method
more reusable with respect to environment hierarchy changes, you would have to embed extra decision-making code
(as in the examples above), or write each function to iterate over all children and call the function on each child.
This requires even more unnecessary work.
Ovm/BusFunctionalModels
Bus Functional Models for DUT communication
Sometimes the DUT connection is not made directly to the ports of the DUT but rather through a BFM. As shown in
the diagram below, typically the BFM will have tasks for generating DUT transactions.
The interface acts as a "proxy" for the BFM to the testbench transactors. For example to do a wishbone bus write
transaction the driver would call the write task in the interface which in turn would call the write task in the BFM. See
the diagram below.
// MAC 0
eth_top mac_0(...);
// Interface
interface wishbone_bus_bfm_if #(int ID = WB_ID)
(input bit clk);
// Methods
//WRITE 1 or more write cycles
task wb_write_cycle(wb_txn req_txn, bit [2:0] m_id);
wb_bfm.wb_write_cycle(req_txn, m_id);
endtask
// other tasks not shown
...
endinterface
// Interface instance
wishbone_bus_bfm_if #(WB_ID) wb_bus_bfm_if(.clk(wb_bfm.clk));
initial
//set interface in config space
ovm_container #(virtual wishbone_bus_bfm_if)::set_value_in_global_config(
$sformatf("WB_BFM_IF_%0d",WB_ID), wb_bus_bfm_if);
endmodule
Ovm/ProtocolModules
Protocol modules are wrapper modules that encapsulate a DUT interface, associated assertions, QVL instances (which
are not allowed inside an interface), and so forth.
When emulation is a consideration, protocol modules provide a level of encapsulation necessary to isolate the changes
that occur in the agent and interface when moving between simulation and emulation. If the testbench is only going to be
used in simulation, protocol modules are not necessary. They may however still provide a useful level of encapsulation
for modularity, reuse etc. While it is not required that protocol modules be used together with the dual top methodology,
they are likely to be used mainly in connection with the dual top approach since both are required for emulation.
By adopting encapsulation, protocol modules protect the top level from changes:
• Any re-mapping due to changes in the interface can be done inside the protocol module.
• The top level module is protected from changes to the virtual interface registration/connection technique.
• You can instantiate QVL instances (which would not be allowed inside the SV interface) as well as add other
assertions that might not already be present in the interface.
Example:
In this example an Ethernet Media Access Controller (MAC) is the DUT. A MAC has multiple interfaces. The one
shown in the example is the Media Independent Interface (MII) which is where Ethernet packets are transferred to the
physical interface. In this example the protocol module contains the MII interface instance, a QVL MII monitor and
code for putting the interface instance location in the configuration database using ovm_container.
module mac_mii_protocol_module #(string INTERFACE_NAME = "") (
input logic wb_rst_i,
// Tx
output logic mtx_clk_pad_o, // Transmit clock (from PHY)
input logic[3:0] mtxd_pad_o, // Transmit nibble (to PHY)
input logic mtxen_pad_o, // Transmit enable (to PHY)
input logic mtxerr_pad_o, // Transmit error (to PHY)
// Rx
output logic mrx_clk_pad_o, // Receive clock (from PHY)
output logic[3:0] mrxd_pad_i, // Receive nibble (from PHY)
output logic mrxdv_pad_i, // Receive data valid (from PHY)
output logic mrxerr_pad_i, // Receive data error (from PHY)
// Common Tx and Rx
output logic mcoll_pad_i, // Collision (from PHY)
output logic mcrs_pad_i, // Carrier sense (from PHY)
import ovm_container_pkg::*;
// Instantiate interface
mii_if miim_if();
initial begin
ovm_container #(virtual mii_if)::set_value_in_global_config(interface_name, miim_if);
end
endmodule
It should be noted that if the parameter INTERFACE_NAME is not set, then the default value is %m (i.e., the
hierarchical path of this module). This is guaranteed to be unique. If this parameter is explicitly set, then it is up to the
designer to make sure that the names chosen are unique within the enclosing module.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Connect/AbstractConcrete
Abstract/Concrete Class approach to DUT-TB communication
A handle based approach to DUT-TB communication that does not use virtual interfaces is referred to in the OVM
industry as the abstract/concrete class approach. There is also a form of this approach that is in use that is known within
Mentor Graphics as Two Kingdoms.
As with using virtual interfaces this approach may be set up for the transactors in the testbench to communicate with the
pins of the DUT, with a Bus Functional Model (BFM) which drives transactions on the DUT or the internals of the DUT
using the SystemVerilog bind construct. The most typical use is with BFMs.
Virtual interfaces are the recommended approach for DUT-TB communication. The abstract/concrete class approach
should only be considered when virtual interfaces cannot be used, or as a secondary approach in the case of legacy
BFMs that cannot be modified to be an interface.
The discussion in this article going forward will focus only on use of the abstract/concrete class approach with BFMs.
Example BFM
Here is a diagram showing a BFM for the WISHBONE bus that will be used in the examples here. The wishbone bus
BFM is connected to the WISHBONE bus and has the WISHBONE bus arbitration, clock, reset etc. logic along with
tasks which generate WISHBONE bus transactions (read, write etc.).
Here is code from the BFM. The full code may be downloaded with the other example code shown later.
module wishbone_bus_syscon_bfm #(int num_masters = 8, int num_slaves = 8,
int data_width = 32, int addr_width = 32)
(
// WISHBONE common signals
output logic clk,
output logic rst,
...
);
// WISHBONE bus arbitration logic
...
//Slave address decode
...
// BFM tasks
//WRITE 1 or more write cycles
task wb_write_cycle(wb_txn req_txn, bit [2:0] m_id = 1);
...
endtask
endmodule
Abstract/Concrete Classes
First an abstract class (SystemVerilog virtual class) is defined. The abstract class has pure virtual methods and properties
which define a public interface for accessing information. The implementations of the methods are not in the abstract
class but rather are in a derived class which is referred to as the concrete class. The concrete class is defined inside of a
wrapper module which also instantiates the BFM.
DUT Connection
Since it is defined inside the wrapper module the scope of the concrete class, wb_bus_concr_c in the example, is the
wrapper module and hence its methods can access anything defined inside of the wrapper module
(wb_bus_protocol_module), including ports, variables, class handles, functions, tasks, module instances etc.
In this diagram an instance of the BFM is created inside the wrapper module. The methods of the concrete class access
the BFM methods by hierarchical reference through this instance (bfm_instance_name.method_name).
object being created. More details, diagrams and an example using a BFM are here.
Ovm/Connect/AbstractConcreteContainer
Abstract Class
The Abstract class is defined as part of the agent in the testbench and is included in the agent package. Below is the code
for an example abstract class called wb_bus_abs_c. Note the pure virtual methods which define a public interface to this
class. As part of the public interface too is an event to represent the posedge of the clock. Note too that the abstract class
inherits from ovm_component and so inherits the phase methods etc.
// Abstract class for abstract/concrete class wishbone bus communication
//----------------------------------------------
virtual class wb_bus_abs_c extends ovm_component;
// API methods
//WRITE 1 or more write cycles
pure virtual task wb_write_cycle(wb_txn req_txn, bit [2:0] m_id);
event pos_edge_clk;
endclass
Concrete Class
The concrete class is derived from the abstract class. It is required to override any pure virtual methods, providing
implementations. It may also provide code that writes/reads variables inherited from the abstract class. This class is
defined inside of a wrapper module that includes an instance of the BFM. Since it is defined inside the wrapper module
the scope of the concrete class, is the wrapper module and hence its methods can access anything defined inside of the
wrapper module, including ports, variables, class handles, functions, tasks, and in particular the BFM module instance.
Here is the code for the concrete class wb_bus_concr_c. It is defined inside the wrapper module
wb_bus_protocol_module which is a protocol module. Note that this class inherits from the abstract class and provides
implementations of the methods. These methods are straightforward in that they are "proxy" methods that simply call
the corresponding method inside the BFM. For example the concrete class's wb_write_cycle() task calls the
wb_write_cycle() task inside the BFM. At the bottom of the wb_bus_protocol_module is the instance (wb_bfm) of the
BFM (wishbone_bus_syscon_bfm).
module wb_bus_protocol_module #(int WB_ID = 0, int num_masters = 8, int num_slaves = 8,
int data_width = 32, int addr_width = 32)
(
// Port declarations
// WISHBONE common signals
output logic clk,
output logic rst,
...
);
...
super.new(name,parent);
endfunction
// API methods
// simply call corresponding BFM methods
//WRITE 1 or more write cycles
task wb_write_cycle(wb_txn req_txn, bit [2:0] m_id);
wb_bfm.wb_write_cycle(req_txn, m_id);
endtask
task run();
forever @ (posedge clk)
-> pos_edge_clk;
endtask
endclass
...
// WISHBONE BFM instance
wishbone_bus_syscon_bfm wb_bfm(
.clk( clk ),
.rst( rst ),
...
);
endmodule
In the diagram above the DUTs and the wb_bus_protocol_module are wrapped in a wrapper module, the
wb_bus_wrapper. This is for modularity and for convenience in instantiating multiple wishbone buses.
The code below shows the instance of the concrete class inside the wrapper module and a method (a "lazy allocator", i.e.
a method that does not allocate the instance until it is needed) used for creating the concrete class instance.
module wb_bus_protocol_module #(int WB_ID = 0, int num_masters = 8, int num_slaves = 8,
int data_width = 32, int addr_width = 32)
(
// Port declarations
// WISHBONE common signals
output logic clk,
output logic rst,
...
);
// Concrete class instance and a "lazy allocator" that creates it on first use
wb_bus_concr_c wb_bus_concr_c_inst;
function wb_bus_concr_c get_wb_bus_concr_c_inst();
  if(wb_bus_concr_c_inst == null)
    wb_bus_concr_c_inst = new();
  return (wb_bus_concr_c_inst);
endfunction
initial
//set concrete class object in config space
ovm_container #(wb_bus_abs_c)::set_value_in_global_config(
$sformatf("WB_BUS_CONCR_INST_%0d",WB_ID) , get_wb_bus_concr_c_inst());
The location of the concrete class instance is provided to the transactor in the same manner as for virtual interface
connections: ovm_container is used to pass a handle that points to the concrete class instance to the test class, and the
handle is then passed through a configuration object from the test class to the transactor. In the code above, inside the
initial block, a handle to the concrete instance is placed inside the configuration space using ovm_container.
In the test class the handle to the concrete driver class instance is fetched from the configuration database and placed
inside a configuration object which is made available to the wishbone agent. This approach is recommended as it
follows the recommended use model for passing information from the DUT to the testbench which is discussed in detail
here in the article on virtual interfaces.
class test_mac_simple_duplex extends ovm_test;
...
mac_env env_0;
wb_config wb_config_0; // config object for WISHBONE BUS 0
...
set_wishbone_config_params();
set_mii_config_params();
...
endfunction
...
endclass
Inside the driver an abstract class handle is made to point to the concrete class instance by fetching the location from the
configuration object provided by the test class.
// WISHBONE master driver
class wb_bus_bfm_driver extends ovm_driver #(wb_txn, wb_txn);
`ovm_component_utils(wb_bus_bfm_driver)
task run();
wb_txn req_txn;
forever begin
seq_item_port.get(req_txn); // get transaction
@ ( m_wb_bus_abs_c.pos_edge_clk) #1; // sync to clock edge + 1 time step
case(req_txn.txn_type) //what type of transaction?
NONE: `ovm_info($sformatf("WB_M_DRVR_%0d",m_id),
$sformatf("wb_txn %0d the wb_txn_type was type NONE",
req_txn.get_transaction_id()),OVM_LOW )
WRITE: wb_write_cycle(req_txn);
READ: wb_read_cycle(req_txn);
RMW: wb_rmw_cycle(req_txn);
WAIT_IRQ: fork wb_irq(req_txn); join_none
default: `ovm_error($sformatf("WB_M_DRVR_%0d",m_id),
$sformatf("wb_txn %0d the wb_txn_type was type illegal",
req_txn.get_transaction_id()) )
endcase
end
endtask
//READ 1 or more cycles
virtual task wb_read_cycle(wb_txn req_txn);
wb_txn rsp_txn;
m_wb_bus_abs_c.wb_read_cycle(req_txn, m_id, rsp_txn);
seq_item_port.put(rsp_txn); // send rsp object back to sequence
wb_drv_ap.write(rsp_txn); //broadcast read transaction with results
endtask
//RMW ( read-modify_write)
virtual task wb_rmw_cycle(ref wb_txn req_txn);
`ovm_info($sformatf("WB_M_DRVR_%0d",m_id),
"Wishbone RMW instruction not implemented yet",OVM_LOW )
endtask
endclass
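A minimal sketch of how the driver might pick up the abstract class handle in its build() method, assuming that the wb_config object carries a member set by the test (the member names here are assumptions):
wb_bus_abs_c m_wb_bus_abs_c; // abstract class handle declared in the driver
wb_config    m_config;

function void build();
  super.build();
  m_config = wb_config::get_config(this);    // fetch the config object
  m_wb_bus_abs_c = m_config.m_wb_bus_abs_c;  // assumed config object field holding the concrete instance
endfunction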
When the driver receives a WISHBONE write transaction in the run task, for example, it calls its wb_write_cycle() task,
which uses the abstract class handle (m_wb_bus_abs_c) to call the wb_write_cycle() method in the concrete class, which
in turn calls the wb_write_cycle() method in the BFM.
Ovm/Configuration
Learn about passing configuration into an OVM test environment
Topic Overview
Introduction
One of the key tenets of designing reusable testbenches is to use configuration parameters whenever possible.
Parameterization permits scalability as well as flexibility to adapt a testbench to changing circumstances.
This article uses the generic term "parameter" to mean any value that can be used to establish a specific configuration for
a testbench, as opposed to the term "SystemVerilog parameter", which refers to the syntactic element.
In a testbench, there are any number of values that you might normally write as literals - values such as for-loop limits,
string names, randomization weights and other constraint expression values, coverage bin values. These values can be
represented by SystemVerilog variables, which can be set (and changed) at runtime, or SystemVerilog parameters, which
must be set at compile time. Because of the flexibility they offer, variables should be the preferred way to set
configuration parameters.
There is a common situation, however, where SystemVerilog parameters are the only option available - bit size
parameters. That situation is discussed in more detail in the Parameterized Tests document.
Configuration Objects
Configuration objects are an efficient, reusable mechanism for organizing configuration parameters. They are described
in detail in the whitepaper OVM Configuration and Virtual Interfaces. In a typical testbench, there can be several
configuration objects, each tied to a component. They are created as a subclass of ovm_object and group together all
related configuration parameters for a given branch of the test structural hierarchy. There can also be an additional,
single configuration object that holds global configuration parameters.
The OVM configuration database takes care of the scope and storage of the object. For convenience, a configuration
object can have a static method that gets the object out of the database. Here is an example configuration object.
// configuration container class
class wb_config extends ovm_object;
  `ovm_object_utils( wb_config )
  // Configuration Parameters
  int unsigned m_s_mem_wb_base_addr; // base address of wb memory for MAC frame buffers
  ...
  function new( string name = "wb_config" );
    super.new( name );
  endfunction
// Convenience function that first gets the object out of the OVM database
// and reports an error if the object is not present in the database, then
// casts it to the correct config object type, again checking for errors
static function wb_config get_config( ovm_component c );
  ovm_object o;
  wb_config t;
  if( !c.get_config_object( "wb_config", o, 0 ) ) begin
    `ovm_error("GET_CONFIG", "no object associated with id wb_config found in the configuration database")
    return null;
  end
  if( !$cast( t, o ) ) begin
    `ovm_error("GET_CONFIG",
      $sformatf("the object associated with id %s is of type %s which is not the required type %s",
                "wb_config", o.get_type_name(), "wb_config"))
    return null;
  end
  return t;
endfunction
endclass
...
...
wb_config_0 = new();
wb_config_0.v_wb_bus_if =
wb_config_0.m_wb_id = 0; // WISHBONE 0
wb_config_0.m_mac_eth_addr = 48'h000BC0D0EF00;
wb_config_0.m_mac_wb_base_addr = 32'h00100000;
wb_config_0.m_tb_eth_addr = 48'h000203040506;
wb_config_0.m_s_mem_wb_base_addr = 32'h00000000;
wb_config_0.m_wb_verbosity = 350;
endfunction
...
super.build();
set_wishbone_config_params();
...
endfunction
...
endclass
The components that use the configuration object data get access via the static helper function. In this example, the
drivers get the virtual interface handle, ID, and verbosity from the object.
class wb_m_bus_driver extends ovm_driver #(wb_txn, wb_txn);
...
endclass
Configuring sequences
A sequence can use configuration data, but it must get the data from a component. Usually this should be done via its
sequencer. Here, the MAC simple duplex sequence gets the configuration data from its sequencer and uses parameters to
influence the data sent from the sequence:
class mac_simple_duplex_seq extends wb_mem_map_access_base_seq;
...
wb_config m_config;
task body;
...
m_config = wb_config::get_config(m_sequencer); // get config object
...
endclass
...
wb_config_0 = new();
// Get the virtual interface handle that was set in the top module or protocol module
wb_config_0.v_wb_bus_if =
...
endfunction
...
endclass
Ovm/Config/SetGetConfig
Configuration is an important part of OVM testbench construction. It is used to control the way in which the testbench is
structured - i.e. its topology, to pass handles to testbench resources such as virtual interfaces, and to define the behaviour
of a specific component. The recommended way to encapsulate configuration information is through the use of
configuration objects. The mechanism used to propagate configuration information is a configuration table that is built
into each ovm_component. This lookup table can be accessed by child nodes in the testbench hierarchy tree.
Configuration objects are inserted into an ovm_component's configuration table using the set_config_object() method so
that its sub-components can reference them using the get_config_object() method.
//
// Class Description:
//
//
class spi_agent_config extends ovm_object;
// Virtual Interface
virtual spi_if SPI;
//------------------------------------------
// Data Members
//------------------------------------------
// Is the agent active or passive
ovm_active_passive_enum active = OVM_ACTIVE;
// Enable functional coverage
bit has_functional_coverage_monitor = 1;
// Enable scoreboard
bit has_scoreboard = 1;
//------------------------------------------
// Methods
//------------------------------------------
endclass: spi_agent_config
`endif // SPI_AGENT_CONFIG
set_config_object
The set_config_object() method is used to create an entry in an ovm_component's configuration table for a configuration
object. This table can be thought of as an associative array which is indexed by a string key. Only sub-components and
their children can reference the configuration table, and whether a particular sub-component can access an entry is
controlled by a path argument in the set_config_object() method. The set_config_object() method takes 4 arguments:
• string <path> - This controls which sub-components are able to access the object in the configuration table. The path
specified needs to tie in with the component name hierarchy that is created during the build process. Each node on the
path is determined by the value of the name argument passed during the creation step for each component. A typical
path might look like "m_env.bus_agent.bus_driver", where each node is delimited by a period character. The path
string can contain * and ? wildcard characters where * means everything below this point and ? is a don't care in the
string path. For example "m_env.bus_agent.*" would refer to everything south of the bus_agent in the testbench
hierarchy and "m_env.?_agent" would refer to all xxx_agents that are sub-components of m_env.
• string <key> - This is the string that is used to store and look up the object in the configuration table.
• ovm_object <value> - This is the pointer to the object which is copied into the configuration table.
• bit <clone> - This controls whether the table contains a pointer to the object or a pointer to a deep copy, or clone, of
the object. By default this is set to 1, but in almost all cases this should be set by the user to 0 so that all
sub-components are referring to the original object.
The following code example illustrates how the set_config_object() method is used:
// Code from inside the env containing the SPI agent
//
// Note that:
// * The spi agents handle in the env is m_spi_agent
// * The lookup key for configuration in the spi_agent is spi_agent_config
// * The configuration object for the agent is nested inside the envs config m_cfg
//
set_config_object("m_spi_agent", "spi_agent_config", m_cfg.m_spi_agent_cfg, .clone(0));
get_config_object
The get_config_object() method is the counterpart to set_config_object() and is used by child components to retrieve
configuration objects from a parent or ancestor's configuration table. The result of the method call is a bit which indicates
whether the object lookup was successful, a 1 indicating success, a 0 indicating failure. An inout argument to the method
returns a pointer of ovm_object type which has to be cast to the target object type before the contents of the configuration
object can be referenced. The get_config_object call takes 3 arguments:
• string <key> - This is the look up key string, which has to match the one used with set_config_object
• ovm_object <value> - This is an inout field in the function call which returns a pointer to the configuration object of
type ovm_object.
• bit <clone> - This controls whether a pointer to a deep copy, or clone, of the object is returned. This is set to a 1 by
default, but in most cases it should be set to 0 so that a pointer to the configuration object is returned.
The following code example illustrates how the get_config_object() method is used. Note the use of defensive
programming techniques to ensure that the lookup is successful and that the cast to the target configuration object is also
successful.
// From the build method of the SPI agent
function void spi_agent::build();
  ovm_object tmp;
  if( get_config_object( "spi_agent_config", tmp, 0 ) ) begin
    if( !$cast( m_cfg, tmp ) ) begin
      `ovm_error("build:", "cast of the \"spi_agent_config\" object failed - check the object type")
    end
  end
  else begin
    `ovm_error("build:", "get config for \"spi_agent_config\" failed - check lookup string or set_config")
  end
endfunction: build
//
// Class Description:
//
//
class spi_agent_config extends ovm_object;
// Virtual Interface
virtual spi_if SPI;
//------------------------------------------
// Data Members
//------------------------------------------
// Is the agent active or passive
//------------------------------------------
// Methods
//------------------------------------------
extern static function spi_agent_config get_config( ovm_component c);
// Standard OVM Methods:
extern function new(string name = "spi_agent_config");
endclass: spi_agent_config
//
// Function: get_config
//
// This method gets the spi_agent_config associated with component c. We check for
// the two kinds of error which may occur with this kind of operation.
//
function spi_agent_config spi_agent_config::get_config( ovm_component c );
  ovm_object o;
  spi_agent_config t;
  if( !c.get_config_object( "spi_agent_config", o, 0 ) ) begin
    `ovm_error("GET_CONFIG", "cannot find an object associated with id spi_agent_config in the configuration table")
    return null;
  end
  if( !$cast( t, o ) ) begin
    `ovm_error("GET_CONFIG", "the object associated with id spi_agent_config is not of type spi_agent_config")
    return null;
  end
  return t;
endfunction
`endif // SPI_AGENT_CONFIG
When the get_config() method is used in the build method of the spi_agent, the code is simplified considerably. Compare
the following code against the previous version of the spi_agent.
// Note the use of get_config() to return a spi_agent_config object
function void spi_agent::build();
m_cfg = spi_agent_config::get_config(this);
// Monitor is always present
m_monitor = spi_monitor::type_id::create("m_monitor", this);
// Only build the driver and sequencer if active
if(m_cfg.active == OVM_ACTIVE) begin
m_driver = spi_driver::type_id::create("m_driver", this);
m_sequencer = spi_sequencer::type_id::create("m_sequencer", this);
end
endfunction: build
• field (string, default "") - Print any configuration settings matching the field.
• comp (ovm_component, default null) - If a component is specified, the configuration settings for that component are printed.
• recurse (bit, default 0) - If set to 1, then the configuration information for the component and its children is printed.
• print_config_matches - This is a variable within an ovm_component, if it is set to a 1, then details about any
matching configuration settings found by using the get_config_object method are printed out.
In addition to these methods the OVM will print out a diagnostic message at the completion of the report phase if there
are any set_config_object settings which have not been referenced by get_config_object calls during the life-time of the
testbench.
Ovm/Config/Container
The ovm_container class
OVM's configuration mechanism limits the type of data that can be stored to integral, string, and object (derived from
ovm_object). For any other type, such as virtual interfaces, associative arrays, queues, or mailboxes, the only way to get
it into the configuration database is for it to be a data member of a wrapper class that derives from ovm_object.
Rather than creating a different wrapper class for each type of data to be stored, a single parameterized,
general-purpose class, ovm_container, has been created by Mentor; it wraps a single piece of data. Also, for
convenience, the class has static methods to add the object to, and retrieve the object from, the configuration database.
The ovm_container class source code is available for download.
Here is the class definition:
class ovm_container #( type T = int ) extends ovm_object;
typedef ovm_container #( T ) this_t;
//
// Variable: t, the data being wrapped
//
T t;
endclass
The static method set_value_in_global_config():
• Creates the ovm_container object
• Populates it with a piece of data
• Stores the container object into the configuration database with global scope - i.e. any component can retrieve the
object from the database. It also does not clone the object.
ovm_object o;
return tmp.t;
endfunction
This static method gets the ovm_container associated with the config_id using the local config in component c. If
set_value_in_global_config has been used then the component c is in fact irrelevant since the value will be the same
anywhere in the OVM component hierarchy. But passing a value for the component allows the theoretical possibility that
different values are present at different places in the OVM hierarchy for the same config_id.
Example use
Here is a link to an example use of ovm_container to convey a virtual interface pointer to the testbench from the top
module. In this example the top module wraps a virtual interface and places the ovm_container object in the
configuration database. Inside the testbench it fetches the ovm_container object from the database and extracts the virtual
interface from it.
Ovm/Config/Params Package
When a DUT and/or interface is parameterized the parameter values are almost always used in the testbench as well.
These parameters should not be specified with direct literal values in your instantiations. Instead define named parameter
values in a package and use the named values in both the DUT side as well as the testbench side of the design.
This helps avoid mistakes where a change is made to one side but not to the other. Or, if a test configuration parameter is
some function of a DUT parameter, there is a chance that a miscalculation may be made when making a change.
Note that this package is not a place for all test parameters. If you have test-specific parameters that are not used by the
DUT, those values should be set directly in the test. The DUT parameters package should be used only for parameters
that are shared between the DUT and the test.
endpackage
The parameter values (mem_slave_size, mem_slave_wb_id) usage in the top module to instantiate the WISHBONE bus
slave memory module is shown below. Note the import of the test_params_pkg in the top_mac module:
module top_mac;
...
import test_params_pkg::*;
//-----------------------------------
// WISHBONE 0, slave 0: 000000 - 0fffff
// this is 1 Mbytes of memory
wb_slave_mem #(mem_slave_size) wb_s_0 (
// inputs
.clk ( wb_bus_if.clk ),
.rst ( wb_bus_if.rst ),
.adr ( wb_bus_if.s_addr ),
.din ( wb_bus_if.s_wdata ),
.cyc ( wb_bus_if.s_cyc ),
.stb ( wb_bus_if.s_stb[mem_slave_wb_id] ),
.sel ( wb_bus_if.s_sel[3:0] ),
.we ( wb_bus_if.s_we ),
// outputs
.dout( wb_bus_if.s_rdata[mem_slave_wb_id] ),
.ack ( wb_bus_if.s_ack[mem_slave_wb_id] ),
.err ( wb_bus_if.s_err[mem_slave_wb_id] ),
.rty ( wb_bus_if.s_rty[mem_slave_wb_id] )
);
...
endmodule
Parameter usage inside the test class of the testbench to set the configuration object values for the WISHBONE bus slave
memory is shown below. Note that instead of using the numeric literal of 32'h00100000 for the address value, the code
uses an expression involving a DUT parameter (mem_slave_size).
package tests_pkg;
...
import test_params_pkg::*;
...
`include "test_mac_simple_duplex.svh"
endpackage
//-----------------------------------------------------------------
...
...
wb_config_0 = new();
...
endfunction
...
endclass
Multiple Instances
When you have multiple instances of parameter sets you can either create a naming convention for your parameters or
you can use a parameterized class-based approach to organize your parameter sets on a per-instance basis.
Create a parameterized class which specifies the parameters and their default values. Then for each instance set the
parameter values by creating a specialization of the parameterized class using a typedef.
package test_params_pkg;
import ovm_pkg::*;
endpackage
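A minimal sketch of what such a package might contain is shown below; the class, specialization and parameter names are chosen to match the usage that follows, and the values are illustrative assumptions:
package test_params_pkg;
  import ovm_pkg::*;

  // Parameterized class grouping one slave's parameter set
  class wb_slave_params #(int mem_slave_size = 18, int mem_slave_wb_id = 0);
  endclass

  // One specialization (typedef) per slave instance
  typedef wb_slave_params #(.mem_slave_size(18), .mem_slave_wb_id(0)) WISHBONE_SLAVE_0;
  typedef wb_slave_params #(.mem_slave_wb_id(1))                      WISHBONE_SLAVE_1;
endpackage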
To access or use the parameters, such as mem_slave_size or mem_slave_wb_id, specified in the specializations
WISHBONE_SLAVE_0 or WISHBONE_SLAVE_1 in the above code, use the following syntax
name_of_specialization::parameter_name as illustrated below.
module top_mac;
...
import test_params_pkg::*;
//-----------------------------------
// WISHBONE 0, slave 0: 000000 - 0fffff
// this is 1 Mbytes of memory
wb_slave_mem #(WISHBONE_SLAVE_0::mem_slave_size) wb_s_0 (
// inputs
.clk ( wb_bus_if.clk ),
.rst ( wb_bus_if.rst ),
.adr ( wb_bus_if.s_addr ),
.din ( wb_bus_if.s_wdata ),
.cyc ( wb_bus_if.s_cyc ),
.stb ( wb_bus_if.s_stb [WISHBONE_SLAVE_0::mem_slave_wb_id] ),
.sel ( wb_bus_if.s_sel[3:0] ),
.we ( wb_bus_if.s_we ),
// outputs
.dout( wb_bus_if.s_rdata[WISHBONE_SLAVE_0::mem_slave_wb_id] ),
.ack ( wb_bus_if.s_ack [WISHBONE_SLAVE_0::mem_slave_wb_id] ),
.err ( wb_bus_if.s_err [WISHBONE_SLAVE_0::mem_slave_wb_id] ),
.rty ( wb_bus_if.s_rty [WISHBONE_SLAVE_0::mem_slave_wb_id] )
);
...
endmodule
There is further discussion of the relationship between parameters and reuse here.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/ResourceAccessForSequences
A sequence is derived from an ovm_object and is not an ovm_component; this means that it does not have direct access
to resources, such as configuration objects and register models, in the testbench hierarchy. However, when a sequence is
started it is "connected" to the testbench via the sequencer on which it is started: a handle of type
ovm_sequencer_base called m_sequencer in the sequence is assigned a pointer to the host sequencer when the
sequence's start() method is called.
Unfortunately, the m_sequencer handle cannot be used directly to access testbench resources; first it has to be cast to the
type of the host sequencer.
The resources that the sequencer can provide access to can either be directly declared in the sequencer, or they can be
accessed via static methods implemented in the resources themselves. For instance, handles for a register model or a
functional coverage monitor could be declared in the sequencer and could be accessed directly. The assumption here is
that the handles have been assigned pointers to the right components either in the connect method of the sequencer or in
some external component.
Another approach is to access the resource through an object in the configuration space, here the sequence can access the
resource by calling a static get_config() method passing the sequencer handle as an argument. The following code
illustrates this approach and an application of this technique is described in the article on waiting for hardware events.
The get_config() static method is also detailed in the article on configuration objects.
//
// Sequence that needs to access a test bench resource via
// the configuration space
//
// Note that this is a base class; any class extending it would
// call super.body() at the start of its body task to get set up
//
class register_base_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(register_base_seq)
task body;
// Cast the ovm_sequencer_base m_sequencer handle to the actual sequencer type:
if(!$cast(BUS, m_sequencer)) begin
`ovm_error("CAST_FAIL", "This sequence is not running on the correct sequencer")
end
// Get the configuration object for the env - via the component tree
env_cfg = bus_env_config::get_config(BUS);
// Assign a pointer to the register model which is inside the env config object:
RM = env_cfg.register_model;
endtask: body
endclass: register_base_seq
Using p_sequencer
An alternative to implementing the cast of the m_sequencer handle to an actual sequencer type is to use the
`ovm_declare_p_sequencer macro. This implements the code to cast the actual sequencer type from the m_sequencer
base handle as in the example above; however, it creates a handle called p_sequencer which points to the parent
sequencer. This macro should be put into the sequence code; it implements a method called m_set_p_sequencer which
is called when the sequence's start() method is called.
//
// Sequence that needs to access a test bench resource via
// the configuration space
//
// Note that this is a base class; any class extending it would
// call super.body() at the start of its body task to get set up
//
class register_base_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(register_base_seq)
// Macro that defines the p_sequencer - creates a function called m_set_p_sequencer
// that is called when the sequence's start method is called:
`ovm_declare_p_sequencer(bus_sequencer)
task body;
// Get the configuration object for the env - via the component tree
env_cfg = bus_env_config::get_config(p_sequencer); // Using p_sequencer as the ref to the sequencer
// Assign a pointer to the register model which is inside the env config object:
RM = env_cfg.register_model;
endtask: body
endclass: register_base_seq
MacroCostBenefit
Macros should be used to ease repetitive typing of small bits of code, to hide implementation differences or limitations
among the vendors' simulators, or to ensure correct operation of critical features. Many of the macros in OVM meet these
criteria, while others clearly do not. While the benefits of macros may be obvious and immediate, the costs associated
with using certain macros are hidden and may not be realized until much later.
This topic is explored in the following paper from DVCon11:
OVM-UVM Macros-Costs vs Benefits DVCon11 Appnote (PDF) [1]
This paper will:
• Examine the hidden costs incurred by some macros, including code bloat, low performance, and debug difficulty.
• Identify which macros provide a good cost-benefit trade-off, and which do not.
• Teach you how to replace high-cost macros with simple SystemVerilog code.
Summary Recommendations
• `ovm_*_utils - Always use. These register the object or component with the OVM factory. While not a lot of code, registration can be hard to debug if not done correctly.
• `ovm_info, `ovm_warning, `ovm_error, `ovm_fatal - Always use. These can significantly improve performance over their function counterparts (e.g. ovm_report_info).
• `ovm_*_imp_decl - OK to use. These enable a component to implement more than one instance of a TLM interface. Non-macro solutions don't provide significant advantage.
• `ovm_field_* - Do not use. These inject lots of complex code that substantially decreases performance and hinders debug.
• `ovm_do_* - Do not use. These unnecessarily obscure a simple API and are best replaced by a user-defined task, which affords far more flexibility.
• `ovm_sequence_utils, `ovm_sequencer_utils and other related macros - Do not use. These macros build up a sequencer's sequence library and enable automatic starting of sequences, which is almost always the wrong thing to do.
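As a flavour of the kind of replacement involved, the following sketch (illustrative only; the sequence and item type names are assumed) shows the `ovm_do style of item generation rewritten as explicit create, start_item, randomize and finish_item calls:

class example_seq extends ovm_sequence #(bus_seq_item);
  `ovm_object_utils(example_seq)

  task body;
    bus_seq_item item;
    // Equivalent of `ovm_do(item), written out explicitly:
    item = bus_seq_item::type_id::create("item");
    start_item(item);
    if(!item.randomize()) begin
      `ovm_error("body", "randomization failed")
    end
    finish_item(item);
  endtask: body
endclass: example_seq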
Ovm/Analysis
Components in an OVM testbench that observe and analyze behavior of the DUT
Topic Overview
Verifying a design consists of two major parts: stimulus generation and an analysis of the design's response. Stimulus
generation sets up the device and puts it in a particular state, then the analysis part actually performs the verification.
The analysis portion of a testbench is made up of components that observe behavior and make a judgement whether or
not the device conforms to its specification. Examples of specified behavior include functional behavior, performance,
and power utilization.
Scoreboards
These analysis components collect the transactions sent by the monitor and perform specific analysis activities on the
collection of transactions. Scoreboard components determine whether or not the device is functioning properly. The best
scoreboard architecture separates its tasks into two areas of concern: prediction and evaluation.
A predictor model, sometimes referred to as a "Golden Reference Model", receives the same stimulus stream as the DUT
and produces known good response transaction streams. The scoreboard evaluates the predicted activity with actual
observed activity on the DUT.
A common evaluation technique when there is one expected stream and one actual stream is to use a comparator, which
can either compare the transactions assuming in-order arrival of transactions or out-of-order arrival.
Coverage Collectors
Coverage information is used to answer the questions "Are we done testing yet?" and "Have we done adequate testing?".
Coverage Collectors subscribe to a monitor's analysis ports and sample observed activity into SystemVerilog functional
coverage constructs. The data from each test is entered into a shared coverage database, which is used to determine
overall verification progress.
Metric Analyzers
Metric Analyzers watch and record non-functional behavior such as timing/performance and power utilization. Their
architecture is generally standard. Depending on the number of transaction streams they observe, they are implemented
as an ovm_subscriber or with analysis exports. They can perform ongoing calculations during the run phase, and/or
during the post-run phases.
Analysis Reporting
All data is collected during simulation. Most analysis takes place dynamically as the simulation runs, but some analysis
can be done after the run phase ends. OVM provides three post-run phases: extract, check, and report. These phases
allow components to optionally extract relevant data collected during the run, perform a check, and then finally produce
reports about all the analysis performed, whether during the run or post-run.
Ovm/AnalysisPort
Overview
One of the unique aspects of the analysis section of a testbench is that usually there are many independent calculations
and evaluations all operating on the same piece of data. Rather than lump all these evaluations into a single place, it is
better to separate them into independent, concurrent components. This leads to a topological pattern for connecting
components that is common to the analysis section: the one-to-many topology, where one connection point broadcasts
information to many connected components that read and analyze the data.
This connection topology implementation behavior lends itself to an OOP design pattern called the "observer pattern."
In this pattern, interested "observers" register themselves with a single information source. There is no minimum or
maximum number of observers (e.g. the number of observers could be zero). The information source performs a single
operation to broadcast the data to all registered observers.
An additional requirement of OVM Analysis is "Do not interfere with the DUT". This means that the act of broadcasting
must be a non-blocking operation.
OVM provides three objects to meet the requirements of the observer pattern as well as the non-interference
requirement: analysis ports, analysis exports, and analysis fifos.
Detail
Analysis ports, analysis exports, and analysis fifos follow the standard OVM transaction-level communication semantics. An analysis port requires an implementation of write() to be connected to it. An analysis export provides the implementation of the write() function. As with other OVM TLM ports, analysis ports are parameterized classes where the parameter is the transaction type being passed. Ports provide a local object through which code can call a function. Exports are connection points on components that provide the implementation of the functions called through the ports. Ports and exports are connected through a call to the
connect() function.
All other OVM TLM ports and exports, such as blocking put ports and blocking put exports, perform point-to-point
communication. Because of the one-to-many requirement for analysis ports and exports, an analysis port allows multiple
analysis exports to be connected to it. It also allows for no connections to be present. The port maintains a list of
connected exports.
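A minimal sketch of these mechanics (component and transaction names are assumed): a monitor declares and constructs an analysis port, zero or more subscribers are connected to it, and the monitor broadcasts with a single write() call:

class my_monitor extends ovm_monitor;
  `ovm_component_utils(my_monitor)
  ovm_analysis_port #(bus_seq_item) ap;   // parameterised with the transaction type

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();
    super.build();
    ap = new("ap", this);
  endfunction

  // somewhere in the run() protocol recognition loop:
  //   ap.write(txn);   // non-blocking broadcast to all connected exports
endclass

// In the enclosing env, any number of subscribers can be connected to the one port:
//   m_monitor.ap.connect(m_coverage.analysis_export);
//   m_monitor.ap.connect(m_scoreboard.before_export);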
Analysis Exports
You register an analysis export as an observer with the analysis port by passing the export as the argument to the port's
connect() function. As with other TLM exports, an analysis export comes in two types: a hierarchical export or an "imp"
export. Both hierarchical and "imp" exports can be connected to a port.
An "imp" export is placed on a
component that actually implements the
write() function directly.
Analysis Fifos
OVM provides a pre-built component called an analysis fifo that has an "imp"-style export and an implementation of write() that places the data written into a fifo buffer. The buffer can grow indefinitely, in order to prevent blocking and adhere to the non-interference requirement. The analysis fifo extends the tlm_fifo class, so it also has all of the exports and operations of a tlm fifo such as blocking get,
etc.
The analysis component is required to implement the write() function that is called by the Monitor's analysis port. For an
analysis component that has a single input stream, you can extend the ovm_subscriber class. For components that have
multiple input streams, you can either directly implement write() functions and provide "imp" exports, or you can
expose hierarchical children's implementations of write() by providing hierarchical exports. The decision of which to use
depends on the component's functionality and your preferred style of coding.
In the diagram above, the Coverage Collector extends the ovm_subscriber class, which has an analysis "imp" export.
The extended class then implements the write() function.
The Scoreboard has two input streams of transactions and so uses embedded Analysis fifos which buffer the incoming
streams of transactions. In this case the write() method is implemented in the fifos, so the Scoreboard uses hierarchical
analysis exports to expose the write() method externally.
Ovm/AnalysisConnections
Overview
An analysis component captures transaction data by implementing the write() function that is called by the analysis port
to which it is connected. Depending on the functionality of the analysis component, you either directly implement the
write() function and provide an "imp" export using the ovm_analysis_imp class, or you expose a hierarchical child's
implementation of write() by providing a hierarchical export using the ovm_analysis_export class.
// Coverage collector sketch - class wrapper, constructor and write() reconstructed (names assumed):
class packet_coverage extends ovm_subscriber #(Packet);
  `ovm_component_utils(packet_coverage)
  Packet pkt;
  int pkt_cnt;
  covergroup cov1;
    s: coverpoint pkt.src_id {
      bins src[8] = {[0:7]};
    }
    d: coverpoint pkt.dest_id {
      bins dest[8] = {[0:7]};
    }
    cross s, d;
  endgroup : cov1
  function new(string name, ovm_component parent);
    super.new(name, parent);
    cov1 = new();
  endfunction
  function void write(Packet t);
    pkt = t;
    pkt_cnt++;
    cov1.sample();
  endfunction
endclass
// Declare the suffixes that will be appended to the imps and functions
`ovm_analysis_imp_decl(_BEFORE)
`ovm_analysis_imp_decl(_AFTER)
real m_before[$];
real m_after[$];
real last_b_time, last_a_time;
real longest_b_delay, longest_a_delay;
endclass
task run();
string s;
alu_txn before_txn, after_txn;
forever begin
before_fifo.get(before_txn);
after_fifo.get(after_txn);
if (!before_txn.compare(after_txn)) begin
$sformat(s, "%s does not match %s", before_txn.convert2string(), after_txn.convert2string());
ovm_report_error("Comparator Mismatch",s);
m_mismatches++;
end else begin
m_matches++;
end
end
endtask
endclass
For more complicated synchronization needs, you would use a combination of multiple write_SUFFIX() functions which
would place transaction data into some kind of shared data structure, along with code in the run() task to perform
coordination and control.
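As a sketch of that approach (component, transaction and suffix names are assumed), the component below declares two analysis "imp" exports using `ovm_analysis_imp_decl, implements a write_SUFFIX() function for each stream which pushes into a shared data structure, and coordinates the evaluation in its run() task:

`ovm_analysis_imp_decl(_EXP)
`ovm_analysis_imp_decl(_ACT)

class dual_stream_checker extends ovm_component;
  `ovm_component_utils(dual_stream_checker)

  ovm_analysis_imp_EXP #(alu_txn, dual_stream_checker) exp_export;
  ovm_analysis_imp_ACT #(alu_txn, dual_stream_checker) act_export;

  alu_txn exp_q[$];   // shared data structures filled by the write functions
  alu_txn act_q[$];
  event item_written; // wakes the run() process when either stream delivers an item

  function new(string name, ovm_component parent);
    super.new(name, parent);
    exp_export = new("exp_export", this);
    act_export = new("act_export", this);
  endfunction

  // Each write_SUFFIX() is called by the analysis port connected to its imp export:
  function void write_EXP(alu_txn t);
    exp_q.push_back(t);
    -> item_written;
  endfunction

  function void write_ACT(alu_txn t);
    act_q.push_back(t);
    -> item_written;
  endfunction

  // run() performs the coordination and control between the two streams:
  task run();
    alu_txn exp_t, act_t;
    forever begin
      @(item_written);
      while (exp_q.size() > 0 && act_q.size() > 0) begin
        exp_t = exp_q.pop_front();
        act_t = act_q.pop_front();
        if (!exp_t.compare(act_t))
          ovm_report_error("MISMATCH", "actual transaction does not match expected");
      end
    end
  endtask
endclass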
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/MonitorComponent
Overview
The first task of the analysis portion of the testbench is to monitor activity on the DUT. A Monitor, like a Driver, is a
constituent of an agent. A monitor component is similar to a driver component in that they both perform a translation
between actual signal activity and an abstract representation of that activity. The key difference between a Monitor and a
Driver is that a Monitor is always passive. It does not drive any signals on the interface. When an agent is placed in
passive mode, the Monitor continues to execute.
A Monitor communicates with DUT signals through a virtual interface, and contains code that recognizes protocol
patterns in the signal activity. Once a protocol pattern is recognized, a Monitor builds an abstract transaction model
representing that activity, and broadcasts the transaction to any interested components.
Construction
Monitors should extend ovm_monitor. They should have one analysis port and a virtual interface handle that points to a
DUT interface.
class wb_bus_monitor extends ovm_monitor;
  `ovm_component_utils(wb_bus_monitor)
  ovm_analysis_port #(wb_txn) wb_mon_ap;   // analysis port for observed transactions
  virtual wb_bus_if m_v_wb_bus_if;         // virtual interface handle (interface type name assumed)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
endclass
Passive Monitoring
Like in a scientific experiment, where the act of observing should not affect the activity observed, monitor components
should be passive. They should not inject any activity into the DUT signals. Practically speaking, this means that monitor
code should be completely read-only when interacting with DUT signals.
Recognizing Protocols
A monitor must have knowledge of protocol in order to detect recognizable patterns in signal activity. Detection can be
done by writing protocol-specific state machine code in the monitor's run() task. This code waits for a pattern of key
signal activity by observing through the virtual interface handle.
Copy-on-Write Policy
Since objects in SystemVerilog are handle-based, when a Monitor writes a transaction handle out of its analysis port,
only the handle gets copied and broadcast to subscribers. This write operation happens each time the Monitor runs
through its ongoing loop of protocol recognition in the run() task. To prevent overwriting the same object memory in the
next iteration of the loop, the handle that is broadcast should point to a separate copy of the transaction object that the
Monitor created.
This can be accomplished in two ways:
• Create a new transaction object in each iteration of (i.e. inside) the loop
• Reuse the same transaction object in each iteration of the loop, but clone the object immediately prior to calling
write() and broadcast the handle of the clone.
Example Monitor
task run();
wb_txn txn, txn_clone;
txn = wb_txn::type_id::create("txn"); // Create once and reuse
forever @ (posedge m_v_wb_bus_if.clk)
if(m_v_wb_bus_if.s_cyc) begin // Is there a valid wb cycle?
txn.adr = m_v_wb_bus_if.s_addr; // get address
txn.count = 1; // set count to one read or write
if(m_v_wb_bus_if.s_we) begin // is it a write?
txn.data[0] = m_v_wb_bus_if.s_wdata; // get data
txn.txn_type = WRITE; // set op type
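end // if(s_we) - the remainder of the protocol recognition is elided in this snapshot
// Illustrative completion (assumed): clone the reused object and broadcast the clone,
// so subscribers keep a stable copy when txn is overwritten on the next iteration
$cast(txn_clone, txn.clone());
wb_mon_ap.write(txn_clone); // analysis port name assumed
end // if(s_cyc)
endtask: run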
Ovm/Predictors
Overview
A Predictor is a verification component that represents a "golden" model of all or part of the DUT functionality. It takes
the same input stimulus that is sent to the DUT and produces expected response data that is by definition correct.
Predictors are the part of the scoreboard that generates expected transactions. They should be kept separate from the
part of the scoreboard that performs the evaluation. A predictor can have one or more input streams, which are the same
input streams that are applied to the DUT.
Construction
Predictors are typical analysis components that are subscribers to transaction streams. The inputs to a predictor are
transactions generated from monitors observing the input interfaces of the DUT. The predictors take the input
transaction(s) and process them to produce expected output transactions. Those output transactions are broadcast
through analysis ports to the evaluator part of the scoreboard, and to any other analysis component that needs to observe
predicted transactions. Internally, predictors can be written in C, C++, SV or SystemC, and are written at an abstract
level of modeling. Since predictors are written at the transaction level, they can be readily chained if needed.
Example
class alu_tlm extends ovm_subscriber #(alu_txn);
`ovm_component_utils(alu_tlm)
endclass
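Since ovm_subscriber declares write() as a pure virtual function, a predictor at minimum implements write() to compute the expected result and broadcast it. The sketch below illustrates the pattern; the port name, field usage and the predict_result() helper are assumptions:

class alu_tlm extends ovm_subscriber #(alu_txn);
  `ovm_component_utils(alu_tlm)

  // Expected results are broadcast to the evaluation part of the scoreboard:
  ovm_analysis_port #(alu_txn) results_ap;

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();
    super.build();
    results_ap = new("results_ap", this);
  endfunction

  // write() receives the stimulus transaction observed on the DUT input interface:
  function void write(alu_txn t);
    alu_txn expected;
    $cast(expected, t.clone());
    // compute the expected response at the transaction level, e.g.:
    // expected.result = predict_result(t);   // hypothetical reference function
    results_ap.write(expected);
  endfunction
endclass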
Ovm/Scoreboards
Overview
The Scoreboard's job is to determine whether or not the DUT is functioning properly. The scoreboard is usually the most difficult part of the testbench to write, but it can be generalized into two parts: The first step is determining what exactly is the correct functionality. Once the correct functionality is predicted, the scoreboard can then evaluate whether or not the actual results observed on the DUT match the predicted results. The best scoreboard architecture is to separate the prediction task from the evaluation task. This gives you the most flexibility for reuse by
allowing for substitution of predictor and evaluation models, and follows the best practice of separation of concerns.
In cases where there is a single stream of predicted transactions and a single stream of actual transactions, the scoreboard
can perform the evaluation with a simple comparator. The most common comparators are an in-order and out-of-order
comparator.
An in-order comparator assumes that matching transactions will appear in the same order from both expected and actual
streams. It gets transactions from the expected and actual side and evaluates them. The transactions will arrive
independently, so the evaluation must block until both transactions are present. In this case, an easy implementation
would be to embed two analysis fifos in the comparator and perform the synchronization and evaluation in the run() task.
Evaluation can be as simple as calling the transaction's compare() method, or it can be more involved, because for the
purposes of evaluating correct behavior, comparison does not necessarily mean equality.
class comparator_inorder extends ovm_component;
  `ovm_component_utils(comparator_inorder)

  // Embedded analysis fifos buffer the expected and actual streams; their
  // analysis_exports are connected to the monitors' analysis ports by the enclosing env
  // (member declarations reconstructed for completeness):
  tlm_analysis_fifo #(alu_txn) before_fifo;   // expected stream
  tlm_analysis_fifo #(alu_txn) after_fifo;    // actual stream
  int m_matches, m_mismatches;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    before_fifo = new("before_fifo", this);
    after_fifo = new("after_fifo", this);
  endfunction

  task run();
    string s;
    alu_txn before_txn, after_txn;
    forever begin
      before_fifo.get(before_txn);
      after_fifo.get(after_txn);
      if (!before_txn.compare(after_txn)) begin
        $sformat(s, "%s does not match %s", before_txn.convert2string(), after_txn.convert2string());
        ovm_report_error("Comparator Mismatch",s);
        m_mismatches++;
      end else begin
        m_matches++;
      end
    end
  endtask
endclass
An out-of-order comparator makes no assumption that matching transactions will appear in the same order from the
expected and actual sides. So, unmatched transactions need to be stored until a matching transaction appears on the
opposite stream. For most out-of-order comparators, an associative array is used for storage. This example comparator
has two input streams arriving through analysis exports. The implementation of the comparator is symmetrical, so the
export names do not have any real importance. This example uses embedded fifos to implement the analysis write()
functions, but since the transactions are either stored into the associative array or evaluated upon arrival, this example
could easily be written using analysis imps and write() functions.
Because of the need to determine if two transactions are a match and should be compared, this example requires
transactions to implement an index_id() function that returns a value that is used as a key for the associative array. If an
entry with this key already exists in the associative array, it means that a transaction previously arrived from the other
stream, and the transactions are compared. If no key exists, then this transaction is added to the associative array.
This example has an additional feature in that it does not assume that the index_id() values are always unique on a given
stream. In the case where multiple outstanding transactions from the same stream have the same index value, they are
stored in a queue, and the queue is the value portion of the associative array. When matches from the other stream arrive,
they are compared in FIFO order.
class ooo_comparator
#(type T = int,
type IDX = int)
extends ovm_component;
typedef T q_of_T[$];
typedef IDX q_of_IDX[$];
if (rcv_count[idx] == 0) begin
received_data.delete(idx);
rcv_count.delete(idx);
end
end // forever
endtask
task run();
fork
get_data(before_fifo, before_proc, 1);
get_data(after_fifo, after_proc, 0);
join
endtask : run
endclass : ooo_comparator
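The body of the get_data() task is largely elided above; the following simplified sketch shows the store-or-match policy it implements. The two-argument signature, the member declarations (q_of_T received_data[IDX]; int rcv_count[IDX];) and the sign convention for rcv_count (positive for outstanding "before" items, negative for outstanding "after" items) are assumptions for illustration:

task get_data(tlm_analysis_fifo #(T) txn_fifo, bit is_before);
  T txn, stored;
  IDX idx;
  forever begin
    txn_fifo.get(txn);               // blocking get from the embedded analysis fifo
    idx = txn.index_id();            // key used for the associative array
    // A match is pending only if the *other* stream has outstanding items for this key:
    if (received_data.exists(idx) &&
        ((is_before && rcv_count[idx] < 0) || (!is_before && rcv_count[idx] > 0))) begin
      stored = received_data[idx].pop_front();   // compare in FIFO order
      if (!stored.compare(txn))
        ovm_report_error("OOO_MISMATCH", txn.convert2string());
      rcv_count[idx] += is_before ? 1 : -1;
      if (rcv_count[idx] == 0) begin
        received_data.delete(idx);
        rcv_count.delete(idx);
      end
    end
    else begin
      // No partner yet - store until the matching transaction arrives:
      if (!received_data.exists(idx)) begin
        q_of_T empty_q;
        received_data[idx] = empty_q;
        rcv_count[idx] = 0;
      end
      received_data[idx].push_back(txn);
      rcv_count[idx] += is_before ? 1 : -1;
    end
  end
endtask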
Advanced Scenarios
In more advanced scenarios, there can be multiple predicted and actual transaction streams coming from multiple DUT
interfaces. In this case, a simple comparator is insufficient and the implementation of the evaluation portion of the
scoreboard is more complex and DUT-specific.
Ovm/MetricAnalyzers
Overview
Metric Analyzers watch and record non-functional behavior such as latency, power utilization, and other
performance-related measurements.
Construction
Metric Analyzers are generally standard analysis components. They implement their behavior in a way that depends on
the number of transaction streams they observe - either by extending ovm_subscriber or with analysis imp/exports. They
can perform ongoing calculations during the run phase, and/or during the post-run phases.
Example
// Metric analyzer sketch - the class wrapper and the symmetric _BEFORE stream code
// are not shown in this snapshot; the class, export and transaction names are assumed:
`ovm_analysis_imp_decl(_BEFORE)
`ovm_analysis_imp_decl(_AFTER)
class delay_analyzer extends ovm_component;
  `ovm_component_utils(delay_analyzer)
  ovm_analysis_imp_BEFORE #(alu_txn, delay_analyzer) before_export;
  ovm_analysis_imp_AFTER  #(alu_txn, delay_analyzer) after_export;
  real m_before[$];
  real m_after[$];
  real last_b_time, last_a_time;
  real longest_b_delay, longest_a_delay;
  function new(string name, ovm_component parent);
    super.new(name, parent);
    before_export = new("before_export", this);
    after_export  = new("after_export", this);
  endfunction
  // Called each time a transaction arrives on the AFTER stream:
  function void write_AFTER(alu_txn t);
    real delay;
    delay = $realtime - last_a_time;
    last_a_time = $realtime;
    m_after.push_back(delay);
  endfunction
  // Post-run calculation (e.g. in the extract phase):
  function void extract();
    foreach (m_after[i])
      if (m_after[i] > longest_a_delay) longest_a_delay = m_after[i];
  endfunction
endclass
Ovm/PostRunPhases
Overview
Many analysis components perform their analysis on an ongoing basis during the simulation run. Sometimes you need to
defer analysis until all data has been collected, or a component might need to do a final check at the end of simulation.
For these components, OVM provides the post-run phases extract, check, and report.
These phases are executed in a hierarchically bottom-up fashion on all components.
Example
// Post-run phase example - the analysis component collects data during the run
// (fields and write_SUFFIX functions as in the metric analyzer example above) and
// defers the final calculation to the post-run phases:
`ovm_analysis_imp_decl(_BEFORE)
`ovm_analysis_imp_decl(_AFTER)

real m_before[$];
real m_after[$];
real last_b_time, last_a_time;
real longest_b_delay, longest_a_delay;

// extract phase: reduce the samples collected during the run into summary metrics
// (the function wrapper is reconstructed; the snapshot elides it):
function void extract();
  foreach (m_after[i])
    if (m_after[i] > longest_a_delay) longest_a_delay = m_after[i];
endfunction

// the check and report phases would then evaluate and publish the extracted metrics
endclass
Ovm/EndOfTest
End-of-test Guide for OVM - Learn about the available end-of-test mechanisms
Topic Overview
An OVM based simulation may be divided into two stages: active and passive.
Active Stage
The active stage is where stimulus generation is done. It is entered when the run phase is initiated. This stage is complete
when either stimulus generation is complete or when a desired outcome or event is observed. An example of the latter
might be when 100% coverage is reached.
When the active stage is complete then end-of-test is initiated by calling stop_request(). This may be done indirectly
through the use of an objection (ovm_test_done) or directly by calling global_stop_request(). Typical participants in the
active stage are sequences, other generators, tests and environments.
Passive Stage
After stimulus generation is complete and the active stage is finished, the passive stage allows time for final data to pass
through the DUT, monitors to finish capturing data, scoreboards to drain etc.
The passive stage is optional and is entered when any component sets a flag to indicate it needs time to complete
activities after the end-of-test is initiated at the end of the active stage. At this time the components that have signaled
their participation in the passive stage have their stop() methods called (forked). If the passive stage is entered, the end of
the run phase is deferred until the end of the passive stage. The passive stage is finished when all components complete
their passive stage activities and all the stop() methods are finished and exit.
Ovm/EOT/ActiveStage
End of Test - Active Simulation Stage
The active stage is where stimulus generation is done. It is entered when the run phase is initiated. Participants in the
active stage are those components and objects that need to complete all activities before the end-of-test mechanism is
applied. Typical participants are sequences, tests and environments.
End-of-test Mechanics
When stop_request() is called it first checks to see if there are any raised objections to ovm_test_done. If there are, it waits
until all objections to ovm_test_done are dropped. Once all the objections are dropped, stop_request() moves on and any
further objections raised to ovm_test_done are ignored.
Every component is then checked to see if its enable_stop_interrupt flag is set. Each component with this flag set has its
stop() task forked as a parallel process. This is the entry into the passive stage. If any component has its
enable_stop_interrupt flag set the run phase continues for all components. When all the stop() methods are finished then
kill() is called on each component to stop the run phase.
If no component has its enable_stop_interrupt flag set then kill() is called immediately on each component to stop the run
phase.
The run phase is also guarded by a global watchdog timeout, whose value may be changed by
calling the set_global_timeout() method. If the timeout value is set to 0 it will result in the default timeout value being
set.
Ovm/EOT/PassiveStage
End of Test - Passive Simulation Stage
The passive stage of simulation is provided to allow time for final data to pass through the DUT, monitors to finish
capturing data, scoreboards to drain etc. It occurs after the active stage in which stimulus is generated. When the active
stage finishes is not a concern of passive stage components. The passive stage behavior is implemented in the
stop() method.
stop()
virtual task stop(string ph_name)
As described in more detail in the End-of-test Mechanics section of the active stage, when stop_request() is called each
component's enable_stop_interrupt flag is checked. For each component with its flag set, the component's stop() method is
forked as a parallel process.
stop() is a virtual method that does nothing by default. It is inherited from ovm_component. By overriding this hook
method a component may insert its end of test behavior. The argument ph_name is set to the name of the task-based
phase in which stop_request() was called, which by default is "run". If there is more than one task-based phase, ph_name
may be used to distinguish which one called stop_request() and the stop() behavior may be programmed accordingly.
run()
If the passive stage is entered the end of the run phase is deferred until the end of the passive stage.
Stop timeout
There is a watch dog timer that runs parallel to the stop() methods and if reached will terminate all stop() methods. The
default stop timeout value is `OVM_DEFAULT_TIMEOUT (9200s by default) minus the current simulation time
($time()). The stop timeout value may be changed by calling the global method set_global_stop_timeout(). If the timeout
value is set to 0 it will result in the default timeout value being set.
General
In general one may wish to make sure pending interrupts, status etc. are cleared and check shadow registers/memories to
make sure everything matches.
Scoreboards
• Wait until buffers are drained.
• Wait some fixed amount of time.
• Wait 2x the latency of the DUT.
Below is an example of a scoreboard, mii_sb, that waits until its buffers are drained. It receives and compares Ethernet
frames that are sent and received by an Ethernet Media Access Controller (MAC) chip.
task run();
wb_txn txn;
forever begin
mii_tx_act_fifo.get(act_txn); // get actual txn
mii_tx_exp_fifo.get(exp_txn); // get expected txn
tx_txn_cnt++; // increment number of received tx transactions
if(!act_txn.compare(exp_txn)) begin // are they the same?
...
endtask
forever begin
mii_rx_act_fifo.get(act_txn); // get actual txn
mii_rx_exp_fifo.get(exp_txn); // get expected txn
rx_txn_cnt++; // increment number of received tx transactions
if(act_txn.compare(exp_txn))
...
endtask
Monitors
• Set a "busy" bit when in the middle of a transaction to hold of the completion of stop().
Below is an example of a Media Independent Interface (MII) monitor that sets a busy bit at the beginning of receipt of an
Ethernet frame from an Ethernet MAC chip and clears it when the frame has been received. Its stop task will finish as
soon as the busy bit is off.
Note the busy bit is set in the get_frame() task after a transmission starts. The bit is cleared at the end of the get_frame()
task when the frame has been received. Note how the stop() task waits for the bit to be cleared if it is set.
class mii_tx_monitor extends ovm_driver #(ethernet_txn,ethernet_txn);
task run();
ethernet_txn txn;
enable_stop_interrupt = 1; // enable passive phase
forever begin
get_frame(txn); // receive txn from MAC
mii_tx_mon_ap.write(txn); // broadcast received txn
end
endtask
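The stop() implementation referred to above is not shown in this snapshot; a sketch of the described behaviour (assuming a bit called busy that is set and cleared by the get_frame() task) would be:

// stop() is forked when stop_request() is called because enable_stop_interrupt is set.
// It holds off the end of the run phase until any in-progress frame has been received.
task stop(string ph_name);
  if (busy)
    wait (busy == 0);   // get_frame() clears busy at the end of the frame
endtask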
Ovm/Objections
Objections
The ovm_objection class provides a facility for synchronization or coordination of information between participating
objects. The information that is shared is not explicitly shared between participating objects but is defined by or implicit in
the objection itself. The information is defined by what the objection "means". An example of this is the ovm_test_done
objection, where the implicit meaning is that this is an objection to the end of test.
The participating objects are either a testbench component or a sequence. The processes accessing the objection may
either be in the same component or different components. Participating objects raise or drop objections to indicate state
or status.
Objection counts are maintained hierarchically. Parent objects maintain two different counts: a count of their own
objections and a count for all their children.
It is not recommended to create and use your own objections in general. They are overly complex and have significant
overhead. It is recommended to only use the built in objection ovm_test_done.
ovm_objection Interfaces
The ovm_objection class has three interfaces or APIs.
Objection Control
Methods for raising and dropping objections and for setting the drain time.
• raise_objection ( ovm_object obj = null, int count = 1). Raises the number of objections for the source object by
count, which defaults to 1. The raise of the objection is propagated up the hierarchy.
• drop_objection ( ovm_object obj = null, int count = 1). Drops the number of objections for source object by count,
which defaults to 1. The drop of the objection is propagated up the hierarchy.
• set_drain_time ( ovm_object obj = null, time drain). Sets the drain time on the given object.
Recommendations:
• Use raise_objection/drop_objection to indicate a component's readiness.
• Always use default count.
• Limit use of drain_time to top level, if used.
Objection Status
Methods for obtaining status information regarding an objection.
• get_objection_count ( ovm_object obj). Returns objection count for object.
• get_objection_total ( ovm_object obj = null). Returns objection count for object and all children.
• get_drain_time ( ovm_object obj). Returns drain time for object ( default: 0ns).
• display_objections ( ovm_object obj = null, bit show_header = 1). Displays objection information about object.
Recommendations:
• Generally only useful for debug
• get_objection_total can be used at top level in conjunction with timeout to initiate graceful end-of-test when timeout
reached: drop_objection(this,get_objection_total(this));
Callback Hooks
• raised(). Called when raise_objection has reached object
• dropped(). Called when drop_objection has reached object
• all_dropped(). Called when drop_objection has reached object and the total count for object goes to zero
Recommendations:
Do not use the callback hooks. They are called repeatedly throughout the simulation, degrading simulation performance,
and they add unnecessary complexity.
Ovm/EOT/TestDone
End of Test Objection
The recommended way to end the active stage of simulation and to begin the end-of-test mechanics is to use the
ovm_test_done objection.
ovm_test_done
ovm_test_done is a global singleton of the type ovm_test_done_objection which is derived from the general objection
class ovm_objection. The main functionality added in this derived class is that, when all objections are dropped, a call to
ovm_top.stop_request() is made.
ovm_test_done Semantics
The implicit meaning of an objection to ovm_test_done is "I object to the termination of the run phase". A component
objects to the end of test by raising an objection on ovm_test_done. The component drops the objection to indicate that
from its perspective it is OK to end the test.
If no component raises an objection to ovm_test_done the ovm_test_done end-of-test mechanics will not be initiated and
the run phase is not terminated (by means of ovm_test_done anyway).
If an object raises an objection on ovm_test_done an objection count is incremented. When an object drops its objection
the ovm_test_done objection count is decremented. If on the drop of the objection the objection count decrements to
zero then the end-of-test mechanics will be initiated and the run phase will be terminated.
Note that if any component uses the ovm_test_done objection mechanism, other components need to use it too, otherwise
the run phase may end prematurely.
If a component calls stop_request while any objection(s) to ovm_test_done is raised, stop_request will first wait until all
objections to ovm_test_done are dropped before proceeding. Once all the objections are dropped stop_request() will
ignore any further objections raised to ovm_test_done.
ovm_test_done Mechanics
Once an objection to ovm_test_done has been raised and all objections are subsequently dropped (that is, the total
objection count for ovm_test_done is zero), three steps are taken to end the active stage and move to the passive stage of the simulation.
1. A drain amount of time is waited. Note that the drain time is a property of the object that drops the objection, not of the
ovm_test_done objection itself. The default value of the drain time is zero and it may be set with the set_drain_time() method.
2. The callback task all_dropped() is called. This is a hook provided for adding behavior; it does nothing by default.
Steps 1 and 2 are repeated up the hierarchy to the top level object in the testbench
3. A call to ovm_top.stop_request() is issued.
ovm_test_done Recommendations
In general it is recommended to use ovm_test_done rather than having components call global_stop_request() directly. This
provides a scalable approach and is better for stand-alone code. There is also a need to code "defensively", meaning you
must assume that at least one other component might be using ovm_test_done, and therefore your component must use
ovm_test_done to avoid a premature end to the test.
performance.
Do not use the drain time mechanism. The passive stage of simulation enabled by setting the enable_stop_interrupt flag
provides the same capability. Use the drain time at the top level if at all.
The above diagram shows a testbench with multiple (2) Ethernet Media Access Controller (MAC) chips as the
DUT. The testbench implements a TCP/IP protocol stack (see the diagram below). The top sequence is the
top level virtual sequence that controls stimulus generation. It generates multiple Trivial FTP (TFTP) commands to the
TFTP clients in the two TFTP Protocol Stacks. It raises an objection to ovm_test_done when it begins the generation of
commands and drops the objection when all the commands are complete signaling the end of stimulus generation. It
does this to avoid the premature end to the active simulation stage by other components raising and dropping objections
to end of test such as the TFTP client as described below.
class tftp_top_seq extends ovm_sequence #(ui_txn);
...
task body;
...
// Directed test
ovm_test_done.raise_objection(this); // raise objection to simulation done
send_command(m_global_map.orca, m_global_map.blue, writef2f,
"text_files/f1.txt", "text_files/blue_f_out1.txt");
...
endclass
The diagram above shows more detail of the TFTP Protocol Stack. The TFTP client raises an objection when it receives
a command (from the top level virtual sequence) to execute. It drops the objection on completion of the command.
class tftp_client_seq extends translation_sequence_base;
...
forever begin
m_seq_item_port.get(base_txn_h); // get a ui command
$cast(txn, base_txn_h);
ovm_test_done.raise_objection(this); // raise objection to simulation done
`ovm_info($sformatf("TFTP_CLIENT_%0h",m_wb_id),
...
endclass
// Class: hier_off_test_done_objection
// Built-in end-of-test coordination with m_hier_mode set to 0.
class hier_off_test_done_objection extends ovm_test_done_objection;
endclass
endpackage
//-----------------------------------------------------------------------------
module top();
import ovm_pkg::*;
import objection_hier_off_pkg::*; // replace ovm_test_done
import tests_pkg::*;
initial
run_test("test_2_wbs_tftp"); // create env and start running test
endmodule
Sequences
Ovm/Sequences
Learn all about Sequences, Sequencer/Driver hookup, simple and complex
Topic Overview
Sequence Overview
In testbenches written in traditional HDLs like Verilog and VHDL, stimulus is generated by layers of sub-routine calls which either execute time consuming methods (i.e. Verilog tasks or VHDL processes or procedures) or call non-time consuming methods (i.e. functions) to manipulate or generate data. Test cases implemented in these HDLs rely on being able to call sub-routines
which exist as elaborated code at the beginning of the simulation. There are several disadvantages with this approach: it
is hard to support constrained random stimulus; test cases tend to be 'hard-wired' and quite difficult to change; and
running a new test case usually requires recompiling the testbench. Sequences bring an Object Oriented approach to
stimulus generation that is very flexible and provides new options for generating stimulus.
A sequence is an example of what software engineers call a 'functor', in other words it is an object that is used as a
method. An OVM sequence contains a task called body. When a sequence is used, it is created, then the body method is
executed, and then the sequence can be discarded. Unlike an ovm_component, a sequence has a limited simulation
life-time and can therefore be described as a transient object. The sequence body method can be used to create and
execute other sequences, or it can be used to generate sequence_item objects which are sent to a driver component, via a
sequencer component, for conversion into pin-level activity or it can be used to do some combination of the two. The
sequence_item objects are also transient objects, and they contain the information that a driver needs in order to carry out
a pin level interaction with a DUT. When a response is generated by the DUT, then a sequence_item is used by the driver
to pass the response information back to the originating sequence, again via the sequencer. Creating and executing other
sequences is effectively the same as being able to call conventional sub-routines, so complex functions can be built up by
chaining together simple sequences.
In terms of class inheritance, the ovm_sequence inherits from the ovm_sequence_item which inherits from the
ovm_object. Both base classes are known as objects rather than components. The OVM testbench component hierarchy
is built from ovm_components, which have different properties, mainly to do with them being tied into a static
component hierarchy as they are built; that component hierarchy stays in place for the life-time of the simulation.
In the OVM sequence architecture, sequences are responsible for the stimulus generation flow and send sequence_items
to a driver via a sequencer component. The driver is responsible for converting the information contained within
sequence_items into pin level activity. The sequencer is an intermediate component which implements communication
channels and arbitration mechanisms to facilitate interactions between sequences and drivers. The flow of data objects is
bidirectional, request items will typically be routed from the sequence to the driver and response items will be returned to
the sequence from the driver. The sequencer end of the communication interface is connected to the driver end
during the connect phase.
Sequence Items
As sequence_items are the foundation on which sequences are built, some care needs to be taken with their
design. Sequence_item content is determined by the information that the driver needs in order to execute a pin level
transaction; ease of generation of new data object content, usually by supporting constrained random generation; and
other factors such as analysis hooks. By convention, sequence_items should also contain a number of standard method
implementations to support the use of the object in common transaction operations, these include copy, compare and
convert2string.
In order to handle the sequence_items arriving from the sequence, the driver has access to methods which are
implemented in the sequencer, these give the driver several alternate means to indicate that a sequence_item has been
consumed or to send responses back to the sequence.
The handling of sequence_items inside a sequence often relies to some extent on how the driver processes sequence
items. There are a number of common sequence-driver use models, which include:
• Unidirectional non-pipelined
• Bidirectional non-pipelined
• Pipelined
Warning:
Once a sequence has started execution it should be allowed to complete; if it is stopped prematurely, there is a
reasonable chance that the sequence-sequencer-driver communication mechanism will lock up.
Layering
In many cases, sequences are used to generate streams of data objects which can be abstracted as layers, serial
communication channels being one example and accessing a bus through a register indirection is another. The layering
mechanism allows sequences to be layered on top of each other, so that high level layer sequences are translated into
lower level sequences transparently. This form of sequence generation allows complex stimulus to be built very rapidly.
Ovm/Sequences/Items
The OVM stimulus generation process is based on sequences controlling the behaviour of drivers by generating
sequence_items and sending them to the driver via a sequencer. The framework of the stimulus generation flow is built
around the sequence structure for control, but the generation data flow uses sequence_items as the data objects.
Randomization Considerations
Sequence_items are randomized within sequences to generate traffic data objects. Therefore, stimulus data properties
should generally be declared as rand, and the sequence_item should contain any constraints required to ensure that the
values generated by default are legal, or are within sensible bounds. In a sequence, sequence_items are often randomized
using in-line constraints which extend these base level constraints.
As sequence_items are used for both request and response traffic, a good convention to follow is that request
properties should be rand and that response properties should not be rand. This optimises the randomization process and
also ensures that any collected response information is not corrupted by any randomization that might take place.
For example consider the following bus protocol sequence_item:
class bus_seq_item extends ovm_sequence_item;
`ovm_object_utils(bus_seq_item)
// etc
endclass: bus_seq_item
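A sketch of what such a sequence_item might contain (the property names are illustrative, borrowed from the bus example used elsewhere in this chapter): request properties are rand and constrained to sensible default ranges, response properties are not rand:

class bus_seq_item extends ovm_sequence_item;
  `ovm_object_utils(bus_seq_item)

  // Request properties - randomized by sequences, possibly with in-line constraints:
  rand bit [31:0] addr;
  rand bit [31:0] write_data;
  rand bit read_not_write;
  rand int delay;

  // Response properties - filled in by the driver, so not rand:
  bit error;
  bit [31:0] read_data;

  // Keep default generation within sensible bounds:
  constraint delay_bounds { delay inside {[1:20]}; }
  constraint addr_align   { addr[1:0] == 0; }

  function new(string name = "bus_seq_item");
    super.new(name);
  endfunction

  // By convention, also implement do_copy(), do_compare(), convert2string(), etc.
endclass: bus_seq_item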
Ovm/Transaction/Methods
When working with data object classes derived from ovm_objects, including ones derived from ovm_transactions,
ovm_sequence_items and ovm_sequences, there are a number of methods which are defined for common operations on
the data objects properties. In turn, each of these methods calls one or more virtual methods which are left for the user to
implement according to the detail of the data members within the object. These methods and their corresponding virtual
methods are summarised in the following table.
• copy, clone (call do_copy): copy() performs a deep copy into an existing object; clone() creates a new object and then does a deep copy of an object
• compare (calls do_compare): compares the contents of two objects of the same type
• print, sprint (call do_print): print or return a string representation of the object's contents
• record (calls do_record): records the object's properties as a transaction
• pack (calls do_pack): converts the data object format into a bit format
• unpack (calls do_unpack): converts a bit format into the data object format
The do_xxx methods can be implemented and populated using `ovm_field_xxx macros, but the resultant code is
inefficient, hard to debug and can be prone to error. The recommended approach is to implement the methods manually
which will result in improvements in testbench performance and memory footprint. For more information on the issues
involved see the page on macro cost benefit analysis.
Consider the following sequence_item which has properties in it that represent most of the common data types:
class bus_item extends ovm_sequence_item;
// Factory registration
`ovm_object_utils(bus_item)
endclass: bus_item
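The property declarations are elided above; the method implementations that follow reference the following members (types assumed from the way they are used):

  // Members assumed from the convert2string()/do_record() implementations below:
  rand int unsigned delay;
  rand bit [31:0] addr;
  rand op_code_t op_code;          // enumerated opcode type (name assumed)
  rand bit [31:0] data[];
  string slave_name;
  bit response;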
The common methods that need to be populated for this sequence_item are:
do_copy
The purpose of the do_copy method is to provide a means of making a deep copy* of a data object. The do_copy
method is either used on its own or via the ovm_object's clone() method, which allows independent duplicates of a data
object to be created. For the sequence_item example, the method would be implemented as follows:
// do_copy method:
function void do_copy(ovm_object rhs);
bus_item rhs_;
// Directly:
bus_item A, B;
// Indirectly:
$cast(A, B.clone()); // Clone returns an ovm_object which needs
// to be cast to the actual type
Note that the rhs argument is of type ovm_object since it is a virtual method, and that it therefore needs to be cast to the
actual transaction type before its fields can be copied. A design consideration for this method is that it may not always
make sense to copy all property values from one object to another.
*A deep copy is one where the value of each of the individual properties in a data object are copied to another, as
opposed to a shallow copy where just the data pointer is copied.
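Putting those pieces together, a complete do_copy() for the example item might look like this (a sketch, using the property names assumed above):

function void do_copy(ovm_object rhs);
  bus_item rhs_;

  if(!$cast(rhs_, rhs)) begin
    `ovm_fatal("do_copy", "cast failed, check type compatibility")
  end
  super.do_copy(rhs);   // copy any inherited data members first
  // Field-by-field deep copy:
  delay      = rhs_.delay;
  addr       = rhs_.addr;
  op_code    = rhs_.op_code;
  slave_name = rhs_.slave_name;
  data       = rhs_.data;          // dynamic array assignment copies the elements
  response   = rhs_.response;
endfunction: do_copy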
do_compare
The do_compare method is called by the ovm_object compare() method and it is used to compare two data objects of the
same type to determine whether their contents are equal. The do_compare() method should only be coded for those
properties which can be compared.
The ovm_comparer policy object has to be passed to the do_compare() method for compatibility with the virtual method
template, but it is not necessary to use it in the comparison function and performance can be improved by not using it.
// do_compare implementation:
function bit do_compare(ovm_object rhs, ovm_comparer comparer);
bus_item rhs_;
if(!A.compare(B)) begin
// Report and handle error
end
else begin
// Report and handle success
end
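A complete do_compare() along the same lines (again a sketch using the assumed properties; the comparer policy object is ignored for performance) might be:

function bit do_compare(ovm_object rhs, ovm_comparer comparer);
  bus_item rhs_;

  if(!$cast(rhs_, rhs)) begin
    `ovm_error("do_compare", "cast failed, check type compatibility")
    return 0;
  end
  // Only the request properties are compared; the comparer argument is not used:
  return (super.do_compare(rhs, comparer) &&
          (delay      == rhs_.delay)      &&
          (addr       == rhs_.addr)       &&
          (op_code    == rhs_.op_code)    &&
          (slave_name == rhs_.slave_name) &&
          (data       == rhs_.data));
endfunction: do_compare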
convert2string
In order to get debug or status information printed to a simulator transcript or a log file from a data object there needs to
be a way to convert the object's contents into a string representation; this is the purpose of the convert2string() method.
Calling the method will return a string detailing the values of each of the properties formatted for transcript display or for
writing to a file. The format is determined by the user:
// Implementation example:
function string convert2string();
string s;
s = super.convert2string();
// Note the use of \t (tab) and \n (newline) to format the data in columns
// The enumerated op_code types .name() method returns a string corresponding to its value
$sformat(s, "%s\n delay \t%0d\n addr \t%0h\n op_code \t%s\n slave_name \t%s\n",
s, delay, addr, op_code.name(), slave_name);
// For an array we need to iterate through the values:
foreach(data[i]) begin
$sformat(s, "%s data[%0d] \t%0h\n", s, i, data[i]);
end
$sformat(s, "%s response \t%0b\n", s, response);
return s;
endfunction: convert2string
do_print
The do_print() method is called by the ovm_object print() method. Its original purpose was to print out a string
representation of an ovm data object using one of the ovm_printer policy classes. However, a higher performance version
of the same functionality can be achieved by implementing the method as a wrapper around a $display() call that takes
the string returned by a convert2string() call as an argument:
function void do_print(ovm_printer printer);
if(printer.knobs.sprint == 0) begin
$display(convert2string());
end
else begin
printer.m_string = convert2string();
end
endfunction: do_print
This implementation avoids the use of the ovm_printer policy classes, takes less memory and gives higher performance.
However, this is at the expense of not being able to use the various formatted ovm printer options.
To achieve full optimisation, avoid using the print() and sprint() methods altogether and call the convert2string()
method directly.
do_record
The do_record() method is intended to support the viewing of data objects as transactions in a waveform GUI. Like
the printing data object methods, the principle is that the fields that are recorded are visible in the transaction viewer. The
underlying implementation of the do_record() method is simulator specific and for Questa involves the use of the
$add_attribute() system call:
// Macro to help with recording - Questa specific
`define ovm_record_field(NAME,VALUE) \
$add_attribute(recorder.tr_handle, VALUE, NAME);
// do_record:
function void do_record(ovm_recorder recorder);
super.do_record(recorder); // To record any inherited data members
`ovm_record_field("delay", delay)
`ovm_record_field("addr", addr)
`ovm_record_field("op_code", op_code.name())
`ovm_record_field("slave_name", slave_name)
foreach(data[i]) begin
`ovm_record_field($sformatf("data[%0d]", i), data[i])
end
`ovm_record_field("response", response)
endfunction: do_record
The transactions that are recorded by implementing do_record() and by turning on the recording_detail are available in
the sequencer with a transaction stream handle name of aggregate_items.
For more information on transaction recording see the related article on TransactionRecording.
Ovm/Sequences/API
Sequence API Fundamentals
An ovm_sequence is derived from an ovm_sequence_item and it is parameterised with the type of sequence_item that it
will send to a driver via a sequencer. The two most important properties of a sequence are the body method and the
m_sequencer handle.
Running a sequence
To get a sequence to run there are three steps that need to occur:
m_seq = my_sequence::type_id::create("m_seq");
Using the factory creation method allows the sequence to be overridden with a sequence of a derived type as a means of
varying the stimulus generation.
// Using randomization
assert(m_seq.randomize() with {no_iterations inside {[5:20]};});
It is possible to call the sequence start method without any arguments, and this will result in the sequence running
without a direct means of being able to connect to a driver.
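Pulling the three steps together, the code to run a sequence from a test or another sequence might look like this (a sketch; the sequencer path is an assumption and depends on the testbench hierarchy):

my_sequence m_seq;

// Step 1 - Create the sequence via the factory:
m_seq = my_sequence::type_id::create("m_seq");

// Step 2 - Configure/randomize it:
assert(m_seq.randomize() with {no_iterations inside {[5:20]};});

// Step 3 - Start it on the target sequencer (blocks until the body method completes):
m_seq.start(m_env.m_agent.m_sequencer);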
Step 1 - Creation
The sequence_item is derived from ovm_object and should be created via the factory:
Using the factory creation method allows the sequence_item to be overridden with a sequence_item of a derived type if
required.
Step 3 - Set
The sequence_item is prepared for use, usually through randomization, but it may also be initialised by setting properties
directly.
Step 4 - Go - finish_item()
The finish_item() call is made, which blocks until the driver has completed its side of the transfer protocol for the item.
No simulation time should be consumed between start_item() and finish_item().
my_sequence_item item;
task body;
// Step 1 - Creation
item = my_sequence_item::type_id::create("item");
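// (illustrative completion of the truncated example)
// Step 2 - Ready: start_item() blocks until the driver requests an item
start_item(item);
// Step 3 - Set: late randomization, possibly with in-line constraints
assert(item.randomize());
// Step 4 - Go: finish_item() blocks until the driver has completed the item
finish_item(item);
endtask: body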
Late Randomization
In the sequence_item flow above, steps 2 and 3 could be done in any order. However, leaving the randomization of the
sequence_item until just before the finish_item() method call has the potential to allow the sequence_item to be
randomized according to conditions true at the time of generation. This is sometimes referred to as late randomization.
The alternative approach is to generate the sequence_item before the start_item() call; in this case the item is generated
before it is necessarily clear how it is going to be used.
In previous generation verification methodologies, such as Specman and the AVM, generation was done at the beginning
of the simulation and a stream of pre-prepared sequence_items was sent across to the driver. With late randomization,
sequence_items are generated just in time and on demand.
Coding Guidelines
Justification:
Although sequences can also be started using start_item() and finish_item(), it is clearer to the user whether a sequence
or a sequence item is being processed if this convention is followed.
Using the start() call also allows the user to control whether the sequences pre_body() and post_body() hook methods are
called. By default, the start() method enables the calling of the pre and post_body() methods, but this can be disabled by
passing an extra argument into the start() method. Also note that the start_item() and finish_item() calls do not call pre or
post_body().
Using start() for sequences means that a user does not need to know whether a sub-sequence contains one of these hook
methods.
Justification:
Keeping this separation allows sequences to be reused as stimulus when an OVM testbench is linked to an emulator for
hardware acceleration.
Ovm/Connect/Sequencer
The transfer of request and response sequence items between sequences and their target driver is facilitated by a bidirectional TLM communication mechanism implemented in the sequencer. The ovm_driver class contains an ovm_seq_item_pull_port which should be connected to an ovm_seq_item_pull_export in the sequencer associated with the driver. The port and export classes are parameterised with the types of the sequence_items that are going to be used for request and response transactions. Once the port-export connection is made, the driver code can use the API implemented in the export to get
request sequence_items from sequences and return responses to them.
The connection between the driver port and the sequencer export is made using a TLM connect method during the
connect phase:
// Driver parameterised with the same sequence_item for request & response
// response defaults to request
class adpcm_driver extends ovm_driver #(adpcm_seq_item);
....
endclass: adpcm_driver
// Sequencer parameterised with the same sequence item for request & response
class adpcm_sequencer extends ovm_sequencer #(adpcm_seq_item);
...
endclass: adpcm_sequencer
// Agent containing the driver and sequencer (class declaration assumed, as in the
// example further down the page):
class adpcm_agent extends ovm_agent;
  adpcm_driver m_driver;
  adpcm_sequencer m_sequencer;
  adpcm_agent_config m_cfg;

  // Sequencer-Driver connection:
  function void connect();
    if(m_cfg.active == OVM_ACTIVE) begin // The agent is actively driving stimulus
      m_driver.seq_item_port.connect(m_sequencer.seq_item_export); // TLM connection
      m_driver.vif = m_cfg.vif; // Virtual interface assignment
    end
  endfunction: connect
endclass: adpcm_agent
The connection between a driver and a sequencer is typically made in the connect() method of an agent. With the
standard OVM driver and sequencer base classes, the TLM connection between a driver and sequencer is a one to one
connection - multiple drivers are not connected to a sequencer, nor are multiple sequencers connected to a driver.
In addition to this bidirectional TLM port, there is an analysis_port in the driver which can be connected to an
analysis_export in the sequencer to implement a unidirectional response communication path between the driver and the
sequencer. This is a historical artifact and provides redundant functionality which is not generally used. The bidirectional
TLM interface provides all the functionality required. If this analysis port is used, then the way to connect it is as
follows:
// Same agent as in the previous bidirectional example:
class adpcm_agent extends ovm_agent;
adpcm_driver m_driver;
adpcm_sequencer m_sequencer;
adpcm_agent_config m_cfg;
// Connect method:
function void connect();
if(m_cfg.active == OVM_ACTIVE) begin
m_driver.seq_item_port.connect(m_sequencer.seq_item_export); // Always need this
m_driver.rsp_port.connect(m_sequencer.rsp_export); // Response analysis port connection
m_driver.vif = m_cfg.vif;
end
//...
endfunction: connect
endclass: adpcm_agent
Note that the bidirectional TLM connection will always have to be made to effect the communication of requests.
One possible use model for the rsp_port is to notify other components when a driver returns a response, otherwise it is
not needed.
Ovm/Driver/Sequence API
The ovm_driver is an extension of the ovm_component class that adds an ovm_seq_item_pull_port which is used to
communicate with a sequence via a sequencer. The ovm_driver is a parameterised class and it is parameterised with the
type of the request sequence_item and the type of the response sequence_item. In turn, these parameters are used to
parameterise the ovm_seq_item_pull_port. The fact that the response sequence_item can be parameterised independently
means that a driver can return a different response item type from the request type. In practice, most drivers use the same
sequence item for both request and response, so in the source code the response sequence_item type defaults to the
request sequence_item type.
The use model for the ovm_driver class is that it consumes request (REQ) sequence_items from the sequencer's request
FIFO using a handshaked communication mechanism, and optionally returns response (RSP) sequence_items to the
sequencer's response FIFO. The handle for the seq_item_pull_port within the ovm_driver is the seq_item_port. The
API used by driver code to interact with the sequencer is referenced by the seq_item_port, but is actually implemented in
the sequencer's seq_item_export (this is standard TLM practice).
OVM Driver API
The driver sequencer API calls are:
get_next_item
This method blocks until a REQ sequence_item is available in the sequencer's request FIFO and then returns with a
pointer to the REQ object.
The get_next_item() call implements half of the driver-sequencer protocol handshake, and it must be followed by an
item_done() call which completes the handshake. Making another get_next_item() call before issuing an item_done() call
will result in a protocol error and driver-sequencer deadlock.
try_next_item
This is a non-blocking variant of the get_next_item() method. It will return a null pointer if there is no REQ
sequence_item available in the sequencer's request FIFO. However, if there is a REQ sequence_item available it will
complete the first half of the driver-sequencer handshake and must be followed by an item_done() call to complete the
handshake.
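A common use of try_next_item() is to let the driver keep the interface busy with idle cycles when no stimulus is pending (a sketch; the drive_idle_cycle() and drive_transfer() tasks and the item type are assumptions):

// Driver run loop using try_next_item() so the bus is never left waiting:
task run;
  bus_seq_item req_item;
  forever begin
    seq_item_port.try_next_item(req_item);
    if (req_item == null) begin
      drive_idle_cycle();            // hypothetical task driving an idle bus cycle
    end
    else begin
      drive_transfer(req_item);      // hypothetical task driving the request onto the bus
      seq_item_port.item_done();     // complete the handshake for the accepted item
    end
  end
endtask: run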
item_done
The non-blocking item_done() method completes the driver-sequencer handshake and it should be called after a
get_next_item() or a successful try_next_item() call.
If it is passed no argument or a null pointer it will complete the handshake without placing anything in the sequencer's
response FIFO. If it is passed a pointer to a RSP sequence_item as an argument, then that pointer will be placed in the
sequencer's response FIFO.
peek
If no REQ sequence_item is available in the sequencer's request FIFO, the peek() method will block until one is available
and then return with a pointer to the REQ object, having executed the first half of the driver-sequencer handshake. Any
further calls to peek() before a get() or an item_done() call will result in a pointer to the same REQ sequence item being
returned.
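As a sketch of how peek() might be used, assuming a req handle of the request type and a virtual interface vif, a driver can examine the next request before committing to it and remove it later with get():
forever begin
seq_item_port.peek(req); // Block until a request is available, without removing it
// Drive the transfer onto the interface using the fields of req
@(posedge vif.clk);
seq_item_port.get(req); // Complete the handshake and remove the item from the FIFO
end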
get
The get() method blocks until a REQ sequence_item is available in the sequencer's request FIFO. Once one is available,
it does a complete protocol handshake and returns with a pointer to the REQ object.
put
The put() method is non-blocking and is used to place a RSP sequence_item in the sequencer's response FIFO.
The put() method can be called at any time and is not connected with the driver-sequencer request handshaking
mechanism.
Note: The get_next_item(), get() and peek() methods initiate the sequence arbitration process, which results in a
sequence_item being returned from the active sequence which has been selected. This means that the driver is effectively
pulling sequence_items from the active sequences as it needs them.
//
// Driver run method
//
task run;
bus_seq_item req_item;
forever begin
seq_item_port.get_next_item(req_item); // Blocking call returning the next transaction
@(posedge vif.clk);
vif.addr = req_item.address; // vif is the driver's Virtual Interface
//
// etc
//
// End of bus cycle
if(req_item.read_or_write == READ) begin // Assign response data to the req_item fields
req_item.rdata = vif.rdata;
end
req_item.resp = vif.error; // Assign response to the req_item response field
seq_item_port.item_done(); // Signal to the sequence that the driver has finished with the item
end
endtask: run
The corresponding sequence implementation would be a start_item() followed by a finish_item(). Since both the driver
and the sequence are pointing to the same sequence_item, any data returning from the driver can be referenced within the
sequence via the sequence_item handle. In other words, when the handle to a sequence_item is passed as an argument to
the finish_item() method, the driver's get_next_item() method call completes with a pointer to the same sequence_item.
When the driver makes any changes to the sequence_item it is really updating the object inside the sequence. The driver's
call to item_done() unblocks the finish_item() call in the sequence and then the sequence can access the fields in the
sequence_item, including those which the driver may have updated as part of the response side of the pin level
transaction.
//
// Sequence body method:
//
task body;
bus_seq_item req_item;
bus_seq_item req_item_c;
req_item = bus_seq_item::type_id::create("req_item");
repeat(10) begin
$cast(req_item_c, req_item.clone()); // Good practice to clone the req_item
start_item(req_item_c);
assert(req_item_c.randomize());
finish_item(req_item_c); // Driver has returned REQ with the response fields updated
`ovm_info("body", req_item_c.convert2string(), OVM_LOW)
end
endtask: body
//
// run method within the driver
//
task run;
REQ req_item; //REQ is parameterized type for requests
RSP rsp_item; //RSP is parameterized type for responses
forever begin
seq_item_port.get(req_item); // finish_item in sequence is unblocked
@(posedge vif.clk);
vif.addr = req_item.addr;
//
// etc
//
// End of bus transfer
$cast(rsp_item, req_item.clone()); // Create a response transaction by cloning req_item
rsp_item.set_id_info(req_item); // Copy the sequence id information into the response
// ... populate the response fields from the bus ...
seq_item_port.put(rsp_item); // Return the response to the sequence
end
endtask: run
//
// Corresponding code within the sequence body method
//
task body;
REQ req_item; //REQ is parameterized type for requests
RSP rsp_item; //RSP is parameterized type for responses
repeat(10) begin
req_item = bus_seq_item::type_id::create("req_item");
start_item(req_item);
req_item.randomize();
finish_item(req_item); // This passes to the driver get() call and is returned immediately
get_response(rsp_item); // Block until a response is received
`ovm_info("body", rsp_item.convert2string(), OVM_LOW);
end
endtask: body
set_id_info
The ovm_sequence_item contains an id field which is set by a sequencer during the sequence start_item() call. This id
allows the sequencer to keep track of which sequence each sequence_item is associated with, and this information is used
to route response items back to the correct sequence. Although in the majority of cases only one sequence is actively
communicating with a driver, the mechanism is always in use. The sequence_item set_id_info method is used to set a
response item id from a the id field in request item.
If a request sequence_item is returned then the sequence id is already set. However, when a new or cloned response item
is created, it must have its id set.
task run;
REQ req_item;
RSP rsp_item;
forever begin
seq_item_port.get(req_item);
assert($cast(rsp_item, req_item.clone())); // Note: cloning does not copy the id info
rsp_item.set_id_info(req_item); // This sets the rsp_item id to the req_item id
//
// Do the pin level transaction, populate the response fields
//
// Return the response:
seq_item_port.put(rsp_item);
//
end
endtask: run
Ovm/Sequences/Generation
The ovm_sequence_base class extends the ovm_sequence_item class by adding a body task method. The sequence is
used to generate stimulus through the execution of its body task. A sequence object is designed to be a transient dynamic
object which means that it can be created, used and then garbage collected after it has been dereferenced.
The use of sequences in the OVM enables a very flexible approach to stimulus generation. Sequences are used to control
the generation and flow of sequence_items into drivers, but they can also create and execute other sequences, either on
the same driver or on a different one. Sequences can also mix the generation of sequence_items with the execution of
sub-sequences. Since sequences are objects, the judicious use of polymorphism enables the creation of interesting
randomized stimulus generation scenarios.
In any sequence stimulus generation process there are three primary layers in the flow:
1. The master control thread - This may be a run task in an OVM test component or a high level sequence such as a
virtual sequence or a default sequence. The purpose of this thread is to start the next level of sequences.
2. The individual sequences - These may be stand-alone sequences that simply send sequence_items to a driver or they
may in turn create and execute sub-sequences.
3. The sequence_item - This contains the information that enables a driver to perform a pin level transaction. The
sequence item contains rand fields which are randomized with constraints during generation within a sequence.
// Parent sequence body method running child sub_sequences within a fork join_any
task body;
//
// Code creating and randomizing the child sequences
//
fork
seq_A.start(m_sequencer);
seq_B.start(m_sequencer);
seq_C.start(m_sequencer);
join_any
// First past the post completes the fork and disables it
disable fork;
// Assuming seq_A completes first - seq_B and seq_C will be terminated in an indeterminate
// way, locking up the sequencer
endtask: body
// The way to achieve the desired functionality is to remove the fork join_none from sequence_A
// and to fork join the two sequences in the control thread:
//
task run;
// ....
fork
sequence_A_h.start(m_sequencer);
another_seq_h.start(m_sequencer);
join
// ....
endtask: run
Randomized Fields
Like a sequence_item, a sequence can contain data fields that can be marked as rand fields. This means that a sequence
can be made to behave differently by randomizing its variables before starting it. The use of constraints within the
sequence allows the randomization to be within "legal" bounds, and the use of in-line constraints allows either specific
values or values within ranges to be generated.
Typically, the fields that are randomized within a sequence control the way in which it generates stimulus. For instance, a
sequence that moves data from one block of memory to another would contain a randomized source start address, a
destination start address and a transfer size. The transfer size could be constrained to be within a system limit - say 1K
bytes. When the sequence is randomized, the start locations would be constrained to be within the bounds of the relevant
memory regions.
//
// This sequence shows how data members can be set to rand values
//
// The sequence reads one block of memory (src_addr) into a buffer and then
// writes the buffer into another block of memory (dst_addr). The size of the
// block moved is controlled by the rand transfer_size field.
//
// (The sequence_item type and its field names are illustrative.)
class mem_trans_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(mem_trans_seq)
// Randomised variables
rand logic[31:0] src_addr;
rand logic[31:0] dst_addr;
rand int transfer_size;
// Internal buffer
logic[31:0] buffer[];
//
// Keep the transfer size within the system limit and the addresses word aligned:
//
constraint page_size {
transfer_size inside {[1:1024]};
}
constraint address_alignment {
src_addr[1:0] == 0;
dst_addr[1:0] == 0;
}
function new(string name = "mem_trans_seq");
super.new(name);
endfunction
task body;
bus_seq_item req = bus_seq_item::type_id::create("req");
buffer = new[transfer_size];
// Read the source block into the buffer:
for(int i = 0; i < transfer_size; i++) begin
start_item(req);
assert(req.randomize() with {addr == src_addr + (i*4); read_not_write == 1;});
finish_item(req);
buffer[i] = req.read_data;
end
// Write the buffer out to the destination block:
for(int i = 0; i < transfer_size; i++) begin
start_item(req);
assert(req.randomize() with {addr == dst_addr + (i*4); read_not_write == 0; write_data == buffer[i];});
finish_item(req);
end
endtask: body
endclass: mem_trans_seq
//
// Test which randomizes the sequence with an in-line constraint before starting it
// (only the run method detail is shown; the env and agent handles are assumed)
//
class seq_rand_test extends ovm_test;
`ovm_component_utils(seq_rand_test)
function new(string name = "seq_rand_test", ovm_component parent = null);
super.new(name, parent);
endfunction
task run;
mem_trans_seq seq = mem_trans_seq::type_id::create("seq");
//
// Fix the transfer size with an in-line constraint, randomize the addresses:
//
assert(seq.randomize() with {transfer_size == 128;});
seq.start(m_agent.m_sequencer);
global_stop_request();
endtask: run
endclass: seq_rand_test
A SystemVerilog class can randomize itself by calling this.randomize(), which means that a sequence can re-randomize
itself in a loop.
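As a brief sketch of this idea, assuming a sequence with rand fields such as the mem_trans_seq shown above:
task body;
repeat(4) begin
assert(this.randomize()); // Re-randomize the sequence's own rand fields on each iteration
// ... generate the transfers using the newly randomized src_addr, dst_addr and transfer_size ...
end
endtask: body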
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
// This class shows how to reuse the persistent values within a sequence.
// It runs the mem_trans_seq once with randomized values and then repeats it
// until an upper limit is reached. This shows how the end address values are reused on each repeat.
//
class rpt_mem_trans_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(rpt_mem_trans_seq)
mem_trans_seq trans_seq;
function new(string name = "rpt_mem_trans_seq");
super.new(name);
endfunction
task body;
logic[31:0] next_src;
trans_seq = mem_trans_seq::type_id::create("trans_seq");
// First transfer:
assert(trans_seq.randomize());
trans_seq.start(m_sequencer);
repeat(3) begin // Repeat count is illustrative
// Each block transfer continues from where the last one left off
next_src = trans_seq.dst_addr + (trans_seq.transfer_size * 4);
assert(trans_seq.randomize() with {src_addr == local::next_src;});
trans_seq.start(m_sequencer);
end
endtask: body
endclass: rpt_mem_trans_seq
class rand_order_seq extends ovm_sequence #(bus_seq_item); // (item type assumed)
`ovm_object_utils(rand_order_seq)
//
// The sub-sequences are created and put into an array of
// the common base type.
//
// Then the array order is shuffled before each sequence is
// randomized and then executed
//
task body;
bus_seq_base seq_array[4];
seq_array[0] = n_m_rw__interleaved_seq::type_id::create("seq_0");
seq_array[1] = rwr_seq::type_id::create("seq_1");
seq_array[2] = n_m_rw_seq::type_id::create("seq_2");
seq_array[3] = fill_memory_seq::type_id::create("seq_3");
// Shuffle the execution order, then randomize and start each sequence in turn:
seq_array.shuffle();
foreach(seq_array[i]) begin
assert(seq_array[i].randomize());
seq_array[i].start(m_sequencer);
end
endtask: body
endclass: rand_order_seq
Sequences can also be overridden with sequences of a derived type using the OVM factory, see the article on overriding
sequences for more information. This approach allows a generation flow to change its characteristics without having to
change the original sequence code.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Sequences/Overrides
Sometimes, during stimulus generation, it is useful to change the behaviour of sequences or sequence items. The
OVM factory provides an override mechanism to be able to substitute one object for another without changing any
testbench code and without having to recompile it.
The OVM factory allows factory registered objects to be overridden by objects of a derived type. This means that when
an object is constructed using the <class_name>::type_id::create() approach, then a change in the factory lookup for that
object results in a pointer to an object of a derived type being returned. For instance, if there is sequence of type seq_a,
and this is extended to create a sequence of type seq_b then seq_b can be used to override seq_a.
There are two types of factory override available - a type override, and an instance override.
//
// Run method
//
task run;
a_seq s_a; // Base type
b_seq s_b; // b_seq extends a_seq
c_seq s_c; // c_seq extends b_seq
// Type override: any factory creation of an a_seq now returns a b_seq
// (the override, create and start calls shown here are a sketch):
a_seq::type_id::set_type_override(b_seq::get_type());
s_a = a_seq::type_id::create("s_a"); // The factory returns a b_seq because of the override
s_a.start(m_env.m_agent.m_sequencer); // Sequencer path is illustrative
global_stop_request();
endtask: run
Ovm/Sequences/Virtual
A virtual sequence is a sequence which controls stimulus generation using several sequencers. Since sequences,
sequencers and drivers are focused on point interfaces, almost all testbenches require a virtual sequence to co-ordinate
the stimulus across different interfaces and the interactions between them. A virtual sequence is often the top level of the
sequence hierarchy. A virtual sequence might also be referred to as a 'master sequence' or a 'co-ordinator sequence'.
A virtual sequence differs from a normal sequence in that its primary purpose is not to send sequence items. Instead, it
generates and executes sequences on different target agents. To do this it contains handles for the target sequencers and
these are used when the sequences are started.
// Creating a useful virtual sequence type:
typedef ovm_sequence #(ovm_sequence_item) ovm_virtual_sequence;
// A virtual sequence containing handles for its target sequencers
// (the sequencer handle types are illustrative; sub-sequence creation is elided):
class my_seq extends ovm_virtual_sequence;
`ovm_object_utils(my_seq)
a_sequencer_t a_sequencer;
b_sequencer_t b_sequencer;
task body();
...
// Start interface specific sequences on the appropriate target sequencers:
aseq.start( a_sequencer , this );
bseq.start( b_sequencer , this );
endtask
endclass
In order for the virtual sequence to work, the sequencer handles have to be assigned. Typically, a virtual sequence is
created in a test class in the run phase and the assignments to the sequencer handles within the virtual sequence object are
made by the test. Once the sequencer handles are assigned, the virtual sequence is started using a null for the sequencer
handle.
my_seq vseq = my_seq::type_id::create("vseq");
vseq.a_sequencer = env.subenv1.bus_agent.sequencer;
vseq.b_sequencer = env.subenv2.subsubenv1.bus_agent3.sequencer;
vseq.start( null );
There are several variations on the virtual sequence theme. There is nothing to stop the virtual sequence being started on
a sequencer and sending sequence items to that sequencer whilst also executing other sequences on their target
sequencers. The virtual sequence does not have to be executed by the test, it may be executed by an environment
encapsulating a number of agents. For a large testbench with many agents and several areas of concern there may be
several virtual sequences running concurrently.
In addition to target sequencer handles, a virtual sequence may also contain handles to other testbench resources such as
register models which would be used by the sub-sequences.
Recommended Virtual Sequence Initialisation Methodology
In order to use the OVM effectively, many organisations separate the implementation of the testbench from the
implementation of the test cases. This is either a conceptual separation or an organisational separation. The testbench
implementor should provide a test base class and a base virtual sequence class from which test cases can be derived. The
test base class is responsible for building and configuring the verification environment component hierarchy, and
specifying which virtual sequence(s) will run. The test base class should also contain a method for assigning sequence
handles to virtual sequences derived from the virtual sequence base class. With several layers of vertical reuse, the
hierarchical paths to target sequencers can become quite long. Since the hierarchical paths to the target sequencers are
known to the testbench writer, this information can be encapsulated for all future test case writers.
As an example consider the testbench illustrated in the diagram. To illustrate a degree of virtual reuse, there are four
target agents organised in two sub-environments within a top-level environment. The virtual sequence base class contains
handles for each of the target sequencers:
class top_vseq_base extends ovm_sequence #(ovm_sequence_item);
`ovm_object_utils(top_vseq_base)
// Handles for each of the target sequencers (handle types are illustrative):
a_agent_sequencer A1;
a_agent_sequencer A2;
b_agent_sequencer B;
c_agent_sequencer C;
endclass: top_vseq_base
In the test base class a method is created which can be used to assign the sequencer handles to the handles in classes
derived from the virtual sequence base class.
class test_top_base extends ovm_test;
`ovm_component_utils(test_top_base)
env_top m_env;
endclass: test_top_base
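The init_vseq() method called in the test case below is not shown above; a minimal sketch of what could be added to test_top_base, assuming illustrative hierarchical paths from m_env down to the target sequencers, is:
// Assign the virtual sequence's target sequencer handles (paths are illustrative):
function void init_vseq(top_vseq_base vseq);
vseq.A1 = m_env.m_subenv1.m_agent_A1.m_sequencer;
vseq.A2 = m_env.m_subenv1.m_agent_A2.m_sequencer;
vseq.B  = m_env.m_subenv2.m_agent_B.m_sequencer;
vseq.C  = m_env.m_subenv2.m_agent_C.m_sequencer;
endfunction: init_vseq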
In a test case derived from the test base class the virtual sequence initialisation method is called before the virtual
sequence is started.
class init_vseq_from_test extends test_top_base;
`ovm_component_utils(init_vseq_from_test)
task run();
vseq_A1_B_C vseq = vseq_A1_B_C::type_id::create("vseq");
ovm_test_done.raise_objection(this);
init_vseq(vseq); // Using method from test base class to assign sequence handles
vseq.start(null); // null because no target sequencer
ovm_test_done.drop_objection(this);
endtask: run
endclass: init_vseq_from_test
The virtual sequence is derived from the virtual sequence base class and requires no initialisation code.
class vseq_A1_B_C extends top_vseq_base;
`ovm_object_utils(vseq_A1_B_C)
task body();
a_seq a = a_seq::type_id::create("a");
b_seq b = b_seq::type_id::create("b");
c_seq c = c_seq::type_id::create("c");
a.start(A1);
fork
b.start(B);
c.start(C);
join
endtask: body
endclass: vseq_A1_B_C
This example illustrates how the target sequencer handles can be assigned from the test case, but the same approach
could be used for passing handles to other testbench resources such as register models and configuration objects which
may be relevant to the operation of the virtual sequence or its sub-sequences.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
An alternative is to use the find_all() method to find all the sequencers that match a search string in an environment. Again,
this relies on the sequencer paths being unique, which is an assumption that will most likely break down in larger scale environments.
//
// A virtual sequence which runs stand-alone, but finds its own sequencers
class virtual_sequence_base extends ovm_sequence #(ovm_sequence_item);
`ovm_object_utils(virtual_sequence_base)
// Sub-Sequencer handles
bus_sequencer_a A;
gpio_sequencer_b B;
//
function void get_sequencers;
ovm_component tmp[$];
//find the A sequencer in the testbench
tmp.delete(); //Make sure the queue is empty
ovm_top.find_all("*m_bus_agent_h.m_sequencer_h", tmp);
if (tmp.size() == 0)
`ovm_fatal(report_id, "Failed to find mem sequencer")
else if (tmp.size() > 1)
`ovm_fatal(report_id, "Matched too many components when looking for mem sequencer")
else
$cast(A, tmp[0]);
//find the B sequencer in the testbench
tmp.delete(); //Make sure the queue is empty
ovm_top.find_all("*m_gpio_agent_h.m_sequencer_h", tmp);
if (tmp.size() == 0)
`ovm_fatal(report_id, "Failed to find gpio sequencer")
else if (tmp.size() > 1)
`ovm_fatal(report_id, "Matched too many components when looking for gpio sequencer")
else
$cast(B, tmp[0]);
endfunction: get_sequencers
endclass: virtual_sequence_base
Ovm/Sequences/VirtualSequencer
A virtual sequence is a sequence which
controls a stimulus generation process
using several sequencers. Since
sequences, sequencers and drivers are
focused on interfaces, almost all
testbenches require a virtual sequence to
co-ordinate the stimulus across different
interfaces and the interactions between
them.
A virtual sequence can be implemented in one of two ways, a stand-alone virtual sequence or a virtual sequence that is
designed to run on a virtual sequencer.
A virtual sequencer is a sequencer that is not connected to a driver itself, but contains handles for sequencers in the
testbench hierarchy.
// The virtual sequencer contains handles for the target sequencers
// (the base class parameterisation is assumed):
class virtual_sequencer extends ovm_sequencer #(ovm_sequence_item);
`ovm_component_utils(virtual_sequencer)
// Note that the handles are in terms that the test writer understands
bus_master_sequencer bus;
gpio_sequencer gpio;
endclass: virtual_sequencer
// The virtual sequence base class picks up its sub-sequencer handles from the
// virtual sequencer that it is running on:
class virtual_sequence_base extends ovm_sequence #(ovm_sequence_item);
`ovm_object_utils(virtual_sequence_base)
// Sub-sequencer handles:
bus_master_sequencer bus;
gpio_sequencer gpio;
task body;
virtual_sequencer v_sqr;
if(!$cast(v_sqr, m_sequencer)) begin
`ovm_fatal(get_full_name(), "Virtual sequencer cast failed - check which sequencer the sequence is started on")
end
bus = v_sqr.bus;
gpio = v_sqr.gpio;
endtask: body
endclass: virtual_sequence_base
class example_virtual_seq extends virtual_sequence_base;
`ovm_object_utils(example_virtual_seq)
random_bus_seq bus_seq;
random_gpio_chunk_seq gpio_seq;
task body();
super.body; // Sets up the sub-sequencer pointers
gpio_seq = random_gpio_chunk_seq::type_id::create("gpio_seq");
bus_seq = random_bus_seq::type_id::create("bus_seq");
repeat(20) begin
bus_seq.start(bus);
gpio_seq.start(gpio);
end
endtask: body
endclass: example_virtual_seq
test_seq.start(m_env.m_v_sqr);
//...
endtask: run
class soc_env_virtual_sqr extends ovm_sequencer #(ovm_sequence_item);
`ovm_component_utils(soc_env_virtual_sqr)
gpio_sequencer gpio;
bus_sequencer gpio_bus;
endclass: soc_env_virtual_sqr
Coding Guideline: Virtual Sequences should check for null sequencer pointers before
executing
Virtual sequence implementations are based on the assumption that they can run sequences on sub-sequencers within
agents. However, agents may be passive or active depending on the configuration of the testbench. In order to prevent
test cases crashing with null handle errors, virtual sequences should check that all of the sequencers that they intend to
use have valid handles. If a null sequencer handle is detected, then they should bring the test case to an end with an
ovm_report_fatal call.
// Either inside the virtual sequence base class or in the virtual sequence itself:
task body;
soc_env_virtual_sqr v_sqr;
if(!$cast(v_sqr, m_sequencer)) begin
`ovm_fatal(get_full_name(), "Virtual sequencer cast failed: this test case will fail, check which sequencer the sequence is started on")
end
if(v_sqr.gpio == null) begin
`ovm_fatal(get_full_name(), "GPIO sub-sequencer null pointer: this test case will fail, check config or virtual sequence")
end
else begin
gpio = v_sqr.gpio;
end
if(v_sqr.gpio_bus == null) begin
`ovm_fatal(get_full_name(), "BUS sub-sequencer null pointer: this test case will fail, check config or virtual sequence")
end
else begin
gpio_bus = v_sqr.gpio_bus;
end
endtask: body
Ovm/Sequences/Hierarchy
When dealing with sequences, it helps to
think in layers when considering the
different functions that a testbench will be
asked to perform. At the lowest layer
associated with each agent are API
sequences. The middle layer which makes
use of the API sequences to get work done
is made up of worker sequences. Finally, at the top of
the testbench controlling everything is a
virtual sequence.
API Sequences
API sequences are the lowest layer in the
sequence hierarchy. They also do the least
amount of work in that they are doing a
very targeted action when they are run.
API sequences should perform small
discrete actions like performing a read on a
bus, a write on a bus, a read-modify-write
on a bus or waiting for an interrupt or some
other signal. The API sequences would be
included in the SystemVerilog package which defines the Agent on which the sequences are intended to run. Two
example API sequences would look as follows:
// API sequence performing a read from a given address
// (the member declarations are assumed):
class spi_read_seq extends ovm_sequence #(spi_item);
`ovm_object_utils(spi_read_seq)
rand logic[31:0] addr;
logic[31:0] rdata;
task body();
req = spi_item::type_id::create("spi_request");
start_item(req);
if ( !(req.randomize() with {req.addr == local::addr;} )) begin
`ovm_error(report_id, "Randomize Failed!")
end
finish_item(req);
rdata = req.data;
endtask : body
endclass : spi_read_seq
// API sequence performing a write of given data to a given address
// (the member declarations are assumed):
class spi_write_seq extends ovm_sequence #(spi_item);
`ovm_object_utils(spi_write_seq)
rand logic[31:0] addr;
rand logic[31:0] wdata;
task body();
req = spi_item::type_id::create("spi_request");
start_item(req);
if ( !(req.randomize() with {req.addr == local::addr;
req.data == local::wdata; } )) begin
`ovm_error(report_id, "Randomize Failed!")
end
finish_item(req);
endtask : body
endclass : spi_write_seq
Worker Sequences
Worker sequences make use of the low level API sequences to build up middle level sequences. These mid-level
sequences could do things such as dut configuration, loading a memory, etc. Usually a worker sequence would only be
starting API sequences and sending sequence items to a single sequencer. A worker sequence would look like this:
//Worker sequence for doing initial configuration for Module A
class moduleA_init_seq extends ovm_sequence #(spi_item);
`ovm_object_utils(moduleA_init_seq)
spi_read_seq read;
spi_write_seq write;
task body();
read = spi_read_seq::type_id::create("read");
write = spi_write_seq::type_id::create("write");
// Use the API sequences to program module A's configuration registers
// (the register addresses and data values are illustrative):
write.addr = 32'h0000_0010; write.wdata = 32'h0000_0001;
write.start(m_sequencer);
read.addr = 32'h0000_0010;
read.start(m_sequencer);
endtask : body
endclass : moduleA_init_seq
Virtual Sequences
Virtual sequences are used to call and coordinate all the worker sequences. In most cases, designs will need to be
initialized before random data can be sent in. The virtual sequence can call the worker initialization sequences and then
call other worker sequences or even API sequences if it needs to do a low level action. The virtual sequence will either
contain handles to the target sequencers (recommended) or be running on a virtual sequencer which allows access to all
of the sequencers that are needed to run the worker and API sequences. An example virtual sequence would look like
this:
//Virtual Sequence controlling everything
class test1_seq extends ovm_sequence #(ovm_sequence_item);
`ovm_object_utils(test1_seq)
// Target sequencer handles, assigned in body() from the virtual sequencer:
ovm_sequencer_base spi_seqr;
ovm_sequencer_base modA_seqr;
ovm_sequencer_base modB_seqr;
moduleA_init_seq modA_init;
moduleB_init_seq modB_init;
moduleA_rand_data_seq modA_rand_data;
moduleB_rand_data_seq modB_rand_data;
spi_read_seq spi_read;
task body();
// Cast m_sequencer to the virtual sequencer type so the target sequencer
// handles can be assigned (the virtual sequencer type and handle names are assumed):
soc_virtual_sequencer vseqr;
if (!$cast(vseqr, m_sequencer)) `ovm_fatal(report_id, "Virtual Sequencer cast failed! Test can not proceed")
spi_seqr = vseqr.spi_seqr;
modA_seqr = vseqr.modA_seqr;
modB_seqr = vseqr.modB_seqr;
modA_init = moduleA_init_seq::type_id::create("modA_init");
modB_init = moduleB_init_seq::type_id::create("modB_init");
modA_rand_data = moduleA_rand_data_seq::type_id::create("modA_rand_data");
modB_rand_data = moduleB_rand_data_seq::type_id::create("modB_rand_data");
spi_read = spi_read_seq::type_id::create("spi_read");
fork
modA_init.start(spi_seqr, this);
modB_init.start(spi_seqr, this);
join
//Now start random data (These would probably be started on different sequencers than m_sequencer for a real design)
fork
modA_rand_data.start(modA_seqr, this);
modB_rand_data.start(modB_seqr, this);
join
// Finally use the SPI read API sequence directly for a low level check
// (the address and expected value are illustrative):
spi_read.addr = 32'h0000_0000;
spi_read.start(spi_seqr, this);
if (spi_read.rdata != 16'hffff)
`ovm_error(report_id, "Unexpected SPI read data at end of test")
endtask : body
endclass : test1_seq
Ovm/Driver/Use Models
Stimulus generation in the OVM relies on a coupling between sequences and drivers. A sequence can only be written
when the characteristics of a driver are known, otherwise there is a potential for the sequence or the driver to get into a
deadlock waiting for the other to provide an item. This problem can be mitigated for reuse by providing a set of base
utility sequences which can be used with the driver and by documenting the behaviour of the driver.
There are a large number of potential stimulus generation use models for the sequence-driver combination; however, most
of these can be characterised by one of the following use models:
Unidirectional Non-Pipelined
In the unidirectional non-pipelined use model, requests are sent to the driver, but no responses are received back from the
driver. The driver itself may use some kind of handshake mechanism as part of its transfer protocol, but the data payload
of the transaction is unidirectional.
An example of this type of use model would be a unidirectional communication link such as an ADPCM or a
PCM interface.
Bidirectional Non-Pipelined
In the bidirectional non-pipelined use model, the data transfer is bidirectional with a request sent from a sequence to a
driver resulting in a response being returned to the sequence from the driver. The response occurs in lock-step with the
request and only one transfer is active at a time.
An example of this type of use model is a simple bus interface such as the AMBA Peripheral Bus (APB).
Pipelined
In the pipelined use model, the data transfer is bidirectional, but the request phase overlaps the response phase to the
previous request. Using pipelining can provide hardware performance advantages, but it complicates the sequence driver
use model because requests and responses need to be handled separately.
CPU buses frequently use pipelines and one common example is the AMBA AHB bus.
Ovm/Driver/Unidirectional
In the unidirectional non-pipelined
sequence driver use model, the data flow
is unidirectional. The sequence sends a
series of request sequence_items to the
DUT interface but receives no response
sequence items. However, the control
flow of this use model is bidirectional,
since there are handshake mechanisms
built into the OVM sequencer communication protocol. The driver may also implement a hand shake on the
DUT interface, but this will not be visible to the controlling sequence.
A Unidirectional Example
An example of a unidirectional dataflow
is sending ADPCM packets using a
PCM framing protocol. The waveform
illustrates the protocol.
The driver implementation for this
example does not have to implement a
hardware handshake, and takes a request
sequence item and converts it into a sequence of pin transitions synchronised to the ADPCM clock which is generated
elsewhere, possibly within the DUT itself.
The driver controls the flow of sequence_items by using get_next_item() to obtain the next sequence_item to be
processed, and then does not make the item_done() call until it has finished processing the item. The sequence is blocked
at its finish_item() call until the item_done() call is made by the driver.
class adpcm_driver extends ovm_driver #(adpcm_seq_item);
`ovm_component_utils(adpcm_driver)
adpcm_seq_item req;
task run();
int top_idx = 0;
// Default conditions:
ADPCM.frame <= 0;
ADPCM.data <= 0;
forever begin
seq_item_port.get_next_item(req); // Gets the sequence_item from the sequence
repeat(req.delay) begin // Delay between packets
@(posedge ADPCM.clk);
end
// Drive the req fields onto the ADPCM interface over the required number of
// clock cycles (pin level detail not shown), then complete the handshake:
seq_item_port.item_done(); // Unblocks finish_item() in the sequence
end
endtask: run
endclass: adpcm_driver
The sequence implementation in this case is a loop which generates a series of sequence_items. A variation on this theme
would be for the sequence to actively shape the traffic sent rather than send purely random stimulus.
class adpcm_tx_seq extends ovm_sequence #(adpcm_seq_item);
`ovm_object_utils(adpcm_tx_seq)
// ADPCM sequence_item
adpcm_seq_item req;
int no_reqs = 10; // Number of packets to send (value is illustrative)
task body();
req = adpcm_seq_item::type_id::create("req");
repeat(no_reqs) begin
start_item(req);
assert(req.randomize());
finish_item(req);
end
endtask: body
endclass: adpcm_tx_seq
Ovm/Driver/Bidirectional
One of the most common forms of
sequence driver use models is the
scenario where the sequencer sends
request sequence_items to the driver
which executes the request phase of the
pin level protocol, and then the driver
responds to the response phase of the
pin-level transaction returning the
response back to the sequence. In this use model the flow of data is bidirectional and a new request phase cannot be
started until the response phase has completed. An example of this kind of protocol would be a simple peripheral bus
such as the AMBA APB.
To illustrate how this use model would be implemented, a DUT containing a GPIO and a bus interface will be used. The
bus protocol used is shown in the timing diagram. The request phase of the transaction is initiated by the valid signal
becoming active, with the address and direction signal (RNW) indicating which type of bus transfer is taking place. The
response phase of the transaction is completed when the ready signal becomes active.
The driver that manages this protocol will collect a request sequence_item from the sequencer and then drive the bus
request phase. The driver waits until the interface ready line becomes active and then returns the response information,
which would consist of the error bit and the read data if a read has just taken place.
The recommended way of implementing the driver is to use get_next_item() followed by item_done() as per the
following example:
class bidirect_bus_driver extends ovm_driver #(bus_seq_item);
`ovm_component_utils(bidirect_bus_driver)
bus_seq_item req;
task run;
// Default conditions:
BUS.valid <= 0;
BUS.rnw <= 1;
// Wait for reset to end
@(posedge BUS.resetn);
forever begin
seq_item_port.get_next_item(req); // Blocking call returning the next request
// Drive the request phase of the bus protocol, wait for BUS.ready, then copy
// the response information (error, read data) back into req
// (see the get()/put() driver implementation below for the pin level detail)
seq_item_port.item_done(); // Completes the handshake, unblocking finish_item()
end
endtask: run
endclass: bidirect_bus_driver
class bus_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(bus_seq)
bus_seq_item req;
int limit = 40; // Number of bus transactions (value assumed)
task body;
req = bus_seq_item::type_id::create("req");
repeat(limit) begin
start_item(req);
// The address is constrained to be within the address of the GPIO function
// within the DUT. The result will be a request item for a read or a write
assert(req.randomize() with {addr inside {[32'h0100_0000:32'h0100_001C]};});
finish_item(req);
// The req handle points to the object that the driver has updated with response data
ovm_report_info("seq_body", req.convert2string());
end
endtask: body
endclass: bus_seq
// Alternative driver implementation using the get() and put() calls:
task run;
bus_seq_item req;
bus_seq_item rsp;
// Default conditions:
BUS.valid <= 0;
BUS.rnw <= 1;
// Wait for reset to end
@(posedge BUS.resetn);
forever begin
seq_item_port.get(req); // Start processing req item
repeat(req.delay) begin
@(posedge BUS.clk);
end
BUS.valid <= 1;
BUS.addr <= req.addr;
BUS.rnw <= req.read_not_write;
if(req.read_not_write == 0) begin
BUS.write_data <= req.write_data;
end
while(BUS.ready != 1) begin
@(posedge BUS.clk);
end
// At end of the pin level bus transaction
// Copy response data into the rsp fields:
$cast(rsp, req.clone()); // Clone the req
rsp.set_id_info(req); // Set the rsp id = req id
if(rsp.read_not_write == 1) begin
rsp.read_data = BUS.read_data; // If read - copy returned read data
end
rsp.error = BUS.error; // Copy bus error status
BUS.valid <= 0; // End the pin level bus transaction
seq_item_port.put(rsp); // put returns the response
end
endtask: run
// Corresponding sequence body, using get_response() to pick up the response:
task body;
bus_seq_item req;
bus_seq_item rsp;
int limit = 40; // Number of bus transactions (value assumed)
req = bus_seq_item::type_id::create("req");
repeat(limit) begin
start_item(req);
// The address is constrained to be within the address of the GPIO function
// within the DUT. The result will be a request item for a read or a write
assert(req.randomize() with {addr inside {[32'h0100_0000:32'h0100_001C]};});
finish_item(req);
get_response(rsp);
// The rsp handle points to the object that the driver has updated with response data
ovm_report_info("seq_body", rsp.convert2string());
end
endtask: body
For more information on this implementation approach, especially how to initialise the response item, see the section
on the get, put use model in the Driver/Sequence API article.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Driver/Pipelined
In a pipelined bus protocol a data
transfer is broken down into two or more
phases which are executed one after the
other, often using different groups of
signals on the bus. This type of protocol
allows several transfers to be in progress
at the same time with each transfer
occupying one stage of the pipeline. The
AMBA AHB bus is an example of a
pipelined bus, it has two phases - the
address phase and the data phase. During
the address phase, the address and the
bus control information, such as the
opcode, is set up by the host, and then
during the data phase the data transfer between the target and the host takes place. Whilst the data phase for one transfer
is taking place on the second stage of the pipeline, the address phase for the next cycle can be taking place on the first
stage of the pipeline. Other protocols such as OCP use more phases.
A pipelined protocol has the potential to increase the bandwidth of a system since, provided the pipeline is kept full, it
increases the number of transfers that can take place over a given number of clock cycles. Using a pipeline also relaxes
the timing requirements for target devices since it gives them extra time to decode and respond to a host access.
A pipelined protocol could be modelled with a simple bidirectional style, whereby the sequence sends a sequence item to
the driver and the driver unblocks the sequence when it has completed the bus transaction. In reality, most I/O and
register style accesses take place in this way. The drawback is that it lowers the bandwidth of the bus and does not stress
test it. In order to implement a pipelined sequence-driver combination, there are a number of design considerations that
need to be taken into account in order to support fully pipelined transfers:
• Driver Implementation - The driver needs to have multiple threads running, each thread needs to take a sequence
item and take it through each of the pipeline stages.
• Keeping the pipeline full - The driver needs to unblock the sequencer to get the next sequence item so that the
pipeline can be kept full
• Sequence Implementation - The sequence needs to have separate stimulus generation and response threads. The
stimulus generation thread needs to continually send new bus transactions to the driver to keep the pipeline full.
Driver Implementation
In order to support pipelining, a driver needs to process multiple sequence_items concurrently. In order to achieve this,
the driver's run method spawns a number of parallel threads, each of which takes a sequence item and executes it to
completion on the bus. The number of threads required is equal to the number of stages in the pipeline. Each thread uses
the get() method to acquire a new sequence item, this unblocks the sequencer and the finish_item() method in the
sequence so that a new sequence item can be sent to the driver to fill the next stage of the pipeline.
In order to ensure that only one thread can call get() at a time, and also to ensure that only one thread attempts to drive
the first phase of the bus cycle, a semaphore is used to lock access. The semaphore is grabbed at the start of the loop in
the driver thread and is released at the end of the first phase, allowing another thread to grab the semaphore and take
ownership.
At the end of the last phase in the bus cycle, the driver thread sends a response back to the sequence using the put()
method. This returns the response to the originating sequence for processing.
In the code example a two stage pipeline is shown to illustrate the principles outlined.
//
// This class implements a pipelined driver
//
class mbus_pipelined_driver extends ovm_driver #(mbus_seq_item);
`ovm_component_utils(mbus_pipelined_driver)
// Semaphore used to serialise access to the request phase of the bus:
semaphore pipeline_lock = new(1);
task run;
@(posedge MBUS.MRESETN);
@(posedge MBUS.MCLK);
fork
do_pipelined_transfer;
do_pipelined_transfer;
join
endtask
//
// This task has to be automatic because it is spawned
// in separate threads
//
task automatic do_pipelined_transfer;
mbus_seq_item req;
forever begin
pipeline_lock.get();
seq_item_port.get(req);
accept_tr(req, $time);
void'(begin_tr(req, "pipelined_driver"));
MBUS.MADDR <= req.MADDR;
MBUS.MREAD <= req.MREAD;
MBUS.MOPCODE <= req.MOPCODE;
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
// End of command phase:
// - unlock pipeline semaphore
pipeline_lock.put();
// Complete the data phase
if(req.MREAD == 1) begin
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
req.MRESP = MBUS.MRESP;
req.MRDATA = MBUS.MRDATA;
end
else begin
MBUS.MWDATA <= req.MWDATA;
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
req.MRESP = MBUS.MRESP;
end
// Return the request as a response
seq_item_port.put(req);
end_tr(req);
end
endtask: do_pipelined_transfer
endclass: mbus_pipelined_driver
Sequence Implementation
Unpipelined Accesses
Most of the time unpipelined transfers are required, since typical bus traffic emulates what a software program does,
which is to access single locations, for instance using the value read back from one location to determine what to do next
in terms of reading or writing other locations.
In order to implement an unpipelined sequence that would work with the pipelined driver, the body() method would call
start_item(), finish_item() and get_response() methods in sequence. The get_response() method blocks until the driver
sends a response using its put() method at the end of the bus cycle. The following code example illustrates this:
//
// This is an unpipelined sequence which waits for each item to finish on the
// bus before sending the next one; checking of the read back data is built in.
//
class mbus_unpipelined_seq extends ovm_sequence #(mbus_seq_item);
`ovm_object_utils(mbus_unpipelined_seq)
logic[31:0] addr[10]; // Written addresses and data, kept for the read back check
logic[31:0] data[10];
int error_count;
function new(string name = "mbus_unpipelined_seq");
super.new(name);
endfunction
task body;
mbus_seq_item req = mbus_seq_item::type_id::create("req");
error_count = 0;
// Write phase - record the randomized address and data for later checking:
for(int i = 0; i < 10; i++) begin
start_item(req);
assert(req.randomize() with {MREAD == 0;}); // Constraint is illustrative
addr[i] = req.MADDR;
data[i] = req.MWDATA;
finish_item(req);
get_response(req); // Block until the driver has completed the transfer
end
// Read back phase - check the data returned by the driver:
foreach(addr[i]) begin
start_item(req);
req.MADDR = addr[i];
req.MREAD = 1;
finish_item(req);
get_response(req);
if(req.MRDATA != data[i]) begin
`ovm_error("body", "Read back data mismatch")
error_count++;
end
end
endtask: body
endclass: mbus_unpipelined_seq
Note: This example sequence has checking built in to demonstrate how a read data value can be used. This specific type
of check would normally be done using a scoreboard.
Pipelined Accesses
Pipelined accesses are primarily used to stress test the bus, but they require a different approach in the sequence. A
pipelined sequence needs to have separate threads for generating the request sequence items and for handling the
response sequence items.
The generation loop will block on each finish_item() call until one of the threads in the driver completes a get() call.
Once the generation loop is unblocked it needs to generate a new item to have something for the next driver thread to
get(). Note that a new request sequence item needs to be generated on each iteration of the loop; if only one request item
handle is used, the driver will be attempting to execute its contents whilst the sequence is changing it.
In the example sequence, there is no response handling; the assumption is that checks on the data validity will be done by
a scoreboard. However, with the get() and put() driver implementation, there is a response FIFO in the sequence which
must be managed. In the example, the response handler is enabled using the use_response_handler() method, and then
the response_handler function is called every time a response is available, keeping the sequence's response FIFO empty.
In this case the response handler keeps count of the number of transactions to ensure that the sequence only exits when
the last transaction is complete.
//
// This is a pipelined sequence which sends its requests back to back and
// therefore never makes a blocking call to get_response(); checking of the
// returned data is assumed to be done by a scoreboard
//
class mbus_pipelined_seq extends ovm_sequence #(mbus_seq_item);
`ovm_object_utils(mbus_pipelined_seq)
logic[31:0] addr[10]; // Addresses used, kept for the read back phase
int count; // To ensure that the sequence does not complete too early
function new(string name = "mbus_pipelined_seq");
super.new(name);
endfunction
task body;
mbus_seq_item req;
use_response_handler(1); // Responses are passed to the response_handler function
count = 0;
for(int i = 0; i < 10; i++) begin
// A new request item is created on each iteration so that the driver is
// never executing an item which the sequence is about to change:
req = mbus_seq_item::type_id::create("req");
start_item(req);
assert(req.randomize() with {MREAD == 0;}); // Constraint is illustrative
addr[i] = req.MADDR;
finish_item(req);
end
foreach(addr[i]) begin
req = mbus_seq_item::type_id::create("req");
start_item(req);
req.MADDR = addr[i];
req.MREAD = 1;
finish_item(req);
end
// Do not end the sequence until the last req item is complete
wait(count == 20);
endtask: body
// The response handler is called each time the driver returns a response,
// which keeps the sequence's response FIFO empty:
function void response_handler(ovm_sequence_item response);
count++;
endfunction: response_handler
endclass: mbus_pipelined_seq
If the sequence needs to handle responses, then the response handler function should be extended.
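For example, a response handler which also checks read responses might look like the following sketch (the expected-data bookkeeping is not shown and is illustrative):
// Extended response handler: count completions and check read responses
function void response_handler(ovm_sequence_item response);
mbus_seq_item rsp;
if(!$cast(rsp, response)) begin
`ovm_error("response_handler", "Cast to mbus_seq_item failed")
return;
end
count++;
if(rsp.MREAD == 1) begin
// Compare rsp.MRDATA against the value expected for rsp.MADDR (lookup not shown)
end
endfunction: response_handler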
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
An alternative to returning responses with the put() method is for the driver to signal the completion of each protocol phase using named events carried inside the sequence_item itself. The sequence_item contains an ovm_event_pool together with methods to trigger and to wait for a named event (a sketch; only the event related members are shown):
class mbus_seq_item extends ovm_sequence_item;
`ovm_object_utils(mbus_seq_item)
// Event pool:
ovm_event_pool events = new("events");
// Called by the driver to signal that the named phase is complete:
function void trigger(string evnt);
ovm_event e = events.get(evnt);
e.trigger();
endfunction: trigger
// Called by the sequence to wait for the named phase to complete:
task wait_trigger(string evnt);
ovm_event e = events.get(evnt);
e.wait_trigger();
endtask: wait_trigger
endclass: mbus_seq_item
class mbus_pipelined_driver extends ovm_driver #(mbus_seq_item);
`ovm_component_utils(mbus_pipelined_driver)
// Semaphore used to serialise access to the request phase of the bus:
semaphore pipeline_lock = new(1);
task run;
@(posedge MBUS.MRESETN);
@(posedge MBUS.MCLK);
fork
do_pipelined_transfer;
do_pipelined_transfer;
join
endtask
//
// This task has to be automatic because it is spawned
// in separate threads
//
task automatic do_pipelined_transfer;
mbus_seq_item req;
forever begin
pipeline_lock.get();
seq_item_port.get(req);
accept_tr(req, $time);
void'(begin_tr(req, "pipelined_driver"));
MBUS.MADDR <= req.MADDR;
MBUS.MREAD <= req.MREAD;
MBUS.MOPCODE <= req.MOPCODE;
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
// End of command phase:
// - unlock pipeline semaphore
// - signal CMD_DONE
pipeline_lock.put();
req.trigger("CMD_DONE");
// Complete the data phase
if(req.MREAD == 1) begin
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
req.MRESP = MBUS.MRESP;
req.MRDATA = MBUS.MRDATA;
end
else begin
MBUS.MWDATA <= req.MWDATA;
@(posedge MBUS.MCLK);
while(MBUS.MRDY != 1) begin
@(posedge MBUS.MCLK);
end
req.MRESP = MBUS.MRESP;
end
// End of the data phase - signal DATA_DONE back to the sequence:
req.trigger("DATA_DONE");
end_tr(req);
end
endtask: do_pipelined_transfer
endclass: mbus_pipelined_driver
As in the previous example of an unpipelined sequence, the code example shown has a data integrity check; this is purely
for illustrative purposes.
class mbus_unpipelined_seq extends ovm_sequence #(mbus_seq_item);
`ovm_object_utils(mbus_unpipelined_seq)
logic[31:0] addr[10]; // Written addresses and data, kept for the read back check
logic[31:0] data[10];
int error_count;
function new(string name = "mbus_unpipelined_seq");
super.new(name);
endfunction
task body;
mbus_seq_item req = mbus_seq_item::type_id::create("req");
error_count = 0;
for(int i = 0; i < 10; i++) begin
start_item(req);
assert(req.randomize() with {MREAD == 0;}); // Constraint is illustrative
addr[i] = req.MADDR;
data[i] = req.MWDATA;
finish_item(req);
req.wait_trigger("DATA_DONE"); // Wait for the driver to complete the data phase
end
foreach(addr[i]) begin
start_item(req);
req.MADDR = addr[i];
req.MREAD = 1;
finish_item(req);
req.wait_trigger("DATA_DONE");
if(req.MRDATA != data[i]) begin
`ovm_error("body", "Read back data mismatch")
error_count++;
end
end
endtask: body
endclass: mbus_unpipelined_seq
Pipelined Access
The pipelined access sequence does not wait for the data phase completion event before generating the next sequence
item. Unlike the get, put driver model, there is no need to manage the response FIFO, so in this respect this
implementation model is more straightforward.
class mbus_pipelined_seq extends ovm_sequence #(mbus_seq_item);
`ovm_object_utils(mbus_pipelined_seq)
logic[31:0] addr[10]; // Addresses used, kept for the read back phase
function new(string name = "mbus_pipelined_seq");
super.new(name);
endfunction
task body;
mbus_seq_item req;
for(int i = 0; i < 10; i++) begin
// Create a new item each time so that the driver is never executing an
// item which the sequence is about to change:
req = mbus_seq_item::type_id::create("req");
start_item(req);
assert(req.randomize() with {MREAD == 0;}); // Constraint is illustrative
addr[i] = req.MADDR;
finish_item(req);
end
foreach(addr[i]) begin
req = mbus_seq_item::type_id::create("req");
start_item(req);
req.MADDR = addr[i];
req.MREAD = 1;
finish_item(req);
end
endtask: body
endclass: mbus_pipelined_seq
Ovm/Sequences/Arbitration
The ovm_sequencer has a built-in
mechanism to arbitrate between
sequences which could be running
concurrently on a sequencer. The
arbitration algorithm determines which
sequence is granted access to send its
sequence_item to the driver. There is a
choice of six
arbitration algorithms which can be
selected using the set_arbitration()
sequencer method from the controlling
sequence.
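For example, the controlling sequence can select an algorithm before forking off its child sequences; a one line sketch:
m_sequencer.set_arbitration(SEQ_ARB_RANDOM); // Choose the arbitration algorithm for this sequencer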
Consider the example illustrated in the diagram. In this example we have four sequences which are running concurrently
on the same sequencer, each started at a different time offset and with a different priority:
task body;
fork
begin
repeat(4) begin
#1; // Offset by 1
seq_1.start(m_sequencer, this, 500); // Highest priority
end
end
begin
repeat(4) begin
#2; // Offset by 2
seq_2.start(m_sequencer, this, 500); // Highest priority
end
end
begin
repeat(4) begin
#3; // Offset by 3
seq_3.start(m_sequencer, this, 300); // Medium priority
end
end
begin
repeat(4) begin
#4; // Offset by 4
seq_4.start(m_sequencer, this, 200); // Lowest priority
end
end
join
endtask: body
The six sequencer arbitration algorithms are best understood in the context of this example:
SEQ_ARB_FIFO (Default)
This is the default sequencer arbitration algorithm. It sends sequence_items from the sequencer to the driver in the order
in which they are received by the sequencer, regardless of any priority that has been set. In the example, this means that
the driver receives sequence_items in turn from seq_1, seq_2, seq_3 and seq_4. The resultant log file is as follows:
# OVM_INFO @ 81: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:3 SEQ_2:2 SEQ_3:2 SEQ_4:2
# OVM_INFO @ 91: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:3 SEQ_2:3 SEQ_3:2 SEQ_4:2
# OVM_INFO @ 101: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:3 SEQ_3:2 SEQ_4:2
# OVM_INFO @ 111: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:3 SEQ_3:3 SEQ_4:2
# OVM_INFO @ 121: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:3 SEQ_3:3 SEQ_4:3
# OVM_INFO @ 131: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:3 SEQ_4:3
# OVM_INFO @ 141: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:3
# OVM_INFO @ 151: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:4
SEQ_ARB_WEIGHTED
With this algorithm, the sequence_items to be sent to the driver are selected on a random basis but weighted with the
sequence_items from the highest priority sequence being sent first. In the example, this means that sequence_items from
seq_1 and seq_2 are selected on a random basis until all have been consumed, at which point the items from seq_3 are
selected, followed by seq_4. The resultant log file illustrates this:
SEQ_ARB_RANDOM
With this algorithm, the sequence_items are selected on a random basis, irrespective of the priority level of their
controlling sequences. The result in the example is that sequence_items are sent to the driver in a random order
irrespective of their arrival time at the sequencer and of the priority of the sequencer that sent them:
SEQ_ARB_STRICT_FIFO
The SEQ_ARB_STRICT_FIFO algorithm sends sequence_items to the driver based on their priority and their order in
the FIFO, with highest priority items being sent in the order received. The result in the example is that the seq_1 and
seq_2 sequence_items are sent first interleaved with each other according to the order of their arrival in the sequencer
queue, followed by seq_3's sequence_items, and then the sequence_items for seq_4.
# OVM_INFO @ 131: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:2
# OVM_INFO @ 141: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:3
# OVM_INFO @ 151: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:4
SEQ_ARB_STRICT_RANDOM
This algorithm selects the sequence_items to be sent in a random order but weighted by the priority of the sequences
which are sending them. The effect in the example is that seq_1 is selected randomly first and its sequence_items are sent
before the items from seq_2, followed by seq_3 and then seq_4:
# OVM_INFO @ 151: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:4
SEQ_ARB_USER
This algorithm allows a user defined method to be used for arbitration. In order to do this, the ovm_sequencer must be
extended to override the user_priority_arbitration() method. The method receives an argument which is the
sequencer's queue of sequence_items; the user implemented algorithm needs to return an integer to select one of the
sequence_items from the queue. The method is able to call on other methods implemented in the sequencer base class to
establish the properties of each of the sequences in the queue. For instance, the priority of each sequence item can be
established using the get_seq_item_priority() call as illustrated in the following example:
//
// Return the item with the mean average priority
//
function integer user_priority_arbitration(integer avail_sequences[$]);
integer seq_priorities[] = new[avail_sequences.size];
integer sum = 0;
bit mean_found = 0;
// Establish the priority of each available sequence_item (e.g. using
// get_seq_item_priority()), compute the mean, then return the index from
// avail_sequences whose item priority is closest to the mean
endfunction: user_priority_arbitration
In the following example, the user_priority_arbitration method has been modified to always select the last sequence_item
that was received; this is more or less the inverse of the default arbitration mechanism.
class seq_arb_sequencer extends ovm_sequencer #(seq_arb_item);
`ovm_component_utils(seq_arb_sequencer)
// Always select the most recently received item (implementation sketch):
function integer user_priority_arbitration(integer avail_sequences[$]);
return avail_sequences[avail_sequences.size()-1];
endfunction: user_priority_arbitration
endclass: seq_arb_sequencer
In the example, using this algorithm has the effect of sending the sequence_items from seq_4, followed by seq_3, seq_2
and then seq_1.
# OVM_INFO @ 141: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:3 SEQ_3:4 SEQ_4:4
# OVM_INFO @ 151: ovm_test_top.m_driver [RECEIVED_SEQ] Access totals: SEQ_1:4 SEQ_2:4 SEQ_3:4 SEQ_4:4
Ovm/Sequences/Priority
The OVM sequence use model allows multiple sequences to access a driver concurrently. The sequencer contains
an arbitration mechanism that determines when a sequence_item from a sequence will be sent to a driver. When a
sequence is started using the start() method one of the arguments that can be passed is an integer indicating the priority of
that sequence. The higher the value of the integer, the higher the priority of the sequence. This priority can be used with
the SEQ_ARB_WEIGHTED, SEQ_ARB_STRICT_FIFO, and SEQ_ARB_STRICT_RANDOM arbitration mechanisms,
(and possibly the SEQ_ARB_USER algorithm, if it handles priority) to ensure that a sequence has the desired priority.
Note: the remaining sequencer arbitration mechanisms do not take the sequence priority into account.
For instance, if a bus fabric master port has 3 sequences running on it to model software execution accesses (high
priority), video data block transfer (medium priority) and irritant data transfer (low priority), then the hierarchical
sequence controlling them would look like:
// Coding tip: Create an enumerated type to represent the different priority levels
typedef enum {HIGH_PRIORITY = 500, MED_PRIORITY = 200, LOW_PRIORITY = 50} seq_priority_e;
task body();
op_codes = cpu_sw_seq::type_id::create("op_codes");
video = video_data_seq::type_id::create("video");
irritant = random_access_seq::type_id::create("irritant");
// Start each sequence with its priority on the master port sequencer
// (the sequencer handle and fork structure are a sketch):
fork
op_codes.start(m_sequencer, this, HIGH_PRIORITY);
video.start(m_sequencer, this, MED_PRIORITY);
irritant.start(m_sequencer, this, LOW_PRIORITY);
join
endtask: body
When it is necessary to override the sequencer priority mechanism to model an interrupt or a high priority DMA transfer,
then the sequencer locking mechanism should be used.
Ovm/Sequences/LockGrab
There are a number of modelling scenarios where one sequence needs to have exclusive access to a driver via a
sequencer. One example of this type of scenario is a sequence which is responding to an interrupt. In order to
accommodate this requirement, the ovm_sequencer has a locking mechanism which is implemented using two calls -
lock() and grab(). In terms of modelling, a lock might be used to model a prioritised interrupt and a grab might be used to
model a non-maskable interrupt (NMI). The lock() and grab() calls have antidote calls to release a lock and these are
unlock() and ungrab().
lock
The sequencer lock method is called from a sequence and its effect is that the calling sequence will be granted exclusive
access to the driver when it gets its next slot via the sequencer arbitration mechanism. Once lock is granted, no other
sequences will be able to access the driver until the sequence issues an unlock() call which will then release the lock. The
method is blocking and does not return until lock has been granted.
grab
The grab method is similar to the lock method, except that it takes immediate effect and will grab the next sequencer
arbitration slot, overriding any sequence priorities in place. The only thing that stops a sequence from grabbing a
sequencer is a pre-existing lock() or grab() condition.
unlock
The unlock sequencer function is called from within a sequence to give up its lock or grab. A locking sequence must call
unlock before completion, otherwise the sequencer will remain locked.
ungrab
The ungrab function is an alias of unlock.
Related functions:
is_blocked
A sequence can determine if it is blocked by a sequencer lock condition by making this call. If it returns a 0, then the
sequence is not blocked and will get a slot in the sequencer arbitration. However, the sequencer may get locked before
the sequence has a chance to make a start_item() call.
is_grabbed
If a sequencer returns a 1 from this call, then it means that it has an active lock or grab in progress.
current_grabber
This function returns the handle of the sequence which is currently locking the sequencer. This handle could be used to
stop the locking sequence or to call a function inside it to unlock it.
Gotchas:
When a hierarchical sequence locks a sequencer, then its child sequences will have access to the sequencer. If one of the
child sequences issues a lock, then the parent sequence will not be able to start any parallel sequences or send any
sequence_items until the child sequence has unlocked.
A locking or grabbing sequence must always unlock before it completes, otherwise the sequencer will become
deadlocked.
Example:
The following example has 4 sequences which run in parallel threads. One of the threads has a sequence that does a lock,
and a sequence that does a grab, both are functionally equivalent with the exception of the lock or grab calls.
Locking sequence:
class lock_seq extends ovm_sequence #(seq_arb_item);
`ovm_object_utils(lock_seq)
int seq_no;
task body();
seq_arb_item REQ;
if(m_sequencer.is_blocked(this)) begin
ovm_report_info("lock_seq", "This sequence is blocked by an existing lock");
end else begin
ovm_report_info("lock_seq", "This sequence is not blocked by an existing lock");
end
if(m_sequencer.is_grabbed()) begin
ovm_report_info("lock_seq", "Waiting for the current lock or grab to be released");
end
// Lock call - blocks until this sequence is granted exclusive access
m_sequencer.lock(this);
REQ = seq_arb_item::type_id::create("REQ");
REQ.seq_no = 6;
repeat(4) begin
start_item(REQ);
finish_item(REQ);
end
// Unlock call - must be issued
m_sequencer.unlock(this);
endtask: body
endclass: lock_seq
Grabbing sequence:
class grab_seq extends ovm_sequence #(seq_arb_item);
`ovm_object_utils(grab_seq)
task body();
seq_arb_item REQ;
if(m_sequencer.is_blocked(this)) begin
ovm_report_info("grab_seq", "This sequence is blocked by an existing lock");
end else begin
ovm_report_info("grab_seq", "This sequence is not blocked by an existing lock");
end
if(m_sequencer.is_grabbed()) begin
if(m_sequencer.current_grabber() != this) begin
ovm_report_info("grab_seq", "Grab sequence waiting for current grab or lock to complete");
end
end
// Grab call - takes effect at the next slot, overriding normal arbitration
m_sequencer.grab(this);
REQ = seq_arb_item::type_id::create("REQ");
REQ.seq_no = 5;
repeat(4) begin
start_item(REQ);
finish_item(REQ);
end
// Ungrab which must be called to release the grab (lock)
m_sequencer.ungrab(this);
endtask: body
endclass: grab_seq
The overall controlling sequence runs four sequences which send sequence_items to the driver with different levels of
priority. The driver reports from which sequence it has received a sequence_item. The first grab_seq in the fourth thread
jumps the arbitration queue. The lock_seq takes its turn and blocks the second grab_seq, which then executes as soon as
the lock_seq completes.
task body();
seq_1 = arb_seq::type_id::create("seq_1");
seq_1.seq_no = 1;
seq_2 = arb_seq::type_id::create("seq_2");
seq_2.seq_no = 2;
seq_3 = arb_seq::type_id::create("seq_3");
seq_3.seq_no = 3;
seq_4 = arb_seq::type_id::create("seq_4");
seq_4.seq_no = 4;
grab = grab_seq::type_id::create("grab");
lock = lock_seq::type_id::create("lock");
m_sequencer.set_arbitration(arb_type);
fork begin // Thread 1
repeat(10) begin
#1;
seq_1.start(m_sequencer, this, 500); // Highest priority
end
end begin // Thread 2
repeat(10) begin
#2;
seq_2.start(m_sequencer, this, 500); // Highest priority
end
end begin // Thread 3
repeat(10) begin
#3;
seq_3.start(m_sequencer, this, 300); // Medium priority
end
end begin // Thread 4
fork
repeat(2) begin
#4;
seq_4.start(m_sequencer, this, 200); // Lowest priority
end
#10 grab.start(m_sequencer, this, 50);
join
repeat(1) begin
#4 seq_4.start(m_sequencer, this, 200);
end
fork
lock.start(m_sequencer, this, 200);
#20 grab.start(m_sequencer, this, 50);
join
end join
endtask: body
Ovm/Sequences/Slave
Overview
A slave sequence is used with a driver that responds to events on an interface rather than initiating them. This type of
functionality is usually referred to as a responder.
A responder can be implemented in several
ways, for instance a simple bus oriented
responder could be implemented as an
ovm_component interacting with a slave
interface and reading and writing from
memory according to the requests from the
bus master. The advantage of using a slave sequence is that the way in which the slave responds can be easily changed.
One interesting characteristic of responder functionality is that it is not usually possible to predict when a response to a
request will be required. For this reason slave sequences tend to be implemented as long-lasting sequences, i.e. they last
for the whole of the simulation providing the responder functionality.
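Such a long-lasting sequence is typically started once, in a non-blocking fork, from the test or the environment. A minimal sketch using the apb_slave_sequence developed later in this article, where the m_slave_agent handle name is illustrative:

// Starting a long-lasting responder sequence, e.g. from a test or env run method
apb_slave_sequence slave_seq;
slave_seq = apb_slave_sequence::type_id::create("slave_seq");
fork
  slave_seq.start(m_slave_agent.m_sequencer);
join_none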
In this article, two approaches to implementing a slave sequence are described:
• Using a single sequence item
• Using a sequence item for each slave phase (in the APB example used as an illustration there are two phases).
In both cases, the sequence and the driver loop through the following transitions:
1. Slave sequence sends a request to the driver - "Tell me what to do"
2. Driver detects a bus level request and returns the information back to the sequence - "This is what you should do"
3. Slave sequence does what it needs to do to prepare a response and then sends a response item to the driver - "Here
you go"
4. Driver completes the bus level response with the contents of the response item, completes handshake back to the
sequence - "Thank you"
Sequence item
The response properties of a slave sequence item should be marked as rand and the properties driven during the master
request should not. If you were to compare a master sequence_item and a slave sequence_item for the same bus protocol,
you would find that the rand and non-rand properties are reversed.
class apb_slave_seq_item extends ovm_sequence_item;
`ovm_object_utils(apb_slave_seq_item)
//------------------------------------------
// Data Members (Outputs rand, inputs non-rand)
//------------------------------------------
// Request fields (driven by the bus master - non-rand):
logic[31:0] addr;
logic[31:0] wdata;
logic rw;
// Response fields (driven by the slave - rand):
rand logic[31:0] rdata;
rand int delay;
rand bit slv_err;
//------------------------------------------
// Constraints
//------------------------------------------
constraint delay_bounds {
delay inside {[0:2]};
}
constraint error_dist {
slv_err dist {0 := 80, 1 := 20};
}
//------------------------------------------
// Methods
//------------------------------------------
extern function new(string name = "apb_slave_seq_item");
extern function void do_copy(ovm_object rhs);
extern function bit do_compare(ovm_object rhs, ovm_comparer comparer);
extern function string convert2string();
extern function void do_print(ovm_printer printer);
extern function void do_record(ovm_recorder recorder);
endclass:apb_slave_seq_item
The slave sequence itself runs in a forever loop: it sends a request item to the driver, updates its memory model on a write, then randomizes and sends back a response item:

task apb_slave_sequence::body;
apb_slave_agent_config m_cfg = apb_slave_agent_config::get_config(m_sequencer);
apb_slave_seq_item req;
apb_slave_seq_item rsp;
wait (m_cfg.APB.PRESETn);
forever begin
req = apb_slave_seq_item::type_id::create("req");
rsp = apb_slave_seq_item::type_id::create("rsp");
// Slave request:
start_item(req);
finish_item(req);
// Slave response:
if (req.rw) begin
memory[req.addr] = req.wdata;
end
start_item(rsp);
rsp.copy(req);
assert (rsp.randomize() with {if(!rsp.rw) rsp.rdata == memory[rsp.addr];});
finish_item(rsp);
end
endtask:body
The corresponding slave driver run loop handles the two phases of the APB protocol, getting a request item from the sequence during the setup phase and a response item during the access phase:

forever
begin
if (!APB.PRESETn) begin
APB.PREADY = 1'b0;
APB.PSLVERR = 1'b0;
@(posedge APB.PCLK);
end
else begin
// Setup Phase
seq_item_port.get_next_item(req);
seq_item_port.item_done();
// Access Phase
seq_item_port.get_next_item(rsp);
seq_item_port.item_done();
end
end
endtask: run_phase
Complete APB3 Slave Agent ( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Sequence items
In this implementation, we will use more than one sequence item (usually called phase level sequence items) to
implement the slave functionality. Depending on the bus protocol, at least two sequence items will be required; one to
implement the request phase and a second to implement the response phase. One way of looking at this is to consider it as
the single sequence implementation sliced in two. However, with more complex protocols there could be more than two
phase level sequence items.
The request sequence item will contain those data members that are not random and will, consequently, have no
constraints. The response sequence item will contain all those random data members in addition to some data members
that overlap with the request sequence item. Those overlapping members are needed by the response sequence item to
make some decisions, for example a read/write bit is required by the driver to know if it needs to drive the read data bus
with valid data.
For example, this is the APB3 slave setup (request) sequence item.
class apb_slave_setup_item extends apb_sequence_item;
`ovm_object_utils(apb_slave_setup_item)
//------------------------------------------
// Data Members (Outputs rand, inputs non-rand)
//------------------------------------------
logic[31:0] addr;
logic[31:0] wdata;
logic rw;
endclass
And this is the access (response) sequence item. Note the rand data members and the constraints.
class apb_slave_access_item extends apb_sequence_item;
`ovm_object_utils(apb_slave_access_item)
//------------------------------------------
// Data Members (Outputs rand, inputs non-rand)
//------------------------------------------
rand logic rw;
rand logic[31:0] rdata;
rand int delay;
rand bit slv_err;
constraint delay_bounds {
delay inside {[0:2]};
}
constraint error_dist {
slv_err dist {0 := 80, 1 := 20};
}
endclass
The run method of the driver always calls get_next_item() using the base sequence item type and then casts the received
sequence item to the appropriate type.
task apb_slave_driver::run();
apb_sequence_item item;
apb_slave_setup_item req;
apb_slave_access_item rsp;
forever
begin
if (!APB.PRESETn) begin
...
end
else begin
seq_item_port.get_next_item(item);
if ($cast(req, item))
begin
....
end
else
`uvm_error("CASTFAIL", "The received sequence item is not a request seq_item");
seq_item_port.item_done();
// Access Phase
seq_item_port.get_next_item(item);
if ($cast(rsp, item))
begin
....
end
else
`uvm_error("CASTFAIL", "The received sequence item is not a response seq_item");
seq_item_port.item_done();
end
end
endtask: run
Complete APB3 Slave Agent using multiple sequence items ( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Stimulus/Signal Wait
In the general case, synchronising to hardware events is taken care of in OVM testbenches by drivers and monitors.
However, there are some cases where it is useful for a sequence or a component to synchronise to a hardware event such
as a clock without interacting with a driver or a monitor. This can be facilitated by adding methods to an
object containing a virtual interface (usually a configuration object) that block until a hardware event occurs on the
virtual interface. A further refinement is to add a get_config() method which allows a component to retrieve a pointer to
its configuration object based on its scope within the OVM testbench component hierarchy.
import ovm_pkg::*;
`include "ovm_macros.svh"
class bus_agent_config extends ovm_object;

`ovm_object_utils(bus_agent_config)
virtual bus_if BUS; // This is the virtual interface with the signals to wait on
//
// Task: wait_for_reset
//
// This method waits for the end of reset.
task wait_for_reset;
@( posedge BUS.resetn );
endtask
//
// Task: wait_for_clock
//
// This method waits for n clock cycles.
task wait_for_clock( int n = 1 );
repeat( n ) begin
@( posedge BUS.clk );
end
endtask
//
// Task: wait_for_error
//
task wait_for_error;
@( posedge BUS.error );
endtask: wait_for_error
//
// Task: wait_for_no_error
//
task wait_for_no_error;
@( negedge BUS.error );
endtask: wait_for_no_error
//
// Function: get_config
//
// This method gets the my_config associated with component c. We check for
// the two kinds of error which may occur with this kind of
// operation.
//
static function bus_agent_config get_config( ovm_component c );
ovm_object o;
bus_agent_config t;
// First check that a configuration object is associated with this component,
// then check that it is of the expected type before returning it.
// (The configuration field name string shown here is illustrative.)
if( !c.get_config_object( "bus_agent_config", o, 0 ) ) begin
c.ovm_report_error("GET_CONFIG", "no bus_agent_config associated with this component");
return null;
end
if( !$cast( t, o ) ) begin
c.ovm_report_error("GET_CONFIG", "the object associated with this component is not a bus_agent_config");
return null;
end
return t;
endfunction
endclass: bus_agent_config
A sequence example:

class bus_seq extends ovm_sequence #(bus_seq_item);

`ovm_object_utils(bus_seq)
bus_seq_item req;
// Handle for the configuration:
bus_agent_config m_cfg;
task body;
req = bus_seq_item::type_id::create("req");
// Get the configuration, passing m_sequencer as the component parameter
m_cfg = bus_agent_config::get_config(m_sequencer);
// The sequence can now block on hardware events via the configuration object,
// for example waiting for 10 clock cycles:
m_cfg.wait_for_clock(10);
endtask: body

endclass: bus_seq
A component example:
//
// A coverage monitor that should ignore coverage collected during an error condition:
//
`ovm_component_utils(transfer_link_coverage_monitor)

// Configuration object handle and error tracking flag used by the run method:
bus_agent_config m_cfg;
bit no_error;

T pkt;
covergroup tlcm_1;
HDR: coverpoint pkt.hdr;
SPREAD: coverpoint pkt.payload {
bins spread[] = {[0:1023], [1024:8095], [8096:$]};
}
cross HDR, SPREAD;
endgroup: tlcm_1
function new(string name, ovm_component parent = null);
super.new(name, parent);
tlcm_1 = new;
endfunction
// The purpose of the run method is to monitor the state of the error
// line
task run;
no_error = 0;
// Get the configuration
m_cfg = bus_agent_config::get_config(this);
m_cfg.wait_for_reset; // Nothing valid until reset is over
no_error = 1;
forever begin
m_cfg.wait_for_error; // Blocks until an error occurs
no_error = 0;
m_cfg.wait_for_no_error; // Blocks until the error is removed
end
endtask: run
endclass: transfer_link_coverage_monitor
Ovm/Stimulus/Interrupts
In hardware terms, an interrupt is an event which triggers a new thread of processing. This new thread can either take the
place of the current execution thread, which is the case with an interrupt service routine or it can be used to wake up a
sleeping process to initiate hardware activity. Either way, the interrupt is treated as a sideband signal or event which is
not part of the main bus or control protocol.
In CPU based systems, interrupts are typically managed in hardware by interrupt controllers which can be configured to
accept multiple interrupt request lines, to enable and disable interrupts, prioritise them and latch their current status. This
means that a typical CPU only has one interrupt request line which comes from an interrupt controller and when the
CPU responds to the interrupt it accesses the interrupt controller to determine the source and takes the appropriate action
to clear it down. Typically, the role of the testbench is to verify the hardware implementation of the interrupt controller,
however, there are circumstances where interrupt controller functionality has to be implemented in the testbench.
In some systems, an interrupt service routine (ISR) is re-entrant, meaning that if an ISR is in progress and a higher priority
interrupt occurs, then the new interrupt triggers the execution of a new ISR thread which returns control to the first ISR
on completion.
A stimulus generation flow can be adapted to take account of hardware interrupts in one of several ways:
• Exclusive execution of an ISR sequence
• Prioritised execution of an ISR sequence or sequences
• Hardware triggered sequences
Exclusive ISR Sequence
The simplest way to model interrupt handling is to trigger the execution of a sequence that uses the grab() method to get
exclusive access to the target sequencer. This is a disruptive way to interrupt other stimulus generation that is taking
place, but it does emulate what happens when an ISR is triggered on a CPU. The interrupt sequence cannot be interrupted
itself, and must make an ungrab() call before it completes.
The interrupt monitor is usually implemented in a forked process running in a control or virtual sequence. The forked
process waits for an interrupt, then starts the ISR sequence. When the ISR sequence ends, the loop starts again by waiting
for the next interrupt.
//
// Sequence runs a bus intensive sequence on one thread
// which is interrupted by one of four interrupts
//
class int_test_seq extends ovm_sequence #(bus_seq_item);
`ovm_object_utils(int_test_seq)
task body;
set_ints setup_ints; // Main activity on the bus interface
isr ISR; // Interrupt service routine
int_config i_cfg; // Config containing wait_for_IRQx tasks
setup_ints = set_ints::type_id::create("setup_ints");
ISR = isr::type_id::create("ISR");
i_cfg = int_config::get_config(m_sequencer); // Get the config
endtask: body
endclass: int_test_seq
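The body of such a controlling sequence typically forks the main bus sequence against an interrupt monitoring loop. A minimal sketch of that fork, assuming the configuration object provides a generic wait_for_interrupt() task (the int_config class above provides wait_for_IRQx style tasks):

fork
  setup_ints.start(m_sequencer);    // Main bus activity
  forever begin
    i_cfg.wait_for_interrupt();     // Block until an interrupt occurs
    ISR.start(m_sequencer, this);   // The ISR grabs the sequencer, runs, then ungrabs
  end
join_any
disable fork;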
Inside the ISR, the first action in the body method is the grab(), and the last action is ungrab(). If the ungrab() call was
made earlier in the ISR sequence, then the main processing sequence would be able to resume sending sequence_items to
the bus interface.
//
// Interrupt service routine
//
// Looks at the interrupt sources to determine what to do
//
class isr extends ovm_sequence #(bus_seq_item);

`ovm_object_utils(isr)
bit error;
logic[31:0] read_data;
task body;
bus_seq_item req;
// Grab call - the first action in the ISR body, blocks other sequences
m_sequencer.grab(this);
req = bus_seq_item::type_id::create("req");
// Read from the GPO register to determine the cause of the interrupt
assert (req.randomize() with {addr == 32'h0100_0000; read_not_write == 1;});
start_item(req);
finish_item(req);
// Clear each active interrupt source in priority order, IRQ0 first:
if(req.read_data[0] == 1) begin
`ovm_info("ISR:BODY", "IRQ0 detected", OVM_LOW)
req.write_data[0] = 0;
start_item(req);
finish_item(req);
`ovm_info("ISR:BODY", "IRQ0 cleared", OVM_LOW)
end
if(req.read_data[1] == 1) begin
`ovm_info("ISR:BODY", "IRQ1 detected", OVM_LOW)
req.write_data[1] = 0;
start_item(req);
finish_item(req);
`ovm_info("ISR:BODY", "IRQ1 cleared", OVM_LOW)
end
if(req.read_data[2] == 1) begin
`ovm_info("ISR:BODY", "IRQ2 detected", OVM_LOW)
req.write_data[2] = 0;
start_item(req);
finish_item(req);
`ovm_info("ISR:BODY", "IRQ2 cleared", OVM_LOW)
end
if(req.read_data[3] == 1) begin
`ovm_info("ISR:BODY", "IRQ3 detected", OVM_LOW)
req.write_data[3] = 0;
start_item(req);
finish_item(req);
`ovm_info("ISR:BODY", "IRQ3 cleared", OVM_LOW)
end
start_item(req); // Take the interrupt line low
finish_item(req);
// Ungrab call - the last action before the ISR completes
m_sequencer.ungrab(this);
endtask: body
endclass: isr
Note that the way in which this ISR has been coded allows for a degree of prioritisation since each IRQ source is tested
in order from IRQ0 to IRQ3.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Prioritised ISR Sequence
A less disruptive approach to implementing interrupt handling using sequences is to use sequence prioritisation. Here,
the interrupt monitoring thread starts the ISR sequence with a priority that is higher than that of the main process. This
has the potential to allow other sequences with a higher priority than the ISR to gain access to the sequencer. Note that in
order to use sequence prioritisation, the sequencer arbitration mode needs to be set to SEQ_ARB_STRICT_FIFO,
SEQ_ARB_STRICT_RANDOM or SEQ_ARB_WEIGHTED.
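For example, the controlling sequence would select one of these arbitration algorithms on the target sequencer before starting the prioritised sequences:

// Arbitration mode that takes sequence priorities into account
m_sequencer.set_arbitration(SEQ_ARB_STRICT_FIFO);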
Prioritising ISR sequences also enables modelling of prioritised ISRs, i.e. the ability to be able to interrupt an ISR with a
higher priority ISR. However, since sequences are objects in their own right rather than simple sub-routines, multiple
ISR sequences can be active on a sequencer at the same time; all that prioritisation affects is their ability to send a sequence_item to the
driver. Therefore, whatever processing is happening in an ISR will still continue even if a higher priority ISR "interrupts"
it, which means that sequence_items from the first lower priority ISR could potentially get through to the driver.
The following code example demonstrates four ISR sequences which are started with different priorities, allowing a
higher priority ISR to execute in preference to a lower priority ISR.
class int_test_seq extends ovm_sequence #(bus_seq_item);

`ovm_object_utils(int_test_seq)
task body;
set_ints setup_ints; // Main sequence running on the bus
isr ISR0, ISR1, ISR2, ISR3; // Interrupt service routines
int_config i_cfg;
setup_ints = set_ints::type_id::create("setup_ints");
// ISR0 is the highest priority
ISR0 = isr::type_id::create("ISR0");
ISR0.id = "ISR0";
ISR0.i = 0;
// ISR1 is medium priority
ISR1 = isr::type_id::create("ISR1");
ISR1.id = "ISR1";
ISR1.i = 1;
// ISR2 is medium priority
ISR2 = isr::type_id::create("ISR2");
ISR2.id = "ISR2";
ISR2.i = 2;
// ISR3 is lowest priority
ISR3 = isr::type_id::create("ISR3");
ISR3.id = "ISR3";
ISR3.i = 3;
i_cfg = int_config::get_config(m_sequencer);
fork
setup_ints.start(m_sequencer); // Main bus activity
forever begin // Highest priority
i_cfg.wait_for_IRQ0();
ISR0.isr_no++;
ISR0.start(m_sequencer, this, HIGH);
end
forever begin // Medium priority
i_cfg.wait_for_IRQ1();
ISR1.isr_no++;
ISR1.start(m_sequencer, this, MED);
end
forever begin // Medium priority
i_cfg.wait_for_IRQ2();
ISR2.isr_no++;
ISR2.start(m_sequencer, this, MED);
end
forever begin // Lowest priority
i_cfg.wait_for_IRQ3();
ISR3.isr_no++;
ISR3.start(m_sequencer, this, LOW);
end
join_any
disable fork;
endtask: body
endclass: int_test_seq
Hardware Triggered Sequences

The third approach is to use a sequence that is triggered directly by hardware events, blocking on the wait_for_irqX() methods of a configuration object to pace its stimulus. In the following example, a control sequence starts each of four hardware accelerators in turn and waits for the corresponding interrupt before starting the next:

class dsp_con_seq extends ovm_sequence #(dsp_con_seq_item);

`ovm_object_utils(dsp_con_seq)

dsp_con_config cfg;
dsp_con_seq_item req;

task body;
cfg = dsp_con_config::get_config(m_sequencer);
req = dsp_con_seq_item::type_id::create("req");
cfg.wait_for_reset;
repeat(2) begin
do_go(4'h1); // Start Accelerator 0
cfg.wait_for_irq0; // Accelerator 0 complete
do_go(4'h2); // Start Accelerator 1
cfg.wait_for_irq1; // Accelerator 1 complete
do_go(4'h4); // Start Accelerator 2
cfg.wait_for_irq2; // Accelerator 2 complete
do_go(4'h8); // Start Accelerator 3
cfg.wait_for_irq3; // Accelerator 3 complete
end
cfg.wait_for_clock;
endtask: body
endclass:dsp_con_seq
Ovm/Sequences/Stopping
Once started, sequences should not be stopped.
There are two methods available in the sequence and sequencer API that allow sequences to be killed. However, neither
method checks that the driver is not currently processing any sequence_items, and the result is that any item_done()
or put() method called from the driver will either never reach the controlling sequence or will cause an OVM fatal error
to occur because the sequence's return pointer queue has been flushed.
The methods are:
• <sequence>.kill()
• <sequencer>.stop_sequences()
Do not use these methods unless you have some way of ensuring that the driver is inactive before making the call.
Ovm/Sequences/Layering
Sequence layering is a technique which allows sequences to be retargeted to a new agent, without change, using a sequence item translation process. The technique can also be used for protocol layering, where an upper layer can be converted to a lower layer without any loss of abstraction, and the translation process can provide additional processing between protocols.
The most convenient way to implement sequence layering is to use the ovm_layering_agent extension kit. This involves
using the ovm_layering_agent class together with a specialised sequence which takes care of the downstream
(requests) and upstream (response) sequence item translation. The layering agent comprises the following
sub-components:
• layering_sequencer - which accepts sequences from the upstream sequence
• translator_sequence - A sequence/driver hybrid which runs on the downstream agent's sequencer but has port
connections to the layering sequencer
• layering_monitor - which converts downstream analysis items and translates them into upstream analysis items
• layering_analysis_port - for publishing the analysis items
• layering_checker - An optional checker component
The layering monitor and the layering analysis port are optional components of the layering agent, their purpose is to
translate downstream analysis transactions to upstream transactions.
From the user's perspective, the ovm_layering_agent simplifies the implementation, since the user only has to implement
the translator_sequence and the layering_monitor and everything else is encapsulated within the agent.
create_mapping
In the build method of the component containing the layering_agent, the layering_agent is first constructed, and then its
create_mapping method is called. The arguments passed to the create_mapping() method are used when it goes through
its build and connection process to ensure that the right components and connections are set up. This method requires the
following input arguments:
• layering_name - A string used to describe the upstream layer
• layering_sequencer_wrapper - An ovm_object_wrapper for the upstream layer sequencer type - i.e.
<upstream_sequencer>::get_type()
• bus_agent_name - A string that matches the name of the downstream agent's handle - This can be in terms of the
current scope or a path to the downstream agent using the OVM path naming.
• translator_sequence_wrapper - An ovm_object_wrapper for the translation sequence - this sequence needs to be
implemented by the user.
• default_sequence_wrapper - An ovm_object_wrapper for an upstream sequence that will run on the upstream
sequencer. This defaults to null, which means that no default sequence is run and this is the preferred use model.
• bus_sequencer_name - A string giving the handle name of the sequencer within the downstream agent, this defaults
to "sequencer"
add_monitor
By default, the layering agent does not include a downstream to upstream analysis transaction translation. To include
this in the agent, the add_monitor() method needs to be called during build with the following arguments:
• monitor_wrapper - An ovm_object_wrapper for the user implemented monitor, which translates downstream
analysis transactions to upstream analysis transactions.
• checker_wrapper - An ovm_object_wrapper for a user implemented checker. This is an option and defaults to null,
which means that a checker is not built.
The monitor and the checker are both user implemented. The checker subscribes to the upstream monitor's analysis port.
In our example, the upstream sequencer, sequence_item and default sequence come from the flip agent, and the
downstream agent is the simple agent. The layering agent declaration and build method would be implemented as follows:
class env extends ovm_env;
`ovm_component_utils( env );
//
// Variable: simple_bus_agent
//
// This is the downstream bus agent. It implements the simple bus protocol,
// but will be forced into handling the flip protocol as well.
//
simple_agent simple_bus_agent;
//
// Variable: flip_layering_agent
//
// This is the upstream agent. It implements the flip protocol, which will be
// layered on top of the simple bus protocol.
//
ovm_layering_agent #( flip_item ) flip_layering_agent;
// Function: new
//
// The standard OVM constructor
//
function new( string name , ovm_component parent = null );
super.new( name , parent );
endfunction
//
// Function: build
//
// This method creates the downstream bus agent and the upstream layering agent.
// It then calls create_mapping and add_monitor on the upstream layering agent to
// establish a two way layering between the two.
//
function void build();
simple_bus_agent = new("simple_agent" , this );
flip_layering_agent = new("flip_agent" , this );
flip_layering_agent.create_mapping(
"flip" , flip_sequencer::get_type() ,
"simple_agent",
flip_simple_translator::get_type() ,
flip_basic_sequence::get_type()
);
flip_layering_agent.add_monitor(
flip_simple_monitor::get_type() ,
ovm_listener #( flip_item )::get_type()
);
endfunction: build
endclass: env
Note that the flip_layering_agent is parameterised with the flip_item. The ovm_listener added as the checker component
in the add_monitor call is a class which is part of the ovm_layering extension package and its default functionality is to
print the content of the analysis transactions that it receives.
The user implemented translator sequence pulls items from the layering sequencer, converts each one into a downstream sequence item, and returns the response back to the upstream sequence:

task body();
flip_item flip_req , flip_rsp;
simple_item simple_req , simple_rsp;
super.body();
forever begin
sequencer_port.get( flip_req );
simple_req = simple_item::type_id::create("simple_req");
start_item( simple_req );
simple_req.addr = flip_req.addr;
finish_item( simple_req );
get_response( simple_rsp );
flip_rsp = new();
flip_rsp.set_id_info(flip_req);
flip_rsp.addr = simple_rsp.addr;
response_port.write( flip_rsp );
end
endtask: body
endclass: flip_simple_translator
The layering monitor's write function converts each downstream analysis transaction (t) into an upstream item (f) and publishes it on the layering analysis port:

f.addr = t.addr;
f.data = ~t.data;
layering_ap.write( f );
endfunction: write
Use Models
Sequence layering can be used for vertical reuse, to translate between high level sequence items and lower
level sequence items. It can also be used with a register package to write abstract sequences based on register model
indirections or it can be used to model the layering of protocols such as communication protocols.
Register Layering
To be continued - this should be done with a register aware sequence and not using a whole new register_layering_agent
kit.
Protocol Layering
Sequence layering reflects the way in which many communication protocols are built up using layers which become increasingly abstract. For instance, the PCI Express protocol has a physical layer, on top of which is a datalink layer followed by a transaction layer followed by other layer types. Some protocol layering merges several types of upper layer onto lower layers; for instance, Ethernet is used as a transport mechanism for a number of different types of communication protocol. This is achieved by interleaving different types of data packet on the payload of successive Ethernet frames.
Sequence layering can be used to generate communication protocol stimulus by emulating the way in which these
protocols are structured. This is achieved by chaining layering agents in a layering flow. The merging of multiple streams
can be done by having several upstream sequences running on a layering sequencer and using the translator sequence to
effect the merge or interleaving of the traffic to the downstream sequence_item type.
The following example code shows how two layering agents can be chained together to translate a frame into packets
and then to translate the packets into simple bus transactions:
class env extends ovm_env;
`ovm_component_utils( env );
//
// Variable: frame_agent
//
// This is the upstream agent. It will force the mid level large packet
// agent into receiving frame packets.
//
//
// Variable: large_packet_agent
//
// This is the mid level agent. It will be forced into receiving frame
// packets, and will force the downstream bus agent to accept large packets
//
//
// Variable: simple_bus_agent
//
// This is the downstream bus agent. It implements the simple bus protocol,
// but will be forced into handling the large packet protocol as well.
//
simple_agent simple_bus_agent;
//
// Function: new
//
function new( string name , ovm_component parent = null );
super.new( name , parent );
endfunction
//
// Function: build
//
// This method constructs the three agents and creates mappings between them.
//
function void build();
// The three agents are constructed first (as in the previous example),
// then the two layering mappings are created:
frame_agent.create_mapping(
"frame", frame_sequencer::get_type(),
"large_packet_agent",
frame_large_packet_translator::get_type(),
frame_seq::get_type(), "layering_sequencer"
);
large_packet_agent.create_mapping(
"large_packet", large_packet_sequencer::get_type(),
"simple_bus_agent",
large_packet_simple_translator::get_type(),
large_packet_seq::get_type()
);
large_packet_agent.add_monitor(
large_packet_simple_monitor::get_type()
);
endfunction:build
endclass: env
Ovm/Registers
Learn all about methodology related to using the UVM Register Package on OVM. Note: UVM Register Package for
OVM is available for download on both Verification Academy and OvmWorld
Topic Overview
Introduction
The UVM register model provides a way of tracking the register content of a DUT and a convenience layer for accessing
register and memory locations within the DUT. This package is available for use with OVM version 2.1.2; please see
http://verificationacademy.com/verification-methodology for a link to download both OVM 2.1.2 and the UVM
Register package for OVM.
The register model abstraction reflects the structure of a hardware-software register specification, since that is the
common reference specification for hardware design and verification engineers, and it is also used by software engineers
developing firmware layer software. It is very important that all three groups reference a common specification and it is
crucial that the design is verified against an accurate model.
The UVM register model is designed to facilitate productive verification of programmable hardware. When used
effectively, it raises the level of stimulus abstraction and makes the resultant stimulus code straightforward to reuse,
either when there is a change in the DUT register address map, or when the DUT block is reused as a sub-component.
VIP Developer Viewpoint
In order to support the use of the UVM register package, the developer of an On Chip Bus verification component needs
to develop an adapter class. This adapter class is responsible for translating between the UVM register package's generic
register sequence_items and the VIP specific sequence_items. Developing the adapter requires knowledge of the target
bus protocol and how the different fields in the VIP sequence_item relate to that protocol.
Once the adapter is in place, it can be used by the testbench developer to integrate the register model into the
UVM testbench.
To understand how to create an adapter, the suggested route through the register material is:
1 - Integrating - Describes how the adapter fits into the overall testbench architecture (Relevance: Background)
Integration Pre-requisites
If you are integrating a register model into a testbench, then the pre-requisites are that a register model has been written
and that there is an adaptor class available for the bus agent that is going to be used to interact with the DUT bus
interface.
Integration Process
In the testbench, the register model object needs to be constructed and a handle needs to be passed around the testbench
environment using either the configuration and/or the resource mechanism.
In order to drive an agent from the register model an association needs to be made between it and the target sequencer so
that when a sequence calls one of the register model methods a bus level sequence_item is sent to the target bus driver.
The register model is kept updated with the current hardware register state via the bus agent monitor, and a predictor
component is used to convert bus agent analysis transactions into updates of the register model.
The testbench integrator might also be involved with implementing other analysis components which reference the
register model, and these would include a scoreboard and a functional coverage monitor.
For the testbench integrator, the recommended route through the register material is outlined in the table below:
2 - Integrating - Overview of the register model stimulus and prediction architecture (Relevance: Essential)
10 - Scoreboarding - Implementing a register model based scoreboard (Relevance: Important if you need to maintain a scoreboard)
11 - FunctionalCoverage - Implementing functional coverage using the register model (Relevance: Important if you need to enhance a functional coverage model)
Ovm/Registers/Adapter
Overview
The UVM register model access methods generate bus read and write cycles using generic register transactions. These
transactions need to be adapted to the target bus sequence_item. The adaption needs to be bidirectional in order to
convert register transaction requests to bus sequence items, and to be able to convert bus sequence item responses back to
register transactions. The adaption should be implemented by extending the uvm_reg_adapter base class.
Implementing An Adapter
The generic register item is implemented as a struct in order to minimise the amount of memory resource it uses. The
struct is defined as type uvm_reg_bus_op and this contains 6 fields: kind (read or write), addr (the bus address), data (the write or read data), n_bits (the number of bits being transferred), byte_en (the byte enables) and status (UVM_IS_OK, UVM_NOT_OK or UVM_HAS_X).
These fields need to be mapped to/from the target bus sequence item and this is done by extending the uvm_reg_adapter
class which contains two methods - reg2bus() and bus2reg() which need to be overlaid. The adapter class also contains
two property bits - supports_byte_enable and provides_responses, these should be set according to the functionality
supported by the target bus and the target bus agent.
The uvm_reg_adapter methods and properties are:

reg2bus - Overload to convert generic register access items to target bus agent sequence items
bus2reg - Overload to convert target bus sequence items to register model items
supports_byte_enable - Set to 1 if the target bus and the target bus agent support byte enables, else set to 0
provides_responses - Set to 1 if the target agent driver sends separate response sequence_items that require response handling
Taking the APB bus as a simple example; the bus sequence_item, apb_seq_item, contains 3 fields (addr, data, we) which
correspond to address, data, and bus direction. Address and data map directly, and the APB item write enable maps to the
register item kind field. When converting an APB bus response to a register item the status field will be set to
UVM_IS_OK since the APB agent does not support the SLVERR status bit defined by the APB protocol.
Since the APB bus does not support byte enables, the supports_byte_enable bit is set to 0 in the constructor of the
APB adapter.
The provides_responses bit should be set if the agent driver returns a separate response item (i.e. put(response), or
item_done(response)) from its request item - see Ovm/Driver/Sequence API. This bit is used by the register model
layering code to determine whether to wait for a response or not, if it is set and the driver does not return a response, then
the stimulus generation will lock-up.
Since the APB driver being used completes each transfer with item_done(), it therefore uses a single item for both request and response, so
the provides_responses bit is also set to 0 in the constructor.
The example code for the register to APB adapter is as follows:
class reg2apb_adapter extends uvm_reg_adapter;

`uvm_object_utils(reg2apb_adapter)

function new(string name = "reg2apb_adapter");
super.new(name);
// The APB agent does not support byte enables and uses a single item
// for both request and response:
supports_byte_enable = 0;
provides_responses = 0;
endfunction

endclass: reg2apb_adapter
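The reg2bus() and bus2reg() conversion methods implemented inside such an adapter follow the pattern sketched below. This sketch assumes the apb_seq_item fields addr, data and we described above; the field names in the downloadable example may differ.

virtual function uvm_sequence_item reg2bus(const ref uvm_reg_bus_op rw);
  apb_seq_item apb = apb_seq_item::type_id::create("apb");
  apb.we   = (rw.kind == UVM_WRITE); // Register kind maps to the APB write enable
  apb.addr = rw.addr;
  apb.data = rw.data;
  return apb;
endfunction: reg2bus

virtual function void bus2reg(uvm_sequence_item bus_item, ref uvm_reg_bus_op rw);
  apb_seq_item apb;
  if(!$cast(apb, bus_item)) begin
    `uvm_fatal("CONVERT_APB2REG", "Bus item is not of type apb_seq_item")
  end
  rw.kind   = apb.we ? UVM_WRITE : UVM_READ;
  rw.addr   = apb.addr;
  rw.data   = apb.data;
  rw.status = UVM_IS_OK; // The APB agent does not return SLVERR
endfunction: bus2reg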
The adapter code can be found in the file reg2apb_adapter.svh in the /agents/apb_agent directory in the example which
can be downloaded:
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Burst Support
The register model access methods support single accesses to registers and this is in line with the usual use model for
register accesses - they are accessed individually, not as part of a burst. If you have registers which need to be tested for
burst mode accesses, then the recommended approach is to initiate burst transfers from a sequence running directly on
the target bus agent.
If you use burst mode to access registers, then the predictor implementation needs to be able to identify bursts and
convert each beat into a predict() call on the appropriate register.
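For example, a user defined burst-aware predictor might, for each beat of an observed burst, look up the register by address in the map and update its mirror. A sketch, where reg_model, beat_addr and beat_data are illustrative names:

// For each beat of the observed burst, inside the predictor's write() method:
uvm_reg rg;
rg = reg_model.default_map.get_reg_by_offset(beat_addr);
if(rg != null) begin
  void'(rg.predict(beat_data)); // Update the mirror for this register
end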
Ovm/Registers/Integrating
Register Model Testbench Integration - Testbench Architecture Overview
Within a UVM testbench, a register model is used either as a means of looking up a mirror of the current DUT hardware
state or as means of accessing the hardware via the front or back door and updating the register model database.
For those components or sequences that use the register model, the register model has to be constructed and its handle
passed around using a configuration object or a resource. Components and sequences are then able to use the register
model handle to call methods to access data stored within it, or to access the DUT.
In order to make back door accesses to the DUT, the register model uses hdl paths which are used by simulator runtime
database access routines to peek and poke the hardware signals corresponding to the register. The register model is
updated automatically at the end of each back door access cycle. The way in which this update is done is by calling the
predict() method which updates the accessed register's mirrored value. In order for back door accesses to work, no further
integration with the rest of the testbench structure is required.
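Back door accesses do rely on the hdl paths having been registered with the register model when it was built; a minimal sketch, where the path and register names are illustrative:

// In the register block build() method - anchor the block in the DUT hierarchy:
add_hdl_path("tb_top.DUT");
// For each register, describe which RTL signal implements it (name, offset, size):
ctrl_reg.add_hdl_path_slice("ctrl", 0, 14);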
The register model supports front door accesses to the DUT by generating generic register transactions which are
converted to target bus agent specific sequence_items before being sent to the target bus agent's sequencer, and by
converting any returned response information back from bus agent sequence_items into register transactions. This
bidirectional conversion process takes place inside an adapter class which is specific to the target bus agent. There is
really only one way in which the stimulus side of the access is integrated with the testbench, but the update, or prediction, of
the register model content at the end of a front door access can occur using one of three models and these are:
• Auto Prediction
• Explicit Prediction
• Passive Prediction
Auto Prediction
Auto prediction is the default mode of operation for the register access. In this mode, the various access methods which
cause front door accesses to take place automatically call a predict() method using either the data that was written to the
register, or the data read back from the register at the end of the bus cycle.
This mode of operation is the simplest to implement, but suffers from the drawback that it can only keep the register
model up to date with the transfers that it initiates. If any other sequences directly access the target sequencer to update
register content, or if there are register accesses from other DUT interfaces, then the register model will not be updated.
Explicit Prediction

In explicit prediction, a predictor component observes the bus agent monitor's analysis transactions and calls predict() on the register model for every register access seen on the bus, whether or not the register model initiated it.
The main advantage of using explicit prediction is that it keeps the register model up to date with all accesses that occur
on the target bus interface. The configuration also has more scope for supporting vertical reuse where accesses to the
DUT may occur from other bus agents via bus bridges or interconnect fabrics.
During vertical reuse, an environment that supports explicit prediction can also support passive prediction as a result of
re-configuration.
Passive Prediction
In passive prediction the register model takes no active part in any accesses to the DUT, but it is kept up to date by the
predictor when any front door register accesses take place.
In this scenario there are a number of places where the predictor could be placed. If it is placed on the Master 0
AXI agent, then the register model is updated only as a result of accesses that it generates. If the predictor is placed on
the slave AXI port agent, then the predictor will be able to update the register model on accesses from both AXI masters.
However, if the predictor is placed on the APB bus it will be able to verify that the correct address mapping is being used
for the APB peripheral and that bus transfers are behaving correctly end to end.
In this example, there could also be up to three address translations taking place (AXI Master 0 to AXI Slave, AXI
Master 1 to AXI Slave, AXI Slave to APB) and the predictor used would need to use the right register model map when
making the call to predict() a target register's mirrored value.
The register model you have will contain one or more maps which define the register address mapping for a specific bus
interface. In most cases, block level testbenches will only require one map, but the register model has been designed to
cope with the situation where a DUT has multiple bus interfaces which could quite easily occur in a multi-master SoC.
Each map will need to have the adapter and predictor layers specified.
Ovm/Registers/Integration
The integration process for the register model involves constructing it and placing handles to it inside the relevant
configuration objects, and then creating the adaption layers.
In the case where the SPI is part of a cluster, then the whole cluster register model (containing the SPI register model as a
block) would be created in the test, and then the SPI env configuration object would be passed a handle to the SPI
register block as a sub-component of the overall register model:
//
// From the build method of the PSS test base class:
//
//
// PSS - Peripheral sub-system with a hierarchical register model with the handle pss_reg
//
// SPI is a sub-block of the PSS which has a register model handle in its env config object
//
m_spi_env_cfg.spi_rm = pss_reg.spi;
//
// From the SPI env
//
// Register layering adapter:
reg2apb_adapter reg2apb;
// Register predictor:
uvm_reg_predictor #(apb_seq_item) apb2reg_predictor;
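The stimulus side of the integration is completed in the first half of the connect method, where the adapter is created and the register model map is associated with the bus agent's sequencer; a minimal sketch, assuming the APB agent exposes its sequencer as m_apb_agent.m_sequencer:

// First half of the SPI env connect method - stimulus path:
reg2apb = reg2apb_adapter::type_id::create("reg2apb");
// Register accesses via the spi_rm model are now converted into apb_seq_items
// and executed by the APB agent's driver:
m_cfg.spi_rm.APB_map.set_sequencer(m_apb_agent.m_sequencer, reg2apb);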
Register Prediction
By default, the register model uses a process called auto_prediction to update the register data base each time a read or
write transaction that the model has generated completes. However, auto-prediction cannot react to situations where there
are multiple bus masters and there are bus cycles which are not initiated by the register model. This problem can be
avoided by disabling the auto_prediction and using a bus monitor combined with an uvm_reg_predictor component.
The uvm_reg_predictor component is derived from a uvm_subscriber and is parameterised with the type of the target bus
analysis transaction. It contains handles for the register to target bus adapter and the register model map that is being
used to interface to the bus agent sequencer. It uses the register adapter to convert the analysis transaction from the
monitor to a register transaction, then it looks up the register by address in the register model's bus specific map and
modifies the contents of the appropriate register.
The uvm_reg_predictor component is part of the UVM library and does not need to be extended. However, to integrate it
the following things need to be taken care of:
1. Declare the predictor using the target bus sequence_item as a class specialisation parameter
2. Create the predictor in the env build() method
3. In the connect method - set the predictor map to the target register model register map
4. In the connect method - set the predictor adapter to the target agent adapter class
5. In the connect method - switch off the auto-prediction on the target register model register map
6. In the connect method - connect the predictor analysis export to the target agent analysis port
A predictor should be included at every place where there is a bus monitor on the target bus. The code required is shown
below and is from the second half of the SPI connect method code.
//
// Register prediction part:
//
// Replacing implicit register model prediction with explicit prediction
// based on APB bus activity observed by the APB agent monitor
// Set the predictor map:
apb2reg_predictor.map = m_cfg.spi_rm.APB_map;
// Set the predictor adapter:
apb2reg_predictor.adapter = reg2apb;
// Disable the register model's auto-prediction
m_cfg.spi_rm.APB_map.set_auto_predict(0);
// Connect the predictor to the bus agent monitor analysis port
m_apb_agent.ap.connect(apb2reg_predictor.bus_in);
end
The code excerpts shown come from the spi_tb/tests/spi_test_base.svh file (register model construction) and the
spi_tb/env/spi_env.svh file (adapter and predictor) from the example download:
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Registers/RegisterModelOverview
In order to use the UVM register model effectively, it is important to have a mental model of how it is
structured so that you can find your way around it.
The register model is implemented using five main building blocks - the register field; the register; the memory; the
register block; and the register map. The register field models a collection of bits that are associated with a function
within a register. A field will have a width and a bit offset position within the register. A field can have different access
modes such as read/write, read only or write only. A register contains one or more fields. A register block corresponds to
a hardware block and contains one or more registers. A register block also contains one or more register maps.
A memory region in the design is modelled by a uvm_mem which has a range, or size, and is contained within a register
block and has an offset determined by a register map. A memory region is modelled as either read only, write only or
read-write with all accesses using the full width of the data field. A uvm_mem does not contain fields.
The register map defines the address space offsets of one or more registers in its parent block from the point of view of a
specific bus interface. A group of registers may be accessible from another bus interface by a different set of address
offsets and this can be modelled by using another address map within the parent block. The address map is also used to
specify which bus agent is used when a register access takes place and which adapter is used to convert generic register
transfers to/from target bus transfer sequence_items.
This structure is illustrated in the following table:

uvm_reg_field - Models a group of bits within a register with a width, bit offset and access mode
uvm_reg - Contains one or more fields
uvm_mem - Models a memory region with a range (size) and width
uvm_reg_block - Corresponds to a hardware block; contains registers, memories, register maps and possibly sub-blocks
uvm_reg_map - Specifies register, memory and sub-block address offsets, target bus interface
In the case of a block level register model, the register block will most likely only contain a collection of registers and a
single address map. A cluster, or sub-system, level register model will have a register block which will contain other
register model blocks for each of the sub-components in the cluster, and register maps for each of the bus interfaces. At
this level, a register map can specify an offset address for a sub-block and the accesses to the registers within that
sub-block will be adjusted to generate the right address within the cluster level map.
The register model structure is designed to be modular and reusable. At higher levels of integration, the cluster register
model block can be a sub-block within, say, a SoC register block. Again, multiple register maps can be used to relocate
the address offsets for the registers within the sub-blocks according to the bus interface used to access them.
The register model uses two special types for data and address values:

uvm_reg_data_t - 64 bits by default, controlled by `UVM_REG_DATA_WIDTH - Used for register data fields (uvm_reg, uvm_reg_field, uvm_mem)
uvm_reg_addr_t - 64 bits by default, controlled by `UVM_REG_ADDR_WIDTH - Used for register model addresses (uvm_reg_map)
Both of these types are based on the SystemVerilog bit type and are therefore 2 state. By default, they are 64 bits wide,
but the width of each type is determined by a `define which can be overloaded by specifying a new define on the
compilation command line.
#
# To change the width of the uvm_reg_data_t type
# using Questa
#
vlog +incdir+$(UVM_HOME)/src +define+UVM_REG_DATA_WIDTH=24 $(UVM_HOME)/src/uvm_pkg.sv
Since this is a compilation variable it requires that you recompile the UVM package and it also has a global effect
impacting all register models, components and sequences which use these data types. Therefore, although it is possible to
change the default width of this variable, it is not recommended since it could have potential side effects.
Ovm/Registers/ModelStructure
Register Model Layer Overview
The UVM register model is built up in layers:
Block - Collection of registers (hardware block level), or sub-blocks (sub-system level) with one or more maps. May also
include memories.
Map - Named address map which locates the offset address of registers, memories or sub-blocks. Also defines the target
sequencer for register accesses from the map.
Register - Collection of one or more fields at an offset within a map.
Field - Models a group of bits within a register which corresponds to a hardware function.
Memory - Address range within a map where memory accesses take place.
Register Fields
The bottom layer is the field which corresponds to one or more bits within a register. Each field definition is extended
from the uvm_reg_field class. Fields are contained within an uvm_reg class and they are constructed and then configured
using the configure() method:
//
// uvm_reg_field configure() prototype:
//
function void configure(uvm_reg        parent,                   // The register containing the field
                        int unsigned   size,                     // Field width in bits
                        int unsigned   lsb_pos,                  // Bit offset of the field within the register
                        string         access,                   // Access mode, e.g. "RW", "RO", "WO"
                        bit            volatile,                 // Whether the field can change between accesses
                        uvm_reg_data_t reset,                    // The reset value of the field
                        bit            has_reset,                // Whether the field is reset
                        bit            is_rand,                  // Whether the field can be randomized
                        bit            individually_accessible); // Whether the field occupies its own byte lane(s)
How the configure method is used is shown in the register code example.
When the field is created, it takes its name from the string passed to its create method which by convention is the same as
the name of its handle.
Registers
Registers are modelled by extending the uvm_reg class which is a container for field objects. The overall characteristics
of the register are defined in its constructor method:
//
// uvm_reg constructor prototype:
//
function new (string name="", // Register name
int unsigned n_bits, // Register width in bits
int has_coverage); // Coverage model supported by the register
The register class contains a build method which is used to create and configure the fields. Note that this build method is
not called by the UVM build phase, since the register is an uvm_object rather than an uvm_component. The following
code example shows how the SPI master CTRL register model is put together.
//--------------------------------------------------------------------
// ctrl
//--------------------------------------------------------------------
class ctrl extends uvm_reg;
`uvm_object_utils(ctrl)

// Field handles, created and configured in the build method:
rand uvm_reg_field acs;
rand uvm_reg_field ie;
rand uvm_reg_field lsb;
rand uvm_reg_field tx_neg;
rand uvm_reg_field rx_neg;
rand uvm_reg_field go_bsy;
uvm_reg_field reserved;
rand uvm_reg_field char_len;
//--------------------------------------------------------------------
// new
//--------------------------------------------------------------------
function new(string name = "ctrl");
super.new(name, 14, UVM_NO_COVERAGE);
endfunction
//--------------------------------------------------------------------
// build
//--------------------------------------------------------------------
virtual function void build();
acs = uvm_reg_field::type_id::create("acs");
ie = uvm_reg_field::type_id::create("ie");
lsb = uvm_reg_field::type_id::create("lsb");
tx_neg = uvm_reg_field::type_id::create("tx_neg");
rx_neg = uvm_reg_field::type_id::create("rx_neg");
go_bsy = uvm_reg_field::type_id::create("go_bsy");
reserved = uvm_reg_field::type_id::create("reserved");
char_len = uvm_reg_field::type_id::create("char_len");
endfunction
endclass
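Each field created in the build method is then configured with its size, bit position and access mode using the configure() method described earlier. For example, the go_bsy field (a single read/write bit) might be configured as follows inside build() - the argument values shown here are illustrative:

// parent, size, lsb_pos, access, volatile, reset, has_reset, is_rand, individually_accessible
go_bsy.configure(this, 1, 8, "RW", 0, 1'b0, 1, 1, 0);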
When a register is added to a block it is created, causing its fields to be created and configured, and then it is configured
before it is added to one or more reg_maps to define its memory offset. The prototype for the register configure() method
is as follows:
//
// uvm_reg configure() prototype:
//
function void configure(uvm_reg_block blk_parent,            // The parent register block
                        uvm_reg_file regfile_parent = null,  // Optional register file parent
                        string hdl_path = "");               // HDL path used for back door accesses
Memories
Memories are modelled by extending the uvm_mem class. The register model treats memories as regions, or memory
address ranges where accesses can take place. Unlike registers, memory values are not stored because of the workstation
memory overhead involved.
The range and access type of the memory is defined via its constructor:
//
// uvm_mem constructor prototype:
//
function new (string name, // Name of the memory model
longint unsigned size, // The address range
int unsigned n_bits, // The width of the memory in bits
string access = "RW", // Access - one of "RW" or "RO"
int has_coverage = UVM_NO_COVERAGE); // Functional coverage
For example:

class mem_1_model extends uvm_mem;

`uvm_object_utils(mem_1_model)

function new(string name = "mem_1_model");
// The range and width values shown here are illustrative:
super.new(name, 'h1000, 32, "RW", UVM_NO_COVERAGE);
endfunction

endclass: mem_1_model
Register Maps
The purpose of the register map is twofold. The map provides information on the offset of the registers, memories and/or
register blocks contained within it. The map is also used to identify which bus agent register based sequences will be
executed on; however, this part of the register map's functionality is set up when integrating the register model into a
UVM testbench.
In order to add a register or a memory to a map, the add_reg() or add_mem() methods are used. The prototypes for these
methods are very similar:
//
// Prototype for the add_reg method (add_mem is very similar):
//
function void add_reg(uvm_reg           rg,                // Register object handle
                      uvm_reg_addr_t    offset,            // Register address offset within the map
                      string            rights = "RW",     // Access policy from this map
                      bit               unmapped = 0,      // If true, register does not appear in the address map
                      uvm_reg_frontdoor frontdoor = null); // Optional user defined front door access
There can be several register maps within a block, each one can specify a different address map and a different target bus
agent.
Register Blocks
The next level of hierarchy in the UVM register structure is the uvm_reg_block. This class can be used as a container for
registers and memories at the block level, representing the registers at the hardware functional block level, or as a
container for multiple blocks representing the registers in a hardware sub-system or a complete SoC organised as blocks.
In order to define register and memory address offsets the block contains an address map object derived from
uvm_reg_map. A register map has to be created within the register block using the create_map method:
//
// Prototype for the create_map method
//
function uvm_reg_map create_map(string name, // Name of the map handle
uvm_reg_addr_t base_addr, // The maps base address
int unsigned n_bytes, // Map access width in bytes
uvm_endianness_e endian, // The endianess of the map
bit byte_addressing=0); // Whether byte_addressing is supported
//
// Example:
//
AHB_map = create_map("AHB_map", 'h0, 4, UVM_LITTLE_ENDIAN);
The first map to be created within a register block is assigned to the default_map member of the register block.
The following code example is for the SPI master register block, this declares the register class handles for each of the
registers in the SPI master, then the build method constructs and configures each of the registers before adding them to
the APB_map reg_map at the appropriate offset address:
//-------------------------------------------------------------------
// spi_reg_block
//--------------------------------------------------------------------
class spi_reg_block extends uvm_reg_block;
`uvm_object_utils(spi_reg_block)
rand rxtx0 rxtx0_reg;
rand rxtx1 rxtx1_reg;
rand rxtx2 rxtx2_reg;
rand rxtx3 rxtx3_reg;
rand ctrl ctrl_reg;
rand divider divider_reg;
rand ss ss_reg;

uvm_reg_map APB_map; // Block map
//--------------------------------------------------------------------
// new
//--------------------------------------------------------------------
function new(string name = "spi_reg_block");
super.new(name, UVM_NO_COVERAGE);
endfunction
//--------------------------------------------------------------------
// build
//--------------------------------------------------------------------
virtual function void build();
rxtx0_reg = rxtx0::type_id::create("rxtx0");
rxtx0_reg.configure(this, null, "");
rxtx0_reg.build();
rxtx1_reg = rxtx1::type_id::create("rxtx1");
rxtx1_reg.configure(this, null, "");
rxtx1_reg.build();
rxtx2_reg = rxtx2::type_id::create("rxtx2");
rxtx2_reg.configure(this, null, "");
rxtx2_reg.build();
rxtx3_reg = rxtx3::type_id::create("rxtx3");
rxtx3_reg.configure(this, null, "");
rxtx3_reg.build();
ctrl_reg = ctrl::type_id::create("ctrl");
ctrl_reg.configure(this, null, "");
ctrl_reg.build();
divider_reg = divider::type_id::create("divider");
divider_reg.configure(this, null, "");
divider_reg.build();
ss_reg = ss::type_id::create("ss");
ss_reg.configure(this, null, "");
ss_reg.build();
// Create the APB map and add each register at its offset
// (map name, base address, access width in bytes, endianess):
APB_map = create_map("APB_map", 'h0, 4, UVM_LITTLE_ENDIAN);
APB_map.add_reg(rxtx0_reg, 32'h00000000, "RW");
APB_map.add_reg(rxtx1_reg, 32'h00000004, "RW");
APB_map.add_reg(rxtx2_reg, 32'h00000008, "RW");
APB_map.add_reg(rxtx3_reg, 32'h0000000C, "RW");
APB_map.add_reg(ctrl_reg, 32'h00000010, "RW");
APB_map.add_reg(divider_reg, 32'h00000014, "RW");
APB_map.add_reg(ss_reg, 32'h00000018, "RW");
lock_model();
endfunction
endclass
Note that the final statement in the build method is the lock_model() method. This is used to finalise the address mapping
and to ensure that the model cannot be altered by another user.
The register block in the example can be used for block level verification, but if the SPI is integrated into a larger design,
the SPI register block can be combined with other register blocks in an integration level block to create a new register
model. The cluster block incorporates each sub-block and adds them to a new cluster level address map. This process can
be repeated and a full SoC register map might contain several nested layers of register blocks. The following code
example shows how this would be done for a sub-system containing the SPI master and a number of other peripheral
blocks.
package pss_reg_pkg;
import uvm_pkg::*;
`include "uvm_macros.svh"
import spi_reg_pkg::*;
import gpio_reg_pkg::*;
class pss_reg_block extends uvm_reg_block;

`uvm_object_utils(pss_reg_block)

rand spi_reg_block spi;
rand gpio_reg_block gpio;

uvm_reg_map AHB_map;

function new(string name = "pss_reg_block");
super.new(name, UVM_NO_COVERAGE);
endfunction

virtual function void build();
AHB_map = create_map("AHB_map", 'h0, 4, UVM_LITTLE_ENDIAN);

spi = spi_reg_block::type_id::create("spi");
spi.configure(this);
spi.build();
AHB_map.add_submap(this.spi.default_map, 0);
gpio = gpio_reg_block::type_id::create("gpio");
gpio.configure(this);
gpio.build();
AHB_map.add_submap(this.gpio.default_map, 32'h100);
lock_model();
endfunction: build
endclass: pss_reg_block
endpackage: pss_reg_pkg
If the hardware register space can be accessed by more than one bus interface, then the block can contain multiple
address maps to support alternative address maps. In the following example, two maps are created and have memories
and registers added to them at different offsets:
//
// Memory sub-system (mem_ss) register & memory block
//
class mem_ss_reg_block extends uvm_reg_block;
`uvm_object_utils(mem_ss_reg_block)
// Memories
rand mem_1_model mem_1;
// Map
uvm_reg_map AHB_map;
uvm_reg_map AHB_2_map;
virtual function void build();
// Create the memory and the two maps, then add the memory to each map
// at a different offset - the offsets shown here are illustrative:
mem_1 = mem_1_model::type_id::create("mem_1");
mem_1.configure(this, "");
AHB_map = create_map("AHB_map", 'h0, 4, UVM_LITTLE_ENDIAN);
AHB_2_map = create_map("AHB_2_map", 'h0, 4, UVM_LITTLE_ENDIAN);
AHB_map.add_mem(mem_1, 32'h0000_0000, "RW");
AHB_2_map.add_mem(mem_1, 32'h0010_0000, "RW");
lock_model();
endfunction: build
endclass: mem_ss_reg_block
Ovm/Registers/QuirkyRegisters
Introduction
Quirky registers are just like any other register described using the register base class except for one thing. They have
special (quirky) behavior that either can't be described using the register base class, or is hard to describe using the
register base class. The register base class can be used to describe the behavior of many different registers - for example
clear-on-read (RC), write-one-to-set (W1S), write-zero-to-set (W0S). These built-in behaviors are set using attributes.
Setting the attribute causes the built-in behavior. Built-in behaviors can be used for the majority of register
descriptions, but most verification environments have a small number of special registers with behavior that can't be
described by the built-in attributes. These are quirky registers.
Examples of quirky registers include 'clear on the third read', 'read a register that is actually a collection of bits from 2
other registers'. These are registers that have very special behavior. Quirky registers are outside the register base class
functionality and are most easily implemented by extending the base class functions or by adding callbacks.
The register base class library is very powerful and offers many ways to change behavior. The easiest implementations
extend the underlying register or field class, and redefine certain virtual functions or tasks, like set(), or get(), or read().
In addition to replacing functions and tasks, callbacks can be added. A callback is called at specific times from within the
base class functions. For example, the post_predict() callback is called from the uvm_reg_field::predict() call. Adding a
post_predict() callback, allows the field value to be changed (predicted). The UVM user guide and reference guide have
much more information on register function and task overloading as well as callback definition.
FIFO Register

A FIFO register stores a queue of values - writes push onto the FIFO and reads pop from it. This quirky behavior is available in the register package as the uvm_reg_fifo base class, which can simply be extended:

class fifo_reg extends uvm_reg_fifo;

`uvm_object_utils(fifo_reg)

endclass
ID Register
A snapshot of some code that implements an ID register is below. (See the full example for the complete text).
static int id_register_pointer = 0;
static int id_register_pointer_max = 10;
static int id_register_value[] =
'{'ha0, 'ha1, 'ha2, 'ha3, 'ha4,
'ha5, 'ha6, 'ha7, 'ha8, 'ha9};
ID Register Model
The ID register model is implemented below. The register itself is similar to a regular register, except the ID register uses
a new kind of field - the id_register_field.
The ID register field implements the specific functionality of the ID register in this case.
// The ID Register.
// Just a register which has a special field - the
// ID Register field.
class id_register extends uvm_reg;
id_register_field F1;
`uvm_object_utils(id_register)
endclass : id_register
The id_register builds the fields in the build() routine, just like any other register. In addition to building the field, a
callback is created and registered with the field. The callback in this case implements the post_predict() method, and will
be called from the underlying predict() code in the register field class.
begin
id_register_field_cbs cb = new();
// Setting the name makes the messages prettier.
cb.set_name("id_register_field_cbs");
uvm_reg_field_cb::add(F1, cb);
end
int id_register_pointer = 0;
int id_register_pointer_max = 10;
int id_register_value[] =
'{'ha0, 'ha1, 'ha2, 'ha3, 'ha4,
'ha5, 'ha6, 'ha7, 'ha8, 'ha9};
Calling get() causes the value to be returned, and the pointer to be adjusted properly. The set() and get() are overridden to
implement the desired model functionality. The ID register field also contains the specific values that will be read back.
Those values could be obtained externally, or could be set from some other location.
id_register_field my_field;
$cast(my_field, fld);
`uvm_info("DBG", $psprintf(
"start: post_predict(value=%0x, pointer=%0d). (%s)",
value, my_field.id_register_pointer, kind.name()),
UVM_INFO)
case (kind)
UVM_PREDICT_READ:
my_field.set(my_field.id_register_pointer+1);
UVM_PREDICT_WRITE:
my_field.set(value);
endcase
`uvm_info("DBG", $psprintf(
" done: post_predict(fld.get()=%0x, pointer=%0d). (%s)",
my_field.get(), my_field.id_register_pointer,
kind.name()), UVM_INFO)
endfunction
endclass
The callback is created and registered from within the register definition, but it is registered on the field.
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Registers/ModelCoverage
Controlling the build/inclusion of covergroups in the register model
Which covergroups get built within a register block object or a register object is determined by a local variable called
m_has_cover. This variable is of type uvm_coverage_model_e and it should be initialised by a build_coverage() call
within the constructor of the register object which assigns the value of the include_coverage resource to m_has_cover.
Once this variable has been set up, the covergroups within the register model object should be constructed according to
which coverage category they fit into.
The construction of the various covergroups would be based on the result of the has_coverage() call.
As each covergroup category is built, the m_cover_on variable needs to be set to enable coverage sampling on that set of
covergroups; this is done by calling the set_coverage() method.
The methods involved in controlling the register model coverage are summarised below.
Overall Control
• uvm_reg::include_coverage(uvm_coverage_model_e) - Static method that sets up a resource with the key "include_coverage". Used to control which types of coverage are collected by the register model
Build Control
• build_coverage(uvm_coverage_model_e) - Used to set the local variable m_has_cover to the value stored in the resource database against the "include_coverage" key
• has_coverage(uvm_coverage_model_e) - Returns true if the coverage type is enabled in the m_has_cover field
• add_coverage(uvm_coverage_model_e) - Allows the coverage type(s) passed in the argument to be added to the m_has_cover field
Sample Control
• set_coverage(uvm_coverage_model_e) - Enables coverage sampling for the coverage type(s); sampling is not enabled by default
• get_coverage(uvm_coverage_model_e) - Returns true if the coverage type(s) are enabled for sampling
Note that it is not possible to enable coverage sampling for a coverage type unless the corresponding coverage type has been built.
An example
The following code comes from the implementation of a register model that incorporates functional
coverage.
In the test, the overall register coverage model for the testbench has to be set using the
uvm_reg::include_coverage() static method:
//
// Inside the test build method:
//
function void spi_test_base::build();
uvm_reg::include_coverage(UVM_CVR_ALL); // All register coverage types enabled
//
//..
The first code excerpt is for a covergroup which is intended to be used at the block level to get read and write access
coverage for a register block. The covergroup has been wrapped in a class included in the register model package, which
makes it easier to work with.
//
// A covergroup (wrapped in a class) that is designed to get the
// register map read/write coverage at the block level
//
//
// This is a register access covergroup within a wrapper class
//
// This will need to be called by the block sample method
//
// One will be needed per map
//
class SPI_APB_reg_access_wrapper extends uvm_object;
`uvm_object_utils(SPI_APB_reg_access_wrapper)
// Covergroup sampled with the register offset address and the access direction
covergroup ra_cov(string name) with function sample(uvm_reg_addr_t addr, bit is_read);
option.per_instance = 1;
option.name = name;
// To be generated:
//
// Generic form for bins is:
//
// bins reg_name = {reg_addr};
ADDR: coverpoint addr {
bins rxtx0 = {'h0};
// Further bins for the remaining registers in the map would be generated here
}
RW: coverpoint is_read {
bins RD = {1};
bins WR = {0};
}
ACCESS: cross ADDR, RW;
endgroup: ra_cov
function new(string name = "SPI_APB_reg_access_wrapper");
super.new(name);
ra_cov = new(name);
endfunction
// Convenience method called by the register block's sample() method
function void sample(uvm_reg_addr_t offset, bit is_read);
ra_cov.sample(offset, is_read);
endfunction: sample
endclass: SPI_APB_reg_access_wrapper
The second code excerpt is for the register block which includes the covergroup. The code has been stripped down to
show the parts relevant to the handling of the coverage model.
In the constructor of the block there is a call to build_coverage(), which ANDs the coverage enum argument supplied with
the overall testbench coverage setting (set by the uvm_reg::include_coverage() method) and sets up the
m_has_cover field in the block with the result.
In the block's build() method, the has_coverage() method is used to check whether the coverage model used by the access
coverage block is enabled. If it is, the access covergroup is built and then the coverage sampling is enabled for its
coverage model using set_coverage().
In the sample() method, the get_coverage() method is used to check whether the coverage model is enabled for sampling,
and then the covergroup is sampled, passing in the address and is_read arguments.
//
// The relevant parts of the spi_rm register block:
//
//-------------------------------------------------------------------
// spi_reg_block
//--------------------------------------------------------------------
class spi_reg_block extends uvm_reg_block;
`uvm_object_utils(spi_reg_block)
//--------------------------------------------------------------------
// new
//--------------------------------------------------------------------
function new(string name = "spi_reg_block");
// build_coverage ANDs UVM_CVR_ADDR_MAP with the value set
// by the include_coverage() call in the test bench
// The result determines which coverage categories can be built by this
// region of the register model
super.new(name, build_coverage(UVM_CVR_ADDR_MAP));
endfunction
//--------------------------------------------------------------------
// build
//--------------------------------------------------------------------
virtual function void build();
string s;
//
// Create, build and configure the registers ...
//
lock_model();
endfunction: build
endclass: spi_reg_block
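The covergroup construction and sampling code has been stripped from the listing above. A sketch of what the missing pieces typically look like is shown below; the wrapper handle name and the map name are assumptions, and the point of interest is the pattern of guarding construction with has_coverage() and set_coverage(), and guarding sampling with get_coverage().
// Covergroup wrapper handle (name assumed):
SPI_APB_reg_access_wrapper spi_apb_access_cg;
// Inside build(), after the registers have been created and configured:
if(has_coverage(UVM_CVR_ADDR_MAP)) begin
  // Build the address map access covergroup and enable sampling for it
  spi_apb_access_cg = SPI_APB_reg_access_wrapper::type_id::create("spi_apb_access_cg");
  void'(set_coverage(UVM_CVR_ADDR_MAP));
end
// The block sample() method is called on each register access within the block:
virtual function void sample(uvm_reg_addr_t offset, bit is_read, uvm_reg_map map);
  if(get_coverage(UVM_CVR_ADDR_MAP)) begin
    if(map.get_name() == "APB_map") begin // Map name assumed
      spi_apb_access_cg.sample(offset, is_read);
    end
  end
endfunction: sample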
Ovm/Registers/BackdoorAccess
The UVM register model facilitates access to hardware registers in the DUT either through front door accesses or back
door accesses. A front door access involves a bus transfer cycle using the target bus agent; consequently it
consumes time, taking at least a clock cycle to complete, and so it models what will happen to the DUT in real life. A
backdoor access uses the simulator database to directly access the register signals within the DUT, with write direction
operations forcing the register signals to the specified value and read direction accesses returning the current value of the
register signals. A backdoor access takes zero simulation time since it by-passes the normal bus protocol.
Backdoor accesses:
• Take zero simulation time
• Write direction accesses force the HW register bits to the specified value
• Read direction accesses return the current value of the HW register bits
• In the UVM register model, backdoor accesses are always auto-predicted - the mirrored value reflects the HW value
• Only the register bits accessed are affected; side effects may occur when time advances, depending on the HW implementation
• By-pass the normal HW
Frontdoor accesses:
• Use a bus transaction which will take at least one RTL clock cycle
• Write direction accesses do a normal HW write
• Read direction accesses do a normal HW read, data is returned using the HW data path
• Frontdoor accesses are predicted based on what the bus monitor observes
• Side effects are modelled correctly
• Simulate real timing and event sequences, catching errors due to unknown interactions
Backdoor access can be a useful and powerful technique and some valid use models include:
• Configuration or re-configuration of a DUT - Putting it into a non-reset random state before configuring specific
registers via the front door
• Adding an extra level of debug when checking data paths - Using a backdoor peek after a front door write cycle and
before a front door read cycle can quickly determine whether the write and read data path is responsible for any errors
• Checking a buffer before it is read by front door accesses - Traps an error earlier, especially useful if the read process
does a data transform, or has side-effects
Some invalid use models include:
• Use as an accelerator for register accesses - May be justified if there are other test cases that thoroughly verify the
register interface
• Checking the DUT against itself - A potential pitfall whereby the DUT behaviour is taken as correct rather than the
specified behaviour
For backdoor accesses to work, the simulator needs visibility of the register signals concerned, which usually means compiling the design with the appropriate access options. For example:
# Compile my design so that only the f field in the r register in the b block is visible
# for backdoor access
vlog my_design +acc=rnb+/tb/dut/b/r/f
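The register model also needs to know the hierarchical (hdl) path of each register so that the backdoor code knows which RTL signal to force or sample. The following sketch shows how the paths are typically specified inside a register model; the class and register names are assumptions chosen to match the block (b), register (r) and field (f) used in the compile command above.
class b_reg_block extends uvm_reg_block;
  `uvm_object_utils(b_reg_block)
  r_reg r; // Assumed register class containing the 8 bit field f
  function new(string name = "b_reg_block");
    super.new(name, UVM_NO_COVERAGE);
  endfunction
  virtual function void build();
    // Root RTL path for this block - register paths below are relative to tb.dut.b
    add_hdl_path("tb.dut.b");
    r = r_reg::type_id::create("r");
    r.configure(this);
    r.build();
    // Field f occupies bits [7:0] of the RTL signal tb.dut.b.r.f
    r.add_hdl_path_slice("r.f", 0, 8);
    default_map = create_map("default_map", 0, 4, UVM_LITTLE_ENDIAN);
    default_map.add_reg(r, 32'h0, "RW");
    lock_model();
  endfunction: build
endclass: b_reg_block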
Ovm/Registers/Generation
A register model can be written by hand, following the pattern given for the SPI master example. However, with more
than a few registers this can become a big task and is always a potential source of errors. There are a number of other
reasons why using a generator is helpful:
• It allows a common register specification to be used by the hardware, software and verification engineering teams
• The register model can be generated efficiently without errors
• The register model can be re-generated whenever there is a change in the register definition
• Multiple format register definitions for different design blocks can be merged together into an overall register
description
There are a number of register generators available commercially, including Mentor Graphics' Register Assistant.
Register Assistant takes a text based register description as its input and generates a register model and html
documentation as an output. The input formats supported by the software include IP-XACT and XML generated from a
templated spreadsheet, but there is a TCL API for the Register Assistant database which can be used to parse alternative
formats on the input side and to write alternative output file formats.
Ovm/Registers/StimulusAbstraction
Stimulus Abstraction
Stimulus that accesses memory mapped registers should be made as abstract as possible. The reasons for this
are that it:
• Makes it easier for the implementer to write
• Makes it easier for users to understand
• Provides protection against changes in the register map during the development cycle
• Makes the stimulus easier to reuse
Of course, it is possible to write stimulus that does register reads and writes directly via bus agent sequence items with
hard coded addresses and values - for instance read(32'h1000_f104); or write(32'h1000_f108, 32'h05); - but this stimulus
would have to be re-written if the base address of the DUT changed, and it has to be decoded against the register
specification during code maintenance.
The register model contains the information that links the register names to their addresses, and the register fields to their
bit positions within the register. This means the register model makes it easier to write stimulus that is at a more
meaningful level of abstraction - for instance read(SPI.ctrl); or write(SPI.rxtx0, 32'h0000_5467);.
The register model allows users to access registers and fields by name. For instance, if you have a SPI register model
with the handle spi_rm, and you want to access the control register, ctrl, then the path to it is spi_rm.ctrl. If you want to
access the go_bsy field within the control register then the path to it is spi_rm.ctrl.go_bsy.
Since the register model is portable, and can be quickly regenerated if there is a change in the specification of the register
map, using the model allows the stimulus code to require minimal maintenance, once it is working.
The register model is integrated with the bus agents in the UVM testbench. What this means to the stimulus writer is that
he uses register model methods to initiate transfers to/from the registers over the bus interface rather than using
sequences which generate target bus agent sequence items. For the stimulus writer this reduces the amount of learning
that is needed in order to become productive.
The read() method reads the current value of the target hardware register. For front door accesses, the mirrored and desired values are updated by the bus predictor on completion of the read cycle.
//
// read task prototype
//
task read(output uvm_status_e status,
output uvm_reg_data_t value,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null,
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Example - from within a sequence
//
// Note use of positional and named arguments
//
spi_rm.ctrl.read(status, read_data, .parent(this));
The write() method writes a specified value to the target hardware register. For front door accesses the mirrored and
desired values are updated by the bus predictor on completion of the write cycle.
(Figure: write() sequence)
//
// write task prototype
//
task write(output uvm_status_e status,
input uvm_reg_data_t value,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null,
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
Although the read and write methods can be used at the register and field level, they should only be used at the register
level to get predictable and reusable results. Field level reads and writes can only work if the field occupies and fills
whole byte lanes and the target bus supports byte level access. Whereas this might work with register stimulus
written with one bus protocol in mind, if the hardware block is integrated into a sub-system which uses a bus protocol
that does not support byte enables, then the stimulus may no longer work.
The read() and write() access methods can also be used for back door accesses, and these complete and update the mirror
value immediately.
//
// Examples - from within a sequence
//
uvm_reg_data_t ctrl_value;
uvm_reg_data_t char_len_value;
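For example, a sequence might use a backdoor write to load a data register in zero time and then a frontdoor read to exercise the read data path; this is a sketch using the SPI register model handle used elsewhere on this page, with an arbitrary data value:
uvm_status_e status;
uvm_reg_data_t read_data;
// Backdoor write - completes immediately and updates the mirror
spi_rm.rxtx0_reg.write(status, 32'h55AA_55AA, .path(UVM_BACKDOOR), .parent(this));
// Frontdoor read - goes through the bus agent, mirror updated by the predictor
spi_rm.rxtx0_reg.read(status, read_data, .parent(this));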
The set() method is used to setup the desired value of a register or a field prior to a write to the hardware using the
update() method.
//
// set function prototype
//
function void set(uvm_reg_data_t value,
string fname = "",
int lineno = 0);
//
// Examples - from within a sequence
//
uvm_reg_data_t ctrl_value;
uvm_reg_data_t char_len_value;
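A typical use of set() is to stage the desired value of one or more fields and then let update() decide whether a bus write is actually needed. The following sketch uses the SPI control register fields referred to elsewhere on this page; the values chosen are arbitrary:
uvm_status_e status;
// Stage the desired values - no bus activity at this point
spi_rm.ctrl_reg.char_len.set(32'h20); // 32 bit transfer
spi_rm.ctrl_reg.ie.set(1);            // Enable the interrupt
// Write the register to the hardware only if the desired value differs from the mirror
spi_rm.ctrl_reg.update(status, .parent(this));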
The poke() method deposits the specified value directly in the hardware register using the backdoor mechanism, without going through the bus interface; the complementary peek() method reads the hardware register value via the backdoor.
//
// poke task prototype
//
task poke(output uvm_status_e status,
input uvm_reg_data_t value,
input string kind = "",
input uvm_sequence_base parent = null,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples - from within a sequence
//
uvm_reg_data_t ctrl_value;
uvm_reg_data_t char_len_value;
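As a usage sketch (the data value is arbitrary):
uvm_status_e status;
// Backdoor deposit - no bus cycle, mirror updated immediately
spi_rm.rxtx0_reg.poke(status, 32'hDEAD_BEEF, .parent(this));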
randomize
Strictly speaking, randomize() is not a register model method; it is the built-in SystemVerilog class randomization method,
available because the register model is implemented using SystemVerilog class objects. Depending on whether the registers and the register fields have been defined as rand or not, they can be
randomized with or without constraints. The register model uses the post_randomize() method to modify the desired
register or field value. Subsequently, the hardware register can be written with the result of the randomization using the
update() method.
The randomize() method can be called at the register model, block, register or field level.
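Putting randomize() and update() together, a configuration sequence might do something like the following; this sketch uses the SPI control register and an assumed constraint on the go_bsy field:
uvm_status_e status;
// Randomize the desired value of the control register, subject to a constraint
if(!(spi_rm.ctrl_reg.randomize() with { go_bsy.value == 0; })) begin
  `uvm_error("CTRL_RAND", "Randomization of ctrl_reg failed")
end
// Write the randomized value to the hardware if it differs from the mirror
spi_rm.ctrl_reg.update(status, .parent(this));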
update
If there is a difference in value between the desired and the mirrored register values, the update() method will initiate a
write to a register. The update() can be called at the register level which will result in a single register write, or it can be
called at the block level in which case it could result in several register writes. The mirrored value for the register would
be set to the updated value by the predictor at the completion of the bus write cycle.
//
// Prototype for the update task
//
task update(output uvm_status_e status,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null,
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
//
// Block level:
spi_rm.update(status);
//
// Register level:
spi_rm.ctrl.update(status);
Block level updates will always follow the same register access order. The update process indexes the register array in its
database and the order used is dependent on the order that registers were created and added to the array by the register
model. If randomized or varied register access ordering is required then you should use individual register
updates with your own ordering, perhaps using an array of register handles.
If multiple stimulus streams are active and using update() at the block level, then there is a chance that registers will be
updated more than once, since multiple block level update() calls would be made.
mirror
The mirror() method initiates a hardware read or peek access but does not return the hardware data value. A frontdoor
read bus level operation results in the predictor updating the mirrored value, whereas a backdoor peek automatically
updates the mirrored value. There is an option to check the value read back from the hardware against the original
mirrored value.
The mirror() method can be called at the field, register or block level. In practice, it should only be used at the register or
block level for front door accesses since field level read access may not fit the characteristics of the target bus protocol.
A block level mirror() call will result in read/peek accesses to all of the registers within the block and any sub-blocks.
//
// mirror task prototype:
//
task mirror(output uvm_status_e status,
input uvm_check_e check = UVM_NO_CHECK,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
//
spi_rm.ctrl.mirror(status, UVM_CHECK); // Check the contents of the ctrl register
//
spi_rm.mirror(status, .path(UVM_BACKDOOR)); // Mirror the contents of the spi_rm block via the backdoor
reset
The reset() method sets the register desired and mirrored values to the pre-defined register reset value. This method
should be called when a hardware reset is observed to align the register model with the hardware. The reset() method is
an internal register model method and does not cause any bus cycles to take place.
The reset() method can be called at the block, register or field level.
//
// reset function prototype:
//
function void reset(string kind = "HARD");
//
// Examples:
//
spi_rm.reset(); // Block level reset
//
spi_rm.ctrl.reset(); // Register level reset
//
spi_rm.ctrl.char_len.reset(); // Field level reset
get_reset
The get_reset() method returns the pre-defined reset value for the register or the field. It is normally used in conjunction
with a read() or a mirror() to check that a register has been reset correctly.
//
// get_reset function prototype:
//
function uvm_reg_data_t get_reset(string kind = "HARD");
//
// Examples:
//
uvm_reg_data_t ctrl_value;
uvm_reg_data_t char_len_value;
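For example, a reset check might read a register back and compare the result with its specified reset value; a sketch for the SPI control register:
uvm_status_e status;
uvm_reg_data_t read_data;
uvm_reg_data_t reset_value;
reset_value = spi_rm.ctrl_reg.get_reset();
spi_rm.ctrl_reg.read(status, read_data, .parent(this));
if(read_data != reset_value) begin
  `uvm_error("RESET_CHECK", $sformatf("ctrl_reg read back %0h, expected reset value %0h", read_data, reset_value))
end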
The arguments common to the register and memory access methods are summarised below.
• status (uvm_status_e) - No default, must be supplied - Returns the status of the method call; can be UVM_IS_OK, UVM_NOT_OK, UVM_IS_X
• value (uvm_reg_data_t) - No default - The data value; an output in the read direction, an input in the write direction
• path (uvm_path_e) - Default UVM_DEFAULT_PATH - Specifies whether a front or back door access is to be used; can be UVM_FRONTDOOR, UVM_BACKDOOR, UVM_PREDICT, UVM_DEFAULT_PATH
• map (uvm_reg_map) - Default null - Specifies which register model map to use to make the access
• prior (int) - Default -1 - Specifies the priority of the sequence item on the target sequencer
• extension (uvm_object) - Default null - Allows an object to be passed in order to extend the call
• fname (string) - Default "" - Used by reporting to tie the method call to a file name
Ovm/Registers/MemoryStimulus
Memory Model Overview
The UVM register model also supports memory access. Memory regions within a DUT are represented by memory
models which have a configured width and range and are placed at an offset defined in a register map. The memory
model is defined as having either a read-write (RW), a read-only (RO - ROM), or a write-only (WO) access type.
Unlike the register model, the memory model does not store state; it simply provides an access layer to the memory. The
reasoning behind this is that storing the memory content would mean incurring a severe overhead in simulation and that
the DUT hardware memory regions are already implemented using models which offer alternative verification
capabilities. The memory model supports front door accesses through a bus agent, or backdoor accesses with direct
access to the memory model content.
Memory read
The read() method is used to read from a memory location. The address of the location is the offset within the memory
region, rather than the absolute memory address. This allows stimulus accessing memory to be relocatable, and therefore
reusable.
//
// read task prototype
//
task read(output uvm_status_e status,
input uvm_reg_addr_t offset, // Offset within the memory region
output uvm_reg_data_t value,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null, // Which map, memory might be in >1 map
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
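As a usage sketch (mem_ss_rm is an assumed handle to the memory sub-system register block shown earlier, and the offset is arbitrary):
uvm_status_e status;
uvm_reg_data_t read_data;
// Read from offset 'h10 within the memory region (not an absolute address)
mem_ss_rm.mem_1.read(status, 'h10, read_data, .parent(this));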
Memory write
The write() method is used to write to a memory location, and like the read method, the address of the location to be
written to is an offset within the memory region.
//
// write task prototype
//
task write(output uvm_status_e status,
input uvm_reg_addr_t offset, // Offset within the memory region
input uvm_reg_data_t value,
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null, // Which map, memory might be in >1 map
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
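As a usage sketch (same assumed memory handle as above, arbitrary offset and data):
uvm_status_e status;
// Write to offset 'h10 within the memory region
mem_ss_rm.mem_1.write(status, 'h10, 32'hCAFE_F00D, .parent(this));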
Memory burst_read
The burst_read() method is used to read an array of data words from a series of consecutive address locations starting
from the specified offset within the memory region. The number of read accesses in the burst is determined by the size of
the read data array argument passed to the burst_read() method.
//
// burst_read task prototype
//
task burst_read(output uvm_status_e status,
input uvm_reg_addr_t offset, // Start offset within the memory region
ref uvm_reg_data_t value[], // Array size determines the burst length
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null, // Which map, memory might be in >1 map
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
//
uvm_reg_data_t read_data[];
//
// 8 Word transfer:
//
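Assuming read_data is the dynamic array declared above, and using the same assumed memory handle as in the earlier sketches, an 8 word burst read from offset 0 would look like:
uvm_status_e status;
// Size the array to set the burst length, then read
read_data = new[8];
mem_ss_rm.mem_1.burst_read(status, 0, read_data, .parent(this));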
Memory burst_write
The memory burst_write() method is used to write an array of data words to a series of consecutive address locations
starting from the specified offset within the memory region. The size of the data array determines the length of the burst.
//
// burst_write task prototype
//
task burst_write(output uvm_status_e status,
input uvm_reg_addr_t offset, // Start offset within the memory region
input uvm_reg_data_t value[], // Array size determines the burst length
input uvm_path_e path = UVM_DEFAULT_PATH,
input uvm_reg_map map = null, // Which map, memory might be in >1 map
input uvm_sequence_base parent = null,
input int prior = -1,
input uvm_object extension = null,
input string fname = "",
input int lineno = 0);
//
// Examples:
//
uvm_reg_data_t write_data[];
//
// 8 Word transfer:
//
write_data = new[8]; // Size the array for an 8 word burst
foreach(write_data[i]) begin
write_data[i] = i*16;
end
//
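With write_data sized and filled as above, the burst itself would then be (same assumed memory handle as in the earlier sketches):
uvm_status_e status;
// The burst length is determined by the size of write_data
mem_ss_rm.mem_1.burst_write(status, 0, write_data, .parent(this));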
Example Stimulus
The following example sequence illustrates how the memory access methods could be used to implement a simple
memory test.
//
// Test of memory 1
//
// Write to 10 random locations within the memory storing the data written
//
`uvm_object_utils(mem_1_test_seq)
uvm_reg_addr_t addr_array[10];
uvm_reg_data_t data_array[10];
function new(string name = "mem_1_test_seq");
super.new(name);
endfunction
task body;
super.body();
// Write loop
addr_array[i] = addr;
data_array[i] = data;
end
// Read loop
`uvm_error("mem_1_test", $sformatf("Memory access error: expected %0h, actual %0h", data_array[i][31:0], data[31:0]))
end
end
endtask: body
endclass: mem_1_test_seq
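The body of the sequence above has been stripped down. A sketch of what the write and read loops might look like is shown below; the memory handle (mem_ss_rm.mem_1), the 4K offset range and the way the random offsets and data are generated are assumptions for illustration.
task body;
  uvm_status_e status;
  uvm_reg_addr_t addr;
  uvm_reg_data_t data;
  super.body();
  // Write loop - write 10 random locations, recording the offsets and data used
  for(int i = 0; i < 10; i++) begin
    addr = $urandom_range(0, 32'hFFF); // Offset within the (assumed 4K) memory range
    data = {$urandom(), $urandom()};
    mem_ss_rm.mem_1.write(status, addr, data, .parent(this));
    addr_array[i] = addr;
    data_array[i] = data;
  end
  // Read loop - read each location back and compare against the recorded data
  for(int i = 0; i < 10; i++) begin
    mem_ss_rm.mem_1.read(status, addr_array[i], data, .parent(this));
    if(data[31:0] != data_array[i][31:0]) begin
      `uvm_error("mem_1_test", $sformatf("Memory access error: expected %0h, actual %0h", data_array[i][31:0], data[31:0]))
    end
  end
endtask: body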
Ovm/Registers/SequenceExamples
To illustrate how the different register model access methods can be used from sequences to generate stimulus, this page
contains a number of example sequences developed for stimulating the SPI master controller DUT.
Note that none of the example sequences that follow use all of the argument fields available in the methods. In
particular, they do not use the map argument, since the access to the bus agent is controlled by the layering. If a register
can be accessed by more than one bus interface, it will appear in several maps, possibly at different offsets. When the
register access is made, the model selects which bus will be accessed. Writing the sequences this way makes it easier to
reuse or retarget them in another integration scenario where they will access the hardware over a different bus
infrastructure.
The examples shown are all derived from a common base sequence class template.
import uvm_pkg::*;
`include "uvm_macros.svh"
import spi_env_pkg::*;
import spi_reg_pkg::*;
class spi_bus_base_seq extends uvm_sequence #(uvm_sequence_item);
`uvm_object_utils(spi_bus_base_seq)
spi_reg_block spi_rm; // Register model handle
spi_env_config m_cfg; // env configuration object containing the register model handle
uvm_status_e status; // Access status
uvm_reg_data_t data; // Read/write data
function new(string name = "spi_bus_base_seq");
super.new(name);
endfunction
// Common functionality:
// Getting a handle to the register model
task body;
m_cfg = spi_env_config::get_config(m_sequencer);
spi_rm = m_cfg.spi_rm;
endtask: body
endclass: spi_bus_base_seq
`uvm_object_utils(slave_unselect_seq)
task body;
super.body();
spi_rm.ss_reg.write(status, 32'h0, .parent(this));
endtask: body
endclass: slave_unselect_seq
//
`uvm_object_utils(ctrl_set_seq)
function new(string name = "ctrl_set_seq");
super.new(name);
endfunction
bit int_enable = 0;
task body;
super.body();
// The char_len field inside the ctrl_reg determines the SPI character length
assert(spi_rm.ctrl_reg.randomize() with {char_len.value inside {0, 1, [31:33], [63:65], [95:97], 126, 127};});
spi_rm.ctrl_reg.ie.set(int_enable);
spi_rm.ctrl_reg.go_bsy.set(0);
data = spi_rm.ctrl_reg.get();
endtask: body
endclass: ctrl_set_seq
//
//
//
`uvm_object_utils(check_regs_seq)
function new(string name = "check_regs_seq");
super.new(name);
endfunction
uvm_reg spi_regs[$];
uvm_reg_data_t ref_data;
task body;
super.body();
spi_rm.get_registers(spi_regs);
spi_regs.shuffle();
foreach(spi_regs[i]) begin
ref_data = spi_regs[i].get_reset();
`uvm_error("REG_TEST_SEQ:", $sformatf("Reset read error for %s: Expected: %0h Actual: %0h", spi_regs[i].get_name(), ref_data, data))
end
end
repeat(10) begin
spi_regs.shuffle();
foreach(spi_regs[i]) begin
assert(this.randomize());
data[8] = 0;
end
end
spi_regs.shuffle();
foreach(spi_regs[i]) begin
ref_data = spi_regs[i].get();
`uvm_error("REG_TEST_SEQ:", $sformatf("get/read: Read error for %s: Expected: %0h Actual: %0h", spi_regs[i].get_name(), ref_data, data))
end
end
end
repeat(10) begin
spi_regs.shuffle();
foreach(spi_regs[i]) begin
if(!this.randomize()) begin
end
data[8] = 0;
end
end
spi_regs.shuffle();
foreach(spi_regs[i]) begin
ref_data = spi_regs[i].get();
`uvm_error("REG_TEST_SEQ:", $sformatf("poke/peek: Read error for %s: Expected: %0h Actual: %0h", spi_regs[i].get_name(), ref_data, data))
end
end
end
endtask: body
endclass: check_regs_seq
`uvm_object_utils(data_unload_seq)
uvm_reg data_regs[];
task body;
super.body();
// Set up the data register handle array
data_regs = '{spi_rm.rxtx0_reg, spi_rm.rxtx1_reg, spi_rm.rxtx2_reg, spi_rm.rxtx3_reg};
// Randomize access order
data_regs.shuffle();
// Use mirror in order to check that the value read back is as expected
foreach(data_regs[i]) begin
data_regs[i].mirror(status, UVM_CHECK, .parent(this));
end
endtask: body
endclass: data_unload_seq
Note that this example of a sequence interacting with a scoreboard is not a recommended approach; it is provided as a
means of illustrating the use of the mirror() method.
Ovm/Registers/BuiltInSequences
The UVM package contains a library of automatic test sequences which are based on the register model. These
sequences can be used to do basic tests on registers and memory regions within a DUT. The automatic tests are aimed at
testing basic functionality such as checking register reset values are correct or that the read-write data paths are working
correctly. One important application of these sequences is for quick sanity checks when bringing up a sub-system or SoC
design where a new interconnect or address mapping scheme needs to be verified.
Registers and memories can be opted out of these auto tests by setting an individual "DO_NOT_TEST" attribute which is
checked by the automatic sequence as it runs. An example of where such an attribute would be used is a clock control
register where writing random bits to it would actually stop the clock and cause all further DUT operations to fail.
The register sequences which are available within the UVM package are summarised in the following tables.
Note that any of the automatic tests can be disabled for a given register or memory by the NO_REG_TEST attribute, or
for memories by the NO_MEM_TEST attribute. The disable attributes given in the table are specific to the sequences
concerned.
• uvm_reg_hw_reset_seq - Disable attribute: NO_REG_HW_RESET_TEST - Block level: Yes - Register level: Yes - Checks that the hardware register reset value matches the value specified in the register model
• uvm_reg_single_bit_bash_seq - Disable attribute: NO_REG_BIT_BASH_TEST - Block level: No - Register level: Yes - Writes, then check-reads, 1's and 0's to all bits of the selected register that have read-write access
• uvm_reg_single_access_seq - Disable attribute: NO_REG_ACCESS_TEST - Block level: No - Register level: Yes - Writes to the selected register via the frontdoor, checks the value is correctly written via the backdoor, then writes a value via the backdoor and checks that it can be read back correctly via the frontdoor. Repeated for each address map that the register is present in. Requires that the backdoor hdl_path has been specified
• uvm_reg_shared_access_seq - Disable attribute: NO_SHARED_ACCESS_TEST - Block level: No - Register level: Yes - For each map containing the register, writes to the selected register in one map, then check-reads it back from all maps from which it is accessible. Requires that the selected register has been added to multiple address maps
Some of the register test sequences are designed to run on an individual register, whereas some are block level sequences
which go through each accessible register and execute the single register sequence on it.
• uvm_mem_single_walk_seq - Disable attribute: NO_MEM_WALK_TEST - Block level: No - Memory level: Yes - Writes a walking pattern into each location in the range of the specified memory, then checks that it is read back with the expected value
• uvm_mem_single_access_seq - Disable attribute: NO_MEM_ACCESS_TEST - Block level: No - Memory level: Yes - For each location in the range of the specified memory: writes via the frontdoor, checks the value written via the backdoor, then writes via the backdoor and reads back via the frontdoor. Repeats the test for each address map containing the memory. Requires that the backdoor hdl_path has been specified
• uvm_mem_shared_access_seq - Disable attribute: NO_SHARED_ACCESS_TEST - Block level: No - Memory level: Yes - For each map containing the memory, writes to all memory locations and reads back from all locations using each of the address maps. Requires that the memory has been added to multiple address maps
Like the register test sequences, the tests either run on an individual memory basis, or on all memories contained within a
block. Note that the time taken to execute a memory test sequence could be lengthy with a large memory range.
• uvm_reg_mem_shared_access_seq - Disable attribute: NO_SHARED_ACCESS_TEST - Executes the uvm_reg_shared_access_seq on all registers accessible from the specified block and executes the uvm_mem_shared_access_seq on all memories accessible from the specified block
• uvm_reg_mem_built_in_seq - Disable attribute: All of the above - Executes all of the block level auto-test sequences
Setting An Attribute
In order to set an auto-test disable attribute on a register, you will need to use the UVM resource_db to set a bit with the
attribute string, giving the path to the register or the memory as the scope variable. Since the UVM resource database is
used, the attributes can be set from anywhere in the testbench at any time. However, the recommended approach is to set
the attributes as part of the register model; this will most likely be done by specifying the attribute via the register model
generator's specification input file.
The following code excerpt shows how attributes would be implemented in a register model.
// From the build() method of the memory sub-system (mem_ss) block:
function void build();
//
// .....
//
// Example use of "dont_test" attributes:
// Stops mem_1_offset reset test
uvm_resource_db #(bit)::set({"REG::", this.mem_1_offset.get_full_name()}, "NO_REG_HW_RESET_TEST", 1);
// Stops mem_1_offset bit-bash test
uvm_resource_db #(bit)::set({"REG::", this.mem_1_offset.get_full_name()}, "NO_REG_BIT_BASH_TEST", 1);
// Stops mem_1 being tested with the walking auto test
uvm_resource_db #(bit)::set({"REG::", this.mem_1.get_full_name()}, "NO_MEM_WALK_TEST", 1);
lock_model();
endfunction: build
Note that once an attribute has been set in the UVM resource database, it cannot be 'unset'. This means that successive
uses of different levels of disabling within sequences may produce unwanted cumulative effects.
`uvm_object_utils(auto_tests)
function new(string name = "auto_tests");
super.new(name);
endfunction
task body;
// Register reset test sequence
uvm_reg_hw_reset_seq rst_seq = uvm_reg_hw_reset_seq::type_id::create("rst_seq");
// Register bit bash test sequence
uvm_reg_bit_bash_seq reg_bash = uvm_reg_bit_bash_seq::type_id::create("reg_bash");
// Initialise the memory mapping registers in the sub-system
mem_setup_seq setup = mem_setup_seq::type_id::create("setup");
// Memory walk test sequence
uvm_mem_walk_seq walk = uvm_mem_walk_seq::type_id::create("walk");
endtask: body
endclass: auto_tests
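The body of the auto_tests sequence above has been stripped of the code that configures and starts the built-in sequences. The usual pattern is to point each built-in sequence at the relevant part of the register model via its model property and then start it. A sketch is shown below; it assumes mem_ss_rm is the handle to the memory sub-system register model and that mem_setup_seq runs on this sequence's own sequencer.
// Point the built-in sequences at the register model (handle name assumed)
rst_seq.model = mem_ss_rm;
reg_bash.model = mem_ss_rm;
walk.model = mem_ss_rm;
// Check the register reset values first
rst_seq.start(null, this);
// Bit-bash the read-write registers
reg_bash.start(null, this);
// Program the memory mapping registers before testing the memories
setup.start(m_sequencer, this);
// Walking pattern test on the memories in the block
walk.start(null, this);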
Example Download
The code for this example can be downloaded via the following link:
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Registers/Configuration
During verification a programmable hardware device needs to be configured to operate in different modes. The register
model can be used to automate or to semi-automate this process.
The register model contains a shadow of the register state space for the DUT which is kept up to date as the design is
configured using bus read and write cycles. One way to configure the design in a testbench is to apply reset to the design
and then go through a programming sequence which initialises the design for a particular mode of operation.
In real life, a programmable device might be used for a while in one mode of operation, then reconfigured to be used in
another mode, and the non re-programmed registers will effectively have random values. Always initialising from reset
has the shortcoming that the design always starts from a known state, so a combination of register
values that causes a design failure could be missed. However, if the register map is randomized at the beginning of the
simulation and the randomized contents of the register map are written to the DUT before configuring it into the desired
mode, it is possible to emulate the conditions that would exist in a 'mid-flight' re-configuration.
The register model can be used to configure the design by creating a configuration state 'off-line' using whatever mixture
of constrained randomization or directed programming is convenient. If the register model desired values are updated,
then the transfer of the configuration to the DUT can be done using the update() method, which will transfer any new
values that need to be written to the hardware in the order they are declared in the register model.
The transfer of the register model to the DUT can be done either in an ordered way, or it can be done in a random order.
Some designs require that at least some of the register programming is done in a particular order.
In order to transfer the data in a random order, the registers in the model should be collected into an array and then the
array should be shuffled:
//
// Totally random configuration
//
task body;
uvm_status_e status;
uvm_reg spi_regs[$];
super.body();
spi_rm.get_registers(spi_regs);
assert(spi_rm.randomize());
spi_regs.shuffle(); // Randomly re-order the array
foreach(spi_regs[i]) begin
spi_regs[i].update(status, .parent(this)); // Only change the reg if required
end
endtask: body
Here is an example of a sequence that configures the SPI using the register model. Note that it uses constraints to
configure the device within a particular range of operation, and that the write to the control register is a setup write which
will be followed by an enabling write in another sequence.
//
//
`uvm_object_utils(SPI_config_seq)
function new(string name = "SPI_config_seq");
super.new(name);
endfunction
bit interrupt_enable;
task body;
super.body;
assert(spi_rm.randomize() with {
spi_rm.ctrl_reg.go_bsy.value == 0;
spi_rm.ctrl_reg.ie.value == interrupt_enable;
//
spi_rm.ss_reg.cs.value != 0;
spi_rm.divider_reg.ratio.value inside {16'h0, 16'h1, 16'h2, 16'h4, 16'h8, 16'h10, 16'h20, 16'h40, 16'h80};
});
data = spi_rm.ctrl_reg.get();
endtask: body
endclass: SPI_config_seq
A DUT could be reconfigured multiple times during a test case in order to find unintended interactions between register
values.
Ovm/Registers/Scoreboarding
The UVM register model shadows the current configuration of a programmable DUT and this makes it a valuable
resource for scoreboards that need to be aware of the current DUT state. The scoreboard references the register model to
either adapt its configuration to match the current state of a programmable DUT or to adapt its algorithms in order to
make the right comparisons. For instance, checking that a communications device with a programmable packet
configuration has transmitted or received the right data requires the scoreboard to know the current data format.
The UVM register model will contain information that is useful to the scoreboard:
• DUT Configuration, based on the register state
• Data values, based on the current state of buffer registers
The UVM register model can also be used by the scoreboard to check DUT behaviour:
• A register model value can be set up to check for the correct expected value
• A backdoor access can check the actual state of the DUT hardware - This can be useful if there is volatile data that
causes side effects if it is read via the front door.
Functional Overview
The scoreboard reacts to analysis transactions from the SPI agent. These correspond to a completed transfer, and the
transmitted (Master Out Slave In - MOSI) data is compared against the data in the transmit buffer in the register model
before the received (Master In Slave Out - MISO) data is compared against the data peeked from the DUT receive
buffer. The RXTX register mirrored value is also updated using the predict() method; this allows any subsequent front
door access using a checking mirror() method call to check the read data path from these registers.
The SPI scoreboard also checks that the SPI chip select values used during the transfer match the value programmed in
the slave select (SS) register.
The following excerpts from the scoreboard code illustrate how the scoreboard references the register model.
error = 0;
spi.get(item);
no_transfers++;
//
//
bit_cnt = spi_rm.ctrl_reg.char_len.get();
if(bit_cnt == 0) begin // A char_len of 0 selects a full 128 bit transfer
bit_cnt = 128;
end
//
//
tx_data[31:0] = spi_rm.rxtx0_reg.get();
tx_data[63:32] = spi_rm.rxtx1_reg.get();
tx_data[95:64] = spi_rm.rxtx2_reg.get();
tx_data[127:96] = spi_rm.rxtx3_reg.get();
// Find out if the tx (mosi) data is sampled on the neg or pos edge
//
//
// The SPI analysis transaction (item) contains samples for both edges
//
if(spi_rm.ctrl_reg.tx_neg.get() == 1) begin
mosi_data = item.nedge_mosi;
end
else begin
mosi_data = item.pedge_mosi;
end
//
// Compare the observed MOSI bits against the tx data written to the SPI DUT
//
// Find out whether the MOSI data is transmitted LSB or MSB first - this
error = 1;
end
end
if(error == 1) begin
end
end
error = 1;
end
end
if(error == 1) begin // Need to reverse the mosi_data bits for the error message
rev_miso = 0;
rev_miso[(bit_cnt-1) - i] = mosi_data[i];
end
end
end
if(error == 1) begin
no_tx_errors++;
end
//
// TX Data checked
error = 0;
//
// Look up in the register model which edge the RX data should be sampled on
//
if(spi_rm.ctrl_reg.rx_neg.get() == 1) begin
miso_data = item.pedge_miso;
end
else begin
miso_data = item.nedge_miso;
end
//
//
rev_miso = 0;
rev_miso[(bit_cnt-1) - i] = miso_data[i];
end
miso_data = rev_miso;
end
rx_data = spi_rm.rxtx0_reg.get();
// Peek the RX data in the hardware and compare against the observed RX data
spi_rm.rxtx0_reg.peek(status, spi_peek_data);
rx_data[i] = miso_data[i];
error = 1;
`uvm_error("SPI_SB_RXD:", $sformatf("Bit%0d Expected RX data value %0h actual %0h", i, spi_peek_data[31:0], miso_data))
end
end
// Get the register model to check that the data it next reads back from this
//
// This is somewhat redundant given the earlier peek check, but it does check the
assert(spi_rm.rxtx0_reg.predict(rx_data));
// Repeat for any remaining bits with the rxtx1, rxtx2, rxtx3 registers
//
// Compare the programmed value of the SS register (i.e. its cs field) against
//
no_cs_errors++;
end
The scoreboard code can be found in the spi_scoreboard.svh file in the env sub-directory of the SPI block level testbench
example which can be downloaded via the link below:
( download source code examples online at http://verificationacademy.com/uvm-ovm ).
Ovm/Registers/FunctionalCoverage
Register Based Functional Coverage Overview
The UVM supports the collection of functional coverage based on register state in three ways:
• Automatic collection of register coverage based on covergroups inside the register model on each access
• Controlled collection of register coverage based on covergroups inside the register model by calling a method from
outside the register model
• By reference, from an external covergroup that samples register value via a register model handle
Most register model generators allow users to specify the automatic generation of cover groups based on bit field or
register content. These are fine if you have a narrow bit field and you are interested in all the states that the field could
take, but they quickly lose value and simply add simulation overhead for minimal return. In order to gather register
based functional coverage that is meaningful, you will need to specify coverage in terms of a cross of the contents of
several registers and possibly non register signals and/or variables. Your register model generator may support this
level of complexity but, if not, it is quite straightforward to implement an external functional coverage collection
component that references the register model.
The recommended approach is to use an external covergroup that samples register values via the register model handle.
The coverage categories are defined by the uvm_coverage_model_e enumeration; for example, UVM_CVR_ADDR_MAP enables the collection of coverage for addresses read from or written to in address maps.
The bit mapped enumeration allows several coverage models to be enabled in one assignment by logically ORing several
different values - e.g. set_coverage(UVM_CVR_ADDR_MAP + UVM_CVR_FIELD_VALS)
A register model can contain coverage groups which have been assigned to each of the active categories and the overall
coverage for the register model is set by a static method in the uvm_reg class called include_coverage(). This method
should be called before the register model is built, since it creates an entry in the resource database which the register
model looks up during the execution of its build() method to determine which covergroups to build.
//
// From the SPI test base
//
// Build the env, create the env configuration
// including any sub configurations and assigning virtual interfaces
//
uvm_reg::include_coverage(UVM_CVR_ALL); // Enable all register coverage types
As the register model is built, coverage sampling is enabled for the different coverage categories that have been enabled.
The coverage sampling for a category of covergroups within a register model hierarchical object can then be controlled
using the set_coverage() method in conjunction with the has_coverage() method (which returns a value corresponding to
the coverage categories built in the scope) and the get_coverage() method (which returns a value corresponding to the
coverage model types that are currently being actively sampled).
For more detail on how to implement a register model so that it complies with the build and control structure for
covergroups, see the ModelCoverage article.
The covergroups within the register model will in most cases be defined by the model specification and generation
process and the end user may not know how they are implemented. The covergroups within the register model can be
sampled in one of two ways.
Some of the covergroups in the register model are sampled as a side-effect of a register access and are therefore
automatically sampled. For each register access, the automatic coverage sampling occurs in the register and in the block
that contains it. This type of coverage is important for getting coverage data on register access statistics and information
which can be related to access of a specific register.
Other covergroups in the register model are only sampled when the testbench calls the sample_values() method from a
component or a sequence elsewhere in the testbench. This allows more specialised coverage to be collected. Potential
applications include:
• Sampling register state (DUT configuration) when a specific event such as an interrupt occurs
• Sampling register state only when a particular register is written to
`uvm_component_utils(spi_reg_functional_coverage)
bit wnr;
spi_reg_block spi_rm;
covergroup reg_rw_cov;
option.per_instance = 1;
bins SS = {5'h18};
bins RD = {0};
bins WR = {1};
endgroup: reg_rw_cov
//
//
// Note that the field value is 64 bits wide, so only the relevant
covergroup combination_cov;
option.per_instance = 1;
bins RATIO[] = {16'h0, 16'h1, 16'h2, 16'h4, 16'h8, 16'h10, 16'h20, 16'h40, 16'h80};
endgroup: combination_cov
endclass: spi_reg_functional_coverage
function new(string name, uvm_component parent);
super.new(name, parent);
reg_rw_cov = new();
combination_cov = new();
endfunction
address = t.addr[4:0];
wnr = t.we;
reg_rw_cov.sample();
if(address == 5'h10)
begin
if(wnr) begin
if(t.data[8] == 1) begin
combination_cov.sample(); // TX started
end
end
end
endfunction: write
Justification:
Wrapping a covergroup in this way has the following advantages:
• The uvm_object can be constructed at any time - and so the covergroup can be brought into existence at any time, which
aids conditional deferred construction
• The covergroup wrapper class can be overridden from the factory, which allows an alternative coverage model to be
substituted if required
• This advantage may become more relevant when different active phases are used in future.
Example:
class covergroup_wrapper extends ovm_object;
`ovm_object_utils(covergroup_wrapper)
endfunction: sample
endclass: covergroup_wrapper
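The excerpt above is heavily abbreviated. A minimal sketch of the wrapper pattern it illustrates is given below; the covergroup contents and the sampled arguments are assumptions, the points of interest being the object wrapper around the covergroup, its construction in new() and the sample() pass-through method.
class covergroup_wrapper extends ovm_object;
  `ovm_object_utils(covergroup_wrapper)
  // Example covergroup - contents are purely illustrative
  covergroup cg(string name) with function sample(bit[4:0] addr, bit wnr);
    option.per_instance = 1;
    option.name = name;
    ADDR: coverpoint addr;
    WNR: coverpoint wnr;
    ADDR_X_WNR: cross ADDR, WNR;
  endgroup: cg
  function new(string name = "covergroup_wrapper");
    super.new(name);
    cg = new(name);
  endfunction
  function void sample(bit[4:0] addr, bit wnr);
    cg.sample(addr, wnr);
  endfunction: sample
endclass: covergroup_wrapper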