Chip-Level FPGA Verification: How To Use A VHDL Test Bench To Perform A System Auto Test
Synopsis
This paper explores the advantages of chip-level FPGA verification, and describes a basic procedure for
developing a system auto test with a VHDL virtual test bench. It is intended for engineers and technicians
with a good understanding of VHDL programming and test bench operation.
This approach takes FPGA verification beyond the basic level of module-level simulation and waveform
inspection. While it is not appropriate for small, glue logic FPGA verification or full ASIC development,
it is an ideal strategy for moderate-to-large FPGA applications where maximum code coverage and quick
time-to-market are the priorities.
As shown in Figure 1, the architecture for this approach is a VHDL test bench which utilizes a complete
range of test cases, bus-functional models and auto-testing monitors to stimulate and evaluate all aspects
of FPGA performance.
[Figure 1 block diagram: the TESTBENCH contains the TESTCASE(S), the bus-functional MODEL(S), the UNIT UNDER TEST (UUT) and the self-checking MONITOR.]
Figure 1: The VHDL test bench controls and selects the test cases; the test cases define the bus-functional models which
stimulate the UUT and the auto-testing monitor which tests the UUT output.
CONTENTS
Synopsis
1. Introduction: The Need For Chip Level Verification
2. Initial Considerations
- Software Issues
- Writing Reusable Code
Naming Conventions
Defining Parameters
Establishing Subprograms
- Scheduling Code Reviews
3. Architecture For Chip Level Verification
- The VHDL Test Bench
Control
Report
File I/O
Random Numbers
- Test Cases
- Bus-Functional Models
- Self-Checking Monitors
4. Additional Considerations
- Asynchronous Input Timing
- Code Coverage Tools
- Gate Timing Analysis Tools
5. Summary
6. References
2. Initial Considerations
Software & Documentation
As software, VHDL benefits from the same controls as C++ or any other programming language.
Maintainability, repeatability and reusability are all important factors in achieving time-to-market goals.
Source control is essential for releases; there must be an accurate, documented history of the changes and
updates to each module. To organize your source control efforts, consider purchased programs such as
PVCS or freeware such as CVS.
Bug tracking can often be accomplished with a simple spreadsheet. It is important to document each
instance, assign team members to investigate, and record the final resolution so that nothing is missed or
forgotten.
A reader-friendly coding style will make it easier for team members to implement the functionality. The
use of white space, comment line separators and a comment header for procedures will make your code
more readable and understandable.
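For example, a comment header along the following lines (the layout and fields are only a suggestion) gives every reader the same quick summary of a procedure's purpose:
--------------------------------------------------------------------------------
-- Procedure : cpuread
-- Purpose   : Performs one read cycle on the CPU bus-functional model
-- Parameters: addr  - register address to read
--             rdata - data returned from the UUT
-- Notes     : Verify setup stability before calling (see Subprograms)
--------------------------------------------------------------------------------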
Naming Conventions
If VHDL code is to be maintained or reused by other engineers, it is much more efficient to use common
naming conventions such as those shown in Figure 2. Naming conventions reduce the risk of ambiguity
and simplify the editing process.
Suffix   Description
_clk     All clocks
_rst     All resets
_l       Indicates the signal is active low
_r       Indicates the signal has been registered
_r2      Indicates the signal has been registered twice
_asyn    Indicates an asynchronous signal
Figure 2: The use of typical naming conventions reduces ambiguity and simplifies editing.
Parameters
VHDL parameters are implemented by defining constants. Local constants can be defined in the
architecture, while global constants are best defined in a separate package for reuse. As shown in
Figure 3, a FIFO model that is originally 16 bits x 256 words can easily be changed to 32 bits x 128
words with careful definition of local and global constants.
Figure 3: VHDL parameters are implemented with local and global constants.
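A minimal sketch of this approach, with hypothetical package and constant names, keeps the global sizes in a shared package so that a single edit reconfigures the FIFO model:
-- Hypothetical global constants kept in a shared package for reuse
package tb_params_pkg is
  constant DATA_WIDTH : integer := 16;    -- change to 32 for a 32 bit FIFO
  constant FIFO_DEPTH : integer := 256;   -- change to 128 for a 128 word FIFO
end package tb_params_pkg;

-- Local constant derived inside the FIFO model architecture
constant ADDR_BITS : integer := 8;        -- must match log2(FIFO_DEPTH)
Ports and memory arrays in the model are then declared in terms of DATA_WIDTH and FIFO_DEPTH, so the two constants fully describe the FIFO size.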
Unconstrained parameters are often used to make the test bench more flexible. For example, the packet
size may vary with different test cases. By using an unconstrained parameter, the size of the data can
be determined by the data itself. As shown in Figure 4, unconstrained parameters are declared with an
open range (natural RANGE <>) and are queried with attributes such as 'range, 'left and 'right.
Unconstrained Parameter
TYPE frame_array IS ARRAY (natural RANGE <>) OF integer;
for i in frame_array'range loop
Figure 4: An example of an unconstrained parameter.
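For instance, a checking procedure written against the unconstrained type works for frames of any size, because the actual parameter fixes the bounds at the call (the procedure name below is illustrative):
-- Works for any frame size: the caller's array determines the range
procedure check_frame(frame_in : in frame_array; frame_tb : in frame_array) is
begin
  for i in frame_in'range loop            -- assumes both frames share one range
    assert frame_in(i) = frame_tb(i)
      report "frame mismatch at index " & integer'image(i)
      severity error;
  end loop;
end check_frame;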
Subprograms
Abstracting functionality into subprograms whenever possible is good software practice. For example, a
CPU read can be placed in a procedure to make it more maintainable and reusable. Keep in mind that
VHDL procedures are sequential when called in a process and therefore have no view of previous events.
Since the parameters are local, a procedure cannot check for setup stability on an input signal, so setup
stability needs to be verified before the procedure call. Procedures can, however, check hold stability with
internal signals. VHDL also supports procedure overloading; use overloading to define procedures with
several parameter list options.
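As a sketch of overloading, two logging procedures (the name log_value is hypothetical) can share one name as long as their parameter lists differ; the simulator selects the matching version at each call:
-- Overloaded procedures: same name, different parameter lists
procedure log_value(name : in string; value : in integer) is
begin
  report name & " = " & integer'image(value) severity note;
end log_value;

procedure log_value(name : in string; value : in time) is
begin
  report name & " = " & time'image(value) severity note;
end log_value;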
Two examples of VHDL subprograms are shown in Figure 5. The first procedure implements a clock
delay; the second asserts and then negates reset during initialization.
-------------------------------------------------------------------------------
procedure clk_dly(clk_cnt : in integer; signal clk : in std_logic) is
begin
for I in 1 to clk_cnt loop
wait until rising_edge(clk);
end loop;
end clk_dly;
-------------------------------------------------------------------------------
procedure init_reset(signal RESET_IN_L_Signal : inout std_logic) is
begin
RESET_IN_L_Signal <= '0'; -- assert
wait for 126 ns; -- Initialize reset
RESET_IN_L_Signal <= '1'; -- negate
wait for 100 ns; -- let reset trickle through
end init_reset;
Figure 5: examples of VHDL subprograms
3. Architecture For Chip Level Verification
The VHDL Test Bench
Control
The selection of test cases can be accomplished in several ways. The selection information can be read
from an external file, from defined parameters within the test bench architecture, or from defined
parameters in an external package. The external package approach works well because it compiles
quickly and isolates any editing steps from the test bench itself. The example below shows how to define
parameters for selecting which test case should be run. Each test case is specified with a control if/then
statement, so that the parameters can be defined as true or false.
-- define true for test(s) to be run (true or false)
constant do_all       : boolean := false;
constant do_video_min : boolean := true;
constant do_video_max : boolean := false;

if do_video_min or do_all then
if do_video_max or do_all then
Figure 6: how to define parameters for selecting test cases
The test bench should be designed to run for the full duration of all the selected test cases and then stop
automatically. Since the overall running time is dynamic, running a test bench for a fixed time period may
provide inaccurate results and should be avoided. The VHDL code for stopping the simulation
automatically after all test cases have completed is shown in Figure 7.
assert false report "**** SIMULATION COMPLETE ****"
severity Failure; -- stop sim
wait; -- NEED a wait at the end of a testbench process
Figure 7: how to set the test bench for auto stop after all test cases have been run
Report
The test bench should be designed to report the results of all test cases with run time status reports. It is
good practice to have the test bench report each test failure and display the relevant details next to the
expected results, along with a time stamp. To provide finer resolution, it is also helpful to get a report of the
specific subtest within the test case in question. A constant can be added which will turn these reports on
or off, similar to C software debugging with printf statements. Reports can be sent to display, file or both.
The example below shows how to set up a video test case status report:
if v_verbose then
assert false report "**** TEST Video In Minimum ****"
severity note; -- (severity note; error; failure)
end if;
Figure 8: how to set up a video test case status report
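A hedged sketch of the fuller failure report described above, showing the expected value, the captured value and a time stamp (the signal names are illustrative):
-- Report the failing value next to the expected value, with a time stamp
if rx_data /= exp_data then
  assert false
    report "VIDEO MIN: data mismatch at " & time'image(now) &
           " expected=" & integer'image(exp_data) &
           " actual="   & integer'image(rx_data)
    severity error;
end if;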
File I/O
Setting up file I/O for VHDL can be a cumbersome process. Each line must be generated before it can
actually be written; the parameters of each line must be parsed after the line is read. Fortunately, the
process can be simplified by employing a utility subprogram that can be reused for other projects. The
subprogram will be easier to maintain if the file name is defined in the entity or package.
The sample file I/O utilities shown in Figure 9 include the file declarations and the read_test and
wr_string subprograms.
file tb_control_file : text is in  "tb_control.txt";
file tb_report_file  : text is out "tb_report.txt";
------------------------------------------------------------------
procedure read_test(which_test : out integer) is
  variable line_in : line;
begin
  -- read the test(s) to perform from the control file
  readline(tb_control_file, line_in);
  read(line_in, which_test);
end read_test;
----------------------------------------------------------------
procedure wr_string(string_out : in string) is
  variable line_out : line;
begin
  write(line_out, string_out);
  writeline(OUTPUT, line_out);          -- output to display
  write(line_out, string_out);
  writeline(tb_report_file, line_out);  -- output to report file
end wr_string;
Figure 9: A utility subprogram simplifies File I/O setup.
Random Numbers
Use a random number generator to provide more realistic test bench model parameters. For example, a
propagation delay for a CPU bus-functional model can be set to vary according to a random number to
provide a more realistic simulation. VHDL does not have a built-in random function, but there are several
packages readily available.
cpuOut.oe_n <= '0';
cpuOut.addr <= conv_std_logic_vector(addr, ADDR_SIZE+1);
wait for tsh * rnd_pkg.random;   -- setup time tsh scaled by a random factor
cpuOut.re_n <= '0' after tdRd;
Figure 10: Use a random number generator to provide a more realistic simulation.
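If no third-party package is available, a small random function can be layered on the uniform procedure from the standard ieee.math_real library. The sketch below assumes the package name rnd_pkg used in the fragment above and VHDL-93 style shared variables:
library ieee;
use ieee.math_real.all;

package rnd_pkg is
  impure function random return real;      -- returns a value in (0.0, 1.0)
end package rnd_pkg;

package body rnd_pkg is
  shared variable seed1 : positive := 42;  -- seeds for the uniform generator
  shared variable seed2 : positive := 97;

  impure function random return real is
    variable r : real;
  begin
    uniform(seed1, seed2, r);              -- standard ieee.math_real procedure
    return r;
  end function random;
end package body rnd_pkg;
Each call advances the seeds, so successive bus cycles see different delays.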
Bus-Functional Models
A bus-functional model (BFM) emulates the timing and function of the bus. For example, a CPU
BFM emulates the bus signals such as ce_n, data and address. A BFM does not emulate the processor
boot, fetch, or run modes; it simply emulates the timing of the I/O signals. When implementing a
BFM with procedure calls, each bus interface signal needs to be passed from the test
bench to the model with each procedure call. The length of the parameter list can be
minimized by creating a record for the bus signals, as shown below in Figure 11. If this type of
record is used, each interface signal needs to be passed only once.
-------------------------------------------------------------------------------
constant tsRd : time := 1.0 ns;   -- time read setup
constant thRd : time := 1.0 ns;   -- time read hold
constant tdRd : time := 1.0 ns;   -- time read data
-------------------------------------------------------------------------------
TYPE cpuOut_type IS RECORD
  cs_n : std_logic;
  addr : std_logic_vector(ADDR_SIZE downto 0);
  -- remaining bus outputs (oe_n, re_n, ...) omitted for brevity
END RECORD;
-------------------------------------------------------------------------------
procedure cpuread (addr : in integer; rdata : out integer;
                   signal cpuIn  : in  cpuIn_type;
                   signal cpuOut : out cpuOut_type) is
begin
  wait until rising_edge(cpuIn.clk);
  wait for tsRd;
  cpuOut.cs_n <= '0';
  cpuOut.addr <= conv_std_logic_vector(addr, ADDR_SIZE+1);
  clk_dly(wait_st, cpuIn.clk);              -- wait_st = number of wait states
  rdata := conv_integer(cpuIn.data);        -- read data
  cpuOut.cs_n <= '1' after tdRd;
  wait for thRd;
  cpuOut.cs_n <= '1';
  cpuOut.addr <= (others => '0');
end cpuread;
Figure 11: Creating a record for bus signals helps to minimize the parameter list.
Self-Checking Monitors
The monitors capture and test all output signals from the FPGA. Defining monitors and test vectors can
be challenging. Normally, the first step is to decide whether each monitor should be concurrent or
sequential. It is advantageous to minimize the number of processes which run concurrently; if a monitored
signal can be checked sequentially by a subprogram call, simulation run times can be reduced. For
example, a subprogram could be used to monitor and test data when a test case sends a video frame.
However, if the output can occur at any time, then the monitor should be defined as a concurrent process
in the top-level test bench architecture.
Test vectors and their sources should be viewed from the system level. Test vectors are the stimulus data
and can be defined as files, packages, or processes depending on the test bench requirements. If there are
a large number of vector types, the best solution might be to define a process to generate them. If a test
vector is large, a bitmap file may be used. Finally, it may be best to define the test vectors in a package
if they are limited in size and type. It is also good practice to use proven test vectors (sometimes called
golden vectors) to test the captured data.
Monitor timing requirements are defined by the external requirements of the system. If the data is being
sent to a FIFO, refer to the FIFO datasheet for timing requirements. Use the 'stable attribute to verify
setup and hold times, as shown in the sketch below.
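For example, a setup check on a FIFO write interface could sample the stable attribute at the active clock edge; the 2 ns figure and the signal names below are illustrative:
-- Verify that write data has been stable for the setup time at each clock edge
setup_check : process(fifo_wr_clk)
begin
  if rising_edge(fifo_wr_clk) then
    assert fifo_wr_data'stable(2 ns)
      report "FIFO write data setup violation at " & time'image(now)
      severity error;
  end if;
end process setup_check;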
----------------------------------------------------------------------
-- subprogram example of a monitor
procedure test_data(data_in : in frame_type; data_tb : in frame_type) is
begin
  if data_in /= data_tb then
    log_error(" VinData", data_in, data_tb);
    assert false severity error;
  end if;
end test_data;
----------------------------------------------------------------------
-- concurrent monitor example located in the test bench architecture
video_monitor : process
begin
  wait until eof = '1';
  if video_test_en then
    capture_video(video_clk, video_in(x, y));
  end if;
  wait;
end process video_monitor;
Figure 12: Examples of test monitors
Test Cases
The UUT needs to be exercised with a variety of test cases which represent a broad range of potential
system scenarios. To cover as many scenarios as possible, involve your entire team in the process. The
test cases should be defined to reflect the system output (frames of video, packets of data, etc.) and should
be modeled for nominal, minimum, maximum and error cases. Error cases should include scenarios such
as wrong frame size, corrupt header, wrong number of pixels, etc. For example, a test case for the fiber
optic protocol FICON would model the smallest packet size, largest packet size, a packet with a corrupted
header, etc. An NTSC video test case could be set up for too few lines, too many pixels or a corrupted
trailer.
The final test case should be set up to test the test. Insert a real error and verify that the self-checking
monitors catch the problem. This can be accomplished by inserting an invalid intermediate test bench
signal.
Although test cases can be written in the test bench architecture, it is better practice to use a subprogram if
they are very complex or if there are a large number of cases. Figure 13 provides an example of a frame
minimum test case.
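Along those lines, a frame-minimum test case packaged as a subprogram might look like the sketch below; the constant, signal and helper names (MIN_FRAME, FRAME_CLOCKS, STATUS_REG_ADDR) are hypothetical, while cpuread, clk_dly, wr_string and test_data follow the earlier figures:
-- Frame-minimum test case: drive the smallest legal frame, then self-check
procedure tc_video_min(signal cpuIn   : in  cpuIn_type;
                       signal cpuOut  : out cpuOut_type;
                       signal vid_in  : out frame_type;
                       signal vid_out : in  frame_type) is
  variable status : integer;
begin
  wr_string("**** TEST Video In Minimum ****");
  vid_in <= MIN_FRAME;                              -- smallest legal frame (golden vector)
  clk_dly(FRAME_CLOCKS, cpuIn.clk);                 -- let the frame propagate through the UUT
  cpuread(STATUS_REG_ADDR, status, cpuIn, cpuOut);  -- read the UUT status register
  test_data(vid_out, MIN_FRAME);                    -- compare captured output to the golden frame
end tc_video_min;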
4. Additional Considerations
Asynchronous Timing Verification
Asynchronous inputs cannot be accurately modeled for timing; their timing must be verified by other
means. While a 50 MHz system has a period of 20 ns, the simulation tool typically has a resolution of
only 1 ps, which implies 20,000 possible timing positions for the input within a single period.
simulate each of these possibilities. Although beyond the scope of this paper, a more efficient approach
would be to verify that time of flight is sufficient by performing a timing analysis.
5. Summary
As FPGA components become more complex and time-to-market considerations become more critical, a
new strategy for FPGA code verification is required. A properly executed chip-level verification strategy
will provide maximum code coverage in a minimum amount of time. A VHDL test bench written at the
chip level is easy to maintain and reuse, and can also be utilized for verification at the behavioral,
post-synthesis and gate simulation levels.
With careful consideration of the test bench architecture and a fully developed array of test cases, a
VHDL auto test bench offers the best compromise between absolute design integrity of complex FPGA
devices and practical time-to-market considerations.
6. References
Writing Testbenches: Functional Verification of HDL Models by Janick Bergeron
Reuse Methodology Manual by Michael Keating and Pierre Bricaud
VHDL for Logic Synthesis by Andrew Rushton
HDL Chip Design by Douglas J. Smith