Benchmark Factory User Guide - 830
User Guide
© 2019 Quest Software Inc.
ALL RIGHTS RESERVED.
This guide contains proprietary information protected by copyright. The software described in this guide is
furnished under a software license or nondisclosure agreement. This software may be used or copied only in
accordance with the terms of the applicable agreement. No part of this guide may be reproduced or transmitted
in any form or by any means, electronic or mechanical, including photocopying and recording for any purpose
other than the purchaser’s personal use without the written permission of Quest Software Inc.
The information in this document is provided in connection with Quest Software products. No license, express
or implied, by estoppel or otherwise, to any intellectual property right is granted by this document or in
connection with the sale of Quest Software products. EXCEPT AS SET FORTH IN THE TERMS AND
CONDITIONS AS SPECIFIED IN THE LICENSE AGREEMENT FOR THIS PRODUCT, QUEST SOFTWARE
ASSUMES NO LIABILITY WHATSOEVER AND DISCLAIMS ANY EXPRESS, IMPLIED OR STATUTORY
WARRANTY RELATING TO ITS PRODUCTS INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTY
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. IN NO EVENT
SHALL QUEST SOFTWARE BE LIABLE FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE,
SPECIAL OR INCIDENTAL DAMAGES (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF
PROFITS, BUSINESS INTERRUPTION OR LOSS OF INFORMATION) ARISING OUT OF THE USE OR
INABILITY TO USE THIS DOCUMENT, EVEN IF QUEST SOFTWARE HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES. Quest Software makes no representations or warranties with respect to the
accuracy or completeness of the contents of this document and reserves the right to make changes to
specifications and product descriptions at any time without notice. Quest Software does not make any
commitment to update the information contained in this document.
If you have any questions regarding your potential use of this material, contact:
Quest Software Inc.
Attn: LEGAL Dept
4 Polaris Way
Aliso Viejo, CA 92656
Refer to our web site (www.quest.com) for regional and international office information.
Patents
This product includes patent pending technology. For the most current information about applicable patents for
this product, please visit our website at www.quest.com/legal.
Trademarks
Quest, Quest Software, Benchmark Factory, Foglight, Spotlight, SQL Navigator, Toad, SharePlex, and the Quest
logo are trademarks of Quest Software Inc. in the U.S.A. and other countries. For a complete list of Quest
Software trademarks, please visit our website at www.quest.com/legal. Microsoft, Windows, Windows Server,
Visual Studio, SQL Server, SharePoint, Access and Excel are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries. Oracle is a trademark or registered trademark
of Oracle and/or its affiliates in the United States and other countries. Citrix® and XenApp™ are trademarks of
Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent
and Trademark Office and in other countries. SAP is the registered trademark of SAP AG in Germany and in
several other countries. IBM and DB2 are registered trademarks of International Business Machines
Corporation. All other trademarks, servicemarks, registered trademarks, and registered servicemarks are
property of their respective owners.
Benchmark Factory
User Guide
Tuesday, 6 August 2019
Contents
Contents 3
Licensing 22
Licensing 22
Licenses Determine Feature Availability 22
Toad Edition Licenses 22
Licensing Benchmark Factory 22
Upgrade an Earlier-Version License Key 23
Add Virtual Users 23
Agents 42
About Agents 42
View Agent Information 42
Types of Agent Components 42
Using Agents 43
Set Up New User Agent 43
Agent Settings 43
Install Remote Agent on Windows 45
Configure Firewall 45
Install Remote Agent on Linux 46
Prerequisites 46
Benchmarks 138
Overview of Benchmark Testing 138
Realistic Expectations When Using Benchmarks 138
What Benchmarks Measure 138
Provided Benchmarks 138
AS3AP Benchmark 139
Best Practices 139
Scalable Hardware Benchmark 140
How the Scalable Hardware Benchmark Works 140
Scaling Factor 140
Best Practices 140
TPC-B Benchmark 141
Overview 141
Certification of Transaction Processing Council (TPC) Testing Results 141
Best Practices 141
Scaling Factor 142
TPC-C Benchmark 142
Overview 142
TPC-C Tables 142
Certification of Transaction Processing Council (TPC) Testing Results 143
Best Practices 143
TPC-D Benchmark 144
Overview 144
Certification of Transaction Processing Council (TPC) Testing Results 144
Best Practices 144
Scaling Factor 145
TPC-E Benchmark 145
Settings 172
About Settings 172
General Settings 172
Benchmarks Settings - General 173
Benchmark Settings - Specific Test Types 174
TPC-B Benchmark Settings 175
TPC-B History Tables 175
Replication Benchmark Settings 176
Table Structure Settings 176
Timing Settings 177
User Load Settings 178
Latency Settings 178
Error Handling Settings 179
Repository Settings 180
Statistics Settings 181
Agent Settings 181
Oracle Settings 183
SQL Server Settings 184
Execute File Settings 184
BFScripts 218
About Scripts 218
BFScript Wizard 219
BFScript Wizard 220
Repository 243
Repository Manager 243
Data Repository Migration Wizard 245
Troubleshooting 246
Support Bundle 246
Agent Connection 246
Troubleshooting Standard Benchmark Tests 247
Use Benchmark Factory with SQL Server 2005 Client 247
MySQL Initialization Settings 248
Appendix 255
Change Graph Views 255
Show Data/Show Graph 255
Graph Legend 256
Toolbar 256
Print 257
Copy Data to Clipboard 257
Copy Graph to Clipboard 257
Load Configuration 257
Save Configuration 257
Set as Default 257
Clear Configuration 258
Customize List Controls 258
Create a SQL 2005 Trace Table Using the SQL Server Profiler 258
Creating a SQL 2008/2008 R2 Trace Table Using the SQL Server Profiler 259
Oracle Instant Client Installation 260
Migrating Repository Data Using the DOS Command Line 260
Stored Procedure Examples 261
Oracle Trace File Activation 262
System Requirements/Upgrade Requirements/Supported Databases 263
System Requirements 263
Shortcut Keys 265
Creating an ODBC Trace File 266
Configure Firewall for Remote Agent Install/Start-Up 267
Enable WMI on Agent Machine 267
Set Inbound and Outbound Rules 267
Troubleshooting 268
About Us 269
We are more than just a name 269
Our brand, our vision. Together. 269
Contact Quest 269
Technical Support Resources 269
Benchmark Factory Community 270
Benchmark Factory 8.3 is a minor release and includes the following new features and enhancements. For a list
of resolved issues in this release, see the Benchmark Factory 8.3 Release Notes at the Quest Support Portal.
Database Support
This release of Benchmark Factory includes support for the following database versions:
l Oracle 19c
Connections
MySQL Native Connection. Beginning with this release, Benchmark Factory now provides the ability to create a
MySQL connection using Native database connectivity.
1. To create a MySQL Native connection, click the New Connection button in the main toolbar. Then select
MySQL from the Database Type drop-down list.
2. Select the Native tab in the New Connection dialog and enter the connection information.
Note: The following Benchmark tests are supported: TPC-C, TPC-E, and TPC-H.
Database Support
This release of Benchmark Factory includes support for the following database versions:
l Oracle 18c
l MySQL 8.0
Installation Package
The Benchmark Factory Agent for Linux is now distributed as an RPM package. This provides a number of
advantages over the previous archive file (.tar) format. This format allows BMF to provide automatic installation
through the Benchmark Factory Console. You can also use YUM to install the Benchmark Factory Agent RPM
package on Linux manually.
l If you use a version of the Oracle Instant Client later than 10g R2, you must create a symbolic link to the
shared library for the Oracle Instant Client you intend to use.
l Additional Information—For additional prerequisites and more-detailed information, see "Install Remote
Agent on Linux" in the Benchmark Factory Help.
The Benchmark Factory 8.1 Agent for Linux is available for download from the Benchmark Factory
Support Page.
BMFServer.exe Enhancements
l The default REST API port number for BMFServer.exe is now the same as the Benchmark Factory
console, port 30100; a sample request appears after this list.
l The BMFServer log file (BMFServer.log) is now located in: \My Benchmark
Factory\<version>\<bitness>\Error Logs
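For illustration, a request to the REST API on the default port might look like the following. This is only a sketch: localhost and the /api/jobs resource path are assumptions, so consult the REST API reference page for the actual resource names.
curl http://localhost:30100/api/jobs
If you configured a different port for the console or BMFServer.exe, substitute that port number for 30100.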
BMFServer.exe
This release includes a non-UI Benchmark Factory application, BMFServer.exe.
l BMFServer.exe is installed into the bin sub-directory of the installation directory when you install
Benchmark Factory.
l To start BMFServer.exe, go to the installation directory. The default location is C:\Program Files\Quest
Software\Benchmark Factory <version>\bin. Then double-click BMFServer.exe.
l BMFServer.exe performs the same functionality as BFactory.exe, except it has no graphical user interface.
l Use BMFServer.exe when automating your continuous improvement/testing process. You can run
BMFServer.exe using a script, a custom application, or the Command Prompt window. BMFServer.exe
can be used with the Benchmark Factory REST API.
l You cannot run the Benchmark Factory console and BMFServer.exe at the same time on the
same machine.
BMFAgent.exe
Benchmark Factory now includes a non-UI Agent, BMFAgent.exe.
l BMFAgent.exe is installed into the bin sub-directory of the installation directory at the time you install
Benchmark Factory.
Tip: You can also specify default values to automatically apply to Oracle connections in Edit |
Settings | Oracle.
Note: Specific database privileges are required to perform this action. The privilege required is
dependent on the database version and the option selected. See the online Help for more
information.
l SQL Server®—For SQL Server connections, you can instruct Benchmark Factory to clear data buffer
caches and procedure caches. To specify this option, select Edit | Connections. Then select a
connection and click the Edit button. In the connection properties dialog, select the Miscellaneous tab.
Then specify the Database Flush option for this connection.
Note: This option is only applicable to SQL Server 2005 or later. The sysadmin fixed server role
is required.
Connections
Microsoft® SQL Server 2017. This release includes support for SQL Server 2017. Benchmark Factory has been
tested against SQL Server 2017 running on Windows or Linux.
IBM® DB2®. This release includes support for IBM DB2 11.1 for LUW and for z/OS.
General
Adding Bind Parameters
It is now easier to add a bind parameter/value pair in Test Options | Transactions.
l When adding a new statement in the Add SQL Transaction dialog or when editing an existing
statement, you can now simply double-click within the Bind Parameters tab to add a bind parameter
and parameter value.
Installation
Universal C Runtime Component
The Universal C Runtime component for Windows is required. See Universal C Runtime Update or Visual C++
Redistributable for Visual Studio 2015 to download this software.
Note: If you encounter an error when installing this software, install the missing prerequisite software. For
Windows 8.1 or Windows Server 2012 R2, install the April 2014 update: https://support.microsoft.com/en-
us/kb/2919355. For other operating systems, see the Universal C Runtime Update Prerequisites section.
Note: The features described in the New in This Release section apply to the commercial version of
Benchmark Factory and may not be available in the freeware edition.
Benchmark Factory® is a database performance testing tool that allows you to conduct database workload
replay, industry-standard benchmark testing, and scalability testing.
Review the following topics for a quick overview of Benchmark Factory.
l Benchmark Factory blogs—Find How To's, useful articles, and tips and tricks for using
Benchmark Factory.
l How to use the REST API (part 1)—Learn how to use the Benchmark Factory REST API.
l How to use the REST API (part 2)—Learn how to modify an existing job or connection using
the REST API.
l REST API—This page lists the Benchmark Factory REST API resources and includes example
request and response content.
l Sample PowerShell script using REST API—This sample PowerShell script demonstrates some
basic functionality of the Benchmark Factory REST API.
l Benchmarking Best Practices—10 best practices to help you get started with database
benchmarking.
l Project Converter—Use this tool to convert an Oracle capture project file created in an earlier
version of Benchmark Factory to .xml format.
l Toad World—Visit other Toad Communities, including Toad for Oracle. Find DBMS and SQL knowledge,
find software downloads, and find answers to your database questions.
l Benchmark Factory Product Information—Find white papers, product demos, and purchasing
information.
Licensing
The Benchmark Factory Licensing dialog allows you to enter a new license key or modify a license.
To extend a trial, purchase a license, or find answers to your licensing questions, contact Quest at
https://www.quest.com/contact.
Example: ACCOUNTNAME-nnn-nnn-nnn
2. Then go to the License Key Upgrade page: http://license.quest.com/upgrade.
3. Enter your e-mail address and your existing license number, and then follow the prompts.
If you need help finding your license number or an upgrade key, please contact the License Administration
team: https://support.quest.com/licensing-assistance.
l Find applications that do not scale well with an increase in the number of users
l Find breaking points, weak links, or bottlenecks of a system
l Quantify application or server performance with realistic workloads
To request additional concurrent load users, please contact your Quest Software representative, or visit the
Benchmark Factory Web site.
The Repository
All test results are collected and stored in the repository for data analysis and reporting. Benchmark Factory
collects a vast amount of statistics, including overall server throughput (measured in transactions per second,
bytes transferred, etc.) and detailed transaction statistics by individual workstation producing a load. You use
these statistics to measure, analyze, and predict the capacity of a system.
Agents
Agents simulate virtual users and send transactions to the system-under-test (database).
Run Reports
Benchmark Factory Run Reports is a separate component used to view the detailed test results in a report
format. You can open Run Reports from the Benchmark Factory console (Tools | Run Reports) or from the Start
menu (Benchmark Factory | Run Reports).
Benchmark Factory uses Agents to deploy the virtual users. An Agent is a software routine that waits in the
background and performs an action when a specified event occurs. One Agent can simulate thousands of virtual
users (limited by hardware and workload characteristics) at a time. Each virtual user has its own connection to
the system under test.
Understanding Benchmarks
A benchmark is a performance test of hardware and/or software on a system-under-test. Benchmark Factory
provides the option of using industry standard benchmarks during the load testing process. Benchmarks
measure system peak performance when performing typical operations.
Benchmark Factory comes equipped with the following industry standard benchmarks:
l AS3AP Benchmark
l Scalable Hardware Benchmark
l TPC-B Benchmark
l TPC-C Benchmark
l TPC-D Benchmark
Get the latest product information and find helpful resources at the Benchmark Factory community at:
https://www.toadworld.com/products/benchmark-factory-for-databases
Benchmark Factory Console
The Benchmark Factory console implements the database workload testing process. This interface is where:
About Agents
The Benchmark Factory Agent is a component used in Benchmark Factory to create virtual users which simulate
real-world user activity by placing transactions against the database-under-test. The Benchmark Factory Agent
is installed when the Benchmark Factory Console is installed.
Using Agents
Review the following topics to learn how to use the Benchmark Factory Agent.
l Set Up New User Agent—To set up a remote agent
l Agent Settings —To view configured agents and specify default settings
l Install Remote Agent on Windows—To install a remote agent on a Windows platform
l Install Remote Agent on Linux—To install a remote agent on a Linux platform
l Running Benchmark Factory with Multiple Agents—To learn how to run a test using multiple agents
l BMFAgent.exe—To learn about the non-GUI agent
Repository Manager
Note: If you create a new Benchmark Factory 5.5 (or later) repository, earlier versions of Benchmark Factory
will not work against this repository.
The Repository is a database where all of the test results are stored. Benchmark Factory inserts test results
into the repository and provides an easy way to access the data. By default, the Repository is a SQLite
database that resides on the same machine as Benchmark Factory. The Repository can reside on another
database server if required.
Note: By default in Benchmark Factory 7.1.1 or earlier, a MySQL database is created and used as the
Repository, unless you selected the SQLite option during installation. In Benchmark Factory 7.2 or later, by
default a SQLite database is created and used as the Repository.
To change the database, select the Data Source Name of the ODBC connection for the new database. To
migrate data from one database to another, click Data Migration to open the Data Migration Wizard.
Note: If the database structure does not exist on the selected database, a prompt to create the structure will
appear when OK is clicked.
Connection Parameters
Data Source Name Data Source name of the ODBC connection used to connect to
the repository database.
ODBC Driver Current ODBC driver
User Name The User Name used to log into the selected database.
Password The Password associated with the user name used to log into
the database.
Edit DSN Displays the ODBC connection information dialog for the
selected data source.
ODBC Administrator Displays the ODBC Data Source Administrator dialog. Use this
to add and edit ODBC connections.
Test Connection Tests the connection of the currently selected ODBC Data
Source.
Maintenance
Create Creates the repository objects on the selected database.
Delete Deletes the repository objects on the selected database.
Jobs View
The Jobs View pane displays the list of jobs. After you create and save a job, the job is displayed in the Jobs
View pane. You can also use the Jobs View pane to identify the jobs that are currently running and the jobs that
are scheduled to run.
l To edit an existing job, select the job in the Jobs View pane and click . The Edit Job Wizard
opens. To learn more about the Job Wizard, see The Job Wizards.
Test Results
To view test results
l To view a job's test results, select a job in the Jobs View pane. Test results display in the right pane. See
Benchmark Factory Console for an overview of the Benchmark Factory console.
l To compare two or more run results for a test, select the Compare Results tab. Use Ctrl+click to select
multiple test runs. A comparison of the results for the various runs displays.
Note: To save a job as a Benchmark Factory script, select the job and click Save in the Benchmark Factory
toolbar or select File | Save.
BMFServer.exe
BMFServer.exe is a non-UI executable installed with Benchmark Factory. BMFServer.exe performs the same
functionality as Benchmark Factory, except BMFServer.exe has no graphical user interface. This allows you to
easily integrate BMFServer.exe into your continuous integration or continuous testing process.
Note: This feature is not available in the freeware edition of Benchmark Factory.
Details
l BMFServer.exe is installed into the bin sub-directory of the installation directory at the time you install
Benchmark Factory.
The default location is: C:\Program Files\Quest Software\Benchmark Factory <version>\bin
l You cannot run the Benchmark Factory console and BMFServer.exe at the same time on the
same machine.
l To enter or modify your Benchmark Factory license, you must use the Benchmark Factory console, not
BMFServer.exe.
Start BMFServer.exe
To Start BMFServer.exe
1. Open the installation directory. The default installation path is
C:\Program Files\Quest Software\Benchmark Factory <version>
2. Open the bin sub-directory.
3. Double-click BMFServer.exe.
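If you want to start BMFServer.exe from a script or the Command Prompt window instead (for example, as part of an automated test run), launching the executable from its installation path is equivalent to the steps above. The path below assumes the default installation location; replace <version> with your installed version.
"C:\Program Files\Quest Software\Benchmark Factory <version>\bin\BMFServer.exe"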
This topic outlines the Benchmark Factory workflow. Click the links in each step to drill down to more-detailed
information.
l Click the checkbox to the left of the agent name to select it. A checkmark displays for the
selected agent.
6. Click Save/Close to save the job and close the wizard.
7. The new job is added to the list of jobs in the Jobs View pane.
8. To rename the job, right-click the job in the Jobs View and select Rename.
Modify a Job
After creating a new job, you can modify the job. For example, you can change the database-under-test, add
tests/steps, or change test options.
b. Select the Test Options tab. Then select the User Load tab.
c. Modify the User Load. Click Save/Close when finished.
l In the Jobs View, select the job to run and click in the jobs toolbar, or right-click the job and
select Run Job.
About Agents
The Benchmark Factory Agent is a component used in Benchmark Factory to create virtual users which simulate
real-world user activity by placing transactions against the database-under-test. The Benchmark Factory Agent
is installed when the Benchmark Factory Console is installed. In addition, you can install additional agents on
other remote machines. Each Benchmark Factory agent can spawn multiple virtual-user sessions and
Benchmark Factory can control hundreds of Agent machines.
After installing additional agents, use your Benchmark Factory Console to define a connection to each agent
machine. When you create a new benchmark test through the Console, you can select which of the defined
agents to use to generate the user load.
Each virtual user is a separate thread, acting independently of the other virtual users, with its own connection to
the system-under-test. Each virtual user tracks its own statistics, including transaction times and the number of
times a transaction executes.
l Select Edit | Settings | Agent to view a list of all the configured agents which are available to be used in
testing, as well as platform information about each agent machine. Use this page to view the agent
global settings, as well.
l Double-click Agent.exe in the bin directory to open the Benchmark Factory Agent. The GUI displays
virtual user statistics during test execution. Select Options | Settings to configure options for this agent.
Using Agents
Review the following topics to learn how to use the Benchmark Factory Agent.
l Set Up New User Agent—To set up a remote agent
l Agent Settings —To view configured agents and specify default settings
l Install Remote Agent on Windows—To install a remote agent on a Windows platform
l Install Remote Agent on Linux—To install a remote agent on a Linux platform
l Running Benchmark Factory with Multiple Agents—To learn how to run a test using multiple agents
Agent Settings
Use this page of the Settings dialog to do the following:
Setup New User Agent Click to set up a new agent or to install a remote agent on
Windows or Linux.
l To learn how to set up an agent, see Set Up New User
Agent.
Tips:
l In the New/Edit Job Wizard, select Agent in the left pane of the wizard to access agent options for
the selected job. You can select agents or set up new agents from this page of the wizard.
l To open the Agent console, go to Program Files\Quest Software\Benchmark Factory
<version>\bin and double-click Agent.exe. See The Benchmark Factory Agent Console on page 50
for more information.
4. Enter the connection information for the remote machine. Review the following for additional information:
5. Click OK. Benchmark Factory checks the remote machine for an installed agent.
6. If no agent is found, Benchmark Factory prompts you to install the agent. Click Yes in the prompt
window. The Setup User Agent dialog opens.
7. In the Installer field, browse to and select the Benchmark Factory installer.
Note: The Installer can be located on your local machine or on the remote machine.
8. Click OK. The installer installs the agent component on the remote machine.
Note: If the agent fails to install, you may need to configure the firewall or attempt one of the
troubleshooting techniques. See Configure Firewall for Remote Agent Install/Start-Up.
9. When the agent is successfully installed, the progress window closes and the new remote agent is
displayed in the list.
10. After the remote agent is installed, you can double-click the agent name in the list to modify
agent options.
Configure Firewall
In order to install remote agents and allow communication with remote agents after installation, you may need to
configure the firewall on the console machine and on each agent machine. See Configure Firewall for Remote
Agent Install/Start-Up to learn more.
If you configure the firewall and then encounter an error when attempting to install a remote agent, find some
troubleshooting techniques here: Troubleshooting.
l PostgreSQL
l Oracle
l MySQL
See the Benchmark Factory Release Notes for information about database versions supported by
Benchmark Factory.
Prerequisites
Oracle Client. If you intend to use the Benchmark Factory Agent for Linux when testing against an Oracle
database, ensure an Oracle Client is installed on the same Linux machine as the Agent. Review the
following details:
l Oracle Client and Instant Client versions 10g R2 and later are supported.
l The Benchmark Factory Agent for Linux is compiled for Oracle Instant Client 10g R2. If you use a version
of the Oracle Instant Client later than 10g R2, you must create a symbolic link to the shared library for the
Oracle Instant Client you intend to use. See the instructions below.
l When installing an Oracle Instant Client, ensure you complete the necessary installation steps, such as
setting the LD_LIBRARY_PATH in the .bash_profile, if necessary (a minimal example follows this list). See
https://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html#ic_x64_inst for detailed instant
client installation instructions.
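As a minimal example of the LD_LIBRARY_PATH setting mentioned above, you could add a line such as the following to the .bash_profile of the account that runs the Agent. The Instant Client path shown is only a placeholder; substitute the directory where your Instant Client is installed.
export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH
Log out and back in (or source the .bash_profile) so the setting takes effect before the Agent starts.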
(Oracle Instant Client 11g or later) Create symbolic link for Oracle shared library
1. Locate the directory where the Oracle Instant Client you intend to use is installed.
2. From this directory, execute the following command:
ln -s libclntsh.so.<version> libclntsh.so.10.1
Where <version> is the version of libclntsh.so for the Oracle client you intend to use.
For example, if you intend to use Oracle Instant Client 11g R2, you would run the following:
ln -s libclntsh.so.11.1 libclntsh.so.10.1
RPM Package
Beginning with Benchmark Factory 8.2, the Agent for Linux is provided as an RPM package. The RPM package
can be downloaded or accessed from a Quest repository at https://bintray.com/quest/bmfrepo. Benchmark
Factory provides the following methods for installing the Agent on Linux:
Installation
You can install the Benchmark Factory Agent on a remote Linux machine manually or through the
Benchmark Factory Console. To install using the Benchmark Factory Console, you must be able to connect
to the remote machine.
Install the agent on each Linux machine you wish to use as an agent machine.
5. Click OK. Benchmark Factory checks the remote machine for an installed agent.
6. If no agent is found, Benchmark Factory prompts you to install the agent. Click Yes in the prompt
window. Benchmark Factory then uses YUM to install the agent.
7. When the agent is successfully installed, the progress window closes and the new remote agent is
displayed in the list.
8. After the remote agent is installed, you can double-click the agent name in the list to modify
agent options.
Select the RPM package file with the same <version> and <buildnumber> as your Benchmark
Factory Console.
2. Install the RPM package. For example, install the RPM package using YUM; a sample command follows these steps.
3. After installing the remote agent on the Linux machine, return to the Benchmark Factory Console to set
up the remote agent. See Set Up New User Agent on page 43 for more information.
3. Use this file along with the appropriate YUM commands to install and manage the RPM package for the
Benchmark Factory Agent on Linux.
4. After installing the remote agent on the Linux machine, return to the Benchmark Factory Console to set
up the remote agent. See Set Up New User Agent on page 43 for more information.
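As an illustration of the manual YUM installation mentioned above, the command might look like the following when run as root on the agent machine. The package file name is a placeholder; use the RPM file whose <version> and <buildnumber> match your Benchmark Factory Console.
yum install ./BenchmarkFactoryAgent-<version>-<buildnumber>.x86_64.rpm
YUM resolves any dependencies required by the package during installation.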
Additional Requirements
Edit the Hosts File
Benchmark Factory identifies the Benchmark Factory Console machine by its host name. If your network does
not provide host name resolution, you may need to add the name and IP address of the Benchmark Factory
Console host machine to the hosts file on the Linux agent machine. This allows the agent machine to
communicate with the Benchmark Factory Console machine.
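For example, if the Benchmark Factory Console machine is named BMFCONSOLE with the IP address 10.0.0.25 (both values are placeholders for your environment), you would add a line such as the following to /etc/hosts on the Linux agent machine:
10.0.0.25    BMFCONSOLE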
Additional Requirements for Running the Agent on Linux
After you install and set up the remote agent on a Linux platform, the Benchmark Factory Console starts the
agent when needed. To successfully run the agent on Linux, ensure the following requirements are met.
l Ensure the at package is installed on the Linux Agent machine. The at package is required in order to
start the Linux Agent from the Benchmark Factory Console. If the package is not installed, you can install
it using your package manager, such as Yum.
yum install at
l Ensure the atd service is running on the Linux Agent machine. Use the following commands:
systemctl start atd
systemctl enable atd
Each Benchmark Factory Agent must be configured with the address of the Benchmark Factory Console. Each
Agent sends load testing results back to the Benchmark Factory Console.
If you use only the agent installed locally on the console machine, make sure your local agent is configured with
the IP address (name) of your local machine.
4. Click OK.
5. Repeat this procedure for each Agent machine.
4. As the job runs, all connected Agents will display in the Agent view/pane of the Benchmark
Factory console.
Note: When you run a job using one or more local agents, if Agent utilization of resources on the local
machine is too high, errors could occur.
Agents View
Select View | Agent to open the Agents pane where you can view information about agents that are currently
running. The Agent view displays the status of all connected agents.
Machine Name / IP This is the name or IP address of the Benchmark Factory Console
machine to which this agent connects.
l For a local agent, this is the local Benchmark Factory Console.
l For a remote agent, this is the Benchmark Factory Console used
to run the benchmark tests.
If the agent was installed remotely through the Benchmark Factory
console, the agent is automatically configured. If installed manually, you
must configure the agent. See Running Benchmark Factory with
Multiple Agents on page 165 for more information.
Console Port Enter the port for the Benchmark Factory Console.
Max Virtual Users Use this field to specify the maximum number of virtual users that this
agent is allowed to spawn.
Error Logs / Result Files Use these fields to specify a location for storing error logs and result files generated for this agent.
4. When the agent is open, information about virtual user activity is displayed while a job is executing.
Review the following for additional information.
l Virtual Users Tab—Displays the raw data from each virtual user. The grid shows each virtual
user and its statistics. Right-click a column header to sort by that column.
l Output Tab—Displays the same information as the Messages window, including messages,
status, and results.
BMFAgent.exe
BMFAgent.exe is a non-UI agent included with Benchmark Factory. BMFAgent.exe performs the same
functionality as Agent.exe, except BMFAgent.exe has no graphical user interface. This allows you to easily
integrate BMFAgent.exe into your continuous integration or continuous testing process.
l On your local machine, the Benchmark Factory console attempts to use Agent.exe first. If Agent.exe is
not found, the console uses BMFAgent.exe.
l When a job uses a remote agent, if Benchmark Factory cannot find BMFAgent.exe on the remote
machine, Agent.exe is used on the remote machine.
l You can run multiple instances of BMFAgent.exe at the same time on the same machine.
Start BMFAgent.exe
To Start BMFAgent.exe (Windows)
1. Open the installation directory. The default installation path is
C:\Program Files\Quest Software\Benchmark Factory <version>
2. Open the bin sub-directory.
3. Double-click BMFAgent.exe.
Modify Settings
When BMFAgent.exe and Agent.exe are installed, default settings are applied, such as the machine name (IP
address) and the port number for the Benchmark Factory console to which the agent connects.
Use one of the following methods to modify the BMFAgent.exe settings:
l Use the BMFAgent.ini file located here: C:\Users\<user>\AppData\Local\Quest Software\BMF\<version>. A hypothetical sketch of this file appears after this list.
l Use the Command Prompt window.
l Open the Agent Settings dialog through the Benchmark Factory console using the following steps:
1. Start BMFAgent.exe.
2. In the console, select View | Agent to open the Agents tab.
3. Right-click the BMFAgent and select Settings.
4. In the Agent Settings dialog, use the Max Virtual Users field to specify the maximum number of virtual
users that this agent is allowed to spawn.
5. Click OK.
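For reference, the BMFAgent.ini file stores the same kind of settings that appear in the Agent Settings dialog, such as the console machine, console port, and maximum virtual users. The sketch below is hypothetical; the section and key names are illustrative only, so check the file installed on your machine for the actual names before editing it.
[Settings]                 ; hypothetical section name
ConsoleMachine=10.0.0.25   ; name or IP address of the Benchmark Factory console
ConsolePort=30100          ; port on which the console listens
MaxVirtualUsers=500        ; maximum virtual users this agent may spawn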
Create Outbound Rule on agent machine (if outbound connections are blocked)
1. Select Control Panel | System and Security | Windows Firewall.
2. Click Advanced Settings. The Windows Firewall and Advanced Security dialog opens.
l If outbound connections are blocked, then continue to create a new outbound rule.
l If outbound connections are allowed, then no action is required.
Note: In Windows Firewall, outbound connections are set to “Allow” by default.
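If you prefer to create the outbound rule from an elevated Command Prompt instead of the Windows Firewall interface, a command along the following lines can be used. The rule name is arbitrary, and the remote port should match the console port configured for your environment (30100 is the default).
netsh advfirewall firewall add rule name="Benchmark Factory Console" dir=out action=allow protocol=TCP remoteport=30100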
Troubleshooting
After enabling WMI and configuring inbound/outbound rules, if you are denied access when attempting to install
a remote agent, try the following.
1. Open the Registry Editor.
2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System.
3. Add a new DWORD (32-bit) Value.
4. Rename the key to "LocalAccountTokenFilterPolicy".
5. Give it a value of "1".
6. Close the Registry Editor.
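The same registry change can be made from an elevated Command Prompt with the standard reg.exe utility:
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
This creates (or overwrites) the LocalAccountTokenFilterPolicy value described in the steps above.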
Native
Database Alias Enter the database name.
Performance Counters
Benchmark Factory allows you to add additional performance counters to a connection. See Performance
Counters Tab on page 134 for more information.
Native
Hostname Enter the name or IP address of the server.
Port Enter the port number. The default is 3306.
Username Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Database Select the database to which you want to connect. Benchmark Factory
creates a temporary connection and displays the available databases in
the drop-down list.
ODBC
Data Source Name The name of the MySQL ODBC data source.
User Name Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Connection Name Enter a name to use to identify this connection in the My Connections
Performance Counters
Benchmark Factory allows you to add additional performance counters to a connection. See Performance
Counters Tab on page 134 for more information.
ODBC
Data Source Name The name of the ODBC data source.
User Name Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Connection Name Enter a name to use to identify this connection in the My Connections
pane.
Options Click to specify timeout and reconnect options. See Connection Timeout
and Reconnect Options.
Performance Counters
Benchmark Factory allows you to add additional performance counters to a connection. See Performance
Counters Tab on page 134 for more information.
Native tab
User/Schema Schema to which you want to connect.
Password Password for the schema to which you want to connect.
TNS or Direct tab TNS—Allows you to connect to a database using your TNS names
file.
l Databases—Allows you to connect to a database using
your TNS names file. Select a database from the list.
Direct—Allows you to connect to a database using Host, Port,
Server name or SID.
Connect as The type of connection to use when connecting to the database: Normal,
SYSDBA, or SYSOPER.
Connect Using Select the Oracle Client to use for this connection.
This specification is used by Benchmark Factory Agents running
on Windows only.
Make this the BMF default home Selecting this check box sets this as the default client for Benchmark Factory.
Connection Name Enter a name to use to identify the connection in the My
Connections pane.
ODBC tab
Data Source Name Select a data source from the drop-down list.
Click Add DSN to create a new data source.
User Name Enter the user name to use for this connection.
Password Enter the password to use for this connection.
Options Click to specify timeout and reconnect options. See Connection
Timeout and Reconnect Options.
Database Flush
Flush data buffer caches at start of each test iteration Select to clear data buffer caches between iterations.
Note: To perform this action, the Oracle database account must have certain privileges. In Oracle 10g or later, the ALTER SYSTEM privilege is required.
Flush shared pool at start of each test iteration Select to clear shared pool between iterations.
Note: To perform this action, the Oracle database account must have the ALTER SYSTEM privilege.
Note: Cached data can improve performance, so selecting one or both of these options can prevent cached
data from affecting subsequent iterations.
Performance Counters
Select the Performance Counters tab of the Connection dialog to add additional performance counters to a
connection. See Performance Counters Tab on page 134 for more information.
Clustering
Select the Clustering tab of the Connection dialog to enable clustering. See Oracle Clustering Tab
(Connections) on page 69 for more information.
Statistics
Select the Statistics tab of the Connection dialog to specify statistics collection options for this connection. See
Oracle Statistics Tab (Connections) on page 69 for more information.
6. After creating a new PostgreSQL connection, you can collect database information using the Edit
Connection dialog. See Environment Information on page 67 for more information.
ODBC Tab
Data Source Name Select a data source from the drop-down list. Click Add DSN to create a new data source.
User name Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Connection Name Enter a name to use to identify this connection in the My Connections pane.
Options Click Options to specify timeout and reconnect options. See Connection Timeout
and Reconnect Options.
Performance Counters
Select the Performance Counters tab of the Connection dialog to add additional performance counters to the
connection. See Performance Counters Tab on page 134 for more information.
Native
Server Name The name or the IP address of the server.
Database The name of the database to which you want to connect.
User Name Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Connection Name Enter a name to use to identify this connection in the My Connections
pane.
Options Click to specify timeout and reconnect options. See Connection Timeout
and Reconnect Options.
Performance Counters
Benchmark Factory allows you to add additional performance counters to a connection. See Performance
Counters Tab on page 134 for more information.
Native
Server Name Enter the name or the IP address of the server.
Click the drop-down arrow to retrieve a list of servers running SQL
Server that are currently active on the network.
Note: BFScripts have been enabled on the User Name and Password fields.
ODBC tab
Data Source Name The name of the MS SQL Server ODBC data source.
User Name Enter the user name to use for this connection.
Password Enter the password associated with the user name.
Connection Name Enter a name to use to identify this connection in the My Connections
pane.
Options Click Options to specify timeout and reconnect options. See Connection
Timeout and Reconnect Options.
Note: BFScripts have been added to the Data Source name field in the ODBC Connection dialog.
BFScripts have been enabled on the User Name and Password fields.
Database Flush
Clean data buffer and procedure caches at start of each test iteration Select this option to instruct Benchmark Factory to clear cached data between iterations. Cached data can improve performance, so selecting this option can prevent cached data from affecting subsequent iterations.
Notes:
l This option is only applicable to SQL Server 2005 or later.
l To perform this action, the SQL Server database account
must have the sysadmin fixed server role.
Performance Counters
Select the Performance Counters tab of the Connection dialog to add additional performance counters to a
connection. See Performance Counters Tab on page 134 for more information.
Edit Connections
To edit a connection, use the My Connections pane or click the Edit Connections button in the main toolbar.
Both the My Connections pane and the Edit Connections dialog provide a list of your currently-defined
connections. Use either of these interfaces to view or modify information for each connection.
To edit a connection
1. Use one of the following methods to edit a connection:
l Click in the main toolbar. Select a connection and click in the Edit
Connections dialog.
l Select View | My Connections. In the My Connections pane, select a connection and
click .
2. The Connection dialog for the selected connection opens. Use the DB Connection tab to update the
connection password or other connection information.
3. Select from among the other tabs to add or modify other properties associated with the connection. The
following tabs are available (depending on the connection type).
l Environment Information
l Performance Counters Tab
l Oracle Clustering Tab (Connections)
l Oracle Statistics Tab (Connections)
l Select View | My Connections. In the My Connections pane, use the following toolbar buttons to
manage connections.
Timeout
Time The maximum amount of time Benchmark Factory will try to log on to
the system-under-test. If this amount of time is reached, Benchmark
Factory will return an error.
Infinite timeout Prevents the logon to the system-under-test from timing out and
returning an error.
Reconnect
Enable Reconnect Enables Benchmark Factory to attempt to reconnect to the system-
under-test if the connection is lost.
Number of reconnect attempts The number of times to attempt to reconnect before aborting.
Time to wait between reconnect attempts (seconds) How long to wait before attempting to reconnect.
3. Click OK.
Environment Information
Benchmark Factory can collect and display database and host server information for a connection. The
information is displayed in the Environment tab of the Connection dialog for an existing connection and also in
the Database Under Test page of the New/Edit Job Wizard.
You can also create custom properties to add your own customized information to the connection.
l Click in the main toolbar. Select a connection and click in the Edit
Connections dialog.
l Select View | My Connections. In the My Connections pane, select a connection and
click .
2. Select the Environment tab in the Connection dialog.
3. Click Detect Environment Information.
Note: To successfully view all environment information requested by Benchmark Factory, the login
account used in the connection must have sufficient permissions. See Environment Information on
page 67 for more information.
l AVG_TIME
l BPS
l DEADLOCKS
l TOTAL_ERRORS
l MAX_TIME
l RPS
l TOTAL_BYTES
l TOTAL_ROWS
l TPS
l USERLOAD
You can add performance counters to a connection or a job.
Tip: You can specify default values for Oracle performance collection and reporting options in Edit |
Settings | Oracle.
Jobs View
The Jobs View pane displays the list of jobs. After you create and save a job, the job is displayed in the Jobs
View pane. You can also use the Jobs View pane to identify the jobs that are currently running and the jobs that
are scheduled to run.
l To edit an existing job, select the job in the Jobs View pane and click . The Edit Job Wizard
opens. To learn more about the Job Wizard, see The Job Wizards.
Test Results
To view test results
l To view a job's test results, select a job in the Jobs View pane. Test results display in the right pane. See
Benchmark Factory Console for an overview of the Benchmark Factory console.
Job Status
From the Jobs View pane, you can view job status.
The following job states are identified:
l Scheduled: All jobs currently waiting to run or scheduled to run at a future time.
l Running: Job currently running.
l Completed: All completed jobs.
Note: To save a job as a Benchmark Factory script, select the job and click Save in the Benchmark Factory
toolbar or select File | Save.
1. Click in the Benchmark Factory main toolbar. The New Job Wizard opens.
2. Select a connection. Select a connection from the drop-down list in the Database Under Test page.
4. Add Workload. After selecting a database connection, click at the bottom of the
Database Under Test page, or click Workload in the left pane.
5. Select a test. On the workload page, select the type of test to perform from the drop-down list. Then
select a test to add to the workload. To learn how to create a specific test, select from the following:
l Industry Standard Benchmark Test—These tests simulate real-world application workloads.
Select from a number of standard benchmarks included with Benchmark Factory.
l Custom Test—To create a custom Mix test, Replay test, Goal test, or SQL Scalability test, see
Custom Tests.
l Create/Delete Benchmark Objects Test—To add a step to create or delete benchmark objects,
see Add a Create/Delete Benchmark Objects Test.
l Execute External File—To add a step to execute a file, see Execute External File.
6. Click Add Test or Select Test at the bottom of the page to add the test to the workload.
7. Specify test options. Specify test options for the selected test. Select one of the links above for detailed
information about test options for each test type.
8. Job Setup. (Optional) You can specify job-level options. Click Job Setup at the bottom of the Test
Options tab, or select the Job Setup tab. See Job Setup Tab to learn more.
9. Agent Setup. After specifying options for the test you selected, set up the agents for this job.
a. Click the Agent Setup button at the bottom of the page, or click Agent in the left pane.
b. Click inside the checkbox to the left of the agent name to select it. A checkmark displays for each
selected agent. See About Agents on page 42 for more information.
Note: When you run a job using one or more local agents, if Agent utilization of resources on
the local machine is too high, errors could occur.
10. After specifying the test-level and job-level options, you can save the job, run the job, or schedule the
job. Review the following:
l To save the job without running it immediately, click . Use this option after
scheduling a job.
l To save the job to an existing job, select the Job Setup tab. In the Save Job section, select the
name of an existing job. Click .
Notes:
l After creating a job, you can save it as a Benchmark Factory script. Select the job in the Jobs View
pane and click Save in the main toolbar or select File | Save.
l To modify an existing job, right-click the job and select Edit Job.
l After creating a job, you can add tests to the workload. Right-click the job and select Edit Job.
To edit a job
6. Click to add the test to the workload. The Summary tab opens.
7. Summary tab. The Summary tab provides a summary of the job and the workload, as well as links to the
commonly edited options for this test. Click each link to navigate to the applicable tab where you can edit
that option. Options shown in red are required. Review the following for additional information:
Scale Click to change the scale factor for this test/step. In the Benchmark Scale
field, specify a scale factor. See Benchmark Scale Factor on page 154 for
more information.
Size Displays the total size of all objects in this Create Objects step. Click to
open the Scale tab where you can modify the database size or the scale
factor. See Benchmark Scale Factor on page 154 for more information.
Number of Tables (Replication test only) Displays the number of tables to create. Click to
modify the number of tables, the number of columns in a table, and the
data types to create. See Replication Table Options Tab on page 114 for more information.
8. Create Objects - More Options. To specify more options for the Create Objects step, select the Create
Objects for Test step in the left pane. Then select the Test Options tab. Review the following for
additional information:
9. Transaction Mix - More Options. To specify more options for the Transaction Mix step, select the test
Transaction Mix step in the left pane. Then select the Test Options tab. Review the following for
additional information:
10. To add another test to the workload, click Add Another Test/Step. This takes you back to the selection
page of the Workload section.
11. Job Setup. To configure job-level options, select the Job Setup tab. See Job Setup Tab to learn more
about job-level options.
12. Agent. To set up the agents for this job, click Agent in the left pane. Select the agents/computers to use
for this test. See About Agents on page 42 for more information.
13. After specifying the test-level and job-level options, you can save the job, run the job, or schedule the
job. Review the following:
l To save the job without running it immediately, click . Use this option after
scheduling a job.
l To save the job to an existing job, select the Job Setup tab. In the Save Job section, select the
name of an existing job. Click .
l The number of columns per table and the column types (data types)
l The percentage of each column type (data type) in all tables, for example, 40% INT, 20% VARCHAR
(255), etc.
l The percentage of each statement type (insert, update, delete), as well as the number of statements per
commit (transaction)
5. Click to add the test to the workload. The Summary tab opens.
6. Summary tab. The Summary tab provides a summary of the job and the workload, as well as links to the
commonly edited options for this test. Click each link to navigate to the applicable tab where you can edit
that option. Options shown in red are required. Review the following for additional information:
Scale Click to change the scale factor for this test/step. In the Benchmark Scale
field, specify a scale factor. See Benchmark Scale Factor on page 154 for
more information.
Size Displays the total size of all objects in this Create Objects step. Click to
open the Scale tab where you can modify the database size or the scale
factor. See Benchmark Scale Factor on page 154 for more information.
Number of Tables (Replication test only) Displays the number of tables to create. Click to
modify the number of tables, the number of columns in a table, and the
data types to create. See Replication Table Options Tab on page 114 for
more information.
Transactions Displays the number of transactions. Click to modify the transaction mix for
the transaction step. See Transactions Tab on page 115 for more
information.
User Load Displays the user load—the number of virtual users per test iteration. Click
to review or modify the user load. See Specify User Load on page 154 for
more information.
Length Click to modify the timing for this test. See Timing Tab on page 118 for
more information.
8. Replication Test - More Options. To specify more options for the Replication step, select the
Replication Test step in the left pane. Then select the Test Options tab. Review the following for more
information:
9. To add another test to the workload, click Add Another Test/Step. This takes you back to the selection
page of the Workload section.
10. Job Setup. To configure job-level options, select the Job Setup tab. See Job Setup Tab to learn more
about job-level options.
11. Agent. To set up the agents for this job, click Agent in the left pane. Select the agents/computers to use
for this test. See About Agents on page 42 for more information.
12. After specifying the test-level and job-level options, you can save the job, run the job, or schedule the
job. Review the following:
l To save the job without running it immediately, click . Use this option after
scheduling a job.
l To save the job to an existing job, select the Job Setup tab. In the Save Job section, select the
name of an existing job. Click .
3. Then click .
4. On the Workload page, select Capture/Replay Test from the drop-down list.
5. Then select the Capture and Replay Oracle Workload option.
8. Capture Scenario Wizard. In the Capture Scenario Wizard, enter connection information for the
database from which you want to capture a workload. For more information, see Create Oracle
Connection. Click Next when finished.
9. Select Capture Method. On the Select Capture Method page, select a capture method to use. Review
the options below. Click Next when finished.
Capture using Oracle Trace files Captures workloads using trace files written to a specified server directory. These files are captured at the session level.
Note: The Trace file capture method is not available for Oracle connections that use Real Application Clusters (RAC).
Capture using FGAC (Fine-Grained Access Control) Captures workloads using the Oracle Fine-Grained Access objects that capture activities at the database table and view levels.
Tablespace for capture table—Allows you to select the required tablespace for Fine Grained Access captures.
Note: The FGAC capture method is enabled only if the feature is available in the target database.
10. Apply Privileges. If the user does not have the required privileges to do the capture, the Apply Privileges
page opens. Enter the credentials of a DBA-type user account that can apply the necessary privileges.
Click Next when finished.
Note: To view the missing privileges necessary for this user to perform the capture process, click the
View/Save Script button and review the script.
11. Directory Settings. On the Directory Settings page, specify an Oracle server-side directory in which to
place the capture files. Also, specify the capture directory from which Benchmark Factory will replay the
files. Review the following for additional information:
Click Next when finished.
Capture Name Enter a name for the capture, or use the default. This name is used for the capture sub-directory that holds the capture files.
Database Server Directory Specify an Oracle server-side directory where Benchmark Factory should place the capture files. Specify the path as the server sees it.
Note: You can specify a network directory here. Enter the full network path to the network directory. The database service must be able to access the network directory. In a Linux environment, a local directory must be mounted to the network location (a sample mount command follows this table).
Capture Directory Specify a directory where Benchmark Factory will look for the capture files to
replay. Do one of the following:
l Manual transfer: Specify a client-side directory and then manually
transfer the capture files to this location.
l Shared directory: Specify the same Oracle server-side (or network)
directory you specified in the preceding field. However, enter the
path as the client computer sees it.
To use this method, you must first map a drive on the client computer
to the Database Server Directory (Oracle server-side or network
directory). Then, browse to and select that mapped drive.
Selecting a shared directory: The shared capture directory can be on the
database server as a local directory or on a network file server as a local
directory. Either location must be accessible by the Benchmark Factory
client computer (Windows network share, Samba, or NFS). A network file
server is the preferred location for the following reasons:
l This minimizes total network I/O required for capture and replay.
l The shared directory can be placed on an I/O subsystem with
sufficient I/O bandwidth to handle both the concurrent I/O writing and
the cumulative size of the trace and export files.
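As noted above, in a Linux environment a local directory on the database server must be mounted to the network location that holds the capture files. A minimal sketch, assuming an NFS export (the file server name and paths are placeholders):
mount -t nfs fileserver:/bmf_captures /u01/bmf_captures
You would then enter /u01/bmf_captures, the path as the database server sees it, in the Database Server Directory field.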
12. Reporting method. On the Reporting Settings page, select the performance reporting method to use.
Review the options below. Click Next when finished.
13. Capture Scope. The Capture Scope page allows you to select schemas for capture. You can select
the entire database or use the list to select specific schemas. Review the options below. Click Next
when finished.
Capture activity for entire database: Captures activity for the entire database, but not for built-in accounts.
Capture only activity upon selected owner's database objects: Captures only the data of the selected database objects. This option allows you to select individual schemas for capture.
a. To select database objects, move the desired schemas to the right pane.
b. To specify auto-included schemas, click one or both of the following buttons:
l Roles: Opens the Auto Included Roles dialog, which displays the roles that can access the objects in the schemas in the Selected Schemas column. Select which roles to include or exclude.
l Schemas: Opens the Auto Included Schemas dialog, which displays schemas that can access the objects in the Selected Schemas column. Select which related schemas to include or exclude.
Perform export as part of capture process: Select to instruct Benchmark Factory to export the objects selected on the previous page (Capture Scope). The export is performed during the capture procedure.
Note: If you selected to capture activity for the entire database, exporting the entire database can require significant time and space.
Include export of related schemas: Select to export the auto-included schemas selected on the Capture Scope page.
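To review candidate schemas before making a selection, you can list the accounts in the database with a query such as the following (this assumes DBA access; filter out built-in accounts as needed):
SELECT username FROM dba_users ORDER BY username;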
15. Filter Settings. (Optional) The Filtered Settings page allows you to add filters to exclude activity from the
capture.
16. Capture Thresholds. Benchmark Factory allows you to specify limits for CPU usage and free space
during a capture. If levels exceed the values you specify, the capture process is stopped.
Benchmark Factory displays the current values to help you determine the best thresholds to specify.
Review the options below. Click Next when finished.
Stop capture if CPU percentage exceeds: Enter a percentage. If CPU usage exceeds this value, Benchmark Factory stops the capture process.
Stop capture if disk/tablespace free space falls below: Enter a value for free space in MB. If the amount of free space falls below this level, Benchmark Factory stops the capture process.
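To check current free space yourself before choosing a threshold, a query such as the following can help (this assumes DBA access; values are shown in MB):
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM dba_free_space
GROUP BY tablespace_name
ORDER BY tablespace_name;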
17. Capture Control. Use the Capture Control page to specify when to start the capture.
You can start the capture immediately after finishing the wizard, or you can schedule the capture.
Review the options below. Click Next when finished.
18. Start the capture. The Finish page provides capture specifications. To start the capture (or enable the
schedule), click Capture.
l If you specified a client-side directory (instead of a shared directory) as the Capture Directory (the directory from which the client replays the capture), Benchmark Factory prompts you to transfer the capture files to your client-side directory now. Copy or move the trace/xml and export (DMP) files from the directory where they were generated (Database Server Directory) to the client-side Capture Directory. Place the files into the existing capture-named sub-directory. After transferring the files, click OK in the message box. Benchmark Factory then processes the necessary files and updates the project file.
Optionally, if you want to transfer the files at a later time, click Cancel.
21. Click the Click here for details links to review the export and capture processes. Close the Capture
Status window when finished. You can open the Capture Status window again from the Captures tab in
the Benchmark Factory console. In the Captures tab, right-click a capture and select View Status.
22. You can now replay the workload.
3. Then click .
6. Click .
7. Select a capture. On the Replay Workload page, select a capture from the Capture to Replay field
using one of the following methods:
l Click the browse button and navigate to the directory where the capture files are located. Select
the project (.mse) file for the capture you want to replay.
l Select one of your previously replayed captures from the drop-down list.
Benchmark Factory loads the capture details into Capture Properties fields.
8. Transfer and process files. If you did not transfer the trace or .xml capture files to the client-side
directory at the time of the capture, or you transferred the files at a later time, Benchmark Factory warns
you that the capture data must be processed.
l To process the files you transferred after the capture, click Yes in the warning message.
Benchmark Factory closes the New Job Wizard and begins processing the capture data.
l To transfer the files now, first click Yes in the warning message. Benchmark Factory then prompts
you to transfer the files. Transfer the files, and then click OK in the prompt.
When the process is finished, close the Capture Status window. Then open the New Job Wizard again
and select the project (.mse) file again.
9. Import Test. Click . The test is added to the workload and the test Summary
page displays.
10. If you exported the database objects and data required to replay the workload, Benchmark Factory
prompts you for the location of this file. Enter the path to the file location on the database server
(or network).
11. On the Summary page, you can click each of the links to go directly to an option to modify it.
12. To jump to the Test Options tab, click Test Options at the bottom of the Summary page. Review the
following for more information:
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
13. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see Quickstart: Create a New Job.
Notes:
l Database objects created by Benchmark Factory for the capture process are automatically dropped
after the capture is finished.
l You can manage your existing capture projects from the Captures tab in the Benchmark
Factory console.
Note: This feature is not available in the freeware edition of Benchmark Factory.
3. Then click .
4. On the Workload page, select Capture/Replay Test from the drop-down list.
5. Then select the Capture and Replay SQL Server Workload option.
6. Click .
7. On the Replay Workload page, click Perform New Capture. The Capture Scenario Wizard opens and
the New Job Wizard closes.
Tip: You can also create a new capture using the same settings as an existing capture
project. In the Captures tab of the Benchmark Factory console, right-click a capture and select
Repeat Capture.
Capture Name: Enter a name for the capture, or use the default. This name is used for the sub-directory where the capture files are stored in the Capture Directory you specify.
Note: Only alpha and numeric characters and the underscore (_) are permitted. The name must begin with a letter.
Database Server Directory: Specify a server-side directory where Benchmark Factory should place the capture files. Specify the path as the server sees it.
Note: You can specify a network directory here. Enter the full network path to the network directory. The database service must be able to access the network directory.
Capture Directory: Specify a directory where Benchmark Factory will look for the capture files to replay. Do one of the following:
l Manual transfer: Specify a client-side directory and then manually transfer the capture files to this location.
l Shared directory: Specify the same server-side (or network) directory you specified in the preceding field. However, enter the path as the client computer sees it.
To use this method, you must first map a drive on the client computer to the Database Server Directory (server-side or network directory). Then, browse to and select that mapped drive.
Selecting a shared directory: The shared capture directory can be on the database server as a local directory or on a network file server as a local directory. Either location must be accessible by the Benchmark Factory client computer (Windows network share, Samba, or NFS). A network file server is the preferred location for the following reasons:
l This minimizes total network I/O required for capture and replay.
l The shared directory can be placed on an I/O subsystem with sufficient I/O bandwidth to handle both the concurrent I/O writing and the cumulative size of the trace and export files.
Capture activity for entire database: Select to capture all activity for the entire database.
Capture only activity for selected databases: Select to capture activity only for the selected databases. Then select the databases from which to capture activity.
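To review the user databases available on the instance before making a selection, you can run a query such as the following (the filter on database_id simply excludes the system databases):
SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name;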
11. Backup Scope. Benchmark Factory can export/backup the database objects and data required to replay
the workload on the same data snapshot. Review the description below. Click Next when finished.
Perform backup as part of the Capture Process: Select this option to instruct Benchmark Factory to export/backup the databases selected on the previous page (Capture Scope). The backup is performed during the capture procedure.
Note: If you selected to capture activity for the entire database, exporting the entire database can require significant time and space.
12. Filter Settings. (Optional) The Filtered Settings page allows you to add filters to exclude activity from
the capture.
l To add a filter, click Add. Then specify parameters for the filter. Click Next when finished.
13. Capture Thresholds. Benchmark Factory allows you to specify limits for CPU usage and free space
during a capture. If levels exceed the values you specify, the capture process is stopped.
Benchmark Factory displays the current values to help you determine the best thresholds to specify.
Review the options below. Click Next when finished.
Note: The Capture Thresholds feature is not available for SQL Server running in a Linux
environment.
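To gauge current file sizes when choosing a free-space threshold, a query such as the following can help (sizes are stored in 8 KB pages, so the expression converts them to MB):
SELECT DB_NAME(database_id) AS database_name, name AS file_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
ORDER BY database_name, file_name;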
14. Schedule Job. You can schedule a capture or start it immediately. Review the options below. Click Next
when finished.
15. Start the capture. The Submit page provides capture specifications. To start the capture (or enable the
schedule), click Submit.
16. Capture Status. The Capture Status window opens providing export and/or capture status and details of
the process.
Immediate capture: If you selected to start the capture immediately, the Capture Status window displays
information about the capture process, such as the status of the export process and the number of
sessions captured.
l Click the Click for capture details link to view more-detailed information during the export or
capture. You can view which objects are exporting or which session/user is currently being
captured and the total number of sessions captured.
l Click Properties to review the capture description.
17. Click the Click for capture details links to review the export and capture processes. Close the Capture
Status window when finished. You can open the Capture Status window again from the Captures tab in
the Benchmark Factory console. In the Captures tab, right-click a capture and select View Status.
3. Then click .
4. On the Workload page, select Capture/Replay Test from the drop-down list.
5. Then select the Capture and Replay SQL Server Workload option.
6. Click .
8. Import Test. Click . The test is added to the workload and the test Summary
page displays.
9. On the Summary page, you can click each of the links to go directly to an option to modify it.
10. To jump to the Test Options tab, click Test Options at the bottom of the Summary page. Review the
following for more information:
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
11. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
Notes:
l Database objects created by Benchmark Factory for the capture process are automatically dropped
after the capture is finished.
l You can manage your existing capture projects from the Captures tab in the Benchmark
Factory console.
To start the Capture Scenario Wizard from the command line, see Run the Capture Wizard from the
Command Line.
Parameter Description
-?: Displays Help.
-O | -S: Specifies the database type for the capture.
l -O performs an Oracle capture: BFCapture.exe -O
l -S performs a Microsoft SQL Server capture: BFCapture.exe -S
These parameters are ignored when used with another parameter.
If you do not specify a capture database type (Oracle or SQL Server), the Capture Scenario
Wizard prompts you to select one.
Note: The parameters -V, -D, and -C cannot be used at the same time and must have a capture project file
location specified.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4'
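When you have finished generating the activity you want to capture, you can typically stop tracing for the session and locate the resulting trace file with statements such as the following (V$DIAG_INFO is available in Oracle 11g and later):
ALTER SESSION SET EVENTS '10046 trace name context off'
SELECT value FROM v$diag_info WHERE name = 'Default Trace File'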
4. Click the Select Test button. The Oracle Trace Input dialog opens.
5. Click Add Trace and browse to and select the trace file (or files).
l To add additional files, click Add Trace.
l To remove a file from the list, select the file and click Remove Trace.
6. When you finish inputting files, click Next. The Oracle Trace Activity dialog opens.
8. Click Finish.
9. If the trace import file exceeds the Benchmark Factory limit for displaying individual transactions, a confirmation dialog displays.
If you click Yes, the trace file import continues. Individual SQL is converted to .xml files. You can then edit
the .xml files in the Benchmark Factory Session Editor.
10. The test is added to the workload and the test Summary page displays.
11. On the Summary page, you can click each of the links to go directly to an option to modify it.
12. To jump to the Test Options tab, click Test Options at the bottom of the Summary page. Review the
following for more information:
13. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
l Creating a SQL 2008/2008 R2 Trace Table Using the SQL Server Profiler
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
13. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
The resulting workload contains a user scenario consisting of the ordered sequence of captured SQL
Transactions from the ODBC trace.
See Creating an ODBC Trace File for instructions on how to create an ODBC trace file.
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
10. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
9. If you selected Fixed field, the Fixed Field Column Positions page opens. Configure the
column or columns.
Note: Only the first 20 rows of the file display.
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
13. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
The resulting workload is a mixed workload containing either the most-often executed, the most time-
consuming, or the most-recently executed SQL transactions.
9. Click Finish. The test is added to the workload and the test Summary page displays.
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Specify sampling, pre-sampling, and user start-up times for the test. See Timing Tab on page 118 for more information.
Advanced tab: Specify Repository options, error handling, and database checkpoints for the test. You can also specify a file to execute at the beginning or end of each iteration. See Advanced Tab on page 119 for more information.
12. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Specify sampling, pre-sampling, and user start-up times for the test. See Timing Tab on page 118 for more information.
Advanced tab: Specify Repository options, error handling, and database checkpoints for the test. You can also specify a file to execute at the beginning or end of each iteration. See Advanced Tab on page 119 for more information.
Scalability Tests
Create SQL Scalability Test
The SQL Scalability test executes SQL statements under load, allowing users to spot potential issues not seen with a single execution. Users can run variations of a SQL statement to find the version that performs best under a load test.
Use this procedure to create a new SQL Scalability test or a Custom Scalability test.
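For example, two logically equivalent variations of the same query could be added as separate transactions and compared under the same user loads. The table and column names below are illustrative only:
-- Variation 1: correlated subquery
SELECT o.order_id FROM orders o
WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.order_id)
-- Variation 2: join with DISTINCT
SELECT DISTINCT o.order_id FROM orders o
JOIN order_lines l ON l.order_id = o.order_id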
Tab Description
Transactions tab: Add transactions for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Execute By. Select one of the following options:
l Number of executions per iteration: Each transaction is executed by each user a specified number of times (recommended).
l Execution time per iteration: Executes each transaction for the specified length of time.
User Startup: Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
6. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
10. Or you can select the Test Options tab to modify all test options. Review the following for additional
information:
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Specify sampling, pre-sampling, and user start-up times for the test. See Timing Tab on page 118 for more information.
Advanced tab: Specify Repository options, error handling, and database checkpoints for the test. You can also specify a file to execute at the beginning or end of each iteration. See Advanced Tab on page 119 for more information.
11. After specifying options for this test, you can add another test to the job, configure job setup options, save and close the job, run the job, or schedule the job. For more information about each of these steps, see Quickstart: Create a New Job.
Custom Tests
Custom tests allow you to create workloads from user-provided SQL. These tests provide flexibility for your load testing requirements. The following custom load scenarios are provided:
l Mix Test—Runs a transaction mix based upon weights for a specified time at each predetermined user
load level.
l Replay Test—Runs multiple transactions with each one running independently on a specified
number of users.
l Goal Test—Used to find maximum throughput or response time values. A transaction mix is executed
at a range of user load levels.
l Scalability Test—Compares the performance of SQL Statement variations under a workload. Each
transaction will execute individually for each user load and timing period.
2. On the Workload page, select Custom Test from the drop-down list.
3. Select the Mix test executes ... option.
4. Click the Add Test button. The test is added to the workload and the test Summary page displays.
5. On the Summary page, click the link beside Custom Mix Load Scenario to add transactions. See
Transactions Tab on page 115 for more information.
6. Then specify the weight for each transaction.
7. Or you can select the Test Options tab to modify all test options. Review the following for additional
information:
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Specify sampling, pre-sampling, and user start-up times for the test. See Timing Tab on page 118 for more information.
8. After specifying options for this test, you can add another test to the job, configure job setup options, save
and close the job, run the job, or schedule the job. For more information about each of these steps, see
Quickstart: Create a New Job.
Tab Description
Transactions tab: Modify the transactions and the transaction mix for the test. For a custom test, add transactions. See Transactions Tab on page 115 for more information.
Options tab: Enable scaling for the user scenario. See Options Tab (Capture/Replay) on page 121 for more information.
Timing tab: User Startup. Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and connect/disconnect options for the test. See Advanced Tab on page 119 for more information.
8. After specifying options for this test, you can add another test to the job, configure job setup options, save and close the job, run the job, or schedule the job. For more information about each of these steps, see Quickstart: Create a New Job.
Tab Description
Transactions tab: Add transactions for the test. See Transactions Tab on page 115 for more information.
User Load tab: Modify the number of users per test iteration. See Specify User Load on page 154 for more information.
Timing tab: Execute By. Select one of the following options:
l Number of executions per iteration: Each transaction is executed by each user a specified number of times (recommended).
l Execution time per iteration: Executes each transaction for the specified length of time.
User Startup: Select one of the following options:
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every [n] seconds: Starts a new user, then waits the [n] number of seconds before starting the next user.
Advanced tab: Specify Repository options, error handling, and database checkpoints for the test. You can also specify a file to execute at the beginning or end of each iteration. See Advanced Tab on page 119 for more information.
Create/Load objects if objects don't exist (no backup sets created): If objects do not exist, Benchmark Factory creates the objects and loads data. If you select this option, Benchmark Factory does not create backup tables.
Delete benchmark objects after completion of job: Benchmark Factory deletes objects after a job is finished.
Refresh data using backup sets: When backup sets exist, Benchmark Factory reloads data and recreates indexes. If backup sets do not exist, Benchmark Factory creates objects and loads data. Then Benchmark Factory creates the backup tables.
Recreate objects and reload data every execution: Benchmark Factory always deletes the existing objects and then recreates/reloads them for each job execution. This ensures that the dataset is always in an initialized state.
Refresh data using inserts: If data does not exist, Benchmark Factory uses bulk insert to load data.
If the following Benchmark Settings are selected,
l Scale—Click the Scale link to modify the scale factor. The Scale page opens. In the Benchmark
Scale field, select a new scale factor. See Benchmark Scale Factor on page 154 for more
information.
Note: The test is added to the top of the workload list. To move the test down (or up) in the list, right-
click the test and select Move Down (or Move Up).
9. Test Options. To modify test options for the Create Objects test/step, select the Test Options tab. Review
the following for more information:
l Scale—To modify the scale factor. See Benchmark Scale Factor on page 154 for more
information.
l Options—To modify options for object creation and retention. See Options Tab (Create Objects
Step) on page 111 for more information.
l Custom Creation SQL—To use custom SQL to create objects. See Customize SQL for Creating
Objects on page 113 for more information.
Tip: You can specify a default setting for Object/Data Retention in Edit | Settings | Benchmarks.
Distribute index creation (one index create per virtual user): Select to use multiple virtual users to create indexes at the same time (one virtual user for each index).
Tip: For IBM DB2 databases, you might encounter deadlock errors if database configuration is incorrect.
If you do not select this option, all indexes are created by a single virtual user.
Note: To specify parallel index creation for indexes, you can customize the SQL used to create indexes. See Customize SQL for Creating Objects on page 113 for more information.
Distribute Load using Benchmark Factory Agents: You can choose to use the console or agents to load benchmark data.
l Select this option to distribute data generation across multiple agent machines to speed up the standard benchmark data load.
l If you do not select this option, the console is used to load standard benchmark data.
Important Note: Using the console for large data loads could slow down database load times significantly. Use the console only for small scale factors.
Number of virtual users to perform creation/load: Specify the number of virtual users to use to load table data. The number of virtual users used per table depends on the table size.
Notes:
l Benchmark Factory recommends using a number
that is a multiple of the number of tables in the
selected standard benchmark test. Refer to
benchmark test specifications for the table count.
l You cannot specify a number less than the number
of tables in the selected standard benchmark test.
4. Select an Object/Data Retention method. Review the following for additional information:
Create/Load objects if objects don't exist (no backup sets created): If objects do not exist, Benchmark Factory creates the objects and loads data. If you select this option, Benchmark Factory does not create backup tables.
Delete benchmark objects after completion of job: Benchmark Factory deletes objects after a job is finished.
Refresh data using backup sets: When backup sets exist, Benchmark Factory reloads data and recreates indexes.
2. Select the Test Options tab and then open the Custom Creation SQL tab.
3. Select the Enable Custom Creation SQL option.
5. To save the modified SQL as an .xml file, click . Specify a file name and click Save. The SQL for the
objects you modified is saved to the file.
Note: By default, the .xml file is saved to the Data directory. To change the location of the Data
directory, go to Edit | Settings | General.
7. To save your changes, click Save/Close or specify the remaining wizard options and run the job.
8. To retain but disable the custom SQL in the Custom Creation SQL tab, clear the Enable Custom
Creation SQL checkbox.
Note: If you modify an object name, column name, or a data type, this change could prevent data/objects
from loading successfully or could cause the standard benchmark test to fail.
Number of Tables: Select the number of tables to include in this Create Objects for Replication step.
Number of Columns per Table: Specify a range. The number of columns in each table will be randomly selected (using Uniform distribution) from this range.
4. Then use the grid to specify the data types to include in the tables, as well as the proportion of each data
type. The grid initially displays the data type list and default values that are specified in the Replication
Table Structure page of the Settings dialog (Edit | Settings).
To define a new data type for this test, click Add. Then specify data type details by entering values in the
grid. Review the following for additional information:
DataType column: Select a data type from the list (click inside the column to display the drop-down list).
Weight column: Specify a weight for this data type. This value is used by Benchmark Factory to determine what percentage of the columns will be defined with this type. The weight/percentage is applied to each table (if possible) and to the database as a whole. The total weight is displayed below the grid.
5. Click Add to add another data type. Click Delete to delete the selected data type. Continue this process
until you have defined all the data types (columns) you want to include in the Replication tables.
Notes:
l Replication tables will automatically include a primary key column.
l The number of columns per table is determined using a Uniform distribution model.
l Right-click a data type in the grid to perform a copy and paste action.
Note: New user scenarios/transactions are added to the top of the list. Use the up and down arrows
to rearrange the order of items in the list.
Option Description
Add Single User Load: Specify a single user load, then click Add to add it to the Selected User Loads list. Repeat until your list is complete.
Add a Range of User Loads: Specify a range, then click Add to add the range to the Selected User Loads list.
Timing Tab
Use the Timing tab to specify timing phases associated with a test.
Advanced Tab
You can use the Advanced tab to specify the information to save in the Repository during a test run, to set error
handling properties, and to specify database checkpoints for the test.
Section Description
Save results to Repository: Select to save test information to the Repository. Then select one or more of the following options:
l Save Real-Time Counter Information: Saves real-time performance monitoring information to the Repository during the test. See Performance Counters Tab on page 134 for more information.
Note: To specify the sampling rate, go to Edit | Settings | Statistics | Real-Time counters.
Error Handling: Stop test after first error. The test is stopped when an error is reported. You can specify a default setting in Edit | Settings | Error Handling. See Error Handling Settings.
Execute the following program at the beginning of each iteration: (Available only for benchmark tests and some capture/replay tests.) Browse to and select the file to execute at the beginning of each iteration.
Note: This field accepts BFScripts.
Enforce Timeout: Select and enter a time to enforce a timeout on the file executing.
Execute the following program at the end of each iteration: (Available only for benchmark tests and some capture/replay tests.) Browse to and select the file to execute at the end of each iteration.
Note: This field accepts BFScripts.
Enforce Timeout: Select and enter a time to enforce a timeout on the file executing. If the file does not complete in the specified time, it is stopped and the job continues.
Database Checkpoints: Perform checkpoint at start of each test iteration. Initiates a database checkpoint at the beginning of a test iteration.
Perform checkpoints during each iteration: Initiates a database checkpoint during a test iteration.
Number of checkpoints: Specifies the number of checkpoints to initiate.
Enable Userload scaling (simulation): Select to enable scaling for all user scenarios in the selected test.
l Move the slider to the right to scale up the userload for all scenarios (transactions) during playback.
l You can also specify the number of users per scenario in the Transactions tab. See Transactions Tab on page 115 for more information.
Execute by time: Sets the time and length of the capture replay test.
l Or select a user scenario and click to open the list of SQL transactions. Then select Add
SQL Transactions.
4. The Add SQL Transaction dialog opens. Do one of the following:
l Or click to launch the BFScript Wizard. See BFScript Wizard on page 219 for more information.
Statement Name: Enter a name for the statement, or use the default.
Execution Method: Select an execution method for the SQL statement.
l Direct SQL Execute: Select to execute the statement directly without preparation.
l Prepare and Execute SQL: Select to prepare the SQL and execute immediately after preparation.
l Prepare SQL Only: Select to prepare SQL without executing.
l Execute already prepared SQL: These transactions run the SQL statement contained in the referenced prepared SQL when called. If the statement has not been prepared, the action generates an error.
Click to execute the SQL statement. See Run SQL Preview on page 158
for more information.
Click to add a bind variable.
l Select the Bind Parameters tab and then click or double-click
within the Bind Parameters window to add a bind variable and
value.
6. Select the Latency tab to specify latency values for the SQL transaction. See Specify Latency on page
162 for more information.
7. Click OK in the Add SQL Transaction dialog to save the SQL statement and add it to the test.
3. Select the User Scenario tab in the Add User Scenario dialog. Then click to browse to and select an
.xml file containing the SQL you want to import.
4. Click OK to add the User Scenario.
3. Under Session Details, select the SQL statement you wish to edit. The statement displays in the SQL
Statement view.
4. From the SQL tab, you can click in the upper right-hand corner to:
l Run a SQL Preview. See Run SQL Preview on page 158 for more information.
l Launch the BFScript Wizard. See BFScript Wizard on page 219 for more information.
BFScript Wizard
The Benchmark Factory scripting feature known as BFScripts allows you to insert randomized data into the load
testing process. You can use BFScripts when you add SQL transactions. See Transactions Tab on page 115 for
more information.
Script-enabled fields have a yellow background. A field has scripting capabilities if the field's right-click menu
includes the BFScript Wizard option. The BFScript Wizard is a quick and easy way to use Benchmark Factory
scripts. The BFScript Wizard provides you with a list of built-in script functions, grouped by category, from which
to select. Each script function has a short description included, and if applicable, the function parameters. See
About Scripts for an overview of BFScripts.
There are two features in Benchmark Factory that assist you when using scripting capabilities.
l BFScript Wizard
l Script Assist
BFScript Wizard
1. Use one of the following methods to open the BFScript Wizard:
l From within a script-enabled field (yellow background), right-click and select BFScript Wizard.
Script Assist
1. When entering a SQL statement, enter $BF. Script Assist automatically displays a list of scripts from
which you can select.
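For example, a script-enabled SQL field might use a script function to vary a literal value on each execution. The statement below is illustrative only; the function name and table are placeholders, so choose an actual function from the list that Script Assist or the BFScript Wizard displays:
SELECT * FROM orders WHERE order_id = $BFRand(1, 100000)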
4. In the Maximum Returned field, enter the maximum number of rows to preview, or select the All option
to preview all rows.
5. Click OK. The SQL Preview window opens.
6. Review the information.
7. Click Close.
Specify Latency
Latency is delay added to the execution of a transaction to control how fast transactions are submitted to the
system-under-test. Latency is used to either make the transaction execution rate more like real-world executions
or control the transaction rate. This delay can be added to the beginning and/or end of a transaction execution.
To specify latency values for an individual transaction, edit the transaction using the Transactions tab of the
New/Edit Job Wizard. You can do this either at the time you add transactions/scenarios to a test or any time
after the job is created.
You can also specify latency for all the child transactions of a test or user scenario at one time. See Replace
Child Latencies on page 159 for more information.
Benchmark Factory allows you to set default latency values for the transactions you add. See Settings | Latency
Settings in the User Guide for more information.
Latency Definitions
No Delay
No Delay means that transactions execute as fast as possible. As soon as one transaction is processed, the
next transaction is issued against the server. In the case of a mixed workload test, each virtual user issues
transactions as fast as possible.
The No Delay option is used when the goal of the test is to stress the system to its limits, without concern for
accurately simulating users. With No Delay specified, a relatively low number of users can stress the system to
its limits. However, there is no easy way to correlate N virtual users running with no delay to some number of
real users.
Think Time
Think Time is used to simulate the amount of time spent thinking about the results of the previous transaction.
This could be time spent performing analysis on the results of a database query.
Specifying Think Time inserts a delay (either fixed or variable) after each transaction executes.
Interarrival Time
Interarrival time is the time between two successive transactions arriving at the server. It is used to determine the
average transaction rate as seen by the server (i.e., the rate at which transactions arrive at the server).
When you specify Interarrival time, Benchmark Factory is instructed to ensure that the transactions arrive at the
server at the specified interval, regardless of how long a transaction actually takes to execute.
If a transaction takes longer than the Interarrival Time, the next interval is measured from the arrival of the next
transaction.
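For example, with a 2000 ms Interarrival Time, a transaction that completes in 1200 ms is followed by an 800 ms wait before the next transaction is issued; a transaction that takes 2600 ms is followed immediately by the next one, and the next interval is measured from that point.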
Using Absolute
Use Absolute when you want a fixed value for delay. For example, if a 2000 ms delay is specified for Keying
Time and a 3000 ms delay is specified for Think Time, when a transaction executes, Benchmark Factory waits
2000 ms, then starts the transaction, and then waits an additional 3000 ms before deciding which transaction to
execute next.
If Interarrival Time is used with an Absolute delay of 2000 ms, Benchmark Factory marks the time, executes the
transaction, and waits until two seconds has elapsed from the marked time (assuming the transaction finishes in
less than two seconds) before determining which transaction to execute next.
Distribution Models
Review the following distribution models provided by Benchmark Factory.
Uniform Distribution
Selecting a Uniform delay instructs Benchmark Factory that random delay should be chosen, with an equal
probability of being the minimum value, the maximum value, or any value in between. Uniform delays are
chosen when it is suspected that the delay is highly random within a range or a minimal amount of statistical
analysis has been performed to determine how the actual users react.
Suppose a uniform Keying Time is selected with a minimum value of 1000 ms and a maximum value of 1500
ms. If the transaction is executed more than 500 times, there is a high probability that each possible delay has
been selected at least once. With the other delay types, this is not the case.
If 2000 ms to 2500 ms uniform delay is set for Interarrival time, the tester essentially is setting the test so that a
server sees the transaction every 2 to 2.5 seconds, instead of exactly two seconds as in the Absolute delay time.
Normal Distribution
Normal distributions differ from Uniform delays in that most of the delays chosen by Benchmark Factory will be
close to the average, but can vary by as much as ±10% of the mean. While a Uniform delay is used when users
have latencies within equal likelihood of being anywhere between two values, Normal distributions are chosen
when all users fall within a range, but most of the modeled users have a latency close to the average latency.
Poisson Distribution
A Poisson distribution is very similar to the Normal distribution and can be used in most places where a Normal distribution delay could be used. The biggest difference between a Normal distribution and a Poisson distribution is that Poisson selects discrete values.
l In the Jobs View pane, right-click a test and select Replace Child Latencies.
3. In the Replace Child Latencies dialog, modify latency options. See Specify Latency on page 162 for more information.
4. Click OK to save your changes and close the dialog. The changes are applied to all the child transactions of the transaction mix. The changes are not inherited by the grandchild transactions. For example, if the transaction mix contains a user scenario, the latency values for the individual transactions in the user scenario remain unchanged. To change the latency values for transactions in the user scenario, right-click the user scenario and select Replace Child Latencies.
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even
intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every: Starts a new user, then waits the entered number of seconds before starting
the next user.
To schedule a job
1. In the New/Edit Job Wizard, select a test under Workload.
4. Select one or more notification types. Review the following for additional information:
Operator E-mail: Sends an email containing a file attachment summarizing the results of a job.
Pager E-mail: Sends notification of job completion.
Net-Send Operator: Sends a network message to a specific machine notifying that a job is complete.
l BPS
l DEADLOCKS
l TOTAL_ERRORS
l MAX_TIME
l RPS
l TOTAL_BYTES
l TOTAL_ROWS
l TPS
l USERLOAD
You can add performance counters to a connection or a job.
3. Click Add.
The new global variable displays.
6. Click Replace.
To copy a workload test to a Replay Test, Mix Test, Goal Test, or Scalability test
1. Right-click a test in the Jobs View pane or in the New/Edit Job Wizard. A drop-down displays.
2. Select the desired type of test you want to copy to.
3. The test is created and displays in the Jobs View or New/Edit Job Wizard.
Provided Benchmarks
Benchmark Factory provides the following benchmark tests:
AS3AP Benchmark
The AS3AP benchmark is an American National Standards Institute (ANSI) Structured Query Language (SQL)
relational database benchmark. The AS3AP benchmark provides the following features:
l Tests database processing power
l Built-in scalability and portability that tests a broad range of database systems
Best Practices
Do not load-test against a production server if possible. Load-testing and benchmarking on a production server
significantly degrades performance. In some cases, load-testing can cause a server to fail. However, if testing
against a production server, take the following precautions:
l Perform the testing when no other users are on the system and no automated processes are running. Users and automated processes can adversely affect testing results.
l Have a recovery plan and back up all data prior to testing.
l Determine how long it will take to restore the production server if it goes down during load-testing.
l Perform manual testing. Manual testing ensures that no unexpected outside activity takes place during the testing process.
Scaling Factor
The AS3AP benchmark scales by a factor of 10.
Scaling Factor
The Scalable Hardware benchmark has a scaling factor of one.
Best Practices
TPC-B Benchmark
l Overview
l Certification of Transaction Processing Council (TPC) Testing Results
l Best Practices
l Scaling Factor
To learn how to create a TPC-B benchmark test in Benchmark Factory, see Create Industry Standard
Benchmark Test.
Overview
The Transaction Processing Council is an organization that establishes transaction processing and database
benchmark standards. Find a complete overview and detailed explanation of the TPC-B Benchmark, at:
http://www.tpc.org/tpcb/default.asp
Best Practices
The following provides best practices for the TPC-B Benchmark.
History Tables
The TPC-B benchmark is made up of only one transaction that updates three tables and inserts a record into a
history table. Inserting one record into one history table limits testing performance. The Benchmark Factory
properties page allows the user to set the number of history tables to create during a test. The best ratio of
history tables to virtual users is based on database configuration and hardware. The number of history tables to
use is determined by the tester.
Scaling Factor
The TPC-B benchmark scales by a factor of one.
TPC-C Benchmark
l Overview
l TPC-C Tables
l Certification of Transaction Processing Council (TPC) Testing Results
l Best Practices
To learn how to create a TPC-C benchmark test in Benchmark Factory, see Create Industry Standard
Benchmark Test.
Overview
Find a detailed overview of the TPC-C Benchmark at: http://www.tpc.org/tpcc/default.asp.
The TPC-C benchmark is an online transaction processing benchmark that simulates environments in which a number of terminal operators send transactions to a database. This benchmark is focused on the concept of an order-entry environment with transactions that include orders, payment recording, order status, and stock level monitoring. This benchmark portrays the activities of a wholesale supplier. However, the TPC-C is not limited to one particular business segment. It can represent numerous categories of businesses that sell or distribute products and services.
The TPC-C benchmark simulates a wholesale parts dealer operating out of warehouses. This benchmark scales as a company, in theory, expands its business or number of facilities. As the TPC-C benchmark scales, so does the number of components in the benchmark, for example, the sales districts and customers.
TPC-C Tables
The scale factor determines the amount of information initially loaded into the benchmark tables. For the TPC-C
benchmark, each scale factor represents one warehouse as per TPC-C specification. The TPC-C benchmark
involves a mix of five concurrent transactions of different types and complexity. The database is comprised of
nine tables with a wide range of records.
A maximum of 10 users should be run against each warehouse. For example, for user loads of 1, 5, and 10, set the scale to 1. If using other user load values, change the scale factor accordingly.
The TPC-C database consists of the following tables:
l Order_Line
l Item
l Stock
Best Practices
The following provides best practices for the TPC-C Benchmark.
TPC-D Benchmark
l Overview
l Certification of Transaction Processing Council (TPC) Testing Results
l Best Practices
l Scaling Factor
To learn how to create a TPC-D benchmark test in Benchmark Factory, see Create Industry Standard
Benchmark Test.
Overview
The Transaction Processing Council is an organization that establishes transaction processing and database
benchmark standards. Find a complete overview and detailed explanation of the TPC-D Benchmark at:
http://www.tpc.org/tpcd/default.asp
Best Practices
The following provides best practices for the TPC-D Benchmark.
Scaling Factor
The TPC-D benchmark scales by the following factors:
l 0.10
l 1.00
l 10.00
l 30.00
l 100.00
l 300.00
TPC-E Benchmark
l Overview
l Certification of Transaction Processing Council (TPC) Testing Results
l Best Practices
l Scaling Factor
To learn how to create a TPC-E benchmark test in Benchmark Factory, see Create Industry Standard
Benchmark Test.
Overview
The Transaction Processing Council is an organization that establishes transaction processing and database
benchmark standards. Find a complete overview and detailed explanation of the TPC-E Benchmark at:
http://www.tpc.org/tpce/default.asp
Best Practices
The following provides best practices for the TPC-E Benchmark.
Scaling Factor
The TPC-E benchmark scales by a factor of 500.
TPC-H Benchmark
Review the following for information about the TPC-H benchmark.
l Overview
l Certification of Transaction Processing Council Testing Results
l Best Practices
l Stream Test
To learn how to create a TPC-H benchmark test in Benchmark Factory, see Create Industry Standard
Benchmark Test.
Overview
The Transaction Processing Council is an organization that establishes transaction processing and database
benchmark standards. Find a complete overview and detailed explanation of the TPC-H Benchmark at:
http://www.tpc.org/tpch/default.asp.
Best Practices
The following provides best practices for the TPC-H Benchmark.
Stream Test
An option when creating a TPC-H workload is to include the TPC-H Stream Test, which is the multi-user version
of the Power Test. Per the specification, the Stream Test should maintain the defined relationship between the scale factor and the number of streams.
Run Reports
Benchmark Factory Run Reports is a separate executable that provides a comprehensive and detailed
collection of database load testing results. With Benchmark Factory you can drill down into a database to view a
wide array of information and statistics that gives you accurate insight into database performance. Run Reports
Viewer allows you to access Benchmark Factory load testing results.
Note: Three instances of Run Reports can be viewed at one time.
l Transaction Time
l Bytes/Second (BPS)
l Rows/Second (RPS)
l Total Bytes
l Total Errors
l Total Rows
l Response Time
Create/Load objects if objects don't exist (no backup sets created): If objects do not exist, Benchmark Factory creates the objects and loads data. If you select this option, Benchmark Factory does not create backup tables.
Delete benchmark objects after completion of job: Benchmark Factory deletes objects after a job is finished.
Refresh data using backup sets: When backup sets exist, Benchmark Factory reloads data and recreates indexes. If backup sets do not exist, Benchmark Factory creates objects and loads data. Then Benchmark Factory creates the backup tables.
Recreate objects and reload data every execution: Benchmark Factory always deletes the existing objects and then recreates/reloads them for each job execution. This ensures that the dataset is always in an initialized state.
Refresh data using inserts: If data does not exist, Benchmark Factory uses bulk insert to load data.
If the following Benchmark Settings are selected,
9. Test Options. To modify test options for the Create Objects test/step, select the Test Options tab. Review
the following for more information:
l Scale—To modify the scale factor. See Benchmark Scale Factor on page 154 for more
information.
l Options—To modify options for object creation and retention. See Options Tab (Create Objects
Step) on page 111 for more information.
You can adjust the Benchmark Scale factor when creating a new Industry Standard Benchmark Test or when
adding a Create Benchmark Objects step.
3. Add user loads individually, or specify a range to allow Benchmark Factory to calculate the user load list
automatically. Review the following for more information.
Option Description
Add Single User Load: Specify a single user load, then click Add to add it to the Selected User Loads list. Repeat until your list is complete.
Add a Range of User Loads: Specify a range, then click Add to add the range to the Selected User Loads list.
BFScript Wizard
The Benchmark Factory scripting feature known as BFScripts allows you to insert randomized data into the load
testing process. You can use BFScripts when you add SQL transactions. See Transactions Tab on page 115 for
more information.
Script-enabled fields have a yellow background. A field has scripting capabilities if the field's right-click menu
includes the BFScript Wizard option. The BFScript Wizard is a quick and easy way to use Benchmark Factory
scripts. The BFScript Wizard provides you with a list of built-in script functions, grouped by category, from which
to select. Each script function has a short description included, and if applicable, the function parameters. See
About Scripts for an overview of BFScripts.
There are two features in Benchmark Factory that assist you when using scripting capabilities.
l BFScript Wizard
l Script Assist
BFScript Wizard
1. Use one of the following methods to open the BFScript Wizard:
l From within a script-enabled field (yellow background), right-click and select BFScript Wizard.
Script Assist
1. When entering a SQL statement, enter $BF. Script Assist automatically displays a list of scripts from
which you can select.
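For example, a script-enabled SQL field might contain an INSERT statement that draws on the name and address functions listed under About Scripts, so that each execution generates a different row. The table and column names, and the quoting of the generated values, are illustrative only:

INSERT INTO customer (first_name, last_name, city, zip)
VALUES ('$BFFirstName()', '$BFLastName()', '$BFCity()', '$BFZipCode()')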
To copy a workload test to a Replay Test, Mix Test, Goal Test, or Scalability test
1. Right-click a test in the Jobs View pane or in the New/Edit Job Wizard. A drop-down displays.
2. In the Replace Child Latencies dialog, modify latency options. See Specify Latency on page 162 for
more information.
4. Click OK to save your changes and close the dialog. The changes are applied to all the child
transactions of the transaction mix. The changes are not inherited by the grandchild transactions. For
example, if the transaction mix contains a user scenario, the latency values for the individual
transactions in the user scenario remain unchanged. To change the latency values for transactions in
the user scenario, right-click the user scenario and select Replace Child Latencies.
To schedule a job
1. In the New/Edit Job Wizard, select a test under Workload.
Specify Latency
Latency is delay added to the execution of a transaction to control how fast transactions are submitted to the
system-under-test. Latency is used to make the transaction execution rate more like real-world executions.
Latency Definitions
No Delay
No Delay means that transactions execute as fast as possible. As soon as one transaction is processed, the
next transaction is issued against the server. In the case of a mixed workload test, each virtual user issues
transactions as fast as possible.
The No Delay option is used when the goal of the test is to stress the system to its limits, without concern for
accurately simulating users. With No Delay specified, a relatively low number of users can stress the system to
its limits. However, there is no easy way to correlate N virtual users running with no delay to some number of
real users.
Keying Time
Keying Time is used to simulate the amount of time spent performing data entry (entering information) before
executing a transaction. In many cases, Keying Time is used with Think Time to provide a delay both before and
after a transaction executes.
Specifying a Keying Time inserts a delay (either fixed or variable) before each transaction execution.
Think Time
Think Time is used to simulate the amount of time spent thinking about the results of the previous transaction.
This could be time spent performing analysis on the results of a database query.
Specifying Think Time inserts a delay (either fixed or variable) after each transaction executes.
l Start all users as quickly as possible: Starts all users immediately after a test begins.
l Start all users at even intervals for: Sets the amount of time in which to start all users at even
intervals. The interval duration is equal to this value divided by the number of users.
l Start a new user every: Starts a new user, then waits the entered number of seconds before starting
the next user.
Each Benchmark Factory Agent must be configured with the address of the Benchmark Factory Console. Each
Agent sends load testing results back to the Benchmark Factory Console.
If you use only the agent installed locally on the console machine, make sure your local agent is configured with
the IP address (name) of your local machine.
4. Click OK.
5. Repeat this procedure for each Agent machine.
4. As the job runs, all connected Agents will display in the Agent view/pane of the Benchmark
Factory console.
Note: When you run a job using one or more local agents, if Agent utilization of resources on the local
machine is too high, errors could occur.
Overview
The SQL Scalability test allows you to execute SQL statements, letting users spot potential issues not seen with
a single execution. Users can run variations of a SQL statement generated by SQL Tuning in order to find the
SQL that will perform the best under a load test.
Creating a SQL Scalability load scenario requires the following steps.
1. Creating the SQL tuning connection
2. Entering the desired SQL statement
3. Running the statement using the Benchmark Factory SQL Scalability testing
Running the SQL statement using Benchmark Factory SQL Scalability Testing
1. Click the Benchmark Factory drop-down icon and select the desired option. Three options are provided:
l Test for Scalability-Tests the currently displayed SQL.
l Test All for Scalability-Tests all SQL statements.
l Test Selected for Scalability-Tests the selected SQL statements.
2. The Benchmark Factory SQL Scalability dialog displays. Click Next. The Measurement Interval
dialog displays.
3. Enter the desired user load.
4. Click Next. The Iteration Length dialog displays.
5. Enter the desired iteration length or number of executions per iteration.
6. Click Next. The Real World Latencies dialog displays.
7. Select the desired latency.
8. Click Next. The connection information dialog displays.
Overview Tab
The Overview tab provides transactions-per-second testing results for individual user loads and iterations.
l Errors
l Average Transaction Time
l Minimum Transaction Time
l Maximum Transaction Time
Real-Time
Real-Time Statistics provides real-time graphs and raw data. This data allows you to spot system-under-test
issues that may be affecting server performance. Right-clicking inside the graph displays a drop-down that
allows you to change graph settings and views.
About Settings
Use the Settings dialog to specify or view the default settings Benchmark Factory uses when you create a new
job. Changes to these settings affect only new jobs, not existing jobs.
l Agent Settings
l Oracle Settings
l SQL Server Settings
l Execute File Settings
General Settings
Use the General tab of the Settings dialog to specify workplace settings. In addition, you can define the location
for error logs and scripts.
TCP/IP Settings: Console TCP/IP Port—Specify a port for the Benchmark Factory console if different than the default. The default setting is port 4568.
Note: Restart Benchmark Factory to apply changes.
Refresh statistics after benchmark load: Select to instruct Benchmark Factory to refresh statistics after loading benchmark data.
Default = selected
Check scale factor before running benchmark test: Instruct Benchmark Factory to check the Benchmark Scale factor of the existing tables against the new Benchmark Scale requirement before executing a test. To skip the scale checking process, do not select this checkbox.
Default = selected
3. Select a default setting for the Object/Data Retention method. The default setting applies to new Create Objects steps.
Create/Load objects if objects don't exist (no backup sets created): If objects do not exist, Benchmark Factory creates the objects and loads data. If you select this option, Benchmark Factory does not create backup tables.
Delete benchmark objects after completion of job: Benchmark Factory deletes objects after a job is finished.
Refresh data using backup sets: When backup sets exist, Benchmark Factory reloads data and recreates indexes. If backup sets do not exist, Benchmark Factory creates objects and loads data. Then Benchmark Factory creates the backup tables.
Recreate objects and reload data every execution: Benchmark Factory always deletes the existing objects and then recreates/reloads for each job execution. This allows you to ensure that the dataset is always in initialized status.
Refresh data using inserts: If data does not exist, Benchmark Factory uses bulk insert to load data. If the following Benchmark Settings are selected,
Database Size: Specify a database size. The Benchmark Scale readjusts according to the database size you specify.
Benchmark Scale: Specify a Benchmark Scale factor to be used to scale up table sizes and increase data. See Benchmark Scale Factor on page 154 for more information. The database size readjusts according to the scale factor you specify. After adjusting benchmark scale (or database size), review the estimates for individual and total table sizes in the Object Details grid.
Show Empty Tables: Select to display any tables that will be created but not populated with data.
Number of Tables: Specify a default value for the number of tables to include in a new Create Objects for Replication step.
Number of Columns per Table: Specify a default range for the number of columns to include in each table in a new Create Objects for Replication step.
Add: Click to add a column. Then define the column details by entering values in the grid. See Replication Table Options Tab on page 114 for more information.
Ave. Number of Statements per Commit: Specify a default value for the number of statements to include in a commit.
Add a Range of User Loads: Specify a range, then click Add to add the range to the selected user loads.
Virtual Users
Benchmark Factory comes with 100 virtual users by default. See Add Virtual Users on page 23 for more
information about adding virtual users.
Latency Settings
You can use the Latency page of the Settings dialog to specify default values for latency. Latency is delay
added to the execution of a transaction to control how fast transactions are submitted to the system-under-
test. Use Latency to model real-world user interactions.
The default latency settings apply to transactions you add through the Transactions tab.
4. Select Warning on interarrival time overrun to display a warning message if a transaction runs longer
than the interarrival time. The warning message is displayed in the Output window for the agent.
Group Description
Error Handling
Stop test after first error: Default setting for new jobs. Terminates the test when a server error occurs.
Repository Settings
Note: If you create a new repository in Benchmark Factory 5.5 or later, earlier versions of Benchmark
Factory will not work against this repository.
The repository is a database where all of the test results are stored. Benchmark Factory inserts test results into
the repository and provides an easy way to access all test results data.
By default, the Repository is a SQLite database that resides on the same machine as Benchmark Factory. The
Repository can reside on another database server if required. To change the database, select the Data Source
Name of the ODBC connection for the new database. To migrate data from one database to another, click Data
Migration to open the Data Migration Wizard.
Note: By default in Benchmark Factory 7.1.1 or earlier, a MySQL database is created and used as the
Repository, unless you selected the SQLite option during installation. In Benchmark Factory 7.2 or later, by
default a SQLite database is created and used as the Repository.
If you plan to store a large amount of test data in the repository, you might want to consider using a more robust
database than SQLite.
The Repository Settings page allows you to edit the DSN, perform ODBC administration, and test the
connection. Benchmark Factory also provides a Repository Manager and Data Repository Migration wizard to
assist you with other repository management functions.
Note: If the database structure does not exist on the selected database, Benchmark Factory prompts you to
create the structure.
Data Source Name: Data Source name of the ODBC connection used to connect to the repository database.
User Name: The User Name used to log into the selected database.
Statistics Settings
You can use this page of the Settings dialog to specify default values for the statistics collection options.
Note: If you modify the default settings in the Settings dialog, the changes apply to new jobs only, not to
existing jobs.
Save results to Repository: Select to save test information to the Repository. Then select one or more of the following options:
l Save Real-Time Counter Information—Saves real-time performance monitoring information to the Repository during the test. See Performance Counters Tab on page 134 for more information.
Note: To specify sampling rate, go to Edit | Settings | Statistics | Real-Time counters.
Agent Settings
Use this page of the Settings dialog to do the following:
Setup New User Agent: Click to set up a new agent or to install a remote agent on Windows or Linux.
l To learn how to set up an agent, see Set Up New User Agent.
Tips:
l In the New/Edit Job Wizard, select Agent in the left pane of the wizard to access agent options for
the selected job. You can select agents or set up new agents from this page of the wizard.
l To open the Agent console, go to Program Files\Quest Software\Benchmark Factory
<version>\bin and double-click Agent.exe. See The Benchmark Factory Agent Console on page 50
for more information.
Statspack Options
Perform Statspack snapshot during each iteration: Select to use Oracle's "Stats Pack" utility to collect statistics.
l Number of snapshots—Specify the number of snapshots.
Note: A valid license is required to use the optional Oracle Enterprise Manager (OEM)
Diagnostics Pack.
Database Flush
Flush data buffer caches at start of each test iteration: Select to clear data buffer caches between iterations.
Note: To perform this action, the Oracle database account must have certain privileges. In Oracle 10g or later, the ALTER SYSTEM privilege is required.
Flush shared pool at start of each test iteration: Select to clear shared pool between iterations.
Note: To perform this action, the Oracle database account must have the ALTER SYSTEM privilege.
Note: Cached data can improve performance, so selecting one or both of these options can prevent
cached data from affecting subsequent iterations.
Tip: You can specify these same Oracle settings for each individual Oracle connection. See Create Oracle
Connection and Oracle Statistics Tab (Connections) for more information.
Database Flush
Clean data buffer and procedure caches at start of each test iteration: Select this option to instruct Benchmark Factory to clear cached data between iterations. Cached data can improve performance, so selecting this option can prevent cached data from affecting subsequent iterations.
Notes:
l This option is only applicable to SQL Server 2005 or later.
l To perform this action, the SQL Server database account must have the sysadmin fixed server role.
Tip: You can specify this option for each individual SQL Server connection. See Create SQL Server
Connection on page 63 for more information.
Iteration Overruns
Iteration overruns occur at the end of an iteration to allow time for all transactions submitted within the test
iteration cycle to complete, so that all transaction statistics can be collected. For example, an agent may execute
a transaction during the last five seconds of a test iteration; if that transaction takes 15 seconds to complete, an
iteration overrun of 10 seconds occurs.
2. Select Click here to Add Data points. The Add Data points dialog displays.
3. To change graph views, right-click. See Change Graph Views for more information.
2. From the data points Vs. Userload drop-down list, select the data points to view.
3. To change a graph view, right-click a Benchmark Factory graph to display a drop-down list that allows
you to customize graph settings. See Change Graph Views on page 255 for more information.
Graph Legend
Toggling to Graph Legend displays a legend on the side of the graph.
Print
Choosing Print displays the Print Dialog.
Load Configuration
Benchmark Factory graphs allow you to save graph configurations.
Save Configuration
Saves a graph configuration.
Set as Default
Sets a configured graph as default.
Clear Configuration
Clears a graph configuration.
Click to export test results to Word from one or more selected runs.
Click to export test results to Excel from one or more selected runs.
Click to export the selected run's test results as a zip file. The zip file
contains an XML file and other files required to reproduce the test
results.
Click to import test results from a zip file exported from Benchmark
Factory.
Compare Results
You can use the Compare Results tab to compare test results between selected test runs, or to compare results
to the baseline test run.
To compare results
1. Select a job in the Jobs View pane.
l Transaction Time
l Bytes/Second (BPS)
l Rows/Second (RPS)
l Total Bytes
l Total Errors
l Total Rows
l Response Time
Toolbar
Toggling to Toolbar displays the graph toolbar.
Load Configuration
Benchmark Factory graphs allow you to save graph configurations.
Save Configuration
Saves a graph configuration.
Set as Default
Sets a configured graph as default.
Clear Configuration
Clears a graph configuration.
See Using Benchmark Factory Run Reports on page 201 for more information.
When Benchmark Factory Run Reports opens, the Results Summary displays. Click Go Back to return to the
previous screen.
Results Summary
The Results Summary provides graphs and user load statistics for the selected test. The following graphs and
tables are provided:
l Results Summary Graph
l Other Results Summary Graphs
l Using Benchmark Factory Run Reports
Results Summary Graph
Note: Not all tests provide the Results Summary graph.
This is a customizable graph that allows you to view selected data points.
4. Click OK.
Userload
Understanding how user loads affect the performance of a database is essential to end user satisfaction. The
Userload graph plots user load against:
l Response Time
l Total Bytes
l Total Errors
l Total Rows
Reviewing these datapoints allows you to fully understand how "real-world" userloads affect database
performance.
Realtime Summary
The Realtime Statistics graph allows you to view what actually happened during a load test. You can plot
userload against:
l Transaction/Second
l Total Rows
l Errors
Realtime Detail
The Realtime Detail graph allows you to view what actually happened during the running of a load test. This
allows you to view the actual timing events. From the Realtime Detail graph you can view:
l Average Response Time
l Average Time
l Bytes/ Second
l Deadlocks
l Maximum Response Time
l Maximum Time
l Minimum Response Time
l Minimum Time
l Rows/Second
From this table you can drill down to view userload statistics.
2. From this view, you can drill down further by clicking the required User Scenario name.
Workload
The Workload view provides testing details.
Testbed
The Testbed view shows data on agent configuration and processes during the running of a job.
Clicking Details displays a table with machine and operating system details.
2. Click the Show Test Results icon in the upper right corner of the dialog. Run Reports displays.
4. Click on Details for Workload, Testbed, or Database Under Test to drill down on testing results.
3. Enter BMFRunHistory.exe -x 34 88 108. Benchmark Factory testing results with Run IDs of 34, 88, and
108 export to Excel.
Term Definition
Bytes: The number of bytes of data received from a SQL statement.
Bytes Per Second: The number of bytes processed per second over the sampling period. This is the Total Bytes divided by the sampling period in seconds.
Response Time: The time from when the SQL is sent until the server responds.
Retrieval Time: The time from when the server responds to a SQL statement until the last byte of data/results is obtained.
Rows: The number of rows received from a SQL statement.
Rows Per Second (RPS): The number of rows received per second over the sampling period. Similar to the above.
Transactions Per Second (TPS): The transactions, or SQL statements, processed by the server per second. A transaction in Benchmark Factory can be more than a single SQL statement. For example, the TPC-C New Order transaction inserts a new order by inserting one record into the new order table and 5 to 7 items into the orderline table.
Transaction Time (sometimes listed as just Time): The sum of the Response and Retrieval times.
The metrics listed above might also be expressed in test results using the following:
l Average–The average of all recorded values for the statistic over the sampling period.
l Minimum–The minimum value the statistic obtained over the sampling period.
l Maximum–The maximum value the statistic obtained over the sampling period.
l 90th Percentile–Usually associated with a timing statistic. This is the value below which 90 percent
of all values recorded for the statistic fell.
About Scripts
Benchmark Factory provides scripting capabilities known as BFScripts. This feature allows you to customize and
randomize the load testing process by using scripts and a number of built-in functions.
The built-in functions are formulas that take one or more values (arguments), perform an operation, and return a
value that simulates real-world user activity. These functions can be used alone or as building blocks for
creating complex user activity. Randomized data is important when attempting to simulate real-world user
activity because data that is random prevents a server from using data stored in its cache.
In the Benchmark Factory console, fields with a yellow background allow you to insert BFScripts. To learn how
to use scripts, see BFScript Wizard.
The following is a list of available scripts/functions:
BFScript Wizard
The Benchmark Factory scripting feature known as BFScripts allows you to insert randomized data into the load
testing process. You can use BFScripts when you add SQL transactions. See Transactions Tab on page 115 for
more information.
Script-enabled fields have a yellow background. A field has scripting capabilities if the field's right-click menu
includes the BFScript Wizard option. The BFScript Wizard is a quick and easy way to use Benchmark Factory
scripts. The BFScript Wizard provides you with a list of built-in script functions, grouped by category, from which
to select. Each script function has a short description included, and if applicable, the function parameters. See
About Scripts for an overview of BFScripts.
BFScript Wizard
1. Use one of the following methods to open the BFScript Wizard:
l From within a script-enabled field (yellow background), right-click and select BFScript Wizard.
Script Assist
1. When entering a SQL statement, enter $BF. Script Assist automatically displays a list of scripts from
which you can select.
Parameters: N/A
Syntax: $BFCreditCardExp()
$BFCurrentDate
Description: Allows you to change the date format used to populate date fields.
Parameters: Format string-The mix of static text and variables as needed.
Variables Definition
%a Abbreviated weekday name
%A Full weekday name
%b Abbreviated month name
%B Full month name
%c Date and time representing appropriate locale
%d Day of month as decimal number (01-31)
%H Hour in 24-hour format (00-23)
%I Hour in 12 hour format (01-12)
%j Day of year as decimal number (001-366)
%m Month as decimal number (01-12)
%M Minute as decimal number (00-59)
%p Current locale's A.M./P.M indicator for 12 hour clock
%S Second as decimal number (00-59)
%U Week of year as decimal number, with Sunday as first day of the week (00-
53)
%w Weekday as decimal number (0-6; Sunday is 0)
%W Week of year as decimal number, with Monday as first day of the week (00-
53)
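For illustration, a format string built from the variables above could look like the following. The call form mirrors the other script functions and the returned values are examples only:
$BFCurrentDate("%Y-%m-%d") ; might return "2019-05-14"
$BFCurrentDate("%A, %B %d") ; might return "Tuesday, May 14"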
$BFCurrentDateTime
Description: Allows you to change the date/time format used to populate date fields.
Parameters: Format string-The mix of static text and variables as needed.
Variables Definition
%a Abbreviated weekday name
%A Full weekday name
%b Abbreviated month name
%B Full month name
%c Date and time representing appropriate locale
%d Day of month as decimal number (01-31)
%H Hour in 24-hour format (00-23)
%I Hour in 12 hour format (01-12)
%j Day of year as decimal number (001-366)
%m Month as decimal number (01-12)
%M Minute as decimal number (00-59)
%p Current locale's A.M./P.M indicator for 12 hour clock
%S Second as decimal number (00-59)
%U Week of year as decimal number, with Sunday as first day of the week (00-53)
%w Weekday as decimal number (0-6; Sunday is 0)
%W Week of year as decimal number, with Monday as first day of the week (00-53)
%x Date representation for current locale
%X Time representation for current locale
%y Year without century, as decimal (00-99)
%Y Year with century, as decimal number
%z, %Z Either the time-zone name or the time zone abbreviation, depending on
registry settings; no characters if time zone is unknown
%% Percent sign
Syntax: $BFDate(nStart,nEnd)
File Access
$BFFileArray
Description: Selects an item from a list. Returns a single item from a comma-delimited file. The item
returned depends on the mode selected. The syntax of the statement is also slightly
different for each mode. Each virtual user gets a different seed value to generate unique
sequences. Each agent machine must have a file with the name and path that is specified
in the script function. If $BFFileArray is to return strings, the items in the file must be in
double-quotes.
Global Variable
$BFGetGlobalVar
Parameters: N/A
Syntax: $BFGetGlobalVar('myvar')
$BFSetGlobalVar
Parameters: N/A
$BFSetGlobalVarRtn
$BFAddress2
Description: Returns a second randomly generated street address string containing an
apartment number, suite number, or villa.
Parameters: N/A
Syntax: $BFAddress2()
Example: $BFAddress2() ; returns "Apt 5442"
$BFCity
Description: Generates a random city name.
Parameters: N/A
Syntax: $BFCity()
Example: $BFCity() ; returns "Trend Blue of Asia"
$BFCountry
Description: Returns a randomly generated country string.
Parameters: N/A
Syntax: $BFCountry()
Example: $BFCountry() ; returns "Canada"
$BFEmail
Description: Returns a random email address string.
Parameters: N/A
Syntax: $BFEmail()
Example: $BFEmail() ; returns "[email protected]"
$BFFirstName
$BFLastName
Description: Returns a random last name string.
Parameters: N/A
Syntax: $BFLastName()
$BFMiddleInitial
Description: Returns a random middle initial character.
Parameters: N/A
Syntax: $BFMiddleInitial()
$BFPhone
Description: Returns a randomly generated telephone string.
Parameters: N/A
Syntax: $BFPhone()
$BFZipCode
Description: Returns a randomly generated zip code string.
Parameters: N/A
Syntax: $BFZipCode()
Example: $BFZipCode() ; returns "52076"
Numerical Manipulation
$BFFormat
Description: Formats a series of up to 16 numbers. If the number of values in the series is greater than 16, the Maximum Parameters Exceeded message displays.
Parameters: Format string -A %d for each number in the series.
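Conceptually, a format string of "%d-%d-%d" applied to the numbers 555, 867, and 5309 would produce "555-867-5309". The exact call form is not shown here, so this is offered only as an illustration of how the %d placeholders map to the series.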
$BFSum
Description: Returns the summation of a series of numbers.
Parameters: f1-The first number to be summed.
Random Numbers
$BFRand
Description: Returns a random integer between 0 and nMax. Each virtual user gets a different
seed value to generate the same unique sequences for each run.
Parameters: nMax-The maximum integer to be returned by the function.
Syntax: $BFRand(nMax)
Example: $BFRand(100) ; returns "45"
$BFURand
Description: Returns a unique (non-repeating) random integer ranging
between 1 and nMax. Each virtual user gets a different seed value
to generate unique sequences for each run.
Parameters: nMax-The maximum integer to be returned.
Syntax: $BFURand(nMax)
Example: $BFURand(100) ;
returns 78 the first time this function executes, then 50 and 19 for
subsequent executions of this function.
$BFURandRange
Description: Returns unique integers ranging between the value of
nMin and the value of nMax inclusive. Each virtual user
gets a different seed value to generate unique sequences
for each run.
Parameters: nMin-The minimum range integer to return.
nMax-The maximum range integer to return.
Syntax: $BFURandRange(nMin,nMax)
Example: $BFURandRange(1,100) ;
returns 100 the first time this function executes, then 95 and 85 for
subsequent executions of this function.
Parameters: N/A
Syntax: $BFCreditCard()
$BFRandList
Description: Returns a string randomly selected from the list of items. If no weight is
specified, a weight of 1 is assumed. Each virtual user gets a different seed
value to generate unique sequences.
Parameters: string1-The first string in a list to return.
nWeight1-Positive integer indicating the relative weight of the first string.
string2-The second string in a list to return.
nWeight2-Positive integer indicating the relative weight of the second
string.
stringN-The last string in a list to return.
nWeightN-Positive integer indicating the relative weight of the last string.
Syntax: $BFRandList(string1[:nWeight1], string2[:nWeight2], …,stringN
[:nWeightN])
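Based on the syntax above, a weighted list might look like the following; the strings and weights are illustrative only, and the value returned varies with each virtual user's seed:
$BFRandList("Visa":3, "MasterCard":2, "Discover":1) ; might return "Visa" about half of the time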
$BFRandMultiList
Description: Randomly selects multiple strings based on probabilities from a list. If
Weight is omitted, a value of 100 is assumed. The probability that any
string is included in the returned value is determined by the value of
nWeight. Each string included in the return value is separated by a comma.
Parameters: string1-The first string in a list to return.
$BFRandStr
Description: Returns a random string determined by a mode and having a length n.
Parameters: Length n-The length of a string.
$BFList
Description: Returns an item from a list. The item returned depends on the mode
selected.
Parameters: Retrieval Mode:
RANDOM: Select a random item from the list.
SEQUENTIAL: Select each item sequentially.
UNIQUE: Select a non-repeating item from the list.
string1-The first string to return from a list.
string2-The second string to return from a list.
stringN-The last string to return from a list.
Syntax: $BFList(Retrieval Mode, string1,string2,… stringN)
Example: $BFList(Sequential, "1", "2", "3", "4")
Returns 1, 2, etc.
$BFList(Random, 1, 2, 3, 4)
Returns 2, 3, 3, 1, 2, 4, etc.
$BFList(Unique, "1", "2", "3", "4")
Returns 2
String Manipulation
$BFAsc
Description: Returns the ANSI value of the first character of a string.
Syntax: $BFAsc(string)
$BFChr
Description: Returns the character associated with the specified ANSI code.
Parameters: n-An integer representing an ANSI code.
Syntax: $BFChr(n)
Example: $BFChr(68) ; returns "D"
$BFConcat
Description: Returns a string containing two or more strings.
Parameters: string1-The first string to return.
$BFLen
Description: Returns the number of characters in a string.
Parameters: String-Characters enclosed in quotation marks.
Syntax: $BFLen(string)
Example: $BFLen("Benchmark Factory") ; returns "17"
$BFLower
Description: Returns a string after converting uppercase characters to lowercase
characters.
Parameters: String-Characters enclosed in quotation marks.
Syntax: $BFLower(string)
Example: $BFLower("SAMPLING") ; returns "sampling"
$BFMid
Description: Extracts a substring from a string.
Parameters: String-Characters enclosed in quotation marks.
$BFRight
Description: Returns the last n characters of a string.
Parameters: String-Characters enclosed in quotation marks.
$BFTrim
Description: Returns a string void of leading and trailing spaces.
Parameters: String-Characters enclosed in quotation marks.
Syntax: $BFTrim(string)
Example: $BFTrim(" happy days are here to stay. ") ; returns "happy days are here to
stay."
$BFTrimLeft
Description: Returns a string void of leading spaces.
Parameters: String-Characters enclosed in quotation marks.
Syntax: $BFTrimLeft(string)
Example: $BFTrimLeft(" hockey is great ") ; returns "hockey is great "
$BFUpper
Description: Returns a string after converting lowercase characters to
uppercase characters.
Parameters: String-Characters enclosed in quotation marks.
Syntax: $BFUpper(string)
Test Info
$BFGetVar
Description: Retrieves a previously stored value using $BFSetVar. Allows a
value to be passed from one transaction to another in conjunction
with $BFSetVar, or when value is used multiple times within a
transaction. Each virtual user has its own variable space, so
values are not shared between them.
Parameters: VarName-An alphanumeric identifier of the value stored.
Syntax: $BFGetVar("VarName")
$BFMaxNode
Description: Returns the total number of nodes for all users. This function is intended
only for Oracle clustering.
$BFNode
Description: Returns the node number of the current user. This function is intended
only for Oracle clustering.
Parameters: N/A
Syntax: $BFNode()
Example: $BFNode() ; returns "1"
$BFNumberOfIterations
Description: Returns the current number of iterations of a test.
Parameters: N/A
Syntax: $BFNumberOfIterations()
Example: $BFNumberOfIterations() ; returns "1"
$BFProfile
Description: Returns driver specific information, such as database name.
Parameters: Profile (constant)-The following provides a list of database type
constants:
$BFRunID
Description: Returns the run ID of the current test.
Parameters: N/A
Syntax: $BFRunID()
$BFSetVar
Description: Stores a value for later use by $BFGetVar. Used to store a value to be
reused within its own transaction, or any transaction in a given user
scenario. Each virtual user gets its own variable space, so values are not
shared between them. Typically, $BFSetVar is placed at the beginning of a
dynamic statement, as scripts are evaluated from left to right.
Parameters: Variable Name (VarName)-An alphanumeric identifier of the value stored.
Text to Store (Value)-A string. The value to be stored for later retrieval.
Syntax: $BFSetVar("VarName", "Value")
Example: $BFSetVar("Totalrow", "2") ; $BFSetVar sets the variable "Totalrow" to 2
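As an illustration of passing a value between transactions in the same user scenario, the first statement below stores a customer ID and a later statement reuses it. The table and column names are hypothetical:
$BFSetVar("CustID", "42") INSERT INTO orders (customer_id) VALUES ($BFGetVar("CustID"))
SELECT * FROM orders WHERE customer_id = $BFGetVar("CustID")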
$BFSetVarRtn
Description: Stores and returns a value to be reused within its own transaction, or any
transaction in a given user scenario. Each virtual user gets its own
variable space, so values are not shared between them. Typically,
$BFSetVarRtn is placed at the beginning of a dynamic statement, as scripts
are evaluated from left to right.
Parameters: Variable Name (VarKey)-A string to store the value.
$BFUserCounter
Description: Returns the user counter.
Parameters: N/A
Syntax: $BFUserCounter()
Example: $BFUserCounter() ; returns "1"
$BFUserID
Description: Returns the current virtual user ID.
Parameters: N/A
Syntax: $BFUserID()
Example: $BFUserID() ; returns "1"
$BFUserLoad
Description: Returns the current user load for the test running.
Parameters: N/A
Syntax: $BFUserload()
Example: $BFUserLoad() ; returns "4"
$BFNextUserload
Description: Returns the next user load for the load scenario running.
$BFPrevUserload
Description: Returns the previous user load for the load scenario running.
Parameters: N/A
Syntax: $BFPrevUserload()
Example: $BFPrevUserload()
When running with userloads 1, 4, 6, 10 this will return "4" when
running at userload 6.
Repository Manager
Note: If you create a new Benchmark Factory 5.5 (or later) repository, earlier versions of Benchmark Factory
will not work against this repository.
The Repository is a database where all of the test results are stored. Benchmark Factory inserts test results
into the repository and provides an easy way to access the data. By default, the Repository is a SQLite
database that resides on the same machine as Benchmark Factory. The Repository can reside on another
database server if required.
Note: By default in Benchmark Factory 7.1.1 or earlier, a MySQL database is created and used as the
Repository, unless you selected the SQLite option during installation. In Benchmark Factory 7.2 or later, by
default a SQLite database is created and used as the Repository.
To change the database, select the Data Source Name of the ODBC connection for the new database. To
migrate data from one database to another, click Data Migration to open the Data Migration Wizard.
Note: If the database structure does not exist on the selected database, a prompt to create the structure will
appear when OK is clicked.
Connection Parameters
Data Source Name: Data Source name of the ODBC connection used to connect to the repository database.
ODBC Driver: Current ODBC driver.
User Name: The User Name used to log into the selected database.
Password: The Password associated with the user name used to log into the database.
Edit DSN: Displays the ODBC connection information dialog for the selected data source.
ODBC Administrator: Displays the ODBC Data Source Administrator dialog. Use this to add and edit ODBC connections.
Test Connection: Tests the connection of the currently selected ODBC Data Source.
Maintenance
Create: Creates the repository objects on the selected database.
Delete: Deletes the repository objects on the selected database.
Warning: This will delete all test results stored in the Repository.
3. In the Choose a Data Source page, select the data source name for the database from which you want to
migrate Benchmark Factory data.
l To migrate data from the default SQLite database installed and used with any new installation of
Benchmark Factory 7.2 or later, select Default SQLite.
Note: The option to use a default SQLite database was also available in Benchmark Factory
7.1.1 or earlier.
4. Enter the user name and password, if necessary (for example, if migrating from a non-default database).
Click Next.
5. In the Choose a Destination page, select the data source name of the database to which you want to
migrate the data.
6. Click Next. The Data Migration Wizard completion dialog displays.
7. Click Finish.
Support Bundle
You can create a Support Bundle and send it to Quest Support for review. To help troubleshoot problems, the
support bundle contains information such as:
l BMF version number
l Settings
l License information (just send the key files)
l Error Logs and Result Logs from the associated directories
l Files located in the data directory. This includes not only the XML files for imported user scenarios, but also
dump files.
l Script files and XML configuration files
l A file that contains hardware information about the system, as well as the agent configuration output
from the repository
l Information about versions of all loaded .dlls
2. In the Support Bundle dialog, select the modules you want to send.
3. If creating a support bundle for a single job, select Just for selected job.
4. Then do one of the following:
l To save the bundle, click the Save icon . Then select a location in which to save the zipped file.
l To email the bundle to Support, click Email Quest Software Support.
l To contact Support via the Support Portal, click Quest Support.
By default, the support bundle is created and saved in the following location:
C:\Users\<user name>\My Documents\My Benchmark Factory\<version
number>\BMFSupportBundle.zip.
Agent Connection
If the agent is having a problem connecting to the Benchmark Factory console, please check the following:
Issue Cause/Solution
"Bad packet" error when testing When testing against an Oracle 12c database in a Linux
against Oracle 12c on Linux environment, if you encounter a "bad packet" error when loading
benchmark objects, you might attempt the following workaround.
Workaround: You can reduce the number of rows per commit by
adding a key to the Registry.
1. In the Registry, navigate to HKEY_CURRENT_
USER\Software\Quest Software\Benchmark Factory for
Databases\Benchmark Factory Console\Settings.
2. Add the following DWORD key: RowsPerCommit.
3. Specify a value for the number of rows per commit, for
example 25.
4. Then navigate to HKEY_CURRENT_USER\Software\Quest
Software\Benchmark Factory for Databases\Benchmark
Factory Agent\Settings.
5. Add the same DWORD key and the same value.
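If you prefer to add the values from an elevated command prompt instead of the Registry Editor, a sketch using the Windows reg command is shown below; the value 25 is only an example:
reg add "HKCU\Software\Quest Software\Benchmark Factory for Databases\Benchmark Factory Console\Settings" /v RowsPerCommit /t REG_DWORD /d 25
reg add "HKCU\Software\Quest Software\Benchmark Factory for Databases\Benchmark Factory Agent\Settings" /v RowsPerCommit /t REG_DWORD /d 25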
Benchmark Factory provides a REST API that allows you to access the functionality of Benchmark Factory, but
without the need to interact with the Benchmark Factory graphic user interface. A REST API is an application
program interface (API) that uses HTTP requests to GET, POST, PUT, and DELETE data. Through the REST
API, you can use a script, command-line tool, or custom application to automate your load testing tasks using
Benchmark Factory.
You can use the Benchmark Factory REST API with the Benchmark Factory console application or
BMFServer.exe. BMFServer.exe is a non-UI application included in the Benchmark Factory installation. See
BMFServer.exe for more information.
Where {server} is the Benchmark Factory host server, {port} is the port number, and {resource} is the name of
the resource.
Example: http://localhost:30100/api/jobs
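For example, a command-line HTTP client such as curl could list the defined jobs; the host and port shown assume the default from the example above:
curl http://localhost:30100/api/jobs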
Response Codes
These are the expected response codes returned. In addition, some other status codes may be returned if either
an internal error occurs or there are other issues.
Code Description
200 Success
201 Success. Resource created.
204 Success. No content returned.
400 The request failed.
404 The specified resource was not found.
405 The method is not supported for the specified resource.
Connections
Connections
Connection
Jobs
Jobs
Job
Connection
Agents
Agent
Tests
Test
Transactions
Transaction
Schedule
Toolbar
Toggling to Toolbar displays the graph toolbar.
Load Configuration
Benchmark Factory graphs allow you to save graph configurations.
Save Configuration
Saves a graph configuration.
Set as Default
Sets a configured graph as default.
9. Select the appropriate authentication from the drop-down. Enter the login name and password
if required.
10. Click Connect. The Destination Table window displays.
11. Select the appropriate database from the Database drop-down list.
12. Select the appropriate owner from Owner drop-down.
13. Select the appropriate table from the Table drop-down list.
14. Click OK. The Trace Properties window displays.
15. Click the Events Selection tab.
16. Verify that all check boxes are selected for the TSQL-SQL:BatchStarting and TSQL-
SQL:BatchComplete events.
17. Click Run.
18. Run SQL statements.
19. When finished running SQL statements, click the Stop selected trace icon. The table is created.
Note: Environment Variables can be reached by right-clicking the My Computer icon, selecting Properties,
selecting the Advanced tab, and then clicking the Environment Variables button.
Example:
BMFDataMigrationWizard -s [MyDatabase,root,yourpassword] -d
[LocalServer,sa,sa]
Database Examples
BEGIN
YOUR_PROC(:VAR1,:VAR2);
END;
BEGIN
YOUR_PROC2 ( );
END;
exec YOUR_PROC2
To turn on and off the trace use the ALTER SESSION command:
1. Set the TIMED_STATISTICS and MAX_DUMP_FILE_SIZE parameters used by the sessions:
alter session
set timed_statistics=true
alter session
set max_dump_file_size=unlimited
Note: If using a release earlier than Oracle 8.1.6, these parameters can be changed with ALTER
SYSTEM commands.
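The trace itself is typically switched on and off with the SQL_TRACE session parameter. The statements below are a generic Oracle sketch of that step, not text taken from this guide:
alter session set sql_trace = true;
-- run the SQL statements to be traced
alter session set sql_trace = false;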
System Requirements/Upgrade
Requirements/Supported Databases
System Requirements
Before installing Benchmark Factory, ensure your system meets the following minimum hardware and software
requirements.
ODBC Database Server: Benchmark Factory supports almost all databases that you can connect to using
an ODBC 3.0 or later driver.
Benchmark Factory Agent for Linux - Requirements:
Supported Operating Systems: CentOS 7.x (64-bit), RHEL 7.x (64-bit), and Oracle Linux 7.x (64-bit)
Supported Databases for Load Testing: PostgreSQL, Oracle, MySQL and Microsoft SQL Server
Note: If using the Benchmark Factory Agent for Linux to test against an Oracle database, ensure an Oracle
Client is installed on the same Linux machine as the Agent.
Upgrade Requirements
l Client libraries for database types used during the workload testing process must be installed on all
testing machines (Benchmark Factory and Agents).
l There is no upgrade path for the Benchmark Factory Repository version 3.3 or earlier.
l If you create a new Benchmark Factory 5.5 or later repository, earlier versions of Benchmark Factory will
not work against this repository.
Shortcut Keys
The following provides a list of shortcut keys used in Benchmark Factory.
Key Action
ALT+1 Displays the Output View window
ALT+2 Displays the Agents View window
ALT+M Creates an email message with the current script attached
ALT+R Runs a job
ALT+S Stops a job
CTRL+B Displays the Benchmark Objects Wizard
Create Outbound Rule on agent machine (if outbound connections are blocked)
1. Select Control Panel | System and Security | Windows Firewall.
2. Click Advanced Settings. The Windows Firewall and Advanced Security dialog opens.
l If outbound connections are blocked, then continue to create a new outbound rule.
l If outbound connections are allowed, then no action is required.
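If you need to script the outbound rule instead of using the Windows Firewall UI, a sketch using the netsh command is shown below. The Agent.exe path is the default install location mentioned earlier; adjust the <version> folder for your installation:
netsh advfirewall firewall add rule name="Benchmark Factory Agent" dir=out action=allow program="C:\Program Files\Quest Software\Benchmark Factory <version>\bin\Agent.exe"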
Troubleshooting
After enabling WMI and configuring inbound/outbound rules, if you encounter an error while attempting to install
a remote agent because you are denied access, try the following.
2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System.
3. Add a new DWORD (32-bit) Value.
4. Rename the key to "LocalAccountTokenFilterPolicy".
5. Give it a value of "1".
6. Close the Registry Editor.
Contact Quest
For sales or other inquiries, visit www.quest.com/contact.