Web Load Testing Tutorial
SilkPerformer® 2010 R2
Copyright 2010 Micro Focus (IP) Limited. All Rights Reserved. SilkPerformer contains derivative works of Borland Software Corporation, Copyright 1992-2010 Borland Software Corporation (a Micro Focus company).
MICRO FOCUS and the Micro Focus logo, among others, are trademarks or registered trademarks of
Micro Focus (IP) Limited or its subsidiaries or affiliated companies in the United States, United Kingdom
and other countries.
BORLAND, the Borland logo and SilkPerformer are trademarks or registered trademarks of Borland
Software Corporation or its subsidiaries or affiliated companies in the United States, United Kingdom
and other countries.
November 2010
Contents

Introduction
  Overview
  SilkPerformer
  Sample Web Application

Chapter 1: Defining Load Test Projects
  Overview
  Defining a Load Test Project

Chapter 2: Creating Test Scripts
  Overview
  Creating a Load Test Script
  Trying Out a Generated Script

Chapter 3: Analyzing Test Scripts
  Overview
  Visual Analysis with TrueLog Explorer
  Viewing a Summary Report
  Finding Errors
  Viewing Page Statistics
  Comparing Record and Replay TrueLogs

Chapter 4: Customizing Test Scripts
  Overview
  Customizing Session Handling
  Customizing User Data
  Adding Verifications

Chapter 5: Defining User Profiles
  Overview
  Defining a Custom User Profile

Chapter 6: Identifying Baseline Performance
  Overview
  Finding a Baseline
  Confirming a Baseline

Chapter 7: Setting Up Monitoring Templates
  Overview
  Setting Up a Monitoring Template

Chapter 8: Defining Workload
  Overview
  Defining Workload

Chapter 9: Running & Monitoring Tests
  Overview
  Running a Load Test
  Monitoring a Test
  Monitoring a Server

Chapter 10: Exploring Test Results
  Overview
  Working with TrueLog On Error
  Viewing an Overview Report
  Viewing a Graph

Index
About these tutorials: The Web Load Testing Tutorial offers an overview of using SilkPerformer to set up and run Web load tests.

This Introduction contains the following sections:
- Overview
- SilkPerformer
- Sample Web Application
Overview
The Web Load Testing Tutorial is designed to ease you into the process of load testing with SilkPerformer and to get you up and running as quickly as possible. It will help you take full advantage of SilkPerformer's ease of use and exploit the leading-edge functionality of e-business's load-testing tool of choice.
Note This tutorial describes load-testing of Web applications on the
protocol level (HTTP/HTML).
If you want to load-test applications that heavily rely on AJAX
technologies we recommend using browser-driven Web load testing.
Browser-driven Web load testing is a solution that uses real Web
browsers (Internet Explorer) to generate load, thus leveraging the AJAX
logic built into Web browsers to precisely simulate complex AJAX
behavior during testing.
Refer to the Browser-Driven Web Load Testing Tutorial for more
information.
SilkPerformer
SilkPerformer is the industry’s most powerful—yet easiest to use—enterprise-
class load and stress testing tool. Visual script generation techniques and the
ability to test multiple application environments with thousands of concurrent
users allow you to thoroughly test your enterprise applications’ reliability,
performance, and scalability before they’re deployed—regardless of their size
and complexity. Powerful root cause analysis tools and management reports
help you isolate problems and make quick decisions—thereby minimizing test
cycles and accelerating your time to market.
Chapter 1: Defining Load Test Projects

Introduction: This tutorial explains how to define a load-testing project in SilkPerformer.

What you will learn: This chapter contains the following sections:
- Overview
- Defining a Load Test Project
Overview
The first step in conducting a SilkPerformer load test is to define the basic
settings for the load-testing project. The project is given a name, and optionally
a brief description can be added. The type of application to be tested is specified
from a range of choices that includes all the major traffic that is encountered in
e-business today on the Internet and on the Web, including the most important
database and distributed applications.
The settings that are specified are associated with a particular load-testing
project. It’s easy to switch between different projects, to edit projects, and to
save projects so that they can later be modified and reused.
A project contains all the resources needed to complete a load test. These
include a workload, one or more profiles and test scripts, all the data files that
are accessed from the script, and a specific number of agent computers and
information for server side monitoring. Options for all of these resources are
available directly from the project icon in the corresponding tree list.
Chapter 2: Creating Test Scripts

Introduction: This tutorial explains how to model load test scripts and try out test scripts via TryScript runs.

What you will learn: This chapter contains the following sections:
- Overview
- Creating a Load Test Script
- Trying Out a Generated Script
Overview
The easiest method of creating a load test script is to use the SilkPerformer
Recorder, SilkPerformer’s engine for capturing and recording traffic and
generating test scripts.
First the SilkPerformer Recorder captures and records the traffic between a
client application and the server under test. When recording is complete, the
SilkPerformer Recorder automatically generates a test script based on the
recorded traffic. Scripts are written in SilkPerformer’s scripting language,
Benchmark Description Language (BDL).
During the recording phase, you define transactions. A transaction is a discrete
piece of work that can be assigned to a virtual user in a load test and for which
separate time measurements can be made. You should create new transactions
only for pieces of work that don’t have dependencies on other pieces of work.
Individual time measurements can be made for any action or series of actions
that occur during recording.
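In BDL terms, each transaction described above becomes a transaction block scheduled for a user group. The following is a minimal hand-written sketch of the kind of script the Recorder generates; the URLs, group name, and transaction name are illustrative, not taken from a recorded session:

```bdl
benchmark WebLoadTest

use "WebAPI.bdh"

dcluser
  user
    VUser
  transactions
    TMain : 1;          // each virtual user runs TMain once per cycle

dcltrans
  transaction TMain
  begin
    WebPageUrl("http://example.com/", "Home");   // page timer "Home"
    WebPageLink("Products", "Product List");     // follow the "Products" link
  end TMain;
```

Each page-level call produces its own time measurement, so the transaction boundaries you define during recording map directly onto the timers you will later analyze.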
Note Ensure that you delete your browser’s cookies if you wish to
record a script that emulates the actions of a first-time user.
7 Using the browser, interact with the target server in the way that you
wish virtual users to interact during the load test (i.e., click links and
explore products). Your actions will be captured and recorded by the
Recorder. When you’re finished, close the browser window and click the
Stop Recording button on the Recorder.
8 The Save As dialog appears. Save the script with a meaningful name that reflects the behavior of the virtual user whose actions you defined.
9 The newly generated load test script, based on the recorded traffic, appears in the SilkPerformer script editor window.
The Workflow – Try Script dialog appears. The active profile is already selected
in the Profile drop-down list. The script you created is selected in the Script
drop-down list. The VUser virtual user group is selected in the Usergroup area.
2 To view the data that is actually downloaded from the Web server during the TryScript run, select the Animated checkbox.
Note To test an application other than a Web application, disable
the Animated option.
3 Click Run.
Note You are not running an actual load test here, only a test run to
see if your script requires debugging.
4 The TryScript run begins. The Monitor window opens, giving you
detailed information about the run’s progress.
5 TrueLog Explorer opens, showing you the data that is actually
downloaded during the TryScript run.
6 If any errors occur during the TryScript run, TrueLog Explorer will assist you in locating them and customizing any session-relevant information. See "Customizing Session Handling" for details.
Chapter 3: Analyzing Test Scripts

Introduction: This tutorial explains how to analyze recorded load test scripts based on the results of TryScript runs.

What you will learn: This chapter contains the following sections:
- Overview
- Visual Analysis with TrueLog Explorer
- Viewing a Summary Report
- Finding Errors
- Viewing Page Statistics
- Comparing Record and Replay TrueLogs
Overview
Once you’ve initiated a TryScript run for a recorded test script you’ll need to
analyze the results of the TryScript run using TrueLog Explorer. Test script
analysis with TrueLog Explorer involves the following three tasks:
- Viewing Virtual User Summary Reports
- Finding errors
- Comparing replay test runs with record test runs
• Information view displays data regarding load testing scripts and test
runs, including general information about the loaded TrueLog file, the
selected API node, BDL script, HTML references, and HTTP header
data.
[Screenshot: TrueLog Explorer interface, showing the Workflow bar, a Tree List view of accessed API nodes, the Rendered HTML view, the HTML Source view tab, and the Information window with multiple views.]
See SilkPerformer’s online documentation for details regarding the statistics that
are included in virtual user summary reports.
Enabling summary reports: Because virtual user summary reports require significant processing, they are not generated by default. To enable the automatic display of virtual user reports at the end of animated TryScript runs, or when clicking the root node of a TrueLog file in Tree List view, select the Display virtual user report checkbox.
Finding Errors
TrueLog Explorer helps you find errors quickly after TryScript runs. Erroneous
requests can then be examined and necessary customizations can be made via
TrueLog Explorer.
Note When viewed in Tree List view, API nodes that contain replay
errors are tagged with red “X” marks.
3 The Step through TrueLog dialog appears with the Errors option
selected.
4 Click the Find Next button to step through TrueLog result files one error
at a time.
3 The corresponding record TrueLog opens in Compare view and the Step
through TrueLog dialog appears with the Whole pages option selected—
allowing you to run a node-by-node comparison of the TrueLogs.
4 Click the Find Next button to step through TrueLog result files one page
at a time.
[Screenshot: Compare view — windows displaying the recorded session are marked with red triangles, allowing you to explore differences visually.]
Chapter 4: Customizing Test Scripts

Introduction: This tutorial explains how to customize a recorded load test script based on the results of a TryScript run.

What you will learn: This chapter contains the following sections:
- Overview
- Customizing Session Handling
- Customizing User Data
- Adding Verifications
Overview
Once you’ve generated a load test script with SilkPerformer and executed a
TryScript run, TrueLog Explorer can help you customize the script in the
following ways:
• Customize session handling - TrueLog Explorer's parsing
functions allow you to replace static session IDs in scripts with
dynamic session IDs—and thereby maintain state information for
successful load test runs.
• Parameterize input data - With user data customization you can
make your test scripts more realistic by replacing static recorded
user input data with dynamic, parameterized user data that changes
with each transaction. Manual scripting isn’t required to run such
“data-driven” tests.
3 Use the Step Through TrueLog dialog to step through HTML server responses—with the recorded responses displayed alongside the corresponding replayed responses. Only those differences that indicate that static information is included in the test script and being sent back to the server need to be parsed (e.g., a difference between the replay and record TrueLogs).

[Screenshot: the number of occurrences in the HTML code is listed in the dialog.]
6 Once the script has been modified successfully, click the Customize
Session Handling button to initiate a new TryScript run.
7 Click TryScript Run to see if the script runs correctly now that session
handling has been modified.
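The customization that TrueLog Explorer generates follows a simple pattern: register a parsing call that extracts the dynamic session ID from a server response into a variable, then send that variable back instead of the recorded literal. The sketch below is illustrative only; the exact parsing function, boundaries, and signature are generated for you by TrueLog Explorer and may differ between SilkPerformer versions:

```bdl
dcltrans
  transaction TMain
  var
    sSessionId : string;
  begin
    // assumed parsing call: registered before the page call whose
    // response it inspects; extracts the text between the boundaries
    WebParseDataBound(sSessionId, sizeof(sSessionId), "sessionid=", "&");
    WebPageUrl("http://example.com/login", "Login");

    // the parsed, dynamic value replaces the static recorded session ID
    WebPageUrl("http://example.com/shop?sessionid=" + sSessionId, "Shop");
  end TMain;
```

Because the variable is re-parsed on every replay, each virtual user maintains its own valid session state during the load test.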
2 Click Customize.
3 Use the Find Next and Find Previous buttons on the Step through
TrueLog dialog to browse through all WebPageSubmit calls in the
TrueLog (these are the calls that are candidates for user data
customization).
Post Data view shows the page that contains the HTML form that was
submitted by the selected WebPageSubmit call. When your cursor passes
over a form control, a tool tip shows the control’s name in addition to its
initial and submitted values.
Highlighted HTML controls in Post Data view identify form fields that
can be customized. You can replace the recorded values with various
types of input data (including predefined values from files and generic
random values) and generate code into your test script that substitutes
recorded input data with your customizations.
4 Right-click a form control that you wish to customize and select Customize Value to open the Parameter Wizard.
5 With the Parameter Wizard you can modify script values in one of two
ways. You can either use an existing parameter that’s defined in the
dclparam or dclrand section of your script, or you can create a new
parameter (based on either a new constant value, a random variable, or
values in a multi-column data file). Once you create a new parameter,
that parameter is added to the existing parameters and becomes available
for further customizations.
7 The Create New Parameter dialog appears. Select the Parameter from
Random Variable radio button and click Next.
8 The Random Variable Wizard appears. From the drop-down list, select
the type of random variable you wish to insert into your test script. A
brief description of the highlighted variable type appears in the lower
window.
9 Click Next.
10 The Name the variable and specify its attributes screen appears. The
Strings from file random variable type generates data strings that can
either be selected randomly or sequentially from a specified file.
Enter a name for the variable in the Name field. Specify whether the
values should be called in Random or Sequential order. Then select a
preconfigured datasource from the File/Name drop-down list.
11 Click Next. The Choose the kind of usage dialog appears. Specify whether the new random value should be used Per usage, Per transaction, or Per test.
12 Click Finish to modify the BDL form declaration of your test script so
that it uses the random variable for the given form field in place of the
recorded value. The new random variable function appears below in
BDL view.
13 Initiate a TryScript run with the random variable function in your test
script to confirm that the script runs without error.
14 With the Form Submissions radio button selected on the Step through
TrueLog dialog, search for the next form for which you wish to
customize input parameters.
Multi-column data files: Parameterization from multi-column data files is a powerful means of parameterizing data because it defines files in which specific combinations of string values are stored (e.g., usernames/passwords, first names/last names, etc.). Each column in a data file corresponds to a specific parameter. Multi-column data files enable a data-driven test model and allow you to cover all user data input with a single data file.
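The code the Parameter Wizard generates for this lives in the script's declaration sections. The following sketch shows the idea, assuming a users.csv data file and a login form; the file, field, and variable names are illustrative:

```bdl
dclrand
  // draw strings from a data file, at most 64 characters each
  rsUser : RndFile("users.csv", 64);

dclform
  LOGIN_FORM:
    "username" := rsUser,     // recorded literal replaced by the random variable
    "password" := "secret";   // still static; could be parameterized the same way
```

At replay time, every WebPageSubmit that posts LOGIN_FORM then sends a different username, refreshed per usage, per transaction, or per test depending on the kind of usage chosen in the wizard.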
Adding Verifications
TrueLog Explorer allows you to easily add content checks to your test scripts to
verify whether content that is to be sent by servers is in fact received by clients
under real-world conditions.
By comparing replay test runs with record test runs—a uniquely powerful
approach to the challenge of testing end-user experience in client/server
environments—TrueLog Explorer allows you to confirm visually whether or
not embedded objects, text, graphics, table data, SQL responses and more are
actually downloaded and displayed by clients while systems are under heavy
load. This allows you to detect a class of errors that other Web traffic simulation
tools aren’t able to detect: errors that occur only under load that aren't detected
with standard load test scripts.
Content verifications remain useful after system deployment as they can be
employed in ongoing performance management.
Simply identify the objects that you wish to have verified—by right-clicking them in rendered HTML, HTML source code, or elsewhere—and all required verification functions are generated and inserted into BDL scripts.
TrueLog Explorer even offers pre-enabled verification functions for Web,
XML, and database applications. For Web applications this includes checks of
HTML page titles, page digests (entire content checks), tables, and source code.
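A content check generated this way is just another BDL call placed ahead of the page invocation it guards. The sketch below assumes a WebVerifyData-style function that raises an error when the expected text is missing from the response; treat the function name and signature as assumptions and let the wizard insert the real calls:

```bdl
dcltrans
  transaction TMain
  begin
    // assumed verification API: registered before the page call,
    // raises an error if the response lacks the expected content
    WebVerifyData("Welcome to the online shop");
    WebPageUrl("http://example.com/", "Home");
  end TMain;
```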
7 Once the BDL script has been successfully modified, repeat this process
for each verification you wish to add to the BDL script.
8 Once you have finished adding verifications, click Yes on the Workflow -
Add Verifications dialog to initiate a TryScript run.
9 Confirm that verifications have been passed successfully (API nodes that
include verifications are indicated by blue “V” symbols).
Completing your customizations: Once you've customized how your application handles session information and user-input data, added all necessary verification functions, and completed any required manual BDL script editing via SilkPerformer, your load-testing script should run without error.
Chapter 5: Defining User Profiles

This chapter contains the following sections:
- Overview
- Defining a Custom User Profile
Overview
Load test scripts that offer a range of user behavior can be created based on user
types, which are unique combinations of user groups and load test profiles. New
user types can be created by defining new user groups and test profiles.
User groups are sets of users that share common transactions and transaction
frequency settings. User groups are defined in the dcluser sections of BDL
scripts.
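A dcluser section pairs a user group with the transactions it runs and their frequencies. An illustrative sketch (the group and transaction names are made up, not from the sample application):

```bdl
dcluser
  user
    Buyer
  transactions
    TInit   : begin;   // run once when the virtual user starts
    TBrowse : 4;       // four browse transactions per cycle
    TBuy    : 1;       // one purchase transaction per cycle
    TEnd    : end;     // run once when the virtual user stops
```

Combining this group with different profiles (e.g., different browsers or connection speeds) yields distinct user types without touching the script.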
By adding profiles to your load test project you can endow a single user type
with a range of traits (e.g., varying connection speeds, protocols, browsers, etc).
SilkPerformer has a default profile that you can use. In some instances you may
require multiple profiles. For example, if you wanted to emulate three different
modem speeds during your load test, you could create three profiles that define
three different modem speeds.
Project profiles contain important project-specific settings. A project may
contain as many profiles as is required, each with unique settings. New profiles
can easily be added to projects, and existing profiles can be copied, renamed,
and deleted.
Within profiles, options can be set for how the SilkPerformer Recorder generates test scripts from recorded traffic and how protocols are used during recording. Simulation settings can be defined for script replay, as can options for the result files generated during tests. Options are also set for the different kinds of network traffic that are to be simulated—Internet, Web, CORBA/IIOP, COM, TUXEDO, Jolt, and database.
3 The New Profile dialog appears. Enter a name (e.g., IE6_DSL) for the
new profile and click OK.
5 Right-click the newly created profile name in the Project window and select Edit Profile from the context menu to display the Edit Profile dialog.
See SilkPerformer online help for full details regarding available profile
settings.
Chapter 6: Identifying Baseline Performance

Introduction: This tutorial explains how to identify and confirm the baseline performance of a Web application.

Note The workflow bar displays the simplified workflow by default. If you want to create a baseline to set response time thresholds, enable the normal workflow by selecting Settings -> System -> Workspace and unchecking the Show simple workflow check box.

What you will learn: This chapter contains the following sections:
- Overview
- Finding a Baseline
- Confirming a Baseline
Overview
The next step in conducting a SilkPerformer load test is to ascertain baseline
performance (i.e., the ideal performance of the application under test). Baseline
tests are run using only one user per user type, and performance measurements
of unstressed applications form the basis for calculating appropriate numbers of
concurrent users per user type and appropriate boundaries for HTML page
response timers and transaction response timers. The bandwidth required to run
load tests is also calculated from baseline results. Baseline tests utilize the
identical measurement types that are used for actual load tests. As with actual
load tests, baseline tests also output reports and other standard files.
Finding a Baseline
By assigning different profiles to a user group and script you define new user
types that represent unique combinations of script, user group, and profile.
Baseline tests establish baseline performance for load tests using specific user
types. For baseline tests, only one virtual user per user type is executed.
The Find Baseline dialog allows you to define multiple user types (unique
combinations of script, user group, and profile).
The following option settings are automatically set for baseline tests:
• Baseline report files are automatically created
• The Stop virtual users after simulation time (Queuing Workload) option
is activated
• The Random think time option is deactivated
• The Load test description field is set to "BaseLine Test".
• The Display All Errors Of All Users option in the Monitor window is
activated
• The Virtual user output files (.wrt) option is activated
• The Virtual user report files (.rpt) option is activated
2 The Workflow - Find Baseline dialog appears. Select the user types you
wish to have run in the baseline test. One virtual user from each user type
will be executed.
3 If you want to add new user types to your load test, press the Add button
and select a unique combination of script, profile, and user group from
the Add User Type dialog. Each profile defined in a project can be
selected with a user group from any script in that project.
4 If you wish to configure simulation settings for the selected profile, click
the browse button (‘...’) to the right of the drop-down list.
6 The baseline test is run. The Monitor window opens, giving you detailed
information about the progress of the test.
Confirming a Baseline
The next step in conducting a SilkPerformer load test is to confirm that the test
baseline established by the test actually reflects the desired performance of the
application under test. Resulting measurements are used to calculate the
appropriate number of concurrent virtual users, required bandwidth, and
acceptable thresholds for load tests.
This is done by inspecting the results of a test in a baseline report. If results are
satisfactory, they can be stored for further processing.
Once a baseline test is complete, a baseline report is displayed. Baseline reports
are based on XML/XSL and include important test results in tabular form.
Note Baseline reports can be displayed for any load test you wish to
use as a baseline (i.e., you can use the results of any past load test to
generate a baseline report).
3 Assuming you are satisfied with the test results and wish to save them for
further processing (e.g., calculation of the number of concurrent virtual
users and network bandwidth required for the load test), click the Accept
Baseline button.
4 Click Yes.
5 Now that you have an accepted baseline you can set response time
thresholds for the selected timers. MeasureSetBound functions will be
generated into the script to set these thresholds.
Click the Set Response Time Thresholds button to display the Automatic Threshold Generation dialog.
6 Specify the timers for which you want to generate automatic threshold
values based on baseline results in the Timers section of the dialog.
7 Specify multipliers that are to be used for the calculation of the time
bound values in the Multiplier section of the dialog. The average
response times in the baseline test will be multiplied by this factor to
generate time bound values for the specified timers (e.g., a multiplier of
three means that you accept response times three times higher than the
average response time of the timer in the baseline test).
8 Specify whether errors or warning messages should be raised when a
timer value exceeds the specified threshold in the Raise message when
exceeding thresholds section of the dialog. You can also specify the
severity of raised messages.
9 Click OK to have MeasureSetBound functions added to the test script for
each selected timer and accepted user type.
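The generated calls follow this shape. The MeasureSetBound signature and the measure-type constant below should be checked against the SilkPerformer BDL reference, and the values are illustrative:

```bdl
dcltrans
  transaction TInit
  begin
    // baseline average for page "Home" was 1.2 s; with a multiplier
    // of 3, the generated threshold (bound 1) is 3.6 s
    MeasureSetBound("Home", MEASURE_PAGE_PAGETIME, 1, 3.6);
  end TInit;
```

During the load test, any "Home" page time above 3.6 seconds then raises the error or warning configured in the dialog.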
Chapter 7: Setting Up Monitoring Templates

Introduction: This tutorial explains how to set up monitoring templates to generate server-side results information during load tests.

What you will learn: This chapter contains the following sections:
- Overview
- Setting Up a Monitoring Template
Overview
SilkPerformer offers server and client-side monitoring during load tests—
enabling you to view live graphical display of server performance while tests
run. Monitoring servers during tests is important because it enables server-side
results information to be generated. This information can then be viewed and
correlated with other test measurements during results analysis.
Among other uses, monitoring servers helps you determine if bottlenecks are present and, if so, where they are located, allowing you to examine the performance of both operating systems and server applications.
Custom server monitoring templates can be set up or you can use pre-installed
templates (available for virtually all application types).
6 Enter a name for the custom template file and click the Create Custom
Monitor Template button.
7 To edit the newly created template, click the browse button (‘...’) next to
the edit field. Browse for and select the file.
8 With the new template file loaded in the edit field, click Edit Custom
Monitor Template.
9 Performance Explorer appears. Close any monitor windows that are not
relevant to the template.
10 Click the Monitor Server button on the Performance Explorer Workflow
bar.
13 Click Next.
14 In the tree view on the System selection screen, expand the folder that
corresponds to the operating system on which the server and the
application under test run.
15 From the list, select the server application you wish to monitor. To
monitor the operating system, select System.
16 Select the operating system on which the server and application under
test run.
17 Click Next.
19 In the Hostname edit field, enter connection parameters such as the host
name or IP address of the computer that hosts the application, connection
port, user name, and password. The data required here varies based on
the operating system run by the monitored computer.
20 Click Next.
21 The Select displayed measures screen appears. Expand the tree view and
select the measurements you wish to have monitored.
22 Click Finish.
26 To save the monitoring report so that it can later be compared with load
test results for results exploration, select Write Monitor Data from the
Performance Explorer Monitor menu. The file name appears in the File
section of the Monitor information area of the report dialog.
The next time you begin a load test, server monitoring will begin and stop
automatically.
Chapter 8: Defining Workload

Introduction: This tutorial explains how to define workload settings for load tests.

What you will learn: This chapter contains the following sections:
- Overview
- Defining Workload
Overview
The next step in conducting a SilkPerformer load test is to configure workload.
SilkPerformer offers different workload models that can be used as the basis for
load tests. You must select the workload model that best meets your needs prior
to the execution of a load test.
The number of concurrent virtual users per user type, the duration, and the
involved agents must also be configured when defining workload.
The following workload models are available:
Increasing workload
With this workload model, at the beginning of a load test, SilkPerformer
simulates not the total number of defined users, but only a specified number.
Gradually workload is increased until all the users in the user list are running.
This workload model is useful when you want to determine at which load level
your system crashes or does not respond within acceptable response times or
error thresholds.
Dynamic workload
With this model you can manually change the number of virtual users in a test
while the test runs. The maximum number of virtual users to be run is set; within
this limit, the number can be increased and decreased at any time during the test.
No simulation time is specified, and you must end the test manually.
This workload model is useful when you want to experiment with different load
levels and have control over load levels during tests.
Queuing workload
In this model, transactions are scheduled following a prescribed arrival rate.
This rate is a random value based on an average interval calculated from the
simulation time and the number of transactions specified in the script (dcluser
section: number of transactions per user). Load tests are complete when all
virtual users have completed their prescribed tasks.
Note Tests may take longer than the specified simulation time due to the randomized arrival rates. For example, if you specify a simulation time of 3,000 seconds and want to execute 100 transactions, the average transaction arrival interval will be 30 seconds.
This workload model is especially useful when you want to simulate workloads that use queuing mechanisms to handle multiple concurrent requests. Typically, application servers such as servlet engines and transaction servers—which receive their requests from Web servers rather than end users—can be simulated realistically with this model.
Verification workload
Verification test runs are especially useful when combined with SilkPerformer’s
extended verification functionality. This combination can be used for regression
testing of Web-based applications. A verification test run runs a single user of a
specific user type on a specified agent computer.
This workload model is especially useful when you wish to automate the
verification of Web applications and want to begin verification tests from a
command-line interface.
Adjusting workload: The Adjust Workload button on the workflow bar launches the workload wizard, which helps you define all necessary parameters for your workload.
After specifying the simulation time for a load test, the wizard helps you
calculate the number of concurrent virtual users per user type associated with
your load test. This task is important for emulating real user behavior. If you
know the number of expected real user sessions per hour for the application
under test, the number of concurrent virtual users will be calculated based on the
results of accepted baseline tests.
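The tutorial does not spell the calculation out, but the standard way to derive concurrent users from a session rate is Little's law, stated here as an assumption rather than as SilkPerformer's documented formula:

```latex
N_{\text{concurrent}} \approx \frac{\text{sessions per hour} \times \bar{t}_{\text{session}}\,[\text{s}]}{3600}
```

For example, 1,800 expected sessions per hour with an average baseline session duration of 120 seconds gives 1800 × 120 / 3600 = 60 concurrent virtual users.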
Additionally, required network bandwidth per user type is displayed. This helps
you to check the network infrastructure for bottlenecks. Required bandwidth is
calculated based on accepted baseline results.
You can define multiple workload models in your load test project and save
them for later use, but only one workload model can be active at a time.
Accepted baseline results are associated with workload models—if you copy or
rename a workload model, the accepted baseline results will be copied and
renamed accordingly.
Defining Workload
Procedure To specify the workload of a load test:
1 Click the Adjust Workload button on the SilkPerformer Workflow bar.
2 Select the workload model that most closely meets your needs (the All
Day workload model is illustrated in this tutorial).
3 Specify simulation times for the load test. Depending on the workload
model you selected, you can adjust the duration of each phase of your
load test. During the Increasing phase, workload increases gradually to
the specified maximum number of virtual users. In the Steady State
phase, all virtual users run. In the Decreasing phase, workload decreases
gradually. The Warm-up phase specifies the time at the beginning of a
load test during which measurements are not factored into results
calculations. The Measurement phase restricts the time span during
which measurements are taken for results calculation.
4 Click Next.
Note Once a load test begins, you can change the number of virtual
users for intervals that have not yet begun. However, you cannot
exceed the maximum number of virtual users specified for the load
test.
5 Click OK.
6 The Workload Configuration dialog appears, where you can check the
previously entered values. The diagram at the top of the dialog is a
graphical representation of the specified workload model. The diagram
uses the data of the user group selected in the workload list.
7 Use the workload list to configure the user groups that will be run in your
test scripts. In the User Type area, select the user types that you wish to
run in your test. All of the user types selected prior to the baseline test are
listed here.
8 In the Max Vusers column, specify the number of virtual users to be run
for each user type.
9 Check the TrueLog On Error option if you want SilkPerformer to
generate TrueLog files for transactions that contain errors.
10 In the Load Test Description area, optionally enter a description for the
load test. This feature is provided only for your project management
needs.
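The phases described above amount to a piecewise-linear ramp of active users over time. A sketch, assuming linear ramp-up and ramp-down (SilkPerformer's exact scheduling of individual users may differ):

```python
def active_users(t, increasing, steady, decreasing, max_users):
    """Number of active virtual users t seconds into the load test."""
    if t < 0:
        return 0
    if t < increasing:                  # Increasing phase: ramp up
        return max_users * t / increasing
    if t < increasing + steady:         # Steady State phase: all users run
        return max_users
    end = increasing + steady + decreasing
    if t < end:                         # Decreasing phase: ramp down
        return max_users * (end - t) / decreasing
    return 0

# 300 s ramp-up, 600 s steady state, 300 s ramp-down, 100 virtual users
print(active_users(150, 300, 600, 300, 100))  # 50.0 (halfway up the ramp)
print(active_users(450, 300, 600, 300, 100))  # 100
```

The Warm-up and Measurement settings do not change this curve; they only control which part of the timeline contributes to the results calculation.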
Introduction This tutorial explains how to run and monitor load tests using SilkPerformer.
What you will learn This chapter contains the following sections:
• Overview
• Running a Load Test
• Monitoring a Test
• Monitoring a Server
Overview
Running tests The next step in conducting a SilkPerformer load test is to run a full load test.
Multiple virtual users run the test script(s) against the target server. A
large load test requires an appropriate testing environment on the local
area network, including a full complement of agent computers to host
the virtual users.
It is essential to carefully set options for the appropriate test type, to accurately
define workloads, and to enable generation of the test results that will be needed
to assess the performance of the server. The logging option, however, should
be disabled to prevent interference with load test results.
Comprehensive information is provided to testers while load tests run. This
includes real-time information about agent computers, virtual users, and
transactions as they are conducted. In addition, real-time monitoring of the
target server is available in graphical form.
Monitoring tests Graphical displays and full textual reporting of activity on both the client side
and the server side offer easily understandable monitoring of test progress.
Comprehensive overview information about agent computers and virtual users
is available directly from the workbench where tests are conducted. There is full
control over the level of information detail offered—from a global view of the
progress of all agent computers in a test down to exhaustive detail regarding the
transactions conducted by individual virtual users. Progress information for each
agent and user is available in many categories. Run-time details for each user
include customizable, color-coded readouts on transactions, timers, functions,
and errors as they happen.
Monitoring servers In addition, real-time monitoring of the performance of the target server is
available in graphical form. Charts can display the most relevant performance
information from a comprehensive collection of the most commonly used Web
servers, application servers, database servers, and operating systems in use
today. Multiple charts can be open at the same time, and these can be juxtaposed
to provide the most relevant comparisons and contrasts. A tree-view editor
allows elements from any data source to be combined in the charts. Performance
information from the client application—for example, response times—can
easily be placed in the same chart as performance data from the server. This
enables a direct visual comparison to be made, so that you can see directly how
shortcomings on the server influence client behavior.
Note Clicking the Connect button allows you to initialize the agent
connection and then start the test manually from the monitor view by
clicking the Start all button.
Monitoring a Test
Procedure To monitor all agent computers:
1 While your load test runs, view progress in the Monitor window.
2 View information about the progress of agent computers and user groups
in the top view window. Among the comprehensive statistics that are
available are status of particular agents, percentages of tests completed
on those agents, and number of executed transactions.
3 Click Update to have your changes reflected in the current load test.
Monitoring a Server
Performance Explorer is the primary tool for viewing load test results. A vast
array of graphic facilities allows both real-time monitoring of the target server
while tests run and exhaustive analysis of results once tests are complete.
Exploring test results is made easy by a workflow bar with a click-through user
interface that offers enhanced drag-and-drop functionality.
In real-time monitoring, live charts provide a customizable display of the most
relevant performance information from the target server. Monitoring is available
for a comprehensive collection of the most widely used Web servers,
application servers, and database servers—across most operating systems.
Multiple charts can be open at the same time, so that, for example, a tester can
watch a graphic display of Web server performance and operating system
performance simultaneously. A tree-view editor with drag-and-drop
functionality allows elements from any data source to be combined in charts.
After a test, the performance of the target server can be charted from both the
client side and the server side. Response time measurements display the client
perspective, while throughput data offers server-side perspective. Charts and
graphs are fully customizable, and they can contain as many or as few of the
measurements taken during tests as are required. Multiple charts, using
information from one or different tests, can be opened at once to facilitate
contrast/compare operations. Templates for the most typical test scenarios
(Web, Database, IIOP) are provided, and these default charts can be populated
easily and quickly with data the tester requires. Here also, drag-and-drop
functionality enables chart elements to be combined from any data source.
Information on client response times and server performance can be placed in a
single chart, so that you can see directly how server performance affects client
behavior.
Because you specified that monitoring start automatically in the “Setting Up a
Monitoring Template” tutorial, Performance Explorer launches and displays the
template you customized in the “Confirming a Baseline” tutorial. Monitoring
begins and ends automatically along with the load test.
Monitor reports automatically begin writing .tsd files when load tests begin and
automatically stop writing when load tests end.
Introduction This tutorial explains how to analyze load test results using SilkPerformer.
What you will learn This chapter contains the following sections:
• Overview
• Working with TrueLog On Error
• Viewing an Overview Report
• Viewing a Graph
Overview
TrueLog On Error TrueLog On Error files provide complete histories of erroneous transactions
uncovered during load tests—enabling you to drill down through real content to
analyze error conditions. TrueLog On Error files maintain histories of all client
requests and server responses. Because they present errors in the context of the
sessions within which they occur and are closely integrated with test scripts,
TrueLog On Error files are uniquely suited for root-cause analysis of system and
application faults.
Overview Reports Once a load test is complete, Performance Explorer provides an overview report
for the load test. These reports include the most important test results in tabular
and graphical form.
Graphs Performance Explorer offers a comprehensive array of graphic features for
displaying test results, primarily in user-defined graphs, with as many elements
as required. The results of different tests can be compared, and there are
extensive features for server monitoring.
7 Click the Find Next button to advance to the first error. Error messages
are displayed on the Info tab in the lower-right window. API nodes that
contain replay errors are tagged with red “X” marks in the tree view.
8 When testing Web applications, errors often occur one or two steps
before they manifest themselves in page content as non-loading images,
error messages, etc. Navigate between pages by selecting them in the tree
view, or navigate between errors by clicking Find Next and Find
Previous.
Detailed Web page statistics show exact response times for each individual Web
page component—allowing you to easily pinpoint the root causes of errors and
slow page downloads.
Detailed Web page drill down results include the following data for each page
component:
• DNS lookup time
• Connection time
• SSL handshake time
• Send request time
• Server busy time
• Response receive time
• Cache statistics
Overview reports are organized into sections, including:
• Custom charts
• Custom tables
• Detailed charts
• General information
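The page-component timings listed above sum to the page's total download time, so the slowest component points directly at the bottleneck. A sketch with invented timings for a single page:

```python
# Hypothetical per-component timings for one page, in seconds
timings = {
    "DNS lookup time": 0.02,
    "Connection time": 0.05,
    "SSL handshake time": 0.11,
    "Send request time": 0.01,
    "Server busy time": 1.30,
    "Response receive time": 0.40,
}

total = sum(timings.values())             # total page download time
bottleneck = max(timings, key=timings.get)  # dominant component
print(f"total page time: {total:.2f} s, dominated by: {bottleneck}")
```

Here the server busy time dominates, which would point to a server-side cause rather than a network problem.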
Overview reports include many predefined text areas. You can edit these text
areas based on your needs. Use the click here to edit text links to change
text, and save the results as templates for later use.
5 Click the summary tables tab to advance to the summary tables section of
the report. Summary tables contain summary measurements in tabular
form (i.e., aggregate measurements for all virtual users). The first table
provides general information, such as the number of transactions that
were executed and the number of errors that occurred. All the following
tables provide summary information relevant to the application type
under test.
6 Click the ranking tab to advance to the ranking section of the report. The
ranking section ranks pages in order of slowest page download time (i.e.,
those pages with the longest page time are listed first).
7 Click the user types tab to advance to the user types section of the report.
This section provides detailed measurements for each user type in tabular
form. The measurements include transaction response times, individual
timers, counters, and response time and throughput measurements
related to the application type under test. In addition, errors and warnings
for user groups are listed.
8 Click a user type link to advance further down the report, to user type
profile settings and transaction response time measurements for
individual user types and individual pages.
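The ranking section described above is a simple descending sort on page download time. Conceptually (page names and times are invented for illustration):

```python
# Hypothetical (page, page time in seconds) measurements from a load test
pages = [
    ("Home", 0.8),
    ("Search Results", 2.4),
    ("Checkout", 1.6),
]

# Slowest pages first, as in the ranking section of the overview report
ranking = sorted(pages, key=lambda p: p[1], reverse=True)
print(ranking[0][0])  # Search Results
```

Reading the ranking top-down gives you the pages whose optimization would pay off most.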
Custom charts Custom charts can be edited and added to overview reports. You can save your
changes as templates to be displayed for each summary report.
Procedure To add a new custom chart to the overview report:
1 From the custom charts section of the overview report, click the Click
here to customize this section link.
2 Select the chart you wish to add. Performance Explorer then inserts the
chart you have selected into the custom charts section of the overview
report.
3 The selected template will then be used for creating new overview
reports for all other projects.
Viewing a Graph
Procedure To view results in a graph:
1 Click the Explore Results button in the SilkPerformer Workflow bar.
2 The Workflow - Explore Results dialog appears.
3 Click the Performance Explorer button.
4 Performance Explorer opens.
5 In the Performance Explorer Workflow bar, click the Select Graph
button.
The Result Source dialog box appears.
6 In the File field, specify the .tsd file in which you saved the recorded
monitoring data that you wish to view. You may also click the button to
the right of the field to browse to the Result File.
7 Click Next.
The Template dialog box appears.
8 Select the measures you wish to view and click Finish.
A graph appears that displays the measures you selected.
9 Drag any other relevant timers from the tree view into the graph. Server-
side monitoring results can even be selected for display alongside load
test results.