Using Rational
Performance Tester
Version 7
Introducing Rational Performance
Tester
David Chadwick
Chip Davis
Mark Dunn
Ernest Jessee
Andre Kofaldt
Kevin Mooney
Robert Nicolas
Ashish Patel
Jack Reinstrom
Kent Siefkes
Pat Silva
Susann Ulrich
Winnie Yeung
ibm.com/redbooks
December 2007
SG24-7391-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xxvii.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxx
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Overview of IBM Rational Performance Tester . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 The performance testing problem space. . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Need for a performance testing tool . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 Goals of IBM Rational Performance Tester . . . . . . . . . . . . . . . . . . . . 6
1.2 Architecture of IBM Rational Performance Tester. . . . . . . . . . . . . . . . . . . . 7
1.2.1 Eclipse and Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.2 TPTP as a tool framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.3 Building a complete tool chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.4 Recording in a distributed environment. . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.5 Scalable distributed playback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.6 Test run monitoring and data collection . . . . . . . . . . . . . . . . . . . . . . 10
1.2.7 Post run analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
10.5.1 HTTP memory usage characteristics of RPT 6.1.1 (and 6.1) . . . . 367
10.5.2 HTTP memory usage characteristics of RPT 6.1.2 . . . . . . . . . . . . 367
10.5.3 Comparison of Plants41k driver memory usage for RPT 6.1.2 vs. 6.1.1 . . . 367
10.5.4 Memory reduction of RPT 6.1.2 vs. 6.1.1 for three workloads . . . 368
10.6 Intermediate results for memory footprint measurements . . . . . . . . . . . 369
10.6.1 TradeSVTApp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10.6.2 InternetShoppingCart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Figures
8-35 Response vs. Time summary correlated with the % Processor Time resource counter . . . 250
8-36 Drag-and-drop a resource counter from the Performance Test Run view . . . 251
8-37 Resources report in a performance report . . . . . . . . . . . . . . . . . . . . . . 252
9-1 Distributed environment complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
9-2 Transaction breakdown. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
9-3 Correlating ARM transactions with their sub-transactions . . . . . . . . . . . 260
9-4 Tivoli Just-In-Time Instrumentation Overview . . . . . . . . . . . . . . . . . . . . . 263
9-5 IBM Application Server Instrumenter graphical user interface . . . . . . . . 271
9-6 Start menu for IBM Software Development Platform . . . . . . . . . . . . . . . 271
9-7 IBM WebSphere Application Server instrumentation parameters . . . . . . 272
9-8 BEA WebLogic instrumentation parameters . . . . . . . . . . . . . . . . . . . . . . 273
9-9 Remote instrumentation connection parameters . . . . . . . . . . . . . . . . . . 274
9-10 Remote instrumentation advanced configuration . . . . . . . . . . . . . . . . . 274
9-11 Application Server Instrumenter Help Contents . . . . . . . . . . . . . . . . . . 275
9-12 Application Server Instrumenter context-sensitive help (F1). . . . . . . . . 276
9-13 Environment system architecture using the Data Collection Infrastructure . . . 279
9-14 Data collection infrastructure for a distributed environment . . . . . . . . . 283
9-15 Dynamic discovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
9-16 Response time breakdown configuration in a performance test . . . . . . 286
9-17 Response time breakdown configuration in a performance schedule. . 287
9-18 Profile launch configuration action . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
9-19 Profile launch configuration for J2EE application: Host page . . . . . . . . 290
9-20 Profile launch configuration for J2EE application: Monitor page . . . . . . 291
9-21 J2EE performance analysis components to monitor . . . . . . . . . . . . . . . 291
9-22 J2EE performance analysis filters to apply . . . . . . . . . . . . . . . . . . . . . . 292
9-23 J2EE performance analysis sampling options . . . . . . . . . . . . . . . . . . . 292
9-24 Profiling Monitor for J2EE Performance Analysis . . . . . . . . . . . . . . . . . 294
9-25 Page Performance report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
9-26 Page Element Selection Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
9-27 Response Time Breakdown Statistics view . . . . . . . . . . . . . . . . . . . . . 297
9-28 Response Time Breakdown Statistics: Tree layout. . . . . . . . . . . . . . . . 298
9-29 Graphical drill-down reports for Response Time Breakdown . . . . . . . . 300
9-30 Component level drill-down report . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
9-31 UML Sequence Diagram view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
9-32 View linking service between UML sequence diagram and statistical or log views . . . 303
9-33 Execution Statistics view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
9-34 Execution Statistics view right-click context menu . . . . . . . . . . . . . . . . 305
9-35 Method Invocation Details view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
9-36 DataQuery Web service API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Tables
10-18 Incremental virtual user memory footprint from individual runs. . . . . . 374
10-19 Initial calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
10-20 Final calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
12-1 Citrix recorder control buttons . . . . . . . . . . . . . . . . . . . . . . . . . 412
14-1 The story board for the day in the life of a bank . . . . . . . . . . . . . . . . . . 475
14-2 Sample of workload. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
14-3 Details of hit pages per second for each channel . . . . . . . . . . . . . . . . . 479
A-1 Virtual Tester Pack Sizes - Floating Licenses . . . . . . . . . . . . . . . . . . . . 527
A-2 Performance Tester Product Extensions . . . . . . . . . . . . . . . . . . . . . . . . 527
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
SAP NetWeaver, SAP R/3, SAP, and SAP logos are trademarks or registered trademarks of SAP AG in
Germany and in several other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation
and/or its affiliates.
Adobe, Acrobat, and Portable Document Format (PDF) are either registered trademarks or trademarks of
Adobe Systems Incorporated in the United States, other countries, or both.
EJB, Java, JavaScript, JDBC, JDK, JNI, JVM, J2EE, J2ME, J2SE, Portmapper, RSM, Solaris, Sun, and all
Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or
both.
ActiveX, Excel, Internet Explorer, Microsoft, MSDN, Windows Server, Windows, and the Windows logo are
trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This book describes how performance testing fits into the overall process of
building enterprise information systems. We discuss the value of performance
testing and its benefits in ensuring the availability, robustness, and
responsiveness of your information systems that fill critical roles for your
enterprise.
This book is not an exhaustive reference manual for the Rational Performance
Tester product. The product has reference material built into the help system, which can be accessed through the Help menu or by pressing F1 to get context-sensitive help for the user interface item that currently has focus.
Target audience
The target audience for this book includes:
Managers, system architects, systems engineers, and development staff
responsible for designing, building, and validating the operation of enterprise
systems
Rational and other software brand services and sales representatives and key
partners who have to understand and utilize Rational Performance Tester.
Staff members who have to use Rational Performance Tester to size
hardware systems and consultants who have to validate system operation
under projected production conditions
David Chadwick is the Level 3 Technical Support Manager in the IBM Rational
Performance Tester development organization in Raleigh. He also serves as the
product champion and SWAT team member responsible for helping internal organizations and customers adopt the Performance Tester tool and associated performance testing processes. He has spent over 20 years working in the areas of computer performance and related tool development. This is his first IBM Redbooks project, which he conceived and for which he gathered an all-star cast of co-authors. David holds a Master of Science in Electrical Engineering specializing in
Computer Engineering from Stanford University and two Bachelor of Science
degrees from North Carolina State University in Electrical Engineering and
Computer Science.
Chip Davis is the Modular Service Offering (MSO) development lead for Rational
Services in Atlanta, Georgia. He has been involved in providing pre- and
post-sales services for over 6 years and was an early adopter, training material
developer, and trainer for advanced users of the Performance Tester product.
Chip has a Bachelor of Electrical Engineering degree from the Georgia Institute of Technology.
Ashish Patel is a Software Architect and Development Lead working at the IBM
Toronto Software Lab in Canada. He has over 7 years of industry experience in
software architecture and development and 4 years of business development and entrepreneurial experience. Ashish participated in IBM’s premier internship program, Extreme Blue™, where he co-founded the IBM Performance
Optimization Toolkit (IPOT). He has worked with IBM for 2 years on numerous
start-up products in the software development and test area. Prior to joining IBM,
he created software solutions for one of the largest integrated petroleum
companies in the world and one of Canada’s top providers of energy. He also
founded a privately-held software development and consultancy firm, where he
was appointed President and Chairman of the corporation, and which he
operated for 4 years. Ashish holds a Computer Engineering degree from the
University of Alberta.
Jack Reinstrom is the System Performance Test Manager for the System
Verification Testing team for the IBM Rational server-based tools in Lexington,
Massachusetts. He and his team have adopted Performance Tester to perform
all of their testing. Jack has led the effort to use custom Java coding to drive their
tool APIs when not using standard client-server protocols.
Kent Siefkes is the Second Line Development Manager for the Performance
Tester development team in Raleigh. He has been architecting and managing the
development of performance testing tools for over 20 years.
Thanks to those who provided technical review comments to the initial draft:
Allan Wagner, Barry Graham, David Hodges, Fariz Saracevic, Karline Vilme, and
Tom Arman.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Part 1. Understanding Rational Performance Tester
The theme of this part is to explain how each feature or function works in RPT. This part covers the basic usage flow that takes the tool user from recording a user scenario, through test execution, to analyzing the test results.
Chapter 1. Introduction
This chapter starts off describing how performance testing fits into the overall
process of building enterprise information systems. We discuss the value of
performance testing and its benefits in ensuring the availability, robustness, and
responsiveness of your information systems that fill critical roles for your
enterprise. From this premise, we develop the requirements for a performance
testing tool that can aid in filling this value proposition.
These requirements from the problem space are then combined with some basic
business and marketing requirements to lead us to a product architecture for the
IBM Rational Performance Tester tool. We also walk through some of the choices that we made, including building on top of an open source platform and choosing Java as the programming language to permit ubiquitous platform support.
You might think that these examples are extreme or contrived, but in fact, these
cases are typical and happen in every industry. Having your enterprise systems
operating at peak efficiency with ample capacity is a strategic advantage, and in some cases a bare minimum requirement to compete in certain market spaces.
Let us examine another very common scenario that occurs frequently in many
companies facing the challenges of a global economy and the impact of
competition. The XYZ Widget company has made a strategic decision to move its
manufacturing to a plant in the Pacific basin to save on labor costs and to import
some raw materials from South America used in the widget manufacturing
process. This directive, sought by the CEO and approved with overwhelming support by the board of directors, must be implemented in the next six months or the company risks badly missing its profit targets for the fiscal year. The move of the manufacturing plant has gone well, even beyond expectations, because a partner was found with the capacity to implement a new widget manufacturing line in only three months. However, the IT shop is struggling to make the significant
changes required to the ordering, inventory, and distribution management
system. They are forced to bring in consultants to radically change their system
and are getting ready to put the modified system into production. At the end of
the month, the new system will be cut over to production, the existing records
updated to the new formats, and all of the new capabilities to control and track
the orders and inventory across continents will be activated. There will be no
going back to the previous system. If the new system fails, the company's
operation will grind to a halt. What is it worth to test this new system under
production conditions before the cutover?
There have been performance testing tools available in the general marketplace
since the advent of minicomputers in the early 1980s. Before that there were
proprietary tools developed by computer vendors for internal purposes back as
far as the early 1970s and maybe even earlier. However, in today's environment
every company has a need to test their enterprise systems before putting them
into production use. Equipping and staffing a system performance testing
laboratory is now an essential stage in enterprise system development for any IT
shop. A performance testing tool capable of full-capacity testing of your production system is a critical asset in the performance test lab.
The key for any tool development is to start with clear goals and tie every
decision and technology choice back to those goals. These goals drove many
product decisions, from the order in which features were built to how the tool-generated artifacts are stored and persisted.
Now that the number of data points has moved from 100,000 to 100 million
individual measurements, a design imperative is to reduce the data in real time
during the test in a distributed fashion. This principle, coupled with the increased complexity of the tested system architectures, yields a need for complete flexibility in reporting and correlating a variety of data sources. Most of
these analyses are dependent on the user’s system architecture and therefore
must be custom designed to show the results needed for that system.
Also, there has to be a way for business partners and independent software vendors to add incremental functionality to the basic tool as developed by the core development team. This gives other internal business units that have specific needs the ability to extend the tool and take advantage of the core product functionality. If they had to wait for the tool development team to build a solution specific to their needs, it might never be built, or at least not soon enough. The tool includes an extensibility interface for this purpose. Several additional application-specific solutions have already been implemented using this interface.
The tool has a capture, edit, replay, and analyze usage pattern. Many of the
underlying transformational steps in the process are executed automatically
without user intervention. All steps of the testing process should be done
automatically when a change in an earlier part of the tool chain is recognized. For
example, when an edit is made to the test, it is automatically consistency-checked on saving. Then, when the test is first executed, it is automatically transformed into Java code that is compiled, archived, and shipped to the test agent for execution, along with any supporting data files.
The test-agent-based playback architecture, in which each test agent runs independently of the other test agents, is very powerful. The scalability of the playback complex is completely linear with the number of test agents employed, because there is no real-time coupling between test agents, or between a test agent and the workbench. The resulting data is collected locally on each test agent during the run, summarized into a periodic statistical update packet, and sent back to the workbench to provide near real-time data to the tool user looking at the workbench. After the test run completes, any additional detailed test log data is transferred back only after the performance test data has been stored in the project. If the test log data is too large for the workbench to handle, the performance test run results still remain available and are completely valid. By adjusting the period of the statistical update, the load on the workbench from dozens of test agents can be reduced to a manageable level.
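To make the idea of a periodic statistical update packet concrete, the following is a minimal sketch, not RPT's actual implementation (class, field, and packet format are illustrative assumptions), of how a test agent could accumulate response times locally and emit only a compact summary each reporting interval instead of streaming every individual measurement:

// Illustrative only: accumulates response times on the agent and is drained
// once per reporting interval into a small summary "packet".
public class IntervalStats {
    private long count;
    private long totalMillis;
    private long minMillis = Long.MAX_VALUE;
    private long maxMillis = Long.MIN_VALUE;

    // Called for every completed page or request on this agent.
    public synchronized void record(long responseMillis) {
        count++;
        totalMillis += responseMillis;
        minMillis = Math.min(minMillis, responseMillis);
        maxMillis = Math.max(maxMillis, responseMillis);
    }

    // Called once per statistics interval; returns the summary and resets the
    // accumulators so only a few numbers travel back to the workbench.
    public synchronized String drainToPacket() {
        String packet = (count == 0)
            ? "count=0"
            : String.format("count=%d avg=%.1fms min=%dms max=%dms",
                            count, (double) totalMillis / count, minMillis, maxMillis);
        count = 0;
        totalMillis = 0;
        minMillis = Long.MAX_VALUE;
        maxMillis = Long.MIN_VALUE;
        return packet;
    }
}

A longer statistics interval means fewer packets per agent, which is why lengthening the interval keeps dozens of agents from overwhelming the workbench.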
The other analysis that has to be done typically is to find the bottleneck or slow
components of a complex system. There are two sets of measurements that help
determine where the system is limited. The first class of measurements is
computer resource utilization. These hardware and software metrics permit you
to understand the overall utilization of hardware and operating system resources.
When the major bottlenecks from the system have been removed, and several of
the subsystems are operating at maximum capacity, you have an efficiently tuned
and balanced system.
The other set of measurements is the breakdown of all the critical transactions of your application into their constituent parts. Finding the segments of time that are the largest components of the overall response times yields the bottlenecks of the system. These are the areas where system tuning and application redesign can improve the overall system performance.
In this chapter we describe the performance testing process and the artifacts that
are generated as you complete each of the process steps. Finally we provide a
set of example artifacts that taken together provide a basis for understanding
how the performance testing process can be carried out.
The process of performance testing has not changed significantly since these
earlier system types were being tested. However, the complexities of the system
design with more distributed intelligent hardware components and many more
interconnected software subsystems yield more challenges in the analysis and
tuning parts of the process. On current systems, performance testing should be
done iteratively and often during the system design and implementation. Tests
should be performed during implementation of critical subsystems, during their
integration together into a complete system, and finally under full capacity work
loads before being deployed into production.
The basic steps in the performance testing process are listed here. Each step will
be explained in the sections that follow in terms of the actions taken and results
of the step.
Determine the system performance questions you wish to answer.
Characterize the workload that you wish to apply to the system.
Identify the important measurements to make within the applied workload.
Establish success criteria for the measurements to be taken.
Design the modeled workload including elements of variation.
Build the modeled workload elements and validate each at the various stages
of development (single, looped, parallel, and loaded execution modes).
Construct workload definitions for each of the experiment load levels to
collect your workload measurements.
Run the test and monitor the system activities to make sure the test is running
properly.
Analyze measurement results and perform system tuning activities as
necessary to improve the results and repeat test runs as necessary.
Publish the analysis of the measurements to answer the established system
performance questions.
There are some fairly standard goals for performance tests that you can use to
start from to elicit a solid direction for the performance testing project. Often several of these are the results expected by various stakeholders. You have to
explicitly state the goals that you are working towards as a way to set both
positive expectations of what goals you are planning to achieve as well as setting
negative expectations of the goals you are not attempting to meet.
By writing down the performance testing goals (usually as part of the workload
specification) and reviewing them with the stakeholders, you can begin your
testing project with confidence that you can complete the project with a
satisfactory set of results and conclusions. Without this agreement, the pressure
and scrutiny of high profile stakeholders can make the project difficult if not
impossible to complete successfully.
Performance testing is a very high profile and stressful activity as the decision
making based on it can have major business implications. By having everyone in
agreement before the test results are reported, there will be less opportunity to
second guess the testing goals, approach, work load definition, or chosen
metrics.
The workload model may be derived from the number of business transactions
expected during the month-end or quarter-end rush. It might also be based on
the late afternoon batch report run interval that overlays an otherwise uniform
transactional workload. It could also be at lunch hour on the Friday after the
Thanksgiving holiday for a seasonal e-commerce Web site.
In most cases, you can think of a workload model as a matrix of transactions (or
business scenarios) versus frequency of execution spread across the number of
users accessing the system simultaneously. There are many forms used to
describe this quantity of work. One example is shown in Table 2-1, where the
simplified version of the system’s load is reduced to two types of user
transactions and two batch reports that are run in the busy hour.
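As an illustration of such a matrix, the transaction names, user counts, and rates below are hypothetical and are not taken from Table 2-1, a busy-hour workload model can be captured as simply as a scenario-to-rate mapping:

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical busy-hour workload model: two interactive transaction types
// plus two batch reports, each with concurrent users and a target rate.
public class WorkloadModel {
    record Entry(int concurrentUsers, double executionsPerHour) {}

    public static void main(String[] args) {
        Map<String, Entry> busyHour = new LinkedHashMap<>();
        busyHour.put("Submit order",              new Entry(400, 6_000));
        busyHour.put("Check order status",        new Entry(150, 2_500));
        busyHour.put("Inventory batch report",    new Entry(1, 2));
        busyHour.put("Distribution batch report", new Entry(1, 1));

        busyHour.forEach((txn, e) -> System.out.printf(
            "%-28s users=%-4d rate=%.0f/hour%n",
            txn, e.concurrentUsers(), e.executionsPerHour()));
    }
}

However the matrix is expressed, the point is the same: each row pairs a scenario with the concurrency and frequency it must sustain during the interval being modeled.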
The impact on the system across all four components is measured and analyzed.
Depending on the system design, any of the four components could cause
significant interactive or batch reporting degradation beyond the minimum
acceptable level of performance expected.
It is very important that the tester not set the performance requirements in
isolation, make system measurements, and present a set of performance results
that recommends slipping the deployment date or a total system redesign. This is
the ultimate recipe for a disastrous project result. The requirements review has to
precede any system performance measurement to ensure their acceptance.
Having to rejustify the workload definition, the measurement points, or the accuracy of the tool software is among the project distractions that can derail a performance test project. The performance requirements should be agreed to
and socialized properly among the stakeholders before the measurement data is
available. This will keep the focus of the project on tuning the system and making
sure the system performs at an acceptable level.
On the other hand, if the submit order and order completion screens each take
more than 5 seconds to appear, then most customers will abandon their
shopping cart without purchasing, cancel their orders, or not return for
subsequent purchases. These are more customer-oriented requirements that in
fact dictate the success of the online shopping site system deployment.
The number of user scenarios that make up the workload model is very
subjective and can have a major impact on how long the performance testing
project takes. Simplifying the user scenarios and reducing their number have
many benefits. Many times a revision of the application requires that all test
scripts be recaptured. This can occur because the workflow changes in the
sequencing of user screens, user input data, or even invisible underlying
application parameter data passed back and forth between browser and server.
What may seem like a trivial application change from a user flow perspective may
cause the test scripts to stop working properly. If you have only ten user
scenarios and a highly productive tool such as Performance Tester, an
application version change can be accommodated in a half day of recapture,
minor editing for input data variation, and test debugging. If you have 20 or more
user scenarios, this same event could cost the project a day or more of delay.
Also the number of alternate flows, conditional execution, and more extensive
data variation adds to the overall time to build and maintain the set of tests that
implement the workload model. Keeping the user scenarios basic and central to
the important business workflows of the system is an important concept to obtain
valuable, timely performance results for a reasonable project cost. Once you
accept the need to abstract the workload definition to a simplified version of the
true user behavior (with virtually equivalent system performance), your
performance testing projects will proceed more quickly and cost effectively.
In some cases like the browsing of an online commerce site, repeatedly selecting
the same five items to view in the online catalog may result in the Web server
caching images and even product details. However by adding random selection
of the product viewed, the system will experience a much heavier load on the
non-Web server subsystems such as an image server or a backend database
containing product details. This difference in the workload may impact order
processing times as well as provide a more realistic estimate on the time to view
a product detail page. This may be the critical customer requirement that
determines the success of the Web site.
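A datapool or a small piece of custom code accomplishes this kind of variation; the sketch below is a generic Java illustration of the idea (the product IDs are hypothetical, and this is not the RPT datapool or custom code API):

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Generic illustration of varying the product viewed on each iteration so the
// server cannot satisfy every virtual user from the same cached entries.
public class ProductPicker {
    private static final List<String> PRODUCT_IDS =
            List.of("P-1001", "P-1002", "P-1003", "P-2001", "P-3005"); // hypothetical IDs

    public static String nextProductId() {
        return PRODUCT_IDS.get(ThreadLocalRandom.current().nextInt(PRODUCT_IDS.size()));
    }
}

Substituting the recorded product ID with a value chosen this way (or drawn from a datapool) spreads requests across the catalog and exercises the image server and backend database rather than just the Web server cache.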
by the server, the data correlation required by the server application is not
automatically handled, a data dependency exists in the user scenario that
requires data variation, or some Java applet operations must be programmed to
permit the session to run properly. An experienced user of the performance
testing tool should be able to handle any of these four cases and tell them apart.
Often there is a requirement for building a subset of the full workload model to
provide a means of tuning one or more of the subsystems, such as tuning the
capacity of a single application server normally used in a clustered environment
with a Web server front end. This can be accomplished by only having a single
application server active in the cluster or by redirecting the test connections
directly to the application server (bypassing the Web server and load balancer).
Some subsystem tuning and testing involves several subsystems and has to be exercised before the full load tests. One example of this is the database
connection pooling to the backend database. By exercising a heavy load of
database intensive operations, possibly without including the entire load of
non-database related workload, the database connections from the application
servers to the database can be sized properly.
Doing this should ensure that the data is not too voluminous to analyze. First you
filter the data down to only the steady state response interval. Usually by looking
at the user load and the hit rate charts you can discover when a steady state was
reached. You can double check this against the page response time versus time
graphs to make sure the response times of interest have become stable. By
decomposing the transaction times into pages and then page elements that are
taking the most time to process, you can see which operations are those most
critical to tune. Taking a particular page element, you can then break down by
method call the request that you want to analyze from an application tuning
perspective. Depending on the frequency of calls as well as the time spent in
each call, you may want to forward this data to the development organization for
algorithm tuning.
A third type of experiment is used to verify the stability of the system intended for
continuous operation without down time. Many enterprise systems have to be
available 24 hours a day / 7 days a week with little if any regular system down
time. To accept a new software version for one of these systems, a 3-7 day test of
continuous operation at full system load is typical to ensure that the system is
ready to be put into production. For this test you set up the statistical results (at
the page level only) and operating system monitoring data to be sampled once
every 5 to 15 minutes. Test log data is completely turned off for this test. Running
this test may require that you write special logged messages into a data file every few minutes to verify that transactions are properly executing because
normal logging is turned off. As long as the number of counters being sampled in
the statistics model is only a few hundred, this mode should permit a several day
test run. If your test runs with very few errors, you can also run the test logging
set to errors only mode and determine the sampled number of users that you
need error data from to get an accurate picture of a failure mode when it occurs.
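One simple way to provide that evidence is a tiny heartbeat logger invoked from the test. The sketch below is generic Java rather than the RPT custom code API, and the file path is an assumption:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

// Minimal heartbeat logger: appends one timestamped line per call so that an
// endurance run with test logging disabled still leaves evidence that
// transactions are completing. The log path is illustrative.
public final class Heartbeat {
    private static final Path LOG = Path.of("C:/temp/endurance-heartbeat.log");

    public static void mark(String transaction, boolean passed) {
        String line = String.format("%s %s %s%n",
                Instant.now(), transaction, passed ? "OK" : "FAILED");
        try {
            Files.writeString(LOG, line,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // Never let heartbeat logging interfere with the test itself.
            System.err.println("heartbeat write failed: " + e.getMessage());
        }
    }
}

Calling something like Heartbeat.mark("Submit order", true) every few minutes from a small fraction of the virtual users is enough to confirm the run is still healthy without re-enabling the test log.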
If there is a concern about the negative impact of Java garbage collection (GC)
on the test agent, you can turn on the verbose GC logging (using the -verbosegc
-verbosegclog:C:\temp\gclog.log) and view the length of time spent doing
garbage collection. There are tools available through the IBM Support Assistant
to analyze these logs. In general, this should not be a problem, unless you are
running very close to the edge in heap size and doing frequent, unproductive
garbage collection cycles.
Once you have validated that the test agents are operating in a healthy operating
region, you should verify that the transactions are operating properly and the
steady state operating region has been identified and the report data filtered
down to only this data. Based on this region, you can export the performance
report and any other reports that have interesting data so that it can be included
in a formal final report of your findings. Candidates include the transaction report
(if those have been defined in the tests), the verification point report, and the
HTTP percentile report (assuming that test log at the primary action level has
been kept). The performance report by itself basically contains all the data
necessary to show what the expected page performance of the system is.
Statistically speaking, response times are best reported as an 85th, 90th, or 95th percentile, while transaction throughput is best reported as average completion times or as system rates per second or per hour, depending on execution time.
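As a reminder of what a percentile report is saying, here is a minimal, illustrative sketch (not an RPT API) of computing the 90th and 95th percentiles from a set of measured response times using the nearest-rank method:

import java.util.Arrays;

// Nearest-rank percentile: the value below which roughly p percent of the
// measured response times fall. Illustrative helper only.
public class Percentiles {
    static double percentile(double[] responseTimesMillis, double p) {
        double[] sorted = responseTimesMillis.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length); // 1-based rank
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        double[] samples = {120, 135, 150, 180, 210, 250, 300, 420, 900, 1500};
        System.out.printf("90th percentile: %.0f ms%n", percentile(samples, 90));
        System.out.printf("95th percentile: %.0f ms%n", percentile(samples, 95));
    }
}

Reporting the 90th or 95th percentile this way tells stakeholders what almost all users experience, which is usually more meaningful than an average that a few very slow outliers can distort.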
It is important when planning your performance testing project to allow time in the
schedule for sizing the application server, testing the load balancing across the
cluster, tuning the database connection pool and maybe even some database
table indices or optimizing certain application segments making multiple
database queries in sequence to satisfy a user request. Leave time for possible
work on heap fragmentation or memory leak issues in the application server
testing if the application has not been stress tested before or if this is a significant
The report should also detail out the complete system configuration including
operating system tunables, disk volume layouts, size and characterization of the
test database, network topology with firewall details, load balancers, and
complete specifications of the hardware and software versions present on each
of the systems. Application versions are critical to later understanding the use of
these results as a baseline for future measurements or capacity estimates based
on these test results. Performance Tester reports can all be exported to HTML
which can then be directly cut and pasted into a final report. This makes getting
the raw results and operating system resource data into the final report a very
efficient operation.
Probably the most important part of the final report is the enumeration of the
testing goals with the associated success criteria as compared with the final test
results. Whether the system has passed the test goals should be stated explicitly and summarized. Putting these project findings in an executive summary
at the beginning of the report is recommended with the justification coming from
the main body of the report. The final two sections describe the conclusions
reached by the project and recommendations for future work.
As with any significant project, you should hold a post-mortem discussion with
the project team members and gather both the good practices to continue and the troubling ones that have to be adjusted for a better experience next time. This
should not be part of the final report as it is an organizational and process asset
rather than pertaining to the project content.
Each has its place in the project’s movement from testing requirements to the
ultimate analysis document containing summarized performance
characterization and perhaps even recommendations for follow-on projects.
The workload specification along with the test plan constitute the statement of
work for the performance test. The workload specification contains the content
and scope of the test while the test plan discusses resources, schedule, and
tasks required to obtain results defined in the workload specification. The test
tool assets, test results, and performance analysis document can be considered
the valuable output artifacts from the performance test project.
The test plan also contains a test interval for each experiment and the order in which the experiments should be performed. Together, the workload specification and the test plan allow the performance testing project to be sized and scheduled.
Probably the most important section of the performance analysis document is the
conclusions from the overall performance test project and the suggested next
steps drawn from the testing project. Normally, when the performance testing
project is done at the end of a system development and prior to system
deployment, the decision is whether or not the system is ready to be deployed.
The test results and their analysis should be organized and emphasized to
support the conclusions stated in the document.
If you record with Mozilla or Firefox, set the application to record as follows:
Start Performance Tester.
Select Window → Preferences, and expand Test.
Select HTTP Recording, and set Application to Record to Mozilla or Firefox.
Type the path to, or browse to, the browser executable.
If you record with another browser, set the application to record as follows:
Start Performance Tester.
Select Window → Preferences, and expand Test.
Select HTTP Recording, and set Application to Record to None.
Manually configure and load your browser. Consult the browser proxy
configuration documentation on how to do this.
If you are recording directly with Internet Explorer (that is, you are not using a
proxy), clear all fields in the Local Area Network (LAN) Settings window
(Figure 3-1).
– Click Advanced, and in the Proxy Settings window set the fields as shown in Table 3-1.
HTTP: Points to the proxy server and port (typically the same address and port as the secure proxy)
– The Proxy Settings are shown in Figure 3-3, with information specific to
your proxies.
If you do nothing, this warning is displayed with every browser action, and you
must repeatedly click Yes to continue. Performing the following procedure installs
the recorder certificate on the local host as a trusted authority and thereby
disables warnings from Internet Explorer during recording at secure Web sites.
This procedure does not affect other browsers that record from secure Web
sites—they will display warnings before every action.
3.5 Troubleshooting
Problems in recording usually involve the following areas:
The browser is configured incorrectly. Remember:
– Clear Bypass proxy server for local addresses
– Automatic configuration must not be enabled
– The Proxy Settings Exceptions list must be blank
Changes have been made to the browser during recording.
The Agent Controller is not running, or there are problems with it.
servicelog.log
To perform an HTTP recording, the Agent Controller must be running. If the
servicelog.log cannot be found in the correct location (example: C:\Program
Files\IBM\SDP70Shared\AgentController\config), then the Agent Controller is
not running and you will not be able to record. You will have to start the Agent
Controller before recording.
If the servicelog.log is in the correct location, but you still cannot record, then
you should review the entries in this file. Sometimes you can find the source of
the problem by reviewing this log file.
serviceconfig.xml
This file contains the parameters used when starting the Agent Controller. The
file should be set up correctly during the install process. However, sometimes you
will have to review this file or modify it to get the Agent Controller to work
successfully. You can modify this file by using an editor or by executing
setconfig.bat (or SetConfig.sh on Linux) in the ~\AgentController\bin
directory. With this command-line utility, you can reset the path to the Java
runtime, network access, and encryption settings for network communication to
the Agent Controller.
[Figure: HTTP recorder connection options. Direct connection: the browser (Internet Explorer) sends its HTTP traffic through the IE recorder to the Internet Web sites. HTTP proxy connection: Firefox or Mozilla has its Internet proxy settings modified to point at the recorder listening on port 1080, which forwards the traffic through an HTTP proxy server (Apache, Squid, etc.) to the Web sites.]
There are a few things that you may want to extract from the first two sections of
the recording (*.rec) file. If a problem occurs, the development team helping to
solve the problem will ask for the recorder version (see the <recorderversion>
line) and whether the recording is going through an HTTP proxy (see the <ProxyEnabled> line). The rest of the information will not be useful to most people.
Example 3-1 shows a typical example of the first two sections of a .REC file.
<TRACE>
We recommend that you always flush the cookie cache prior to recording to aid the proper construction of the cookie cache during playback. This may not be true for the file cache, however.
For example, if the Web site that you are visiting is frequently visited by the user
community and most images would be present in the user’s file cache, then you
should run through the user scenario prior to doing the actual recording to
pre-populate the browser file cache. Then when you record your scenario, the file
cache already contains most of the images and responds accordingly during the
recording. This behavior will cause a different traffic pattern to the server,
including the browser performing a conditional GET to the server to see if the
images have been updated and receiving a 304 Not Modified response but not
the actual image data.
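To make the 304 behavior concrete, here is a small, self-contained Java sketch (the URL is hypothetical) of the kind of conditional request a browser issues for an image it already has cached:

import java.net.HttpURLConnection;
import java.net.URL;

// Illustration of the conditional GET a browser sends for a cached image:
// if the server's copy has not changed, it answers 304 Not Modified and
// sends no image data, a much lighter load than a full 200 response.
public class ConditionalGet {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/images/logo.gif"); // hypothetical URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Pretend we cached the image a day ago, as a browser would.
        conn.setIfModifiedSince(System.currentTimeMillis() - 24L * 60 * 60 * 1000);
        int code = conn.getResponseCode();
        System.out.println(code == HttpURLConnection.HTTP_NOT_MODIFIED
                ? "304: cached copy is still valid, no body sent"
                : code + ": full response returned");
    }
}

Whether the virtual users see mostly 304 responses or full image downloads changes the traffic mix significantly, which is why pre-populating (or not) the browser cache before recording matters.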
If you truly want to measure the entire server response including the page
components that are not visible, then you should wait until the browser indicates
the server has returned all the data and is done. By not observing this tip, you
may get page content of page one mixed into page two, and neither page
response time accurately measures the real page response time. Because the
whole concept of an HTTP page is an artificial boundary condition not specified
in the protocol, you can help line up the user’s concept of a page with the
recorded user scenario steps by waiting so that successive page contents are
not interleaved.
If this logic is used and the run-time test content is generated using these
options, the protocol extension license token will be checked out during
multi-user playback for that environment. If the extension license token is not
present (or not available because another user has it), the test playback cannot be performed.
For certain portal environments where the specialized Siebel or SAP content is
not being tested, you may be able to turn off the specialized decoding at test
generation time so that a protocol extension license is not required for test
playback. However, if that content is being tested, you should leave the test
generation on automatic so that the specialized decoding is done and the
application will play back correctly.
By making use of these recording tips and tailoring these test generation settings
to meet the needs of your recording environment, you will get automatically
generated tests that are closer to the desired result and will require less editing
afterwards.
To properly edit a Performance Tester test you must first thoroughly comprehend
the structure and workings of a test, as well as the purpose of the particular test
you are editing. It is recommended that you have a good understanding of the
topics covered in chapters 1 through 7 of this book before you begin editing tests.
There are a number of reasons why you want to edit and augment a test in
Performance Tester. You will edit a test to:
Associate test data with a test
Manipulate data correlation
Verify system functionality during test execution
Modify or control test flow
Fulfill measurement and reporting needs
Modify or manipulate timing or connections
Improve test readability and reuse
The sections of this chapter explain how to do these tasks for the different types
of tests in Performance Tester.
In general, you would manipulate a test to create an item that reflects the
smallest test execution asset. The tests then become the building blocks used to
create the schedules for test execution. Workload simulation is generally done more inside schedules than in tests. It is easier and more common to
develop multiple variations of a schedule for multiple test runs during
performance testing.
Before you begin editing a test, you should consider how you may reuse the test
for your testing goals. You may want to create another test based on the first, or you
may want to implement the desired test functionality in a schedule, if possible.
Because you typically reuse tests within many schedules, it may not make sense
to continue to significantly modify tests well into the test execution phase of a
testing effort. Making major changes to a test increases the odds that you also
have to edit every schedule that contains the test.
[Figure: Test editor. Select an element to edit its details; double-click to open.]
Limited copy/paste
Because artifacts and elements in Performance Tester are more than simple text
or Windows objects, you cannot use copy and paste functions to replicate them.
You can, however, replicate tests using the File → Save As feature. The menus in
Performance Tester indicate whether copy/paste is possible or not.
There are several preferences that affect default behavior when editing tests. You
can modify these settings to better suit your needs. Note that these are user
preferences and changes will only apply to a particular user/workstation.
For more information on recommended practices for test editing, see the section
that follows.
Every time you create a new test artifact in Performance Tester, you have to
select a folder location to store it. Because you can have more than one project
open in Performance Tester, you may also have to select the correct project and
folder.
Likewise, when you are associating an artifact with another, such as selecting a
datapool for a test, or selecting a test to put into a schedule, you have to browse
different folders and possibly different projects to select the correct asset. You
must be careful to select the correct project and folder, especially for larger and
more complex testing efforts.
You can rename test assets and move them to other folders or projects if needed.
For medium to large testing efforts, you should document the naming
conventions and test asset organization strategy.
A test search will look for text in any part of the test such as requests, responses,
or other element fields specific to the type of test. There are many options that
can filter the search. The search results are shown in a view similar to the Test Contents editor, and you can jump to the location of each match from the results.
Search results also persist until the next search so you do not have to keep
searching for the same item over and over again while editing the test.
Searching
To search for items within a test:
Open the test.
In the menu bar select Search → Search.
Note: If you use the Search toolbar icon then you must ensure that the
Test Search tab is selected (Figure 4-5).
In the Search for text field, type the text that you want to locate.
You can leave this field blank, depending on your search strategy. For
example, if you know that a string occurs in elements or element instances
that you are not interested in, using the options described below, you can
locate the elements or element instances of interest before entering the
search text into this field.
– To have the search go through all child element levels of the test from the selected test element, select Recursive.
Selecting Restrict search to elements highlighted in Test Contents restricts
the types of elements that you search in this step to those that are selected in
the Test Contents area. For example, if you select HTTP Pages here and only
one page is selected in the Test Contents area, only matches in that page are
found. Otherwise, if the test name is highlighted and Recursive is selected
then string matches in every test page are found.
Select the options specific to the type of test element selected. Under
Elements to search select each type of element you want to search.
When you select an element under Elements to search, the lower-right area
of the Search window will change to display the options for that type of
element. These options are different for each type of element and they allow
you to define how an element should be searched. Figure 4-5 shows the
options for an HTTP Responses element while Figure 4-6 shows the options
for an SAP Element.
Click Search. The results of the search are displayed in two views: the Search
view, which lists the objects that contain matches (Figure 4-7), and the Test
Search Match Preview view (Figure 4-9 on page 50), which displays the
matches that were found.
Search results
Once you have results from your search there are several things you can do.
Most of these are available by right-clicking in the Search view (Figure 4-8).
To preview a found string in the Test Search Match Preview (Figure 4-9), click
the object.
To open your test at the location where an instance was found, double-click
the object. Alternatively, you can right-click and select Go To.
This will switch focus to the test editor, expand the Test Contents to the match
location, and highlight the found search item. From here you can make
whatever edit is needed in the test.
Tip: With HTTP response content, after using the Go To function you can then
select the Protocol Data - Browser tab to view the rendered HTTP page. This
can be done to further validate that you have located the intended value.
You may perform a number of searches while enhancing a test, and it is often useful to return to previous search results. Performance Tester keeps a history of test searches within an Eclipse session (if you exit the application and restart, the history is cleared). You can access the history by clicking Show
Previous Searches (Figure 4-10).
Replacing values
There are several ways to replace values in a Performance Tester test. This is useful for modifying the environment or other configurations in which your tests will be executed (see 4.8.3, “Changing environments for testing” on page 128).
An ideal strategy is to use both methods as needed: saving copies for minor and frequent variations, and using a configuration management tool for periodic version control.
Using ClearCase
IBM Rational ClearCase LT can provide version control of test assets through the
Rational ClearCase SCM Adapter in Eclipse. This provides an easy interface
built into the same Eclipse shell as Performance Tester for checking in and
checking out performance test assets such as tests, datapools, custom code and
others. This capability requires the purchase of Rational ClearCase LT licenses
and the installation of client software on the Performance Tester workstation.
Running some test types requires additional licensing beyond the base
Performance Tester license, which includes HTTP/HTTPS. Sections 4.3.2, “Editing
Siebel tests” on page 55 through 4.3.4, “Editing Citrix tests” on page 62 apply only
to installations that have the extensions enabling these protocols.
The contents of an HTTP test are broken down into one or more pages, and then
into individual requests and responses. A page will almost always correspond to a
visible page rendered in a browser. Within the test editor you can:
Select values for datapools in the Test Data or Test Element Details. Refer to
4.4, “Providing tests with test data” on page 64 for detailed steps on how to do
this.
Select or modify correlated values in the Test Data or Test Element Details
(refer to 4.5, “Manipulating data correlation” on page 81 for detailed steps).
Add verification to all or part of the test by right-clicking in the Test Contents
(refer to 4.6, “Adding verification to tests” on page 89 for detailed steps).
Insert test elements by right-clicking in the Test Contents (refer to 4.7, “Adding
test elements” on page 105 for detailed steps).
Except where noted, all editing and augmentation described in this chapter apply
to HTTP tests.
For the most part, editing Siebel tests is no different from editing HTTP tests. The
differences between Siebel and HTTP tests primarily concern test setup, recording,
and generation, but some differences also apply to editing.
The first difference between editing Siebel tests and HTTP tests has to do with
page naming and readability. Siebel tests are broken down into pages and
requests just like those recorded from a Web-based application, but with the
following differences:
The first page of a Siebel test is named Message Bar, which emulates the
ticker-tape message that Siebel application pages display. This “page” can be
configured to repeat at any frequency you choose.
Page names are generated to be more descriptive and to reflect the
displayed screen. Therefore, they typically do not require renaming for
readability (refer to 4.9, “Improving test reuse and documentation” on
page 135).
Other differences between Siebel tests and HTTP tests have to do with
configuring variable test data and correlation. Siebel tests will have improved
identification of datapool candidates, provide data correlation for Siebel specific
data structures, and include additional types of built-in Siebel data sources for
correlation. In many cases, datapool substitutions are the only changes that you
have to make to a Siebel test; you do not need to manually correlate any other
values.
These built-in data sources can also be used independently of any Siebel test
by manually correlating request data with the data source. The Siebel Data
Correlation Library enables Performance Tester to automatically correlate
Siebel-specific data. These correlations are designated with a unique icon in the
test editor.
When you are editing a Siebel test, you can manually correlate request values
with built-in Siebel variables. To correlate a request value with a built-in
variable:
Open the test.
Locate the value that should be replaced by a built-in variable.
Highlight the value, then right-click and select Substitute from → Built-in data
sources. The Built-in Datasource Selection wizard displays the types of
variables that can be substituted.
Select the type of variable and click either Next or Finish.
– If you select Current Date, click Next, select the date format, and then click Finish.
– If you select SWE Counter, click Next, enter values for the counter in the
Current Value and Maximum Value fields, and then click Finish.
See Chapter 11, “Testing SAP enterprise applications” on page 383 for more
information on testing SAP applications.
The SAP Protocol Data view contains two pages that are synchronized with each
other and with the test editor (Figure 4-17).
The Screen Capture tab displays a graphical screen capture of the SAP GUI.
You can select all GUI objects such as windows, buttons, fields or areas.
The Object Data tab provides information about the selected GUI object:
identifier, type, name, text, tooltip, and subtype. This information is used to
determine possible events.
SAP events
You can insert SAP GUI set or get events into your test to add items such as a
field selection, a keyboard entry, or an interaction with the GUI. Get events are
contained in SAP screen elements in the test suite. SAP screen elements can be
windows, dialog boxes, or transaction screens that are part of a recorded
transaction. You can only add set events. Verification points can only be added to
get events.
You can use either the test editor or the SAP Protocol Data view to create or edit
SAP set events. When using the SAP Protocol Data view, you can select SAP
screen elements from the screen capture and copy the information directly to the
new event. Using the SAP Protocol Data view to create or edit a SAP event is
much easier than adding an event manually from the test editor. After creating
the event, you can use the test editor to easily change the value. You can also
replace that value with a datapool variable or a reference.
All application logic executes on the server and only screen updates, mouse
movements and keystrokes are transmitted through the ICA session to the server
and client device. The only events the ICA protocol gives Performance Tester to
use for synchronization are Window Create, Activate, and Destroy events. If an
action such as clicking a link in a browser causes new data to be loaded into an
existing window (such as a new Web page), Performance Tester has no way to
know when that page is ready for the next click. It is therefore imperative that all
such actions be preceded by an image synchronization event. Image
synchronization events can use a bitmap hash comparison or a text comparison
based on OCR technology. Image synchronizations should be inserted before the
next action to ensure that the application is ready.
After recording, you can edit the events in each window element. Because the
recorded input is primarily made of low level keyboard and mouse input, you can
streamline the test by, for example, replacing key-press events with string inputs.
You can use the comments and recorded screen captures to make navigating
through the test easier. You can replace recorded test values with variable test
data, or add dynamic data to the test. You can also set verification points on
window titles and coordinates or image synchronizations to validate that the test
behaved as expected.
Edit page names and add comments for readability in the Test Contents and
Test Element Details (refer to 4.9, “Improving test reuse and documentation”
on page 135 for detailed steps).
A common workflow for getting test data into Performance Tester tests is to:
Determine and identify the data needed for testing.
Get data from a database or spreadsheet, or generate the data.
Export or save the data into CSV file format.
Import the CSV files into Performance Tester datapools.
Associate the datapools with one or more tests:
– Configure the datapool for the intended execution.
– Identify the test parameters and substitute with datapool values.
When you want to replace data in a test with known values that can be saved in a
table, you can use datapools. For all other dynamic data, you use data
correlation (see 4.5, “Manipulating data correlation” on page 81). Both cases
involve the same kind of substitution in a test; the difference is where the data
comes from.
To address these considerations, you can set several options for attaching a
datapool to a test. Note that these options are not tied to the datapool itself but
only affect the association between the test and the datapool.
The options for the first five considerations are described in “Associating
datapools with tests” on page 71. The issue of storing passwords is covered in
“Adding authentication in HTTP tests” on page 131. The last consideration
having to do with the size of the test data set is addressed in 4.4.8, “Custom test
data handling” on page 79.
In the second New Datapool window, enter a description for the datapool. If
you know the initial table dimensions you can enter them now, or add them
later. Click Next.
In the third New Datapool window, leave the fields alone and click Finish.
You now have a new blank datapool. The next step is to define the columns and
enter data as described in “Editing datapools” on page 68.
You may not have existing test data and instead plan to enter values during test
development. In this case, you may still want to consider entering the data into a
spreadsheet first, especially if you will have more than a few dozen values. The
reason is that spreadsheet applications, and even some text editors, are
generally easier for data entry than the TPTP datapool editor.
Note: You can only import a CSV file when you first create the datapool; you
cannot import data into an existing datapool.
If the data in the CSV file is encoded differently from what the local computer
expects, select the encoding from the Import Encoding list.
If the first row of the CSV file contains column names, select First row
contains variable names and suggested types. If this check box is not
selected, columns are named Variable1, Variable2, and so on. You can
change the names later when you edit the datapool.
You will typically leave First column contains equivalence class names
cleared. Equivalence classes are used in functional, rather than performance,
testing. Select this check box only if you are using a datapool for both
functional and performance testing, and the datapool already contains
equivalence classes.
Click Finish.
You now have a new datapool populated with test data from the CSV file.
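For illustration only, a CSV file whose first row holds the variable names might look
like the following (these column names and values are made up); importing it with
First row contains variable names and suggested types selected produces a
two-column datapool:

username,password
jsmith,firstpass
mjones,secondpass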
Editing datapools
You can add, modify, or remove data from a datapool using the datapool editor,
similar to the way you work with a spreadsheet. If your datapool changes are
extensive, it may be easier to delete the datapool, enter the revised data into a
CSV file, and import the data into a new datapool.
To add a new row, right-click on the row above the one to be added, and
select Add Record.
Alternately, if the last row in a datapool is selected, you can simply
press Enter to add a new row.
To remove a row, right-click the row to be deleted, and select Remove Record.
To add a new variable (table column):
– Right-click anywhere in the datapool cells, not on the variable names
(column headers) at the top, and select Add Variable.
– In the Add Variable window, enter a Name for the variable. The variable
Type is optional.
– In the Add drop-down list, select where you want the variable to be added
relative to the other variables, then click OK.
Alternately, you can add, rename, or move variables from the Overview tab, in
the Variables area.
Save the changes.
Tip: For large data sets, you can maximize the datapool view in Performance
Tester to see more rows and columns at once by double-clicking on the
datapool tab (click on the name, not on the x, or you will close the datapool).
To restore back to the original size, double-click on the datapool tab again or
select the menu Window → Reset Perspective.
Set the options that affect how the test uses the datapool. See
“Datapool options” on page 72 for a complete explanation of these.
Click Select. A reference to the datapool is added to the test, and the Test
Details area is updated with the datapool information.
Save the test.
Tip: To see the datapools that are currently associated with a test, open the
test and in the Test Contents area, click the first line of the test, which is the
test name. Any datapools attached to the test will show in the Test Element
Detail area, on the Common Options tab.
Now that you have created a reference between the test and the datapool, the
next step is to associate a value in the test with a column in the datapool, as
described in 4.4.6, “Substituting test values with datapools” on page 74.
Datapool options
There are several options that affect how a datapool will behave with a test.
These options can implement the factors described in 4.4.2, “Test data
considerations” on page 65. These options are not saved as part of a datapool
itself but instead only affect the association between the test and the datapool.
The same datapool may be used with different options for a different test.
You typically set the datapool options when you attach it to a test, as described in
“Associating datapools with tests” on page 71. You can also change the datapool
options as the testing needs change.
Open mode: Shared (per machine) (default)
Virtual users on each computer draw from a shared view of the datapool, with
datapool rows apportioned to them collectively in sequential order, on a
first-come-first-served basis.
This option makes it so that the virtual users or loop iterations will use data from
different rows and that the server will see variable data. The exact row access
order among all virtual users or iterations cannot be predicted, because this order
depends on the test execution order and the duration of the test on each computer.
If unique rows are needed for multiple test agent runs, this mode is not
recommended, because each test agent gets its own datapool copy and re-uses all
of the data rows in the datapool. Therefore, virtual users on different test agents
will re-use the same data values.

Open mode: Private
Each virtual user draws from a private view of the datapool, with datapool rows
apportioned to each user in sequential order.
This option offers the fastest test execution and ensures that each virtual user gets
the same data from the datapool in the same order. However, this option most
likely results in different virtual users using the same row. The next row of the
datapool is used only if you add the test that is using the datapool to a schedule
loop with more than one iteration.

Open mode: Segmented (per machine)
Virtual users on each computer draw from a segmented view of the datapool, with
data apportioned to them collectively from their segment in sequential order, on a
first-come-first-served basis. The segments are computed based on how a
schedule apportions virtual users among computers. For example, if a schedule
assigns 25% of users to group 1 and 75% to group 2, and assigns these groups to
computer 1 and computer 2, the computer 1 view will consist of the first 25% of
datapool rows and the computer 2 view will consist of the remaining 75% of rows.
This option prevents virtual users from selecting duplicate values (for example,
account IDs). This mode should be used when the data rows can only be used
once in the user scenario and you are using multiple test agents. If you disable
wrapping, your tests will log an error and use a null value if the datapool rows for
that test agent’s datapool segment are exhausted. This ensures that no row is
used more than once.
Note: This option requires the datapool to contain only one equivalence class.
With the other options, equivalence classes are ignored.
Wrap when the last row is reached
This determines whether the test will reuse data when it reaches the end of the
datapool.
By default, when a test reaches the end of a datapool or datapool segment, it
reuses the data from the beginning. To force a test to stop at the end of a datapool
or segment, clear the check box beside Wrap when the last row is reached.
Forcing a stop may be useful if, for example, a datapool contains 15 records, you
run a test with 20 virtual users, and you do not want the last five users to reuse
information. Although the test is marked Fail because of the forced stop, the
performance data in the test is still valid. However, if it does not matter to your
application if data is reused, the default of wrapping is more convenient. With
wrapping, you need not ensure that your datapool is large enough when you
change the workload by adding more users or increasing the iteration count in a
loop.

Fetch only once per user
This determines whether the test will make the data in the datapool row the only
value used by each virtual user.
By default, one row is retrieved from the datapool for each execution of a test
script, and the data in the datapool row is available to the test only for the duration
of the test script. Select Fetch only once per user to specify that every access of
the datapool from any test being run by a particular virtual user will always return
the same row.
This option is heavily used for login / password data as well as for digital
certificate retrieval.
Selecting a test page shows you a table that lists any datapool candidates
and correlated data in that page. (If correlated data is not displayed, you can
right-click the table and select Show References.) References are shown in
blue letters, and datapool candidates are shown in black letters (Figure 4-24).
If the content of the Value column corresponds exactly with column data in
your datapool, click the row and then click Datapool Variable. The Select
datapool column window opens. Skip to step 6. You can ignore step 8,
because URL encoding is preselected.
Otherwise, double-click the row to navigate to the page request containing the
value that you want to replace from a datapool, and continue to the next step.
The value that you want to replace from a datapool might not be listed in any
page table. In this case, manually locate the request string that includes the
value.
If the value that you want to replace from a datapool is part of a string that has
been designated a datapool candidate, you must remove the light green
highlight: right-click and select Clear Reference. For example, if you searched
for doe, john in your test, the datapool candidate in your test is displayed as
doe%2C+john. Suppose that you do not want to associate this candidate
with a single datapool column containing data in the format doe, john.
Instead, you want to associate doe and john with separate datapool columns.
In this case, you must first remove the light green highlight.
Highlight the value: With the left button pressed, drag your mouse over it.
Right-click the highlighted value and select Substitute from → Datapool
Variable. The Select datapool column window opens (Figure 4-25).
To use a datapool that is not listed, click Add Datapool: the Import Datapool
window opens.
Select the name of the datapool column that you want to associate with the
test value.
Click Use Column. To indicate that the association has been set, the
highlighting for the selected test value turns dark green, and the datapool
table for this page is updated as shown in Figure 4-26.
For more information on custom coding, see Chapter 13, “Testing with custom
Java code” on page 429.
Not every Web application that utilizes SSL or has an HTTPS URL necessarily
requires a digital certificate store for performance testing.
The digital certificates in a store are used by HTTP tests through datapools. The
basic steps to do this are:
Create a digital certificate store, as described in “Creating a digital certificate
store” on page 132.
Create a datapool to link to the digital certificates.
Associate the datapool with a test, as described in 4.4.6, “Substituting test
values with datapools” on page 74.
Note: The passwords for all of the certificates that you want to use for
playback must be set to default.
Type a Certificate Name for the digital certificate. Highlight this name and
then click Substitute from datapool. Choose the datapool that you added
previously, and then choose the column with the certificate name.
Save the test.
When you run this schedule, the certificates from the certificate store will be
submitted to the server.
The substituted value typically comes from the system under test itself. For
example, a system may generate a unique session ID value which is sent to the
client, in this case a Performance Tester test, and that value would then have to
be used in all subsequent requests made during that session. For the test to
work, it would have to correlate the session ID value from the server and replace
it into every request transaction made in the test.
Instead of coming from a server, the replacement value could also come from
some calculation or algorithm. For example, an application may need a unique
incremented value for every transaction made in a given session but the value
could be a simple counter plus some prefix string. In this case, the test could
correlate the value from a simple custom code addition (see 4.7.6, “Custom
code” on page 116).
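As a minimal sketch of such a custom code addition (the class name, the prefix,
and the argument handling are illustrative assumptions rather than product-supplied
code; the exec signature and the ICustomCode2 interface are the ones Performance
Tester generates for custom code), the following returns a prefix string plus an
incrementing counter, which can then be substituted into subsequent requests:

import java.util.concurrent.atomic.AtomicInteger;

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

public class TransactionIdGenerator implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    // Shared counter so that every call returns a new incremented value.
    private static final AtomicInteger counter = new AtomicInteger(0);

    public TransactionIdGenerator() {
    }

    // Returns, for example, TXN-1, TXN-2, and so on. The optional first
    // argument (a reference passed to this custom code element) overrides
    // the hypothetical TXN- prefix.
    public String exec(ITestExecutionServices tes, String[] args) {
        String prefix = (args != null && args.length > 0) ? args[0] : "TXN-";
        return prefix + counter.incrementAndGet();
    }
}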
Creating a reference
When you designate a test value as a reference, or a range of test data as a field
reference, you can use the data elsewhere in the test. This is similar to creating a
variable from a value in software code. A reference, usually located in response
data, points to a specific value that you want to use from a subsequent test
location, usually a request. You can also use a reference as input to an IF-THEN
condition (see 4.7.4, “Conditions” on page 111) or as input to custom Java code
that your test calls (see 4.7.6, “Custom code” on page 116).
A field reference points to a range of test data. Like a single reference, you can
use a field reference as input to an IF-THEN condition or custom code.
Note: The Test Search function is a useful way to locate values you want to
reference, especially in response content. See 4.2.2, “Test search and
replace” on page 46 for more information.
To create a field reference, right-click and select Create Field Reference. The
field is highlighted in yellow to indicate that it is a field reference.
Correlating a request
If a test runs without error but does not get the results that you expected, you
may need to correlate a value in a request with a reference in a previous
response. The value that you want to correlate a request value with must already
be designated a test reference (see “Creating a reference” on page 83).
To correlate a request:
Open the test.
Locate the value that should be replaced by a reference.
Highlight the value: with the left mouse button pressed, drag the mouse over
it.
Right-click the highlighted value and select Substitute from → Reference, and
select the correct reference. The value is highlighted in dark green to indicate
that it has been correlated, and the correlation is added to the Test data
substitution table for this page (Figure 4-34).
The data correlation algorithms used during test generation are based on
well-known best practices. However, because these practices are continually
evolving, various types of errors can occur during automated data correlation:
Insufficient correlation: Test values that should have been correlated were
not. These are some possible causes:
– Two parameters that should be correlated have different names.
– A value should be correlated with a previous value that does not occur in
the expected location.
– A parameter or value should be correlated with a previous parameter or
value that does not occur in the test because it is a computed value.
Data correlation typically relates a response value that was returned from the
server with a subsequent request value. The automated correlation algorithms
look in the usual places (URL and Post data) for correlation candidates. But
other schemes for returning parameters are possible. For example, consider the
request
http://www.madeupsite.com?id=12345
Suppose that this request should be correlated with the server response
containing customer_id=12345, not id=12345. In this case, the id parameter has
to be correlated with customer_id.
As another example, suppose that this request should be correlated with the
server response containing the name and entity pair href name="id"
entity="12345", not id=12345. In this case, the id parameter has to be correlated
with name="id", and value 12345 has to be correlated with entity="12345".
Or suppose that a request contains a login_timestamp value that is the
concatenation of the login ID and the current date. In this case, you require
custom code that concatenates the login ID and the date.
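A minimal sketch of such custom code follows (the class name, the underscore
separator, and the date pattern are assumptions for illustration; the login ID is
expected to arrive as the first argument, which means a reference to it must be
passed to the custom code element):

import java.text.SimpleDateFormat;
import java.util.Date;

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

public class LoginTimestamp implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    public LoginTimestamp() {
    }

    // args[0] holds the login ID passed in as a reference argument; the
    // return value can then be substituted into the login_timestamp parameter.
    public String exec(ITestExecutionServices tes, String[] args) {
        String loginId = (args != null && args.length > 0) ? args[0] : "";
        String date = new SimpleDateFormat("MMM_dd_yy").format(new Date());
        return loginId + "_" + date;   // for example, 12345_Dec_12_05
    }
}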
For another example, suppose that the server returned the login ID and date as
separate entities (href "customer_id=12345" Date="Dec_12_05"). In this case,
you can put these parameters in separate references and, in subsequent
requests using customer ID and date, substitute them separately.
Superfluous correlation
Automated data correlation is based on pattern matching: a parameter or
parameter value is correlated with a subsequent parameter or parameter value
with an exact or similar name. But sometimes parameters with exact or similar
names are in fact unrelated. In the best case, an unneeded correlation is
harmless or adds only a slight, unnecessary processing load. In the worst case,
the application does not expect a correlation and fails during playback.
To remove a superfluous correlation manually:
In the test editor, click a page that contains requests that should not be
correlated.
Right-click anywhere in the Test Data table and select Show References.
Click a table row with the superfluous correlation (blue letters indicate
correlated data) and select Remove Substitution.
Incorrect correlation
A parameter requiring data correlation may occur many times throughout a test.
For example, a session ID parameter that is used initially when a user logs in
might also be used in every subsequent request. If multiple instances of a
parameter in a test are not the same, the correlation algorithms might choose the
wrong instance.
Page titles
Page title verification points report an error if the title returned by the primary
request for the page is not what was expected. Although the comparison is
case-sensitive, it ignores multiple white-space characters (such as spaces, tabs,
and carriage returns).
Ensure that the Expected page title field shows the string that you expect to
be included in the response. Although you can change the string, the value
listed is what was returned between the <title></title> tags during recording
(Figure 4-36).
Response codes
Response code verification points report an error if the returned response code
is different from what you expected.
Select the verification point to display the response code editing fields in the
Test Element Details area (Figure 4-38).
Response sizes
Response size verification points report an error if the size of a response is
different from what you expected.
Select the verification point to display the response size editing fields in the
Test Element Details area (Figure 4-40).
Response content
Content verification points report an error if a response does not contain an
expected string. This type of verification can be crucial to verify functionality
during load or stress testing where a system may continue to generate valid
responses but with incorrect data.
This is also important because many Web applications today may have enough
servers and load balancers to continue running under very high loads (that is, the
server will not crash or produce error codes), but the system will instead respond
with various messages such as unable to process your request. These
unavailable messages have non-error server codes and may have similar sizes
to the correct responses, making them difficult to distinguish from expected
behavior.
If you defined a content verification point for a particular response, this can then
be added to other responses in the test without having to re-define the same
content verification point.
For HTTP tests, only the User-defined category is displayed initially until you
create your own verification strings or regular expressions. There are two basic
ways to create a new verification point:
To create a new string from scratch, click New String.
To create a new string by editing another string, select it and click Duplicate.
You can edit a specific content verification point in Test Element Details that you
inserted from the Create/Enable Content Verification Point window. In the
following instructions, only the first four steps apply when you are editing in Test
Element Details.
Screen titles
Screen title verification points report an error if the title of an SAP screen is
different from what you expected. To specify the expected screen title, perform
the following steps:
Select the SAP screen in the test editor and ensure that screen title
verification is enabled for the SAP screen. The Test Element Details area
includes a Screen Title Verification Point section.
– If screen title verification was enabled for the entire test, Enable
verification point is selected for all SAP screens in the test.
– If screen title verification was enabled for a specific SAP screen, Enable
verification point is selected for the selected SAP screen.
You can enable or disable screen title verification for a specific SAP screen in
the test editor by selecting or clearing Enable verification point.
Ensure that the Expected screen title field shows the string that you expect to
be included in the screen title that is returned when this screen is displayed.
When the test was recorded, SAP returned a default title for this screen. This
value is displayed in the Recorded title field, and is automatically copied to the
Expected screen title field when Enable verification point is selected. You can
change the string in the Expected screen title field as desired (Figure 4-45).
Whenever the test runs with screen title verification enabled, an error is reported
if the title returned for the screen does not contain the expected title. Although the
comparison is case-sensitive, it ignores multiple white-space characters (such as
spaces, tabs, and carriage returns).
Response times
SAP request response times measure the delay between the moment the user
submits a server request and the moment the server responds. Response time
data is provided by the server. You can set a verification point on a response time
threshold value. If the test encounters a response time above the threshold, the
verification point is failed.
When the Verification points for SAP request response time threshold option is
selected in the SAP Test Generation preferences, all SAP server request
elements contain a response time verification point. The default threshold value
is calculated by applying a multiplier to the recorded response time. You can
change the default multiplier in the SAP Test Generation preferences. The
response time measurements are displayed in the SAP server request element,
which is the last element in an SAP screen.
When you add SAP verification points, SAP get elements retrieve the data from
objects in the SAP GUI such as windows or text fields. SAP get elements are
contained in SAP screens in the test suite. SAP screens can be windows, dialog
boxes, or transaction screens that are part of a recorded transaction.
You can use either the test editor or the SAP Protocol Data view to create or edit
SAP get events and place verification points on them. When using the SAP
Protocol Data view, you can select SAP screen elements from the screen capture
to specify the SAP GUI identifier for the get event. Using this method to create or
edit an SAP verification point is easier than adding it manually from the test
editor.
The SAP Protocol Data view contains two pages that are synchronized with each
other and with the test editor:
Screen Capture displays a graphical screen capture of the SAP GUI. You
can select all GUI objects such as windows, buttons, fields or areas.
Object Data provides information about the selected GUI object: identifier,
type, name, text, tooltip, and subtype.
In the Test Contents area of the test editor, expand a transaction and a SAP
screen. The SAP Protocol Data view displays a screen capture of the
selected transaction.
Inside the transaction, select the item for which you want to enter a new value.
The Screen Capture page of the SAP Protocol Data view displays the screen
capture of the SAP GUI with the corresponding GUI object highlighted in red.
In the SAP Protocol Data view, right-click the GUI field where you want to
enter a new value, and then select Create Verification Point (Figure 4-47).
This opens the Create Verification Point wizard, which already contains the
Identifier data from the recorded session.
Specify the expected value for the verification point (Figure 4-48):
– If you want to verify a text value in the SAP GUI object, ensure that Verify
text is selected and type the Expected value that you want to verify. Click
Finish.
– If you want to verify advanced properties of the SAP GUI object, you can
select Advanced, and then specify the properties attached to the GUI
object as well as the Expected values. Refer to SAP documentation for
information about these properties.
After creating the event, you can use the test editor to easily change the value.
You can also replace that value with a datapool variable or a reference. You can
enable and disable SAP verification points in the test editor.
Window titles
Window title verification points produce a fail status in the test execution report if
the window titles are different from what you expected. These are very similar to
HTTP and Siebel page title and SAP screen title verification points.
You can view all the response times of a test by selecting the Citrix session
element in the test. Response times are listed in the Response Time Definitions
table where they can be renamed or deleted (Figure 4-52).
This method for verifying system behavior can apply to all types of tests in
Performance Tester. For a brief overview on adding custom code to a test, refer
to “Creating and adding new custom code” on page 117. For more information on
custom coding, see Chapter 13, “Testing with custom Java code” on page 429.
Inserting an element inserts a new child object before the selected element,
which would place the new element above the selected element (Figure 4-55).
When using the right-click menu to insert, Performance Tester displays a black
bar showing the location in the test where the new element will be inserted.
Choosing whether to use Add or Insert depends somewhat on what you are
putting into the test and where you want to put it. The choice is not critical,
however, because you can move elements after they are added.
Removing elements
You can delete an element from a test by selecting it and clicking on the Remove
button, which is the same as selecting delete. There is no Undo available when
you remove an item, although this is considered an unsaved change which you
could restore by exiting without saving (see “Italics indicates changes not saved”
on page 44).
4.7.2 Delays
The delay element provides the simplest way to add timing control at a high level
between transactions. This is similar to a simple sleep or delay function in
software code. These may be used to emulate user delays without affecting a
transaction, or as workarounds for synchronization issues (less common).
The first primary difference is that think times can be varied for each simulated
user so that each user plays back a slightly different simulated think time during
test execution. Delays are always the same for every simulated user in a test. A
second difference is that schedule settings can override the think time values in a
test, which is typical when simulating workloads in Performance Tester. Delays
are not affected by schedule settings. Finally, while think times are not counted
as a system response time, they are considered part of a transaction. Delays
occur between test elements.
You should use a delay when a test requires a pause of a specific time at a
particular point in the test, similar to the way you would add a sleep or time delay
in software code.
4.7.3 Loops
A loop element allows you to repeat steps within a test. This is added so that
other elements, typically requests and responses, are put inside the loop. This is
similar to adding a simple for-next routine in procedural software code to repeat
certain steps a number of times. Loops can also provide timing and rate control
over transactions in a test.
By default, the cookie cache for a virtual user is not reset during a test run. This
is consistent with a browser's behavior. If a test or schedule contains loops, and a
Web server sets a cookie during the first iteration of the loop, that cookie is
remembered on subsequent iterations.
However, in certain instances, you may want to clear all cookies cached for a
particular virtual user. For example, if you want each iteration of a loop to appear
as a new user, you must reset the cache. If you do not, although the test
completes, verification points that you have set within the test may fail.
There are two ways to reset the cookie cache, and each way has different effects.
To reset the cookie cache when looping in the schedule, or when the test follows
another test in the schedule, use the following method. This resets the cache
whenever the test is entered. Even if your tests do not loop, use this method if
you are running back-to-back tests or Siebel tests.
Open the test.
In the HTTP options tab, select Clear cookie cache when the test starts.
To reset the cookie cache from one loop iteration to the next when you have put a
loop around the entire contents of the test, and the loop is inside the test, add
custom code to the test and call an API, as follows:
Run the test or schedule to add the current Java libraries to the class path.
Open the test and select the test element located at the point where you want
the cookie cache to be reset. Typically, this is at the end of the loop.
Click Add or Insert and select Custom Code. Add appends the custom code
to the bottom of the selected element (test or test page). Insert adds the
custom code above the selected page or page request.
Add the following Java import statement:
import com.ibm.rational.test.lt.execution.http.util.CookieCacheUtil;
Add the following Java code inside the exec method:
CookieCacheUtil.clearCookieCache(tes);
Example 4-1 shows a custom code addition that resets the cookie cache. The
lines that you add to the generated custom code template are the CookieCacheUtil
import and the call to clearCookieCache inside the exec method:
import com.ibm.rational.test.lt.execution.http.util.CookieCacheUtil;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

public class ClearCookies implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    public ClearCookies() {
    }

    public String exec(ITestExecutionServices tes, String[] args) {
        CookieCacheUtil.clearCookieCache(tes);
        return null;
    }
}
4.7.4 Conditions
Performance Tester allows you to add conditional logic to a test through a
graphical condition element. This works the same as a simple if-then statement
in software code. This allows a test to branch into two or more scenario paths
based on events during execution, simulating a user choice. Like a loop element,
this is added so that other elements, typically requests and responses, are put
inside the condition (the then part).
A condition requires two string operands to compare, and at least one is typically
taken from the test itself. For example, you can check the value of a certain
parameter by comparing it to a hard-coded string (entered in the condition’s
Second operand). Or you can compare two test parameters to one another; if
they match, the condition is met. To do this you must use a reference that is
created before (above in the Test Contents) the condition in the test. The test
reference is similar to a variable used in the comparison. For information on
creating a reference, see “Creating a reference” on page 83.
If you click No then you will have to manually add pages, screens, windows, or
other test elements into the conditional block.
The selected objects will now be indented in the Test Contents inside the
condition (Figure 4-57).
Notice that you will not see an element labeled Then in the Test Contents,
unless you add an Else block to the condition.
To add an Else block:
a. Select the test items (pages, screens, transactions, and so forth) inside the
If block that you want to execute if the condition is not met. Press Shift or
Ctrl when clicking to select multiple items.
Tip: You may want to add test items to an Else block that are not included
in the initial If block. In this case, you could delete the If element, clicking
Yes when prompted Would you like to keep child objects?, then select all of
the items you want to be part of the If and Else blocks and re-insert the
conditional.
Equals
Contains
Starts with
Ends with
Less than
Less or equal
Greater than
Greater or equal
To create a negation of the operators, for example Not equals or Does not
contain, check Negate the operator under Options.
c. In the Second operand field, either select the input for the block (a
reference containing a string value to be compared with the First operand)
or type a value. When the defaults are used (both operand fields set to
true and the Operator field set to Equals), the block is always processed.
In the Test Element Details area, under Options, choose the desired
comparison type by selecting or clearing the check boxes.
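To relate the graphical condition back to the if-then analogy above, the following
stand-alone Java sketch (an illustration only, not the product’s implementation; the
operator strings are simply labels here) shows roughly how the string operators
and the Negate the operator option behave:

public class ConditionSketch {

    // Approximates how a condition compares its two string operands.
    static boolean isMet(String first, String second, String operator, boolean negate) {
        boolean met;
        if ("Equals".equals(operator)) {
            met = first.equals(second);
        } else if ("Contains".equals(operator)) {
            met = first.contains(second);
        } else if ("Starts with".equals(operator)) {
            met = first.startsWith(second);
        } else if ("Ends with".equals(operator)) {
            met = first.endsWith(second);
        } else {
            met = false;   // the ordering operators are omitted from this sketch
        }
        return negate ? !met : met;   // Negate the operator inverts the result
    }

    public static void main(String[] args) {
        // A reference value taken from a response, compared with a typed value.
        System.out.println(isMet("sessionid=ABC123", "ABC", "Contains", false)); // true
    }
}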
4.7.5 Transactions
A transaction element is used to combine other test elements together in a
logical group. This allows you to get better data from logs and reports by
measuring user transactions, in addition to the individual pages and requests.
When viewing the test results, you can view performance data about any
transactions that you have added. This is important because performance
requirements are typically given in terms of transactions, activities, or use cases
from the user’s perspective.
For example, a system may provide a get address service that involves the user
going through three screens or pages to complete. The performance testing effort
is concerned first with the overall system response time for the get address
transaction, which is the combined measurement of the three screens. The default
Performance Tester reports would show times and statistics for each individual
screen or page because they do not know what the desired performance
requirements are. By grouping the steps of a test together into transactions, the
default reports will then show the measurements that you need.
In the transaction details, you can give the transaction a meaningful name. This
is useful in the Transactions report, which lists transactions by name
(Figure 4-59).
Custom code is typically created from within a test, as if it were part of the test.
However, custom code is saved as Java files separate from the test and different
tests may include calls to the same custom code. Editing custom code is the only
time when editing a Performance Tester test that you will be working directly with
software code.
You have to consider what data, if any, you have to pass to the code as
arguments. To pass a value from the test to custom code, you must first create a
reference for that value (see “Creating a reference” on page 83) and that
reference must occur before (above) the custom code in the test.
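As an illustrative sketch of how passed values arrive (the class name is
hypothetical and the logging call is only one common way to inspect arguments, so
treat the details as assumptions), each reference selected as an argument for the
custom code element appears as one entry of the args array, in the order listed:

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

public class EchoArgs implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    public EchoArgs() {
    }

    // Writes each argument to the test log and returns the first one,
    // which could then be substituted elsewhere in the test.
    public String exec(ITestExecutionServices tes, String[] args) {
        for (int i = 0; i < args.length; i++) {
            tes.getTestLogManager().reportMessage("arg[" + i + "] = " + args[i]);
        }
        return (args.length > 0) ? args[0] : null;
    }
}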
Tip: You can better organize custom code by storing it in its own package
(folder). The simplest way to do this is to prefix the Class name with a
package name of your choosing, such as “customcode.”, with a period
separating the package name from the class name. You must use a period
between the two names or the custom code file will be placed in the project
root. For example, entering a custom code Class name of
”customcode.MyCode” will automatically create a package (folder)
named customcode plus a Java file named MyCode inside this package.
Once you create a custom code package, you can use the same name for
all subsequent custom code that you create. If needed, you could also
create multiple packages to store different custom code files.
Doing this will keep the custom code Java files that you edit separate from
the Performance Tester generated Java files which you do not edit. Java
naming conventions should be followed for packages and class names.
Click Generate Code to create the custom code and open the Java editor
(Figure 4-62).
At this point you are ready to edit the code itself; refer to Chapter 13, “Testing
with custom Java code” on page 429 for information on the Performance Tester
custom code interface.
Think times
Think times emulate user pauses between system activity. This is usually the
time a user spends reading or thinking between seeing a response and clicking
or triggering the next request.
Because think times represent a user’s time and not the system’s time, they will
not directly affect measured system performance. A test with very long think
times may still have very small response times and the resulting performance
reports will only reflect the system’s response times. Think times can, however,
impact response times by affecting the rate and load that a system experiences
during test execution. Tests that have a lot of very long think times will give a
system more time to recover and respond to subsequent requests faster than a
test with smaller think times that simulate users hitting a system with rapid
successive requests.
For HTTP and Siebel tests, think times are captured at the page element level
(Figure 4-63). For SAP tests, think times are captured at the transaction element
level (Figure 4-64). Citrix tests handle think times differently to ensure proper
synchronization and you do not change these values directly.
Response delays
Response delays, which are different from the Delay test element (see 4.7.2,
“Delays” on page 107), emulate system and internal delays during system
activity. An example of this is the amount of time that a client application takes to
process the received protocol data and render it to the user. These times are
almost always very small, especially compared to think times.
Because these delays represent the system’s time, they do factor in to the
measured system performance. The delay values are captured from the
recorded sessions and, except for rare cases, you will not edit these values.
HTTP delays
Delays in HTTP tests are saved as attributes of a request (Figure 4-65). These
are typically very small and occur before the request is made. The majority of
HTTP requests in a test will not have any delay but you may find them after the
primary page response, before a request for a JavaScript or cascading style
sheet (css) file for the page.
SAP delays
Delays in SAP tests are saved as attributes of a server request and are called
Interpretation times (Figure 4-70 on page 127). This is the delay between the
moment the data is received by the SAP GUI client and the moment the screen is
updated. It measures interpretation of data by the SAP GUI client, and not SAP
R/3 server performance. These are typically very small and there is usually some
interpretation time for each server request.
Server connections
Unlike SAP or Citrix, a Web-based application may make many connections to
different servers within a single test, even within a single page. Each connection
is represented by an icon in the Test Contents under the request that it is
associated with. By selecting the connection, you can see all of the requests in
the test that use the connection (Figure 4-66).
You can change these using the search and replace functions. The following
steps show how to change the pertinent test elements.
To change the host on all requests:
– Open the test, right-click, and select Test Search. The Search window
opens.
– In Search for text, type the value of the original host.
– In Elements to search, select and highlight HTTP Requests.
– In Fields, select Host and click Replace. The Replace window opens.
– In the With field, type the value of the new host, and click Replace All. You
have now changed the hosts on all of the requests.
If the new host has a different port, change the port number as follows:
– Open the test, right-click, and select Test Search. The Search window
opens.
– In Search for text, type the original port, which is usually 80.
– In Elements to search, select and highlight HTTP Requests.
– In Fields, select Port, and click Replace. The Replace window opens.
– In the With field, type the new port, and click Replace All. You have now
changed the numbers of the ports in all of the requests.
To change the host name and port number in the host header fields:
– Open the test, right-click, and select Test Search. The Search window
opens.
– In Search for text, type the original header.
Note: If you recorded using a nondefault port, type both the host and port,
separated by a colon (host: port). If you recorded using default port 80, or
your host name does not contain a port, type only the recorded host name.
After making any of the above edits to a test, rerun the test and inspect your
results to confirm that the test ran using the expected host and port.
You could also change both the system name and the connection string although
only one will be used for test execution, depending on whether Connection by
string is checked or not.
If you change the SAP system name, you will have to configure the connections
on every test agent machine used for test execution. The system name is used
by the SAP Logon application to start the SAP GUI.
If you change the connection string, you only have to change the test and not the
connections for every test execution machine. However, it can be complicated to
know the correct router string, and it is recommended that an SAP administrator
help provide correct connection strings.
Unsigned certificates (rarely used). These are certificates that are neither
signed by a CA nor self-signed. Most Web applications do not use unsigned
certificates.
The subject can contain many different types of identifying data. Typically, the
subject includes the following attributes (Table 4-3):
This information can be typed as one string, using forward slashes to separate
the data. For example, the above subject would be typed as follows:
/CN=John Smith/O=IBM Corporation/OU=IBM Software Group
/C=US/L=Chicago/ST=IL/[email protected]
If a value contains spaces, enclose the value in quotation marks. Each option
is described in Table 4-4.
--add: Optional. Adds the certificate to the certificate store. Used with
--generate, this generates a certificate and adds it to the certificate store.
--remove: Optional. Removes the certificate from the certificate store. This
option cannot be used with the --add or --generate options.
Use the KeyTool to create and add as many digital certificates as you want. If
you want to create a datapool of the names of certificates in the certificate
store, run KeyTool again with the --list option. This writes a list of names
that can then be imported to a datapool.
You now have a digital certificate store that you can use with performance tests.
Because the KeyTool program has many options, you may want to create an alias
or script file to use to invoke the KeyTool.
You do not have to use the KeyTool command-line program to create a certificate
store. It is possible to use existing PKCS#12 certificates with Performance Tester.
PKCS#12 certificates can be exported from a Web browser. PKCS#12
certificates encode the private key within the certificate by means of a password.
Note: Because the password is required to be default when you play back a
recording in Rational Performance Tester, you must not use certificates
associated with real users. Certificates associated with real users contain
private keys that should not become known by or available to anyone other
than the owner of the certificate. An intruder who gained access to the
certificate store would have access to the private keys of all certificates in the
store. For this reason, you must create, or have created for you, certificates
that are signed by the correct certificate authority (CA) but that are not
associated with real users.
Adding comments
A comment in a Performance Tester test is another test element like those
described in 4.7, “Adding test elements” on page 105. This is the same as a
comment in software code. You can add as many comments as you like to a test
without affecting the execution.
You can enter and view very long comments in the Test Element Details;
however, only the first 48 characters are visible in the Test Contents.
Save the test. Figure 4-72 shows a test with a comment and a description.
Tip: Another crucial place to add comments is within custom code modules,
where they are simply Java comments.
Adding descriptions
In a given testing effort you create many tests and datapools, and most likely
other people will use these artifacts. Descriptions should be added to every test
and datapool to explain its purpose, usage, and any other relevant information.
The best time to add a description is when you first create the artifact. If you have
to revise or extend a description, you can do so as described here (Figure 4-72):
Open the test.
Expand the Common properties area in the lower-left corner of the test editor
by clicking on the expand arrow.
In the expanded Common properties, enter text in the Description field.
Though in the long term you will want to use schedules, it is possible to play back
a test without creating a schedule. To quickly verify test integrity, you can execute
a test immediately after creating it.
To execute a test without a schedule, right-click on the test then select Run As →
Performance Test. RPT will then launch a one-user playback of the test.
A schedule requires at minimum one test to execute. Select User Group 1 and
click Add, then click Test. When the Select Performance Tests dialog appears,
select a test and click OK. The test should appear under User Group 1. Save the
schedule by selecting File → Save.
Performing test log data transfer: During this state the computers executing the
test send detailed test playback information back to the workbench where the
playback was launched.
Complete: Schedule playback has finished.
More details about schedule playback can be viewed from the Summary window.
During playback, select the Summary tab to display the Summary window
(Figure 5-3).
For an intensive load test playback with many virtual users, select a larger value
than the default interval of five seconds, for example, 30 or 60 seconds
(Figure 5-4). Changing this sample interval does not lose any data: all of the
individual response time measurements are collected and reduced to
interval-based statistics over whatever interval is used for data collection. The
main difference is how often data points are posted to the report window during
the playback, which happens at this sample interval.
While RPT is adding users to the schedule, you cannot add more users.
Depending on the staggering option, adding the new users will take some time.
When the current state of the playback changes from adding users to running,
you can click Add Users again to add more users.
Stop
The options presented allow you to select how you want the playback to stop,
and whether you want test results and history information. The options are:
Hard Stop: A hard stop ends playback immediately. Virtual users in the middle of
executing an action are not allowed to finish the action. Select this option if you
want to end now and do not need complete results.
Timed Stop: A timed stop allows virtual users some time to complete the action
they are currently executing. An action may be a request for an image from a
server or some other request/response pair, depending upon the test you are
running. If all virtual users do not finish within the time you specify, a hard stop
occurs.
Graceful Stop: A graceful stop allows users all the time they need to finish the
action they are currently executing. When that action finishes, the virtual user
stops, and playback stops once all virtual users have finished.
If you clear Collect test results and history, RPT will not attempt to provide this
information after playback stops. If you are stopping playback because you have
discovered some problem, you may have no interest in the results of this
playback. Turning off this option can greatly speed up the stop playback
sequence.
5.5 Debugging
This section provides suggestions for dealing with common playback problems
and presents tools you can use to help answer the question “What went wrong?”.
Collecting problem determination log data is expensive in terms of CPU and disk
space. For normal operation do not increase the level beyond the default of
Warning and do not sample more users than the default of five.
The data collected is stored on each agent (location) computer executing the
schedule. It is not transferred to your workbench (UI) after playback completes.
Therefore, to view the problem determination log you must logon to each agent
computer and view it there.
The log messages in these XML files are prefaced with msg=. Look for keywords
like exception or error if you are experiencing problems.
The Error Log view should appear near the bottom of the RPT workbench.
Important: The Error Log Filters limit viewable events to 50 at a time. You may
remove this restriction for viewing after playback stops. Leave the limit of 50
viewable events during playback or workbench performance may be
severely impaired.
Recovery
On Windows select Start → Run → cmd and execute the following command:
net start "IBM Rational Agent Controller"
On Linux find the directory AgentController/bin under the path where RPT was
installed and execute the following command:
./RAStart.sh
If this message (Figure 5-9) appears, it means RPT was unable to find the Agent Controller running on the remote agent computer.
Recovery
You must install Rational Performance Tester Agent on the remote computer, or if
it is already installed you must start the Agent Controller. The commands to start
the Agent Controller are the same as described in “No local Agent Controller” on
page 149.
Connection refused
The Agent Controller has a security feature that allows for restricting playback to
the local computer only.
If one of your agent computers has an Agent Controller configured for local
playback only, you will receive the message shown in Figure 5-10.
Recovery: Windows
Select Start → Run and execute the command cmd to start a command
prompt.
Change directory to the Agent Controller bin directory depending on where
RPT was installed. For example:
cd "Program Files\IBM\SDP70Shared\AgentController\bin"
Stop the Agent Controller:
net stop "IBM Rational Agent Controller"
Run the configuration utility:
setconfig
Take defaults by pressing Return, but specify network access mode ALL.
Start the Agent Controller:
net start "IBM Rational Agent Controller"
Recovery: Linux
Start a terminal shell window.
Change directory to the Agent Controller bin directory depending on where
RPT was installed. For example:
cd /opt/IBM/SDP70Shared/AgentController/bin
Stop the Agent Controller:
./RAStop.sh
Run the configuration utility:
./SetConfig.sh
Take defaults by pressing Return, but specify network access mode ALL.
Start the Agent Controller:
./RAStart.sh
This message means that schedule playback ended because the workbench lost communication with one or more of the agent computers.
Recovery
Go to the agent computer listed and try to determine what happened:
Is there a playback Java process running? If so, it could indicate the origin of
the problem is with the workbench, which is addressed below. You will have to
kill this Java process and any related typeperf or vmstat processes.
View the problem determination log as mentioned previously, looking for any
errors or exceptions.
Perform a search for a recent javacore.* file on the system if you suspect the
Java playback process ended abnormally. The first few lines of the core file
may indicate the source of the problem.
Try again and watch the memory size of the playback process. If the process
is consistently running at its max heap it may not have enough memory.
If there does not appear to be a problem with the agent computer, make sure the
workbench has sufficient memory. Try increasing the workbench heap. Or, try
reducing the level and amount of execution history.
Recovery
Increase TCP/IP ports. By default, Windows limits the number of available ports
to between 1000 and 5000. You can increase this number up to 65000. Be very
careful if you modify the registry. Add the following key to the registry to increase
the number of available ports:
HKEY_LOCAL_MACHINE → SYSTEM → CurrentControlSet → Services → Tcpip →
Parameters → MaxUserPort
Set this DWORD value to 65000 and reboot. The MaxUserPort value does not generally exist by default, but it can be added.
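If you prefer not to edit the registry interactively, the same value can be set from a command prompt on Windows versions that include the reg.exe utility; treat this command as an illustrative sketch:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65000 /f

A reboot is still required for the new setting to take effect.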
If your workload involves looping, look at where the loop is specified. Loops placed in the schedule result in new connections for each loop iteration, whereas loops within tests reuse connections across iterations.
You must change to the directory where the scripts reside. To get to the default
location under Windows, you would issue the following two commands:
cd c:\program files\ibm\sdp70shared\plugins
cd com.ibm.rational.test.lt.cmdlineexecute_7.*
Run the cmdline.bat command to display the options for using command line, for
example:
cmdline -eclipsehome "c:\program files\ibm\sdp70"
-plugins "c:\program files\ibm\sdp70shared\plugins"
-workspace d:\test-workspace -project testproj -schedule Schtest
Executing from the command line causes RPT to perform the playback just as when it is launched from the user interface. Results are stored in the workspace and can be viewed using the RPT user interface. There is no monitoring or control available when executing a schedule using the command line interface.
A performance test schedule is the engine that runs a test. The tester can
emulate simple or complex workloads by defining the schedule contents and
element details. For example, a schedule can:
Group tests under user groups, to emulate the actions of different types of
users
Set the order in which tests run, for example sequentially, randomly, or in a weighted order
Set the number of times each test runs
Run tests at a certain rate
Run user groups at remote locations
You add user groups, elements, and other items in the schedule contents section
of the schedule.
Figure 6-2 Schedule with a single user group running five sequential tests
If you run this schedule with 10 users, all 10 users would run all 5 tests
sequentially. This approach does not give the tester much control over the
execution. Another approach is to define a schedule with two or more user
groups, where each user group represents a type of user of the system (as
defined in the workload model). In schedule example two, two user groups have
been defined: Customers and Browsers, which represent the types of users of the
system (Figure 6-3).
Figure 6-3 Two group schedule with a 30/70% split of users doing different tests
If you run this schedule with 10 users, three users are assigned to the Customers
group and seven users are assigned to the Browsers group. When the run starts
the three customer users and seven browsers users start in parallel. You would
have three customers each running three tests sequentially and seven browsers
each running two tests sequentially. This is a more realistic test because each
user group contains tests that represent the actions that they do, and the
proportions of the user groups (70% and 30%) represent the proportions of the
users on the system.
In the Group name field, enter a descriptive name for the user group.
In the Group Size section (Table 6-2), select Absolute or Percentage, and
type the number of users or the percentage of users in the group.
Absolute: Specifies a static number of virtual users. Type the maximum number of virtual users that you want to be able to run. For example, if you type 50, you can run up to 50 virtual users each time you run the schedule. Typically, you create an Absolute user group only if the group does not add a workload, for example, if you were running a test to prepare a Web site for use or a test to restore a Web site to its initial state.
Specify whether the user group will run on your computer or on another
computer (Table 6-3).
Run this group on the local computer: The user group runs on your computer. Use this option if the workload is small or you are testing the schedule.
Run this group on the following locations: Generally, you should run user groups at a remote location. When user groups run from remote locations, the workbench activity on the local computer does not affect the ability to apply load. You must run a user group at a remote location in these cases:
You are running a large number of virtual users and your local computer does not have enough CPU or memory resources to support the load. You can conserve resources by running the users on different locations, so that fewer users run on each computer.
A test requires specific client libraries or software. The user group that contains this test must run on a computer that has the libraries or software installed.
Note: The data stored in the file includes information such as the host name and deployment directory. You can change this information later by opening the Test Navigator and double-clicking the file.
– In the Name field, type a descriptive name for the remote computer.
– In the Hostname field, type the IP address or the fully qualified host name
of the remote computer.
– In the Deployment Directory field, type the directory on the remote computer that will store the test assets. The directory, which will be created if it does not exist, stores the temporary files that are needed during a schedule run.
– In the Operating System field, select the operating system of the remote
computer and then click Finish.
To add an already declared location:
– Click Add Existing.
– In the Select Location window, select the computer on which the user
group will run and then click OK.
After you have added the user groups to a schedule, you typically add the tests
that each user group will run.
To maintain a set transaction rate for all schedule items that are children of
this loop (Figure 6-5):
– Select Control the rate of iterations.
– At Iterations rate, type a number and select a time unit. This sets the
actual rate.
– Select or clear Randomly vary the delay between iterations. Selecting this
check box causes the delay to vary slightly. This option models your users
more accurately because the iterations are spread out randomly over a
certain period of time.
– Select or clear Delay before the first iteration of the loop. Selecting this
check box staggers the first delay in each iteration so that you get a
realistic mix at the first iteration.
Note: You can also add a more granular loop within a specific test. For example,
you might want to loop through specific pages or page requests. To add a loop to
a test (Figure 6-6):
Open the test.
Select a page or page request. The loop is inserted before the selected page
or request.
Click Insert and select Loop.
In response to the prompt Would you like to move selected objects into the
new loop? click Yes or No. The loop is inserted into the test, and the Test
Element Details area displays the loop definition fields. If you clicked Yes, the
selected items are moved under the loop.
In the Test Element Details area, type the desired number of iterations in the
Iterations field.
Optional: Select the control the rate of iterations check box and type your
preferences for the pacing rate. In specifying a number of iterations per unit of
time, you set a fixed period of time for the iterations to complete. If you select
Randomly vary the delay between iterations, the total delay is randomly
distributed. If this check box is cleared, an identical delay occurs between
each iteration.
After you add a delay, you generally add the schedule items that the delay
controls. The schedule items are at the same level as the delay (Figure 6-7).
The schedule editor supports the standard cutting, pasting and copying of
schedule items using the Edit menu or keyboard shortcuts. Schedule items
include: User groups, tests, loops, and delays. If you cut a schedule item, that
item is not actually removed from the test until you perform another cut or paste
operation. When you paste an item, it is displayed in italics until you save the
schedule.
Execution duration
Log data collection
Use the recorded think time: This has no effect on the think time. The time that it takes for a test to play back is the same as the time that it took to record it. For example, if you were interrupted for five minutes during recording, the same five minute think time occurs when you run the test.
Specify a fixed think time: Each virtual user’s think time is exactly the same value, the value that you type. Although this does not emulate users accurately, it is useful if you want to play a test back quickly.
Increase/decrease the think time by a percentage: You type a think time scale, and each virtual user’s think time is multiplied by that percentage. A value of 100 indicates no change in think time. A value of 200 doubles the think times, so the schedule plays back half as fast as it was recorded. A value of 0 indicates no delay at all.
Vary the think time by a random percentage: Each virtual user’s think time is randomly generated within the upper and lower bounds of the percentages that you supply. The percentage is based on the recorded think time. For example, if you select a lower limit of 10 and an upper limit of 90, the think times will be between 10 percent and 90 percent of the original recorded think time. The random time is uniformly distributed within this range.
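As a simple arithmetic illustration: with a recorded think time of 10 seconds, a scale of 200 percent produces a 20-second think time, a scale of 50 percent produces a 5-second think time, and random bounds of 10 and 90 percent produce a think time somewhere between 1 and 9 seconds.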
See “Adding a random selector to a schedule” on page 164 for detailed steps.
For example, assume that a schedule contains the following user groups, all
running on the same computer (Figure 6-9):
If Number of Users = 2, they are all assigned to the Absolute_5 user group.
If Number of Users = 5, they are all assigned to the Absolute_5 user group.
If Number of Users = 10, five are assigned to the Absolute_5 user group,
three are assigned to the Browsers user group, and two are assigned to the
Buyers group. If the positions of the Browsers and the Buyers were reversed in the schedule, the Buyers would have three users and the Browsers would have two.
Before you run a user group at a remote location, you must meet these
requirements:
IBM Rational Agent Controller must be installed on the remote computer.
Firewalls on both the local computer and the remote computer must either be
disabled or be configured to allow connections among the computers.
Generally, you should run user groups at a remote location. You must run a user
group at a remote location in these cases:
You are running a large number of virtual users and your local computer does
not have enough CPU or memory resources to support this load. You can
conserve resources by running the users on different locations, so that fewer
users run on each computer.
A test requires specific client libraries or software. The user group that
contains this test must run on a computer that has the libraries or software
installed.
Figure 6-10 shows a schedule with a user group running at a remote location.
There are two steps you must take to gather performance data over time:
Set a schedule loop with a high iteration number (a number that will not be
reached).
Set the schedule to stop after a specific time.
Hard Stop: Stop the schedule run immediately. Neither the test log nor the results are saved.
Timed Stop: Stop the schedule run after an elapsed time. In Timeout, type a number and select a time unit. This is the default mode of stopping a schedule.
Graceful Stop: Allow computers at all remote locations to spend as much time as needed to fully stop on their own. The test log and the results are saved, and you can report on the results.
To set the amount of information collected in the test log and the rate of sampling:
Open the schedule.
In the Schedule Details area, select the Test Log tab.
Select the types of events you want to collect under What to Log. You can
select to collect:
– Errors only
– Errors and Warnings
– All Events
For each type of event, set the log level to one of the options shown in
Table 6-6.
Primary Test Actions: Typically, you set data collection at this level. Primary test actions include schedule actions plus the following actions:
Test verdict, test start, and test stop events
Loop iteration start and loop iteration stop events, if loops are present in the test
Transaction start and stop events, if transactions are present in the test
Page title verification points in HTTP tests. This option enables you to produce a Percentile report or to see any page title verification points that you have set. The following events are collected:
1. The page verdict. You see a page verdict only if a connection problem occurs or if you have set verification points. Any failures or errors are rolled up to the test verdict level.
2. The start and stop time of each page
3. The start and stop time of each loop and the number of iterations of each loop, if you have set loops within a page
4. The start and stop time of each transaction and the duration of each transaction, if you have set page-level transactions in your test
SAP screens and SAP screen title verification points for SAP tests.
Secondary Test Actions: Secondary test actions include primary test actions plus delay events:
For HTTP tests, request-level events. To collect information about response code or response size verification points that you have set, set data collection at this level of detail or greater:
1. The time that the first byte and last byte were sent
2. The time that the first byte and last byte were received
3. The character set of the response data
4. Expected and actual values of page-level verification points that you have defined
5. HTTP think events
6. The start and stop time of each transaction and the duration of each transaction, if you have set request-level transactions in your test
SAP actions for SAP tests.
Action Details: Action details include secondary test actions plus this information:
For HTTP tests, request and response data, for example, HTTP headers and POST data
Think times for SAP tests
All: All and Action Details provide the same level of data collection for both HTTP and SAP tests.
Note: All and Action Details produce large logs, especially if the tests are long
or you are running a large number of users. If you select these options, set a
sampling rate, rather than collecting all information from all users, which helps
prevent your log from getting too large.
To set a sampling rate (Table 6-7), select Only sample information from a
subset of users. The number or the percentage that you select is applied to
each user group. If you are running user groups at remote locations, the
number or percentage that you select is distributed evenly among the remote
locations.
Fixed number of users: The number is applied to each user group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If you specify 2 for this option, two users are sampled from each group.
Percentage of users: The percentage is applied to each user group, but at least one user will be sampled from each group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If your sampling rate is 10%, one user is sampled from the first group and 100 users are sampled from the second group. If your sampling rate is 25%, one user is sampled from the first group and 250 users are sampled from the second group.
You can export the statistics into a CSV file for further analysis. To export select
File → Export and select Test Log.
When you run the schedule it will give the impression that the network traffic is
being generated from multiple hosts.
You can insert custom code into your test to retrieve the runtime IP addresses of
each virtual user.
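A minimal sketch of such custom code follows. It is based on the custom code interfaces described in the RPT documentation (ICustomCode2, ITestExecutionServices, and the virtual user data area); the package names, the KEY constant, and the return type of getIPAddress are assumptions to verify against the Javadoc shipped with your version.

import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.IVirtualUserInfo;

public class LogVirtualUserIP implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    // Runs once for each virtual user that reaches this custom code element
    public String exec(ITestExecutionServices tes, String[] args) {
        // The virtual user data area carries per-user runtime information,
        // including the IP alias assigned to this user when aliasing is enabled
        IVirtualUserInfo vuInfo = (IVirtualUserInfo) tes
                .findDataArea(IDataArea.VIRTUALUSER)
                .get(IVirtualUserInfo.KEY);
        Object ip = vuInfo.getIPAddress();
        // Record the address in the test log so it can be checked after the run
        tes.getTestLogManager().reportMessage("Virtual user IP: " + ip);
        return String.valueOf(ip);
    }
}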
Before you can enable virtual users to use IP aliases you must:
Configure the aliases at the remote location.
Add the remote location to the user group.
To set the schedule so that the virtual users will use the IP aliases during a run:
Open the schedule.
Click the user group whose virtual users will use aliasing.
Click Run this group on the following locations. The list of locations shows
whether IP aliasing is enabled at that location.
To change whether IP aliasing is enabled or disabled, click a row in the table
and then click Open.
Each remote location has a separate problem determination log, located in the
deployment directory. You define remote locations and the deployment directory
at each location as properties of user groups.
To set the level of problem determination logging and the sampling rate:
Open the schedule.
In the Schedule Details area, select the Problem Determination tab.
Set Problem Determination log level to one of the options shown in Table 6-8.
All, Finest, Fine: Set these options only if you are requested to do so by technical support.
To set a sampling rate (Table 6-9), select Only sample information from a
subset of users. The number or the percentage that you select is applied to
each user group. If you are running user groups from remote locations, the
number or percentage that you select is distributed evenly among remote
locations.
Fixed number of users: The number is applied to each user group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If you specify 2 for this option, two users are sampled from each group.
Percentage of users: The percentage is applied to each user group, but at least one user will be sampled from each group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If your sampling rate is 10%, one user is sampled from the first group and 100 users are sampled from the second group. If your sampling rate is 25%, one user is sampled from the first group and 250 users are sampled from the second group.
To view the problem determination log, select File → Import → Log File and
import the appropriate Common Base Event XML log. Check the timestamp on
the log and select the one that matches the problem run.
Either the Performance Tester Test Agent must be installed on all of these
servers or the IBM Tivoli Composite Application Manager must be collecting the
necessary transaction trace records on all of these servers. The servers must be
instrumented and the data collection process active during the performance test
in order to collect the response time breakdown information.
None: No statistics are displayed during the run, and any report that depends on statistics is not generated. At the end of the run, you see only a Summary report that contains three items: the time the run took, whether the results were collected on the local computer or at a remote location (and which one), and the status of the run, which is Complete.
Schedule Actions: Select this option if you are interested only in the number of users. Schedule actions report the number of active and completed users in the run.
Primary Test Actions: Select this option to limit the processing required by the workbench. Primary test actions include all schedule actions plus:
For HTTP tests, HTTP page-related actions (attempts, hits, and verification points)
For SAP tests, SAP screens
Secondary Test Actions: Select this option to limit the processing required by the workbench. Secondary test actions include all primary test actions plus HTTP page element related actions. This option does not apply to SAP tests.
In Statistics sample interval, type a number and select a time unit. When you
run a schedule, the reports show such information as response time during a
specific interval, the frequency of requests being transferred during an
interval and average response trend during an interval. You set this interval
here.
To set a sampling rate (Table 6-11), select Only sample information from a
subset of users, then select one of the options in Table 6-11. The number or
percentage that you specify is applied to each user group. If you are running
user groups at remote locations, the number or percentage that you select is
distributed evenly among the remote locations
Fixed number of users: The number is applied to each user group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If you specify 2 for this option, two users are sampled from each group.
Percentage of users: The percentage is applied to each user group, but at least one user will be sampled from each group. Assume that your schedule contains two user groups. One group contains four users and one group contains 1000 users. If your sampling rate is 10%, one user is sampled from the first group and 100 users are sampled from the second group. If your sampling rate is 25%, one user is sampled from the first group and 250 users are sampled from the second group.
Typically, you should select Only store All Hosts statistics. Selecting this option reduces the amount of statistical data stored, thus enabling you to test a larger load over a longer period of time with significantly less memory usage. Although you will not be able to analyze data from each computer that adds to your test, this data is generally not of interest.
However, if you are running a performance test over different WANs and if
you are interested in seeing the data from each remote computer, you should
clear this check box. Be aware that storing statistics for each agent separately
involves saving N+1 statistics models where N is the number of test agents.
This has significant implications for Eclipse workbench memory usage during
long test runs that may result in Java heap overflow. This mode is only
recommended for short test runs where the data from multiple test agents
must be reported separately.
The measurement data needs to be shown in ways that demonstrate whether the
measurements represent a valid prediction of the underlying true system
response time when that system is moved into production service. A discussion
of the statistical basis for this assumption follows along with certain tests that can
be performed on the data to ensure the validity of its prediction.
After the statistical discussion, we describe the basic reporting capabilities and
how those reports can be customized to illuminate the results and analysis
desired for your project.
Further, we discuss several styles of data reporting and analysis used to achieve
some basic results and analysis goals with reports.
memory-less systems for the purpose of measuring response times if you verify
that time correlation is not a factor in your results.
In a causal system, if you have random arrival rates of stimuli, then your system
will emit responses with normally distributed response times for any given
system function. If you categorize your responses for the same system function
as samples from a single distribution, you can collect enough data samples to
predict the expected response time value for the system in production. This
assumes that the workload being measured is equivalent to the production
workload, the system software, hardware, and data state are equivalent to the
production environment, and that the measurement data is properly segmented
into sample sets for the distributions of the system functions being predicted.
For example, there is some table growing in the database that must be scanned
from first to last element before a response can be sent back. The length of time
scanning the table grows linearly with the number of table elements. For each
transaction, if the table size grows, the response time will grow as well.
By observing the average response time at the beginning and end of the test
interval and comparing them, you can determine if time correlation is a problem
with your measurement data. Normally by preloading your database tables with
production levels of data, the incremental change in response time over a reasonable test interval should be very small relative to the overall change in table sizes; it should be no more than a few percent and therefore insignificant in your test measurements.
If you desire a more formal approach, you could apply the Student t distribution to test whether the samples in the beginning interval and those from the ending interval have means that are statistically different. To quote from the Wikipedia entry on the Student’s t-test:
http://en.wikipedia.org/wiki/Student's_t-test
[The Student t test is used when] a test of the null hypothesis that the means
of two normally distributed populations are equal. Given two data sets, each
characterized by its mean, standard deviation and number of samples, we
can use some kind of t test to determine whether the means are distinct,
provided that the underlying distributions can be assumed to be normal.
You can therefore make the statement that you have 90% confidence that the
underlying mean response time is in the interval between 2.977 and 3.479
seconds.
By using some contrived data, we have derived some rules of thumb that can be
used to quickly assess whether you can trust the sample data to predict the
response time. If for example you use the following values: mean = 3.0, std. dev =
1.0, N = 20, you get the following 90% confidence interval of (2.613, 3.387). This
is plus or minus 12.8% which is marginally acceptable. So if the mean to
standard deviation ratio is at least 3 and there are at least 20 data samples, you
can be 90% confident that the underlying true response time is within 13% of
your measured response time mean.
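These intervals come from the standard confidence interval formula: the interval is the sample mean plus or minus t * (std. dev / sqrt(N)), where t is the Student t value for the chosen confidence level and N-1 degrees of freedom. As a worked check of the example above, using the usual table value of about 1.729 for 90% confidence and 19 degrees of freedom: 3.0 +/- 1.729 * (1.0 / sqrt(20)) = 3.0 +/- 0.39, which matches the interval quoted above.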
If your standard deviation is larger than 1/3 of the mean, you should do additional checks to make sure you are not representing two or three different sample sets as predictors of the same functional response. It could be that some of the responses are cached or pre-fetched and others are not, yielding a bimodal distribution. One way of figuring this out is to look at a frequency (histogram) chart with buckets set at ½ of a standard deviation wide. This provides a visual indication of whether the data appears to come from a Gaussian distribution (that is, to be normally distributed).
7.2.5 How much data is enough to predict the true response time
Using those same confidence interval calculations, you can derive how much
data is enough to predict the true response time:
If you want to have a 90% confidence interval for a value that is within 5% of
the true response time, you can calculate how many samples you need
assuming a mean to standard deviation ratio of 3/1:
N = (t / (0.05*3))**2
Begin by approximating the t value with 1.660 for n = 100.
This yields an answer of N = 122 samples. This is the minimum number of
samples required to get within plus or minus 5% of the true mean with 90%
confidence assuming a sample mean to standard deviation ratio of 3/1.
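As an arithmetic check of the numbers above: 1.660 divided by 0.15 is approximately 11.07, and 11.07 squared is approximately 122.5, which is where the figure of roughly 122 samples comes from.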
As you can see, going from 13% to 5% confidence range raises the number of
samples from 20 to 122. Many more samples are needed to obtain a tighter
confidence interval. The same is true if you want to have a higher confidence
value than 90%.
Usually a 90th or 95th percentile is used from the sample data for two purposes:
There may be outlier data points beyond the design of the system
If the vast majority of users are getting equal or better response time, that is
good enough.
In general, percentiles are only used for interactive response times and not for
other measurements.
Reference:
Measuring Computer Performance: A Practitioner's Guide, Chapter 4: Errors
in Experimental Measurements, by David J. Lilja, Cambridge University
Press, 2000.
Figure 7-1 HTTP Performance Report: Response vs. Time Detail tab
Figure 7-3 HTTP Performance Report: Response vs. Time Summary tab
In addition, some higher level targets (HTTP pages) have the following response
time counters:
Minimum [for Interval]—Indicates the minimum response time measurement collected for a target during a single sample interval. This counter is not typically included in the default Performance Tester reports.
Minimum [for Run]—Indicates the minimum response time measurement collected for a target during the entirety of the run (Figure 7-2).
Maximum [for Interval]—Indicates the maximum response time
measurement collected for a target during a single sample interval. This
counter is not typically included in default Performance Tester reports.
Maximum [for Run]—Indicates the maximum response time measurement
collected for a target during the entirety of the run (Figure 7-2).
Maximum Response Time for All <target> [for Interval]—Indicates the
maximum response time collected for all like targets during a single sample
interval. This counter is not typically included in standard Performance Tester
reports.
Maximum Response Time for All <target> [for Run]—Indicates the
maximum response time collected for all like targets during the entirety of the
run (Figure 7-4).
Minimum Response Time for All <target> [for Interval]—Indicates the minimum response time collected for all like targets during a single sample interval. This counter is not typically included in standard Performance Tester reports.
In most Performance Tester tests, performance data is collected from more than
one load driver. As this data is collected, it is aggregated to a single statistical
model located beneath the All Hosts location (Figure 7-5).
Figure 7-5 Statistical model hierarchy showing drivers and aggregate node
For most circumstances, this aggregate data is all the tester is interested in. However, if the tester is interested in analyzing response time from the point of view of a particular driver, he may elect to have that data persisted by deselecting Only store All Hosts statistics on the Statistics tab of the schedule root in the schedule editor (Figure 7-6).
Figure 7-6 Deselecting Only store All Hosts statistics for driver specific analysis
After executing a test with the option above deselected, the tester may open a
report focused on a particular driver’s data by right-clicking on the driver in the
Performance Test Runs View and using the provided context menus (Figure 7-7).
When modifying a report in this fashion, beware that the counter is always
selected from the same result and location that the report is focused on. Drag
and drop of a counter from a non-focus result or location is useful for
comparison but creates a static link to the selected results and location that may
be persisted in the report. Subsequent uses of a report persisted in this fashion
will result in always opening the statically linked result and thus will consume
more memory than expected. An alternative to selecting the counter beneath the
run of focus is to select the counter beneath the generic counters root. Generic
counters have no result associated with them and thus add no static link to the
modified report.
Figure 7-8 Adding a counter to a report through drag and drop from the statistical model
hierarchy
Figure 7-9 Accessing add/remove counter wizards from the report graphic context menu
Because the counter Run/Active Users/Count [for Run] is located beneath the
Run model root, it may be added to the graphic of interest using the Add/Remove
Run Statistics wizard. When displayed, the counter wizard will indicate any
counter in its focus category that is currently displayed on the focus graphic.
Counters from the category may be added or removed from the graphic by
selection and de-selection from the tree control. In the case at hand, we select
the Run/Active Users/Count [for Run] counter and click Finish (Figure 7-10).
Report wizard
Along with a host of other report customizations, counters may be added to a
graphic by editing the report with the Report wizard. To edit a report, select the
report instance as shown in the Performance Test Runs view and use the Edit
context menu (Figure 7-11).
Figure 7-11 Editing a report instance from the Performance Test Runs view
The first page of the Report wizard allows the tester to select the report tab he
wants to edit. In the case at hand, we select the Response vs. Time Summary
tab and click Edit (Figure 7-12).
Figure 7-12 Selecting a particular report tab for modification with the Report wizard
The next page of the wizard allows the tester to change the title of the tab or to
change its layout to comply with a provided template or custom layout. Because
no layout or title changes are desired, we click Next.
The next series of pages will be pairs for each graphic on the selected report tab.
The first of the pair allows title changes, graphic type changes, filter
applications, and other customizations. The second of each pair allows counter
additions and removals from the focus graphic. To add a counter to the Page
Element Response vs. Time tab, click Next until the counter addition/removal
page is shown for that graphic (Figure 7-13).
By default, only Generic Counters are shown in the wizard, and in most cases these will serve the tester’s intent. To add Run/Active Users/Count [for Run] to the graphic,
we navigate to the counter in the Generic Counters tree and click Add. Because
no other changes are required for this report, we click Finish.
Figure 7-13 Add/remove counter page for the Response vs. Time Summary tab
After adding the Active Users counter to the Page Element Response vs. Time
graphic, the report appears as shown in Figure 7-14.
Figure 7-14 Modified Page Element Response vs. Time graphic without scaling
In many cases, the difference in average value of two counters being correlated
on a single graphic causes one or more of the counters to wash out (appear as
a flat line across the bottom of the graphic). To correct this condition, the report
graphic may be edited in the Report wizard and a scale factor applied to
washed-out counters (Figure 7-15).
This scale factor causes the counters’ trends to be comparable to the other
trends on the graph. The true value of the counter may be seen in the hover text
of the scaled trend and easily interpreted mentally as scale factors are always
factors of 10.
Figure 7-15 Modified Page Element Response vs. Time graphic with scaling
A time range may be created and focused upon by right-clicking on any graphic
on any open report and using the Change Time Range context menu item
(Figure 7-16).
The Time Range wizard shows previously defined time ranges including the
default time range created at runtime and spanning the entirety of the run
(Figure 7-17).
To create a new time range, click New Time Range. A new time range definition
appears in the list where its endpoints and description may be edited. If the user
is merely interested in eliminating ramp-up and ramp-down, click Set to Steady
State for the new time range, and the endpoints of the selected time range are
modified to begin at the first sample interval after maximum active users is
achieved and to end at the last sample interval which contains that number of
active users (Figure 7-18).
To focus on the newly created time range, select it in the table and click Finish. A
progress dialog is displayed because all statistics are re-calculated for the new
time range. After generation is complete, the title of every report tab indicates
that it is focused on the new time range (Figure 7-19).
If at a later time the tester wants to change focus back to the default time range or to define a new time range, he may do so by revisiting the Time Range wizard.
Until the time range focus is changed through the wizard, any analysis activity
carried out on the result will be done on the focus time range.
This analysis is performed by executing a test with typical user behavior and 30
virtual users. After the result is captured, a steady state time range should be
defined focusing on the 30 user load. The HTTP Percentile report is accessed by
selecting a result in the Performance Test Runs view and then Percentile Report
from the HTTP Reports sub-context menu. As shown in Figure 7-20, the first tab
of the HTTP Percentile report considers all pages together and reports the
slowest response time seen by 85, 90, and 95 percent of the users in the 30 user
test.
As shown in Figure 7-21, the remaining three tabs present the same type of data
as the first tab but analyzed on a per-page basis.
Performance Tester provides transaction rates with regard to attempts and hits
for all supported transaction targets. An attempt is accomplished when a virtual
user sends a request. A hit is accomplished when the system under test returns a response. Any response counts, so a hit does not imply a return code that would mark success, such as an HTTP 200 code.
Although not always included on a default report, many protocols provide hit and
attempt rates for the primary and even secondary targets. If desired, the tester
may create a custom report or report tab to analyze these rates in their for
Interval or for Run format. Figure 7-23 shows the hit rates for individual pages in
an HTTP test.
Figure 7-23 Custom report tab displaying hit rates for individual pages
Once Add is clicked and a location is selected, the tester is then presented with
the Resource Monitoring wizard. The left side of the wizard contains three types
of resource monitoring data sources. The data source options available are IBM
Tivoli Monitoring, Unix rstatd, and Windows Performance Monitor. Once a source
is selected, the user is prompted for authentication information if applicable
(Figure 7-26).
The second tab of the wizard allows the user to select the counters he or she wishes to collect from all that the source has to offer. Upon first edit, a default set of counters is preselected. If the information that the tester wants to gather is not in the preselected list, clearing Show only selected counters causes a complete list of counters from the data source to be displayed.
The third tab of the wizard allows the tester to specify the polling interval to use when collecting data from the sources. It is generally good practice to capture resource data on the same interval as all other Performance Tester statistics.
Run the transaction report to obtain transaction throughput rates after filtering
for steady state range.
Run percentile report for HTTP response times for steady state range.
Export results from performance transaction percentile and VP reports.
[Diagram: the Resource Monitoring component of the Eclipse Platform with its pluggable data collectors: IBM Tivoli Monitoring, Windows Performance Monitor, and UNIX rstatd.]
Because each data collector will have a specific mechanism for retrieving data
from a remote host, the Resource Monitoring platform is designed to allow
custom Java code to be written to implement the data collection mechanism
using a common interface.
Once the data is retrieved, the data must be transformed into the Eclipse Test &
Performance Tools Platform (TPTP) statistical model format. This model format is
based on the Eclipse Modeling Framework (EMF) and is designed to be a
generic method of storing statistical data using XML. The open source Eclipse
TPTP project provides APIs that abstract the details of storing this data and make it easy to store retrieved data.
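As a purely hypothetical illustration of this plug-in pattern (the interface and method names below are invented for this sketch and are not the actual TPTP or Performance Tester APIs), a data collector could be shaped along these lines:

import java.util.Map;

// Hypothetical contract for a resource monitoring data collector plug-in.
// A real implementation would be registered through the tool's extension
// points and would write its observations into the TPTP/EMF statistical model.
public interface ResourceDataCollector {

    // Open a session against the monitored host (PDH, rstatd, SOAP, and so on)
    void connect(String host, Map<String, String> credentials) throws Exception;

    // Retrieve the current value of one counter, identified by a source-specific path
    double poll(String counterPath) throws Exception;

    // Hand the observation to the statistical model for persistence
    void store(String counterPath, long timestampMillis, double value);

    void disconnect();
}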
Tips:
When using IBM Tivoli Monitoring as the data source, the host field
is the location of the resource management collection agent (or
monitoring agent) and not the Tivoli Enterprise Monitoring Server.
When using Windows Performance Monitor or UNIX rstatd monitor
as the data source, the host field is the target Windows or UNIX
system that has to be monitored.
Click Add Existing (Figure 8-4) if you have existing locations in the
workspace that you want to add and configure to the current working
performance schedule.
Figure 8-4 Add existing location to a performance schedule for resource monitoring
Once you have enabled resource monitoring you have to specify the data
sources to collect from.
Any configuration changes you make for a particular data source are stored with
that location. This means that you have to set up a data source only once. If you
export a schedule, it will contain the data source configuration information. This
includes potentially sensitive information, such as stored passwords.
Performance counter data is stored within the Windows Registry and can be
programmatically accessed using two methods (Figure 8-5):
The first method is to access the Windows Registry directly; however, this approach can be complex and cumbersome.
The second method, which is the recommended method, is to use the
performance data helper (PDH) API. The PDH interface is an abstraction of
the details required to retrieve performance object and performance counter
data from the Windows registry. In addition, the interface automatically returns
values that are adjusted for appropriate units and scale. The interface is also
the foundation for the Windows Performance Monitor (also known as
Perfmon).
[Diagram: custom applications can consume performance counters through the Windows Performance Monitor, through the Performance Data Helper (PDH) layer, or by reading the Windows Registry directly; these interfaces sit on top of the Windows Kernel.]
Moreover, any application already using the Windows registry or the PDH interface is automatically able to monitor these special counters once the counters have been registered for collection.
In the case of Performance Tester, a custom application (pdh.dll) was built using
the PDH interface to enable data collection from systems running Microsoft
Windows. The custom application integrates with the Eclipse Platform in the form
of a plug-in extension to Performance Tester. This plug-in interacts with the
custom application using the Java Native Interface (JNI™).
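The JNI bridge can be pictured with a small, purely hypothetical sketch; the class and method names below are invented for illustration and are not the ones shipped in the product:

// Hypothetical Java wrapper around a native helper built on the PDH API
public class PdhBridge {
    static {
        // Load the native library (for example, pdh.dll) packaged with the plug-in
        System.loadLibrary("pdh");
    }

    // Implemented in native code against the Performance Data Helper API
    public static native String[] listCounters(String host);
    public static native double collectCounter(String host, String counterPath);
}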
The following is a list of the different performance detail levels available from the
PDH interface.
Novice—Indicates that this counter may be meaningful to most users. This is
the most common counter detail level.
Advanced—Indicates that this counter is likely to be useful only to advanced
users.
Expert—Indicates that this counter is likely to be useful only to the most
advanced users.
Wizard—Indicates that this counter is not likely to be useful to any users.
In the case of Performance Tester, the custom application (pdh.dll) uses the
Advanced detail level. This detail level includes generic counters (including
processor, disk, memory, and more) and specific counters (including network
card interfaces). In this release, this detail level is not configurable.
Network interface
A user of the Windows Performance Monitor data collector must ensure that the
network interface on the physical machine that data is being collected from is
configured correctly. Users must ensure that the connected network interface has
File and Printer Sharing for Microsoft Networks enabled (Figure 8-6).
This is required because the PDH uses the Windows net use command to
establish a connection to remote machines for data collection. This service can
be enabled by opening the Properties dialog on a network connection listed in
the Control Panel of any Microsoft Windows system.
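A quick way to confirm that this network path is available is to attempt the same kind of connection manually from the workbench machine; the host, domain, and user below are placeholders:

net use \\remotehost\IPC$ /user:DOMAIN\username

If this command succeeds, the PDH-based collector should be able to reach that host as well.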
A user of Performance Tester can collect performance counter data from a system running the Microsoft Windows operating system only when the Performance Tester workbench client is itself running on Microsoft Windows. This limitation exists because PDH uses the operating system at both the client and the target endpoint for communication.
Enter a target Windows host name from which to collect data, and select Windows Performance Monitor from the list of available data sources (Figure 8-3 on page 215).
Note: The host you want to monitor must be accessible through the Windows
network. Typically, if you are able to connect to a shared hard disk drive on the
remote host from the workbench, then you will also be able to use the remote
host as a source of resource monitoring data. If the remote host is not
accessible through the Windows network, an error message will display when
you attempt to use the Resource page to select the type of data that you want
to capture.
On the Location tab, type the user name, password, and domain (Figure 8-7).
The user name and password must match a Windows user account on the target host system.
The domain is optional; it is required only if you need to perform cross-domain authentication.
Select Save Password to save your password locally. If you do not save your
password, you might be prompted for it (depending on the host system
configuration) when editing the configured location or when running
performance schedules that use the location.
On the Resource tab, select the type of data that you want to capture
(Figure 8-8). The tree view shows the host and all of its respective counter
groups and counters. The tree is organized by performance objects, denoted by
folder icons, and performance counters, denoted by stop-watch icons.
Clearing Show only selected counters allows you to see all available counters
(Figure 8-9). Be selective; monitoring all possible resource data requires
substantial amounts of memory. Hover over a counter with your mouse to see
details about what that counter measures.
Figure 8-9 Windows Performance Monitor Resource tab showing all available counters
On the Options tab, the time interval properties can be configured (Figure 8-10):
Type the Polling Interval in seconds, for collecting resource data. For example,
if you accept the default of five seconds, counter information will be collected
at five second intervals from the specified host during the schedule run.
Type the Timeout Interval in seconds. If the resource monitoring host does not
respond within this amount of time during a schedule run, an error is logged.
Once the configuration is complete and saved, the resource monitoring location
is added to the performance schedule (Figure 8-11).
This daemon may already be installed and running on most Solaris™ and Linux installations; however, this will depend on the vendor and distribution. The rstat daemon is normally started by the inetd daemon.
Percentage User CPU Time: Percentage of non-idle processor time spent in user mode.
Percentage of Idle CPU Time: Percentage of time that the processor is idle.
Total Disk Transfers per second: Total number of disk transfers on each of the disk interfaces (per second).
Total VM Pages Paged IN per second: Total number of pages paged in per second (paging vs. swapping: see note a).
Total VM Pages Paged OUT per second: Total number of pages paged out per second (paging vs. swapping: see note a).
Total VM Pages Swapped IN per second: Total number of pages swapped in per second (paging vs. swapping: see note a).
Total VM Pages Swapped OUT per second: Total number of pages swapped out per second (paging vs. swapping: see note a).
Total Inbound packets on all interfaces per second: Total number of inbound packets on all interfaces per second.
Total Inbound errors on all interfaces per second: Total number of inbound errors on all interfaces per second.
Total Outbound packets on all interfaces per second: Total number of outbound packets on all interfaces per second.
Total Outbound errors on all interfaces per second: Total number of outbound errors on all interfaces per second.
Total collisions seen on all interfaces per second: Total number of collisions seen on all interfaces per second.
Total context switches per second: Total number of context switches per second.
Average number of jobs in run queue (1 minute average): Number of jobs in the run queue averaged over the last 1 minute (see note b).
Average number of jobs in run queue (5 minute average): Number of jobs in the run queue averaged over the last 5 minutes (see note b).
Average number of jobs in run queue (15 minute average): Number of jobs in the run queue averaged over the last 15 minutes (see note b).
a. Paging is a light-weight mechanism whereby the operating system reads/writes
pages of memory from/to the disk; swapping is similar to paging, but is more
heavy-weight in that entire processes can have all their pages read/written from/to
disk.
b. The jobs on the run queue are the processes that are ready to run on the pro-
cessor.
Depending on the setup of your account and the UNIX-based operating system,
the system administrator might control the location and maintenance of the RSM
programs that monitor your account. The RSM program obtains the performance
data from the rstatd server, which returns performance statistics obtained from
the kernel.
Now rstatd should be running. Running the following command will help verify
that rstatd is actually started:
rpcinfo -p localhost
The daemon should be listed several times, which is due to the different RPC
versions available for clients to use.
Portmapper service
The rstatd uses the Remote Procedure Call (RPC) system to facilitate connecting
to the system portmapper daemon running on a well-known port. The rstatd will
use RPC to create a listener that asks the portmapper for a port to use for
communication with a remote client. The portmapper selects an unused port and
assigns it to the listener.
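You can verify the portmapper itself with the same rpcinfo -p command. The listing below is only an illustrative sample; program versions and port numbers will vary by system:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100001    3   udp  32780  rstatd
    100001    5   udp  32780  rstatd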
The output indicates that the portmapper is operating using the TCP and UDP
protocols using port 111, the default port.
If the portmapper daemon is not listed in the output of the rpcinfo command, as
shown above, the daemon may not be installed or not configured to run. It is
recommended to contact your system administrator and request that the
portmapper RPC program be started.
The most commonly used RPC program numbers are held in a file called /etc/rpc on each system. The file consists of a series of entries, one per service, such as:
portmapper 100000 portmap sunrpc
rstatd 100001 rstat rstat_svc rup perfmeter
rusersd 100002 rusers
nfs 100003 nfsprog
ypserv 100004 ypprog
Enter a target Linux/UNIX host name from which to collect data, and select the UNIX rstatd monitor item from the list of available data sources (Figure 8-3 on page 215).
On the Resource page, select the type of data you want to capture (Figure 8-12).
The tree view shows all available performance counters, with a default set of
counters pre-selected.
Clearing Show only selected counters allows you to see all available counters
(Figure 8-13). Be selective; monitoring all possible resource data requires
substantial amounts of memory. Hover over a counter with your mouse to see
details about what that counter measures.
Figure 8-13 UNIX rstatd monitor Resource tab showing all available counters
On the Options tab, the time interval properties can be configured (Figure 8-14):
Change the Connection information if needed. Typically, you only need to
change this information if a firewall is blocking the default port. If the
portmapper has been configured to use a different Protocol or Portmapper™ Port number, this area allows custom configuration to be applied for the specific target Linux/UNIX host being monitored.
Type the Polling Interval in seconds, for collecting resource data. For example,
if you accept the default of 5 seconds, counter information will be collected at
5-second intervals from the specified host during the schedule run.
Type the Timeout Interval in seconds. If the resource monitoring host does not
respond within this amount of time during a schedule run, an error is logged.
IBM Tivoli Monitoring also provides a customized agent called the Universal
Agent. The Universal Agent can monitor any data that is collectable in a given
environment. For example, the Universal Agent can monitor the status of your
company Web site to ensure that it is available.
Each agent has special metadata that describes itself. This metadata is required
when integrating data collection into a custom client, such as Performance
Tester. Today, Performance Tester is capable of data collection from the following
ITM Monitoring Agents:
Operating system agents
– Monitoring Agent for Windows OS
– Monitoring Agent for Linux OS
– Monitoring Agent for UNIX OS
– Monitoring Agent for z/OS
Application agents
– Monitoring Agent for Citrix
– Monitoring Agent for IBM DB2
– Monitoring Agent for IBM WebSphere Application Server
– Monitoring Agent for IBM WebSphere MQ
– Monitoring Agent for Oracle Database
– Monitoring Agent for SNMP-MIB2 (only)
– Monitoring Agent for IBM Tivoli Composite Application Manager for
WebSphere
Once data is retrieved using SOAP requests, the well-formed XML responses are transformed into the Eclipse Test and Performance Tools Platform (TPTP) statistical model format. This model format is based on the Eclipse Modeling Framework (EMF) and is designed as a generic method of storing and representing statistical data, regardless of which system or data collector the data points are observed from. The generic model format allows Performance Tester and other Eclipse-based views to render reports based on this data.
The integration between Performance Tester and IBM Tivoli Monitoring is shown
in Figure 8-17.
Figure 8-17 Rational Performance Tester integration with IBM Tivoli Monitoring
There are two mechanisms by which statistical data can be collected from ITM:
Real-time: Real-time data collection requires making SOAP requests to the monitoring server, requesting the most recently acquired data point.
Importing historically: Importing historical statistical data is possible because ITM is a managed solution, where the data observed over time is persisted to the Tivoli Data Warehouse.
Here is a sample HTTP POST request that can be used to query ITM for some
data.
POST http://localhost:1920///kdsmain/soap HTTP/1.0
Accept: */*
Accept-Language: en-us
Referer: http://localhost:1920///kdsmain/soap/kshhsoap.htm
interfacename: CT_SOAP
messagetype: Call
Content-Type: text/xml
Proxy-Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;
.NET CLR 1.1.4322;.NET CLR 1.0.3705)
Host: localhost:1920
Content-Length: 96
Pragma: no-cache
<CT_Get><userid>sysadmin</userid><password></password>
<object>Web_Applications</object></CT_Get>
– CT_Get is a command that the ITM SOAP server understands; it declares
the opening of a new transaction query.
– userid and password are used for authenticating with the SOAP server.
– object refers to the ITM object name being queried (explained in more
detail in the next section). To get the list of available agents, the ITM
object name is set to ManagedSystem.
Types of queries
In this section we list three types of queries.
Example response:
<?xml version="1.0" encoding="ISO-8859-1"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<SOAP-CHK:Success
xmlns:SOAP-CHK = "http://soaptest1/soaptest/"
xmlns="urn:candle-soap:attributes">
<TABLE name="O4SRV.INODESTS">
<OBJECT>ManagedSystem</OBJECT>
<DATA>
<ROW>
<Timestamp>1050620105552004</Timestamp>
<Name>RDANEK:UA</Name>
<Managing_System>HUB_RDANEK</Managing_System>
<ORIGINNODE>RDANEK:UA</ORIGINNODE>
<Reason>FA</Reason>
<Status>*OFFLINE</Status>
<Product>UM</Product>
<Version>04.10.00</Version>
<Type>V</Type>
<HLOCFLAG>L</HLOCFLAG>
<HHOSTINFO>WinXP~5.1-SP1</HHOSTINFO>
<HHOSTLOC></HHOSTLOC>
<CT_Host_Address>ip:#169.254.198.66[38213]<NM>RDANEK</NM>
</CT_Host_Address>
</ROW>
</DATA>
</TABLE>
</SOAP-CHK:Success></SOAP-ENV:Body></SOAP-ENV:Envelope>
The response is returned in a well-formed XML document structure. Now let
us dissect the response:
– The OBJECT element echoes the same ITM object name that was sent in
the initial SOAP request.
– The DATA element contains the result data of the query, which is of most interest to
us. Each separate XML fragment of data is enclosed in a ROW element.
– Within each ROW element are the attributes that are known for the
given object. For example, Timestamp, Name, and Managing_System are
some of the available attributes.
– The NAME element contains the name of the monitoring agent.
– The CT_Host_Address contains the host and/or address on which the
Monitoring Agent is running.
The following example response is from a query against the NT_Process object,
requesting its Thread_Count attribute. The response is again returned in a
well-formed XML document structure: the OBJECT element echoes the ITM object
name that was sent in the initial SOAP request, and the DATA element contains
the result data of the query, with each separate XML fragment of data enclosed
in a ROW element. In this example, each ROW reports the Thread_Count observed
for one matching process.
Example response:
<?xml version="1.0" encoding="ISO-8859-1"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<SOAP-CHK:Success xmlns:SOAP-CHK = "http://soaptest1/soaptest/"
xmlns="urn:candle-soap:attributes">
<TABLE name="KNT.WTPROCESS">
<OBJECT>NT_Process</OBJECT>
<DATA>
<ROW>
<Timestamp>1050620132453887</Timestamp>
<Thread_Count dt="number">1</Thread_Count>
</ROW>
<ROW>
<Timestamp>1050620132453887</Timestamp>
<Thread_Count dt="number">64</Thread_Count>
</ROW>
<ROW>
<Timestamp>1050620132453887</Timestamp>
<Thread_Count dt="number">3</Thread_Count>
</ROW>
</DATA>
</TABLE>
</SOAP-CHK:Success></SOAP-ENV:Body></SOAP-ENV:Envelope>
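The request that produces a response of this form is built the same way as the
ManagedSystem query shown earlier, with the object set to NT_Process. The
following body is a sketch only: the <attribute> and <target> elements and the
managed system name shown are assumptions, so verify them against the ITM
SOAP documentation for your installation.
<CT_Get><userid>sysadmin</userid><password></password>
<object>NT_Process</object>
<attribute>Thread_Count</attribute>
<target>Primary:RDANEK:NT</target></CT_Get>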
Enter the target host name on which the ITM Monitoring Agent resides and from
which to collect data, and select the IBM Tivoli Monitoring item from the list of
available data sources (Figure 8-3 on page 215).
On the Tivoli Enterprise Monitoring Server tab, specify the monitoring server that
you want to use to capture resource monitoring data (Figure 8-18):
Type the IP address or the fully qualified host name of the monitoring server in
the Host field on the Tivoli Enterprise Monitoring Server tab. This is different
from the Host field at the top of the Create and manage configurations wizard,
which indicates the monitoring agent.
Type the user name and password for the monitoring server in Authentication.
Change the Connection information if needed. Typically, your Tivoli system
administrator will specify this information.
Select Save Password to save your password locally. If you do not save your
password, you might be prompted for it (depending on the host system
configuration) when editing the configured location or when running test
schedules that use the location.
After you have specified the monitoring server, you can select resources to
capture. If the host is not managed by the monitoring server, you will see an error
message.
Figure 8-18 IBM Tivoli Monitoring Tivoli Enterprise Monitoring Server tab
On the Resource page, select the type of data that you want to capture
(Figure 8-19). The tree view shows the host and all of its available IBM Tivoli
Monitoring agents, and their respective counter groups and counters.
Clearing Show only selected counters allows you to see all available counters
(Figure 8-20). Be selective; monitoring all possible resource data requires
substantial amounts of memory. Hover over a counter with your mouse to see
details about what that counter measures.
Figure 8-20 IBM Tivoli Monitoring Resource tab showing all available counters
On the Options tab, the time interval properties can be configured (Figure 8-21):
Type the Polling Interval in seconds, for collecting resource data. For example,
if you accept the default of 5 seconds, counter information will be collected at
5-second intervals from the specified host during the schedule run.
Type the Timeout Interval in seconds. If the resource monitoring host does not
respond within this amount of time during a schedule run, an error is logged.
Once the configuration is complete and saved, the resource monitoring location
will be added to the performance schedule (Figure 8-22).
To initiate real-time data collection from an ITM monitoring agent, select Run →
Profile or use the toolbar button to open the launch configuration dialog
(Figure 8-23). This profile action will ask you to switch into the Profile and
Logging perspective of the Performance Tester product. This perspective enables
profiling tools and has a different layout of the views on the workbench.
Figure 8-24 Create and run a real-time IBM Tivoli Monitoring configuration
Figure 8-25 Analyze statistical resource information collected in real-time using IBM Tivoli Monitoring
In the workbench client, you can import the data using two methods:
1. An import wizard that is accessible by selecting File → Import (Figure 8-26).
2. An import wizard that is accessible by right-clicking on a performance report
in the Performance Test Runs view (Figure 8-27) or on the resources report,
which is the last tab in the view.
Once the wizard is loaded, you must specify a resource location configured using
the ITM data source (Figure 8-28).
Of the three data collectors offered in Performance Tester, only ITM is capable
of collecting historical data. If you try to add a resource location configured for
any other data source, collection for that location will be ignored during
import and the user will be alerted (Figure 8-29).
Figure 8-29 Warning indicating that only historical data sources can be used for
importing data
The next stage of the import wizard is to specify the time period from which to
import data (Figure 8-30). This can be specified as either the start and end times
or as the amount of time you specify, counted backward from the time that you
click Finish. You might need to adjust the time interval to compensate for clock
discrepancies between the workbench and the monitoring server.
Note: When specifying the time in a specific number of units, note that, for
consistency, month is defined as 30 days and year is defined as 365 days.
Days refers to 24-hour periods, not calendar days. The units of time selected
are subtracted from the point in time when you click Finish to import the data
to give the start time. For example, if you select 2 months, the time period will
be the 60 days (24-hour time periods) immediately prior to the point in time
that you click Finish.
To import data from ITM, the ITM monitoring agent itself must be configured for
historical data collection. If the agent is not properly configured, an error
message will be displayed to the user when attempting the import (Figure 8-31).
Figure 8-31 Error that ITM Monitoring Agent is not configured for historical data collection
After the import process has completed, the data is displayed in one of two ways,
depending on how the import wizard was invoked:
1. If the user invoked the wizard from the File → Import menu, this action
indicates that there is no association to a performance result and the data is
displayed using the default statistical view in the Profile and Logging
perspective (Figure 8-32).
2. If the user invoked the wizard by right-clicking on a performance result, this
action indicates the imported data is associated to the selected performance
result. The data can be viewed using the Resource tab in the default report
when the performance result is opened.
Figure 8-32 Display and analyze statistical resource information imported from IBM Tivoli Monitoring
The Resource Counters configuration dialog (Figure 8-34) allows you to select
specific counters and adjust the scaling factor for each counter. Adjusting the
scaling factor for a counter is important when trying to analyze correlation
between counters that have different data types. For example, processor time is
a percentage quantity, whereas memory is measured in bytes.
Once the counters are selected and the configuration dialog is complete, the
counters are drawn on the selected graph (Figure 8-35). Hovering over any data
point using the mouse will provide a tool-tip containing information about the data
point, such as the value of the observation.
Figure 8-35 Response vs. Time summary correlated with the % Processor Time resource counter
Figure 8-36 Drag-and-drop a resource counter from the Performance Test Run view
The layout of the graph can also be customized using the Customize menu item
when you right-click the graph. This action allows you to set the X and Y axis
limits, configure the legend, and change the time range.
Measuring the response time for a transaction from end-to-end is known as the
response time breakdown (RTB) feature in Performance Tester. Response time
breakdown allows customers to monitor their J2EE applications in real-time as
they are executed in a distributed application server environment.
ARM instrumentation has grown in popularity since it was first introduced in 1996
and is now built into software from leading vendors such as IBM,
Hewlett-Packard, SAS, and Siebel. Software that is not already ARM-enabled
can be made to have calls to the ARM API embedded directly in the source code,
or calls can be inserted into the machine code at run-time.
Transaction correlation
To correlate transactions that occur on different processes and machines in a
distributed environment, the ARM standard calls for the use of an ARM
correlator. A correlator is generated for every root (or edge) transaction and each
of its child transactions. A business transaction is determined by building a tree
using these correlators. This process allows the ARM implementation to trace the
path of a distributed transaction through the infrastructure.
ARM implementation
Performance Tester leverages an existing ARM implementation present in the
IBM Tivoli Composite Application Manager (ITCAM) for Response Time Tracking
product, which was formerly known as IBM Tivoli Monitoring for Transaction
Performance (TMTP).
The Tivoli transaction monitoring infrastructure allows the Tivoli ARM
engine to be notified of which transactions need to be measured. This makes it
possible to monitor a specific type of transaction, such as transactions that invoke
a Servlet, or to monitor a single transaction end-to-end. In addition, transactions
can be monitored when a specific user-defined threshold has been violated.
Correlation using the Tivoli ARM engine is possible using four globally unique
identifiers (GUIDs):
Origin host UUID—Indicates the ID of the physical host where the transaction
originated
Root transID—Indicates the ID of the root transaction from which the
transaction originated
Parent transID—Indicates the ID of a transaction that has sub-transactions
Transaction classID—Indicates the ID of any transaction
Once all the transactions are observed at the Tivoli ARM engine, they are
sent to the Management Server, where they are correlated and persisted.
The Tivoli ARM engine has a plug-in system by which third-party applications
can register with the engine through a well-known Java interface.
Once registered, the application will receive notifications of ARM events and, if
desired, the entire ARM objects as well. This interface is known as the ARM
plug-in interface. It enables applications to process ARM events in
real-time, rather than waiting for the events to be available at the management
server.
Furthermore, the interface also allows applications to append data to the ARM
correlator that is sent from one machine to another. This can be dangerous:
according to the ARM specification, the total number of bytes must never exceed
the size of the correlator, or there may be adverse effects in the environment. The
Tivoli ARM engine does not manage any data that is appended to the ARM
correlator; it is therefore up to the third-party application developer to ensure the
stability of any modification to the original ARM correlator generated by the
engine.
9.1.3 Instrumentation
To monitor an application using ARM, a software developer must insert snippets
of application code, known as probes, into it. These probes make calls to the
Open Group ARM standard API, and when the application (with the probes) is
executed, the ARM implementation processes these calls; each such measured
unit of work is known as a transaction. The act of inserting the probes into the
application to be monitored is known as instrumentation, because the probes
behave as the response time measurement device when the application is
executing.
Having a software developer manually modify application code to insert the
probes can be cumbersome and may increase the overhead of maintaining the
application code itself. Various technologies have been developed in industry to
help automate this process. One mechanism that is widely used is known as
byte-code instrumentation. With this mechanism, the application source code is
compiled into sets of byte-codes (which are intended to be executed by a virtual
machine), and byte-codes representing the probes are inserted before and after
specific locations in the set of byte-codes representing the original application.
JITI starts when the application classes are loaded by the JVM (for example, the
IBM WebSphere Application Server). The injector alters the Java methods and
constructors specified in the registry by injecting special byte-codes in the
in-memory application class files (Figure 9-4). These byte-codes include
invocations to hook methods that contain the logic to manage the execution of
the probes. When a hook is executed, it gets the list of probes currently enabled
for its location from the registry and executes them. JITI probes make ARM calls
and generate correlators in order to allow sub-transactions to be correlated with
their parent transactions.
Note: Notice that the only difference between version 5 and version 6 in the above
paths is the starting anchor directory. In version 6, the server product
introduced the notion of profiles, which can be placed in locations or partitions
other than where the product itself is installed. The location where the profile is
stored is known as the WAS_PROFILE_HOME directory, whereas WAS_HOME is
where the product is installed.
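For example, on a typical Windows installation of WebSphere Application Server
V6.x, these directories might be as follows (default paths shown purely for
illustration; your installation may differ):
WAS_HOME          C:\Program Files\IBM\WebSphere\AppServer
WAS_PROFILE_HOME  C:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01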
variables.xml
The variables.xml file consists of file system path information. This path
information is used to define several variables that are used in the server.xml
file. The variables.xml file may look like the following XML excerpt:
<variables:VariableMap xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI"
xmlns:variables="http://www.ibm.com/websphere/appserver/schemas/5.0/
variables.xmi" xmi:id="VariableMap_1">
<entries xmi:id="VariableSubstitutionEntry_1166740318828"
symbolicName="SERVER_LOG_ROOT" value="${LOG_ROOT}/server1"
description="The log root directory for server server1."/>
<entries xmi:id="VariableSubstitutionEntry_1166740318891"
symbolicName="WAS_SERVER_NAME" value="server1"
description="Name of the application server."/>
<entries xmi:id="VariableSubstitutionEntry_1166804719953"
symbolicName="MA_LOG_QUALDIR" value="J2EE/server1_100"
description="Base directory for logging for the TMTP Management
Agent"/>
<entries xmi:id="VariableSubstitutionEntry_1166804720094"
symbolicName="MA" value="C:/PROGRA~1/IBM/SDP70/DCI/rpa_prod/TIVOLI~1"
description="Base installation directory of the TMTP Management
Agent"/>
<entries xmi:id="VariableSubstitutionEntry_1166804720500"
symbolicName="MA_LIB"
value="C:/PROGRA~1/IBM/SDP70/DCI/rpa_prod/TIVOLI~1/lib"
description="Management Agent library directory"/>
<entries xmi:id="VariableSubstitutionEntry_1166804720672"
symbolicName="MA_INSTRUMENT"
value="C:/PROGRA~1/IBM/SDP70/DCI/rpa_prod/TIVOLI~1/app/instrument/61"
description="Base installation directory of the TMTP J2EE
Instrumentation application."/>
<entries xmi:id="VariableSubstitutionEntry_1166804720828"
symbolicName="MA_INSTRUMENT_LIB"
value="C:/PROGRA~1/IBM/SDP70/DCI/rpa_prod/TIVOLI~1/app/instrument/
61/lib"
description="TMTP J2EE Instrumentation application library
directory."/>
<entries xmi:id="VariableSubstitutionEntry_1166804721031"
symbolicName="MA_INSTRUMENT_APPSERVER_CONFIG"
value="C:/PROGRA~1/IBM/SDP70/DCI/rpa_prod/TIVOLI~1/app/instrument
/61/appServers/server1_100/config"
description="TMTP J2EE Instrumentation application config
directory."/>
</variables:VariableMap>
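For example, the MA_INSTRUMENT entry defined above is what allows the
server.xml excerpt in the next section to refer to the instrumentation directory
simply as ${MA_INSTRUMENT}; the other symbolicName entries are referenced in
the same ${NAME} form.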
server.xml
The server.xml file includes a set of Java Virtual Machine (JVM) arguments
(also known as genericJVMArguments) that are loaded when WAS executes its
bootstrap procedures. These arguments appear at the bottom of the XML file, in
the JavaProcessDef process definitions section, and may appear similar to the
following excerpt:
<jvmEntries xmi:id="JavaVirtualMachine_1166740318031"
verboseModeClass="false"
verboseModeGarbageCollection="false"
verboseModeJNI="false"
runHProf="false"
hprofArguments=""
debugMode="false"
debugArgs="-Djava.compiler=NONE -Xdebug -Xnoagent
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=7777"
genericJvmArguments="
-Xbootclasspath/a:${MA_INSTRUMENT}\lib\jiti.jar;
${MA_INSTRUMENT}\lib\bootic.jar;${MA_INSTRUMENT}\ic\config;
${MA_INSTRUMENT_APPSERVER_CONFIG}
-Dma.instrument=${MA_INSTRUMENT}
-Dma.appserverconfig=${MA_INSTRUMENT_APPSERVER_CONFIG}
-Dtmtp.user.dir=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1
-Dcom.ibm.tivoli.jiti.config=${MA_INSTRUMENT_APPSERVER_CONFIG}\
config.properties
-Dcom.ibm.tivoli.transperf.logging.qualDir=${MA_LOG_QUALDIR}
-Dcom.ibm.tivoli.jiti.probe.directory=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod
\TIVOLI~1\app\instrument\61\lib\probes
-Dws.ext.dirs=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument
\61\lib\ext -Djlog.propertyFileDir=${MA_INSTRUMENT_APPSERVER_CONFIG}
-Xrunvirt:agent=jvmpi:C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app
\instrument\61\appServers\server1_100\config\jiti.properties,
agent=piAgent:server=enabled">
<systemProperties xmi:id="Property_1166804730297"
name="com.ibm.tivoli.jiti.injector.ProbeInjectorManagerChain.file"
value="C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61
\appServers\server1_100\config\injector.properties"
description="ITCAM Primary Injector"/>
<systemProperties xmi:id="Property_1166804733531"
name="com.ibm.websphere.pmi.reqmetrics.PassCorrelatorToDB"
value="false"
description="Enables WAS to pass the ARM correlator to the database
when 'true'."/>
</jvmEntries>
The server.xml file also defines a Portable Request Interceptor that is executed
during Remote Method Invocation (RMI) Internet Inter-ORB Protocol (IIOP) calls,
also referred to as RMI-IIOP. This interceptor is defined using a well-defined
interface contained in the Java SDK. The XML fragment is placed with the other
interceptors in the file, and is shown here:
<interceptors xmi:id="Interceptor_1094820250154"
name="com.ibm.tivoli.transperf.instr.corba.iiop.TxnPortableInterceptor"/>
pmirm.xml
The pmirm.xml file makes changes to the WAS PMI Request Metrics
configuration by enabling the metrics.
start<server_name>Server
The start<server_name>Server script file starts the WebLogic server. The script
is modified to include a set of Java Virtual Machine (JVM) arguments. These
arguments may appear similar to the following excerpt:
@rem Begin TMTP AppID MedRecServer_100
set PATH=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61
\lib\windows;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument
\61\lib\windows\sjiti;%PATH%
set MA_INSTRUMENT=
C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61
set JITI_OPTIONS=
-Xbootclasspath/a:%MA_INSTRUMENT%\lib\jiti.jar;
%MA_INSTRUMENT%\lib\bootic.jar;%MA_INSTRUMENT%\ic\config;
C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\INSTRU~1\61\APPSER~1
\MEDREC~1\config
-Xrunjvmpi:C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\INSTRU~1
\61\APPSER~1\MEDREC~1\config\jiti.properties
-Dtmtp.user.dir=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1
-Dcom.ibm.tivoli.jiti.probe.directory=C:\PROGRA~1\IBM\SDP70\DCI
\rpa_prod\TIVOLI~1\app\INSTRU~1\61\lib\probes
-Dma.instrument=%MA_INSTRUMENT%
-Dma.appserverconfig=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app
\INSTRU~1\61\APPSER~1\MEDREC~1\config
-Dcom.ibm.tivoli.jiti.config=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1
\app\INSTRU~1\61\APPSER~1\MEDREC~1\config\config.properties
-Dcom.ibm.tivoli.transperf.logging.qualDir=J2EE\MedRecServer_100
-Dweblogic.TracingEnabled=true
-Djlog.propertyFileDir=C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app
\INSTRU~1\61\APPSER~1\MEDREC~1\config
-Dcom.ibm.tivoli.jiti.injector.ProbeInjectorManagerChain.file=
C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\INSTRU~1\61\APPSER~1
\MEDREC~1\config\injector.properties
set CLASSPATH=
%CLASSPATH%;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\6
1\lib\ext\instrument.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app
\instrument\61\lib\ext\ejflt.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVO
LI~1\app\instrument\61\lib\ext\jflt.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_pr
od\TIVOLI~1\app\instrument\61\lib\ext\jffdc.jar;C:\PROGRA~1\IBM\SDP70\DC
I\rpa_prod\TIVOLI~1\app\instrument\61\lib\ext\jlog.jar;C:\PROGRA~1\IBM\S
DP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61\lib\ext\copyright.jar;C:\PR
OGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61\lib\ext\core_in
str.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61\li
b\ext\armjni.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrum
ent\61\lib\ext\eppam.jar;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app
\instrument\61\lib\ext\concurrency_util.jar
commEnv
In addition, the commEnv script file, which initializes the common environment
before WebLogic starts, modifies the system PATH to include the JITI library. The
modification to the script may appear similar to:
@rem TMTP Begin
set PATH=
C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61\lib\window
s;C:\PROGRA~1\IBM\SDP70\DCI\rpa_prod\TIVOLI~1\app\instrument\61\lib\wind
ows\sjiti;%PATH%
Repeat the instrumentation steps for every server on the machine involved in any
data collection for the applications you will be profiling (usually, there will be only
one application server, but it is possible for you to have more than one on a
machine).
Examples
On a Linux machine, to instrument an IBM WebSphere Application Server
V5.x server named server1, installed in the directory
/opt/WebSphere/AppServer (no security):
./instrumentServer.sh -install -type IBM -serverName server1
-serverHome /opt/WebSphere/AppServer -serverVersion 5
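A corresponding invocation for an IBM WebSphere Application Server V6.x server
with a profile and global security enabled might look like the following sketch. The
-profileName argument and the values shown are assumptions used only to
illustrate the pattern; verify the exact argument names with the usage help of the
instrumentServer script on your installation:
./instrumentServer.sh -install -type IBM -serverName server1
-serverHome /opt/IBM/WebSphere/AppServer -serverVersion 6
-profileName AppSrv01 -user my_WAS_userId -password my_WAS_password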
Note: The WebLogic server has to be started with the JVM that is included
with the product itself. Also, note that the JRockit VM is not a supported JVM.
For managed WebLogic servers, the Java Home variable (under
Configuration → Remote Start) must point to the Sun JVM shipped in
WebLogic for an instrumented server to start correctly.
To invoke the graphical user interface based instrumenter, use the Start menu
(Figure 9-6):
Start → Programs → IBM Software Development Platform → IBM Rational
Data Collection Infrastructure → Application Server Instrumenter
Profile name—The name of the profile that is set up on the server (only
required for version 6)
Server name—Server instance name (for example, server1)
Server home—Server install home directory (for example, C:\Program
Files\IBM\Websphere\AppServer)
Requires global security—Select Requires global security if the server
requires authentication.
User—The user ID used to authenticate with the server. This field is only
activated if Requires global security is selected.
Password—The password used to authenticate with the server. This field is
only activated if Requires global security is selected.
The advanced configuration is used to specify options for more tightly secured or
customized SSH server configurations (Figure 9-10).
Private key file—The file that contains the private key data used to
authenticate with the remote host. This field is optional.
Passphrase—The passphrase used to authenticate with the private key file.
This field is optional.
Examples
To instrument an IBM WebSphere Application Server V6.0 server named
server2, installed in the directory /opt/WebSphere/AppServer, with profile name
default, and security enabled, on a remote Linux machine with hostname linux2:
Click Add Remote.
Populate the Application Server tab as follows:
– Type: select IBM WebSphere Application Server v6.x:
– Profile name: default
– Server name: server2
– Server home: /opt/WebSphere/AppServer
– Select Requires global security
– User: my_WAS_userID
– Password: my_WAS_password
– Select or clear Save password as desired
Populate the Connection tab as follows:
– Host: linux2
– User: SSH_linux2_userId
– Password: SSH_linux2_password
Type the command name with the -uninstall argument and all of the other
arguments you used to instrument it originally. For example, on Windows, to
uninstall an IBM WebSphere Application Server V5.1 server instance named
my_Server, installed in C:\Program Files\was5.1, with security enabled:
instrumentServer -uninstall -type IBM -serverName my_Server
-serverHome "C:\Program Files\was5.1" -user my_WAS_userId
-password my_WAS_password -serverVersion 5
Restart the server.
Note: If you have uninstalled the server or removed the server instance
without uninstrumenting it, the instrumentServer utility still thinks that the
server is there, but will be unable to contact the server to uninstrument it. This
will block the uninstallation process for the data collection infrastructure.
Repeat the uninstrumentation steps for every server instrumented for data
collection. Once you are finished, the InstrumentationRegistry.xml file will be
empty, and data collection uninstallation will proceed.
The Performance Tester uninstall wizard also uses this file to determine whether
application servers are still instrumented at the time the product is uninstalled. If
they are, the wizard halts and warns the user to use the ASI or the command-line
tools to uninstrument the application servers.
9.3.1 Architecture
Real-time application monitoring in Performance Tester is enabled by utilizing the
data collection infrastructure (DCI). The role of the DCI is to transform and
forward the ARM events, which are reported to the Tivoli ARM engine by an ARM
instrumented application, to the client (Figure 9-13).
Figure 9-13 Environment system architecture using the Data Collection Infrastructure
Transaction flow
To understand the architecture, let us consider the flow of a business transaction
through the environment:
The DCI located on the machine where the root (or edge) transaction will be
executed must be started for monitoring. For example, executing an HTTP
request for http://ibm.com/PlantsByWebSphere indicates that the application
PlantsByWebSphere has to be instrumented and that a DCI is installed on the
machine located at ibm.com.
A client invokes an HTTP transaction involving an application that is ARM
instrumented. Two clients can be used:
– Performance Tester—Behaves similarly to a browser by executing HTTP
requests for each page in the test. When response time breakdown (RTB)
is enabled, the Performance Tester client adds the ARM_CORRELATOR header
attribute to the request, which enables the DCI to monitor the transaction.
This client automatically establishes a connection to the DCI on the machine
where the root transaction will be executed.
– Internet browser—Executes HTTP requests for a selected URL. The user
must manually establish a connection to the DCI on the machine where
the root transaction will be executed.
As the transaction invokes the application operating in the execution
environment, the instrumentation code (or probes) will execute. These probes
initiate ARM transactions that are monitored by the Tivoli ARM engine.
The IPOT agent is a Java-based application that implements the ARM plug-in
interface provided by Tivoli for registering third-party applications with the
Tivoli ARM engine. As ARM events are created and reported to the engine,
they are also reported to the IPOT agent. The events are collected and
organized into a transactional hierarchy, from the root transaction down to
all of its sub-transactions. This hierarchy is then converted into a set of events
that are modeled after the TPTP trace model format. The set of events is
then sent to the presentation system.
The target and presentation systems communicate with each other using the
Agent Controller. This component integrates with Eclipse TPTP, which
Performance Tester is based on. The component also handles managing
(starting and stopping monitoring) of the IPOT Agent.
Each ARM event reported to the DCI contains information about the start time
and completion time of the ARM transaction, in addition to metadata. The timing
information is used to compute metrics that help an analyst determine whether a
performance problem is present. The metadata indicates the context of
the transaction in the entire transaction hierarchy and the type of transaction
being invoked.
Execution environment
Within the execution environment there are two agents that implement the Java
profiling interface (JVMPI): JITI and the Java profiling agent. JITI has already been
discussed earlier. The JVMPI agent (or Java profiling agent) is the agent used in
Eclipse TPTP; it has been enhanced in Performance Tester to include features
such as security and support for Citrix and SAP integration.
Each agent offers its own set of data collection features, as shown in Table 9-1.
Table 9-1 Comparison of data collection features between ARM and JVMPI agents
Feature ARM agent JVMPI agent
With two agents that can potentially be used in parallel, the load on the execution
environment can increase dramatically. As a result, the profiling interface (PI)
virtualizer (virt.dll on Windows systems) was added. From the JVM's perspective,
this component is the only agent it is aware of. When the PI virtualizer
receives events from the JVM, it broadcasts those same events to every
JVMPI-based agent it is aware of. In this case, those agents are JITI and the
Java profiling agent. To use the Java profiling agent alone, one would add the
following VM argument:
-XrunpiAgent:server=enabled
Notice that in the server.xml excerpt shown earlier, the -Xrunvirt argument
instead specifies both agents to which the events are broadcast. This
configuration can be found in the same configuration files as the instrumentation,
as previously discussed.
The purpose of the dynamic discovery process is to automatically have the client
workbench attach to, and begin monitoring, the DCI located on the machine where
the remote method is being invoked. This is required because, when a transaction
is first executed, the client is only attached to the first machine in the
transaction's path. Therefore, rather than have the user know all the physical
machines that would be involved in any given transaction and manually attach to
each, this process is automated.
Note: The security feature on the Agent Controller must be disabled for
dynamic discovery to work. The security feature is not yet fully functional across
distributed environments. To disable it, or to verify that it is disabled, execute the
SetConfig script located in the bin directory of the Agent Controller
installation.
The dynamic discovery flow is illustrated in the accompanying figure, which shows
Machine A (DCI A and Agent Controller A) and Machine B (DCI B and Agent
Controller B), the REQUEST_PEER_MONITOR message exchanged between the
two DCIs, the UUIDs of agents A and B, and the data returned from B. The first
step is that the client connects to DCI A via Agent Controller A.
For the most part, the focus has been on collecting transactional information from
application servers. However, a few database systems, such as IBM DB2, have
native ARM instrumentation support built into the product. Enabling this ARM
instrumentation allows deeper end-to-end transaction monitoring. For example,
rather than just knowing that a transaction used JDBC to invoke a SQL
transaction from a Java application, analysts can see the transaction information
from within the database for that specific query. This information greatly helps to
narrow down whether a problem is caused by the Java application or by the
solution's database.
Furthermore, the environment can be configured to collect the exact SQL query
being executed on a database. To enable this configuration, you must disable
database privacy and security for the local DCI. The instructions for enabling the
collection of SQL statements are as follows (recommended to be done after you
have already instrumented the application server):
Shutdown the application server and DCI.
Navigate to the following directory:
<DCI_INSTALL_DIRECTORY>/rpa_prod/tivoli_comp/app/instrument/61/appServers
/<servername>/config
Open the monitoringApplication.properties file.
Add the following two lines (a concrete example follows these steps):
tmtp.isPrivacyEnabled=false
tmtp.isSecurityEnabled=false
Start Monitoring for the DCI.
Start the application server.
Initiate data collection by invoking transactions.
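As a concrete sketch, assuming the WebSphere server instance directory used in
the earlier configuration excerpts (server1_100), the file edited in these steps
would be:
<DCI_INSTALL_DIRECTORY>/rpa_prod/tivoli_comp/app/instrument/61/appServers
/server1_100/config/monitoringApplication.properties
with the two privacy and security lines added to it.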
Response time breakdown shows how much time was spent in each part of the
system under test as the system was exercised. The response time breakdown
view is associated with a page element (URL) from a particular execution of a
test or schedule. It shows the inside of the system under test, because the
data collection mechanisms are on the systems under test, not on the load drivers.
To configure a performance test, a check box exists in the Test Element Details
section of the Performance Test Editor (Figure 9-16). If the top-most node in the
Test Contents tree is selected—which is the performance test itself—then
selecting Enable response time breakdown from the Test Element Details enables
application monitoring on every page and page element in the performance test.
If only a specific page or page element requires application monitoring, then
select it from the Test Contents tree. This action displays the selected item's
configuration in the Test Element Details pane, from which you can enable
response time breakdown.
The Response Time Breakdown page allows you to set the type of data that you
see during a run, the sampling rate for that data, and whether data is collected
from all users or a representative sample.
Enable collection of response time data: Select this option to activate
response time breakdown collection. This shows you the response time
breakdown for each page element.
Detail level: Select Low or Medium to limit the amount of data collected. The
higher the detail level, the deeper the transaction is traced and the more
context data is collected. For example, at Low only surface-level transactions
are monitored. In a J2EE-based application this includes Servlets. As the
detail level is increased, collection for EJBs and RMI aspects of a J2EE
application are collected.
Only sample information from a subset of users: If you set the detail level
to High or Medium, set a sampling rate to prevent the log from getting too
large.
Fixed number of users: The number that you select is sampled from each
user group. Unless you have specific reasons to collect data from multiple
users, select Fixed number of users and specify one user per user group.
Percentage of users: The percentage that you select is sampled from each
user group, but at least one user is sampled from each user group.
Enabling response time breakdown in a performance test will not affect the
response time breakdown setting in any performance schedule that references it.
Likewise, enabling response time breakdown in a performance schedule will not
affect the response time breakdown configuration in a performance test. The test
and the schedule are separate entities.
After making the appropriate response time breakdown configuration, you must
execute the test or schedule to initiate application monitoring during the load test.
This action can be executed by using the Run menu to launch the test or
schedule. Response time breakdown can only be exploited when executing a
test or schedule from the graphical-based user interface. If a test or schedule is
configured for response time breakdown and then executed using the command
line mechanism, response time breakdown data will not be collected.
To collect real-time response time breakdown data, make sure that you comply
with the following requirements:
The data collection infrastructure must be installed, configured, and running
on all computers from which data is to be collected. See the installation guide
for more information.
The Agent Controller (part of the data collection infrastructure) port on all
involved computers must be set to the default (10002).
The application system must not involve communication between networks
that use internal IP addresses and network address translation.
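Before launching, you can quickly confirm that the Agent Controller on an
involved computer is listening on the default port, for example with a standard
telnet client (the host name below is a placeholder for one of your data collection
machines):
telnet dci-host.example.com 10002
If the connection is refused or times out, revisit the requirements above before
starting the profile session.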
To initiate real-time data collection, select Run → Profile or use the toolbar button
to open the launch configuration dialog (Figure 9-18). This profile action asks you
to switch into the Profile and Logging perspective of the Performance Tester
product. This perspective enables profiling tools and has a different layout of the
views on the workbench.
The profile configuration dialog allows you to create, manage, and run
configurations for real-time application monitoring. This method for monitoring is
used for J2EE, non-J2EE, and Web service applications that do not have
automated application monitoring—as is the situation with a performance test or
schedule.
In the Launch Configuration dialog, select J2EE Application and click New. If
your application is not running on a J2EE application server, but rather is
ARM instrumented manually, select ARM Instrumented Application instead.
On the Host page (Figure 9-19), select the host where the Agent Controller is
running. If the host you need is not on the list, click Add and provide the host
name and port number. Test the connection before proceeding.
Figure 9-19 Profile launch configuration for J2EE application: Host page
On the Monitor page (Figure 9-20), select the J2EE Performance Analysis
analysis type—or the ARM Performance Analysis analysis type if profiling
using the ARM Instrumented Application launch configuration. If you want to
customize the profiling settings and filters, click Edit Options.
Figure 9-20 Profile launch configuration for J2EE application: Monitor page
On the Filters page (Figure 9-22), specify the hosts and transactions that you
want to monitor. The filters indicate the types of data that you do want to
collect, that is, they include, rather than exclude.
On the Sampling page (Figure 9-23), you can limit the amount of data being
collected by specifying a fixed percentage or a fixed rate of all the data to
collect.
In the case of a lockup, you might not see any actual errors logged. Use
interaction diagrams, statistical views, and thread analysis tools to find the
problem. For example, if the problem is an endless loop, the UML sequence
diagrams show you repeating sequences of calls, and statistical tables show
methods that take a long time. If the problem is a deadlock, thread analysis tools
will show that threads you expect to be working are actually waiting for something
that is never going to happen.
After you have collected response time breakdown data, you can analyze the
results in the profiling tools to identify exactly what part of the code is causing the
problem.
Generally, you first narrow down which component (which application on which
server) has the problem. Then you continue narrowing down to determine
which package, class, and finally which method is causing the problem. Once
you know which method the problem is in, you can go directly to the source code
to fix it.
You can view response time breakdown data in the Test perspective or in the
Profiling and Logging perspective if a performance test or performance schedule
is executed with response time breakdown; otherwise, if you manually launched
the J2EE Application or ARM Instrumented Application launch configuration, the
data collected is only viewable in the Profile and Logging perspective.
Tracking such problems can involve trial-and-error as you collect and analyze
data in various ways. You might find your own way that works best.
There are two mechanisms that can be used to analyze response time
breakdown data:
Response Time Breakdown Statistics—A tabular view of a transaction and
its sub-transactions' response times for a particular page or page element.
Interactive Reports—A graphical drill-down process for decomposing a
transaction.
The Response Time Breakdown Statistics view displays an aggregation of all the
sub-transactions for the selected page element, which is listed at the top of the
view (Figure 9-27).
There are various tools available in this view that can be helpful when analyzing
a performance problem. Use the navigation information in the upper left corner to
navigate back to previous views. Use the toolbar in the upper right corner to
toggle between tree and simple layouts; add filters; select which columns are
displayed; jump to source code; toggle between percentages and absolute
values; and export the table to CSV, HTML, or XML format.
The tree layout shows the following hierarchy, in order: Host, application,
component, package, class, method (Figure 9-28). Each host comprises a tier in
your enterprise environment. Within each host, there are tiers of applications.
Within each application, there are tiers of components, and so on. The tree
layout helps you identify which tier has the slowest response time.
Use the first icon in the toolbar in the upper right corner to toggle between
tree and simple layouts. The simple layout is a flattened version of the tree layout.
Use this layout if you want to see all of the methods without seeing the
relationships shown in the tree layout. The simple layout provides a quick and
easy way to see the slowest or fastest methods.
The default layout is the simple layout. Click on a column heading to sort the
table by that column. Drag the columns to change the order in which they are
displayed. Except for the first column in the tree layout, all of the columns are
moveable.
The exact URL of the selected page element is displayed above the table.
Actions
Click the Filter icon (second in the toolbar) to open the Filters window. There
you can add, edit, or remove filters applied to the displayed results. For more
information, refer to filtering (Figure 9-22 on page 292).
Click the Columns icon (third) to open the Select Columns page. There you
can select which columns are displayed in the table. These settings are saved
with the current workspace, and are applied to all response time breakdown
tables in the workspace.
Click the Percentage icon (fourth) to toggle the display between percentages
and absolute values. In the percentage format, the table shows percentages
instead of absolute values. The percentage figure represents the percentage of
the total of all values for that column.
Click the Source icon (fifth) to jump to the source code (if available) in your
workspace. You must first select a method before clicking the source button.
Click the Export icon (last) to open the New Report window. There you can
export the response time breakdown table to CSV, HTML, or XML formats.
Use the navigation information in the upper left corner to navigate back to
previous views. The navigation is presented in the form of a breadcrumb trail,
making it easy to drill-down and drill-up from various reports.
In the past there has been some confusion around the terminology and values
computed for Delivery Time and Response Time. If you are familiar with
Performance Testing, Response Time here does not equate to a Performance
Tester’s definition of the term. The mathematical definitions are:
Delivery Time is Last_Received_Timestamp - First_Received_Timestamp
Response Time is First_Received_Timestamp - First_Sent_Timestamp
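For example, if the first byte of a request is sent at 0.00 seconds, the first byte of
the response arrives at 0.35 seconds, and the last byte of the response arrives at
0.50 seconds (figures chosen purely for illustration), then the Response Time is
0.35 seconds and the Delivery Time is 0.15 seconds.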
Note: You will always see a delivery time and a response time. You may also
see transactions marked with the words DNS Lookup and Connect for
requests that are of this nature.
From this page element drill down report, you can choose to further drill down
into individual host responses, application and application component
responses, and package, class, or method invocation responses for a particular
element. The right-click context menu on the bar-graph displays which drill-down
action is available from the current report. For example, if you are viewing the
application response time breakdown for a particular transaction, the context
menu will display a response time breakdown item for viewing the application
components of the selected application (Figure 9-30).
At each level in the drill-down process, the breadcrumb path becomes more
detailed. This behavior provides an easy way to jump between reports during the
analysis process. You can also choose to jump directly to the Response Time
Breakdown Statistics view from any drill-down graph.
Figure 9-32 View linking service between UML sequence diagram and statistical or log views
The Execution Statistics view displays statistics about the application execution
time (Figure 9-33). It provides data such as the number of methods called and
the amount of time taken to execute every method. Execution statistics are
available at the package, class, method, and instance level.
Base Time—For any invocation, the base time is the time taken to execute the
invocation, excluding the time spent in other methods that were called during
the invocation.
Average Base Time—The base time divided by the number of calls.
Cumulative Time—For any invocation, the cumulative time is the time taken to
execute all methods called from an invocation. If an invocation has no
additional method calls, then the cumulative time will be equal to the base
time.
Calls—The number of calls made by a selected method.
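For example, if a method is called 4 times, spends a total of 160 ms executing its
own code, and spends a further 240 ms in the methods it calls (figures chosen
purely for illustration), then Calls is 4, Base Time is 160 ms, Average Base Time is
40 ms, and Cumulative Time is 400 ms.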
Tip: Sort the entire list of methods by the Base Time metric. This action forces
the method that consumes the most processing time to appear at the top of
the list. From here, select the item and use the right-click menu to explore
various analysis options.
Select an item from the Execution Statistics view and display the right-click
context menu (Figure 9-34).
In this menu there are various actions that can be executed, notably:
Open Source—Allows the user to automatically open the source code
associated with the selected package, class, and method combination if the
source is imported into the current workspace.
Method Invocation Details (Figure 9-35)—Provides statistical data on a
selected method including information about the invoker method and the
methods invoked by the selected method. This view can be opened on a
selected method in any of the profiling views, such as the Method Invocation
Details, Execution Flow, or Execution Statistics view.
IBM Tivoli Composite Application Manager (ITCAM) for Response Time Tracking
and ITCAM for WebSphere are two products that are used in the operations
space. These two products collect extensive information when monitoring
applications in production. To make this information meaningful to a
developer, Performance Tester can apply a transformation to the data.
Therefore, when a problem is identified by an IT operator using ITCAM, a
developer can use Rational development and test tools to import the data from
ITCAM. Importing the data transforms it and presents it to the user in
developer terminology.
9.4.1 Architecture
Acquiring data from the ITCAM Management Server is achieved through a
well-defined Web Services Description Language (WSDL) interface called
DataQuery (Figure 9-36).
The Web service returns wrapper objects that contain the same ARM objects
that were discovered at runtime of the transaction. Similar to the real-time data
collection scenario, these ARM objects are organized into a transactional
hierarchy, from the root transaction to all of its sub-transactions. This hierarchy is
then converted into a set of events that are modeled after the TPTP trace model
format. This transformation is executed at the client-side of the environment
(Figure 9-37). Once the transformation is complete, the same user interfaces
used for analysis during real-time data collection are available here.
This Web service interface also handles all authentication and authorization
through standard Web service authentication and authorization means.
To import response time breakdown data, one of the IBM Tivoli Monitoring
products must be installed on the application server that collects the application
data.
The following ITCAM configuration options can affect what data is available to be
retrieved and imported:
ITCAM has two modes of data collection: aggregated and instance.
– Aggregated statistical data consists of high-level statistics about classes
and methods, such as average execution time. Aggregated statistics might
not reveal intermittent performance problems.
In the workbench client, you can import the data using two methods:
An Import wizard that is accessible by selecting File → Import (Figure 9-38).
An Import wizard that is accessible by right-clicking on a performance report
in the Performance Test Runs view (Figure 9-39) or on the resources report,
which is the last tab in the view.
Import wizard
On the first page of the Import wizard, you must specify in the Host field the host
of the ITCAM Management server (Figure 9-40).
If you use a user ID and password to log on, type them into the User and
Password fields.
Next, specify the time period for which to import data and the type of data
(Figure 9-41).
You might have to adjust the time interval to compensate for clock
discrepancies between the workbench and the management server.
Specify the type of data that you want to import. For detailed analysis, you
typically want to select Import detailed instance-level data, although it might
involve a large quantity of data. If you select Import aggregated statistics, the
data imported will be statistical summaries, not the kind of data that can be
analyzed in sequence diagrams or trace tools.
Notes:
When specifying the time in a specific number of units, for consistency,
month is defined as 30 days and year is defined as 365 days. Days refers
to 24-hour periods, not calendar days. The units of time selected are
subtracted from the point in time when you click Finish to import the data to
give the start time. For example, if you select 2 months, the time period will
be the 60 days (24-hour time periods) immediately prior to the point in time
that you click Finish.
It is important to test the smallest possible portion of the application to
focus only on the part that exhibits the problem. This will simplify the
analysis later. If you are unsure of what the trigger is, you might have to
collect more data and use more thorough filters to view the data.
IBM Tivoli Composite Application Manager for Response Time Tracking
must be configured to record instance-level data to import instance-level
data. If IBM Tivoli Composite Application Manager for Response Time
Tracking is configured to collect only aggregated data, regardless of what
you select here, only aggregated statistical data will be available in the IBM
Tivoli Composite Application Manager for Response Time Tracking
database. IBM Tivoli Composite Application Manager for WebSphere does
not collect aggregated statistics. If you specify Import aggregated statistics
with IBM Tivoli Composite Application Manager for WebSphere, the import
will fail.
Select the policies or traps from which you want to import data (Figure 9-42).
Typically, you will import from policies or traps that have a severe problem status
(such as critical or failure), because they will point you to the code problems.
Selecting only the most serious policies or traps helps limit the quantity of data
imported. Click the Status column heading to sort the policies by severity.
Policies are used by IBM Tivoli Monitoring for Transaction Performance and IBM
Tivoli Composite Application Manager for Response Time Tracking. Traps are
used by IBM Tivoli Composite Application Manager for WebSphere.
The policies, or traps, that you selected are associated with a group of hosts
(computers) in the application system. Select the hosts you want to examine
(usually the hosts with the more critical status). Specifying a subset of hosts will
again reduce the quantity of data imported, helping you focus on where the
problems really are (Figure 9-43).
When importing data from ITCAM for WebSphere, the WebSphere Application
Server node name is listed, rather than the actual host name. This is set by the
server administrator and does not necessarily have to be related to the actual
host name, though that is the default choice.
If you chose to import instance-level data, you can select the transactions for
which you are interested in seeing data (Figure 9-44).
The Time column indicates when the transaction started, and the Duration
column indicates how long the transaction took. You can sort the data by clicking
on any of the column headings.
The transaction pattern listed is not the actual URL of the transaction, but rather
a regular expression pattern match for the URL that it matched in ITCAM.
Note: You can configure the IBM Tivoli Composite Application Manager for
Response Time Tracking listening policy to collect hourly averages by unique
transaction, or by monitor. You must set the listening policy to Hourly Average
by Unique Transaction to see full transaction names.
After you have collected the response time breakdown data, you can begin
analyzing it and diagnosing the problem. You can view the data using several
views (Figure 9-45), including statistics views and sequence diagrams of class
and object interactions (Figure 9-46), to help find the cause of the performance
problem.
Figure 9-46 UML Sequence Diagram view for data imported from ITCAM
Note: When importing response time breakdown data from ITCAM for
WebSphere, there are two authentication layers involved. The first one is
WebSphere authentication, which will reject an invalid user/password on the
system and display an authentication dialog. The other is ITCAM for
WebSphere authentication, which will simply return no data available to import
if authentication fails.
The only case where WebSphere authentication will pass and ITCAM for
WebSphere authentication will fail is when the user types a valid user name on
the underlying operating system (for example, root), but that user is not
registered in ITCAM for WebSphere. In this case, the user must be aware that
the server will not display an error when authentication fails, but the user will
instead see no traps available from which to import.
When importing data from an ITCAM for WebSphere trap, ensure that the
clocks of the monitoring server and the workbench are synchronized. In the
import wizard, the option to import the last n units of time uses the current time
on the local computer, but queries for traps which have activity in that time
period on the monitoring server clock. So if the monitoring server clock is
ahead by 10 minutes, you will have to either wait 10 minutes before the import
wizard will find this transaction available on the server, or query 10 minutes
into the future.
Class names that are stored in method traces by ITCAM for WebSphere may
be truncated when you import data from the ITCAM for WebSphere monitoring
server. To avoid truncation, set the following property in aa.properties,
aa1.properties, and aa2.properties under MS_HOME/etc:
TEXTSTRING_LENGTH=255
where MS_HOME is the installation home directory of the ITCAM for WebSphere
monitoring server.
The scope of this chapter is the Windows platform, covering both the workbench
console and driver (test agent) machine considerations, but focusing mostly on
the driver because that is the primary variable in hardware sizing. We only cover
the HTTP protocol; Siebel, SAP, and Citrix protocols are not covered.
10.1 Introduction
This section provides an overview of the high-level process architecture of RPT,
the workbench and driver architecture, and Java memory management
considerations.
This section is introductory in nature and provides useful context for assimilating
and interpreting the measurement results and sizing guidelines. Users interested
in skipping the details and getting to the meat should at least browse 10.2,
“Driver measurements” and then read 10.3, “RPT sizing guidelines” in its entirety.
The last three sections are supplemental and may be treated as an appendix for
those wishing to apply or deepen their understanding of the preceding material.
10.1.1 Abbreviations
The following abbreviations are used in this chapter:
EMF—Eclipse Modeling Framework
Driver—The collection of components that execute a test. Equivalent to test
agent, agent, or injector. Driver often refers to the machine on which the test
is executed, especially if the workbench is located on a different machine.
There can be multiple drivers associated with a test run that is initiated from a
workbench.
Incremental virtual user memory footprint—The memory cost on the
driver of adding one additional virtual user to the workload mix.
Incremental virtual user CPU utilization—The CPU utilization cost,
measured as a percentage, on the driver of adding one additional virtual user
to the workload mix.
JVM—Java Virtual Machine, the process which executes Java byte code
KB—Kilobytes, equivalent to 1024 bytes
MB—Megabytes, equivalent to 1,048,576 (1024 * 1024) bytes
mB—Million bytes, equivalent to 1,000,000 bytes
RPT—IBM Rational Performance Tester
TPTP—Test and Performance Tools Platform (an open source Eclipse project,
refer to http://www.eclipse.org/tptp)
Workbench—The Eclipse workbench process, where the RPT user interface
is hosted. Workbench sometimes refers to the machine on which this process
runs, to distinguish it from the driver machine, if they are different. The
workbench is also referred to as the controller or console.
Figure 10-1 RPT process architecture (machine, process, and JVM boundaries)
There may be an arbitrary number of drivers, one per machine. One driver may
be located on the same machine as the workbench. The Eclipse workbench
process (javaw.exe) on the workbench machine hosts the user interface for RPT.
Virtual user execution is performed in the RPT engine JVM process (java.exe)
on the driver(s), which is the primary focus of this document. In TPTP
terminology, this engine process is also typically referred to as the test runner.
The TPTP session JVM process is a transient process that exits upon successful
creation of the engine, so it does not factor into the RPT driver memory footprint.
Multiple workbenches can concurrently share the same driver(s), although this is
not recommended for production load testing runs, which should normally be
performed on dedicated hardware. There also can be multiple RPT engine JVMs
per machine when driven from a single workbench, but this is not normal, and
requires special configuration knowledge.
As shown in Figure 10-2, the workbench not only provides the GUI presentation
face of RPT for all views and editors, test creation and execution, and artifact
management, but it is also where the TPTP EMF data models are loaded. EMF is
an in-memory modeling technology and, unless special approaches are utilized
for a particular EMF model, all of the data stored in an EMF model must be
loaded into memory in its entirety if any part of the model is to be read (for
example when a report is opened). In the case of RPT, the EMF data models are
the primary consumer of workbench memory, and in particular the statistical
model and the execution history in the test model.
The RPT engine does not dedicate a thread to each virtual user, unlike a conventional load testing implementation that employs a thread for each user.
When a virtual user is not performing an action, but is thinking or waiting for
I/O from the server, the RPT engine architecture consumes no CPU overhead
and almost zero memory overhead. Since idle time spent waiting often forms
the vast majority of time of a virtual user, RPT’s memory footprint can be
extremely low for workloads that have very high numbers of relatively inactive
users.
A 32-bit JVM naturally restricts the maximum heap to somewhat less than 2GB,
but there can be additional restrictions, some of which may vary by operating
system. In practice, the IBM 1.4.2 Sovereign JVM may not successfully initialize
on some system configurations when the maximum heap is set larger than 1900
MB, so this becomes the default practical limit for the RPT driver engine. (As of
RPT 7.0 the version of the IBM JVM has been updated to 1.5 and the practical
maximum heap value is 1500 MB on Windows and 3000 MB on Red Hat Linux.
Accordingly, RPT 7.0 uses a default maximum of 1500 MB when automatically
setting the JVM max heap value.)
On the first playback on a driver, RPT by default sets the maximum heap size for
the driver JVM equal to 256 MB and then—on all operating systems except
Windows 2000—automatically adjusts the JVM heap size on subsequent runs to
be 256 MB less than the remote driver machine’s physical memory present
during the previous run.
This value is persisted in the driver’s location asset, under the attribute
RPT_DEFAULT_MEMORY_SIZE. (It is reset every run, so it is fruitless to edit it!) When
running users locally, unless you explicitly create a location for the local driver
and reference it in the schedule’s user group when assigning users, the
automatic calibration is not available and the default 256 MB maximum heap size
is used for the local driver every run, not just the first. You can override RPT’s
algorithms for setting driver heap size, as outlined in 10.4, “Setting driver
maximum JVM heap size” on page 364.
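The default heap-sizing behavior described above can be summarized in a short sketch. This is an illustrative model only, assuming the values quoted in this chapter; the function name is ours, and capping the automatically calculated value at the practical heap limit is our assumption rather than documented RPT behavior.

# Illustrative model of the default driver max-heap selection described above.
# Not RPT source code; the 1500 MB cap is the RPT 7.0 practical limit on
# Windows quoted earlier, and applying it as a cap here is an assumption.
def default_driver_max_heap_mb(run_number, physical_memory_mb, practical_cap_mb=1500):
    """Approximate the max heap (MB) RPT would pick for a remote driver."""
    if run_number == 1:
        return 256                              # first playback always uses 256 MB
    candidate = physical_memory_mb - 256        # later runs: physical memory minus 256 MB
    return min(candidate, practical_cap_mb)     # assumed cap at the practical limit

# Example: a remote driver with 2 GB of RAM
print(default_driver_max_heap_mb(1, 2048))      # -> 256
print(default_driver_max_heap_mb(2, 2048))      # -> 1500 (1792 capped to 1500)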
If the maximum JVM heap size is exceeded, the java.exe engine process will be
unable to process further actions (whether or not the JVM successfully exits).
The workbench will detect the absence of a heartbeat from the engine process
and will, in approximately 60 seconds, present an error dialog to the end-user,
and terminate the run. The same workbench non-responsive driver
determination may also occur due to excessive garbage collection overhead as
the driver JVM nears heap exhaustion or if it is producing a core file. In these
cases run termination may precede the actual java.exe termination. The bottom
line is that exhaustion of the JVM heap will terminate the run. It can be avoided in
subsequent runs by increasing the maximum heap allocation, reducing the
number of users on that driver, or reducing the total workload size.
Runs with smaller constrained maximum heaps show that the bulk of the exaggerated memory allocation disappears
under the much shallower maximum heap with no ill effect (such that an
ideally constrained heap would eliminate them). In any case, this voom effect can
give an inflated perception of driver memory consumption if you take a periodic
size check and use that value indiscriminately in projecting memory needs for
larger user loads. For this reason, the best practices for measuring and
projecting incremental per-user memory usage outlined in this document always
involve observing memory over time, and using steady-state values that occur
prior to the occasional grossly inflated values described above.
The Task Manager terms are the ones used in the column headings in the
process page of Task Manager and are measured in kilobytes. The
corresponding Performance Monitor counter values are equal to the Task
Manager values multiplied by 1024.
All three metrics are routinely measured in the employed methodology, but to
calculate the incremental virtual user memory usage, Private Bytes [or Virtual
Memory Size, in bytes] is the metric adopted for use in this document. This
metric is preferred because it is immune to external paging influences, whereas
Working Set [or Memory Usage, in bytes] can, on occasion, significantly
underestimate the process’ memory usage in the presence of tight system
memory and other conditions that result in paging out of process memory. (To
see a dramatic example of this, simply observe this metric for any process with a
user interface upon minimizing and restoring its main window.)
VM Size | Private Bytes: Private Bytes is the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.
Memory Usage | Working Set: Working Set is the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed they will then be soft-faulted back into the Working Set before leaving main memory.
Peak Memory Usage | Working Set Peak: Working Set Peak is the maximum size, in bytes, of the Working Set of this process at any point in time.
For the JVM (java.exe), Private Bytes is typically slightly smaller than Working
Set (assuming no paging out of java.exe pages is in effect), and Working Set
Peak is slightly higher than the maximum value observed for Working Set over
time (this is due simply to the fact that the sample interval for Working Set will
average out the peak since the peak is measured for an arbitrarily small unit of
time.)
4. In most cases, the maximum Private Bytes value from the perfmon graph
using the one minute sample interval is used as the memory usage value.
When this value is unavailable or errant (see the second paragraph of “JVM
memory heap allocation and garbage collection complexities” on page 327 as
to why any greatly exaggerated value past an established steady state is
discounted), the maximum five second interval value is substituted.
Because the counter values are averaged over the sample interval, using a
smaller sample interval will result in slightly higher values. The one minute
sample interval is preferred due to the nature of the predictive aspect of the
methodology, which takes values at smaller user counts to predict memory
usage at higher user counts—the general smoothing effect of adding more
users is best represented by the larger sample interval at smaller user counts.
In the measurements taken for this guide, the choice of the sample interval
had only a small impact on the incremental virtual tester memory footprint in
any case.
5. Measurements are taken at different virtual user loads to establish a definitive
trend, to more easily detect and avoid noise, and to gain confidence in the
results. Typically four or five data points are collected for each workload,
occasionally more, and always at least three, to get a spread of at least 3-5X
from minimum to maximum values.
Most workloads are driven with enough users to start to approach the max
heap (but not to get too close, which would distort the linearity of the
measurements by eating into the JVM’s 30% spare heap target). The
absolute number of virtual users required to achieve both of these goals
depended greatly upon the memory footprint; smaller footprints demanded
more virtual users to get meaningful differences in the memory usage. User
load ranges for the various workloads vary from 10 to 50, 100 to 300, and 100
to 1000.
6. Once the plotted values for the virtual user load range are available, the
intercept and slope are calculated by doing a best fit linear regression of the
data points (the Excel® INTERCEPT and SLOPE functions are used). If there
is an obviously errant point, that data point is re-run (preferred) or discarded
(expedited).
7. The calculated intercept value represents the constant start-up memory
footprint of the engine, and is compared against the typical 35 MB value (give
or take a few MB) as a sanity check. There can be some legitimate variance in the
overhead, such as due to the size of the datapools associated with the test(s)
in the schedule.
8. The calculated slope value, which is the ratio of memory use to virtual users,
is taken to be the incremental virtual user memory footprint.
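As a concrete illustration of steps 6 through 8, the following minimal Python sketch performs the same least-squares fit as the Excel INTERCEPT and SLOPE functions; the data points are hypothetical and are only meant to show the shape of the calculation.

# Steps 6-8 in miniature: fit a line to (virtual users, max Private Bytes) points.
# The measurement values below are hypothetical, for illustration only.
def fit_intercept_slope(xs, ys):
    """Ordinary least-squares fit, equivalent to Excel's INTERCEPT and SLOPE."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

users         = [10, 100, 200]        # virtual user loads (hypothetical)
private_bytes = [46.4, 61.5, 78.6]    # max steady-state Private Bytes in mB (hypothetical)

intercept, slope = fit_intercept_slope(users, private_bytes)
print(f"engine start-up footprint ~ {intercept:.1f} mB")         # sanity check vs. ~35 mB
print(f"incremental virtual user footprint ~ {slope:.3f} mB per user")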
CPU Usage (Task Manager Performance tab) [%] | Processor / % Processor Time / _Total (using the Object / Counter / Instance hierarchy naming style) [%]: % Processor Time is the percentage of elapsed time that the processor spends executing a non-Idle thread. It is calculated by measuring the duration of time the idle thread is active in the sample interval, and subtracting that time from the interval duration. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. It is calculated by monitoring the time that the service is inactive, and subtracting that value from 100%.
This counter includes all CPU overhead for the process(es) of interest incurred on all
processors, which includes the CPU usage of any background activity not
associated with the RPT run, so one should be on the lookout for variations due
to unplanned or unknown background activity. Start off with what you believe to
be an otherwise idle system. To validate this and to account for background
activity you cannot control, it is a good idea to take extended "noise"
measurements for durations at least as long as the measurement intervals, as
well as to track history over time during the actual measurement interval to
detect unexplained variations.
CPU (Task Manager Processes tab) [%] | Process / % Processor Time / java (using the Object / Counter / Instance hierarchy naming style) [%]: % Processor Time is the percentage of elapsed time that all of the process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count.
Make sure that if there are multiple java.exe processes running that the RPT
engine's process is selected. This counter has the advantage of excluding
activity outside of the RPT engine process (except for some hardware interrupts
and trap conditions due to background activity that occur during the execution
of instructions belonging to the engine process.) However, be very careful using
it on multiprocessors and hyperthreaded processors since a multi-threaded
application (such as the RPT engine) will typically be executing on more than
one processor concurrently, and this counter treats the concurrent execution as
additive, and the total is not normalized to 100% of the total processing capacity.
Therefore non-normalized values taken at face value are exaggerated, which
may not be obvious unless the measurement value is greater than 100%.
CPU Time (Task Manager Processes tab) [secs] | not available in Performance Monitor: In Task Manager, the total processor time, in seconds, used by a process since it started. This is equivalent to the product of the measurement interval and the average of the above % Processor Time counter for the same process during that measurement interval. It is interesting that perfmon has an Elapsed Time counter but no CPU Time counter, whereas Task Manager has a CPU Time counter but no Elapsed Time counter. If both were available in the same tool, one could derive utilization [%] from cumulative CPU time.
The % Processor Time counter for the processor object (the first metric in
Table 10-2 vs. the second metric, which is for a specific process) is the one
chosen for use in this document when presenting driver CPU usage because it is
the easiest and safest to apply, and also represents a more conservative result in
practice, as it will tend to overestimate actual RPT engine CPU usage vs.
underestimate it. Recall that the driver also includes the IBM Rational Agent
Controller (RAService.exe) process, whose CPU overhead is ignored if the
second metric is applied only to the RPT engine's java.exe process. Although
this overhead is generally small during the steady state portion of the run
(basically proportional to the amount of stats data collection transferred from
driver to workbench per unit time), the choice of the first metric naturally includes
it, which is appropriate.
Although it is not used as the basis for presenting CPU usage results, the
% Processor Time for the RPT engine’s java.exe process is also collected as an
additional reference when collecting the per-process memory counters.
Once we have agreed on how to measure CPU usage, the next step is to define
the methodology used to collect the measurements and generate an answer.
The most common CPU sizing question load testers ask is what the
per-virtual-user CPU usage is, because that allows them to take a workload
typically specified with, among other things, the number of virtual users, and
approximate the total CPU (processor) needs by taking the fixed overhead (if
any) and multiplying the CPU usage per user by the number of users to
determine the total number of driver machines needed for CPU processing
purposes. We refer to this per-user CPU usage as the incremental virtual user
CPU utilization, that is, the CPU cost of adding one additional user to the mix.
We measure it as a percentage of the processing capacity of a single system
(regardless of the number of processors or whether the processors are
hyperthreaded or not).
2. Upon reaching steady state of the applied workload (given that paced loops
are used to accelerate arrival at steady state, at minimum this was after all
virtual users have executed the test iteration once, preferably twice), the
Processor / % Processor Time / _Total counter instance is added to the two
perfmon instances used to collect per-process memory usage.
3. Prior to leaving steady state (before any virtual user has completed its final
iteration), the aforementioned CPU counter is selected and the average
value of the counter for all intervals is read directly from perfmon and used as
the single representative value of the CPU utilization for the number of active
virtual users. (This average is inclusive of up to the last 100 samples; the
intervals of five seconds and one minute are chosen to arrive at an early
result within the first 100 shorter samples of one perfmon instance and allow
comparison to the final complete result for the entire run duration from the
other perfmon instance, which also represents less than 100 samples of the
longer sample interval.)
4. The two results are compared as an audit—they should be close because the
actual choice of the sample interval does not matter for calculating the
average. (The choice of different sample intervals is useful however for
looking at the variation in the peaks.) A significant deviation most likely
indicates that the shorter initial interval simply has not reached steady state
yet, or that some external background activity occurred that discredits the
results, the latter of which would require a rerun.
5. Assuming that the audit of the previous step does not require a rerun, the
average CPU utilization from the longer perfmon instance is accepted as the
legitimate CPU utilization value.
6. Using the same methodology, steps 2 through 5 are repeated for each run at
the same varying user loads chosen for memory footprint measurements.
7. Once the plotted values for the virtual user load range are available, the
intercept and slope are calculated by doing a best fit linear regression of the
data points (the Excel INTERCEPT and SLOPE functions are used). If there
is an obviously errant point, that data point is re-run (preferred) or discarded
(expedited).
8. The calculated intercept value does not represent a fixed start-up overhead
for the RPT engine, as it does in the case of the memory footprint
measurements. Because we are after steady state CPU utilization for driver
sizing purposes, we purposely chose a methodology which ignores start-up
overhead and only takes measurements during steady-state application of the
workload.
However, there is another type of overhead which can show up in the
intercept: any constant background CPU activity that is present in each run
that is independent of the virtual user load will show up in the intercept value.
For example, there could be a periodic process that runs once every few
minutes that on average consumes 2% of the CPU. (This would normally be
detected in step one during the background check. In the final sizing
calculations this type of CPU utilization should be ignored if it would not be
present on the actual systems when the official benchmarks are run, but
should be included if it will be present.)
Excluding background CPU activity, one might then think that the intercept
should always be zero (in theory, on the surface at least), but in practice this is
not always the case because some of the queue overhead processing in the
engine is not completely linear with the number of virtual testers, particularly
at low user counts. There is some overhead associated with having activity in
the queues that is independent of the actual size of the queues, and this can vary
somewhat by workload. Typically the intercept values are two percent or less,
and on a practical level do not affect the sizing outcome.
9. The calculated slope value represents the incremental virtual user CPU
utilization.
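The CPU slope is obtained with the same kind of fit; the sketch below uses hypothetical data points and also shows the division by the 70% CPU guideline used later in the sizing guidelines to estimate driver capacity.

# Steps 7-9 in miniature for CPU: fit a line to (virtual users, average CPU %)
# points, take the slope as the incremental virtual user CPU utilization, and
# estimate the user count that fits under 70% CPU. Values are hypothetical.
import numpy as np

users   = np.array([100, 200, 300, 400])     # virtual user loads (hypothetical)
cpu_pct = np.array([8.1, 15.9, 23.6, 31.8])  # average % Processor Time (hypothetical)

per_user_cpu, intercept = np.polyfit(users, cpu_pct, 1)   # degree-1 least-squares fit
print(f"intercept (constant background/queue overhead) ~ {intercept:.2f}%")
print(f"incremental virtual user CPU utilization ~ {per_user_cpu:.4f}% per user")
print(f"virtual users supportable at 70% CPU ~ {70 / per_user_cpu:.0f}")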
ID | Workload | # of Pages | # of Page Elements: Total / Average Page / Largest Page | # of Verification Points Used: Page Title / Response Code | Driver Configuration | Server/Network
2 | Plants41k | 41 | 306 / 7.5 / 29 | 41 / 306 | A | Local, 100 megabit
3 | TradeStaticWebPages | 5 | 19 / 3.8 / 13 | 5 / 19 | B, C | Local, 100 megabit
4 | InternetShoppingCart | 7 | 250 / 35.7 / 104 | 7 / 250 | A | Internet
5 | TradeSVTApp | 10 | 33 / 3.3 / 12 | 10 / 33 | B | Local, 100 megabit
Table 10-5 shows the schedule settings that were used for these workloads (unless
noted otherwise in variant measurements).
Table 10-5 Schedule settings (excerpt): #6 Atlantic: All, None, None, 10, recorded think times
Note that the data collection levels are the out of the box defaults (except for #6),
and use the out-of-the-box sampling levels as well, which are all users for Stats
and Problem Determination, and five fixed users per user group for Execution
History.
The actual values graphed in Figure 10-3 are shown in Table 10-6, along with the
associated Intercept value. Although the meaning of an average for six disparate
workloads is debatable, the average is shown as well, primarily to establish
where the average fixed overhead of 35 mB quoted elsewhere comes from.
Note that there are two different clusters in terms of order of magnitude of the
memory footprints for the six workloads. Two workloads are less than 100 KB
(where 100 KB = 0.1024 million bytes), and the other four are between 100 KB and
1000 KB. There will be more analysis later, but it should be pointed out that the
two workloads with the significantly smaller footprints (0.043, 0.062) are for small
static Web pages on a local LAN with servers that were able to deliver very small
response times. The workloads with the next two larger footprints (0.170, 0.269)
were also on the local LAN but were for dynamic applications that generally had
larger pages and/or slower relative response times than the previous two
workloads. The workloads with the largest footprints (0.508, 0.780) were
applications accessed over the internet where the slower network and yet larger
pages had the largest response times.
To explore the effect of these factors on virtual user memory footprint, variations
on the base Plants41k and InternetShoppingCart workloads are performed in the
next two subsections.
Plants41k variations
The base Plants41k workload (#2) has a 41 page test that nominally takes
almost 3 minutes to execute (174 seconds), and is executed in the schedule
using a paced loop at a rate of 15 tests/hour (one test every four minutes), with a
random delay before the first iteration to spread out the users. The following
variants were run at a 20-user load and compared to the base version:
2a) Instead of using the delay before the first paced loop iteration to spread
out the users, each user’s start time was staggered two seconds, that is, a
two second ramp. And instead of using random delays between iterations of
the paced loop such that each user ran a test on average every four minutes,
the loop was used with no pacing, resulting in the test being run continuously,
about once every three minutes (depending upon response time variations).
2b) This variant ran the tests in lock-step. The tests were run continuously
with no intervening delay, but more importantly, there were no delays before
the first loop iteration and no staggered user start-up. This resulted in every
user hitting the first page simultaneously, and the first page of Plants41k was
the biggest page (29 elements).
7) A new workload (#7 Plants2k) was created using the first two pages from
Plants41k, which happened to be the biggest (29 and 21 elements
respectively), raising the average page size by more than a factor of 3.3X. In
addition, the test was run continuously with no pacing and also zero think
times, thus essentially hitting the server with the two biggest pages with no
intervening delays. This essentially creates an extreme worst-case scenario
for memory footprint for this app. Table 10-7 compares the workloads of
Plants41k and Plants2k.
ID | Workload | # of Pages | # of Page Elements: Total / Average Page / Largest Page | # of Verification Points Used: Page Title / Response Code | Driver Configuration | Server/Network
2 | Plants41k | 41 | 306 / 7.5 / 29 | 0 / 0 | A | Local, 100 megabit
7 | Plants2k | 2 | 50 / 25.0 / 29 | 0 / 0 | A | Local, 100 megabit
Each of the three variations was run at a 20 user load, and the maximum Private
Bytes value was observed. Then the incremental virtual user memory footprint
was calculated, using the previously calculated intercept for the base Plants41k
workload for expediency (rather than re-run each variant for the spectrum of user
loads). The results are shown in Table 10-8 and in Figure 10-4.
Table 10-8 Incremental virtual user memory footprint variations: Plants 41K
Columns: Workload | # of users | Private Bytes [mB] | Intercept [mB] | Incremental Virtual User Memory Footprint [mB] | Growth Factor [X] | Description
Figure 10-4 Incremental virtual user memory footprint variations: Plants 41k
Note that in variation 2a, increasing the user activity (throughput) of Plants41k by
about 1.38X (running the test about every 174 seconds, depending upon response
time variations, instead of every 240 seconds on average) and somewhat decreasing
the workload dispersion increased the memory footprint only slightly, a little less
than 10%.
Variation 2b, which put all the users in lock-step, had a much bigger effect:
aligning all the users at the start on the first (and biggest) page created an
initial abnormal peak in memory demand, and the end result was a
footprint increase of over 4X. An examination of the Private Bytes value over time
shows that it decreases down to ~54 mB after the initial 61.3 mB peak. But
because the employed methodology takes the maximum value of Private Bytes
over any interval, the lock-step effect is higher than what an average over time
would show.
InternetShoppingCart variations
The base InternetShoppingCart workload (#4) has a 7 page test that nominally
takes about 1.25 minutes to execute, and is executed in the schedule using a
paced loop at a rate of 20 tests/hour (one test every three minutes), with a random
delay before the first iteration to spread out the users over the three minute
interval. The following variants were run at a 20 user load and compared to the
base version:
4a) The loop rate was increased to 30/hour, so each user started a test on
average every two minutes.
4b) The loop rate was increased to 40/hour, so each user started a test on
average every 1.5 minutes. In order to reliably achieve this rate the maximum
think time was truncated to 5 seconds.
4c) Loop pacing was turned off and virtual user start-times were not
staggered, so that the users ran in lock-step. In addition, think times were set
to zero. For the seven page shopping cart scenario recorded in this test, this
variant represents the worst case scenario for memory footprint.
ID | Workload | # of Pages | # of Page Elements: Total / Average Page / Largest Page | # of Verification Points Used: Page Title / Response Code | Driver Configuration | Server/Network
4 | InternetShoppingCart | 7 | 250 / 35.7 / 104 | 7 / 250 | A | Internet
8 | InternetShoppingHome | 1 | 104 / 104.0 / 104 | 1 / 104 | A | Internet
Each of the three variations was run at a 20 user load, and the maximum Private
Bytes value was observed. Then the incremental virtual user memory footprint
was calculated, using the previously calculated Intercept for the base
InternetShoppingCart workload for expediency (rather than re-run each variant
for the spectrum of user loads). The results are shown in Table 10-10 and in
Figure 10-5.
Workload | # of users | Private Bytes [mB] | Intercept [mB] | Incremental Virtual User Memory Footprint [mB] | Growth Factor [X] | Description
4c) InternetShoppingCart - Zero think | 20 | 81.5 | 34.5 | 2.350 | 3.032 | lock-step, zero think time
Variation 4d represents not only the extreme case for this Web app, by
continuously looping on a very large page (104 page elements) with no delays,
but this page size is also toward the high end of reasonable Web page sizes, so it
should represent a situation near the high end of a wide range of Web apps. The
same sizing comments (stretched definition of virtual user) made regarding
Variation 7 of Plants41k apply here as well.
Driver CPU measurement results (columns include Workload, Incremental Virtual User CPU Utilization [%], Incremental Virtual User CPU Utilization (Throughput Normalized @ 1 hit/sec), Iteration Cycle Time [sec], Machine Configuration ID, and # of Processors); row for TradeStaticWebPages: 0.075, 933, 19, 30, 0.633, 0.118, 591, B, 2.793, 2, 0.662.
There is a range of over a factor of four in the CPU usage for the three workloads,
which corresponds to supporting from a low of 400 to a high of nearly 1700
virtual users per dual-processor driver machine. When the workloads are
normalized to the same hit rate, the range is reduced to less than a factor of three
between the three workloads. This indicates, like memory, that the number of
virtual users that may be supported per driver machine before being limited by
CPU processing capacity may vary significantly for different workloads (just
based on this small sample size) even after accounting for obvious differences in
http request throughput.
Subsequent sections explain the various columns and provide graphs where
useful.
Figure 10-8 Incremental virtual user CPU utilization (throughput normalized @ 1 hit/sec/user)
Figure 10-9 Number of virtual users at 70% CPU (throughput normalized @ 1 hit/sec/user)
If you have no access to application recordings or workload you are mostly flying
blind, and the recommended guidance is to use initial estimates of 1MB memory
per virtual user and a CPU capacity of 500 virtual users for a dedicated driver
machine using a reasonably configured dual processor server (2.5-3 GHz with at
least 2 GB memory), with an assumption there is an ability to add additional
driver machines if needed. However, if all the hardware has to be procured up
front and there is very low tolerance for coming up short, we recommend
following a more conservative approach of allowing 2 MB memory per virtual
user and reducing the CPU capacity by half to 250 virtual users. Although this will
likely result in acquiring more hardware than needed, it is the prudent
course if there is no ability to acquire additional hardware later.
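To make the arithmetic behind these rules of thumb explicit, here is a hedged sketch. The per-user figures come from the guidance above; the 3000-user target load, the helper function, and the use of the full 2 GB of driver memory are illustrative assumptions (the later sizing sections subtract operating system and engine overhead from physical memory).

# Illustrative sketch of the rule-of-thumb sizing above; not an RPT utility.
# Default: 1 MB per virtual user, 500 users per dual-processor driver.
# Conservative: 2 MB per virtual user, 250 users per driver.
import math

def drivers_needed(total_users, mb_per_user, users_per_driver_cpu, driver_memory_mb=2048):
    """Driver machine count implied by the CPU and memory rules of thumb."""
    by_cpu    = math.ceil(total_users / users_per_driver_cpu)
    by_memory = math.ceil(total_users * mb_per_user / driver_memory_mb)
    return max(by_cpu, by_memory)              # the tighter constraint wins

print(drivers_needed(3000, mb_per_user=1, users_per_driver_cpu=500))  # default: 6 drivers
print(drivers_needed(3000, mb_per_user=2, users_per_driver_cpu=250))  # conservative: 12 drivers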
If you do have workload information but do not have access to the application or
the ability to execute a few sample trial runs, then you should consult the
guidelines in “Memory” on page 355 to more accurately predict memory needs
and the guidelines in “CPU” on page 355 to more accurately predict CPU needs.
Otherwise, for the most reliable sizing guidance, we recommend obtaining some
sample scenario recordings for the application(s) to be load tested and perform
memory sizing projections with a single laptop or desktop machine as follows:
Constrain the JVM heap to 64 MB.
Follow the methodology for determining incremental virtual user memory
footprint (Section 10.2.6) for 10, 30, 50, 70, and 100 users. If necessary either
increase the JVM heap or reduce virtual user counts, but at least get three,
and preferably five different data points by which to calculate the incremental
virtual user memory footprint for the actual workload scenarios. From this
data, you can follow the guidance in Section 10.4.4.1.3 to determine how
many virtual users can be handled per driver machine from a memory needs
perspective (that is, if the RPT engine's virtual user capacity is memory
bound).
For CPU sizing, it is best to utilize a representative dedicated driver machine, but
if that is not possible make the measurements using another available machine
and pro-rate the measured CPU capacity results by accounting for the CPU
processing differences between the trial machine and the actual machine to be
acquired for the load testing project. Follow the CPU measurement methodology
described in 10.1.10, “Methodology for determining incremental virtual user CPU
utilization” on page 335. Avoid small runs that use less than 10% CPU. Try to get
CPU results for at least three runs in the 10-30% range, and preferably 10-60%
range. From those results, project how many virtual users can be driven at 70%
CPU utilization, assuming the RPT engine's capacity is CPU bound. Note that as
of RPT 6.1.2, RPT tends to be CPU bound for typical workloads and common
machine configurations of processing power and memory. But always examine
both the projected memory and CPU needs to
guide the process of selecting machine configurations.
When you have a choice, it is better to use a few faster driver systems than a
larger number of smaller driver systems. Fewer drivers means less
workbench-to-driver communication overhead and shorter run startup and
cleanup times. The majority of the driver initialization is done serially, which can lead
to lengthy startup times for runs involving tens of drivers. There is also some
additional memory savings on the workbench with fewer driver systems; normally
this savings is small, but if you choose to keep individual statistical data for each
driver machine (not the default) then the stats model memory used to store
report counters grows proportional to D+1, where D is the number of drivers.
Although not recommended, if you must resort to driving virtual user workload
from the workbench console machine, it is important to follow these practices to
minimize potential performance problems:
Close all project assets (other than the minimal necessary report(s) on the
current run) before schedule execution and do not open project assets
(especially large ones) until after the schedule execution has completed.
During the run avoid unnecessary operations on the workbench console and
especially avoid any CPU intensive operations, even of relatively short
durations.
Appropriately balance workbench and local driver JVM heap memory
allocation.
If you have less than 4GB on the workbench machine always use a location
for the local driver and constrain its heap so that you do not allow the
workbench machine to use more memory than physically available, which will
result in poor performance due to paging overhead and thus compromise
measurement results.
Memory
Although the workbench memory needs are basically independent of the number
of virtual users, the workbench can demand high amounts of memory for
opening and viewing large project assets, especially large report result sets and
test logs. Therefore the amount of data that can be collected and viewed is
limited by the amount of memory available on the workbench machine, and more
specifically limited to the maximum size of the Java heap allocated to the RPT
Eclipse workbench process.
CPU
Assuming you are using the strongly recommended configuration where all
virtual user load is generated by remote driver machines, the workbench CPU
does not impact the ability to apply load. As such, you can generate very large
loads with a slow workbench CPU by exclusively relying on remote driver
systems for that purpose. However, there are many other aspects of using RPT
ranging from test authoring to result analysis, and they all benefit from a faster
CPU. A dual processor configuration is normally not important, but it can be
useful for:
Very long runs, because the EMF model load time increases with model size.
Better responsiveness for user interactivity during a run.
Use of RPT's response time breakdown functionality. Unlike the test log, the
response time breakdown data is loaded into the workbench memory during
the run.
Memory
Factors that influence driver memory usage
Memory usage on the driver machine relates mostly to the workload size and its
characteristics. For any given workload, memory usage is mostly proportional to
the number of virtual users—once you have accounted for the fixed overhead of
the Rational Agent Controller (about 20 MB, see 10.1.4, “RPT driver architecture”
on page 325) and the base memory usage for the java.exe driver engine
process (about 35 MB, see 10.2.3, “Memory footprint measurements” on
page 339) that is independent of the number of users.
The factors that influence driver memory usage changed dramatically beginning
with release 6.1.2 (see 10.5.3, “Comparison of Plants41k driver memory usage
for RPT 6.1.2 vs. 6.1.1” on page 367 for a comparison with the previous 6.1.1
release.)
Small think times—High user activity increases the density of the memory
demands since the fraction of time the memory required to process each
page is utilized (not available for other uses) increases when there is less idle
time between transactions.
Virtual users that run in lock-step—Low dispersion aligns the memory
demands of the individual virtual users, leading to higher memory peaks
forcing the JVM to allocate more process memory to accommodate these
demands.
Large post data sent to the server and/or large responses such as for large
images received from the server—High I/O requires more memory for buffer
space.
Long page response times—The memory associated with the page and each
of its page elements is held longer.
Refer to the next section for the quantitative impact of these factors on memory
footprint.
This may seem like a trivial task: Divide the available memory by the incremental
virtual user memory footprint. However, what value should be used for the
available memory? Recall that the incremental virtual user memory footprint is
related to the growth of the java.exe driver engine process size. Although the
largest part of the process size is due to the JVM heap, process memory is also
consumed for the native code implementation of the JVM itself which consists of
its code space, its stack space, and its own heap.
So if you use the allocated JVM maximum heap value for available memory, you
will underestimate the potential number of virtual users because you are ignoring
the engine's java.exe process overhead component already included in the
calculation of the incremental virtual user memory footprint. But if you use the
machine's physical memory, you will overestimate the potential number of virtual
users since you are ignoring the memory consumed by the operating system and
other running processes (including the Rational Agent Controller), as well as the
base overhead of the java.exe driver engine process (which is not included in
the incremental virtual user memory footprint.)
The recommendation is to use the values for available memory from Table 10-12,
which should be sufficient for initial sizing purposes. They were based on
measurements observed on Windows XP and also factor in the average 35 MB
base memory usage for the java.exe driver process. Note that the percentage of
memory available for Virtual Users increases with available physical memory
because the overhead (while not fixed) does not grow linearly with the amount of
physical memory.
Table 10-13 Available memory for virtual users vs. physical memory
Driver Machine’s Physical Memory Available Memory for Virtual Users
512 MB 375 MB
768 MB 605 MB
1024 MB 845 MB
For example, if the estimated incremental virtual user memory footprint is 750KB
the virtual user capacity on a 512 MB driver machine that is memory bound
would be 500 virtual users (375 / 0.75). On a 1024 MB machine the virtual user
capacity would be 1126 virtual users (845 / 0.75). On a 2 GB or larger machine
the virtual user capacity would be 2000 virtual users (1500 / 0.75).
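The same capacity arithmetic can be written as a small sketch; the available-memory values are taken from the table above (with 1500 MB used for the 2 GB case, as in the example), and the function name is ours.

# Memory-bound virtual user capacity, as in the worked example above.
# Available-memory figures per physical memory size are taken from the table;
# 1500 MB for a 2 GB machine matches the example in the text.
AVAILABLE_MB = {512: 375, 768: 605, 1024: 845, 2048: 1500}

def memory_bound_users(physical_mb, incremental_footprint_mb):
    """Virtual user capacity of a driver machine if it is memory bound."""
    return int(AVAILABLE_MB[physical_mb] / incremental_footprint_mb)

print(memory_bound_users(512, 0.75))    # -> 500
print(memory_bound_users(1024, 0.75))   # -> 1126
print(memory_bound_users(2048, 0.75))   # -> 2000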
CPU
Factors influencing CPU usage
CPU usage on the driver machine relates mostly to the workload size and its
characteristics. For any given workload, CPU usage is mostly proportional to
the number of virtual users.
Several workload factors that increase CPU usage significantly are listed below.
The first three factors in combination dictate the http request rate for each virtual
user. The remaining factors influence the amount of processing that is required
for each http request. The product of the number of http requests per second and
the average amount of CPU overhead to process each request determine the
incremental virtual user CPU utilization.
Many page elements per page—The processing per page is roughly
proportional to the number of page elements per page, all else being equal.
Small think times—High user activity increases the density of the CPU
demands since the fraction of time the CPU is busy processing each page
increases when there is less idle time between pages.
Short page response times—For the same reasons as stated above for small
think times.
Large post data sent to the server or large responses (such as images)
received from the server—High I/O requires more processing overhead per
request, all else being equal.
Frequent data correlations—Data correlation processing involves regular
expression matching on the response content to locate and extract strings
that must later be substituted into the content of subsequent http requests
issued by the same virtual user. The more correlations that exist and the
larger the response the regular expression(s) must be matched against, the
more CPU time is consumed for data correlation processing.
Verbose log levels and sampling rates for test logs and problem
determination—Turning up logging levels can add significant processing
overhead per request. Of course, when trying to run large numbers of users
you should set the logging levels to the bare minimum you can get by with.
Using the guidance and general rule-of-thumb of not using more than 70% CPU
on average on the driver machine, the maximum number of virtual users that can
be supported on the driver is calculated by dividing 70 by the incremental virtual
user CPU utilization (expressed as a percentage). Examples of this calculation
were made for three different workloads in Section 10.3.4.
Server Hardware | CPU Speed | Main Memory | As measured (#3 / #5 / #6) | Normalized to 1 hit/second/user (#3 / #5 / #6)
IBM eSeries xServer 235 (2 CPUs) | 2.384 GHz | 3.75 GB | n/a / n/a / 1659 | n/a / n/a / 387
(normalized) | 2.5 GHz | 3.8 GB | 835 / 358 / 1740 | 529 / 197 / 406
We can now summarize in Table 10-15 the range (low and high) and average
virtual user capacity for the normalized 2.5 GHz driver machine for both the
original as measured workloads and the normalized (to an HTTP request rate of 1
Hit/Second/User) versions of the same workloads.
Even after normalizing out the differences in HTTP request rates there is still a
wide variation in virtual user capacity of a factor of 2.6 for the three workloads.
Without normalization, the variation is much higher: a factor of 4.8. Given this
was a relatively small sampling of workloads and large variations in several of the
factors influencing the per-request overhead were not present, these results do
not represent the full variability that would be expected over the range of
workloads that may be encountered in the field. Therefore, when you need to
accurately project the virtual user capacity of a driver, it bears re-emphasizing
how important it is to record actual application scenarios and perform some trial
runs to measure the incremental virtual user CPU utilization. Refer to the final
paragraph on CPU sizing in 10.3.1, “Overview and approach” on page 351 for
guidance on how to perform those trial runs.
Finally, if you have no workload information at all and do not have access to the
application under test, refer to the guidance in 10.3.1, “Overview and approach”
on page 351 on how to proceed with default sizing assumptions.
Therefore, for situations that demand high response time measurement accuracy
and are most affected by potential measurement error (for example, if you have
small response times and must validate requirements on the 95th percentile),
you should be more sensitive to CPU load on the drivers and may want to back
off the 70% guideline to 60% or even 50% to be conservative. Of course such an
approach will require more hardware than may actually be needed to achieve
your accuracy requirements, in which case the following measurement-based
approach may be useful.
The following technique can be used to assess and monitor the relative response
time inaccuracies caused by heavily loaded drivers:
1. Use an additional lightly loaded reference driver machine. Assign enough
virtual users on the reference driver to obtain statistically meaningful samples,
but not too many to create significant load on the driver machine. The goal
should be to keep the driver's average CPU utilization at 10% or less, such
that the measurements taken by this lightly loaded driver are not adversely
affected by CPU load, and can serve as the reference against which the response times measured on the more heavily loaded drivers are compared.
If you have a sufficient number of identical driver machines, the above process
can be sped up by varying the number of virtual users assigned to each driver
machine in the same run to gather multiple data points in parallel, as a function of
the virtual user load and CPU utilization on each driver machine.
For the IBM 1.4.2 Sovereign JVM used by the RPT 6.1.2 engine, we do not
recommend exceeding 1900 MB (the Linux limit), although, if it is really
needed, on a Windows machine with more than 2 GB of memory you may be
able to use a value as high as 2000 MB successfully. (As of RPT 7.0 the version
of the IBM JVM has been updated to 1.5 and the practical maximum heap value
is 1500 MB on Windows and 3000 MB on Red Hat Linux.) This restriction leaves
adequate space within the 2 GB process address space for the shared class cache,
which is kept outside of the heap.
Figure 10-10 Setting the driver engine’s JVM maximum heap size
Figure 10-11 shows the easiest way to create a new location for local users.
Once you have created your local location, you can follow the same steps for
setting the maximum heap size as shown previously for a remote driver.
Therefore the 6.1.2 HTTP memory footprint is no longer affected by test length or
the number of tests in a schedule. It is affected by the dynamics of the workload
that impact the probability that a page is being executed by a virtual user, and the
relationship of virtual user’s activity with respect to each other, as described
elsewhere in this document. But the net impact in almost all cases is a significant
improvement in memory footprint over previous releases.
10.5.4 Memory reduction of RPT 6.1.2 vs. 6.1.1 for three workloads
Table 10-17 compares the incremental virtual user memory footprint for three
workloads in this document for the 6.1.1 and 6.1.2 releases.
Note that these results are for a single test in a schedule; the 6.1.2 memory
reduction would be proportionally higher as the number of tests in a schedule
increased.
Figure 10-13 6.1.2 vs. 6.1.1 incremental virtual user memory footprint reduction factor
10.6.1 TradeSVTApp
In this section we show the results for the TradeSVTApp workload.
For this workload, in a couple of cases the individual collecting the data used two
separate perfmon sessions with 5 and 30 second sample intervals,
instead of the 5 and 60 second sample intervals mentioned in the methodology.
The 30 second sample is still fine for our purposes, and is probably less than a
half-percent higher than what would have been obtained using a 60 second
sample interval.
The methodology that is followed is to select the maximum value of Private Bytes
for the established steady-state duration of the run (at least several test iterations
after all users are active), prior to the presence of any unnatural elevation, if any.
In all cases below this equates to the maximum value of Private Bytes for the
entire run, which can be simply read directly from the Maximum counter value.
Figure 10-14 One virtual user (entire run with 30-second sample interval)
Figure 10-15 shows another straightforward case; the selected value of Private
Bytes is 46432256, rounded to 46.4 mB.
Figure 10-15 10 virtual users (entire run with 60-second sample interval)
Figure 10-16 100 virtual users (entire run with 60-second sample interval)
Figure 10-17 200 Virtual Users (Entire run with 30 second sample interval)
Table 10-18 Incremental virtual user memory footprint from individual runs
The slope of 0.170 is the reported value for the incremental virtual user memory
footprint for the TradeSVTApp workload in the “Memory footprint for base
workloads” on page 339.
Figure 10-18 shows a desirable linear relationship in the memory footprint over
the range of virtual users measured. The intercept value of 44.3 is high
compared to other workloads. This is explained by the fact that this is the only
workload that includes datapools, and in fact there are 4 of them, ranging from
100 to 500 rows in size. The datapool data is loaded once and shared across all
virtual users and is therefore overhead that is independent of the number of users;
as expected, this overhead shows up in the intercept value.
10.6.2 InternetShoppingCart
In this section we show the results for the InternetShoppingCart workload.
Note that there is no one virtual user run for this workload, which generally is not
all that important. (One was actually made to validate the test was running
correctly, but the schedule settings did not match the other runs, so it was not
helpful to include it.)
Figure 10-19 10 virtual users (entire run with 60-second sample interval)
Figure 10-20 20 virtual users (entire run with 60 second sample interval)
Figure 10-21 30 virtual users (entire run with 60-second sample interval)
Figure 10-22 40 virtual users (entire run with 60-second sample interval)
The relatively large percentage difference in the % Delta column for the first two
points indicates a potential discrepancy that needs a closer look. When plotted in
the graph (Figure 10-23), it can be seen clearly that the 10 user value is bringing
the left side of the fitted curve up and lowering the slope.
By discarding this value from the calculation of the intercept and slope a very
good fit is obtained using the other three data points, and the 10-user fitted value
is 42.3 (12% less than the discarded value of 47.4). The slope increased
significantly after this change, up from 0.627 to 0.780.
While data should not be discarded casually, one has to remember that the
purpose of the exercise is prediction and any choices that will improve the
confidence associated with that prediction are generally good ones. The
following reasons validate this choice:
This data point was the clear outlier, and discarding it produced an excellent
fit, with all remaining deltas less than 0.4%.
If in doubt, give higher importance to the measurements at higher user counts
and less importance to the lower user count measurements. There is typically
more noise in the lower user count measurements especially when using
random delays such as the paced loop control for this workload, and more
importantly, the higher user counts should carry more weight in a sizing goal
of predicting the memory usage at much higher user counts.
It is probably better to err on the safe side when establishing HW needs
(although that may depend upon the availability and cost of acquiring the
HW), and in this case the calculated slope after discarding the 10-user value
is a full 20% higher.
The intercept value of 35.3 for this workload (no datapools) is closer to the
expected value than 39.6.
Note that although it was not possible in this case to go back and rerun the test
under similar conditions, doing so would be the first choice, to see whether the
outlier went away or was consistently present.
Table excerpt (# of users / measured value / fitted value [mB]): 1: n/a measured, 35.3 fitted; 10: measured value discarded, 42.3 fitted.
The slope of 0.780 is the reported value for the incremental virtual user memory
footprint for the InternetShoppingCart workload in “Memory footprint for base
workloads” on page 339.
The actual and fitted values are plotted in Figure 10-24 (labeled Final), which
shows a desirable linear relationship in the memory footprint over the range of
virtual users measured.
11.1 Introduction
In this chapter we describe how RPT can be used to test SAP enterprise
applications. SAP provides a comprehensive range of enterprise applications to
empower every aspect of customer business operations. SAP enterprise
applications are listed in Figure 11-1.
Although there are more than 50 applications, the number of front ends
is limited. The foundation of these solutions is the SAP NetWeaver® platform.
The architecture of SAP NetWeaver Application Server is shown in Figure 11-2.
From a technical perspective we can access the SAP system using the SAP GUI
or a standard Web browser:
SAP GUI for Windows
This is the most popular SAP GUI. Nearly every customer is using this front
end. It is the fastest SAP GUI and has to be installed on the local computer. In
this chapter we concentrate on testing the SAP GUI for Windows.
SAP GUI for HTML or Browser
This is the browser front end. It does not require any additional SAP
installations on the local computer. This front end can also be tested as it
uses the HTTP protocol.
SAP GUI for Java
This version does not depend on the local computer's operating system. This
is the least used front end.
If you want to simulate a large number of users, you have to deploy tests on
remote computers. The following software must be installed on each remote
computer:
The SAP GUI for Windows software, configured with the same logon
properties as the client on which the tests were recorded
The Agent Controller that is provided with RPT
Note: You cannot use Linux or AIX computers for recording or execution of
SAP performance tests. Only Windows computers are supported.
Supported environments
The following versions of the SAP GUI for Windows are supported:
SAP GUI for Windows 6.20 with patch level 44 or higher
SAP GUI for Windows 6.40 with patch level 13 or higher
Click on the left icon in the title bar, which opens the menu. Select About SAP
Logon.
Now verify the version information as shown in Figure 11-4. It is version 6.40
with patch level 17. This is a supported environment.
Note: You can also verify the version information directly in the SAP GUI if you
already logged on. Enter ALT+F12 and select About from the menu.
Enable scripting
Performance test recording and execution requires scripting to be enabled on the
SAP server and on all SAP GUI clients that are installed on remote computers.
The following instructions are for SAP version 6.40 and might vary with other
versions. Refer to SAP documentation for further information.
Performance testing relies on the SAP Scripting API and ActiveX®. Make sure
that these options are selected when installing the SAP GUI client.
– Click Save, and then end the transaction. Scripting will be enabled the next
time you log on.
Next we must also enable scripting on the client:
– In the SAP GUI client toolbar, click the Customizing of Local Layout toolbar
icon or ALT+F12, and then select Options.
– Select the Scripting page.
– Select Enable Scripting, and then clear both Notify When a Script
Attaches to Running GUI and Notify When a Script Opens a Connection
(Figure 11-6).
– Click OK.
– In the Help menu, select Settings, and then select the F4 Help page.
– In Display, select Dialog (modal) and then click the button.
Note: Ensure that the session that you are recording will be reproducible. For
example, if you create items in SAP and do not delete them, then they will
already exist when the test is run, which might cause the test to fail.
For Recording file name, type a name for the test. In our example we record
the FS10N transaction, so we type FS10N, and click Next.
Select how you want to connect to the SAP server:
– In most cases, select SAP Logon. In SAP system name, enter the
description used by SAP Logon to identify the server (Figure 11-8).
– If your environment does not support SAP Logon, select Connection by
string. For Application server, enter the host name or IP address of the
server, and specify the System Number and other options if required.
Refer to the SAP documentation for details about the other SAP Logon
options.
– If the environment uses gateways or routers to connect to the SAP R/3
server, select Connection by string. For Application server, enter the full
connection string, for example:
/H/gate.sap.com/S/3299/H/172.16.63.17/S/3200
– Leave System Number and other options blank.
If this is the first time you record a SAP performance test, read the Privacy
Warning and select I accept to proceed (Figure 11-9).
Click Finish to start recording. A progress window opens while the SAP GUI
starts. In some cases, you might see a warning that a script is opening a
connection to SAP. To avoid this follow the description in “Enable scripting” on
page 387.
Log on to SAP and perform the transactions that you want to test.
We are recording an example with the FS10N transaction:
– Log on to SAP using a valid user ID/password combination. Also type in
the language.
– Type FS10N and click the Enter icon or press Enter.
– In the next step we are using the following values: G/L account 3100000,
Company code 3000, Fiscal year 2001. Click Execute or press F8.
– Click Cancel or press F12.
– Close the SAP GUI or select System → Log off.
– In the Log Off dialog click Yes.
When you have completed the transactions to be tested, stop the recorder.
You can do this by closing the SAP GUI or by clicking Stop in the
Recorder Control view.
For security reasons, the password cannot be recorded by the SAP test
recorder. Instead, it is requested at the end of the recording session. In the
Enter Password dialog, enter the password for the account used for recording
(Figure 11-10). This is required because the SAP GUI does not allow direct
recording of the password. A progress window opens while the test is
generated. On completion, the Recorder Control view displays the message
Test generation completed, the Test Navigator lists the test, and the test
opens in the test editor.
Note: You should provide the password for the SAP performance test here.
Otherwise, you have to add it manually in the test editor.
The generated SAP performance test of our example is shown in Figure 11-11.
If you have provided a password, then this performance test is ready for playback.
No additional changes are necessary.
Inserting a new sequence into a test requires that the SAP session reaches the
same state as is expected at the point where the new sequence is inserted. To do
this, the SAP test recorder automatically replays the existing scenario up to the
insertion point before starting the new recording.
When you have finished, in the New Recording window, click Stop to stop the
recording. A progress window opens while the test is generated. On
completion, the Recorder Control view displays the message Test
generation completed, and the test is updated with the new content. You can
also click Cancel to cancel the recording.
After the test has been updated in the Test Navigator, verify that the new
sequence was properly inserted into the test, and then select File → Save to
save the test or select File → Revert to cancel the inserted recording.
If you create a performance schedule and start it with more than one user, only
the first user can log on. The other users get a warning about multiple logons
(Figure 11-13). Users then have the choice to continue the logon without
terminating the existing sessions.
During test execution, it might not be desirable to display the SAP GUI. Hiding
the SAP GUI improves the performance of the virtual users. This setting specifies
the behavior for the current test suite. However, you can change the default
setting for generated tests in the SAP Test Generation preferences.
Normally you should select Hide as this setting guarantees the best performance
during test execution. Otherwise, if you are still developing SAP tests, it is a good
choice to select Show only first virtual user.
Note: Make sure that you set Display SAP GUI on execution to Hide in the
SAP Options for every SAP test in your performance schedule. Forgetting this
setting can severely degrade playback performance.
With RPT it is possible to record several transactions into one SAP test, but the
logon and logout are limited to a single SAP performance test. So if you create a
performance schedule and include several SAP performance tests, you will
simulate a lot of logons, because every SAP performance test creates a new
session. Obviously this is not what a realistic user does. We need a way to let a
virtual user create a session only once, at the beginning of the test. Because a
session is limited to a SAP performance test, we have to create a complex SAP
performance test, as shown in Figure 11-15.
We can create the above performance test by recording the three transactions
into one test and then inserting the additional elements, such as the loop and the
if statements. Another option is using copy/paste to get additional transactions
into a performance test.
The first element of the complex test is the logon. After this we have a loop with
30 iterations, followed by the logout transaction. You should also consider
enabling the loop option Control the rate of iterations. In the loop we have our
transactions and one custom code element (Example 11-1). The job of this
custom class is to generate a random number between 1 and 3; depending on
the result of the custom code, one of the three If statements is executed.
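Example 11-1 is not reproduced here. The following is a minimal sketch of what
such a custom code class might look like; the class and package names are
illustrative, and the ICustomCode2 and ITestExecutionServices package names
should be verified against the skeleton that RPT generates (custom code is
covered in detail in Chapter 13).

package myPkg; // illustrative package name

import java.util.Random;

import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

/**
 * Returns a random number between 1 and 3 as a string. Each of the three
 * If elements in the complex SAP test compares this return value against
 * "1", "2", or "3" to decide which transaction to run in this iteration.
 */
public class RandomTransactionSelector implements ICustomCode2 {

    private static final Random RANDOM = new Random();

    public String exec(ITestExecutionServices tes, String[] args) {
        return Integer.toString(RANDOM.nextInt(3) + 1);
    }
}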
Especially for long test runs, you should change two TCP/IP parameters in the
Windows registry (Figure 11-16). In decimal the values are 65000 and 1000,
respectively; the default values are 5000 and 4.
[HKLM\System\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fde8
[HKLM\System\CurrentControlSet\Services\Tcpip\Parameters]
"MaxFreeTWTcbs"=dword:000003e8
Best practices
The best practices for SAP testing include:
Verify the prerequisites. In particular verify that the SAP GUI for Windows
version, including the patch level, is supported by RPT (“Supported
environments” on page 386).
Enable Scripting on the SAP server and on every client (“Enable scripting” on
page 387).
Record single transactions (“Recording SAP GUI tests” on page 389).
Create complex schedules (“Creating complex SAP GUI tests” on page 397).
Verify SAP playback options before you start heavy load tests.
Remember that every out-of-the-box agent can simulate only about 100
virtual users.
Common problems
Common problems with SAP testing include:
After starting a recording of a SAP test you get the following error dialog
(Figure 11-17) claiming: Connection is disabled by SAP Server.
Chapter 12
Features include:
Simple wizard-based test recorder
Ability to annotate test recording with automatic screen captures or comments
Test scheduling, execution and results analysis integrated in one solution
Performance reports tailored for Citrix scalability tests
Test recorder and test suite editor optimized for Citrix-based applications
Ability to record ICA protocol and key mouse and keyboard events to simulate
real users
The IBM Rational Performance Tester Extension for Citrix Presentation Server
requires a license key.
12.1 Introduction
Citrix Presentation Server (formerly Citrix MetaFrame) is a remote
access/application publishing product that allows users to connect to applications
available from central servers. One advantage of publishing applications using
Presentation Server is that it lets people connect to these applications remotely,
from anywhere around the world with minimum investment and setup. From the
end user's perspective, users log in with the Citrix Presentation Client to their
corporate network in a secure environment, without needing to install any
business applications on their own desktops.
Since the ICA Client communicates with the Citrix Presentation Server at a very
low level (mouse movements, key presses), the testing is independent of the type
of application under test. It does not matter if the application is a client-server
application, Java application, or a simple HTML Web application. It all boils down
to mouse movements and key presses when being recorded in a test script.
The Citrix system administrator manages the selection of the available software
and business applications in the Citrix end-user environment through Windows
profiles and accounts. The Citrix environment to which the end users log in can
be configured uniquely for each user.
Because the Citrix performance tests interact with the ICA Client at a very low
level (mouse movements, key presses) any changes made to the test after the
recording, such as moving test elements, adding loops or conditions, or inserting
new sequences, can break the context of the emulated user actions and cause
synchronization time-outs. It is essential to be aware of the context of user
actions when you are editing the test.
12.3.1 Limitations
Citrix performance tests use window creation and change events, and optionally
image recognition techniques, to synchronize user input with server output.
Before you record a session with a Citrix application, the behavior of that
application must be perfectly reproducible. Specifically, the application must
always create windows and GUI elements at the same locations and in the same
sequence. Mouse or keyboard events must always produce the same output.
If you save a file during a recorded session, the application might issue a
warning for an existing filename when you replay the tests. If the warning was
not in the recorded session, this will break the test and cause errors.
These guidelines should help you record a reliable test and avoid
synchronization time-outs during test execution. Ensure that you have a working
Citrix client environment and that you are able to connect to a Citrix server.
Ensure that the session that you are recording will be reproducible. To record
tests that can be reliably replayed, follow these guidelines:
If you save a file during a recorded session, when replaying the tests, some
applications might produce a warning for an existing filename. If the warning
was not in the recorded session, this might break the test and cause
synchronization time-outs.
If the applications that you are testing contain dialog boxes that run only on
the first execution of a particular program or feature, such as tips or security
warnings, ensure that they are disabled when you record the test. Any
windows or dialog boxes that were recorded but do not reappear on
subsequent executions might break the test and cause errors.
Use dedicated test user accounts for performance tests. Ensure that the user
accounts have minimal potential to cause problems if unpredictable mouse
events occur outside of the application window.
Set up the test accounts and applications to minimize unpredictable window
events, such as new mail notifications, automatic updates, or daily tips.
Disable extensible menus and hover text tool tips when possible.
If possible, avoid using the Start menu to launch applications. Use the Quick
Launch bar, desktop shortcuts, or select Start → Run and enter the name of
the application.
Do not launch applications or open files from locations that are likely to
change, such as Favorites or Recent Files.
When using cascading menus like the Start menu, always wait for a moment
for the submenu to display. After the recording, when editing the test, look at
the mouse sequences that were generated to ensure that they follow the
correct path to display the submenu.
Do not rely on the Recent Documents menus to open files or launch
applications, as these items are likely to change from one execution to
another.
When recording tests, before interacting with a window or dialog box, click the
element to ensure that it gets focus, and then provide input.
Ensure that mouse movements and keyboard events are clearly defined and
relatively slow. If hover help (tool tips) or mouse hover actions are expected,
be sure to wait sufficiently before moving on.
After recording a session, some applications require user input before quitting
(for example, to record any changes). This can cause discrepancies between
the state of the application at the end of a session and at the beginning of a
test execution. To avoid problems, at the end of a recording session, close all
applications manually and cleanly end the session from the Start → Log Off
menu rather than clicking Stop or Close on the Citrix client or the Citrix
Recorder Control.
After recording, and while you edit the test, it is important to perform regular
verification runs in order to validate the test with a single user. After each run,
open the test execution history to make sure that the test synchronizes correctly.
If necessary, change the synchronization level from Conditional to Optional on
window events or image synchronizations that produce unnecessary time-outs.
Only deploy the test on virtual users or run it in a schedule when the test is robust
enough to run flawlessly with a single user.
12.4.1 Prerequisites
Before you can test the performance of Citrix applications, the ICA Client must be
installed on the same computer as IBM Rational Performance Tester. The ICA
Client is required for recording and execution of performance tests. If you are
deploying tests over remote hosts to emulate a large number of virtual users, the
following software must be installed on each remote computer:
Citrix ICA Client
IBM Agent Controller
All IBM Agent Controllers must be of the same RPT version with the same
version of ICA Client installed. The ICA Client can be downloaded free from the
Citrix Web site:
http://www.citrix.com
Use the RPT Software Updater to install the latest version of RPT Version 7 with
the Citrix extension and contact IBM product support for further instructions.
Preferences
The behavior of the recording wizard is controlled by recorder preferences. To
inspect the current settings, select Window → Preferences, and expand Test →
Citrix Recorder. This procedure assumes that the defaults are set (Figure 12-1).
On the Citrix Recording page, select a project. For Recording file name, type
a name for the test, and click Next (Figure 12-3).
The name that you type is the base name for the recording, test, and other
required files. You see these files in standard Navigator or the Java Package
Explorer with their distinguishing suffixes, but you see only the simple name
(test) in the Test Navigator (for example, Citrix Sample Test is the Project
Name and CitrixTest1 is the script name).
On the Citrix Connection Settings page, specify how you want to connect to
the Citrix server (Figure 12-4).
– Select On server to connect directly to a specified Citrix server. Specify
the name or IP address of the server (http://CITRIXSERVER01) or click
Browse to locate a server on the local network. If the Citrix administrator
has defined published applications, you can click Browse to select the
application from the list of published applications on the server. To record
a Citrix desktop session, leave this field blank. Click Next to continue.
– Select With ICA file to connect using a predefined ICA file that is provided
by the Citrix server administrator. Click Browse to locate and select the
ICA file on your system and click Next to continue.
On the Citrix Session Options page (Figure 12-5), you can provide a
description for the session and change the video settings for the ICA Client.
Click Next to continue. Because Citrix performance tests are based on low
level interactions with the server, including mouse and window coordinates,
the Citrix session must be big enough to allow all actions.
You can do this by using a lower resolution for the recorded Citrix session
than the resolution of your own desktop. This keeps the window of the Citrix
session fully visible on your desktop without requiring the horizontal and
vertical scroll bars; the additional steps of clicking the scroll bars could
complicate the script unnecessarily.
If this is the first time you record a Citrix performance test, read the Privacy
Warning and select I accept to proceed.
To start the recording, click Finish. A progress window opens while the ICA
Client starts (Figure 12-6).
A Citrix Recorder Control Window opens at the bottom right hand corner on
the desktop, outside of the recording Citrix session (Figure 12-7). When
Display screen capture preview is selected, the Preview will show screen
captures every 10 seconds (or any other number) as set in the Citrix Recorder
in Window → Preferences → Test → Citrix Recorder. This Citrix Recorder
Control Window stays active throughout the recording session on your
desktop to allow you to stay in control.
Login to the Citrix server and perform the tasks that you want to test. You can
use the Citrix Recorder Control window to add comments or take screen
captures during the recording session to make meaningful indicators of what
you are trying to achieve in a test case.
One or more of the buttons shown in Table 12-1 in the Citrix Recorder Control
box are used to help you capture a script for successful playback with
verification points defined during the recording session.
To add a user comment to the recorded test, click the Insert user comment
button. Because Citrix tests can be long and difficult to read, meaningful
comments can help you locate important elements.
To add an image synchronization to the recorded test, click the Insert image
synchronization button, select an area of the screen that will be used for
synchronization, and click the Insert image synchronization button again.
Image synchronizations allow the test to keep track of the contents of a
screen area during the replay instead of focusing only on window events.
You can use them to maintain synchronization of a test in applications that
do not create or modify many windows, but update the contents of a window
regularly. The contents of an image can be evaluated either as a bitmap
hash code or as a text value obtained by optical character recognition. You
can also add verification points to image synchronizations in the test editor.
Table 12-1 describes the usage of the Citrix Recorder Control buttons during the
recording session.
To add a screen capture to the recorded test, click the Capture screen
icon. Screen captures make your tests easier to read and help you visualize
the recorded test. To change the settings for screen captures, click Screen
capture preferences and select one of these options:
No automatic screen capture—Select this option if you do not want the
test recorder to record screen captures automatically. When this option
is selected, you can still record screen captures manually. This option is
selected by default.
Capture screen every specified period of time—Select this option to
automatically record a periodic screen capture and specify the delay
between captures.
Capture screen on window creation—Select this option to record a
screen capture each time a window object is created in Citrix.
When you have completed the sequence of actions to be tested, close the
session cleanly and stop the recorder by closing the ICA Client. A progress
window opens while the test is generated. On completion, the Recorder
Control view displays the message Test generation completed, the Test
Navigator lists your test, and the test opens in the test editor.
The test editor lists the window events for a test, in sequential order. New
windows are displayed in bold. The Windows operating system assigns each
window an ID number. This number changes on each execution of the test, but
usually remains the same within the test, providing a means of identifying each
window object.
Note: In some cases, the operating system recycles destroyed window IDs (for
example 65562). The test recorder identifies these properly by appending an
extra number at the end of the window ID if necessary.
The example in “Recording Citrix tests” on page 405 shows the test of opening
Acrobat Reader in a Citrix session, which was generated from a recording of
these tester actions:
Login to the Citrix server CITRIXSERVER01 (use server setup for your own
environment).
Start the Acrobat Reader program through the shortcut icon on the Citrix
desktop.
Click Create Adobe PDF Online in Acrobat Reader.
The Adobe information Web site opens automatically in Microsoft Internet
Explorer.
Close the Acrobat Reader program.
Click Cancel in the Adobe software update dialog.
Log off Windows.
In the preceding example, Test Element Details displays information about the
test because the name of the test, Paint, is selected in the Test Contents area.
The Common Options and Citrix Options apply to the entire test.
As noted earlier, the test editor lists the window events for a test in sequential
order (Figure 12-12), with new windows displayed in bold.
Inside windows, you see a list of events for the window, such as create window
events, screen captures, and mouse or keyboard actions (Figure 12-15).
Some actions contain data that is highlighted. This highlighting indicates that the
data contains one or both of the following types of information:
Datapool candidate: This is a value, usually one specified by the tester
during recording, that the test generator determined is likely to be replaced by
values in a datapool. An example of a datapool candidate is a string that you
search for in a recorded test. The string is highlighted as a datapool candidate
on the assumption that, before running the test, you might want to associate
the string with a datapool column containing appropriate substitute values.
Correlated data: These are values in a test, usually one of them in a
response and the other in a subsequent request that the test generator
determined needed to be associated in order to ensure correct test execution.
An example is a photograph returned to the browser by a test that searches
an employee database. The test generator automatically correlates employee
names with photographs. Suppose that, before running the test with many
virtual users, you replace the employee name searched for in the recorded
test with names in a datapool. Because the test correlates the data, each
virtual user searches for a different employee, and the server returns an
appropriate photograph.
Click Add to add elements to the selected test element. Alternatively, you can
right-click a test element and select an action from a menu.
The choices that you see depend on what you have selected. For example,
inside a window, you can add a mouse action or a text input. The Insert button
works similarly. Use it to insert an element before the selected element. The
Remove button allows you to delete an item.
Note: Because Citrix performance tests rely on low level interaction with the
server, manually changing test elements is likely to break a recorded test.
Sometimes, the area of the editor where you have to work is obscured. To
enlarge an area, move your cursor over one of the blue lines until your cursor
changes shape (to a vertical line with an up arrow at the top and a down arrow at
the bottom) and drag up or down while holding the left mouse button.
You can have as many screen captures as you want to track specific steps in a
specific test case. The screen captures do not play a functional role in the
playback. A screen capture is particularly useful to help you identify a step that
has no window title but only a window ID. For example, Screen capture of [ ]
(65632) indicates that the Start menu is created in the Program Manager
(Figure 12-17).
They are both in some ways similar to their HTTP Report counterparts.
A bar chart indicates the overall success of the run with the percentage of
window synchronization successes. Synchronization success indicates that the
expected window events in the test match the actual window events in the test
run.
A second bar chart indicates the overall success of the run with the percentage
of image synchronization successes. Synchronization success indicates that the
expected image area or extracted text in the test matches the actual image
area or extracted text in the test run.
This is not considered a highly successful test; an ideal test should have the
highest possible success rate in both of the above figures.
The Server Timeout page shows when the synchronization time-outs and server
errors occurred during the run. The graph does not display values that equal
zero.
The Citrix Verification Points page contains tables with verification point details.
There are two types of verification point details: window verification points and
Citrix image synchronization verification points.
Figure 12-20 shows an example of good Window activities with 100% success
rate of synchronization.
You record the image synchronizations you want while you are recording the
test. These images are specific to your Citrix application; for example, an image
of a button or an area of a screen shot.
These errors are by nature common challenges of Citrix testing. Iterative tuning
of the test scenario helps reduce some of these errors, but unfortunately they
cannot be totally eliminated. Because of the low level at which Citrix testing
operates, there is also no proven technique to clearly distinguish errors caused
by the tool from real Citrix server errors, although the synchronization and
verification points should provide some clues.
The ability to synchronize actions with the server responses determines the
number of errors the test produces. Once synchronization errors cannot be
reduced any further, the remaining errors are true server errors, which can be
confirmed by server-side monitoring.
If you notice that the Citrix playback is often slowed down by too many time-outs,
there is a trick you can apply to produce playbacks that are quicker and that, in
most cases, execute to the end.
The main concern with this approach is that it is implemented neither in version
6.1.2 nor in 7.0. The situation will be better in 7.0 because it will be easier to
deliver hot fixes.
Apply this trick manually within the script until there is a hot fix. The suggested
method below is from an example; it may not be applicable in all cases:
Remove all ACTIVATE, RESIZE, and MOVE window events. The best way to find
these events is to use the Text search menu (Figure 12-23).
You are now ready to run the test suite again and to check that the execution is
still correct. You should get better results as this can really improve playback.
Chapter 13
This support adds a flexible and powerful mechanism to the basic RPT
application under test (AUT) record-and-playback (RAP) technique for test
automation.
Among other things, custom code support allows a RPT user to:
Extend and customize existing RPT-generated test behavior.
Provide test coverage for applications that do not have Web clients, or that
have no separate client application at all.
Programmatically create synthetic workload tests (that is, tests that were not
generated from previously recorded user actions) and then execute those
tests.
The following provides a short, partial list of examples of the extensions to RPT
test behavior that can be implemented using custom code support:
Log a message to the RPT problem determination log, perhaps to assist in
product or test debugging.
Log an event or message to the RPT test log, to provide additional test run
context.
Customize the execution flow of a RPT schedule loop, perhaps to implement
special test behavior.
Provide customized product-specific test data processing or validation.
Define and control custom application under test transactions.
Use custom RPT statistical counters for special product metrics collection
and reporting.
Pass (persist) data from one test to another, for the same simulated user.
In this mode, RPT is not used to execute tests that were generated from
previously recorded AUT user actions. Instead, RPT is used as an execution
framework to execute Java test code that is layered on top of an AUT application
program interface (API). The software architecture for this type of test is shown in
Figure 13-1.
In that figure, an RPT automation schedule contains a user group, which runs a
performance test that drives the product API.
All of the extensions that can be added to RAP performance tests using
custom code can also be made to synthetic workload performance tests.
RPT also supports mixing the use of both custom code support modes in the
same test schedule.
This action inserts a new custom code element and displays a Test Element
Details window for the new custom code object on the right (Figure 13-3).
In the Test Element Details window on the right, follow these steps:
Change the Class name to something related to the purpose of the Java test
code to be associated with the custom code element.
Note that by default the class is part of the test package (Figure 13-4). This is
the Java package that RPT uses for all its generated Java test classes.
We recommend that another Java package be used to avoid what may be a
confusing inter-mingling of RPT and user Java test code.
Click either:
– View Code to add existing Java test code to the custom code element
– Generate Code to have RPT generate a new skeleton Java test code
class.
Warning: Be sure to click View Code to add existing Java test code;
Generate Code will overwrite existing Java test code.
Once either button is clicked, RPT either opens the existing Java class and
displays its source code, or creates the corresponding skeleton Java class and
displays the class source code (Figure 13-5).
This Java class was generated by RPT by clicking Generate Code in the above
example, and is part of the user’s myPkg package.
The above custom code class satisfies two important requirements imposed by
RPT on Java test code it is to execute:
The Java test code must implement the ICustomCode2 interface.
The Java test code must accept two arguments, an object of the
ITestExecutionServices class and a String[] array, as its first and second
arguments, respectively.
At this point, the generated custom code class does nothing and simply returns
null (Figure 13-5).
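Figure 13-5 is not reproduced in this text; the generated skeleton has roughly the
following shape. The class name CustomCode1 is illustrative, and the import
package names reflect the RPT 7 custom code interfaces, so verify them against
the class that RPT actually generates in your installation:

package myPkg;

import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

public class CustomCode1 implements ICustomCode2 {

    public CustomCode1() {
    }

    /**
     * The exec method is invoked when the custom code element runs.
     * The freshly generated skeleton does nothing and simply returns null.
     */
    public String exec(ITestExecutionServices tes, String[] args) {
        return null;
    }
}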
The args[0] string is used to pass a copy of the current AUT response buffer to
undergo validation to the method in Figure 13-6. This validation is to search for a
specific string, as specified by the sptCCRC_devStream value in the RPT
IDataArea for the virtual user for the test that called this method. The
IPDLogManager is used to record informational messages regarding the validation
process and record unexpected errors encountered during validation processing.
And finally, the ITestLogManager is used to report a test verdict based on the
results of the data validation.
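The method in Figure 13-6 is not reproduced in this text. The following is a
minimal sketch of how such a validation method might look; it is an illustration
only, and the exact package names, the IPDExecutionLog logging constants, and
the reportVerdict signature should be verified against the RPT Javadoc for your
version.

package myPkg;

// Verify these package names against the RPT Javadoc for your installation
import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.logging.IPDExecutionLog;
import com.ibm.rational.test.lt.kernel.logging.IPDLogManager;
import com.ibm.rational.test.lt.kernel.services.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
import org.eclipse.hyades.test.common.event.VerdictEvent;

public class ValidateResponse implements ICustomCode2 {

    public String exec(ITestExecutionServices tes, String[] args) {
        IPDLogManager pdLog = tes.getPDLogManager();
        ITestLogManager testLog = tes.getTestLogManager();
        IDataArea vuData = tes.findDataArea(IDataArea.VIRTUALUSER);

        // The expected string was stored earlier in the virtual user data area
        String expected = (String) vuData.get("sptCCRC_devStream");
        String response = args[0];  // copy of the AUT response buffer

        try {
            if (response != null && expected != null
                    && response.indexOf(expected) >= 0) {
                pdLog.log(IPDExecutionLog.FINE,
                        "Validation passed: response contains " + expected);
                testLog.reportVerdict("Found " + expected,
                        VerdictEvent.VERDICT_PASS);
            } else {
                testLog.reportVerdict("Missing " + expected,
                        VerdictEvent.VERDICT_FAIL);
            }
        } catch (Exception e) {
            // Record unexpected errors encountered during validation
            pdLog.log(IPDExecutionLog.SEVERE, "Validation error: " + e);
        }
        return null;
    }
}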
The support for three of these aspects in the above example Java method is
provided by the RPT ITestExecutionServices interfaces and classes. This
support frequently plays a key role in the implementation of the desired
functionality of Java test code executed via RPT custom code support.
This, and a variety of other information and services, are provided to Java test
code through the RPT ITestExecutionServices interface and classes.
RPT on-line help provides a summary of this information and services at Help
Contents → Extending Rational Performance Tester Functionality → Extending
Test execution with custom code → Test execution services interfaces and
classes.
The following sections examine this information and these services more closely.
This interface provides a variety of logging levels, and can be used to report test
error conditions or problems, as well as report informational messages. As a
result, the IPDLogManager interface is often used as a Java test code debugging
tool for code executed by RPT.
Nine logging levels are supported: None, Severe, Warning, Info, Config, Fine,
Finer, Finest, All. These levels range from least verbose to most, and control
which messages instrumented in the Java test code will be recorded in the log.
The logging level selected for a test run is controlled by the RPT schedule for the
test run. This schedule setting is shown in the Schedule Element Details
(Figure 13-7).
The above code fragment writes CustomCode1: Inside Func(1) to the problem
determination log when the FINE log level is selected within the RPT schedule
calling the fragment.
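A logging call of roughly the following form produces that message (a sketch;
the IPDExecutionLog constant names should be verified against the RPT
Javadoc):

// Write a FINE-level message to the problem determination log
tes.getPDLogManager().log(IPDExecutionLog.FINE, "CustomCode1: Inside Func(1)");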
In the Select dialog window select Profiling and Logging → Log File and click
Next (Figure 13-9).
In the Log Types dialog select Common Base Event XML log, and click Browse
(Figure 13-11).
To locate the problem determination log to import, look in the RPT deployment
directory for the desired machine. Then click OK, and the log is imported and
available for review.
To view only those log entries written by RPT custom code, use perspective
filtering to select only those entries containing the string RPXE9999.
Test log
The second type of log, called the test log, is supported by the ITestLogManager
interface.
The level of detail of test log events reported by custom code to the RPT
workbench is controlled by the What To Log RPT schedule setting
(Figure 13-13).
This support is implemented via the IDataArea interface, and this interface
defines functionality for storing and accessing elements in data areas. The
elements of a data area are similar to program variables and are scoped to the
owning data area (or container).
It is important to understand each kind of data area and when to use it. All three
kinds of data areas provide information that can be read or retrieved. However, of
these three kinds of data areas, only the VirtualUserInfo data area allows a
user to store or write information. As a result, the VirtualUserInfo data area is
most often used within RPT custom code because it provides a mechanism to
persist information and pass it from one performance test to another.
The ITestInfo interface provides read-only context information about the test
that is currently executing. Some commonly used methods of the interface are:
getName—Provides the name of the performance test currently running.
getTestLogLevel—Provides the current test log level for the currently running
performance test.
getPDLogLevel—Provides the current problem determination log level for the
currently running performance test.
The following code fragment provides a simple example of how to use this data
area:
ITestInfo testInfo = (ITestInfo) tes.findDataArea(IDataArea.TEST);
String perfTestName = (String) testInfo.getName();
The following code fragment provides a simple example of how to use this data
area:
IVirtualUserInfo vuInfo = (IVirtualUserInfo)
tes.findDataArea (IDataArea.VIRTUALUSER);
ArrayList folderList = new ArrayList();
// populate folderList ...
vuInfo.put("myTestFolderList", folderList);
The following code fragment provides a simple example of how to use this data
area:
IEngineInfo engInfo = (IEngineInfo) tes
.findDataArea(IDataArea.ENGINE);
int activeUsers = engInfo.getActiveUsers();
Through selective use of these data areas, RPT custom code can collect needed
information about the environment in which the test code is running, and can
also save data for later retrieval and use.
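As a sketch of the retrieval side, a later performance test run by the same virtual
user could read back the list stored in the earlier fragment (the key
myTestFolderList is the one used above; the cast and imports mirror those of the
custom code class):

IVirtualUserInfo vuInfo = (IVirtualUserInfo)
        tes.findDataArea(IDataArea.VIRTUALUSER);
// Retrieve the folder list stored by an earlier test for this virtual user
ArrayList folderList = (ArrayList) vuInfo.get("myTestFolderList");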
For a specific example using ILoopControl, see 13.6.3, “Test duration control example” on page 460.
13.4.4 ITransaction
The ITestExecutionServices interface provides access to another interface that
can be used to define and control transactions recorded by RPT. A third interface,
ITransaction, provides a mechanism that can be used to define, start and stop
custom RPT transactions.
While the above example shows how to start and stop a custom code
transaction, the following example demonstrates some of the flexibility and
power this interface provides.
The test run user load profile (Figure 13-14) is divided into a ramp-up period, a
full load period, and a ramp-down period.
For convenience, transaction response time data is often sorted by name. This
practice places additional importance on the form and format of transaction
names. By formatting transaction names using the following technique, the data
collected during the full-load period of a test run can be separated from the
remaining data, allowing easier analysis of the system under test under full-load
conditions.
The above method creates a prefix (either ramp_ or load_) that can then be
prepended to a transaction name string depending on the current number of
active users. This effectively separates transactions executed during the
full-load period of the test run from transactions executed during the ramp-up
and ramp-down periods. When the test run transaction data is analyzed and
sorted after the test run, the transactions executed during the full-load period
are grouped together, separately from the transactions executed during the
other two phases.
The above method would be used by calling it and then using the returned string
as a prefix to the final transaction name string. The following code fragment
shows this usage:
/* execute a CleardiffPred transaction */
// (prefix) (transaction base name)
transactionName = getTransactionName(tes)+ "CleardiffPred" ;
ITransaction transaction = tes.getTransaction(transactionName);
transaction.start();
......
transaction.stop();
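The getTransactionName method itself is not shown in this text. A minimal sketch
of the idea follows; FULL_LOAD_USERS is a hypothetical constant holding the
total simulated user count, and getActiveUsers is the IEngineInfo method shown
earlier:

private static final int FULL_LOAD_USERS = 1000;  // hypothetical total user count

private String getTransactionName(ITestExecutionServices tes) {
    IEngineInfo engInfo = (IEngineInfo) tes.findDataArea(IDataArea.ENGINE);
    // Use the ramp_ prefix until all users are active, load_ afterwards
    return (engInfo.getActiveUsers() < FULL_LOAD_USERS) ? "ramp_" : "load_";
}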
There are two approaches that can be used to add Java code or an API to a
RPT project:
Import Java code as a JAR file
Reuse through export/import
To do so, right-click the project and select Properties → Java Build Path →
Libraries, and then click Add External JARs (Figure 13-15).
In the JAR Selection dialog navigate to the desired JAR file, click Open, and the
JAR file is imported into the RPT project (Figure 13-16).
After the JAR file has been imported, be sure to clean and recompile the RPT
project code:
Select the project in the Test Navigator and Project → Clean (Figure 13-17).
In the Clean dialog, select the desired RPT project and click OK.
At this point, the Java classes within the imported JAR file can now be accessed
within RPT custom code for the project.
In the Archive File dialog select the RPT project and project assets to be
exported.
Once the project has been exported, a different RPT project can be opened and
the export archive can then be imported into that project by following a similar
import process.
Use of datapools and custom code can be combined. When this is done, the
datapool is usually used for one of two purposes:
To provide a variety of user response data (as described above) for use within
the custom code.
To provide configuration or initialization information to the custom code.
For brevity, a custom code element has already been added to the performance
test. For a description of how to insert a custom code element into a performance
test, see 13.3.1, “Adding custom code” on page 432.
In the subsequent dialog select the desired datapool and Open mode
(Segmented, Shared, or Private) to be used when accessing the datapool,
and click Select (Figure 13-21).
The datapool is associated with the performance test and displayed in the
Test Element Details pane (Figure 13-22).
When the performance test has an associated datapool, the custom code
element that is associated with the test can access and use the information
provided by the datapool.
The custom code does not have to use all the fields available in the datapool, but
may select which fields to use.
In the Select Arguments dialog select the datapool fields desired, and click
OK (Figure 13-24).
As stated in 13.3.1, “Adding custom code” on page 432, a custom code element
is required to accept two arguments. One of these is an instance of
ITestExecutionServices, and the other is a String array.
The String array is used to pass values (as arguments) from the datapool to the
custom code. Each field from the datapool that was selected is one of the String
array values. These values can be easily indexed and used for further processing
within the corresponding custom code exec method.
The following example illustrates the use of datapool arguments with custom
code. The example assumes two datapool fields, username and password, have
been selected to be passed as arguments to the custom code:
args[0] = username
args[1] = password
When associating the datapool to the performance test, use the Private open
mode so that each performance test receives the same information.
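Continuing the example, the two selected fields arrive in the args array of the
exec method in the order in which they were selected. The following sketch
stores them for later use; the data area key names are illustrative:

public String exec(ITestExecutionServices tes, String[] args) {
    // Datapool fields selected as arguments, in selection order
    String username = args[0];
    String password = args[1];

    // Store the values for use by later tests run by the same virtual user
    IVirtualUserInfo vuInfo = (IVirtualUserInfo)
            tes.findDataArea(IDataArea.VIRTUALUSER);
    vuInfo.put("sptUser", username);   // illustrative key names
    vuInfo.put("sptPwd", password);
    return null;
}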
This section now turns to the topic of how to put this support to practical use.
The following example shows how to pass an HTTP response to a custom code
class:
To begin, locate the HTTP response in the performance test that has to be
passed to a custom code class, and right-click on the response to create a
field reference (Figure 13-25).
Next, click Add and select Custom Code to add a custom code element to the
performance test after the selected HTTP response (Figure 13-26).
Now select the newly added custom code element and click Add in the
Arguments pane to provide the recently created field reference as an input
argument (Figure 13-27).
In the Select Arguments dialog select the field reference as an input argument
(Figure 13-28).
This makes the HTTP response available to the exec method of the custom code
class for further processing through the args String input array.
Once that is done, reference the passed HTTP response in the custom code as
args[0] to do any validation processing inside the exec method.
To return data from a custom code class to a performance test, the String return
value of the custom code class exec method is used.
Consider the following example custom code class, which retrieves a password
value from the virtual user data area, and then returns this value:
public String exec(ITestExecutionServices tes, String[] args) {
IPDLogManager ipdLogManager = tes.getPDLogManager();
IDataArea dataArea = tes.findDataArea(IDataArea.VIRTUALUSER);
// Retrieve password value
String pwd = (String)dataArea.get("sptCCRC_pwd");
return pwd;
}
As highlighted in the example, the class returns a String value of pwd which is
the desired password string to be used in the performance test.
This custom code class method can be used in conjunction with the Substitute
from custom code option within RPT to use the return value from a custom code
class in a HTTP performance test.
The following example illustrates this substitution procedure. For this example, a
CQWJ_ViewChangeSet performance test includes a ClearQuest Web login HTTP
request, in which the user password originally recorded is to be substituted with a
password provided by a custom code class:
To begin, add the custom code element to the performance test just before
the HTTP response in which the response substitution is to take place. Then
select the HTTP page in the Test Contents pane, right-click the response
value in the Test Element Details pane that is to undergo substitution, select
Substitute From, and then select the custom code class that was inserted
into the test just before the response (Figure 13-29).
When the schedule is run again, and the performance test is executed, the
response used in the HTTP page will be provided by the custom code class, not
the value originally recorded.
For large-scale test runs simulating hundreds or thousands of virtual users, the
ramp-up period time (that is, the time during which virtual users are introduced
into the test run and become active) can often consume a considerable amount
of time, especially in cases where test start stagger periods are long.
Consider a 1,000 user test run with a stagger time of two minutes (120 seconds).
Such a test run would have a load ramp-up time of 999 multiplied by 120
seconds, or 119,880 seconds (33.3 hours). If the schedule consists of
performance tests that iterate for a fixed number of iterations such that some
portion of those iterations are executed under a full load, the resulting test run
load profile resembles a trapezoid, as shown previously in Figure 13-14 on
page 446.
As is shown in the load profile in that figure, a significant amount of time is also
spent in the ramp-down period. If the objective of the test run is to collect product
performance data during the full load period, a large amount of time is spent in
ramp-up and ramp-down periods, slowing testing progress if multiple test runs
are desired.
One optimization that will improve testing efficiency is to limit the amount of time
spent in the ramp periods. While the ramp-up period is necessary to avoid
overwhelming the test system as the total user load is introduced into the system,
the ramp-down period is not. In fact, if the ramp-down period can be reduced or
eliminated, significant time savings will result.
One solution to realize this time savings is to use custom code at the end of each
test schedule main loop iteration to determine if the desired test run elapsed time
has reached the end of the full load period, indicated by the dashed vertical line
in the Test run user load profile. The custom code class would determine the test
run elapsed time thus far, and whether the full load interval was complete. If it
was complete, the custom code breaks out of the main schedule loop.
The following code fragment implements this elapsed time check and breaks out
of the innermost loop when the desired time limit has been passed:
public String exec(ITestExecutionServices tes, String[] args) {
    // Get the test duration limit stored in the virtual user data area
    // Get the IPDLogManager to report test execution information
    IPDLogManager ipdLogManager = tes.getPDLogManager();
    String testDuration = args[0];
    int durationMS;
    try {
        ILoopControl loopcontrol = tes.getLoopControl();
        ITime itime = tes.getTime();
        IDataArea dataArea =
            tes.findDataArea(IDataArea.VIRTUALUSER);
        // Convert the limit from minutes to milliseconds
        durationMS = Integer.parseInt(testDuration) * 60 * 1000;
        // timeInTest() and breakLoop() names should be verified in the RPT Javadoc
        if (itime.timeInTest() > durationMS) {
            loopcontrol.breakLoop();
        }
    } catch (Exception e) {
        ipdLogManager.log(IPDExecutionLog.SEVERE, "Duration check failed: " + e);
    }
    return null;
}
A custom code element, with the above implementation for its exec method,
would cause each virtual user to break out of the main schedule loop once the
time-in-test had surpassed a time limit specified in minutes and previously stored
in the virtual user data area for each user.
However, there are occasions where this method is not viable: Client/server
applications that do not support a Web browser client, or use a protocol that RPT
does not support.
In these cases, there may be another option. If the application provides a client
Java API or a client API that can be called using Java, then RPT custom code
support can be leveraged to provide AUT test automation.
13.7.1 Prerequisites
An important factor to consider when using a synthetic workload approach is
whether the AUT client API to be used also interacts with a system’s GUI
desktop. RPT tests that trigger such interactions will encounter problems,
typically test time-outs, as no human user will respond to the pop-up window or
dialog that has been triggered.
With the above restriction in mind, a careful review of the product client API to
use is warranted. A key question to evaluate during this review is: Can the
desired test workload be generated by API calls that do not trigger desktop
interaction? If this question can be answered with yes, then the synthetic
workload generation approach can be used.
These practices include those for development of well-written Java code, and
should likewise be followed in any RPT custom code synthetic workload test
development effort.
A simple solution to this issue may be to have the simulated user first create a
new defect record, and then save the corresponding record ID for later
modification. Because the simulated user created the record, the user should
have the necessary access privileges needed to modify the record, including
adding an attachment to the record.
As can be seen in this example, the AUT client has an associated state (user is
logged in, has created a defect, and so forth). Figure 13-30 illustrates an
example of AUT client state transitions.
The use cases shown in the diagram include Login, Logout, Create, Modify,
Query Records, Add Attachment, and Delete Records.
Figure 13-30 Test application state transition diagram example
In Figure 13-30 AUT client states are represented as numbered circles, while
labeled directed arcs (lines with an arrow at one end) represent execution of a
specific (labeled) use case. These directed arcs may cause the AUT client to
enter a new state (as viewed by the test automation) or remain in the same state.
Test automation simulating a user must manage this AUT client state to operate
within the restrictions imposed by the product.
RPT record-and-playback tests are constrained to follow this AUT client state
management strategy, because support for RPT split-script execution is not
available.
There are at least two potential issues with this approach. One issue is that this
approach may force the execution of certain use cases (for example, login and
logout) more often than desired. The second issue is that a significant amount of
work must be done to reach a desired state, and that state does not persist
beyond the test script execution period.
However, custom code-based RPT tests are not constrained to follow this same
strategy. An RPT test script may invoke custom code classes in any desired
sequence. This flexibility allows the definition of an AUT client home state other
than inactive (the not logged-in state given the previous ClearQuest example).
Recognizing this, RPT schedules that execute custom code-based tests can
take on a recognizable structure:
initialization
home state setup
main loop
selector
use case sequence #1
use case sequence #2
use case sequence #3
......
cleanup
The initialization phase handles any required test automation setup, such as
population of information (for example, test server host name, test database
name, use case execute rate) into the IVirtualUserInfo data area. This
information is typically read from an initialization datapool (see 13.5, “Using
datapools” on page 451).
The home state setup phase handles execution of any required use cases to
enter the desired AUT client home state. For example, login to a specific
database, using the previous ClearQuest example.
The main loop of the schedule iterates over execution of a RPT selector,
which in turn executes RPT tests, each of which executes a desired sequence
of AUT client use cases (use case sequences). Each of these tests has the
same starting conditions, corresponding to the AUT client home state, and
after execution returns the AUT client to the same home state. These use
case sequences can be arbitrarily complex and provide a great deal of
flexibility in creating a desired workload for the AUT.
After the main loop completes execution, the cleanup phase handles any use
case execution needed to gracefully and cleanly shut down the AUT.
In Figure 13-31:
The CCRC_initialize script handles the initialization phase.
A sequence of performance tests that execute the initialize, login,
mkstream (make UCM stream), createView (create UCM view), SetCs (set
view config spec), update (update CCRC view), and getMyActivities (get a
list of my activities) implements the home state setup phase.
The (main) loop iterates for 40,000 iterations, and invokes:
– A CCRC_userLoop test, which further controls the behavior of the loop and
serves to break out of the loop after a specified time period has elapsed
(see 13.6.3, “Test duration control example” on page 460). If the time
period has not yet elapsed, schedule execution proceeds.
– An RPT random selector that selects one of a set of choice folders that
simulates a specific type of virtual user.
The CCRC_rmview test, which removes a previously created CCRC view and
handles the cleanup phase.
This example also illustrates how an RPT schedule can be structured to support
simulation of different types of users for the AUT. Within each of the three
first-tier random selector choice folders is a second-tier selector, the choices for
which correspond to use cases for that type of user. Each choice in these
second-tier selectors corresponds to a sequence of RPT tests, which in turn
corresponds to a sequence of AUT client use cases.
Chapter 14
Objectives
The objectives of the Smart Bank showcase are:
Demonstrate the value of IBM infrastructure in the retail banking context.
Provide a tool to engage customers and interest them in further projects and
sales involving IBM hardware, software, and services.
Focus on the business value and develop proof-points to show how the
infrastructure can help deliver key banking projects.
Run the demonstration of the proof-points at operational volumes typical of a
European sized bank:
– 6 million customers
– 12 million accounts
– Average daily online transaction throughput of about 300
transactions/second rising to a peak of up to 1200 transactions/second
– Constant background batch workload set-up as the lowest priority
workload during the demonstration
Provide a platform for internal IBM collateral, testing of concepts and
documentation:
– We participated in the z9™-109 Early Support Programme (ESP) project
testing the new machine with near operational volumes and utilization.
– This IBM Redbook is the first piece of external collateral to be created from
the project.
– We provided an analytical database developed in Montpellier based on the
Information FrameWork’s (IFW) Banking Data Warehouse (BDW) to other
IBM teams.
Assets
We were able to leverage the database resulting from an Independent Software
Vendor (ISV) benchmark conducted in Montpellier (Fidelity). This, plus the
agreement with the ISV to use their application software to represent the core
system in our retail bank, launched the project.
As the project grew to cover new business proof-points we engaged other ISVs
to provide the application functionality to highlight our infrastructure.
The following is a list of the ISVs and IBM assets engaged in the Smart Bank
showcase:
Fidelity Corebank v4.2—Real-time retail banking application based on a
physical implementation of the Financial Services Data Model (FSDM), which
forms the foundation of the Information FrameWork (IFW) model from IBM
Software group (SWG) in Dublin.
Fair Isaac TRIAD v8.0—Risk calculation engine
Siebel Business Analytics v7.7.1—Business Intelligence application
Stonesoft Stonegate v2.2.7—Linux firewall security software
IBM Banking Data Warehouse (BDW) from IFW, built on the FSDM data
model
ACI Worldwide Ltd, BASE24-es v.6.2—Payments Framework. Refer to the
IBM Redbook Guide to Using ACI Worldwide’s BASE24-es on z/OS,
SG24-7268
IBM ATS-PSSC, Montpellier resources, experts, hardware and software
Note: For more information about the architecture of the Smart Bank
showcase, refer to the IBM Redbook Infrastructure Solutions: Building a Smart
Bank Operating Environment, SG24-7113.
In this section we cover the proof-points one by one, roughly in the order in
which we addressed them.
The first thing we had to do was to integrate our modern core retail banking
system, provided by Fidelity, with our channels to allow our customers to
transact with us.
Branch transformation
For the branch channel we adopted a centralized model, where the branch
servers are physically located on centralized servers to help simplify the
environment and reduce costs.
This proof-point focuses on horizontally scalable applications deployed in a
distributed environment (perhaps a regional hub) or in a central server
environment.
The Branch Transformation proof-point also inherits the points from the
multi-channel transformation proof-point and shows end-to-end integration
across heterogeneous platforms.
Identify back-office areas that can be optimized for more efficient operations.
With access to the mission-critical core system functionality through SOA, it is
then possible, using the new J2EE programming model, open standards, and
processes, to optimize these functions and add new value:
– Greater integration for cross-selling
– Enablement of transformation of the core operation
– System flexibility to better deal with new products and regulatory
compliance
Regulatory compliance
Regulatory reporting from the risk framework, created by leveraging the
analytical database built for customer insight.
Show how an enterprise can meet some of the demands of Basel II through
the use of a central banking data warehouse (analytical database).
Infrastructure simplification
Infrastructure simplification is an underlying message behind the whole
demonstration, together with the monitoring and management of a system
that can use virtualization and autonomic technologies.
This, together with the systems management, is the proof-point that this
document helps to address, and it provides the mechanism through which
we demonstrate the other proof-points.
We do not pretend to deliver the ultimate solution to all these problems, but
rather to show how IBM's strategies and infrastructure can be used to address
them, with an appropriate use of technologies and a set of example ISVs who
perform certain business functions.
Online workload
The standard online workload was derived from research into actual banks’
workloads and was set up to represent a typical day’s online activity
(Figure 14-1).
Figure 14-1 Standard online workload (chart labels include Internet Banking and Bill Payments 2%)
Note: We recognize that many banks operate across different time zones and that this can even out or blur
the peaks and troughs of a daily workload. For reasons of simplicity, and to show a more dramatic
dynamic demonstration, we have kept the demonstration to one time zone for the moment.
Table 14-1 shows, at a high level, the events that we typically run through
for the Smart Bank showcase.
Table 14-1 The story board for the day in the life of a bank
06:00 Setting the scene. Early morning at the bank: Here we show low channel traffic coming from our ATMs and Internet channels as our customers withdraw cash for the day or access our internet bank.
09:00 Branches open. Workload increasing: A new channel comes on-line and our customers start transacting via our branches.
10:00 Business intelligence. Customer segmentation: The bank’s analysts, using business intelligence software, have detected a trend in our mortgage portfolio and go on to identify a customer segment and a promotional opportunity.
10:30 Near real-time business analytics data: How have we created the architecture for our analytical database and achieved a near real-time update of it? (Show the near real-time analytics update speeds through simple SQL queries from Siebel Business Analytics to BDW and describe the architecture.)
10:45 Leverage the new programming model: The bank wants to rapidly create new business products to offer to its customers. We leverage the capabilities presented by service oriented architecture (SOA) to do this, and we also leverage our existing mission-critical applications.
11:00 Introduce business performance dashboards: A new capability made possible by service oriented architecture: a dashboard for business performance to measure how well our channels are performing.
13:00 New promotion launch. Promotion peaks internet traffic: Our promotion is working better than expected, so we use On/Off Capacity on Demand. (Move to the Hardware Management Console to initiate an OOCoD, On/Off Capacity on Demand, request. Show the effect of this request on the workload via TEP. Highlight the virtualization capabilities of the platform.)
14:00 Business continuity: Now we have some problems. We discuss the concepts of business continuity; from an IT point of view we have achieved Recovery, Redundancy, and Hardening.
15:00 Credit (Basel II) and operational risk: Having discussed some of the ways to counter operational risk, we now turn to credit risk reporting, another aspect of Basel II compliance.
17:00 Branches close. Transforming a branch infrastructure: Branch consolidation and the ability to have platform independence. Choose the platform dependent on quality of service. Simplification of the infrastructure.
Figure 14-2 shows the consolidated workload. Note that the start time of each
channel differs, and that the number of hits for each channel varies according to
the hour of the day.
Figure 14-2 Workload chart (percentage of the total workload, from 6:00 through 2:00, for the Batch, Internet Banking, ATM, and Teller workloads)
Step Time Description (which channels) Backend Transactions Hits/Sec
6 11:00 Increase all, more for Teller and Process 320 135
The corresponding chart plots the backend transactions and hits per second from 4:00 through 13:00, on a scale of 0 to 1200.
Table 14-3 shows the page hits per second for each channel over the same time
periods.
Note:
We have two other channels, Gold and Silver, which are ATM channels
reserved for VIP customers.
The Process channel is a Teller SOA process.
Table 14-3 Page hits per second for each channel
Time: ATM, Gold, Internet Banking, Process, Silver, Teller
11:00: 2, 35, 4, 24, 40, 2
11:30: 3, 40, 6, 30, 51, 3
12:00: 6, 58, 10, 65, 126, 6
12:30: 8, 65, 12, 80, 155, 8
13:00: 54, 4, 60, 7, 45, 70
Figure 14-4 and Figure 14-5 show the simulated workload with these values.
Figure 14-4 Simulated workload (page hits per second, 04:00 through 13:00, for the Internet Banking, Silver, Gold, ATM, Teller, and Process channels)
Figure 14-5 Simulated workload for the Internet Banking, ATM, and Teller channels (04:00 through 13:00)
The main difficulties found during the implementation of the showcase are:
Creating a unique random number for each virtual user, to prevent several
virtual users from using the same account number (for the implementation,
refer to Example 14-4 on page 490).
Deciding whether or not to execute a script (for the implementation, refer to
Example 14-6 on page 491).
Modifying the cycle of execution on demand (for the implementation, refer to
Example 14-7 on page 492).
For the Smart Bank showcase we chose the second solution to change the
latency between each action.
In the performance schedule we create a group for each type of action (one for
ATM_BI, one for ATM_CW, and so forth).
An external tool for RPT, called the RPT Tuning Tool, was developed and,
combined with custom code, provides this dynamic capability. Otherwise, we
would need a different controller for each scenario, because think time cannot
be varied from the controller (only the number of users can).
The RPT Tuning Tool generates the appropriate properties files and ensures
their distribution to all Remote Agent Controllers (RACs). Refer to Figure 14-6 for
more information.
Figure 14-6 shows the RPT Tuning Tool distributing the think-time properties (for example, Thinktime.ATM_CW=200000 and Thinktime.ATM_PI=200000) to the two Rational agents on each x330 machine (for example, x330: 10.1.26.150).
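To make the mechanism concrete, the following sketch shows how a properties file of the kind shown in Figure 14-6 could be generated with java.util.Properties. The Thinktime keys and values are taken from the figure; the Running flags and the file name are assumptions for illustration only, and the real RPT Tuning Tool also pushes the file to each RAC over an IP socket, which is not shown here.

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * Writes a properties file of the kind the RPT Tuning Tool distributes to the
 * Rational agents; values are think times in milliseconds per script group.
 */
public class TuningToolPropertiesWriter {

    public static void main(String[] args) throws IOException {
        Properties prop = new Properties();

        // Think time per script, in milliseconds (keys as shown in Figure 14-6)
        prop.setProperty("Thinktime.ATM_CW", "200000");
        prop.setProperty("Thinktime.ATM_PI", "200000");

        // Hypothetical on/off flags of the kind read back by the custom code
        prop.setProperty("Running.ATM_CW", "true");
        prop.setProperty("Running.ATM_PI", "false");

        // The file name is illustrative; the real tool also pushes the file
        // to each Remote Agent Controller, which is not shown here.
        FileOutputStream out = new FileOutputStream("smartbank_tuning.properties");
        prop.store(out, "Generated for the RPT Tuning Tool illustration");
        out.close();
    }
}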
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
/**
 * @author NICOLASR
 */
public class RunProperties {
    // Properties file name, loaded properties, and timestamp of the last read
    private String ficProp;
    private Properties prop = new Properties();
    private long ficPropLastModified;
    /**
     * Instances of this will be created using a String constructor.
     */
    public RunProperties(String fic) {
        // Use the supplied file name only if it is neither null nor empty
        if (fic != null && !fic.equals("")) {
            ficProp = fic;
        }
        if (checkModif()) {
            readProperties();
        }
    }
    public RunProperties(String fic, ITestLogManager tlm) {
        tlm.reportMessage("RunProperties .String fic....");
        if (fic != null && !fic.equals("")) {
            ficProp = fic;
        }
        if (checkModif(tlm)) {
            readProperties(tlm);
        }
    }
    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public RunProperties() {
        if (checkModif()) {
            readProperties();
        }
    }
            return sret.toString();
        }
    } catch (FileNotFoundException e) {
        readed = false;
        e.printStackTrace();
    } catch (IOException e) {
        readed = false;
        e.printStackTrace();
    }
    return readed;
}
    public boolean readProperties(ITestLogManager tlm) {
        boolean readed = true;
        FileInputStream in = null;
        try {
            tlm.reportMessage("Read file " + ficProp);
            in = new FileInputStream(ficProp);
            if (in != null) {
                prop.load(in);
                in.close();
            }
            File file = new File(ficProp);
            //tlm.reportMessage("ficPropLastModified = " + ficPropLastModified);
            //tlm.reportMessage("file.lastModified() = " + file.lastModified());
            ficPropLastModified = file.lastModified();
            tlm.reportMessage("ficPropLastModified = file.lastModified()=" + ficPropLastModified);
        } catch (FileNotFoundException e) {
            readed = false;
            tlm.reportMessage("RunProperties.readProperties()-" + e);
            tlm.reportMessage("RunProperties.readProperties()-File=" + ficProp + "=-");
            e.printStackTrace();
        } catch (IOException e) {
            readed = false;
            tlm.reportMessage("RunProperties.readProperties()-" + e);
            tlm.reportMessage("RunProperties.readProperties()-File=" + ficProp + "=-");
            e.printStackTrace();
        }
        return readed;
    }
}
For that, the code uses the ScriptName initialized previously. This code uses
another piece of custom code, shown in 14.4.1, “RunProperties custom code” on
page 484.
import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
/**
 * @author NICOLASR
 */
public class ReadProperties implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {
    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public ReadProperties() {
    }
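The exec() method of ReadProperties is not reproduced in this excerpt. The following continuation is a minimal sketch under the assumption that the custom code loads the distributed properties through the RunProperties helper and copies the values into the virtual user data area for later use by RunState and DoThinkTime; the getProperty accessor, the file name, and the data area keys are illustrative assumptions.

    public String exec(ITestExecutionServices tes, String[] args) {
        ITestLogManager tlm = tes.getTestLogManager();
        IDataArea vda = tes.findDataArea(IDataArea.VIRTUALUSER);
        IDataArea tda = tes.findDataArea(IDataArea.TEST);

        // Name of the script (test), stored earlier in the test data area
        String scriptname = "";
        if (tda.containsKey("ScriptName")) {
            scriptname = (String) tda.get("ScriptName");
        }

        // Reload the distributed properties only if the file has changed
        // (file name and getProperty accessor are illustrative assumptions)
        RunProperties props = new RunProperties("smartbank_tuning.properties", tlm);

        // Copy the values this virtual user needs into its data area so that
        // RunState and DoThinkTime can read them later.
        vda.put("Running." + scriptname, props.getProperty("Running." + scriptname));
        vda.put("Thinktime." + scriptname, props.getProperty("Thinktime." + scriptname));

        return null;
    }
}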
import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
/**
 * @author NICOLASR
 */
public class ATM_BI implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {
    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public ATM_BI() {
    }
import java.util.Random;
import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.IVirtualUserInfo;
/**
 * @author NICOLASR
 */
public class Account implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {
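The constructor and exec() method of the Account custom code (Example 14-4) are not reproduced in this excerpt. The following continuation sketches one way to derive an account number that is unique per virtual user, which addresses the first difficulty listed earlier; the IVirtualUserInfo accessors and the numbering scheme are assumptions for illustration, not the code from the example.

    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public Account() {
    }

    public String exec(ITestExecutionServices tes, String[] args) {
        IDataArea vda = tes.findDataArea(IDataArea.VIRTUALUSER);

        // Per-virtual-user information; KEY and getUID() are assumed accessors
        // that identify the current virtual user with a unique numeric ID.
        IVirtualUserInfo vuInfo = (IVirtualUserInfo) vda.get(IVirtualUserInfo.KEY);
        int uid = vuInfo.getUID();

        // The user ID prefix keeps account numbers from colliding across
        // virtual users; the random suffix varies the account used per call.
        // Mapping onto the showcase's real 12 million accounts is not shown.
        Random random = new Random();
        long account = (long) uid * 10000000L + random.nextInt(10000000);

        // The returned value is substituted into the request URL.
        return Long.toString(account);
    }
}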
import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
/**
 * @author NICOLASR
 */
public class RunState implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {
    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public RunState() {
    }
    /**
     * For description of ICustomCode2 and ITestExecutionServices
     * interfaces, see Javadoc located at
     * <INSTALL_DIR>/rpt_prod/eclipse/plugins/
     * com.ibm.rational.test.lt.kernel_<PROD_VERSION>/pubjavadoc
     */
    public String exec(ITestExecutionServices tes, String[] args) {
        IDataArea vda = tes.findDataArea(IDataArea.VIRTUALUSER);
        IDataArea tda = tes.findDataArea(IDataArea.TEST);
        String scriptname = "";
        if (tda.containsKey("ScriptName")) {
            scriptname = (String) tda.get("ScriptName");
        }
        return (String) vda.get("Running." + scriptname);
    }
}
The use of the return code is shown in Figure 14-9 on page 494.
Important:
To avoid increasing the reported test execution time, it is necessary to insert a
transaction that contains this custom code (refer to Figure 14-9 on
page 494).
The KAction and KDelay objects do not have a documented interface; their
use is reserved.
import com.ibm.rational.test.lt.kernel.IDataArea;
import com.ibm.rational.test.lt.kernel.action.impl.KAction;
import com.ibm.rational.test.lt.kernel.action.impl.KDelay;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
/**
 * @author NICOLASR
 */
public class DoThinkTime implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {
    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public DoThinkTime() {
    }
    /**
     * For javadoc of ICustomCode2 and ITestExecutionServices
     * interfaces, select 'Help Contents' in the
     * Help menu and select 'IBM Rational Performance Tester TES'.
     */
    public String exec(ITestExecutionServices tes, String[] args) {
        ITestLogManager testLogManager = tes.getTestLogManager();
        IDataArea vda = tes.findDataArea(IDataArea.VIRTUALUSER);
        IDataArea tda = tes.findDataArea(IDataArea.TEST);
        int multi = 1;           // 1 => iThinkTime in milliseconds
        int iThinkTime = 10000;  // default 10s
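The remainder of the exec() method is not reproduced in this excerpt. The following continuation is a sketch of how the think time could be resolved from the values distributed by the RPT Tuning Tool; the data area key is an illustrative assumption, and the delay itself is issued through the reserved KDelay/KAction mechanism, which is deliberately not shown.

        // Resolve the script name stored earlier in the test data area
        String scriptname = "";
        if (tda.containsKey("ScriptName")) {
            scriptname = (String) tda.get("ScriptName");
        }

        // Override the default think time with the value distributed by the
        // RPT Tuning Tool, if one is present (data area key is an assumption)
        Object configured = vda.get("Thinktime." + scriptname);
        if (configured != null) {
            iThinkTime = Integer.parseInt((String) configured);
        }
        testLogManager.reportMessage("Think time for " + scriptname + " = "
                + (iThinkTime * multi) + " ms");

        // The delay itself is issued through the reserved KDelay/KAction
        // mechanism, which is not reproduced here.
        return null;
    }
}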
Figure 14-9 shows an example of a complete test with its custom code, its
conditional run, and its dynamic think time.
The custom code of 14.4.4, “Account custom code” on page 490, was added to
make substitutions into the URL.
A diagram summarizes the injection architecture: the RPT Tuning Tool sends the new properties over IP sockets to the Remote Agent Controller (RAC) to change the scripts dynamically. There is one RAC per physical x330 box (10.1.26.148 and 10.1.26.150), and it receives both Controller and RPT Tuning Tool messages; two Rational agents run on each box. The agents read the RPT Tuning Tool file to simulate think time, call the internal RPT delay function to vary the injection rate, and wait to launch the channel process. The generated traffic flows over TCP/IP (HTTP) to the injection servlet, the ATM network (up to 100 ATMs), and the branch.
We defined two Rational agents per x330 machine because these engines have
4 GB of memory and the agents' JVMs are limited in the amount of memory they
can use.
14.6 Conclusion
Figure 14-11 shows the result of dynamic workload generation during a typical
Smart Bank showcase. This workload is driven by eight agents and can push
1200 backend transactions/sec (backend transactions are the real transactions
at the level of the z9 server; a single test action can generate several actions on
the server).
With this principle of dynamic workload, it is possible to determine the peak
load supported by the application or the server under test.
Part 3 Appendixes
Product installation
The IBM Rational Performance Tester product has a distributed architecture and
therefore has separate install procedures for the product console and the remote
test agent. There are two sets of installation media: one for the product console
(three disks) and one for the test agent (two disks). When installing from the
media, the startup program is called the launchpad. The product install options
are listed and initiated from the launchpad (Figure A-1).
From the launchpad, you select the option to install Rational Performance Tester
and it tries to find an existing version of the Installation Manager. If one is not
present on the system, then a copy of the Installation Manager is installed.
Step 1. From the first page of the InstallShield Wizard for the Installation
Manager (Figure A-2), click Next.
Step 2. Accept the license agreement terms (Figure A-3) and click Next.
Step 3. Change the installation directory if you want (Figure A-4) and click Next.
Step 4. Now click Install to start the actual install (Figure A-5).
Step 5. You can monitor the installation in the status bar (Figure A-6).
If this installation was part of installing a Rational desktop product, then the
Installation Manager will come up with an indication of what products are
available for installation from the configured repositories.
Step 1. The preselected package installation sequence will begin when you click
Next (Figure A-8).
Step 2. Select I accept the terms in the license agreements and click Next
(Figure A-9).
Step 3. Browse to the directory where you want the shared desktop product
components to be stored and click Next (Figure A-10).
Step 4. Browse to an empty directory to install the package and click Next
(Figure A-11).
Step 5. If you want to lay down a new version of Eclipse for this package, just
click Next. Otherwise, select Extend an existing Eclipse, browse to the installation
directory of the Eclipse and JVM that you want to extend, and then click Next
(Figure A-12).
Step 6. Select any additional languages that you want installed and click Next
(Figure A-13).
Step 7. Select any optional features you wish to install and click Next
(Figure A-14).
Step 8. Select the level of security you require for the Test Controller. By default,
any computer can connect and use your computer as a Test agent. If this is
satisfactory in your environment, it is the best choice. If you want to restrict test
runs on your computer to only those tests initiated from your machine, you can
customize the settings and select This computer only. The third choice is to
specify a list of allowed computers (Figure A-15). Click Next.
Step 9. Review the features to be installed for completeness and click Install
(Figure A-16).
Step 10. Wait for the Installation Manager to complete the install operation
(Figure A-17).
Step 11. Insert disk 2 when requested, or browse to the installation directory
(Figure A-18).
Figure A-18 Performance Tester Installation - Insert Disk dialog for disk 2
Step 12. Insert disk 3 when requested, or browse to the installation directory
(Figure A-19).
Figure A-19 Performance Tester Installation - Insert Disk dialog for disk 3
Step 13. When the install is complete, you will see a final status message
indicating the success or failure of the product installation. Click Finish
(Figure A-20).
If for any reason the Installation Manager is unable to install the requested
package due to conflicting or unsupported components, this page will have a
big red X, and the installation log will indicate why the installation failed.
Step 2. From the License Server LaunchPad, select Install IBM Rational License
Server (Figure A-22).
Step 4. Click Next at the License Server welcome screen (Figure A-24).
Step 5. Close all other applications, turn off your anti-virus software, and click
Next (Figure A-25).
Step 6. Read and agree with the terms of the license agreement and click Accept
(Figure A-26).
Step 7. Set the installation location for your license server and click Next
(Figure A-27). If you have already installed the product, this location will be
unchangeable.
Step 8. Leave the default setting and click Next (Figure A-28).
Step 11. The license server is now installed; click Finish to exit (Figure A-31).
Configuring the license server and entering license keys are discussed in the
product licensing section that follows.
When installing from the media, the startup program is called the launchpad. The
product install options are listed and initiated from the launchpad (Figure A-32).
Step 1. Click on the link Install IBM Rational Performance Tester Agent.
Step 2. Select the package and version to install and click Next (Figure A-33).
Step 3. Select I accept the terms of the license agreement and click Next
(Figure A-34).
Step 4. Browse to the directory where you want the shared desktop product
components to be stored and click Next (Figure A-35).
Step 5. Browse to an empty directory to install the package and click Next
(Figure A-36).
Step 6. Click Next because the Test Agent does not share an Eclipse instance
(Figure A-37).
Step 7. Select any additional languages that you want installed and click Next
(Figure A-38).
Step 9. Select the level of security you need for the Test Agent Controller. By
default, any computer can connect and use your computer as a test agent. A
good second choice is to select a custom install and list the computers that
can access this test agent (Figure A-40).
Step 10. Review the features to be installed for completeness and click Install
(Figure A-41).
Step 11. Wait for the Installation Manager to complete the install operation
(Figure A-42).
Figure A-43 Performance Tester Agent - Insert Disk dialog for disk 2
Step 13. When the install is complete, you will see a final status message
indicating the success or failure of the product installation (Figure A-44).
Once the Test Agent has been properly installed, the ACWinService.exe process
on Windows or the RAService process on Linux or Unix should be running. This
indicates that the Test Agent has been started.
To use the Transaction Breakdown Analysis feature, you should then start the
data collection process (tapmagent): either from the Start Menu on Windows, or
on Linux or Unix by invoking the startDCI.sh shell script in the Test Agent
<install dir>/DCI/rpa_prod subdirectory. If this server is hosting a Web
application server, you should instrument that server before starting the data
collection process, using the Application Server Instrumenter from the Start
Menu on Windows or the instrumentServer.sh shell script on a Linux or Unix
system.
Product licensing
The Performance Tester product is licensed in several different ways, to provide
the end user with a limited-term, fully functional product for trial purposes as well
as a permanent license capability once the product has been purchased. In the
performance testing market space, this style of tool has extremely high value
depending on the scale of the test in terms of virtual users emulated, so the user
buys only the capability they need for their testing. Some vendors resell this
capability on a per-test-environment basis. With Performance Tester, the general
virtual user capability is licensed, and the optional extended environments are
priced and licensed separately.
When the user purchases a licensed copy of the software, it comes with a
product activation kit. This can be loaded into the Performance Tester software
installation through the Installation Manager’s Manage Licenses function. This
converts the standard 30-day trial license into a permanent license that never
expires. The user’s sales representative can also provide an extension to the
30-day trial period through a limited-term product activation kit if the sales
situation justifies a longer trial period.
Virtual tester playback license keys are available in blocks of 50, 100, 500, 1,000,
5,000, 10,000, 50,000, and 100,000 virtual testers.
If you want to run more than one virtual tester with one of the product extended
environments, you need a floating playback license key for that extension. The
currently available product extensions are listed in Table A-2.
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get IBM
Redbooks” on page 532. Note that some of the documents referenced here may
be available in softcopy only.
Rational Application Developer V7 Programming Guide, SG24-7501
Building Service-Oriented Banking Solutions with the IBM Banking Industry
Models and Rational SDP, REDP-4232
Model Driven Systems Development with Rational Products, SG24-7368
Building SOA Solutions Using the Rational SDP, SG24-7356
Business Process Management: Modeling through Monitoring Using
WebSphere V6.0.2 Products, SG24-7148
Experience J2EE! Using WebSphere Application Server V6.1, SG24-7297
Web Services Handbook for WebSphere Application Server 6.1, SG24-7257
Best Practices for SOA Management, REDP-4233
Other publications
These publications are also relevant as further information sources:
IBM technote 1221972: Rational Performance Tester 6.1.2 Workbench
Memory Optimization:
http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21221972
Online resources
These Web sites are also relevant as further information sources:
IBM Rational Performance Tester
http://www.ibm.com/rational/
http://www.ibm.com/software/awdtools/tester/performance/index.html
Eclipse and Test & Performance Tools Platform
http://www.eclipse.org/
http://www.eclipse.org/tptp
SourceForge rstatd for Linux
http://sourceforge.net/projects/rstatd/
RFC standards
http://rfc.net/
Microsoft Performance Monitoring
http://msdn2.microsoft.com/en-gb/library/aa830465.aspx
Standard Performance Evaluation Corporation
http://www.spec.org//
SAP Solutions
http://www.sap.com/solutions/index.epx
Citrix
http://www.citrix.com
Index

A
Acrobat Reader 411
activation kit 526
ActiveX 387
Agent Controller 33
  local 149
  problems 35
  remote 150
analysis 14
  tool 14
application
  management 256
  monitoring 256
  real-time 279
  redesign 11
  response measurement 258
  server
    instrumentation 22
    version 25
Application Server Instrumenter 264, 269
architecture 7
ARM
  agent 281
  instrumentation 284
  standard 258
authentication 20, 130
  Citrix 135
  SAP 134
automatic correlation 81
average response time 187

B
balanced system 11
bandwidth 23
Banking Data Warehouse 471
BEA WebLogic Application Server 261
best practices
  SAP 400
bottleneck 11
branch transformation 472
browser
  caching 37
  configuration 30
  settings 30
business
  analyst 6
  conditions 14
  continuity 473
  intelligence 473
  transaction 17, 256
  workflows 19
byte-code instrumentation 261

C
caching 20
capacity 16
  measurement 23
capacity testing 14
changing hosts 128
Citrix 38
  authentication 135
  change host 130
  condition 114
  delay 108
  desktop session 408
  guidelines 404
  loop 111
  performance report 420
  performance test 403
  playback 425
  Recorder 406
    Control Window 410
  recording 405
  session 417
  test 62
    complex 424
    edit 414
    optimizing 425
    script 406
    timing 123
  verification point 101
  verification point report 420, 422
  Windows environment 402
Citrix Presentation Server 402
clock
  synchronize 216

distributed
  application
    analysis 14
  business transactions 256
  enterprise systems 9
  environment 9
  playback 10
distribution
  Student t 183
driver 322
  architecture 325
  configuration 338
  CPU usage 358
  engine overhead 332
  local 353, 365
  maximum JVM heap size 364
  measurement 337
  memory 355
  remote 364
  sizing 355
  virtual user capacity 356, 359
dynamic
  discovery 282
  load generation 14
  workload implementation 469

E
Eclipse 8
  Modeling Framework 213, 322
  Test & Performance Tools Platform 8
  Test and Performance Tools Platform 64
  user interface 8
encryption 35
end-to-end transaction 257
engine architecture 325
enterprise system 5
environment variable 32
equipment 14
execution rate 167
Execution Statistics views 303
expected behavior 90
extensibility 7

F
Fair Isaac TRIAD 471
firewall 25
  exception 151

G
garbage collection 24, 327
generic counter 192
global
  economy 5
  unique identifiers 260
goals 6
graceful stop 147
graphical analysis 300

H
hard stop 147
header 124
headless test 462
heap
  allocation 327
  fragmentation 22, 24
  limit 326
HTTP
  authentication 131
  change host 128
  condition 114
  connection 123
  delay 122
  header 124
  memory footprint 366
  percentile report 24, 201
  recording 37
  request rate 361
  response
    custom code 456
    element 48
  test 53
    loop 109
  test generation 39

I
ICA Client 402
image
  synchronization 420
incremental virtual user CPU utilization 322, 335, 348
incremental virtual user memory footprint 322, 329
Independent Computing Architecture 62, 402
Information Technology Lifecycle Management 279
infrastructure simplification 473
injector 322

O
Object Request Broker 259
OMEGAMON 253
operating system monitoring 14
optimize key strokes 426

P
paced loop 330
page
  element 23
  title 90
paging 23
peak load 17
PeopleSoft 64
percentile report 24, 201
performance
  analysis
    document 27
  counters 216
  data
    helper 217
    measurement 182
  problems 294
  report 24
  requirement 18
  results 182
  schedule 155
    configure 213
  test plan 26
  tester 18
  testing 4
    capability 13–14
    methodology 26
    process 11, 13, 15
    tool 5
  time range 199
Plants41k workload 341, 367
platform independence 7
playback 9, 140
  architecture 10
  command line 153
  engine 7
  Java process 152
  licensing 526
  multi-user 39
  problems 148
  SAP 396
  schedule 141
  status 142
  stop 146
pmirm.xml 267
policies 312
pool
  connection 24
portmapper daemon 226
Presentation Server
  Citrix 402
private bytes 328
probe 261
problem determination
  level 175
  log 148, 176, 438
process
  architecture 323
  performance testing 15
product
  activation kit 526
  licensing 526
production
  cutover 5
productivity 6
profit target 5
protocol extension license 39
proxy
  server 31, 36
  settings 32

Q
queries 234

R
ramp-up period 22
random arrival 182
random selector 164
RAService.exe 325
Rational Certificate Store 132
Rational ClearCase 52
Rational ClearCase LT 52
Rational ClearCase SCM Adapter 52
Rational ClearQuest Eclipse Client 463
Rational Functional Tester 395
Rational License Server
  installation 514
Rational Performance Tester
  architecture 7
  goals 6

W
Web-based testing 6
WebLogic
  instrumentation parameters 273
WebLogic Application Server 261, 267
WebSphere Application Server 264
Using Rational Performance Tester Version 7

Introducing Rational Performance Tester
Understanding RPT functions and features
Applying RPT to enterprise application testing

This IBM Redbooks publication is intended to show customers how Rational processes and products support and enable effective systems testing.

The document describes how performance testing fits into the overall process of building enterprise information systems. We discuss the value of performance testing and its benefits in ensuring the availability, robustness, and responsiveness of your information systems that fill critical roles for your enterprise.

Based on years of project experience, we describe the key requirements needed in performance testing tools and how the IBM Rational Performance Tester tool was developed to meet those requirements. We also walk through the choices that we made to steer the tool architecture into using an open source platform as its base, with Java as its language, to permit ubiquitous platform support.

This book is structured into two parts:
Understanding Rational Performance Tester
Applying Rational Performance Tester to enterprise application testing

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

SG24-7391-00 ISBN