LoadRunner Analysis
This manual, and the accompanying software and other documentation, is protected by U.S. and
international copyright laws, and may be used only in accordance with the accompanying license
agreement. Features of the software, and of other products and services of Mercury Interactive
Corporation, may be covered by one or more of the following patents: U.S. Patent Nos. 5,701,139;
5,657,438; 5,511,185; 5,870,559; 5,958,008; 5,974,572; 6,138,157; 6,144,962; 6,205,122; 6,237,006;
6,341,310; and 6,360,332. Other patents pending. All rights reserved.
Mercury Interactive, the Mercury Interactive logo, WinRunner, XRunner, FastTrack, LoadRunner,
LoadRunner TestCenter, TestDirector, TestSuite, WebTest, Astra, Astra SiteManager, Astra SiteTest,
Astra QuickTest, QuickTest, Open Test Architecture, POPs on Demand, TestRunner, Topaz, Topaz
ActiveAgent, Topaz Delta, Topaz Observer, Topaz Prism, Twinlook, ActiveTest, ActiveWatch,
SiteRunner, Freshwater Software, SiteScope, SiteSeer and Global SiteReliance are registered trademarks
of Mercury Interactive Corporation or its wholly-owned subsidiaries, Freshwater Software, Inc. and
Mercury Interactive (Israel) Ltd. in the United States and/or other countries.
ActionTracker, ActiveScreen, ActiveTune, ActiveTest SecureCheck, Astra FastTrack, Astra LoadTest,
Change Viewer, Conduct, ContentCheck, Dynamic Scan, FastScan, LinkDoctor, ProTune, RapidTest,
SiteReliance, TestCenter, Topaz AIMS, Topaz Console, Topaz Diagnostics, Topaz Open DataSource,
Topaz Rent-a-POP, Topaz WeatherMap, TurboLoad, Visual Testing, Visual Web Display and WebTrace
are trademarks of Mercury Interactive Corporation or its wholly-owned subsidiaries, Freshwater
Software, Inc. and Mercury Interactive (Israel) Ltd. in the United States and/or other countries. The
absence of a trademark from this list does not constitute a waiver of Mercury Interactive's intellectual
property rights concerning that trademark.
All other company, brand and product names are registered trademarks or trademarks of their
respective holders. Mercury Interactive Corporation disclaims any responsibility for specifying which
marks are owned by which companies or which organizations.
Mercury Interactive Corporation
1325 Borregas Avenue
Sunnyvale, CA 94089 USA
Tel: (408) 822-5200
Toll Free: (800) TEST-911, (866) TOPAZ-4U
Fax: (408) 822-5300
© 1999 - 2002 Mercury Interactive Corporation. All rights reserved.
If you have any comments or suggestions regarding this document, please send them via e-mail to
[email protected].
LRANUG7.6/01
Table of Contents
Welcome to LoadRunner
Online Resources
LoadRunner Documentation Set
Using the LoadRunner Documentation Set
Typographical Conventions
Chapter 1: Introducing the Analysis
About the Analysis
Creating Analysis Sessions
Starting the Analysis
Collating Execution Results
Viewing Summary Data
Aggregating Analysis Data
Setting the Analysis Time Filter
Setting General Options
Setting Database Options
Working with Templates
Viewing Session Information
Viewing the Scenario Run-Time Settings
Analysis Graphs
Opening Analysis Graphs
Welcome to LoadRunner
Welcome to LoadRunner, Mercury Interactive's tool for testing the performance of applications. LoadRunner stresses your entire application to isolate and identify potential client, network, and server bottlenecks. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise your application just as real users would. LoadRunner's in-depth reports and graphs provide the information that you need to evaluate the performance of your application.
Online Resources
LoadRunner includes the following online tools:

Read Me First provides last-minute news and information about LoadRunner.

Books Online displays the complete documentation set in PDF format. Online books can be read and printed using Adobe Acrobat Reader, which is included in the installation package. Check Mercury Interactive's Customer Support Web site for updates to LoadRunner online books.

LoadRunner Function Reference gives you online access to all of LoadRunner's functions that you can use when creating Vuser scripts, including examples of how to use the functions. Check Mercury Interactive's Customer Support Web site for updates to the online LoadRunner Function Reference.
LoadRunner Context Sensitive Help provides immediate answers to questions that arise as you work with LoadRunner. It describes dialog boxes, and shows you how to perform LoadRunner tasks. To activate this help, click in a window and press F1. Check Mercury Interactive's Customer Support Web site for updates to LoadRunner help files.

Technical Support Online uses your default Web browser to open Mercury Interactive's Customer Support Web site. This site enables you to browse the knowledge base and add your own articles, post to and search user discussion forums, submit support requests, download patches and updated documentation, and more. The URL for this Web site is http://support.mercuryinteractive.com.

Support Information presents the locations of Mercury Interactive's Customer Support Web site and home page, the e-mail address for sending information requests, and a list of Mercury Interactive's offices around the world.

Mercury Interactive on the Web uses your default Web browser to open Mercury Interactive's home page (http://www.mercuryinteractive.com). This site enables you to browse the knowledge base and add your own articles, post to and search user discussion forums, submit support requests, download patches and updated documentation, and more.
Installation Guide
For instructions on installing LoadRunner Analysis 7.6, refer to the Installing LoadRunner Analysis Guide.
For information on...                 Look here...
Installing LoadRunner                 Installing LoadRunner guide
The LoadRunner testing process        LoadRunner Controller User's Guide (Windows)
Creating Vuser scripts                Creating Vuser Scripts guide
Creating and running scenarios        LoadRunner Controller User's Guide (Windows)
Analyzing test results                LoadRunner Analysis User's Guide
Typographical Conventions
This book uses the following typographical conventions:

1, 2, 3     Bold numbers indicate steps in a procedure.
•           Bullets indicate options and features.
>           The greater than sign separates menu levels (for example, File > Open).
Stone Sans  The Stone Sans font indicates names of interface elements on which you perform actions (for example, Click the Run button.).
Bold        Bold text indicates method or function names.
Italic      Italic text indicates method or function arguments, file names or paths, and book titles.
Helvetica   The Helvetica font is used for examples and text that is to be typed literally.
< >         Angle brackets enclose a part of a file path or URL address that may vary from user to user (for example, <Product installation folder>\bin).
[ ]         Square brackets enclose optional arguments.
{ }         Curly brackets indicate that one of the enclosed values must be assigned to the current argument.
...         In a line of syntax, an ellipsis indicates that more items of the same format may be included.
Chapter 1: Introducing the Analysis
LoadRunner Analysis provides graphs and reports to help you analyze the performance of your system. These graphs and reports summarize the scenario execution. This chapter describes:
Creating Analysis Sessions
Starting the Analysis
Collating Execution Results
Viewing Summary Data
Aggregating Analysis Data
Setting the Analysis Time Filter
Setting General Options
Setting Database Options
Working with Templates
Viewing Session Information
Viewing the Scenario Run-Time Settings
Analysis Graphs
Opening Analysis Graphs
Note: Some fields are not available for filtering when you are working with summary graphs.
2 Select one of the following three options: Automatically aggregate data to optimize performance: Instructs the Analysis to aggregate data using its own, built-in data aggregation formulas. Automatically aggregate Web data only: Instructs the Analysis to aggregate Web data using its own, built-in data aggregation formulas. Apply user-defined aggregation: Applies the aggregation settings you define. To define aggregation settings, click the Aggregation Configuration button. The Data Aggregation Configuration dialog box opens.
Select Aggregate Data if you want to specify the data to aggregate. Specify the type(s) of graphs for which you want to aggregate data, and the graph properties (Vuser ID, Group Name, and Script Name) you want to aggregate. If you do not want to aggregate the failed Vuser data, select Do not aggregate failed Vusers.
Note: You will not be able to drill down on the graph properties you select to aggregate.
Specify a custom granularity for the data. To reduce the size of the database, increase the granularity. To focus on more detailed results, decrease the granularity. Note that the minimum granularity is 1 second. Select Use granularity of xxx seconds for Web data to specify a custom granularity for Web data. By default, the Analysis summarizes Web measurements every 5 seconds. To reduce the size of the database, increase the granularity. To focus on more detailed results, decrease the granularity. Click OK to close the Data Aggregation Configuration dialog box.
3 Click OK in the Result Collection tab to save your settings and close the Options dialog box.
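The aggregation described above can be pictured as averaging raw samples into fixed-size time buckets. The sketch below is a minimal illustration of that idea, assuming simple (timestamp, value) samples; it is not LoadRunner's built-in aggregation formula.

```python
# Average (timestamp, value) samples into buckets of `granularity` seconds.
# A minimal sketch of time-based aggregation; the sample layout is an assumption.
def aggregate(samples, granularity):
    buckets = {}
    for ts, value in samples:
        key = (ts // granularity) * granularity  # start of this sample's bucket
        buckets.setdefault(key, []).append(value)
    # one averaged value per bucket, in time order
    return {key: sum(vals) / len(vals) for key, vals in sorted(buckets.items())}

samples = [(0, 1.0), (1, 3.0), (5, 2.0), (6, 4.0)]
print(aggregate(samples, 5))  # {0: 2.0, 5: 3.0}
```

Coarser buckets mean fewer stored rows (a smaller database) at the cost of detail, which is the trade-off the dialog box exposes.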
2 In the Data Time Range box, select Specified scenario time range.
3 Enter the beginning and end of the scenario time range you want the Analysis to display.
4 Click OK to save your settings and close the Options dialog box.
Note: All graphs except the Running Vusers graph will be affected by the time range settings, in both the Summary and Complete Data Analysis modes.
2 In the Date Format box, select European to store and display dates in the European format, or US to store and display dates in the American format.
3 In the File Browser box, select Open at most recently used directory if you want the Analysis to open the file browser at the previously used directory location. Select Open at specified directory and enter the directory location at which you want the file browser to open, if you want the Analysis to open the file browser at a specified directory.
4 In the Temporary Storage Location box, select Use Windows temporary directory if you want the Analysis to store temporary files in your Windows temp directory. Select Use a specified directory and enter the directory location in which you want to save temporary files, if you want the Analysis to save temporary files in a specified directory.
5 The Summary Report contains a percentile column showing the response time of 90% of transactions (90% of transactions fall within this amount of time). To change the value of the default 90% percentile, enter a new figure in the Transaction Percentile box. Since this is an application-level setting, the column name changes to the new percentile figure (for example, to 80% Percentile) only on the next invocation of LoadRunner Analysis.
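The percentile column can be read as follows: sort all transaction response times and take the value below which 90% of them fall. The sketch below uses the nearest-rank method; the exact interpolation LoadRunner uses is not specified in this manual, so treat that choice as an assumption.

```python
import math

# 90th-percentile response time via the nearest-rank method (an assumption;
# the manual does not state which interpolation the Analysis applies).
def percentile(times, pct=90):
    ordered = sorted(times)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

times = [0.7, 0.8, 0.9, 0.95, 1.0, 1.05, 1.1, 1.2, 1.3, 2.5]
print(percentile(times, 90))  # 1.3 -- 90% of transactions completed within 1.3s
```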
2 Choose one of the following two options: To instruct LoadRunner to save Analysis result data in an Access 97 database format, select Access 97. To instruct LoadRunner to save Analysis result data in an Access 2000 database format, select Access 2000.
3 Click the Test Parameters button to connect to the Access database and verify that the list separator registry options on your machine are the same as those on the database machine.
4 Click the Compact Database button to repair and compress results that may have become fragmented, and to prevent the use of excessive disk space.
Note: Long scenarios (duration of two hours or more) will require more time for compacting.
2 Select or enter the name of the machine on which the SQL server or MSDE is running, and enter the master database user name and password. Select Use Windows-integrated security in order to use your Windows login, instead of specifying a user name and password.
Note: By default, the user name sa and no password are used for the SQL server.
3 In the Logical storage location box, enter a shared directory on the SQL server/MSDE machine in which you want permanent and temporary database files to be stored. For example, if your SQL server's name is fly, enter \\fly\<Analysis Database>\. Note that Analysis results stored on an SQL server/MSDE machine can only be viewed on the machine's local LAN.
4 In the Physical storage location box, enter the real drive and directory path on the SQL server/MSDE machine that correspond to the logical storage location. For example, if the Analysis database is mapped to an SQL server named fly, and fly is mapped to drive D, enter D:\<Analysis Database>.
Note: If the SQL server/MSDE and Analysis are on the same machine, the logical storage location and physical storage location are identical.
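The correspondence between the two locations amounts to a prefix translation: the path below the share root is identical in both views. A minimal sketch, with the server name fly and the directory layout purely illustrative:

```python
share_root = r"\\fly\AnalysisDB"   # logical storage location (UNC share) - example
drive_root = r"D:\AnalysisDB"      # physical location on the server - example

def to_physical(unc_path):
    """Translate a client-visible UNC path into the server's local path."""
    # only paths under the configured share can be translated
    assert unc_path.lower().startswith(share_root.lower())
    return drive_root + unc_path[len(share_root):]

print(to_physical(r"\\fly\AnalysisDB\session1.mdb"))  # D:\AnalysisDB\session1.mdb
```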
5 Click Test Parameters to connect to the SQL server/MSDE machine and verify that the shared directory you specified exists on the server, and that you have write permissions on the shared server directory. If so, the Analysis synchronizes the shared and physical server directories.
Note: If you store Analysis result data on an SQL server/MSDE machine, you must select File > Save As in order to save your Analysis session. To delete your Analysis session, you must select File > Delete Current Session. To open a session stored on an SQL Server/MSDE machine, the machine must be running and the directory you defined must exist as a shared directory.
2 Enter the name of the template you want to create, or select the template name using the browse button to the right of the text box.
3 To apply the template to any new session you open, select Automatically apply this template to a new session.
4 To apply the default Analysis granularity (one second) to the template, select Use automatic granularity. For information about setting Analysis granularity, see Changing the Granularity of the Data on page 28.
5 To generate an HTML report using the template, select Generate the following automatic HTML report, and specify or select a report name. For information about generating HTML reports, see Creating HTML Reports on page 270.
6 To instruct the Analysis to automatically save the session using the template you specify, select Automatically save the session as, and specify or select a file name.
7 Click OK to close the dialog box and save the template.

Once you have saved a template, you can apply it to other Analysis sessions.

To use a saved template in another Analysis session:
1 Select Tools > Templates > Apply/Edit Template. The Apply/Edit Template dialog box opens.
2 Select the template you want to use from the template selector, or click the browse button to choose a template from a different location.
3 To apply the template to any new session you open, select Automatically apply this template to a new session.
4 To apply the default Analysis granularity (one second) to the template, select Use automatic granularity. For information about setting Analysis granularity, see Changing the Granularity of the Data on page 28.
5 To generate an HTML report using the template, select Generate the following automatic HTML report, and specify or select a report name. For information about generating HTML reports, see Creating HTML Reports on page 270.
6 To instruct the Analysis to automatically save the session using the template you specify, select Automatically save the session as, and specify or select a file name.
7 Click OK to close the dialog box and load the template you selected into the current Analysis session.
Note: The run-time settings allow you to customize the way a Vuser script is executed. You configure the run-time settings from the Controller or VuGen before running a scenario. For information on configuring the run-time settings, refer to the Creating Vuser Scripts guide.
To view the scenario run-time settings: Select File > Scenario Run-Time Settings, or click the Run-Time Settings button on the toolbar. The Scenario Run-Time Settings dialog box opens, displaying the Vuser groups, scripts, and scheduling information for each scenario. For each script in a scenario, you can view the run-time settings that were configured in the Controller or VuGen before scenario execution.
Analysis Graphs
The Analysis graphs are divided into the following categories: Vusers
Errors
Transactions
Web Resources
Web Page Breakdown
User-Defined Data Points
System Resources
Network Monitor
Firewalls
Web Server Resources
Web Application Server Resources
Database Server Resources
Streaming Media
ERP Server Resources
Java Performance

Vuser graphs display information about Vuser states and other Vuser statistics. For more information, see Chapter 3, Vuser Graphs.

Error graphs provide information about the errors that occurred during the scenario. For more information, see Chapter 4, Error Graphs.

Transaction graphs and reports provide information about transaction performance and response time. For more information, see Chapter 5, Transaction Graphs.

Web Resource graphs provide information about the throughput, hits per second, HTTP responses per second, number of retries per second, and downloaded pages per second for Web Vusers. For more information, see Chapter 6, Web Resource Graphs.

Web Page Breakdown graphs provide information about the size and download time of each Web page component. For more information, see Chapter 7, Web Page Breakdown Graphs.

User-Defined Data Point graphs display information about the custom data points that were gathered by the online monitor. For more information, see Chapter 8, User-Defined Data Point Graphs.

System Resource graphs show statistics relating to the system resources that were monitored during the scenario using the online monitor. This category also includes graphs for SNMP and TUXEDO monitoring. For more information, see Chapter 9, System Resource Graphs.

Network Monitor graphs provide information about the network delays. For more information, see Chapter 10, Network Monitor Graphs.

Firewall graphs provide information about firewall server resource usage. For more information, see Chapter 11, Firewall Server Monitor Graphs.
Web Server Resource graphs provide information about the resource usage for the Apache, iPlanet/Netscape, and MS IIS Web servers. For more information, see Chapter 12, Web Server Resource Graphs.

Web Application Server Resource graphs provide information about the resource usage for various Web application servers such as Ariba, ATG Dynamo, BroadVision, ColdFusion, Fujitsu INTERSTAGE, Microsoft ASP, Oracle9iAS HTTP, SilverStream, WebLogic (SNMP), WebLogic (JMX), WebSphere, and WebSphere (EPM). For more information, see Chapter 13, Web Application Server Resource Graphs.

Database Server Resource graphs provide information about DB2, Oracle, SQL Server, and Sybase database resources. For more information, see Chapter 14, Database Server Resource Graphs.

Streaming Media graphs provide information about RealPlayer Client, RealPlayer Server, and Windows Media Server resource usage. For more information, see Chapter 15, Streaming Media Graphs.

ERP Server Resource graphs provide information about SAP R/3 system server resource usage. For more information, see Chapter 16, ERP Server Resource Graphs.

Java Performance graphs provide information about resource usage of Enterprise Java Bean (EJB) objects, Java-based applications, and the TowerJ Java virtual machine. For more information, see Chapter 17, Java Performance Graphs.
By default, only graphs which contain data are listed. To view the entire list of LoadRunner Analysis graphs, clear Display only graphs containing data. 2 Click the "+" to expand the graph tree, and select a graph. You can view a description of the graph in the Graph Description box.
3 Click Open Graph. The Analysis generates the graph you selected and adds it to the graph tree view. The graph is displayed in the right pane of the Analysis. To display an existing graph in the right pane of the Analysis, select the graph in the graph tree view.
Note: The drill down feature is not available for the Web Page Breakdown graph.
In the following example, the graph shows a line for each of the five transactions.
In a drill down for the MainPage transaction per Vuser ID, the graph displays the response time only for the MainPage transaction, one line per Vuser.
You can see from the graph that the response time was longer for some Vusers than for others.
To drill down in a graph: 1 Right-click on a line, bar, or segment within the graph, and select Drill Down. The Drill Down Options dialog box opens, listing all of the measurements in the graph.
2 Select a measurement for drill down. 3 From the Group By box, select a group by which to sort. 4 Click OK. The Analysis drills down and displays the new graph. 5 To undo the last drill down settings, choose Undo Set Filter/Group By from the right-click menu. 6 To perform additional drill downs, repeat steps 1 to 4. 7 To clear all filter and drill down settings, choose Clear Filter/Group By from the right-click menu.
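Conceptually, a drill down filters the data to the selected measurement and then regroups it by the chosen property. The sketch below illustrates that filter-then-group idea on made-up rows; the field names are illustrative assumptions, not the Analysis's actual data model.

```python
from collections import defaultdict

rows = [
    {"transaction": "MainPage", "vuser": 1, "resp": 2.1},
    {"transaction": "MainPage", "vuser": 2, "resp": 3.4},
    {"transaction": "Login",    "vuser": 1, "resp": 0.8},
]

def drill_down(rows, measurement, group_by):
    # keep only the selected measurement, then regroup by the chosen property
    grouped = defaultdict(list)
    for row in rows:
        if row["transaction"] == measurement:
            grouped[row[group_by]].append(row["resp"])
    return dict(grouped)

print(drill_down(rows, "MainPage", "vuser"))  # {1: [2.1], 2: [3.4]}
```

This mirrors the MainPage example above: one response-time series per Vuser, with the Login transaction filtered out.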
To change the granularity of a graph: 1 Click inside a graph. 2 Select View > Set Granularity, or click the Set Granularity button. The Granularity dialog box opens.
3 Enter a new granularity value in milliseconds, minutes, seconds, or hours. 4 Click OK. In the following example, the Hits per Second graph is displayed using different granularities. The y-axis represents the number of hits per second within the granularity interval. For a granularity of 1, the y-axis shows the number of hits per second for each one second period of the scenario. For a granularity of 5, the y-axis shows the number of hits per second for every five-second period of the scenario.
Granularity=1
Granularity=5
Granularity=10
In the above graphs, the same scenario results are displayed in a granularity of 1, 5, and 10. The lower the granularity, the more detailed the results. For example, using a low granularity as in the upper graph, you see the intervals in which no hits occurred. It is useful to use a higher granularity to study the overall Vuser behavior throughout the scenario. By viewing the same graph with a higher granularity, you can easily see that overall, there was an average of approximately 1 hit per second.
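The effect of granularity on the y-axis can be sketched as follows: each plotted value is the average number of hits per second within its interval. The hit timestamps below are invented for illustration.

```python
# One point per interval: the average hits/sec within that interval.
def hits_per_second(hit_times, granularity, duration):
    points = []
    for start in range(0, duration, granularity):
        count = sum(1 for t in hit_times if start <= t < start + granularity)
        points.append(count / granularity)
    return points

hits = [0, 0, 1, 4, 4, 4, 9]               # seconds at which hits occurred
print(hits_per_second(hits, 1, 10))        # fine detail; zero-hit seconds visible
print(hits_per_second(hits, 5, 10))        # [1.2, 0.2] -- the overall rate
```

At granularity 1 the zero-hit intervals stand out; at granularity 5 the same data collapses to an overall rate, matching the behavior described above.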
2 Choose View > Set Filter/Group By. The Graph Settings dialog box opens.
3 Click the Criteria box of the condition(s) you want to set, and select either "=" or "<>" (does not equal) from the drop-down list.
4 Click the Values box of the filter condition you want to set, and select a value from the drop-down box. For several filter conditions, a new dialog box opens.
5 For the Transaction Response Time filter condition, the Set Dimension Information dialog box opens. Specify a minimum and maximum transaction response time for each transaction.
6 For the Vuser ID filter condition, the Vuser ID dialog box opens. Specify the Vuser IDs, or the range of Vusers, you want the graph to display.
7 For the Scenario Elapsed Time filter condition, the Scenario Elapsed Time dialog box opens. Specify the start and end time for the graph in hours:minutes:seconds format. The time is relative to the start of the scenario.
8 For Rendezvous graphs, while setting the Number of Released Vusers condition, the Set Dimension Information dialog box opens. Specify a minimum and maximum number of released Vusers. 9 For all graphs that measure resources (Web Server, Database Server, etc.), when you set the Resource Value condition, the Set Dimension Information dialog box opens displaying a full range of values for each resource. Specify a minimum and maximum value for the resource. 10 Click OK to accept the settings and close the Graph Settings dialog box. To apply a global filter condition for all graphs in the session (both those displayed and those that have not yet been opened), choose File > Set Global Filter or click the Set Global Filter button, and set the desired filters.
Note: You can set the same filter conditions described above for the Summary Report. To set filter conditions for the Summary Report, select Summary Report in the graph tree view, and choose View > Summary Filter. Select the filter conditions you want to apply to the Summary Report in the Analysis Summary Filter dialog box.
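In effect, each dimension condition keeps only the data points whose value falls inside an inclusive [minimum, maximum] range. A minimal sketch of that behavior, with invented rows and field names:

```python
def apply_filter(rows, ranges):
    """ranges maps a dimension name to an inclusive (minimum, maximum) pair."""
    def keep(row):
        return all(lo <= row[dim] <= hi for dim, (lo, hi) in ranges.items())
    return [row for row in rows if keep(row)]

rows = [
    {"transaction": "Login", "resp": 0.4, "vuser": 3},
    {"transaction": "Login", "resp": 2.8, "vuser": 7},
    {"transaction": "Buy",   "resp": 1.1, "vuser": 12},
]
# keep transactions with 0.5 <= response time <= 3.0 run by Vusers 1-10
print(apply_filter(rows, {"resp": (0.5, 3.0), "vuser": (1, 10)}))
```

Combining several conditions, as the Graph Settings dialog box allows, simply intersects the ranges: a row survives only if it satisfies every condition.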
To sort the graph data according to groups: 1 Select the graph you want to sort by clicking the graph tab or clicking the graph name in the tree view. 2 Choose View > Set Filter/Group By. The Graph Settings dialog box opens.
3 In the Available Groups box, select the group by which you want to sort the results.
4 Click the right-facing arrow to move your selection to the Selected Groups box. 5 To change the order in which the results are grouped, select the group you want to move and click the up or down arrow until the groups are in the desired order. 6 To remove an existing grouping, select an item in the Selected Groups box and click the left-facing arrow to move it to the Available Groups box. 7 Click OK to close the dialog box.
In addition, you can indicate whether the graph should be displayed with a three-dimensional look, and specify the percent for the three-dimensional graphs. This percentage indicates the thickness of the bar, grid, or pie chart.
[Example graphs showing a 3-D percent of 15% and of 95%]
The standard display options also let you indicate how to plot the results that are time-based: relative to the beginning of the scenario (default), or absolute time, based on the system clock of the machine. To open the Display Options dialog box, choose View > Display Options or click the Display Options button.
Choose a graph type, specify a three-dimensional percent, select whether you want a legend to be displayed, and/or choose a time option. Click Close to accept the settings and close the dialog box.
You can customize the graph layout by setting the Chart and Series preferences. Click the appropriate tab and sub-tab to configure your graph.
Chart Settings
The Chart settings control the look and feel of the entire graph, not the individual points. You set Chart preferences using the following tabs: Series, General, Axis, Titles, Legend, Panel, Walls, and 3D. Series: displays the graph style (bar, line, etc.), the hide/show settings, line and fill color, and the title of the series. General: contains options for print preview, export, margins, scrolling, and magnification.
Axis: indicates which axes to show as well as their scales, titles, ticks, and position. Titles: allows you to set the title of the graph, its font, background color, border, and alignment. Legend: includes all legend-related settings, such as position, fonts, and divider lines. Panel: shows the background panel layout of the graph. You can modify its color, set a gradient option, or specify a background image. Walls: lets you set colors for the walls of three-dimensional graphs. 3D: contains the three-dimensional settings, offset, magnification, and rotation angle for the active graph.
Series Settings
The Series settings control the appearance of the individual points plotted in the graph. You set Series preferences using the following tabs: Format, Point, General, and Marks. Format: allows you to set the border color, line color, pattern, and invert property for the lines or bars in your graph. Point: displays the point properties. Points appear at various places within your line graph. You can set the size, color, and shape of these points. General: contains the type of cursor, the format of the axis values, and show/hide settings for the horizontal and vertical axis. Marks: allows you to display the value for each point in the graph and configure the format of those marks.
The Legend tab shortcut menu (right-click) has the following additional features: Show/Hide: Displays or hides a measurement in the graph. Show only selected: Displays the highlighted measurement only. Show all: Displays all the available measurements in the graph. Configure measurements: Opens the Measurement Options dialog box in which you can set the color and scale of the measurement you selected. You can manually set measurement scales, or set scale to automatic, scale to one, or view measurement trends for all measurements in the graph.
Show measurement description: Displays a dialog box with the name, monitor type, and description of the selected measurement. Animate selected line: Displays the selected measurement as a flashing line. Auto Correlate: Opens a dialog box enabling you to correlate the selected measurement with other monitor measurements in order to view similar measurement trends in the scenario. For more information on the auto correlation feature, see page 48. Sort by this column: Sorts the measurements according to the selected column, in ascending or descending order. Configure columns: Opens the Legend Columns Options dialog box in which you can choose the columns you want to view, the width of each column, and the way you want to sort the columns.
Web page breakdown for <selected measurement> (appears for measurements in the Average Transaction Response Time and Transaction Performance Summary graphs): Displays a Web Page Breakdown graph for the selected transaction measurement. Break down (appears for measurements in the Web Page Breakdown graphs): Displays a graph with a breakdown of the selected page.
2 In the Text box, type the comment. The comment will be positioned where you clicked the graph in step 1, and the coordinates of this position appear in Left and Top. You can alter the position of the comment to one of the Auto positions, or specify your own Custom coordinate in Left and Top.
3 Click OK to save the comment and close the dialog box.
To edit an existing comment in the graph: 1 Select Comments > Edit from the right-click menu. Alternatively, choose View > Comments > Edit from the main menu.
2 Choose the comment you want to edit in the left frame. In the example above, Comment 1 is selected.
3 Edit the text. You can alter the position of the comment to one of the Auto positions or specify your own Custom coordinate in the Left and Top boxes. To format the comment, select the remaining tabs: Format, Text, Gradient, and Shadow.

To delete a comment, choose the comment you want to delete in the left frame of the dialog box and click Delete.

To add an arrow to the graph:
1 Click the Draw Arrow icon in the main toolbar. The cursor changes to a hairline icon.
2 Click the mouse button within the graph to position the base of the arrow.
3 While holding the mouse button down, drag the mouse cursor to position the head of the arrow. Release the mouse button.
4 To change the arrow's position, select the arrow itself. Positional boxes appear at the base and head, which can be dragged to new positions.

To delete an arrow from the graph:
1 Select the arrow by clicking it. Positional boxes appear at the base and head of the arrow.
2 Press the Delete key on your keyboard.
Spreadsheet View
You can view the graph displayed by the Analysis in spreadsheet format using the Graph Data tab below the graph.
The first column displays the values of the x-axis. The following columns show the y-axis values for each transaction. If there are multiple values for the y-axis, as in the Transaction Performance Summary graph (minimum, average, and maximum), all of the plotted values are displayed. If you filter out a transaction, it will not appear in the view.

The Spreadsheet shortcut menu (right-click) has the following additional features:
Copy All: You can copy the spreadsheet to the clipboard in order to paste it into an external spreadsheet program.
Save As: You can save the spreadsheet data to an Excel file. Once you have the data in Excel, you can generate your own customized graphs.
2 Specify a time range: the entire graph (the default) or a specific range of time. Then click OK.
3 Click the Raw Data tab below the graph. The Analysis displays the raw data in a grid directly below the active graph.
New Y value = (Previous Y Value - Average of previous values) / STD of previous values
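A minimal Python sketch of that formula, reading "previous values" as the measurement's recorded values (an assumption; the product's exact computation is not specified here):

```python
import statistics

def standardize(values):
    # z-score each point against the series mean and standard deviation,
    # per the formula above: (y - average) / standard deviation
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    if std == 0:
        return [0.0 for _ in values]  # flat line: no variation to scale
    return [(v - mean) / std for v in values]

# hypothetical response times, in seconds
resp_times = [2.0, 4.0, 6.0, 8.0]
print(standardize(resp_times))
```

Standardized series center on zero, which is what lets two measurements with very different scales share one y-axis.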
To view a line graph as a standardized graph: 1 Select View > View Measurement Trends, or right-click the graph and choose View Measurement Trends. Alternatively, you can select View > Configure Measurements and check the View measurement trends for all measurements box.
Note: The standardization feature can be applied to all line graphs except the Web Page Breakdown graph.
2 View the standardized values for the line graph you selected. Note that the values in the Minimum, Average, Maximum, and Std. Deviation legend columns are real values. To undo the standardization of a graph, repeat step 1.
Note: If you standardize two line graphs, the two y-axes merge into one y-axis.
To correlate a measurement in a graph with other measurements: 1 Right-click the measurement you want to correlate in the graph or legend, and select Auto Correlate. The Auto Correlate dialog box opens.
The selected measurement is displayed in the graph. You can choose other measurements from the Measurement to Correlate list box. The following settings configure the Auto Correlate tool to automatically demarcate the most significant time period for the measurement in the scenario. Select either Trend or Feature in the Suggest Time Range by box: Trend demarcates an extended time segment containing the most significant changes, while Feature demarcates a smaller segment within that trend.
Click Best for the segment of the graph most dissimilar to its adjacent segments. Click Next for further suggestions, each one being successively less dissimilar. If the Suggest time range automatically box is checked, then suggestions are automatically invoked whenever the Measurement to Correlate item changes. Alternatively, you can manually specify From and To values of a time period in the Time Range tab (in hhh:mm:ss format). Or, you can manually drag the green and red vertical drag bars to specify the start and end values for the scenario time range. If you applied a time filter to your graph, you can correlate values for the complete scenario time range by clicking the Display button in the upper right-hand corner of the dialog box.
Note: The granularity of the correlated measurements graph may differ from that of the original graph, depending on the scenario time range defined.
2 To specify the graphs you want to correlate with a selected measurement and the type of graph output to be displayed, click the Correlation Options tab.
3 In the Select Graphs for Correlation section, choose the graphs whose measurements you want to correlate with your selected measurement:
4 In the Data Interval section, select one of the following two options: Automatic: Instructs the Analysis to use an automatic value, determined by the time range, in order to calculate the interval between correlation measurement polls. Correlate data based on X second intervals: Enter the number of seconds you want the Analysis to wait between correlation measurement polls. 5 In the Output section, select one of the following two options: Show the X most closely correlated measurements: Specify the number of most closely correlated measurements you want the Analysis to display.
Show measurements with an influence factor of at least X%: Specify the minimum influence factor for the measurements displayed in the correlated graph. 6 Click OK. The Analysis generates the correlated graph you specified. Note the two new columns, Correlation Match and Correlation, that appear in the Legend tab below the graph.
The minimum time range should be more than 5% of the total time range of the measurement. Trends which are smaller than 5% of the whole measurement will be contained in other larger segments. Sometimes, very strong changes in a measurement can hide smaller changes. In cases like these, only the strong change is suggested, and the Next button will be disabled.
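Conceptually, the matching Auto Correlate performs can be approximated by ranking candidate measurements by correlation coefficient against the selected measurement over the demarcated range. The sketch below uses a plain Pearson coefficient; the product's actual algorithm and its influence-factor weighting are not documented here, and the measurement names are hypothetical:

```python
import statistics

def pearson(xs, ys):
    # correlation coefficient between two equally sampled series
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = (sum((x - mx) ** 2 for x in xs) *
             sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / denom if denom else 0.0

def most_correlated(target, candidates, top_n=5):
    # rank candidates by |r| against the target, mimicking
    # "Show the X most closely correlated measurements"
    ranked = sorted(candidates.items(),
                    key=lambda kv: abs(pearson(target, kv[1])),
                    reverse=True)
    return ranked[:top_n]

# hypothetical measurements sampled at the same intervals
resp_time = [1.0, 1.2, 2.5, 3.1, 2.9]
candidates = {
    "cpu": [10, 12, 28, 33, 30],   # tracks the response time closely
    "disk": [5, 5, 6, 5, 5],       # barely moves
}
print([name for name, _ in most_correlated(resp_time, candidates)])
```

Here "cpu" ranks first because its shape follows the response-time curve, which is the kind of relationship the correlated graph is meant to surface.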
In the following example, the t106Zoek:245.lrr measurement in the Average Transaction Response Time graph is correlated with the measurements in the Windows Resources, Microsoft IIS, and SQL Server graphs. The five measurements most closely correlated with t106Zoek:245.lrr are displayed in the graph below.
To specify another measurement to correlate, select the measurement from the Measurement to Correlate box at the top of the Auto Correlate dialog box.
Note: This feature can be applied to all line graphs except the Web Page Breakdown graph.
For more information on auto correlation, see Chapter 23, Interpreting Analysis Graphs.
In the following example, the Average Transaction Response Time Graph is displayed with the WAN Emulation overlay. WAN emulation was enabled between the 1st and 3rd minute of the scenario. During this time, the average transaction response times increased sharply. When WAN emulation was stopped, average response times dropped.
In the same scenario, throughput on the server fell during the period of WAN emulation. After WAN emulation was stopped, server throughput increased. You can also compare this graph to the Average Transaction Response Time Graph to see how the throughput affects transaction performance.
3
Vuser Graphs
After running a scenario, you can check the behavior of the Vusers that participated in the scenario using the following Vuser graphs:
Running Vusers Graph
Vuser Summary Graph
Rendezvous Graph
Rendezvous Graph
The Rendezvous graph indicates when Vusers were released from rendezvous points, and how many Vusers were released at each point. This graph helps you understand transaction performance times. If you compare the Rendezvous graph to the Average Transaction Response Time graph, you can see how the load peak created by a rendezvous influences transaction times. On the Rendezvous graph, the x-axis indicates the time that elapsed since the beginning of the scenario. The y-axis indicates the number of Vusers that were released from the rendezvous. If you set a rendezvous for 60 Vusers, and the graph indicates that only 25 were released, you can see that the rendezvous ended when the timeout expired because not all of the Vusers arrived.
4
Error Graphs
After a scenario run, you can use the error graphs to analyze the errors that occurred during the load test. This chapter describes:
Error Statistics Graph
Errors per Second Graph
5
Transaction Graphs
After running a scenario, you can analyze the transactions that were executed during the test using one or more of the following graphs:
Average Transaction Response Time Graph
Transactions per Second Graph
Total Transactions per Second Graph
Transaction Summary Graph
Transaction Performance Summary Graph
Transaction Response Time (Under Load) Graph
Transaction Response Time (Percentile) Graph
Transaction Response Time (Distribution) Graph
This graph is displayed differently for each granularity. The lower the granularity, the more detailed the results. However, it may be useful to view the results with a higher granularity to study the overall Vuser behavior throughout the scenario. For example, using a low granularity, you may see intervals when no transactions were performed. However, by viewing the same graph with a higher granularity, you will see the graph for the overall transaction response time. For more information on setting the granularity, see Chapter 2, Working with Analysis Graphs.
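Changing granularity amounts to re-bucketing the raw samples into wider time intervals and averaging each bucket. A rough sketch of that idea (illustrative only, not the product's code):

```python
def regranulate(samples, granularity):
    # samples: (elapsed_seconds, value) pairs; granularity: bucket width
    # in seconds. Returns one averaged point per non-empty bucket, the
    # way a coarser x-axis collapses detail into fewer points.
    buckets = {}
    for t, v in samples:
        buckets.setdefault(t // granularity, []).append(v)
    return {b * granularity: sum(vs) / len(vs)
            for b, vs in sorted(buckets.items())}

# hypothetical per-second response times with a gap between 1s and 5s
raw = [(0, 1.0), (1, 3.0), (5, 2.0), (6, 4.0)]
print(regranulate(raw, 5))
```

At granularity 5 the gap disappears into two averaged buckets, which is why intervals with no transactions are visible only at finer granularity.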
You can view a breakdown of a transaction in the Average Transaction Response Time graph by selecting View > Show Transaction Breakdown Tree, or by right-clicking the transaction and selecting Show Transaction Breakdown Tree. In the Transaction Breakdown Tree, right-click the transaction you want to break down, and select Break Down <transaction name>. The Average Transaction Response Time graph displays data for the sub-transactions.

To view a breakdown of the Web page(s) included in a transaction or sub-transaction, right-click it and select Web page breakdown for <transaction name>. For more information on the Web Page Breakdown graphs, see Chapter 7, Web Page Breakdown Graphs.

You can compare the Average Transaction Response Time graph to the Running Vusers graph to see how the number of running Vusers affects the transaction performance time. For example, if the Average Transaction Response Time graph shows that performance time gradually improved, you can compare it to the Running Vusers graph to see whether the performance time improved due to a decrease in the Vuser load.

If you have defined acceptable minimum and maximum transaction performance times, you can use this graph to determine whether the performance of the server is within the acceptable range.
You can view a breakdown of a transaction in the Transaction Performance Summary graph by selecting View > Show Transaction Breakdown Tree, or by right-clicking the transaction and selecting Show Transaction Breakdown Tree. In the Transaction Breakdown Tree, right-click the transaction you want to break down, and select Break Down <transaction name>. The Transaction Performance Summary graph displays data for the sub-transactions.

To view a breakdown of the Web page(s) included in a transaction or sub-transaction, right-click it and select Web page breakdown for <transaction name>. For more information on the Web Page Breakdown graphs, see Chapter 7, Web Page Breakdown Graphs.
Note: The Analysis approximates the transaction response time for each available percentage of transactions. The y-axis values, therefore, may not be exact.
In the following graph, fewer than 20 percent of the tr_matrix_movie transactions had a response time less than 70 seconds.
It is recommended to compare the Percentile graph to a graph indicating average response time such as the Average Transaction Response Time graph. A high response time for several transactions may raise the overall average. However, if the transactions with a high response time occurred less than five percent of the time, that factor may be insignificant.
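The percentile curve can be approximated from raw transaction times by sorting them and reading the value at each percentage. A simplified nearest-rank sketch (the note above says the Analysis itself approximates, and its exact convention is not specified here):

```python
def percentile(sorted_times, pct):
    # nearest-rank percentile over ascending response times; a simple
    # convention, the Analysis may interpolate differently
    if not sorted_times:
        raise ValueError("no samples")
    rank = max(0, int(round(pct / 100 * len(sorted_times))) - 1)
    return sorted_times[rank]

# hypothetical transaction response times, in seconds
times = sorted([0.8, 1.1, 1.4, 2.0, 2.2, 2.9, 3.5, 4.1, 6.0, 9.7])
print(percentile(times, 90))  # 90% of transactions completed within this time
```

Reading the 90th percentile alongside the average shows whether a few slow outliers (the last few percent) are inflating the mean.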
If you have defined acceptable minimum and maximum transaction performance times, you can use this graph to determine whether the performance of the server is within the acceptable range.
6
Web Resource Graphs
After a scenario run, you can use the Web Resource graphs to analyze Web server performance. This chapter describes:
Hits per Second Graph
Hits Summary Graph
Throughput Graph
Throughput Summary Graph
HTTP Status Code Summary Graph
HTTP Responses per Second Graph
Pages Downloaded per Second Graph
Retries per Second Graph
Retries Summary Graph
The x-axis represents the elapsed time since the start of the scenario run. The y-axis represents the number of hits on the server. For example, the graph above shows that the most hits per second took place during the fifty-fifth second of the scenario.
Note: You cannot change the granularity of the x-axis to a value that is less than the Web granularity you defined in the General tab of the Options dialog box.
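Hits per second is just a count of hit records bucketed by elapsed second; sketched (timestamps are hypothetical):

```python
from collections import Counter

def hits_per_second(hit_timestamps):
    # hit_timestamps: elapsed seconds (floats) at which each hit occurred.
    # Returns {second: hit count}, the series this graph plots.
    return dict(sorted(Counter(int(t) for t in hit_timestamps).items()))

hits = [0.2, 0.9, 1.1, 1.5, 1.8, 55.0, 55.2, 55.4, 55.9]
series = hits_per_second(hits)
print(max(series, key=series.get))  # the second with the most hits
```

The same bucketing, summing bytes instead of counting records, yields the Throughput graph described below.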
The graph above is the transformation of a Hits Summary graph after being grouped by VuserID. It shows the number of hits made by each Vuser.
Throughput Graph
The Throughput graph shows the amount of throughput on the server during each second of the scenario run. Throughput is measured in bytes and represents the amount of data that the Vusers received from the server at any given second. This graph helps you evaluate the amount of load Vusers generate, in terms of server throughput. You can compare this graph to the Average Transaction Response Time graph to see how the throughput affects transaction performance. The x-axis represents the elapsed time since the start of the scenario run. The y-axis represents the throughput of the server, in bytes. The following graph shows the highest throughput to be 193,242 bytes during the fifty-fifth second of the scenario.
Note: You cannot change the granularity of the x-axis to a value that is less than the Web granularity you defined in the General tab of the Options dialog box.
The graph above is the transformation of a Throughput Summary graph after being grouped by VuserID. It shows the amount of throughput generated by each Vuser.
Code Description
200 OK
201 Created
202 Accepted
203 Non-Authoritative Information
204 No Content
205 Reset Content
206 Partial Content
300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
307 Temporary Redirect
400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
Code Description
410 Gone
411 Length Required
412 Precondition Failed
413 Request Entity Too Large
414 Request-URI Too Large
415 Unsupported Media Type
416 Requested range not satisfiable
417 Expectation Failed
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version not supported
For more information on the above status codes and their descriptions, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.
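The HTTP Status Code Summary graph groups these codes by response class. A small sketch of that classification (response codes are hypothetical sample data; the class boundaries follow RFC 2616):

```python
from collections import Counter

def code_class(status):
    # map a numeric status code to its RFC 2616 response class
    classes = {2: "Success", 3: "Redirection",
               4: "Client Error", 5: "Server Error"}
    return classes.get(status // 100, "Informational")

# hypothetical codes recorded during a scenario run
responses = [200, 200, 302, 404, 404, 404, 500]
summary = Counter(code_class(c) for c in responses)
print(summary)
```

A spike in the 4xx or 5xx buckets is usually the first thing to look for before drilling into individual codes.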
Like throughput, downloaded pages per second is a representation of the amount of data that the Vusers received from the server at any given second. However, the Throughput graph takes into account each resource and its size (for example, the size of each .gif file, the size of each Web page). The Pages Downloaded per Second graph takes into account only the number of pages.
In the following example, the Throughput graph is merged with the Pages Downloaded per Second graph. It is apparent from the graph that throughput is not completely proportional to the number of pages downloaded per second. For example, between 10 and 25 seconds into the scenario run, the number of pages downloaded per second increased while the throughput decreased.
Note: It is recommended that, in VuGen, you select HTML-based script in the Recording tab of the Recording Options dialog box.
For more information on recording Web Vuser scripts, see the Creating Vuser Scripts guide.
Using the Web Page Breakdown graphs, you can pinpoint the cause of the delay in response time for the trans1 transaction.
To view a breakdown of a transaction:
1 Right-click trans1 and select Web Page Breakdown for trans1. The Web Page Breakdown graph and the Web Page Breakdown tree appear. An icon appears next to the page name, indicating the page content. See Web Page Breakdown Content Icons on page 95.
2 In the Web Page Breakdown tree, right-click the problematic page you want to break down, and select Break Down <component name>. Alternatively, select a page in the Select Page to Break Down box. The Web Page Breakdown graph for that page appears.
Note: You can open a browser displaying the problematic page by right-clicking the page in the Web Page Breakdown tree and selecting View page in browser.
3 Select one of the four available options:
Download Time Breakdown: Displays a table with a breakdown of the selected page's download time. The size of each page component (including the component's header) is displayed. See the Page Download Time Breakdown Graph for more information about this display.
Component Breakdown (Over Time): Displays the Page Component Breakdown (Over Time) Graph for the selected Web page.
Download Time Breakdown (Over Time): Displays the Page Download Time Breakdown (Over Time) Graph for the selected Web page.
Time to First Buffer Breakdown (Over Time): Displays the Time to First Buffer Breakdown (Over Time) Graph for the selected Web page.
See the following sections for an explanation of each of these four graphs. To display the graphs in full view, click the Open graph in full view button. Note that you can also access these graphs, as well as additional Web Page Breakdown graphs, from the Open a New Graph dialog box.
Page content: Specifies that the ensuing content, which may include text, images, and so on, is all part of one logical page.
Text content: Textual information. Plain text is intended to be displayed as-is. Includes HTML text and stylesheets.
Message content: An encapsulated message. Common subtypes are news, or external-body, which specifies large bodies by reference to an external data source.
Application content: Some other kind of data, typically either uninterpreted binary data or information to be processed by an application. An example subtype is PostScript data.
Image content: Image data. Two common subtypes are the JPEG and GIF formats.
Video content: A time-varying picture image. A common subtype is the MPEG format.
Resource content: Other resources not listed above. Content that is defined as not available is also included.
To ascertain which components caused the delay in download time, you can break down the problematic URL by double-clicking it in the Web Page
Breakdown tree. In the following example, the cnn.com/WEATHER component is broken down.
The above graph shows that the main cnn.com/WEATHER component took the longest time to download (8.98% of the total download time). To isolate other problematic components of the URL, it may be helpful to sort the legend according to the average number of seconds taken to download a component. To sort the legend by average, click the Average column heading.
Note: The Page Component Breakdown graph can only be viewed as a pie graph.
To ascertain which components were responsible for the delay in response time, you can break down the problematic component by double-clicking it in the Web Page Breakdown tree.
Using the above graph, you can track which components of the main component were most problematic, and at which point(s) during the scenario the problem(s) occurred. To isolate the most problematic components, it may be helpful to sort the legend tab according to the average number of seconds taken to download a component. To sort the legend by average, double-click the Average column heading. To identify a component in the graph, you can click it. The corresponding line in the legend tab is selected.
The Page Download Time Breakdown graph breaks down each component by DNS resolution time, connection time, time to first buffer, SSL handshaking time, receive time, FTP authentication time, client time, and error time.
FTP Authentication: Displays the time taken to authenticate the client. With FTP, a server must authenticate a client before it starts processing the client's commands. The FTP Authentication measurement is only applicable for FTP protocol communications.
Client Time: Displays the average amount of time that passes while a request is delayed on the client machine due to browser think time or other client-related delays.
Error Time: Displays the average amount of time that passes from the moment an HTTP request is sent until the moment an error message (HTTP errors only) is returned.
Note: Each measurement displayed on the page level is the sum of that measurement recorded for each page component. For example, the Connection Time for www.cnn.com is the sum of the Connection Time for each of the page's components.
The above graph demonstrates that receive time, connection time, and first buffer time accounted for a large portion of the time taken to download the main cnn.com URL.
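The page-level summing described in the note above, and the percentage shares the breakdown graphs report, can be expressed directly (component names and times are hypothetical):

```python
def page_breakdown(components):
    # components: {name: download time in seconds} for one page.
    # The page-level figure is the sum over components, as the note
    # above states; each share is that component's percentage of it.
    total = sum(components.values())
    shares = {name: t / total * 100 for name, t in components.items()}
    return total, shares

total, shares = page_breakdown({"main.html": 0.9,
                                "logo.gif": 0.3,
                                "style.css": 0.3})
print(total, shares["main.html"])
```

Sorting components by share is the same operation as sorting the legend by its Average column to find the dominant contributor.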
If you break the cnn.com URL down further, you can isolate the components with the longest download time, and analyze the network or server problems that contributed to the delay in response time.
Breaking down the cnn.com URL demonstrates that for the component with the longest download time (the www.cnn.com component), the receive time accounted for a large portion of the download time.
Note: Each measurement displayed on the page level is the sum of that measurement recorded for each page component. For example, the Connection Time for www.cnn.com is the sum of the Connection Time for each of the page's components.
To isolate the most problematic components, you can sort the legend tab according to the average number of seconds taken to download a component. To sort the legend by average, double-click the Average column heading. To identify a component in the graph, click it. The corresponding line in the legend tab is selected.
In the example in the previous section, it is apparent that cnn.com was the most problematic component. If you examine the cnn.com component, the Page Download Time Breakdown (Over Time) graph demonstrates that first buffer and receive time remained high throughout the scenario, and that DNS Resolution time decreased during the scenario.
Note: When the Page Download Time Breakdown (Over Time) graph is selected from the Web Page Breakdown graph, it appears as an area graph.
Note: Each measurement displayed on the page level is the sum of that measurement recorded for each page component. For example, the network time for www.cnn.com is the sum of the network time for each of the page's components.
Network time is defined as the average amount of time that passes from the moment the first HTTP request is sent until receipt of ACK. Server time is defined as the average amount of time that passes from the receipt of ACK of the initial HTTP request (usually GET) until the first buffer is successfully received back from the Web server. In the above graph, it is apparent that network time is greater than server time.
Note: Because server time is being measured from the client, network time may influence this measurement if there is a change in network performance from the time the initial HTTP request is sent until the time the first buffer is sent. The server time displayed, therefore, is estimated server time and may be slightly inaccurate.
The graph can only be viewed as a bar graph. You can break the main cnn.com URL down further to view the time to first buffer breakdown for each of its components.
It is apparent that for the main cnn.com component (the first component on the right), the time to first buffer breakdown is almost all network time.
Network time is defined as the average amount of time that passes from the moment the first HTTP request is sent until receipt of ACK. Server time is defined as the average amount of time that passes from the receipt of ACK of the initial HTTP request (usually GET) until the first buffer is successfully received back from the Web server. Because server time is being measured from the client, network time may influence this measurement if there is a change in network performance from the time the initial HTTP request is sent until the time the first buffer is sent. The server time displayed, therefore, is estimated server time and may be slightly inaccurate.
Note: Each measurement displayed on the page level is the sum of that measurement recorded for each page component. For example, the network time for www.cnn.com is the sum of the network time for each of the page's components.
You can break the main cnn.com URL down further to view the time to first buffer breakdown for each of its components.
Note: When the Time to First Buffer Breakdown (Over Time) graph is selected from the Web Page Breakdown graph, it appears as an area graph.
Note: The Web page size is a sum of the sizes of each of its components.
You can break the main cnn.com URL down further to view the size of each of its components.
In the above example, the cnn.com component's size (20.83% of the total size) may have contributed to the delay in its download. To reduce download time, it may help to reduce the size of this component.
Note: The Downloaded Component Size graph can only be viewed as a pie graph.
8
User-Defined Data Point Graphs
After a scenario run, you can use the User-Defined Data Point graphs to display the values of user-defined data points in the Vuser script. This chapter describes:
About User-Defined Data Point Graphs
Data Points (Sum) Graph
Data Points (Average) Graph
step. For more information about data points, see the online LoadRunner Function Reference. Data points, like other LoadRunner data, are aggregated every few seconds, resulting in fewer data points shown on the graph than were actually recorded. For more information, see Changing the Granularity of the Data on page 28.
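The Data Points (Sum) and Data Points (Average) graphs differ only in how the samples within each aggregation interval are combined. A sketch of that aggregation (the interval length and sample values are assumptions for illustration):

```python
def aggregate_data_points(samples, interval, how="average"):
    # samples: (elapsed_seconds, value) pairs recorded by the script's
    # data points; values in the same interval collapse into one plotted
    # point, which is why the graph shows fewer points than were recorded
    buckets = {}
    for t, v in samples:
        buckets.setdefault(t // interval, []).append(v)
    combine = sum if how == "sum" else (lambda vs: sum(vs) / len(vs))
    return {b * interval: combine(vs) for b, vs in sorted(buckets.items())}

pts = [(0, 10.0), (2, 30.0), (5, 20.0)]
print(aggregate_data_points(pts, 5, "sum"))
print(aggregate_data_points(pts, 5, "average"))
```

The first print sums each interval (the Sum graph's behavior); the second averages it (the Average graph's behavior).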
correspond to the built-in counters available from the Windows Performance Monitor.
Object: System
Measurement: Processor Queue Length
Description: The instantaneous length of the processor queue in units of threads. This counter is always 0 unless you are also monitoring a thread counter. All processors use a single queue in which threads wait for processor cycles. This length does not include the threads that are currently executing. A sustained processor queue length greater than two generally indicates processor congestion. This is an instantaneous count, not an average over the time interval.

Object: Memory
Measurement: Page Faults/sec
Description: This is a count of the page faults in the processor. A page fault occurs when a process refers to a virtual memory page that is not in its Working Set in the main memory. A page fault will not cause the page to be fetched from disk if that page is on the standby list (and hence already in main memory), or if it is in use by another process with which the page is shared.

Object: PhysicalDisk
Measurement: % Disk Time
Description: The percentage of elapsed time that the selected disk drive is busy servicing read or write requests.

Object: Memory
Measurement: Pool Nonpaged Bytes
Description: The number of bytes in the nonpaged pool, a system memory area where space is acquired by operating system components as they accomplish their appointed tasks. Nonpaged pool pages cannot be paged out to the paging file. They remain in main memory as long as they are allocated.
Object: Memory
Measurement: Pages/sec
Description: The number of pages read from the disk or written to the disk to resolve memory references to pages that were not in memory at the time of the reference. This is the sum of Pages Input/sec and Pages Output/sec. This counter includes paging traffic on behalf of the system cache to access file data for applications. This value also includes the pages to/from noncached mapped memory files. This is the primary counter to observe if you are concerned about excessive memory pressure (that is, thrashing), and the excessive paging that may result.

Object: System
Measurement: Total Interrupts/sec
Description: The rate at which the computer is receiving and servicing hardware interrupts. The devices that can generate interrupts are the system timer, the mouse, data communication lines, network interface cards, and other peripheral devices. This counter provides an indication of how busy these devices are on a computer-wide basis. See also Processor: Interrupts/sec.

Object: Objects
Measurement: Threads
Description: The number of threads in the computer at the time of data collection. Notice that this is an instantaneous count, not an average over the time interval. A thread is the basic executable entity that can execute instructions in a processor.

Object: Process
Measurement: Private Bytes
Description: The current number of bytes that the process has allocated that cannot be shared with other processes.
Incoming packets error rate: Errors per second while receiving Ethernet packets
Incoming packets rate: Incoming Ethernet packets per second
Interrupt rate: Number of device interrupts per second
Outgoing packets errors rate: Errors per second while sending Ethernet packets
Outgoing packets rate: Outgoing Ethernet packets per second
Page-in rate: Number of pages read to physical memory, per second
Page-out rate: Number of pages written to pagefile(s) and removed from physical memory, per second
Paging rate: Number of pages read to physical memory or written to pagefile(s), per second
Swap-in rate: Number of processes being swapped
Swap-out rate: Number of processes being swapped
System mode CPU utilization: Percent of time that the CPU is utilized in system mode
User mode CPU utilization: Percent of time that the CPU is utilized in user mode
Machine monitor measurements:
Workload completed per second: The total workload on all the servers for the machine that was completed, per unit time.
Workload initiated per second: The total workload on all the servers for the machine that was initiated, per unit time.
Current Accessers: Number of clients and servers currently accessing the application, either directly on this machine or through a workstation handler on this machine.
Current Clients: Number of clients, both native and workstation, currently logged in to this machine.
Current Transactions: Number of in-use transaction table entries on this machine.

Queue monitor measurements:
Bytes on queue: The total number of bytes for all the messages waiting in the queue.
Messages on queue: The total number of requests that are waiting on queue. By default, this is 0.
Workstation Handler monitor measurements:
Bytes received per second: The total number of bytes received by the workstation handler, per unit time.
Bytes sent per second: The total number of bytes sent back to the clients by the workstation handler, per unit time.
Messages received per second: The number of messages received by the workstation handler, per unit time.
Messages sent per second: The number of messages sent back to the clients by the workstation handler, per unit time.
Number of queue blocks per second: The number of times the queue for the workstation handler blocked, per unit time. This gives an idea of how often the workstation handler was overloaded.
MaxTCPConn Latency(milisec): Maximum TCPConnectionLatency in msec.
TCPSndConnClose: Total number of FIN or FIN ACK packets transmitted (Client).
TCPRcvConnClose: Total number of FIN or FIN ACK packets received (Client).
TCPSndResets: Total number of RST packets transmitted.
TCPRcvResets: Total number of RST packets received.
SYNSent: Total number of SYN packets transmitted.
SYNSentRate(/sec): Number of SYN packets transmitted per second.
SYNAckSent: Total number of SYN ACK packets transmitted.
SYNAckRate(/sec): Number of SYN ACK packets transmitted per second.
MinData Throughput (bytes/sec): Minimum data throughput in bytes per second.
MaxData Throughput (bytes/sec): Maximum data throughput in bytes per second.
SuccHTTPRequests: Total number of successful HTTP Request Replies (200 OK) received.
SuccHTTPRequest Rate(/sec): Number of successful HTTP Request Replies (200 OK) received per second.
UnSuccHTTP Requests: Number of unsuccessful HTTP Requests.
SuccFTPUserRate (/sec): Number of successful Ftp User command replies received per second.
FTPPasses: Total number of FTP PASS packets transmitted.
FTPPassRate(/sec): Number of FTP PASS packets transmitted per second.
FTPPassLatency (milisecs): Interval between transmitting a Ftp PASS packet and receiving a response in msec.
MinFTPPassLatency (milisecs): Minimum FTPPassLatency in msec.
MaxFTPPassLatency (milisecs): Maximum FTPPassLatency in msec.
SuccFTPPasses: Total number of successful FTP PASS replies received.
SuccFTPPassRate (/sec): Number of successful FTP PASS replies received per second.
FTPControl Connections: Total number of SYN packets transmitted by the FTP client.
FTPControl ConnectionRate (/sec): Number of SYN packets transmitted by the FTP client per second.
SuccFTPControl Connections: Total number of SYN ACK packets received by the FTP client.
SuccFTPControl ConnectionRate (/sec): Number of SYN ACK packets received by the FTP Client per second.
FTPData Connections: Number of SYN ACK packets received by the FTP client per second.
FTPDataConnection Rate(/sec): Number of SYN ACK packets transmitted by the FTP Client or received by the FTP Server per second.
SuccFTPData Connections: Total number of SYN ACK packets transmitted by the FTP Client or received by the FTP Server.
Number of SYN ACK packets received by the FTP server per second.
Total number of error replies received by the FTP client.
Total number of client Get requests.
Total number of client Put requests.
Total number of successful Get requests (data has been successfully transferred from server to client).
Total number of successful Put requests (data has been successfully transferred from client to server).
MinSMTPMailFromLatency(milisecs) - Minimum SMTPMailFromLatency in msec.
MaxSMTPMailFromLatency(milisecs) - Maximum SMTPMailFromLatency in msec.
SuccSMTPMailFroms - Total number of successful Mail From replies received.
SuccSMTPMailFromRate(/sec) - Number of successful Mail From replies received per second.
SMTPRcptTos - Total number of RcptTo packets transmitted.
SMTPRcptToRate(/sec) - Number of RcptTo packets transmitted per second.
SMTPRcptToLatency(milisecs) - Interval between transmitting a RcptTo packet and receiving a response, in msec.
MinSMTPRcptToLatency(milisecs) - Minimum SMTPRcptToLatency in msec.
MaxSMTPRcptToLatency(milisecs) - Maximum SMTPRcptToLatency in msec.
SuccSMTPRcptTos - Total number of successful RcptTo replies received.
SuccSMTPRcptToRate(/sec) - Number of successful RcptTo replies received per second.
SMTPDatas - Total number of Data packets transmitted.
SMTPDataRate(/sec) - Number of Data packets transmitted per second.
SMTPDataLatency(milisecs) - Interval between transmitting a Data packet and receiving a response, in msec.
MinSMTPDataLatency(milisecs) - Minimum SMTPDataLatency in msec.
MaxSMTPDataLatency(milisecs) - Maximum SMTPDataLatency in msec.
Total number of successful Data replies received.
Number of successful Data replies received per second.
SuccPOP3PassRate(/sec) - Number of successful POP3 Pass replies received per second.
POP3Stats - Total number of POP3 Stat command packets transmitted.
POP3StatRate(/sec) - Number of POP3 Stat command packets transmitted per second.
POP3StatLatency(milisecs) - Interval between transmitting a POP3 Stat packet and receiving a response, in msec.
MinPOP3StatLatency(milisecs) - Minimum POP3StatLatency in msec.
MaxPOP3StatLatency(milisecs) - Maximum POP3StatLatency in msec.
SuccPOP3Stats - Total number of successful POP3 Stat replies received.
SuccPOP3StatRate(/sec) - Number of successful POP3 Stat replies received per second.
POP3Lists - Total number of POP3 List command packets transmitted.
POP3ListRate(/sec) - Number of POP3 List command packets transmitted per second.
POP3ListLatency(milisecs) - Interval between transmitting a POP3 List packet and receiving a response, in msec.
MinPOP3ListLatency(milisecs) - Minimum POP3ListLatency in msec.
MaxPOP3ListLatency(milisecs) - Maximum POP3ListLatency in msec.
SuccPOP3Lists - Total number of successful POP3 Lists received.
SuccPOP3ListRate(/sec) - Number of successful POP3 Lists received per second.
POP3Retrs - Total number of POP3 Retr packets transmitted.
POP3RetrRate(/sec) - Number of POP3 Retr packets transmitted per second.
POP3RetrLatency(milisecs) - Interval between transmitting a POP3 Retr packet and receiving a response, in msec.
Minimum POP3RetrLatency in msec.
Maximum POP3RetrLatency in msec.
Total number of successful POP3 Retrs received.
Number of successful POP3 Retrs received per second.
Number of DNS Request packets transmitted per second.
Total number of DNS Request packets retransmitted.
Total number of Answers to the DNS Request packets.
Interrupts/sec - The average number of hardware interrupts the processor is receiving and servicing in each second. It does not include DPCs, which are counted separately. This value is an indirect indicator of the activity of devices that generate interrupts, such as the system clock, the mouse, disk drivers, data communication lines, network interface cards, and other peripheral devices. These devices normally interrupt the processor when they have completed a task or require attention. Normal thread execution is suspended during interrupts. Most system clocks interrupt the processor every 10 milliseconds, creating a background of interrupt activity. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.
Latency Session Average - This value represents the average client latency over the life of a session.
Output Seamless Bandwidth - This value represents the bandwidth from server to client traffic on this virtual channel. This is measured in bps.
Output Session Bandwidth - This value represents the bandwidth from server to client traffic for a session, in bps.
Page Faults/sec - A count of the Page Faults in the processor. A page fault occurs when a process refers to a virtual memory page that is not in its Working Set in main memory. A Page Fault will not cause the page to be fetched from disk if that page is on the standby list, and hence already in main memory, or if it is in use by another process with which the page is shared.
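Counters such as Interrupts/sec are derived rates: the monitor takes the difference between two cumulative samples and divides by the sample interval. A minimal sketch of that calculation (the sample values are hypothetical):

```python
def rate_per_sec(prev_total, curr_total, interval_sec):
    """Convert two cumulative counter samples into an average per-second rate."""
    return (curr_total - prev_total) / interval_sec

# Hypothetical samples: 401,500 total interrupts at time t, 402,500 at t + 5s
print(rate_per_sec(401_500, 402_500, 5.0))  # 200.0 interrupts/sec
```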
Pages/sec - The number of pages read from the disk or written to the disk to resolve memory references to pages that were not in memory at the time of the reference. This is the sum of Pages Input/sec and Pages Output/sec. This counter includes paging traffic on behalf of the system Cache to access file data for applications. This value also includes the pages to/from non-cached mapped memory files. This is the primary counter to observe if you are concerned about excessive memory pressure (that is, thrashing) and the excessive paging that may result.
Pool Nonpaged Bytes - The number of bytes in the Nonpaged Pool, a system memory area where space is acquired by operating system components as they accomplish their appointed tasks. Nonpaged Pool pages cannot be paged out to the paging file, but instead remain in main memory as long as they are allocated.
Private Bytes - The current number of bytes this process has allocated that cannot be shared with other processes.
Processor Queue Length - The instantaneous length of the processor queue in units of threads. This counter is always 0 unless you are also monitoring a thread counter. All processors use a single queue in which threads wait for processor cycles. This length does not include the threads that are currently executing. A sustained processor queue length greater than two generally indicates processor congestion. This is an instantaneous count, not an average over the time interval.
Threads - The number of threads in the computer at the time of data collection. Notice that this is an instantaneous count, not an average over the time interval. A thread is the basic executable entity that can execute instructions in a processor.
To measure network performance, the Network monitor sends packets of data across the network. When a packet returns, the monitor calculates the time it takes for the packet to go to the requested node and return. The Network Sub-Path Time graph displays the delay from the source machine to each node along the path. The Network Segment Delay graph displays the delay for each segment of the path. The Network Delay Time graph displays the delay for the complete path between the source and destination machines. Using the Network Monitor graphs, you can determine whether the network is causing a bottleneck. If the problem is network-related, you can locate the problematic segment so that it can be fixed. In order for Analysis to generate Network monitor graphs, you must activate the Network monitor before executing the scenario. In the Network monitor settings, you specify the path you want to monitor. For information about setting up the Network monitor, see the LoadRunner Controller User's Guide (Windows).
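The round-trip timing idea can be sketched in miniature. This is illustrative only — the Network monitor uses its own probe packets, not this mechanism — and estimates the delay to a single node by timing how long a TCP connection takes to establish:

```python
import socket
import time

def connect_delay_ms(host, port=80, timeout=2.0):
    """Estimate network delay to one node by timing TCP connection setup.

    Illustrative sketch only; the LoadRunner Network monitor sends its
    own packets and measures the full round trip to each node."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0
```

A run against a reachable host returns the setup time in milliseconds; repeated samples can be averaged to smooth out jitter, much as the monitor samples each node repeatedly over the scenario run.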
Note: The delays from the source machine to each of the nodes are measured concurrently, yet independently. It is therefore possible that the delay from the source machine to one of the nodes could be greater than the delay for the complete path between the source and destination machines.
Note: The segment delays are measured approximately, and do not add up to the network path delay which is measured exactly. The delay for each segment of the path is estimated by calculating the delay from the source machine to one node and subtracting the delay from the source machine to another node. For example, the delay for segment B to C is calculated by measuring the delay from the source machine to point C, and subtracting the delay from the source machine to point B.
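The subtraction described in the note can be sketched directly. Given the measured source-to-node delays (hypothetical values below), each segment's estimate is the difference between consecutive sub-path delays — which is also why an estimate can come out negative when the independent measurements disagree:

```python
def segment_delays(subpath_delays_ms):
    """Estimate per-segment delays from cumulative source-to-node delays.

    subpath_delays_ms maps each node (in path order) to the measured
    delay from the source machine, in msec. Node names are hypothetical."""
    nodes = list(subpath_delays_ms)
    segments = {}
    for a, b in zip(nodes, nodes[1:]):
        # Segment a->b = (source->b delay) - (source->a delay)
        segments[f"{a}->{b}"] = subpath_delays_ms[b] - subpath_delays_ms[a]
    return segments

# Source->A = 12 ms, Source->B = 20 ms, Source->C = 27 ms
print(segment_delays({"A": 12.0, "B": 20.0, "C": 27.0}))
# {'A->B': 8.0, 'B->C': 7.0}
```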
This graph displays the fwDropped, fwLogged, and fwRejected measurements during the first minute and twenty seconds of the scenario. Note that there are differences in the scale factor for the measurements: the scale factor for fwDropped is 1, the scale factor for fwLogged is 10, and the scale factor for fwRejected is 0.0001. The following measurements are available for the Check Point FireWall-1 server:
fwRejected - The number of rejected packets.
fwDropped - The number of dropped packets.
fwLogged - The number of logged packets.
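Reading scaled graphs like the one above is easier with the relationship spelled out. A small sketch, assuming the convention used in these examples (true value = displayed value multiplied by the measurement's scale factor, as shown in the Legend tab); the displayed readings are hypothetical:

```python
def actual_value(displayed, scale):
    """Recover a measurement's true value from a scaled graph reading.

    Assumes the Analysis convention illustrated in the text:
    true value = displayed value * scale factor."""
    return displayed * scale

# fwLogged displayed as 25 with a scale factor of 10 -> 250 logged packets
print(actual_value(25, 10))            # 250
# fwRejected displayed at 30000 with a scale factor of 0.0001
print(actual_value(30_000, 0.0001))    # 3.0 rejected packets
```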
The actual value of the measurement during the second minute is 1: the displayed value of 10 multiplied by the scale factor of 1/10 (indicated in the Legend tab below).
In the above graph, the CPU usage remained steady throughout the scenario. At the end of the scenario, the number of idle servers increased. The number of busy servers remained steady at 1 throughout the scenario, implying that the Vuser only accessed one Apache server.
Note that the scale factor for the Busy Servers measurement is 1/10 and the scale factor for CPU usage is 10. The following default measurements are available for the Apache server:
# Busy Servers - The number of servers in the Busy state.
# Idle Servers - The number of servers in the Idle state.
Apache CPU Usage - The percentage of time the CPU is utilized by the Apache server.
Hits/sec - The HTTP request rate.
KBytes Sent/sec - The rate at which data bytes are sent from the Web server.
Note: The Apache monitor connects to the Web server in order to gather statistics, and registers one hit for each sampling. The Apache graph, therefore, always displays one hit per second, even if no clients are connected to the Apache server.
In the above graph, the Bytes Received/sec and Get Requests/sec measurements remained fairly steady throughout the scenario, while the % Total Processor Time, Bytes Sent/sec, and Post Requests/sec measurements fluctuated considerably. Note that the scale factor for the Bytes Sent/sec and Bytes Received/sec measurements is 1/100, and the scale factor for the Post Requests/sec measurement is 10.
The following default measurements are available for the IIS server:
Web Service: Bytes Sent/sec - The rate at which data bytes are sent by the Web service.
Web Service: Bytes Received/sec - The rate at which data bytes are received by the Web service.
Web Service: Get Requests/sec - The rate at which HTTP requests using the GET method are made. Get requests are generally used for basic file retrievals or image maps, though they can be used with forms.
Web Service: Post Requests/sec - The rate at which HTTP requests using the POST method are made. Post requests are generally used for forms or gateway requests.
Web Service: Maximum Connections - The maximum number of simultaneous connections established with the Web service.
Web Service: Current Connections - The current number of connections established with the Web service.
Web Service: Current NonAnonymous Users - The number of users that currently have a nonanonymous connection using the Web service.
Web Service: Not Found Errors/sec - The rate of errors due to requests that could not be satisfied by the server because the requested document could not be found. These are generally reported to the client as an HTTP 404 error code.
Process: Private Bytes - The current number of bytes that the process has allocated that cannot be shared with other processes.
Note that the scale factor for the 302/sec and 3xx/sec measurements is 100, and the scale factor for the Bytes Sent/sec is 1/100. The following default measurements are available for the iPlanet/Netscape server:
200/sec - The rate of successful transactions being processed by the server.
2xx/sec - The rate at which the server handles status codes in the 200 to 299 range.
302/sec - The rate of relocated URLs being processed by the server.
304/sec - The rate of requests for which the server tells the user to use a local copy of a URL instead of retrieving a newer version from the server.
3xx/sec - The rate at which the server handles status codes in the 300 to 399 range.
401/sec - The rate of unauthorized requests handled by the server.
403/sec - The rate of forbidden URL status codes handled by the server.
4xx/sec - The rate at which the server handles status codes in the 400 to 499 range.
5xx/sec - The rate at which the server handles status codes 500 and higher.
Bad requests/sec - The rate at which the server handles bad requests.
Bytes sent/sec - The rate at which bytes of data are sent from the Web server.
Hits/sec - The HTTP request rate.
xxx/sec - The rate of all status codes (2xx-5xx) handled by the server, excluding timeouts and other errors that did not return an HTTP status code.
Web Application Server Resource Graphs
After a scenario run, you can use Web Application Server Resource graphs to analyze Web application server performance. This chapter describes: Ariba Graph
ATG Dynamo Graph
BroadVision Graph
ColdFusion Graph
Fujitsu INTERSTAGE Graph
Microsoft Active Server Pages (ASP) Graph
Oracle9iAS HTTP Server Graph
SilverStream Graph
WebLogic (SNMP) Graph
WebLogic (JMX) Graph
WebSphere Graph
WebSphere (EPM) Graph
Ariba Graph
The following tables describe the default measurements available for the Ariba server: Core Server Performance Counters
Total Connections - The cumulative number of concurrent user connections since Ariba Buyer was started.
Requisitions Finished
The instantaneous reading of the length of the worker queue at the moment this metric is obtained. The longer the worker queue, the more user requests are delayed for processing.
The instantaneous reading of the number of concurrent user connections at the moment this metric is obtained.
The instantaneous reading of the memory (in KB) being used by Ariba Buyer at the moment this metric is obtained.
Free Memory - The instantaneous reading of the reserved memory (in KB) that is not currently in use at the moment this metric is obtained.
Up Time - The amount of time (in hours and minutes) that Ariba Buyer has been running since the previous time it was started.
Number of Threads - The instantaneous reading of the number of server threads in existence at the moment this metric is obtained.
The instantaneous reading of the number of Ariba Buyer objects being held in memory at the moment this metric is obtained.
The average length of the user sessions (in seconds) of all users who logged out since the previous sampling time. This value indicates on average how long a user stays connected to the server.
The average idle time (in seconds) for all the users who are active since the previous sampling time. The idle time is the period of time between two consecutive user requests from the same user.
Approves - The cumulative count of the number of approves that happened during the sampling period. An Approve consists of a user approving one Approvable.
The cumulative count of the number of Approvables submitted since the previous sampling time.
The cumulative count of the number of submitted Approvables denied since the previous sampling time.
The cumulative count of accesses (both reads and writes) to the object cache since the previous sampling time.
The cumulative count of accesses to the object cache that are successful (cache hits) since the previous sampling time.
The average number of packets that Ariba Buyer sent to the DB server since the previous sampling time.
The average number of packets that the DB server sent to Ariba Buyer since the previous sampling time.
The total amount of memory currently available for allocating objects, measured in bytes.
An approximation of the total amount of memory currently available for future allocated objects, measured in bytes.
The number of system global info messages written.
The number of system global warning messages written.
The number of system global error messages written.
True if the Dynamo is running a load manager.
Returns the Dynamo's offset into the list of load managing entities.
True if the load manager is an acting primary manager.
True if the load manager has serviced any connection module requests in the amount of time set as the connection module polling interval.
The port of the connection module agent.
A unique value for each managed entity.
The port for the entry's SNMP agent.
The probability that the entry will be given a new session.
Indicates whether or not the entry is accepting new sessions, or if the load manager is allowing new sessions to be sent to the entry. This value is inclusive of any override indicated by lmNewSessionOverride.
lmNewSessionOverride - The override set for whether or not a server is accepting new sessions.
The number of created sessions.
The number of valid sessions.
The number of sessions migrated to the server.
d3SessionTracking
The port of the DRP server.
Total number of DRP requests serviced.
Total service time in msecs for all DRP requests.
Average service time in msecs for each DRP request.
True if the Dynamo is accepting new sessions.
d3DBConnPooling
dbPoolsEntry - A pooling service entry containing information about the pool configuration and current status.
dbIndex - A unique value for each pooling service.
dbPoolID - The name of the DB connection pool service.
dbMinConn - The minimum number of connections pooled.
dbMaxConn - The maximum number of connections pooled.
dbMaxFreeConn - The maximum number of free pooled connections at a time.
dbBlocking - Indicates whether or not the pool is to block out check outs.
dbConnOut - Returns the number of connections checked out.
dbFreeResources - Returns the number of free connections in the pool. This number refers to connections actually created that are not currently checked out. It does not include how many more connections are allowed to be created as set by the maximum number of connections allowed in the pool.
dbTotalResources - Returns the number of total connections in the pool. This number refers to connections actually created and is not an indication of how many more connections may be created and used in the pool.
BroadVision Graph
The BroadVision monitor supplies performance statistics for all the servers/services available within the application server. The following table describes all the servers/services that are available:
One-To-One user administration server. There must be one.
Alert server. Handles direct IDL function calls to the Alert system.
One-To-One configuration management server. There must be one.
Visitor management database server.
Content database server.
Notification delivery server for e-mail type messages. Each instance of this server must have its own ID, numbered sequentially starting with "1".
Notification delivery completion processor.
External database accessor. You need at least one for each external data source.
Generic database accessor. Handles content query requests from applications, when specifically called from the application. This is also used by the One-To-One Command Center.
hostmgr (multiple instances: yes) - Defines a host manager process for each machine that participates in One-To-One but doesn't run any One-To-One servers. For example, you need a hostmgr on a machine that runs only servers. You don't need a separate hostmgr on a machine that already has one of the servers in this list.
g1_ofbe_srv (multiple instances: no) - Order fulfillment back-end server.
Order fulfillment database server.
Order management server.
The payment archiving daemon routes payment records to the archives by periodically checking the invoices table, looking for records with completed payment transactions, and then moving those records into an archive table.
pmthdlr_d (multiple instances: yes) - For each payment processing method, you need one or more authorization daemons to periodically acquire the authorization when a request is made.
pmtsettle_d (multiple instances: yes) - The payment settlement daemon periodically checks the database for orders of the associated payment processing method that need to be settled, and then authorizes the transactions.
sched_poll_d (multiple instances: no) - The notification schedule poller scans the database tables to determine when a notification must be run.
sched_srv (multiple instances: yes) - The notification schedule server runs the scripts that generate the visitor notification messages.
Performance Counters Performance counters for each server/service are divided into logical groups according to the service type. The following section describes all the available counters under each group. Please note that for some services the number of counters for the same group can be different.
Counter groups: BV_DB_STAT, BV_SRV_CTRL, BV_SRV_STAT, NS_STAT, BV_CACHE_STAT, JS_SCRIPT_CTRL, JS_SCRIPT_STAT
BV_DB_STAT - The database accessor processes have additional statistics available from the BV_DB_STAT memory block. These statistics provide information about database accesses, including the count of select, update, insert, delete, and stored procedure executions.
DELETE - Count of delete executions
INSERT - Count of insert executions
SELECT - Count of select executions
SPROC - Count of stored procedure executions
UPDATE - Count of update executions
BV_SRV_CTRL - SHUTDOWN
NS_STAT The NS process displays the namespace for the current One-To-One environment, and optionally can update objects in a name space. Bind List New Rebnd Rsolv Unbnd BV_SRV_STAT The display for Interaction Manager processes includes information about the current count of sessions, connections, idle sessions, threads in use, and count of CGI requests processed. HOST - Host machine running the process. ID - Instance of the process (of which multiple can be configured in the bv1to1.conf file), or engine ID of the Interaction Manager. CGI - Current count of CGI requests processed. CONN - Current count of connections. CPU - CPU percentage consumed by this process. If a process is using most of the CPU time, consider moving it to another host, or creating an additional process, possibly running on another machine. Both of these specifications are done in the bv1to1.conf file. The CPU % reported is against a single processor. If a server is taking up a whole CPU on a 4 processor machine, this statistic will report 100%, while the Windows NT Task Manager will report 25%. The value reported by this statistic is consistent with "% Processor Time" on the Windows NT Performance Monitor. GROUP - Process group (which is defined in the bv1to1.conf file), or Interaction Manager application name.
STIME - Start time of server. The start times should be relatively close. Later times might be an indication that a server crashed and was automatically restarted. IDL - Total count of IDL requests received, not including those to the monitor. IdlQ JOB LWP - Number of light-weight processes (threads). RSS - Resident memory size of server process (in Kilobytes). STIME - System start time. SESS - Current count of sessions. SYS - Accumulated system mode CPU time (seconds). THR - Current count of threads. USR - Accumulated user mode CPU time (seconds). VSZ - Virtual memory size of server process (in kilobytes). If a process is growing in size, it probably has a memory leak. If it is an Interaction Manager process, the culprit is most likely a component or dynamic object (though Interaction Manager servers do grow and shrink from garbage collection during normal use). BV_CACHE_STAT Monitors the request cache status. The available counters for each request are: CNT- Request_Name-HIT - Count of requests found in the cache. CNT- Request_Name-MAX - Maximum size of the cache in bytes CNT- Request_Name-SWAP - Count of items that got swapped out of the cache. CNT- Request_Name-MISS - Count of requests that were not in the cache. CNT- Request_Name-SIZE - Count of items currently in the cache.
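The CPU statistic above is normalized to a single processor, so on a multiprocessor machine it reads higher than Task Manager's whole-machine figure. The conversion is a one-liner (function name illustrative):

```python
def taskmgr_percent(single_cpu_percent, num_processors):
    """Convert a per-processor CPU figure (as BV_SRV_STAT and the Windows NT
    Performance Monitor report it) to the whole-machine percentage that the
    Windows NT Task Manager would show."""
    return single_cpu_percent / num_processors

# A server consuming one full CPU on a 4-processor machine:
print(taskmgr_percent(100.0, 4))  # 25.0
```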
Cache Metrics Cache metrics are available for the following items: AD ALERTSCHED - Notification schedules are defined in the BV_ALERTSCHED and BV_MSGSCHED tables. They are defined by the One-To-One Command Center user or by an application. CATEGORY_CONTENT DISCUSSION - The One-To-One discussion groups provide a moderated system of messages and threads of messages aligned to a particular topic. Use the discussion group interfaces for creating, retrieving, and deleting individual messages in a discussion group. To create, delete, or retrieve discussion groups, use the generic content management API. The BV_DiscussionDB object provides access to the threads and messages in the discussion group database. EXT_FIN_PRODUCT EDITORIAL - Using the Editorials content module, you can point cast and community cast personalized editorial content, and sell published text on your One-To-One site. You can solicit editorial content, such as investment reports and weekly columns, from outside authors and publishers, and create your own articles, reviews, reports, and other informative media. In addition to text, you can use images, sounds, music, and video presentations as editorial content. INCENTIVE - Contains sales incentives. MSGSCHED - Contains the specifications of visitor-message jobs. Notification schedules are defined in the BV_ALERTSCHED and BV_MSGSCHED tables. They are defined by the One-To-One Command Center user or by an application. MSGSCRIPT - Contains the descriptions of the JavaScripts that generate visitor messages and alert messages. Use the Command Center to add message script information to this table by selecting the Visitor Messages module in the Notifications group. For more information, see the Command Center User's Guide.
PRODUCT - BV_PRODUCT contains information about the products that a visitor can purchase. QUERY - BV_QUERY contains queries. SCRIPT - BV_SCRIPT contains page scripts. SECURITIES TEMPLATE - The Templates content module enables you to store in the content database any BroadVision page templates used on your One-To-One site. Combining BroadVision page templates with BroadVision dynamic objects in the One-To-One Design Center application is one way for site developers to create One-To-One Web sites. If your developers use these page templates, you can use the Command Center to enter and manage them in your content database. If your site doesn't use BroadVision page templates, you will not use this content module. JS_SCRIPT_CTRL: CACHE, DUMP, FLUSH, METER, TRACE JS_SCRIPT_STAT: ALLOC, ERROR, FAIL, JSPPERR, RELEASE, STOP, SUCC, SYNTAX
ColdFusion Graph
The following default measurements are available for Allaire's ColdFusion server:
Avg. Database Time (msec) - The running average of the amount of time, in milliseconds, that it takes ColdFusion to process database requests.
Avg. Queue Time (msec) - The running average of the amount of time, in milliseconds, that requests spent waiting in the ColdFusion input queue before ColdFusion began to process the request.
Avg. Req Time (msec) - The running average of the total amount of time, in milliseconds, that it takes ColdFusion to process a request. In addition to general page processing time, this value includes both queue time and database processing time.
Bytes In/sec - The number of bytes per second sent to the ColdFusion server.
Bytes Out/sec - The number of bytes per second returned by the ColdFusion server.
Cache Pops - Cache pops.
Database Hits/sec - The number of database hits generated per second by the ColdFusion server.
Page Hits/sec - The number of Web pages processed per second by the ColdFusion server.
Queued Requests - The number of requests currently waiting to be processed by the ColdFusion server.
Running Requests - The number of requests currently being actively processed by the ColdFusion server.
Timed Out Requests - The number of requests that timed out due to inactivity timeouts.
mod_setenvif.c - Sets environment variables based on client information
mod_actions.c - Executes CGI scripts based on media type or request method
mod_imap.c - Handles imagemap files
mod_asis.c - Sends files that contain their own HTTP headers
mod_log_config.c - Provides user-configurable logging replacement for mod_log_common
mod_env.c - Passes environments to CGI scripts
mod_alias.c - Maps different parts of the host file system in the document tree, and redirects URLs
mod_userdir.c - Handles user home directories
mod_cgi.c - Invokes CGI scripts
mod_dir.c - Handles the basic directory
mod_autoindex.c - Provides automatic directory listings
mod_include.c - Provides server-parsed documents
mod_negotiation.c - Handles content negotiation
mod_auth.c - Provides user authentication using text files
mod_access.c - Provides access control based on the client hostname or IP address
mod_so.c - Supports loading modules (.so on UNIX, .dll on Win32) at run-time
mod_oprocmgr.c - Monitors JServ processes and restarts them if they fail
mod_jserv.c - Routes HTTP requests to JServ server processes. Balances load across multiple JServs by distributing new requests in round-robin order
Routes requests to the JVM embedded in Oracle's database server.
Handles requests for static Web pages.
The following table describes the counters that are available for the Oracle9iAS HTTP server:
handle.minTime - The minimum time spent in the module handler.
handle.avg - The average time spent in the module handler.
handle.active - The number of threads currently in the handle processing phase.
handle.time - The total amount of time spent in the module handler.
handle.completed - The number of times the handle processing phase was completed.
request.maxTime - The maximum amount of time required to service an HTTP request.
request.minTime - The minimum amount of time required to service an HTTP request.
request.avg - The average amount of time required to service an HTTP request.
request.active - The number of threads currently in the request processing phase.
request.time - The total amount of time required to service an HTTP request.
request.completed - The number of times the request processing phase was completed.
connection.maxTime - The maximum amount of time spent servicing any HTTP connection.
connection.minTime - The minimum amount of time spent servicing any HTTP connection.
connection.avg - The average amount of time spent servicing HTTP connections.
connection.active - The number of connections with currently open threads.
connection.time - The total amount of time spent servicing HTTP connections.
connection.completed - The number of times the connection processing phase was completed.
numMods.value - The number of loaded modules.
childStart.count - The number of times the Apache parent server started a child server, for any reason.
childFinish.count - The number of times children finished gracefully. There are some ungraceful error/crash cases that are not counted in childFinish.count.
Decline.count - The number of times each module declined HTTP requests.
internalRedirect.count - The number of times that any module passed control to another module using an internal redirect.
The total CPU time utilized by all processes on the Apache server (measured in CPU milliseconds).
The total heap memory utilized by all processes on the Apache server (measured in kilobytes).
The process identifier of the parent Apache process.
The amount of time the server has been running (measured in milliseconds).
SilverStream Graph
The following default measurements are available for the SilverStream server:
#Idle Sessions: The number of sessions in the Idle state
Avg. Request processing time: The average request processing time
Bytes Sent/sec: The rate at which data bytes are sent from the Web server
Current load on Web Server: The percentage of load utilized by the SilverStream server, scaled at a factor of 25
Hits/sec: The HTTP request rate
Total sessions: The total number of sessions
Free memory: The total amount of memory in the Java Virtual Machine currently available for future allocated objects
Total memory: The total amount of memory in the Java Virtual Machine
Memory Garbage Collection Count: The total number of times the Java Garbage Collector has run since the server was started
Free threads: The current number of threads not associated with a client connection and available for immediate use
Idle threads: The number of threads associated with a client connection, but not currently handling a user request
Total threads: The total number of client threads allocated
Note: The SilverStream monitor connects to the Web server in order to gather statistics, and registers one hit for each sampling. The SilverStream graph, therefore, always displays one hit per second, even if no clients are connected to the SilverStream server.
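Because the monitor itself registers one hit per sampling, a true client request rate can be approximated by subtracting the monitor's sampling rate from the reported Hits/sec. A rough sketch (the one-sample-per-second rate is the default behavior described in the note; the reported value is illustrative):

```python
def client_hits_per_sec(reported_hits_per_sec, monitor_samples_per_sec=1.0):
    """Remove the monitor's own sampling hits from the reported Hits/sec,
    clamping at zero for an otherwise idle server."""
    return max(reported_hits_per_sec - monitor_samples_per_sec, 0.0)

print(client_hits_per_sec(25.0))  # 24.0 -> real client traffic
print(client_hits_per_sec(1.0))   # 0.0  -> all hits came from the monitor itself
```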
Two further counters are reported for the server: the total number of EJB deployment units known to the server, and the total number of EJB beans actively deployed on the server.
Listen Table
The Listen Table is the set of protocol, IP address, and port combinations on which servers are listening. There will be multiple entries for each server: one for each (protocol, ipAddr, port) combination. If clustering is used, the clustering-related MIB objects will assume a higher priority.
ListenPort: Port number
ListenAdminOK: True if admin requests are allowed on this (protocol, ipAddr, port); otherwise false
ListenState: Listening if the (protocol, ipAddr, port) is enabled on the server; not Listening if it is not. The server may be listening but not accepting new clients if its server Login Enable state is false. In this case, existing clients will continue to function, but new ones will not.
ClassPath Table The ClassPath Table is the table of classpath elements for Java, WebLogic (SNMP) server, and servlets. There are multiple entries in this table for each server. There may also be multiple entries for each path on a server. If clustering is used, the clustering-related MIB objects will assume a higher priority.
CPType: The type of CP element: Java, WebLogic, or servlet. A Java CPType means the cpElement is one of the elements in the normal Java classpath. A WebLogic CPType means the cpElement is one of the elements in weblogic.class.path. A servlet CPType means the cpElement is one of the elements in the dynamic servlet classpath.
CPIndex: The position of an element within its path. The index starts at 1.
ServerRuntime
For more information on the measurements contained in each of the following measurement categories, see Mercury Interactive's Load Testing Monitors Web site: http://www-svca.mercuryinteractive.com/resources/library/technical/loadtesting_monitors/supported.html

ServletRuntime, WebAppComponentRuntime, EJBStatefulHomeRuntime, JTARuntime, JVMRuntime, EJBEntityHomeRuntime, DomainRuntime, EJBComponentRuntime, DomainLogHandlerRuntime, JDBCConnectionPoolRuntime, ExecuteQueueRuntime, ClusterRuntime, JMSRuntime, TimeServiceRuntime, EJBStatelessHomeRuntime, WLECConnectionServiceRuntime
ServerSecurityRuntime

UnlockedUsersTotalCount: Returns the number of times a user has been unlocked on the server
InvalidLoginUsersHighCount: Returns the high-water number of users with outstanding invalid login attempts for the server
LoginAttemptsWhileLockedTotalCount: Returns the cumulative number of invalid logins attempted on the server while the user was locked
Registered: Returns false if the MBean represented by this object has been unregistered
LockedUsersCurrentCount: Returns the number of currently locked users on the server
CachingDisabled: Private property that disables caching in proxies
InvalidLoginAttemptsTotalCount: Returns the cumulative number of invalid logins attempted on the server
UserLockoutTotalCount: Returns the cumulative number of user lockouts done on the server
WebSphere Graph
The following measurements are available for the WebSphere server: Run-Time Resources Contains resources related to the Java Virtual Machine run-time, as well as the ORB.
MemoryFree: The amount of free memory remaining in the Java Virtual Machine
MemoryTotal: The total memory allocated for the Java Virtual Machine
MemoryUse: The total memory in use within the Java Virtual Machine
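From the free and total memory counters above, a heap-utilization percentage can be derived for trend plots; when it stays near 100% between garbage collections, the JVM heap is likely undersized. A minimal sketch with invented byte values:

```python
def jvm_heap_used_pct(memory_free, memory_total):
    """Percentage of the JVM heap currently in use (MemoryUse / MemoryTotal),
    computed from the MemoryFree and MemoryTotal counters."""
    if memory_total <= 0:
        raise ValueError("MemoryTotal must be positive")
    memory_use = memory_total - memory_free
    return 100.0 * memory_use / memory_total

print(jvm_heap_used_pct(64_000_000, 256_000_000))  # 75.0
```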
BeanData Every home on the server provides performance data, depending upon the type of bean deployed in the home. The top level bean data holds an aggregate of all the containers.
BeanCreates: The number of beans created. Applies to an individual bean that is either stateful or entity
EntityBeanCreates: The number of entity beans created
BeanRemoves: The number of beans pertaining to a specific bean that have been removed. Applies to an individual bean that is either stateful or entity
EntityBeanRemoves: The number of entity beans removed
StatefulBeanCreates: The number of stateful beans created
StatefulBeanRemoves: The number of stateful beans removed
BeanPassivates: The number of bean passivates pertaining to a specific bean. Applies to an individual bean that is either stateful or entity
EntityBeanPassivates: The number of entity bean passivates
StatefulBeanPassivates: The number of stateful bean passivates
BeanActivates: The number of bean activates pertaining to a specific bean. Applies to an individual bean that is either stateful or entity
EntityBeanActivates: The number of entity bean activates
StatefulBeanActivates: The number of stateful bean activates
BeanLoads: The number of times the bean data was loaded. Applies to entity
BeanStores: The number of times the bean data was stored in the database. Applies to entity
BeanInstantiates: The number of times a bean object was created. This applies to an individual bean, regardless of its type
StatelessBeanInstantiates: The number of times a stateless session bean object was created
StatefulBeanInstantiates: The number of times a stateful session bean object was created
EntityBeanInstantiates: The number of times an entity bean object was created
BeanDestroys: The number of times an individual bean object was destroyed. This applies to any bean, regardless of its type
StatelessBeanDestroys: The number of times a stateless session bean object was destroyed
StatefulBeanDestroys: The number of times a stateful session bean object was destroyed
EntityBeanDestroys: The number of times an entity bean object was destroyed
BeansActive: The average number of instances of active beans pertaining to a specific bean. Applies to an individual bean that is either stateful or entity
EntityBeansActive: The average number of active entity beans
StatefulBeansActive: The average number of active session beans
BeansLive: The average number of bean objects of this specific type that are instantiated but not yet destroyed. This applies to an individual bean, regardless of its type
StatelessBeansLive: The average number of stateless session bean objects that are instantiated but not yet destroyed
StatefulBeansLive: The average number of stateful session bean objects that are instantiated but not yet destroyed
EntityBeansLive: The average number of entity bean objects that are instantiated but not yet destroyed
BeanMethodRT: The average method response time for all methods defined in the remote interface to this bean. Applies to all beans
BeanMethodActive: The average number of methods being processed concurrently. Applies to all beans
BeanMethodCalls: The total number of method calls against this bean
BeanObjectPool
The server holds a cache of bean objects. Each home has a cache, and there is therefore one BeanObjectPoolContainer per container. The top-level BeanObjectPool holds an aggregate of all the containers' data.

BeanObjectPoolContainer: The pool of a specific bean type
BeanObject: The pool specific to a home
NumGet: The number of calls retrieving an object from the pool
NumGetFound: The number of calls to the pool that resulted in finding an available bean
NumPuts: The number of beans that were released to the pool
NumPutsDiscarded: The number of times releasing a bean to the pool resulted in the bean being discarded because the pool was full

Additional pool counters report: the number of times the daemon found the pool was idle and attempted to clean it; the average number of beans discarded by the daemon during a clean; and the average number of beans in the pool.
OrbThreadPool These are resources related to the ORB thread pool that is on the server.
ActiveThreads: The average number of active threads in the pool
TotalThreads: The average number of threads in the pool
PercentTimeMaxed: The average percent of the time that the number of threads in the pool reached or exceeded the desired maximum number
ThreadCreates: The number of threads created
ThreadDestroys: The number of threads destroyed
ConfiguredMaxSize: The configured maximum number of pooled threads
DBConnectionMgr These are resources related to the database connection manager. The manager consists of a series of data sources, as well as a top-level aggregate of each of the performance metrics.
DataSource: Resources related to a specific data source specified by the "name" attribute
ConnectionCreates: The number of connections created
ConnectionDestroys: The number of connections released
ConnectionPoolSize: The average size of the pool, i.e., the number of connections
ConnectionAllocates: The number of times a connection was allocated
ConnectionWaiters: The average number of threads waiting for a connection
ConnectionWaitTime: The average time, in seconds, of a connection grant
ConnectionTime: The average time, in seconds, that a connection is in use
ConnectionPercentUsed: The average percentage of the pool that is in use
ConnectionPercentMaxed: The percentage of the time that all connections are in use
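Two of the counters above together make a simple saturation check: if all connections are in use most of the time and threads are queuing for a connection, the pool is a likely bottleneck. A heuristic sketch (the 90% threshold is an assumption, not a documented limit):

```python
def pool_bottleneck(metrics, maxed_threshold=90.0):
    """Heuristic: flag the connection pool as a likely bottleneck when all
    connections are frequently in use (ConnectionPercentMaxed high) AND
    threads are waiting for a connection (ConnectionWaiters > 0)."""
    return (metrics["ConnectionPercentMaxed"] >= maxed_threshold
            and metrics["ConnectionWaiters"] > 0)

# Illustrative snapshot values, not from a real server.
sample = {"ConnectionPercentMaxed": 95.0, "ConnectionWaiters": 4}
print(pool_bottleneck(sample))  # True
```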
ServletEngine These are resources that are related to servlets and JSPs.
ServletsLoaded: The number of servlets currently loaded
ServletRequests: The number of requests serviced
CurrentRequests: The number of requests currently being serviced
ServletRT: The average response time for each request
ServletsActive: The average number of servlets actively processing requests
ServletIdle: The amount of time that the server has been idle (i.e., time since the last request)
ServletErrors: The number of requests that resulted in an error or an exception
ServletBeanCalls: The number of bean method invocations that were made by the servlet
ServletBeanCreates: The number of bean references that were made by the servlet
ServletDBCalls: The number of database calls made by the servlet
ServletDBConAlloc: The number of database connections allocated by the servlet
SessionLoads: The number of times the servlet session data was read from the database
SessionStores: The number of times the servlet session data was stored in the database
SessionSize: The average size, in bytes, of the session data
LoadedSince: The time that has passed since the server was loaded (UTC time)
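The ServletErrors and ServletRequests counters can be combined into an error rate, which is often more meaningful than the raw error count when comparing load levels. A minimal sketch with invented values:

```python
def servlet_error_rate(servlet_errors, servlet_requests):
    """Fraction of serviced requests that ended in an error or exception,
    from the ServletErrors and ServletRequests counters."""
    return servlet_errors / servlet_requests if servlet_requests else 0.0

print(servlet_error_rate(12, 4800))  # 0.0025 -> 0.25% of requests failed
```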
Sessions These are general metrics regarding the HTTP session pool.
SessionsCreated: The number of sessions created on the server
SessionsActive: The number of currently active sessions
SessionsInvalidated: The number of invalidated sessions. May not be valid when using sessions in the database mode
SessionLifetime: Contains statistical data of sessions that have been invalidated. Does not include sessions that are still alive
14
Database Server Resource Graphs
After running a scenario, you can use Database Server Resource graphs to analyze the resource usage of your DB2, Oracle, SQL Server, and Sybase databases. This chapter describes:

DB2 Graph
Oracle Graph
SQL Server Graph
Sybase Graph
DB2 Graph
The DB2 graph shows the resource usage on the DB2 database server machine as a function of the elapsed scenario time. The following tables describe the default counters that can be monitored on a DB2 server: DatabaseManager
rem_cons_in: The current number of connections initiated from remote clients to the instance of the database manager that is being monitored
rem_cons_in_exec: The number of remote applications that are currently connected to a database and are currently processing a unit of work within the database manager instance being monitored
local_cons: The number of local applications that are currently connected to a database within the database manager instance being monitored
local_cons_in_exec: The number of local applications that are currently connected to a database within the database manager instance being monitored and are currently processing a unit of work
con_local_dbases: The number of local databases that have applications connected
agents_registered: The number of agents registered in the database manager instance that is being monitored (coordinator agents and subagents)
agents_waiting_on_token: The number of agents waiting for a token so they can execute a transaction in the database manager
idle_agents: The number of agents in the agent pool that are currently unassigned to an application and are therefore "idle"
agents_from_pool: The number of agents assigned from the agent pool
agents_created_empty_pool: The number of agents created because the agent pool was empty
agents_stolen: The number of times that agents are stolen from an application. Agents are stolen when an idle agent associated with an application is reassigned to work on a different application
comm_private_mem: The amount of private memory that the instance of the database manager has currently committed at the time of the snapshot
inactive_gw_agents: The number of DRDA agents in the DRDA connections pool that are primed with a connection to a DRDA database, but are inactive
num_gw_conn_switches: The number of times that an agent from the agents pool was primed with a connection and was stolen for use with a different DRDA database
sort_heap_allocated: The total number of allocated pages of sort heap space for all sorts at the level chosen and at the time the snapshot was taken
post_threshold_sorts: The number of sorts that have requested heaps after the sort heap threshold has been reached
piped_sorts_requested: The number of piped sorts that have been requested
piped_sorts_accepted: The number of piped sorts that have been accepted
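The piped sort counters above (requested vs. accepted) make a useful tuning ratio: when the database manager rejects many piped sort requests, sort memory is commonly the cause. A minimal sketch, assuming snapshot values are available as plain integers (the counter names in the code follow DB2's snapshot naming, an assumption here):

```python
def piped_sort_acceptance(piped_sorts_requested, piped_sorts_accepted):
    """Share of piped sort requests the database manager accepted.
    A persistently low ratio often points at sort memory configuration."""
    if piped_sorts_requested == 0:
        return 1.0  # no piped sorts requested -> nothing was rejected
    return piped_sorts_accepted / piped_sorts_requested

print(piped_sort_acceptance(200, 180))  # 0.9
```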
Database
appls_cur_cons: Indicates the number of applications that are currently connected to the database
appls_in_db2: Indicates the number of applications that are currently connected to the database, and for which the database manager is currently processing a request
total_sec_cons: The number of connections made by a sub-agent to the database at the node
num_assoc_agents: At the application level, this is the number of subagents associated with an application. At the database level, it is the number of sub-agents for all applications
sort_heap_allocated: The total number of allocated pages of sort heap space for all sorts at the level chosen and at the time the snapshot was taken
total_sorts: The total number of sorts that have been executed
total_sort_time: The total elapsed time (in milliseconds) for all sorts that have been executed
sort_overflows: The total number of sorts that ran out of sort heap and may have required disk space for temporary storage
active_sorts: The number of sorts in the database that currently have a sort heap allocated
total_hash_joins: The total number of hash joins executed
total_hash_loops: The total number of times that a single partition of a hash join was larger than the available sort heap space
hash_join_overflows: The number of times that hash join data exceeded the available sort heap space
hash_join_small_overflows: The number of times that hash join data exceeded the available sort heap space by less than 10%
pool_data_l_reads: Indicates the number of logical read requests for data pages that have gone through the buffer pool
pool_data_p_reads: The number of read requests that required I/O to get data pages into the buffer pool
pool_data_writes: Indicates the number of times a buffer pool data page was physically written to disk
pool_index_l_reads: Indicates the number of logical read requests for index pages that have gone through the buffer pool
pool_index_p_reads: Indicates the number of physical read requests to get index pages into the buffer pool
pool_index_writes: Indicates the number of times a buffer pool index page was physically written to disk
pool_read_time: Provides the total amount of elapsed time spent processing read requests that caused data or index pages to be physically read from disk to buffer pool
pool_write_time: Provides the total amount of time spent physically writing data or index pages from the buffer pool to disk
files_closed: The total number of database files closed
pool_async_data_reads: The number of pages read asynchronously into the buffer pool
pool_async_data_writes: The number of times a buffer pool data page was physically written to disk by either an asynchronous page cleaner, or a pre-fetcher. A pre-fetcher may have written dirty pages to disk to make space for the pages being pre-fetched
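The logical and physical read counters described above combine into the standard buffer pool hit ratio: the share of page requests satisfied from the buffer pool without physical I/O. A minimal sketch (the parameter names follow DB2's snapshot element naming for data and index reads, an assumption here; the values are illustrative):

```python
def bufferpool_hit_ratio(data_l_reads, data_p_reads, index_l_reads, index_p_reads):
    """Overall buffer pool hit ratio (%): 1 - physical/logical reads,
    over data and index pages combined."""
    logical = data_l_reads + index_l_reads
    physical = data_p_reads + index_p_reads
    if logical == 0:
        return 100.0  # no page requests yet -> treat as a perfect ratio
    return 100.0 * (1.0 - physical / logical)

print(round(bufferpool_hit_ratio(90_000, 4_000, 10_000, 1_000), 1))  # 95.0
```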
pool_async_index_writes: The number of times a buffer pool index page was physically written to disk by either an asynchronous page cleaner, or a pre-fetcher. A pre-fetcher may have written dirty pages to disk to make space for the pages being pre-fetched
pool_async_index_reads: The number of index pages read asynchronously into the buffer pool by a pre-fetcher
pool_async_read_time: The total elapsed time spent reading by database manager pre-fetchers
pool_async_write_time: The total elapsed time spent writing data or index pages from the buffer pool to disk by database manager page cleaners
pool_async_data_read_reqs: The number of asynchronous read requests
pool_lsn_gap_clns: The number of times a page cleaner was invoked because the logging space used had reached a predefined criterion for the database
pool_drty_pg_steal_clns: The number of times a page cleaner was invoked because a synchronous write was needed during the victim buffer replacement for the database
pool_drty_pg_thrsh_clns: The number of times a page cleaner was invoked because a buffer pool had reached the dirty page threshold criterion for the database
prefetch_wait_time: The time an application spent waiting for an I/O server (pre-fetcher) to finish loading pages into the buffer pool
pool_data_to_estore: The number of buffer pool data pages copied to extended storage
pool_index_to_estore: The number of buffer pool index pages copied to extended storage
pool_data_from_estore: The number of buffer pool data pages copied from extended storage
pool_index_from_estore: The number of buffer pool index pages copied from extended storage
direct_reads: The number of read operations that do not use the buffer pool
direct_writes: The number of write operations that do not use the buffer pool
direct_read_reqs: The number of requests to perform a direct read of one or more sectors of data
direct_write_reqs: The number of requests to perform a direct write of one or more sectors of data
direct_read_time: The elapsed time (in milliseconds) required to perform the direct reads
direct_write_time: The elapsed time (in milliseconds) required to perform the direct writes
cat_cache_lookups: The number of times that the catalog cache was referenced to obtain table descriptor information
cat_cache_inserts: The number of times that the system tried to insert table descriptor information into the catalog cache
cat_cache_overflows: The number of times that an insert into the catalog cache failed due to the catalog cache being full
cat_cache_heap_full: The number of times that an insert into the catalog cache failed due to a heap-full condition in the database heap
pkg_cache_lookups: The number of times that an application looked for a section or package in the package cache. At a database level, it indicates the overall number of references since the database was started, or monitor data was reset
pkg_cache_inserts: The total number of times that a requested section was not available for use and had to be loaded into the package cache. This count includes any implicit prepares performed by the system
pkg_cache_num_overflows: The number of times that the package cache overflowed the bounds of its allocated memory
appl_section_lookups: Lookups of SQL sections by an application from its SQL work area
appl_section_inserts: Inserts of SQL sections by an application from its SQL work area
sec_logs_allocated: The total number of secondary log files that are currently being used for the database
log_reads: The number of log pages read from disk by the logger
log_writes: The number of log pages written to disk by the logger
total_log_used: The total amount of active log space currently used (in bytes) in the database
locks_held: The number of locks currently held
lock_list_in_use: The total amount of lock list memory (in bytes) that is in use
deadlocks: The total number of deadlocks that have occurred
lock_escals: The number of times that locks have been escalated from several row locks to a table lock
x_lock_escals: The number of times that locks have been escalated from several row locks to one exclusive table lock, or the number of times an exclusive lock on a row caused the table lock to become an exclusive lock
lock_timeouts: The number of times that a request to lock an object timed out instead of being granted
lock_waits: The total number of times that applications or connections waited for locks
lock_wait_time: The total elapsed time waited for a lock
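The cumulative lock wait counters described above (total wait time and wait count) yield an average lock wait, which is easier to track under increasing load than either raw counter. A minimal sketch, assuming the wait time counter is in milliseconds (illustrative values):

```python
def avg_lock_wait(lock_wait_time, lock_waits):
    """Average time a lock request spent waiting, from the cumulative
    wait-time and wait-count counters (same time unit as the input)."""
    return lock_wait_time / lock_waits if lock_waits else 0.0

print(avg_lock_wait(12_000, 48))  # 250.0 -> each waiter waited 250 ms on average
```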
locks_waiting: Indicates the number of agents waiting on a lock
rows_deleted: The number of row deletions attempted
rows_inserted: The number of row insertions attempted
rows_updated: The number of row updates attempted
rows_selected: The number of rows that have been selected and returned to the application
int_rows_deleted: The number of rows deleted from the database as a result of internal activity
int_rows_updated: The number of rows updated from the database as a result of internal activity
int_rows_inserted: The number of rows inserted into the database as a result of internal activity caused by triggers
static_sql_stmts: The number of static SQL statements that were attempted
dynamic_sql_stmts: The number of dynamic SQL statements that were attempted
failed_sql_stmts: The number of SQL statements that were attempted, but failed
commit_sql_stmts: The total number of SQL COMMIT statements that have been attempted
rollback_sql_stmts: The total number of SQL ROLLBACK statements that have been attempted
select_sql_stmts: The number of SQL SELECT statements that were executed
uid_sql_stmts: The number of SQL UPDATE, INSERT, and DELETE statements that were executed
ddl_sql_stmts: This element indicates the number of SQL Data Definition Language (DDL) statements that were executed
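The SELECT and UPDATE/INSERT/DELETE statement counters above characterize the workload's read/write mix, which helps interpret buffer pool and lock behavior. A minimal sketch with invented counter values:

```python
def workload_read_fraction(select_sql_stmts, uid_sql_stmts):
    """Fraction of data-access statements that were SELECTs, from the
    select_sql_stmts and uid_sql_stmts counters."""
    total = select_sql_stmts + uid_sql_stmts
    return select_sql_stmts / total if total else 0.0

print(workload_read_fraction(7_500, 2_500))  # 0.75 -> a read-heavy workload
```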
int_auto_rebinds: The number of automatic rebinds (or recompiles) that have been attempted
int_commits: The total number of commits initiated internally by the database manager
int_rollbacks: The total number of rollbacks initiated internally by the database manager
int_deadlock_rollbacks: The total number of forced rollbacks initiated by the database manager due to a deadlock. A rollback is performed on the current unit of work in an application selected by the database manager to resolve the deadlock
binds_precompiles: The number of binds and pre-compiles attempted
Application
agents_stolen: The number of times that agents are stolen from an application. Agents are stolen when an idle agent associated with an application is reassigned to work on a different application
num_assoc_agents: At the application level, this is the number of subagents associated with an application. At the database level, it is the number of sub-agents for all applications
total_sorts: The total number of sorts that have been executed
total_sort_time: The total elapsed time (in milliseconds) for all sorts that have been executed
sort_overflows: The total number of sorts that ran out of sort heap and may have required disk space for temporary storage
total_hash_joins: The total number of hash joins executed
total_hash_loops: The total number of times that a single partition of a hash join was larger than the available sort heap space
hash_join_overflows: The number of times that hash join data exceeded the available sort heap space
hash_join_small_overflows: The number of times that hash join data exceeded the available sort heap space by less than 10%
pool_data_l_reads: Indicates the number of logical read requests for data pages that have gone through the buffer pool
pool_data_p_reads: The number of read requests that required I/O to get data pages into the buffer pool
pool_data_writes: Indicates the number of times a buffer pool data page was physically written to disk
pool_index_l_reads: Indicates the number of logical read requests for index pages that have gone through the buffer pool
pool_index_p_reads: Indicates the number of physical read requests to get index pages into the buffer pool
pool_index_writes: Indicates the number of times a buffer pool index page was physically written to disk
pool_read_time: Provides the total amount of elapsed time spent processing read requests that caused data or index pages to be physically read from disk to buffer pool
prefetch_wait_time: The time an application spent waiting for an I/O server (pre-fetcher) to finish loading pages into the buffer pool
pool_data_to_estore: The number of buffer pool data pages copied to extended storage
pool_index_to_estore: The number of buffer pool index pages copied to extended storage
pool_data_from_estore: The number of buffer pool data pages copied from extended storage
pool_index_from_estore: The number of buffer pool index pages copied from extended storage
direct_reads: The number of read operations that do not use the buffer pool
direct_writes: The number of write operations that do not use the buffer pool
direct_read_reqs: The number of requests to perform a direct read of one or more sectors of data
direct_write_reqs: The number of requests to perform a direct write of one or more sectors of data
direct_read_time: The elapsed time (in milliseconds) required to perform the direct reads
direct_write_time: The elapsed time (in milliseconds) required to perform the direct writes
cat_cache_lookups: The number of times that the catalog cache was referenced to obtain table descriptor information
cat_cache_inserts: The number of times that the system tried to insert table descriptor information into the catalog cache
cat_cache_overflows: The number of times that an insert into the catalog cache failed due to the catalog cache being full
cat_cache_heap_full: The number of times that an insert into the catalog cache failed due to a heap-full condition in the database heap
pkg_cache_lookups: The number of times that an application looked for a section or package in the package cache. At a database level, it indicates the overall number of references since the database was started, or monitor data was reset
214
pkg_cache_inserts: The total number of times that a requested section was not available for use and had to be loaded into the package cache. This count includes any implicit prepares performed by the system.
appl_section_lookups: Lookups of SQL sections by an application from its SQL work area.
appl_section_inserts: Inserts of SQL sections by an application into its SQL work area.
uow_log_space_used: The amount of log space (in bytes) used in the current unit of work of the monitored application.
locks_held: The number of locks currently held.
deadlocks: The total number of deadlocks that have occurred.
lock_escals: The number of times that locks have been escalated from several row locks to a table lock.
x_lock_escals: The number of times that locks have been escalated from several row locks to one exclusive table lock, or the number of times an exclusive lock on a row caused the table lock to become an exclusive lock.
lock_timeouts: The number of times that a request to lock an object timed out instead of being granted.
lock_waits: The total number of times that applications or connections waited for locks.
lock_wait_time: The total elapsed time waited for a lock.
locks_waiting: The number of agents waiting on a lock.
uow_lock_wait_time: The total amount of elapsed time this unit of work has spent waiting for locks.
rows_deleted: The number of row deletions attempted.
rows_inserted: The number of row insertions attempted.
rows_updated: The number of row updates attempted.
rows_selected: The number of rows that have been selected and returned to the application.
rows_written: The number of rows changed (inserted, deleted, or updated) in the table.
rows_read: The number of rows read from the table.
int_rows_deleted: The number of rows deleted from the database as a result of internal activity.
int_rows_updated: The number of rows updated in the database as a result of internal activity.
int_rows_inserted: The number of rows inserted into the database as a result of internal activity caused by triggers.
open_rem_curs: The number of remote cursors currently open for this application, including those cursors counted by open_rem_curs_blk.
open_rem_curs_blk: The number of remote blocking cursors currently open for this application.
rej_curs_blk: The number of times that a request for an I/O block at the server was rejected and the request was converted to non-blocked I/O.
acc_curs_blk: The number of times that a request for an I/O block was accepted.
open_loc_curs: The number of local cursors currently open for this application, including those cursors counted by open_loc_curs_blk.
open_loc_curs_blk: The number of local blocking cursors currently open for this application.
static_sql_stmts: The number of static SQL statements that were attempted.
dynamic_sql_stmts: The number of dynamic SQL statements that were attempted.
failed_sql_stmts: The number of SQL statements that were attempted, but failed.
commit_sql_stmts: The total number of SQL COMMIT statements that have been attempted.
rollback_sql_stmts: The total number of SQL ROLLBACK statements that have been attempted.
select_sql_stmts: The number of SQL SELECT statements that were executed.
uid_sql_stmts: The number of SQL UPDATE, INSERT, and DELETE statements that were executed.
ddl_sql_stmts: The number of SQL Data Definition Language (DDL) statements that were executed.
int_auto_rebinds: The number of automatic rebinds (or recompiles) that have been attempted.
int_commits: The total number of commits initiated internally by the database manager.
int_rollbacks: The total number of rollbacks initiated internally by the database manager.
int_deadlock_rollbacks: The total number of forced rollbacks initiated by the database manager due to a deadlock. A rollback is performed on the current unit of work in an application selected by the database manager to resolve the deadlock.
binds_precompiles: The number of binds and precompiles attempted.
Oracle Graph
The Oracle graph displays information from the Oracle V$ tables: session statistics (V$SESSTAT), system statistics (V$SYSSTAT), and other table counters defined by the user in the custom query. In the following Oracle graph, the V$SYSSTAT resource values are shown as a function of the elapsed scenario time.
The following measurements are most commonly used when monitoring the Oracle server (from the V$SYSSTAT table):
CPU used by this session: The amount of CPU time (in tens of milliseconds) used by a session between the time a user call started and ended. Some user calls can be completed within 10 milliseconds and, as a result, the start and end user-call time can be the same. In this case, 0 milliseconds are added to the statistic. A similar problem can exist in the operating system reporting, especially on systems that suffer from many context switches.
Bytes received via SQL*Net from client: The total number of bytes received from the client over Net8.
Logons current: The total number of current logons.
Opens of replaced files: The total number of files that needed to be reopened because they were no longer in the process file cache.
User calls: Oracle allocates resources (Call State Objects) to keep track of relevant user call data structures every time you log in, parse, or execute. When determining activity, the ratio of user calls to RPI calls gives you an indication of how much internal work gets generated as a result of the type of requests the user is sending to Oracle.
SQL*Net roundtrips to/from client: The total number of Net8 messages sent to, and received from, the client.
Bytes sent via SQL*Net to client: The total number of bytes sent to the client from the foreground process(es).
Opened cursors current: The total number of current open cursors.
DB block changes: Closely related to consistent changes, this statistic counts the total number of changes that were made to all blocks in the SGA that were part of an update or delete operation. These are changes that are generating redo log entries and hence will be permanent changes to the database if the transaction is committed. This statistic is a rough indication of total database work and indicates (possibly on a per-transaction level) the rate at which buffers are being dirtied.
Total file opens: The total number of file opens being performed by the instance. Each process needs a number of files (control file, log file, database file) in order to work against the database.
The following table describes the default counters that can be monitored on version 6.5 of SQL Server:

% Total Processor Time (NT): The average percentage of time that all the processors on the system are busy executing non-idle threads. On a multi-processor system, if all processors are always busy this is 100%, if all processors are 50% busy this is 50%, and if one quarter of the processors are 100% busy this is 25%. It can be viewed as the fraction of the time spent doing useful work. Each processor is assigned an Idle thread in the Idle process which consumes those unproductive processor cycles not used by any other threads.
Cache Hit Ratio: The percentage of time that a requested data page was found in the data cache (instead of being read from disk).
I/O - Batch Writes/sec: The number of 2K pages written to disk per second, using Batch I/O. The checkpoint thread is the primary user of Batch I/O.
I/O - Lazy Writes/sec: The number of 2K pages flushed to disk per second by the Lazy Writer.
I/O - Outstanding Reads: The number of physical reads pending.
I/O - Outstanding Writes: The number of physical writes pending.
I/O - Page Reads/sec: The number of physical page reads per second.
I/O Transactions/sec: The number of Transact-SQL command batches executed per second.
User Connections: The number of open user connections.
% Processor Time (Win 2000): The percentage of time that the processor is executing a non-idle thread. This counter was designed as a primary indicator of processor activity. It is calculated by measuring the time that the processor spends executing the thread of the Idle process in each sample interval, and subtracting that value from 100%. (Each processor has an idle thread which consumes cycles when no other threads are ready to run.) It can be viewed as the percentage of the sample interval spent doing useful work. This counter displays the average percentage of busy time observed during the sample interval. It is calculated by monitoring the time the service was inactive, and then subtracting that value from 100%.
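The multi-processor averaging that % Total Processor Time performs can be sketched as a one-line calculation; the busy fractions below are illustrative values, not taken from a real sample:

```python
def total_processor_time(busy_fractions):
    """Average per-processor busy fractions into a system-wide
    '% Total Processor Time' style value."""
    return 100.0 * sum(busy_fractions) / len(busy_fractions)

# All four processors always busy -> 100%
print(total_processor_time([1.0, 1.0, 1.0, 1.0]))
# All four processors 50% busy -> 50%
print(total_processor_time([0.5, 0.5, 0.5, 0.5]))
# One of four processors 100% busy -> 25%
print(total_processor_time([1.0, 0.0, 0.0, 0.0]))
```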
Sybase Graph
The Sybase graph shows the resource usage on the Sybase database server machine as a function of the elapsed scenario time. The following tables describe the measurements that can be monitored on a Sybase server:
Object: Network
Average packet size (Read): Reports the number of network packets received.
Average packet size (Send): Reports the number of network packets sent.
Network bytes (Read): Reports the number of bytes received, over the sampling interval.
Network bytes (Read)/sec: Reports the number of bytes received, per second.
Network bytes (Send): Reports the number of bytes sent, over the sampling interval.
Network bytes (Send)/sec: Reports the number of bytes sent, per second.
Network packets (Read): Reports the number of network packets received, over the sampling interval.
Network packets (Read)/sec: Reports the number of network packets received, per second.
Network packets (Send): Reports the number of network packets sent, over the sampling interval.
Network packets (Send)/sec: Reports the number of network packets sent, per second.

Object: Memory
- Reports the amount of memory, in bytes, allocated for the page cache.
Object: Disk
- Reports the number of reads made from a database device.
- Reports the number of writes made to a database device.
- Reports the number of times that access to a device had to wait.
- Reports the number of times access to a device was granted.

Object: Engine
- Reports the percentage of time during which the Adaptive Server is in a "busy" state.
- Reports how much "busy" time was used by the engine.
- Reports the number of data page reads, whether satisfied from cache or from a database device.
- Reports the number of data page reads that could not be satisfied from the data cache.
- Reports the number of data pages written to a database device.

Object: Stored Procedure
- Reports the number of times a stored procedure was executed, over the sampling interval.
Executed (session): Reports the number of times a stored procedure was executed, during the session.
- Reports the time, in seconds, spent executing a stored procedure, over the sampling interval.
- Reports the time, in seconds, spent executing a stored procedure, during the session.
Object: Locks
- Reports the percentage of successful requests for locks.
- Reports the number of locks. This is an accumulated value.
- Reports the number of locks that were granted immediately, without having to wait for another lock to be released.
- Reports the number of locks that were granted after waiting for another lock to be released.
Not granted: Reports the number of locks that were requested but not granted.
Wait time (avg.): Reports the average wait time for a lock.
Locks/sec: Reports the number of locks. This is an accumulated value.

Object: SqlSrvr
% Processor time (server): Reports the percentage of time that the Adaptive Server is in a "busy" state.
Transactions: Reports the number of committed Transact-SQL statement blocks (transactions).
- Reports the number of deadlocks.
- Reports the percentage of times that a data page read could be satisfied from cache without requiring a physical page read.
Pages (Read): Reports the number of data page reads, whether satisfied from cache or from a database device.
Object: Cache
- Reports the number of data page reads, whether satisfied from cache or from a database device, per second.
- Reports the number of data page reads that could not be satisfied from the data cache.
- Reports the number of data page reads, per second, that could not be satisfied from the data cache.
- Reports the number of data pages written to a database device.
- Reports the number of data pages written to a database device, per second.

Object: Process
- Reports the percentage of time that a process running a given application was in the "Running" state (out of the time that all processes were in the "Running" state).
Locks/sec: Reports the number of locks, by process. This is an accumulated value.
% Cache hit: Reports the percentage of times that a data page read could be satisfied from cache without requiring a physical page read, by process.
Pages (Write): Reports the number of data pages written to a database device, by process.

Object: Transaction
Transactions: Reports the number of committed Transact-SQL statement blocks (transactions), during the session.
Object: Transaction
- Reports the number of rows deleted from database tables during the session.
Inserts: Reports the number of insertions into a database table during the session.
- Reports the updates to database tables during the session.
- Reports the sum of expensive, in-place and not-in-place updates (everything except deferred updates) during the session.
Transactions/sec: Reports the number of committed Transact-SQL statement blocks (transactions), per second.
- Reports the number of rows deleted from database tables, per second.
- Reports the number of insertions into a database table, per second.
- Reports the updates to database tables, per second.
- Reports the sum of expensive, in-place and not-in-place updates (everything except deferred updates), per second.
For example, in the following graph the actual value of RTSP Clients two minutes into the scenario is 200: the displayed value of 20 multiplied by the scale factor of 10.
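The scale factor arithmetic reduces to a single multiplication; a minimal sketch using the values from the example above:

```python
def actual_value(displayed_value, scale_factor):
    """Recover a measurement's true value from the value shown
    on a scaled Analysis graph."""
    return displayed_value * scale_factor

# The RTSP Clients line shows 20, with a scale factor of 10:
print(actual_value(20, 10))  # 200
```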
This graph displays the Total Number of Packets, Number of Recovered Packets, Current Bandwidth, and First Frame Time measurements during the first four and a half minutes of the scenario. Note that the scale factor is the same for all of the measurements. The following table describes the RealPlayer Client measurements that are monitored:
Current Bandwidth (Kbits/sec): The number of kilobits in the last second.
Buffering Event Time (sec): The average time spent on buffering.
Network Performance: The ratio (percentage) between the current bandwidth and the actual bandwidth of the clip.
Percentage of Recovered Packets: The percentage of error packets that were recovered.
Percentage of Lost Packets: The percentage of packets that were lost.
Percentage of Late Packets: The percentage of late packets.
Time to First Frame Appearance (sec): The time until the first frame appears (measured from the start of the replay).
Number of Buffering Events: The average number of all buffering events.
Number of Buffering Seek Events: The average number of buffering events resulting from a seek operation.
Buffering Seek Time: The average time spent on buffering events resulting from a seek operation.
Number of Buffering Congestion Events: The average number of buffering events resulting from network congestion.
Buffering Congestion Time: The average time spent on buffering events resulting from network congestion.
Number of Buffering Live Pause Events: The average number of buffering events resulting from a live pause.
Buffering Live Pause Time: The average time spent on buffering events resulting from a live pause.
In this graph, the number of RTSP Clients remained steady during the first four and a half minutes of the scenario. The Total Bandwidth and number of Total Clients fluctuated slightly. The number of TCP Connections fluctuated more significantly. Note that the scale factor for the TCP Connections and Total Clients measurements is 10, and the scale factor for Total Bandwidth is 1/1000.
The following default measurements are available for the RealPlayer Server:
Encoder Connections: The number of active encoder connections.
HTTP Clients: The number of active clients using HTTP.
Monitor Connections: The number of active server monitor connections.
Multicast Connections: The number of active multicast connections.
PNA Clients: The number of active clients using PNA.
RTSP Clients: The number of active clients using RTSP.
Splitter Connections: The number of active splitter connections.
TCP Connections: The number of active TCP connections.
Total Bandwidth: The number of bits per second being consumed.
Total Clients: The total number of active clients.
UDP Clients: The number of active UDP connections.
Aggregate Read Rate: The total, aggregate rate (bytes/sec) of file reads.
Aggregate Send Rate: The total, aggregate rate (bytes/sec) of stream transmission.
Connected Clients: The number of clients connected to the server.
Connection Rate: The rate at which clients are connecting to the server.
Controllers: The number of controllers currently connected to the server.
HTTP Streams: The number of HTTP streams being streamed.
Late Reads: The number of late read completions per second.
Pending Connections: The number of clients that are attempting to connect to the server, but are not yet connected. This number may be high if the server is running near maximum capacity and cannot process a large number of connection requests in a timely manner.
Stations: The number of station objects that currently exist on the server.
Streams: The number of stream objects that currently exist on the server.
Stream Errors: The cumulative number of errors occurring per second.
In this graph, the Total number of recovered packets remained steady during the first two and a half minutes of the scenario. The Number of Packets and Stream Interruptions fluctuated significantly. The Average Buffering Time increased moderately, and the Player Bandwidth increased and then decreased moderately. Note that the scale factor for the Stream Interruptions and Average Buffering Events measurements is 10, and the scale factor for Player Bandwidth is 1/10.
The following table describes the Media Player Client measurements that are monitored:
Average Buffering Events: The number of times the Media Player client had to buffer incoming media data due to insufficient media content.
Average Buffering Time (sec): The time spent by the Media Player client waiting for a sufficient amount of media data in order to continue playing the media clip.
Current bandwidth (Kbits/sec): The number of kilobits received per second.
Number of Packets: The number of packets sent by the server for a particular media clip.
Stream Interruptions: The number of interruptions encountered by the Media Player client while playing a media clip. This measurement includes the number of times the client had to buffer incoming media data, and any errors that occurred during playback.
Stream Quality (Packet-level): The percentage ratio of packets received to total packets.
Stream Quality (Sampling-level): The percentage of stream samples received on time (no delays in reception).
Total number of recovered packets: The number of lost packets that were recovered. This value is only relevant during network playback.
Total number of lost packets: The number of lost packets that were not recovered. This value is only relevant during network playback.
SAP Graph
The SAP graph shows the resource usage of a SAP R/3 system server as a function of the elapsed scenario time.
Note: There are differences in the scale factor for some of the measurements.
The following are the most commonly monitored counters for a SAP R/3 system server:
Average CPU time: The average CPU time used in the work process.
Average response time: The average response time, measured from the time a dialog sends a request to the dispatcher work process, through the processing of the dialog, until the dialog is completed and the data is passed to the presentation layer. The response time between the SAP GUI and the dispatcher is not included in this value.
Average wait time: The average amount of time that an unprocessed dialog step waits in the dispatcher queue for a free work process. Under normal conditions, the dispatcher work process should pass a dialog step to the application process immediately after receiving the request from the dialog step. Under these conditions, the average wait time would be a few milliseconds. A heavy load on the application server or on the entire system causes queues at the dispatcher queue.
Average load and generation time: The time needed to load and generate objects, such as ABAP source code and screen information, from the database.
Database calls: The number of parsed requests sent to the database.
Database requests: The number of logical ABAP requests for data in the database. These requests are passed through the R/3 database interface and parsed into individual database calls. The proportion of database calls to database requests is important. If access to information in a table is buffered in the SAP buffers, database calls to the database server are not required. Therefore, the ratio of calls to requests gives an overall indication of the efficiency of table buffering. A good ratio would be 1:10.
GUI time: The GUI time is measured in the work process and is the response time between the dispatcher and the GUI.
Roll ins: The number of rolled-in user contexts.
Roll outs: The number of rolled-out user contexts.
Roll-in time: The processing time for roll ins.
Roll-out time: The processing time for roll outs.
Roll wait time: The queue time in the roll area. When synchronous RFCs are called, the work process executes a roll out and may have to wait for the end of the RFC in the roll area, even if the dialog step is not yet completed. In the roll area, RFC server programs can also wait for other RFCs sent to them.
Average DB request time: The average response time for all commands sent to the database system (in milliseconds). The time depends on the CPU capacity of the database server, the network, the buffering, and on the input/output capabilities of the database server. Access times for buffered tables are many magnitudes faster and are not considered in this measurement.
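The calls-to-requests buffering check described above reduces to a simple ratio; a minimal sketch with illustrative counter values (not taken from a real R/3 system):

```python
def buffering_efficiency(db_calls, db_requests):
    """Ratio of parsed database calls to logical database requests.
    Lower is better; around 0.1 (a 1:10 ratio) indicates effective
    SAP table buffering."""
    return db_calls / db_requests

# 1,500 parsed calls produced by 15,000 logical requests:
ratio = buffering_efficiency(1500, 15000)
print(f"1:{1 / ratio:.0f}")  # 1:10
```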
measurements to monitor. For more information on activating and configuring the Java performance monitors, see the LoadRunner Controller User's Guide (Windows).
EJB Breakdown
The EJB Breakdown summarizes fundamental result data about EJB classes or methods and presents it in table format. Using the EJB Breakdown table, you can quickly identify the Java classes or methods which consume the most time during the test. The table can be sorted by column, and the data can be viewed either by EJB class or EJB method.
The Average Response Time column shows how long, on average, a class or method takes to execute. The next column, Call Count, specifies the number of times the class or method was invoked. The final column, Total Response Time, specifies how much time was spent overall on the class or method. It is calculated by multiplying the first two data columns together.
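The relationship between the three columns can be sketched as a one-line computation; a minimal example, using an illustrative average of 22.632 milliseconds over 1,327 calls:

```python
# Total Response Time = Average Response Time x Call Count
avg_response_ms = 22.632   # average time per invocation, in milliseconds
call_count = 1327          # number of invocations

total_response_sec = avg_response_ms * call_count / 1000.0
print(round(total_response_sec))  # ~30 seconds overall
```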
The graphical representations of each of these columns are the EJB Average Response Time Graph, the EJB Call Count Distribution Graph, and the EJB Total Operation Time Distribution Graph. Classes are listed in the EJB Class column in the form Class:Host. In the table above, the class examples.ejb.basic.beanManaged.AccountBean took an average of 22.632 milliseconds to execute and was called 1,327 times. Overall, this class took 30 seconds to execute. To sort the list by a column, click the column heading. The list above is sorted by Average Response Time, whose heading contains the triangle icon indicating a sort in descending order. The table initially displays EJB classes, but you can also view the list of EJB methods incorporated within the classes:
The graph's x-axis indicates the elapsed time from the beginning of the scenario run. The y-axis indicates how much time an EJB class or method takes to execute. Each class or method is represented by a different colored line on the graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored line belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that this class has higher response times than all other EJB classes. At 3:20 minutes into the scenario, it records an average response time of 43 milliseconds. Note that the 43 millisecond data point is an average, taken from all data points recorded within a 5 second interval (the default granularity). You can change the length of this sample interval; see Changing the Granularity of the Data.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
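The interval averaging described above can be sketched as follows; the raw samples are illustrative values, assuming the default 5 second granularity:

```python
def granularity_average(samples, interval_start, granularity=5):
    """Average all (time, value) samples that fall within one
    granularity interval [interval_start, interval_start + granularity)."""
    values = [v for t, v in samples
              if interval_start <= t < interval_start + granularity]
    return sum(values) / len(values)

# Raw response-time samples: (seconds into the run, milliseconds)
samples = [(200, 41), (201, 44), (203, 44)]
print(granularity_average(samples, 200))  # 43.0 -> one plotted data point
```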
The table initially displays EJB classes, but you can also view the list of EJB methods incorporated within the classes:
The graph's x-axis indicates the elapsed time from the beginning of the scenario run. The y-axis indicates how many calls were made to an EJB class or method. Each class or method is represented by a different colored line on the graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored line belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that calls to this class begin 2:20 minutes into the scenario run. There are 537 calls at the 2:25 minute point.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
The number of calls made to the class or method is listed in the Call Count column of the EJB Breakdown table. Each class or method is represented by a different colored area on the pie graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored area belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that 27.39% of calls are made to this class. The actual figures can be seen in the Call Count column of the EJB Breakdown table: there are 1327 calls to this class out of a total of 4844 calls.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
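The pie percentage comes straight from the Call Count column; a minimal sketch using the figures quoted above:

```python
class_calls = 1327   # calls to examples.ejb.basic.beanManaged.AccountBean
total_calls = 4844   # total calls to all EJB classes

share = 100.0 * class_calls / total_calls
print(f"{share:.2f}%")  # 27.39%
```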
This graph is similar to the EJB Call Count Graph except that the y-axis indicates how many invocations were made to an EJB class or method per second. Each class or method is represented by a different colored line on the graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored line belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that calls to this class begin 2:20 minutes into the scenario run. There are 107 calls per second at the 2:25 minute mark.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
The graph's x-axis indicates the elapsed time from the beginning of the scenario run. The y-axis indicates the total time an EJB class or method is in operation. Each class or method is represented by a different colored line on the graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored line belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that throughout the scenario, this class consumes more time than any other, especially at 2 minutes and 25 seconds into the scenario run, where all calls to this class take nearly 7 seconds.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
Each class or method is represented by a different colored area on the pie graph. The legend frame (which is found below the graph) identifies the classes by color:
This legend shows that the green colored area belongs to the EJB class examples.ejb.basic.beanManaged.AccountBean. Looking at the graph above, we see that this class takes up 94.08% of the EJB operational time.
Hint: To highlight a specific class line in the graph, select the class row in the legend.
JProbe Graph
The JProbe graph shows the resource usage of Java-based applications as a function of the elapsed scenario time. The following JProbe counters are available:
Allocated Memory (heap): The amount of allocated memory in the heap (in bytes).
Available Memory (heap): The amount of available memory in the heap (in bytes).
Note: There are differences in the scale factor between the two available measurements.
- The number of garbage collections during the last poll period.
- The total time, in milliseconds, spent performing garbage collections since the metric was enabled. (Disabling the metric resets the value to zero.)
Total GCs: The total number of garbage collections since the metric was enabled. (Disabling the metric resets the value to zero.)
Total KB Freed: The total number of kilobytes freed since the metric was enabled. (Disabling the metric resets the value to zero.)
- Used heap size, in kilobytes.
- The average number of kilobytes per object since the metric was enabled. (Disabling the metric resets the value to zero.)
Free-to-Alloc Ratio: The ratio of objects freed to objects allocated since the metric was enabled. (Disabling the metric resets the value to zero.)
KB Allocated: The number of kilobytes allocated since the metric was enabled. (Disabling the metric resets the value to zero.)
- The change in the number of live objects during the last poll period.
- The number of objects allocated in the last poll period.
- The number of objects freed during the last poll period.
- The average number of objects freed per garbage collection since the metric was enabled. (Disabling the metric resets the value to zero.)
- The number of kilobytes allocated since the metric was enabled. (Disabling the metric resets the value to zero.)
- The number of objects allocated since the metric was enabled. (Disabling the metric resets the value to zero.)
- The number of objects freed since the metric was enabled. (Disabling the metric resets the value to zero.)
TowerJ Graph
The TowerJ graph shows the resource usage of the TowerJ Java virtual machine as a function of the elapsed scenario time. The following measurements are available for the TowerJ Java virtual machine:
ThreadResource measurements:
ThreadStartCountTotal: The number of threads that were started.
ThreadStartCountDelta: The number of threads that were started since the last report.
ThreadStopCountTotal: The number of threads that were stopped.
ThreadStopCountDelta: The number of threads that were stopped since the last report.
GarbageCollection Resource Measurement GarbageCollectionCount Total GarbageCollectionCount Delta PreGCHeapSizeTotal PreGCHeapSizeDelta PostGCHeapSizeTotal PostGCHeapSizeDelta NumPoolsTotal NumPoolsDelta NumSoBlocksTotal NumSoBlocksDelta NumLoBlocksTotal NumLoBlocksDelta NumFullSoBlocksTotal NumFullSoBlocksDelta TotalMemoryTotal TotalMemoryDelta NumMallocsTotal NumMallocsDelta NumBytesTotal NumBytesDelta
Description The number of times the garbage collector has run. The number of times the garbage collector has run since the last report. The total pre-GC heap space. The total pre-GC heap space since the last report The total post-GC heap space. The total post-GC heap space since the last report. The number of pools. The number of pools since the last report. The number of small object blocks. The number of small object blocks since the last report. The number of large object blocks. The number of large object blocks. The number of full small object blocks. The number of full small object blocks since the last report. Total memory (heap size). Total memory (heap size) since the last report. The number of current mallocs. The number of current mallocs since the last report. The number of current bytes allocated. The number of current bytes allocated since the last report.
Description:
The total number of mallocs.
The total number of mallocs since the last report.
The total number of bytes allocated.
The total number of bytes allocated since the last report.

Description:
The number of exceptions thrown.
The number of exceptions thrown since the last report.

Description:
The number of objects.
The number of objects since the last report.
Note: There are differences in the scale factor for some of the measurements.
In the first run, the average transaction time was approximately 59 seconds. In the second run, the average time was 4.7 seconds. It is apparent that the system responds much more slowly under a greater load.
The Cross Result graphs have an additional filter and group by category: Result Name. The above graph is filtered to the OrderRide transaction for results res12 and res15, grouped by Result Name.
2 Click Add to add an additional result set to the Result List. The Select Result Files for Cross Results dialog box opens.
3 Locate a results directory and select its result file (.lrr). Click OK. The scenario is added to the Result List.
4 Repeat steps 2 and 3 until all the results you want to compare are in the Result List.
5 When you generate a Cross Result graph, by default it is saved as a new Analysis session. To save it in an existing session, clear the Create New Analysis Session for Cross Result box.
6 Click OK. The Analysis processes the result data and asks for confirmation to open the default graphs.
After you generate a Cross Result graph, you can filter it to display specific scenarios and transactions. You can also manipulate the graph by changing the granularity, zoom, and scale. For more information, see Chapter 2, Working with Analysis Graphs.
Merging Graphs
The Analysis lets you merge the results of two graphs from the same scenario into a single graph. The merging allows you to compare several different measurements at once. For example, you can create a merged graph that displays the network delay and the number of running Vusers as a function of the elapsed time.
In order to merge graphs, their x-axes must be the same measurement. For example, you can merge the Web Throughput and Hits per Second graphs, because their common x-axis is Scenario Elapsed Time. The drop-down list shows only the active graphs whose x-axis is common with the current graph.
The Analysis provides three types of merging: Overlay, Tile, and Correlate.
Overlay: Superimpose the contents of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values. The right y-axis shows the values of the graph that was merged. There is no limit to the number of graphs that you can overlay. When you overlay two graphs, the y-axis for each graph is displayed separately to the right and left of the graph. When you overlay more than two graphs, the Analysis displays a single y-axis, scaling the different measurements accordingly.
In the following example, the Throughput and Hits per Second graphs are overlaid with one another.
Tile: View the contents of two graphs that share a common x-axis in a tiled layout, one above the other. In the following example, the Throughput and Hits per Second graphs are tiled one above the other.
Correlate: Plot the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph. The y-axis of the graph that was merged becomes the merged graph's y-axis.
In the following example, the Throughput and Hits per Second graphs are correlated with one another. The x-axis displays the Bytes per Second (the Throughput measurement) and the y-axis shows the Hits per Second.
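The Correlate merge can be pictured as pairing the two graphs' y-values at matching elapsed-time samples and re-plotting one series against the other. A minimal sketch of that pairing; the function name and all sample values are invented for illustration, not taken from the product:

```python
def correlate_merge(graph_a, graph_b):
    """Pair y-values of two graphs at common x-values (elapsed seconds).

    graph_a, graph_b: dicts mapping elapsed time -> measurement value.
    Returns (a_value, b_value) points: graph_a's y-axis becomes the
    merged graph's x-axis, and graph_b's y-axis stays the y-axis.
    """
    common_times = sorted(set(graph_a) & set(graph_b))
    return [(graph_a[t], graph_b[t]) for t in common_times]

# Invented data: throughput in bytes/sec and hits/sec, sampled every 5 s.
throughput = {0: 1200.0, 5: 64000.0, 10: 71000.0, 15: 68000.0}
hits_per_sec = {0: 0.2, 5: 9.5, 10: 11.0, 15: 10.4}

points = correlate_merge(throughput, hits_per_sec)
# Each point plots hits/sec (y) against bytes/sec (x), e.g. (64000.0, 9.5).
```

Only samples that exist in both series at the same elapsed time are paired, mirroring the requirement that the graphs share a common x-axis.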
2 Choose View > Merge Graphs or click the Merge Graphs button. The Merge Graphs dialog box opens and displays the name of the active graph.
3 Select a graph with which you want to merge your active graph. Only the graphs with an x-axis common to the active graph are available.
4 Select the type of merge: Overlay, Tile, or Correlate.
5 Specify a title for the merged graph. By default, the Analysis combines the titles of the two graphs being merged.
6 Click OK.
7 Filter the graph just as you would filter any ordinary graph.
The Summary report provides general information about the scenario run. You can view the Summary report at any time from the Analysis window.
You can instruct the Analysis to create an HTML report. The Analysis creates an HTML report for each one of the open graphs.
Transaction reports provide performance information about the transactions defined within the Vuser scripts. These reports give you a statistical breakdown of your results and allow you to print and export the data.
You can save the Summary report to an Excel file by selecting View > Export Summary to Excel.
To create HTML reports:
1 Open all graphs that you want to be included in the report.
2 Choose Reports > HTML Report or click the Create HTML Report button. The Select Report Filename and Path dialog box opens.
3 Specify a path and file name for the HTML report and click OK. The Analysis saves a Summary report with the specified file name in the selected folder, and saves the rest of the graphs in a folder with the same name as the file. When you create an HTML report, the Analysis opens your default browser and displays the Summary report.
4 To view an HTML report for one of the graphs, click its link in the left frame.
5 To copy the HTML reports to another location, be sure to copy both the file and the folder with the same name. For example, if you named your HTML report test1, copy test1.html and the folder test1 to the desired location.
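Step 5 amounts to copying two file-system objects together. A hedged sketch of doing the same programmatically; the test1 names follow the example above, while the helper name, paths, and file contents are illustrative:

```python
import shutil
import tempfile
from pathlib import Path

def copy_html_report(report_html, dest_dir):
    """Copy an Analysis HTML report together with its graph folder.

    As step 5 describes, the report file (test1.html) and the folder
    with the same name (test1) must travel together.
    """
    report_html = Path(report_html)
    graph_folder = report_html.with_suffix("")   # e.g. .../test1
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(report_html, dest_dir / report_html.name)
    shutil.copytree(graph_folder, dest_dir / graph_folder.name)
    return dest_dir / report_html.name, dest_dir / graph_folder.name

# Self-contained demo in a throwaway directory (names are illustrative):
work = Path(tempfile.mkdtemp())
(work / "test1").mkdir()
(work / "test1.html").write_text("<html></html>")
(work / "test1" / "graph1.html").write_text("graph page")

copied_html, copied_folder = copy_html_report(work / "test1.html", work / "dest")
```

If only the .html file is copied, the links in its left frame point at graph pages that no longer exist, which is why the folder goes along with it.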
To display a report:
1 Open the desired Analysis session file (.lra extension) or LoadRunner result file (.lrr extension), if it is not already open.
2 From the Reports menu, choose a report. The report is generated and displayed. You can display multiple copies of the same report.
Report Header
The header displays general run-time information.
Export to file
The report viewer toolbar contains the following buttons:
Zoom: Toggles among actual size, full page, and magnified views of the report.
Print: Prints the displayed report.
Export to file: Exports the displayed information to a text file.
If there are multiple values for the y-axis, as in the Transaction Performance by Vuser graph (min, average, and max), all of the plotted values are displayed.
The following values are reported:
Start time: the system time at the beginning of the transaction.
End time: the actual system time at the end of the transaction, including think time and wasted time.
Duration: the duration of the transaction, in the format hrs:minutes:seconds:milliseconds. This value includes think time, but does not include wasted time.
Think time: the Vuser's think time delay during the transaction.
Wasted time: the LoadRunner internal processing time not attributed to the transaction time or think time (primarily for RTE Vusers).
Results: the final transaction status, either Pass or Fail.
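These definitions imply a simple arithmetic relationship: the span from start time to end time includes both think time and wasted time, while the reported duration drops only the wasted time. A small sketch with invented numbers:

```python
def transaction_duration(start_time, end_time, wasted_time):
    """Duration as the Transaction report defines it: the elapsed span
    minus wasted time (think time remains included)."""
    return (end_time - start_time) - wasted_time

# Invented example, times in seconds:
start, end = 100.0, 112.5      # elapsed span of 12.5 s
think, wasted = 3.0, 0.5       # think time stays part of the duration
duration = transaction_duration(start, end, wasted)   # 12.0 s
net_response = duration - think                       # 9.0 s without think time
```

Subtracting think time as well, as in the last line, gives the time the server and network actually consumed.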
2 In the Server box, type the URL address of the Web server on which TestDirector is installed.
Note: You can choose a Web server accessible via a Local Area Network (LAN) or a Wide Area Network (WAN).
3 Click Connect. Once the connection to the server is established, the server's name is displayed in read-only format in the Server box.
4 From the Project box in the Project Connection section, select a TestDirector project.
5 In the User Name box, type a user name.
6 In the Password box, type a password.
7 Click Connect to connect LoadRunner to the selected project. Once the connection to the selected project is established, the project's name is displayed in read-only format in the Project box.
8 To automatically reconnect to the TestDirector server and the selected project on startup, check the Reconnect on startup box.
9 If you check the Reconnect on startup box, you can save the specified password to reconnect on startup. Check the Save password for reconnection on startup check box. If you do not save your password, you will be prompted to enter it when LoadRunner connects to TestDirector on startup.
10 Click Close to close the TestDirector Connection dialog box.
2 To disconnect LoadRunner from the selected project, click Disconnect in the Project Connection section.
3 To disconnect LoadRunner from the selected server, click Disconnect in the Server Connection section.
4 Click Close to close the TestDirector Connection dialog box.
To open a result file directly from the file system, click the File System button. The Open Result File for New Analysis Session dialog box opens. (From the Open Result File for New Analysis Session dialog box, you may return to the Open Result File for New Analysis Session from TestDirector Project dialog box by clicking the TestDirector button.)
3 Click the relevant subject in the test plan tree. To expand the tree and view sublevels, double-click closed folders. To collapse the tree, double-click open folders. Note that when you select a subject, the sessions that belong to the subject appear in the Run Name list.
4 Select an Analysis session from the Run Name list. The scenario appears in the read-only Test Name box.
5 Click OK to open the session. LoadRunner loads the session. The name of the session appears in the Analysis title bar.
Note: You can also open Analysis sessions from the recent session list in the File menu. If you select a session located in a TestDirector project, but LoadRunner is currently not connected to that project, the TestDirector Connection dialog box opens. Enter your user name and password to log in to the project, and click OK.
To open a scenario directly from the file system, click the File System button. The Open Existing Analysis Session File dialog box opens. (From the Open Existing Analysis Session File dialog box, you may return to the Open Existing Analysis Session File from TestDirector Project dialog box by clicking the TestDirector button.)
3 Click the relevant subject in the test plan tree. To expand the tree and view sublevels, double-click closed folders. To collapse the tree, double-click open folders. Note that when you select a subject, the sessions that belong to the subject appear in the Run Name list.
4 Select a session from the Run Name list. The session appears in the read-only Test Name box.
5 Click OK to open the session. LoadRunner loads the session. The name of the session appears in the Analysis title bar.
Note: You can also open sessions from the recent sessions list in the File menu. If you select a session located in a TestDirector project, but LoadRunner is currently not connected to that project, the TestDirector Connection dialog box opens. Enter your user name and password to log in to the project, and click OK.
Launch the Microsoft Word Report tool by selecting Reports > Microsoft Word Report from the main menu of LoadRunner Analysis:
The dialog box is divided into three tabs: Format, Primary Content, and Additional Graphs. Once you have set the options you require, click OK. The report is then generated, and a window appears reporting its progress. This process might take a few minutes. When it completes, Analysis launches the Microsoft Word application containing the report. The file is saved to the location specified in the Report Location box of the Format tab.
3 Select Table of contents to attach a table of contents to the report, placed after the cover page.
4 Select Graph details to include details such as graph Filters and Granularity. For example:
These details also appear in the Description tab in the Analysis window. 5 Select Graph Description to include a short description of the graph, as in the following:
The description is identical to the one that appears in the Description tab in the Analysis window.
6 Select Measurement Description to attach descriptions of each type of monitor measurement in the report appendix.
7 Select Include Company logo and use Browse to direct LoadRunner Analysis to the .bmp file of your company's logo.
Check the following options to include them in your report:
Executive Summary: Includes your own high-level summary or abstract of the LoadRunner test, suitable for senior management. An executive summary typically compares performance data with business goals, states significant findings and conclusions in non-technical language, and suggests recommendations. Click Edit to open a dialog box in which you enter objectives and conclusions:
The summary also includes two other sub-sections, Scenario Summary and Top Time-Consuming Transactions:
The above shows clearly that the transaction vuser_init_Transaction consumes the most time.
Scenario configuration: Gives the basic schema of the test, including the name of result files, Controller scheduler information, scripts, and Run Time Settings.
Users influence: Helps you view the general impact of Vuser load on performance time. It is most useful when analyzing a load test which is run with a gradual load.
Hits per second: Applicable to Web tests. Displays the number of hits made on the Web server by Vusers during each second of the load test. It helps you evaluate the amount of load Vusers generate, in terms of the number of hits.
Server Performance: Displays a summary of resources utilized on the servers.
Network Delay: Displays the delays for the complete network path between machines.
Vuser load scheme: Displays the number of Vusers that executed Vuser scripts, and their status, during each second of a load test. This graph is useful for determining the Vuser load on your server at any given moment.
Transaction response times: Displays the average time taken to perform transactions during each second of the load test. This graph helps you determine whether the performance of the server is within acceptable minimum and maximum transaction performance time ranges defined for your system.
Terminology: An explanation of special terms used in the report.
The above shows that three graphs have been generated in the session: Average Transaction Response Time, Hits per Second, and Web Page Breakdown. The two that are selected appear in the Word Report.
Select Graph notes to include text from the User Notes tab in the Analysis main window.
2 Choose the format of the external data file from the File format list box. In the above example, the NT Performance Monitor (.csv) format is selected. For a description of other formats, see the list of Supported File Types. You can also tailor your own file format. See Defining Custom File Formats on page 304.
3 Select Add File to select an external data file. An Open dialog box appears. The Files of type list box shows the type chosen in step 2.
4 Choose other format options:
Date Format: Specify the format of the date in the imported data file, e.g., for European dates with a four-digit year, choose DD/MM/YYYY.
Time Zone: Select the time zone where the external data file was recorded. LoadRunner Analysis compensates for the various international time zones and aligns the times with local time zone settings in order to match LoadRunner results. (Note that LoadRunner does not alter the times in the data file itself.)
Time Zone also contains the option <Synchronize with Scenario start time>. Choose this to align the earliest measurement found in the data file to the start time of the LoadRunner scenario.
If the times within the imported file are erroneous by a constant offset, you can select the Time Zone option <User Defined> to correct the error and synchronize with LoadRunner's results. The Alter File Time dialog box appears, where you specify the amount of time to add to or subtract from all time measurements in the imported file:
The example above adds 3 hours (10,800 seconds) to all times taken from the imported data file.
Note: When doing this, you should synchronize the time to GMT (and not to Local Time). To help you with this alignment, the dialog displays the scenario start time in GMT.
In the example above, the start time is 16:09:40. Since the clock on the server machine was running slow and produced measurements in the data file beginning at 13:09, 3 hours are added to all time measurements in the file.
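The correction is a constant shift applied to every timestamp in the file. A minimal sketch of that shift; the 3-hour offset and 13:09:40 sample mirror the example above, while the helper itself is illustrative and assumes HH:MM:SS times within a single day:

```python
def shift_time(hms, offset_seconds):
    """Shift an HH:MM:SS timestamp by a constant offset, wrapping at 24 h."""
    h, m, s = (int(part) for part in hms.split(":"))
    total = (h * 3600 + m * 60 + s + offset_seconds) % 86400
    return "%02d:%02d:%02d" % (total // 3600, (total % 3600) // 60, total % 60)

# The server clock ran 3 hours slow, so add 10,800 seconds to every sample:
OFFSET = 3 * 60 * 60
corrected = shift_time("13:09:40", OFFSET)   # "16:09:40", the scenario start
```

Applying the same offset to every row keeps the relative spacing of the measurements intact while aligning their absolute times with the scenario.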
Machine: Specify the machine where the external monitor was run. This associates the machine name with the measurement. For example, a file IO rate on the machine fender will be named File IO Rate:fender. This enables you to apply Graph settings by the machine name. See Applying a Filter to a Graph.
5 Select Advanced Settings to specify character separators and symbols that are not part of the regional settings currently running on the operating system. By selecting Use custom settings, you can manually specify characters which represent various separators and symbols in the external data file.
The above example shows a non-standard time separator, the character %, substituted for the standard : separator. To revert to the operating system's standard settings, select Use local settings.
6 Click Next in the Import Data dialog box. Select the type of monitor that generated the external data file. When opening a new graph, you will see your monitor added to the list of available graphs under this particular category (see Opening Analysis Graphs). You can also define your own monitor type. See Defining Custom Monitor Types for Import on page 306.
7 Click Finish. LoadRunner Analysis imports the data file or files, and refreshes all graphs currently displayed in the session.
Note: When importing data into a scenario with two or more cross results, the imported data will be integrated into the last set of results listed in the File > Cross with Result dialog box. See Generating Cross Result Graphs on page 262.
Windows 2000 Performance Monitor (.csv)
The default file type of the Windows 2000 Performance Monitor, but incompatible with the NT Performance Monitor. In comma-separated value (CSV) format. For example:
Standard Comma Separated File (.csv)
This file type has the following format:
Date,Time,Measurement_1,Measurement_2, ...
where fields are comma-separated and the first row contains the column titles. The following example from a standard CSV file shows three measurements: an interrupt rate, a file IO rate, and a CPU usage. The first row shows an interrupt rate of 1122.19 and an IO rate of 4.18:
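A sketch of reading such a file with Python's csv module. Only the 1122.19 interrupt rate and 4.18 IO rate come from the text above; the column names, dates, and remaining values are invented for illustration:

```python
import csv
import io

# Illustrative Standard CSV content; only 1122.19 and 4.18 are from the text.
SAMPLE = """\
Date,Time,InterruptRate,FileIORate,CPUUsage
25/05/03,16:09:40,1122.19,4.18,1.59
25/05/03,16:09:41,1088.50,3.72,1.41
"""

def read_standard_csv(text):
    """Return (column_titles, data_rows); the first row holds the titles."""
    reader = csv.reader(io.StringIO(text))
    titles = next(reader)
    rows = [row for row in reader if row]
    return titles, rows

titles, rows = read_standard_csv(SAMPLE)
# titles[2:] are the measurement names; rows[0][2] is the interrupt rate.
```

Everything after the Date and Time columns is treated as a measurement series, one column per measurement.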
Master-Detail Comma Separated File (.csv)
This file type is identical to the Standard Comma Separated File except for an additional Master column, which specifies that row's particular breakdown of a more general measurement. For example, a Standard CSV file may contain data points of a machine's total CPU usage at a given moment:
Date,Time,CPU_Usage
However, if the total CPU usage can be further broken up into CPU time per process, then a Master-Detail CSV file can be created with an extra column, ProcessName, containing the name of a process. Each row contains the measurement of a specific process's CPU usage only. The format will be the following:
Date,Time,ProcessName,CPU_Usage
as in the following example:
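A sketch of splitting a Master-Detail file back into one series per master value, so that each process's CPU usage can be handled separately; the process names and numbers are invented:

```python
import csv
import io

# Illustrative Master-Detail CSV: the ProcessName column is the master.
SAMPLE = """\
Date,Time,ProcessName,CPU_Usage
25/05/03,16:09:40,explorer,4.2
25/05/03,16:09:40,sqlservr,61.5
25/05/03,16:09:41,explorer,3.9
25/05/03,16:09:41,sqlservr,58.8
"""

def split_by_master(text, master_column="ProcessName"):
    """Group rows by the master column: one measurement series per value."""
    series = {}
    for row in csv.DictReader(io.StringIO(text)):
        series.setdefault(row[master_column], []).append(row)
    return series

series = split_by_master(SAMPLE)
# series["sqlservr"] now holds only that process's CPU_Usage samples.
```

Each group is, in effect, its own Standard CSV series, which is why the Master-Detail format needs only one extra column.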
Microsoft Excel File (.xls)
Created by the Microsoft Excel application. The first row contains column titles.

Master-Detail Microsoft Excel file (.xls)
Created by Microsoft's Excel application. The first row contains column titles. It contains an extra Master column. For an explanation of this column, see Master-Detail Comma Separated File (.csv) on page 302.

SiteScope log File (.log)
A data file created by Freshwater's SiteScope Monitor for Web infrastructures. For example:
Click OK.
3 The following dialog box appears. Note that the name given to the format is my_monitor_format:
4 Specify which column contains the date and time. If there is a master column (see Master-Detail Comma Separated File (.csv) on page 302), specify its column number. A selection of field separators can be chosen by clicking the browse button next to the Field Separator list box. Click Save, or go to the next step.
5 Select the Optional tab. Choose from the following options:
Date Format: Specify the format of the date in the imported data file.
Time Zone: Select the time zone where the external data file was recorded. LoadRunner Analysis aligns the times in the file with local time zone settings to match LoadRunner results. (LoadRunner does not alter the file itself.)
Machine Name: Specify the machine where the monitor was run.
Ignored Column List: Indicate which columns are to be excluded from the data import, such as columns containing descriptive comments. When there is more than one column to be excluded, specify the columns in a comma-separated list. For example, to ignore columns 1, 3 and 7, enter 1,3,7.
Convert file from UNIX to DOS format: Monitors often run on UNIX machines. Check this option to convert data files to Windows format. A carriage return (ASCII character 13) is inserted before each line feed character (ASCII character 10) in the UNIX file.
Click Save.
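The UNIX-to-DOS option is a mechanical line-ending rewrite. A sketch of the conversion; normalizing first is an added assumption on my part, since the manual does not say how files with mixed line endings are treated:

```python
def unix_to_dos(data: bytes) -> bytes:
    """Convert LF (ASCII 10) line endings to CR+LF (ASCII 13 + 10).

    Normalizing CRLF back to LF first keeps the conversion idempotent on
    files that already contain some DOS line endings (an assumption; the
    manual does not specify the behavior for mixed files).
    """
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
```

Running the conversion twice therefore produces the same output as running it once.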
MyWebMon is now registered in the list of available monitors that can be generated as a graph:
Interpreting Analysis Graphs
LoadRunner Analysis graphs present important information about the performance of your scenario. Using these graphs, you can identify and pinpoint bottlenecks in your application and determine what changes are needed to improve its performance. This chapter presents examples of:
Analyzing Transaction Performance
Using the Web Page Breakdown Graphs
Using Auto Correlation
Identifying Server Problems
Identifying Network Problems
Comparing Scenario Results
view the behavior of the problematic transaction(s) during each second of the scenario run.
Question 1: Which transactions had the highest response time? Was the response time for these transactions high throughout the scenario, or only at certain points during scenario execution?
Answer: The Transaction Performance Summary graph displays a summary of the minimum, average, and maximum response time for each transaction during scenario execution. In the example below, the response time of the Reservation transaction averaged 44.4 seconds during the course of the scenario.
The Average Transaction Response Time graph demonstrates that response time was high for the Reservation transaction throughout the scenario. Response time for this transaction was especially high (approximately 55 seconds) during the sixth and thirteenth minutes of the scenario.
In order to pinpoint the problem and understand why response time was high for the Reservation transaction during this scenario, it is necessary to break down the transactions and analyze the performance of each page component. To break down a transaction, right-click it in the Average Transaction Response Time or Transaction Performance Summary graph, and select Web Page Breakdown for <transaction name>.
If the download time for a component was unacceptably long, note which measurements (DNS resolution time, connection time, time to first buffer, SSL handshaking time, receive time, and FTP authentication time) were responsible for the lengthy download. To view the point during the scenario at which the problem occurred, select the Page Download Breakdown (Over Time) graph. For more information regarding the measurements displayed, see Page Download Time Breakdown Graph on page 100. To identify whether a problem is network- or server-related, select the Time to First Buffer Breakdown (Over Time) graph.
The above graph demonstrates that the server time was much higher than the network time. If the server time is unusually high, use the appropriate server graph to identify the problematic server measurements and isolate the cause of server degradation. If the network time is unusually high, use the Network Monitor graphs to determine what network problems caused the performance bottleneck.
The above graph demonstrates that the response time for the SubmitData transaction was relatively high toward the end of the scenario. To correlate this transaction with all of the measurements collected during the scenario, right-click the SubmitData transaction and select Auto Correlate.
In the dialog box that opens, choose the time frame you want to examine.
Click the Correlation Options tab, select the graphs whose data you want to correlate with the SubmitData transaction, and click OK.
In the following graph, the Analysis displays the five measurements most closely correlated with the SubmitData transaction.
This correlation example demonstrates that the following database and Web server measurements had the greatest influence on the SubmitData transaction: Number of Deadlocks/sec (SQL server), JVMHeapSizeCurrent (WebLogic server), PendingRequestCurrentCount (WebLogic server), WaitingForConnectionCurrentCount (WebLogic server), and Private Bytes (Process_Total) (SQL server). Using the appropriate server graph, you can view data for each of the above server measurements and isolate the problem(s) that caused the bottleneck in the system.
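Auto Correlate ranks every collected measurement by how strongly it tracks the chosen transaction over the selected time frame. A sketch of such a ranking using the Pearson correlation coefficient; the choice of statistic is an assumption about what Analysis actually computes, and the series below are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def most_correlated(target, measurements, top=5):
    """Rank measurement series by |correlation| with the target series."""
    scores = {name: abs(pearson(target, series))
              for name, series in measurements.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Invented series: response time rises with deadlocks, not with idle threads.
response_time = [1.0, 1.2, 2.5, 4.0, 6.1]
measurements = {
    "Number of Deadlocks/sec": [0, 1, 5, 9, 14],
    "Idle Threads": [30, 29, 31, 30, 29],
}
ranked = most_correlated(response_time, measurements)
```

A measurement that rises and falls with the transaction scores near 1, so the top of the ranking points at the likeliest contributors to the bottleneck; correlation alone does not prove causation, which is why the text sends you back to the server graphs.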
For example, the graph below demonstrates that both the JVMHeapSizeCurrent and Private Bytes (Process_Total) WebLogic (JMX) application server measurements increase as the number of running Vusers increases.
The above graph, therefore, indicates that these two measurements contributed to the slow performance of the WebLogic (JMX) application server, which affected the response time of the SubmitData transaction.
Continuously open connections can also drain server resources. Unlike browsers, servers providing SSL services typically create numerous sessions with large numbers of clients. Caching the session identifiers from each transaction can quickly exhaust the server's resources. In addition, the keep-alive enhancement features of most Web browsers keep connections open until they are explicitly terminated by the client or server. As a result, server resources may be wasted as large numbers of idle browsers remain connected to the server.
The performance of secured Web sites can be improved by:
Fine-tuning the SSL and HTTPS services according to the type of application
Using SSL hardware accelerators, such as SSL accelerator appliances and cards
Changing the level of security according to the level of sensitivity of the data (i.e., changing the key length used for public-key encryption from 1,024 to 512 bits)
Avoiding the excessive use of SSL and redesigning those pages that have low levels of data sensitivity to use regular HTTP
The first load test demonstrates the application's performance in its initial state, before any load testing was performed. From the graph, you can see that with approximately 50 Vusers, response time was almost 90 seconds, indicating that the application suffered from severe performance problems. Using the analysis process, it was possible to determine what architectural changes were necessary to improve the transaction response time. As a result of these site architectural changes, the transaction response time for the same business process, with the same number of users, was less than 10 seconds in the last load test performed. Using the Analysis, therefore, the customer was able to increase site performance tenfold.
Index
A
Acrobat Reader ix Activity reports 271 advanced display settings chart 38 series 39 aggregating data 5 Analysis interpreting graphs 309-321 overview 1-22 sessions 3 working with 23-53 Antara FlameThrower graph 128 Apache graph 154 Apply/Edit Template dialog box 16 Ariba graph 163 ASP graph 177 ATG Dynamo graph 165 Auto Correlate dialog box 48 Correlation Options tab 49 Time Range tab 49 auto correlating measurements 48 auto correlating measurements example 314 Average Transaction Response Time graph 66 auto correlation 314

B

Books Online ix breaking down transactions 94 BroadVision graph 168

C

chart settings 38 Check Point FireWall-1 graph 152 Citrix Metaframe XP graph 139 Client Time in Page Download Time Breakdown graph 102 ColdFusion graph 175 collating execution results 4 comparing scenario runs 321 connecting to TestDirector 280 Connection time in Page Download Time Breakdown graph 101 Context Sensitive Help x coordinates of a point 24 Cross Result dialog box 262 Cross Result graphs 259-266

D

Data Aggregation Configuration dialog box 5 Data Import 297 Data Point report 276 Data Points (Average) graph 115 (Sum) graph 114 database options 11 DB2 graph 204 Detailed Transaction report 277 disconnecting from TestDirector 282 display options advanced 38 Display Options dialog box 36 standard 36 DNS Resolution time in Page Download Time Breakdown graph 101 documentation set x Downloaded Component Size graph 110 drill down 25, 54 Drill Down Options dialog box 27
E

Editing MainChart dialog box 38 Adding Comments and Arrows 42 Chart tab 38 Graph Data tab 45 Legend tab 40 Raw Data tab 47 Series tab 39 EJB Average Response Time Graph 242 Breakdown Graph 240 Call Count Distribution Graph 246 Call Count Graph 244 Call Count Per Second Graph 248 Total Operation Time Distribution Graph 252 Total Operation Time Graph 250 enlarging graphs 28 ERP Server Resource graphs 235-238 Error graphs 61-63 Error Statistics graph 62 Error Time in Page Download Time Breakdown graph 102 Errors per Second graph 63 Excel file exporting to 45 viewing 270

F

Failed Transaction report 274 Failed Vuser report 275 filter conditions setting in Analysis 31 filtering graphs 31 FireWall Server graphs 151-152 First Buffer time in Page Download Time Breakdown graph 101 FTP Authentication time in Page Download Time Breakdown graph 102 Fujitsu INTERSTAGE graph 176 Function Reference ix
G
global filter, graphs 34 granularity 28 Granularity dialog box 29 graph Antara FlameThrower 128 Apache 154 Ariba 163 ATG Dynamo 165 Average Transaction Response Time 66 BroadVision 168 Check Point FireWall-1 152 Citrix Metaframe XP 139 ColdFusion 175 Data Points (Average) 115 Data Points (Sum) 114 DB2 204 Downloaded Component Size 110 EJB Average Response Time 242 EJB Breakdown 240 EJB Call Count 244 EJB Call Count Distribution 246 EJB Call Count Per Second 248 EJB Total Operation Time 250 EJB Total Operation Time Distribution 252 Error Statistics 62
graph (contd) Errors per Second 63 Fujitsu INTERSTAGE 176 Hits per Second 78 Hits Summary 79 HTTP Responses per Second 83 HTTP Status Code Summary 82 iPlanet/Netscape 158 JProbe 253 Microsoft Active Server Pages (ASP) 177 Microsoft IIS 156 Network Delay Time 147 Network Segment Delay 149 Network Sub-Path Time 148 Oracle 218 Oracle9iAS HTTP 178 Page Component Breakdown 96 Page Component Breakdown (Over Time) 98 Page Download Time Breakdown 100 Page Download Time Breakdown (Over Time) 104 Pages Downloaded per Second 86 RealPlayer Client 228 RealPlayer Server 230 Rendezvous 60 Retries per Second 88 Retries Summary 89 Running Vusers 58 SAP 236 SilverStream 182 Sitraka JMonitor 254 SNMP Resources 124 SQL Server 220 Sybase 222 Throughput 80 Time to First Buffer Breakdown 106 Time to First Buffer Breakdown (Over Time) 108 Total Transactions per second 69 TowerJ 256 Transaction Performance Summary 71 Transaction Response Time (Distribution) 74 Transaction Response Time (Percentile) 73 Transaction Response Time (Under Load) 72 Transaction Summary 70 Transactions per second 68 TUXEDO Resources 125 UNIX Resources graph 122 Vuser Summary 59 WebLogic (JMX) 185 WebLogic (SNMP) 183 WebSphere 188 WebSphere (EPM) 195 Windows Media Player Client 233 Windows Media Server 231 Windows Resources 117 Graph Settings dialog box 31 graph types, Analysis Database Server Resources 203-226 ERP Server Resource Monitor 235-238 Errors 61-63 FireWall Server Monitor 151-152 Java Performance 239-258 Network Monitor 145-150 Streaming Media Resources 227-234 System Resources 117-138 Transaction 65-75 User-Defined Data Points 113-115 Vuser 57-60 Web Application Server Resources 161-202 Web Page Breakdown 91-111 Web Resources 77-89 Web Server Resources 153-159 graphs, working with background 39 crossing results 259-266 display options 36 merging 263 overlaying, superimposing 263 Group By dialog box Available groups 35 Selected groups 35
325
H
Hits per Second graph 78 Hits Summary graph 79 HTML reports 270 HTTP Responses per Second graph 83 Status Code Summary graph 82 HTTPS 319
N
Network Delay Time graph 147 Segment Delay graph 149 Sub-Path Time graph 148 Network Monitor graphs 145150
IIS graph 156 Importing Data 297 interpreting Analysis graphs 309321 iPlanet/Netscape graph 158
O
Open a New Graph dialog box 21 Open Existing Analysis Session File from TestDirector Project dialog box 285 Options dialog box Database tab 11 General tab 10 Result Collection tab 5, 8 Oracle graph 218 Oracle9iAS HTTP graph 178 overlay graphs 263
J
Java Performance graphs 239258 JProbe graph 253
L
legend 40 Legend Columns Options dialog box 41 legend preferences 39 lr_user_data_point 113
P
packets 146 Page Component Breakdown (Over Time) graph 98 Component Breakdown graph 96 Download Time Breakdown (Over Time) graph 104 Download Time Breakdown graph 100 Pages Downloaded per Second graph 86 Performance reports 271
M
marks in a graph 39 Measurement Options dialog box 40 measurement trends, viewing 47 measurements, auto correlating 48 measurements, auto correlating example 314 measurements, WAN emulation 54 Media Player Client graph 233 Merge Graphs dialog box 265 merging graphs 263 Microsoft Active Server Pages (ASP) graph 177
R
raw data 45 Raw Data dialog box 46
326
Index RealPlayer Client graph 228 Server graph 230 Receive time in Page Download Time Breakdown graph 101 rendezvous Rendezvous graph 60 report Data Point 276 Detailed Transaction 277 Failed Transaction 274 Failed Vuser 275 Scenario Execution 273 Transaction Performance by Vuser 278 viewer 272 reports 267278 Activity and Performance 271 displaying 271 HTML 270 summary 269 Retries per Second graph 88 Retries Summary graph 89 Running Vusers graph 58 Run-Time settings scenario 18 SilverStream graph 182 Sitraka JMonitor graph 254 SNMP Resources graph 124 spreadsheet view 45 SQL Server graph 220 SSL Handshaking time in Page Download Time Breakdown graph 101 standardizing y-axis values 47 Streaming Media graphs 227234 summary data, viewing 4 Summary report 269 superimposing graphs 263 Support Information x Support Online x Sybase graph 222
T
templates applying 16 saving 15 TestDirector connecting to 280 disconnecting from 282 integration 279, 279286 opening a new session 283 opening an existing session 285 saving sessions to a project 286 TestDirector Connection dialog box 280 three-dimensional properties 39 Throughput graph 80 Throughput Summary 81 Throughput Summary graph 81 time filter, setting 8 Time to First Buffer Breakdown (Over Time) graph 108 graph 106 Total Transactions per Second graph 69 TowerJ graph 256
S
SAP graph 236 Save as Template dialog box 15 scale factor Streaming Media graphs 227 Web Server Resource graphs 153 scale of graph 28 Scenario Elapsed Time dialog box 34 Scenario Execution report 273 Scenario Run-Time Settings dialog box 18 security problems 319 Select Report Filename and Path dialog box 270 Session Information dialog box 17 sessions 3 Set Dimension Information dialog box 33
327
LoadRunner Analysis Users Guide Transaction Summary graph 70 Transaction graphs 6575 Transaction Response Time graphs 6675 Average 66 Distribution 74 Percentile 73 Under Load 72 transactions breakdown 94 Transaction Performance by Vuser report 278 Transaction Performance Summary graph 71 Transactions per Second graph 68 TUXEDO Resources graph 125 WebSphere graph 188 WebSphere (EPM) graph 195 Windows Media Server graph 231 Resources graph 117 Word Reports 287
X
x-axis interval 28
Y
y-axis values, standardizing 47
Z U
user_data_point function 113 User-Defined Data Point graphs 113115 zoom 28
V
viewing measurement trends 47 Vuser graphs 5760 Vusers Vuser ID dialog box 33 Vuser Summary graph 59
W
WAN emulation overlay 54 Web Application Server Resource graphs 161 Web Application Server Resource graphs 161202 Web Page Breakdown Content Icons 95 Web Page Breakdown graphs 91111, 312 activating 93 Web Resource graphs 7789 Web Server Resource graphs 153159 WebLogic (JMX) graph 185 (SNMP) graph 183 328