
Section IV
Modern Testing Tools
Recent advances in client/server software tools enable developers to build
applications quickly and with increased functionality. Quality assurance
departments must cope with software that is dramatically improved, but
increasingly complex. Testing tools have been developed to aid in the qual-
ity assurance process.
The objectives of this section are to:
• Describe when a testing tool is useful
• Describe when not to use a testing tool
• Provide a testing tool selection checklist
• Discuss types of testing tools
• Provide descriptions of modern and popular testing tools
• Describe a methodology to evaluate testing tools


Part 20
Introduction to Testing Tools
The objective of this section is to provide an overview of some popular test
tools and demonstrate how they can improve the quality and productivity
of a development effort.

JUSTIFYING TESTING TOOLS


There are numerous testing tools, each with specific capabilities and test
objectives. The selection of the best testing tool for a particular develop-
ment environment is a critical success factor for the testing activities. How-
ever, if the right testing tool is not selected and/or the organization is not
positioned for a testing tool, it can easily become “shelfware,” as testing
tools require a learning curve, skills, standards, and must be integrated
into the development methodology.

When to Consider Using a Testing Tool


A testing tool should be considered based on the test objectives. As a gen-
eral guideline, one should investigate the appropriateness of a testing tool
when the human manual process is inadequate. For example, if a system
needs to be stress tested, a group of testers could simultaneously logon to
the system and attempt to simulate peak loads using stopwatches. Howev-
er, this approach has limitations. One cannot systematically measure the
performance precisely or repeatably. For this case, a load testing tool can
simulate several virtual users under controlled stress conditions.
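
As a rough illustration of what such a tool automates, the following Python sketch spawns a number of "virtual users" as threads and times each transaction precisely and repeatably. It is a minimal sketch under stated assumptions, not any vendor's product; login_transaction is a placeholder for the real client/server transaction.

import threading
import time
import statistics

def login_transaction(user_id):
    """Placeholder for the transaction under test (e.g., a logon plus a query)."""
    time.sleep(0.05)  # stand-in for the real client/server round trip

def virtual_user(user_id, timings):
    start = time.perf_counter()
    login_transaction(user_id)
    timings.append(time.perf_counter() - start)

def run_load_test(num_virtual_users=100):
    timings, threads = [], []
    for uid in range(num_virtual_users):
        t = threading.Thread(target=virtual_user, args=(uid, timings))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    # Unlike stopwatch timing, these measurements are precise and repeatable.
    print(f"users={num_virtual_users} "
          f"mean={statistics.mean(timings):.3f}s max={max(timings):.3f}s")

if __name__ == "__main__":
    run_load_test(100)
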
A regression testing tool might be needed under the following circum-
stances:
• Tests need to be run at every build of an application, e.g., time con-
suming, unreliable and inconsistent use of human resources
• Tests are required using multiple data values for the same actions
• Tests require detailed information from system internals such as SQL,
GUI attributes
• There is a need to stress a system to see how it performs
Testing tools have the following benefits:

• Much faster than their human counterparts
• Run unattended without human intervention
• Provide code coverage analysis after a test run
• Precisely repeatable
• Reusable, just as programming subroutines
• Programmable
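
For example, a data-driven regression test that exercises the same actions with multiple data values, and that can be rerun unattended at every build, might look like the following sketch (calculate_discount is a hypothetical application function, not taken from any product discussed here):

import unittest

def calculate_discount(amount, customer_type):
    """Hypothetical application function under regression test."""
    rate = {"regular": 0.00, "member": 0.05, "gold": 0.10}[customer_type]
    return round(amount * (1 - rate), 2)

class DiscountRegressionTest(unittest.TestCase):
    # The same actions are exercised with multiple data values, and the
    # suite can be rerun unattended at every build of the application.
    CASES = [
        (100.00, "regular", 100.00),
        (100.00, "member", 95.00),
        (100.00, "gold", 90.00),
    ]

    def test_discount_table(self):
        for amount, customer_type, expected in self.CASES:
            with self.subTest(customer=customer_type):
                self.assertEqual(calculate_discount(amount, customer_type),
                                 expected)

if __name__ == "__main__":
    unittest.main()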

When Not to Consider Using a Testing Tool


Contrary to popular belief, it is not always wise to purchase a testing tool.
Some factors that limit a testing tool include:
• Cost
A testing tool may not be affordable to the organization, e.g., the
cost/performance tradeoff
• Culture
The development culture may not be ready for a testing tool, because
it requires the proper skills and commitment to long-term quality
• Usability testing
There are no automated testing tools that can test usability
• One-time testing
If the test is going to be performed only once, a testing tool may not be
worth the required time and expense
• Time crunch
If there is pressure to complete testing within a fixed time frame, a test-
ing tool may not be feasible, because it takes time to learn, set up, and
integrate a testing tool into the development methodology
• Ad hoc testing
If there is no formal test design and test cases, a regression testing tool
will be useless
• Predictable results
If tests do not have predictable results, a regression testing tool will be
useless
• Instability
If the system is changing rapidly during each testing spiral, more time
will be spent maintaining a regression testing tool than it is worth.

TESTING TOOL SELECTION CHECKLIST


Finding the appropriate tool can be difficult. Several questions need to be
answered before selecting a tool. Exhibit 1 lists questions to help the QA
team evaluate and select an automated testing tool. (See also Appendix
F19, Testing Tool Selection Checklist.)

Exhibit 1. Testing Tool Selection Checklist

(Each item is answered Yes, No, or N/A, with space for comments.)

1. How easy is the tool for your testers to use?
2. Is it something that can be picked up quickly, or is training going to be required?
3. Do any of the team members already have experience using the tool?
4. If training is necessary, are classes, books, or other forms of instruction available?
5. Will the tool work effectively on the computer system currently in place?
6. Or are more memory, faster processors, etc., going to be needed?
7. Is the tool itself easy to use?
8. Does it have a user-friendly interface?
9. Is it prone to user error?
10. Is the tool physically capable of testing your application? Many testing tools can only test in a GUI environment, while others test in non-GUI environments.
11. Can the tool handle full project testing? That is, is it able to run hundreds if not thousands of test cases for extended periods of time?
12. Can it run for long periods of time without crashing, or is the tool itself full of bugs?
13. Talk to customers who currently use or have previously used the tool. Did it meet their needs?
14. How similar were their testing needs to yours, and how well did the tool perform?
15. Try to select a tool that is advanced enough that the costs of updating tests do not overwhelm any benefits of testing.
16. If a demo version is available, try it out before you make any decisions.
17. Does the price of the tool fit in the QA Department or company budget?
18. Does the tool meet the requirements of the company testing methodology?

TYPES OF TESTING TOOLS

Year 2000 Tools


Year 2000 tools help with date-sensitive testing, which is critical to the Year
2000 problem, and also identify the scope and time dimensions of the
applications by automatically parsing the source code.
Coverage and range-driven testing requirements will be greater than un-
der normal regression scenarios. Absolutely essential are capture/playback
tools which are “turned on” and record actual production transactions and
data which has been modified to simulate the YR2000 scenarios. With sever-
al days’ worth of such information, the results from these recording sessions
can be sorted and categorized, yielding key information. (See Exhibit 2.)
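
The date-boundary behavior that such recorded data must exercise can be sketched in a few lines; next_business_date stands in for a hypothetical application routine, and the boundary dates are the usual Year 2000 suspects.

from datetime import date, timedelta

# Date boundaries that Year 2000 test data would typically exercise.
Y2K_BOUNDARY_CASES = [
    (date(1999, 12, 31), date(2000, 1, 1)),   # century rollover
    (date(2000, 2, 28), date(2000, 2, 29)),   # 2000 is a leap year
    (date(2000, 2, 29), date(2000, 3, 1)),
]

def next_business_date(d):
    """Stand-in for an application routine that advances a date."""
    return d + timedelta(days=1)

def check_y2k_rollover():
    for start, expected in Y2K_BOUNDARY_CASES:
        actual = next_business_date(start)
        status = "ok" if actual == expected else "FAIL"
        print(f"{start} -> {actual} (expected {expected}) {status}")

if __name__ == "__main__":
    check_y2k_rollover()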

Web Site Management Tools


Web site management tools are designed to help the Webmaster or business
manager manage every aspect of a rapidly changing site. They help detect
and repair defects in the structural integrity of a site, e.g., broken links,
orphaned pages, and potential performance problems. (See Exhibit 2.)
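
The core broken-link detection these tools perform can be sketched with standard library code; the starting URL is hypothetical and the sketch checks only the links found on a single page.

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Reports links on page_url that cannot be fetched (broken links)."""
    parser = LinkExtractor()
    parser.feed(urlopen(page_url).read().decode("utf-8", errors="replace"))
    for link in parser.links:
        target = urljoin(page_url, link)
        try:
            urlopen(target).close()
        except (HTTPError, URLError) as exc:
            print(f"BROKEN: {target} ({exc})")

if __name__ == "__main__":
    check_links("http://www.example.com/")  # hypothetical starting page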

Requirements-Based Testing Tools


A requirements-based testing tool, or functional test case design tool,
drives clarification of application requirements and uses the “require-
ments” as the basis for test design. Such tools validate requirements by
identifying all functional variations and logical inconsistencies, and they
determine the minimal number of test cases needed to maximize coverage
of the functional requirements.
This allows project teams to review both the requirements and the test cas-
es in a variety of formats to ensure that the requirements are correct, com-
plete, fully understood, and testable. (See Exhibit 2.)
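
The underlying idea of turning cause-and-effect requirements into a reviewable table of test cases can be sketched as follows. This illustrates the concept only, not any vendor's algorithm, and the shipping rule is hypothetical.

from itertools import product

def effect_ship_order(payment_cleared, item_in_stock):
    """Hypothetical requirement: ship only if payment cleared AND in stock."""
    return payment_cleared and item_in_stock

def functional_variations():
    """Enumerates every combination of the causes with its expected effect,
    giving reviewers a test-case table to check against the requirement."""
    cases = []
    for payment_cleared, item_in_stock in product([True, False], repeat=2):
        cases.append((payment_cleared, item_in_stock,
                      effect_ship_order(payment_cleared, item_in_stock)))
    return cases

if __name__ == "__main__":
    print("payment_cleared  item_in_stock  ship_order")
    for row in functional_variations():
        print("{:<16} {:<14} {}".format(*map(str, row)))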

Test Management Tools


A test management tool keeps track of all the testing assets through a com-
mon repository of information and contains such information as test plans,
test cases, and test scripts. It helps quality assurance plan, manage, and
analyze the testing progress and enhances the communication of the de-
velopment team, including testers, developers, project leaders, and QA
managers. Using a test management tool, testers report defects and track
progress, developers correct defects and update the defect status, project
leaders extract information about the progress of the testing process, and
quality assurance managers generate reports and graphical analyses for
management. (See Exhibit 2.)

Regression Testing Tools


Each code change, enhancement, bug fix, and platform port necessitates re-
testing the entire application to ensure a quality release. Manual testing can
no longer keep pace in this rapidly developing environment. A regression
testing tool helps automate the testing process, from test development to ex-
ecution. Reusable test scripts are created which test the system’s function-
ality. Prior to a release, one can execute these tests in an unattended mode,
which fosters the detection of defects and ensures quality deliverables. (See
Exhibit 2.)

Coverage Analysis Tools


The purpose of coverage analysis tools is to monitor the system while a dy-
namic testing tool is executing. They are a form of white-box testing in
which there is knowledge about the internal structure of the program or
system. Information is provided on how thorough the test was. Graphic
analysis displays how much the system was covered during the test, such
as the percent of code executed and in which locations. This will provide
the tester with information on weaknesses in the test design, which can be
solved with additional test cases.
Unit and integration coverage is provided with these tools. Unit cover-
age entails the coverage of the code and paths within a single program unit.
Integration coverage comprises the interfaces between program units to
determine the linkage between them. (See Exhibit 2.)
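
A minimal sketch of how a coverage monitor observes execution, using Python's tracing hook on a hypothetical unit (commercial tools instrument the compiled or source code instead):

import sys

def classify(n):
    """Hypothetical unit under test."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

def trace_lines(func, *args):
    """Runs func and records which of its source lines actually executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

if __name__ == "__main__":
    run1 = trace_lines(classify, 5)    # exercises the "positive" path only
    run2 = trace_lines(classify, -3)   # adds the "negative" path
    print("lines executed by test 1:", sorted(run1))
    print("lines executed by tests 1 and 2:", sorted(run1 | run2))
    # The line for the n == 0 branch never appears, revealing a weakness in
    # the test design that an additional test case would address.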

Dynamic Testing Tools


Dynamic testing techniques are time dependent and involve executing a
specific sequence of instructions by the computer. The purpose of dynam-
ic testing tools is to examine a program or system's behavior and perfor-
mance while it is executing to verify whether it operates as expected.
Examples include regression testing capture/playback and load/stress
testing tools. (See Exhibit 2.)

Static Testing Tools


The purpose of static testing tools is to uncover defects by examining the
software itself rather than executing it, as with dynamic testing. They are a
form of white-box testing in which there is knowledge about the internal
structure of the program or system, and can be thought of as automated
code inspectors. (See Exhibit 2.)
Automated static testing tools typically operate on the program source
code. There are two broad categories of this type of tool. The first type
gathers and reports information about the program. Generally, this type of
tool does not search for any particular type of error in a program. A symbol
cross-reference generator and a consistency check with the specifications
are examples. The other type of tool detects specific types of errors or
anomalies in a program.
Typical defects reported by these tools include:
• Overly complex code
• Misspellings
• Incorrect punctuation
• Path analysis
• Improper statement sequencing
• Inconsistency of parameters
• Redundant code
• Unreachable code
• Overly complex system structure
• Faults
• Uninitialized variables
• Coding standard violations
• Inconsistent data attributes
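
A minimal sketch of the second category of tool, limited here to one defect class from the list above, unreachable code; the analyzer and the example source are illustrative only.

import ast

def report_unreachable_code(source, filename="<input>"):
    """Flags statements that follow a return, raise, break, or continue in
    the same block, one defect class a static testing tool reports."""
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
                unreachable = body[i + 1]
                print(f"{filename}:{unreachable.lineno}: unreachable code "
                      f"after line {stmt.lineno}")
                break

EXAMPLE = """
def total(prices):
    result = sum(prices)
    return result
    print("never runs")
"""

if __name__ == "__main__":
    report_unreachable_code(EXAMPLE, "example.py")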


Load Testing Tools


The purpose of load testing tools is to simulate a production environment
to determine that normal or above-normal volumes of transactions can be
completed successfully in an expected time frame. These tools test the
availability and capacity of system resources, such as CPU, disk, memory,
channel, and communication lines. (See Exhibit 2.)

Comparators
A comparator is a program used to compare two versions of source data to
determine whether the two versions are identical or to specifically identify
where any differences in the versions occur. Comparators are most effec-
tive during software testing and maintenance when periodic modifications
to the software are anticipated.
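
A minimal comparator can be sketched with the standard library; the file names below are hypothetical.

import difflib

def compare_versions(original_path, edited_path):
    """Prints a unified diff showing where two versions of a file differ,
    or reports that the versions are identical."""
    with open(original_path) as f1, open(edited_path) as f2:
        original, edited = f1.readlines(), f2.readlines()
    diff = list(difflib.unified_diff(original, edited,
                                     fromfile=original_path,
                                     tofile=edited_path))
    if diff:
        print("".join(diff), end="")
    else:
        print("The two versions are identical.")

if __name__ == "__main__":
    compare_versions("baseline_output.txt", "current_output.txt")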

Windows File Revision Feature. With Windows File Revision Feature one
can compare two versions of a document. The two documents being com-
pared must have different file names or the same file name in different fold-
ers. To use this feature under Windows 95:
1. Open the edited version of the document.
2. On the Tools menu, click Revisions.
3. Click Compare Versions.
4. Click the name of the original document, or type its name in the File
Name box.
5. Accept or reject the revisions.
Exhibit 2 cross-references testing tool vendors with the testing tool
types discussed above (in alphabetical order). Exhibit 3 gives contact in-
formation for the vendors.

VENDOR TOOL DESCRIPTIONS


The following is an overview of some of the major testing tool vendors. No
one tool is favored over another.

McCabe’s Visual 2000


Product Description. McCabe Visual 2000 offers testing capabilities cou-
pled with analysis and assessment features that allow the user to priori-
tize, pinpoint, and manage high-risk areas.
McCabe Visual 2000 provides a date logic impact view of the system
architecture based on date complexity, code quality, and visual impact
diagrams. Project scope and prioritization strategies are based on date
logic impacted instead of lines of code impacted. Year 2000 software re-
mediation and testing strategy can be proposed based on quantifiable
metrics.


Exhibit 2. Vendor vs. Testing Tool Type

[Table cross-referencing each tool with the testing tool types discussed above: Year 2000, test management, requirements-based, Web site management, static, dynamic, load/performance, coverage, and regression. Tools listed: Astra, LoadRunner, LoadTest, PreVue, PureCoverage, Purify, Quantify, Caliber–RBT, SQA Manager, SQA Robot, SQA SiteCheck, Test Library Manager, Test Station, TestDirector, Visual 2000, Visual Quality ToolSet, Visual Reengineering ToolSet, Visual Test, Visual Testing ToolSet, WinRunner, and XRunner.]


Exhibit 3. Vendor Information

Technology Builders, Inc.
400 Interstate North Parkway, Suite 1090, Atlanta, GA 30339
Phone: (800) 937-0047  Fax: (770) 937-7901
Web: www.tbi.com  E-mail: [email protected]

Mercury Interactive
1325 Borregas Avenue, Sunnyvale, CA 94089
Phone: (408) 822-5200  Fax: (408) 822-5300
Web: www.merc-int.com  E-mail: [email protected]

Rational Software Corporation
18880 Homestead Road, Cupertino, CA 95014
Phone: (800) 728-1212  Fax: (408) 863-4120
Web: www.rational.com  E-mail: [email protected]

McCabe & Associates
5501 Twin Knolls Road, Suite 111, Columbia, MD 21045
Phone: (800) 638-6316  Fax: (410) 995-1528
Web: www.mccabe.com  E-mail: [email protected]

AutoTester, Inc.
8150 N. Central Expressway, Suite 1300, Dallas, TX 75206
Phone: (214) 368-1196  Fax: (214) 750-9668
Web: www.autotester.com  E-mail: [email protected]

Sun Microsystems, Inc.
901 San Antonio Road, Palo Alto, CA 94303
Phone: (650) 960-1300
Web: www.sun.com

McCabe Visual 2000 helps determine date testing requirements during
the impact assessment and analysis phases. It identifies where to concen-
trate testing resources, what to test, how much to test, and when to stop
testing. During validation, McCabe Visual 2000 monitors the test effort and
ties testing results back to initial assessment information.
McCabe Visual 2000 provides a common methodology and interface
across many languages with full language parsing technology that gener-
ates information on software characteristics, including flowgraphs and
metrics.

Product Features.
• Comprehensive analysis, visual representation, and date-centric soft-
ware testing to ensure improved project planning, architectural in-
sight, and validation for Year 2000 compliance across numerous
languages and platforms.
• An open API enabling the leverage of investments in existing tools
(change and configuration management, editing, renovation, GUI test-
ing, and even other Year 2000 assessment and remediation tools).
• A database of software characteristics including metrics and flow-
graphs, used as a valuable resource for software development and
testing efforts continuing into the next millennium.
• The “On-Screen Battlemap” renders a structure chart of a system and
highlights modules/paragraphs containing date references. The high-
light color is determined by the complexity of the code. Potential high-
risk areas are easily detected.
• The “Scatterplot Diagrams” provide a view of the quality of the system
and thoroughness of testing performed within a system. Modules are
sorted by their ranking in user-specified metrics. Error prone or un-
tested modules are effortlessly located.
• The “Unit-Level Slice” pinpoints date references and helps identify po-
tential test paths for testing millennium bugs.

System Requirements.
200 MHz Pentium
64 MB RAM

Platforms Supported.
SunOS 5.x (Solaris)
Windows NT
Windows 95
Languages Supported: C, C++, Fortran, Visual Basic, COBOL, ASM370,
Model204, Ada, Java


McCabe’s Visual Testing ToolSet


Product Description. The McCabe Visual Testing ToolSet (VTT) employs
robust tools and proven methodology. Based on software metrics widely
accepted in the industry, VTT penetrates the complexity of modern sys-
tems. It graphically displays entire software systems and pinpoints where
testing effort can be most effective.
The visual environment helps managers understand complex software
projects and judge the resources needed to meet project goals. Developers
use VTT to ensure that they have tested the code in the most thorough and
efficient way. VTT calculates the cyclomatic complexity and identifies the
relevant test paths necessary for structured testing. Unlike other testing
methods, the number of tests is driven by the code’s complexity. Studies
have shown that testing based on the cyclomatic complexity metric reduces
the errors found in the delivered code.
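
Cyclomatic complexity is defined as v(G) = E - N + 2P for a control flow graph with E edges, N nodes, and P connected components; for a single routine this works out to the number of binary decisions plus one. The following sketch approximates that count for a small example. It illustrates the metric itself, not McCabe's product.

import ast
import textwrap

def cyclomatic_complexity(source):
    """Approximates v(G) for one routine as binary decision points plus one."""
    decisions = 0
    for node in ast.walk(ast.parse(textwrap.dedent(source))):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):   # each extra and/or adds a branch
            decisions += len(node.values) - 1
    return decisions + 1

EXAMPLE = """
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("out of range")
    if score >= 60:
        return "pass"
    return "fail"
"""

if __name__ == "__main__":
    # Two if statements plus one "or" give v(G) = 4, so structured testing
    # calls for four independent paths through this unit.
    print("cyclomatic complexity:", cyclomatic_complexity(EXAMPLE))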

Product Features.

• A visual environment that allows one to plan software testing resources
in advance. Graphical displays of test paths and the number of tests
required visually identify the most complex areas to help focus testing
efforts.
• Comprehensive software testing results that identify tested and un-
tested paths and easily pinpoint problematic areas. Reports and
graphical displays help testers quickly assess the need for additional
tests.
• Multiple levels of code coverage including unit level, integration level,
path coverage, branch coverage, and Boolean coverage.
• The Battlemap renders a structure chart of a system and color codes
modules based on their testedness. Untested areas are easily detected.
• The Combined Coverage Metrics report ranks modules based on their
testedness. Software metrics and code coverage information are also
included.
• The Untested Graph/Listing calculates the remaining paths to be test-
ed to complete structured testing. This can be very useful in creating
new tests to complete a test suite.
• The Dependency Analyzer finds and helps one inspect data dependen-
cies in code. With the dependency analyzer, one can determine wheth-
er a dependency is breakable with a careful selection of data or that
the dependency is inherent in the code and the module can never be
fully tested.
• Visual Testing ToolSet Reports provide testing statistics and indicate
which specific tests remain. These include the Integration Coverage,
Path Coverage, Branch Coverage, Code Coverage, Combined Cover-
age, and Boolean (MC/DC) Coverage reports.

• Visual Testing ToolSet provides a common methodology and interface
across many languages with full language parsing technology that gen-
erates valuable information on software testing, including identifying
untested areas and high complexity modules.

System Requirements.
166 MHz Pentium
32 MB RAM

Platforms Supported.
SunOS4.x (Motif)
SunOS 5.x (Solaris–Motif)
Windows NT
Windows 95
IBM RS6000, AIX 4.2
HP700, HP-UX 10.x
SGI, IRIX 5.x, 6.x
Languages Supported: C, C++, Fortran, Visual Basic, COBOL, ASM370,
Model204, Ada83, Ada95, PL/1, Java

McCabe’s Visual Quality ToolSet


Product Description. The Visual Quality ToolSet (VQT) answers the fol-
lowing questions: When is the quality acceptable or when will it be accept-
able? How do you explain to developers and managers the basics of quality
software? How do you establish an effective program that builds quality
into the product? VQT combines graphical technology and objective stan-
dards of measurement, or metrics, to assess software quality and maintain-
ability. The ToolSet computes over 100 metrics, including McCabe’s
ground-breaking cyclomatic complexity metric.

Product Features.
• Insight into software quality through module-by-module metric calcula-
tion. Metrics including cyclomatic complexity and essential complexity
help identify where a program is more likely to contain errors. Metrics
measurements are also traced over time to track program improvement.
• A visual environment for understanding software. Graphical displays
represent the structure of code and the metrics rankings to provide
assessment even of large systems.
• A database of software characteristics including metrics and flow-
graphs, used as a resource for future software changes and upgrades.
• The Battlemap renders a structure chart of a system and color-codes
modules based on their metrics. Potential high-risk areas are easily
detected.

• The Scatterplot provides a view of the quality of the system. Modules
are sorted by their ranking in user-specified metrics. Error prone or
complex modules are located.
• The Metrics Trends report includes details of metrics calculations of
individual modules over a period of time. This allows for tracking the
quality of the code throughout the development life cycle.

System Requirements.
166 MHz Pentium
32 MB RAM

Platforms Supported.
SunOS4.x (Motif)
SunOS 5.x (Solaris–Motif)
Windows NT
Windows 95
IBM RS6000, AIX 4.2
HP700, HP-UX 10.x
SGI, IRIX 5.x, 6.x
Languages Supported: C, C++, Fortran, Visual Basic, COBOL, ASM370,
Model204, Ada83, Ada95, PL/1, Java

McCabe’s Visual Reengineering ToolSet


Product Description. Organizations today must respond to an avalanche
of change in business conditions. Responding to change means that soft-
ware must rapidly adapt to new needs. To be adapted, software must be un-
derstood. Yet due to a long history of evolution, lack of appropriate tools,
and shifting personnel, legacy software is often neither understood nor
well documented.
To meet these needs, the Visual Reengineering ToolSet (VRT) provides a
rich graphical environment in which code can be analyzed, displayed, and
understood. VRT serves well in any of these reengineering tasks: the ongo-
ing maintenance of existing systems, their modification to add new func-
tions or capabilities, or their migration to new hardware platforms or
architectures such as client/server. VRT can save time and resources
during large reengineering projects.

Product Features.

• Analysis, visual representation, and metrics calculation to ensure im-
proved project planning, architectural insight, and identification of
complex code.
• Dynamic analysis of a program pinpoints code related to a specific
functionality.
• Module comparison feature to locate redundant and reusable code
which may be reengineered to reduce program size and complexity.
• The Slice report highlights executed code on the source code listing
and the graphical flowgraph. This feature greatly assists in function
extraction and reuse.
• The Module Comparison report identifies similar modules within the
system. Modules are compared based on user-defined metrics or oth-
er criteria. Wasteful, redundant code can be removed.
• The Histogram report identifies modules that exceed user-defined met-
rics thresholds. Complex, unmaintainable code is easily pinpointed.
• Visual Reengineering ToolSet parses code and displays a picture of the
structure of the software. Metrics indicate likely problems so one can
focus efforts where they will have the most impact — on the modules
of high complexity.
• The data dictionary tracks the use of data elements and highlights the
associated modules on the Battlemap. With use of the data complexity
metric, the data change can be visualized, the complexity of the data
change quantified, and the related test paths generated.
• Visual Reengineering ToolSet provides a common methodology and
interface across many languages and platforms with a language pars-
ing technology that generates valuable information on software char-
acteristics, including flowgraphs and metrics. This is especially useful
in platform migration projects.

System Requirements.
166 MHz Pentium
32 MB RAM

Platforms Supported.
SunOS4.x
SunOS 5.x (Solaris)
Windows NT
Windows 95
SCO Open Desktop 3.0
SunOS 5.x, (Solaris x86)
IBM RS6000, AIX 3.2, AIX 4.x
HP700, HP-UX 10.x
SGI, IRIX 5.x, 6.x
Languages Supported: C, C++, Fortran, Visual Basic, COBOL, ASM370,
Model204, Ada83, Ada95, PL/1, Java

Rational’s SQA Suite (Version 6.1)


Product Description. SQA Suite™ is an integrated product suite for
the automated testing of Windows NT™, Windows® 95, and Windows 3.x
client/server and Internet applications. SQA Suite includes a scalable, inte-
grated, server-based test repository. It combines client/server and Internet
testing power, management tools, and a formal methodology to set the
standard for automated testing of cross-Windows client/server and Inter-
net applications. SQA Suite is comprised of four products from Rational —
SQA Robot™, SQA SiteCheck™, SQA Manager™, and SQA LoadTest™.
There are two versions of SQA Suite: TeamTest Edition and LoadTest
Edition. SQA Suite: TeamTest Edition can be used to test code and deter-
mine if the software meets requirements and performs as expected and in-
cludes three components:
• SQA Robot
• SQA Manager (including WebEntry)
• SQA SiteCheck
SQA Suite: LoadTest Edition provides integrated testing of structure,
function, and performance of Web-based applications and includes four
components:
• SQA Robot
• SQA Manager (including WebEntry)
• SQA SiteCheck
• SQA LoadTest

Rational’s SQA SiteCheck Tool


Tool Description. SQA SiteCheck is a Web site management tool for the In-
tranet or World Wide Web. It is designed to help the Webmaster or business
manager keep up with every aspect of the rapidly changing site. The prima-
ry purpose of SQA SiteCheck is to detect broken links, orphaned pages, and
potential performance problems on Web sites. SQA SiteCheck helps Web-
masters and Web site administrators detect and repair defects in the struc-
tural integrity of their sites.
SQA SiteCheck includes many features that allow it to test Web sites that
use the most current technology to present active content such as HTML
forms and Java applets. It is also capable of testing secure sites making use
of SSL, proxy servers, and multiple security realms to protect the data sent
to and from the site. SQA SiteCheck’s advanced level of integration with
McAfee VirusScan enables one to detect infected documents on a site be-
fore visitors do.

Tool Features.

• A fully integrated internal browser and HTML editor
• Full support of the Secure Sockets Layer (SSL)

• Filters for Web-based forms, frames, Java, JavaScript, ActiveX, and VB-
Script
• Automatic tracking of moved or orphan pages and broken links
• Fixes links without needing a separate editor
• Includes automatic virus scanning
• Pinpoints all slow pages and predicts performance time for all commu-
nication paths
• Can impersonate both Microsoft Internet Explorer and Netscape Nav-
igator to see the different server responses to the different browsers
• Integration with SQA Suite: SQA Robot as the Web Site Test Case

System Requirements.
16 Mbytes, 32 Mbytes recommended for NT
10 Mbytes of disk space
PC with 486 processor, Pentium-class processor recommended

Platforms Supported.
Microsoft® Windows 95® or Windows NT 4.0 or later
ActiveScan View requires Microsoft Internet Explorer™ v3.0 or later

Rational’s SQA Robot (Version 6.1)


Product Description. SQA Robot allows one to create, modify, and run au-
tomated tests on cross-Windows client/server applications. It offers reusabil-
ity and portability of test recordings across Windows platforms to provide
one recording that plays back on all Windows platforms. SQA Robot in-
cludes Object Testing™ of Object Properties and Data.

Product Features.

• Comprises an integrated product suite for testing Windows NT,
Windows 95, and Windows 3.x client/server and Internet applications
to deliver solutions for testing cross-Windows client/server and Inter-
net applications
• Uses Object Testing to completely test 32- and 16-bit Windows objects
and components, including ActiveX Controls, OLE Controls (OCXs),
Visual Basic Controls (VBXs), Visual Basic® objects, PowerBuilder®
objects, Oracle Developer/2000 objects, Delphi® objects, Win32 con-
trols, etc.
• Delivers Rational’s Object-Oriented Recording™ technology to pro-
vide a fast, intuitive test creation with a short learning curve
• Includes SQABasic™, an integrated, Visual Basic syntax-compatible
scripting environment to deliver an integrated programming environ-
ment for script development

• Includes integrated Web site testing power with SQA SiteCheck™, to
deliver Web site analysis, performance measurement, and repair tech-
nology
• Delivers seamless integration with the scalable, integrated network
SQA Repository — test assets and results are centralized for easy
analysis and improved communication among test team members

System Requirements.
Microsoft Windows 3.x
Windows 95
Windows NT
16 Mbytes; 24 Mbytes recommended for Windows NT and Windows
95, 8 Mbytes for Windows 3.x
40 Mbytes of disk space
PC with 486 processor, Pentium-class processor recommended

Platforms Supported.
Microsoft Visual Basic (versions 3, 4 & 5)
Sybase/Powersoft PowerBuilder (versions 4, 5 & 6)
Borland Delphi (versions 2.01 & 3.0)
PeopleSoft PeopleTools (versions 6 & 7)
Centura (version 1)
Microsoft Visual C++ (version 5 and later)

Rational’s SQA Manager (Version 6.1)


Product Description. SQA Manager helps the entire test team run
smoothly. It lets QA plan, manage, and analyze all aspects of cross-Win-
dows client/server applications. SQA offers a viable way to ensure the cli-
ent/server application is production-ready before deployment.
SQA Manager helps one keep track of all the test assets. Multiple test
projects can be defined in the scalable SQA Repository, and larger projects
can be broken into smaller projects. One can store data which is usable
across projects on the network. Test cases (verification points) and test
scripts can be stored and organized to ensure there is no duplication of ef-
fort and that the latest versions of a test are being run. Tight integration with
SQA Robot™ means information about test creation and execution is updat-
ed automatically, and one can open an SQA Robot test script from SQA Man-
ager.
SQA Manager is seamlessly integrated with the SQA Repository for a
team-testing environment, ensuring effective communication. Every mem-
ber of the development team benefits from access to the same up-to-date
information about the testing projects. Testers can report defects and
track their progress. Developers access the defect management system to
update the status of a defect, project leaders extract information about the
progress of a testing project, and QA managers generate reports and
graphs to measure progress and report to upper management. An adminis-
trator can assign and restrict access privileges for security and give team
members different privileges to open, fix, verify, and resolve a defect.
SQA Manager WebEntry provides defect entry and tracking capabilities
that are accessed via a web browser. This enables end users of an applica-
tion to submit defects directly to an SQA Repository via a Web browser as
well as list and display information about defects previously entered, giv-
ing one the ability to benefit from users’ feedback at any stage of the re-
lease cycle.
SQA Manager delivers an automated test solution with an integrated,
powerful report writer and graphing engine. The customizable report writ-
er lets one create reports using any data in the SQA Repository through an
intuitive drag-and-drop interface. One can also customize or use any of the
more than 50 preformatted reports. The graphing engine offers a variety of
customizable graphs to help analyze the progress of the testing project.
Any information can be instantly sent via email.

Product Features.

• Delivers an advanced, integrated, email-enabled test planning, work-
flow tracking, and defect management for a comprehensive test man-
agement solution for cross-Windows client/server and Internet
applications
• Provides a scalable, industrial-strength client/server test repository
to integrate the testing process across all Windows platforms
• Imports RequisitePro requirements, Rational Rose Use Case/Use Case
Scenarios, PowerBuilder Library (.PBLs) and ASCII text files for auto-
matic generation of test requirements and test procedure names
• Provides graphical test planning to organize test plans based on the
requirements of the application
• Tracks defects along a customizable, rules-based workflow for defect
management and tracking
• Provides customizable reporting and graphing and a variety of stan-
dard reports and graphs for analysis of test progress and coverage
• Submits defects directly to the SQA Repository through SQA Manager
WebEntry™, a browser-based defect entry and listing system, for In-
ternet-based defect entry and viewing of defect status

System Requirements.
Microsoft Windows 3.x
Windows 95
Windows NT

16 Mbytes; 24 Mbytes recommended for Windows NT and Windows
95, 8 Mbytes for Windows 3.x
20 Mbytes
PC with 486 processor, Pentium-class processor recommended

Platforms Supported.
Windows NT
Windows 95
Windows 3.x

Rational’s SQA LoadTest (Version 6.1)


Product Description. SQA LoadTest offers a method for load, stress, and
multiuser testing of Windows client/server applications. It is an automated
network testing tool for Windows that allows complete multimachine test
synchronization without programming. SQA LoadTest lets one test 32-bit
and 16-bit Windows NT and Windows 95 client/server applications and
16-bit Windows 3.x client/server applications.
With features such as virtual user testing, DataSmart™ Recording, Web-
Smart™ Playback, and HTTP Class Error Collection and Analysis, SQA
LoadTest provides a method of ensuring the quality of the HTTP Web serv-
ers. SQA LoadTest also provides a solution for complete cross-Windows
testing by enabling one to test 32-bit and 16-bit Windows NT and Windows
95 client/server applications, and 16-bit Windows 3.x applications.

Product Features.

• Provides DataSmart Recording™ which automatically creates data-
driven test scripts, enabling hundreds of virtual users to run the same
transaction with each user sending different data to the server, with-
out programming
• Offers WebSmart™ Playback to enable playback of recorded HTTP
sessions by automatically handling changing Web page content
• Offers tracking and analysis of HTTP errors during load and stress
testing of Web-based applications
• Delivers HTTP/HTTPS virtual user testing to provide virtual user load
and stress testing for HTTP/HTTPS Web servers
• Supports distributed testing on 32- and 16-bit applications on Win-
dows NT and Windows 95 and 16-bit applications on Windows 3.x cli-
ent/server machines for centralized control of multiple agent stations
from a single master station
• Delivers a 100% visual interface for creating client/server multima-
chine tests through a point-and-click interface — no programming is
required

• Provides an incremental loading option so one can start machines
during test execution to vary the system load without programming
• Integrates with SQA Suite, including SQA Robot and SQA Manager, to
deliver the one solution for testing cross-Windows client/server and
Web applications

System Requirements (Master System).


Memory: 32 Mbytes
Disk Space: 100 Mbytes

System Requirements (Agent System).


PC with 486 processor; Pentium processor recommended
For GUI playback: Microsoft Windows 3.x, Windows NT, or Windows
95
For Web Virtual User recording playback: Windows NT 4.0
Networks: Native support for TCP/IP, IPX/SPX, NetBIOS/NetBEUI

Platforms Supported.
Windows NT 4.0
Windows 95

Rational’s Visual Test (Version 4.0r)


Product Description. Visual Test is an automated testing tool that brings
new levels of productivity to developers and testers and makes it easier for
organizations to deploy mission-critical applications for the Microsoft Win-
dows 95 and Windows NT operating systems and for the World Wide Web.
Visual Test helps developers create tests for applications of virtually any
size and created with any development tool. Visual Test is integrated with
Microsoft Developer Studio, a desktop development environment, and has
extensive integration with Microsoft Visual C++.

Product Features.

• Provides language independent testing of 32-bit Windows applica-
tions, components, and dynamically linked libraries
• Automates the repetitive tasks of regression testing
• Uses TestBasic, a powerful automated test programming language,
which enables one to develop reusable, maintainable, and extendible
test assets
• Includes the Suite Manager so that from a single point one can orga-
nize tests and collect test results
• Offers redistributable components so that the tests designed and de-
veloped by QA engineers are redistributable, providing maximum cost
effectiveness

• Tests for control existence and location, retrieves property values,
and allows updating of properties providing thorough testing of the
application’s OLE controls (OCXs) and ActiveX controls
• Supports special procedures that allow one to distribute and monitor
testing tasks across a network
• Provides Microsoft Test 3.0 for testing of 16-bit Windows applications
running in any Windows environment

System Requirements.
A CD-ROM drive
VGA or higher-resolution video adapter
Microsoft mouse or compatible pointing device
Optional: NetBIOS-compatible network
8 Mb of memory for Windows 95
12 Mb for Windows NT workstation (16 Mb recommended)
16 Mb for Windows NT workstation on RISC (20 Mb recommended)
15 Mb of available hard-disk space
Personal computer with a 386DX/25 or higher processor or a Digital
Alpha running Microsoft Windows 95 or Windows NT workstation
3.51 or later operating system

Platforms Supported.
Microsoft Windows 95
Windows NT

Rational’s preVue Tool


Product Description. With preVue, Rational offers enterprise-wide testing
solutions. Products and services are provided that reduce risk, lower
costs, and increase user satisfaction when deploying applications for cli-
ent/server, X Window, ASCII, and Web environments.
The newest release of the preVue product line, release 5.0, offers graph-
ical analysis capabilities, client/server support for load testing, and the
new preVue-Web extension. preVue-Web allows performance testing of the
Web server with thousands of Web users.
preVue-C/S applies heavy user loads to database servers and applica-
tion servers to give accurate performance and scalability data. Under-
standing system limitations and pinpointing potential breakpoints before
they are seen by end users is only possible when a real-life user load is ap-
plied to a server.
preVue-Web records HTTP traffic, downloaded Java applets, user think-
time, number of bytes received, connects and disconnects generated by
any browser, running on any platform. By not requiring any recording soft-

ware to be installed on the client browser or server machines, the traffic re-
corded can be used to generate heavy user loads against a Web server even
as the environment changes. preVue-Web can record Internet and intranet
application traffic from any Windows, Windows 95, Windows NT, MacOS,
OS/2 or UNIX system. preVue-Web software is supported on all major UNIX
platforms and Windows NT.
preVue-X automates both GUI regression testing and load testing for X
Window applications and does not require special hooks into the applica-
tion or X libraries.
The tool operates at the X protocol level, between the X server and the
X client applications. It operates independently from the graphical user in-
terface (Open Look, Motif, CDE, etc.), toolkits, and network.
preVue-ASCII (Version 5.0) is a remote terminal emulator (RTE) that rep-
licates users running applications on a system under test (SUT). preVue-
ASCII automates multiuser testing of the applications by replacing both us-
ers and physical devices with software scripts that deliver an accurate
workload of user activity. It measures the quality and performance of the
applications under large user loads.

Product Features.
(preVue-C/S Features)
• Emulates 2-tier and 3-tier network traffic
• SQL, HTTP, and Tuxedo traffic is captured and automatically turned
into client-emulation scripts
• Supports testing of Oracle, Sybase, Informix, and SQL Server data-
bases
• Presentation-quality data analysis tools
• Real-time test monitoring
• Server response time measured under varying user loads
• Integrated reporting with performance monitoring tools
• Easily varies the user workload during playback
• Tests are independent of client operating system and hardware
environment
(preVue-Web Features)
• Measures Web server response times under large user loads
• Automatically captures and plays back HTTP traffic and download-
ing of Java applets
• Accurately emulates and time stamps concurrent responses to mul-
tiple HTTP requests
• Provides emulation of users of any Web browser, running on any cli-
ent platform
• Supported on all major UNIX platforms and Windows NT
• Integrates with preVue-C/S to test both database and Web servers
(preVue-X Features)
• A single tool for both GUI and performance testing
• Nonintrusive approach lets one “test what you ship”
• Tests all versions of UNIX, X server, GUI tool kits, etc.
• Automatically generates test scripts reproducing user inputs and
system responses
(preVue-ASCII Features)
• Cost-effectively and accurately emulates large user loads
• Tests any screen-based application in any operating system envi-
ronment
• Measures the user’s perception of performance–response times at
the user’s terminal
• Automatically generates test scripts reproducing user inputs and
system responses
• Provides the realism of actual users, yet tests are reproducible
• Support on all major UNIX platforms
• Uncovers quality and performance problems with new software re-
leases before the users see them
• Determines how the applications perform with new system hard-
ware or software upgrades
• Verifies the quality of applications following Year 2000 code changes
• Tests the capacity of the current system as the number of users in-
creases

System Requirements.
N/A

Platforms Supported.
UNIX
Windows NT

Rational’s PureCoverage Tool


Product Description. PureCoverage is a code coverage analysis tool
that helps developers and quality assurance engineers identify untested
code. With patented Object Code Insertion technology (OCI), PureCover-
age will check all parts of an application, including source code, third-
party libraries and DLLs, shared or system libraries and DLLs, for code
that has or has not been executed under test. PureCoverage provides ac-
curate and complete code coverage information needed to evaluate tests
and pinpoint parts of a program that are not being exercised in testing.

Product Features.

• Detects untested code with or without source code

• Detects untested code everywhere in UNIX applications
• C and C++ source code
• Third-party libraries
• Shared or system libraries
• Detects untested code everywhere in Windows applications
• C and C++, Visual Basic, and Java source code
• ActiveX, DirectX, OLE, and COM components
• Dynamic Link Libraries (DLLs)
• Third-party DLLs
• Windows operating system code
• Provides detailed coverage data per
• Function
• Line
• Basic block
• Application
• File
• Library
• Directory
• Intuitive Displays
• Outline view for efficient browsing of summary coverage informa-
tion
• Customizable views control data displayed and sorting criteria
• Point-and-click access to line-by-line coverage data via an Annotat-
ed Source view
• Robust reporting mechanism includes ability to
• Merge data over multiple runs and dynamically update coverage
statistics
• Merge data from multiple applications sharing common code
• Generate difference reports between multiple runs or executables
• Generate difference and low threshold reports
• Email nightly coverage data to development and testing teams
• Export data suitable for spreadsheets
• Integrated development solution with
• Purify for run-time error detection in UNIX environments
• ClearDDTS in UNIX environments for immediate coverage reporting
with PureCoverage output
• Microsoft Visual Studio in the Windows NT environment

Platforms Supported.
UNIX
• Sun SPARC workstations running SunOS 4.x, Solaris 2.3 - 2.6
• HP9000 Series 700/800 workstations running HP-UX 9.0.x through 10.30
Windows NT
• Intel architecture only
• Windows NT 3.51 or above
• Visual C++ 2.2 or above
• Visual Basic 5.0 or above
• Java applications run through the Microsoft Virtual Machine for Java

Rational’s Purify Tool


Product Description. Purify is a C and C++ run-time error and memory
leak detection tool, using patented Object Code Insertion technology
(OCI). It checks all application code, including source code, third-party li-
braries and DLLs, shared or system libraries and DLLs. Developers and qual-
ity assurance engineers can identify and eliminate run-time problems in all
parts of their applications. Purify is available on both UNIX and Windows NT
platforms.

Product Features.

• Pinpoints run-time errors with or without source code
• Detects errors everywhere in UNIX applications
• C and C++ source code
• Third-party libraries
• Shared or system libraries
• Detects errors everywhere in Windows applications
• C and C++ source code
• ActiveX, DirectX, OLE, and COM components
• Dynamic Link Libraries (DLLs)
• Third-party DLLs
• Windows operating system code
• Error checking categories include
• Heap-related errors
• Stack-related errors
• Memory leaks
• Windows handle leaks
• Windows COM-related errors
• Windows API errors
• Intuitive Display
• Outline view for efficient error message browsing
• Color support for identifying critical errors quickly
• Detailed reports include stack trace and source line display
• Point-and-click access to source code for editing
• Advanced Debugging Capabilities
• Pinpoints bug origin by stack trace and source line number
• Just-In-Time Debugging quickly isolates errors with the debugger
• Filters and suppressions provide control over error-checking data
• Integrated development solution with:
• Most common UNIX debugging tools
• Microsoft Visual Studio development environment

Platforms Supported.
UNIX
• Sun SPARC workstations running SunOS 4.x, Solaris 2.3 - 2.6
• HP9000 Series 700/800 workstations running HP-UX 9.0.x through
10.30
• SGI workstations running IRIX 5.3, 6.2, 6.3 and 6.4
Windows NT
• Intel architecture only
• Windows NT 3.51 or above
• Visual C++ 2.2 or above

Rational’s Quantify Tool


Product Description. Quantify is a performance analysis tool that gives
developers a way to identify application performance bottlenecks. Using
Rational’s patented Object Code Insertion (OCI) technology, Quantify
counts the individual machine instruction cycles it takes to execute an ap-
plication and records the exact amount of time the application spends in
any given block of code.
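
The underlying idea, attributing elapsed time to individual routines and blocks, can be sketched with a crude wall-clock decorator; this is an illustration of the concept, not Rational's instruction-counting instrumentation, and the profiled routines are hypothetical.

import time
from collections import defaultdict
from functools import wraps

_timings = defaultdict(float)
_calls = defaultdict(int)

def profiled(func):
    """Records wall-clock time spent in each decorated function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _timings[func.__name__] += time.perf_counter() - start
            _calls[func.__name__] += 1
    return wrapper

@profiled
def fetch_rows(n):
    return list(range(n, 0, -1))    # hypothetical data access

@profiled
def build_report(rows):
    return sorted(rows)             # hypothetical hot spot

if __name__ == "__main__":
    for _ in range(100):
        build_report(fetch_rows(10000))
    for name in sorted(_timings, key=_timings.get, reverse=True):
        print(f"{name:12s} calls={_calls[name]:4d} total={_timings[name]:.4f}s")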

Product Features.

• Pinpoints performance bottlenecks in all parts of an application, in-
cluding user functions, system calls, shared and third-party libraries
• Detects performance problems everywhere in UNIX applications
• C and C++ source code
• Third-party libraries
• Shared or system libraries
• Detects performance problems everywhere in Windows applications:
• C and C++, Visual Basic and Java source code
• ActiveX, DirectX, OLE, and COM components
• Dynamic Link Libraries (DLLs)
• Third-party DLLs
• Windows operating system code
• Presents performance data in graphical displays
• Offers multiple, complementary views of performance data
• Collects per-thread performance data
• Automatically compares runs for fast verification of performance im-
provements

Platforms Supported.
UNIX

• Sun SPARC workstations running SunOS 4.x, Solaris 2.3 - 2.6
• HP9000 Series 700/800 workstations running HP-UX 9.0.x through
10.30
Windows NT
• Intel architecture only
• Windows NT 3.51 or above
• Visual C++ 2.2 or above
• Visual Basic 5.0 or above
• Java applications run through the Microsoft Virtual Machine for
Java

Technology Builders’ Caliber–RBT


Product Description. Caliber–RBT is a software testing tool that validates
requirements by identifying all functional variations and logical inconsis-
tencies. It determines the necessary test cases by providing complete cov-
erage of the functional requirements defined. It manages the test library by
providing functional requirement coverage analysis and archiving both
new and existing test definition libraries. It also aids in project manage-
ment by providing quantitative measurements of the testing process.
The use of Caliber–RBT does not require that the user have good speci-
fications. In the past, the company has analyzed only two projects that had
good specs, and issues were found even in those. The use of Caliber–RBT
and the supporting Requirements Based Testing (RBT) process drives the
clarification of the application rules.
Typically, one is dealing with high-level (not testable) requirements and
more detail in the design documents. However, the information in the de-
sign documents is not generally readable by the subject matter experts
(SMEs). Caliber–RBT is used to clean up the wording. The SMEs review the
test cases and the Caliber–RBT-generated Functional Specification. Since it
is known that the set of tests generated by Caliber–RBT is mathematically
equivalent to the original source material, any issue the SMEs find with the
tests is really an issue with the specifications. Even where the specifica-
tions are fairly good, it has been found that the tests are easier to review
than the specifications.
The algorithms used by Caliber–RBT are based on those used by engi-
neers in testing integrated circuits. This lends strong, proven rigor to the
test case design process. It also results in highly optimized test libraries —
and more functional coverage for fewer test cases. In head-to-head compar-
isons, the tool generally covers twice as much function in half the test
cases — a four-to-one reduction for equivalent coverage. Caliber–RBT also
tells one where to insert diagnostic probe points in the code to ensure that
one is receiving the right answer for the right reason.

The coverage facility in Caliber–RBT is the equivalent of a code cover-
age monitor. Code coverage monitors tell one the percent and number of
statements and branches executed. The tool reports the percent of func-
tions tested.
The functional coverage facility is used to plan the testing effort. If time
is limited, it might not be possible to run all the tests prior to production
startup. In such cases, one can determine which subsets provide the most
coverage for the fewest tests. Of course, testing would continue after pro-
duction starts. However, one can maximize the testing within the time and
resource constraints.
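
That subset selection can be sketched as a greedy choice over a coverage table; this illustrates the planning idea only, not Caliber–RBT's algorithm, and the test cases and variations are hypothetical.

def select_tests(coverage_by_test, target_variations):
    """Repeatedly picks the test case covering the most still-uncovered
    functional variations until everything is covered or nothing helps."""
    remaining = set(target_variations)
    selected = []
    while remaining:
        best = max(coverage_by_test,
                   key=lambda t: len(coverage_by_test[t] & remaining))
        gained = coverage_by_test[best] & remaining
        if not gained:
            break
        selected.append(best)
        remaining -= gained
    return selected, remaining

if __name__ == "__main__":
    coverage = {
        "TC1": {"V1", "V2", "V3"},
        "TC2": {"V3", "V4"},
        "TC3": {"V4", "V5", "V6"},
        "TC4": {"V2", "V6"},
    }
    chosen, uncovered = select_tests(coverage,
                                     {"V1", "V2", "V3", "V4", "V5", "V6"})
    print("run first:", chosen, "still uncovered:", uncovered)
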
A factor often overlooked in selecting testing tools is the synergy between
them. For example, the best-selling test tools by far are the capture playback
tools. However, they also represent the largest volume of “shelfware” once
people realize how much work it is to build and maintain the test scripts.
There are two issues to deal with. The first is that, as long as the specifica-
tions keep changing, one cannot finalize the scripts. Either this happens too
late to code the scripts or the cost of scrap and rework is too great. Cali-
ber–RBT and the RBT process stabilize the specifications in a timely manner.
The second issue is the effort to code the scripts. It is normally estimated
that playback scripting entails at least 3 to 5 times the effort spent on design-
ing the test cases. Caliber–RBT significantly reduces the number of tests, re-
sulting in major savings in the playback scripting effort.
Caliber–RBT also has a significant impact on the acceptability of code
coverage monitors. Today, most applications go into production with less
than half of the statements and branches having been executed. When peo-
ple start using coverage monitors, their first pass numbers are usually
around 30%, or even less. The tool gives them bad news. People do not like
to use tools that give them bad news, especially bad news that managers
will see. They stop using such tools quickly. The test cases generated by
Caliber–RBT generally cover 70 to 95% of the code. In other words, people
obtain good news from the coverage monitor and are thus more willing to
use it.

Product Features.
(Validating Requirements Features)
• The system’s functional requirements are defined by an analyst to
Caliber–RBT via a series of Cause-Effect Graph statements. Cali-
ber–RBT then translates the input Cause-Effect Graph statements
into a set of “functional variations.” It combines these functional
variations into a suite of logical test cases. These functional varia-
tions and test cases can be reviewed by the analyst to verify the
completeness and accuracy of the requirements specification vs.
the Cause-Effect Graph input.

• The product provides feedback in the form of diagnostic messages
associated with the functional variations. Further analysis of
these diagnostics ensures the quality of the requirements specification
in terms of logical consistency, completeness, and lack of ambiguity.
(Verifying the Design and Code Features)
• Caliber–RBT analyzes each individual relation defined by an analyst
in order to identify the “functional variations.” Functional varia-
tions describe all the expected actions of the system (i.e., effects) if
it performs per the input specifications (i.e., causes). These func-
tional variations are then reduced to the minimum set of variations
necessary to detect a functional error in the software under test.
The minimum set of variations are then logically combined into a
suite of test case definitions such that each functional variation is
covered by at least one of the test cases. This suite of test cases,
then, may also be referred to as “the minimum set of tests neces-
sary to detect a functional error in the software.”
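The derivation of functional variations from cause-effect relations can be illustrated with a short sketch in Python. The loan-approval rule, the variable names, and the reduction step below are all invented for the example and are far simpler than the proprietary algorithms Caliber–RBT actually uses; the sketch only shows the general idea of enumerating cause combinations and keeping a smaller set that still exercises the relation.

```python
from itertools import product

# Hypothetical requirement (not from Caliber-RBT): approve a loan only when
# the applicant is credit-worthy AND the amount is within the lending limit.
def approve_loan(credit_ok: bool, within_limit: bool) -> bool:
    return credit_ok and within_limit

# Each combination of causes is one "functional variation" with its expected effect.
variations = [
    {"credit_ok": c, "within_limit": w, "expected_effect": approve_loan(c, w)}
    for c, w in product([True, False], repeat=2)
]

# A classic reduction for an AND relation: keep the all-true case plus each case
# where exactly one cause is false; the all-false case adds no new information.
reduced = [v for v in variations if v["credit_ok"] or v["within_limit"]]

for v in reduced:
    print(v)
print(f"{len(reduced)} of {len(variations)} variations kept")
```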
(Managing Test Cases Features)
• Produces a cross-reference showing which functional variations are
covered by each test case. This information is useful in isolating
failing functions and in subsetting the test library to test specific
functions.
• Can also be used to evaluate the functional coverage achieved us-
ing previously existing test libraries. This will typically be used to
demonstrate that Caliber–RBT will achieve more coverage using
fewer tests.
• Can also be used to generate the supplemental tests necessary to
bring a previously existing test library up to full functional cover-
age. This allows Caliber–RBT to specify what additional test cases
are required when the previously existing test library was created
without the benefit of Caliber–RBT usage, or when a functional
change has been made to the specifications for a test library previ-
ously created by Caliber–RBT. In other words, one only needs to in-
strument the supplemental tests in order to update the test library
instead of starting over from scratch.
(Project Management Features)
• Provides a quantifiable yardstick, via the functional variations, for
measuring the status of the testing effort. For example, a test status
report stating that testing is 92% finished is more meaningful than
the wishful thinking or speculation that other reports often repre-
sent.
• Provides a cost-effective approach to testing. Studies have shown
that 56% of all errors have their roots in poorly specified require-
ments. However, 82% of the total cost associated with system er-
rors have their roots in these same requirements. The cost
difference between detecting an error in the requirements at the
time of writing them (call it x) vs. detecting an error after the sys-
tem is in production has been measured at 270x.
• Allows the project manager the option of accelerating the project
when faced with tight schedules. This is possible because the test-
ing effort can be performed in parallel with the analysis, design, and
coding efforts. Also, earlier detection of errors will minimize the
costly and time-consuming rework effort associated with errors not
detected until the end of the development cycle.

System Requirements.
The 16-bit version of Caliber–RBT Release 5.3 requires Microsoft Win-
dows 3.1x running on a “386” (or, preferably, faster)-based
machine.
The 32-bit version of Caliber–RBT Release 5.3 runs twice as fast as the
16-bit version and requires Microsoft Windows 9x or NT running on
a “486” (or, preferably, much faster)-based machine.
Disk space — 6 megabytes (additional disk space will be required for
user data files).

Platforms Supported.
Microsoft Windows 3.1x
Microsoft Windows 9x
Microsoft Windows NT
While a specific OS/2 version of the code is not available, clients do
have Caliber–RBT under OS/2. A special install procedure has been
put together to make this easier.

AutoTester’s Test Library Manager


Product Description. Test Library Manager provides a long-term solution
to the issues of analysis, management, and maintenance of an automated
test library. Serving as a central repository on a network, Test Library Man-
ager consolidates the application tests and results for simplified access
and greater control. With Test Library Manager, one can preserve test in-
tegrity through centralized change and version control, perform global
modifications to tests based on application changes, and accumulate re-
sults for effective test analysis.
Test Station and Test Library Manager work in tandem to give one an
automated testing solution for a character-based PC, midrange, and
mainframe applications. Test Station is an integrated environment that
allows virtually anyone to develop, document, and execute a compre-
hensive automated test library. Test Library Manager is a central repository
for your test library components which provides change and
version control and global maintenance for tests across an entire
application development life cycle.

Product Features.

• Change and Version Control — Through user access rights and stan-
dard change control procedures, one controls access to the test li-
brary and monitors any changes made and who makes them. Change
control logs document all activity for a complete audit trail. For test-
ing multiple releases of applications, Test Library Manager stores cor-
responding versions of test files for quick access.
• Simplified Maintenance — Centralized control means simplified main-
tenance. The inevitable modifications which need to be made to the
test files are handled through Test Library Manager. Test library com-
ponents can be modified using the Test Library Manager’s built-in ed-
itors. Search and replace facilities provide global editing of data
values across selected tests or your entire library.
When tests must be modified due to changes in your application
screens and fields, Test Library Manager automatically identifies the
affected tests, thus eliminating time-consuming review of the test files.
• Analysis and Reporting — Test Library Manager’s current and histor-
ical reporting capabilities give one quick access to consolidated test
results whenever needed. For an analysis of testing over the life of an
application, Test Library Manager stores cumulative results including:
change logs for tracking all modifications made to the test library, test
and error logs for assessing system failure rates over time, and host
response logs for monitoring system performance over time
• Customized reporting options allow one to review and analyze only
the data which is critical to the tester. In addition, test results can be
exported for use with other tools including text editors, spreadsheets,
or databases for further analysis.

System Requirements (Test Library Manager).


IBM PC 386 or 100% compatible machines
MS-DOS or PC-DOS 5.0 or higher
1.5 Megabytes minimum storage requirements

System Requirements.
(Windows 3.X)
IBM PC-386 or greater and 100% compatibles
4 Megabytes minimum memory plus Windows system requirements

10 Megabytes minimum disk storage


Supports Wall Data Rumba (DOS version only), DCA Irma, Attach-
mate Extra! and IBM (DOS version only) PC3270 terminal emu-
lation
(Windows 95 and Windows NT)
IBM PC-486 or greater and 100% compatibles
4 Megabytes minimum memory plus Windows 95 or Windows NT
system requirements (V4.x only; V3.51 not supported)
8 Megabytes minimum disk storage per installed copy
(OS/2):
IBM PC-486 or greater and 100% compatibles
4 Megabytes minimum memory plus OS/2 system requirements
10 Megabytes minimum disk storage
Supports IBM OS/2 Communications Manager terminal emulation

Platforms Supported.
Windows 3.1x
Windows 95
Windows NT
OS/2 LAN MGr
Novell 3.12

AutoTester’s Test Station Tool


Product Description. AutoTester Test Station is designed specifically to
help increase the quality of character-based PC and host applications. Au-
toTester provides capture/replay style test creation, yet stores the tests as
well-documented, easily maintainable, object-aware tests. The product in-
cludes an easy-to-use menu-driven interface as well as a powerful com-
mand set for advanced scripting needs.
Test Station and Test Library Manager work in tandem to provide auto-
mated testing solutions for character-based PCs, midrange, and mainframe
applications. Test Station is an integrated environment that allows virtual-
ly anyone to develop, document, and execute a comprehensive automated
test library. Test Library Manager is a central repository for the test library
components, providing change and version control and global maintenance
for tests across an entire application development life cycle.

Product Features.

• Flexible Test Capture — Lets one build consistent, documented tests
that can be used over the life of the application from tester to tester

and release to release. With Test Station, tests can be captured at any
point in the software development process.
• Unattended Test Execution — Tests are intelligent scripts which pro-
vide dynamic verification of application responses against expected
results, duplicating expectations and decision points. When unexpect-
ed application responses occur during test execution, the tests identi-
fy those responses and react accordingly. Recovery options log the
details of application failures and then continue the testing process if
possible. In addition, Test Station’s playback synchronization pro-
vides proper test playback regardless of system performance.
• Reusability and Maintainability — Helps one develop an automated
test library that can be easily modified to account for new or different
application behavior over time. For ease of maintenance, tests can be
edited while in the application and then executed immediately, or they
can be edited off-line with Test Station or any text editor.
• Documentation and Reporting — Each step of every test is automati-
cally documented in English for ease of understanding. Tests are iden-
tified with detailed descriptions, test case numbers, and test
requirement identifiers for cross-reference purposes. After test execu-
tion, detailed results are available online or in report format for imme-
diate review and analysis.
• Scripting — Includes the AutoTester Scripting Language. Designed to
supplement the capabilities of Test Station, this language is a com-
mand set which can accommodate unique testing needs and provide
general task automation functionality.

System Requirements.
(Test Station)
IBM PC 386 or 100% compatible machines
MS-DOS or PC-DOS 5.0 or higher
326K conventional memory or 30K with LIM 4.0 compliant
expanded memory manager or DPMI 0.9 compliant extended
memory manager
10 Megabytes minimum storage requirements
Supports most network terminal emulation and communications
protocols, including IBM 3270, IBM 5250 (AS400), Hewlett-Pack-
ard 2392, Tandem 6530 and Unisys
(Windows 3.X)
IBM PC-386 or greater and 100% compatibles
4 Megabytes minimum memory plus Windows system require-
ments
10 Megabytes minimum disk storage

Supports Wall Data Rumba (Office 2.1A) and Attachmate Extra!
V4.3A terminal emulation
(Windows 95 and Windows NT-16-BIT)
IBM PC-486 or greater and 100% compatibles
4 Megabytes minimum memory plus Windows 95 or Windows NT
system requirements (V4.x, V3.51 not supported)
8 Megabytes minimum disk storage per installed copy
(OS/2): 2.11 and OS/2 warp
IBM PC-486 or greater and 100% compatibles
4 Megabytes minimum memory plus OS/2 system requirements
10 Megabytes minimum disk storage
Supports IBM OS/2 Communications Manager terminal emula-
tion

Platforms Supported.
Windows 3.1x
Windows 95
Windows NT — V4.x local testing 16-BIT applications only. No emula-
tion supported.

Mercury Interactive’s TestDirector Test Management Tool


Product Description. TestDirector™ helps corporate IS personnel plan
and organize the testing process. With TestDirector one can create a data-
base of manual and automated tests, build test cycles, execute tests, and
report and track bugs. One can also create reports and graphs to help re-
view the progress of test planning, execution, and bug tracking before a
software release.
When working with WinRunner, one has the option of creating tests and
saving them directly in the TestDirector database. One can also execute
tests in WinRunner and then use TestDirector to review the overall results
of a test cycle.
TestDirector provides test management for planning, executing, and
communicating quality control during the entire development process.
TestDirector allows testers to translate business processes into a test plan
that acts as a central point of control for all aspects of the test. With the
flexibility to support both manual and automated testing, TestDirector is
scalable to keep hundreds of users informed of project status.

Product Features.
(Test Planning Features)
• Intuitive user interface can be used easily by a broad range of users

• Both manual and automated tests are organized in the same visual,
hierarchical tree
• Quick access folders allow for easy navigation through the test plan
• Test plan steps are converted automatically to WinRunner test tem-
plates
• Existing documents in other formats — Microsoft Word, Microsoft
Excel, and others — can be included in TestDirector’s repository
• Complete control over access privileges for both groups and indi-
viduals
(Scalable Architecture Features)
• Collaborative groupware provides access to all tests and defects
with TestDirector’s central repository
• Single repository for all test data supports industry-standard data-
bases
• Complete control over access privileges
(Test Execution Features)
• Integration with WinRunner provides support for automated test-
ing within the TestDirector environment
• Batch or individual tests are automatically launched from TestDi-
rector; test results are then reported immediately back to TestDi-
rector’s repository
• All tests are clearly labeled as manual or automated
• Testers are guided step-by-step through manual tests, allowing them
to record actual behavior, compare it with expected results, and
mark each step as passed or failed as they perform each test step
• Defect reports can be created at any point during test execution,
importing information such as actual and expected results to the
defect report
• Failures of automated tests show the exact place where the error
occurred
(Defect Tracking Features)
• Defect reports include complete information, including the exact way
to reproduce the problem, who in the development group has respon-
sibility for correcting the problem, and where the defect occurred
• Remote Defect Reporter allows external users, such as off-site beta
testers, to report defects using the same structure
• Remote users are automatically notified of changes in the status of
relevant defect reports
• Defects are associated with the phase of the defect life cycle, the
tests which produced them, and the application function where
they occurred
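As a rough illustration of the kind of information such a defect record carries, the Python sketch below defines a minimal data structure with fields for reproduction steps, the originating test, and ownership. The field names and workflow states are hypothetical and do not reflect TestDirector's actual repository schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    """Illustrative defect record; field names are hypothetical, not TestDirector's schema."""
    defect_id: int
    summary: str
    steps_to_reproduce: List[str]      # exact way to reproduce the problem
    detected_in_test: str              # test that produced the defect
    application_area: str              # where in the application it occurred
    assigned_to: str                   # who is responsible for the fix
    lifecycle_phase: str = "New"       # e.g., New -> Open -> Fixed -> Closed
    status_history: List[str] = field(default_factory=list)

bug = DefectReport(
    defect_id=101,
    summary="Order total ignores discount code",
    steps_to_reproduce=["Add item to cart", "Apply code SAVE10", "Check total"],
    detected_in_test="checkout_discount_case_3",
    application_area="Checkout",
    assigned_to="dev.team.payments",
)
bug.status_history.append("Assigned for fix")
print(bug.defect_id, bug.lifecycle_phase, bug.assigned_to)
```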

(Reporting and Analysis Features)
• Fully-customizable reports using ReportSmith, Crystal Reports, Mi-
crosoft Excel, and other third-party reporting tools are supported
• Reports can be invoked at any stage of the testing process
• Extensive built-in reports can be filtered by subject, status, assign-
ment, history, designer, and more

System Requirements.
Minimum 16 MB RAM
Minimum 40 MB disk space

Platforms Supported.
Oracle
Sybase
Microsoft SQL Server
Microsoft Access

Mercury Interactive’s WinRunner Functional Testing Tool for Windows


Product Description. WinRunner® provides a way to test client/server
GUI applications, and with WinRunner’s RapidTest™ scripting, new testers
can overcome the initial barriers to test automation by giving the test
script development process instant momentum. Application testers and
developers can now get high-quality software without compromising on-
time deployment for Windows, Windows 95, and Windows NT.
RapidTest automatically creates a full suite of GUI tests from the appli-
cation. RapidTest gets users started fast: instead of creating their first set
of tests manually, users can rely on a Wizard to generate test scripts
directly from the application.
Today, test automation has successfully replaced manual test execution
with automated test execution. But when it comes to building the automat-
ed tests, most conventional testing tools still rely exclusively on one-line-
at-a-time scripting techniques like programming and object-oriented re-
cording. These conventional tools merely transferred the burden from
manual testing to manual test development.

Product Features.
(Visually Integrated Scripting Features)
• Visual testing for powerful, flexible test creation productivity
• Interpreted development workspace with test script interpreter
and multiple document interface for simple management of script
development

• Powerful script language for testing everything that needs to be tested
• Exception handling with built-in routines for automatic recovery
• Powerful script debugger to quickly “test” and fix scripts when
problems occur
• Flexible verification to know exactly if the application is working or
not
• New visual reporting that integrates high-level summary reports
with detailed records for every test verification result, in a new, in-
teractive reporting tool
(Script Mapping Features)
• Handles application changes automatically, using Script Mapping
for Adaptable and Reusable Tests
• Learns the application hierarchy, organizing objects by window. It
also handles independent GUI maps for separate applications si-
multaneously, and can invoke them automatically during testing.
• GUI map provides a single point of control for multiple tests by up-
dating one attribute of an object in the map — its effect updates all
scripts automatically.
• Includes an interactive editing tool for viewing or modifying the
map file. Users can choose which attributes to track for which ob-
jects, and what to name the objects in the test script, affording flex-
ibility for defining how WinRunner looks for and identifies
application objects.
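The single-point-of-control idea behind a GUI map can be sketched conceptually: scripts refer to stable logical names, while the physical recognition attributes live in one shared table. The Python below illustrates that principle only; the attribute names and the click function are invented and do not represent WinRunner's GUI map file format or API.

```python
# Conceptual sketch of a GUI map: scripts use logical names, and the map
# holds the physical attributes used to locate each object at replay time.
gui_map = {
    "LoginWindow.OkButton": {"class": "push_button", "label": "OK"},
    "LoginWindow.UserField": {"class": "edit", "attached_text": "User name:"},
}

def click(logical_name: str) -> None:
    """Stand-in for a replay engine: look up the physical attributes and 'click'."""
    attrs = gui_map[logical_name]
    print(f"clicking object with attributes {attrs}")

# Every test script calls click("LoginWindow.OkButton"); if the button's label
# changes to "Log in", only the map entry changes, not the scripts.
gui_map["LoginWindow.OkButton"]["label"] = "Log in"
click("LoginWindow.OkButton")
```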
(Custom Control Features)
• Integrated object support for major development tools and indus-
try-standard controls
• Open API for custom controls to enable users to define their own
testing support for objects
• Analog recording and text recognition as an alternative for verifica-
tion
(Powerful Client/Server GUI Test Automation)
• Provides a new, fully documented open testing API, enabling users
to create full automated testing support for custom objects — cap-
ture, replay, and verification
• Supports point-to-point mouse movements, bitmap comparisons,
or bitmapped tests based on fixed window coordinates
• Can automate tests that depend on movement between fixed win-
dow coordinates, such as in graphical or drawing programs and
programs that do not have GUI objects
• Text recognition makes it possible to read text displayed by these
objects as alphanumeric data and provides the ability to perform
key test operations when hooks are not available to retrieve text
data from displayed objects.

System Requirements.
Minimum 16 MB RAM
Minimum 16 MB disk space

Platforms Supported.
Windows 95
Windows NT

Mercury Interactive’s XRunner Functional Testing Tool for UNIX


Product Description. XRunner® offers a tool set for GUI test automation.
Its fully integrated Visual Testing™ environment incorporates simplified
test script management, point-and-click selection, interactive debugging,
etc. To help one get started, XRunner’s Script Wizard learns the application
by navigating its way through all available UI paths to create a complex test
script suite. With XRunner, one is guaranteed that GUI application testing
is fast, reliable, and complete across all UNIX platforms.
XRunner extends a set of automated testing utilities to ensure GUI quality,
reducing the time and expertise needed for creating, running, and maintaining
automated tests. XRunner runs on all UNIX platforms and may be
ported for testing across multiple environments such as Microsoft's
Windows 3.x, Windows 95, and Windows NT. One can develop a test once on one
platform and replay it on another for added versatility.

Product Features.
(Automated GUI Regression Testing Features)
• XRunner runs on all UNIX platforms and may be ported for testing
across multiple environments
• RapidTest™ Script Wizard automatically learns the entire applica-
tion and generates tests for unattended regression testing
• Visual Testing environment for combining object-oriented recording,
point-and-click test generation, and test script logic into a single environment
• Flexible verification and replay options
• Sophisticated reporting tools
• Portability across multiple platforms and more
(Automatic Test Generation Features)
• A GUI regression test that captures a baseline checkpoint of GUI at-
tributes for every window that opens
• A bitmap regression test that compares bitmaps between versions
by creating a screen capture for every window that opens

• A user interface (UI) test that checks adherence to X Window UI
conventions for every window that opens
• A template test that creates a test framework for future use
(Fully Integrated Scripting Environment Features)
• Provides flexibility to create test scripts as one uses the application
and offers point-and-click, recording and programming
• When recording actions performed on a widget, such as selecting an item
from a list or pressing a specific button, XRunner creates a context-
sensitive test script. XRunner is smart enough to select the item or
press the button even when the UI changes.
• XRunner also supports analog test scripts when the tests are de-
pendent upon movements between fixed window coordinates and
do not have individual GUI objects. An analog test script will replay
exact mouse movements or clicks and keystrokes, such as clicking
the left mouse button.
• One can also use the programming method when enhancing tests
created by recording, adding loops for flow control, setting and us-
ing variables, using conditional branching, filtering, and report
messaging. XRunner’s Test Script Language (TSL) is based on the
C programming language with added testing functions. By imple-
menting the programming test method, users can tailor their tests
to meet specific functions.
• XRunner’s test script interpreter provides test development power,
since it supports simultaneous point-and-click test development,
recording of user operations, and enhanced test script program-
ming.
• To create the best possible script based on the testing require-
ments, XRunner fully supports mixing test script methods rather
than requiring one to use them separately. It also provides an inter-
active debugger that enables one to “test the tests” for optimal per-
formance.
(Flexible Verification Features)
• Using a point-and-click verification method of selecting the objects
on the screen, one chooses the type of checkpoint to insert in the
test script.
• Text recognition is a verification option exclusive to XRunner.
• XRunner is the only tool with a complete Optical Character Recog-
nition (OCR) engine to recognize text, such as checking console
windows for error messages.
• XRunner can also verify images, objects, files, and tables. For exam-
ple, XRunner supports tables in Oracle Developer/2000 applica-
tions.
• Likewise, XRunner provides open systems extensions that will al-
low one to launch shell scripts, system utilities, and tools.
• XRunner can also verify Motif programs using WidgetLint, a set of
verification functions used to test Motif applications. XRunner de-
tects widget color and attachment problems, as well as any unman-
aged widgets to help one effectively debug Motif applications. Its
open API allows one to implement WidgetLint verification func-
tions.
(Enhanced Replay Modes)
• XRunner provides several test script replay modes. Built-in auto-
matic and custom synchronization allows one to run tests unat-
tended to maximize the application development time.
• In addition, XRunner can run in background mode, freeing up the
workstation during the day. One can continue writing code while
XRunner executes test scripts.
• Provides exception handling to keep test execution on track. Excep-
tion handling offers automatic built-in recovery including
• Overcoming unexpected conditions and resuming test execution
without halting the test
• Invoking a series of procedures to dismiss unexpected objects
• Rewinding test script execution to a previous step
• Navigating elsewhere in the application
• Recording errors in the test log along with steps taken to resume
testing
• Exiting the test when encountering certain surprise conditions
• XRunner also enables one to define error recovery routines to
ensure reliable replay and keep tests from coming to an abrupt
halt.
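The recovery behavior described above, logging an unexpected condition, running a recovery routine, and resuming the run rather than halting, can be sketched generically in Python. This is a conceptual illustration only; the step and recovery functions are invented, and the sketch is not XRunner's exception-handling mechanism.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("replay")

def run_with_recovery(test_steps, recover):
    """Run each step; on an unexpected error, log it, attempt recovery, and continue."""
    for step in test_steps:
        try:
            step()
        except Exception as exc:          # stand-in for an unexpected dialog or condition
            log.error("step %s failed: %s", step.__name__, exc)
            recover()                     # e.g., dismiss pop-ups, return to a known screen
            log.info("recovered; resuming with next step")

def step_open_form():   print("open form")
def step_save_record(): raise RuntimeError("unexpected confirmation dialog")
def step_log_out():     print("log out")

run_with_recovery([step_open_form, step_save_record, step_log_out],
                  recover=lambda: print("dismiss dialog, navigate home"))
```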
(Interactive Reporting Tool Features)
• XRunner’s interactive reporting tool combines a high-level view
with detailed statistics about what bugs were found by the test and
where.
• Includes the ability to drill down errors into greater detail, pinpoint-
ing the exact line in a test script. Both graphical and textual reports
chart the testing results for further analysis.
• Interactive reporting identifies bugs that were found by the test,
both in summary and in detail. A color-coded tree shows all execut-
ed tests along with their results.
(Script Mapping Features)
• XRunner handles application changes automatically, using Script
Mapping for Adaptable, Reusable Tests (SMARTest), which auto-
matically maintains object-specific data independent of individual
scripts.
• XRunner’s SMARTest monitors GUI application changes automati-
cally so that the tests will run correctly.

• Automatically creates a SMARTest GUI map for the tested application.
When SMARTest learns the application hierarchy, it captures
key application attributes and organizes objects hierarchically, win-
dow by window. SMARTest guarantees test scripts will work cor-
rectly when the application changes without requiring rework.
(Portability Features)
• XRunner’s TSL is designed to port tests across all UNIX and Mi-
crosoft Windows (Windows 3.x, Windows NT, Windows 95) plat-
forms.
• It provides a scalable load testing solution for managing the risks of
client/server systems.

System Requirements.
16MB minimum RAM
Approximately 100 MB disk space

Platforms Supported.
UNIX

Mercury Interactive’s LoadRunner Load/Stress Testing Tool


Product Description. LoadRunner® is an integrated client, server, and
Web load testing tool. It provides a scalable load testing solution for man-
aging the risks of client/server systems. Using a minimum of hardware re-
sources, LoadRunner provides consistent, repeatable, and measurable load
to exercise a system. It exercises the client, server, and Web system just as
real users do. It contains a single point of control for client, server, and Web
load testing and supports hundreds or even thousands of virtual users.
By automating both client and server load testing from a single point of
control, LoadRunner helps developers get an accurate view of system be-
havior and performance throughout the application development life cycle.

Product Features.
(Client Load Testing Features)
• Exercises the system, driving real applications through the virtual
clients simultaneously from a single point of control
• Includes an integrated set of new load testing components: Virtual
User Generator, ScenarioWizard, Visual Controller, and Load Ana-
lyzer
• Synchronizes all virtual users to create peak loads, pinpoint bottle-
necks, and isolate problems
• Records test scripts automatically at GUI, SQL, Web, and terminal
levels
• Aids in isolating problems at client, server, and network level
(Server Load Testing Features)


• Supports both two-tier and three-tier client/server architectures
• Generates an abstract data file of virtual users for nonprogrammers
• Generates a C code file of virtual users for programmers
• Verifies data retrieved from the server
• Supports multiple client/server protocols
(Data Analysis Features)
• Presents graphs and reports for analyzing load testing data
• Compares data across platform configurations and virtual users to
help isolate and pinpoint problems
• Displays both code and GUI
• Measures performance “end-to-end” from client through applica-
tion server and to the database
• Handles GUI changes automatically by maintaining scripts at object
level
(Web Load Testing Features)
• Supports HTTP, HTML and Java applets
• Defines transactions automatically for individual and groups of
HTTP messages
• Supports GET, POST, CGI messages
• Creates test scripts by recording the actions of a user or user
groups surfing a Web site
• Determines the maximum number of concurrent users a Web site
can handle
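The core idea of Web load testing, many concurrent virtual users issuing timed HTTP requests so that response times can be measured under load, can be sketched with the Python standard library. The sketch is purely illustrative: the target URL, user count, and request count are placeholders, and it bears no relation to LoadRunner's Vuser scripts or its Controller.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"        # placeholder target, not a real system under test
VIRTUAL_USERS = 10                 # tiny numbers for illustration only
REQUESTS_PER_USER = 5

def virtual_user(user_id: int) -> list:
    """Each 'virtual user' issues a few GETs and records the response time of each."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        except OSError:
            pass                   # a failed request is simply not timed in this sketch
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    all_timings = [t for result in pool.map(virtual_user, range(VIRTUAL_USERS))
                   for t in result]

all_timings.sort()
print(f"requests completed: {len(all_timings)}")
print(f"median response time: {all_timings[len(all_timings) // 2]:.3f}s")
print(f"worst response time:  {all_timings[-1]:.3f}s")
```

Ramping the virtual-user count upward while watching the response-time figures is, in miniature, how the maximum sustainable number of concurrent users is estimated.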
(RTE Load Testing Features)
• Records user interactions with character-based applications to cre-
ate test scripts
• Inserts synchronization points automatically on unique text or cur-
sor positions on the screen
• Generates a log file for debugging scripts and scenarios
• Replays RTE virtual user sessions just like a movie recording
• Verifies values as defined by row, column, or screen while the server
is under peak load
• Monitors data conditions from the server visually with an online
server monitor
• Exports data to standard formats (Microsoft Word, Microsoft Excel,
Lotus 1-2-3, email, etc.)

System Requirements.
Controller
32 MB RAM
70 MB disk space
Virtual Users
Minimum 2 MB per virtual user

256 MB/100 virtual users


Disk space: 10 MB each

Platforms Supported.
Windows 3.x
Windows NT
Windows 95
Sun OS, Solaris, HP-UX, IBM AIX, NCR

Client/Server Protocols Supported.


SQL: Oracle OCI, Oracle UPI, Sybase dbLib, Sybase CtLib, Informix
I-NET
ODBC
TP Monitors: Tuxedo
Messaging: WinSocket
Web: HTTP, Java
Character-based: TTY, IBM 5250, IBM 3270
Applications: SAP R/3, Oracle Financials, PeopleSoft, Baan

Mercury Interactive’s Astra Site Manager Web Site Management Tool


Product Description. Astra SiteManager™ is a comprehensive, visual
Web site management tool designed to meet the challenges faced by Web-
masters and business managers of rapidly growing Web sites with changing
contents and shapes. Astra SiteManager scans the entire Web site, high-
lighting functional areas with color-coded links and URLs, to unfold a com-
plete visual map of the site. It pinpoints broken links or access problems,
compares maps as the site changes, identifies key usage patterns for im-
proving Web site effectiveness, and validates dynamically generated pag-
es.
Mercury Interactive’s Astra SiteManager offers a single solution for gain-
ing control of the Web site. From one page to the entire Web site, Astra Site-
Manager automatically scans and creates a visual map of the site's URLs
and their connections. This visual map includes all Web objects — Com-
mon Gateway Interface (CGI) scripts, Java applets, and HTML. Unlike other
products which create complicated tree displays, Astra SiteManager maps
the entire site in an easy-to-read format with map properties and helpful
URL statistics. If one needs to focus on specific components, one can zoom
in to find the information needed.
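The mapping idea, start at one page, follow links within the site, record which pages link where, and flag links that fail, can be sketched with the Python standard library. The sketch below is illustrative only: the start URL is a placeholder, the crawl is capped at a handful of pages, and it handles none of the CGI, Java applet, or dynamically generated content that a product like Astra SiteManager addresses.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

START = "http://example.com/"      # placeholder starting page

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, limit=20):
    """Breadth-first crawl within one host; return (page -> links) and broken URLs."""
    seen, queue, broken, site_map = set(), [start], [], {}
    host = urlparse(start).netloc
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode(errors="replace")
        except (OSError, ValueError):          # unreachable page or malformed URL
            broken.append(url)
            continue
        parser = LinkCollector()
        parser.feed(html)
        links = [urljoin(url, link) for link in parser.links]
        site_map[url] = links
        queue.extend(link for link in links if urlparse(link).netloc == host)
    return site_map, broken

site_map, broken = crawl(START)
print(f"pages mapped: {len(site_map)}, broken links found: {len(broken)}")
```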

Product Features.
(Visual Web Display Features)
• View an entire Web site on screen

• Select from a variety of navigational options including zoom-in,
zoom-out, window panning, instant focus, and moving viewpoint
• Invoke filters for hiding irrelevant URLs when trouble-shooting or
creating what-if scenarios
• Print all Astra SiteManager map information for off-line work
• Choose textual view using split-screen display
(Action Tracking Features)
• Display usage patterns for instant analysis and Web site optimization
• Evaluate how users navigate the site using color-coded arrows and
numerical statistics
• View hits per path
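Usage-pattern analysis of this kind is, at bottom, a matter of aggregating a Web server's access log. The Python sketch below counts hits and error responses per path from a few invented log lines in Common Log Format; it is only an illustration of the underlying idea and says nothing about how Astra SiteManager itself processes log files.

```python
import re
from collections import Counter

# Hypothetical access-log lines in Common Log Format; a real analysis would
# read the Web server's own log file instead of this in-memory sample.
SAMPLE_LOG = """\
10.0.0.5 - - [17/Mar/2000:10:01:12 -0500] "GET /index.html HTTP/1.0" 200 1043
10.0.0.7 - - [17/Mar/2000:10:01:15 -0500] "GET /products.html HTTP/1.0" 200 2310
10.0.0.5 - - [17/Mar/2000:10:01:20 -0500] "GET /products.html HTTP/1.0" 200 2310
10.0.0.9 - - [17/Mar/2000:10:01:27 -0500] "GET /missing.html HTTP/1.0" 404 321
"""

request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3})')

hits, errors = Counter(), Counter()
for line in SAMPLE_LOG.splitlines():
    match = request_re.search(line)
    if match:
        path, status = match.group(1), match.group(2)
        hits[path] += 1
        if status.startswith(("4", "5")):      # client or server error responses
            errors[path] += 1

for path, count in hits.most_common():
    print(f"{count:4d} hits  {errors[path]:2d} errors  {path}")
```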
(Link Analysis Features)
• Detect broken links or pages quickly and easily
• Repair broken links instantly using any HTML editor, Netscape Nav-
igator Gold or Notepad
• Select an updated link table to confirm that links are repaired
• Graphically compare previous Web site layouts to monitor all new,
updated, deleted, or modified URLs
(Dynamic Scanning Features)
• See and validate pages that are generated on the fly
• Map not only static links, but dynamically generated pages that rep-
resent information contained in a database or obtained in real-time
• Ensure new transactions are working properly and database con-
nectivity is maintained for better customer service
(Plug-in API Features)
• Use Java, C++, or Visual Basic to create additional plug-in modules
for specific Web management needs or authoring tool environ-
ments
• Integrate Astra SiteManager into the software applications

System Requirements.
486 or Pentium-based machine
6 MB RAM minimum
5 MB hard disk space
TCP/IP dial-up or LAN connection
Any Web browser (Netscape Navigator, Microsoft Internet Explorer,
etc.)
Access to organization’s Web site and one of its log files is preferred,
but not required

Platforms Supported.
Windows 95
Windows NT 3.51 or 4.0