How To Test Software
Tanja Toroi
Report
Department of Computer Science and Applied Mathematics
University of Kuopio
March 2002
Contents
1 INTRODUCTION
3 TEST MANAGEMENT
LITERATURE
APPENDICES
A Testing business component systems
B Sample test plan
C How to derive test cases from user requirements
D Rational Test Tools
E Software testing tools
F Rational Unified Process
1 Introduction
This report reviews testing results of the PlugIT/TEHO project's first phase. Testing and inspec-
tion processes are described in Chapter 2. Test management is shortly described in Chapter 3. Test
case creation by white box and black box testing methods is described in Chapter 4. In Chapter 5
software testing tools are handled. Testing component systems is described in Chapter 6 and fi-
nally in Chapter 7 some further research problems are mentioned.
2 Testing and inspection processes
Testing and inspection processes are organized in levels according to the V-model (see Figure 1).

[Figure 1. The V-model: requirement specification is verified by acceptance testing, architecture design by integration testing and component design by component testing; each design phase includes test planning, and coding sits at the bottom of the V.]

The V-model integrates testing and construction by showing how the testing levels (on the right) verify the constructive phases (on the left). In every construction phase a corresponding test plan is made. The requirement specification document and the design documents are inspected carefully before the coding phase begins. Checklists are used in the inspections.
Requirements elicitation involves listening to users' needs or requirements, asking questions that assist the users in clarifying their goals and constraints, and finally recording the users' viewpoint of the system requirements. The requirements give us guidelines for the construction phases and provide the acceptance criteria for the software or the software component. The acceptance criteria are used in acceptance testing. In the requirements elicitation phase the acceptance test plan is made. The acceptance test plan supports testing when users test the system and check whether their requirements are fulfilled.
Requirements analysis is the stage where the software engineer thinks about the information obtained during the first stage. He transforms the users' view of the system into an organized representation of the system and writes clear statements of what the system is expected to provide for the users. The end product is called a requirement specification document. In the requirements analysis phase the system test plan is made; it is used in the system testing phase. During requirement specification, use cases and conceptual models (for example a class diagram) are designed.
The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components and the relationships among them [BaC98]. The design of a software architecture can be divided into
• functionality-based architecture design, which decomposes the architecture into the needed components and the relationships between them,
• evaluation of the quality attributes of the architecture, and
• transformation of the software architecture in order to improve quality [Bos00].
Component design means that the structure of each individual component is designed. A component structure is presented in Figure 2. In the component design phase all the services the component offers are defined accurately. The interface includes each operation the component has, the parameters of each operation and, for each parameter, a type and a direction (in or out). The component cannot live in isolation; it must collaborate with other components. So the dependencies on the other components must be designed, too. A component's functional logic is often object-oriented; thus we must design classes and the relationships between them. The functional logic is separated from the interface and dependency implementation; here proxies are needed. The component execution environment is defined, too. The component test plan is made in this phase.
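This interface structure (operations with typed, directed parameters, in the spirit of the appendix's Interface = (operation, (parameter, type, [in|out])*)*) can be written down as a small data model. The sketch below is illustrative only: the operation and parameter names are invented, and the report itself prescribes no particular notation.

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    name: str
    type: str
    direction: str  # "in" or "out": the direction of the parameter

@dataclass
class Operation:
    name: str
    parameters: list

# A hypothetical operation of a component interface:
find_results = Operation("findLabTestResults", [
    Parameter("patientNumber", "int", "in"),
    Parameter("results", "list", "out"),
])

def directions(op):
    """Return the direction (in/out) of each parameter of an operation."""
    return [p.direction for p in op.parameters]
```

A component's full interface would then be a list of such operations, one entry per service the component offers.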
[Figure 2. Component structure: interfaces, functional logic and dependencies.]
In the coding phase the software components are implemented according to the component design document. They can be implemented with traditional, object-oriented or component-based techniques.
In the component testing phase each individual component is tested on the source code level. The term component here refers to a lowest-level software module. Each programmer uses white box testing methods and tests his own components. Test stubs and drivers are needed to simulate components which are not yet ready. A test stub is a component that accepts all the input parameters and passes back suitable output data so that the testing process can continue. The stub is called by the component under test. A test driver is a component that calls the components under test and makes them executable. Component testing is based on the component design documents.
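As a minimal sketch of these two roles (the component and its numbers are invented for illustration, not taken from the report), a stub replaces a collaborator that is not yet ready, and a driver makes the component under test executable:

```python
def payment_component(age, lookup_discount):
    """Component under test: computes a payment with the help of another
    component, which is here replaced by a stub."""
    base = 140 if age > 50 else 100
    return base - lookup_discount(age)

def discount_stub(age):
    # Test stub: accepts the input parameter and passes back suitable
    # output data so that the testing process can continue.
    return 0

def test_driver():
    # Test driver: calls the component under test and makes it executable.
    return payment_component(51, discount_stub)
```

When the real discount component is finished, the stub is simply replaced by it and the same driver can be rerun.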
In the integration testing phase the previously tested software components are integrated and tested incrementally. Integration is done either top-down or bottom-up. In top-down integration the top-level component is first tested by itself. Then all the components called by the components already tested are integrated and tested as a larger subsystem. The approach is reapplied until all the components have been integrated. In bottom-up integration each component at the lowest level of the system hierarchy is first tested individually. Then the components that call the previously tested components are tested. This principle is followed repeatedly until all the components have been tested. The bottom-up approach is preferred in object-oriented development and in systems with a large number of stand-alone components. The focus of integration testing is on the co-operation of components and on their interfaces. Test stubs and drivers are needed to simulate the components that have not yet been tested. When distributed components are integrated, business components are formed. When business components are integrated, business component systems are formed (see Chapter 6). Integration testing is based on the architecture design documents.
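The bottom-up order described above can be sketched as follows. The component names and the call relationships (A calls B and C, B calls D) are invented for illustration:

```python
# Call dependencies between components: caller -> called components.
calls = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

def bottom_up_order(calls):
    """Bottom-up integration order: a component is tested only after all
    the components it calls have been tested."""
    order, done = [], set()
    def visit(component):
        for callee in calls[component]:
            if callee not in done:
                visit(callee)
        if component not in done:
            order.append(component)
            done.add(component)
    for component in calls:
        visit(component)
    return order
```

For the dependencies above this yields an order in which D precedes B, and B and C precede A; a top-down strategy would simply traverse the same structure from A downwards, using stubs for the not-yet-integrated callees.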
In the system testing phase the tested business components are integrated and the system is tested as a whole business component system. The goal of system testing is to demonstrate to what degree the system does not meet its requirements and objectives. System testing is based on the requirements document; the requirements must be specific enough to be testable. System testers can be both software engineers and actual users. System testing can take several different forms, such as performance testing, volume testing, load/stress testing, security testing and configuration testing.
In the acceptance testing phase customers and users check that the system fulfills their actual needs and requirements. Usability is taken into consideration in this phase. Acceptance testing is based on the requirement specification. Acceptance testing can take the form of beta or alpha testing. Beta testing is performed by real users in a real environment. Alpha testing is performed by real users at the developing company. In the integration, system and acceptance testing phases black box testing methods are used.
The whole testing process depends on who is testing the components. The process is different if the tester is the provider, the integrator or a customer of the component.
3 Test management
Systematic testing has to be planned and executed according to the plan and it has to be docu-
mented and reported. There is an example of the test plan document in Appendix B.
Test case specification defines test cases, their inputs, expected outputs and execution conditions.
Test cases can be created using white box or black box techniques (see Chapter 4).
A test report (bug report) is generated after each test. It describes the defects found and the impact of each defect (critical, major, minor, cosmetic). Critical defects are always fixed; they stop the user from using the system further. Major defects stop the user from proceeding in the normal way, but a work-around exists; they can also appear in many components, and the engineer needs to correct all of them. Minor defects cause inconvenience but do not stop the user from proceeding. Cosmetic defects, for example aesthetic issues, can be left unfixed if there is not enough time or resources. The test report also describes the inputs, expected outputs and actual outputs, the test environment, the test procedure steps, the testers, repeatability (whether the test was repeated; whether the defect occurs always, occasionally or just once) and other additional information.
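The contents of such a test report can be sketched as a simple record. Only the field names below follow the report's list; all the field values are invented examples:

```python
from dataclasses import dataclass

@dataclass
class TestReport:
    defect: str
    impact: str           # critical, major, minor or cosmetic
    inputs: str
    expected_output: str
    actual_output: str
    environment: str
    procedure_steps: list
    testers: list
    repeatability: str    # occurs always, occasionally or just once

# A hypothetical report entry:
report = TestReport(
    defect="Saving a record crashes the client",
    impact="critical",
    inputs="patient number 1234",
    expected_output="record saved",
    actual_output="application exits",
    environment="test workstation",
    procedure_steps=["open form", "fill fields", "save"],
    testers=["tester A"],
    repeatability="always",
)
```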
…testing tools for the methods presented here are available on the market. An example of granular test cases is briefly described in Chapter 4.3. The complete example can be found in Appendix C.
[Figure 3. An example decision structure for computing a payment from a customer's sex and age: depending on the branches taken (sex == male, and age ranges such as 17 < age < 31, 35 < age < 51 and age > 50), the payment is set to 100, 110 or 140.]
The adequacy of control-flow testing is measured in terms of coverage. The coverage indicates
how extensively the system is executed with a given set of test cases. There are five basic forms of
coverage: statement coverage, branch coverage, condition coverage, multiple condition coverage
and path coverage.
• In statement coverage each statement is executed at least once. This is the weakest criterion and does not normally ensure faultless code. On the other hand, even 100% statement coverage is usually too expensive and hard to achieve, especially if the source code includes "dead code" (code that can never be reached).
• In branch coverage each statement is executed at least once and each decision takes on all possible outcomes at least once. The branch coverage criterion is stronger than statement coverage, because if all the edges in a flow graph are traversed then all the nodes are traversed as well.
• In condition coverage each statement is executed at least once and each condition in a decision takes on all possible outcomes at least once. Complete condition coverage does not necessarily imply complete branch coverage, so neither criterion subsumes the other.
• In multiple condition coverage each statement is executed at least once and all possible combinations of condition outcomes in each decision occur at least once. This is the strongest criterion and forces one to test the component with more test cases and in more detail than the other criteria.
• In path coverage every possible execution path is traversed. Exhaustive path coverage is generally impractical, and often impossible, because loops multiply the number of execution paths.
The essential differences between statement coverage, branch coverage, condition coverage and multiple condition coverage can be summarized by the following example (see Figure 4).

[Figure 4. A single decision with the compound condition (sex == male) && (age > 50); on the yes-branch payment = 140, on the no-branch nothing is done.]
Each coverage criterion requires a minimum number of test cases.
• Statement coverage needs only a single test case:
− (sex = male, age = 51).
• Branch coverage can be reached by two test cases:
− (sex = male, age = 51) for the yes-branch and
− (sex = female, age = 51) for the no-branch.
• Condition coverage needs also two test cases:
− (sex = male, age = 51) for the atomic combination (true, true) over the control predicate
(sex = male) && (age > 50) and
− (sex = female, age = 11) for the atomic combination (false, false).
• Multiple condition coverage cannot be reached with fewer than four test cases:
− (sex = male, age = 51) for the atomic combination (true, true),
− (sex = male, age = 12) for the combination (true, false),
− (sex = female, age = 52) for the combination (false, true), and
− (sex = female, age = 22) for the combination (false, false).
Multiple condition coverage is the most effective criterion of these because it forces one to test the
component with more test cases than the other criteria.
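The decision of Figure 4 and the minimum test sets above can be checked mechanically. The function below is a sketch of the example's decision (only the yes-branch sets a payment):

```python
def payment(sex, age):
    # The decision of Figure 4: a compound condition of two atomic conditions.
    if sex == "male" and age > 50:
        return 140
    return None  # the no-branch of this fragment sets no payment

# Minimum test sets for the example:
branch_cases = [("male", 51), ("female", 51)]
multiple_condition_cases = [("male", 51), ("male", 12),
                            ("female", 52), ("female", 22)]

def outcomes(cases):
    """Atomic condition outcomes (sex == male, age > 50) per test case."""
    return {(sex == "male", age > 50) for sex, age in cases}
```

Running `outcomes` on the four multiple condition cases yields all four combinations (true, true), (true, false), (false, true) and (false, false), whereas the two branch cases only exercise both outcomes of the whole decision.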
Note that different testing tools can and should be used to check coverage criteria and to support testing (see Appendix D (Rational PureCoverage) and Appendix E (Test Evaluation Tools)).
Commonly used black box testing methods are equivalence partitioning and boundary value analysis. In most cases the system cannot be exhaustively tested, so the input space must somehow be partitioned.

4.2.1 Equivalence partitioning
Equivalence partitioning is a test case selection technique in which the tester examines the entire input space defined for the system under test and tries to find sets of inputs that are processed "identically". Identical behavior means that the test inputs in one equivalence class traverse the same path through the system. The equivalence classes are defined based on the requirement specification document. The hypothesis is:
• If the system works correctly with one test input in an equivalence class, the system works correctly with every input in that equivalence class.
• Conversely, if a test input in an equivalence class detects an error, all other test inputs in the equivalence class will find the same error.
However, equivalence partitioning is always based on the tester's intuition and thus may be imperfect.
6. One or several equivalence classes are always defined for illegal values. An illegal value is incompatible with the type of the input parameter (for example, if the valid input value is an integer, an illegal value is a real, char or string).
Usually the input does not consist of a single value only but of several consecutive values. In that case the equivalence classes for the whole input can be designed as combinations of the equivalence classes of the elementary domains. For example, if the component expects as input two integers x and y that both shall be within the range 1…10000, the following equivalence partitioning can be defined:
− for x: one valid, two invalid (too small, too large) and one illegal (for example, real) equivalence class
− for y: one valid, two invalid and one illegal equivalence class
For (x, y) the combination of the above classes gives:
− (x valid, y valid): accepted values
− (x valid, y too small): error message
− (x valid, y too large): error message
− (x valid, y real): error message
− (x too small, y valid): error message
…
In total there are 4 * 4 = 16 equivalence classes.
This leads to an explosion in the number of equivalence classes. There is only one valid equivalence class, and most of the obtained classes consist of input values that the component should not accept. In practice it is not sensible to test the component with all the invalid and illegal combinations of input values, but to select classes so that each invalid and illegal class is covered at least once.
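For the two-integer example, the 4 × 4 combination and a reduced selection that still covers each invalid and illegal class at least once can be sketched as follows (the representative value chosen for each class is our own illustration):

```python
from itertools import product

# Equivalence classes for one integer input in the range 1...10000,
# each with an illustrative representative value.
classes = {"valid": 5000, "too small": 0, "too large": 10001, "illegal": 3.5}

all_combinations = list(product(classes, repeat=2))  # classes for (x, y)

def reduced_selection(classes):
    """Instead of all 16 combinations, keep the valid-valid case and pair
    every non-valid class of one input with a valid value of the other."""
    cases = [("valid", "valid")]
    for c in classes:
        if c != "valid":
            cases.append((c, "valid"))
            cases.append(("valid", c))
    return cases
```

This cuts the 16 combinations down to 7 test cases while every invalid and illegal class of both x and y still appears at least once.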
When the equivalence classes have been identified, test cases have to be selected. It is assumed that all the values in an equivalence class are identical from the testing point of view. We can therefore select any test case in an equivalence class, as long as at least one test case from every class is selected for execution. When equivalence partitioning is used, the testing is on the one hand effective and covers the customers' requirements, and on the other hand it is not too arduous and complex, because we can reduce the test cases effectively and still test the component adequately. Redundancy is also as minimal as possible. Equivalence partitioning can be improved by boundary value analysis (see Chapter 4.2.2).
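Boundary value analysis picks test inputs at and around the edges of the equivalence classes; for the earlier range 1…10000 a sketch could be:

```python
def boundary_values(low, high):
    """Test inputs for a valid range [low, high]: each boundary itself,
    its valid neighbour and its invalid neighbour."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

cases = boundary_values(1, 10000)
```

This complements equivalence partitioning: the class representatives test typical behavior, while the boundary values target the off-by-one errors that cluster at class edges.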
Appendix C contains an example of how to derive test cases from user requirements. First there is a patient management application that consists of one dialog; its test cases are derived using equivalence partitioning. Second there is an application with many dialogs; here the test cases are derived from a use case diagram and an activity diagram. Finally there is an example of testing an action flow involving many people; in this case the test cases are derived from use cases.
[Figure: equivalence partitioning of an input domain into valid, invalid and illegal equivalence classes, with boundary values at the edges of the valid classes.]
…process cannot be totally automated yet. We have evaluated the Rational Test Studio tools, which include Rational Administrator, Rational Test Manager, Rational Robot, Rational PureCoverage, Rational Purify, Rational Quantify and Rational TestFactory (see Rational Test Tools in Appendix D). Other software testing tools have also been investigated and grouped into test design tools, graphical user interface test drivers, load and performance testing tools, test management tools, test implementation tools, test evaluation tools and static analysis tools (see Software testing tools in Appendix E).
[Figure: software testing tools mapped onto the V-model. Test management tools and test execution and comparison tools span all levels; load, performance and simulation tools support the requirement specification/acceptance test level; test design tools, implementation tools and GUI drivers support the architectural design/system (functionality) test level; inspection tools and dynamic analysis and debugging tools support the detailed design/integration test level; static analysis tools and coverage tools support the code/unit test level.]
…tions of lower level components. The advantages of this approach are the possibility to master development and deployment complexity, to decrease time to market, and to increase the scalability of software systems. Also the great number of dependencies which occur in the object-oriented approach can be mastered better, because the majority of the dependencies between classes remain inside one component, where the number of classes is much smaller than in the total system or subsystem. The abstraction levels decrease the work needed in testing, because the testing work can be divided into sufficiently small concerns and previously tested components can be considered as black boxes whose test results are available. Furthermore, errors can be detected easily, because not so many components are considered at one time. Testing business component systems is described in more detail in Appendix A.
• COTS (component-off-the-shelf)
Literature
[BaC98] Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Addison-Wesley, 1998.
[Bos00] Bosch, J.: Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach. Addison-Wesley, 2000.
[Bel00] Bell, D.: Software Engineering: A Programming Approach. Addison-Wesley, 2000.
[DRP01] Dustin, E., Rashka, J., Paul, J.: Automated Software Testing: Introduction, Management and Performance. Addison-Wesley, 2001.
[HaM01] Haikala, I., Märijärvi, J.: Ohjelmistotuotanto. Suomen Atk-kustannus, 2001.
[HeS00] Herzum, P., Sims, O.: Business Component Factory. Wiley Computer Publishing, New York, 2000.
[Jal99] Jalote, P.: CMM in Practice. Addison-Wesley, 1999.
[Kit95] Kit, E.: Software Testing in the Real World. Addison-Wesley, 1995.
[Mye79] Myers, G.: The Art of Software Testing. John Wiley & Sons, 1979.
[Paa00] Paakki, J.: Software Testing. Lecture notes, University of Helsinki, 2000.
[PeK90] Perry, D., Kaiser, G.: Adequate Testing and Object-Oriented Programming. Journal of Object-Oriented Programming, Jan./Feb. 1990, 13-19.
[TEM02] Toroi, T., Eerola, A., Mykkänen, J.: Testing Business Component Systems. 2002 (submitted).
Testing business component systems
Tanja Toroi (Department of Computer Science and Applied Mathematics, University of Kuopio, P.O.B 1627, FIN-70211 Kuopio, +358-17-163767, [email protected])
Anne Eerola (Department of Computer Science and Applied Mathematics, University of Kuopio, P.O.B 1627, FIN-70211 Kuopio, +358-17-162569, [email protected])
Juha Mykkänen (Computing Centre, HIS Research & Development Unit, University of Kuopio, P.O.B 1627, FIN-70211 Kuopio, +358-17-162824, [email protected])
[Figure 1. The business component and the component execution environment (adapted from [5]): the business component plugs into sockets of the CEE at the user (user interface), workspace (Java, COM), enterprise and resource tiers; the resource tier rests on a persistence framework.]

…defined build-time and run-time interface, which may be network … when calling the component. Thus an interface of a DC can be defined as follows:

Interface = (operation, (parameter, type, [in|out])*)*

User interface implementation should be separated from business logic implementation and database access. This leads to the definition of categories for distributed components, i.e. user DC, workspace DC, enterprise DC and resource DC. A user DC differs from the other DCs because it does not have a network addressable interface but it has a user interface [20]. In the object-oriented approach the distributed component usually consists of classes and the relationships between them, but traditional programming approaches can be used, too. Thus a DC hides the implementation details from the component user. Component technologies usually offer a degree of location transparency and platform and programming language neutrality. For example, a distributed component could be lab order entry.

[Figure 2. The Lab Test business component system: entity components (Lab Test, Department, Patient), utility and auxiliary components (Test Result Analyzer, Lab Test AddressBook, CodeBook), with Performance Monitor and Database Integrity Manager facilities at the interface.]
• First, in the unit testing phase the internal logic of the component is tested as a white box. Second, the external interface of the component is tested. Here the component is considered as a black box and it is tested in the execution environment where it should work.
• In the integration testing phase the component is considered as a composition component of its internal components. Now the results of the previous steps are available. First, the co-operation of the internal components of the composition component is tested. Here the internal components are black boxes. Second, the interface of the composition component is tested.
The alternation presented above also holds if we consider the roles of component provider and component integrator.
The provider of the component needs white box testing for making sure that the component fulfils the properties defined in the contract, and black box testing for making sure that the interface can be called in the environments specified in the contract.
The third-party integrator integrates self-made and bought components into a component-based system. He uses black box testing for making sure that he gets a product that fulfills the requirements. Integration testing of self-made and bought components means that the calling dependencies of the components are considered. Thus from the system's point of view the internal logic of the system is considered. We denote this as white box testing, although the code of the internal components is not available. At last the external interfaces of the system are tested, and here the total system is considered as a black box.
We can also consider a customer who buys a component-based system. He does not know the internal logic of the system. Thus he uses black box testing techniques for acceptance testing.
In the following chapters we first define test cases (chapter 3.2). When the system is executed with carefully chosen test cases, the tester can see whether the user requirements are fulfilled. Test cases are derived from the use cases or contracts. Furthermore, we need a dependency graph (chapter 3.3), which shows whether all the parts of the system have been covered by the test cases.

3.2 Test Cases
3.2.1 Definition of Test Cases
A test case is generally defined as input and output for the system under test. Kaner [7] describes that a test case has a reasonable probability of catching an error: it is not redundant, it is the best of its breed, and it is neither too simple nor too complex. A test case is defined in Rational Unified Process [9] as a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. We insert the granularity aspect into the definitions of test cases:
• A test case at the business component system level is an action flow between business components and users.
• A test case at the business component level is a sequence of operations between distributed components.
• A test case at the distributed component level is a method sequence between object classes.

3.2.2 Use Cases and Contracts
Use cases and scenarios provide means for communication between users and a software system [6, 21]. Thus they are useful while deriving the responsibilities of a BCS. The responsibilities of the total BCS, which is considered as a composition component, are divided into the responsibilities of each internal BC of the BCS. They are defined recursively if needed. Thus we can suppose that for the BCS and each BC we have use case descriptions, which show those needs of the stakeholders that are addressed to that component. With use cases we know how to actually use the system under test. Every use case must have at least one test case for the most important and critical scenarios.
However, use case diagrams and scenarios usually show only the communication between users and a software system. Thus the co-operation of human actors is not presented. We propose that the definition of the use case diagram is extended to contain human actions as well as automated actions [8]. Consequently, action flows can be derived from these use case diagrams and it is possible to test that the human workflow fits together with the actions of the BCS.
Next, we present examples of test cases of components of different granularity. The examples are simplified, but they describe how the abstraction levels differ when moving from the BCS level to the DC level. At the DC level the test cases are the most accurate and detailed.
For example, an action flow of the Lab Test BCS (see Fig. 2) could be the following (actors in parentheses):
• Create or choose a patient; (human and Patient BC)
• Create a lab order; (human and Lab Test BC)
• Send the lab order to the lab; (human, Lab Test and Department BC)
• Take a sample; (human)
• Analyze the sample and return results; (Test Result Analyzer BC)
• Derive reference values; (Lab Test Codebook BC)
• Save lab test results and reference values; (Lab Test BC)
An operation sequence of the Lab Test BC could be the following:
• Input a patient number; (human and user DC)
• Find lab test results with reference values; (enterprise and resource DC)
• Output lab test results with reference values; (user DC)
• Evaluate results; (human)
• Decide further actions; (human)
• Send lab test results and advice for further actions to the department; (human, user and enterprise DC)
A method sequence of the Lab Test user DC could be the following:
• Input a patient number; (human and Patient class)
• Choose the activity lab test consulting; (human and Menu class)
• Send the message "find lab test results" to the enterprise DC; (proxy)
• Receive lab test results; (proxy)
• Output lab test results; (Lab Test class)
• Output reference values; (Reference class)
The other possibility to define test cases is to utilize contracts, which specify the operations offered by the components. In testing we must have a requirement document in which each operation of the interfaces has been described in detail. From it we get the input parameters of the operations and their domains. For each operation, each parameter's input domain is divided into so-called equivalence classes [15]. Equivalence partitioning is always the tester's subjective view and thus may be imperfect. If the operation has many input parameters, the equivalence classes for the whole input can be designed as combinations of the equivalence classes. This leads to an explosion in the number of equivalence classes. Test cases are selected so that at least one test input is selected from every equivalence class. So we get testing which is on the one hand effective and covers the customers' requirements, and on the other hand not too arduous and complex. Redundancy is also as minimal as possible. For distributed components in the resource, enterprise and workspace tiers, contracts may be the only possibility to define test cases. But it should be remembered that contracts do not specify the co-operation between more than two components. Thus they are not sufficient when testing the action flow of stakeholders and the usability of the whole system.
Finally, we check that the test cases traverse all the paths of all the dependency graphs, as presented in the next chapter.

3.3 Dependency Graph
3.3.1 General
Test cases which are defined by use cases or contracts do not necessarily cover the whole functionality of the system. Besides use cases and contracts, we also need the dependency graph, which shows the dependencies between the components of the same granularity level. If we do not know all the dependencies, we do not know how to test them. We use the dependency graph to assure that the whole functionality of the component has been covered by test cases. Without the dependency graph there may remain some critical paths that have not been executed, or there may be some paths that have been tested many times. Redundant testing always increases the testing costs, so redundant test cases have to be removed. If there are paths which are not traversed at all, we must examine carefully whether the components on those paths are needless for some reason, or we have to add test cases so that all the paths will be traversed.
The following chapter describes how to compose the dependency graph and provides an example. The algorithm creates a dependency graph for each external interface of the composition component, so testing and maintaining component-based systems is easier than if we had only one big graph.

3.3.2 Dependency Graph Creation
We create a graph based on the dependencies between the different components in a composition component. The term composition component can mean either a business component or a business component system. If it is a business component, the dependencies are between distributed components, and if it is a business component system, the dependencies are between business components. A node in the graph represents a component. A directed edge from component A to component B means that component A depends on component B (component B offers functionality to component A). The inner for-loop checks dependencies only at the same tier or the tiers below, because messages from one component to another go from an upper to a lower tier. The outer for-loop checks all the external interfaces the composition component has and creates a graph for each interface. Our algorithm follows the breadth first search algorithm.
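The construction just described (a graph per external interface, edges pointing from a component to the components that offer it functionality, visited breadth first) can be sketched as follows. The component names A–F echo the worked example, but the dependency relation used here is invented for illustration and the code is our reconstruction, not the paper's algorithm verbatim:

```python
from collections import deque

def dependency_graph(interface_component, depends_on):
    """Create the dependency graph (a list of directed edges) for one
    external interface of a composition component, breadth first."""
    edges, visited = [], set()
    queue = deque([interface_component])
    while queue:
        component = queue.popleft()
        if component in visited:
            continue
        visited.add(component)
        for target in depends_on.get(component, []):
            edges.append((component, target))  # component depends on target
            queue.append(target)
    return edges

# Illustrative dependencies between internal components:
depends_on = {"A": ["C", "D"], "B": ["D", "E"], "D": ["F"]}
```

Calling `dependency_graph("A", depends_on)` yields only the edges reachable from A's interface; an outer loop over all external interfaces would produce one such graph per interface.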
[Figure: a worked trace of the dependency graph creation algorithm for a composition component with internal components A–F (A and B at the user/workspace tier, C, D and E below them, F at the bottom). Starting from a user/workspace interface, an edge is first created from the interface to the first component; the algorithm then maintains the sets not_visited, targets and called (for example not_visited = {A,B,C,D,E,F}, targets = {A,B} ∪ C = {A,B,C}, called = {A,B,C,D} ∪ E = {A,B,C,D,E}) and repeats while not_visited ≠ ∅ and called − targets is non-empty, until all of A–F have been added to the graph.]

Next the outer for-loop examines the other external interfaces; in this case the enterprise level interface, which refers to component C. The algorithm creates the following graph for this interface.

…compared to the graphs of traditional or object-oriented software. We will study automatic test case selection in future research.

4. TESTING COMPONENTS OF DIFFERENT GRANULARITIES
4.1 Distributed Components
Technical heterogeneity means the use of different component technologies, programming languages and operating system environments. The productivity and flexibility of software implementation are increased by separating the functional logic from the runtime environment (socket) and from the dependencies of the component technology, i.e. the interface implementation and the proxies which implement dependencies [5]. This profits the testing process, too. The functional logic is tested separately from the interface implementation and the proxies, which are substituted with a driver and stubs correspondingly if needed. Testing a distributed component depends on its implementation. If a DC has been implemented with traditional techniques we can use traditional testing techniques, and if it has been implemented with object-oriented techniques we can use object-oriented testing methods. In the object-oriented approach the functional code is usually implemented with classes and the relationships between them. Testing means that the methods and attributes of each class must be tested, as well as the inheritance relationships between classes and the association and aggregation relationships between objects. At the DC level test cases are usually derived from contracts. The execution of an operation defined in the contract of a DC usually causes collaboration between several objects, which communicate with each other by sending messages. Thus the method sequence is one important subject to be tested. The object dependency graphs can be derived analogously to the method presented in chapter 3.3, and they should be consistent with UML's collaboration diagrams defined in the analysis stage. Furthermore, when we are testing a DC implemented with object-oriented methods we can use several testing techniques [11]. For object state testing there are methods like [12, 23], and for testing the inheritance relationship between classes there are methods like [16, 10].
Interfaces of a DC must be tested both locally and over the network. For user DCs usability testing is as important as the functionality tests. Testing resource tier DCs is more difficult if several databases or independent islands of data [5] are used.
C D E
4.2 Business Components
Vitharana & Jain [26] have presented a component assembly and
F testing strategy:
“Select a set of components that has maximum number of calls to
other components within the set and minimum number of calls to
3.3.4 Selecting Test Cases other components outside the current set.
When we have created dependency graphs we have to create test Of the remaining components, select a component (or a set of
cases based on those graphs. Test cases are created so that as components) that has the maximum number of interactions with
many paths in a graph as possible are covered by one test case. methods in the current application subsystem.”
This is called path coverage. Test suite satisfies 100% path
coverage if all the paths in the dependency graphs have been The strategy has been illustrated by an example but it has not been
executed. We should remember that the graphs are not very large proved. However, the authors critique the strategy: The logical
because of components’ granularity and because interfaces act as relationships between components should be taken into
centralized connection points. So the complexity is lower if consideration while developing an integration test strategy.
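The tier-constrained breadth-first construction described above can be sketched in code. The following Python sketch is our illustration only, not the paper's implementation: the component names, the tier numbers and the `uses` relation are invented to mirror the example system (A–F). It builds one edge list per external interface, following only dependencies that stay at the same tier or go lower.

```python
from collections import deque

# Hypothetical example system (names and tiers invented for illustration).
# Smaller number = upper tier; messages go from upper tiers to lower tiers.
tier = {"A": 0, "B": 0, "C": 1, "D": 2, "E": 2, "F": 3}
uses = {  # direct dependencies: component -> components it calls
    "A": ["B", "C"], "B": [], "C": ["D", "E"],
    "D": ["F"], "E": [], "F": [],
}
external_interfaces = {"user/workspace": "A", "enterprise": "C"}

def dependency_graph(c0):
    """Breadth-first construction of the edges reachable from c0,
    keeping only edges that stay at the same tier or go below."""
    edges, visited, queue = [], {c0}, deque([c0])
    while queue:
        c = queue.popleft()
        for target in uses[c]:
            if tier[target] >= tier[c]:      # same tier or below
                edges.append((c, target))
                if target not in visited:
                    visited.add(target)
                    queue.append(target)
    return edges

# One graph per external interface, as in the outer for-loop.
graphs = {name: dependency_graph(c0)
          for name, c0 in external_interfaces.items()}
print(graphs["enterprise"])   # [('C', 'D'), ('C', 'E'), ('D', 'F')]
```

For the user/workspace interface the same call yields the full graph over A–F, matching the worked example above.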
We propose that in assembly and testing the business logic should be taken into account. Business components form a coherent set of properties, thus it is worthwhile to test them as a whole. A business component is integrated from distributed components. Thus testing a business component means:
• First, the integration testing of those distributed components which belong to the business component is performed.
• Second, the external interface of the business component is tested.

While integrating a BC we propose that the integration strategy by Vitharana and Jain is modified as follows. The assembly and testing go in two parallel parts:

In the single-user domain part, the user and workspace tiers are integrated:
• It is profitable to start integration from the user tier. Consequently, the comments from stakeholders are received as soon as possible.
• The workspace tier should be integrated next, because it is connected with the user tier.

In the multi-user domain part, the resource and enterprise tiers are integrated:
• The resource tier is integrated first, because the DCs in the resource tier do not send messages to any lower tier.
• The enterprise tier is integrated next, because it sends most of its messages to the resource tier.

Finally the total BC is integrated by combining the results of the above two parts. The above approach has several advantages; for example, time to market decreases and controllability increases.

Testing business components is divided into two phases:

Phase 1:
The BC's internal logic is considered using a dependency graph similarly as before. Here each DC which belongs to the business component is a black box, but the BC itself is considered as a white box. Interfaces and dependencies are tested. The dependency graph is generated using the algorithm presented in chapter 3.3. If some of the DCs is not ready and has not passed unit testing, it is substituted with a stub.

The best way to derive test cases for a BC is to utilize use cases. Because a BC is a coherent set of operations of the business concept, it is plausible that the most important and critical use cases of the BC are defined at the analysis stage. Thus test cases can be built according to these use cases. The distributed components in the user tier are the only components for which the user gives the input. For other BC-external interfaces the inputs come from some other systems or from the network. The values needed in the BC's internal dependencies are calculated inside the DCs. Normal cases are tested before exceptions [21]. While considering exceptions the events go from the lower level to the upper level. For example, when an exception is noticed, a resource DC sends an event to an enterprise DC, which further sends an event to a user DC. This means that the algorithm forming the dependency graph needs to be slightly modified while testing exceptions.

If use cases are not available, the contracts of the distributed components which are visible outside the boundaries of the BC are used. In this case the designer should decide the order of the operations.

Phase 2:
The BC's external interface, especially the network addressability, should be tested. Here the BC is a black box. Partly the same test cases as before can be used. Now the internal logic is not considered, but the correspondence of operation calls with input parameter values is compared to the return values. Thus all the contracts of the BC must be tested.

4.3 Business Component Systems
A business component system is assembled and tested using the strategy presented in [26]. Utilities are often ready-made COTS, which need only acceptance testing. This means that the interface is tested and the component is seen as a black box. Entity BCs call utilities, thus entities are tested next. Process BCs call entities, thus they are tested last. Thus the order is: first utility BCs, second entity BCs, third process BCs.

Testing a business component system means:
• First, the integration of those business components which belong to the business component system is tested.
• Second, the external interface of the business component system is tested.

In integration testing the BCS's internal logic is considered. Here each BC of the BCS is a black box which has been unit tested. The BCS itself is considered as a white box. Interfaces and dependencies are tested utilizing a dependency graph similarly as before.

Test cases of the BCS are constructed using use cases, which show the action flow of the users of the BCS. Of course, it is possible that the inputs come from some other system, but these are considered similarly to human inputs.

It is sufficient to test that the most important and critical action flows of users go through without errors and that BCs call each other with the right operations and parameters. Exception cases [21] are tested after normal cases. While considering exceptions an entity BC can send events to a process BC. This means that forming the dependency graph needs to be slightly modified. The contracts of components specify what the components offer, not what the users need and not the order of the users' actions; thus contracts of BCs are not useful while testing a BCS. At last, the BCS's external interface is tested with local calls and calls over the network. Here all the contracts of the BCS must be tested.

In conclusion, before testing the whole business component system each business component in it is tested. Before a business component is tested each distributed component in it is tested. The dependencies considered stay all the time at one component granularity level.

5. RELATED WORK
According to Weyuker's [24] anticomposition axiom, adequately testing each individual component in isolation does not
necessarily suffice to adequately test the entire program. Interaction cannot be tested in isolation. So, it is important to test the components' interaction in the new environment as components are grouped together.

Our work has been influenced by Wu et al. [28]. However, we wanted the dependencies to stay at the same abstraction level, i.e. they must not go from an upper level to a lower level or vice versa in testing. In the presentation of Wu et al. there is no clear separation of abstraction levels. For example, a dependency between components causes dependencies between classes. The interface implementation and the functional logic are tested separately and not in order. In our algorithm, dependencies stay at the same component granularity level: at the system level, at the business component level, or at the distributed component level. The dependencies between classes in object-oriented systems need to be considered only at the lowest level. This reduces the dependencies, and especially those dependencies which must be considered at the same time.

Regnell et al. have considered use case modeling and scenarios in usage-based testing [17]. They investigate the usage of software services by different users and users' subtypes. They consider dependency relationships where the relationships go from the component level to the class and object level, as in [28].

Gao et al. have presented a Java framework and tracking model to allow engineers to add tracking capability into components in component-based software [3]. The method is used for monitoring and checking various behaviors, status, performance, and interactions of components. It seems that the results of Gao et al. could be added to our approach in order to support the debugging aspects.

Our research emphasizes testing the functional requirements of a business component system. The quality requirements of stakeholders, such as security, response times and maintainability, must be tested too. This has been considered in [1]. Different quality properties need to be tested separately, although scenarios can be utilized here too. Testing quality requirements leads to the consideration of architectures.

6. CONCLUSION
We have presented a method for testing the functionality of business component systems. For testing functionality we utilize test cases and dependency graphs. Test cases are derived from use cases or contracts.

Why do we need test cases and dependency graphs? To assure that the whole composition component's functionality has been covered by test cases it is necessary to use the dependency graphs. If we only test that the test cases are executed right, give the right result, and leave the system in a consistent state, there may remain some critical paths in the system that have not been executed, or there may be some paths that have been tested many times. If there are paths which are not traversed at all, our test suite does not correspond to the functionality of the system. In this case, we must examine carefully whether
• new test cases should be inserted, or
• the components on the non-traversed path are needless for some reason.

In our method, components of different granularities are tested level by level. Thus in integration testing the dependencies stay inside a business component system at the business component level. While testing business components the dependencies stay at the distributed component level. At the distributed component level we consider the dependencies between classes. From the above it follows that the dependencies stay simple and at the same level, and the dependencies tested at the same time are similar, except at the DC level. Thus the testing work is divided into small pieces and the amount of testing work decreases. This facilitates regression testing too.

Our work has been done at the University of Kuopio as a part of the PlugIT research project, in which our testing method is being evaluated in practice. The validation of the method, containing also a theoretical proof of the decrease of testing work in practice, is going on at the moment. The goal of the PlugIT project is to reduce the threshold of introduction of health care information systems by defining more effective and open standard solutions for system level integration. Our concern is the quality assurance and testing of health care information systems.

7. ACKNOWLEDGMENTS
We would like to thank Hannele Toroi, testing manager at Deio, for giving us insight into test implementation and testing problems in practice. This work is part of the PlugIT project, which is funded by the National Technology Agency of Finland, TEKES, together with a consortium of software companies and hospitals.

8. REFERENCES
[1] Bosch, J. Design and use of software architectures: Adopting and evolving a product-line approach. Addison-Wesley, 2000.
[2] Fowler, M., and Kendall, S. UML Distilled: Applying the standard Object Modeling Language. Addison-Wesley, 1997.
[3] Gao, J., Zhu, E., Shim, S., and Chang, L.: Monitoring software components and component-based software. In Proc. of 24th Annual International Computer Software & Applications Conference, 2000.
[4] Gotel, O. Contribution structures for requirements traceability. PhD thesis, University of London, 1995. http://www.soi.city.ac.uk/~olly/work.html.
[5] Herzum, P., and Sims, O. Business Component Factory. Wiley Computer Publishing, New York, 2000.
[6] Jacobson, I., Christerson, M., Jonsson, P., and Övergaard, G. Object-Oriented Software Engineering – A Use Case Driven Approach. Addison-Wesley, Harlow, 1995, 2nd edn.
[7] Kaner, C. Testing computer software. John Wiley & Sons, New York, 1999.
[8] Korpela, M., Eerola, A., Mursu, A., and Soriyan, H.A.: Use cases as actions within activities: Bridging the gap between information systems development and software engineering. Abstract. In 2nd Nordic-Baltic Conference on Activity Theory and Sociocultural Research, Ronneby, Sweden, 7-9 September 2001.
[9] Kruchten, P. The Rational Unified Process: An Introduction. Addison-Wesley, 2001.
[10] Kung, D., Gao, J., Hsia, P., Wen, F., Toyoshima, Y., and Chen, C.: Change impact identification in object oriented maintenance. In Proc. of IEEE International Conference on Software Maintenance 1994, 202-211.
[11] Kung, D., Hsia, P., and Gao, J. Testing object-oriented software. IEEE Computer Society, USA, 1998.
[12] Kung, D., Lu, Y., Venugopalan, N., Hsia, P., Toyoshima, Y., Chen, C., and Gao, J.: Object state testing and fault analysis for reliable software systems. In Proc. of 7th International Symposium on Software Reliability Engineering, 1996.
[13] Meyer, B. Object-oriented software construction. Prentice Hall, London, 1988.
[14] Mowbray, T., and Ruh, W. Inside CORBA: Distributed object standards and applications. Addison-Wesley, 1997.
[15] Myers, G. The art of software testing. John Wiley & Sons, New York, 1979.
[16] Perry, D., and Kaiser, G.: Adequate testing and object oriented programming. Journal of Object-Oriented Programming, Jan/Feb 1990, 13-19.
[17] Regnell, B., Runeson, P., and Wohlin, C.: Towards integration of use case modelling and usage-based testing. The Journal of Systems and Software, 50, 2000, 117-130.
[18] Robertson, S., and Robertson, J. Mastering the requirements process. Addison-Wesley, 1999.
[19] Roper, M. Software testing. McGraw-Hill, England, 1994.
[20] Sametinger, J. Software Engineering with Reusable Components. Springer-Verlag, 1997.
[21] Schneider, G., and Winters, J.P. Applying use cases. Addison-Wesley Longman, 1998.
[22] Szyperski, C. Component Software: Beyond Object-Oriented Programming. Addison-Wesley, Harlow, 1999.
[23] Turner, C.D., and Robson, D.J.: The state-based testing of object-oriented programs. In Proc. of IEEE Conference on Software Maintenance 1993, 302-310.
[24] Weyuker, E.: The evaluation of program-based software test data adequacy criteria. Communications of the ACM, 31:6, June 1988, 668-675.
[25] Wilde, N., and Huitt, R.: Maintenance Support for Object-Oriented Programs. IEEE Transactions on Software Engineering, 18, 12, Dec. 1992, 1038-1044.
[26] Vitharana, P., and Jain, H.: Research issues in testing business components. Information & Management, 37, 2000, 297-309.
[27] Wu, Y., Pan, D., and Chen, M.-H.: Techniques for testing component-based software. Technical Report TR00-02, State University of New York at Albany, 2000.
[28] Wu, Y., Pan, D., and Chen, M.-H.: Techniques of maintaining evolving component-based software. In Proceedings of the International Conference on Software Maintenance, San Jose, CA (USA), October 2000.
APPENDIX B
REVISION HISTORY
TEHO Test plan 1.0
Content

1 INTRODUCTION
1.1 PURPOSE
1.2 BACKGROUND
1.3 SYSTEM OVERVIEW
1.4 REFERENCES
2 TEST ENVIRONMENT
2.1 HARDWARE
2.2 SOFTWARE
3 STAFF AND TRAINING NEEDS
10 RISKS
11 APPROVALS
1 Introduction
1.1 Purpose
This document describes the plan for testing the TehoTest System. This test plan identifies existing project information, the resources for testing and the functions to be tested. Additionally, it describes testing strategies and provides an estimate of the test effort. It includes a list of the deliverable test documentation and risks, project milestones and approvals.
1.2 Background
1.4 References
This test plan includes references to the following project documents:
• Project Management Plan 1.0
• Requirements Specification Document 1.0
• System Design Document 1.0
• Product Description
2 Test Environment
2.1 Hardware
Hardware requirements for testing are:
• Pentium III processor
• 300 MB of free memory
• CD-ROM drive
• client and server computers
• HP LaserJet printer
2.2 Software
6 Features to be Tested
6.1 Use Case 1
UC1 Login to the system
6.2 Use Case 2
UC2 Insert patient's information
6.3 Use Case 3
UC3 Logout from the system
8 Test Scope
The aim of the TEHO test program is to verify that the TEHO system satisfies the requirements specified in the System Requirements Specification. Automated test tools will be used to support the testing process where possible.
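Where automated tools are applied, the use cases above (UC1 login, UC2 insert patient information, UC3 logout) can also be scripted as executable checks. The sketch below is purely illustrative and not part of the TEHO deliverables: `TehoClient`, its methods and the credentials are invented stand-ins for whatever interface the real system exposes.

```python
# Hypothetical automated checks for UC1-UC3 (login, insert patient, logout).
# TehoClient is an invented stand-in for the real system interface.
class TehoClient:
    def __init__(self):
        self.logged_in = False
        self.patients = {}

    def login(self, user, password):                   # UC1
        self.logged_in = (user == "nurse" and password == "secret")
        return self.logged_in

    def insert_patient(self, pid, name):               # UC2
        if not self.logged_in:
            raise PermissionError("login required")
        self.patients[pid] = name

    def logout(self):                                  # UC3
        self.logged_in = False

def test_use_cases():
    client = TehoClient()
    assert client.login("nurse", "secret")             # UC1 succeeds
    client.insert_patient("120775-765R", "Matt Pitt")  # UC2 stores data
    assert client.patients["120775-765R"] == "Matt Pitt"
    client.logout()                                    # UC3 ends the session
    assert not client.logged_in

test_use_cases()
print("all use case checks passed")
```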
9 Project Milestones
Task Effort Start Date End Date
Plan Test
Design Test
Implement Test
Execute Test
Evaluate Test
10 Risks

Risk 1: Load testing will be delayed if the maximum load for the system cannot be created.
  Effect: 2 weeks schedule slip.
  Mitigation strategy: a program for load testing has been ordered from XXX Ltd.

Risk 2: Installation program is delayed.
11 Approvals
The persons who must approve this plan are listed as follows:
• Project Manager (XX Ltd)
• Project Manager (YY Ltd)
• Project Management Group
- Sam Software
- Tim Dollar
- Frank McRobin
APPENDIX C
1 Introduction
This example describes how test cases can be derived from requirements specification documents with a black box testing method, including three approach models: testing an application with one dialog, testing an application with many dialogs, and testing a multi-user system.
2 Requirements Specification
Patient System consists of two applications:

Requirement 3: Payment
The payment is calculated based on the age and the type of the patient. The result will be displayed on the screen. (* Payments in this example are not the same as in the real world.)

Male
Age      Payment
18-35    100 euro
36-50    120 euro
51-      140 euro

Female
Age      Payment
18-30    80 euro
31-50    110 euro
51-      140 euro

Child
Age      Payment
0-17     50 euro
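The fee rules above translate directly into code, which also makes the equivalence class tests later in this example concrete. The following Python sketch is our illustration only: the function name `calculate_payment` and the error handling are invented, and the upper age limit 145 is taken from the equivalence classes used later in this example.

```python
def calculate_payment(patient_type, age):
    """Payment per the example fee table; ages outside 0-145 are invalid."""
    if not isinstance(age, int) or age < 0 or age > 145:
        raise ValueError("Error Message: invalid age")
    if patient_type == "Child":
        if age <= 17:
            return 50
        raise ValueError("Error Message: a child must be 0-17")
    brackets = {  # (upper bound of the age bracket, payment in euro)
        "Male":   [(35, 100), (50, 120), (145, 140)],
        "Female": [(30, 80), (50, 110), (145, 140)],
    }
    if patient_type not in brackets or age < 18:
        raise ValueError("Error Message: unknown type or under-age adult")
    for upper, payment in brackets[patient_type]:
        if age <= upper:
            return payment

print(calculate_payment("Male", 36))    # 120
print(calculate_payment("Female", 25))  # 80
```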
The user is able to insert patient data into the patient database (name, personal id, address and symptoms). The user must be able to update patient data. Results are displayed on the screen.
3 Test case design for an application with one dialog
3.1 Equivalence Partitioning
All input fields in the application must be tested. However, the system cannot be tested with all possible inputs, because that would require too much time and work. The set of input values for the system must therefore be partitioned, for example by using a test case selection technique such as equivalence partitioning.

Test cases should also be created for boundary values, special characters and decimal values. The system should warn the user about illegal inputs (error messages). The following example shows how equivalence classes and test cases can be created for a simple application that calculates patient payments (Figure 2).
3.2 Equivalence classes for Calculate Payment application
There are two input parameters that the user can enter in the Calculate Payment application: patient type and age.
Patient Type
• Male
• Female
• Child
Age
• < 0 (invalid)
• 0-17 (valid)
• 18-35 (valid)
• 36-50 (valid)
• 51-145 (valid)
• > 145 (invalid)
• not integer (illegal)
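Equivalence partitioning then picks one representative value per class instead of every possible age. The sketch below shows how the Age classes above could be enumerated and turned into test inputs; it is our illustration only, and the representative values and the `is_valid_age` helper are our own choices, not part of the specification.

```python
# One representative value per equivalence class of the Age field.
age_classes = [
    ("< 0",         -5,    "invalid"),
    ("0-17",        10,    "valid"),
    ("18-35",       25,    "valid"),
    ("36-50",       40,    "valid"),
    ("51-145",      60,    "valid"),
    ("> 145",       150,   "invalid"),
    ("not integer", "abc", "illegal"),
]

def is_valid_age(age):
    """Validity check matching the partitioning above."""
    return isinstance(age, int) and 0 <= age <= 145

# Each class contributes exactly one (input, expect_error) test case.
test_inputs = [(rep, verdict != "valid") for _, rep, verdict in age_classes]
for representative, expect_error in test_inputs:
    assert is_valid_age(representative) != expect_error
print("one representative per class checked")
```

Boundary value analysis would add the edge values of each class (e.g. -1, 0, 17, 18, 145, 146) on top of these representatives.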
Patient type    Age     Action            Expected result
Child           -45     Press Calculate   Error Message
Child           15      Press Calculate   50 euro
Child           18      Press Calculate   50 euro
Child           40      Press Calculate   Error Message
Child           jxxcc   Press Calculate   Error Message
Child           146     Press Calculate   Error Message
Child           !#%     Press Calculate   Error Message
The Insert patient use case includes, for example, the Save patient use case. A nurse is able to insert a patient (fill in the name, personal id, address and symptoms and save them into the database) and update patient data. The nurse can also search for a patient in the database or close the program.
(Use case diagram: the Insert patient and Update patient use cases, each with an <<include>> relationship.)
The following activity diagram (Figure 4) describes the workflow and the order of the displayed dialogs of the Patient Management application. Arrows represent transitions from one function to another. When a user has logged in, the main dialog of the program is displayed for him/her. The user fills in all text fields (name, personal id, address, symptoms) and presses the Save button.
(Activity diagram, User/System swimlanes:)
• The user presses the Save button (Di1.B1); the system shows the "Would you like to save data?" dialog (Di2).
• The user presses the Cancel button (Di2.B2) [No]: the flow returns to the main dialog.
• The user presses the Yes button (Di2.B1) [Yes]:
  - [data in fields is correct] the system shows the "You have saved data" dialog (Di3) and the user presses the Ok button (Di3.B1);
  - otherwise the system shows the "Incorrect input" dialog (Di4) and the user presses the Ok button (Di4.B1).
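The dialog flow can also be modelled as a small state machine, which is how the test cases below traverse it. The sketch is our reading of the diagram, not part of the application: the transition table and the `data_ok` flag (mimicking the [data in fields is correct] guard) are our own modelling choices.

```python
# Transitions of the save dialog flow, keyed by (dialog, button pressed).
# Di1 = main dialog, Di2 = "Would you like to save data?",
# Di3 = "You have saved data", Di4 = "Incorrect input".
def next_dialog(dialog, button, data_ok=True):
    flow = {
        ("Di1", "Save"):   "Di2",
        ("Di2", "Cancel"): "Di1",
        ("Di2", "Yes"):    "Di3" if data_ok else "Di4",
        ("Di3", "Ok"):     "Di1",
        ("Di4", "Ok"):     "Di1",
    }
    return flow[(dialog, button)]

# Test Case 1.1 (correct saving): Save -> Yes -> Ok returns to the main dialog.
state = "Di1"
for button in ["Save", "Yes", "Ok"]:
    state = next_dialog(state, button, data_ok=True)
print(state)  # Di1
```

Test Case 1.3 follows the same table with `data_ok=False`, ending in Di4 before returning to Di1.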
4.3 Dialogs of the Patient Management application
The Patient Management application consists of the following dialogs:
Notification Dialogs
Error Message Dialogs
Prerequisites:
• Login into the system has to be successfully completed
• Main Dialog is displayed for the user
Test Case 1.1: Correct saving (the user has inserted valid data into the text fields)

Step 1. Input: fill all text fields (Name = Matt Pitt, Personal ID = 120775-765R, Address = unknown, Symptoms = Broken arm). Expected outcome: the text inserted by the user is visible in the fields.
Step 2. Input: press the Save button. Expected outcome: "Would you like to save data" dialog (Di2).
Step 3. Input: press the Yes button in Di2 (fields contain valid data). Expected outcome: "You have saved data" dialog (Di3). Special considerations: check whether the patient exists in the database.
Test Case 1.2: Cancel the data saving

Step 1. Input: fill all text fields (Name = Sam, Personal ID = 041081-5678, Address = Wall Street 73, Symptoms = Head Ache). Expected outcome: the text inserted by the user is visible in the fields.
Step 2. Input: press the Save button. Expected outcome: "Would you like to save data" dialog (Di2).
Step 3. Input: press the Cancel button in Di2. Expected outcome: main dialog (Di1).
Step 4. Input: press the Ok button in Di3. Expected outcome: main dialog (Di1). Special considerations: check that the data was not saved to the database.
Test Case 1.3: Incorrect saving (the user has inserted non-valid data into the text fields)

Step 1. Input: fill the text fields incorrectly or leave a required field empty (Name* = empty, Personal ID = 150984-543, Address = , Symptoms = ). Expected outcome: the text inserted by the user is visible in the fields (with errors).
Step 2. Input: press the Save button. Expected outcome: "Would you like to save data" dialog (Di2).
Step 3. Input: press the Yes button. Expected outcome: "Incorrect input" dialog (Di4).
Step 4. Input: press the Ok button. Expected outcome: main dialog (Di1).
(Figure: the Lab Test business component system, accessed through its INTERFACE. Process BC: Lab Test Workflow. Entity BCs: Lab Test, Department, Patient. Utility BCs: AddressBook, CodeBook. Auxiliary components: Performance Monitor, Database Integrity Manager, Test Result Analyzer.)
The most important action flows have to be tested. An action flow describes functions between several people and several systems. Action flows are derived from the requirements specification. The essential point is that one function follows another in the right order. If, in our example, the patient has not been created in the system, it is not possible to insert medical data about the patient into the system.
(Figure: an action flow over the Reception and Department BCs. The receptionist creates or chooses a patient; the laboratorist takes a sample and derives the reference values.)
Step 6. Take a sample.
Step 7. Analyze the sample. Special considerations: check that the analyser works correctly.
Step 8. Derive reference values. Special considerations: check that the analyser works correctly.
Step 9. Save lab test results. Input: lab test results + Save button. Expected outcome: lab test results saved. Special considerations: check that the lab test results have been saved.
(Figure 11: Save lab test results and reference values – a lower level description. In the Lab Test BC, the User DC takes a patient number as input, the results are evaluated and further actions are decided; a Resource DC is involved.)
6 Test Report
The test report table includes the columns of the test case table, extended with the actual result, information about whether the test case passed or failed, the possible defect, the severity of the defect, the date when the defect was fixed, and the person who fixed the defect.
Project: TEHO
Test Level: Functional
Test Case: TC1 Calculate Payment
Environment: WinNT
Description: Calculate payment for the patient
Technique: Equivalence Partitioning
Referred Documents: Requirements Specification
Author: Marko Jäntti
Date: 1.2.2002
Fixed by: David Designer (DD)

Id    Test input                                      Expected result       Actual result   P/F*   Defect and severity**      Fixed (Date)
1.1   Patient type=male, Age=-1, press Calculate      Error Message         Ok              P
1.2   Patient type=male, Age=5, press Calculate       Error Message         Ok              P
1.3   Patient type=male, Age=18, press Calculate      Payment is 100 euro   Ok              P
1.4   Patient type=male, Age=36, press Calculate      Payment is 120 euro   Ok              P
1.5   Patient type=male, Age=51, press Calculate      Payment is 140 euro   Payment 160     F      Error in calculation, S2   DD 13.3
1.6   Patient type=male, Age=146, press Calculate     Error Message         Ok              P
1.7   Patient type=male, Age=@#111, press Calculate   Error Message         Ok              P

* Pass/Fail  ** Severity of defect: S1 = Critical, S2 = Major, S3 = Average, S4 = Minor
1.8   Patient type=female, Age=-5, press Calculate     Error Message         Ok                 P
1.9   Patient type=female, Age=2, press Calculate      Error Message         No error message   F      Error message must be displayed   DD 12.3
1.10  Patient type=female, Age=18, press Calculate     Payment is 100 euro   Ok                 P
1.11  Patient type=female, Age=31, press Calculate     Payment is 120 euro   Ok                 P
1.12  Patient type=female, Age=51, press Calculate     Payment is 140 euro   Ok                 P
1.13  Patient type=female, Age=146, press Calculate    Error Message         Ok                 P
1.14  Patient type=female, Age=aWqi, press Calculate   Error Message         Ok                 P
Id    Test input                                      Expected result      Actual result                   P/F*   Defect and severity**             Fixed (Date)
1.15  Patient type=child, Age=-45, press Calculate    Error Message        Ok                              P
1.16  Patient type=child, Age=13, press Calculate     Payment is 50 euro   Ok                              P
1.17  Patient type=child, Age=18, press Calculate     Error Message        Ok                              P
1.18  Patient type=child, Age=42, press Calculate     Error Message        Ok                              P
1.19  Patient type=child, Age=140, press Calculate    Error Message        java.lang.ArithmeticException   F      Exception handling doesn't work   DD 14.3
1.20  Patient type=child, Age=146, press Calculate    Error Message        Ok                              P
1.21  Patient type=child, Age=xxciji, press Calculate Error Message        Ok                              P
Project: TEHO
Test Level: Functional
Test Case: TC1 Insert Patient
Environment: WinNT
Description: Insert patient information to the database
Technique: Equivalence Partitioning
Referred Documents: Requirements Specification 1.0: Activity Diagram
TC1.1 Correct saving
TC1.2 Cancel saving
TC1.3 Incorrect saving
Author: Marko Jäntti
Date: 1.2.2002
TC1.1
Step 1. Prerequisites: Main Dialog displayed. Input: fill all text fields (Name = Sam, Personal ID = 041081-5678, Address = Wall Street 73, Symptoms = Head Ache). Expected result: the text inserted by the user is visible in the fields. Actual result: Ok. P
Step 2. Prerequisites: text fields contain data. Input: press the Save button (Di1.B1). Expected result: "Would you like to save data" dialog (Di2). Actual result: Ok. P
Step 3. Prerequisites: Di2 displayed. Input: press the Cancel button in Di2. Expected result: main dialog (Di1). Actual result: Ok. P

* Pass/Fail  ** Severity of defect: S1 = Critical, S2 = Major, S3 = Average, S4 = Minor
TC1.2
Step 1. Prerequisites: Main Dialog displayed. Input: fill all text fields (Name = Sam, Personal ID = 041081-5678, Address = unknown, Symptoms = Head Ache). Expected result: the text inserted by the user is visible in the fields. Actual result: Ok. P
Step 2. Prerequisites: fields contain valid data. Input: press the Save button (Di1.B1). Expected result: "Would you like to save data" dialog (Di2). Actual result: Ok. P
Step 3. Prerequisites: Di2 displayed. Input: press the Yes button in Di2. Expected result: "You have saved data" dialog (Di3). Actual result: Ok. P
Step 4. Input: press Ok in Di3. Expected result: main dialog (Di1). Actual result: Di3 still displayed. F. Defect: button listener doesn't work, S3. Fixed: DD 16.3
Special considerations: TC1.2 Step 3: check whether the patient with valid values exists in the database.
TC1.3

Step1
  Prerequisites:   Main Dialog displayed
  Test Input:      Fill text fields with invalid values: Name = empty,
                   Personal ID = 041081-5678, Address = unknown, Symptoms = ""
  Expected result: The text inserted by the user is visible in fields (with errors)
  Actual result:   Ok    P/F*: P

Step2
  Prerequisites:   Fields contain invalid data
  Test Input:      Press Save-button (Di1.B1)
  Expected result: Incorrect input dialog Di4
  Actual result:   Dr.Watson Fatal error    P/F*: F
  Defect and severity**: System went down, S1    Fixed (Date): DD 15.3

Step3
  Prerequisites:   Di4 displayed
  Test Input:      Press Ok button in Di4
  Expected result: Main dialog (Di1)
  Actual result:   Ok    P/F*: P

Special Considerations:
* Pass/Fail  ** Severity of defect: S1 = Critical, S2 = Major, S3 = Average, S4 = Minor
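The technique named in the table header, equivalence partitioning, can be illustrated with a small sketch. The validator below is hypothetical (it is not the TEHO code under test); it only shows how one representative value per input class, such as the empty name used in TC1.3, stands in for a whole class of inputs. Java is used here purely for illustration.

```java
// Hypothetical sketch of equivalence partitioning for the Insert Patient
// test case: each input field is split into valid and invalid classes,
// and one representative value per class becomes a test input.
public class EquivalencePartitioning {

    // Valid class: non-empty name; invalid class: empty name (as in TC1.3).
    static boolean isValidName(String name) {
        return name != null && !name.isEmpty();
    }

    // Valid class: Finnish-style personal ID ddmmyy-nnnn; anything else is invalid.
    static boolean isValidPersonalId(String id) {
        return id != null && id.matches("\\d{6}-\\d{4}");
    }

    // One representative from each class, as in test cases TC1.1-TC1.3.
    public static void main(String[] args) {
        System.out.println("Sam / 041081-5678 valid: "
                + (isValidName("Sam") && isValidPersonalId("041081-5678")));
        System.out.println("empty name valid: " + isValidName(""));
    }
}
```

Any other member of the same class (for example a different well-formed personal ID) is expected to behave identically, which is what keeps the number of test cases small.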
APPENDIX D
1 INTRODUCTION .....................................................................................................................................................3
1 Introduction
2 Rational TestStudio
Rational TestStudio consists of the following products:
2.1 Rational Administrator
2.1.1 Goal
Rational Administrator/Insert test user
2.1.4 Evaluation
2.2 Rational TestManager
2.2.1 Goal
Rational TestManager is a tool for managing all test activities, test
assets and data. TestManager can be used in planning, implementing
and evaluating tests.
2.2.2 Purpose
Software developers, testers and business analysts can see test results
from their own view and are able to use the information in their own
work:
A test user can create a new test plan with test case folders that
include test cases. It is possible to associate a test case with created
iterations, which are phases of the software engineering process. A test user
can also rename iterations. The advantage of organizing test cases this way
is that it is easier to find out which tests should be executed in each phase.
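The organization described above can be sketched as a simple data structure. The model below is a made-up illustration, not TestManager's actual data model: test cases live in folders and carry an iteration tag, so the tests to execute in each phase are easy to list.

```java
import java.util.*;

// A minimal, hypothetical sketch of grouping test cases into folders and
// associating them with iterations (phases of the engineering process).
public class TestPlanSketch {
    static Map<String, List<String>> folders = new LinkedHashMap<>();
    static Map<String, String> iterationOf = new HashMap<>();

    static void addCase(String folder, String testCase, String iteration) {
        folders.computeIfAbsent(folder, k -> new ArrayList<>()).add(testCase);
        iterationOf.put(testCase, iteration);
    }

    // All test cases that should be executed in a given iteration.
    static List<String> casesFor(String iteration) {
        List<String> result = new ArrayList<>();
        for (List<String> cases : folders.values())
            for (String c : cases)
                if (iteration.equals(iterationOf.get(c))) result.add(c);
        return result;
    }

    public static void main(String[] args) {
        // Test cases and iteration names are taken from elsewhere in this report.
        addCase("Patient", "TC1.1 Correct saving", "Construction");
        addCase("Patient", "TC1.2 Cancel saving", "Construction");
        addCase("Patient", "TC1.3 Incorrect saving", "Transition");
        System.out.println(casesFor("Construction"));
    }
}
```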
Test plans and iterations
A test user is able to analyse test coverage with on-line reports, for
example test case distribution or the test case results trend, as shown in the
following picture.
On-line report of test case results
2.2.4 Evaluation
Creating test plans, test cases and iterations is quite simple. Likewise,
there are no difficulties in creating on-line reports or in associating
test cases with iterations, external documents and requirements in a
RequisitePro project.
2.3 Rational Robot
2.3.1 Goal
Rational Robot is a tool for developing scripts for functional testing
and performance testing. Robot helps you to plan, develop, execute
and analyze tests.
2.3.2 Purpose
Robot scripts can be GUI scripts (for functional testing) or VU and
VB scripts (for performance testing).
First create a script and give it a name. The type of the script is GUI
(records graphical user interface activities) or VU (collects
client/server requests).
Record a new script or update an existing script by choosing Record
command.
Robot records activities (mouse moves and keystrokes) while the
following dialog is displayed for the user:
2.3.4 Evaluation
Rational Robot is quite logical to use for GUI recording, but there
could be more instructions on how to start recording VU scripts (what
should be entered in the Start Application dialog).
A test user must create a new script if somebody makes changes to the
graphical user interface. For example, the test user creates Script 1 for
User Interface 1; a system developer then changes the places of two buttons
in UI 1, which creates UI 2.
With PureCoverage you can see how many calls coverage items
(classes or methods) receive and how many methods or lines are
missed or hit while the program is running. PureCoverage can be used for:
- Visual C/C++ code in .exes, .dlls, OLE/ActiveX controls, and COM objects
- Visual Basic projects, p-code, and native-compiled code in .exes, .dlls,
  OLE/ActiveX controls, and COM objects
- Java applets, class files, JAR files, and code launched by container
  programs in conjunction with a Java virtual machine (JVM)
- Managed code in any source language that has been compiled for
  Microsoft's .NET platform, including C#, Visual C/C++, and Visual Basic,
  in .exes, .dlls, OLE/ActiveX controls, and COM objects
- Components launched by container programs
- Microsoft Excel and Microsoft Word plug-ins
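What a coverage tool measures can be sketched in a few lines. The example below is a toy illustration, not PureCoverage's mechanism: each coverage item counts the calls it receives, and items with no calls are reported as missed.

```java
import java.util.*;

// Toy sketch of call coverage: each coverage item (here, a method)
// records how many calls it receives; uncalled items are "missed".
public class CoverageSketch {
    static Map<String, Integer> calls = new LinkedHashMap<>();

    static void hit(String item) { calls.merge(item, 1, Integer::sum); }

    static int absoluteValue(int x) { hit("absoluteValue"); return x < 0 ? -x : x; }
    static int square(int x)        { hit("square"); return x * x; }

    // Items that received no calls at all during the run.
    static List<String> missed(String... items) {
        List<String> result = new ArrayList<>();
        for (String i : items) if (!calls.containsKey(i)) result.add(i);
        return result;
    }

    public static void main(String[] args) {
        absoluteValue(-3);   // only one of the two methods is exercised
        System.out.println("calls: " + calls);
        System.out.println("missed: " + missed("absoluteValue", "square"));
    }
}
```

A real tool instruments the compiled code automatically instead of requiring explicit `hit` calls, and works at line as well as method granularity.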
PureCoverage creates a list of all modules and functions in the application
PureCoverage marks code lines with different colours
2.4.4 Evaluation
Java programmers and testers can use Purify with a Java Virtual
Machine to develop and tune the memory efficiency of applets and
applications. Purify provides tools for managing memory and
analysing data. The program finds problems such as methods that use too
much memory or objects that cause difficulties for garbage collection.
Purify can run applets, class files and JAR files, and supports container
programs such as the JVM viewer or Internet Explorer.
Purify automatically displays the memory tab when a test user has run
a Java program. The memory allocation graph is a graphical representation
of the total amount of memory allocated to the program while it
was running. In addition, Purify creates:
- Call graph
- Function list
- Object list
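The kind of problem Purify surfaces can be shown with a deliberately leaky sketch (a hypothetical example, not taken from any tested application): objects kept reachable from a long-lived collection can never be garbage collected, so memory use grows even though the objects are no longer needed.

```java
import java.util.*;

// Hypothetical illustration of a memory problem a profiler would flag:
// buffers added to a long-lived collection stay reachable forever.
public class RetainedMemorySketch {
    static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest(int i) {
        byte[] buffer = new byte[1024];
        cache.add(buffer);   // bug: the buffer is retained after the request ends
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest(i);
        // All 1000 buffers are still reachable; the garbage collector
        // cannot reclaim them, which a memory profiler would report.
        System.out.println("retained buffers: " + cache.size());
    }
}
```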
2.5.4 Evaluation
2.6.2 Purpose
Software developers are able to check with Quantify where their
application spends most of its time. Quantify helps you to:
- Collect performance data
- Analyze performance data
- Compare performance data
Java applets, class files, JAR files, and code launched by
container programs in conjunction with a Java virtual machine
(JVM)
Quantify displays function details of the test application
By clicking the right mouse button in the call graph and choosing
Switch to | Annotated Source, you can see how much time each line of
the program code requires.
Quantify shows how much time each code line or method consumes.
2.6.4 Evaluation
Quantify is quite easy to use for measuring the performance of test
applications. The program is suitable for software developers in
performance testing.
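The measurement Quantify automates can be approximated by hand with simple timers. The sketch below is a rough illustration of the idea only, not a replacement for a profiler, which attributes time per function and per line without manual instrumentation.

```java
// Manually timing functions to see where a program spends its time --
// the measurement a profiler such as Quantify automates.
public class TimingSketch {

    static long slowSum(int n) {          // O(n) work to measure
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long small = timeNanos(() -> slowSum(1_000));
        long large = timeNanos(() -> slowSum(10_000_000));
        // The larger input dominates the run time, pointing to the hotspot.
        System.out.println("small: " + small + " ns, large: " + large + " ns");
    }
}
```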
shorten the product testing cycle by minimizing the time
invested in writing navigation code.
The Application mapper creates the application map by exploring the user
interface of the application-under-test (AUT). Software developers
can check which components are included in the application and what
the components look like.
2.7.4 Evaluation
This test tool would need more instructions for users. The only thing
that our test team could do with TestFactory was to create the
component list; the program then displayed the components as they
appear in the user interface.
SilkPerformer helps system testers in load testing and monitoring. A
load testing program has to create a realistic load for the application to
show its functionality and reliability. Load testing covers scalability,
realism and the integration of architectures. The Server Analysis Module
is part of SilkPerformer and measures the efficiency of servers and
machines in the programming environment.
SilkPilot (Silk for CORBA and EJB) is a tool for functional and
regression testing of CORBA and EJB/J2EE servers. SilkPilot can be
used for
SOFTWARE TESTING TOOLS
Pentti Pohjolainen
Department of Computer
Science and Applied
Mathematics
University of Kuopio
March 2002
CONTENTS
1 Introduction ............................................................................................. 5
2 References............................................................................................... 10
3 Division of the Tools .............................................................................. 11
3.1 Test Design Tools .........................................................................................................11
3.1.1 Test Case Tools ...........................................................................................................11
3.1.2 Database Tools.............................................................................................................12
3.1.3 Data Generators ...........................................................................................................13
3.1.4 General Test Design ...................................................................................................14
3.2 GUI Test Drivers ...........................................................................................................16
3.3 Load and Performance Testing Tools.................................................................25
3.4 Test Management Tools ............................................................................................32
3.4.1 CORBA .........................................................................................................................32
3.4.2 C/C++.............................................................................................................................33
3.4.3 Others .............................................................................................................................36
3.5 Test Implementation Tools ......................................................................................48
3.5.1 Java .................................................................................................................................48
3.5.2 C/C++.............................................................................................................................51
3.5.3 Others .............................................................................................................................53
3.6 Test Evaluation Tools .................................................................................................58
3.6.1 Java .................................................................................................................................58
3.6.2 C/C++.............................................................................................................................60
3.6.3 Others .............................................................................................................................65
3.7 Static Analysis Tools ..................................................................................................69
3.7.1 Java .................................................................................................................................69
3.7.2 C/C++.............................................................................................................................71
3.7.3 Others .............................................................................................................................75
1 Introduction
This work started from the subject of my pro gradu thesis, "The newest methods and tools for
software testing". After a long search, nearly 600 (six hundred) tools were found, and I am sure
that there are many more tools than those encountered here. A very good aid to me was the list on the
Internet (www.testingfaqs.org), which Brian Marick made famous and which is now maintained by Danny
Faught. Other sources have been the Internet overall, the brochures of the products and the literature.
Because the number of tools was so large, I had to restrict them and take up only the most
interesting ones. The division into the main groups was: Design, GUI (Graphical User Interface),
Load and Performance, Management, Implementation, Evaluation, Static Analysis and, outside of
inspection: Defect Tracking, Web Sites and Miscellaneous.
The limits between the groups are blurred, because many tools can belong to
several classes (see Figure 1).
Figure 1. Overlapping tool groups: Design, GUI, Load and Performance, Management,
Implementation, Evaluation, Static Analysis
A short description of the main groups:
Regression Tools
Regression testing tools are used to test software after modification. Dividing them into
groups as above (one or two typical examples per group are presented), there were:
Design: None
GUI: Auto Tester for Windows (No 3) is specifically meant to support project teams
in automating regression testing. Others 4, 6, 12, 15 and 27.
Load and Performance: Teleprocessing Network Simulator (No 20) can be used to
automate regression testing. Others 10 and 12.
Management: Test Manager (No 2) provides an interactive development environment
for working with regression test suites. OTF – On Object Testing Framework (No 18)
is a tool for Smalltalk objects, in which regression testing is automatic with full
logging of results. Others 1, 4, 5, 10, 14, 16, 27, 28, 29, 37 and 38.
Implementation: Junit (No 5) is a regression testing framework used by developers
who implement unit tests in Java. Others 1, 15 and 18.
Evaluation: Logiscope (No 26) verifies effective non-regression when program files
have been modified.
Static Analysis: ParaSoft Jtest (No 6) automatically performs regression testing of
Java code.
Total 28 tools.
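JUnit, mentioned under Implementation above, popularized the xUnit pattern. The sketch below is a self-contained miniature of that pattern and does not reproduce JUnit's actual API: test cases run independently, failures are collected, and the run reports passed and failed counts.

```java
import java.util.*;

// Self-contained miniature of the xUnit pattern (not JUnit's real API):
// each test case runs in isolation and failures are collected for a summary.
public class MiniXUnit {
    interface TestCase { String name(); void run() throws Exception; }

    static List<String> failures = new ArrayList<>();

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    static int runAll(TestCase... tests) {
        int passed = 0;
        for (TestCase t : tests) {
            try { t.run(); passed++; }
            catch (Throwable e) { failures.add(t.name() + ": " + e.getMessage()); }
        }
        return passed;
    }

    public static void main(String[] args) {
        int passed = runAll(
            new TestCase() {
                public String name() { return "addition"; }
                public void run() { assertEquals(4, 2 + 2); }
            },
            new TestCase() {
                public String name() { return "failing"; }
                public void run() { assertEquals(5, 2 + 2); }
            });
        System.out.println(passed + " passed, " + failures.size() + " failed");
    }
}
```

Because one failing test does not stop the others, such a suite can be re-run after every modification, which is what makes it a regression testing framework.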
Requirement Tools
Requirement-based or requirement definition related tools.
Design: Caliber-RBT (No 1) is a test case design system for requirement-based
testing. Others 6 and 15.
GUI: Panorama-2 (No 8) is a tool for requirement analysis. Another 17.
Load and Performance: SilkRealizer (No 19) is a tool that enables users to develop and
deploy system-level testing simulating real-world events, to assure that applications
will meet the requirements.
Management: AutoAdviser (No 13) provides, from requirements through production, a
central repository for organizing and managing business requirements, tests and
associated files. Others 18, 21, 35 and 37.
Implementation: None
Evaluation: Panorama C/C++ (No 15) is a requirement and test case analysis tool.
Static Analysis: None
Total 12 tools.
Component Tools
Tools, which have some relationships with component-programming.
Design: Inferno’s (No 2) capabilities include creation of an intuitive library of
reusable components that support shared-scripts and data-driven-scripts.
GUI: None
Load and Performance: None
Management: SilkPilot (No 1) lets you test the behaviour of distributed objects within
your application’s server components. AutoAdviser (No 13) consolidates your test
library components and provides test team members with access to those components.
Others 2, 32, 40 and 42.
Implementation: AssertMate for Java (No 2) is a system that helps Java engineers use
assertions to ensure proper implementation of component interfaces. Another 1.
Evaluation: QC/Coverage (No 16) helps users by identifying code components that
have not been adequately tested.
Static Analysis: WhiteBox (No 33) provides insight into the complexity of different
components of the software.
Total 11 tools.
Integration Tools
Tools used with integration testing.
Design: ObjectPlanner (No 13) allows software developers and managers to calculate
the approximate time schedules to perform unit and integration testing. Another 15.
GUI: Solution Evaluation Tool (No 14) is usable in testing the integration of new
applications.
Load and Performance: None
Management: Cantata (No 3) is a solution for unit and integration testing of C and
C++ code. Others 5, 6, 14, 25, 32 and 40.
Implementation: Autotest (No 14) is an automatic test harness tool for integration
testing. Another 15.
Evaluation: Cantata++ (No 7) is an effective tool for C++ integration testing. Others
18 and 27.
Static Analysis: None.
Total 15 tools.
Object-oriented Tools
Tools used especially with object-oriented systems. All Java and C++ tools fall
automatically into this category, although the search did not find them with the
keyword "object".
Design: T-VEC Test Generation System (No 15) is integrated in an environment to
support the development and management of structured or object-oriented
requirements specifications. Another 12.
GUI: Vermont HighTest Plus (No 23) includes object-level record/playback of all
Windows and VB controls. Others 4, 8, 16, 17, 18 and 27.
Load and Performance: None
Management: TOOTSIE (No 42) is a total object-oriented testing support
environment. Others 1, 2 and 18.
Implementation: ObjectTester (No 11) is a software tool to automate the generation of
C++ unit test scripts. Others 1 and 2.
Evaluation: TotalMetric for Java (No 1) is a software metrics tool to calculate and
display object-oriented metrics for the Java language.
Static Analysis: ObjectDetail (No 12) is a software tool to help automate the metrics
generation of C++ programs. Another 27.
Total 19 tools.
Coverage Tools
Code coverage, test case coverage, test coverage and so on.
Design: Caliber-RBT (No 1) uses the requirements as a basis to design the minimum
number of test cases needed for full functional coverage. Others 2, 4 and 15.
GUI: Smalltalk Test Mentor (No 15) automatically gathers execution and method
coverage metrics. Others 8, 18 and 20.
Load and Performance: DataShark (No 6) generates the minimal number of test cases
with maximum coverage based on equivalence class partitioning and boundary
condition analysis.
Management: Cantata (No 3) is a code coverage analyser. Others 10, 13 and 38.
Implementation: None
Evaluation: DeepCover (No 2) provides test coverage analysis for C/C++ and Java
applications. ObjectCoverage (No 13) is a branch coverage analysis tool for C++.
Others: all but 1, 21 and 24.
Static Analysis: Plum Hall (No 15) is a static analysis and code coverage tool for
testing C/C++ code. Others 21, 31 and 33.
Total 45 tools.
Test case Tools
Tools used e.g. in test case design.
Design: Validator/Req (No 6) performs test case generation. Others 1, 2, 3, 4 and 5.
GUI: imbus GUI Test Case Library (No 26) is a library of predefined test cases for
automated GUI testing. Others 8 and 19.
Load and Performance: DataShark (No 6) generates the minimal number of test cases
with maximum coverage based on equivalence class partitioning and boundary
condition analysis.
Management: QCIT (No 21) tracks the software testing process from requirement
development, through test plan and test case development, to execution. Test Case
Manager-TCM (No 30) organizes test cases for storage and execution logging. Others
5, 10, 25, 26, 28, 31, 34, 35 and 40.
Implementation: Autotest (No 14) controls a series of test cases on different programs
and software units Visula. Others 11 and 13.
Evaluation: Object/Coverage (No 13) analyses statements and generates a test case
coverage report. Others 15 and 24.
Static Analysis: Poly Space Verifier (No 16) is a tool designed to directly detect run-
time errors and non-deterministic constructs at compilation time. Others 15, 31 and
33.
Total 31 tools
Use case Tools
Use case testing and design.
Every group: none.
Total sum 161 tools.
If you are interested, for example, in regression tools, you now have them in the same group and you
don't have to scan through all the 198.
These were two examples of how to divide the tools into groups. There are many other grounds
for doing the same thing; everyone can decide for himself which is best.
We can also place the different types of tools in the software development life cycle (Figure 2.).
The limits between groups are ambiguous. The division is based on the divisions by Fewster and
Graham [FeG99] and Tervonen [Ter00].
Test management tools can be used in the whole software development life cycle. Test design and
inspection tools can be used in the requirement specification, architectural design and detailed
design phases. Static analysis tools help testing in the coding phase. Execution and comparison
tools can be used overall on the right side of the V-model. Dynamic analysis tools are usable in
functionality, integration and unit testing; they assess the system while the software is running.
Coverage tools are designed specifically for unit testing. Acceptance and system testing fall to
load and performance tools. GUI test drivers have features of many other tools and are useful in the
whole implementation and evaluation area, but they are designed for GUI testing and form a distinct
group of their own.
Figure 2. Division of the tools in the software development life cycle (V-model)
2 References
[FeG99] Fewster, M., Graham, D.: Software Test Automation. ACM Press, New York, 1999.
[Ter00] Tervonen, I.: Katselmointi ja testaus (Review and testing). Lecture notes, University of Oulu, 2000.
3 Division of the Tools
Every tool is described with a similar structure. The header line contains the name of
the tool, the company name and the www-address(es). The description then begins with one sentence
explaining the main scope of the tool, followed by a short software description and, in
the last lines, possible free use and the platforms of the tool, if known.
Test design and execution. RadSTAR is a model-based, automated software testing
approach initially developed in the USA by IMI Systems Inc. It is a combination of a test
planning methodology and an automated engine, which executes the planned test cases.
Platforms: Any
7. DataFactory, Quest Software Inc., www.quest.com
Populate test databases with meaningful test data. DataFactory will read a database schema
and display database objects such as tables and columns, and users can then point, click, and
specifically define how to populate the table.
Generates meaningful data from an extensive test database that includes tens of thousands of
names, zip codes, area codes, cities, states, and more.
Enables users to test with millions of rows of data, giving a realistic picture of database
performance in the production environment.
Project-oriented architecture allows multiple tables and databases to be loaded in a single
session.
Supports direct access to Oracle, DB2, Sybase and any ODBC (Open Database
Connectivity) compliant database.
Enables developers to output to the database or to a flat text file.
Maintains referential integrity, including support for multiple foreign keys.
Gives developers the option to use DataFactory data tables or import their own data from a
database or delimited text file.
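The core idea of a data generator such as DataFactory can be sketched briefly. All names and values below are made up for illustration; a fixed random seed makes the generated test data reproducible, which matters when a failing test must be re-run with the same data.

```java
import java.util.*;

// Hypothetical miniature of a test data generator: field values are drawn
// from seed tables to populate many realistic-looking rows.
public class DataGeneratorSketch {
    static final String[] NAMES = { "Sam", "Anna", "Mikko", "Laura" };
    static final String[] CITIES = { "Kuopio", "Helsinki", "Oulu" };

    static List<String> generateRows(int count, long seed) {
        Random random = new Random(seed);  // fixed seed: reproducible test data
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < count; i++)
            rows.add(NAMES[random.nextInt(NAMES.length)] + ";"
                   + CITIES[random.nextInt(CITIES.length)] + ";" + (70000 + i));
        return rows;
    }

    public static void main(String[] args) {
        // Five semicolon-separated rows: name;city;unique id column.
        for (String row : generateRows(5, 42L)) System.out.println(row);
    }
}
```

A real product additionally reads the database schema, maintains referential integrity across tables and ships seed tables with tens of thousands of realistic values.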
Multi-purpose data tool for IT people. JustData is a rapid data generation tool for IT
persons who need to create large amounts of structured data prior to testing applications.
Working with ADO (Active Data Objects)/SQL ANSI92 compliant databases systems,
General applications, Spreadsheet applications, MSSQL Server V6.0 - 7.0, Oracle 7.0 - i8.0
SQL*Loader.
Platforms: Windows 95, 98, NT, 2000
resources required to test an application are usually not known. ObjectPlanner analyses C++
classes and calculates the approximate time that will be required to perform unit testing.
This allows software developers and managers to calculate the approximate time schedules
to perform unit and integration testing.
Platforms: SPARC - SunOs 4.1.X and Solaris 2.X
3.2 GUI Test Drivers
where the designated window is to be checked to see if it differs from the "canonical" test
run originally recorded, signalling a test failure if this is the case. It can also be used to write
GUI automation scripts.
Freeware.
Platforms: Record mode requires Expect. Playback or scripting will work on any Posix-
compliant system with a working port of Tcl/Tk (programming languages).
users from the complexities of script languages. Certify detects application changes and
automatically maps them to affected tests, simplifying maintenance and protecting the
investment in the test repository.
Free demo.
Platforms: Windows 98, NT, 2000. May require third party test tool for client/server
applications.
Dynamic test result display in colourful class inheritance chart, function call graph, on-line
accessible reports, and logic/control flow diagrams with unexecuted path/segment/condition
outcome highlighted.
Platforms: SUN Sparc, OS/Solaris, Windows NT, Windows 95, HP-UX (new).
GUI capture, script development test automation tool. QARunTM is an enterprise-wide test
script development and execution tool that is part of Compuware's QACenterTM family of
application testing products. QARun's automated capabilities improve the productivity of
testers, enabling them to efficiently create test scripts, verify tests, and analyze test results.
Using QARun, tests can be easily maintained for repeat use.
With Compuware's QACenter, software testers and application developers can now ensure
application quality with the enterprise-wide testing solution that includes client/server
automated testing tools, mainframe testing tools, test process management tools and testing
services. QACenter consists of client/server automated testing tools as well as Compuware's
mainframe testing products, QAHiperstationTM, for VTAM (Virtual Telecommunications
Access Method) applications, and QAPlaybackTM, for CICS (Customer Information Control
System) -based applications.
Platforms: All Windows and character based platforms
Platforms: Windows 3.x, Windows/95, Windows/NT, OS/2
15.Smalltalk Test Mentor, SilverMark, Inc., www.silvermark.com,
http://www.testingfaqs.org/t-gui.htm#stm
Test framework for Smalltalk. Automated testing tool for VisualAge for Smalltalk and
Cincom's VisualWorks. Test Mentor is an automated testing framework for Smalltalk. It
seamlessly integrates UI record/playback with domain object testing for deep regression
testing of your applications. Test Mentor automatically gathers execution and method
coverage metrics, and provides analysis tools so you can rapidly measure the health of your
project.
Platforms: All platforms supported by VisualAge for Smalltalk and Cincom's VisualWorks
metrics.
Version: 1.1b
Free evaluation copy.
Platforms: Win 95/98/NT, Win3.1, AIX, OS/2
21.TestQuest Pro Test Automation System, TestQuest, Inc.,
www.testquest.com, http://www.testingfaqs.org/t-gui.htm#TestQuest
Automated software testing tool. TestQuest provides non-intrusive test automation tools and
services for information appliances, general computing, handheld devices and industrial
control. Our products, services and expertise enable you to easily automate the testing of
your complex products resulting in decreased test cycle time and increased product quality
and customer satisfaction.
Free demo.
Platforms: The software under test may be executing under any operating system. The
TestQuest Pro runs on MS Windows 2000.
25.WinRunner, www.merc-int.com, http://www.testingfaqs.org/t-
gui.htm#winrunner
Automated GUI client testing tool. Automated Testing Tool for MS Windows applications.
WinRunner is an integrated, functional testing tool for your entire enterprise. It captures,
verifies and replays user interactions automatically, so that you can identify defects and
ensure that business processes, which span across multiple applications and databases, work
flawlessly the first time and remain reliable.
Platforms: Windows NT and OS/2
Free evaluation.
Platforms: Sun Solaris/SunOS, Digital Unix, HP-UX, AIX, Silicon Graphics IRIX, DEC
VMS, Linux, other Unix platforms
2. AutoController, AutoTester Inc., www.autotester.com,
http://www.testingfaqs.org/t-load.htm#Controller
Load Tester/Manager. AutoController provides a centralized environment for distributed or
stress testing of Windows and OS/2 client/server applications across a network.
AutoController is the automated solution for distributed testing of your Windows and OS/2
GUI and client/server applications.
From a central point, you can execute and monitor tests created with AutoTester for
Windows and OS/2, across a network, for load, stress and performance testing purposes.
Any combination of tests may be scheduled for concurrent or synchronized playback across
any combination of network machines.
During execution, test progress is monitored allowing complete control of each workstation.
When the tests are complete, the results are immediately available for failure and
performance analysis.
AutoController gives you the power and flexibility you need to put your GUI and
client/server applications to the test.
Platforms: Windows 95, Windows NT, OS/2 2.1 and higher.
Platforms:
Console platforms: WinNT and OS/2
Endpoint platforms: Win31, Win95, WinNT for x86, WinNT for Alpha OS/2,NetWare,
AIX, Digital UNIX, HP-UX, Linux, Sun Solaris, Novell Netware, MVS
Version: 1.1.1
Platforms: Java and JDK (Java Development Kit) 1.1 platforms.
Quality and performance testing. preVue is a remote terminal emulator that provides quality
and performance testing of character-based applications.
Version: 5.0
Platforms: Sun Solaris/SunOS, AIX, Silicon Graphics IRIX, OS/2, HP-UX, Digital Unix,
DEC VMS
Multi-user quality and performance testing. preVue-C/S provides accurate and scaleable
multi-user quality and performance testing for a wide range of client/server environments.
preVue-C/S supports successful deployment of client/server applications through emulation
of hundreds or thousands of users for stress load testing and performance measurement.
Version: 5.0
Platforms: Win3.1, Sun Solaris/SunOS, AIX, Silicon Graphics IRIX, OS/2, Win 95/98/NT,
HP-UX, Digital Unix, MacOS, IBM AS/400
15.Rational Quantify, Rational Software Corp, www.rational.com
Performance testing tool. Rational Quantify for Windows automatically pinpoints
performance bottlenecks in Visual Basic, Visual C++ and Java applications. It enables you
to analyze performance data and make the changes that will have the greatest impact. And
you can measure the results of each improvement as you tune your application for maximum
performance.
Rational Quantify offers server-side as well as client Java support. With Quantify installed
on your web server, you can find Java performance bottlenecks. You can run Quantify
against Java Server Pages (JSPs) and Java Servlets running on Web servers such as BEA
WebLogic Server or Apache Jakarta Tomcat.
Free evaluation.
For instance, the Database RemoteCog can be used to create SQL scripts to perform
database maintenance, shutdown and startup. You can even parameterize the SQL to handle
multiple databases. Using the OS RemoteCog, you can launch programs and scripts across
one or many machines. You can easily control and run different software packages from
one place using one interface - the RemoteCog Control Center. This simplifies training,
support, reduces the chance of errors and reduces total costs.
Free trial.
21.VisionSoft/PERFORM, VisionSoft Inc., www.methods-tools.com/tools/testing.html
Performing and optimizing tool. PERFORM analyzes your application's execution behavior
to identify the most executed sections for performance improvement. Statement frequency
(i.e. execution counts) and function/method execution time data is collected. PERFORM
automatically produces reports and color highlighted source code displays to locate places
where code can be optimized for performance improvement. PERFORM can apply 10
different source code optimization techniques along with cache optimization methods to
yield performance improvements. PERFORM works with any C/C++ build and application
execution environment.
Version: 6.3
Platforms: Sun Solaris/SunOS, HP-UX, OS/2, MacOS, AIX, DEC VMS, VxWorks, Win
95/98/NT, Win3.1, Silicon Graphics IRIX, DOS
3.4.1 CORBA
The Professional Edition includes all Standard Edition features, plus powerful test
automation and code generation capabilities.
Platforms: Siemens, Stratus, Win 95/98/NT
3.4.2 C/C++
Programmable source-level thread debugger and test execution engine for programs written
in CHILL, C or C++. The Pilot is designed for networked, real-time software. The Pilot can
handle multiple programs, each with multiple threads. Programs need not be stopped in
order to be attached. Single threads may be stopped by command or at breakpoint, without
stopping the program. Interrupt handling and other processing may continue while some
threads are stopped or single stepped. Pilot commands may be interpreted at breakpoints, for
much higher throughput use functions called inside the program at breakpoint (trace with
user-specified data, breakpoint filter, program manipulation). The Pilot is fully
programmable. Scripts are written in a scripting language with expressions, 'for' and 'if'
statements just like the source language (adapts to the source language of the application, if
you program in C, you get C expressions and control statements.) A general macro facility
(textual macros with parameters) lets you program the Pilot for complex tasks. Fast start-up
and access, typically less than 5 seconds even for very large telecom applications. Minimal
interference until program control services are actually used, full speed real-time execution.
Platforms:
Hosts: Sparc/Solaris, SCO Unix on 486 PCs.
Targets: All hosts, and other platforms (please enquire).
Reengineering legacy systems. McCabe Reengineer is an interactive visual environment for
understanding, simplifying, and reengineering large legacy software systems. Based on
twenty years' experience of measuring and reengineering software applications, McCabe
Reengineer provides comprehensive system analysis to locate high risk and error prone code
that will form the basis of reengineering efforts. By automating the documentation of critical
software characteristics you can immediately attain: faster understanding of architecture,
location of high-risk code, focused development efforts and accurate resource planning.
McCabe Reengineer brings focus, speed, and reliability to your reengineering process,
resulting in cheaper accelerated redevelopment, with faster time to market.
Supported languages: Ada, C, C++, COBOL, FORTRAN, Java, Visual Basic.
to construct executable test harnesses for both host and embedded environments. Utilities
are also included to construct and execute test cases, generate the reports necessary to
provide an audit trail of expected and actual results, perform automated regression testing
and code coverage.
Free demo.
Platforms: Solaris, SunOS, HP UX, AIX, Alpha Unix, NT/95, VMS
3.4.3 Others
11.Aegis, http://www.canb.auug.org.au/~millerp/aegis/,
http://www.testingfaqs.org/t-driver.htm#aegis
Software configuration management tool for a team of developers. Aegis is a transaction-
based software configuration management system. It provides a framework within which a
team of developers may work on many changes to a program independently, and Aegis
coordinates integrating these changes back into the master source of the program, with as
little disruption as possible. Aegis has the ability to require mandatory testing of all change
sets before they are committed to the repository. Tests are retained in the repository, and
may be replayed later by developers, to make sure future change sets don't break existing
functionality. Correlations between source files and test files allow Aegis to suggest relevant
tests to developers. Bug fixes are not only required to have their tests pass on the fixed code,
but they are required to fail on the unfixed code immediately before commit, to demonstrate
that the bug has been reproduced accurately.
Platforms: Everything. Aegis is open source software.
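Aegis's bug-fix discipline described above can be sketched in a few lines. This is a hypothetical illustration of the gate, not Aegis's actual implementation; the function names are invented:

```python
# Hypothetical sketch of Aegis's bug-fix discipline: before a change set is
# committed, its regression test must FAIL against the old (unfixed) code and
# PASS against the new (fixed) code, proving the bug was actually reproduced.
# The function names below are illustrative, not part of Aegis itself.

def buggy_total(prices):          # baseline version with an off-by-one bug
    return sum(prices[:-1])       # accidentally drops the last item

def fixed_total(prices):          # candidate version in the change set
    return sum(prices)

def regression_test(total_fn):
    """The test shipped with the change set."""
    return total_fn([1, 2, 3]) == 6

def accept_change(test, old_fn, new_fn):
    # Aegis-style gate: test must fail on old code AND pass on new code.
    return (not test(old_fn)) and test(new_fn)

print(accept_change(regression_test, buggy_total, fixed_total))  # True
```

A test that already passes on the unfixed code would be rejected by this gate, because it demonstrates nothing about the bug.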
AutoAdviser, managers, business analysts, application users, testers, and developers can
ensure software quality throughout the entire application lifecycle.
Central Repository:
AutoAdviser is a true workgroup solution for the use, management, and maintenance of
your application test libraries. Serving as a central repository, AutoAdviser consolidates
your test library components and provides test team members with access to those
components. Business requirements, test plans, tests, and test results are all stored and
managed from within AutoAdviser.
Test Planning:
AutoAdviser helps you plan your testing to ensure that all critical business procedures are
tested and business requirements are addressed. Business requirements are stored in the
repository and linked directly to your AutoTester tests. AutoAdviser displays your business
requirements in a hierarchical format allowing you to quickly analyze the business process
flow of your applications.
Full documentation features provide easy reference to requirement details and associated
tests. Test execution and reporting can be controlled at the business requirement level for
measuring test coverage. With AutoAdviser, you can ensure that each and every function of
your application is adequately tested before release.
Test Organization:
AutoAdviser greatly reduces test creation and maintenance time by organizing your testing
projects into hierarchical groups. From these groups, tests from current projects can be
copied or grouped with tests from other projects.
For example, if you had a common navigational sequence, you would create a test once,
then copy it into each test group that required navigational testing. If the navigational
sequence changes, you would need to update only one test component - instead of hundreds
of tests.
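The shared-component idea in the navigation example can be sketched as follows. The data layout below is an assumption for illustration, not AutoAdviser's actual storage format:

```python
# Illustrative sketch (not AutoAdviser's actual API): a shared test component
# referenced by several test groups, so one edit updates every group.

shared = {"login_nav": ["open app", "enter credentials", "go to main menu"]}

groups = {
    "orders":  ["login_nav", "create order", "verify order"],
    "reports": ["login_nav", "open report", "check totals"],
}

def expand(group):
    steps = []
    for item in groups[group]:
        steps.extend(shared.get(item, [item]))   # inline shared components
    return steps

# Changing the shared sequence once changes it everywhere:
shared["login_nav"].append("dismiss welcome dialog")
print(expand("orders")[3])  # 'dismiss welcome dialog'
```

Both groups pick up the extra navigation step without either group's own test list being touched.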
As your applications progress through their lifecycle, AutoAdviser provides a structured
approach to the entire testing process. By combining individual tests into groups and
projects, you can verify specific sections of an application, from a single dialog to an entire
functional area without having to recreate entirely new test projects.
Test Scheduling & Execution:
AutoAdviser allows you to control your entire testing effort from one central location. Tests
can be executed directly within AutoAdviser and can be scheduled for immediate playback
or to run at a specific time in the future. For future execution, you can set a countdown timer
or specify an exact date/time specification.
Change Control:
Project managers control the entire testing effort with AutoAdviser's change control
features. Various levels of access rights, from report viewing to test modification to full
project management privileges allow you to manage access to your test library components.
AutoAdviser monitors changes made to your library and protects your testing assets by
preventing users from overwriting files or modifying the same test files at the same time.
AutoAdviser also produces audit trail reports that track changes in the AutoAdviser
database, such as who modified a test file and when, making it easy to evaluate the status of
your test library.
Quality Analysis and Drill-Down Reporting:
AutoAdviser's reporting options allow you to make informed decisions concerning the
release of your applications. Instead of just producing simple pass/fail statistics,
AutoAdviser offers a multitude of customizable reports that make it easy to analyze the
progress of your testing and development effort.
AutoAdviser's status reports provide a snapshot of a project's current state by calculating
coverage and success of tests and requirements. To provide an early warning before project
milestones are missed, AutoAdviser's progress reports measure the change in coverage and
success between project test runs or dates. In addition, graphical drill-down reports give you
an overall project status and allow you to quickly get more information by "drilling down"
to the desired level of detail.
Freeware.
Platforms: Most Unix machines
Platforms: All Windows and character based platforms
Configuration management. Rational ClearCase is software configuration management
solution that simplifies the process of change. To software teams of all sizes, it offers tools
and processes you can implement today and tailor as you grow. Rational ClearCase provides
a family of products that scale from small project workgroups to the distributed global
enterprise, enabling you to:
Accelerate release cycles by supporting unlimited parallel development.
Unify your change process across the software development lifecycle.
Scale from small teams to the enterprise without changing tools or processes.
26.SDTF – SNA Development Test Facility, Applied Computer Technology,
www.acomtech.com, http://www.testingfaqs.org/t-driver.htm#sdtf
Network test driver/manager. SNA (Systems Network Architecture) network product testing
system. Provides PC-based environment for development, architectural conformance
verification, load/stress and performance testing. Provides over 13,000 ready-to-run
validated tests, and allows development of additional tests using test case development tools
and an open API (Application Programming Interface).
Platforms: DOS, Windows 98, Windows NT
automatic output synchronization and test case preview and optimization. SMARTS
automates testing by organizing tests into a hierarchical tree, allowing automatic execution
of all or a subset of tests, and reporting status, history, regression, and certification reports.
EXDIFF verifies bitmaps captured during a recording session and automatically compares
them with actual images at playback. EXDIFF can also determine a successful test based on
actual values, using Optical Character Recognition and translating the values to ASCII
characters. STW/Regression includes automated load generation for multi-user applications,
and employs a test management component for automated test execution and management to
allow unattended test runs.
Platforms: Sun Solaris/SunOS, AIX, Silicon Graphics IRIX, HP-UX, DOS, Digital Unix,
SCO Unix, Solaris X86
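The hierarchical test tree that SMARTS organizes tests into can be sketched abstractly. The tree structure and sample tests below are invented for illustration:

```python
# Sketch of the idea behind SMARTS' hierarchical test tree (structure assumed):
# tests grouped in a tree so a whole subtree can be run and summarized.

tree = {
    "gui":  {"menus": [lambda: True, lambda: True],
             "dialogs": [lambda: False]},
    "data": {"import": [lambda: True]},
}

def run(node):
    """Run every test below this node, returning (passed, total)."""
    if isinstance(node, list):
        results = [t() for t in node]
        return sum(results), len(results)
    passed = total = 0
    for child in node.values():
        p, t = run(child)
        passed, total = passed + p, total + t
    return passed, total

print(run(tree))          # (3, 4) -- whole suite
print(run(tree["gui"]))   # (2, 3) -- just the GUI subtree
```

Selecting a subtree is what allows "automatic execution of all or a subset of tests" from one structure.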
Organizes test cases for storage and execution logging. Test Case Manager (TCM) is a tool
designed for software test engineers to organize test cases for storage and execution logging.
Test cases are written up in a standard format and saved into the system. Test cases can be
organized by level (Smoke, Critical Path, Acceptance Criteria, Suggested), by area (GUI
breakdown, installation, data, etc.), by status (pass, fail, untested, etc.), or other breakdown
criteria. Once test cases are built, testers use TCM to track and report success or failure of
test cases. TCM provides an unlimited number of central, multi-user databases, each of
which will support an entire test team. TCM is intended for use by small to midsize software
development companies or organizations. Most features are implemented as intuitive
wizards for users to step through.
Freeware.
Platforms: Windows. Requires Microsoft Access.
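TCM's organization by level, area, and status can be sketched with a few records. The field values below mirror the categories named above; the data itself is invented:

```python
# Minimal sketch of TCM-style organization (field names are assumptions):
# test cases tagged by level, area, and status, with simple filtering/reporting.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    level: str    # e.g. Smoke, Critical Path, Acceptance Criteria, Suggested
    area: str     # e.g. GUI, installation, data
    status: str   # pass, fail, untested

cases = [
    TestCase("launch app", "Smoke", "installation", "pass"),
    TestCase("save record", "Critical Path", "data", "fail"),
    TestCase("resize window", "Suggested", "GUI", "untested"),
]

def report(cases, **criteria):
    hits = [c for c in cases
            if all(getattr(c, k) == v for k, v in criteria.items())]
    return [c.name for c in hits]

print(report(cases, status="fail"))   # ['save record']
print(report(cases, level="Smoke"))   # ['launch app']
```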
32.Test Mentor – Java Edition, SilverMark, Inc., www.javatesting.com,
http://www.testingfaqs.org/t-driver.htm#TestMentorJava
Java component, unit and function test automation tool. A functional test and test modelling
tool for Java developers & QA Engineers to use as they develop their Java classes, clusters,
subsystems, frameworks, and other components, either deployed on the client or the server
during unit and integration testing.
Free trial.
Platforms: Client (user-interface) runs on Windows platforms only, test execution on all
Java platforms
execution engine that organizes test cases and test information, drives high-speed test
execution, and captures all results and journal information for complete IEEE (Institute of
Electrical and Electronic Engineers) standard audit trail documentation and requirements
traceability. TestExpert's open architecture provides "Best-of-Class" solution through
seamless integration with testing tools from industry leaders, such as PureAtria, Rational,
Mercury Interactive, and Segue and allows custom integration with existing "in-house"
systems.
Platforms: Solaris, SunOS, HP/UX, NT
Version: 1.8
Platforms: Sun Solaris/SunOS, HP-UX, Digital Unix
TEWS is intended for software production and quality departments.
Quality departments - (software) project quality monitoring.
Design managers – ensuring the total quality of software production and projects.
Project managers - monitoring and analysis of testing.
Project groups and testers – enhancing test planning, more specific test specifications with
concrete test cases, software product and component quality assurance.
Platforms: Windows 95/98/NT
3.5 Test Implementation Tools
3.5.1 Java
One Test Log for the Life of the Project - You use a test tool for the results. In AQtest all
test results, and even test errors, go to one log. The log begins when you define a test project
and ends when you close it, perhaps many months later. Tens of different tests, hundreds of
iterations, thousands of results all are posted to the log as messages. There are five types of
messages, five priorities for each. Messages can be filtered by date, time, test, type, priority,
then automatically formatted into clear, to-the-point test reports that tell you exactly what
you want to know about how your application is doing. The reports are in XML for reading
on any machine equipped with IE 5.0 or later.
Capture and Storage of Object Properties, Screen Clips and Output Files for Comparison -
The basis of testing is evaluation. The way to automate this is to get results and compare
them to verified results already achieved, or created for the purpose. AQtest can
automatically capture results as collections of selected properties from an object and its
children (such as a dialog and its controls), as pictures from the screen or from some other
source, or as files of any type, from any source. The supplied library then makes most
comparisons between "new" and "standard" the business of one or two lines of code.
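The capture-and-compare workflow described above can be sketched generically. The objects and property names below are invented stand-ins, not AQtest's actual object model:

```python
# Sketch of capture-and-compare verification (property names invented here):
# capture selected properties of an object, store them as the "standard", and
# later compare a fresh capture against that baseline.

def capture(obj, keys):
    return {k: obj[k] for k in keys}

standard = capture({"caption": "Save", "enabled": True, "x": 10},
                   ["caption", "enabled"])

def compare(actual, baseline):
    """Return the set of property names whose values differ."""
    return {k for k in baseline if actual.get(k) != baseline[k]}

later = capture({"caption": "Save", "enabled": False, "x": 99},
                ["caption", "enabled"])
print(compare(later, standard))  # {'enabled'}
```

Note that only the selected properties are compared; incidental values such as window position are excluded at capture time.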
Scripting by Recording or by Coding - Build your test scripts quickly, and build them with
the power to thoroughly test your business process. Use any mixture of recorded actions,
hand-written code, calls to AQtest's powerful test library, and script tuning in the full-
featured editor and debugger -- whatever fits the need, whatever is easiest for the user.
Everyone in your organization can start using AQtest immediately for his or her job.
Intelligent Script Recording - AQtest's recorder goes well beyond what the world's best
macro recorder could do. It records selections, string input, checking/unchecking, etc., on
Windows controls, identified by parent, class, caption, etc. The recorded script is compact, it
"speaks" the way the developer sees things, and it remains valid after interface tweaks. But
that is only the beginning. The intelligence goes much deeper than Windows screen objects.
AQtest recognizes typical UI library objects for your development tool. In fact, it can
recognize "from outside" even the most application-specific objects if the application is
compiled for this. On the other hand, the recorder can also log single keyboard events and
single mouse events at absolute screen positions, with total detail, if that's what you need.
Testing that Reaches Beyond the Black Box - Can't access functionality you know is in your
application? Ever wanted to execute application methods from scripts? AQtest lets you go
into the application under test like no other tool can. Open Applications are applications
linked with one AQtest file (supplied in source code, of course). This gives scripts access to
most of their public elements -- objects, properties, methods. And, if the Open Application
is compiled with external debugger information AQtest's Debug Info Agent™ can use it to
watch and manage even the application's private elements, just like the IDE (Integrated
Development Environment) debugger can.
Debug Info Agent™ is an exclusive AQtest technology, and an industry first. It gives
"external" testing more access to application internals than even the application source code
has, from other modules. "External" testing can watch or run anything in the application,
just as easily as it watches and runs the application's user interface -- including calling
methods and changing property values. Easy answers for the tough questions.
Object Browser - AQtest's Object Browser is divided into two panes like the Windows
Explorer. In the left pane it displays a tree of all objects (processes, windows, etc.) that exist
in a system, with full expand-collapse capacities. In the right pane, it displays all the
available properties and methods of the object selected in the left pane. This right-pane view
can itself expand its details in order to dig into the object's properties. You can also select
objects directly onscreen with the mouse, and get the same detail analysis. The Browser
serves two purposes. First, it lets you orient yourself among all the objects that your test
scripts will have access to. Second, it lets you capture and store any collection of properties,
or the image of any onscreen object, for later comparison with test output. Before storage,
the specific properties to keep can be selected, or the image to save can be trimmed as
needed.
Automated Self-testing - The entire process of "digging into" an application from the
outside can be reversed. Again by linking in just one file, any application can gain complete
control of the AQtest engine (an OLE server). Anything a script can do, application code
can do, in the application's own programming language. In other words, the application can
test itself, embed recorded scripts, use AQtest's scripting tools to feed itself input and to
capture its own output, use the AQtest library and the result sets stored by AQtest to analyze
its own behavior, and post its conclusions to the project log just like any other test would. It
can also call and execute external AQtest scripts if that is useful. Self-testing applications
are an obvious way to simplify Unit testing. Moreover, self-testing code is written in
application source language, has the same access to application internals as the rest of the
source code, and can be debugged using the same IDE debugger.
Entirely COM-based, Entirely Open Architecture - AQtest is a major automated testing tool
that is entirely COM-based. It is the first such tool, because this is not easy to do. But the advantages
are many. For instance, because the AQtest engine is an OLE server, it can be run from any
COM-based language interpreter, or from application code. It can also have an application
log its own details to it, so that it becomes "open" to external tests. Because the engine
"sees" everything through one interface, COM, it is source-language independent, and can
even read debugger information and use it at runtime through the Debug Info Agent(tm).
Another advantage is that the various internal interfaces of AQtest are defined in IDL
libraries that are supplied with the product, and which you can use to extend it. In fact, the
coming Java and .NET support are such plug-in extensions.
Fully customizable interface - The onscreen interface of AQtest is clear, attractive and
intuitive, by design. And it is flexible. Everything (everything) can be customized in a few
seconds to adapt it to your particular needs of the moment.
Platforms: Windows 95, 98, NT, or 2000.
component interfaces, and internal variables. Until now, assertions have been missing from
the Java development environment.
AssertMate provides fast and accurate assertion capability for programmers and class level
testers alike. AssertMate enables developers to make use of pre-conditions, post-conditions,
and data assertions to validate behavioral correctness of Java programs, while providing a
simple system for placing assertions without modifying source code.
Free download.
Platforms: Win 95/98/NT
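AssertMate itself places assertions in Java programs without modifying their source; the Python sketch below shows the same pre-/post-condition idea with a decorator, so the checked function body stays untouched. The names and conditions are illustrative only:

```python
# Pre-/post-condition checking in the AssertMate spirit (illustrative only):
# a decorator wraps the function, so its body needs no assertion statements.

def contract(pre=None, post=None):
    def wrap(fn):
        def checked(*args):
            if pre:
                assert pre(*args), "precondition failed"
            result = fn(*args)
            if post:
                assert post(result), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def square_root(x):
    return x ** 0.5

print(square_root(9.0))  # 3.0
```

Calling `square_root(-1.0)` trips the precondition immediately, turning a silent domain error into an explicit failure at the call site.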
3.5.2 C/C++
Platforms: Windows 2000/NT 4; Solaris 2.5.1 / SunOS 5.5.1; Sparc & UltraSparc. AIX 4.2
11.ObjectTester, ObjectSoftware Inc., www.obsoft.com,
http://www.testingfaqs.org/t-impl.htm#objecttest
C++ Unit Testing Tool. ObjectTester is a software tool to automate the generation of C++
unit test scripts. Software written in the C++ language is made up of data structures. In C++,
a "class" represents a data structure and is made up of data and methods. ObjectTester
analyzes C++ classes and generates unit test cases for them. The test cases generated are in
the C++ language and are free from syntax or semantic errors. A unit test script is generated
for each class. This script can be modified by developers to include any "pre" or "post"
conditions for each method.
Platforms: SPARC - SunOs 4.1.X and Solaris 2.X.
3.5.3 Others
Each test is defined by its input data, instructions on how to run the test, and the expected
output data. The tool requires that the programs to be tested are capable of being driven
solely from data held in files.
The test process produces output data; a comparison process then compares the actual and
expected output data and produces a test report.
Autotest can automatically re-run all failed tests at the end of a set of tests, to overcome
problems of unexpected events such as network failures.
DateWise's unique technique calculates the difference between two files, telling the user if
the files matched and what the difference was between the files or where any unresolvable
differences are located, just by providing the names of the two files. Furthermore, this tool
does not require explicit delimiters (such as spaces or punctuation) to appear around the
words or tokens contained in the text or binary files (unlike competitive word-by-word
comparison utilities). The powerful technique is not a "silver bullet" because it uses a
publicly known technology for producing its results (Patent No. 6,236,993 covers the
technology the tool is based on. Other patents are pending.).
Platforms: MS-DOS, Windows, HP-UX, Solaris, and OS/390, but the tool was written in
ANSI-C for portability to other platforms and we are willing to port it elsewhere.
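A generic file-difference check along these lines can be sketched with the standard library. This uses Python's difflib, not DateWise's patented matching technique, so it only illustrates the "matched or where they differ" style of report:

```python
# Generic difference sketch (standard-library difflib, NOT DateWise's
# patented technique): report whether two texts match and, if not,
# what the first divergence is.
import difflib

def first_difference(a, b):
    if a == b:
        return None                     # files match
    sm = difflib.SequenceMatcher(None, a, b)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            return (tag, a[i1:i2], b[j1:j2])
    return None

print(first_difference("total 31-Dec-1999", "total 01-Jan-2000"))
```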
Memory error detection, including leaks, overwrites, invalid references. HeapAgent is an
interactive memory error debugging tool. Detects a broad range of heap and stack errors and
reports more error information than any other tool. Includes tools for browsing live heap
information to diagnose the cause of errors. No relinking/recompiling, works with
debuggers, and causes minimal runtime slowdowns.
Free trial.
Platforms: Windows 3.x, Windows NT, Windows 95
Freeware.
Platforms: Windows (running Microsoft Access 97 or Access 2000)
3.6 Test Evaluation Tools
3.6.1 Java
Test coverage tool. JavaScope is a software testing tool that quantitatively measures how
thoroughly Java applications and applets have been tested. JavaScope is created specifically
for, and focused exclusively on, testing Java. JavaScope integrates seamlessly with both
JavaStar and JavaSpec.
No support since 1999.
Version: JDK 1.1.1.
Platforms: Java and JDK 1.1 platforms.
6. VisualTestCoverage, (IBM),
http://www.alphaworks.ibm.com/tech/visualtest
Test coverage tool. VisualTestCoverage is a technology for determining testing adequacy
for programs written in VisualAge for Smalltalk, Generator, and Java. When the
programmer is testing a VisualAge application, the tool counts how many connections,
events, methods of the program were executed, and displays this information in a view that
resembles the VA composition editor. It provides feedback to the developer about whether
he has done enough testing on each class. It covers all elements within those classes, and not
just the methods.
Platforms: Windows 95
3.6.2 C/C++
Test Coverage Analyzer for C/C++. CTC++ is a powerful instrumentation-based tool
supporting testing and tuning of programs written in C and C++ programming languages.
Everyone working with C or C++ is a potential user of CTC++.
CTC++ is available in two sales packages:
"CTC++": using the tool on a host environment. The tool utilities and the instrumented code
under test are run on the selected host.
"CTC++ Host-Target & Kernelcoverage": in addition to using CTC++ normally on the host
environment, cross-compiling the instrumented code for a target, running tests on the target,
getting the test results back to the host, and viewing the coverage reports on the host. This
package also facilitates measuring coverage from kernel-mode code on the host.
CTC++ facilitates:
Measuring test coverage => ensure thorough testing, you know when to stop testing, etc.
Function coverage (functions called).
Decision coverage (conditional expressions true and false in program branches, case
branches in switch statements, catch exception handlers in C++, control transfers).
Statement coverage (statements executed; concluded from function and decision
coverages).
Condition coverage (elementary conditions true and false in conditional expressions).
Multicondition coverage (all possible ways exercised to evaluate a conditional
expression).
Searching execution bottlenecks => no more guessing in algorithm tunings.
Function execution timing (if needed, the user may introduce his own time-taking
function for measuring whatever is interesting).
Execution counters.
Conveniently presented test results.
Hierarchical, HTML-browsable coverage reports.
Pure textual reports.
Ease of use.
Instrumentation phase as a "front end" to the compilation command => very simple to
use.
Can be used with existing makefiles.
Usable "in the large".
Instrumentation overhead very reasonable.
You can select what source files to instrument and with what instrumentation options.
Besides full executables, libraries and DLLs can also be measured.
Integrated to Microsoft Visual C++ Developer Studio.
See description of the CTC++/VC++ integration.
Good management and visibility of testing.
Easy to read listings (textual and HTML).
In terms of the original source code.
Untested code highlighted.
Various summary level reports.
TER-% (test effectiveness ratio) calculated per function, source file, and overall.
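The TER-% roll-up described above can be made concrete with a small sketch. The function names and counts below are invented for illustration; CTC++ itself computes these figures from instrumented runs:

```python
# Sketch of a TER-style (test effectiveness ratio) summary like CTC++'s
# reports: per-function executed/total counts rolled up to an overall
# percentage. The profile data here is invented for illustration.

profile = {               # function -> (branches executed, branches total)
    "parse":  (8, 10),
    "render": (5, 5),
    "save":   (0, 4),
}

def ter(executed, total):
    return round(100.0 * executed / total, 1) if total else 100.0

per_function = {f: ter(e, t) for f, (e, t) in profile.items()}
overall = ter(sum(e for e, _ in profile.values()),
              sum(t for _, t in profile.values()))

print(per_function)  # {'parse': 80.0, 'render': 100.0, 'save': 0.0}
print(overall)       # 68.4
```

Note that the overall figure weights functions by their branch counts rather than averaging the per-function percentages.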
C test coverage. The Generic Coverage Tool (GCT) is a freeware coverage tool that
measures how thoroughly tests exercise C programs. It measures branch, multicondition,
loop, relational operator, routine, call, race, and weak mutation coverage for C programs.
See http://www.testing.com
Freeware.
Platforms: Most UNIX
while" constructs. ObjectCoverage analyzes these statements and generates a test
case/branch coverage report.
Platforms: SPARC - SunOs 4.1.X and Solaris 2.X
16.QC/Coverage, CenterLine Software,
www.uniforum.org/news/html/dir.open/24665.html
Code coverage and analysis tool. QC/Coverage is a code coverage and analysis tool for
quickly identifying how much of an application's code was exercised during testing. By
identifying code components that have not been adequately tested, QC/Coverage helps users
focus their efforts, avoid wasting time, and make better use of testing resources.
QC/Coverage also provides industry-standard testing metrics to assist in test planning, with
specialized support for C++. QC/Sim, a companion product, simulates situations such as a
network crash or full disk that are difficult to duplicate in reality.
Platforms: DEC Ultrix, HP-UX, IBM AIX, Novell UnixWare, OS/2, Sun Solaris 2.x, SunOS
Test Data Observation Tool. T-SCOPE, a test data observation tool, works directly with
TCAT and S-TCAT, part of the fully integrated TestWorks/Coverage testing suite. T-
SCOPE dynamically depicts logical branches and function-calls as they are exercised during
the testing process. Slider bars can also be selected that will show the percentage of
coverage achieved for individual modules or the entire program, as each test is run. Color
annotation indicates how logical branch and function-call execution correspond to the
minimum and maximum threshold values.
Platforms: SPARC SunOS; SPARC Solaris; HP-9000; SGI Irix 5.3, 6.2; DEC-Alpha; NCR
3000; SCO/UnixWare 2.1.1; DOS; Win 3.x/NT/95
3.6.3 Others
the tool for each measurement. It can produce reliable traces and measurements when
programs are executed out of cache or are dynamically relocated.
The CodeTEST tool suite includes:
Performance Analysis
o Measures function and task execution times
o Counts call-pair linkages to identify thrashing
o Non-sampled measurements of 32,000 functions at one time
Coverage Analysis
o Displays coverage at program, function, or source levels
o Plots coverage over time
o Interactive measurements simplify test creation and refinement
Memory Allocation Analysis
o Dynamic display shows memory leaks in progress before system crashes
o Pinpoints memory allocation and free errors to offending source line
o Measures true worst case allocation
Trace Analysis
o Traces embedded programs at source, control-flow, or high (task) level
o Deep trace captures over 100 thousand source lines of execution
o Powerful triggering and trace display options zero in on problems
Platforms: MS Windows 95, Sun Sparc, HP 9000
Static Analysis and Code Coverage Toolset. Analyses source code statically to enforce
coding standards & calculate metrics on quality, complexity, & maintainability. Static
Analysis also features coding error detection, data flow analysis, variable cross reference, &
code visualisation. Dynamic Analysis measures code coverage of statements, branches, test
paths, subconditions, & procedure calls.
Analyses: Ada, Algol, C/C++, Cobol, Coral 66, Fortran, Pascal, PL/M, PL/1, Assemblers
(Intel + Motorola).
Platforms: Solaris, SunOS, Windows 95/NT/3.1x, Vax/VMS, Open VMS, HP-UX, Digital
Unix, WinOS/2, AIX, IRIX, SCO, MVS.
28.SofInst, SES Software-Engineering Service GmbH,
http://www.testingfaqs.org/t-eval.htm#sofinst
Test Coverage Tool. SofInst is a test coverage tool for measuring mainframe COBOL and
PL/I programs with CICS, DLI, SQL or DDL/DML macros. Using an instrumentation
technique, the programs can be instrumented at 5 different levels.
The coverage report shows which probes were executed how many times
together with a coverage ratio.
Platforms: IBM/MVS Mainframe
3.7 Static Analysis Tools
3.7.1 Java
3. JavaPureCheck, Sun Microsystems, www.methods-
tools.com/tools/testing.html
JavaPureCheck is a static testing tool that reads class files and reports anything that
might cause applications or applets to be unportable. JavaPureCheck is the official purity
checker testing tool of JavaSoft's 100% Pure Java Initiative.
Version: 4.0.
Freeware.
Platforms: Java and JDK 1.1 platforms.
7. Rational Purify, Rational Software Corp, www.rational.com
Error detection. Rational Purify® for UNIX has long been the standard in error detection for
Sun, HP, and SGI UNIX platforms. With patented Object Code Insertion technology (OCI),
Purify provides the most complete error and memory leak detection available. It checks all
application code, including source code, third party libraries, and shared and system
libraries. With Purify, you can eliminate problems in all parts of your applications, helping
you deliver fast, reliable code.
Free download.
Platforms: Sun, HP, SGI UNIX
3.7.2 C/C++
Platforms: Windows 2000/NT/9x, HPUX, Solaris, Linux
11.CodeWizard, (ParaSoft), http://www2.parasoft.com/jsp/home.jsp
C/C++ source code analysis tool. CodeWizard, an advanced C/ C++ source code analysis
tool, uses coding guidelines to automatically identify dangerous coding constructs that
compilers do not detect. Containing over 240 rules that express industry-respected coding
guidelines, CodeWizard allows for the suppression of individual guidelines and includes
RuleWizard, a sophisticated tool for customizing these guidelines and creating new ones.
CodeWizard is particularly effective on a daily basis at both individual and group levels to
simplify code reviews and make code more readable and maintainable.
Platforms: Win NT/2000; Win 95/98/ME; IBM AIX 4.3.x; Compaq Tru64; Unix 5.x, HP-
UX 11.x; Linux; HP-UX 10; DEC Alpha 4.x; Sequent 4.x
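Guideline checking of this kind can be illustrated with a toy scanner. The two rules below are invented examples of "constructs compilers do not detect", not CodeWizard's actual rule set:

```python
# Toy illustration of guideline checking in the CodeWizard style (the rules
# below are invented, not CodeWizard's actual rule set): scan C source lines
# and flag constructs a compiler would accept but a reviewer might not.
import re

RULES = [
    (re.compile(r"\bgets\s*\("), "unsafe gets(): no bounds checking"),
    (re.compile(r"==\s*NULL\s*\)?\s*;"), "suspicious '== NULL;' statement"),
]

def check(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

code = 'char buf[8];\ngets(buf);\n'
print(check(code))  # [(2, 'unsafe gets(): no bounds checking')]
```

Real tools such as CodeWizard work on a parsed representation of the code rather than raw lines, which is what allows rules to be suppressed or customized per construct.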
Platforms: Windows NT/95/98/2000, Linux, DEC Alpha, IBM RS/6000 (AIX 4.x), HP (HP-
UX 10 & 11), SGI (IRIX 6.x), Solaris
14.PC-lint/FlexeLint, Gimpel Software, http://www.gimpel.com/,
http://www.testingfaqs.org/t-static.htm#pc-lint
Static Analysis. PC-lint and FlexeLint will check your C/C++ source code and find bugs,
glitches, inconsistencies, non-portable constructs, redundant code, and much more. It looks
across multiple modules and so enjoys a perspective your compiler does not have.
Platforms: Windows, MS-DOS, OS/2, Unix, Sun, HP, VMS, MVS, VM, OS-9, Mac, etc
Interval and a variety of graphical output reports such as Call Trees, Control Structure and
Demographic Analysis.
Platforms: PC (Win NT & 95), Sun (SunOS and Solaris), HP (HPUX), Dec Alpha (OSF1),
IBM (AIX), SGI (IRIX), SNI (SINIX)
Platforms: Sun Solaris/SunOS, AIX, Silicon Graphics IRIX, HP-UX, Digital Unix, Alpha
NT
21.TestBed, Eastern Systems Inc., www.easternsystems.com,
http://www.testingfaqs.org/t-static.htm#TestBed
Static & Dynamic Analysis tool. TestBed is a complete, powerful, integrated tool set that
enhances developers' understanding of C, C++, Ada, COBOL and Pascal source code on
platforms such as NT and Solaris, and highlights areas of concern for attention. The result
is greater software reliability, reduced maintenance costs, and easier re-engineering.
TestBed's Static Analysis capabilities include programming standards verification,
structured programming verification, complexity metric measurement, variable cross-
reference, unreached code reporting, static control flow tracing, and procedure call analysis.
TestBed Dynamic Analysis executes an instrumented version of the source code to detect
code coverage defects, statement execution frequency analysis, LCSAJ (Linear Code
Sequence and Jump) subpath coverage, and multiple subcondition coverage.
The combination of techniques provides a clear picture of the quality and structure of the
tested code. In addition, TestBed options such as Tbsafe help users achieve quality
certifications including RTCA/DO-178B and BS EN ISO 9000/TickIT. TestBed itself was
produced under an ISO 9000 quality management system of which TestBed is an important part.
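The flavor of the statement-execution-frequency analysis mentioned above can be sketched in a few lines of Python (an illustration of the idea only, not TestBed's instrumentation), using the interpreter's tracing hook to count how often each line of one function runs:

```python
import sys
from collections import Counter

# Minimal dynamic analysis sketch: count executions per source line
# of a single traced function.
def trace_frequencies(func, *args):
    counts = Counter()          # line offset within func -> execution count
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            counts[frame.f_lineno - code.co_firstlineno] += 1
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)      # always uninstall the tracer
    return result, counts

def classify(values):
    positives = 0
    for v in values:
        if v > 0:
            positives += 1
    return positives

result, counts = trace_frequencies(classify, [3, -1, 4, -5, 9])
print(result, dict(counts))
```

Lines inside the loop accumulate one count per iteration, so hot paths stand out immediately; a full tool maps the same information back onto annotated source listings and coverage reports.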
3.7.3 Others
24.Dependency Walker, Steve P. Miller, http://www.dependencywalker.com/,
http://www.testingfaqs.org/t-static.htm#dependencywalker
A tool for troubleshooting system errors related to loading and executing modules.
Dependency Walker is a free utility that scans any 32-bit or 64-bit Windows module (exe,
dll, ocx, sys, etc.) and builds a hierarchical tree diagram of all dependent modules. For each
module found, it lists all the functions that are exported by that module, and which of those
functions are actually being called by other modules. Another view displays the minimum
set of required files, along with detailed information about each file including a full path to
the file, base address, version numbers, machine type, debug information, and more.
It detects many common application problems
such as missing modules, invalid modules, import/export mismatches, circular dependency
errors, mismatched machine types of modules, and module initialization failures.
Freeware.
Platforms: Windows 95, 98, Me, NT 3.51, NT 4.0, and Windows 2000
27.McCabe Visual Quality Toolset, McCabe and Associates, www.methods-tools.com/tools/testing.html
McCabe Visual Quality Toolset calculates McCabe metrics, Halstead metrics, object-
oriented metrics, and other user customizable metrics. It produces structure charts, flow
graphs, and metrics reports to display software structures visually. Version: 5.11.
Platforms: Win3.1, Sun Solaris/SunOS, AIX, Silicon Graphics IRIX, DEC VMS, Win
95/98/NT, Digital Unix, HP-UX.
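A rough sketch of the cyclomatic-complexity metric such a toolset reports (computed here for Python source with the standard library, not McCabe's own implementation): complexity is the number of decision points plus one.

```python
import ast

# Node types counted as decision points. Each BoolOp is counted once,
# a simplification of the full metric (which counts n-1 for n operands).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """McCabe complexity of a piece of source: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 60:
        return "C"
    return "F"
"""
print(cyclomatic_complexity(sample))   # three decisions -> complexity 4
```

The resulting number is what metric reports and flow graphs visualize: the count of linearly independent paths through the code, and hence a lower bound on the test cases needed for branch coverage.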
and a variety of graphical output reports such as Call Trees, Control Structure and
Demographic Analysis.
Platforms: Sun (SunOS and Solaris), HP (HPUX), Dec Alpha (OSF1), IBM (AIX), SGI
(IRIX).
Metrics and code coverage. The WhiteBox tool performs the software analysis needed to
apply a complete WhiteBox methodology solution. It collects valuable static code metrics
and dynamic code coverage information, which are used to assist with test planning.
Static code metrics provide insight into the complexity of the different components of the
software; they can be used to help determine where faults are likely to be hiding and
therefore where to focus testing resources. Dynamic code coverage determines how much
of the code was reached when the tests were performed. Using WhiteBox, you can track
the code coverage levels obtained for a given test case or suite of test cases against the
source code to determine missing or incomplete requirements, which, when updated, can
be used to produce additional tests that cause the previously unreached code to be
exercised.
Free evaluation.
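The coverage feedback loop described above can be sketched with the standard library alone (a toy illustration, not the WhiteBox tool itself): compare the lines a test suite actually reached against all executable lines of a function, and report what was never exercised.

```python
import dis
import sys

def line_coverage(func, test_cases):
    """Run func over test_cases; return (coverage ratio, missed lines)."""
    code = func.__code__
    # Executable lines from the bytecode line table; the 'def' line
    # itself (co_firstlineno) is excluded.
    executable = {line for _, line in dis.findlinestarts(code)
                  if line is not None and line > code.co_firstlineno}
    reached = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            reached.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_cases:
            func(*args)
    finally:
        sys.settrace(None)

    missed = executable - reached
    return len(reached & executable) / len(executable), missed

def absolute(x):
    if x < 0:
        return -x
    return x

coverage, missed = line_coverage(absolute, [(5,), (7,)])
print(f"coverage: {coverage:.0%}, unreached lines: {sorted(missed)}")
```

With only positive inputs the `return -x` branch is reported as unreached; adding a negative test case drives the coverage to 100%. That is exactly the loop the description above relies on: unreached code points either at a missing test or at a missing requirement.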
RUP and its main benefits and goals
The Rational Unified Process (RUP) is a tool for software engineering. It offers
perspectives on planning the work and on the responsibilities of design. Its goal is to
ensure a high-quality product that fits within the predicted schedule and budget. The
software must meet the users' requirements and be completed within the agreed time
and at the agreed cost.
Horizontal time dimension:
The project is divided into four phases: inception, elaboration, construction, and
transition. The phases give the project a beginning and milestones, and they conclude
the project (which may continue as another project).
Vertical dimension:
Describes the progress of the project as a process from phase to phase, such that the
phases overlap and the proportions of the different activities vary across the phases of
the project.
RUP's benefits include the communication it enables within the team, where all
members have the same tools. Team members understand each other when everyone
uses the same language, and this creates the opportunity to produce high-quality
software. With RUP, you get exactly the information that is needed at any given
moment. RUP also helps with the use of UML and gives guidance on aspects of
programming. RUP shows how best practices can be added to software engineering,
and how tools can be used to automate the design work.
Best Practices and their purpose
Best Practices comprises the items shown in Figure 2. The instruments of Best
Practices are requirements management, visual modeling, automated testing, and
change management.
Visual modeling helps to use semantics correctly and to design the notation of a
software design graphically and textually. A notation such as UML allows the level of
abstraction to be raised while maintaining precise syntax and semantics. This improves
communication within the design team, the presentation of the model, and the
functionality of the system. A model is a simplified view of the system, such as a use
case or a class diagram.
The result of the second phase is the lifecycle architecture.
After the third phase it can be determined whether the product is ready to move to the
test environment.
Diagrams that follow the notation serve as a common means of communication between
the different actors in the software development process. UML provides semantics rich
enough to represent all important strategic and tactical decisions.
UML enables a precise and clear model for presenting the requirements, so that the
various stakeholders can examine the requirements reliably. In addition, UML makes it
possible to find and compare different alternatives at low cost. UML also creates a basis
for the implementation.
Rational ClearQuest:
A defect-tracking and change-request management tool. Rational ClearQuest is a
flexible tool for which different platforms, team complexity, or geographical
distribution pose no problems.
Rational RequisitePro:
Optimizes requirements management. An easy-to-use requirements management tool
that, within schedule and cost constraints, helps to create an application the customer
is satisfied with.
Rational Rose:
Provides visual modeling tools for software development. Based on UML.
Rational SoDA:
A tool for generating reports. It supports day-to-day reporting and documentation. The
purpose of SoDA is to collect information from different sources and to produce a
report according to a selected template.
Rational TestManager:
A tool for managing software testing. With it, every participant in the development
process can examine testing and its results from their own perspective, mainly by
verifying that the testing has been performed comprehensively.
Rational Administrator:
Its aim is to centralize project management and to manage the relationships between
the repositories of Rational products, as well as Rational Test users and user groups.
The Administrator is used for creating and connecting projects, creating a test
database, establishing a connection between the test products and RequisitePro, and
creating a Rational ClearQuest database.