The document outlines the principles and goals of software testing, categorizing them into immediate, long-term, and post-implementation goals, such as bug discovery, quality assurance, and customer satisfaction. It emphasizes the importance of risk management and the need for effective testing processes throughout the software development lifecycle. Additionally, it discusses key definitions and terminology related to software testing, including the distinctions between errors, bugs, and failures.
10 Software Testing: Principles and Practices
After having discussed the myths, we will now identify the requirements for
software testing. Owing to the importance of software testing, let us first iden-
tify the concerns related to it. The next section discusses the goals of software
testing.
1.4 Goals of Software Testing
To understand the new concepts of software testing and to define it thoroughly,
let us first discuss the goals that we want to achieve from testing. The goals
of software testing may be classified into three major categories, as shown in
Fig. 1.2.
Immediate Goals
= Bug discovery
= Bug prevention
Long-term Goals
= Reliability
= Quality
= Customer satisfaction
= Risk management
Post-implementation Goals
= Reduced maintenance cost
= Improved testing process
Figure 1.2 Software testing goals
Short-term or immediate goals These goals are the immediate results after
performing testing. These goals may be set in the individual phases of SDLC.
Some of them are discussed below.
Bug discovery The immediate goal of testing is to find errors at any stage of
software development. The more bugs discovered at an early stage, the better
the success rate of software testing.
Bug prevention It is the consequent action of bug discovery. From the behaviour
and interpretation of bugs discovered, everyone in the software development
team learns how to code safely, so that the bugs discovered are not repeated
in later stages or future projects. Though errors cannot be reduced to zero,
they can be minimized. In this sense, bug prevention is a superior goal of testing.
Long-term goals These goals affect the product quality in the long run, when
one cycle of the SDLC is over. Some of them are discussed here.
Quality Since software is also a product, its quality is primary from the users'
point of view. Thorough testing ensures superior quality. Therefore, the first
goal of understanding and performing the testing process is to enhance the
quality of the software product. Though quality depends on various factors,
such as correctness, integrity, efficiency, etc., reliability is the major factor to
achieve quality. The software should be passed through a rigorous reliability
analysis to attain high quality standards. Reliability is a matter of confidence
that the software will not fail, and this level of confidence increases with
rigorous testing. The confidence in reliability, in turn, increases the quality, as
shown in Fig. 1.3.
Figure 1.3 Testing produces reliability and quality
Customer satisfaction From the users' perspective, the prime concern of test-
ing is customer satisfaction. If we want the customer to be satisfied with
the software product, then testing should be complete and thorough. Testing
should be complete in the sense that it must satisfy the user for all the specified
requirements mentioned in the user manual, as well as for the unspecified
requirements which are otherwise understood. A complete testing process
achieves reliability, reliability enhances the quality, and quality, in turn,
increases the customer satisfaction, as shown in Fig. 1.4.
Figure 1.4 Quality leads to customer satisfaction
Risk management Risk is the probability that undesirable events will occur in a
system. These undesirable events will prevent the organization from successfully
implementing its business initiatives. Thus, risk is basically concerned with the
business perspective of an organization.
Risks must be controlled to manage them with ease. Software testing may
act as a control, which can help in eliminating or minimizing risks (see Fig. 1.5).
Thus, managers depend on software testing to assist them in controlling their
business goals. The purpose of software testing as a control is to provide infor-
mation to management so that they can better react to risk situations [4]. For ex-
ample, testing may indicate that the software being developed cannot be deliv-
ered on time, or there is a probability that high priority bugs will not be resolved
by the specified time. With this advance information, decisions can be made to
minimize risk situations.
Hence, it is the testers’ responsibility to evaluate business risks (such as
cost, time, resources, and critical features of the system being developed) and
make the same a basis for testing choices. Testers should also categorize the
levels of risks after their assessment (like high-risk, moderate-risk, low-risk)
and this analysis becomes the basis for testing activities. Thus, risk manage-
ment becomes the long-term goal for software testing.
Figure 1.5 Testing controlled by risk factors (cost, time, resources, critical features)
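This risk-based selection of testing effort can be sketched as follows (the feature names and assessed risk levels below are hypothetical, purely for illustration):

```python
# A sketch of risk-based test prioritization: testers categorize each
# feature by assessed business risk and order the testing effort so that
# high-risk features are exercised first.

RISK_ORDER = {"high": 0, "moderate": 1, "low": 2}

# Hypothetical features with their assessed risk levels.
features = [
    ("report export", "low"),
    ("payment processing", "high"),
    ("user settings", "moderate"),
]

# Sort so that high-risk features come first in the test plan.
plan = sorted(features, key=lambda f: RISK_ORDER[f[1]])
print([name for name, _ in plan])
```

Here the highest-risk feature, payment processing, heads the plan, matching the principle that risk assessment becomes the basis for testing choices.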
Post-implementation goals These goals are important after the product is
released. Some of them are discussed here.
Reduced maintenance cost The maintenance cost of any software product is
not its physical cost, as the software does not wear out. The only maintenance
cost in a software product is its failure due to errors. Post-release errors are
costlier to fix, as they are difficult to detect. Thus, if testing has been done
rigorously and effectively, then the chances of failure are minimized and in
turn, the maintenance cost is reduced.
Improved software testing process A testing process for one project may not
be successful and there may be scope for improvement. Therefore, the bug
history and post-implementation results can be analysed to find out snags in
the present testing process, which can be rectified in future projects. Thus,
the long-term post-implementation goal is to improve the testing process for
future projects.
the destructive approach of software testing, the definitions of successful and
unsuccessful testing should also be modified.
1.6 Software Testing Definitions
Many practitioners and researchers have defined software testing in their own
way. Some are given below.
Testing is the process of executing a program with the intent of finding errors.
Myers [2]
A successful test is one that uncovers an as-yet-undiscovered error.
Myers [2]
Testing can show the presence of bugs but never their absence.
E. W. Dijkstra [125]
Program testing is a rapidly maturing area within software engineering that is
receiving increasing notice both by computer science theoreticians and practitioners. Its
general aim is to affirm the quality of software systems by systematically exercising the
software in carefully controlled circumstances.
E. Miller [84]
Testing is a support function that helps developers look good by finding their mistakes
before anyone else does.
James Bach [83]
Software testing is an empirical investigation conducted to provide stakeholders with
information about the quality of the product or service under test, with respect to the
context in which it is intended to operate.
Cem Kaner [85]
The underlying motivation of program testing is to affirm software quality with methods
that can be economically and effectively applied to both large-scale and small-scale
systems.
Miller [126]
Testing is a concurrent lifecycle process of engineering, using and maintaining testware
(i.e., testing artifacts) in order to measure and improve the quality of the software being
tested.
Craig [117]
Since quality is the prime goal of testing and it is necessary to meet the
defined quality standards, software testing should be defined keeping in view
the quality assurance terms. Here, it should not be misunderstood that the
testing team is responsible for quality assurance. But the testing team must
2.1 Software Testing Terminology
2.1.1 Definitions
As mentioned earlier, terms like error, bug, failure, defect, etc. are not synon-
ymous and hence these should not be used interchangeably. All these terms
are defined below.
Failure When the software is tested, failure is the first term being used. It
means the inability of a system or component to perform a required function
according to its specification. In other words, when the results or behaviour of the
system under test differ from the specified expectations, a failure
exists.
Fault/Defect/Bug Failure is the term used to describe the problems
in a system on the output side, as shown in Fig. 2.1. A fault is the condition that
actually causes a system to produce a failure. Fault is synonymous with the words
defect or bug. Therefore, fault is the reason embedded in any phase of SDLC
and results in failures. It can also be said that failures are manifestations of bugs.
One failure may be due to one or more bugs, and one bug may cause one or
more failures. Thus, when a bug is executed, failures are generated. But
this is not always true. Some bugs are hidden, in the sense that they are not
executed, as they do not get the required conditions in the system. So, hidden
bugs may not always produce failures. They may execute only in certain rare
conditions.
Figure 2.1 Testing terminology (inputs enter the software system; failures appear at the output)
Error Whenever a development team member makes a mistake in any phase
of SDLC, errors are produced. It might be a typographical error, a misreading
of a specification, a misunderstanding of what a subroutine does, and so on.
Error is a very general term used for human mistakes. Thus, an error causes a
bug and the bug in turn causes failures, as shown in Fig. 2.2.
Figure 2.2 Flow of faults
Example 2.1
Consider the following module in a software:
Module A()
{
    while (a > i);
    {
    }
    print("The value of x is", x);
}
Suppose the module shown above is expected to print the value of x, which
is critical for the use of the software. But when this module is executed, the
value of x will not be printed. This is a failure of the program. When we try to
look for the reason of the failure, we find that in Module A(), the while loop is
not being executed. A condition is preventing the body of the while loop from
being executed. This is known as a bug/defect/fault. On close observation, we find a
semicolon misplaced after the while loop, which is not its correct syntax
and is not allowing the loop to execute. This mistake is known as an error.
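The same error-to-fault-to-failure chain can be sketched in Python with a hypothetical routine (the function, its loop bound, and the expected value are invented for illustration, not taken from the book):

```python
# Sketch of the error -> fault -> failure chain, with a hypothetical routine.
# The developer's slip (error) is an off-by-one in the loop bound;
# the wrong bound embedded in the code is the fault (bug);
# the wrong observable output is the failure that testing detects.

def sum_up_to(n):
    """Intended to return 1 + 2 + ... + n (inclusive)."""
    total = 0
    for k in range(1, n):  # fault: should be range(1, n + 1)
        total += k
    return total

expected = 15              # 1 + 2 + 3 + 4 + 5
actual = sum_up_to(5)      # the fault makes this return 10, not 15
if actual != expected:
    print("failure observed: expected", expected, "got", actual)
```

Only the mismatch between actual and expected output is visible to the tester; tracing it back to the loop bound is the developer's debugging work, just as in Example 2.1.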
Test case A test case is a well-documented procedure designed to test the
functionality of a feature in the system. A test case has an identity and is
associated with a program behaviour. The primary purpose of designing a
test case is to find errors in the system. For designing a test case, one needs to
provide a set of inputs and the corresponding expected outputs. A sample
test case template is shown in Fig. 2.3.
Test Case ID
Purpose
Preconditions
Inputs
Expected Outputs
Figure 2.3 Test case template
Test case ID is the identification number given to each test case.
Purpose defines why the test case is being designed.
Preconditions for running the inputs in a system can be defined, if required,
in a test case.
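The template of Fig. 2.3 can be sketched as a small data structure (a Python illustration; the field names follow the figure, while the sample values are hypothetical):

```python
# A sketch of the test case template from Fig. 2.3 as a data structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str      # identification number given to the test case
    purpose: str           # why the test case is being designed
    preconditions: list = field(default_factory=list)     # required setup, if any
    inputs: dict = field(default_factory=dict)            # set of inputs
    expected_outputs: dict = field(default_factory=dict)  # corresponding expected outputs

# A hypothetical instance for a login feature:
tc = TestCase(
    test_case_id="TC-001",
    purpose="Verify login fails for an invalid password",
    preconditions=["User 'alice' exists"],
    inputs={"username": "alice", "password": "wrong"},
    expected_outputs={"logged_in": False},
)
print(tc.test_case_id, "-", tc.purpose)
```

Recording test cases in such a structured form also supports the later principle that everything in testing must be recorded and documented.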
Interface and Integration Bugs
External interface bugs include invalid timing or sequence assumptions related
to external signals, misunderstanding external input and output formats, and
user interface bugs. Internal interface bugs include input and output format
bugs, inadequate protection against corrupted data, wrong subroutine call
sequence, call parameter bugs, and misunderstood entry or exit parameter
values. Integration bugs result from inconsistencies or incompatibilities
between modules discussed in the form of interface bugs. There may be bugs
in data transfer and data sharing between the modules.
System Bugs
There may be bugs while testing the system as a whole based on various
parameters like performance, stress, compatibility, usability, etc. For example,
in a real-time system, stress testing is very important, as the system must work
under maximum load. If the system is put under maximum load at every
factor like maximum number of users, maximum memory limit, etc. and if it
fails, then there are system bugs.
Testing Bugs
One can question the presence of bugs in the testing phase because this phase
is dedicated to finding bugs. But the fact is that bugs are present in testing
phase also. After all, testing is also performed by testers — humans. Some
testing mistakes are: failure to notice/report a problem, failure to use the most
promising test case, failure to make it clear how to reproduce the problem,
failure to check for unresolved problems just before the release, failure to
verify fixes, failure to provide summary report.
2.1.8 Testing Principles
Now it is time to learn the testing principles that are largely based on the dis-
cussion covered in the first chapter and the present one. These principles can
be seen as guidelines for a tester.
Effective testing, not exhaustive testing All possible combinations of tests
become so large that it is impractical to test them all. So, considering the domain
of testing as infinite, exhaustive testing is not possible. Therefore, the tester's
approach should be based on effective testing to adequately cover program
logic and all conditions in the component level design.
Testing is not a single phase performed in SDLC Testing is not just an activity
performed after the coding in SDLC. As discussed, the testing phase after
coding is just a part of the whole testing process. The testing process starts as soon
as the specifications for the system are prepared and it continues till the release
of the product.
Destructive approach for constructive testing Testers must have the psychol-
ogy that bugs are always present in the program and they must think about
the technique of how to uncover them (this is their art of creativity). This
psychology of being always suspicious about bugs is a negative/destructive
approach. However, it has been proved that such a destructive approach helps
in performing constructive and effective testing. Thus, the criterion to have a
successful testing is to discover more and more bugs, and not to show that the
system does not contain any bugs.
Early testing is the best policy When is the right time to start the testing process?
As discussed earlier, and as we will explore later, the testing process is not a phase
after coding; rather, it starts as soon as requirement specifications are prepared.
Moreover, early testing reduces the cost of bugs, which can otherwise grow tenfold,
as bugs are harder to detect and fix in later stages if they go undetected. Thus,
the policy in testing is to start as early as possible.
Probability of existence of an error in a section of a program is proportional
to the number of errors already found in that section Suppose the history of a
software is that you found 50 errors in Module X, 12 in Module Y, and 3 in
Module Z. The software was debugged but after a span of time, we find some
errors again and the software is given to a tester for testing. Where should the
tester concentrate to find the bugs? This principle says that the tester should
start with Module X which has the history of maximum errors. Another way
of stating it is that errors seem to come in clusters. The principle provides us
the insight that if some sections are found error-prone in testing, then our next,
testing effort should give priority to these error-prone sections.
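The clustering principle can be sketched directly from the example above (the module names and error counts are the ones quoted in the text):

```python
# A sketch of the error-clustering principle: prioritize modules for the
# next testing round by the number of errors historically found in them.

error_history = {"Module X": 50, "Module Y": 12, "Module Z": 3}

# Sort modules by historical error count, highest first.
priority = sorted(error_history, key=error_history.get, reverse=True)
print(priority)  # Module X first: it has the richest bug history
```

The tester thus starts with Module X, the section with the maximum history of errors, exactly as the principle prescribes.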
Testing strategy should start at the smallest module level and expand towards
the whole program This principle supports the idea of incremental testing.
Testing must begin at the unit or module level, gradually progressing towards
integrated modules and finally the whole system. Testing cannot be performed
directly on the whole system. It must start with the individual modules and
slowly integrate the modules and test them. After this, the whole system should
be tested.
Testing should also be performed by an independent team When programmers
develop the software, they test it at their individual modules. However, these
programmers are not good testers of their own software. They are basically
constructors of the software, but testing needs a destructive approach.
Programmers always think positively that their code does not contain bugs.
Moreover, they are biased towards the correct functioning of the specified
requirements and not towards detecting bugs. Therefore, it is always
recommended to have the software tested by an independent testing team.
Testers associated with the same project can also help in this direction, but this
is not as effective. For effective testing, the software may also be sent outside the
organization for testing.
Everything must be recorded in software testing As mentioned earlier, testing
is not an intuitive process; rather, it is a planned process. It demands that every
detail be recorded and documented. We must have a record of every test
case run and the bugs reported. Even the inputs provided during testing and
the corresponding outputs are to be recorded. Executing the test cases in a
recorded and documented way can greatly help while observing the bugs.
Moreover, observations can be a lesson for other projects. So the experience
with the test cases in one project can be helpful in other projects.
Invalid inputs and unexpected behaviour have a high probability of finding
an error Whenever the software is tested, we test for valid inputs and for the
functionality that the software is supposed to do. But thinking in a negative
way, we must test the software with invalid inputs and the behaviour which is
not expected in general. This is also a part of effective testing.
Testers must participate in specification and design reviews The testers' role is
not only to receive the software and its documents and test them. If they do not
participate in other reviews, like specification and design reviews, it may happen
that either some specifications are not tested or some test cases are built for
non-existent specifications.
Let us consider a program. Let S be the set of specified behaviours of the
program, P be the implementation of the program, and T be the set of test
cases. Now consider the following cases (see Fig. 2.7):
(i) There may be specified behaviours that are not tested (regions 2 and 5).
(ii) There may be test cases that correspond to unspecified behaviours (regions 3 and 7).
(iii) There may be program behaviours that are not tested (regions 2 and 6).
The goal of testing is to enlarge the area of region 1. Ideally, all three
sets S, P, and T must overlap each other such that all specifications are imple-
mented and all implemented specifications are tested. This is possible only
when the test team members participate in all discussions regarding specifica-
tions and design.
Figure 2.7 Venn diagram for S, P, T
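The relationship among S, P, and T can be sketched with sets of behaviour labels (the behaviours b1 through b5 are hypothetical stand-ins for the regions in Fig. 2.7):

```python
# A sketch of the S, P, T relationship from Fig. 2.7 using labelled behaviours.

S = {"b1", "b2", "b3"}          # specified behaviours
P = {"b1", "b2", "b4"}          # implemented (program) behaviours
T = {"b1", "b3", "b4", "b5"}    # behaviours exercised by test cases

region1 = S & P & T             # specified, implemented, and tested (the ideal)
untested_spec = S - T           # specified behaviours that are not tested
unspecified_tests = T - S       # test cases for unspecified behaviours
untested_impl = P - T           # program behaviours that are not tested

print(sorted(region1))          # ['b1']
print(sorted(untested_spec))    # ['b2']
```

Enlarging region 1 means pushing the three sets toward full overlap, which is exactly what early tester participation in specification and design reviews makes possible.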
2.2 Software Testing Life Cycle (STLC)
Since we have recognized software testing as a process, like SDLC, there
is a need for a well-defined series of steps to ensure successful and effective
software testing. This systematic execution of each step will result in saving
time and effort. Moreover, the chances are that more bugs will be
uncovered.
The testing process divided into a well-defined sequence of steps is termed
as the software testing life cycle (STLC). The major contribution of STLC is to in-
volve the testers at early stages of development. This has a significant benefit
in the project schedule and cost. The STLC also helps the management in
measuring specific milestones.
STLC consists of the following phases (see Fig. 2.8):
Test planning (test strategy, size of test cases, duration, cost, risk, responsibilities) → Test design (test cases and procedures) → Test execution (bug reports and metrics) → Post-execution/test review

Figure 2.8 Software testing life cycle
Test Planning
The goal of test planning is to take into account the important issues of testing
strategy, viz. resources, schedules, responsibilities, risks, and priorities, as a
roadmap. Test planning issues are in tune with the overall project planning.
Broadly, the following are the activities during test planning:
= Defining the test strategy.
= Estimating the number of test cases, their duration, and cost.
= Planning the resources, like the manpower to test, tools required, and documents required.
= Identifying areas of risk.
= Defining the test completion criteria.
= Identifying methodologies, techniques, and tools for various test cases.
= Identifying reporting procedures, bug classification, databases for testing, bug severity levels, and project metrics.
Based on the planning issues as discussed above, analysis is done for vari-
ous testing activities. The major output of test planning is the test plan docu-
ment. Test plans are developed for each level of testing. After analysing the
issues, the following activities are performed:
= Develop a test case format.
= Develop test case plans according to every phase of SDLC.
= Identify test cases to be automated (if applicable).
= Prioritize the test cases according to their importance and criticality.
= Define areas of stress and performance testing.
= Plan the test cycles required for regression testing.
Test Design
One of the major activities in testing is the design of test cases. However, this
activity is not an intuitive process; rather, it is a well-planned process.
The test design is an important phase after test planning. It includes the
following critical activities.
Determining the test objectives and their prioritization This activity decides the
broad categories of things to test. The test objectives reflect the fundamental
elements that need to be tested to satisfy an objective. For this purpose, you
need to gather reference materials like the software requirements specification
and design documentation. Then, on the basis of these reference materials, a team
of experts compiles a list of test objectives. This list should also be prioritized
depending upon the scope and risk.
Preparing a list of items to be tested The objectives thus obtained are now
converted into lists of items that are to be tested under an objective.
Mapping items to test cases After making a list of items to be tested, there
is a need to identify the test cases. A matrix can be created for this purpose,
identifying which test case will be covered by which item. The existing test
cases can also be used for this mapping, thus permitting reuse of test cases.
This matrix will help in:
(a) Identifying the major test scenarios.
(b) Identifying and reducing the redundant test cases.
(c) Identifying the absence of a test case for a particular objective and, as a
result, creating it.
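Such a matrix can be sketched as a simple mapping (the item descriptions and test case IDs below are hypothetical):

```python
# A sketch of the item-to-test-case matrix described above. An empty list
# marks an objective that has no test case yet; a test case appearing
# under several items is a candidate for overlap.
from collections import Counter

matrix = {
    "login rejects bad password": ["TC-001", "TC-004"],
    "login accepts valid password": ["TC-001"],   # TC-001 reused across items
    "account locks after 3 failures": [],         # no test case yet
}

# (c) Items with no test case at all:
uncovered = [item for item, tcs in matrix.items() if not tcs]

# (b) Test cases covering several items are candidates for redundancy review:
usage = Counter(tc for tcs in matrix.values() for tc in tcs)
overlapping = [tc for tc, count in usage.items() if count > 1]

print(uncovered)     # ['account locks after 3 failures']
print(overlapping)   # ['TC-001']
```

Walking the matrix this way surfaces both missing coverage and overlapping test cases before any execution effort is spent.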
Designing the test cases demands a prior analysis of the program at the func-
tional or structural level. Thus, the tester who is designing the test cases must
understand the cause-and-effect connections within the system intricacies. But
look at the rule quoted by Tsuneo Yamaura: There is only one rule in designing
test cases: cover all features, but do not make too many test cases.
Some attributes of a good test case are given below:
(a) A good test case is one that has been designed keeping in view the
criticality and high-risk requirements, in order to place a greater priority
upon, and provide added depth for, testing the most important functions
[12].
(b) A good test case should be designed such that there is a high probability
of finding an error.
(c) Test cases should not overlap or be redundant. Each test case should
address a unique functionality, thereby not wasting time and resources.
(d) Although it is sometimes possible to combine a series of tests into one
test case, a good test case should be designed with a modular approach
so that there is no complexity and it can be reused and recombined to
execute various functional paths. This also avoids masking of errors and
duplication of test-creation efforts [7, 12].
(e) A successful test case is one that has the highest probability of detecting
an as-yet-undiscovered error [2].
Selection of test case design techniques While designing test cases, there are
two broad categories, namely black-box testing and white-box testing. Black-
box test case design techniques generate test cases without knowing the
internal working of a system. These will be discussed later in this chapter. The
techniques to design test cases are selected such that there is more coverage
and the system detects more bugs.
Creating test cases and test data The next step is to create test cases based on
the testing objectives identified. The test cases mention the objective under
which a test case is being designed, the inputs required, and the expected
outputs. While giving input specifications, test data must also be chosen and
specified with care, as poorly chosen test data may lead to incorrect execution of test cases.
Setting up the test environment and supporting tools The test cases created above
need some environment settings and tools, if applicable. So details like
hardware configurations, testers, interfaces, operating systems, and manuals
must be specified during this phase.
Creating test procedure specification This is a description of how the test case
will be run. It is in the form of sequenced steps. This procedure is actually used
by the tester at the time of execution of test cases.
Thus, the hierarchy for the test design phase includes: developing test objec-
tives, identifying test cases and creating their specifications, and then devel-
oping test case procedure specifications, as shown in Fig. 2.9. All the details
specified in the test design phase are documented in the test design specifica-
tion. This document provides the details of the input specifications, output
specifications, environmental needs, and other procedural requirements for
the test case.
Test objectives → test cases and their specifications (with test data) → test case procedure specifications

Figure 2.9 Test case design steps
Test Execution
In this phase, all test cases are executed including verification and validation.
Verification test cases are started at the end of each phase of SDLC. Valida-
tion test cases are started after the completion of a module. It is the deci-
sion of the test team to opt for automation or manual execution. Test results
are documented in the test incident reports, test logs, testing status, and test
summary reports, as shown in Fig. 2.10. These will be discussed in detail in
Chapter 9.
Test execution → test incident report, test log, test summary report

Figure 2.10 Documents in test execution
Responsibilities at various levels for execution of the test cases are outlined
in Table 2.1.
Table 2.1 Testing level vs responsibility
Test Execution Level    Person Responsible
Unit                    Developer of the module
Integration             Testers and developers
System                  Testers, developers, end-users
Acceptance              Testers, end-users
Post-Execution/Test Review
As we know, after successful test execution, bugs will be reported to the con-
cerned developers. This phase is to analyse bug-related issues and get feed-
back so that the maximum number of bugs can be removed. This is the primary
goal of all the test activities done earlier.
As soon as the developer gets the bug report, he performs the following
activities:
Understanding the bug The developer analyses the bug reported and builds
an understanding of its whereabouts.
Reproducing the bug Next, he confirms the bug by reproducing it and
the failure it causes. This is necessary to cross-check failures. However, some
bugs are not reproducible, which adds to the developers' difficulties.
Analysing the nature and cause of the bug After examining the failures of
the bug, the developer starts debugging its symptoms and tracks back to the
actual location of the error in the design. The process of debugging has been
discussed in detail in Chapter 17.
After fixing the bug, the developer reports to the testing team and the modified
portion of the software is tested once again.
After this, the results from manual and automated testing can be collected.
The final bug report and associated metrics are reviewed and analysed for
the overall testing process. The following activities can be done:
= Reliability analysis can be performed to establish whether the software
meets the predefined reliability goals. If so, the product can be
released and the decision on a release date can be taken. If not, the
time and resources required to reach the reliability goals are outlined.
= Coverage analysis can be used as an alternative criterion to stop testing.
= Overall defect analysis can identify risk areas and help focus efforts on
quality improvement.
2.3 Software Testing Methodology
Software testing methodology is the organization of software testing by means
of which the test strategy and test tactics are achieved, as shown in Fig. 2.11.
All the terms related to software testing methodology and a complete testing
strategy are discussed in this section.
Testing strategy (levels such as integration and system) and testing tactics (testing techniques such as black-box, and testing tools)

Figure 2.11 Testing methodology