
International Software Testing Qualifications Board

Foundation Level Certification

Preparation Guide

Table of Contents

Acknowledgements
Introduction to this syllabus
1. Fundamentals of testing (K2)
1.1 Why is testing necessary (K2)
1.1.1 Software systems context (K1)
1.1.2 Causes of software defects (K2)
1.1.3 Role of testing in software development, maintenance and operations (K2)
1.1.4 Testing and quality (K2)
1.1.5 How much testing is enough? (K2)
1.2 What is testing (K2)
1.3 General testing principles (K2)
1.4 Fundamental test process (K1)
1.4.1 Test planning and control (K1)
1.4.2 Test analysis and design (K1)
1.4.3 Test implementation and execution (K1)
1.4.4 Evaluating exit criteria and reporting (K1)
1.4.5 Test closure activities (K1)
1.5 The psychology of testing (K2)
2. Testing throughout the software life cycle (K2)
2.1 Software development models (K2)
2.1.1 V-model (K2)
2.1.2 Iterative development models (K2)
2.1.3 Testing within a life cycle model (K2)
2.2 Test levels (K2)
2.2.1 Component testing (K2)
2.2.2 Integration testing (K2)
2.2.3 System testing (K2)
2.2.4 Acceptance testing (K2)
2.3 Test types: the targets of testing (K2)
2.3.1 Testing of function (functional testing) (K2)
2.3.2 Testing of software product characteristics (non-functional testing) (K2)
2.3.3 Testing of software structure/architecture (structural testing) (K2)
2.3.4 Testing related to changes (confirmation and regression testing) (K2)
2.4 Maintenance testing (K2)
3. Static techniques (K2)
3.1 Reviews and the test process (K2)
3.2 Review process (K2)
3.2.1 Phases of a formal review (K1)
3.2.2 Roles and responsibilities (K1)
3.2.3 Types of review (K2)
3.2.4 Success factors for reviews (K2)
3.3 Static analysis by tools (K2)
4. Test design techniques (K3)
4.1 Identifying test conditions and designing test cases (K3)
4.2 Categories of test design techniques (K2)
4.3 Specification-based or black-box techniques (K3)
4.3.1 Equivalence partitioning (K3)
4.3.2 Boundary value analysis (K3)
4.3.3 Decision table testing (K3)
4.3.4 State transition testing (K3)
4.3.5 Use case testing (K2)
4.4 Structure-based or white-box techniques (K3)
4.4.1 Statement testing and coverage (K3)
4.4.2 Decision testing and coverage (K3)
4.4.3 Other structure-based techniques (K1)
4.5 Experience-based techniques (K2)
4.6 Choosing test techniques (K2)
5. Test management (K3)
5.1 Test organization (K2)
5.1.1 Test organization and independence (K2)
5.1.2 Tasks of the test leader and tester (K1)
5.2 Test planning and estimation (K2)
5.2.1 Test planning (K2)
5.2.2 Test planning activities (K2)
5.2.3 Exit criteria (K2)
5.2.4 Test estimation (K2)
5.2.5 Test approaches (test strategies) (K2)
5.3 Test progress monitoring and control (K2)
5.3.1 Test progress monitoring (K1)
5.3.2 Test reporting (K2)
5.3.3 Test control (K2)
5.4 Configuration management (K2)
5.5 Risk and testing (K2)
5.5.1 Project risks (K1, K2)
5.5.2 Product risks (K2)
5.6 Incident management (K3)
6. Tool support for testing (K2)
6.1 Types of test tool (K2)
6.1.1 Test tool classification (K2)
6.1.2 Tool support for management of testing and tests (K1)
6.1.3 Tool support for static testing (K1)
6.1.4 Tool support for test specification (K1)
6.1.5 Tool support for test execution and logging (K1)
6.1.6 Tool support for performance and monitoring (K1)
6.1.7 Tool support for specific application areas (K1)
6.1.8 Tool support using other tools (K1)
6.2 Effective use of tools: potential benefits and risks (K2)
6.2.1 Potential benefits and risks of tool support for testing (for all tools) (K2)
6.2.2 Special considerations for some types of tool (K1)
6.3 Introducing a tool into an organization (K1)
7. Standards

Acknowledgements
This document is intended to help readers prepare for the International Software Testing Qualifications Board (ISTQB) Foundation Level certification.

It contains the ISTQB proprietary preparation material, included for the benefit of readers, along with other reference material.

Sincere thanks to all who helped in preparing this guide.

How this guide is organized

The ISTQB proprietary material is presented inside a box, and the other reference material follows outside the box. For example:

1.1 Why is testing necessary (K2)


Terms
Bug, defect, error, failure, fault, mistake, quality, risk, software, testing.

A bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended or produces an incorrect result. Bugs arise from mistakes and errors made by people, in either a program's source code or its design.
A defect is a bug that causes a reproducible or catastrophic malfunction. A malfunction is
considered reproducible if it occurs consistently under the same circumstances.

1. Fundamentals of testing (K2)

Learning objectives for fundamentals of testing


The objectives identify what you will be able to do following the completion of each
module.
1.1 Why is testing necessary? (K2)
 Describe, with examples, the way in which a defect in software can cause harm to a
person, to the environment or to a company. (K2)
 Distinguish between the root cause of a defect and its effects. (K2)
 Give reasons why testing is necessary by giving examples. (K2)
 Describe why testing is part of quality assurance and give examples of how testing
contributes to higher quality. (K2)
 Recall the terms mistake, defect, failure and corresponding terms error and bug. (K1)

1.2 What is testing? (K2)


 Recall the common objectives of testing. (K1)
 Describe the purpose of testing in software development, maintenance and operations as
a means to find defects, provide confidence and information, and prevent defects. (K2)
1.3 General testing principles (K2)
 Explain the fundamental principles in testing. (K2)
1.4 Fundamental test process (K1)
 Recall the fundamental test activities from planning to test closure activities and the main
tasks of each test activity. (K1)
1.5 The psychology of testing (K2)
 Recall that the success of testing is influenced by psychological factors (K1):
 Clear objectives;
 A balance of self-testing and independent testing;
 Recognition of courteous communication and feedback on defects.
 Contrast the mindset of a tester and of a developer. (K2)

1.1 Why is testing necessary (K2)
Terms
Bug, defect, error, failure, fault, mistake, quality, risk, software, testing.

A bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended or produces an incorrect result. Bugs arise from mistakes and errors made by people, in either a program's source code or its design.
A defect is a bug that causes a reproducible or catastrophic malfunction. A malfunction is
considered reproducible if it occurs consistently under the same circumstances.
An error may be a piece of incorrectly written program code. A syntax error is an ungrammatical
or nonsensical statement in a program; one that cannot be parsed by the language
implementation. A logic error is a mistake in the algorithm used, which causes erroneous results
or undesired operation.
Failure in general refers to the state or condition of not meeting a desirable or intended objective.
Fault is an abnormal condition or defect at the component, equipment, or sub-system level which
may lead to a failure.
Mistake (an error): A human action that produces an incorrect result.
Quality: Quality refers to the inherent or distinctive characteristics or properties of a person, object, process or other thing. It can be defined as “the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs” or “consistent performance of a uniform product meeting the customer's needs for economy and function.”
Risk: Risk is the potential impact (positive or negative) to an asset or some characteristic of value
that may arise from some present process or from some future event.
Software: A collection of programs developed to achieve a specific objective.
Testing: Testing is the process used to help identify the correctness, completeness, security and
quality of developed computer software.
1.1.1 Software systems context (K1)
Software systems are an increasing part of life, from business applications (e.g. banking)
to consumer products (e.g. cars). Most people have had an experience with software that did not
work as expected. Software that does not work correctly can lead to many problems, including
loss of money, time or business reputation, and could even cause injury or death.
1.1.2 Causes of software defects (K2)
A human being can make an error (mistake), which produces a defect (fault, bug) in the
code, in software or a system, or in a document. If a defect in code is executed, the system will
fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software,
systems or documents may result in failures, but not all defects do so. Defects occur because
human beings are fallible and because there is time pressure, complex code, complexity of
infrastructure, changed technologies, and/or many system interactions. Failures can be caused
by environmental conditions as well: radiation, magnetism, electronic fields, and pollution can
cause faults in firmware or influence the execution of software by changing hardware conditions.

An error is a human action that produces an incorrect result. A fault is a manifestation of an error in software. Faults are also known colloquially as defects or bugs. A fault, if encountered, may cause a failure, which is a deviation of the software from its expected delivery or service.
We can illustrate these points with the true story of the Mercury spacecraft. The computer program aboard the spacecraft contained the following statement, written in the FORTRAN programming language:
DO 100 i = 1.10
The programmer's intention was to execute the succeeding statements up to line 100 ten times, creating a loop in which the integer variable i was used as the loop counter, starting at 1 and ending at 10.
Unfortunately, what this code actually does is assign the decimal value 1.1 to the variable i, and it does that once only. Therefore the remaining code is executed once, and not ten times within a loop. As a result the spacecraft went off course and the mission was aborted, at considerable cost!
The correct syntax for what the programmer intended is
DO 100 i = 1,10
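
To make the link between a defect and a failure concrete, here is a minimal Python sketch; it is not part of the original FORTRAN program, and the function names, values and the translation itself are purely illustrative. It contrasts the intended ten-iteration loop with the single-execution behaviour produced by the defective statement, and shows how a simple test of the expected result exposes the failure.

# Hypothetical Python translation of the FORTRAN example, for illustration only.

def sum_first_ten_intended(values):
    """Intended behaviour: the loop body runs ten times (DO 100 i = 1,10)."""
    total = 0
    for i in range(1, 11):          # i = 1, 2, ..., 10
        total += values[i - 1]
    return total

def sum_first_ten_defective(values):
    """Defective behaviour: mirrors DO 100 i = 1.10, where i is simply
    assigned the value 1.1 and the 'loop' body runs exactly once."""
    i = 1.1                         # assignment instead of a loop counter
    return values[0]                # body executed a single time

def test_sum_of_first_ten_values():
    values = list(range(1, 11))                      # expected sum is 55
    assert sum_first_ten_intended(values) == 55      # passes
    assert sum_first_ten_defective(values) == 55     # fails, exposing the defect

if __name__ == "__main__":
    test_sum_of_first_ten_values()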
1.1.3 Role of testing in software development, maintenance and operations
(K2)
Rigorous testing of systems and documentation can help to reduce the risk of problems
occurring in an operational environment and contribute to the quality of the software system, if
defects found are corrected before the system is released for operational use. Software testing
may also be required to meet contractual or legal requirements, or industry-specific standards.
1.1.4 Testing and quality (K2)
With the help of testing, it is possible to measure the quality of software in terms of
defects found, for both functional and non-functional software requirements and characteristics
(e.g. reliability, usability, efficiency and maintainability). For more information on non-functional
testing see Chapter 2; for more information on software characteristics see ‘Software Engineering
– Software Product Quality’ (ISO 9126). Testing can give confidence in the quality of the software
if it finds few or no defects. A properly designed test that passes reduces the overall level of risk
in a system. When testing does find defects, the quality of the software system increases when
those defects are fixed. Lessons should be learned from previous projects. By understanding the
root causes of defects found in other projects, processes can be improved, which in turn should
prevent those defects reoccurring and, as a consequence, improve the quality of future systems.
Testing should be integrated as one of the quality assurance activities (e.g. alongside
development standards, training and defect analysis).
1.1.5 How much testing is enough? (K2)
Deciding how much testing is enough should take account of the level of risk, including
technical and business product and project risks, and project constraints such as time and
budget. (Risk is discussed further in Chapter 5.) Testing should provide sufficient information to
stakeholders to make informed decisions about the release of the software or system being
tested, for the next development step or handover to customers.
1.2 What is testing (K2)
Terms
Code, debugging, development (of software), requirement, review, test basis, test case,
testing, test objectives.

Code: In computer programming, the word code refers to instructions to a computer in a


programming language. In this usage, the noun "code" typically stands for source code, and the
verb "to code" means to write source code, to program. This usage may have originated when the
first symbolic languages were developed and were punched onto cards as "codes".
Debugging: Debugging is a methodical process of finding and reducing the number of bugs, or
defects, in a computer program or a piece of electronic hardware thus making it behave as
expected. Debugging tends to be harder when various subsystems are tightly coupled, as
changes in one may cause bugs to emerge in another.
Development (of software): A set of activities that results in software products. Software
development may include new development, modification, reuse, re-engineering, maintenance, or
any other activities that result in software products.
Requirement: A formal statement of (1) an attribute to be possessed by the product or a function to be performed by the product, (2) the performance standard for the attribute or function, and (3) the measuring process to be used in verifying that the standard has been met.
Review: An evaluation of the software or of associated work products, in which the software is evaluated, the current requirements are reviewed, and changes and additions to the requirements are proposed.

Test case: A test case is a set of conditions or variables under which a tester will determine whether a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied.
Test basis: The documentation (such as requirements and design specifications) from which the requirements of a component or system can be inferred and on which the test cases are based.
Testing: The process of exercising or evaluating a system or system component by manual or automated means to confirm that it satisfies specified requirements, or to identify differences between expected and actual results.
Test objectives: A statement defining the purpose of the test.

Background
A common perception of testing is that it only consists of running tests, i.e. executing the
software.
This is part of testing, but not all of the testing activities. Test activities exist before and
after test execution, activities such as planning and control, choosing test conditions, designing
test cases and checking results, evaluating completion criteria, reporting on the testing process
and system under test, and finalizing or closure (e.g. after a test phase has been completed).
Testing also includes reviewing of documents (including source code) and static analysis. Both
dynamic testing and static testing can be used as a means for achieving similar objectives, and
will provide information in order to improve both the system to be tested, and the development
and testing processes.
There can be different test objectives:
 Finding defects;
 Gaining confidence about the level of quality and providing information;
 Preventing defects.
The thought process of designing tests early in the life cycle (verifying the test basis via test
design) can help to prevent defects from being introduced into code. Reviews of documents (e.g.
requirements) also help to prevent defects appearing in the code. Different viewpoints in testing
take different objectives into account. For example, in development testing (e.g. component,
integration and system testing), the main objective may be to cause as many failures as possible
so that defects in the software are identified and can be fixed. In acceptance testing, the main
objective may be to confirm that the system works as expected, to gain confidence that it has met
the requirements. In some cases the main objective of testing may be to assess the quality of the
software (with no intention of fixing defects), to give information to stakeholders of the risk of
releasing the system at a given time. Maintenance testing often includes testing that no new
errors have been introduced during development of the changes. During operational testing, the
main objective may be to assess system characteristics such as reliability or availability.
Debugging and testing are different. Testing can show failures that are caused by defects.
Debugging is the development activity that identifies the cause of a defect, repairs the code and
checks that the defect has been fixed correctly. Subsequent confirmation testing by a tester
ensures that the fix does indeed resolve the failure. The responsibility for each activity is very
different, i.e. testers test and developers debug.
The process of testing and its activities is explained in Section 1.4.
1.3 General testing principles (K2)
Terms
Exhaustive testing: Testing that covers all combinations of input values and preconditions for an
element of the software under test
Principles
A number of testing principles have been suggested over the past 40 years and offer
general guidelines common for all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases. Instead of exhaustive testing, we use risk and priorities to focus testing efforts.
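
To see why exhaustive testing is impossible in practice, consider a small worked example; the input sizes below are assumptions chosen for illustration, not figures from the syllabus. Even a modest form with one numeric field, one short text field and a few configuration flags yields an astronomical number of input combinations.

# Back-of-the-envelope calculation with assumed input sizes (Python).
int_field_values  = 2 ** 32      # every possible 32-bit integer
text_field_values = 128 ** 10    # every 10-character ASCII string
flag_combinations = 2 ** 8       # every combination of 8 on/off flags

total = int_field_values * text_field_values * flag_combinations
print(f"{total:.3e} combinations")                      # about 1.3e33

# Even at one million test executions per second:
years = total / 1_000_000 / (60 * 60 * 24 * 365)
print(f"roughly {years:.1e} years to run them all")     # about 4e19 years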
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development
life cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release
testing, or show the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases
will no longer find any new bugs. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill
the users’ needs and expectations.
1.4 Fundamental test process (K1)
Terms
Confirmation testing, exit criteria, incident, regression testing, test basis, test condition,
test coverage, test data, test execution, test log, test plan, test strategy, test summary report, test
ware.

Confirmation testing (re-testing): Re-running tests that previously failed, after the related defect has been fixed, in order to confirm that the fix has indeed resolved the failure.
Exit criteria: The conditions and standards that must be met before a test stage can be considered complete and the next stage of the development process can begin.
Incident: A distinct event, often one that disrupts the normal operation or procedure.
Regression testing: Retesting of a previously tested program following modification to ensure
that faults have not been introduced or uncovered as a result of the changes made.
Test basis: The documentation (such as requirements and design specifications) from which the requirements of a component or system can be inferred and on which the test cases are based.
Test condition: A particular behavior of the system under test (SUT) that needs to be verified.
Test coverage: The degree to which a given test or set of tests addresses all specified requirements for a given system or component.
As the simplest methodology, two metrics can be used to analyze test coverage (a minimal code sketch follows):
 Coverage of requirements by the test cases.
 Number of test cases executed vs. number of test cases written.
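
A minimal sketch of these two metrics, assuming a hypothetical project in which each test case records the requirement it covers and whether it has been executed; the identifiers and figures are illustrative only.

# Hypothetical test-case records: (test case id, requirement covered, executed?)
test_cases = [
    ("TC-01", "REQ-1", True),
    ("TC-02", "REQ-1", True),
    ("TC-03", "REQ-2", False),
    ("TC-04", "REQ-4", False),
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Metric 1: coverage of requirements by the test cases.
covered = {req for _, req, _ in test_cases}
requirements_coverage = len(covered & all_requirements) / len(all_requirements)

# Metric 2: number of test cases executed vs. number of test cases written.
execution_ratio = sum(1 for _, _, executed in test_cases if executed) / len(test_cases)

print(f"Requirements covered by tests: {requirements_coverage:.0%}")   # 75%
print(f"Test cases executed:           {execution_ratio:.0%}")         # 50%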

Test data: Simulated transactions that can be used to test processing logic, computations and controls actually programmed into computer applications.
Test execution: The processing of a test case suite by the software under test, producing an
outcome.
Test log: A record of which test cases were run, who ran them, in what order they were run, and whether each test passed or failed.
Test plan: A record of the test planning process detailing the degree of tester independence, the
test environment, the test case design techniques and test measurement techniques to be used,
and the rationale for their choice.
Test strategy: A statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used.
Four components of a good test strategy
a) Critical success factor
b) Risk analysis
c) Assumptions
d) Methodology to be followed
Test summary report: A summary of all the important information to come out of the testing procedure, including an assessment of how well the testing was performed, an assessment of the quality of the system, any incidents that occurred, and a record of what testing was done and how long it took, for future reference.
Test planning: The activity of defining the objectives of testing and specifying the test activities; its main output, the test plan, is used to determine whether the software being tested is viable enough to proceed to the next stage of development.
Testware: The artifacts produced during the test process that are needed to plan, design and run tests, such as test plans, test cases, test procedures, test scripts, test data, expected results and the test environment set-up, retained so that they can be archived and reused.

Background
The most visible part of testing is executing tests. But to be effective and efficient, test
plans should also include time to be spent on planning the tests, designing test cases, preparing
for execution and evaluating status.
The fundamental test process consists of the following main activities:
 Planning and control;
 Analysis and design;
 Implementation and execution;
 Evaluating exit criteria and reporting;
 Test closure activities.
Although logically sequential, the activities in the process may overlap or take place concurrently.
1.4.1 Test planning and control (K1)
Test planning is the activity of verifying the mission of testing, defining the objectives of
testing and the specification of test activities in order to meet the objectives and mission. Test
control is the ongoing activity of comparing actual progress against the plan, and reporting the
status, including deviations from the plan. It involves taking actions necessary to meet the
mission and objectives of the project. In order to control testing, it should be monitored throughout
the project. Test planning takes into account the feedback from monitoring and control activities.
Test planning has the following major tasks:
 Determining the scope and risks, and identifying the objectives of testing.
 Determining the test approach (techniques, test items, coverage, identifying and
interfacing the teams involved in testing, testware).
 Determining the required test resources (e.g. people, test environment, PCs).
 Implementing the test policy and/or the test strategy.
 Scheduling test analysis and design tasks.
 Scheduling test implementation, execution and evaluation.
 Determining the exit criteria.
Test control has the following major tasks:
 Measuring and analyzing results;
 Monitoring and documenting progress, test coverage and exit criteria;

 Initiation of corrective actions;
 Making decisions.
1.4.2 Test analysis and design (K1)
Test analysis and design is the activity where general testing objectives are transformed
into tangible test conditions and test designs.

Test analysis and design has the following major tasks:


 Reviewing the test basis (such as requirements, architecture, design, interfaces).
 Identifying test conditions or test requirements and required test data based on analysis
of test items, the specification, behavior and structure.
 Designing the tests.
 Evaluating testability of the requirements and system.
 Designing the test environment set-up and identifying any required infrastructure and
tools.
1.4.3 Test implementation and execution (K1)
Test implementation and execution is the activity where test conditions are transformed into test cases and testware, and the environment is set up. Test implementation and execution has the following major tasks (a minimal sketch of the execute-compare-log cycle follows this list):
 Developing and prioritizing test cases, creating test data, writing test procedures and,
optionally, preparing test harnesses and writing automated test scripts.
 Creating test suites from the test cases for efficient test execution.
 Verifying that the test environment has been set up correctly.
 Executing test cases either manually or by using test execution tools, according to the planned
sequence.
 Logging the outcome of test execution and recording the identities and versions of the software
under test, test tools and testware.
 Comparing actual results with expected results.
 Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a
defect in the code, in specified test data, in the test document, or a mistake in the way the test was
executed).
 Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
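
The sketch below illustrates that execute-compare-log cycle; the system under test, the test case identifiers and the log format are all hypothetical, chosen only to show the shape of the activity.

# Hypothetical system under test: a simple price calculation.
def calculate_price(quantity, unit_price):
    return quantity * unit_price

# Test cases as (id, inputs, expected result) - illustrative values only.
test_cases = [
    ("TC-01", (2, 5.0), 10.0),
    ("TC-02", (0, 5.0), 0.0),
    ("TC-03", (3, 1.5), 4.5),
]

test_log = []                                     # record the outcome of every execution
for case_id, inputs, expected in test_cases:      # execute in the planned sequence
    actual = calculate_price(*inputs)
    verdict = "PASS" if actual == expected else "FAIL"
    test_log.append({"id": case_id, "expected": expected,
                     "actual": actual, "verdict": verdict})
    if verdict == "FAIL":
        # A discrepancy would be reported as an incident and analyzed for its cause.
        print(f"Incident: {case_id} expected {expected}, got {actual}")

print(test_log)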
1.4.4 Evaluating exit criteria and reporting (K1)
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks (a minimal check is sketched after this list):
 Checking test logs against the exit criteria specified in test planning.
 Assessing if more tests are needed or if the exit criteria specified should be changed.
 Writing a test summary report for stakeholders.
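
A minimal sketch of checking a test log against exit criteria follows; it reuses the illustrative log format from the previous sketch, and the thresholds (pass rate, open critical defects) are assumed examples rather than values prescribed by the syllabus.

# Illustrative exit criteria: at least 95% of tests passed, no open critical defects.
exit_criteria = {"min_pass_rate": 0.95, "max_open_critical_defects": 0}

def exit_criteria_met(test_log, open_critical_defects):
    passed = sum(1 for entry in test_log if entry["verdict"] == "PASS")
    pass_rate = passed / len(test_log)
    return (pass_rate >= exit_criteria["min_pass_rate"]
            and open_critical_defects <= exit_criteria["max_open_critical_defects"])

# Example: 19 of 20 tests passed (95%) but one critical defect is still open.
sample_log = [{"verdict": "PASS"}] * 19 + [{"verdict": "FAIL"}]
print(exit_criteria_met(sample_log, open_critical_defects=1))   # False: not ready to exit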
1.4.5 Test closure activities (K1)
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. They are carried out, for example, when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed. Test closure activities include the following major tasks:
 Checking which planned deliverables have been delivered, the closure of incident reports
or raising of change records for any that remain open, and the documentation of the
acceptance of the system.

 Finalizing and archiving testware, the test environment and the test infrastructure for later
reuse.
 Handover of testware to the maintenance organization.
 Analyzing lessons learned for future releases and projects, and the improvement of test
maturity.
1.5 The psychology of testing (K2)
Terms
Independent testing: A common practice in software testing is for testing to be performed by an independent group of testers after the software product is finished and before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays. Another practice is to start software testing at the same moment the project starts and continue it as an ongoing process until the project finishes.
Background
The mindset to be used while testing and reviewing is different from that used while analyzing or developing. With the right mindset developers are able to test their own code, but
separation of this responsibility to a tester is typically done to help focus effort and provide
additional benefits, such as an independent view by trained and professional testing resources.
Independent testing may be carried out at any level of testing. A certain degree of independence
(avoiding the author bias) is often more effective at finding defects and failures. Independence is
not, however, a replacement for familiarity, and developers can efficiently find many defects in
their own code. Several levels of independence can be defined:
 Tests designed by the person(s) who wrote the software under test (low level of
independence).
 Tests designed by another person(s) (e.g. from the development team).
 Tests designed by a person(s) from a different organizational group (e.g. an independent
test team).
 Tests designed by a person(s) from a different organization or company (i.e. outsourcing
or certification by an external body).
People and projects are driven by objectives. People tend to align their plans with the objectives
set by management and other stakeholders, for example, to find defects or to confirm that
software works. Therefore, it is important to clearly state the objectives of testing. Identifying
failures during testing may be perceived as criticism against the product and against the author.
Testing is, therefore, often seen as a destructive activity, even though it is very constructive in the
management of product risks. Looking for failures in a system requires curiosity, professional
pessimism, a critical eye, and attention to detail, good communication with development peers,
and experience on which to base error guessing. If errors, defects or failures are communicated
in a constructive way, bad feelings between the testers and the analysts, designers and
developers can be avoided. This applies to reviewing as well as in testing. The tester and test
leader need good interpersonal skills to communicate factual information about defects, progress
and risks, in a constructive way. For the author of the software or document, defect information
can help them improve their skills. Defects found and fixed during testing will save time and
money later, and reduce risks. Communication problems may occur, particularly if testers are
seen only as messengers of unwanted news about defects. However, there are several ways to
improve communication and relationships between testers and developers:
 Start with collaboration rather than battles – remind everyone of the common goal of
better quality systems.
 Communicate findings on the product in a neutral, fact-focused way without criticizing the
person who created it, for example, write objective and factual incident reports and
review findings.
 Try to understand how the other person feels and why they react as they do.
 Confirm that the other person has understood what you have said and vice versa.

2. Testing throughout the software life cycle (K2)

Learning objectives for testing throughout the software life cycle


The objectives identify what you will be able to do following the completion of each module.

2.1 Software development models (K2)


 Understand the relationship between development, test activities and work products in
the development life cycle, and give examples based on project and product
characteristics and context (K2).
 Recognize the fact that software development models must be adapted to the context of
project and product characteristics. (K1)
 Recall reasons for different levels of testing, and characteristics of good testing in any life
cycle model. (K1)

2.2 Test levels (K2)


 Compare the different levels of testing: major objectives, typical objects of testing, typical
targets of testing (e.g. functional or structural) and related work products, people who
test, types of defects and failures to be identified. (K2)

2.3 Test types: the targets of testing (K2)


 Compare four software test types (functional, non-functional, structural and change-
related) by example. (K2)
 Recognize that functional and structural tests occur at any test level. (K1)
 Identify and describe non-functional test types based on non-functional requirements.
(K2)
 Identify and describe test types based on the analysis of a software system’s structure or
architecture. (K2)
 Describe the purpose of confirmation testing and regression testing. (K2)

2.4 Maintenance testing (K2)


 Compare maintenance testing (testing an existing system) to testing a new application
with respect to test types, triggers for testing and amount of testing. (K2)
 Identify reasons for maintenance testing (modification, migration and retirement). (K1)
 Describe the role of regression testing and impact analysis in maintenance. (K2)

2.1 Software development Models (K2)

Terms
Commercial off the shelf (COTS), incremental development model, test level, validation,
verification, V-model.

Background
Testing does not exist in isolation; test activities are related to software development activities.
Different development life cycle models need different approaches to testing.

What is Software Development Life Cycle (SDLC)?


The various activities that are undertaken when developing software are commonly
modeled as a software development lifecycle. The software development lifecycle begins with the
identification of a requirement for software and ends with the formal verification of the developed
software against that requirement.
The software development lifecycle does not exist by itself; it is in fact part of an overall
product lifecycle. Within the product lifecycle, software will undergo maintenance to correct errors
and to comply with changes to requirements. The simplest overall form is where the product is
just software, but it can become much more complicated with multiple software developments,
each forming part of an overall system to comprise a product.

The General Model


Software life cycle models describe phases of the software cycle and the order in which
those phases are executed. There are tons of models, and many companies adopt their own, but
all have very similar patterns. The general, basic model is shown below:

General Life Cycle Model

Requirements -> Design -> Implementation -> Testing -> Maintenance

Each phase produces deliverables required by the next phase in the life cycle.
Requirements are translated into design. Code is produced during implementation that is driven
by the design. Testing verifies the deliverable of the implementation phase against requirements.

Requirements
Business requirements are gathered in this phase. This phase is the main focus of the
project managers and stakeholders. Meetings with managers, stakeholders and users are held in
order to determine the requirements. Who is going to use the system? How will they use the
system? What data should be input into the system? What data should be output by the
system? These are general questions that get answered during a requirements gathering phase.
This produces a nice big list of functionality that the system should provide, which describes
functions the system should perform, business logic that processes data, what data is stored and
used by the system, and how the user interface should work. The overall result describes the system as a whole and how it should perform, not how it is actually going to do it.

Design
The software system design is produced from the results of the requirements phase.
Architects have the ball in their court during this phase and this is the phase in which their focus
lies. This is where the details of how the system will work are produced. The architecture, including hardware and software, communication, and software design (UML is produced here), is all part of the deliverables of the design phase.

Implementation
Code is produced from the deliverables of the design phase during implementation, and
this is the longest phase of the software development life cycle. For a developer, this is the main
focus of the life cycle because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to automate the
production of code using information gathered and produced during the design phase.

Testing
During testing, the implementation is tested against the requirements to make sure that
the product is actually solving the needs addressed and gathered during the requirements
phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a
specific component of the system, while system tests act on the system as a whole. So in a
nutshell, that is a very basic overview of the general software development life cycle model. Now
let's delve into some of the traditional and widely used variations.

Maintenance
Once the software is delivered it enters the maintenance phase, in which it is corrected and modified to comply with changing requirements; these changes are themselves tested before release (see maintenance testing in Section 2.4).

2.1.1 V-model (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to four development levels.
The four levels used in this syllabus are:
 Component (unit) testing;
 Integration testing;
 System testing;
 Acceptance testing.

In practice, a V-model may have more, fewer or different levels of development and
testing, depending on the project and the software product. For example, there may be
component integration testing after component testing, and system integration testing after
system testing.
Software work products (such as business scenarios or use cases, requirement
specifications, design documents and code) produced during development are often the basis of
testing in one or more test levels. References for generic work products include Capability
Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207).
Verification and validation (and early test design) can be carried out during the development of
the software work products.

Testing is emphasized more in this model. The testing procedures are developed early in
the life cycle before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model. Before development is started, a system test plan is
created. The test plan focuses on meeting the functionality specified in the requirements
gathering.
The high-level design phase focuses on system architecture and design. An integration
test plan is created in this phase as well in order to test the pieces of the software systems ability
to work together.
The low-level design phase is where the actual software components are designed, and
unit tests are created in this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is
complete, the path of execution continues up the right side of the V where the test plans
developed earlier are now put to use.

V-Shaped Life Cycle Model

Advantages
 Simple and easy to use.
 Each phase has specific deliverables.
 Higher chance of success over the waterfall model due to the development of test plans
early on during the life cycle.
 Works well for small projects where requirements are easily understood.

Disadvantages
 Very rigid, like the waterfall model.
 Little flexibility and adjusting scope is difficult and expensive.
 Software is developed during the implementation phase, so no early prototypes of the
software are produced.
 Model does not provide a clear path for problems found during testing phases.

Firstly, in our experience, there is rarely a perfect, one-to-one relationship between the
documents on the left hand side and the test activities on the right. For example, functional
specifications don’t usually provide enough information for a system test. System tests must often
take account of some aspects of the business requirements as well as physical design issues for
example. System testing usually draws on several sources of requirements information to be
thoroughly planned.
Secondly, and more important, the V-Model has little to say about static testing at all. The V-
Model treats testing as a “back-door” activity on the right hand side of the model. There is no
mention of the potentially greater value and effectiveness of static tests such as reviews,
inspections, static code analysis and so on. This is a major omission and the V-Model does not
support the broader view of testing as a constantly prominent activity throughout the development
lifecycle.

The W-Model of testing.


Paul Herzlich introduced the W-Model approach in 1993. The W-Model attempts to address
shortcomings in the V-Model. Rather than focus on specific dynamic test stages, as the V-Model
does, the W-Model focuses on the development products themselves. Essentially, every
development activity that produces a work product is “shadowed” by a test activity. The purpose
of the test activity specifically is to determine whether the objectives of a development activity
have been met and the deliverable meets its requirements. In its most generic form, the W-Model
presents a standard development lifecycle with every development stage mirrored by a test
activity. On the left hand side, typically, the deliverables of a development activity (for example,
write requirements) are accompanied by a test activity “test the requirements” and so on. If your
organization has a different set of development stages, then the W-Model is easily adjusted to
your situation. The important thing is this: the W-Model of testing focuses specifically on the
product risks of concern at the point where testing can be most effective.

The W-Model and static test techniques.

If we focus on the static test techniques, you can see that there is a wide range of techniques
available for evaluating the products of the left hand side. Inspections, reviews, walkthroughs,
static analysis, requirements animation as well as early test case preparation can all be used.

2.1.2 Iterative development models (K2)

Iterative development is the process of establishing requirements, designing, building and


testing a system, done as a series of smaller developments. Examples are: prototyping, rapid
application development (RAD), Rational Unified Process (RUP) and agile development models.
The increment produced by an iteration may be tested at several levels as part of its development.
An increment, added to others developed previously, forms a growing partial system, which
should also be tested. Regression testing is increasingly important on all iterations after the first
one. Verification and validation can be carried out on each increment.

There are a number of different models for software development lifecycles. Some of the
more commonly used models are:

 Waterfall Lifecycle Model
 Spiral Lifecycle Model
 Incremental Lifecycle Model
 Iterative Lifecycle Model
 Progressive Development Lifecycle Model
 Prototyping Model
 RAD Lifecycle Model

Waterfall Lifecycle Model


This is the most common and classic of life cycle models, also referred to as a linear-
sequential life cycle model. It is very simple to understand and use. In a waterfall model, each
phase must be completed in its entirety before the next phase can begin. At the end of each
phase, a review takes place to determine if the project is on the right path and whether or not to
continue or discard the project. Unlike what I mentioned in the general model, phases do not
overlap in a waterfall model.

Waterfall Life Cycle Model

Advantages
 Simple and easy to use.
 Easy to manage due to the rigidity of the model: each phase has specific deliverables and a review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.

Disadvantages
 Adjusting scope during the life cycle can kill a project
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Poor model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Poor model where requirements are at a moderate to high risk of changing.

Spiral Model

The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a
process is undertaken to identify risk and alternate solutions. A prototype is produced at the end
of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the
phase. The evaluation phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the
spiral represents cost.

Spiral Life Cycle Model

Advantages

 High amount of risk analysis


 Good for large and mission-critical projects.
 Software is produced early in the software life cycle.

Disadvantages
 Can be a costly model to use.
 Risk analysis requires highly specific expertise.
 Project success is highly dependent on the risk analysis phase.
 Does not work well for smaller projects.

Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple
development cycles take place here, making the life cycle a multi-waterfall cycle. Cycles are
divided up into smaller, more easily managed iterations. Each iteration passes through the
requirements, design, implementation and testing phases.
A working version of software is produced during the first iteration, so you have working
software early on during the software life cycle. Subsequent iterations build on the initial software
produced during the first iteration.
Incremental Life Cycle Model

Advantages
 Generates working software quickly and early during the software life cycle.
 More flexible and less costly to change scope and requirements.
 Easier to test and debug during a smaller iteration.
 Easier to manage risk because risky pieces are identified and handled during its
iteration.
 Each iteration is an easily managed milestone.
Disadvantages
 Each phase of an iteration is rigid and does not overlap with the others.
 Problems may arise pertaining to system architecture because not all requirements
are gathered up front for the entire software life cycle.
Iterative Model
An iterative lifecycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which can then be reviewed in order to identify further requirements. This process is
then repeated, producing a new version of the software for each cycle of the model.

Iterative Life Cycle Model

Consider an iterative lifecycle model which consists of repeating the following four
phases in sequence:

A Requirements phase, in which the requirements for the software are gathered and analyzed.
Iteration should eventually result in a requirements phase that produces a complete and final
specification of requirements.
A Design phase, in which a software solution to meet the requirements is designed. This may be
a new design, or an extension of an earlier design.

An Implementation and Test phase, when the software is coded, integrated and tested.

A Review phase, in which the software is evaluated, the current requirements are reviewed, and
changes and additions to requirements proposed.
For each cycle of the model, a decision has to be made as to whether the software
produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes
referred to as incremental prototyping). Eventually a point will be reached where the requirements
are complete and the software can be delivered, or it becomes impossible to enhance the
software as required, and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive
approximation. Drawing an analogy with mathematical methods that use successive
approximation to arrive at a final solution, the benefit of such methods depends on how rapidly
they converge on a solution.
The key to successful use of an iterative software development lifecycle is rigorous
validation of requirements, and verification (including testing) of each version of the software
against those requirements within each cycle of the model. The first three phases of the example
iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each
cycle of the model produces software that requires testing at the unit level, for software
integration, for system integration and for acceptance. As the software evolves through
successive cycles, tests have to be repeated and extended to verify each version of the software.

Progressive Development Lifecycle Model


A common problem with software development is that software is needed quickly, but it
will take a long time to fully develop. The solution is to form a compromise between time scales
and functionality, providing "interim" deliveries of software, with reduced functionality, but serving
as a stepping-stone towards the fully functional software. It is also possible to use such a
stepping stone approach as a means of reducing risk.
The usual names given to this approach to software development are progressive
development or phased implementation. The corresponding lifecycle model is referred to as a
progressive development lifecycle. Within a progressive development lifecycle, each individual
phase of development will follow its own software development lifecycle, typically using a V or
waterfall model. The actual number of phases will depend upon the development.

Progressive Life Cycle Model

Each delivery of software will have to pass acceptance testing to verify the software fulfils
the relevant parts of the overall requirements. The testing and integration of each phase will
require time and effort. Therefore, there is a point at which an increase in the number of
development phases will actually become counterproductive, giving an increased cost and time
scale, which will have to be weighed carefully against the need for an early solution.
The software produced by an early phase of the model may never actually be used; it
may just serve as a prototype. A prototype will take short cuts in order to provide a quick means
of validating key requirements and verifying critical areas of design. These short cuts may be in
areas such as reduced documentation and testing. When such short cuts are taken, it is essential
to plan to discard the prototype and implement the next phase from scratch, because the reduced
quality of the prototype will not provide a good foundation for continued development.

Prototyping Model
Prototyping has been discussed in the literature as an important approach to early
requirements validation. A prototype is an enactable mock-up or model of a software system that
enables evaluation of features or functions through user and developer interaction with
operational scenarios. Prototyping exposes functional and behavioral aspects of the system as
well as implementation considerations, thereby increasing the accuracy of requirements and
helping to control their volatility during development.

Prototyping Life Cycle Model

The requirements of a system or a class of systems are gathered in an evolutionary
fashion. Requirements knowledge is never complete, but rather evolves over time as new
requirements are identified, existing requirements are expanded, and obsolete requirements are
discarded.
There are really two broad categories of prototyping approaches: those that involve the
creation of a series of fielded prototypes, and those intent on exploring ideas without resorting to
field deployment. The former are most commonly referred to as field or evolutionary prototypes,
while the latter go by many names, including rapid, concept, throw-away, experimental, and
exploratory prototypes. For convenience, we will refer to these broad categories as evolutionary
and concept prototypes respectively.
Concept prototyping is a mechanism for achieving validation prior to commitment.
Concept prototyping may be used to validate requirements prior to commitment to specific
designs. Similarly, concept prototyping may be used to validate potential designs prior to
commitment to specific implementations. In this sense, prototyping as a software development
paradigm can be seen as tacit acceptance that requirements are not fully known or understood
prior to design and implementation. Concept prototyping can be used as a means to explore new
requirements and thus assist in the ongoing evolution of requirements.
Viewed from a different perspective, the entire lifecycle of a product can be seen as a
series of increasingly detailed evolutionary prototypes. The evolutionary view of the software
lifecycle considers the first delivery to be an initial fielded prototype. Subsequent modifications
and enhancements result in delivery of further, more mature prototypes. This process continues
until eventual product retirement. Adoption of this view eliminates the arbitrary distinction between
developers and maintainers resulting in an important shift in mindset affecting strategies for cost
estimation, development approach, and product acquisition.

RAD Lifecycle Model


The RAD (Rapid Application Development) model is a high-speed adaptation of the waterfall
model. The emphasis is on a short development cycle, achieved by using component-based
construction. Typical use for RAD development is for information
systems.

Following are the four phases of RAD:

Requirements Planning:
The Requirements Planning stage consists of a review of the areas immediately
associated with the proposed system. This review produces a broad definition of the system
requirements in terms of the functions the system will support.
The deliverables from the Requirements Planning stage include an outline system area
model (entity and process models) of the area under study, a definition of the system's scope,
and a cost justification for the new system.

User Design:
The User Design stage consists of a detailed analysis of the business activities related to
the proposed system. Key users, meeting in workshops, decompose business functions and
define entity types associated with the system. They complete the analysis by creating action
diagrams defining the interactions between processes and data. Following the analysis, the
design of the system is outlined. System procedures are designed, and preliminary layouts of
screens are developed. Prototypes of critical procedures are built and reviewed. A plan for
implementing the system is prepared.

Construction:
In the Construction stage, a small team of developers, working directly with users,
finalizes the design and builds the system. The software construction process consists of a series
of "design-and-build" steps in which the users have the opportunity to fine-tune the requirements
and review the resulting software implementation. This stage also includes preparing for the
cutover to production.
In addition to the tested software, Construction stage deliverables include documentation
and instructions necessary to operate the new application, and routines and procedures needed
to put the system into operation.

Implementation:
The implementation stage involves implementing the new system and managing the change from
the old system environment to the new one. This may include implementing bridges between
existing and new systems, converting data, and training users. User acceptance is the end point
of the implementation stage.

2.1.3 Testing within a life cycle model (K2)
In any life cycle model, there are several characteristics of good testing:

 For every development activity there is a corresponding testing activity.


 Each test level has test objectives specific to that level.
 The analysis and design of tests for a given test level should begin during the
corresponding development activity.
 Testers should be involved in reviewing documents as soon as drafts are available in the
development life cycle.

Test levels can be combined or reorganized depending on the nature of the project or the
system architecture. For example, for the integration of a commercial off the shelf (COTS)
software product into a system, the purchaser may perform integration testing at the system level
(e.g. integration to the infrastructure and other systems, or system deployment) and acceptance
testing (functional and/or non-functional, and user and/or operational testing).

2.2 Test levels (K2)

Terms
Alpha testing, beta testing, component testing (also known as unit, module or program
testing), contract acceptance testing, drivers, field testing, functional requirements, integration,
integration testing, non-functional requirements, operational (acceptance) testing, regulation
acceptance testing, robustness testing, stubs, system testing, test-driven development, test
environment, user acceptance testing.

Background
For each of the test levels, the following can be identified: their generic objectives, the
work product(s) being referenced for deriving test cases (i.e. the test basis), the test object (i.e.
what is being tested), typical defects and failures to be found, test harness requirements and tool
support, and specific approaches and responsibilities.

2.2.1 Component testing (K2)


Component testing searches for defects in, and verifies the functioning of, software (e.g.
modules, programs, objects, classes, etc.) that are separately testable. It may be done in isolation
from the rest of the system, depending on the context of the development life cycle and the
system. Stubs, drivers and simulators may be used.
Component testing may include testing of functionality and specific non-functional
characteristics, such as resource-behavior (e.g. memory leaks) or robustness testing, as well as
structural testing (e.g. branch coverage). Test cases are derived from work products such as a
specification of the component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the
support of the development environment, such as a unit test framework or debugging tool, and, in
practice, usually involves the programmer who wrote the code. Defects are typically fixed as soon
as they are found, without formally recording incidents.
One approach in component testing is to prepare and automate test cases before coding.
This is called a test-first approach or test-driven development. This approach is highly iterative
and is based on cycles of developing test cases, then building and integrating small pieces of
code, and executing the component tests until they pass.
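As a minimal sketch of this test-first cycle in Python (the unittest module is part of the standard library; the function calculate_vat and the 20% rate are invented for this example, not taken from the syllabus), the test class is written and run first, fails while the component does not yet exist, and the component is then implemented and refined until all tests pass:

import unittest

def calculate_vat(net_amount):
    # Implemented only after the tests below were written and seen to fail.
    if net_amount < 0:
        raise ValueError("net amount must not be negative")
    return round(net_amount * 0.20, 2)

class CalculateVatTest(unittest.TestCase):
    # Written before calculate_vat existed (test-first).
    def test_vat_on_positive_amount(self):
        self.assertEqual(calculate_vat(100.00), 20.00)

    def test_negative_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_vat(-1)

if __name__ == "__main__":
    unittest.main()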

It is the component testing perspective that is important and not the size of the pieces
being tested. That perspective views the software being tested as intended for integration with
other pieces rather than as a complete system in itself. Both of these help to determine what
features of the software are tested and how they are tested.

Which ones should we test?


One of the most intense arguments in testing object-oriented systems is whether detailed
component testing is worth the effort. That leads me to state an obvious (at least in my mind)
axiom: Select a component for testing when the penalty for the component not working is greater
than the effort required to test it. Not every class will be sufficiently large, important or complex to
meet this test so not every class will be tested independently.
There are several situations in which the individual classes should be tested regardless
of their size or complexity:
Reusable components - Components intended for reuse should be tested over a wider range of
values than a component intended for a single focused use.
Domain components - Components that represent significant domain concepts should be tested
both for correctness and for the faithfulness of the representation.
Commercial components - Components that will be sold as individual products should be tested
not only as reusable components but also as potential sources of liability.

2.2.2 Integration testing (K2)


Integration testing tests interfaces between components, interactions to different parts of
a system, such as the operating system, file system, hardware or interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test
objects of varying size. For example:
 Component integration testing tests the interactions between software components and is
done after component testing;
 System integration testing tests the interactions between different systems and may be
done after system testing. In this case, the developing organization may control only one
side of the interface, so changes may be destabilizing. Business processes implemented
as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate failures to a
specific component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-
down and bottom-up), functional tasks, transaction processing sequences, or some other aspect
of the system or component. In order to reduce the risk of late defect discovery, integration
should normally be incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in
integration testing.
At each stage of integration, testers concentrate solely on the integration itself. For
example, if they are integrating module A with module B they are interested in testing the
communication between the modules, not the functionality of either module. Both functional and
structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If
integration tests are planned before components or systems are built, they can be built in the
order required for most efficient testing.

You can do integration testing in a variety of ways but the following are three common
strategies:
The Top-Down approach to integration testing requires the highest-level modules be
tested and integrated first. This allows high-level logic and data flow to be tested early in the
process and it tends to minimize the need for drivers. However, the need for stubs complicates
test management and low-level utilities are tested relatively late in the development cycle.
Another disadvantage of top-down integration testing is its poor support for early release of
limited functionality.
The Bottom-Up approach requires the lowest-level units be tested and integrated first.
These units are frequently referred to as utility modules. By using this approach, utility modules
are tested early in the development process and the need for stubs is minimized. The downside,
however, is that the need for drivers complicates test management and high-level logic and data
flow are tested late. Like the top-down approach, the bottom-up approach also provides poor
support for early release of limited functionality.
The third approach, sometimes referred to as the Umbrella approach, requires testing
along functional data and control-flow paths. First, the inputs for functions are integrated in the
bottom-up pattern discussed above. The outputs for each function are then integrated in the top-
down manner. The primary advantage of this approach is the degree of support for early release
of limited functionality. It also helps minimize the need for stubs and drivers. The potential
weaknesses of this approach are significant, however, in that it can be less systematic than the
other two approaches, leading to the need for more regression testing.
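To illustrate the role of the drivers and stubs mentioned above, here is a minimal Python sketch (all module, class and function names are invented for this example): in a bottom-up step a driver stands in for the not-yet-integrated higher-level code and calls the real low-level module, while in a top-down step a stub stands in for a low-level service that is not yet available.

# Real low-level (utility) module integrated first in a bottom-up strategy.
def convert_currency(amount, rate):
    return round(amount * rate, 2)

# Driver: temporary test code that plays the role of the missing higher-level module
# and exercises the low-level module directly.
def driver_for_convert_currency():
    assert convert_currency(10.0, 1.5) == 15.0
    assert convert_currency(0.0, 1.5) == 0.0

# Stub: a simplified stand-in for a low-level rate service that is not yet integrated,
# returning a canned answer so the higher-level logic can be tested top-down.
class RateServiceStub:
    def current_rate(self, currency):
        return 1.5

def price_in_currency(amount, currency, rate_service):
    return convert_currency(amount, rate_service.current_rate(currency))

def driver_for_price_in_currency():
    assert price_in_currency(10.0, "EUR", RateServiceStub()) == 15.0

if __name__ == "__main__":
    driver_for_convert_currency()
    driver_for_price_in_currency()
    print("integration checks passed")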
A big bang project is one that has no staged delivery. The customer must wait,
sometimes months, before seeing anything from the development team. At the end of the wait
comes a "big bang", which often results in the customer being disappointed.

2.2.3 System testing (K2)


System testing is concerned with the behavior of a whole system/product as defined by
the scope of a development project or program.
In system testing, the test environment should correspond to the final target or production
environment as much as possible in order to minimize the risk of environment-specific failures not
being found in testing.
System testing may include tests based on risks and/or on requirements specifications,
business processes, use cases, or other high level descriptions of system behavior, interactions
with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the
system. Requirements may exist as text and/or models. Testers also need to deal with
incomplete or undocumented requirements. System testing of functional requirements starts by
using the most appropriate specification-based (black-box) techniques for the aspect of the
system to be tested. For example, a decision table may be created for combinations of effects
described in business rules. Structure-based techniques (white-box) may then be used to assess
the thoroughness of the testing with respect to a structural element, such as menu structure or
web page navigation. (See Chapter 4.)
An independent test team often carries out system testing.
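As a sketch of the decision table idea mentioned above (the business rule, free shipping for members with an order total of at least 100, is invented for this example), each combination of conditions in the table becomes one system-level test case with an expected outcome:

def free_shipping(is_member, order_total):
    # Hypothetical business rule used only to illustrate decision table testing.
    return is_member and order_total >= 100

# Decision table: one row per combination of conditions, with the expected action.
decision_table = [
    # (is_member, order_total, expected_free_shipping)
    (True, 150, True),
    (True, 50, False),
    (False, 150, False),
    (False, 50, False),
]

for is_member, order_total, expected in decision_table:
    assert free_shipping(is_member, order_total) == expected, (is_member, order_total)
print("all decision table combinations passed")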

Generally speaking, System testing is the first time that the entire system can be tested
as a whole system against the Functional Requirement Specification(s) (FRS) and/or the System
Requirement Specification (SRS); these are the rules that describe the functionality that the
vendor (the entity developing the software) and a customer have agreed upon. System testing
tends to be more of an investigatory testing phase, where the focus is to have almost a
destructive attitude and test not only the design, but also the behavior and even the believed
expectations of the customer. System testing is intended to test up to and beyond the bounds
defined in the software/hardware requirements specification(s).
One could view System testing as the final destructive testing phase before Acceptance
testing.
Types of System testing
 Functional testing
 User interface testing
 Model based testing
 Error exit testing
 User help testing
 Security Testing
 Capacity testing
 Performance testing
 Sanity testing
 Regression testing
 Reliability testing
 Recovery testing
 Installation testing
 Maintenance testing
 Documentation testing

Although different testing organizations may prescribe more or fewer types of testing
within System testing, this list serves as a general framework or foundation to begin with.

2.2.4 Acceptance testing (K2)


Acceptance testing is often the responsibility of the customers or users of a system; other
stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the
system or specific non-functional characteristics of the system. Finding defects is not the main
focus in acceptance testing. Acceptance testing may assess the system’s readiness for
deployment and use, although it is not necessarily the final level of testing. For example, a large-
scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur as more than just a single test level, for example:

 A COTS software product may be acceptance tested when it is installed or integrated.


 Acceptance testing of the usability of a component may be done during component
testing.
 Acceptance testing of a new functional enhancement may come before system testing.

Typical forms of acceptance testing include the following:

User acceptance testing


Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing


The acceptance of the system by the system administrators, including:
 Testing of backup/restore;
 Disaster recovery;
 User management;
 Maintenance tasks;
 Periodic checks of security vulnerabilities.

Contract and regulation acceptance testing


Contract acceptance testing is performed against a contract’s acceptance criteria for
producing custom-developed software. Acceptance criteria should be defined when the contract
is agreed.
Regulation acceptance testing is performed against any regulations that must be adhered
to, such as governmental, legal or safety regulations.

Alpha and beta (or field) testing


Developers of market, or COTS, software often want to get feedback from potential or
existing customers in their market before the software product is put up for sale commercially.
Alpha testing is performed at the developing organization’s site. Beta testing, or field-testing, is
performed by people at their own locations. Potential customers, not the developers of the
product, perform both. Organizations may use other terms as well, such as factory acceptance
testing and site acceptance testing for systems that are tested before and after being moved to a
customer’s site.
2.3 Test types: the targets of testing (K2)
Terms
Automation, black-box testing, code coverage, confirmation testing, functional testing,
interoperability testing, load testing, maintainability testing, performance testing, portability
testing, regression testing, reliability testing, security testing, specification-based testing, stress
testing, structural testing, test suite, usability testing, white-box testing.

Background
A group of test activities can be aimed at verifying the software system (or a part of a
system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be the testing of a
function to be performed by the software; a non-functional quality characteristic, such as reliability
or usability, the structure or architecture of the software or system; or related to changes, i.e.
confirming that defects have been fixed (confirmation testing) and looking for unintended changes
(regression testing).
A model of the software may be developed and/or used in structural and functional
testing. For example, in functional testing a process flow model, a state transition model or a plain
language specification; and for structural testing a control flow model or menu structure model.

2.3.1 Testing of function (functional testing) (K2)


The functions that a system, subsystem or component are to perform may be described
in work products such as a requirements specification, use cases, or a functional specification, or
they may be undocumented. The functions are “what” the system does.
Functional tests are based on these functions and features (described in documents or
understood by the testers), and may be performed at all test levels (e.g. tests for components
may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases
from the functionality of the software or system. (See Chapter 4.) Functional testing considers the
external behavior of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g. a firewall)
relating to detection of threats, such as viruses, from malicious outsiders.

Black-box testing is testing without knowledge of the internal workings of the item being tested.
For example, when black box testing is applied to software engineering, the tester would
only know the "legal" inputs and what the expected outputs should be, but not how the program
actually arrives at those outputs. It is because of this that black box testing can be considered
testing with respect to the specifications, no other knowledge of the program is necessary.
Here, the tester and the programmer can be independent of one another, avoiding
programmer bias toward his own work.
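A small Python sketch of this idea (the function and its specification are invented for this example): the test cases are derived only from the stated rule, "a year is a leap year if it is divisible by 4, except century years, which must also be divisible by 400", without looking at how the function is implemented.

def is_leap_year(year):
    # The implementation is irrelevant to the black-box tester.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived purely from the specification: legal inputs and expected outputs.
specification_based_cases = [
    (2004, True),   # divisible by 4
    (2001, False),  # not divisible by 4
    (1900, False),  # century year not divisible by 400
    (2000, True),   # century year divisible by 400
]

for year, expected in specification_based_cases:
    assert is_leap_year(year) == expected, year
print("all specification-based cases passed")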

Advantages of Black Box Testing:

- more effective on larger units of code than glass box testing


- tester needs no knowledge of implementation, including specific programming languages
- tester and programmer are independent of each other
- tests are done from a user's point of view
- this will help to expose any ambiguities or inconsistencies in the specifications
- test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing:

- only a small number of possible inputs can actually be tested, to test every possible input
stream would take nearly forever
- without clear and concise specifications, test cases are hard to design
- there may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried
- may leave many program paths untested
- cannot be directed toward specific segments of code which may be very complex (and
therefore more error prone)
- most testing related research has been directed toward glass box testing

Other functional testing techniques include: Transaction testing, Syntax testing, Domain
testing, Logic testing, and State testing.

2.3.2 Testing of software product characteristics (non-functional testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing,
stress testing, usability testing, interoperability testing, maintainability testing, reliability testing
and portability testing. It is the testing of “how” the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing
describes the tests required to measure characteristics of systems and software that can be
quantified on a varying scale, such as response times for performance testing. These tests can
be referenced to a quality model such as the one defined in ‘Software Engineering – Software
Product Quality’ (ISO 9126).

Non-functional requirements are properties and qualities the software system must
possess while providing its intended functional requirements or services. These types of
requirements have to be considered while developing the functional counterparts. They greatly
affect the design and implementation choices a developer may make. They also affect the
acceptability of the developed software by its intended users.
In the following, we briefly describe non-functional requirements that may be imposed on a
software system.

System-related Non-Functional Requirements:


These types of requirements impose some criteria related to the internal qualities of the
system under development and the hardware/software context in which this system will operate.

Operational requirements: These requirements specify the environment in which the software
will be running, including, hardware platforms, external interfaces, and operating systems.

Performance requirements: These requirements specify possibly lower and upper bounds on
speed, response time and storage characteristics of the software.

Maintainability requirements: These requirements specify the expected response time for
dealing with the various maintenance activities, such as future release dates.

Portability requirements: These requirements specify future plans for porting the software to
different operating environments. These requirements are linked to both operating and
maintainability requirements and may impose certain design decisions and implementation
choices such as the choice of a programming language.

Security requirements: These requirements specify the levels and types of security
mechanisms that need to be satisfied during the operations of the system. These may include
adherence to specific security standards and plans, and the implementation of specific
techniques.

Plan for Non-functional testing of Evolution:


The non-functional testing of the Evolution client (used here as an example application) will include the following types of testing:

Reliability Testing:
Under this category, the Evolution client will be tested to evaluate its ability to perform its
required functions under stated conditions and a set of operations for a specified period of time or
number of iterations.

Performance Testing:
This testing will be conducted to evaluate the time taken, or response time, for Evolution to
perform its required functions (in the mailer, calendar, address book and tasks) under stated
conditions, in comparison with different versions of Evolution.
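A minimal performance-test sketch in Python (the operation, the data volume of 100,000 entries and the 0.5 second threshold are assumptions made for this example, not figures from the syllabus): the response time of an operation is measured under a stated condition and compared against the requirement.

import time

def search_contacts(contacts, name):
    # Stand-in for the operation whose response time is being measured.
    return [entry for entry in contacts if name in entry]

contacts = ["contact-%d" % i for i in range(100000)]  # stated condition: 100,000 entries

start = time.perf_counter()
matches = search_contacts(contacts, "contact-99999")
elapsed = time.perf_counter() - start

print("response time: %.3f s, matches found: %d" % (elapsed, len(matches)))
assert elapsed < 0.5, "response time requirement (< 0.5 s) not met"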

Scalability Testing:
Scalability is the capability of the software product to be upgraded to accommodate
increased loads. Scalability testing of the Evolution client will be done together with the
performance testing.

Compatibility Testing:
Testing whether the system is compatible with other systems with which it should
communicate.
Testing to be carried out to validate proper inter-working of interconnecting network
facilities and equipment. Compatibility tests are performed prior to cutover to validate functional
capabilities and services provided over the interconnections. [T1.234-1993]
Compatibility testing measures how well pages display on different clients; for example:
browsers, different browser version, different operating systems, and different machines. At issue
are the different implementations of HTML by the various browser manufacturers and the different
machine platform display and rendering characteristics. Also called browser compatibility testing
and cross-browser testing.

2.3.3 Testing of software structure/architecture (structural testing) (K2)


Structural (white-box) testing may be performed at all test levels. Structural techniques
are best used after specification-based techniques, in order to help measure the thoroughness of
testing through assessment of coverage of a type of structure.
Coverage is the extent to which a structure has been exercised by a test suite, expressed as
a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to
test those items that were missed and, therefore, increase coverage. Coverage techniques are
covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing,
tools can be used to measure the code coverage of elements, such as statements or decisions.
Structural testing may be based on the architecture of the system, such as a calling hierarchy.
Structural testing approaches can also be applied at system, system integration or
acceptance testing levels (e.g. to business models or menu structures).

What is a White Box Testing Strategy?


White box testing strategy deals with the internal logic and structure of the code. White
box testing is also called glass box, structural, open box or clear box testing. The tests written
based on the white box testing strategy cover the code written, branches, paths, statements and
internal logic of the code.
In order to implement white box testing, the tester has to deal with the code and hence
needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box
testing also requires the tester to look into the code and find out which unit/statement/chunk of
the code is malfunctioning.

Advantages of White box testing are:


- As the knowledge of internal coding structure is prerequisite, it becomes very easy to find out
which type of input/data can help in testing the application effectively.
- The other advantage of white box testing is that it helps in optimizing the code
- It helps in removing the extra lines of code, which can bring in hidden defects.

Techniques under White/Glass Box Testing Strategy:

Unit Testing:
The developer carries out unit testing in order to check if the particular module or unit of
code is working fine. The Unit Testing comes at the very basic level as it is carried out as and
when the unit of the code is developed or a particular functionality is built.

Static and dynamic Analysis:


Static analysis involves going through the code in order to find out any possible defect in
the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the
application is executed at least once. It helps in assuring that all the statements execute without
any side effect.

Branch Coverage:
No software application can be written in a continuous mode of coding; at some point we
need to branch out the code in order to perform a particular functionality. Branch coverage testing
helps in validating all the branches in the code and making sure that no branching leads to
abnormal behavior of the application.
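A small Python sketch (the function is invented for this example) showing the difference between the two coverage measures: a single test with a negative value executes every statement of the function (100% statement coverage) but exercises only the true outcome of the decision, so a second test with a non-negative value is needed to reach 100% branch (decision) coverage.

def absolute_value(x):
    if x < 0:      # decision with two outcomes (branches)
        x = -x     # executed only when the decision is true
    return x

# Test 1: achieves 100% statement coverage on its own, but only 50% branch coverage,
# because the false outcome of the decision (x >= 0) is never exercised.
assert absolute_value(-5) == 5

# Test 2: exercises the false outcome, bringing branch coverage to 100%.
assert absolute_value(3) == 3

print("statement and branch coverage examples passed")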

Security Testing:
Security Testing is carried out in order to find out how well the system can protect itself
from unauthorized access, hacking, cracking, any code damage, etc. that affects the code of the
application. This type of testing needs sophisticated testing techniques.

Mutation Testing:
Mutation testing is a technique in which small changes (mutants) are deliberately
introduced into the code and the existing tests are re-run to check whether they detect these
changes. It helps in evaluating the effectiveness of the test cases and in identifying weaknesses
in the testing strategy.
Besides all the testing types given above, there are some more types which fall under
both Black box and White box testing strategies such as: Functional testing (which deals with the
code in order to check its functional performance), Incremental integration testing (which deals
with the testing of newly added code in the application), Performance and Load testing (which
helps in finding out how the particular code manages resources and give performance etc.) etc.

2.3.4 Testing related to changes (confirmation and regression testing) (K2)


When a defect is detected and fixed then the software should be retested to confirm that
the original defect has been successfully removed. This is called confirmation testing. Debugging
(defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after
modification, to discover any defects introduced or uncovered as a result of the change(s). These
defects may be either in the software being tested, or in another related or unrelated software
component. It is performed when the software, or its environment, is changed. The extent of
regression testing is based on the risk of not finding defects in software that was working
previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist
regression testing. Regression testing may be performed at all test levels, and applies to
functional, non-functional and structural testing. Regression test suites are run many times and
generally evolve slowly, so regression testing is a strong candidate for automation.
Common methods of regression testing include re-running previously run tests and
checking whether previously fixed faults have re-emerged.
In most software development situations it is considered good practice that when a bug is
located and fixed, a test that exposes the bug is recorded and regularly retested after subsequent
changes to the program. Although this may be done through manual testing procedures using
programming techniques, it is often done using automated testing tools. Such a 'test suite'
contains software tools that allow the testing environment to execute all the regression test cases
automatically; some projects even set up automated systems to automatically re-run all
regression tests at specified intervals and report any regressions. Common strategies are to run
such a system after every successful compile (for small projects), every night, or once a week.
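A minimal sketch of this practice in Python (the defect and the function are invented for this example): when a bug is fixed, a test that exposes it is added to the automated regression suite so that it is re-run after every subsequent change.

import unittest

def split_full_name(full_name):
    # Earlier hypothetical defect: a name without a surname raised an IndexError.
    parts = full_name.strip().split(" ", 1)
    first = parts[0]
    last = parts[1] if len(parts) > 1 else ""
    return first, last

class RegressionSuite(unittest.TestCase):
    def test_normal_name(self):
        self.assertEqual(split_full_name("Ada Lovelace"), ("Ada", "Lovelace"))

    def test_single_name_does_not_crash(self):
        # Recorded when the hypothetical defect was fixed; kept in the suite so the
        # defect is detected if it ever re-emerges after later changes.
        self.assertEqual(split_full_name("Ada"), ("Ada", ""))

if __name__ == "__main__":
    unittest.main()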

2.4 Maintenance testing (K2) 15 minutes


Terms
Impact analysis, maintenance testing, migration, modifications, and retirement.

Background
Once deployed, a software system is often in service for years or decades. During this
time the system and its environment are often corrected, changed or extended. Maintenance
testing is done on an existing operational system, and is triggered by modifications, migration, or
retirement of the software or system.
Modifications include planned enhancement changes (e.g. release-based), corrective and
emergency changes, and changes of environment, such as planned operating system or
database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating
system.
Maintenance testing for migration (e.g. from one platform to another) should include
operational tests of the new environment, as well as of the changed software. Maintenance
testing for the retirement of a system may include the testing of data migration or archiving if long
data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes extensive
regression testing to parts of the system that have not been changed. The scope of maintenance
testing is related to the risk of the change, the size of the existing system and to the size of the
change.
Depending on the changes, maintenance testing may be done at any or all test levels
and for any or all test types.
Determining how the existing system may be affected by changes is called impact
analysis, and is used to help decide how much regression testing to do.
Maintenance testing can be difficult if specifications are out of date or missing.

The objective of the Testing stage is to ensure that the system is modified to take care of
the business change, and the system meets the business requirements to a level acceptable to
the users.
It is quite well known that fixing bugs during maintenance quite often leads to new
problems or bugs. This leads to frustration and disappointment for customers and also leads to
increased effort and cost for re-work for the maintaining organization. This kind of “bug creep”
usually occurs due to several reasons. The most common ones are:
 Integration: The maintenance engineer fixes a bug in one component and makes the necessary
changes to it. While doing so, the impact of this change on other components and
areas is often ignored. The engineer then tests this component (which works fine) and
proceeds to offer a patch to the customer. When the customer installs it, the problem
reported is solved but new problems crop up.
 Deployment: The maintenance engineer has no clue about the deployment scenarios to be
supported, whether in terms of firewalls, proxy servers, client browsers, security
policies, networking issues, etc. which exist at the customer site. These are often ignored
during the maintenance phase, thus leading to a theoretical fix of the problem.
 Ideal setup: While fixing bugs, the maintenance engineer quite often assumes an "ideal setup". All
registry entries have been made. Database connections have been set up. Authorizations
and permissions exist. Relationships have been set. But when a bug is fixed and deployed,
the engineer has to realize that all these and many other system variables need to be
checked and set.
These and several such problems can be addressed by including Maintenance Testing in the
processes for releasing maintenance patches.
Broadly the various steps that make Maintenance Testing are:
 Prepare for Testing
 Conduct Unit Tests
 Test System
 Prepare for Acceptance Tests
 Conduct Acceptance Tests

Of course, including Maintenance Testing as just one more “activity to be performed” will not
deliver the desired results. The test engineers working on maintenance testing should be
passionate about delivering a solution that solves the problems of customers and allows them to
proceed seamlessly. Applied systematically and scientifically, maintenance testing can add
significantly to the satisfaction of customers who are paying hefty maintenance contracts and
could lead to increased trust between the customer and the maintenance provider.
3. Static techniques (K2)

Learning objectives for static techniques


The objectives identify what you will be able to do following the completion of each module.

3.1 Reviews and the test process (K2)


 Recognize software work products that can be examined by the different static
techniques. (K1)
 Describe the importance and value of considering static techniques for the assessment of
software work products. (K2)
 Explain the difference between static and dynamic techniques. (K2)

3.2 Review process (K2)


 Recall the phases, roles and responsibilities of a typical formal review. (K1)
 Explain the differences between different types of review: informal review, technical
review, walkthrough and inspection. (K2)
 Explain the factors for successful performance of reviews. (K2)

3.3 Static analysis by tools (K2)


 Describe the objective of static analysis and compare it to dynamic testing. (K2)
 Recall typical defects and errors identified by static analysis and compare them to
reviews and dynamic testing. (K1)
 List typical benefits of static analysis. (K1)
 List typical code and design defects that may be identified by static analysis tools. (K1)
3.1 Reviews and the test process (K2) 15 minutes
Terms
Dynamic testing, reviews, static analysis.

Background
Static testing techniques do not execute the software that is being tested; they are
manual (reviews) or automated (static analysis).
Reviews are a way of testing software work products (including code) and can be
performed well before dynamic test execution. Defects detected during reviews early in the life
cycle are often much cheaper to remove than those detected while running tests (e.g. defects
found in requirements).
A review could be done entirely as a manual activity, but there is also tool support. The
main manual activity is to examine a work product and make comments about it. Any software
work product can be reviewed, including requirement specifications, design specifications, code,
test plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development
productivity improvements, reduced development timescales, reduced testing cost and time,
lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions,
for example, in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective – identifying
defects. They are complementary: the different techniques can find different types of defect
effectively and efficiently. In contrast to dynamic testing, reviews find defects rather than failures.
Typical defects that are easier to find in reviews than in dynamic testing are: deviations
from standards, requirement defects, design defects, insufficient maintainability and incorrect
interface specifications.

Difference between static and dynamic testing:

1. Static testing is about prevention. Dynamic testing is about cure.


2. Static tools offer greater marginal benefits.
3. Static testing is many times more cost effective than dynamic testing.
4. Static testing beats dynamic testing by a wide margin.
5. Static testing is more effective.
6. Static testing gives you comprehensive diagnostics for your code.
7. Static testing achieves 100% statement coverage in a relatively short time, while dynamic
testing often achieves less than 50% statement coverage, because dynamic testing finds
bugs only in parts of the code that are actually executed.
8. Dynamic testing usually takes longer than static testing. Dynamic testing may involve
running several test cases, each of which may take longer than compilation.
9. Dynamic testing finds fewer bugs than static testing.
10. Static testing can be done before compilation, while dynamic testing can take place only
after compilation and linking.
11. Static testing can find all of the followings that dynamic testing cannot find: syntax errors,
code that is hard to maintain, code that is hard to test, code that does not conform to
coding standards, and ANSI violations.

3.2 Review process (K2)


Terms
Entry criteria, exit criteria, formal review, informal review, inspection, kick-off, metrics,
moderator/ inspection leader, peer review, reviewer, review meeting, review process, scribe,
technical review, walkthrough.
Background
Reviews vary from very informal to very formal (i.e. well structured and regulated). The
formality of a review process is related to factors such as the maturity of the development
process, any legal or regulatory requirements or the need for an audit trail.
The way a review is carried out depends on the agreed objective of the review (e.g. find
defects, gain understanding, or discussion and decision by consensus).

A review is a team activity to detect defects at an early stage of development and to assess the
continuity of the project.

Conventions of reviews are:


 Defects are detected, but not corrected.
 Product is reviewed and not the creator of the product.
 All the people involved in the review are responsible for the outcome of the review.

Advantages of Review:
 The defects are found in the early phase.
 Cost effective.
 Co-ordination between the team members increases.
 Offers training, which helps the members to understand about the product.
 Reduces the work of a tester.

3.2.1 Phases of a formal review (K1)


A typical formal review has the following main phases:
 Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for
more formal review types (e.g. inspection); and selecting which parts of documents to
look at.
 Kick-off: distributing documents; explaining the objectives, process and documents to
the participants; and checking entry criteria (for more formal review types).
 Individual preparation: work done by each of the participants on their own before the
review meeting, noting potential defects, questions and comments.
 Review meeting: discussion or logging, with documented results or minutes (for more
formal review types). The meeting participants may simply note defects, make
recommendations for handling the defects, or make decisions about the defects.
 Rework: fixing defects found, typically done by the author.
 Follow-up: checking that defects have been addressed, gathering metrics and checking
on exit criteria (for more formal review types).

3.2.2 Roles and responsibilities (K1)


A typical formal review will include the roles below:
Manager: decides on the execution of reviews, allocates time in project schedules and
determines if the review objectives have been met.
Moderator: the person who leads the review of the document or set of documents, including
planning the review, running the meeting, and follow-up after the meeting. If necessary, the
moderator may mediate between the various points of view and is often the person upon whom
the success of the review rests.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background (also called checkers or
inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in
the product under review. Reviewers should be chosen to represent different perspectives and
roles in the review process and they take part in any review meetings.
Scribe (or recorder): documents all the issues, problems and open points that were identified
during the meeting.
Looking at documents from different perspectives, and using checklists, can make
reviews more effective and efficient, for example, a checklist based on perspectives such as user,
maintainer, tester or operations, or a checklist of typical requirements problems.

3.2.3 Types of review (K2)


A single document may be the subject of more than one review. If more than one type of review is
used, the order may vary. For example, an informal review may be carried out before a technical review, or
an inspection may be carried out on a requirement specification before a walkthrough with customers.
The main characteristics, options and purposes of common review types are:

Informal review
Key characteristics:
 No formal process;
 There may be pair programming or a technical lead reviewing designs and code;
 Optionally may be documented;
 May vary in usefulness depending on the reviewer;
 Main purpose: inexpensive way to get some benefit.

It is a one-to-one meeting, which can happen between any two individuals who are
involved in a project. No formal plan is prepared, and the outputs are not formally reported. It
can happen at all stages of software development and is also called a peer review.

Walkthrough
Key characteristics:
 Meeting led by author;
 Scenarios, dry runs, peer group;
 Open-ended sessions;
 Optionally a pre-meeting preparation of reviewers, review report, lists of findings and scribe (who
is not the author)
 May vary in practice from quite informal to very formal
 Main purposes: learning, gaining understanding, defects finding.

Technical review
Key characteristics:
 Documented, defined defect-detection process that includes peers and technical experts;
 May be performed as a peer review without management participation;
 Ideally led by trained moderator (not the author);
 Pre-meeting preparation;
 Optionally the use of checklists, review report, lists of findings and management participation;
 May vary in practice from quite informal to very formal;
 Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical
problems and check conformance to specifications and standards.

Inspection
Key characteristics:
 Led by trained moderator (not the author)
 Usually peer examinations
 Defined roles
 Includes metrics
 Formal process based on rules and checklists with entry and exit criteria
 Pre-meeting preparation
 Inspection report, list of findings
 Formal follow-up process
 Optionally, process improvement and reader
 Main purpose: find defects.

3.2.4 Success factors for reviews (K2)

Success factors for reviews include:


 Each review has a clear predefined objective.
 The right people for the review objectives are involved.
 Defects found are welcomed, and expressed objectively.
 People issues and psychological aspects are dealt with (e.g. making it a positive
experience for the author).
 Review techniques are applied that are suitable to the type and level of software work
products and reviewers.
 Checklists or roles are used if appropriate to increase effectiveness of defect
identification.
 Training is given in review techniques, especially the more formal techniques, such as
inspection.
 Management supports a good review process (e.g. by incorporating adequate time for
review activities in project schedules).
 There is an emphasis on learning and process improvement.

Reporting Review Results


The item results from technical reviews can be bundled in a single report. The review
report should contain the following information:

1. For Inspections: The group checklist with all items covered and components relating to
each item
2. For Inspections: A status or Summary Report
3. A list of defects found, and classified by type and frequency
4. Review metric data

The inspection report on the reviewed item may contain a summary of defects and problems
found and a list of review attendees, and some review measures such as the time period for the
review and the total number of major/minor defects. The several status options available are as
follows:

1. Accept: The reviewed item is accepted in its present form or with minor rework required
that does not need further verification
2. Conditional Accept: The reviewed item needs rework and will be accepted after the
moderator has checked and verified the rework
3. Reinspect: Considerable rework must be done on the reviewed item. The inspection
needs to be repeated when the rework is done.

Review, Rework and Follow-Up


If problems/defects have been identified in the reviewed items there must be a rework
period so that the authors of the reviewed item resolve all of these. The rework/follow-up periods
embody a set of tasks. During rework and follow-up the defects/problems are repaired and the
item is retested by the review moderator or the review group as a whole.
Review Metrics
It is important to collect measurement data related to the review process so that the
review process can be evaluated and improved. The defect data collected from a review is also
very useful for predicting product quality, analyzing the development process, performing defect
causal analysis, and establishing defect prevention activities.
Some basic measurements that can be collected are:

1. Size of the item reviewed


2. The review time
3. The number of defects found
4. The number of defects that have escaped and were found in later review and testing
activities
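As a worked illustration (all numbers are invented): if a 40-page design document is reviewed in 8 person-hours and 16 defects are found, the review yields 0.4 defects per page and 2 defects per hour; if 4 further defects from that document escape and are only found later in testing, the review's defect detection effectiveness is 16 / (16 + 4) = 80%.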

Conclusion
The many benefits of reviews and arguments for use of reviews are strong enough to
convince organizations that they should use reviews as a quality and productivity-enhancing tool.
One of the maturity goals that must be satisfied to reach level 4 of TMM is to establish a review
program. This implies that a formal review program needs to be put in place, supported by
policies, resources, training, and requirements for mandatory reviews of software artifacts.

3.3 Static analysis by tools (K2)


The objective of static analysis is to find defects in software source code and software
models. Static analysis is performed without actually executing the software being examined by
the tool; dynamic testing does execute the software code. Static analysis can locate defects that
are hard to find in testing. As with reviews, static analysis finds defects rather than failures. Static
analysis tools analyze program code (e.g. control flow and data flow), as well as generated output
such as HTML and XML.

The value of static analysis is:


 Early detection of defects prior to test execution
 Early warning about suspicious aspects of the code or design, by the calculation of
metrics, such as a high complexity measure
 Identification of defects not easily found by dynamic testing
 Detecting dependencies and inconsistencies in software models, such as links
 Improved maintainability of code and design
 Prevention of defects, if lessons are learned in development
Typical defects discovered by static analysis tools include: referencing a variable with an
undefined value; inconsistent interface between modules and components; variables that
are never used; unreachable (dead) code; programming standards violations; security
vulnerabilities; syntax violations of code and software models.
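A small Python sketch, written deliberately to contain such defects (the function is invented for this example), of the kind of code a static analysis tool such as pylint or flake8 would flag without ever executing it:

def apply_discount(price, is_member):
    unused_rate = 0.05             # flagged: variable assigned but never used
    if is_member:
        discount = 0.10
    # flagged: 'discount' may be referenced before assignment when is_member is False
    return price * (1 - discount)
    print("discount applied")      # flagged: unreachable (dead) code after return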

Developers use static analysis before and during component and integration testing, whereas
designers use it during software modeling. Static analysis tools may produce a large number of
warning messages, which need to be well managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.

Example - QA·MISRA

About MISRA:
"Guidelines For the Use of the C Language in Vehicle Based Software" was published by the
Motor Industry Software Reliability Association to promote safe use of the C language in the
automotive industry. It contains rules defining a subset of the C language and places great
emphasis on the value of static analysis to enforce compliance. MISRA is widely accepted as a
model for best practices by leading developers, not just in the automotive industry, but in
aerospace, telecom, medical devices, defense, and others.
Features
 Detects and reports non-compliant code
 Links warning messages directly with the source code and the appropriate standards
 Provides cross references via further HTML links to the appropriate rule definition and
explanatory examples
 Produces code quality reports detailing the number and type of violations that occurred in
each file whilst linking them to the appropriate part of the source code
 Generates textual and graphical software metric reports that highlight code testability,
maintainability and portability
 Draws code visualization diagrams that enhance source code comprehension and
simplify the review process
 Integrates with configuration management tools
 Allows users to tailor or add checks appropriate to individual company standards or
conventions
Benefits
 Ensures all code complies with statically enforceable standards
 Allows tailoring and extension of the rules to meet local requirements
 Educates developers with regard to "safe" language usage
 Offers an automatic, repeatable and efficient code verification method
 Establishes a software quality benchmark against which subsequent revisions of code
can be measured and compared
 Enhances source code comprehension
 Improves software testability and maintainability
 Improves code portability
 Prevents coding and implementation errors from reaching the software testing phase
 Identifies software issues that may not otherwise be identified
 Reduces software development time and cost
 Increases software quality
 Supports software validation, software process maturity and various

4. Test design techniques (K3)

Learning objectives for test design techniques


The objectives identify what you will be able to do following the completion of each module.

4.1 Identifying test conditions and designing test cases (K3)


 Differentiate between a test design specification, test case specification and test
procedure specification. (K1)
 Compare the terms test condition, test case and test procedure. (K2)
 Write test cases: (K3)
o Showing a clear traceability to the requirements;
o Containing an expected result.
 Translate test cases into a well-structured test procedure specification at a level of detail
relevant to the knowledge of the testers. (K3)
 Write a test execution schedule for a given set of test cases, considering prioritization,
and technical and logical dependencies. (K3)

4.2 Categories of test design techniques (K2)


 Recall reasons that both specification-based (black-box) and structure-based (white-box)
approaches to test case design are useful, and list the common techniques for each. (K1)
 Explain the characteristics and differences between specification-based testing,
structure-based testing and experience-based testing. (K2)

4.3 Specification-based or black-box techniques (K3)


 Write test cases from given software models using the following test design techniques:
(K3)
o Equivalence partitioning;
o Boundary value analysis;
o Decision tables;
o State transition diagrams.
 Understand the main purpose of each of the four techniques, what level and type of
testing could use the technique, and how coverage may be measured. (K2)
 Understand the concept of use case testing and its benefits. (K2)

4.4 Structure-based or white-box techniques (K3)


 Describe the concept and importance of code coverage. (K2)
 Explain the concepts of statement and decision coverage, and understand that these
concepts can also be used at other test levels than component testing (e.g. on business
procedures at system level). (K2)
 Write test cases from given control flows using the following test design techniques: (K3)
 Statement testing;
 Decision testing.
 Assess statement and decision coverage for completeness. (K3)

4.5 Experience-based techniques (K2)


 Recall reasons for writing test cases based on intuition, experience and knowledge about common
defects. (K1)
 Compare experience-based techniques with specification-based testing techniques. (K2)

4.6 Choosing test techniques (K2)


 List the factors that influence the selection of the appropriate test design technique for a particular
kind of problem, such as the type of system, risk, customer requirements, models for use case
modeling, requirements models or tester knowledge. (K2)

4.1 Identifying test conditions and designing test cases (K3)

Terms
Test cases, test case specification, test condition, test data, test procedure specification, test
script, traceability.

Terms Definition
Test cases:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of
testing. A test case will consist of information such as the requirement being tested, test steps,
verification steps, prerequisites, expected outputs, test environment, etc. More formally, it is a set
of inputs, execution preconditions, and expected outcomes developed for a particular objective,
such as to exercise a particular program path or to verify compliance with a specific requirement.
Test case specification:
A document specifying the test approach for a software feature or combination of
features, and the inputs, predicted results and execution conditions for the associated tests.

Test condition:
An item or event of a component or system that could be verified by one or more test
cases, e.g. a function, transaction, quality characteristic or structural element.

Test procedure specification:


A document detailing how the tester will physically run the test, the physical set-up required, and the
procedure steps that need to be followed.
Test script:
Commonly used to refer to a test procedure specification, especially an automated one.
Traceability:
The ability to trace the relationship between test requirements (test conditions) and test cases, often recorded in a traceability matrix.

Background
The process of identifying test conditions and designing tests consists of a number of steps:
 Designing tests by identifying test conditions.
 Specifying test cases.
 Specifying test procedures.
The process can be done in different ways, from very informal with little or no documentation, to
very formal (as it is described in this section). The level of formality depends on the context of the
testing, including the organization, the maturity of testing and development processes, time
constraints, and the people involved.
During test design, the test basis documentation is analyzed in order to determine what
to test, i.e. to identify the test conditions. A test condition is defined as an item or event that could
be verified by one or more test cases (e.g. a function, transaction, quality characteristic or
structural element).
Establishing traceability from test conditions back to the specifications and requirements
enables both impact analysis, when requirements change, and requirements coverage to be
determined for a set of tests. During test design the detailed test approach is implemented based
on, among other considerations, the risks identified (see Chapter 5 for more on risk analysis).
During test case specification the test cases and test data are developed and described
in detail by using test design techniques. A test case consists of a set of input values, execution
preconditions, expected results and execution post-conditions, developed to cover certain test
condition(s). The ‘Standard for Software Test Documentation’ (IEEE 829) describes the content of
test design specifications and test case specifications.
Expected results should be produced as part of the specification of a test case and
include outputs, changes to data and states, and any other consequences of the test. If expected
results have not been defined then a plausible, but erroneous, result may be interpreted as the
correct one. Expected results should ideally be defined prior to test execution. The test cases are
put in an executable order; this is the test procedure specification.
The test procedure (or manual test script) specifies the sequence of action for the
execution of a test. If tests are run using a test execution tool, the sequence of actions is
specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a
test execution schedule that defines the order in which the various test procedures, and possibly
automated test scripts, are executed, when they are to be carried out and by whom. The test
execution schedule will take into account such factors as regression tests, prioritization, and
technical and logical dependencies.

4.2 Categories of test design techniques (K2)


Terms
Black-box techniques, experience-based techniques, specification-based techniques, structure
based techniques, white-box techniques.

Background
The purpose of a test design technique is to identify test conditions and test cases.
It is a classic distinction to denote test techniques as black box or white box. Black-box techniques
(also called specification-based techniques) are a way to derive and select test conditions or test cases based
on an analysis of the test basis documentation, whether functional or non-functional, for a component or
system without reference to its internal structure. White-box techniques (also called structural or structure-
based techniques) are based on an analysis of the internal structure of the component or system.
Some techniques fall clearly into a single category; others have elements of more than one
category. This syllabus refers to specification-based or experience-based approaches as black-box
techniques and structure-based as white-box techniques.
Common features of specification-based techniques:
 Models, either formal or informal, are used for the specification of the problem to be solved, the
software or its components.
 From these models test cases can be derived systematically.

Common features of structure-based techniques:


 Information about how the software is constructed is used to derive the test cases, for example,
code and design.
 The extent of coverage of the software can be measured for existing test cases, and further test
cases can be derived systematically to increase coverage.

Common features of experience-based techniques:


 The knowledge and experience of people are used to derive the test cases.
 Knowledge of testers, developers, users and other stakeholders about the software, its usage and
its environment;
 Knowledge about likely defects and their distribution.

4.3 Specification-based or black-box techniques (K3)


Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing,
use case testing.

Specification Based Testing refers to the process of testing a program based on what its
specification says its behavior should be. Specification-based testing is also called Black Box
Testing. In particular, we can develop test cases based on the specification of the program's
behavior, without seeing an implementation of the program. Furthermore, we can develop test
cases before the program even exists!

Consider the following specification for a program:


Write a program that simulates a pocket calculator. The input is an arithmetic expression
that contains only integers and the arithmetic operators +, -, *, /, %, and ** (exponentiation).
Assume that the input is written in infix notation. Report an error if the input contains characters
other than those mentioned above. The expression may be as long as 1,000 characters and as
short as 3 characters (e.g., 3+2). The program reads input entered at the terminal and prints the
expression's value.

Without writing a program to solve this problem, based only on the specification given, and
without knowing anything about the ultimate implementation, generate a set of test data that you
think would be sufficient to test a program written in accordance with this specification. If you
think the specification is incomplete in any way, state what assumptions you are making about
how it should be completed or clarified.
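
For illustration only, a candidate test set derived purely from this specification might look like the
following Python sketch. The expected results marked as assumptions would have to be confirmed
by clarifying the specification; none of the values below are prescribed by the exercise.

# Hypothetical black-box test data for the pocket-calculator specification.
# Each entry is (input_expression, expected_outcome); the expected values are
# assumptions based on the specification text, not on any implementation.
calculator_tests = [
    ("3+2",             "5"),      # minimum length (3 characters), valid
    ("2**3%5",          "3"),      # assumes conventional precedence (** before %)
    ("7/2",             "3"),      # assumption: integer division, needs clarification
    ("1+" + "1+" * 498 + "1", None),  # 999 characters, near the 1,000-character limit
    ("3+a",             "error"),  # illegal character
    ("42",              "error"),  # shorter than 3 characters, outside the specification
]

for expression, expected in calculator_tests:
    print(len(expression), expression[:20], "->", expected)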

Features:
 Models, either formal or informal, are used for the specification of the problem to be
solved, the software or its components.
 From these models test cases can be derived systematically.

What is Black Box Testing?


 Black-box testing relies on the specification of the system or component being tested to
derive test cases. The system is a “black box” whose behaviour can only be determined by
studying its inputs and the related outputs. Another name for this is Functional testing or
Specification Based testing, because mathematical functions can be specified using only
their inputs and outputs.
 Black-box test design treats the system as a "black-box", so it doesn't explicitly use
knowledge of the internal structure. Black-box test design is usually described as
focusing on testing functional requirements. Synonyms for Black box include: behavioral,
functional, opaque-box, and closed-box.

4.3.1 Equivalence partitioning (K3)


Inputs to the software or system are divided into groups that are expected to exhibit similar
behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be
found for both valid data and invalid data, i.e. values that should be rejected. Partitions can also be
identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface
parameters (e.g. during integration testing). Tests can be designed to cover partitions. Equivalence
partitioning (EP) is applicable at all levels of testing.
Equivalence partitioning as a technique can be used to achieve input and output coverage. It can
be applied to human input, input via interfaces to a system, or interface parameters in integration testing.

Why learn equivalence partitioning?


Equivalence partitioning drastically cuts down the number of test cases required to test
a system reasonably. It is an attempt to find the most errors with the smallest number of test
cases.
 Input data for a program unit usually falls into a number of partitions.
e.g. all negative integers, zero, all positive numbers.
 Each partition of input data makes the program behave in a similar way.
 Two test cases based on members from the same partition are likely to reveal the same
bugs.

Equivalence partitioning divides the input domain of a program into classes. For each of
these equivalence classes, the set of data should be treated the same by the module under test
and should produce the same answer. Test cases should be designed so the inputs lie within
these equivalence classes.
(Beizer, 1995) For example, for tests of “Go to Jail” the most important thing is whether the player
has enough money to pay the $50 fine. The input domain can therefore be partitioned into two
equivalence classes: players with less than $50, and players with $50 or more.

In the case of Equivalence class partitioning, the following must be considered

1. The tester must consider both valid and invalid equivalence class
2. The derivation of input or outputs equivalence class is a heuristic process.

List of conditions
1. If an input condition of software-under-test is specified as a range of values, select one
equivalence class that covers the allowed range and two invalid equivalence classes, one
outside each end of the range.
a. For e.g., for a range i.e. 1-499, select one valid equivalence class that includes
all the values from 1 to 499. Select a second equivalence class that consists of
all values less than 1, and a third equivalence class that consists of all values
greater than 499.
2. If an input condition of software-under-test is specified as a number of values, select one
valid equivalence class that covers all allowed number of values and two invalid
equivalence classes that are outside each end of the allowed number.
a. For e.g., if the specification for a real estate related module says that a house
can have one to four owners, then we select one valid equivalence class that includes
all the valid number of owners and then two invalid equivalence classes for less
than one owner and more than four owners.

3. If an input condition of software-under-test is specified as a set of valid input values,


select one valid equivalence class that contains all the members of the set and one
invalid equivalence class for any value outside the set.
a. For e.g., if the specification states that the colors allowed are RED, GREEN and
BLUE then select one valid equivalence class that includes the set RED, GREEN
and BLUE and one invalid equivalence class for all other values.
4. If an input condition of a software-under-test is specified as a ‘must be’ condition then
select one valid equivalence class to represent the ‘must be’ condition and one invalid
equivalence class that does not contain the ‘must be’ condition
For e.g., if the specification for a module states that the first character must be a letter then select
one valid equivalence class where the character is a letter and one invalid equivalence class where
the character is not a letter.
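
The following Python sketch illustrates rule 1 for the 1-499 range. The partition labels, the
representative values and the accepts() function are invented for illustration; any other member
of each partition could be chosen instead.

# Equivalence partitioning sketch for an input specified as the range 1-499.
# One representative value is picked from each partition; every member of a
# partition is assumed to be processed in the same way as the others.
partitions = {
    "valid: 1..499":      250,   # any value inside the allowed range
    "invalid: below 1":   0,     # e.g. zero or a negative number
    "invalid: above 499": 500,   # any value greater than the upper limit
}

def accepts(value):
    """Hypothetical behaviour of the software under test."""
    return 1 <= value <= 499

for label, representative in partitions.items():
    expected = label.startswith("valid")
    result = "pass" if accepts(representative) == expected else "FAIL"
    print(label, representative, "->", result)
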
4.3.2 Boundary value analysis (K3)
Behavior at the edge of each equivalence partition is more likely to be incorrect, so
boundaries are an area where testing is likely to yield defects. The maximum and minimum
values of a partition are its boundary values. A boundary value for a valid partition is a valid
boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be
designed to cover both valid and invalid boundary values. When designing test cases, a value on
each boundary is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and
its defect finding capability is high; detailed specifications are helpful.
This technique is often considered an extension of equivalence partitioning and can be
used on input by humans as well as, for example, on timing or table boundaries. Boundary values
may also be used for test data selection.

Boundary value is defined as a data value that corresponds to a minimum or maximum


input, internal, or output value specified for a system or component (IEEE, 1990).
The equivalence class partitioning method requires that the tester has access to a specification of
input/output behavior for the target software. The test cases developed based on equivalence
class partitioning can be strengthened by use of another technique called Boundary Value
Analysis. Boundary Value Analysis requires that the tester select elements close to the edges, so
that both the upper and lower edges of the equivalence class are covered by the test cases.
In the “Go to Jail” example above, the boundary of the class is at $50. We should create
test cases for the player having $49, $50, and $51. These test cases will help to find common
off-by-one errors.

List of conditions
1. If an input condition of software-under-test is specified as a range of values, develop valid
test cases for the ends of the range , and invalid test cases for possibilities just above
and below the ends of the range.
a. For e.g., if a specification states that an input value for a module must lie in the
range between -1.0 and +1.0, develop valid tests for the ends of the range, as well
as invalid tests for values just beyond the ends of the range. The resulting values
would be -1.0 and 1.0 (valid) and -1.1 and 1.1 (invalid).
2. If an input condition of software-under-test is specified as a number of values, develop
valid test cases for the minimum and maximum numbers as well as invalid test cases that
include one lesser and one greater than the maximum and minimum.
a. For e.g., if the specification for a real estate related module says that a house
can have one to four owners, tests that include 0,1 owners and 4,5 owners would be
developed.
3. If an input or output of the software-under-test is an ordered set, such as a table or a
linear list, develop tests that focus on the first and last elements of the set.
An example of the Application of Equivalence Class Partitioning and Boundary Value
Analysis.
Suppose we are testing a module that allows a user to enter new widget identifiers into a
widget database. The input specification for the module states that a widget identifier should
consist of 3-15 alphanumeric characters of which the first two must be letters. We have three
separate input conditions,
1. It must consist of alphanumeric characters
2. The range for the total number of characters is between 3 and 15
3. The first two characters must be letters
In the case of Equivalence Class Partitioning we consider condition 1 and derive two equivalence
classes

EC1 – The widget identifier is alphanumeric, valid


EC2 – The widget identifier is not alphanumeric, invalid

Then we treat condition 2, the range of allowed characters 3-15

EC3 – The widget identifier has between 3 and 15 characters, valid


EC4 – The widget identifier has less than 3 characters, invalid
EC5 – The widget identifier has greater than 15 characters, invalid

Then we treat condition 3, case for the first 2 characters

EC6 – The first two characters are letters, valid


EC7 – The first two characters are not letters, invalid

In the case of Boundary Value Analysis, a simple set of abbreviation can be used to represent the
bound groups. For e.g.

BLB – a value just below the lower bound


LB – a value on the lower boundary
ALB – a value just above the lower boundary
BUB – a value just below the upper bound
UB – a value on the upper boundary
AUB – a value just above the upper boundary

For our example, the values would be


BLB – 2
LB – 3
ALB – 4
BUB – 14
UB – 15
AUB – 16
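
The following Python sketch derives these six boundary lengths for the widget identifier and checks
them against a hypothetical validation rule taken from the input specification; the identifier strings
themselves are invented for illustration.

# Boundary value analysis sketch for the widget identifier length
# (3 to 15 alphanumeric characters, first two must be letters).
LOWER, UPPER = 3, 15

boundary_lengths = {
    "BLB": LOWER - 1,   # 2  - just below the lower bound, invalid
    "LB":  LOWER,       # 3  - on the lower boundary, valid
    "ALB": LOWER + 1,   # 4  - just above the lower boundary, valid
    "BUB": UPPER - 1,   # 14 - just below the upper bound, valid
    "UB":  UPPER,       # 15 - on the upper boundary, valid
    "AUB": UPPER + 1,   # 16 - just above the upper boundary, invalid
}

def is_valid_identifier(identifier):
    """Hypothetical validation rule derived from the input specification."""
    return (identifier.isalnum()
            and LOWER <= len(identifier) <= UPPER
            and identifier[:2].isalpha())

for name, length in boundary_lengths.items():
    candidate = "AB" + "1" * (length - 2)   # first two characters are letters
    expected = LOWER <= length <= UPPER
    print(name, length, is_valid_identifier(candidate) == expected)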

4.3.3 Decision table testing (K3)


Decision tables are a good way to capture system requirements that contain logical conditions, and
to document internal system design. They may be used to record complex business rules that a system is to
implement. The specification is analyzed, and conditions and actions of the system are identified. The input
conditions and actions are most often stated in such a way that they can either be true or false (Boolean).
The decision table contains the triggering conditions, often combinations of true and false for all input
conditions, and the resulting actions for each combination of conditions. Each column of the table
corresponds to a business rule that defines a unique combination of conditions that result in the execution
of the actions associated with that rule. The coverage standard commonly used with decision table testing is
to have at least one test per column, which typically involves covering all combinations of triggering
conditions.
The strength of decision table testing is that it creates combinations of conditions that might not
otherwise have been exercised during testing. It may be applied to all situations when the action of the
software depends on several logical decisions.

A decision table is a two-dimensional matrix with one row for each possible action and
one row for each relevant condition and one column for each combination of condition states.
Decision tables can very concisely and rigorously show complex conditions and their resulting
actions while remaining comprehensible to a human reader.
The first set of rows indicates the possible actions that may be taken. An "X" in an action
row shows that the action will be taken under the condition states indicated in the column below.
In a limited-entry decision table, conditions are binary, restricting condition evaluations to "yes" and
"no". This results in a number of columns equal to 2^n, where n is the number of conditions. This
can quickly result in a huge number of columns as the number of conditions rises. Fortunately, it is
unusual that every combination of conditions results in a different action.

Consider the following example,


If the flight is more than half full and costs more than $350 per seat, we serve free
cocktails unless it is a domestic flight. We charge for cocktails on all domestic flights, that is, on all
domestic flights where we serve cocktails. Cocktails are served on all flights, domestic or not, that
are more than half-full.
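
The original figure for this example is not reproduced here. The following Python sketch shows one
plausible reconstruction of the decision table (the treatment of half-full, non-domestic flights costing
$350 or less is an assumption) and derives one test case per column.

# Plausible reconstruction of the cocktail decision table.  "-" means the
# condition in that row does not affect the actions for that column.
#
# Conditions:              R1   R2   R3   R4
#  more than half full?     F    T    T    T
#  fare > $350 per seat?    -    T    -    F
#  domestic flight?         -    F    T    F
# Actions:
#  serve cocktails               X    X    X
#  cocktails are free            X
#  charge for cocktails               X    X
rules = [
    {"half_full": False, "fare_over_350": None,  "domestic": None,
     "serve": False, "free": False},
    {"half_full": True,  "fare_over_350": True,  "domestic": False,
     "serve": True,  "free": True},
    {"half_full": True,  "fare_over_350": None,  "domestic": True,
     "serve": True,  "free": False},
    {"half_full": True,  "fare_over_350": False, "domestic": False,
     "serve": True,  "free": False},   # assumption: not free when fare <= $350
]

# One test case per column: pick concrete values for the "don't care" conditions.
for i, rule in enumerate(rules, start=1):
    test_input = {k: (rule[k] if rule[k] is not None else False)
                  for k in ("half_full", "fare_over_350", "domestic")}
    charge = rule["serve"] and not rule["free"]
    print("Test", i, test_input,
          "-> serve:", rule["serve"], "free:", rule["free"], "charge:", charge)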

In the table, the use of "-" means that the condition in that row does not affect the action to
be taken. Looking at the first column, we see that no action will be taken, no matter the state of
the last two conditions, as long as the first condition is false. Each "-" notation reduces the number
of columns necessary and increases the comprehensibility of the table. In this example the 2^3 = 8
possible combinations are reduced to 4 columns.

4.3.4 State transition testing (K3)


A system may exhibit a different response depending on current conditions or previous history (its state). In
this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view
the software in terms of its states, transitions between states, the inputs or events that trigger state changes
(transitions) and the actions, which may result from those transitions. The states of the system or object
under test are separate, identifiable and finite in number. A state table shows the relationship between the
states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a
typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences
of transitions or to test invalid transitions. State transition testing is much used within the embedded
software industry and technical automation in general. However, the technique is also suitable for modeling
a business object having specific states or testing screen-dialogue flows (e.g. for internet applications or
business scenarios).
Once the states of the system are identified, a test case is written to exercise the triggers
or stimuli that cause a transition from one state to another. The tests can be designed using a
finite-state diagram or an equivalent state table.
Key Concepts
• State: a condition in which a system is waiting for one or multiple events
• Transition: represents change from one state to another caused by an event
• Event: input that may cause a transition
• Action: operation initiated because of a state change (occur on transitions)
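
A minimal Python sketch of these concepts, using an invented two-state example (a document that
can be "draft" or "published"); the states, events and actions are illustrative only, not taken from
the syllabus.

# State transition sketch: a hypothetical document workflow with two states.
# Keys are (current_state, event); values are (next_state, action).
transitions = {
    ("draft",     "submit"):   ("published", "notify_subscribers"),
    ("published", "withdraw"): ("draft",     "remove_from_site"),
}

def fire(state, event):
    """Return (next_state, action), or None when the transition is invalid."""
    return transitions.get((state, event))

# Cover every valid transition once, plus one invalid transition.
test_cases = [
    ("draft",     "submit",   ("published", "notify_subscribers")),
    ("published", "withdraw", ("draft",     "remove_from_site")),
    ("draft",     "withdraw", None),   # invalid transition should be rejected
]

for state, event, expected in test_cases:
    result = fire(state, event)
    print(state, "+", event, "->", result, "OK" if result == expected else "FAIL")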

4.3.5 Use case testing (K2)


Tests can be specified from use cases or business scenarios. A use case describes interactions
between actors, including users and the system, which produce a result of value to a system user.
Each use case has preconditions, which need to be met for a use case to work successfully.
Each use case terminates with post-conditions, which are the observable results and final state of
the system after the use case has been completed. A use case usually has a mainstream (i.e. most likely)
scenario, and sometimes alternative branches.
Use cases describe the “process flows” through a system based on its actual likely use, so the test cases
derived from use cases are most useful in uncovering defects in the process flows during real world use of
the system. Use cases, often referred to as scenarios, are very useful for designing acceptance tests with
customer/user participation. They also help uncover integration defects caused by the interaction and
interference of different components, which individual component testing would not see.

4.4 Structure-based or white-box techniques (K3)


Terms
Code coverage, decision coverage, statement coverage, structural testing, structure-based testing,
white-box testing.

Background
Structure-based testing/white-box testing is based on an identified structure of the software or
system, as seen in the following examples:
 Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
 Integration level: the structure may be a call tree (a diagram in which modules call other modules).
 System level: the structure may be a menu structure, business process or web page structure.
In this section, two code-related structural techniques for code coverage, based on statements and
decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the
alternatives for each decision.

Code coverage analysis is a structural testing technique (AKA glass box testing and white box
testing). Structural testing compares test program behavior against the apparent intention of the
source code.
Structural testing examines how the program works, taking into account possible pitfalls
in the structure and logic.
Structural testing is also called path testing since you choose test cases that cause paths
to be taken through the structure of the program.
At first glance, structural testing seems unsafe. Structural testing cannot find errors of
omission. However, requirements specifications sometimes do not exist, and are rarely complete.
This is especially true near the end of the product development time line when the
requirements specification is updated less frequently and the product itself begins to take over the
role of the specification.

4.4.1 Statement testing and coverage (K3)


In component testing, statement coverage is the assessment of the percentage of executable
statements that have been exercised by a test case suite. Statement testing derives test cases to execute
specific statements, normally to increase statement coverage.

To achieve statement coverage, every executable statement in the program is invoked at least
once during software testing. Achieving statement coverage shows that all code statements are
reachable. Statement coverage is considered a weak criterion because it is insensitive to some
control structures.

Consider the following code segment (ref. 12):


if (x > 1) and (y = 0) then z := z / x; end if;
if (z = 2) or (y > 1) then z := z + 1; end if;

By choosing x = 2, y = 0, and z = 4 as input to this code segment, every statement is executed at


least once
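
The same segment can be sketched in Python to show that this single test exercises every
statement; the function wrapper and the concrete values are for illustration only.

def segment(x, y, z):
    # Statement guarded by the first decision.
    if x > 1 and y == 0:
        z = z / x          # executed when x=2, y=0: z becomes 2.0
    # Statement guarded by the second decision.
    if z == 2 or y > 1:
        z = z + 1          # executed because z is now 2
    return z

# One test case is enough for 100% statement coverage of this segment,
# even though the False outcome of each decision is never exercised.
print(segment(2, 0, 4))    # both guarded statements run; result is 3.0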

4.4.2 Decision testing and coverage (K3)


Decision coverage, related to branch testing, is the assessment of the percentage of decision
outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite.
Decision testing derives test cases to execute specific decision outcomes, normally to increase decision
coverage.
Decision testing is a form of control flow testing as it generates a specific flow of control through
the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage
guarantees 100% statement coverage, but not vice versa.

This measure reports whether Boolean expressions tested in control structures (such as
the if-statement and while-statement) evaluated to both true and false. The entire Boolean
expression is considered one true-or-false predicate regardless of whether it contains logical-and
or logical-or operators.
Additionally, this measure includes coverage of switch-statement cases, exception
handlers, and interrupts handlers.
Also known as: branch coverage, all-edges coverage, basis path coverage, C2,
decision-decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
Decision coverage requires at least two test cases per decision: one for a true outcome
and another for a false outcome.
For simple decisions (i.e., decisions with a single condition), decision coverage ensures
complete testing of control constructs. But, not all decisions are simple. For the decision (A or B),
test cases (TF) and (FF) will toggle the decision outcome between true and false.
However, the effect of B is not tested; that is, those test cases cannot distinguish
between the decision (A or B) and the decision A.
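
Continuing the illustrative segment from the previous section, a second test case is needed to
reach 100% decision coverage, because each decision must also be seen to evaluate to false; the
values chosen are illustrative only.

def segment(x, y, z):
    if x > 1 and y == 0:   # decision 1
        z = z / x
    if z == 2 or y > 1:    # decision 2
        z = z + 1
    return z

# Test 1 drives both decisions to True; test 2 drives both to False.
# Together they achieve 100% decision (branch) coverage, whereas test 1
# alone already gave 100% statement coverage.
print(segment(2, 0, 4))    # decisions: True, True   -> 3.0
print(segment(1, 1, 1))    # decisions: False, False -> 1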

4.4.3 Other structure-based techniques (K1)


There are stronger levels of structural coverage beyond decision coverage, for example, condition
coverage and multiple condition coverage.
The concept of coverage can also be applied at other test levels (e.g. at integration level) where the
percentage of modules, components or classes that have been exercised by a test case suite could be
expressed as module, component or class coverage.
Tool support is useful for the structural testing of code.

Condition Coverage
Condition coverage requires that each condition in a decision take on all possible
outcomes at least once (to overcome the problem in the previous example), but does not require
that the decision take on all possible outcomes at least once.
In this case, for the decision (A or B) test cases (TF) and (FT) meet the
coverage criterion, but do not cause the decision to take on all possible outcomes. As with
decision coverage, a minimum of two tests cases is required for each decision.
Condition coverage reports the true or false outcome of each Boolean sub-expression,
separated by logical-and and logical-or if they occur. Condition coverage measures the
sub-expressions independently of each other.
This measure is similar to decision coverage but has better sensitivity to the control flow.
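
A small Python sketch of the (A or B) example above: the two tests toggle each condition, so
condition coverage is complete, yet the decision never evaluates to false, so decision coverage is
not.

# Condition coverage vs. decision coverage for the decision (A or B).
tests = [(True, False), (False, True)]   # each condition takes both outcomes

decision_outcomes = set()
for a, b in tests:
    decision_outcomes.add(a or b)

print("condition A outcomes:", {a for a, _ in tests})   # {True, False}
print("condition B outcomes:", {b for _, b in tests})   # {True, False}
print("decision outcomes:   ", decision_outcomes)       # only {True}: decision coverage incomplete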

However, full condition coverage does not guarantee full decision coverage. For example,
consider the following C++ fragment (assume a and b are Boolean expressions):
bool f(bool e) { return false; }      // always returns false
bool flags[2] = { false, false };     // both elements false
if (f(a && b)) ...                    // f(...) is always false
if (flags[int(a && b)]) ...           // flags[0] and flags[1] are both false
if ((a && b) ? false : false) ...     // both arms of ?: yield false

All three of the if-statements above branch false regardless of the values of a and b.
However if you exercise this code with a and b having all possible combinations of values,
condition coverage reports full coverage.

Multiple Condition Coverage


Multiple condition coverage reports whether every possible combination of boolean sub-
expressions occurs. As with condition coverage, the sub-expressions are separated by logical-
and and logical-or, when present. The test cases required for full multiple condition coverage of a
condition are given by the logical operator truth table for the condition.
For languages with short circuit operators such as C, C++, and Java, an advantage of
multiple condition coverage is that it requires very thorough testing. For these languages, multiple
condition coverage is very similar to condition coverage.
A disadvantage of this measure is that it can be tedious to determine the minimum set of
test cases required, especially for very complex Boolean expressions. An additional disadvantage
of this measure is that the number of test cases required could vary substantially among
conditions that have similar complexity. For example, consider the following two C/C++/Java
conditions.

a && b && (c || (d && e))


((a || b) && (c || d)) && e

To achieve full multiple condition coverage, the first condition requires 6 test cases while
the second requires 11. Both conditions have the same number of operands and operators. As
with condition coverage, multiple condition coverage does not include decision coverage.
For languages without short circuit operators such as Visual Basic and Pascal, multiple
condition coverage is effectively path coverage (described below) for logical expressions, with the
same advantages and disadvantages.

Consider the following Visual Basic code fragment.


If a And b Then
...

Multiple condition coverage requires four test cases, for each of the combinations of a
and b both true and false. As with path coverage each additional logical operator doubles the
number of test cases required.
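
A short Python sketch enumerating the combinations required for the simple decision (a And b)
above; with two operands there are 2^2 = 4 combinations, and each additional logical operator
roughly doubles that number.

from itertools import product

# Multiple condition coverage for the decision (a and b): every combination
# of the Boolean operands must be exercised at least once.
for a, b in product([True, False], repeat=2):
    print(f"a={a!s:5} b={b!s:5} -> decision {a and b}")

# Two operands give 2**2 = 4 combinations; each extra logical operator
# roughly doubles the number of combinations required.
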
4.5 Experience-based techniques (K2)

Terms
Error guessing, exploratory testing.

Background
Perhaps the most widely practiced technique is error guessing. Tests are derived from the tester’s
skill and intuition and their experience with similar applications and technologies. When used to augment
systematic techniques, intuitive testing can be useful to identify special tests not easily captured by formal
techniques, especially when applied after more formal approaches. However, this technique may yield
widely varying degrees of effectiveness, depending on the testers’ experience.
A structured approach to the error guessing technique is to enumerate a list of possible errors and
to design tests that attack these errors. These defect and failure lists can be built based on experience,
available defect and failure data, and from common knowledge about why software fails.
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a
test charter containing test objectives, and carried out within time boxes.
It is an approach that is most useful where there are few or inadequate specifications and severe time
pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the
test process, to help ensure that the most serious defects are found.

4.6 Choosing test techniques (K2)

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of
system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test
objective, documentation available, knowledge of the testers, time and budget, development life cycle, use
case models and previous experience of types of defects found. Some techniques are more applicable to
certain situations and test levels; others are applicable to all test levels.

5. Test management (K3)


Learning objectives for test management
The objectives identify what you will be able to do following the completion of each module.

5.1 Test organization (K2)


 Recognize the importance of independent testing. (K1)
 List the benefits and drawbacks of independent testing within an organization. (K2)
 Recognize the different team members to be considered for the creation of a test team. (K1)
 Recall the tasks of typical test leader and tester. (K1)

5.2 Test planning and estimation (K2)


 Recognize the different levels and objectives of test planning. (K1)
 Summarize the purpose and content of the test plan, test design specification and test procedure
documents according to the ‘Standard for Software Test Documentation’ (IEEE 829). (K2)
 Recall typical factors that influence the effort related to testing. (K1)
 Differentiate between two conceptually different estimation approaches: the metrics-based
approach and the expert-based approach. (K2)
 Differentiate between the subject of test planning for a project, for individual test levels (e.g.
system test) or specific test targets (e.g. usability test), and for test execution. (K2)
 List test preparation and execution tasks that need planning. (K1)
 Recognize/justify adequate exit criteria for specific test levels and groups of test cases (e.g. for
integration testing, acceptance testing or test cases for usability testing). (K2)

5.3 Test progress monitoring and control (K2)


 Recall common metrics used for monitoring test preparation and execution. (K1)
 Understand and interpret test metrics for test reporting and test control (e.g. defects found and
fixed, and tests passed and failed). (K2)
 Summarize the purpose and content of the test summary report document according to the
‘Standard for Software Test Documentation’ (IEEE 829). (K2)

5.4 Configuration management (K2)


 Summarize how configuration management supports testing. (K2)

5.5 Risk and testing (K2)


 Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (K2)
 Remember that risks are determined by likelihood (of happening) and impact (harm
resulting if it does happen). (K1)
 Distinguish between the project and product risks. (K2)
 Recognize typical product and project risks. (K1)
 Describe, using examples, how risk analysis and risk management may be used for test
planning. (K2)

5.6 Incident Management (K3)


 Recognize the content of the ‘Standard for Software Test Documentation’ (IEEE 829) incident
report. (K1)
 Write an incident report covering the observation of a failure during testing. (K3)
5.1 Test organization (K2)

Terms
Tester, test leader, test manager.

5.1.1 Test organization and independence (K2)


The effectiveness of finding defects by testing and reviews can be improved by using independent
testers. Options for independence are:
 Independent testers within the development teams.
 Independent test team or group within the organization, reporting to project management or
executive management.
 Independent testers from the business organization, user community and IT.
 Independent test specialists for specific test targets such as usability testers, security testers or
certification testers (who certify a software product against standards and regulations).
 Independent testers outsourced or external to the organization.
For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some
or all of the levels done by independent testers. Development staff may participate in testing, especially at
the lower levels, but their lack of objectivity often limits their effectiveness.
The independent testers may have the authority to require and define test processes and rules,
but testers should take on such process-related roles only in the presence of a clear management
mandate to do so.

The benefits of independence include:


 Independent testers see other and different defects, and are unbiased.
 An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:
 Isolation from the development team (if treated as totally independent).
 Independent testers may be the bottleneck as the last checkpoint.
 Developers lose a sense of responsibility for quality.
Testing tasks may be done by people in a specific testing role, or may be done by
someone in another role, such as a project manager, quality manager, developer, business
and domain expert, infrastructure or IT operations.
It is important for a software organization to have an independent testing group. The
group should have a formalized position in the organizational hierarchy. A reporting structure
should be established and resources allocated to the group. The group should be staffed by
people who have the skills and motivation to:
 Maintain testing policy statements
 Plan the testing efforts
 Monitor and track testing efforts so that they are on time and within the budget
 Measure process and product attributes
 Provide management with independent product and process quality information
 Design and execute tests with no duplication of effort
 Identify the area that can be automated
 Participate in reviews to ensure quality
 Work with analysts, designers, coders and clients to ensure quality goals are met
 Maintain a repository of test-related information
 Support process improvement efforts
5.1.2 Tasks of the test leader and tester (K1)
In this syllabus two test positions are covered, test leader and tester. The activities and tasks
performed by people in these two roles depend on the project and product context, the people in the roles,
and the organization. Sometimes the test leader is called a test manager or test coordinator. A project
manager, a development manager, a quality assurance manager or the manager of a test group may perform
the role of the test leader. In larger projects two positions may exist: test leader and test manager. Typically
the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:


 Coordinate the test strategy and plan with project managers and others.
 Write or review a test strategy for the project, and test policy for the organization.
 Contribute the testing perspective to other project activities, such as integration planning.
 Plan the tests – considering the context and understanding the risks – including selecting test
approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels,
cycles, approach, and objectives, and planning incident management
 Initiate the specification, preparation, implementation and execution of tests, and monitor and
control the execution.
 Adapt planning based on test results and progress (sometimes documented in status reports) and
take any action necessary to compensate for problems.
 Set up adequate configuration management of testware for traceability.
 Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and
the product.
 Decide what should be automated, to what degree, and how.
 Select tools to support testing and organize any training in tool use for testers.
 Decide about the implementation of the test environment.
 Schedule tests.
 Write test summary reports based on the information gathered during testing.

Typical tester tasks may include:


 Review and contribute to test plans.
 Analyze, review and assess user requirements, specifications and models for testability.
 Create test specifications.
 Set up the test environment (often coordinating with system administration and network
management).
 Prepare and acquire test data.
 Implement tests on all test levels, execute and log the tests, evaluate the results and document the
deviations from expected results.
 Use test administration or management tools and test monitoring tools as required.
 Automate tests (may be supported by a developer or a test automation expert).
 Measure performance of components and systems (if applicable).
 Review tests developed by others.
People who work on test analysis, test design, specific test types or test automation may be
specialists in these roles. Depending on the test level and the risks related to the product and the
project, different people may take over the role of tester, keeping some degree of independence.
Typically, testers at the component and integration level would be developers, testers at the
acceptance test level would be business experts and users, and testers for operational acceptance
testing would be operators.

5.2 Test planning and estimation


Terms
Entry criteria, exit criteria, exploratory testing, test approach, test level, test plan, test procedure,
test strategy.

Planning versus Estimating


Drawing up a plan and drawing up an estimate are not separate activities. It is difficult to
draw up an estimate without having in mind the various activities and the times at which they are
performed. Similarly, it is not advisable to draw up a plan without making allowances for the
estimate; the test manager or leader must continually check whether the plan still corresponds to
the estimate. In practice, these activities appear to be carried out more or less simultaneously.

5.2.1 Test planning (K2)


This section covers the purpose of test planning within development and implementation projects, and for
maintenance activities. Planning may be documented in a project or master test plan, and in separate test
plans for test levels, such as system testing and acceptance testing. Outlines of test planning documents are
covered by the ‘Standard for Software Test Documentation’ (IEEE 829). The test policy of the
organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of
resources influence planning. The further the project and test planning progress, the more information is
available and the more detail can be included in the plan. Test planning is a continuous activity and is
performed in all life cycle processes and activities. Feedback from test activities is used to recognize
changing risks so that planning can be adjusted.

5.2.2 Test planning activities (K2)


Test planning activities may include:
 Defining the overall approach of testing (the test strategy), including the definition of the test
levels and entry and exit criteria.
 Integrating and coordinating the testing activities into the software life cycle activities: acquisition,
supply, development, operation and maintenance.
 Making decisions about what to test, what roles will perform the test activities, when and how the
test activities should be done, how the test results will be evaluated, and when to stop testing (exit
criteria).
 Assigning resources for the different tasks defined.
 Defining the amount, level of detail, structure and templates for the test documentation.
 Selecting metrics for monitoring and controlling test preparation and execution, defect resolution
and risk issues.
 Setting the level of detail for test procedures in order to provide enough information to support
reproducible test preparation and execution.

A test plan is a specific version of a project plan with clauses that meet these requirements. The
main characteristics of a project plan are below:
A project plan can be considered to have five key characteristics that have to be managed:

The plan can be pictured as a circle labelled “scope” surrounded by a triangle whose corners are
marked time, resource and quality, with small arrows marked “risk” at each corner of the triangle.
 Scope: defines what will be covered in a project and what will not be covered
 Resource: what can be used to meet the scope
 Time: what tasks are to be undertaken and when
 Quality: the spread or deviation allowed from a desired standard
 Risk: defines in advance what may happen to drive the plan off course, and what will be
done to recover the situation
The point of a plan is to balance, like a seesaw with scope and quality on one side and time,
resource and risk on the other:
 The scope and quality constraints, against
 The time and resource constraints,
 While minimising the risks.

The international standard IEEE Std 829-1998 gives advice on the various types of test
documentation required for testing including test plans. The test plan section of the standard
defines 16 clauses.
The 16 clauses of the IEEE 829 test plan standard are:
1. Test plan identifier
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.

These can be matched against the five characteristics of a basic plan, with a couple left over that
form part of the plan document itself.
Scope
Scope clauses define what features will be tested.

3. Test Items: The items of software, hardware, and combinations of these that will be
tested.
4. Features to Be Tested: The parts of the software specification to be tested.
5. Features Not to Be Tested: The parts of the software specification to be EXCLUDED
from testing.
Resource
Resource clauses give the overall view of the resources to deliver the tasks.

11. Environmental Needs: What is needed in the way of testing software, hardware,
offices etc.
12. Responsibilities: Who has responsibility for delivering the various parts of the plan.
13. Staffing And Training Needs: The people and skills needed to deliver the plan.
Time
Time clauses specify what tasks are to be undertaken to meet the quality objectives, and when
they will occur.

10. Testing Tasks: The tasks themselves, their dependencies, the elapsed time they will
take, and the resource required.
14. Schedule: When the tasks will take place.
Often these two clauses refer to an appendix or another document that contains the detail.
Quality
Quality clauses define the standard required from the testing activities.

2. Introduction: A high level view of the testing standard required, including what type of
testing it is.
6. Approach: The details of how the testing process will be followed.
7. Item Pass/Fail Criteria: Defines the pass and failure criteria for an item being tested.
9. Test Deliverables: Which test documents and other deliverables will be produced.

Risk
Risk clauses define in advance what could go wrong with a plan and the measures that will be
taken to deal with these problems.

8. Suspension Criteria And Resumption Requirements: This is a particular risk clause


to define under what circumstances testing would stop and restart.
15. Risks And Contingencies: This defines all other risk events, their likelihood, impact
and countermeasures to overcome them.

Plan Clauses
These clauses are parts of the plan structure.

1. Test Plan Identifier: This is a unique name or code by which the plan can be
identified in the project's documentation including its version.
16. Approvals: The signatures of the various stakeholders in the plan, to show they
agree in advance with what it says.

The IEEE 829 standard for a test plan provides a good basic structure. It is not restrictive in that it
can be adapted in the following ways:
 Descriptions of each clause can be tailored to an organisation's needs,
 More clauses can be added,
 More content added to any clause,
 Sub-sections can be defined in a clause,
 Other planning documents can be referred to.
If a properly balanced test plan is created then a project stands a chance of delivering a system
that will meet the user's needs.

5.2.3 Exit criteria (K2)


The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set
of tests has a specific goal.

Typically exit criteria may consist of:


 Thoroughness measures, such as coverage of code, functionality or risk.
 Estimates of defect density or reliability measures.
 Cost.
 Residual risks, such as defects not fixed or lack of test coverage in certain areas.
 Schedules such as those based on time to market.
5.2.4 Test estimation (K2)
Two approaches for the estimation of test effort are covered in this syllabus:
 Estimating the testing effort based on metrics of former or similar projects or based on typical
values.
 Estimating the tasks by the owner of these tasks or by experts.
Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
The testing effort may depend on a number of factors, including:
 Characteristics of the product: the quality of the specification and other information used for test
models (i.e. the test basis), the size of the product, the complexity of the problem domain, the
requirements for reliability and security, and the requirements for documentation.
 Characteristics of the development process: the stability of the organization, tools used, test
process, skills of the people involved, and time pressure.
 The outcome of testing: the number of defects and the amount of rework required.

5.2.5 Test approaches (test strategies) (K2)


One way to classify test approaches or strategies is based on the point in time at which the bulk of the test
design work is begun:
 Preventative approaches, where tests are designed as early as possible.
 Reactive approaches, where test design comes after the software or system has been produced.
Typical approaches or strategies include:
 Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk.
 Model-based approaches, such as stochastic testing using statistical information about failure rates
(such as reliability growth models) or usage (such as operational profiles).
 Methodical approaches, such as failure based (including error guessing and fault-attacks), check-
list based, and quality characteristic based.
 Process- or standard-compliant approaches, such as those specified by industry-specific standards
or the various agile methodologies.
 Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to
events than pre-planned, and where execution and evaluation are concurrent tasks.
 Consultative approaches, such as those where test coverage is driven primarily by the advice and
guidance of technology and/or business domain experts outside the test team.
 Regression-averse approaches, such as those that include reuse of existing test material, extensive
automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.


The selection of a test approach should consider the context, including:
 Risk of failure of the project, hazards to the product and risks of product failure to humans, the
environment and the company.
 Skills and experience of the people in the proposed techniques, tools and methods.
 The objective of the testing endeavor and the mission of the testing team.
 Regulatory aspects, such as external and internal regulations for the development process.
 The nature of the product and the business.

5.3 Test progress monitoring and control (K2)

Terms
Defect density, failure rate, test control, test coverage, test monitoring, test report.

5.3.1 Test progress monitoring (K1)


The purpose of test monitoring is to give feedback and visibility about test activities. Information
to be monitored may be collected manually or automatically and may be used to measure exit criteria, such
as coverage. Metrics may also be used to assess progress against the planned schedule and budget.
Common test metrics include:
 Percentage of work done in test case preparation (or percentage of planned test cases prepared).
 Percentage of work done in test environment preparation.
 Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
 Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
 Test coverage of requirements, risks or code.
 Subjective confidence of testers in the product.
 Dates of test milestones.
 Testing costs, including the cost compared to the benefit of finding the next defect or to run the
next test.
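
As a simple illustration (the figures are made up), a few of these metrics could be derived in
Python from raw counts collected during a test cycle:

# Illustrative only: raw counts gathered during a test cycle
planned_cases, prepared_cases = 200, 150
run_cases, passed_cases = 120, 100
defects_found, kloc = 36, 12.0
requirements_total, requirements_covered = 80, 64

preparation_done = prepared_cases / planned_cases * 100    # % of planned test cases prepared
pass_rate = passed_cases / run_cases * 100                 # % of executed test cases that passed
defect_density = defects_found / kloc                      # defects per 1000 lines of code
req_coverage = requirements_covered / requirements_total * 100

print(f"Preparation: {preparation_done:.0f}%  Pass rate: {pass_rate:.1f}%")
print(f"Defect density: {defect_density:.1f}/KLOC  Requirements coverage: {req_coverage:.0f}%")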

5.3.2 Test Reporting (K2)


Test reporting is concerned with summarizing information about the testing endeavor, including:
 What happened during a period of testing, such as dates when exit criteria were met.
 Analyzed information and metrics to support recommendations and decisions about future actions,
such as an assessment of defects remaining, the economic benefit of continued testing, outstanding
risks, and the level of confidence in tested software.

Metrics should be collected during and at the end of a test level in order to assess:
 The adequacy of the test objectives for that test level.
 The adequacy of the test approaches taken.
 The effectiveness of the testing with respect to its objectives.

The outline of a test summary report is given in ‘Standard for Software Test Documentation’
(IEEE 829)

A test summary report shall have the following structure:


a) Test summary report identifier;
b) Summary;
c) Variances;
d) Comprehensiveness assessment;
e) Summary of results;
f) Evaluation;
g) Summary of activities;
h) Approvals.

The sections shall be ordered in the specified sequence. Additional sections may be
included just prior to Approvals. If some or all of the content of a section is in another document,
then a reference to that material may be listed in place of the corresponding content. The
referenced material must be attached to the test summary report or available to users of the
summary report.
Details on the content of each section are contained in the following sub clauses.
a) Test summary report identifier
Specify the unique identifier assigned to this test summary report.

b) Summary
Summarize the evaluation of the test items. Identify the items tested, indicating their
version/revision level. Indicate the environment in which the testing activities took place.
For each test item, supply references to the following documents if they exist: test plan,
test design specifications, test procedure specifications, test item transmittal reports, test logs,
and test incident reports.

c) Variances
Report any variances of the test items from their design specifications. Indicate any
variances from the test plan, test designs, or test procedures. Specify the reason for each
variance.

d) Comprehensiveness assessment
Evaluate the comprehensiveness of the testing process against the comprehensiveness
criteria specified in the test plan if the plan exists. Identify features or feature combinations that
were not sufficiently tested and explain the reasons.

e) Summary of results
Summarize the results of testing. Identify all resolved incidents and summarize their
resolutions. Identify all unresolved incidents.

f) Evaluation
Provide an overall evaluation of each test item including its limitations. This evaluation
shall be based upon the test results and the item level pass/fail criteria. An estimate of failure risk
may be included.

g) Summary of activities
Summarize the major testing activities and events. Summarize resource consumption
data, e.g., total staffing level, total machine time, and total elapsed time used for each of the
major testing activities.

h) Approvals
Specify the names and titles of all persons who must approve this report. Provide space
for the signatures and dates.

5.3.3 Test control (K2)


Test control describes any guiding or corrective actions taken as a result of information and
metrics gathered and reported. Actions may cover any test activity and may affect any other software life
cycle activity or task.
Examples of test control actions are:
 Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
 Change the test schedule due to availability of a test environment.
 Set an entry criterion requiring fixes to have been retested by a developer before accepting them
into a build.

5.4 Configuration management (K2)

Terms
Configuration management, version control.

Background
The purpose of configuration management is to establish and maintain the integrity of the products
(components, data and documentation) of the software or system through the project and product life cycle.

Configuration Management (CM) is the process which controls the changes made to a system and
manages the different versions of the evolving software product.
Configuration Management (CM) is also a process of identifying and defining the Configuration
Items in a system, recording and reporting the status of Configuration Items and Requests For
Change, and verifying the completeness and correctness of Configuration Items.

For testing, configuration management may involve ensuring that:


 All items of testware are identified, version controlled, tracked for changes, related to each other
and related to development items (test objects) so that traceability can be maintained throughout
the test process.
 All identified documents and software items are referenced unambiguously in test documentation.

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested
item, test documents, the tests and the test harness. During test planning, the configuration
management procedures and infrastructure (tools) should be chosen, documented and implemented.

Significance of configuration management process


CM processes recognize, manage and report changes made to existing software. CM enables:
1. Testing and reuse of high-quality software.
2. Identification of the status of documents (functional and physical characteristics) and code.
3. Maintenance of different versions of the application.
4. Confirmation that specified requirements are met.
5. Maintenance support.
6. Elimination of chaos and confusion.

Configurable items
Some of the configurable items in CM are as follows
1. Requirement Phase: e.g. business requirement specification, updates in project plan.
2. Design Phase: e.g. High level design document, test specification
3. Coding Phase: e.g. tools used, program documentation
4. Testing Phases: e.g. manual test cases, automation test scripts, test strategy, test logs,
test results.

5.5 Risk and testing (K2)


Terms
Product risk, project risk, risk, risk-based testing.

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and its
undesirable consequences, a potential problem. The level of risk will be determined by the likelihood of an
adverse event happening and the impact (the harm resulting from that event).
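
As a simple illustration in Python (the risk names and the 1-5 scales are invented), the level of
risk can be expressed as likelihood multiplied by impact and used to rank product risks:

# Illustrative risk register: likelihood and impact on simple 1-5 scales
risks = [
    {"risk": "payment calculation wrong", "likelihood": 3, "impact": 5},
    {"risk": "report layout incorrect",   "likelihood": 4, "impact": 2},
    {"risk": "login unavailable",         "likelihood": 2, "impact": 5},
]
for r in risks:
    r["level"] = r["likelihood"] * r["impact"]   # level of risk = likelihood x impact

# Testing would start with, and spend more effort on, the highest ranked risks
for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f'{r["level"]:>2}  {r["risk"]}')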

5.5.1 Project risks (K1, K2)


Project risks are the risks that surround the project’s capability to deliver its objectives, such as:
 Supplier issues:
o Failure of a third party;
o Contractual issues.
 Organizational factors:
o Skill and staff shortages;
o Personnel and training issues;
o Political issues, such as problems with testers communicating their needs and test results,
and failure to follow up on information found in testing and reviews (e.g. not improving
development and testing practices);
o Improper attitude toward or expectations of testing (e.g. not appreciating the value of
finding defects during testing).
 Technical issues:
o Problems in defining the right requirements;
o The extent that requirements can be met given existing constraints;
o The quality of the design, code and tests.

When analyzing, managing and mitigating these risks, the test manager is following well
established project management principles. The ‘Standard for Software Test
Documentation’ (IEEE 829) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)


Potential failure areas (adverse future events or hazards) in the software or system are known as
product risks, as they are a risk to the quality of the product, such as:
 Error-prone software delivered.
 The potential that the software/hardware could cause harm to an individual or company.
 Poor software characteristics (e.g. functionality, security, reliability, usability and performance).
 Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to
reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control activity
provides feedback about the residual risk by measuring the effectiveness of critical defect
removal and of contingency plans. A risk-based approach to testing provides proactive
opportunities to reduce the levels of product risk, starting in the initial stages of a project. It
involves the identification of product risks and their use in guiding the test planning, specification,
preparation and execution of tests.

In a risk-based approach, the risks identified may be used to:


 Determine the test techniques to be employed.
 Determine the extent of testing to be carried out.
 Prioritize testing in an attempt to find the critical defects as early as possible.
 Determine whether any non-testing activities could be employed to reduce risk (e.g.
providing training to inexperienced designers).

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to
determine the risks and the levels of testing required to address those risks. To ensure that the
chance of a product failure is minimized, risk management activities provide a disciplined
approach to:
 Assess (and reassess on a regular basis) what can go wrong (risks).
 Determine what risks are important to deal with.
 Implement actions to deal with those risks.

In addition, testing may support the identification of new risks, may help to determine what risks
should be reduced, and may lower uncertainty about risks.

5.6 Incident management (K3)

Terms
Incident logging.

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and
expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and
classification to correction and confirmation of the solution. In order to manage all incidents to completion,
an organization should establish a process and rules for classification. Incidents may be raised during
development, review, testing or use of a software product. They may be raised for issues in code or the
working system, or in any type of documentation including development documents, test documents or user
information such as “Help” or installation guides.
Incident reports have the following objectives:
 Provide developers and other parties with feedback about the problem to enable identification,
isolation and correction as necessary.
 Provide test leaders a means of tracking the quality of the system under test and the progress of the
testing.
 Provide ideas for test process improvement.

A tester or reviewer typically logs the following information, if known, regarding an incident:
 Date of issue, issuing organization, author, approvals and status.
 Scope, severity and priority of the incident.
 References, including the identity of the test case specification that revealed the problem.

Details of the incident report may include:


 Expected and actual results.
 Date the incident was discovered.
 Identification or configuration item of the software or system.
 Software or system life cycle process in which the incident was observed.
 Description of the anomaly to enable resolution.
 Degree of impact on stakeholder(s) interests.
 Severity of the impact on the system.
 Urgency/priority to fix.
 Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting
confirmation test or closed).
 Conclusions and recommendations.
 Global issues, such as other areas that may be affected by a change resulting from the incident.
 Change history, such as the sequence of actions taken by project team members with respect to the
incident to isolate, repair and confirm it as fixed.

The structure of an incident report is covered in the ‘Standard for Software Test Documentation’
(IEEE 829), where it is called an anomaly report.
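
As an illustration only (the field names are invented and deliberately simplified, not the IEEE 829
names), the details listed above could be captured in a home-grown incident record, for example
as a Python dataclass:

# Sketch of a simple incident record mirroring the details listed above
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    identifier: str
    summary: str
    expected_result: str
    actual_result: str
    date_discovered: date
    test_item: str                  # identification/configuration item of the software
    lifecycle_phase: str            # life cycle process in which the incident was observed
    severity: str = "medium"        # severity of the impact on the system
    priority: str = "medium"        # urgency/priority to fix
    status: str = "open"            # open, deferred, duplicate, fixed, closed, ...
    change_history: list = field(default_factory=list)

incident = IncidentReport(
    identifier="INC-042", summary="Total not recalculated after discount",
    expected_result="Total = 90.00", actual_result="Total = 100.00",
    date_discovered=date(2024, 1, 15), test_item="billing v1.2",
    lifecycle_phase="system test")
incident.change_history.append("assigned to development for isolation and repair")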

Test Incident Report:

A test incident report shall have the following structure:

a) Test incident report identifier;


b) Summary;
c) Incident description;
d) Impact.

a) Test incident report identifier


Specify the unique identifier assigned to this test incident report.

b) Summary
Summarize the incident. Identify the test items involved indicating their version/revision level.
References to the appropriate test procedure specification, test case specification, and test log
should be supplied.

c) Incident description
Provide a description of the incident. This description should include the following items:
a) Inputs;
b) Expected results;
c) Actual results;
d) Anomalies;
e) Date and time;
f) Procedure step;
g) Environment;
h) Attempts to repeat;
i) Testers;
j) Observers.

Related activities and observations that may help to isolate and correct the cause of the incident
should be included.

d) Impact
If known, indicate what impact this incident will have on test plans, test design specifications, test
procedure specifications, or test case specifications.
6. Tool support for testing (K2)

Learning objectives for tool support for testing


The objectives identify what you will be able to do following the completion of each module.
6.1 Types of test tool (K2)
 Classify different types of test tools according to the test process activities. (K2)
 Recognize tools that may help developers in their testing. (K1)
6.2 Effective use of tools: potential benefits and risks (K2)
 Summarize the potential benefits and risks of test automation and tool support for testing.
(K2)
 Recognize that test execution tools can have different scripting techniques, including
data-driven and keyword-driven. (K1)
6.3 Introducing a tool into an organization (K1)
 State the main principles of introducing a tool into an organization. (K1)
 State the goals of a proof-of-concept/piloting phase for tool evaluation. (K1)
 Recognize that factors other than simply acquiring a tool are required for good tool
support.(K1)
What is automated testing?
The principle of automated testing is that there is a program that runs the program being
tested, feeding it the proper input, and checking the output against the expected output. Once the
test suite is written, no human intervention is needed, either to run the program or to look to see if
it worked; the test suite does all that, and indicates whether the program’s output was as
expected. The main focus of automation is improving the execution portion of the testing life cycle.
In order to perform automated testing, various categories of tools are used.
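
A minimal sketch of this principle in Python, using the standard unittest module and an invented
discount() function as the program under test; the test suite supplies the input and checks the
output against the expected output without human intervention:

import unittest

def discount(amount):
    """Program under test (invented example): 10% discount above 1000."""
    return amount * 0.9 if amount > 1000 else amount

class TestDiscount(unittest.TestCase):
    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(discount(1000), 1000)

    def test_discount_above_threshold(self):
        self.assertEqual(discount(2000), 1800.0)

if __name__ == "__main__":
    unittest.main()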

Tools: An efficient and time-saving way of testing is through the use of testing tools. Tools are
used to continuously improve the quality of testing, and they also help with reusability and faster
execution. Tools make it possible to complete test execution without manual intervention and can
also be used to evaluate usability from different perspectives.

Why Automation Tools


 The usage of tools makes testing easier, faster and more reliable
 To perform repeated execution of test scripts
 Beneficial when the script keeps on changing (regression)
 Allows fast test execution on different OS platforms
 Allows test scripts to be executed on different machines with different configurations
simultaneously
 Automation is an advantage in performance testing, when the script has to be executed
for many users together
 Tools are mainly used to facilitate processes
 Helps to uncover those defects that are created accidentally or inadvertently
 Helps to achieve higher test coverage
 Runs day and night in unattended mode
 The system continues running even if a test case fails
 Writes out meaningful logs
 One point maintenance
 Easy to update reusable modules
 Text strings stored in variables are easy to find and update
 Automate the most important business functions first
 Quickly add scripts and modules to the system for new features
 Don’t waste time with very complex features, keep it simple
 Track components of the automated testing system in a database
 Use the same architecture for Web or GUI based application testing
 Make sure baseline data is defined and a process is in place to refresh data
 Keep the test environment clean and up-to-date
 Test case management - store test cases in a database for maintenance purposes
 Track tests that pass, as well as tests that fail.

When to Automate
Automation mainly comes into the picture at the level of system testing. Automation becomes
practical only once the application is reasonably stable. The primary purpose of an automated test
is to verify that a requirement, once validated, functions properly in successive builds or
modifications of the AUT. Furthermore, because automated testing reduces the time necessary to
perform regression testing, this is where its benefits are realized the most.

What should be automated


Automation should be performed mainly to test the critical paths of the AUT. First, automate the
primary functions that will be performed by the targeted end users; then gradually add the
not-so-critical portions of the application as time permits. The scripts should be
designed in such a way that they are flexible and easy to update.
Early in the project we should not automate testing of such things as login, user preferences, or
other options, status bars, help screens, or any other areas of the application that development
will pay little attention to until later in the project.

How to conduct Automation


The first thing to be done here is to identify an appropriate tool, which is achieved by
performing a tool evaluation. This tool evaluation should be conducted by using two or
three tools to automate some complex and some easy scenarios; this activity should also include
the people who are going to participate in automation. While trying to automate the scenarios,
one should not rely purely on the record-and-playback concept; the team should remember that the
automated testing tool is merely a servant and not the master. An appropriate structured testing
methodology should therefore be designed, because the automated testing methodology is independent
of the tool and the tool’s main role is to support that methodology.

Automated Testing Methodologies:

Reusable modules
The basic building block is a reusable module. These modules are used for navigation,
manipulating controls, data entry, data validation, error identification (hard or soft), and writing out
logs. Reusable modules consist of commands, logic, and data. They should be grouped together
in ways that make sense. Generic modules used throughout the testing system, such as
initialization and setup functionality, are generally grouped together in files named to reflect their
primary function, such as "Init" or "setup". Others that are more application specific, designed to
service controls on a customer window, for example, are also grouped together and named
similarly. All the modules that service the customer screen are organized in one file, or library.
That way, when the customer screen is modified for any reason, updates to the testing system
are located in one place; hence the principle of One Point Maintenance comes into existence.
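
A minimal sketch in Python of what such a library of reusable modules might look like; a plain
dictionary stands in for the real customer screen, since an actual GUI driver is assumed here and
not shown:

# customer_screen.py (sketch) - all modules that service the "customer" screen
# live in this one file, giving One Point Maintenance when the screen changes.
import logging

log = logging.getLogger("testsystem")

def init_screen():
    """Initialization/setup module: return a fresh (fake) customer screen."""
    return {"name": "", "credit_limit": 0}

def enter_customer(screen, name, credit_limit):
    """Data-entry module: a single action on the screen."""
    screen["name"] = name
    screen["credit_limit"] = credit_limit
    log.info("entered customer %s", name)

def read_credit_limit(screen):
    """Control-manipulation module: read a value back from the screen."""
    return screen["credit_limit"]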

Test Cases
The next step in the methodology is to turn reusable modules into automated test cases.
Here, a well-structured manual test case is converted into scripts composed of reusable
modules. The goal is to build the reusable modules in a very methodical manner. The
action-response pairs from the test case are scripted into reusable modules. These pairs also
determine the size or granularity of a reusable module, which generally consist of just one action-
response pair. Automating test cases in this manner allows the testing system to take on a very
predictable structure. One benefit of this predictability is the ability to begin building the
automated testing system from the requirements early in the software-testing life cycle.
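
Building on the hypothetical reusable modules sketched above, one action-response pair from a
manual test case might be turned into an automated test case like this:

import unittest
# assumes the reusable modules sketched above: init_screen, enter_customer, read_credit_limit

class TestCustomerScreen(unittest.TestCase):
    def test_enter_customer_sets_credit_limit(self):
        screen = init_screen()                             # setup module
        enter_customer(screen, "Acme Ltd", 5000)           # action
        self.assertEqual(read_credit_limit(screen), 5000)  # expected response

if __name__ == "__main__":
    unittest.main()
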
Multi Level Validation
Multi level data validation increases the usefulness and flexibility of the testing system. The more
levels of data validation, or evaluation, the more flexible the testing system. Multi level validation
and evaluation is the ability of the testing system to perform dynamic data validation at multiple
levels and collect information from system messages to evaluate data.
Dynamic data validation is the process whereby the automated testing tool, in real time, gets data
from a control, compares it to an expected value and writes the result to a log file.
Validation refers to correctness of data decided dynamically. Evaluation refers to system
messages that will be collected but evaluated after the fact.
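
One possible shape for such a dynamic validation helper, sketched in Python (the control name and
log file name are illustrative only):

import logging

logging.basicConfig(filename="test_run.log", level=logging.INFO)

def validate(control_name, actual, expected):
    """Compare a value read from a control against the expected value, in real
    time, and write the result to the log file."""
    result = "PASS" if actual == expected else "FAIL"
    logging.info("%s: expected=%r actual=%r -> %s", control_name, expected, actual, result)
    return result == "PASS"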

6.1 Types of test tool (K2)


Terms
Configuration management tool, coverage measurement tool, debugging tool, driver, dynamic
analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool,
performance testing tool, probe effect, requirements management tool, review process support
tool, security tool, static analysis tool, stress testing tool, stub, test comparator, test data
preparation tool, test design tool, test harness, test execution tool, test management tool, unit test
framework tool.

6.1.1 Test tool classification (K2)


There are a number of tools that support different aspects of testing. Tools are classified in this
syllabus according to the testing activities that they support. Some tools clearly support one
activity; others may support more than one activity, but are classified under the activity with which
they are most closely associated. Some commercial tools offer support for only one type of
activity; other commercial tool vendors offer suites or families of tools that provide support for
many or all of these activities. Testing tools can improve the efficiency of testing activities by
automating repetitive tasks. Testing tools can also improve the reliability of testing by, for
example, automating large data comparisons or simulating behavior. Some types of test tool can
be intrusive in that the tool itself can affect the actual outcome of the test. For example, the actual
timing may be different depending on how you measure it with different performance tools, or you
may get a different measure of code coverage depending on which coverage tool you use. The
consequence of intrusive tools is called the probe effect. Some tools offer support more
appropriate for developers (e.g. during component and component integration testing). Such tools
are marked with “(D)” in the classifications below.

6.1.2 Tool support for management of testing and tests (K1)


Management tools apply to all test activities over the entire software life cycle.

Test management tools


Characteristics of test management tools include:

 Support for the management of tests and the testing activities carried out.
 Interfaces to test execution tools, defect tracking tools and requirement management
tools.
 Independent version control or interface with an external configuration management tool.
 Support for traceability of tests, test results and incidents to source documents, such as
requirement specifications.
 Logging of test results and generation of progress reports.
 Quantitative analysis (metrics) related to the tests (e.g. tests run and tests passed) and
the test object (e.g. incidents raised), in order to give information about the test object,
and to control and improve the test process.

Example

TestDirector – A Mercury Interactive product that allows you to deploy high-quality applications
quickly and effectively by providing a consistent, repeatable process for gathering requirements,
planning and scheduling tests, analyzing results, and managing defects and issues.

Requirements management tools


Requirements management tools store requirement statements, check for consistency
and undefined (missing) requirements, allow requirements to be prioritized and enable individual
tests to be traceable to requirements, functions and/or features. Traceability may be reported in
test management progress reports. The coverage of requirements, functions and/or features by a
set of tests may also be reported.
Examples:
1. AnalystPro - Robust tools for specification and requirements tracking, including
the ability to create and modify specifications and diagrams/models.
2. GatherSpace - Hosted requirements management tool that has the unique
combination of being easy to use, feature rich, very inexpensive, and use case
friendly.

Incident management tools


Incident management tools store and manage incident reports, i.e. defects, failures or
perceived problems and anomalies, and support management of incident reports in ways that
include:

 Facilitating their prioritization.


 Assignment of actions to people (e.g. fix or confirmation test).
 Attribution of status (e.g. rejected, ready to be tested or deferred to next release).

These tools enable the progress of incidents to be monitored over time, often provide support for
statistical analysis and provide reports about incidents. They are also known as defect tracking
tools.

Examples:
1. Bugzilla
2. Source Forge

Configuration management tools


Configuration management (CM) tools are not strictly testing tools, but are typically
necessary to keep track of different versions and builds of the software and tests.

C M tools:

 Store information about versions and builds of software and testware.


 Enable traceability between testware and software work products and product variants.
 Are particularly useful when developing on more than one configuration of the
hardware/software environment (e.g. for different operating system versions, different
libraries or compilers, different browsers or different computers).

Examples:
1. Win CVS
2. Visual Source Safe

6.1.3 Tool support for static testing (K1)


Review process support tools
Review process support tools may store information about review processes, store and
communicate review comments, report on defects and effort, manage references to review rules
and/or checklists and keep track of traceability between documents and source code. They may
also provide aid for online reviews, which is useful if the team is geographically dispersed.
Static analysis tools (D)
Static analysis tools support developers, testers and quality assurance personnel in
finding defects before dynamic testing. Their major purposes include:

 The enforcement of coding standards.


 The analysis of structures and dependencies (e.g. linked web pages).
 Aiding in understanding the code.

Static analysis tools can calculate metrics from the code (e.g. complexity), which can give
valuable information, for example, for planning or risk analysis.
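
As a toy illustration of this idea in Python (not a real static analysis tool), the following
inspects source code without executing it and flags functions that exceed a simple size threshold:

# Minimal static check: flag overly long functions by walking the syntax tree
import ast

SAMPLE = '''
def short_one():
    return 1

def long_one():
''' + "    x = 1\n" * 40

def long_functions(source, max_lines=30):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, length))
    return findings

print(long_functions(SAMPLE))   # [('long_one', 41)]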

Where to use static analysis tools


Static analysis tools can and should be used throughout the product cycle. Developers
can use a lightweight version of the tool to check for simple bugs that may have been missed at
development time. Build managers or lab technicians should use the tool to discover more
sophisticated bugs at code integration time. After a build is created testers can use static analysis
tools to ensure code coverage and to discover complex sections of the product that should be
tested more thoroughly.
Reasons to Use Static Analysis Tools:
Testing using static analysis tools can drastically reduce the number of bugs which may be
difficult to find in black box testing such as buffer overrun, encoding and dangerous function
usage. A quick scan with a static analyzer may discover many bugs that are difficult to find by
other means.
One of the first things a hacker will attempt to discover and steal is source code. Source
code will allow the hacker to discover new vulnerabilities in your application at the source level,
which is much faster and easier than attempting to find them by traditional white box methods.
Hackers employ the use of static analysis tools to help them quickly discover security problems in
the application. Even if the application's specific source code is not compromised, a hacker can
still use the tools to analyze other source code to find common vulnerabilities that might exist in
the application they are trying to attack. By scanning your code with a static analysis tool and
fixing the important bugs discovered you are closing off an avenue of attack for a potential
hacker.
It has been shown that the sooner a bug is discovered and fixed the less it costs. Late
discovered security vulnerabilities can cost a company millions of dollars. The same vulnerability,
if found in the development phase, would be very inexpensive to fix. Most static analysis tools can
point directly to the problematic line of code so the developer doesn’t have to track down the problem
from a high level bug report.

Example: Refer 3.3

Modeling tools (D)


Modeling tools are able to validate models of the software. For example, a database
model checker may find defects and inconsistencies in the data model; other modeling tools may
find defects in a state model or an object model. These tools can often aid in generating some
test cases based on the model (see also Test design tools below). The major benefit of static
analysis tools and modeling tools is the cost effectiveness of finding more defects at an earlier
time in the development process. As a result, the development process may accelerate and
improve by having less rework.

6.1.4 Tool support for test specification (K1)


Test design tools
Test design tools generate test inputs or the actual tests from requirements, from a
graphical user interface, from design models (state, data or object) or from code. This type of tool
may generate expected outcomes as well (i.e. may use a test oracle). The generated tests from a
state or object model are useful for verifying the implementation of the model in the software, but
are seldom sufficient for verifying all aspects of the software or system. They can save valuable
time and provide increased thoroughness of testing because of the completeness of the tests that
the tool can generate. Other tools in this category can aid in supporting the generation of tests by
providing structured templates, sometimes called a test frame, that generate tests or test stubs,
and thus speed up the test design process.

Example:

Allpairs is a test case generation tool. Allpairs.pl is a Perl script that constructs a
reasonably small set of test cases that include all pairings of each value of each of a set of
parameters. Allpairs is a command-line executable based on a Perl script. Source is included.
See the Test Methodology section of http://satisfice.com
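
For illustration only, a rough greedy pairwise generator in Python (far simpler than the real
allpairs.pl) shows the idea: every pairing of values from any two parameters is covered using far
fewer cases than the full combination table:

import itertools, random

def pairwise_cases(parameters, tries=30):
    names = list(parameters)
    uncovered = {((a, va), (b, vb))
                 for a, b in itertools.combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    cases = []
    while uncovered:
        (a, va), (b, vb) = next(iter(uncovered))        # seed with an uncovered pair
        best, best_covered = None, set()
        for _ in range(tries):                          # try a few random completions
            cand = {n: random.choice(parameters[n]) for n in names}
            cand[a], cand[b] = va, vb
            covered = {p for p in uncovered if all(cand[n] == v for n, v in p)}
            if len(covered) > len(best_covered):
                best, best_covered = cand, covered
        cases.append(best)
        uncovered -= best_covered
    return cases

params = {"browser": ["IE", "Firefox"], "os": ["Windows", "Linux", "Mac"], "locale": ["en", "de"]}
print(len(pairwise_cases(params)), "cases instead of", len(list(itertools.product(*params.values()))))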

Test data preparation tools (D)


Test data preparation tools manipulate databases, files or data transmissions to set up
test data to be used during the execution of tests. A benefit of these tools is to ensure that live
data transferred to a test environment is made anonymous, for data protection.

6.1.5 Tool support for test execution and logging (K1)


Test execution tools
Test execution tools enable tests to be executed automatically, or semi-automatically,
using stored inputs and expected outcomes, through the use of a scripting language. The
scripting language makes it possible to manipulate the tests with limited effort, for example, to
repeat the test with different data or to test a different part of the system with similar steps.
Generally these tools include dynamic comparison features and provide a test log for each test
run. Test execution tools can also be used to record tests, when they may be referred to as
capture playback tools. Capturing test inputs during exploratory testing or unscripted testing can
be useful in order to reproduce and/or document a test, for example, if a failure occurs.

Test harness/unit test framework tools (D)


A test harness may facilitate the testing of components or part of a system by simulating
the environment in which that test object will run. This may be done either because other
components of that environment are not yet available or are replaced by stubs and/or drivers, or
simply to provide a predictable and controllable environment in which any faults can be localized
to the object under test. A framework may be created where calling the object to be tested and/or
giving feedback to that object can execute part of the code, object, method or function, unit or
component. It can do this by providing artificial means of supplying input to the test object, and/or
by supplying stubs to take output from the object, in place of the real output targets. Test harness
tools can also be used to provide an execution framework in middleware, where languages,
operating systems or hardware must be tested together. They may be called unit test framework
tools when they have a particular focus on the component test level. This type of tool aids in
executing the component tests in parallel with building the code.
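
A minimal sketch in Python using the unittest framework; the checkout() component and the payment
gateway are invented for illustration, with a stub standing in for a gateway that is not yet
available, so that any failure is localized to the component under test:

import unittest

def checkout(order_total, gateway):
    """Component under test: charges the order total via a gateway."""
    if order_total <= 0:
        raise ValueError("nothing to charge")
    return gateway.charge(order_total)

class StubGateway:
    """Stub replacing the real, unavailable payment gateway."""
    def __init__(self):
        self.charged = []
    def charge(self, amount):
        self.charged.append(amount)
        return "approved"

class TestCheckout(unittest.TestCase):
    def test_checkout_charges_gateway(self):
        gateway = StubGateway()                      # predictable, controllable environment
        self.assertEqual(checkout(99.0, gateway), "approved")
        self.assertEqual(gateway.charged, [99.0])

if __name__ == "__main__":
    unittest.main()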

Test comparators
Test comparators determine differences between files, databases or test results. Test
execution tools typically include dynamic comparators, but a separate comparison tool may do
post execution comparison. A test comparator may use a test oracle, especially if it is automated.

Coverage measurement tools (D)


Coverage measurement tools can be either intrusive or non-intrusive depending on the
measurement techniques used, what is measured and the coding language. Code coverage tools
measure the percentage of specific types of code structure that have been exercised (e.g.
statements, branches or decisions, and module or function calls). These tools show how
thoroughly the measured type of structure has been exercised by a set of tests.

Security tools
Security tools check for computer viruses and denial of service attacks. A firewall, for
example, is not strictly a testing tool, but may be used in security testing. Other security tools
stress the system by searching specific vulnerabilities of the system.

6.1.6 Tool support for performance and monitoring (K1)


Dynamic analysis tools (D)
Dynamic analysis tools find defects that are evident only when software is executing,
such as time dependencies or memory leaks. They are typically used in component and
component integration testing, and when testing middleware.

Performance testing/load testing/stress testing tools


Performance testing tools monitor and report on how a system behaves under a variety
of simulated usage conditions. They simulate a load on an application, a database, or a system
environment, such as a network or server. The tools are often named after the aspect of
performance they measure, such as load or stress, so they are also known as load testing tools or
stress testing tools. They are often based on automated repetitive execution of tests, controlled
by parameters.
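
As a rough sketch only (the URL and the parameters are hypothetical), a very small load driver in
Python might fire concurrent requests and report response times:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"      # hypothetical endpoint under test
VIRTUAL_USERS = 20                        # parameters controlling the simulated load
REQUESTS_PER_USER = 5

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    timings = list(pool.map(one_request, range(VIRTUAL_USERS * REQUESTS_PER_USER)))

print(f"requests: {len(timings)}  avg: {sum(timings)/len(timings):.3f}s  max: {max(timings):.3f}s")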

Examples:
1. LoadManager - Load, Stress, Stability and Performance testing tool from Alvicom.
Runs on all platforms supported by Eclipse and Java such as Linux, Windows, HP Unix, and
others.
2. Test LOAD - An automated load testing solution for IBM iSeries from Origin Software
Group Ltd. Rather than placing artificial load on the network, it runs natively on the server,
simulating actual system performance, monitoring and capturing batch activity, server jobs and
green-screen activity. For web and other applications

Monitoring tools
Monitoring tools are not strictly testing tools but provide information that can be used for
testing purposes and which is not available by other means. Monitoring tools continuously
analyze, verify and report on usage of specific system resources, and give warnings of possible
service problems. They store information about the version and build of the software and
testware, and enable traceability.

Examples:
1. eXternalTest - Site monitoring service from eXternalTest. Periodically checks servers
from different points of the world; view what customers see with screen shots using different
browsers, OSs, and screen resolutions
2. StillUp - Site monitoring service from Deep Blue Systems Ltd. Capabilities include
http/https response monitoring, ping routers, firewalls, etc; SMS and email notification, trace
route on all failures. Also available is a free service for monitoring up to 10 URLs at 59 minute
intervals

6.1.7 Tool support for specific application areas (K1)


Individual examples of the types of tool classified above can be specialized for use in a
particular type of application. For example, there are performance testing tools specifically for
web-based applications, static analysis tools for specific development platforms, and dynamic
analysis tools specifically for testing security aspects. Commercial tool suites may target specific
application areas (e.g. embedded systems).

6.1.8 Tool support using other tools (K1)


The test tools listed here are not the only types of tools used by testers – they may also
use spreadsheets, SQL, resource or debugging tools (D), for example.

6.2 Effective use of tools: potential benefits and risks (K2)


Terms
Data-driven (testing), keyword-driven (testing), scripting language.

6.2.1 Potential benefits and risks of tool support for testing (for all tools)
(K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type
of tool may require additional effort to achieve real and lasting benefits. There are potential
benefit opportunities with the use of tools in testing, but there are also risks.

Potential benefits of using tools include:

 Repetitive work is reduced (e.g. running regression tests, re-entering the same test data,
and checking against coding standards).
 Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived
from requirements).
 Objective assessment (e.g. static measures, coverage and system behavior).
 Ease of access to information about tests or testing (e.g. statistics and graphs about test
progress, incident rates and performance).

Risks of using tools include:

 Unrealistic expectations for the tool (including functionality and ease of use).
 Underestimating the time, cost and effort for the initial introduction of a tool (including
training and external expertise).
 Underestimating the time and effort needed to achieve significant and continuing benefits
from the tool (including the need for changes in the testing process and continuous
improvement of the way the tool is used).
 Underestimating the effort required to maintain the test assets generated by the tool.
 Over-reliance on the tool (replacement for test design or where manual testing would be
better).

6.2.2 Special considerations for some types of tool (K1)


Test execution tools
Test execution tools replay scripts designed to implement tests that are stored
electronically. This type of tool often requires significant effort in order to achieve significant
benefits. Capturing tests by recording the actions of a manual tester seems attractive, but this
approach does not scale to large numbers of automated tests. A captured script is a linear
representation with specific data and actions as part of each script. This type of script may be
unstable when unexpected events occur. A data-driven approach separates out the test inputs
(the data), usually into a spreadsheet, and uses a more generic script that can read the test data
and perform the same test with different data. Testers who are not familiar with the scripting
language can enter test data for these predefined scripts. In a keyword-driven approach, the
spreadsheet contains keywords describing the actions to be taken (also called action words), and
test data. Testers (even if they are not familiar with the scripting language) can then define tests
using the keywords, which can be tailored to the application being tested. Technical expertise in
the scripting language is needed for all approaches (either by testers or by specialists in test
automation). Whichever scripting technique is used, the expected results for each test need to be
stored for later comparison.
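
A minimal sketch in Python of both scripting techniques, using an invented login() function as the
application under test and in-memory tables standing in for the spreadsheets:

def login(user, password):
    """Invented application under test."""
    return "welcome" if (user, password) == ("valid_user", "correct_pw") else "error"

# Data-driven: one generic script reads rows of test data and repeats the same
# test with different data.
login_data = [
    ("valid_user",   "correct_pw", "welcome"),
    ("valid_user",   "wrong_pw",   "error"),
    ("unknown_user", "any_pw",     "error"),
]
for user, password, expected in login_data:
    assert login(user, password) == expected, (user, password)

# Keyword-driven: the table holds action words plus data; a small interpreter
# maps each keyword onto an implementation, so testers can define tests
# without writing script code themselves.
keyword_table = [
    ("enter_user",     "valid_user"),
    ("enter_password", "correct_pw"),
    ("check_message",  "welcome"),
]

def run(table):
    state = {}
    for keyword, value in table:
        if keyword == "enter_user":
            state["user"] = value
        elif keyword == "enter_password":
            state["password"] = value
        elif keyword == "check_message":
            assert login(state["user"], state["password"]) == value

run(keyword_table)
print("all data-driven and keyword-driven checks passed")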

Performance testing tools


Performance testing tools need someone with expertise in performance testing to help
design the tests and interpret the results. For more information refer 6.1.6

Static analysis tools


Static analysis tools applied to source code can enforce coding standards, but if applied
to existing code may generate a lot of messages. Warning messages do not stop the code being
translated into an executable program, but should ideally be addressed so that maintenance of
the code is easier in the future. A gradual implementation with initial filters to exclude some
messages would be an effective approach. For more information refer 6.1.3.

Test management tools


Test management tools need to interface with other tools or spreadsheets in order to
produce information in the best format for the current needs of the organization. The reports need
to be designed and monitored so that they provide benefit. For more information refer 6.1.2

6.3 Introducing a tool into an organization (K1)


Terms
No specific terms.
Background
The main principles of introducing a tool into an organization include:
 Assessment of organizational maturity, strengths and weaknesses, and identification of
opportunities for an improved test process supported by tools.
 Evaluation against clear requirements and objective criteria.
 A proof-of-concept to test the required functionality and determine whether the product
meets its objectives.
 Evaluation of the vendor (including training, support and commercial aspects).
 Identification of internal requirements for coaching and mentoring in the use of the tool.

The proof-of-concept could be done in a small-scale pilot project, making it possible to
minimize impacts if major hurdles are found and the pilot is not successful.

A pilot project has the following objectives:


 Learn more detail about the tool.
 See how the tool would fit with existing processes and practices, and how they would
need to change.
 Decide on standard ways of using, managing, storing and maintaining the tool and the
test assets (e.g. deciding on naming conventions for files and tests, creating libraries and
defining the modularity of test suites).
 Assess whether the benefits will be achieved at reasonable cost.

Success factors for the deployment of the tool within an organization include:

 Rolling out the tool to the rest of the organization incrementally.


 Adapting and improving processes to fit with the use of the tool.
 Providing training and coaching/mentoring for new users.
 Defining usage guidelines.
 Implementing a way to learn lessons from tool use.
 Monitoring tool use and benefits.
Standards

IEEE 829 - IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is
an IEEE standard that specifies the form of a set of documents for use in eight defined stages of
software testing, each stage potentially producing its own separate type of document. The
standard specifies the format of these documents. The documents for which this standard applies
are as follows:
 Test Plan
 Test Design Specification
 Test Case Specification
 Test Procedure Specification
 Test Item Transmittal Report
 Test Log
 Test Incident Report
 Test Summary Report

IEEE 1028 – This particular standard is used for software inspection. An inspection is one of the
most common sorts of review practices found in software projects.

IEEE 12207/ISO/IEC – Provides a common framework for developing and managing software.
This standard provides industry a basis for software practices that would be usable for both
national and international business.

ISO 9126 – It is an international standard for the evaluation of software. This standard is divided
into four parts which address the following subjects:
 Quality model
 External Metrics
 Internal Metrics
 Quality in use metrics

BS 7925-2: 1998 – This is a Software Component Testing standard. This standard defines the
process for software component testing using specified test case design and measurement
techniques. This will enable users of the standard to directly improve the quality of their software
testing, and improve the quality of their software products.

DO-178B: 1992 – This standard is published by Radio Technical Commission for Aeronautics
(RTCA) and is basically for Software considerations in Airborne Systems and Equipment
Certification. It provides guidelines for the production of airborne systems equipment software. It
is used internationally to specify the safety and airworthiness of software for avionics systems. It
describes techniques and methods appropriate to ensure the integrity and reliability of such
software. It has been used to secure Federal Aviation Administration (FAA) approval of digital
computer software. It enforces good software development practices and system design
processes. It describes traceable processes for objectives such as:
 High-level requirements are developed
 Low-level requirements comply with high-level requirements
 Source code complies with low-level requirements
 Source code is traceable to low-level requirements
 Test coverage of high-level and low-level requirements is achieved

IEEE 610.12:1990 – Formally titled IEEE Std 610.12-1990, IEEE Standard Glossary of
Software Engineering Terminology. It identifies terms currently in use in the field of
Software Engineering.
IEEE 1008:1993 – It is a standard for Software Unit Testing. It defines an Integrated approach to
systematic and documented Unit testing. The standard can be applied to the unit testing of any
digital computer software or firmware and to the testing of both newly developed and modified
units.

IEEE 1012:1986 – It is a standard for Software Verification and Validation (V&V). Software verification and
validation (V&V) processes determine whether the development products of a given activity
conform to the requirements of that activity and whether the software satisfies its intended use
and user needs. This standard applies to software being developed, maintained, or reused
[legacy, commercial off-the shelf (COTS), non-developmental items]. The term software also
includes firmware, micro code, and documentation. Software V&V processes includes analysis,
evaluation, review, inspection, assessment, and testing of software products.

IEEE 1044:1993 – Provides a uniform approach to the classification of anomalies found in software
and its documentation. This standard is not intended to define procedural or format
requirements for using the classification scheme. It does identify some classification measures
but does not attempt to define all the data supporting the analysis of an anomaly.

IEEE 1219:1998 – This is a standard for Software Maintenance. This standard describes the
process of managing and executing software maintenance activities.

ISO/IEC 2382-1:1993 – This is a standard for data processing – Vocabulary – Part 1:
Fundamental terms. Presents, in English and French, 144 terms in the following fields: general
terms, information representation, hardware, software, programming, applications and end user,
computer security, data management. In order to facilitate their translation into other languages,
the definitions are drafted so as to avoid, as far as possible, any peculiarity attached to the
language.

ISO 9000:2000 – It is one of the most important quality management standards. The ISO 9000:2000
standards apply to all kinds of organizations in all kinds of areas. Some of these areas
include manufacturing, processing, servicing, printing, forestry, electronics, computing, steel,
legal services, financial services, accounting, trucking, banking, retailing, drilling, recycling,
aerospace, construction, exploration, textiles, pharmaceuticals, oil and gas, pulp and paper,
petrochemicals, publishing, shipping, energy, telecommunications, plastics, metals, research,
health care, hospitality, utilities, aviation, machine tools, food processing, agriculture,
government, education, recreation, fabrication, sanitation, software development, consumer
products, transportation, instrumentation, tourism, biotechnology, chemicals, consulting,
insurance, and so on.

ISO/IEC 12207: 1995 - This standard describes the major component processes of a complete
software life cycle and the high-level relations that govern their interactions. This standard covers
the life cycle of software from conceptualization of ideas through retirement. It also describes how
to tailor the standard for a project. This standard defines the following life cycle processes:
 Primary processes
 Supporting processes
 Organizational processes
ISO/IEC 14598-1:1996 – This standard is an expansion of ISO 9126: 1991. This standard is
mainly used to evaluate the software product. The evaluation process is broken down into four
main stages as follows:
 Establish evaluation requirements
 Specify the evaluation
 Design the evaluation
 Execute the evaluation
CMMI – Capability Maturity Model Integration is a process improvement approach that provides
organizations with the essential elements of effective processes. It is used to guide process
improvement in an organization. It basically helps in the following ways:
 Integrates separate organizational functions
 Sets process improvement goals and priorities
 Provides guidance for quality processes
 Provides a point of reference for appraising current processes
