ISTQB Material
Preparation Guide
Table of Contents
Acknowledgements
Introduction to this syllabus
1. Fundamentals of testing (K2)
1.1 Why is testing necessary (K2)
1.1.1 Software systems context (K1)
1.1.2 Causes of software defects (K2)
1.1.3 Role of testing in software development, maintenance and operations (K2)
1.1.4 Testing and quality (K2)
1.1.5 How much testing is enough? (K2)
1.2 What is testing (K2)
1.3 General testing principles (K2)
1.4 Fundamental test process (K1)
1.4.1 Test planning and control (K1)
1.4.2 Test analysis and design (K1)
1.4.3 Test implementation and execution (K1)
1.4.4 Evaluating exit criteria and reporting (K1)
1.4.5 Test closure activities (K1)
1.5 The psychology of testing (K2)
2. Testing throughout the software life cycle (K2)
2.1 Software development models (K2)
2.1.1 V-model (K2)
2.1.2 Iterative development models (K2)
2.1.3 Testing within a life cycle model (K2)
2.2 Test levels (K2)
2.2.1 Component testing (K2)
2.2.2 Integration testing (K2)
2.2.3 System testing (K2)
2.2.4 Acceptance testing (K2)
2.3 Test types: the targets of testing (K2)
2.3.1 Testing of function (functional testing) (K2)
2.3.2 Testing of software product characteristics (non-functional testing) (K2)
2.3.3 Testing of software structure/architecture (structural testing) (K2)
2.3.4 Testing related to changes (confirmation and regression testing) (K2)
2.4 Maintenance testing (K2)
3. Static techniques (K2)
3.1 Reviews and the test process (K2)
3.2 Review process (K2)
3.2.1 Phases of a formal review (K1)
3.2.2 Roles and responsibilities (K1)
3.2.3 Types of review (K2)
3.2.4 Success factors for reviews (K2)
3.3 Static analysis by tools (K2)
4. Test design techniques (K3)
4.1 Identifying test conditions and designing test cases (K3)
4.2 Categories of test design techniques (K2)
4.3 Specification-based or black-box techniques (K3)
4.3.1 Equivalence partitioning (K3)
4.3.2 Boundary value analysis (K3)
4.3.3 Decision table testing (K3)
4.3.4 State transition testing (K3)
4.3.5 Use case testing (K2)
4.4 Structure-based or white-box techniques (K3)
4.4.1 Statement testing and coverage (K3)
4.4.2 Decision testing and coverage (K3)
4.4.3 Other structure-based techniques (K1)
4.5 Experience-based techniques (K2)
4.6 Choosing test techniques (K2)
5. Test management (K3)
5.1 Test organization (K2)
5.1.1 Test organization and independence (K2)
5.1.2 Tasks of the test leader and tester (K1)
5.2 Test planning and estimation (K2)
5.2.1 Test planning (K2)
5.2.2 Test planning activities (K2)
5.2.3 Exit criteria (K2)
5.2.4 Test estimation (K2)
5.2.5 Test approaches (test strategies) (K2)
5.3 Test progress monitoring and control (K2)
5.3.1 Test progress monitoring (K1)
5.3.2 Test reporting (K2)
5.3.3 Test control (K2)
5.4 Configuration management (K2)
5.5 Risk and testing (K2)
5.5.1 Project risks (K1, K2)
5.5.2 Product risks (K2)
5.6 Incident management (K3)
6. Tool support for testing (K2)
6.1 Types of test tool (K2)
6.1.1 Test tool classification (K2)
6.1.2 Tool support for management of testing and tests (K1)
6.1.3 Tool support for static testing (K1)
6.1.4 Tool support for test specification (K1)
6.1.5 Tool support for test execution and logging (K1)
6.1.6 Tool support for performance and monitoring (K1)
6.1.7 Tool support for specific application areas (K1)
6.1.8 Tool support using other tools (K1)
6.2 Effective use of tools: potential benefits and risks (K2)
6.2.1 Potential benefits and risks of tool support for testing (for all tools) (K2)
6.2.2 Special considerations for some types of tool (K1)
6.3 Introducing a tool into an organization (K1)
7. Standards
Acknowledgements
This document is intended to help readers prepare for the International Software Testing Qualifications Board (ISTQB) Foundation Level certification.
Alongside other reference material, this document also contains ISTQB proprietary preparation material for the benefit of the readers.
The ISTQB proprietary material is shown inside a box, and the other reference material follows outside the box. For example:
A bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design.
A defect is a flaw that causes a reproducible or catastrophic malfunction. A malfunction is considered reproducible if it occurs consistently under the same circumstances.
1. Fundamentals of testing (K2)
1.1 Why is testing necessary (K2)
Terms
Bug, defect, error, failure, fault, mistake, quality, risk, software, testing.
A bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from
working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made
by people, in either a program's source code or its design.
A defect is a flaw that causes a reproducible or catastrophic malfunction. A malfunction is
considered reproducible if it occurs consistently under the same circumstances.
An error may be a piece of incorrectly written program code. A syntax error is an ungrammatical
or nonsensical statement in a program; one that cannot be parsed by the language
implementation. A logic error is a mistake in the algorithm used, which causes erroneous results
or undesired operation.
Failure (or flop) in general refers to the state or condition of not meeting a desirable or intended
objective.
Fault is an abnormal condition or defect at the component, equipment, or sub-system level which
may lead to a failure.
Mistake (an error): A human action that produces an incorrect result.
Quality: Quality refers to the inherent or distinctive characteristics or properties of a person,
object, process or other thing. It can be defined as “the totality of features and characteristics of
a product or service that bear on its ability to satisfy stated or implied needs” or “consistent
performance of a uniform product meeting the customer's needs for economy and function.”
Risk: Risk is the potential impact (positive or negative) to an asset or some characteristic of value
that may arise from some present process or from some future event.
Software: A collection of programs developed to achieve a specific objective.
Testing: Testing is the process used to help identify the correctness, completeness, security and
quality of developed computer software.
1.1.1 Software systems context (K1)
Software systems are an increasing part of life, from business applications (e.g. banking)
to consumer products (e.g. cars). Most people have had an experience with software that did not
work as expected. Software that does not work correctly can lead to many problems, including
loss of money, time or business reputation, and could even cause injury or death.
1.1.2 Causes of software defects (K2)
A human being can make an error (mistake), which produces a defect (fault, bug) in the
code, in software or a system, or in a document. If a defect in code is executed, the system will
fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software,
systems or documents may result in failures, but not all defects do so. Defects occur because
human beings are fallible and because there is time pressure, complex code, complexity of
infrastructure, changed technologies, and/or many system interactions. Failures can be caused
by environmental conditions as well: radiation, magnetism, electronic fields, and pollution can
cause faults in firmware or influence the execution of software by changing hardware conditions.
An error is a human action that produces an incorrect result. A fault is a manifestation of an error
in software. Faults are also known colloquially as defects or bugs. A fault, if encountered, may
cause a failure, which is a deviation of the software from its expected delivery or service.
We can illustrate these points with the true story of the Mercury spacecraft. The computer program
aboard the spacecraft contained the following statement written in the FORTRAN programming language.
DO 100 i = 1.10
The programmer's intention was to execute the succeeding statements up to the line labelled 100 ten times,
creating a loop in which the integer variable I was used as the loop counter, starting at 1 and ending
at 10.
Unfortunately, what this code actually does is assign the decimal value 1.10 to a variable (because
FORTRAN ignores the spaces, the statement is read as an assignment), and it does that once only.
Therefore the remaining code is executed once and not 10 times within the loop. As a
result the spacecraft went off course and the mission was aborted at considerable cost!
The correct syntax for what the programmer intended is
DO 100 i = 1,10
1.1.3 Role of testing in software development, maintenance and operations (K2)
Rigorous testing of systems and documentation can help to reduce the risk of problems
occurring in an operational environment and contribute to the quality of the software system, if
defects found are corrected before the system is released for operational use. Software testing
may also be required to meet contractual or legal requirements, or industry-specific standards.
1.1.4 Testing and quality (K2)
With the help of testing, it is possible to measure the quality of software in terms of
defects found, for both functional and non-functional software requirements and characteristics
(e.g. reliability, usability, efficiency and maintainability). For more information on non-functional
testing see Chapter 2; for more information on software characteristics see ‘Software Engineering
– Software Product Quality’ (ISO 9126). Testing can give confidence in the quality of the software
if it finds few or no defects. A properly designed test that passes reduces the overall level of risk
in a system. When testing does find defects, the quality of the software system increases when
those defects are fixed. Lessons should be learned from previous projects. By understanding the
root causes of defects found in other projects, processes can be improved, which in turn should
prevent those defects reoccurring and, as a consequence, improve the quality of future systems.
Testing should be integrated as one of the quality assurance activities (e.g. alongside
development standards, training and defect analysis).
1.1.5 How much testing is enough? (K2)
Deciding how much testing is enough should take account of the level of risk, including
technical and business product and project risks, and project constraints such as time and
budget. (Risk is discussed further in Chapter 5.) Testing should provide sufficient information to
stakeholders to make informed decisions about the release of the software or system being
tested, for the next development step or handover to customers.
1.2 What is testing (K2)
Terms
Code, debugging, development (of software), requirement, review, test basis, test case,
testing, test objectives.
Test case: A test case is a set of conditions or variables under which a tester will determine if a
requirement of an application is partially or fully satisfied. It may take many test cases to
determine that a requirement is fully satisfied.
Test basis: The documentation (e.g. requirements and specifications) from which the requirements of a
component or system can be inferred, and on which the test cases are based.
Testing: The process of exercising or evaluating a system or system component by manual or
automated means to confirm that it satisfies specified requirements, or to identify differences
between expected and actual results.
Test objectives: A statement defining the purpose of the test.
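To make the idea of a test case concrete, here is a minimal sketch in Python; the requirement, field names and values are illustrative assumptions only and are not part of the syllabus. A test case pairs preconditions and inputs with an expected result for one requirement.

# Illustrative only: a test case records conditions/inputs and the expected result
# for a requirement. All identifiers and values here are hypothetical.
login_test_case = {
    "id": "TC-001",
    "requirement": "Valid users can log in",
    "preconditions": "user 'alice' exists and is active",
    "inputs": {"username": "alice", "password": "correct-password"},
    "expected_result": "user is taken to the home page",
}

def evaluate(test_case, actual_result):
    # A test passes when the observed behaviour matches the expected result.
    return "PASS" if actual_result == test_case["expected_result"] else "FAIL"

print(evaluate(login_test_case, "user is taken to the home page"))  # prints PASS

Several such cases, covering different conditions, may be needed before the requirement can be considered fully satisfied.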
Background
A common perception of testing is that it only consists of running tests, i.e. executing the
software.
This is part of testing, but not all of the testing activities. Test activities exist before and
after test execution, activities such as planning and control, choosing test conditions, designing
test cases and checking results, evaluating completion criteria, reporting on the testing process
and system under test, and finalizing or closure (e.g. after a test phase has been completed).
Testing also includes reviewing of documents (including source code) and static analysis. Both
dynamic testing and static testing can be used as a means for achieving similar objectives, and
will provide information in order to improve both the system to be tested, and the development
and testing processes.
There can be different test objectives:
Finding defects;
Gaining confidence about the level of quality and providing information;
Preventing defects.
The thought process of designing tests early in the life cycle (verifying the test basis via test
design) can help to prevent defects from being introduced into code. Reviews of documents (e.g.
requirements) also help to prevent defects appearing in the code. Different viewpoints in testing
take different objectives into account. For example, in development testing (e.g. component,
integration and system testing), the main objective may be to cause as many failures as possible
so that defects in the software are identified and can be fixed. In acceptance testing, the main
objective may be to confirm that the system works as expected, to gain confidence that it has met
the requirements. In some cases the main objective of testing may be to assess the quality of the
software (with no intention of fixing defects), to give information to stakeholders of the risk of
releasing the system at a given time. Maintenance testing often includes testing that no new
errors have been introduced during development of the changes. During operational testing, the
main objective may be to assess system characteristics such as reliability or availability.
Debugging and testing are different. Testing can show failures that are caused by defects.
Debugging is the development activity that identifies the cause of a defect, repairs the code and
checks that the defect has been fixed correctly. Subsequent confirmation testing by a tester
ensures that the fix does indeed resolve the failure. The responsibility for each activity is very
different, i.e. testers test and developers debug.
The process of testing and its activities is explained in Section 1.4.
1.3 General testing principles (K2)
Terms
Exhaustive testing: Testing that covers all combinations of input values and preconditions for an
element of the software under test
Principles
A number of testing principles have been suggested over the past 40 years and offer
general guidelines common for all testing.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases. Instead of exhaustive testing, we use risk and priorities to focus testing efforts.
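A rough back-of-the-envelope calculation (our own illustration, using assumed figures) shows why: even two independent 32-bit integer inputs already give 2^64 input combinations.

# Illustration of why exhaustive testing is infeasible (assumed figures).
values_per_input = 2 ** 32            # possible values of one 32-bit integer input
combinations = values_per_input ** 2  # all combinations of two independent inputs
tests_per_second = 1_000_000          # an optimistic automated execution rate
years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"{combinations} combinations, roughly {years:,.0f} years to execute")

Running the sketch shows an execution time of several hundred thousand years, which is why risk and priorities are used instead.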
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development
life cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release
testing, or show the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases
will no longer find any new bugs. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill
the users’ needs and expectations.
1.4 Fundamental test process (K1)
Terms
Confirmation testing, exit criteria, incident, regression testing, test basis, test condition,
test coverage, test data, test execution, test log, test plan, test strategy, test summary report,
testware.
Test data: simulated transactions that can be used to test processing logic, computations and
controls actually programmed in computer applications.
Test execution: The processing of a test case suite by the software under test, producing an
outcome.
Test log: A detail of what test cases were run, who ran the tests, in what order they were run,
and whether individual tests passed or failed.
Test plan: A record of the test planning process detailing the degree of tester independence, the
test environment, the test case design techniques and test measurement techniques to be used,
and the rationale for their choice.
Test strategy:
A test strategy is a statement of the overall approach to testing, identifying what levels of testing
are to be applied and the method, techniques and tool to be used.
Four components of a good test strategy
a) Critical success factor
b) Risk analysis
c) Assumptions
d) Methodology to be followed
Test summary report: A detail of all the important information to come out of the testing
procedure, including an assessment of how well the testing was performed, an assessment of the
quality of the system, any incidents that occurred, and a record of what testing was done and how
long it took, so that it can be used in the future.
Test planning: The output of test planning (the test plan) is used to determine whether the software
being tested is viable enough to proceed to the next stage of development.
Testware: The artifacts produced during the test process that are required to plan, design and
execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up
procedures, files, databases, environments and any additional software or utilities used in testing.
Background
The most visible part of testing is executing tests. But to be effective and efficient, test
plans should also include time to be spent on planning the tests, designing test cases, preparing
for execution and evaluating status.
The fundamental test process consists of the following main activities:
Planning and control;
Analysis and design;
Implementation and execution;
Evaluating exit criteria and reporting;
Test closure activities.
Although logically sequential, the activities in the process may overlap or take place concurrently.
1.4.1 Test planning and control (K1)
Test planning is the activity of verifying the mission of testing, defining the objectives of
testing and the specification of test activities in order to meet the objectives and mission. Test
control is the ongoing activity of comparing actual progress against the plan, and reporting the
status, including deviations from the plan. It involves taking actions necessary to meet the
mission and objectives of the project. In order to control testing, it should be monitored throughout
the project. Test planning takes into account the feedback from monitoring and control activities.
Test planning has the following major tasks:
Determining the scope and risks, and identifying the objectives of testing.
Determining the test approach (techniques, test items, coverage, identifying and
interfacing the teams involved in testing, testware).
Determining the required test resources (e.g. people, test environment, PCs).
Implementing the test policy and/or the test strategy.
Scheduling test analysis and design tasks.
Scheduling test implementation, execution and evaluation.
Determining the exit criteria.
Test control has the following major tasks:
Measuring and analyzing results;
Monitoring and documenting progress, test coverage and exit criteria;
Initiation of corrective actions;
Making decisions.
1.4.2 Test analysis and design (K1)
Test analysis and design is the activity where general testing objectives are transformed
into tangible test conditions and test designs.
1.4.5 Test closure activities (K1)
Test closure activities include the following major tasks:
Finalizing and archiving testware, the test environment and the test infrastructure for later
reuse.
Handover of testware to the maintenance organization.
Analyzing lessons learned for future releases and projects, and the improvement of test
maturity.
1.5 The psychology of testing (K2)
Terms
Independent testing : A common practice of software testing is that it is performed by an
independent group of testers after finishing the software product and before it is shipped to the
customer. This practice often results in the testing phase being used as project buffer to
compensate for project delays. Another practice is to start software testing at the same moment
the project starts and it is a continuous process until the project finishes.
Background
The mindset to be used while testing and reviewing is different from that used while
analyzing or developing. With the right mindset developers are able to test their own code, but
separation of this responsibility to a tester is typically done to help focus effort and provide
additional benefits, such as an independent view by trained and professional testing resources.
Independent testing may be carried out at any level of testing. A certain degree of independence
(avoiding the author bias) is often more effective at finding defects and failures. Independence is
not, however, a replacement for familiarity, and developers can efficiently find many defects in
their own code. Several levels of independence can be defined:
Tests designed by the person(s) who wrote the software under test (low level of
independence).
Tests designed by another person(s) (e.g. from the development team).
Tests designed by a person(s) from a different organizational group (e.g. an independent
test team).
Tests designed by a person(s) from a different organization or company (i.e. outsourcing
or certification by an external body).
People and projects are driven by objectives. People tend to align their plans with the objectives
set by management and other stakeholders, for example, to find defects or to confirm that
software works. Therefore, it is important to clearly state the objectives of testing. Identifying
failures during testing may be perceived as criticism against the product and against the author.
Testing is, therefore, often seen as a destructive activity, even though it is very constructive in the
management of product risks. Looking for failures in a system requires curiosity, professional
pessimism, a critical eye, attention to detail, good communication with development peers,
and experience on which to base error guessing. If errors, defects or failures are communicated
in a constructive way, bad feelings between the testers and the analysts, designers and
developers can be avoided. This applies to reviewing as well as in testing. The tester and test
leader need good interpersonal skills to communicate factual information about defects, progress
and risks, in a constructive way. For the author of the software or document, defect information
can help them improve their skills. Defects found and fixed during testing will save time and
money later, and reduce risks. Communication problems may occur, particularly if testers are
seen only as messengers of unwanted news about defects. However, there are several ways to
improve communication and relationships between testers and developers:
Start with collaboration rather than battles – remind everyone of the common goal of
better quality systems.
Communicate findings on the product in a neutral, fact-focused way without criticizing the
person who created it, for example, write objective and factual incident reports and
review findings.
Try to understand how the other person feels and why they react as they do.
Confirm that the other person has understood what you have said and vice versa.
2. Testing throughout the software life cycle (K2)
2.1 Software development models (K2)
Terms
Commercial off the shelf (COTS), incremental development model, test level, validation,
verification, V-model.
Background
Testing does not exist in isolation; test activities are related to software development activities.
Different development life cycle models need different approaches to testing.
Each phase produces deliverables required by the next phase in the life cycle.
Requirements are translated into design. Code is produced during implementation that is driven
by the design. Testing verifies the deliverable of the implementation phase against requirements.
Requirements
Business requirements are gathered in this phase. This phase is the main focus of the
project managers and stakeholders. Meetings with managers, stakeholders and users are held in
order to determine the requirements. Who is going to use the system? How will they use the
system? What data should be input into the system? What data should be output by the
system? These are general questions that get answered during a requirements gathering phase.
This produces a nice big list of functionality that the system should provide, which describes
functions the system should perform, business logic that processes data, what data is stored and
used by the system, and how the user interface should work. The overall result describes what the
system as a whole should do and how it performs, not how it is actually going to do it.
Design
The software system design is produced from the results of the requirements phase.
Architects have the ball in their court during this phase, and this is the phase in which their focus
lies. This is where the details of how the system will work are produced. Architecture, including
hardware and software, communication, and software design (UML is produced here), is all part of the
deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and
this is the longest phase of the software development life cycle. For a developer, this is the main
focus of the life cycle because this is where the code is produced. Implementation may overlap
with both the design and testing phases. Many tools exist (CASE tools) to automate the
production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that
the product is actually solving the needs addressed and gathered during the requirements
phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a
specific component of the system, while system tests act on the system as a whole. So in a
nutshell, that is a very basic overview of the general software development life cycle model. Now
let's delve into some of the traditional and widely used variations.
Maintenance
Once the software is delivered it enters the maintenance phase, in which defects found in operation
are fixed and enhancements are made to meet changing needs.
2.1.1 V-model (K2)
Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to four development levels.
The four levels used in this syllabus are:
Component (unit) testing;
Integration testing;
System testing;
Acceptance testing.
In practice, a V-model may have more, fewer or different levels of development and
testing, depending on the project and the software product. For example, there may be
component integration testing after component testing, and system integration testing after
system testing.
Software work products (such as business scenarios or use cases, requirement
specifications, design documents and code) produced during development are often the basis of
testing in one or more test levels. References for generic work products include Capability
Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207).
Verification and validation (and early test design) can be carried out during the development of
the software work products.
Testing is emphasized more in this model. The testing procedures are developed early in
the life cycle before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model. Before development is started, a system test plan is
created. The test plan focuses on meeting the functionality specified in the requirements
gathering.
The high-level design phase focuses on system architecture and design. An integration
test plan is created in this phase as well, in order to test the ability of the pieces of the software
system to work together.
The low-level design phase is where the actual software components are designed, and
unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is
complete, the path of execution continues up the right side of the V where the test plans
developed earlier are now put to use.
Advantages
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the development of test plans
early on during the life cycle.
Works well for small projects where requirements are easily understood.
Disadvantages
Very rigid, like the waterfall model.
Little flexibility and adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no early prototypes of the
software are produced.
Model does not provide a clear path for problems found during testing phases.
Firstly, in our experience, there is rarely a perfect, one-to-one relationship between the
documents on the left hand side and the test activities on the right. For example, functional
specifications don’t usually provide enough information for a system test. System tests must often
take account of some aspects of the business requirements as well as physical design issues for
example. System testing usually draws on several sources of requirements information to be
thoroughly planned.
Secondly, and more importantly, the V-Model has little to say about static testing at all. The V-
Model treats testing as a “back-door” activity on the right hand side of the model. There is no
mention of the potentially greater value and effectiveness of static tests such as reviews,
inspections, static code analysis and so on. This is a major omission and the V-Model does not
support the broader view of testing as a constantly prominent activity throughout the development
lifecycle.
If we focus on the static test techniques, you can see that there is a wide range of techniques
available for evaluating the products of the left hand side. Inspections, reviews, walkthroughs,
static analysis, requirements animation as well as early test case preparation can all be used.
2.1.2 Iterative development models (K2)
There are a number of different models for software development lifecycles. Some of the
more commonly used models are:
Waterfall Lifecycle Model
Spiral Lifecycle Model
Incremental Lifecycle Model
Iterative Lifecycle Model
Progressive Development Lifecycle Model
Prototyping Model
RAD Lifecycle Model
Waterfall Model
In the waterfall model, the phases described above (requirements, design, implementation, testing and
maintenance) are carried out strictly in sequence, with each phase completed before the next begins.
Advantages
Simple and easy to use.
Easy to manage due to the rigidity of the model; each phase has specific deliverables
and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Disadvantages
Adjusting scope during the life cycle can kill a project
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Poor model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Poor model where requirements are at a moderate to high risk of changing.
Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations (called
Spirals in this model). In the baseline spiral, starting in the planning phase, requirements are
gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a
process is undertaken to identify risk and alternate solutions. A prototype is produced at the end
of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the
phase. The evaluation phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the
spiral represents cost.
Advantages
High amount of risk analysis.
Good for large and mission-critical projects.
Software is produced early in the software life cycle.
Disadvantages
Can be a costly model to use.
Risk analysis requires highly specific expertise.
Projects success is highly dependent on the risk analysis phase.
Does not work well for smaller projects.
Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple
development cycles take place here, making the life cycle a multi-waterfall cycle. Cycles are
divided up into smaller, more easily managed iterations. Each iteration passes through the
requirements, design, implementation and testing phases.
A working version of software is produced during the first iteration, so you have working
software early on during the software life cycle. Subsequent iterations build on the initial software
produced during the first iteration.
Incremental Life Cycle Model
Advantages
Generates working software quickly and early during the software life cycle.
More flexible and less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during their own
iterations.
Each iteration is an easily managed milestone.
Disadvantages
Each phase of an iteration is rigid and does not overlap with the others.
Problems may arise pertaining to system architecture because not all requirements
are gathered up front for the entire software life cycle.
Iterative Model
An iterative lifecycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which can then be reviewed in order to identify further requirements. This process is
then repeated, producing a new version of the software for each cycle of the model.
Consider an iterative lifecycle model which consists of repeating the following four
phases in sequence:
A Requirements phase, in which the requirements for the software are gathered and analyzed.
Iteration should eventually result in a requirements phase that produces a complete and final
specification of requirements.
A Design phase, in which a software solution to meet the requirements is designed. This may be
a new design, or an extension of an earlier design.
An Implementation and Test phase, when the software is coded, integrated and tested.
A Review phase, in which the software is evaluated, the current requirements are reviewed, and
changes and additions to requirements proposed.
For each cycle of the model, a decision has to be made as to whether the software
produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes
referred to as incremental prototyping). Eventually a point will be reached where the requirements
are complete and the software can be delivered, or it becomes impossible to enhance the
software as required, and a fresh start has to be made.
The iterative lifecycle model can be likened to producing software by successive
approximation. Drawing an analogy with mathematical methods that use successive
approximation to arrive at a final solution, the benefit of such methods depends on how rapidly
they converge on a solution.
The key to successful use of an iterative software development lifecycle is rigorous
validation of requirements, and verification (including testing) of each version of the software
against those requirements within each cycle of the model. The first three phases of the example
iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each
cycle of the model produces software that requires testing at the unit level, for software
integration, for system integration and for acceptance. As the software evolves through
successive cycles, tests have to be repeated and extended to verify each version of the software.
Each delivery of software will have to pass acceptance testing to verify the software fulfils
the relevant parts of the overall requirements. The testing and integration of each phase will
require time and effort. Therefore, there is a point at which an increase in the number of
development phases will actually become counter productive, giving an increased cost and time
scale, which will have to be weighed carefully against the need for an early solution.
The software produced by an early phase of the model may never actually be used; it
may just serve as a prototype. A prototype will take short cuts in order to provide a quick means
of validating key requirements and verifying critical areas of design. These short cuts may be in
areas such as reduced documentation and testing. When such short cuts are taken, it is essential
to plan to discard the prototype and implement the next phase from scratch, because the reduced
quality of the prototype will not provide a good foundation for continued development.
Prototyping Model
Prototyping has been discussed in the literature as an important approach to early
requirements validation. A prototype is an enactable mock-up or model of a software system that
enables evaluation of features or functions through user and developer interaction with
operational scenarios. Prototyping exposes functional and behavioral aspects of the system as
well as implementation considerations, thereby increasing the accuracy of requirements and
helping to control their volatility during development.
RAD Lifecycle Model
The Rapid Application Development (RAD) lifecycle proceeds through four stages: Requirements
Planning, User Design, Construction and Implementation.
Requirements Planning:
The Requirements Planning stage consists of a review of the areas immediately
associated with the proposed system. This review produces a broad definition of the system
requirements in terms of the functions the system will support.
The deliverables from the Requirements Planning stage include an outline system area
model (entity and process models) of the area under study, a definition of the system's scope,
and a cost justification for the new system.
User Design:
The User Design stage consists of a detailed analysis of the business activities related to
the proposed system. Key users, meeting in workshops, decompose business functions and
define entity types associated with the system. They complete the analysis by creating action
diagrams defining the interactions between processes and data. Following the analysis, the
design of the system is outlined. System procedures are designed, and preliminary layouts of
screens are developed. Prototypes of critical procedures are built and reviewed. A plan for
implementing the system is prepared.
Construction:
In the Construction stage, a small team of developers, working directly with users,
finalizes the design and builds the system. The software construction process consists of a series
of "design-and-build" steps in which the users have the opportunity to fine-tune the requirements
and review the resulting software implementation. This stage also includes preparing for the
cutover to production.
In addition to the tested software, Construction stage deliverables include documentation
and instructions necessary to operate the new application, and routines and procedures needed
to put the system into operation.
Implementation
The implementation stage involves implementing the new system and managing the change from
the old system environment to the new one. This may include implementing bridges between
existing and new systems, converting data, and training users. User acceptance is the end point
of the implementation stage.
2.1.3 Testing within a life cycle model (K2)
In any life cycle model, there are several characteristics of good testing:
For every development activity there is a corresponding testing activity.
Each test level has test objectives specific to that level.
The analysis and design of tests for a given test level should begin during the corresponding
development activity.
Testers should be involved in reviewing documents as soon as drafts are available in the
development life cycle.
Test levels can be combined or reorganized depending on the nature of the project or the
system architecture. For example, for the integration of a commercial off the shelf (COTS)
software product into a system, the purchaser may perform integration testing at the system level
(e.g. integration to the infrastructure and other systems, or system deployment) and acceptance
testing (functional and/or non-functional, and user and/or operational testing).
2.2 Test levels (K2)
Terms
Alpha testing, beta testing, component testing (also known as unit, module or program
testing), contract acceptance testing, drivers, field testing, functional requirements, integration,
integration testing, non-functional requirements, operational (acceptance) testing, regulation
acceptance testing, robustness testing, stubs, system testing, test-driven development, test
environment, user acceptance testing.
Background
For each of the test levels, the following can be identified: their generic objectives, the
work product(s) being referenced for deriving test cases (i.e. the test basis), the test object (i.e.
what is being tested), typical defects and failures to be found, test harness requirements and tool
support, and specific approaches and responsibilities.
2.2.1 Component testing (K2)
It is the component testing perspective that is important and not the size of the pieces
being tested. That perspective views the software being tested as intended for integration with
other pieces rather than as a complete system in itself. This helps to determine both what features
of the software are tested and how they are tested.
2.2.2 Integration testing (K2)
The greater the scope of integration, the more difficult it becomes to isolate failures to a
specific component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-
down and bottom-up), functional tasks, transaction processing sequences, or some other aspect
of the system or component. In order to reduce the risk of late defect discovery, integration
should normally be incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in
integration testing.
At each stage of integration, testers concentrate solely on the integration itself. For
example, if they are integrating module A with module B they are interested in testing the
communication between the modules, not the functionality of either module. Both functional and
structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If
integration tests are planned before components or systems are built, they can be built in the
order required for most efficient testing.
You can do integration testing in a variety of ways but the following are three common
strategies:
The Top-Down approach to integration testing requires that the highest-level modules be
tested and integrated first. This allows high-level logic and data flow to be tested early in the
process and it tends to minimize the need for drivers. However, the need for stubs complicates
test management and low-level utilities are tested relatively late in the development cycle.
Another disadvantage of top-down integration testing is its poor support for early release of
limited functionality.
The Bottom-Up approach requires the lowest-level units be tested and integrated first.
These units are frequently referred to as utility modules. By using this approach, utility modules
are tested early in the development process and the need for stubs is minimized. The downside,
however, is that the need for drivers complicates test management and high-level logic and data
flow are tested late. Like the top-down approach, the bottom-up approach also provides poor
support for early release of limited functionality.
The third approach, sometimes referred to as the Umbrella approach, requires testing
along functional data and control-flow paths. First, the inputs for functions are integrated in the
bottom-up pattern discussed above. The outputs for each function are then integrated in the top-
down manner. The primary advantage of this approach is the degree of support for early release
of limited functionality. It also helps minimize the need for stubs and drivers. The potential
weaknesses of this approach are significant, however, in that it can be less systematic than the
other two approaches, leading to the need for more regression testing.
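As an illustration of the stubs and drivers mentioned above, here is a minimal Python sketch; the module and function names are hypothetical and only illustrate the idea, not any particular system.

# Minimal sketch of stubs and drivers used during incremental integration.
def price_lookup_stub(item_code):
    # Stub: replaces a lower-level pricing module that is not yet integrated,
    # returning a fixed, predictable value so higher-level logic can be tested.
    return 10.0

def calculate_order_total(item_codes, price_lookup=price_lookup_stub):
    # Higher-level module under test during top-down integration; it reaches
    # the lower level only through the injected price_lookup function.
    return sum(price_lookup(code) for code in item_codes)

def price_lookup_driver(price_lookup):
    # Driver: throwaway test code that calls a lower-level module directly
    # during bottom-up integration, before the higher-level modules exist.
    assert price_lookup("ABC-1") >= 0, "a price should never be negative"

if __name__ == "__main__":
    # Top-down style: exercise the high-level module with the stub in place.
    assert calculate_order_total(["A", "B", "C"]) == 30.0
    # Bottom-up style: exercise the lower-level module through its driver.
    price_lookup_driver(price_lookup_stub)
    print("integration sketches passed")

Top-down integration leans on stubs like price_lookup_stub, while bottom-up integration leans on drivers like price_lookup_driver; the umbrella approach mixes both along functional paths.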
A big bang project is one that has no staged delivery. The customer must wait,
sometimes months, before seeing anything from the development team. At the end of the wait
comes a "big bang", which often results in the customer being disappointed.
2.2.3 System testing (K2)
Generally speaking, System testing is the first time that the entire system can be tested
as a whole system against the Functional Requirement Specification(s) (FRS) and/or the System
Requirement Specification (SRS). These are the rules that describe the functionality that the
vendor (the entity developing the software) and a customer have agreed upon. System testing
tends to be more of an investigatory testing phase, where the focus is to have almost a
destructive attitude and test not only the design, but also the behavior and even the believed
expectations of the customer. System testing is intended to test up to and beyond the bounds
defined in the software/hardware requirements specification(s).
One could view System testing as the final destructive testing phase before Acceptance
testing.
Types of System testing
Functional testing
User interface testing
Model based testing
Error exit testing
User help testing
Security Testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Documentation testing
Although different testing organizations may prescribe more or fewer types of testing
within System testing, this list serves as a general framework or foundation to begin with.
2.2.4 Acceptance testing (K2)
Acceptance testing may occur as more than just a single test level.
2.3 Test types: the targets of testing (K2)
Background
A group of test activities can be aimed at verifying the software system (or a part of a
system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be the testing of a
function to be performed by the software; a non-functional quality characteristic, such as reliability
or usability, the structure or architecture of the software or system; or related to changes, i.e.
confirming that defects have been fixed (confirmation testing) and looking for unintended changes
(regression testing).
A model of the software may be developed and/or used in structural and functional
testing. For example, in functional testing a process flow model, a state transition model or a plain
language specification; and for structural testing a control flow model or menu structure model.
2.3.1 Testing of function (functional testing) (K2)
Functional testing considers the external behavior of the software (black-box testing); it is
testing without knowledge of the internal workings of the item being tested.
For example, when black box testing is applied to software engineering, the tester would
only know the "legal" inputs and what the expected outputs should be, but not how the program
actually arrives at those outputs. It is because of this that black box testing can be considered
testing with respect to the specifications, no other knowledge of the program is necessary.
Here, the tester and the programmer can be independent of one another, avoiding
programmer bias toward his own work.
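As a small illustration, the sketch below shows a black-box style unit test in Python; the shipping function, its pricing rule and the chosen values are our own assumptions, not part of the syllabus. The tests exercise the unit purely through "legal" inputs and expected outputs and never look at how the result is computed.

import unittest

def shipping_cost(weight_kg):
    # The implementation is irrelevant to the black-box tester; only the
    # specified behaviour matters. (Hypothetical rule: 5.00 flat fee up to
    # 2 kg, then 1.50 per additional kg.)
    if weight_kg <= 2:
        return 5.0
    return 5.0 + 1.5 * (weight_kg - 2)

class BlackBoxShippingTests(unittest.TestCase):
    # Each test supplies an input and checks only the specified output.
    def test_light_parcel_flat_fee(self):
        self.assertEqual(shipping_cost(1), 5.0)

    def test_heavier_parcel_per_kg_charge(self):
        self.assertEqual(shipping_cost(4), 8.0)

if __name__ == "__main__":
    unittest.main()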
Disadvantages of black box testing include:
- only a small number of possible inputs can actually be tested; to test every possible input
stream would take nearly forever
- without clear and concise specifications, test cases are hard to design
- there may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried
- may leave many program paths untested
- cannot be directed toward specific segments of code which may be very complex (and
therefore more error prone)
- most testing related research has been directed toward glass box testing
Other functional testing techniques include: Transaction testing, Syntax testing, Domain
testing, Logic testing, and State testing.
2.3.2 Testing of software product characteristics (non-functional testing) (K2)
Non-functional requirements are properties and qualities the software system must
possess while providing its intended functional requirements or services. These types of
requirements have to be considered while developing the functional counterparts. They greatly
affect the design and implementation choices a developer may make. They also affect the
acceptability of the developed software by its intended users.
In the following, we briefly describe several categories of non-functional requirements
that may be imposed on a software system.
Operational requirements: These requirements specify the environment in which the software
will be running, including, hardware platforms, external interfaces, and operating systems.
Performance requirements: These requirements specify possibly lower and upper bounds on
speed, response time and storage characteristics of the software.
Maintainability requirements: These requirements specify the expected response time for
dealing with the various maintenance activities, such as future release dates.
Portability requirements: These requirements specify future plans for porting the software to
different operating environments. These requirements are linked to both operating and
maintainability requirements and may impose certain design decisions and implementation
choices such as the choice of a programming language.
Security requirements: These requirements specify the levels and types of security
mechanisms that need to be satisfied during the operations of the system. These may include
adherence to specific security standards and plans, and the implementation of specific
techniques.
Reliability Testing:
Under this category, the application (in this example, the Evolution mail client) is tested to
evaluate its ability to perform its required functions under stated conditions and sets of operations
for a specified period of time or number of iterations.
Performance Testing:
This testing evaluates the time taken, or response time, for Evolution to perform its
required functions (in the mailer, calendar, address book and tasks) under stated conditions, in
comparison with different versions of Evolution.
Scalability Testing:
Scalability is the capability of the software product to be upgraded to accommodate
increased loads. Scalability testing of the Evolution client is done together with the performance
testing.
Compatibility Testing:
Testing whether the system is compatible with other systems with which it should
communicate.
Testing to be carried out to validate proper inter-working of interconnecting network
facilities and equipment. Compatibility tests are performed prior to cutover to validate functional
capabilities and services provided over the interconnections. [T1.234-1993]
Compatibility testing measures how well pages display on different clients, for example:
different browsers, different browser versions, different operating systems, and different machines. At issue
are the different implementations of HTML by the various browser manufacturers and the different
machine platform display and rendering characteristics. Also called browser compatibility testing
and cross-browser testing.
Unit Testing:
The developer carries out unit testing in order to check whether a particular module or unit of
code works as intended. Unit testing is the most basic level of testing, as it is carried out as and
when a unit of code is developed or a particular piece of functionality is built.
Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the
application is executed at least once. It helps in assuring that all the statements execute without
any unexpected side effects.
Branch Coverage:
Software is rarely written as a single continuous sequence of statements; at some point the
code must branch in order to perform a particular piece of functionality. Branch coverage testing
helps validate all the branches in the code, making sure that no branch leads to
abnormal behavior of the application.
Security Testing:
Security Testing is carried out in order to find out how well the system can protect itself
from unauthorized access, hacking, cracking, code damage and other attacks that target the code
of the application. This type of testing needs sophisticated testing techniques.
Mutation Testing:
A kind of testing in which small changes (mutations) are deliberately introduced into the
application's code and the existing tests are re-run to see whether they detect the change. It helps
in evaluating how effective the test cases are and in finding out where additional tests are needed.
Besides all the testing types given above, there are some more types which fall under
both black-box and white-box testing strategies, such as: functional testing (which checks that the
code delivers its required functional behavior), incremental integration testing (which deals
with the testing of newly added code in the application), and performance and load testing (which
helps in finding out how the particular code manages resources and performs under load).
Background
Once deployed, a software system is often in service for years or decades. During this
time the system and its environment is often corrected, changed or extended. Maintenance
testing is done on an existing operational system, and is triggered by modifications, migration, or
retirement of the software or system.
Modifications include planned enhancement changes (e.g. release-based), corrective and
emergency changes, and changes of environment, such as planned operating system or
database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating
system.
Maintenance testing for migration (e.g. from one platform to another) should include
operational tests of the new environment, as well as of the changed software. Maintenance
testing for the retirement of a system may include the testing of data migration or archiving if long
data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes extensive
regression testing to parts of the system that have not been changed. The scope of maintenance
testing is related to the risk of the change, the size of the existing system and to the size of the
change.
Depending on the changes, maintenance testing may be done at any or all test levels
and for any or all test types.
Determining how the existing system may be affected by changes is called impact
analysis, and is used to help decide how much regression testing to do.
Maintenance testing can be difficult if specifications are out of date or missing.
The objective of the Testing stage is to ensure that the system is modified to take care of
the business change, and the system meets the business requirements to a level acceptable to
the users.
It is quite well known that fixing bugs during maintenance quite often leads to new
problems or bugs. This leads to frustration and disappointment for customers and also leads to
increased effort and cost for re-work for the maintaining organization. This kind of “bug creep”
usually occurs due to several reasons. The most common ones being:
Integration: The maintenance engineer fixes a bug in one component and makes the necessary
changes to it. While doing so, the impact of this change on other components and
areas is often ignored. The engineer then tests this component (which works fine) and
proceeds to offer a patch to the customer. When the customer installs it, the problem
reported is solved but new problems crop up.
Deployment: The maintenance engineer has no knowledge of the deployment scenarios to be
supported – the firewalls, proxy servers, client browsers, security policies, networking
issues, etc. which exist at the customer site. These are often ignored during the
maintenance phase, leading to a fix of the problem that works only in theory.
Ideal setup: While fixing bugs, the maintenance engineer quite often assumes an “ideal setup”: all
registry entries have been made, database connections have been set up, authorizations
and permissions exist, and relationships have been set. But when a bug is fixed and deployed,
the engineer has to realize that all these and many other system variables need to be
checked and set.
These and several such problems can be addressed by including Maintenance Testing in the
processes for releasing maintenance patches.
Broadly, the steps that make up Maintenance Testing are:
Prepare for Testing
Conduct Unit Tests
Test System
Prepare for Acceptance Tests
Conduct Acceptance Tests
Of course, including Maintenance Testing as just one more “activity to be performed” will not
deliver the desired results. The test engineers working on maintenance testing should be
passionate about delivering a solution that solves the problems of customers and allows them to
proceed seamlessly. Applied systematically and scientifically, maintenance testing can add
significantly to the satisfaction of customers who are paying hefty maintenance contracts and
could lead to increased trust between the customer and the maintenance provider.
3. Static techniques (K2)
Background
Static testing techniques do not execute the software that is being tested; they are
manual (reviews) or automated (static analysis).
Reviews are a way of testing software work products (including code) and can be
performed well before dynamic test execution. Defects detected during reviews early in the life
cycle are often much cheaper to remove than those detected while running tests (e.g. defects
found in requirements).
A review could be done entirely as a manual activity, but there is also tool support. The
main manual activity is to examine a work product and make comments about it. Any software
work product can be reviewed, including requirement specifications, design specifications, code,
test plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development
productivity improvements, reduced development timescales, reduced testing cost and time,
lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions,
for example, in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective – identifying
defects. They are complementary: the different techniques can find different types of defect
effectively and efficiently. In contrast to dynamic testing, reviews find defects rather than failures.
Typical defects that are easier to find in reviews than in dynamic testing are: deviations
from standards, requirement defects, design defects, insufficient maintainability and incorrect
interface specifications.
A review is a team activity to detect defects at an early stage of development and to assess whether
the project should continue as planned.
Advantages of Review:
The defects are found in the early phase.
Cost effective.
Co-ordination between the team members increases.
Offers training, which helps the members to understand the product.
Reduces the work of a tester.
Informal review
Key characteristics:
No formal process;
There may be pair programming or a technical lead reviewing designs and code;
Optionally may be documented;
May vary in usefulness depending on the reviewer;
Main purpose: inexpensive way to get some benefit.
An informal review is often a one-to-one discussion, which can happen between any two individuals who are
involved in a project. No formal planning is done, and the results are not formally reported. It
happens at all stages of software development and is also called a peer review.
Walkthrough
Key characteristics:
Meeting led by author;
Scenarios, dry runs, peer group;
Open-ended sessions;
Optionally a pre-meeting preparation of reviewers, review report, lists of findings and scribe (who
is not the author)
May vary in practice from quite informal to very formal
Main purposes: learning, gaining understanding, defects finding.
Technical review
Key characteristics:
Documented, defined defect-detection process that includes peers and technical experts;
May be performed as a peer review without management participation;
Ideally led by trained moderator (not the author);
Pre-meeting preparation;
Optionally the use of checklists, review report, lists of findings and management participation;
May vary in practice from quite informal to very formal;
Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical
problems and check conformance to specifications and standards.
Inspection
Key characteristics:
Led by trained moderator (not the author)
Usually peer examinations
Defined roles
Includes metrics
Formal process based on rules and checklists with entry and exit criteria
Pre-meeting preparation
Inspection report, list of findings
Formal follow-up process
Optionally, process improvement and reader
Main purpose: find defects.
Typical outputs of an inspection include:
1. For Inspections: The group checklist with all items covered and components relating to
each item
2. For Inspections: A status or Summary Report
3. A list of defects found, and classified by type and frequency
4. Review metric data
The inspection report on the reviewed item may contain a summary of defects and problems
found and a list of review attendees, and some review measures such as the time period for the
review and the total number of major/minor defects. The several status options available are as
follows:
1. Accept: The reviewed item is accepted in its present form or with minor rework required
that does not need further verification
2. Conditional Accept: The reviewed item needs rework and will be accepted after the
moderator has checked and verified the rework
3. Reinspect: Considerable rework must be done on the reviewed item. The inspection
needs to be repeated when the rework is done.
Conclusion
The many benefits of reviews and arguments for use of reviews are strong enough to
convince organizations that they should use reviews as a quality and productivity-enhancing tool.
One of the maturity goals that must be satisfied to reach level 4 of TMM is to establish a review
program. This implies that a formal review program needs to be put in place, supported by
policies, resources, training, and requirements for mandatory reviews of software artifacts.
Developers typically use static analysis before and during component and integration testing, whereas
designers use it during software modeling. Static analysis tools may produce a large number of
warning messages, which need to be well managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
Example - QA·MISRA
About MISRA:
"Guidelines For the Use of the C Language in Vehicle Based Software" was published by the
Motor Industry Software Reliability Association to promote safe use of the C language in the
automotive industry. It contains rules defining a subset of the C language and places great
emphasis on the value of static analysis to enforce compliance. MISRA is widely accepted as a
model for best practices by leading developers, not just in the automotive industry, but in
aerospace, telecom, medical devices, defense, and others.
Features
Detects and reports non-compliant code
Links warning messages directly with the source code and the appropriate
Standards.
Provides cross references via further HTML links to the appropriate rule definition and
explanatory examples
Produces code quality reports detailing the number and type of violations that occurred in
each file whilst linking them to the appropriate part of the source code
Generates textual and graphical software metric reports that highlight code testability,
maintainability and portability
Draws code visualization diagrams that enhance source code comprehension and
simplify the review process
Integrates with configuration management tools
Allows users to tailor or add checks appropriate to individual company standards or
conventions
Benefits
Ensures all code complies with statically enforceable standards
Allows tailoring and extension of the rules to meet local requirements
Educates developers with regard to "safe" language usage
Offers an automatic, repeatable and efficient code verification method
Establishes a software quality benchmark against which subsequent revisions of code
can be measured and compared
Enhances source code comprehension
Improves software testability and maintainability
Improves code portability
Prevents coding and implementation errors from reaching the software testing phase
Identifies software issues that may not otherwise be identified
Reduces software development time and cost
Increases software quality
Supports software validation, software process maturity and various standards initiatives
4. Test design techniques (K3)
Terms
Test cases, test case specification, test condition, test data, test procedure specification, test
script, traceability.
Terms Definition
Test cases:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of
testing. A Test Case will consist of information such as the requirement being tested, test steps,
verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution
preconditions, and expected outcomes developed for a particular objective, such as to exercise a
particular program path or to verify compliance with a specific requirement.
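A hypothetical example (the names and values below are illustrative only, not taken from the
syllabus) of the information a single test case might carry:
Test case ID: TC-LOGIN-001
Objective: Verify that a registered user can log in with valid credentials.
Preconditions: A user account "demo" exists and is active.
Test steps: 1) Open the login page; 2) Enter user name "demo" and its password; 3) Press Login.
Test data: User name "demo" and its valid password.
Expected outcome: The home page is displayed and the logged-in user name is shown.
Test environment: Staging server, any supported browser.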
Test case specification:
A document specifying the test approach for a software feature or combination of
features and the inputs, predicted results and execution conditions for the associated tests.
Test condition:
A test condition is an item or event of a component or system that could be verified by one
or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.
Background
The process of identifying test conditions and designing tests consists of a number of steps:
Designing tests by identifying test conditions.
Specifying test cases.
Specifying test procedures.
The process can be done in different ways, from very informal with little or no documentation, to
very formal (as it is described in this section). The level of formality depends on the context of the
testing, including the organization, the maturity of testing and development processes, time
constraints, and the people involved.
During test design, the test basis documentation is analyzed in order to determine what
to test, i.e. to identify the test conditions. A test condition is defined as an item or event that could
be verified by one or more test cases (e.g. a function, transaction, quality characteristic or
structural element).
Establishing traceability from test conditions back to the specifications and requirements
enables both impact analysis, when requirements change, and requirements coverage to be
determined for a set of tests. During test design the detailed test approach is implemented based
on, among other considerations, the risks identified (see Chapter 5 for more on risk analysis).
During test case specification the test cases and test data are developed and described
in detail by using test design techniques. A test case consists of a set of input values, execution
preconditions, expected results and execution post-conditions, developed to cover certain test
condition(s). The ‘Standard for Software Test Documentation’ (IEEE 829) describes the content of
test design specifications and test case specifications.
Expected results should be produced as part of the specification of a test case and
include outputs, changes to data and states, and any other consequences of the test. If expected
results have not been defined then a plausible, but erroneous, result may be interpreted as the
correct one. Expected results should ideally be defined prior to test execution. The test cases are
put in an executable order; this is the test procedure specification.
The test procedure (or manual test script) specifies the sequence of action for the
execution of a test. If tests are run using a test execution tool, the sequence of actions is
specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a
test execution schedule that defines the order in which the various test procedures, and possibly
automated test scripts, are executed, when they are to be carried out and by whom. The test
execution schedule will take into account such factors as regression tests, prioritization, and
technical and logical dependencies.
Background
The purpose of a test design technique is to identify test conditions and test cases.
It is a classic distinction to denote test techniques as black box or white box. Black-box techniques
(also called specification-based techniques) are a way to derive and select test conditions or test cases based
on an analysis of the test basis documentation, whether functional or non-functional, for a component or
system without reference to its internal structure. White-box techniques (also called structural or structure-
based techniques) are based on an analysis of the internal structure of the component or system.
Some techniques fall clearly into a single category; others have elements of more than one
category. This syllabus refers to specification-based or experience-based approaches as black-box
techniques and structure-based as white-box techniques.
Common features of specification-based techniques:
Models, either formal or informal, are used for the specification of the problem to be solved, the
software or its components.
From these models test cases can be derived systematically.
Specification Based Testing refers to the process of testing a program based on what its
specification says its behavior should be. Specification-based testing is also called Black Box
Testing. In particular, we can develop test cases based on the specification of the program's
behavior, without seeing an implementation of the program. Furthermore, we can develop test
cases before the program even exists!
For example, without writing a program at all, based only on a given specification, and without
knowing anything about the ultimate implementation, a tester can generate a set of test data
considered sufficient to test any program written in accordance with that specification. If the
specification is incomplete in any way, the tester should state what assumptions are being made
about how it should be completed or clarified.
Equivalence partitioning divides the input domain of a program into classes. For each of
these equivalence classes, the set of data should be treated the same by the module under test
and should produce the same answer. Test cases should be designed so the inputs lie within
these equivalence classes.
(Beizer, 1995) For example, for tests of “Go to Jail” the most important thing is whether the player
has enough money to pay the $50 fine. Therefore, two equivalence classes can be
partitioned: players who have at least $50 (enough to pay the fine) and players who have less than $50.
1. The tester must consider both valid and invalid equivalence classes.
2. The derivation of input or output equivalence classes is a heuristic process.
List of conditions (equivalence class partitioning)
1. If an input condition of the software-under-test is specified as a range of values, select one
valid equivalence class that covers the allowed range and two invalid equivalence classes, one
outside each end of the range.
a. For example, for a range of 1-499, select one valid equivalence class that includes
all the values from 1 to 499, a second equivalence class that consists of
all values less than 1, and a third equivalence class that consists of all values
greater than 499.
2. If an input condition of the software-under-test is specified as a number of values, select one
valid equivalence class that covers all allowed numbers of values and two invalid
equivalence classes that are outside each end of the allowed number.
a. For example, if the specification for a real estate related module says that a house
can have one to four owners, then we select one valid equivalence class that includes
all the valid numbers of owners and then two invalid equivalence classes for fewer
than one owner and more than four owners (see the sketch after this list).
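As a minimal sketch of the 1-499 range example above (the code and the isInAllowedRange
check are hypothetical, assumed only for illustration), one representative value is drawn from
each equivalence class:
public class RangePartitionExample {
    // Assumed implementation of the range check under test (valid range 1-499).
    static boolean isInAllowedRange(int value) {
        return value >= 1 && value <= 499;
    }
    public static void main(String[] args) {
        System.out.println(isInAllowedRange(250));  // valid class 1..499      -> true
        System.out.println(isInAllowedRange(0));    // invalid class below 1   -> false
        System.out.println(isInAllowedRange(500));  // invalid class above 499 -> false
    }
}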
List of conditions (boundary value analysis)
1. If an input condition of the software-under-test is specified as a range of values, develop valid
test cases for the ends of the range, and invalid test cases for possibilities just above
and below the ends of the range.
a. For example, if a specification states that an input value for a module must lie in the
range between -1.0 and +1.0, develop valid tests for the values at the ends of the range,
as well as invalid tests for values just outside the ends of the range. The resulting
test values would be -1.0 and 1.0 (valid) and -1.1 and 1.1 (invalid), as shown in the
sketch after this list.
2. If an input condition of the software-under-test is specified as a number of values, develop
valid test cases for the minimum and maximum numbers as well as invalid test cases that
include one less than the minimum and one greater than the maximum.
a. For example, if the specification for a real estate related module says that a house
can have one to four owners, tests that include 0 and 1 owners and 4 and 5 owners
would be developed.
3. If an input or output of the software-under-test is an ordered set, such as a table or a
linear list, develop tests that focus on the first and last elements of the set.
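A minimal sketch of the -1.0 to +1.0 range example above (the isInRange check is an assumed
implementation used only for illustration), showing the boundary values selected by this technique:
public class BoundaryValueExample {
    // Assumed implementation of the range check under test (valid range -1.0 to +1.0).
    static boolean isInRange(double value) {
        return value >= -1.0 && value <= 1.0;
    }
    public static void main(String[] args) {
        double[] validBoundaries   = { -1.0, 1.0 };   // values at the ends of the range
        double[] invalidBoundaries = { -1.1, 1.1 };   // values just outside the range
        for (double v : validBoundaries) {
            System.out.println(v + " -> " + isInRange(v));   // expected: true
        }
        for (double v : invalidBoundaries) {
            System.out.println(v + " -> " + isInRange(v));   // expected: false
        }
    }
}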
An example of the Application of Equivalence Class Partitioning and Boundary Value
Analysis.
Suppose we are testing a module that allows a user to enter new widget identifiers into a
widget database. The input specification for the module states that a widget identifier should
consist of 3-15 alphanumeric characters of which the first two must be letters. We have three
separate input conditions,
1. It must consist of alphanumeric characters
2. The range for the total number of characters is between 3 and 15
3. The first two characters must be letters
In the case of Equivalence Class Partitioning we consider condition 1 and derive two equivalence
classes: one valid class of identifiers consisting only of alphanumeric characters, and one invalid
class of identifiers containing at least one non-alphanumeric character.
In the case of Boundary Value Analysis, a simple set of abbreviations can be used to represent the
boundary groups, for example the values at, just below and just above the lower bound (3
characters) and the upper bound (15 characters) of the length condition.
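A minimal sketch of how the three input conditions above can be turned into concrete test values
(the isValidWidgetId check below is an assumed implementation, since the module itself is not given):
public class WidgetIdExample {
    // Assumed implementation: 3-15 alphanumeric characters, the first two must be letters.
    static boolean isValidWidgetId(String id) {
        return id.matches("[A-Za-z]{2}[A-Za-z0-9]{1,13}");
    }
    public static void main(String[] args) {
        // Equivalence classes: valid identifier, non-alphanumeric character, non-letter prefix.
        System.out.println(isValidWidgetId("AB123"));            // valid                        -> true
        System.out.println(isValidWidgetId("AB-12"));            // non-alphanumeric character   -> false
        System.out.println(isValidWidgetId("1B123"));            // first character not a letter -> false
        // Boundary values on the length condition (3 and 15 valid; 2 and 16 invalid).
        System.out.println(isValidWidgetId("ABC"));              // length 3  -> true
        System.out.println(isValidWidgetId("AB1234567890123"));  // length 15 -> true
        System.out.println(isValidWidgetId("AB"));               // length 2  -> false
        System.out.println(isValidWidgetId("AB12345678901234")); // length 16 -> false
    }
}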
A decision table is a two-dimensional matrix with one row for each possible action and
one row for each relevant condition and one column for each combination of condition states.
Decision tables can very concisely and rigorously show complex conditions and their resulting
actions while remaining comprehensible to a human reader.
The first set of rows indicates the possible actions that may be taken. An "X" in an action
row shows that the action will be taken under the condition states indicated in the column below.
In a binary (limited-entry) decision table, conditions are binary, restricting condition evaluations to "yes" and
"no". This results in a number of columns equal to 2^(number of conditions), which can quickly result in a
huge number of columns as the number of conditions rises. Fortunately, it is unusual that every
combination of conditions results in a different action.
The use of the "-" notation means that the condition in that row does not affect the action to
be taken. In the first column of such a table, for instance, no action is taken, no matter the state of
the last two conditions, as long as the first condition is false. Each "-" notation reduces the number
of columns necessary and increases the comprehensibility of the table. In the hypothetical example
below, the 2^3 = 8 possible combinations are reduced to 4.
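The original figure is not reproduced here; the following hypothetical table illustrates the idea for
three binary conditions, with actions in the first set of rows and conditions below, where the "-"
entries collapse the 2^3 = 8 combinations into 4 columns:
                                  Rule 1   Rule 2   Rule 3   Rule 4
Action: apply member discount                X
Action: send renewal reminder                         X
Condition: customer is a member?    F        T        T        T
Condition: order is over $100?      -        T        F        F
Condition: membership expiring?     -        -        T        F
In Rule 1 no action is taken whatever the state of the last two conditions, because the first
condition is false; Rule 2 likewise covers two combinations with a single column.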
Background
Structure-based testing/white-box testing is based on an identified structure of the software or
system, as seen in the following examples:
Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
Integration level: the structure may be a call tree (a diagram in which modules call other modules).
System level: the structure may be a menu structure, business process or web page structure.
In this section, two code-related structural techniques for code coverage, based on statements and
decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the
alternatives for each decision.
Code coverage analysis is a structural testing technique (AKA glass box testing and white box
testing). Structural testing compares test program behavior against the apparent intention of the
source code.
Structural testing examines how the program works, taking into account possible pitfalls
in the structure and logic.
Structural testing is also called path testing since you choose test cases that cause paths
to be taken through the structure of the program.
At first glance, structural testing seems unsafe. Structural testing cannot find errors of
omission. However, requirements specifications sometimes do not exist, and are rarely complete.
This is especially true near the end of the product development time line when the
requirements specification is updated less frequently and the product itself begins to take over the
role of the specification.
Statement Coverage
To achieve statement coverage, every executable statement in the program is invoked at least
once during software testing. Achieving statement coverage shows that all code statements are
reachable. Statement coverage is considered a weak criterion because it is insensitive to some
control structures.
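A minimal sketch (hypothetical code) of why statement coverage is a weak criterion: a single test
executes every statement, yet the false outcome of the decision is never exercised.
public class StatementCoverageExample {
    static int positiveOrZero(int x) {
        int result = 0;
        if (x > 0) {          // the only decision in this method
            result = x;       // the only statement controlled by the decision
        }
        return result;
    }
    public static void main(String[] args) {
        // This one test achieves 100% statement coverage...
        System.out.println(positiveOrZero(5));    // -> 5
        // ...but the decision never evaluates to false, so decision (branch)
        // coverage is only 50%. A second test such as positiveOrZero(-3) is needed.
    }
}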
Decision Coverage
This measure reports whether Boolean expressions tested in control structures (such as
the if-statement and while-statement) evaluated to both true and false. The entire Boolean
expression is considered one true-or-false predicate regardless of whether it contains logical-and
or logical-or operators.
Additionally, this measure includes coverage of switch-statement cases, exception
handlers, and interrupt handlers.
Decision coverage is also known as: branch coverage, all-edges coverage, basis path coverage, C2, and
decision-decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
Decision coverage requires two test cases: one for a true outcome and another for a
false outcome.
For simple decisions (i.e., decisions with a single condition), decision coverage ensures
complete testing of control constructs. But, not all decisions are simple. For the decision (A or B),
test cases (TF) and (FF) will toggle the decision outcome between true and false.
However, the effect of B is not tested; that is, those test cases cannot distinguish
between the decision (A or B) and the decision A.
Condition Coverage
Condition coverage requires that each condition in a decision take on all possible
outcomes at least once (to overcome the problem in the previous example), but does not require
that the decision take on all possible outcomes at least once.
In this case, for the decision (A or B), test cases (TF) and (FT) meet the
coverage criterion, but do not cause the decision to take on all possible outcomes. As with
decision coverage, a minimum of two test cases is required for each decision.
Condition coverage reports the true or false outcome of each Boolean sub-expression,
separated by logical-and and logical-or if they occur. Condition coverage measures the sub-
expressions independently of each other.
This measure is similar to decision coverage but has better sensitivity to the control flow.
However, full condition coverage does not guarantee full decision coverage. For example,
consider the following C++/Java fragment.
bool f(bool e) { return false; }      // ignores its argument; always returns false
bool t[2] = { false, false };         // both array entries are false
bool a, b;                            // the condition operands; their values do not matter
if (f(a && b)) ...                    // always branches false
if (t[int(a && b)]) ...               // always branches false
if ((a && b) ? false : false) ...     // always branches false
All three of the if-statements above branch false regardless of the values of a and b.
However if you exercise this code with a and b having all possible combinations of values,
condition coverage reports full coverage.
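A minimal sketch (hypothetical code) contrasting the test sets discussed above for the decision
(A or B): (TF, FF) achieves decision coverage, (TF, FT) achieves condition coverage without
decision coverage, and all four combinations achieve multiple condition coverage.
public class CoverageComparisonExample {
    static boolean decide(boolean a, boolean b) {
        if (a || b) {      // the decision (A or B)
            return true;
        }
        return false;
    }
    public static void main(String[] args) {
        // Decision coverage: (TF) -> true and (FF) -> false, but the effect of B is never tested.
        System.out.println(decide(true, false) + " " + decide(false, false));
        // Condition coverage: (TF) and (FT) exercise each condition both ways,
        // yet the decision is true in both cases, so decision coverage is not achieved.
        System.out.println(decide(true, false) + " " + decide(false, true));
        // Multiple condition coverage: all four combinations TT, TF, FT and FF.
        System.out.println(decide(true, true) + " " + decide(true, false) + " "
                + decide(false, true) + " " + decide(false, false));
    }
}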
Multiple Condition Coverage
To achieve full multiple condition coverage of a complex condition, many test cases may be
needed, and the number depends on the structure of the expression rather than just its size: two
conditions with the same number of operands and operators may require, for example, 6 and 11
test cases respectively. As with condition coverage, multiple condition coverage does not include
decision coverage.
For languages without short circuit operators such as Visual Basic and Pascal, multiple
condition coverage is effectively path coverage (described below) for logical expressions, with the
same advantages and disadvantages.
For a decision with two conditions a and b, multiple condition coverage requires four test cases,
one for each combination of a and b being true or false. As with path coverage, each additional
logical operator doubles the number of test cases required.
4.5 Experience-based techniques (K2)
Terms
Error guessing, exploratory testing.
Background
Perhaps the most widely practiced technique is error guessing. Tests are derived from the tester’s
skill and intuition and their experience with similar applications and technologies. When used to augment
systematic techniques, intuitive testing can be useful to identify special tests not easily captured by formal
techniques, especially when applied after more formal approaches. However, this technique may yield
widely varying degrees of effectiveness, depending on the testers’ experience.
A structured approach to the error guessing technique is to enumerate a list of possible errors and
to design tests that attack these errors. These defect and failure lists can be built based on experience,
available defect and failure data, and from common knowledge about why software fails.
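As a hedged illustration of attacking a guessed defect list with tests (the parseAmount function
below and its behavior are hypothetical, not part of the syllabus):
public class ErrorGuessingExample {
    // Assumed function under test: parses a money amount such as "12.50".
    static double parseAmount(String text) {
        return Double.parseDouble(text);
    }
    public static void main(String[] args) {
        // Guessed error sources: empty input, non-numeric input, negative amounts,
        // surrounding spaces, and very large values.
        String[] suspectInputs = { "", "abc", "-1.00", " 12.50 ", "999999999999999999" };
        for (String input : suspectInputs) {
            try {
                System.out.println("'" + input + "' -> " + parseAmount(input));
            } catch (RuntimeException e) {
                System.out.println("'" + input + "' -> " + e.getClass().getSimpleName());
            }
        }
    }
}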
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a
test charter containing test objectives, and carried out within time boxes.
It is an approach that is most useful where there are few or inadequate specifications and severe time
pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the
test process, to help ensure that the most serious defects are found.
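A hypothetical example of a test charter for a time-boxed exploratory session (illustrative only):
Charter: Explore the address book of the mail client using malformed, duplicate and very long
contact entries, to discover input-handling defects.
Time box: 90 minutes.
Logging: Notes of the tests performed, defects raised, and questions to feed into the next session
or into scripted tests.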
Terms
No specific terms.
Background
The choice of which test techniques to use depends on a number of factors, including the type of
system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test
objective, documentation available, knowledge of the testers, time and budget, development life cycle, use
case models and previous experience of types of defects found. Some techniques are more applicable to
certain situations and test levels; others are applicable to all test levels.
Terms
Tester, test leader, test manager.
A test plan is a specific kind of project plan, with clauses that address the needs of testing. The
main characteristics of a project plan are described below.
A project plan can be considered to have five key characteristics that have to be managed.
These are often pictured as a circle labelled scope surrounded by a triangle whose corners are
marked time, resource and quality, with small arrows labelled risk at each corner of the triangle.
Scope: defines what will be covered in a project and what will not be covered
Resource: what can be used to meet the scope
Time: what tasks are to be undertaken and when
Quality: the spread or deviation allowed from a desired standard
Risk: defines in advance what may happen to drive the plan off course, and what will be
done to recover the situation
The point of a plan is to balance the scope and quality constraints against the time and
resource constraints, while minimising the risks. This can be pictured as a balanced seesaw with
scope and quality on the left hand side, and time, resource and risk on the right hand side.
The international standard IEEE Std 829-1998 gives advice on the various types of test
documentation required for testing including test plans. The test plan section of the standard
defines 16 clauses.
The 16 clauses of the IEEE 829 test plan standard are:
1. Test plan identifier
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.
These can be matched against the five characteristics of a basic plan, with a couple left over that
form part of the plan document itself.
Scope
Scope clauses define what features will be tested.
3. Test Items: The items of software, hardware, and combinations of these that will be
tested.
4. Features to Be Tested: The parts of the software specification to be tested.
5. Features Not to Be Tested: The parts of the software specification to be EXCLUDED
from testing.
Resource
Resource clauses give the overall view of the resources to deliver the tasks.
11. Environmental Needs: What is needed in the way of testing software, hardware,
offices etc.
12. Responsibilities: Who has responsibility for delivering the various parts of the plan.
13. Staffing And Training Needs: The people and skills needed to deliver the plan.
Time
Time clauses specify what tasks are to be undertaken to meet the quality objectives, and when
they will occur.
10. Testing Tasks: The tasks themselves, their dependencies, the elapsed time they will
take, and the resource required.
14. Schedule: When the tasks will take place.
Often these two clauses refer to an appendix or another document that contains the detail.
Quality
Quality clauses define the standard required from the testing activities.
2. Introduction: A high level view of the testing standard required, including what type of
testing it is.
6. Approach: The details of how the testing process will be followed.
7. Item Pass/Fail Criteria: Defines the pass and failure criteria for an item being tested.
9. Test Deliverables: Which test documents and other deliverables will be produced.
Risk
Risk clauses define in advance what could go wrong with a plan and the measures that will be
taken to deal with these problems.
8. Suspension Criteria And Resumption Requirements: The criteria for pausing some or all
of the testing, and the requirements that must be met before it can be resumed.
15. Risks And Contingencies: The risks foreseen for the plan, and the contingency actions
planned for each of them.
Plan Clauses
These clauses are parts of the plan structure.
1. Test Plan Identifier: This is a unique name or code by which the plan can be
identified in the project's documentation including its version.
16. Approvals: The signatures of the various stakeholders in the plan, to show they
agree in advance with what it says.
The IEEE 829 standard for a test plan provides a good basic structure. It is not restrictive in that it
can be adapted in the following ways:
Descriptions of each clause can be tailored to an organisation's needs,
More clauses can be added,
More content added to any clause,
Sub-sections can be defined in a clause,
Other planning documents can be referred to.
If a properly balanced test plan is created then a project stands a chance of delivering a system
that will meet the user's needs.
Terms
Defect density, failure rate, test control, test coverage, test monitoring, test report.
The outline of a test summary report is given in ‘Standard for Software Test Documentation’
(IEEE 829)
The sections shall be ordered in the specified sequence. Additional sections may be
included just prior to Approvals. If some or all of the content of a section is in another document,
then a reference to that material may be listed in place of the corresponding content. The
referenced material must be attached to the test summary report or available to users of the
summary report.
Details on the content of each section are contained in the following sub clauses.
a) Test summary report identifier
Specify the unique identifier assigned to this test summary report.
b) Summary
Summarize the evaluation of the test items. Identify the items tested, indicating their
version/revision level. Indicate the environment in which the testing activities took place.
For each test item, supply references to the following documents if they exist: test plan,
test design specifications, test procedure specifications, test item transmittal reports, test logs,
and test incident reports.
c) Variances
Report any variances of the test items from their design specifications. Indicate any
variances from the test plan, test designs, or test procedures. Specify the reason for each
variance.
d) Comprehensiveness assessment
Evaluate the comprehensiveness of the testing process against the comprehensiveness
criteria specified in the test plan if the plan exists. Identify features or feature combinations that
were not sufficiently tested and explain the reasons.
e) Summary of results
Summarize the results of testing. Identify all resolved incidents and summarize their
resolutions. Identify all unresolved incidents.
f) Evaluation
Provide an overall evaluation of each test item including its limitations. This evaluation
shall be based upon the test results and the item level pass/fail criteria. An estimate of failure risk
may be included.
g) Summary of activities
Summarize the major testing activities and events. Summarize resource consumption
data, e.g., total staffing level, total machine time, and total elapsed time used for each of the
major testing activities.
h) Approvals
Specify the names and titles of all persons who must approve this report. Provide space
for the signatures and dates.
Terms
Configuration management, version control.
Background
The purpose of configuration management is to establish and maintain the integrity of the products
(components, data and documentation) of the software or system through the project and product life cycle.
Configuration Management (CM) is the process which controls the changes made to a system and
manages the different versions of the evolving software product.
Configuration Management (CM) is also a process of identifying and defining the Configuration
Items in a system, recording and reporting the status of Configuration Items and Requests For
Change, and verifying the completeness and correctness of Configuration Items.
Configurable items
Some of the configurable items in CM are as follows
1. Requirement Phase: e.g. business requirement specification, updates in project plan.
2. Design Phase: e.g. High level design document, test specification
3. Coding Phase: e.g. tools used, program documentation
4. Testing Phases: e.g. manual test cases, automation test scripts, test strategy, test logs,
test results.
Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and its
undesirable consequences, a potential problem. The level of risk will be determined by the likelihood of an
adverse event happening and the impact (the harm resulting from that event).
Terms
Incident logging.
Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and
expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and
classification to correction and confirmation of the solution. In order to manage all incidents to completion,
an organization should establish a process and rules for classification. Incidents may be raised during
development, review, testing or use of a software product. They may be raised for issues in code or the
working system, or in any type of documentation including development documents, test documents or user
information such as “Help” or installation guides.
Incident reports have the following objectives:
Provide developers and other parties with feedback about the problem to enable identification,
isolation and correction as necessary.
Provide test leaders a means of tracking the quality of the system under test and the progress of the
testing.
Provide ideas for test process improvement.
A tester or reviewer typically logs the following information, if known, regarding an incident:
Date of issue, issuing organization, author, approvals and status.
Scope, severity and priority of the incident.
References, including the identity of the test case specification that revealed the problem.
The IEEE 829 test incident report then adds the following sections:
a) Test incident report identifier
Specify the unique identifier assigned to this test incident report.
b) Summary
Summarize the incident. Identify the test items involved indicating their version/revision level.
References to the appropriate test procedure specification, test case specification, and test log
should be supplied.
c) Incident description
Provide a description of the incident. This description should include the following items:
a) Inputs;
b) Expected results;
c) Actual results;
d) Anomalies;
e) Date and time;
f) Procedure step;
g) Environment;
h) Attempts to repeat;
i) Testers;
j) Observers.
Related activities and observations that may help to isolate and correct the cause of the incident
should be included.
d) Impact
If known, indicate what impact this incident will have on test plans, test design specifications, test
procedure specifications, or test case specifications.
6. Tool support for testing (K2)
Tools: An efficient and time-saving way of testing is through the use of testing tools. Tools
are used to continuously improve the quality of testing; they also help with reusability
and faster execution. Tools can complete test execution without any manual intervention
and can also be used to evaluate usability from different perspectives.
When to Automate
Automation mainly comes into the picture at the level of System testing. Automation is only
practical once the application is reasonably stable. The primary purpose of an automated test
is to verify that a requirement, once validated, functions properly in successive builds or
modifications of the AUT (application under test). Furthermore, because automated testing reduces the time necessary to
perform regression testing, this is where its benefits are realized the most.
Reusable modules
The basic building block is a reusable module. These modules are used for navigation,
manipulating controls, data entry, data validation, error identification (hard or soft), and writing out
logs. Reusable modules consist of commands, logic, and data. They should be grouped together
in ways that make sense. Generic modules used throughout the testing system, such as
initialization and setup functionality, are generally grouped together in files named to reflect their
primary function, such as "Init" or "setup". Others that are more application specific, designed to
service controls on a customer window, for example, are also grouped together and named
similarly. All the modules that service the customer screen are organized in one file, or library.
That way, when the customer screen is modified for any reason, updates to the testing system
are located in one place; this is the principle of One Point Maintenance.
Test Cases
The next step in the methodology is to turn reusable modules into automated test cases.
Here, a well-structured manual test case is converted into scripts, which are nothing but reusable
modules. The goal here is to build the reusable modules in a very methodical manner. The
action-response pairs from the test case are scripted into reusable modules. These pairs also
determine the size or granularity of a reusable module, which generally consist of just one action-
response pair. Automating test cases in this manner allows the testing system to take on a very
predictable structure. One benefit of this predictability is the ability to begin building the
automated testing system from the requirements early in the software-testing life cycle.
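A minimal sketch (hypothetical names; the calls that drive the application are omitted because
they are tool-specific) of action-response pairs turned into reusable modules and composed into
an automated test case:
public class CustomerScreenTest {
    // Reusable navigation module: one action-response pair.
    static void openCustomerScreen() {
        // ...tool-specific calls to open the screen would go here...
        log("Customer screen opened");
    }
    // Reusable data-entry module.
    static void enterCustomerName(String name) {
        // ...tool-specific calls to type into the Name field would go here...
        log("Entered customer name: " + name);
    }
    // Reusable validation module: compares an actual value with the expected one and logs the result.
    static void verifyCustomerName(String expected, String actual) {
        log(expected.equals(actual) ? "PASS: name shown correctly"
                                    : "FAIL: expected " + expected + " but saw " + actual);
    }
    static void log(String message) {
        System.out.println(message);
    }
    // An automated test case is simply a sequence of the reusable modules above.
    public static void main(String[] args) {
        openCustomerScreen();
        enterCustomerName("Alice");
        verifyCustomerName("Alice", "Alice");   // the actual value would be read from the screen
    }
}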
Multi Level Validation
Multi level data validation increases the usefulness and flexibility of the testing system. The more
levels of data validation, or evaluation, the more flexible the testing system. Multi level validation
and evaluation is the ability of the testing system to perform dynamic data validation at multiple
levels and collect information from system messages to evaluate data.
Dynamic data validation is the process whereby the automated testing tool, in real time, gets data
from a control, compares it to an expected value and writes the result to a log file.
Validation refers to correctness of data decided dynamically. Evaluation refers to system
messages that will be collected but evaluated after the fact.
Test management tools
Features of test management tools include:
Support for the management of tests and the testing activities carried out.
Interfaces to test execution tools, defect tracking tools and requirement management
tools.
Independent version control or interface with an external configuration management tool.
Support for traceability of tests, test results and incidents to source documents, such as
requirement specifications.
Logging of test results and generation of progress reports.
Quantitative analysis (metrics) related to the tests (e.g. tests run and tests passed) and
the test object (e.g. incidents raised), in order to give information about the test object,
and to control and improve the test process.
Incident management tools
These tools enable the progress of incidents to be monitored over time, often provide support for
statistical analysis and provide reports about incidents. They are also known as defect tracking
tools.
Examples:
1. Bugzilla
2. Source Forge
CM tools:
Examples:
1. Win CVS
2. Visual Source Safe
Static analysis tools can calculate metrics from the code (e.g. complexity), which can give
valuable information, for example, for planning or risk analysis.
Test design tool example:
Allpairs is a test case generation tool. Allpairs.pl is a Perl script that constructs a
reasonably small set of test cases that include all pairings of each value of each of a set of
parameters. Allpairs is a command-line executable based on a Perl script. Source is included.
See the Test Methodology section of http://satisfice.com
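As an illustration of the idea behind pairwise generation (hand-worked, not output from the tool):
for three parameters with two values each, say Browser {IE, Firefox}, OS {Windows, Linux} and
Protocol {HTTP, HTTPS}, the 2 x 2 x 2 = 8 full combinations can be covered, pair-wise, by just
four test cases:
Test 1: IE,      Windows, HTTP
Test 2: IE,      Linux,   HTTPS
Test 3: Firefox, Windows, HTTPS
Test 4: Firefox, Linux,   HTTP
Every pairing of any two parameter values appears in at least one of the four tests.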
Test comparators
Test comparators determine differences between files, databases or test results. Test
execution tools typically include dynamic comparators, but a separate comparison tool may do
post execution comparison. A test comparator may use a test oracle, especially if it is automated.
Security tools
Security tools check for computer viruses and denial of service attacks. A firewall, for
example, is not strictly a testing tool, but may be used in security testing. Other security tools
stress the system by searching specific vulnerabilities of the system.
Performance, load and stress testing tool examples:
1. LoadManager - Load, Stress, Stability and Performance testing tool from Alvicom.
Runs on all platforms supported by Eclipse and Java such as Linux, Windows, HP Unix, and
others.
2. Test LOAD - An automated load testing solution for IBM iSeries from Origin Software
Group Ltd. Rather than placing artificial load on the network, it runs natively on the server,
simulating actual system performance, monitoring and capturing batch activity, server jobs and
green-screen activity. For web and other applications
Monitoring tools
Monitoring tools are not strictly testing tools but provide information that can be used for
testing purposes and which is not available by other means. Monitoring tools continuously
analyze, verify and report on usage of specific system resources, and give warnings of possible
service problems. They store information about the version and build of the software and
testware, and enable traceability.
Examples:
1. eXternalTest - Site monitoring service from eXternalTest. Periodically checks servers
from different points of the world; view what customers see with screen shots using different
browsers, OSs, and screen resolutions
2. StillUp - Site monitoring service from Deep Blue Systems Ltd. Capabilities include
http/https response monitoring, ping of routers, firewalls, etc.; SMS and email notification, trace
route on all failures. Also available is a free service for monitoring up to 10 URLs at 59 minute
intervals
6.2.1 Potential benefits and risks of tool support for testing (for all tools)
(K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type
of tool may require additional effort to achieve real and lasting benefits. There are potential
benefit opportunities with the use of tools in testing, but there are also risks.
Potential benefits of using tools include:
Repetitive work is reduced (e.g. running regression tests, re-entering the same test data,
and checking against coding standards).
Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived
from requirements).
Objective assessment (e.g. static measures, coverage and system behavior).
Ease of access to information about tests or testing (e.g. statistics and graphs about test
progress, incident rates and performance).
Risks of using tools include:
Unrealistic expectations for the tool (including functionality and ease of use).
Underestimating the time, cost and effort for the initial introduction of a tool (including
training and external expertise).
Underestimating the time and effort needed to achieve significant and continuing benefits
from the tool (including the need for changes in the testing process and continuous
improvement of the way the tool is used).
Underestimating the effort required to maintain the test assets generated by the tool.
Over-reliance on the tool (replacement for test design or where manual testing would be
better).
Success factors for the deployment of the tool within an organization include rolling the tool out
incrementally, adapting and improving processes to fit its use, providing training and coaching for
new users, defining usage guidelines, and monitoring tool use and benefits.
IEEE 829 - IEEE 829-1998, also known as the 829 Standard for Software Test Documentation is
an IEEE standard that specifies the form of a set of documents for use in eight defined stages of
software testing, each stage potentially producing its own separate type of document. The
standard specifies the format of these documents. The documents for which this standard applies
are as follows:
Test Plan
Test Design Specification
Test Case Specification
Test Procedure Specification
Test Item Transmittal Report
Test Log
Test Incident Report
Test Summary Report
IEEE 1028 – This particular standard is used for software inspection. An inspection is one of the
most common sorts of review practices found in software projects.
IEEE/EIA 12207 (ISO/IEC 12207) – Provides a common framework for developing and managing software.
This standard provides industry a basis for software practices that would be usable for both
national and international business.
ISO 9126 – It is an international standard for the evaluation of software. This standard is divided
into four parts which address the following subjects:
Quality model
External Metrics
Internal Metrics
Quality in use metrics
BS 7925-2: 1998 – This is a Software Component Testing standard. This standard defines the
process for software component testing using specified test case design and measurement
techniques. This will enable users of the standard to directly improve the quality of their software
testing, and improve the quality of their software products.
DO-178B: 1992 – This standard is published by Radio Technical Commission for Aeronautics
(RTCA) and is basically for Software considerations in Airborne Systems and Equipment
Certification. It provides guidelines for the production of airborne systems equipment software. It
is used internationally to specify the safety and airworthiness of software for avionics systems. It
describes techniques and methods appropriate to ensure the integrity and reliability of such
software. It has been used to secure Federal Aviation Administration (FAA) approval of digital
computer software. It enforces good software development practices and system design
processes. It describes traceable processes for objectives such as:
High-level requirements are developed
Low-level requirements comply with high-level requirements
Source code complies with low-level requirements
Source code is traceable to low-level requirements
Test coverage of high-level and low-level requirements is achieved
IEEE 610.12:1990 – Formally titled IEEE Std 610.12-1990, IEEE Standard Glossary of
Software Engineering Terminology. It identifies terms currently in use in the field of
Software Engineering.
IEEE 1008:1993 – It is a standard for Software Unit Testing. It defines an Integrated approach to
systematic and documented Unit testing. The standard can be applied to the unit testing of any
digital computer software or firmware and to the testing of both newly developed and modified
units.
IEEE 1012:1986 – It is a standard for Verification and Validation. Software verification and
validation (V&V) processes determine whether the development products of a given activity
conform to the requirements of that activity and whether the software satisfies its intended use
and user needs. This standard applies to software being developed, maintained, or reused
[legacy, commercial off-the shelf (COTS), non-developmental items]. The term software also
includes firmware, micro code, and documentation. Software V&V processes includes analysis,
evaluation, review, inspection, assessment, and testing of software products.
IEEE 1219:1998 – This is a standard for Software Maintenance. This standard describes the
process of managing and executing software maintenance activities.
ISO 9000:2000 – It is one of the most important quality management standards. The ISO 9000
2000 Standards apply to all kinds of organizations in all kinds of areas. Some of these areas
include manufacturing, processing, servicing, printing, forestry, electronics, computing, steel,
legal services, financial services, accounting, trucking, banking, retailing, drilling, recycling,
aerospace, construction, exploration, textiles, pharmaceuticals, oil and gas, pulp and paper,
petrochemicals, publishing, shipping, energy, telecommunications, plastics, metals, research,
health care, hospitality, utilities, aviation, machine tools, food processing, agriculture,
government, education, recreation, fabrication, sanitation, software development, consumer
products, transportation, instrumentation, tourism, biotechnology, chemicals, consulting,
insurance, and so on.
ISO/IEC 12207: 1995 - This standard describes the major component processes of a complete
software life cycle and the high-level relations that govern their interactions. This standard covers
the life cycle of software from conceptualization of ideas through retirement. It also describes how
to tailor the standard for a project. This standard defines the following life cycle processes:
Primary Processes
Supporting Processes
Organization Processes
ISO/IEC 14598-1:1996 – This standard is an expansion of ISO 9126: 1991. This standard is
mainly used to evaluate the software product. The evaluation process is broken down into four
main stages as follows:
Establish evaluation requirements
Specify the evaluation
Design the evaluation
Execute the evaluation
CMMI – Capability Maturity Model Integration is a process improvement approach that provides
organizations with the essential elements of effective processes. It is used to guide process
improvement in an organization. It basically helps in the following ways:
Integrates separate organizational functions
Sets process improvement goals and priorities
Provides guidance for quality processes
Provides a point of reference for appraising current processes