Chapter 4
Quality Management and Testing
Introduction to quality
Quality is a complex and multifaceted concept that can be described from five
different points of view:
■ Transcendental view: quality is something that one can immediately recognize but cannot
explicitly define.
● Between three and five people (typically) should be involved in the review.
● Advance preparation should occur but should require no more than two hours of work for
each person.
● The duration of the review meeting should be less than two hours.
After the work product has been developed, the review meeting is attended by the review leader, all
reviewers, and the producer. At the end of the review, all attendees of the FTR must decide whether to
(1) accept the work product without further modification, (2) reject the work product due to severe
errors (corrections must be made and another review performed), or (3) accept the work product
provisionally (minor errors must be corrected, but no additional review is required).
● Measure the existing process and its output to determine current quality
performance (collect defect metrics).
ISO 9002: Applies to organizations that do not design products but are only
involved in production, e.g. the steel and car manufacturing industries (they buy
the plant and product designs from others).
Bottom-up integration begins construction and testing with atomic modules (components at the
lowest levels in the program structure). Because components are integrated from
the bottom up, the functionality provided by components subordinate to a given
level is always available and the need for stubs is eliminated. A bottom-up
integration strategy may be implemented with the following steps:
i. Low-level components are combined into clusters (sometimes called
builds) that perform a specific software sub-function.
ii. A driver (a control program for testing) is written to coordinate test-case
input and output (a minimal driver sketch follows this list).
iii. The cluster is tested.
iv. Drivers are removed and clusters are combined moving upward in the
program structure.
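As an illustration of step ii, a test driver for one low-level cluster might look like the minimal sketch below. The two atomic modules (parse_record and format_record) and their behaviour are assumptions made purely for illustration; they stand in for whatever components actually form the cluster.

```python
# Hypothetical driver (step ii) coordinating test-case input and output for a
# cluster of two already-coded atomic modules.

def parse_record(line):
    """Illustrative atomic module: parse a 'name, value' text record."""
    name, value = line.split(",")
    return {"name": name.strip(), "value": int(value)}

def format_record(record):
    """Second illustrative atomic module in the same cluster."""
    return f"{record['name']}={record['value']}"

def cluster_driver():
    """The driver feeds test cases to the cluster and checks the output (step iii)."""
    test_cases = [
        ("temp, 42", "temp=42"),
        ("load , 7", "load=7"),
    ]
    for raw, expected in test_cases:
        actual = format_record(parse_record(raw))
        assert actual == expected, f"{raw!r}: expected {expected!r}, got {actual!r}"
    print("cluster tests passed")

if __name__ == "__main__":
    cluster_driver()   # the driver is discarded once the cluster is combined upward (step iv)
```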
c. Regression testing
Each time a new module is added as part of integration testing, the software changes. New
data flow paths are established, new I/O may occur, and new control logic is invoked. These
changes may cause problems with functions that previously worked flawlessly. In the context
of an integration test strategy, regression testing is the re-execution of some subset of tests that
have already been conducted to ensure that changes have not propagated unintended side
effects.
Successful tests result in the discovery of errors, and errors must be corrected. Whenever
software is corrected, some aspect of the software configuration (the program, its
documentation, or the data that support it) is changed. The regression test suite (the subset of
tests to be executed) contains three different classes of test cases (a selection sketch follows this list):
● A representative sample of tests that will exercise all software functions.
● Additional tests that focus on software functions that are likely to be affected by the
change.
● Tests that focus on the software components that have been changed.
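As a minimal sketch of how such a suite might be organized, the tests below are tagged with one of the three classes and a subset is selected for re-execution; the test names, tags, and selection function are assumptions made for illustration, not part of any particular testing tool.

```python
# Illustrative regression suite: each test is tagged with one of the three
# classes, and a subset is selected for re-execution after a change.

REGRESSION_SUITE = {
    "test_all_menus_render":   "representative",     # sample exercising all software functions
    "test_report_totals":      "likely_affected",    # functions likely to be affected by the change
    "test_export_component":   "changed_component",  # components that have been changed
    "test_startup_banner":     "representative",
}

def select_regression_subset(tags=("representative", "likely_affected", "changed_component")):
    """Return the names of the tests to re-execute for this change."""
    return [name for name, tag in REGRESSION_SUITE.items() if tag in tags]

print(select_regression_subset())
```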
d. Smoke testing
The term smoke testing comes from physical tests used to detect cracks or breaks, e.g. in pipes or
circuits: "the pipes will not leak, the keys seal properly, the circuit will not burn or give off smoke,
and the software will not crash outright." Thus, smoke testing is the initial testing process exercised
to check whether the software under test is ready/stable for further testing. In essence, the smoke-testing
approach encompasses the following activities (a minimal sketch follows the list):
● Software components that have been translated into code are integrated into a “build.” A
build includes all data files, libraries, reusable modules, and engineered components that are
required to implement one or more product functions.
● A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “show stopper” errors that have the
highest likelihood of throwing the software project behind schedule.
● The build is integrated with other builds and the entire product (in its current form) is smoke
tested daily. The integration approach may be top down or bottom up.
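A smoke test is deliberately shallow: it only checks that the daily build starts and that its key functions do not crash outright. The Build class and its interface below are assumptions made for illustration; a real smoke test would exercise the actual integrated build.

```python
# Minimal smoke-test sketch: exercise the build just enough to expose
# "show stopper" errors, not to test its functionality exhaustively.

class Build:
    """Stand-in for the integrated daily build (illustrative only)."""
    def start(self):
        return True                      # the build comes up without crashing

    def core_functions(self):
        # each callable returns normally if the corresponding component is usable
        return {"load_data": lambda: 0, "save_data": lambda: 0}

def smoke_test(build):
    assert build.start(), "show stopper: build fails to start"
    for name, fn in build.core_functions().items():
        try:
            fn()
        except Exception as exc:         # any crash blocks all further testing
            raise AssertionError(f"show stopper in {name}: {exc}")
    print("smoke test passed: build is stable enough for further testing")

smoke_test(Build())
```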
C. Validation Testing
Validation testing begins at the culmination of integration testing, when individual
components have been exercised, the software is completely assembled as a
package, and interfacing errors have been uncovered and corrected.
At the validation or system level, the distinction between conventional software,
object-oriented software, and Web Apps disappears.
Testing focuses on user-visible actions and user-recognizable output from the
system. Like all other testing steps, validation tries to uncover errors, but the focus
is at the requirements level—on things that will be immediately apparent to the end
user.
Validation testing is mainly concerned with the following tasks:
a. Validation Test Criteria
Software validation is achieved through a series of black-box tests that demonstrate
conformity with requirements. After each validation test case has been conducted, one
of two possible conditions exists:
● The function or performance characteristics conform to specification and are
accepted.
● A deviation from specification is uncovered and a deficiency list is created.
b. Configuration Review
An important element of the validation process is a configuration review. The intent of
the review is to ensure that all elements of the software configuration have been
properly developed, are cataloged, and have the necessary detail to bolster the support
phase of the software life cycle. The configuration review is sometimes called an audit.
c. Alpha and Beta Testing
When custom software is built for one customer, a series of acceptance tests are
conducted to enable the customer to validate all requirements.
● Alpha test is conducted at the developer's site by a representative group of
end users. The software is used in a natural setting with the developer
"looking over the shoulder" of the user and recording errors and usage
problems. Alpha tests are conducted in a controlled environment.
● Beta test is conducted at one or more customer sites by the end-user of the
software. Unlike alpha testing, the developer is generally not present. The
Beta test is a "live" application of the software in an environment that
cannot be controlled by the developer. The customer records all problems
(real or imagined) that are encountered during beta testing and reports these
to the developer at regular intervals.
D. System Testing
Software is only one element of a larger computer-based system.
Software is incorporated with other system elements (hardware, people,
information), and a series of system integration and validation tests are
conducted. These tests fall outside the scope of the software process and
are not conducted solely by software engineers.
A classic system-testing problem is "finger-pointing", which occurs when an
error is uncovered and each system element developer blames the others
for the problem. Thus, system testing is actually a series of different tests
whose primary purpose is to fully exercise the computer-based system.
System testing can be classified into the following types:
a. Recovery Testing
Many computer based systems must recover from faults and resume
processing within a pre-specified time. In some cases, a system must be
fault tolerant; that is, processing faults must not cause overall system
function to cease. In other cases, a system failure must be corrected
within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If
recovery is automatic (performed by the system itself), re-initialization,
checkpointing mechanisms, data recovery, and restart are evaluated for
correctness. If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to determine whether it is
within acceptable limits.
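A hedged sketch of an automated recovery test is shown below: it forces a failure, triggers a (simulated) automatic recovery path, and checks that processing resumes within an assumed repair-time limit. The Service class and the 30-second limit are illustrative assumptions only.

```python
# Illustrative recovery test: force a failure, then verify restart and repair time.
import time

class Service:
    """Stand-in for the system under test."""
    def __init__(self):
        self.running = True
    def crash(self):
        self.running = False
    def restart(self):
        time.sleep(0.1)                  # pretend re-initialization / checkpoint restore
        self.running = True

def recovery_test(service, repair_limit_seconds=30.0):
    service.crash()                                  # force the software to fail
    started = time.monotonic()
    service.restart()                                # automatic recovery path
    repair_time = time.monotonic() - started
    assert service.running, "system did not resume processing"
    assert repair_time <= repair_limit_seconds, f"repair took {repair_time:.1f}s, over the limit"
    print(f"recovered in {repair_time:.2f}s")

recovery_test(Service())
```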
b. Security Testing
Any system that manages sensitive information is a target for illegal
penetration. Penetration spans a broad range of activities: hackers
who attempt to penetrate systems for sport; disgruntled employees
who attempt to penetrate for revenge; dishonest individuals who
attempt to penetrate for illicit personal gain. Security testing
attempts to verify that protection mechanisms built into a system
will protect it from improper penetration. During security testing,
the tester plays the role(s) of the individual who desires to penetrate
the system.
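As one small, hedged example of this role-playing, the test below simulates a brute-force login attack and checks that an account-lockout mechanism resists it. The AuthSystem class, its credential, and the three-attempt limit are assumptions made for illustration, not any particular product's behaviour.

```python
# Illustrative security test: verify that repeated failed logins trigger a lockout.

class AuthSystem:
    """Stand-in for the protection mechanism under test."""
    MAX_ATTEMPTS = 3
    def __init__(self):
        self.failed = 0
        self.locked = False
    def login(self, user, password):
        if self.locked:
            return False
        if password != "correct-horse":              # illustrative stored credential
            self.failed += 1
            self.locked = self.failed >= self.MAX_ATTEMPTS
            return False
        return True

def security_test_brute_force():
    auth = AuthSystem()
    for guess in ("123456", "password", "letmein", "qwerty"):
        auth.login("admin", guess)                   # tester plays the would-be penetrator
    assert auth.locked, "protection mechanism failed: no lockout after repeated failures"
    assert not auth.login("admin", "correct-horse"), "locked account still accepts logins"
    print("lockout mechanism resisted the simulated penetration")

security_test_brute_force()
```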
c. Stress Testing
Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: "How high can we crank
this up before it fails?" Stress testing executes a system in a manner that
demands resources in abnormal quantity, frequency, or volume. For example
(a sketch of case (b) follows this list):
a) special tests may be designed that generate ten interrupts per second,
when one or two is the average rate,
b) input data rates may be increased by an order of magnitude to determine
how input functions will respond
c) test cases that require maximum memory or other resources are executed
d) test cases that may cause thrashing in a virtual operating system are
designed
e) Test cases that may cause excessive hunting for disk-resident data are
created.
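A minimal sketch of case (b) is given below: the input-handling function is driven with ten times an assumed normal data rate and must keep up within an assumed time budget. The process_message function, the normal rate, and the budget are all illustrative assumptions.

```python
# Illustrative stress test: drive the input path at ten times the normal rate.
import time

NORMAL_RATE = 1_000                      # messages per batch under typical load (assumed)

def process_message(msg):
    return msg.upper()                   # stand-in for the real input-handling function

def stress_test(factor=10, budget_seconds=1.0):
    messages = [f"reading-{i}" for i in range(NORMAL_RATE * factor)]
    start = time.monotonic()
    for msg in messages:
        process_message(msg)
    elapsed = time.monotonic() - start
    assert elapsed <= budget_seconds, f"input path saturated: {elapsed:.2f}s for {len(messages)} messages"
    print(f"handled {len(messages)} messages in {elapsed:.2f}s")

stress_test()
```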
d. Performance Testing
For real-time and embedded systems, software that provides the required function
but does not conform to performance requirements is unacceptable.
Performance testing is designed to test the run-time performance of software
within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the
unit level, the performance of an individual module may be assessed as white-box
tests are conducted.
Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation. External instrumentation can monitor
execution intervals, log events (interrupts) as they occur, and sample machine
states on a regular basis.
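A very small software-instrumentation sketch is shown below: it samples the execution interval of one operation many times and compares the worst case with an assumed performance requirement of 50 ms per call. The operation, the sample count, and the limit are illustrative assumptions.

```python
# Minimal performance-test sketch using software instrumentation (timing probes).
import statistics
import time

def operation_under_test():
    return sum(i * i for i in range(10_000))   # stand-in for the real operation

def performance_test(samples=100, limit_ms=50.0):
    intervals_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        operation_under_test()
        intervals_ms.append((time.perf_counter() - start) * 1000.0)
    worst = max(intervals_ms)
    print(f"mean {statistics.mean(intervals_ms):.2f} ms, worst {worst:.2f} ms")
    assert worst <= limit_ms, "run-time performance requirement violated"

performance_test()
```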
White-box Testing
Using white-box testing methods, the software engineer can derive test cases
that:
● Guarantee that all independent paths within a module have been
exercised at least once,
● Exercise all logical decisions on their true and false sides,
● Execute all loops at their boundaries and within their operational bounds,
and
● Exercise internal data structures to ensure their validity
The main reasons for conducting white-box tests can be listed as follows:
■ Logic errors and incorrect assumptions are inversely proportional to the
probability that a program path will be executed. Errors tend to creep into our
work when we design and implement function, conditions, or controls that are out
of the mainstream. Everyday processing tends to be well understood (and well
scrutinized), while "special case" processing tends to fall into the cracks.
■ We often believe that a logical path is not likely to be executed when, in fact, it
may be executed on a regular basis. The logical flow of a program is sometimes
counterintuitive, meaning that our unconscious assumptions about flow of control
and data may lead us to make design errors that are uncovered only once path
testing commences.
■ Typographical errors are random. When a program is translated into programming
language source code, it is likely that some typing errors will occur.
a. Basis Path Testing
Basis path testing is a white-box testing technique first proposed by Tom McCabe.
The basis path method enables test case designer to derive a logical complexity
measure of a procedural design and use this measure as a guide for defining a
basis set of execution paths.
Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time during testing.
Basis path testing makes use of the following concepts: flow graph notation,
cyclomatic complexity, deriving test cases, and graph matrices.
Cyclomatic complexity has a foundation in graph theory and provides us with an
extremely useful software metric.
Complexity is computed in one of three ways (a numeric check follows this list):
i. The number of regions of the flow graph corresponds to the cyclomatic
complexity.
ii. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2,
where E is the number of flow-graph edges and N is the number of flow-graph nodes.
iii. Cyclomatic complexity V(G) is also defined as V(G) = P + 1, where P is the
number of predicate nodes contained in the flow graph.
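For a small flow graph the three values can be checked against each other. The numbers below describe a hypothetical flow graph for a module containing one if-then-else and one while loop (8 nodes, 9 edges, 2 predicate nodes, 3 regions); they are assumptions for illustration only.

```python
# Cyclomatic complexity for a hypothetical flow graph with one if-then-else
# and one while loop: 8 nodes, 9 edges, 2 predicate nodes, 3 regions.

edges, nodes, predicate_nodes = 9, 8, 2

v_from_edges = edges - nodes + 2          # V(G) = E - N + 2
v_from_predicates = predicate_nodes + 1   # V(G) = P + 1

assert v_from_edges == v_from_predicates == 3
print("the basis set contains", v_from_edges, "independent paths")
```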
Condition Testing
Condition testing is a test-case design method that exercises the logical conditions
contained in a program module. A relational expression takes the form:
E1 <relational-operator> E2
Where E1 and E2 are arithmetic expressions and <relational-operator> is one of the
following: <, ≤, =, ≠, >, or ≥. A compound condition is composed of two or more simple
conditions, Boolean operators, and parentheses. We assume that Boolean operators
allowed in a compound condition include OR (|), AND (&), and NOT (¬). A condition
without relational expressions is referred to as a Boolean expression.
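As a small illustration, the compound condition (a > 10) & (b = 0) combines two simple conditions; condition testing requires test cases that drive each simple condition, and the compound condition itself, to both its true and its false outcome. The variable names and values below are assumptions made for illustration.

```python
# Illustrative condition-testing cases for the compound condition (a > 10) & (b = 0).

def compound(a, b):
    return (a > 10) and (b == 0)

cases = [
    (15, 0, True),    # both simple conditions true  -> compound condition true
    (15, 3, False),   # second simple condition false
    (5,  0, False),   # first simple condition false
    (5,  3, False),   # both simple conditions false
]
for a, b, expected in cases:
    assert compound(a, b) == expected
print("all condition-testing cases pass")
```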
Data Flow Testing
The data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program. To illustrate the data
flow testing approach, assume that each statement in a program is assigned a
unique statement number and that each function does not modify its parameters or
global variables. For a statement with S as its statement number, DEF(S) is defined as the set of
variables defined (assigned new values) in S, and USE(S) as the set of variables used in S.
Loop Testing
Loop testing is a white-box technique that focuses on the validity of loop constructs. Four classes
of loops can be tested (a simple-loop sketch follows this list):
● Simple loops: they are tested sequentially over the desired range of iteration values.
● Nested loops: testing starts from the innermost loop and works outward.
● Concatenated loops: multiple loops are tested simultaneously if no dependency
exists between them.
● Unstructured loops: whenever possible, this class of loops should be
redesigned to reflect the use of the structured programming constructs.
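For a simple loop that can make at most n passes, typical test cases skip the loop entirely, make one pass, two passes, a typical number of passes, n-1 passes, and n passes. The sketch below applies those values to a hypothetical function whose loop sums the first k readings; the function and n = 100 are assumptions made for illustration.

```python
# Loop-testing values for a simple loop: 0, 1, 2, typical, n-1, and n passes.

N_MAX = 100                                          # assumed maximum number of loop passes

def sum_first(readings, k):
    """Hypothetical module whose simple loop runs min(k, n) times."""
    total = 0
    for i in range(min(k, len(readings), N_MAX)):    # the simple loop under test
        total += readings[i]
    return total

readings = list(range(N_MAX))
for passes in (0, 1, 2, 50, N_MAX - 1, N_MAX):
    assert sum_first(readings, passes) == sum(readings[:passes])
print("simple-loop test cases pass")
```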
Black Box Testing
Black-box testing tends to be applied during the later stages of testing (integration
and system testing). Black-box testing can be classified into the following two main
categories:
1. Equivalence partitioning
Equivalence partitioning divides the input domain of a program into classes of data from which
test cases can be derived. An equivalence class represents a set of valid or invalid states for
input conditions. Equivalence classes may be defined according to the following
guidelines (an example follows this list):
● If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
● If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
● If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
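For example, if an input field must accept an integer count between 1 and 99 (an assumed requirement), the range guideline yields one valid class and two invalid classes, and one representative value can be drawn from each:

```python
# Equivalence classes for an assumed input requirement: 1 <= count <= 99.

def accept_count(count):
    return 1 <= count <= 99      # stand-in for the input-validation function under test

cases = {
    "valid class (inside the range)":   (50, True),
    "invalid class (below the range)":  (0, False),
    "invalid class (above the range)":  (120, False),
}
for label, (value, expected) in cases.items():
    assert accept_count(value) == expected, label
print("one representative test per equivalence class passed")
```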
2. Boundary value analysis
Boundary value analysis complements equivalence partitioning by selecting test cases at the
"edges" of each equivalence class rather than only from its interior.
a. Testing GUIs
Finite state modeling graphs may be used to derive a series of tests that
address specific data and program objects that are relevant to the GUI.
b. Testing of Client/Server Architectures
The major difficulties in testing client/server (C/S) architectures and their software can
be listed as follows:
Not only does the test case designer have to consider white- and black-box
test cases but also event handling (interrupt processing), the timing of the
data, and the parallelism of the tasks (processes) that handle the data.
For example, the elements of a design model have been documented and
reviewed. Errors are found and corrected.
Once all parts of the model have been reviewed, corrected, and then
approved, the design model becomes a baseline. Further changes to the
program architecture (documented in the design model) can be made only
after each has been evaluated and approved.
Before a software configuration item (SCI) becomes a baseline (or is
placed in a project database, also called a project library or software
repository), changes can be made quickly and informally.
However, once a baseline is established, we figuratively pass through
a one-way swinging door.
Now, a formal procedure must be applied to evaluate and verify
each change. The procedure for making changes to a baselined product
is:
● Send a private copy of the module needing modification to the change
control board (CCB).
● Get permission from the CCB for restoring the changed module.
● After review by the CCB, the manager updates the old baseline with a
restore operation.
Software Configuration Items (SCI)
The first law of software engineering is “No matter where you are in the system
life cycle, the system will change, and the desire to change it will persist
throughout the life cycle". Change is a fact of life in software development:
the customer wants to modify the requirements, the developer wants to modify the
technical approach, the manager wants to modify the project strategy, and so on.
A configuration object has a name and attributes, and is "connected" to other objects
by relationships. Referring to the figure, the configuration objects
Design-Specification, Data-Model, Component-N, Source-Code, and
Test-Specification are each defined separately. However, each of the objects is
related to the others as shown by the arrows. A curved arrow indicates a
compositional relation. That is, Data-Model and Component-N are part of the
object Design-Specification. A double-headed straight arrow indicates an
interrelationship. If a change were made to the Source-Code object, the
interrelationships enable you to determine what other objects (and SCIs) might be affected.
The SCM Process
The software configuration management process defines a series of tasks that
have four primary objectives:
1. to identify all items that collectively define the software configuration,
2. to manage changes to one or more of these items,
3. to facilitate the construction of different versions of an application, and
4. to ensure that software quality is maintained as the configuration evolves
over time
a. Identification
A basic object is a unit of information that you create during analysis, design,
code, or test. For example, a basic object might be a section of a requirements
specification, part of a design model, source code for a component, or a suite
of test cases that are used to exercise the code.