
Module 5

Integration and Component-based Software Testing


Overview

Module or unit test checks module behavior against specifications or expectations; integration test checks
module compatibility; system and acceptance tests check behavior of the whole system with respect to
specifications and user needs, respectively.
An effective integration test is built on a foundation of thorough module testing and inspection. Module test
maximizes controllability and observability of an individual unit, and is more effective in exercising the full
range of module behaviors, rather than just those that are easy to trigger and observe in a particular context
of other modules.

While integration testing may to some extent act as a process check on module testing (i.e., faults revealed
during integration test can be taken as a signal of unsatisfactory unit testing), thorough integration testing
cannot fully compensate for sloppiness at the module level. In fact, the quality of a system is limited by the
quality of the modules and components from which it is built, and even apparently noncritical modules can
have widespread effects. For example, in 2004 a buffer overflow vulnerability in a single, widely used
library for reading Portable Network Graphics (PNG) files caused security vulnerabilities in Windows,
Linux, and Mac OS X Web browsers and email clients.

What is integration testing?


Integration testing is a level of software testing where individual units are combined and tested as a
group. The purpose of this level of testing is to expose faults in the interaction
between integrated units. Test drivers and test stubs are used to assist in integration testing.
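Test drivers and stubs are easiest to see in code. The sketch below (Python, with hypothetical names such as OrderProcessor and InventoryServiceStub) shows a stub that replaces a dependency with canned responses, and a test driver that exercises the unit under test; it is an illustrative sketch, not a prescribed implementation.

```python
import unittest

class InventoryServiceStub:
    """Stub: stands in for the real inventory service with canned answers."""
    def units_in_stock(self, sku):
        return {"A-100": 5, "B-200": 0}.get(sku, 0)

class OrderProcessor:
    """Unit under test: depends on an inventory service."""
    def __init__(self, inventory):
        self.inventory = inventory

    def can_fulfill(self, sku, quantity):
        return self.inventory.units_in_stock(sku) >= quantity

class OrderProcessorDriver(unittest.TestCase):
    """Driver: supplies inputs and checks results for the unit under test."""
    def test_fulfillable_order(self):
        self.assertTrue(OrderProcessor(InventoryServiceStub()).can_fulfill("A-100", 3))

    def test_out_of_stock(self):
        self.assertFalse(OrderProcessor(InventoryServiceStub()).can_fulfill("B-200", 1))

if __name__ == "__main__":
    unittest.main()
```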

Integration Faults
• Inconsistent interpretation of parameters or values – Example: mixed units (meters/yards) in a Mars lander (a small sketch of this fault follows the list)
• Violations of value domains, capacity, or size limits – Example: buffer overflow
• Side effects on parameters or resources – Example: conflict over an (unspecified) temporary file
• Omitted or misunderstood functionality – Example: inconsistent interpretation of web hits
• Nonfunctional properties – Example: unanticipated performance issues
• Dynamic mismatches – Example: incompatible polymorphic method calls
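To make the first fault class concrete, here is a minimal sketch (hypothetical functions and values) of a mixed-units fault: a producer reports altitude in meters while a consumer assumes yards. Each module passes its own unit tests; only an integration test against a known end-to-end value exposes the mismatch.

```python
METERS_PER_YARD = 0.9144  # exact conversion factor

def sensor_altitude_m():
    """Producer module: altitude above ground, in METERS."""
    return 100.0

def descent_time_s(altitude_yd, rate_yd_per_s=2.0):
    """Consumer module: expects YARDS; passing meters reproduces the fault."""
    return altitude_yd / rate_yd_per_s

def test_descent_time_integration():
    # Correct integration: convert meters to yards at the interface.
    altitude_yd = sensor_altitude_m() / METERS_PER_YARD
    assert abs(descent_time_s(altitude_yd) - 54.68) < 0.01
    # Passing the raw meter reading (the fault) would give 50.0 s instead,
    # and this end-to-end check would fail.
```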

Integration Testing Strategies

A strategy for integration testing of successive partial subsystems is driven by the order in which modules
are constructed (the build plan), which is an aspect of the system architecture. The build plan, in turn, is
driven partly by the needs of test. Design and integration testing are so tightly coupled that in many
companies the integration and testing groups are merged into a single group in charge of both integration
and integration testing.
Since incremental assemblies of modules are incomplete, one must often construct scaffolding—drivers,
stubs, and various kinds of instrumentation—to effectively test them. This can be a major cost of integration
testing, and it depends to a large extent on the order in which modules are assembled and tested.

One extreme approach is to avoid the cost of scaffolding by waiting until all modules are integrated, and
testing them together — essentially merging integration test into system testing. In this big bang approach,
neither stubs nor drivers need be constructed, nor must the development be carefully planned to expose well-
specified interfaces to each subsystem. These savings are more than offset by losses in observability,
diagnosability, and feedback. Delaying integration testing hides faults whose effects do not always
propagate outward to visible failures (violating the principle that failing always is better than failing
sometimes) and impedes fault localization and diagnosis because the failures that are visible may be far
removed from their causes. Requiring the whole system to be available before integration does not allow
early test and feedback, and so faults that are detected are much more costly to repair.

Among strategies for incrementally testing partially assembled systems, we can distinguish two main
classes: structure oriented and functional oriented.

In a structural approach, modules are constructed, assembled, and tested together in an order based on
hierarchical structure in the design: top-down, bottom-up, and sandwich (or backbone).
In a functional orientation, modules are integrated according to application characteristics or features:
threads or critical modules.

The top-down integration strategy begins at the top of the uses hierarchy, including the interfaces exposed
through a user interface or top-level application program interface (API). Stubs stand in for lower-level
modules that have not yet been integrated. The need for drivers is reduced or eliminated while descending
the hierarchy, since at each stage the already tested modules can be used as drivers while testing the next layer.
Bottom-up integration similarly reduces the need to develop stubs, except for breaking circular relations.
Top-down and bottom-up approaches to integration testing can be applied early in the development if paired
with similar design strategies: If modules are delivered following the hierarchy, either top-down or bottom-
up, they can be integrated and tested as soon as they are delivered, thus providing early feedback to the
developers.
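A minimal top-down sketch (hypothetical names: ReportGenerator, DatabaseStub) may help: the top layer is tested first against a stub of the not-yet-integrated database layer; once the real module is delivered, the stub is replaced and the already tested upper module acts as the driver for the new layer.

```python
class DatabaseStub:
    """Stub for the lower layer while it is still under development."""
    def fetch_rows(self, table):
        return [("alice", 3), ("bob", 7)]  # canned rows for the upper layer

class ReportGenerator:
    """Top-level module, tested first in top-down integration."""
    def __init__(self, db):
        self.db = db

    def total(self, table):
        return sum(count for _, count in self.db.fetch_rows(table))

def test_top_layer_with_stub():
    # Stage 1: the top layer is exercised in isolation via the stub.
    assert ReportGenerator(DatabaseStub()).total("visits") == 10
    # Stage 2 (not shown): swap in the real database module and rerun the
    # same test; ReportGenerator then serves as the driver for the new layer.
```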

Design and integration strategies are driven by other factors, like reuse of existing modules
or commercial off-the-shelf (COTS) components, or the need to develop early prototypes
for user feedback. Integration may combine elements of the two approaches, starting from both ends of the
hierarchy and proceeding toward the middle. An early top-down approach may result from developing
prototypes for early user feedback, while existing modules may be integrated bottom-up. This is known as
the sandwich or backbone strategy.
The price of flexibility and adaptability in the sandwich strategy is complex planning and monitoring.
While top-down and bottom-up are straightforward to plan and monitor, a sandwich approach requires extra
coordination between development and test.

The thread integration testing strategy integrates modules according to system features. Test designers
identify threads of execution that correspond to system features, and they incrementally test each thread. The
thread integration strategy emphasizes module interplay for specific functionality.
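As an illustration of thread integration, the sketch below (hypothetical modules Catalog, Cart, and Checkout) assembles just the modules needed for one feature, "place an order", and tests them together as a single thread.

```python
class Catalog:
    """Pricing module."""
    def price(self, sku):
        return {"A-100": 9.99}[sku]

class Cart:
    """Shopping cart module; uses the catalog for prices."""
    def __init__(self, catalog):
        self.catalog, self.items = catalog, []

    def add(self, sku):
        self.items.append(sku)

    def total(self):
        return sum(self.catalog.price(s) for s in self.items)

class Checkout:
    """Checkout module; formats the final receipt."""
    def __init__(self, cart):
        self.cart = cart

    def receipt(self):
        return f"TOTAL: {self.cart.total():.2f}"

def test_place_order_thread():
    # One thread of execution spanning Catalog -> Cart -> Checkout.
    cart = Cart(Catalog())
    cart.add("A-100")
    assert Checkout(cart).receipt() == "TOTAL: 9.99"
```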
• Functional strategies require more planning
  – Structural strategies (bottom-up, top-down, sandwich) are simpler
  – But thread and critical-module testing provide better process visibility, especially in complex systems
• It is possible to combine strategies
  – Top-down, bottom-up, or sandwich are reasonable for relatively small components and subsystems
  – Combinations of thread and critical-module integration testing are often preferred for larger subsystems
Testing Components and Assemblies
Many software products are constructed, partly or wholly, from assemblies of prebuilt software components.
A key characteristic of software components is that the organization that develops a component is distinct
from the (several) groups of developers who use it to construct systems. The component developers cannot
completely anticipate the uses to which a component will be put, and the system developers have limited
knowledge of the component. Testing components (by the component developers) and assemblies (by
system developers) therefore brings some challenges and constraints that differ from testing other kinds of
module. Reusable components are often more dependable than software developed for a single application.
More effort can be invested in improving the quality of a component when the cost is amortized across many
applications. Moreover, when reusing a component that has been in use in other applications for some time,
one obtains the benefit not only of test and analysis by component developers, but also of actual operational
use.

System, Acceptance, and Regression Testing

Overview

System, acceptance, and regression testing are all concerned with the behavior of a software system as a
whole, but they differ in purpose. System testing is a check of consistency between the software system and
its specification (it is a verification activity). Like unit and integration testing, system testing is primarily
aimed at uncovering faults, but unlike testing activities at finer granularity levels, system testing focuses on
system-level properties. System testing together with acceptance testing also serves an important role in
assessing whether a product can be released to customers, which is distinct from its role in exposing faults to
be removed to improve the product. Flaws in specifications and in development, as well as changes in users’
expectations, may result in products that do not fully meet users’ needs despite passing system tests.
Acceptance testing, as its name implies, is a validation activity aimed primarily at the acceptability of the
product, and it includes judgments of actual usefulness and usability rather than conformance to a
requirements specification.

Regression testing is specialized to the problem of efficiently checking for unintended effects of software
changes. New functionality and modification of existing code may introduce unexpected interactions and
lead latent faults to produce failures not experienced in previous releases.

System Testing

The essential characteristics of system testing are that it is comprehensive, based on a specification of
observable behavior, and independent of design and implementation decisions. System testing can be
considered the culmination of integration testing, and passing all system tests is tantamount to being
complete and free of known bugs. The system test suite may share some test cases with test suites used in
integration and even unit testing, particularly when a thread-based or spiral model of development has been
taken and subsystem correctness has been tested primarily through externally visible features and behavior.
Acceptance Testing

The purpose of acceptance testing is to guide a decision as to whether the product in its current state should
be released. The decision can be based on measures of the product or process. Measures of the product are
typically some inference of dependability based on statistical testing. Measures of the process are ultimately
based on comparison to experience with previous products. Although system and acceptance testing are
closely tied in many organizations, fundamental differences exist between searching for faults and
measuring quality. Even when the two activities overlap to some extent, it is essential to be clear about the
distinction, in order to avoid drawing unjustified conclusions. Quantitative goals for dependability, including
reliability, availability, and mean time between failures, were introduced in Chapter 4. These are essentially
statistical measures and depend on a statistically valid approach to drawing a representative sample of test
executions from a population of program behaviors. Systematic testing, which includes all of the testing
techniques presented heretofore in this book, does not draw statistically representative samples. Its
purpose is not to fail at a "typical" rate, but to exhibit as many failures as possible. Systematic techniques
are thus unsuitable for statistical testing.
Statistical models of usage, or operational profiles, may be available from measurement of actual use of
prior, similar systems. For example, use of a current telephone handset may be a reasonably good model of
how a new handset will be used. One can perform sensitivity testing to determine which parameters are
critical. Sensitivity testing consists of repeating statistical tests while systematically varying parameters to
note the effect of each parameter on the output. A particular parameter may have little effect on outcomes
over the entire range of plausible values, or there may be a narrow range of values over which variation
strongly affects the outcome; parameters of the latter kind are the critical ones to model accurately in the
operational profile.
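A minimal sensitivity-testing sketch follows, under an assumed operational profile (exponentially distributed request sizes) and a placeholder system model: the same statistical test is repeated while one profile parameter is varied, and the failure rate is recorded at each setting.

```python
import random

def system_under_test(request_size):
    """Placeholder system: fails when an internal 1024-unit buffer overflows."""
    return request_size <= 1024

def failure_rate(mean_request_size, trials=10_000, seed=0):
    """Estimate the failure rate under an exponential request-size profile."""
    rng = random.Random(seed)
    failures = sum(
        not system_under_test(rng.expovariate(1.0 / mean_request_size))
        for _ in range(trials)
    )
    return failures / trials

# Vary one profile parameter and observe its effect on the outcome.
for mean in (128, 256, 512, 1024):
    print(f"mean request size {mean:5d}: failure rate {failure_rate(mean):.3f}")
```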

Usability

A usable product is quickly learned, allows users to work efficiently, and is pleasant to use. Usability
involves objective criteria such as the time and number of operations required to perform tasks and the
frequency of user error, in addition to the overall, subjective satisfaction of users. For test and analysis, it is
useful to distinguish attributes that are uniquely associated with usability from other aspects of software
quality (dependability, performance, security, etc.). Other software qualities may be necessary for usability;
for example, a program that often fails to satisfy its functional requirements or that presents security
holes is likely to suffer poor usability as a consequence. Distinguishing primary usability properties from
other software qualities allows responsibility for each class of properties to be allocated to the most
appropriate personnel, at the most cost-effective points in the project schedule. The process of verifying and
validating usability includes the following main steps:
• Inspecting specifications with usability checklists. Inspection provides early feedback on usability.
• Testing early prototypes with end users to explore their mental model (exploratory test), evaluate
alternatives (comparison test), and validate software usability. A prototype for early assessment of usability
may not include any functioning software; a cardboard prototype may be as simple as a sequence of static
images presented to users by the usability tester.
• Testing incremental releases with both usability experts and end users to monitor progress and anticipate
usability problems.
• System and acceptance testing that includes expert-based inspection and testing, user-based testing,
comparison testing against competitors, and analyses and checks that can often be done automatically,
such as a check of link connectivity and verification of browser compatibility (a sketch of an automated
link check follows this list).
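As one example of such an automatic check, the following hedged sketch tests link connectivity with Python's standard library; the URLs are illustrative placeholders, and a real check would crawl the site's own pages.

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def link_ok(url, timeout=5):
    """Return True if the URL answers a HEAD request with a non-error status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError):
        return False

# Illustrative URLs only.
for url in ("https://example.com/", "https://example.com/missing-page"):
    print(url, "OK" if link_ok(url) else "BROKEN")
```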
Regression Testing
Regression testing is a software testing practice that ensures an application still functions as expected after
code changes, updates, or improvements; it safeguards the overall stability and functionality of existing
features. Whenever a modification is added to the code, regression testing is applied to confirm that the
system remains stable under continuous improvement. Changes in the code may introduce new
dependencies, defects, or malfunctions, and regression testing aims to mitigate these risks so that previously
developed and tested code remains operational after new changes. Generally, an application goes through
multiple tests before the changes are integrated into the main development branch; regression testing is the
final step, as it verifies the behavior of the product as a whole.

Nonregression of new versions (i.e., preservation of functionality) is a basic quality requirement.
Disciplined design and development techniques, including precise specification and modularity that
encapsulates independent design decisions, improve the likelihood of achieving nonregression. Testing
activities that focus on regression problems are called (non)regression testing; usually the "non" is omitted,
and we commonly say regression testing.

Retest all - A simple approach to regression testing consists of reexecuting all test cases designed for
previous versions. Even this simple retest-all approach may present nontrivial problems and costs. Former
test cases may not be reexecutable on the new version without modification, and rerunning all test cases may
be too expensive and unnecessary. A good quality test suite must be maintained across system versions.
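A retest-all run can be as simple as rediscovering and reexecuting the maintained suite against the new build; the sketch below assumes the suite lives in a tests/ directory next to the new version.

```python
import sys
import unittest

# Rediscover every test case from the maintained suite (assumed to live
# under tests/) and reexecute it against the newly built version.
suite = unittest.TestLoader().discover(start_dir="tests", pattern="test_*.py")
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```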

Test case maintenance - Changes in the new software version may impact the format of inputs and outputs,
and test cases may not be executable without corresponding changes. Even simple modifications of the data
structures, such as the addition of a field or a small change of data types, may invalidate former test cases,
or render their expected outputs incomparable with those produced by the new version. Moreover, some test
cases may be obsolete, since they test features of the software that have been modified, substituted, or
removed from the new version.

Regression Test Selection Techniques


Regression test selection techniques are based on either code or specifications. Code-based selection
techniques select a test case for execution if it exercises a portion of the code that has been modified.
Specification-based criteria select a test case for execution if it is relevant to a portion of the specification
that has been changed.

Code-based Regression Test Selection

Code-based regression test techniques can be supported by relatively simple tools. They work even when
specifications are not properly maintained. However, like code-based test techniques in general, they do not
scale well from unit testing to integration and system testing. In contrast, specification-based criteria scale
well and are easier to apply to changes that cut across several modules. However, they are more challenging
to automate and require carefully structured and well-maintained specifications.
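The following sketch illustrates the code-based idea under stated assumptions: a per-test coverage map recorded on the old version (e.g., gathered with a coverage tool) and the set of source lines changed in the new version (e.g., from a diff) are given as inputs; both data structures here are illustrative.

```python
def select_tests(coverage_map, changed_lines):
    """coverage_map: {test_name: set of (file, line)} recorded on the old version.
    changed_lines: set of (file, line) modified in the new version.
    A test is selected if it exercised at least one changed line."""
    return sorted(
        test for test, covered in coverage_map.items()
        if covered & changed_lines
    )

# Illustrative inputs.
coverage_map = {
    "test_login":    {("auth.py", 10), ("auth.py", 11)},
    "test_logout":   {("auth.py", 42)},
    "test_checkout": {("cart.py", 7)},
}
changed_lines = {("auth.py", 11), ("auth.py", 12)}

print(select_tests(coverage_map, changed_lines))  # ['test_login']
```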

Control-flow and Data-flow Regression


Control flow graph (CFG) regression techniques are based on the differences between the CFGs of the new
and old versions of the software. CFG regression testing techniques compare the annotated control flow
graphs of the two program versions to identify a subset of test cases that traverse modified parts of the
graphs. The graph nodes are annotated with corresponding program statements, so that comparison of the
annotated CFGs detects not only new or missing nodes and arcs, but also nodes whose changed annotations
correspond to small, but possibly relevant, changes in statements.

Data flow (DF) regression testing techniques select test cases for new and modified pairs of definitions
with uses. DF regression selection techniques reexecute test cases that, when executed on the original
program, exercise DU pairs that were deleted or modified in the revised program. Test cases that executed
a conditional statement whose predicate was altered are also selected, since the changed predicate could
alter some old definition-use associations.
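A much-simplified sketch of CFG-based selection: here each graph is modeled as a mapping from node id to statement annotation, a node counts as modified when its annotation differs between versions, and a test case is selected if its recorded trace touches a modified node. Real tools compare full annotated graphs, including added and deleted nodes and arcs.

```python
def changed_nodes(old_cfg, new_cfg):
    """Node ids whose statement annotation differs between the two versions
    (covers modified, added, and deleted nodes)."""
    ids = set(old_cfg) | set(new_cfg)
    return {n for n in ids if old_cfg.get(n) != new_cfg.get(n)}

def select(traces, old_cfg, new_cfg):
    """traces: {test_name: set of node ids executed on the old version}."""
    modified = changed_nodes(old_cfg, new_cfg)
    return sorted(t for t, nodes in traces.items() if nodes & modified)

# Illustrative versions: only the predicate of node 2 changed.
old_cfg = {1: "x = read()", 2: "if x > 0:",  3: "y = x", 4: "print(y)"}
new_cfg = {1: "x = read()", 2: "if x >= 0:", 3: "y = x", 4: "print(y)"}

traces = {"t1": {1, 2, 3, 4}, "t2": {1, 4}}
print(select(traces, old_cfg, new_cfg))  # ['t1']  (only t1 reaches node 2)
```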
