
SOFTWARE TESTING 18IS62

LEVELS OF TESTING, INTEGRATION TESTING

Traditional View of Testing Levels


The traditional model of software development is the Waterfall model, which is often drawn as a V. In
this view, information produced in one of the development phases constitutes the basis for test case
identification at that level. Nothing is controversial here: we certainly would hope that system test cases
are somehow correlated with the requirements specification, and that unit test cases are derived from
the detailed design of the unit. Two observations: there is a clear presumption of functional testing
here, and there is an implied “bottom-up” testing order.
Alternative Life Cycle Models
Since the early 1980s, practitioners have devised alternatives in response to shortcomings of the
traditional waterfall model of software development. Common to all of these alternatives is the shift
away from functional decomposition to an emphasis on composition. Decomposition is a perfect
fit both to the top-down progression of the waterfall model and to the bottom-up testing order. One of
the major weaknesses of waterfall development cited by [Agresti 86] is the over-reliance on this
whole paradigm. Functional decomposition can only be well done when the system is completely
understood, and it promotes analysis to the near exclusion of synthesis. The result is a very long
separation between requirements specification and a completed system, and during this interval, there
is no opportunity for feedback from the customer. Composition, on the other hand, is closer to the way
people work: start with something known and understood, then add to it gradually, and maybe remove
undesired portions. There is a very nice analogy with positive and negative sculpture. In negative
sculpture, work proceeds by removing unwanted material, as in the mathematician’s view of sculpting
Michelangelo’s David: start with a piece of marble, and simply chip away all non-David. Positive
sculpture is often done with a medium like wax. The central shape is approximated, and then wax is
either added or removed until the desired shape is attained. Think about the consequences of a
mistake: with negative sculpture, the whole work must be thrown away and restarted. (There is a
museum in Florence, Italy that contains half a dozen such false starts to The David.) With positive
sculpture, the erroneous part is simply removed and replaced. The centrality of composition in the
alternative models has a major implication for integration testing.
Waterfall Spin-offs
There are three mainline derivatives of the waterfall model: incremental development, evolutionary
development, and the Spiral model [Boehm 88]. Each of these involves a series of increments or
builds. Within a build, the normal waterfall phases from detailed design through testing occur, with
one important difference: system testing is split into two steps, regression and progression testing.

Prof. Chaitanya V, ISE, SCE 1



Specification Based Models


Two other variations are responses to the “complete understanding” problem. (Recall that functional
decomposition is successful only when the system is completely understood.) When systems are not
fully understood (by either the customer or the developer), functional decomposition is perilous at best.
The rapid prototyping life cycle deals with this by drastically reducing the specification-to-customer
feedback loop to produce very early synthesis. Rather than build a final system, a “quick and dirty”
prototype is built and then used to elicit customer feedback. Depending on the feedback, more
prototyping cycles may occur. Once the developer and the customer agree that a prototype represents
the desired system, the developer goes ahead and builds to a correct specification. At this point, any of
the waterfall spin-offs might also be used.
An Object-Oriented Life Cycle Model
When software is developed with an object orientation, none of our life cycle models fit very well.
The main reasons: the object orientation is highly compositional in nature, and there is dense
interaction among the construction phases of object-oriented analysis, object-oriented design, and
object-oriented programming. We could show this with pronounced feedback loops among waterfall
phases, but the fountain model [Henderson-Sellers 90] is a much more appropriate metaphor. In the
fountain model, the foundation is the requirements analysis of real-world systems.

Figure 5.1 Fountain Model of Object-Oriented Software Development

As the object-oriented paradigm proceeds, details “bubble up” through specification, design, and
coding phases, but at each stage, some of the “flow” drops back to the previous phase(s). This model
captures the reality of the way people actually work (even with the traditional approaches).

Formulations of the SATM System


In this and the next three chapters, we will relate our discussion to a higher level example, the Simple
Automatic Teller Machine (SATM) system. The terminal has function buttons B1, B2, and B3, a digit keypad
with a cancel key, slots for printer receipts and ATM cards, and doors for deposits and cash
withdrawals. The SATM system is described here in two ways: with a structured analysis approach,
and with an object-oriented approach. These descriptions are not complete, but they contain detail
sufficient to illustrate the testing techniques under discussion.


SATM with Structured Analysis


The structured analysis approach to requirements specification is the most widely used method in the
world. It enjoys extensive CASE tool support as well as commercial training, and is described in
numerous texts. The technique is based on three complementary models: function, data, and control.
Here we use data flow diagrams for the functional models, entity/relationship models for data, and
finite state machine models for the control aspect of the SATM system. The functional and data
models were drawn with the Deft CASE tool from Sybase Inc. That tool identifies external devices
(such as the terminal doors) with lower case letters, and elements of the functional decomposition
with numbers (such as 1.5 for the Validate Card function). The open and filled arrowheads on flow
arrows signify whether the flow item is simple or compound. The portions of the SATM system
shown here pertain generally to the personal identification number (PIN) verification portion of the
system.

Figure 5.2 Screens for the SATM System


The Deft CASE tool distinguishes between simple and compound flows, where compound flows may
be decomposed into other flows, which may themselves be compound. The graphic appearance of this
choice is that simple flows have filled arrowheads, while compound flows have open arrowheads. As
an example, the compound flow “screen” decomposes into the fifteen individual screens of the SATM system.


As part of the specification and design process, each functional component is normally expanded to
show its inputs, outputs, and mechanism. We do this here with pseudo-code (or PDL, for program
design language) for three modules. This particular PDL is loosely based on Pascal; the point of any
PDL is to communicate, not to develop something that can be compiled. The main program
description follows the finite state machine description.


The ValidatePIN procedure is based on another finite state machine, in which states refer to the
number of PIN entry attempts.


If we follow the pseudocode in these three modules, we can identify the “uses” relationship among the
modules in the functional decomposition.

Separating Integration and System Testing


We are almost in a position to make a clear distinction between integration and system testing. We
need this distinction to avoid gaps and redundancies across levels of testing, to clarify appropriate
goals for these levels, and to understand how to identify test cases at different levels. This whole
discussion is facilitated by a concept essential to all levels of testing: the notion of a “thread”. A
thread is a construct that refers to execution time behavior; when we test a system, we use test cases to
select (and execute) threads. We can speak of levels of threads: system threads describe system
level behavior, integration threads correspond to integration level behavior, and unit threads
correspond to unit level behavior. Many authors use the term, but few define it, and of those that do,
the offered definitions aren’t very helpful. For now, we take “thread” to be a primitive term, much
like function and data. In the next two chapters, we shall see that threads are most often recognized in
terms of the way systems are described and developed. For example, we might think of a thread as a
path through a finite state machine description of a system, or we might think of a thread as something
that is determined by a data context and a sequence of port level input events, such as those in the
context diagram of the SATM system. We could also think of a thread as a sequence of source
statements, or as a sequence of machine instructions. The point is, threads are a generic concept, and
they exist independently of how a system is described and developed.
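
The view of a thread as a path through a finite state machine description can be sketched in code. The following Python fragment is an illustrative sketch, not part of the SATM specification; the state and event names are assumptions, loosely modeled on the PIN-entry behavior discussed in this chapter.

```python
# A thread viewed as a path through a finite state machine.
# States and events are hypothetical, SATM-flavored names.
PIN_FSM = {
    ("AwaitingCard", "insert_valid_card"): "AwaitingPIN",
    ("AwaitingCard", "insert_invalid_card"): "EjectCard",
    ("AwaitingPIN", "correct_pin"): "AwaitingTransaction",
    ("AwaitingPIN", "wrong_pin"): "AwaitingPIN",
    ("AwaitingPIN", "third_wrong_pin"): "ConfiscateCard",
}

def run_thread(start_state, events):
    """Follow a sequence of input events through the FSM;
    the visited states form one thread."""
    path = [start_state]
    state = start_state
    for event in events:
        state = PIN_FSM[(state, event)]
        path.append(state)
    return path
```

For example, `run_thread("AwaitingCard", ["insert_valid_card", "correct_pin"])` traces one thread ending in the transaction state.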

Structural Insights
Everyone agrees that there must be some distinction, and that integration testing is at a more detailed
level than system testing. There is also general agreement that integration testing can safely assume
that the units have been separately tested, and that, taken by themselves, the units function correctly.
One common view, therefore, is that integration testing is concerned with the interfaces among the
units. One possibility is to fall back on the symmetries in the waterfall life cycle model, and say that
integration testing is concerned with preliminary design information, while system testing is at the
level of the requirements specification. This is a popular academic view, but it begs an important
question: how do we discriminate between specification and preliminary design? The pat academic
answer to this is the what vs. how dichotomy: the requirements specification defines what, and the
preliminary design describes how. While this sounds good at first, it doesn’t stand up well in practice.
Some scholars argue that just the choice of a requirements specification technique is a design choice
The life cycle approach is echoed by designers who often take a “Don’t Tread On Me” view of a
requirements specification: a requirements specification should neither predispose nor preclude a
design option. With this view, when information in a specification is so detailed that it “steps on the
designer’s toes”, the specification is too detailed. This sounds good, but it still doesn’t yield an
operational way to separate integration and system testing.
The models used in the development process provide some clues. If we follow the definition of the
SATM system, we could first postulate that system testing should make sure that all fifteen display
screens have been generated. (An output domain based, functional view of system testing.) The
entity/relationship model also helps: the one-to-one and one-to-many relationships help us understand
how much testing must be done. The control model (in this case, a hierarchy of finite state machines)
is the most helpful. We can postulate system test cases in terms of paths through the finite state
machine(s); doing this yields a system level analog of structural testing. The functional models
(dataflow diagrams and structure charts) move in the direction of levels because both express a
functional decomposition. Even with this, we cannot look at a structure chart and identify where
system testing ends and integration testing starts. The best we can do with structural information is
identify the extremes. For instance, the following threads are all clearly at the system level:

1. Insertion of an invalid card. (This is probably the “shortest” system thread.)

2. Insertion of a valid card, followed by three failed PIN entry attempts.

3. Insertion of a valid card, a correct PIN entry attempt, followed by a balance inquiry.

4. Insertion of a valid card, a correct PIN entry attempt, followed by a deposit.

5. Insertion of a valid card, a correct PIN entry attempt, followed by a withdrawal.

6. Insertion of a valid card, a correct PIN entry attempt, followed by an attempt to withdraw more
cash than the account balance.

Behavioral Insights
Here is a pragmatic, explicit distinction that has worked well in industrial applications. Think about a
system in terms of its port boundary, which is the location of system level inputs and outputs.
Every system has a port boundary; the port boundary of the SATM system includes the digit keypad, the
function buttons, the screen, the deposit and withdrawal doors, the card and receipt slots, and so on.
Each of these devices can be thought of as a “port”, and events occur at system ports. The port input
and output events are visible to the customer, and the customer very often understands system
behavior in terms of sequences of port events. Given this, we mandate that system port events are the
“primitives” of a system test case, that is, a system test case (or equivalently, a system thread) is
expressed as an interleaved sequence of port input and port output events. This fits our understanding
of a test case, in which we specify pre-conditions, inputs, outputs, and post-conditions. With this
mandate we can always recognize a level violation: if a test case (thread) ever requires an input (or an
output) that is not visible at the port boundary, the test case cannot be a system level
test case (thread). Notice that this is clear, recognizable, and enforceable. We will refine this
in Chapter 14 when we discuss threads of system behavior.
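
The port-boundary mandate lends itself to a mechanical check. The sketch below is illustrative rather than definitive: the port names come from the SATM devices listed above, but the event representation is our own assumption.

```python
# A system test case as an interleaved sequence of port input and
# port output events. Any event at a non-port location is a level
# violation: the test case cannot be a system level test case.
PORT_BOUNDARY = {"keypad", "function_buttons", "screen", "deposit_door",
                 "withdrawal_door", "card_slot", "receipt_slot"}

def is_system_level(test_case):
    """True only if every event is visible at the port boundary."""
    return all(port in PORT_BOUNDARY for _direction, port, _event in test_case)

thread = [
    ("in",  "card_slot", "insert valid card"),
    ("out", "screen",    "display 'Enter PIN'"),
    ("in",  "keypad",    "enter correct PIN"),
    ("out", "screen",    "display transaction menu"),
]
```

A test case that mentions, say, an internal buffer fails this check and is therefore an integration (or unit) level thread.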

Integration Testing
Craftspersons are recognized by two essential characteristics: they have a deep knowledge of the tools
of their trade, and they have a similar knowledge of the medium in which they work, so that they
understand their tools in terms of how they “work” with the medium. In Parts II and III, we focused
on the tools (techniques) available to the testing craftsperson. Our goal there was to understand testing
techniques in terms of their advantages and limitations with respect to particular types of faults. Here
we shift our emphasis to the medium, with the goal that a better understanding of the medium will
improve the testing craftsperson’s judgment.


A Closer Look at the SATM System


This decomposition is the basis for the usual view of integration testing. It is important to remember
that such a decomposition is primarily a packaging partition of the system. As software design moves
into more detail, the added information lets us refine the functional decomposition tree into a unit
calling graph. The unit calling graph is the directed graph in which nodes are program units and edges
correspond to program calls; that is, if unit A calls unit B, there is a directed edge from node A to
node B. We began the development of the call graph for the SATM system in Chapter 12 when we
examined the calls made by the main program and the ValidatePIN and GetPIN modules. That
information is captured in the adjacency matrix given below in Table 2. This matrix was created with a
spreadsheet; this turns out to be a handy tool for testers.
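
The construction of such an adjacency matrix is easy to mechanize. The following sketch is illustrative; the unit names are hypothetical stand-ins for the SATM modules, not the actual contents of Table 2.

```python
# Build a call-graph adjacency matrix from a list of (caller, callee)
# pairs, the way a tester might in a spreadsheet.
calls = [("Main", "ValidatePIN"), ("Main", "ValidateCard"),
         ("ValidatePIN", "GetPIN"), ("GetPIN", "KeySensor")]

units = sorted({u for pair in calls for u in pair})
index = {u: i for i, u in enumerate(units)}

# adjacency[i][j] == 1 exactly when unit i calls unit j
adjacency = [[0] * len(units) for _ in units]
for caller, callee in calls:
    adjacency[index[caller]][index[callee]] = 1
```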


Some of the hierarchy is obscured to reduce the confusion in the drawing. One thing should be quite
obvious: drawings of call graphs do not scale up well. Both the drawings and the adjacency matrix
provide insights to the tester. Nodes with high degree will be important to integration testing, and
paths from the main program (node 1) to the sink nodes can be used to identify contents of builds for
an incremental development.
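
Both observations can be computed directly from the call graph. The sketch below uses a small hypothetical graph (not the full SATM call graph) to show node degree and the root-to-sink paths that suggest build contents.

```python
# A hypothetical call-graph fragment; node 1 is the main program.
graph = {1: [2, 3], 2: [4], 3: [4, 5], 4: [], 5: []}

def degree(node):
    """In-degree plus out-degree; high-degree nodes matter most
    to integration testing."""
    out_deg = len(graph[node])
    in_deg = sum(node in succs for succs in graph.values())
    return in_deg + out_deg

def root_to_sink_paths(node, prefix=()):
    """Paths from a node to the sink nodes: candidate build contents
    for an incremental development."""
    prefix = prefix + (node,)
    if not graph[node]:               # sink node: no outgoing calls
        return [prefix]
    paths = []
    for succ in graph[node]:
        paths.extend(root_to_sink_paths(succ, prefix))
    return paths
```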

Decomposition Based Integration

Figure 5.3 SATM Functional Decomposition Tree

We can dispense with the big bang approach most easily: in this view of integration, all the units are
compiled together and tested at once. The drawback to this is that when (not if!) a failure is observed,
there are few clues to help isolate the location(s) of the fault.
Top-Down Integration
Top-down integration begins with the main program (the root of the tree). Any lower level unit that is
called by the main program appears as a “stub”, where stubs are pieces of throw-away code that
emulate a called unit. If we performed top-down integration testing for the SATM system, the first
step would be to develop stubs for all the units called by the main program: Watch Card Slot, Control
Card Roller, Screen Driver, Validate Card, Validate PIN, and Manage Transaction.

Once all the stubs for SATM main have been provided, we test the main program as if it were a stand-
alone unit. We could apply any of the appropriate functional and structural techniques, and look for
faults. When we are convinced that the main program logic is correct, we gradually replace stubs with
the actual code. Even this can be problematic. Would we replace all the stubs at once? If we did, we
would have a “small bang” for units with a high outdegree. If we replace one stub at a time, we retest
the main program once for each replaced stub. This means that, for the SATM main program example
here, we would repeat its integration test eight times (once for each replaced stub, and once with all
the stubs).
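
A stub can be as simple as a function that returns a canned answer. The following sketch is illustrative, with hypothetical names; it is not the book's actual SATM code.

```python
# Top-down integration: the main program is tested alone, with a
# throw-away stub standing in for the (not yet integrated) ValidatePIN.
def validate_pin_stub(card, pin):
    """Stub emulating the real ValidatePIN unit: always report success."""
    return True

def main_program(card, pin, validate_pin=validate_pin_stub):
    # Pass the real ValidatePIN here once the stub has served its purpose.
    if validate_pin(card, pin):
        return "show transaction menu"
    return "eject card"
```

Replacing the stub with real code is then a one-argument change, which is what makes the repeated retesting after each replacement practical.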
Bottom-up Integration
Bottom-up integration is a “mirror image” to the top-down order, with the difference that stubs are
replaced by driver modules that emulate units at the next level up in the tree. In bottom-up integration,
we start with the leaves of the decomposition tree (units like ControlDoor and DispenseCash), and test
them with specially coded drivers. There is probably less throw-away code in drivers than there is in
stubs. Recall we had one stub for each child node in the decomposition tree. Most systems have a
fairly high fan-out near the leaves, so in the bottom-up integration order, we won’t have as many
drivers. This is partially offset by the fact that the driver modules will be more complicated.
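
A driver inverts the arrangement: throw-away code plays the caller's role. Again an illustrative sketch with assumed names, not the book's SATM code.

```python
# Bottom-up integration: a specially coded driver exercises a leaf
# unit (here, a simplified DispenseCash) with chosen inputs.
def dispense_cash(amount, balance):
    """Leaf unit under test: refuse withdrawals that exceed the balance."""
    if amount > balance:
        return "insufficient funds"
    return f"dispensed {amount}"

def driver():
    """Driver emulating the calling unit: feed test inputs, collect results."""
    return [dispense_cash(50, 100), dispense_cash(200, 100)]
```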
Sandwich Integration
Sandwich integration is a combination of top-down and bottom-up integration. If we think about it in
terms of the decomposition tree, we are really just doing big bang integration on a sub-tree.

Call Graph Based Integration


One of the drawbacks of decomposition based integration is that the basis is the functional
decomposition tree. If we use the call graph instead, we mitigate this deficiency; we also move in the
direction of behavioral testing. We are in a position to enjoy the investment we made in the discussion
of graph theory. Since the call graph is a directed graph, why not use it the way we used program
graphs? This leads us to two new approaches to integration testing: we’ll refer to them as pair-wise
integration and neighborhood integration.
Pair-wise Integration
The idea behind pair-wise integration is to eliminate the stub/driver development effort. Rather than
develop stubs and/or drivers, why not use the actual code? At first, this sounds like big bang
integration, but we restrict a session to just a pair of units in the call graph. The end result is that we
have one integration test session for each edge in the call graph.
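
Enumerating the pair-wise sessions is just enumerating the edges. A sketch, using a hypothetical call-graph fragment:

```python
# One (caller, callee) integration session per call-graph edge.
call_graph = {"Main": ["ValidatePIN", "ValidateCard"],
              "ValidatePIN": ["GetPIN"],
              "ValidateCard": [], "GetPIN": []}

def pairwise_sessions(g):
    """List every edge of the call graph as a pair-wise test session."""
    return [(caller, callee)
            for caller, callees in g.items()
            for callee in callees]
```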
Neighborhood Integration
We can let the mathematics carry us still further by borrowing the notion of a “neighborhood” from
topology.
(This isn’t too much of a stretch — graph theory is a branch of topology.) We (informally) define the
neighborhood of a node in a graph to be the set of nodes that are one edge away from the given node.
In a directed graph, this means all the immediate predecessor nodes and all the immediate successor
nodes (notice that these correspond to the set of stubs and drivers of the node). The eleven
neighborhoods for the SATM example (based on the call graph in Figure 4.2) are given in Table 3.


We can always compute the number of neighborhoods for a given call graph. There will be one
neighborhood for each interior node, plus one extra in case there are leaf nodes connected directly to
the root node. (An interior node has a non-zero indegree and a non-zero outdegree.)
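
The neighborhood computation follows directly from this definition: immediate predecessors plus immediate successors. A sketch over a hypothetical call-graph fragment:

```python
# Neighborhood of a node in a directed call graph: the union of its
# immediate predecessors (its drivers) and successors (its stubs).
calls_from = {1: [2, 3], 2: [4], 3: [4], 4: []}

def neighborhood(node):
    successors = set(calls_from[node])
    predecessors = {n for n, succs in calls_from.items() if node in succs}
    return predecessors | successors
```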

Path Based Integration


Much of the progress in the development of mathematics comes from an elegant pattern: have a clear
idea of where you want to go, and then define the concepts that take you there. We do this here for
path based integration testing, but first we need to motivate the definitions.
When a unit executes, some path of source statements is traversed. Suppose that there is a call to
another unit along such a path: at that point, control is passed from the calling unit to the called unit,
where some other path of source statements is traversed. We cleverly ignored this situation in Part III,
because this is a better place to address the question. There are two possibilities: abandon the single-entry, single-exit precept and treat such calls as an exit followed by an entry, or “suppress” the call
statement because control eventually returns to the calling unit anyway. The suppression choice works
well for unit testing, but it is antithetical to integration testing.
The first guideline for MM-Paths: points of quiescence are “natural” endpoints for an MM-Path. (An
MM-Path is an interleaved sequence of module execution paths and messages.) Our second guideline
also serves to distinguish integration from system testing: atomic system functions (ASFs) are an upper
limit for MM-Paths; we don’t want MM-Paths to cross ASF boundaries. This means that ASFs represent
the seam between integration and system testing. They are the largest items to be tested by integration
testing, and the smallest items for system testing. We can test an ASF at both levels. Again, the digit
entry ASF is a good example.
During system testing, the port input event is a physical key press that is detected by KeySensor and
sent to GetPIN as a string variable. (Notice that KeySensor performs the physical to logical
transition.) GetPIN determines whether a digit key or the cancel key was pressed, and responds
accordingly. (Notice that button presses are ignored.) The ASF terminates with either screen 2 or 4
being displayed. Rather than require system keystrokes and visible screen displays, we could use a
driver to provide these, and test the digit entry ASF via integration testing. We can see this using our
continuing example.

MM-Paths and ASFs in the SATM System


