
8.10. INTRODUCTION TO SOFTWARE TESTING


The narrow definition of the term "testing" is the phase that follows coding and precedes deployment. Testing is traditionally used to mean testing of program code. When the product is tested with appropriate and realistic tests that reflect typical usage patterns by the intended users, the chances of the product satisfying the customer's requirement are much higher. While testing does not guarantee zero defects, effective testing certainly increases the chances of customer acceptance of the software.

Testing is done by a set of people within a software product (or service) organization whose goal and charter is to uncover the defects in the product before it reaches the customer.

"Software testing is the process of executing software in a controlled manner in order to answer the question: Does the software behave as specified?"
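The question "does the software behave as specified?" can be made concrete with a small sketch. The function and its specification below are hypothetical, invented purely for illustration: a test executes the code in a controlled manner with known inputs and compares the actual behaviour against what the specification says.

```python
# A minimal sketch of "testing as controlled execution".
# The function and its specification are hypothetical.

def apply_discount(total):
    """Spec: orders of 100 or more get a 10% discount; smaller orders get none."""
    if total >= 100:
        return round(total * 0.9, 2)
    return total

# Controlled execution: known inputs, expected outputs taken from the spec.
assert apply_discount(100) == 90.0   # boundary: discount applies
assert apply_discount(99) == 99      # just below boundary: no discount
assert apply_discount(200) == 180.0  # typical discounted order
print("behaviour matches specification")
```

Each assertion answers the question for one input; a failing assertion would show the software does not behave as specified for that case.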


Testing is vital to the success of the system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing leads to errors that may not appear until months later.
Testing is a complex process. In order to make the process simpler, the testing activities are broken into smaller activities. Due to this, for a project, incremental testing is generally performed. Testing is a very expensive process and consumes 30% to 40% of the cost of a typical development project.
Testing cannot show the absence of errors, it can only show that errors are present.
8.11. DEFINITIONS OF TESTING
The process of analyzing a software item to detect the differences between existing and required conditions (i.e., bugs) and to evaluate the features of the software item. (IEEE 1993)
The process of analyzing a program with the intent of finding errors. (Myers 1979)
8.13. OBJECTIVE OF SOFTWARE TESTING
The objective of software testing is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort. Following are the main testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.

Software testing is usually performed for the following objectives:
(i) Software quality improvement
(ii) Verification and validation
(iii) Software reliability estimation
Software Quality Improvement
Software quality means the conformance to the specified software design requirements. Debugging, a narrow view of software testing, is performed heavily by the programmer to find out design defects.
Verification and Validation
Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases. A finite number of tests cannot validate that the software works for all situations. On the contrary, only one failed test is sufficient to show that the software does not work.
Dirty tests, or negative tests, refer to the tests aiming at breaking the software, or showing that it does not work. A piece of software must have sufficient exception handling capabilities to survive a significant level of dirty tests.
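The distinction between clean (positive) and dirty (negative) tests can be sketched with a small hypothetical input parser: positive tests feed valid input and check the specified result, while negative tests feed invalid input and check that the code's exception handling survives it.

```python
# Hypothetical example of positive (clean) vs. negative (dirty) tests.

def parse_age(text):
    """Convert a string to an age in years; reject non-numeric or out-of-range input."""
    value = int(text)              # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Positive (clean) tests: validate that the software works for specified cases.
assert parse_age("30") == 30
assert parse_age("0") == 0

# Negative (dirty) tests: try to break the software; robust exception
# handling must survive every one of these.
for bad in ["abc", "-5", "999"]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected rejection of {bad!r}")
    except ValueError:
        pass  # expected: the dirty test was survived
```

Note that the positive tests alone could never demonstrate correctness for all situations; the negative tests probe exactly the inputs a clean suite leaves untouched.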
Software Reliability Estimation
Software reliability is defined as the probability of failure-free software operation for a specific period of time in a specified environment. Software reliability is an important attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability and documentation.
The objective of testing is to discover the residual design errors before delivery to the customer. The failure data during the testing process are taken down in order to estimate the software reliability. The testing process may function with regular feedback from the reliability analysis to the testers and designers. The testing process is shown in Fig. 8.1.
8.7. SOFTWARE TESTING PROCESS
Systems should not be tested as a single unit except for small programs. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures and functions.
The most widely used testing process consists of five stages that are as follows:
1. Unit Testing
2. Module Testing
3. Sub-system Testing
4. System Testing
5. Acceptance Testing
The iterative software testing process is illustrated as:

[Figure: unit, module and sub-system testing grouped as component testing; system testing as integration testing; acceptance testing as user testing; with feedback between the stages.]

Fig. 8.3.
1. Unit Testing
Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
2. Module Testing
Amodule is a collection of dependent components such as an object class, an abstract
data type or some looser collection of procedures and
fünct1ons. A module encapsulates related
components so it can be tested without other system modules.
3. Sub-system Testing
This phase involves testing collections of modules which have been integrated into sub-systems. It is a design-oriented testing and is also known as integration testing.
Sub-systems may be independently designed and implemented. The most common problems which arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.
4. System Testing
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.
5. Acceptance Testing
This is the final stage in the
testing process before the system is accepted for
nal
use.
The system is tested with data supplied by the system client rather thanoperd
st data. Acceptance testing may reveal s1mnt
definition because real data exercises errors and omissions in the systems requir
the system in different
ways from the test a
8.20. UNIT TESTING
A unit of software is composed of one or more modules. Unit testing refers to the process of testing modules against the detailed design. The inputs to unit testing are the successfully compiled modules from the coding process. These are assembled during unit testing to make the largest units, i.e., the components of the architectural design. The successfully tested architectural design components are the outputs of unit testing.
Unit testing focuses verification effort on the smallest unit of software design: the module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test is white-box oriented and the step can be conducted in parallel for multiple components.
8.20.1. Unit Test Considerations
The tests that are performed as part of unit testing are shown in Fig. 8.5: interface, local data structures, boundary conditions, independent paths and error-handling paths.

Fig. 8.5.

The module interface is tested to ensure that information properly flows into and out of the program unit under testing. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error-handling paths are tested.
Some suggested checklists for unit test are as follows:

A. Interface
 Is the number of input parameters equal to the number of arguments?
 Do parameter and argument attributes match?
 Are parameters passed in correct order?
 Are global variable definitions consistent across modules?
 If the module does I/O:
(i) Are file attributes correct?
(ii) Are open/close statements correct?
(iii) Does buffer size match record size?
(iv) Is the file opened before use?
(v) Are end-of-file conditions handled?
(vi) Are I/O errors handled?
B. Local data structures
(common source of errors)
Test cases in unit testing should uncover errors, such as
Comparison of different data types
Incorrect logical operators or precedence
.Expectation of equality when precision error makes equality
Incorrect comparison of variables unlikely
Improper loop termination
Failure to exit when divergent iteration is
encountered
Improperly modified loop variables
. Overflow, underflow, address exceptions.
C. Error handling
 Error description unintelligible
 Error noted does not correspond to error encountered
 Exception condition processing incorrect
 Error description does not provide sufficient information to assist in determining errors.
8.20.2. Unit Test Procedures
Unit testing is typically seen as an adjunct to the coding step. Once source code has been produced, reviewed and verified for correct syntax, unit test case design can start. A review of design information provides guidance for establishing test cases that are likely to uncover errors. Each test case should be linked with a set of anticipated results. As a module is not a stand-alone program, driver and stub software must be produced for each test unit (Fig. 8.6).

In most applications a driver is nothing more than a "main program" that accepts test-case data, passes such data to the component (to be tested), and prints relevant results (Fig. 8.6). Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
Drivers and stubs represent overhead. That is, both are software that must be written but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low.
that is not
Unfortunately, many components cannotbe adequately until the
unit tested
Overhead is relatively low. testing can be postponed when a
software. In such cases, complete
is simplified
wIth "simple" overhead or stubs are
drivers
also used). Unit testing
component,
negration test step (where When only one function is addressed by a
high cohesion is designed. and uncovered.
with more easily predicted
mponent reduced and e r r o r s
can be
E
number of test c a s e s is
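The driver/stub arrangement described above can be sketched as follows. The module under test, the subordinate tax-lookup module it calls, and the test values are all hypothetical, chosen only to show the roles each piece plays.

```python
# Hypothetical module under test: computes a total price, delegating the tax
# rate to a subordinate module that is not yet available for testing.

def compute_total(price, tax_lookup):
    """Module under test: adds tax obtained from a subordinate module."""
    return round(price + price * tax_lookup(), 2)

def tax_rate_stub():
    """Stub: replaces the real subordinate tax module. It does minimal data
    manipulation, prints verification of entry, and returns control."""
    print("stub tax_rate called")
    return 0.10  # fixed, known value instead of a real lookup

def driver():
    """Driver: a 'main program' that accepts test-case data, passes it to the
    component under test, and prints the relevant results."""
    for price, expected in [(100.0, 110.0), (0.0, 0.0)]:
        result = compute_total(price, tax_rate_stub)
        print(f"price={price} -> total={result} (expected {expected})")
        assert result == expected

driver()
```

Neither `driver` nor `tax_rate_stub` ships with the product; both are overhead written only so the module can be exercised in isolation.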

8.21. INTEGRATION TESTING
Once the individual units have been tested, there is a need to test how they were put together, conducting tests to make sure no data is lost across an interface, one module does not have an adverse impact on another, and a function is not performed incorrectly.
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. In this testing, many unit-tested modules are combined into subsystems, which are then tested. The goal here is to see if the modules can be integrated properly.
The primary objective of integration testing is to test the module interfaces in order to ensure that there are no errors in the parameter passing when one module invokes another module. During integration testing, different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which modules are combined to realize the full system. After each integration step, the partially integrated system is tested.

8.21.1. Approaches to Integration Testing
The various approaches used for integration testing are:
(a) Big Bang
There is often a tendency to attempt non-incremental integration, i.e., to construct a program using a big bang approach. The big bang method of integration testing has multiple implementations. However, the general idea is to combine a large portion of the project and conduct a planned integration testing with thorough documentation. A major section of the system or even the whole system can be combined to conduct integration testing. Such a method requires near-perfect unit testing to be done, to ensure that bugs uncovered during the integration testing would be defects caused by component interfacing or the connectors between them.
There are not many advantages of the big bang integration; the only advantage is that for a smaller system this will be an ideal integration testing technique.
The disadvantage is that you would have to wait for all the modules to be integrated in order to do big-bang testing, so there will be quite a lot of delay. Any errors are identified at a very late stage and it is very hard to identify the fault. It is very difficult to be sure that all testing has been done before product release.
(b) Incremental Approach
The incremental approach means to first combine only two components together and test them. Remove the errors if they are there; otherwise combine another component to it and then test again, and so on until the whole system is developed. In incremental integration the program is constructed and tested in small increments, where errors are easier to isolate and correct.

[Fig. 8.7: three test sequences; modules A and B are tested in sequence 1, module C is added for sequence 2, and module D is added for sequence 3.]

Fig. 8.7.
According to Fig. 8.7, in test sequence 1, tests T1, T2 and T3 are first run on a system composed of module A and module B. If these are corrected or error-free, then module C is integrated, i.e., test sequence 2, and tests T1, T2 and T3 are repeated. If a problem arises in these tests, then they interact with the new module. The source of the problem is localized, thus simplifying defect location and repair. Finally, module D is integrated, i.e., test sequence 3, which is then tested using existing tests (T1 to T4) and new tests (T5).
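The incremental sequences of Fig. 8.7 can be sketched as: integrate two modules, run the tests, then add one module at a time and re-run the growing suite so that any failure is localized to the newest module. Modules A through D and the test values below are hypothetical stand-ins.

```python
# Sketch of incremental integration: the system is grown one (hypothetical)
# module at a time, and the test suite is re-run at each step.

def module_a(x): return x + 1
def module_b(x): return x * 2
def module_c(x): return x - 3
def module_d(x): return x ** 2

def run_suite(modules, tests):
    """Run every test against the partially integrated system."""
    def system(x):
        for m in modules:          # data flows through the integrated modules in order
            x = m(x)
        return x
    for name, given, expected in tests:
        assert system(given) == expected, f"{name} failed"

# Test sequence 1: A and B integrated.
run_suite([module_a, module_b],
          [("T1", 0, 2), ("T2", 1, 4), ("T3", 4, 10)])

# Test sequence 2: C added; the tests are repeated (with updated expectations),
# so a failure here points at the newly added module.
run_suite([module_a, module_b, module_c],
          [("T1", 0, -1), ("T2", 1, 1), ("T3", 4, 7), ("T4", 10, 19)])

# Test sequence 3: D added; existing tests plus a new one.
run_suite([module_a, module_b, module_c, module_d],
          [("T1", 0, 1), ("T2", 1, 1), ("T3", 4, 49), ("T4", 10, 361), ("T5", 2, 9)])
print("all integration sequences passed")
```

Because only one module changes between sequences, a failing assertion immediately narrows defect location to the last module integrated or its interface.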
(c) Top-Down Integration Testing
Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (Fig. 8.8).
In the top-down method, top elements of the system are integrated and tested in the beginning, while units and basic-level components are evaluated at the end of the testing phase. This approach to integration testing is beneficial, since it identifies implementation of functional requirements, high-level components and logic of components early on in testing. Additionally, it allows the tester to evaluate each branch of the system under development.

Fig. 8.8.

(i) Depth-first integration: This would integrate all components on a major control path of the structure. For example, in Fig. 8.8, M1, M2 and M5 would be integrated first. Next, M8 or M6 would be integrated. Then, the central and right-hand control paths are built.
(ii) Breadth-first integration: This incorporates all components directly subordinate at each level, moving across the structure horizontally. From figure 8.8, components M2, M3 and M4 would be integrated first. The next control level, M5, M6 and so on, follows.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The advantage of this way of testing is that if a prototype is released or shown, then most of the main functionality will already be working. It is also easy to maintain the code, and there will be better control in terms of errors, so most of the errors would be taken out before going to the next stage of testing.
The disadvantage is that it is hard to test the lower level components using test data. The other thing is that lower level modules may not be tested as much as the upper level modules.
(d) Bottom-up Integration Testing
This is the opposite of top-down: you test the lower level components and start working your way upwards to the higher level components. The components will be separated by level of importance, and the least important modules will be worked on first. This way you would work your way up by integrating components at each level before moving upwards.
Bottom-up integration testing, as its name implies, begins construction and testing with the components at the lowest level in the program structure. A bottom-up integration strategy may be implemented with the following steps:
 Low-level components are combined into clusters (sometimes called builds) that perform specific software sub-functions.
 A driver (a control program for testing) is written to coordinate test case input and output.
8.22. SYSTEM TESTING
Subsystems re integrated to make up the entire
system. The
finding errors that result from unanticipated interactions testing process is concerned with
between subsystems and system
components. It is also concerned with validating that the meets its functional and non-
functional requirements. There are essentially three mainsystem
kinds of system testing:
Alpha testing
Beta testing
Acceptance testing
Alpha Testing
The customer, under the project team's guidance, conducts the alpha test at the developer's site. In this test, users test the software on the development platform and point out errors for correction. However, the alpha test, because a few users on the development platform conduct it, has limited ability to expose errors and correct them. Alpha tests are conducted in a controlled environment. It is a simulation of real-life usage. Once the alpha test is complete, the software product is ready for transition to the customer site for implementation and development.
Beta Testing
Beta testing is the system testing performed by a selected group of friendly customers? If
the system is complex, the software is not taken for implementation directly. It is installed and
all users are asked touse the software in testing mode; this is not live usage. This is called the
beta test. Beta tests are conducted at the customersitein an environment where the software
1sexposed to a number of users. The developer may or may not be present while the software is
in use. So, betatesting is a real-life software experience without actualimplementation. In th+s
test, end users record their observations,mistakes errors and so on and reportthem periodically
In a beta test, the user may modification, a major change, or a deviation
suggest a
Lon. The
development has to
examine the proposed change and put it into the change managem:
system for smooth change from just developedsoftware to a revised, better
standard practice to put all such changes in subsequent version releases.
softwarare. eIt is
Acceptance Testing
Acceptance testing (also known as acceptance testing) is a type of testing carried o e
user
order to verify if the produet is developed as per the standards and specified criteria andmat in
all the requirements specified by customer. Thus, acceptance testing is the nd meets
system testi
performed by the customer to determine whether to ting
accept or reject the delivery of the svsto
conductedem
When customer software is built for one customer, a series of acceptance tests are
enable the customer to validate all requirements. to
Acceptance testing falls under black box testing methodology where the user is not
Ver
much interested in internal working/coding of the system, but evaluates the ery
overall functionin
of the system and compares it with the
is considered to be one of the most
requirements specified by them. User acceptance testing
important testing by user before the system is finally delivere
or handed over to the end user.

During acceptance testing, the system has to pass through or operate in an environment that imitates the actual computing/operating environment existing with the user. The user may choose to perform the testing in an iterative manner or in the form of a set of varying parameters (for example, missile guidance software can be tested under varying payload, different weather conditions, etc.).
The outcome of the acceptance testing can be termed a success or failure based on the critical operating conditions the system passes through successfully/unsuccessfully and the users' final evaluation of the system.
The test cases and test criteria in acceptance testing are generally created by the end user, and business scenario criteria cannot be achieved without most of the input coming from the user. This type of testing and test case creation involves experienced people from both sides (developers and users), like business analysts, specialized testers, developers and end users.
8.27. TYPES OF SOFTWARE TESTING
There are many ways to conduct software testing. During the implementation phase, modules are informally tested by the programmer while they are being coded; this is referred to as desk checking. After the programmer is satisfied that the module appears to function correctly, methodical testing of the module is undertaken by a separate testing team.
There are two basic types of methodical testing:
1. Non-execution based testing
2. Execution based testing.
Non-execution Based Testing
Non-execution based testing relies on a fault-detection strategy. The fault-detecting power of these non-execution based techniques leads to rapid, thorough, and early fault detection. The additional time required for code reviews is more than repaid by the increased productivity due to fewer faults at the integration phase.
In general, non-execution based code testing is less expensive than execution based testing because:
 Execution based testing (running test cases) can be extremely time-consuming, and
 Reviews lead to detection of faults earlier in the life cycle.
Non-execution based testing is also known as static testing.

Execution Based Testing
In this type of testing, the modules are run against test cases. Following are the two ways of systematically constructing test data to test a module:
 Black-Box Testing: The code itself is ignored; the only information used in designing test cases is the specification document.
 White-Box Testing: The code itself is tested, without regard to the specifications.

8.28. WHITE-BOX TESTING
Every software product is realized by means of a program code. White-box testing is a way of testing the external functionality of the code by examining and testing the program code that realizes the external functionality. White-box testing is also known as clear-box or glass-box or open-box testing (Fig. 8.15. White-box testing).
White-box testing takes into account the program code, code structure and internal design flow. In contrast, black-box testing, to be discussed later, does not look at the program code but looks at the product from an external perspective.
White-box testing is a software testing approach that examines the program structure and derives test data from the program logic. Structural testing is sometimes referred to as clear-box testing since white boxes are considered opaque and do not really permit visibility into the code.
White-box testing is classified into "static" and "structural" testing, as shown in Fig. 8.16:
 Static testing: desk checking, code walkthrough, code inspection.
 Structural testing: unit/code functional testing; code coverage (statement coverage, path coverage, condition coverage, function coverage); code complexity (cyclomatic complexity).

Fig. 8.16.

8.28.1. Advantages of White-box Testing
1. Approximates the partitioning done by execution equivalence.
2. Reveals errors in hidden code.
3. Forces the test developer to reason carefully about implementation.
4. Beneficent side effects.
5. Optimizations.
Why we do White-Box Testing
To ensure that:
 All independent paths within a module have been exercised at least once.
 All logical decisions are verified on their true and false values.
 All loops are executed at their boundaries and within their operational bounds.
 All internal data structures are validated.
Need of White-Box Testing
To discover the following types of bugs:
into our work when we design and implement functions,
Logical error tend to creep
controls that are out of the program.
conditions
of the program and the actual
or

errors due to
difference between logical flow
The design
implementation.
Typographical errors and syntax
checking.
8.28.2. Structural Testing
Structural testing examines source code and analyses what is present in the code. Structural testing techniques are often dynamic, meaning that code is executed during analysis. This implies a high test cost due to compilation or interpretation, linkage, file management and execution time. Test cases are derived from analysis of the program control flow.
A Control Flow Graph is a representation of the flow of control between program regions, such as a group of statements bounded by a single entry and exit point.
Structural testing cannot expose errors of code omission, but it can estimate test suite adequacy in terms of code coverage (that is, execution of components by the test suite) or its fault-finding ability.
The following are some important types of structural testing:
 Statement Coverage Testing
 Branch Coverage Testing
 Condition Coverage Testing
 Loop Coverage Testing
 Path Coverage Testing
 Domain and Boundary Testing
 Data-flow Testing
Statement coverage: Statement coverage is a measure of the percentage of statements
that have been executed by test cases. Our objective should be to achieve 100% statement
coverage through testing.
Branch/decision coverage: This test coverage criterion requires enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. That is, every branch (decision) is taken each way, true and false. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behaviour of the application.
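The difference between statement and branch coverage can be seen on a small hypothetical function: one test can execute every statement while still leaving the false outcome of a decision unexercised.

```python
# Hypothetical function: an if with no else, so the false outcome of the
# decision executes no statement of its own.
def grade(score):
    result = "fail"
    if score >= 40:        # decision with a true branch and an implicit false branch
        result = "pass"
    return result

# One test case executes every statement (100% statement coverage)...
assert grade(75) == "pass"

# ...but the decision's false outcome was never taken. Branch coverage
# additionally requires a test in which the condition is false:
assert grade(20) == "fail"
```

This is why 100% statement coverage is a weaker criterion than 100% branch coverage: the second test adds no new statements, only the untaken branch.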
Condition coverage (or predicate coverage): Has each boolean sub-expression evaluated both to true and to false? This does not necessarily imply decision coverage.
Condition/decision coverage: Both decision and condition coverage should be satisfied.
Loop coverage: This criterion requires sufficient test cases for all program loops to be executed for zero, one, two and many iterations, covering initialization, typical running and termination conditions.

Path coverage: In path coverage we write test cases to ensure that each and every path has been traversed at least once. One way to better understand path coverage:
 Draw a flow graph to indicate the corresponding logic in the program.
 Calculate the individual paths in the flow graph.
 Run the program in all possible ways to cover every statement.
Understanding path coverage is a good way to better understand code coverage.
Domain and Boundary testing: Domain testing is a form of path coverage. Path domains
are a subset of the program input that causes execution of unique paths. The input data can
be derived from the program control flow graph. Test inputs are chosen to exercise each
path and also the boundaries of each domain.
The drawback of domain and boundary analysis is that it is only suitable for programs with
a small number of input variables and with simple linear predicates.
Data flow testing: Data flow testing focuses on the points at which variables receive values, and the points at which those values are used. This kind of testing serves as a reality check on path testing, and that is why many authors view it as a path testing technique. This technique requires sufficient test cases for each feasible data flow to be executed at least once. Data flow analysis studies the sequences of actions on variables along program paths. It can be considered and applied as both a static and a dynamic technique.
Data-flow testing uses the flow graph to explore unreasonable things that can happen to data. Enough paths must be selected to ensure that each and every variable has been initialized prior to its use, or that all defined variables have been used or referenced for something.
Data flow testing is considered viable for incorrect uses of variables and constants as well as misspelled identifiers. As with code coverage strategies, data flow testing cannot detect missing statements.
8.28.3. Disadvantages of White-Box Testing
Despite benefits, white-box testing has its drawbacks. Some of the most commonly citaa
1ssues are ted
Complexity. Being able to see every constituent part of an application meansthat.
at a
tester must have detailed programmatic knowledge of the application in order to wor
withit properly. This high-degree of complexity requires a much more highly skille
Nork
led
individual to develop test case.
Fragility. While introspection is supposed to overcome the issue of application change
nges
breaking test scrips the reality is that often the names of objects change during produet
development or new paths through the application are added. The fact that white-be
testing requirestest scripts to betightly tied to the underlying code of an application
means that changes to the code will often cause white-box test scripts to break.Thio
then, introduces a high degree of script maintenance into the testing process.
Integration. For white-box testing to achieve the degree of introspection required it
the
must be
tightly integrated with application being tested. This
creates a few problema
To betightlyintegrated with thecode youmustinstallthe white-boxtoolonthesystem
on which the application is running. This is okay, but where one wishes to eliminate
the possibility that the testing tool is what is causing either a performance or operational
problem, this becomes impossible to resolve. Another issue that arises is that of platform
support. Due to the highly integrated nature of white-box testing tools many do not
provide support for more than one platform, usually Windows. Where companies have
applications that run on other platforms, they either need to use a different tool or
resort to manual testing.
.It is
nearly impossible to look every bit of code to find out hidden which may
errors,
create problems resulting in failure of the application.
8.29. BLACK-BOX TESTING (FUNCTIONAL TESTING)
Black-box testing involves looking at the specifications and does not require examining the code of a program. Black-box testing is done from the customer's viewpoint. The test engineer engaged in black-box testing only knows the set of inputs and expected outputs and is unaware of how those inputs are transformed into outputs by the software. Black-box tests are convenient to administer because they use the complete finished product and do not require any knowledge of its construction.
Generally, black-box testing attempts to uncover the following:
1. Incorrect functions.
2. Data structure errors.
3. Missing functions.
4. Performance errors.
5. External database access errors.
6. Initializing and termination errors.
Black-box testing is also called functional testing because the tester is only concerned with the functionality and not the implementation of the software.
Black-box or functional tests are designed to answer the following questions:
(i) How is functional validity tested?
(ii) How are system behaviour and performance tested?
(iii) What classes of input will make good test cases?
(iv) Is the system particularly sensitive to certain input values?
(v) How are the boundaries of a data class isolated?
(vi) What data rates and data volume can the system tolerate?
(vii) What effect will specific combinations of data have on system operation?
Why Black-box Testing ?
Black-box testing helps in the overall functionality verification of the system under test.
1. It helps in identifying any incomplete or inconsistent requirements, as well as any issues involved when the system is tested as a complete entity.
2. Black-box testing handles valid and invalid inputs.
3. Black-box testing addresses the stated requirements as well as implied requirements.
4. Black-box testing encompasses the end user's perspective.

Black-Box Testing Techniques
The various techniques are as follows:
1. Requirements based testing.
2. Positive and negative testing.
3. Boundary value analysis.
4. Equivalence partitioning.
5. Graph based testing.
6. Compatibility testing.
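As an illustration of two of these techniques, the following sketch (a hypothetical example, not from the text) applies equivalence partitioning and boundary value analysis to an input field that accepts ages from 18 to 60:

```python
# Hypothetical example: equivalence partitioning and boundary value
# analysis for an input field that accepts ages from 18 to 60 inclusive.

def is_valid_age(age):
    """The function under test: accepts ages in the range 18..60."""
    return 18 <= age <= 60

# Equivalence classes: one invalid class below the range, one valid
# class inside it, one invalid class above it. A single representative
# value is tested from each class.
equivalence_cases = [(10, False), (35, True), (75, False)]

# Boundary value analysis: test just below, on, and just above each
# boundary of the valid range.
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]

for value, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(value) == expected, f"failed for {value}"

print("all black-box cases passed")
```

Note that only three tests cover the three equivalence classes, while six more probe the two boundaries, where defects such as off-by-one comparisons typically hide.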
8.40. TEST CASE
A test case is a set of components that describe an input, action or event and an expected response, to determine if a feature of an application is working correctly.
Why we write test cases ?
The basic objective of writing test cases is to validate the testing coverage of the application. If you are working in any CMM company, then you will strictly follow test case standards. So writing test cases brings some sort of standardization and minimizes the ad-hoc approach in testing.
How to write test cases ?
Here is a simple test case format.
Fields in test cases:
Test case id:
Unit to test: What is to be verified?
Assumptions:
Test data: Variables and their values.
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:
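The format above can be represented as a simple record. The following sketch fills in the fields for an imaginary login feature (all names and values are hypothetical, chosen only for illustration) and derives the Pass/Fail verdict by comparing expected and actual results:

```python
# Hypothetical example: a test case record following the format above,
# filled in for an imaginary "login" feature.

test_case = {
    "test_case_id": "TC-001",
    "unit_to_test": "login() - verify rejection of a wrong password",
    "assumptions": "User 'alice' exists with password 'secret'",
    "test_data": {"username": "alice", "password": "wrong"},
    "steps_to_be_executed": ["call login(username, password)"],
    "expected_result": False,   # the login must be refused
}

def login(username, password):
    """Stand-in for the unit under test."""
    return username == "alice" and password == "secret"

# Execute the case, then record the actual result and the verdict.
data = test_case["test_data"]
test_case["actual_result"] = login(data["username"], data["password"])
test_case["pass_fail"] = (
    "Pass" if test_case["actual_result"] == test_case["expected_result"]
    else "Fail"
)

print(test_case["test_case_id"], test_case["pass_fail"])  # TC-001 Pass
```

Keeping the verdict computed rather than hand-written is one way such a format minimizes the ad-hoc approach the text mentions.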

8.41. DEBUGGING
Debugging occurs as a consequence of successful testing. The goal of testing is to identify errors (bugs) in the program. After identifying the place of an error, we examine that portion of the program to identify the cause of the problem. This process is called debugging. Hence debugging is the activity of locating and correcting errors.
The debugging process is illustrated in Fig. 8.20: test cases are executed and the results are examined; suspected causes are investigated during debugging; identified causes are corrected; and the corrections are verified with additional test cases and regression tests.
Fig. 8.20. Debugging process
Thus the outcomes of the debugging process are:
1. The cause will be found, corrected and removed.
2. The cause will not be found.

8.41.1. Debugging Techniques
Some important debugging techniques are as follows:
1. Brute Force Debugging
2. Backtracking Debugging
3. Debugging by Induction
4. Debugging by Deduction
5. Cause Elimination Debugging
6. Debugging by Testing
7. Debugging by Program Slicing.
(a) Brute Force Debugging
Brute force debugging is debugging with a storage dump. In this technique, the program is loaded with print statements to print the intermediate values, with the hope that some of the printed values will help to identify the statement with the error.
This approach becomes more systematic with the use of a symbolic debugger, because the values of different variables can be easily examined.
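A minimal sketch of the technique (hypothetical code, not from the text): print statements are inserted to dump the intermediate state of an average calculation, so the trace narrows the defect down to a single statement:

```python
# Hypothetical example of brute-force debugging: tracing intermediate
# values with print statements to locate the faulty statement.

def average(values):
    total = 0
    for i, v in enumerate(values):
        total += v
        # Brute-force trace: dump the intermediate state at each step.
        print(f"step {i}: v={v}, total={total}")
    # If the trace shows 'total' is correct but the final result is
    # wrong, the defect must be in the statement below (for example an
    # off-by-one divisor such as len(values) - 1).
    return total / len(values)

result = average([2, 4, 6])
print("result:", result)  # 4.0
```

A symbolic debugger makes the same inspection systematic: a breakpoint inside the loop replaces every one of these print statements.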

(b) Backtracking Debugging
Backtracking is a fairly common debugging technique. It involves backtracking the incorrect results through the logic of the program. In this technique, the source code is traced backwards until the error is discovered.
This technique becomes unfit when the number of source lines to be traced back increases and the number of potential backward paths becomes unmanageable.
(c) Debugging by Induction
The inductive approach starts with the formulation of a single working hypothesis based on the data, on the analysis of existing data, and on especially collected data to prove or disprove the working hypothesis.

The process is shown in Fig. 8.21: locate the pertinent data, organize the data, study their relationships, and devise a hypothesis; if a hypothesis can be devised and proved, fix the error; otherwise collect more data and repeat.
Fig. 8.21. The inductive debugging process
(d) Debugging by Deduction
The process of deduction begins by enumerating all causes, or hypotheses, which seem possible. Then, one by one, particular causes are ruled out until a single one remains for validation.

The process is shown in Fig. 8.22: enumerate the possible causes, use a process of elimination to refine the remaining hypotheses, and prove the remaining hypothesis; if it can be proved, fix the error; otherwise collect more data.
Fig. 8.22. The deductive debugging process.

(e) Cause elimination Debuggingg


In this technique, a list of causes that could have contributed to theerror symptomis prepared
and tests are carried out to eliminate each cause. Software fault tree analysis technique may be
used to identify the errors from the error symptom.

(f) Debugging by Testing
This type of debugging involves test cases. There are two types of test cases:
1. Cases that expose a previously undetected error.
2. Cases that provide information useful for locating an error.
Test cases for an undetected error tend to cover many conditions per test case, but test cases for locating a specific error cover a single condition for each test case.
(g) Debugging by Program Slicing
This debugging technique is similar to backtracking. In this technique, the overall search space is first divided into program slices, so that the search is confined to the relevant program slice only.
A program slice for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
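A small illustration (hypothetical code, not from the text): to debug the value of `total` at its final use, only the lines in its slice need to be examined; the lines computing `product` cannot influence it and can be ignored:

```python
# Hypothetical illustration of a program slice. Suppose a wrong value
# of 'total' is observed at the statement marked "line 8".

values = [1, 2, 3]          # line 1: in the slice of 'total'
total = 0                   # line 2: in the slice of 'total'
product = 1                 # line 3: NOT in the slice of 'total'
for v in values:            # line 4: in the slice of 'total'
    total += v              # line 5: in the slice of 'total'
    product *= v            # line 6: NOT in the slice of 'total'

print("product:", product)  # line 7: NOT in the slice of 'total'
print("total:", total)      # line 8: the statement of interest

# The slice for 'total' at line 8 is {1, 2, 4, 5}; when debugging
# 'total', lines 3, 6 and 7 can safely be ignored.
```

Even in this tiny program the slice cuts the search space roughly in half; in real programs the reduction is what makes the technique worthwhile.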
9.7. SOFTWARE MAINTENANCE PROCESS MODELS
Two broad categories of process models for software maintenance can be proposed. The first model is preferred for projects involving small reworks, where the code is changed directly and the changes are reflected in the relevant documents later. This maintenance process is presented graphically in Fig. 9.1. In this approach, the project starts by gathering the requirements for changes. The requirements are next analyzed to formulate the strategies to be adopted for code change. At this stage, the association of at least a few members of the original development team goes a long way in reducing the cycle time, especially for projects involving unstructured and inadequately documented code. The availability of a working old system to the maintenance engineers at the maintenance site greatly facilitates the task of the maintenance team, as they get a good insight into the working of the old system and can also compare the working of their modified system with the old system. Also, debugging of the reengineered system becomes easier, as the program traces of both systems can be compared to localize the bugs.
Fig. 9.1 (Maintenance Process Model 1) shows the steps of this model: Gather Change Requirements; Analyze Change Requirements; Devise Code Change Strategies; Apply Code Change Strategies to the Old Code; Integrate and Test; Update Documents.
The second process model for software maintenance is preferred for projects where the amount of rework required is significant. A reverse engineering cycle followed by a forward engineering cycle can represent this approach. Such an approach is also known as software reengineering. This process model is depicted in Fig. 9.2. The reverse engineering cycle is required for legacy
products. During the reverse engineering, the old code is analyzed (abstracted) to extract the module specifications. The module specifications are then analyzed to produce the design. The design is analyzed (abstracted) to produce the original requirements specification. The change requests are then applied to this requirements specification to arrive at the new requirements specification. At the design, module specification and coding stages, substantial reuse is made of the reverse engineered products. An important advantage of this approach is that it produces a more structured design than the original product had, produces good documentation, and very often results in increased efficiency. The efficiency improvements are brought about by a more efficient design. However, this approach is more costly than the first approach. An empirical study indicates that process model 1 is preferable when the amount of rework is no more than 15%. Besides the amount of rework, several other factors might affect the decision regarding using process model 1 over process model 2:
Reengineering might be preferable for products which exhibit a high failure rate.
Reengineering might also be preferable for legacy products having poor design and code structure.

Fig. 9.2 depicts Maintenance Process Model 2: a reverse engineering cycle abstracts the Code into a Module Specification, the Module Specification into a Design, and the Design into a Requirements Specification; the Change Requirements are applied to produce a new Requirements Specification; a forward engineering cycle then produces the new Design, Module Specification and Code.
Fig. 9.2. Maintenance Process Model 2
9.8. SOFTWARE MAINTENANCE COST
Software maintenance cost is derived from the changes made to software after it has been delivered to the end user. Software does not "wear out", but it will become less useful as it gets older, and there will always be issues within the software itself.
Software maintenance costs will typically form 75% of the total cost of ownership (TCO).
Software maintenance costs include:
Corrective maintenance: costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs).
Adaptive maintenance: costs due to modifying a software solution to allow it to remain effective in a changing business environment (generally 25% of software maintenance costs).
Perfective maintenance: costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs).
Enhancements: costs due to continuing innovations (generally 50% or more of software maintenance costs).
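Using the percentages above, a maintenance budget can be broken down directly; in this sketch the total cost of ownership is an assumed example figure, not a value from the text:

```python
# Hypothetical example: splitting a maintenance budget using the
# percentages quoted above. The TCO figure is an assumed value.

tco = 1_000_000                 # assumed total cost of ownership
maintenance = 0.75 * tco        # maintenance is ~75% of TCO

shares = {
    "corrective": 0.20,
    "adaptive": 0.25,
    "perfective": 0.05,
    "enhancements": 0.50,
}

for category, share in shares.items():
    print(f"{category:12s}: {share * maintenance:,.0f}")
# corrective: 150,000; adaptive: 187,500;
# perfective: 37,500; enhancements: 375,000
```

Note that the four quoted percentages sum to 100% of the maintenance budget, with enhancements alone consuming half of it.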
According to Sommerville, the cost of a project includes the following:
cost of effort required to develop the software
cost of target hardware to execute the system
cost of platform to develop the system
The cost of software may be computed as:
Software cost (SC) = Basic cost * RELY * TIME * STOR * TOOL * EXP * ACPM
where
RELY is the reliability multiplier,
TIME is the execution time constraint multiplier,
STOR is the storage constraint multiplier,
TOOL is the tool support availability multiplier,
EXP is the development team's experience multiplier, and
ACPM is the average cost for one person-month of effort.
