
UNIT 4

Contents:
Coding and Testing: Coding, Code Review, Software Documentation, Testing, Unit Testing, Black-Box Testing, White-Box Testing, Debugging, Program Analysis Tools, Integration Testing, Testing Object-Oriented Programs, System Testing, Some General Issues Associated with Testing.

Introduction:
● Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed.
● In the coding phase, every module specified in the design document is coded and unit tested.
During unit testing, each module is tested in isolation from other modules.
● After all the modules of a system have been coded and unit tested, the integration and system
testing phase is undertaken
● Integration and testing of modules is carried out according to an integration plan.
● The full product takes shape only after all the modules have been integrated together. System
testing is conducted on the full product. During system testing, the product is tested against
its requirements as recorded in the SRS document.
● Testing is an important phase in software development and requires the maximum effort among all the development phases.

 Coding:
● The input to the coding phase is the design document produced at the end of the design phase.
● The design document contains not only the high-level design of the system in the form of a
module structure (e.g., a structure chart), but also the detailed design.
● The detailed design is usually documented in the form of module specifications where the
data structures and algorithms for each module are specified.
● The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.
● Good software development organisations require their programmers to adhere to some well-defined and standard style of coding which is called their coding standard.
● Organisations formulate their own coding standards and require their developers to follow the
standards rigorously.
● The main advantages of adhering to a standard:
○ A coding standard gives a uniform appearance to the codes written by different
engineers.
○ It facilitates code understanding and code reuse.
○ It promotes good programming practices.

What is the difference between a coding guideline and a coding standard?

● It is mandatory for the programmers to follow the coding standards. Compliance of their code with the coding standards is verified during code inspection. Any code that does not conform to the coding standards is rejected during code review and the code is reworked by the concerned programmer.
● In contrast, coding guidelines provide some general suggestions regarding the coding style to
be followed but leave the actual implementation of these guidelines to the discretion of the
individual developers.

Usually code review is carried out to ensure that the coding standards are followed and also
to detect as many errors as possible before testing. Reviews are an efficient way of removing errors
from code.

Coding Standards and Guidelines:

Good software development organisations usually develop their own coding standards and guidelines.

Representative coding standards:

● Rules for limiting the use of globals:


These rules list what types of data can be declared global and what cannot, with a view to limit
the data that needs to be defined with global scope.
● Standard headers for different modules:
The header of different modules should have standard format and information for ease of
understanding and maintenance.
The following is an example of a header format that is used in some companies:
○ Name of the module.
○ Date on which the module was created.
○ Author’s name.
○ Modification history.
○ Synopsis of the module.
○ Different functions supported in the module, along with their input/output parameters.
○ Global variables accessed/modified by the module.
● Naming conventions for global variables, local variables, and constant identifiers:
A popular naming convention is that variables are named using mixed-case lettering. Example: GlobalData, localData, CONSTDATA.
● Conventions regarding error return values and exception handling mechanisms:
The way error conditions are reported by different functions in a program should be standard within an organisation.

Representative coding guidelines:

● Do not use a coding style that is too clever or too difficult to understand:
Code should be easy to understand. Many inexperienced engineers actually take pride in writing
cryptic and incomprehensible code.
● Avoid obscure side effects:
The side effects of a function call include modifications to the parameters passed by reference,
modification of global variables, and I/O operations. An obscure side effect is one that is not
obvious from a casual examination of the code. Obscure side effects make it difficult to
understand a piece of code.
● Do not use an identifier for multiple purposes:
Programmers often use the same identifier to denote several temporary entities. There are several things wrong with this approach, and hence it should be avoided.
Some of the problems caused by the use of a variable for multiple purposes are as follows:
Each variable should be given a descriptive name indicating its purpose. This is not possible if
an identifier is used for multiple purposes. Use of a variable for multiple purposes can lead to
confusion and make it difficult for somebody trying to read and understand the code. Use of
variables for multiple purposes usually makes future enhancements more difficult.
● Code should be well-documented:
As a rule of thumb, there should be at least one comment line on the average for every three
source lines of code.
● Length of any function should not exceed 10 source lines:
A lengthy function is usually very difficult to understand, as it probably has a large number of variables and carries out many different types of computations. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.
● Do not use GOTO statements:
Use of GOTO statements makes a program unstructured. This makes the program very difficult
to understand, debug, and maintain.

 Code Review:
● Testing is an effective defect removal mechanism. However, testing is applicable to only
executable code.
● Review is a very effective technique to remove defects from source code. In fact, review has
been acknowledged to be more cost-effective in removing defects as compared to testing.
● Code review for a module is undertaken after the module successfully compiles. That is, all the
syntax errors have been eliminated from the module.
● Code review does not target the detection of syntax errors in a program; it is designed to detect logical, algorithmic, and programming errors.
● Code review has been recognised as an extremely cost-effective strategy for eliminating coding
errors and for producing high quality code.
● Reviews directly detect errors, whereas testing only helps detect failures.
● Eliminating an error from code involves three main activities—testing, debugging, and then
correcting the errors. Testing is carried out to detect if the system fails to work satisfactorily for
certain types of inputs and under certain circumstances. Once a failure is detected, debugging is
carried out to locate the error that is causing the failure and to remove it. Of the three testing
activities, debugging is possibly the most laborious and time consuming activity.
● In code inspection, errors are directly detected, thereby saving the significant effort that would
have been required to locate the error. Normally, the following two types of reviews are carried
out on the code:
○ Code Inspection
○ Code Walkthrough

Code inspection.

● During code inspection, the code is examined for the presence of some common programming
errors.
● The principal aim of code inspection is to check for the presence of some common types of
errors that usually creep into code due to programmer mistakes and oversights and to check
whether coding standards have been adhered to.
● The inspection process has several beneficial side effects, other than finding errors. The
programmer usually receives feedback on programming style, choice of algorithm, and
programming techniques. The other participants gain by being exposed to another
programmer’s errors.

● Good software development companies collect statistics regarding different types of errors that
are commonly committed by their engineers and identify the types of errors most frequently
committed.
● Such a list of commonly committed errors can be used as a checklist during code inspection to
look out for possible errors.
● Following is a list of some classical programming errors which can be checked during code
inspection:
○ Use of uninitialised variables.
○ Jumps into loops.
○ Non-terminating loops.
○ Incompatible assignments.
○ Array indices out of bounds.
○ Improper storage allocation and deallocation.
○ Mismatch between actual and formal parameters in procedure calls.
○ Use of incorrect logical operators or incorrect precedence among operators.
○ Improper modification of loop variables.
○ Comparison of equality of floating point values.
○ Dangling reference caused when the referenced memory has not been allocated.

Code walkthrough.

● Code walkthrough is an informal code analysis technique.


● In this technique, a module is taken up for review after the module has been coded, successfully
compiled, and all syntax errors have been eliminated.
● A few members of the development team are given the code a couple of days before the
walkthrough meeting.
● Each member selects some test cases and simulates execution of the code by hand.
● The main objective of code walkthrough is to discover the algorithmic and logical errors in the
code.
● Even though code walkthrough is an informal analysis technique, several guidelines have
evolved over the years. Guidelines are based on personal experience, common sense, and
several other subjective factors.
○ The team performing code walkthrough should be neither too big nor too small. Ideally, it should consist of between three and seven members.
○ Discussions should focus on discovery of errors and avoid deliberations on how to fix the
discovered errors.
○ In order to foster cooperation and to avoid the feeling among the engineers that they are
being watched and evaluated in the code walkthrough meetings, managers should not attend
the walkthrough meetings.
 Software Documentation:
When a software is developed, in addition to the executable files and the source code, several kinds
of documents such as users’ manual, software requirements specification (SRS) document, design
document, test document, installation manual, etc., are developed as part of the software engineering
process.

All these documents are considered a vital part of any good software development practice. Good
documents are helpful in the following ways:
● Good documents help enhance understandability of code.
● Documents help the users to understand and effectively use the system.
● Good documents help to effectively tackle the manpower turnover problem.
● Production of good documents helps the manager to effectively track the progress of the project.
Different types of software documents can broadly be classified into the following:
Internal documentation:
● These are provided in the source code itself. Internal documentation can be provided in the code
in several forms. The important types of internal documentation are the following:
○ Comments embedded in the source code.
○ Use of meaningful variable names.
○ Module and function headers.
○ Code indentation.
○ Code structuring (i.e., code decomposed into modules and functions).
○ Use of enumerated types.
○ Use of constant identifiers.
○ Use of user-defined data types.
● Even when a piece of code is carefully commented, meaningful variable names have been found to be the most helpful in understanding the code.
External documentation:
● These are the supporting documents such as SRS document, installation document, user
manual, design document, and test document.
● A systematic software development style ensures that all these documents are of good quality
and are produced in an orderly fashion.
● An important feature that is required of any good external documentation is consistency with
the code.
● If the different documents are not consistent, a lot of confusion is created for somebody trying
to understand the software.
● Every change made to the code should be reflected in the relevant external documents.
● Another important feature required for external documents is proper understandability by the category of users for whom the document is designed.
● Gunning’s Fog Index:
○ Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has been
designed to measure the readability of a document.
○ The computed metric value (fog index) of a document indicates the number of years of
formal education that a person should have, in order to be able to comfortably understand
that document.
○ The Gunning’s fog index of a document D can be computed as follows:
Fog index = 0.4 × (average number of words per sentence + percentage of words having three or more syllables)
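As a rough illustration, the following is a minimal sketch in Python of how the fog index of a piece of text could be computed; the vowel-group syllable counter is a crude assumption of this sketch, not part of the definition (real readability tools use dictionaries):

import re

def count_syllables(word):
    # Crude heuristic: approximate syllables as runs of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    percent_complex = 100.0 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + percent_complex)

print(round(fog_index("The cat sat on the mat. It was happy."), 2))  # 1.8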

 Testing:
● The aim of program testing is to help identify all the defects in a program.
● However, in practice, even after satisfactory completion of the testing phase, it is not possible
to guarantee that a program is error free.
● This is because the input data domain of most programs is very large, and it is not practical to test the program exhaustively with respect to each value that the input can assume.
● We must remember that careful testing can expose a large percentage of the defects existing in
a program.

Testing terminology:

As is true for any specialised domain, the area of software testing has come to be associated with its own
set of terminologies. In the following, we discuss a few important terminologies that have been
standardised by the IEEE Standard Glossary of Software Engineering Terminology [IEEE90]:
● Mistake:
A mistake is essentially any programmer action that later shows up as an incorrect result during
program execution. A programmer may commit a mistake in almost any development activity.
● Error:
An error is the result of a mistake committed by a developer in any of the development activities. An extremely large variety of errors can exist in a program. The terms error, fault, bug, and defect are considered to be synonyms.
● Failure:
A failure of a program essentially denotes an incorrect behaviour exhibited by the program during its
execution. An incorrect behaviour is observed either as an incorrect result produced or as an
inappropriate activity carried out by the program.
● Test-case:
A test case is a triplet [I , S, R], where I is the data input to the program under test, S is the state of
the program at which the data is to be input, and R is the result expected to be produced by the
program. The state of a program is also called its execution mode.
○ A positive test case is designed to test whether the software correctly performs a required
functionality
○ A negative test case is designed to test whether the software carries out something that is not
required of the system.
● Test scenario:
A test scenario is an abstract test case in the sense that it only identifies the aspects of the program that
are to be tested without identifying the input, state, or output. A test case can be said to be an
implementation of a test scenario.
● Test script:
A test script is an encoding of a test case as a short program. Test scripts are developed for automated execution of the test cases.
● Test suite:
A test suite is the set of all tests that have been designed by a tester to test a given program.
● Testability:
Testability of a requirement denotes the extent to which it is possible to determine whether an
implementation of the requirement conforms to it in both functionality and performance. In other
words, the testability of a requirement is the degree to which an implementation of it can be
adequately tested to determine its conformance to the requirement.
● Failure mode:
A failure mode of a software denotes an observable way in which it can fail. In other words, all failures
that have similar observable symptoms, constitute a failure mode.
● Equivalent faults:
Equivalent faults denote two or more bugs that result in the system failing in the same failure
mode.
Validation vs Verification:
● The objectives of both verification and validation techniques are very similar, since both these techniques are designed to help remove errors from a software.
● However, the underlying principles of these two bug detection techniques and their applicability are very different.
● Verification:
○ Verification is the process of determining whether the output of one phase of software development conforms to that of its previous phase.
○ Verification is to check if the work products produced after a phase conform to that which was input to the phase.
○ Techniques used for verification include review, simulation, formal verification, and testing.
● Validation:
○ Validation is the process of determining whether a fully developed software conforms to its requirements specification.
○ Validation is applied to the fully developed and integrated software to check if it satisfies
the customer’s requirements.
○ System testing can be considered as a validation step where it is determined whether the fully
developed code is as per its requirements specification.

Error detection techniques = Verification techniques + Validation techniques


How to test a Program:
● Testing a program involves executing the program with a set of test inputs and observing if the
program behaves as expected.
● If the program fails to behave as expected, then the input data and the conditions under which it
fails are noted for later debugging and error correction.
● Unless the conditions under which a software fails are noted down, it becomes difficult for the
developers to reproduce a failure observed by the testers.

Testing Activities:

Testing involves performing the following main activities:


● Test suite design:
The set of test cases using which a program is to be tested is designed possibly using several test
case design techniques.
● Running test cases and checking the results to detect failures:
Each test case is run and the results are compared with the expected results. A mismatch between
the actual result and expected results indicates a failure. The test cases for which the system fails are
noted down for later debugging.
● Locate error:
In this activity, the failure symptoms are analysed to locate the errors. For each failure observed during
the previous activity, the statements that are in error are identified.
● Error correction:
After the error is located during debugging, the code is appropriately changed to correct the error.
The testing activities have been shown schematically in the figure below. As can be seen, the test cases are first designed, and then run to detect failures. The bugs causing the failures are identified through debugging, and the identified errors are corrected. Of all the above-mentioned testing activities, debugging often turns out to be the most time-consuming activity.

Figure: Testing process

Self Study:
Why design test cases?
Testing in the small vs. testing in the large?
 Unit Testing
● Unit testing is undertaken after a module has been coded and reviewed.
● This activity is typically undertaken by the coder of the module himself in the coding phase.
● Before carrying out unit testing, the unit test cases have to be designed and the test
environment for the unit under test has to be developed.
● In order to test a single module, we need a complete environment to provide all relevant code
that is necessary for execution of the module.
● That is, besides the module under test, the following are needed to test the module:
○ The procedures belonging to other modules that the module under test calls.
○ Non-local data structures that the module accesses.
○ A procedure to call the functions of the module under test with appropriate parameters.
● Modules required to provide the necessary environment (which either call or are called by the
module under test) are usually not available until they too have been unit tested.
● In this context, stubs and drivers are designed to provide the complete environment for a
module so that testing can be carried out.
Driver and stub modules:

Stub: A stub procedure is a dummy procedure that has the same I/O parameters as the function called
by the unit under test but has a highly simplified behaviour.

Driver: A driver module should contain the non-local data structures accessed by the module under
test. Additionally, it should also have the code to call the different functions of the unit under test
with appropriate parameter values for testing.

● Unit testing is referred to as testing in the small, whereas integration and system testing are
referred to as testing in the large.
 Black-Box testing:
● In black-box testing, test cases are designed from an examination of the input/output values
only and no knowledge of design or code is required.

● The following are the two main approaches available to design black box test cases:
○ Equivalence class partitioning
○ Boundary value analysis

Equivalence class partitioning:

● In the equivalence class partitioning approach, the domain of input values to the program under test
is partitioned into a set of equivalence classes.
● The partitioning is done such that for every input data belonging to the same equivalence class, the
program behaves similarly.
● The main idea behind defining equivalence classes of input data is that testing the code with any one
value belonging to an equivalence class is as good as testing the code with any other value belonging
to the same equivalence class.
● Equivalence classes for a unit under test can be designed by examining the input data and output
data.
The technique involves two steps:

Identification of equivalence class –


Partition any input domain into a minimum of two sets: valid values and invalid values. For example, if the
valid range is 0 to 100 then select one valid input like 49 and one invalid like 104.
Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.
For a program that calculates the square root of a number, the equivalence classes would be:
(a) Valid inputs:
 A whole number which is a perfect square; the output will be an integer.
 A whole number which is not a perfect square; the output will be a decimal number.
 Positive decimals.
(b) Invalid inputs:
 Negative numbers (integer or decimal).
 Characters other than numbers, like “a”, “!”, “;”, etc.
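A minimal sketch of this example in Python (the function and its error-handling convention are assumptions made for illustration), with one representative test value per equivalence class:

import math

def safe_sqrt(x):
    # Unit under test: returns the square root, or None for invalid input.
    if not isinstance(x, (int, float)) or x < 0:
        return None
    return math.sqrt(x)

print(safe_sqrt(49))    # perfect square        -> 7.0 (integer-valued)
print(safe_sqrt(50))    # non-perfect square    -> decimal number
print(safe_sqrt(6.25))  # positive decimal      -> 2.5
print(safe_sqrt(-4))    # negative number       -> None (invalid class)
print(safe_sqrt("a"))   # non-numeric character -> None (invalid class)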
Boundary Value Analysis:
● A type of programming error that is frequently committed by programmers is missing out on the
special consideration that should be given to the values at the boundaries of different equivalence
classes of inputs.
● Boundary value analysis-based test suite design involves designing test cases using the values at the
boundaries of different equivalence classes.
● To design boundary value test cases, it is required to examine the equivalence classes to check if any
of the equivalence classes contains a range of values. For those equivalence classes that are not a
range of values no boundary value test cases can be defined.
● For an equivalence class that is a range of values, the boundary values need to be included in the test
suite. For example, if an equivalence class contains the integers in the range 1 to 10, then the boundary
value test suite is {0,1,10,11}.
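A one-line sketch of this rule in Python, reproducing the {0, 1, 10, 11} suite above:

def boundary_values(low, high):
    # For a range [low, high], take the boundaries and the values
    # just outside them.
    return sorted({low - 1, low, high, high + 1})

print(boundary_values(1, 10))  # [0, 1, 10, 11]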

Cause effect Graphing –


This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs. The following steps are followed:
 Identify inputs (causes) and outputs (effect).
 Develop a cause-effect graph.
 Transform the graph into a decision table.
 Convert decision table rules to test cases.
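A small sketch of the last two steps (the causes, effects, and login function here are hypothetical examples): the decision table is encoded directly, and each rule becomes a test case:

# Causes: c1 = valid user id, c2 = valid password.
# Effect: login succeeds only when both causes hold.
decision_table = [
    # (c1,    c2,    expected effect)
    (True,  True,  "login"),
    (True,  False, "error"),
    (False, True,  "error"),
    (False, False, "error"),
]

def login(valid_id, valid_password):
    return "login" if (valid_id and valid_password) else "error"

for c1, c2, expected in decision_table:
    assert login(c1, c2) == expected
print("all decision-table rules pass")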

Summary of the Black-box Test Suite Design Approach:


We now summarise the important steps in the black-box test suite design approach:
● Examine the input and output values of the program.
● Identify the equivalence classes.
● Design equivalence class test cases by picking one representative value from each equivalence class.
● Design the boundary value test cases as follows. Examine if any equivalence class is a range of
values. Include the values at the boundaries of such equivalence classes in the test suite.
 White-Box Testing:
● White-box testing is an important type of unit testing. A large number of white-box testing
strategies exist.
● Each testing strategy essentially designs test cases based on analysis of some aspect of source
code and is based on some heuristic.
● White-box testing techniques analyse the internal structures: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black-box testing. It is also called glass-box testing, clear-box testing, structural testing, transparent testing, or open-box testing.
● White-box testing thus involves testing the internal structure and workings of a software application: the tester has access to the source code and uses this knowledge to design test cases that can verify the correctness of the software at the code level, i.e., its internal logic, flow, and structure.

Working process of white box testing:

 Input: Requirements, functional specifications, design documents, source code.
 Processing: Performing risk analysis to guide the entire process.
 Proper test planning: Designing test cases so as to cover the entire code; execute and rinse-repeat until error-free software is reached. The results are also communicated.
 Output: Preparing the final report of the entire testing process.

Testing Techniques for White Box Testing:


A white-box testing strategy can be either:
 a coverage-based testing strategy, or
 a fault-based testing strategy.
Coverage-Based testing strategies:

1) Statement Coverage:

In this technique, the aim is to traverse all statements at least once; hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.
● The principal idea governing the statement coverage strategy is that unless a statement is
executed, there is no way to determine whether an error exists in that statement.
● A weakness of the statement-coverage strategy is that executing a statement once and observing that it behaves properly for one input value is no guarantee that it will behave correctly for all input values.
● Nevertheless, statement coverage is a very intuitive and appealing testing technique.
Statement coverage example:
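A minimal sketch (assuming Euclid's GCD algorithm, a common textbook illustration of statement coverage): the test suite {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y = 4)} executes every statement of the function at least once:

def compute_gcd(x, y):
    # Euclid's algorithm, used here purely as a coverage illustration.
    while x != y:      # S1
        if x > y:      # S2
            x = x - y  # S3
        else:
            y = y - x  # S4
    return x           # S5

for x, y in [(3, 3), (4, 3), (3, 4)]:
    print(x, y, "->", compute_gcd(x, y))  # all of S1..S5 get executed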

2) Branch Coverage:
In this technique, test cases are designed so that each branch from all decision points is
traversed at least once. In a flowchart, all edges must be traversed at least once.
● A test suite satisfies branch coverage if it makes each branch condition in the program assume true and false values in turn.
● For branch coverage, each branch in the CFG representation of the program must be taken at least once when the test suite is executed.
● Branch testing is also known as edge testing, since in this testing scheme each edge of a program's control flow graph is traversed at least once.

Enough test cases are required such that all branches of all decisions are covered, i.e., all edges of the program's flowchart (CFG) are covered.
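For instance, in the GCD sketch given under statement coverage above, the same suite {(3, 3), (4, 3), (3, 4)} also achieves branch coverage: (3, 3) makes the while condition false immediately, (4, 3) makes it true and takes the true branch of the if, and (3, 4) takes the else branch.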

3) Condition Coverage:
In this technique, all individual conditions must be covered as shown in the following example:
x, y = map(int, input().split())
if x == 0 or y == 0:
    print('0')
# TC1: x = 0, y = 55
# TC2: x = 5, y = 0

4) Multiple Condition Coverage:


a. In multiple condition (MC) coverage-based testing, test cases are designed to make each component of a composite conditional expression assume both true and false values.
b. For example, consider the composite conditional expression ((c1 .and.c2 ).or.c3). A
test suite would achieve MC coverage, if all the component conditions c1, c2 and c3
are each made to assume both true and false values.
c. Branch testing can be considered to be a simplistic condition testing strategy where
only the compound conditions appearing in the different branch statements are made to
assume the true and false values.
d. It is easy to prove that condition testing is a stronger testing strategy than branch
testing.
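A minimal sketch for the composite condition ((c1 and c2) or c3): the two test cases below make each component condition assume both true and false values, which is what MC coverage demands:

mc_suite = [
    # (c1,   c2,    c3)
    (True,  True,  False),  # each component condition true once...
    (False, False, True),   # ...and false once
]
for c1, c2, c3 in mc_suite:
    print(c1, c2, c3, "->", (c1 and c2) or c3)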
5) Path Coverage:
a. A test suite achieves path coverage if it executes each linearly independent path (or basis path) at least once.
b. A linearly independent path can be defined in terms of the control flow graph (CFG) of a
program.
○ A path through a program is any node and edge sequence from the start node to a
terminal node of the control flow graph of a program.
○ Please note that a program can have more than one terminal node when it contains
multiple exit or return types of statements.
○ Writing test cases to cover all paths of a typical program is impractical, since there can be an infinite number of paths through a program in the presence of loops.
○ Path coverage testing does not try to cover all paths, but only a subset of paths called
linearly independent paths (or basis paths).
○ If a set of paths is linearly independent of each other, then no path in the set can be
obtained through any linear operations (i.e., additions or subtractions) on the other paths
in the set.
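As a point of reference (McCabe's classical result, not stated in the notes above): the number of linearly independent paths of a program equals its cyclomatic complexity, V(G) = E - N + 2, where E and N are the number of edges and nodes of the CFG. For example, a CFG with 7 edges and 6 nodes has V(G) = 7 - 6 + 2 = 3, so a path coverage test suite needs to exercise 3 linearly independent paths.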

Fault-based Testing strategies:


Mutation Testing:
● Mutation testing is a fault-based testing technique in the sense that mutation test cases are
designed to help detect specific types of faults in a program.
● In mutation testing, a program is first tested by using an initial test suite designed by using
various white box testing strategies.
● After the initial testing is complete, mutation testing can be taken up.
● The idea behind mutation testing is to make a few arbitrary changes to a program at a time.
● Each time the program is changed, it is called a mutated program and the change effected is
called a mutant.
● A mutation operator makes specific changes to a program.
● A mutant may or may not cause an error in the program.
● If a mutant does not introduce any error in the program, then the original program and the
mutated program are called equivalent programs.
● A mutated program is tested against the original test suite of the program.
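An illustrative sketch (the function and the mutation operator chosen here are hypothetical examples): a mutant is produced by replacing '+' with '-', and the test suite kills it if some test case distinguishes the mutant from the original program:

def add(a, b):
    # Original unit, already tested with an initial test suite.
    return a + b

def add_mutant(a, b):
    # Mutant: a mutation operator replaced '+' with '-'.
    return a - b

# (0, 0) alone cannot kill this mutant (both programs return 0);
# (2, 3) kills it, demonstrating the fault-detection ability of the suite.
for a, b in [(0, 0), (2, 3)]:
    status = "killed" if add(a, b) != add_mutant(a, b) else "not killed"
    print((a, b), status)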
 Debugging:
After a failure has been detected, it is necessary to first identify the program statement(s) that are in error and are responsible for the failure; the error can then be fixed.

Debugging Approaches:
The following are some of the approaches that are popularly adopted by the programmers for
debugging:
1. Brute force method:
● This is the most common method of debugging but is the least efficient method.
● In this approach, print statements are inserted throughout the program to print the
intermediate values with the hope that some of the printed values will help to identify the
statement in error.
● This approach becomes more systematic with the use of a symbolic debugger, because values of different variables can be easily checked, and breakpoints and watchpoints can be easily set to observe the values of variables.
2. Backtracking:
● This is also a fairly common approach. In this approach, starting from the statement at
which an error symptom has been observed, the source code is traced backwards until the
error is discovered.
● Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths increases and may become unmanageably large for complex programs, limiting the use of this approach.
3. Cause elimination method:
● In this approach, once a failure is observed, the symptoms of the failure are noted.
● Based on the failure symptoms, a list of causes that could possibly have contributed to the symptom is developed, and tests are conducted to eliminate each.
● A related technique of identification of the error from the error symptom is the software
fault tree analysis.
4. Program slicing:
● This technique is similar to backtracking. In the backtracking approach, one often has to
examine a large number of statements.
● However, the search space is reduced by defining slices.
● A slice of a program for a particular variable and at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
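A small worked sketch (a hypothetical five-statement program): the slice prunes the statements that cannot influence the variable being debugged:

a = 3         # S1
b = 5         # S2
s = a + b     # S3
p = a * b     # S4
print(p)      # S5
# The slice for variable p at statement S5 is {S1, S2, S4}: statement S3
# cannot influence the value of p, so it is excluded from the search space.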

Debugging guidelines:
Debugging is often carried out by programmers based on their ingenuity and experience. The
following are some general guidelines for effective debugging:
● Many times debugging requires a thorough understanding of the program design. Trying to debug
based on a partial understanding of the program design may require an inordinate amount of
effort to be put into debugging even for simple problems.
● Debugging may sometimes even require a full redesign of the system. In such cases, a common mistake that novice programmers often make is attempting to fix not the error but its symptoms.
● One must beware of the possibility that an error correction may introduce new errors. Therefore, after every round of error-fixing, regression testing must be carried out.
 Program Analysis Tools
A program analysis tool usually is an automated tool that takes either the source code or the executable
code of a program as input and produces reports regarding several important characteristics of the
program, such as its size, complexity, adequacy of commenting, adherence to programming standards,
adequacy of testing, etc.
We can classify various program analysis tools into the following two broad categories:
 Static analysis tools
 Dynamic analysis tools

Static Analysis Tools

Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyse the source code to compute certain metrics
characterising the source code (such as size, cyclomatic complexity, etc.) and also report certain
analytical conclusions. These also check the conformance of the code with the prescribed coding
standards.
In this context, a static analysis tool typically displays the following analysis results:
 The extent to which the coding standards have been adhered to.
 Whether certain programming errors, such as uninitialised variables, mismatch between actual and formal parameters, variables that are declared but never used, etc., exist; a list of all such errors is displayed.
 Code review techniques such as code walkthrough and code inspection, discussed earlier, can be considered static analysis methods, since they aim to detect errors based on analysing the source code.
 A major practical limitation of the static analysis tools lies in their inability to analyse run-time
information such as dynamic memory references using pointer variables and pointer arithmetic,
etc.
 Static analysis tools often summarise the results of analysis of every function in a polar chart
known as Kiviat Chart. A Kiviat Chart typically shows the analysed values for cyclomatic
complexity, number of source lines, percentage of comment lines, Halstead’s metrics, etc.

Dynamic Analysis Tools

Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run time behaviour of a program.
 These tools usually record and analyse the actual behaviour of a program while it is being
executed. A dynamic program analysis tool (also called a dynamic analyser) usually collects
execution trace information by instrumenting the code.
 Code instrumentation is usually achieved by inserting additional statements to print the values of certain variables into a file, so as to collect the execution trace of the program. The instrumented code, when executed, records the behaviour of the software for different test cases.
 An important characteristic of a test suite that is computed by a dynamic analysis tool is the
extent of coverage achieved by the test suite.
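A minimal sketch of code instrumentation (hand-written here for illustration; real dynamic analysers insert such statements automatically): extra statements write the execution trace to a file, from which coverage can later be computed:

def compute_gcd_instrumented(x, y):
    with open("trace.log", "a") as trace:
        trace.write(f"enter: x={x} y={y}\n")     # instrumentation
        while x != y:
            trace.write(f"loop: x={x} y={y}\n")  # instrumentation
            if x > y:
                x = x - y
            else:
                y = y - x
        trace.write(f"return: {x}\n")            # instrumentation
    return x

compute_gcd_instrumented(4, 3)  # the trace records the run-time behaviour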
 Integration Testing:
● Integration testing is carried out after all (or at least some of) the modules have been unit tested.
● Successful completion of unit testing, to a large extent, ensures that the unit (or module) as a
whole works satisfactorily.
● In this context, the objective of integration testing is to detect errors at the module interfaces (call parameters), i.e., to check whether the different modules of a program interface with each other properly.
● During integration testing, different modules of a system are integrated in a planned manner using an integration plan.
● The integration plan specifies the steps and the order in which modules are combined to
realise the full system.
● After each integration step, the partially integrated system is tested.
● By examining the structure chart, the integration plan can be developed.

● Any one (or a mixture) of the following approaches can be used to develop the test plan:

1. Big-bang approach to integration testing:


● Big-bang testing is the most obvious approach to integration testing. In this
approach, all the modules making up a system are integrated in a single step.
● In simple words, all the unit tested modules of the system are simply linked
together and tested.
● However, this technique can meaningfully be used only for very small systems.
● The main problem with this approach is that once a failure has been detected during
integration testing, it is very difficult to localise the error as the error may potentially
lie in any of the modules.
2. Bottom-up approach to integration testing:
● Large software products are often made up of several subsystems.
● A subsystem might consist of many modules which communicate among each other
through well-defined interfaces.
● In bottom-up integration testing, first the modules for each subsystem are integrated.
● Thus, the subsystems can be integrated separately and independently.
● The primary purpose of carrying out the integration testing of a subsystem is to test
whether the interfaces among various modules making up the subsystem work
satisfactorily.
● In a pure bottom-up testing no stubs are required, and only test-drivers are required.
3. Top-down approach to integration testing:
● Top-down integration testing starts with the root module in the structure chart and one or
two subordinate modules of the root module.
● After the top-level ‘skeleton’ has been tested, the modules that are at the immediately
lower layer of the ‘skeleton’ are combined with it and tested.
● Top-down integration testing approach requires the use of program stubs to simulate
the effect of lower-level routines that are called by the routines under test.
● A pure top-down integration does not require any driver routines.
4. Mixed approach to integration testing:
● The mixed (also called sandwiched) integration testing follows a combination of top-down and bottom-up testing approaches.
● In a top-down approach, testing can start only after the top-level modules have been coded and unit tested.
● Similarly, bottom-up testing can start only after the bottom level modules are
ready.
● The mixed approach overcomes this shortcoming of the top-down and bottom-up
approaches.
● In the mixed testing approach, testing can start as and when modules become
available after unit testing.
● Therefore, this is one of the most commonly used integration testing approaches.
● In this approach, both stubs and drivers are required to be designed.
5. Incremental Integration Testing
 Big-bang integration testing is carried out in a single step of integration. In contrast, in the
other strategies, integration is carried out over several steps.
 In these latter strategies, modules can be integrated either in a phased or in an incremental manner. A comparison of these two strategies is as follows:
 In incremental integration testing, only one new module is added to the partially integrated
system each time.
 In phased integration, a group of related modules are added to the partial system each time.
 Testing Object-Oriented Programs

During the initial years of object-oriented programming, it was believed that object-orientation would, to a great extent, reduce the cost and effort incurred on testing. This thinking was based on the observation that object-orientation incorporates several good programming features such as encapsulation, abstraction, reuse through inheritance, and polymorphism, thereby minimising the chances of errors in the code. However, it was later realised that testing object-oriented programs is in several respects harder than testing procedural programs.
 The main reason behind this situation is that the various object-oriented features introduce additional complications and scope for new types of bugs that are not present in procedural programs.
 Therefore, additional test cases need to be designed to detect these.
 We examine these issues as well as some other basic issues in testing object-oriented programs
in the following subsections.

1. What is a Suitable Unit for Testing Object-oriented Programs?


 For procedural programs, we have seen that procedures are the basic units of testing: first all the procedures are unit tested, and then the tested procedures are integrated together and tested.
 Since methods in an object-oriented program are analogous to procedures in a procedural program, can we then consider the methods of object-oriented programs as the basic unit of testing?

2. Do Various Object-orientation Features Make Testing Easy?

In this section, we discuss the implications of different object-orientation features in testing.


 Encapsulation:
Encapsulation prevents the tester from accessing the data internal to an object. Of course, one can require classes to support state reporting methods that print out all the data internal to an object. Thus, though the encapsulation feature makes testing difficult, the difficulty can be overcome to some extent through the use of appropriate state reporting methods.
 Inheritance:
The inheritance feature helps in code reuse and was expected to simplify testing. It was expected that if a class is tested thoroughly, then the classes derived from it would need only incremental testing of the added features. However, this turns out not to be so: as noted under class diagram-based testing below, even the methods inherited from the base class need to be retested in the context of the derived class.
 Dynamic binding:
Dynamic binding was introduced to make the code compact, elegant, and easily extensible.
However, as far as testing is concerned all possible bindings of a method call have to be
identified and tested. This is not easy since the bindings take place at run-time.
 Object states:
In contrast to the procedures in a procedural program, objects store data permanently. As a
result, objects do have significant states. The behaviour of an object is usually different in
different states. That is, some methods may not be active in some of its states.

3. Why are Traditional Techniques Considered Not Satisfactory for Testing Object-oriented
Programs?
 We have already seen that in traditional procedural programs, procedures are the basic unit of
testing. In contrast, objects are the basic unit of testing for object-oriented programs.
 Besides this, there are many other significant differences as well between testing procedural
and object-oriented programs.

4. Grey-Box Testing of Object-oriented Programs


As we have already mentioned, model-based (grey-box) testing is important for object-oriented programs, as such test cases help detect bugs that are specific to the object-orientation constructs.
The following are some important types of grey-box testing that can be carried on based on UML
models:
State-model-based testing:

State coverage: Each method of an object is tested at each state of the object.
State transition coverage: It is tested whether all transitions depicted in the state model work
satisfactorily.
State transition path coverage: All transition paths in the state model are tested.
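A minimal sketch of state transition coverage (the two-state Door class is a hypothetical example): the test exercises every transition depicted in the state model:

class Door:
    def __init__(self):
        self.state = "closed"

    def open(self):
        assert self.state == "closed"
        self.state = "open"

    def close(self):
        assert self.state == "open"
        self.state = "closed"

# State transition coverage: exercise each transition of the state model.
d = Door()
d.open()   # transition: closed -> open
d.close()  # transition: open -> closed
print("all transitions of the state model exercised")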

Use case-based testing:


Scenario coverage: Each use case typically consists of a mainline scenario and several alternate
scenarios. For each use case, the mainline and all alternate sequences are tested to check if any
errors show up.

Class diagram-based testing :


Testing derived classes: All derived classes of the base class have to be instantiated and tested. In addition to testing the new methods defined in the derived class, the inherited methods must be retested.
Association testing: All association relations are tested.
Aggregation testing: Various aggregate objects are created and tested.

Sequence diagram-based testing:


Method coverage: All methods depicted in the sequence diagrams are covered.
Message path coverage: All message paths that can be constructed from the sequence diagrams
are covered.
 System Testing
● After all the units of a program have been integrated together and tested, system testing is taken
up.
● System tests are designed to validate a fully developed system to assure that it meets its
requirements.
● The test cases are therefore designed solely based on the SRS document.
● There are essentially three main kinds of system testing depending on who carries out
testing:

1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team
within the developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a select group of
friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to determine whether to accept the delivery of the system.

● In each of the above types of system tests, the test cases can be the same, but the
difference is with respect to who designs test cases and carries out testing.
● The system test cases can be classified into functionality and performance test cases.
● Before a fully integrated system is accepted for system testing, smoke testing is performed. In the following subsections we discuss only smoke and performance testing.

I. Smoke Testing
● Smoke testing is carried out before initiating system testing, to check whether system testing would be meaningful or whether many parts of the software would fail.
● The idea behind smoke testing is that if the integrated program cannot pass even the basic tests, it is not ready for vigorous testing.
● For smoke testing, a few test cases are designed to check whether the basic functionalities
are working.

II. Performance Testing:


● Performance testing is an important type of system testing.
● Performance testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document.
● There are several types of performance testing corresponding to various types of non-
functional requirements.
● All performance tests can be considered as black-box tests.

1. Stress testing:
● Stress testing is also known as endurance testing.
● Stress testing evaluates system performance when it is stressed for short periods of time.
● Stress tests are black-box tests which are designed to impose a range of abnormal and
even illegal input conditions so as to stress the capabilities of the software.
● Input data volume, input data rate, processing time, utilisation of memory, etc., are tested
beyond the designed capacity.
● Stress testing is especially important for systems that under normal circumstances operate
below their maximum capacity but may be severely stressed at some peak demand hours.

2. Volume testing:
● Volume testing checks whether the data structures (buffers, arrays, queues, stacks, etc.)
have been designed to successfully handle extraordinary situations.
3. Configuration testing:
● Configuration testing is used to test system behaviour in various hardware and software
configurations specified in the requirements.
● Sometimes systems are built to work in different configurations for different users.
4. Compatibility testing:
● This type of testing is required when the system interfaces with external systems (e.g.,
databases, servers, etc.).
● Compatibility testing aims to check whether the interfaces with the external systems are performing as required.
5. Regression testing:
● This type of testing is required when a software is maintained to fix some bugs or to enhance functionality or performance.
6. Recovery testing:
● Recovery testing tests the response of the system to the presence of faults, or the loss of power, devices, services, data, etc.
● The system is subjected to the loss of the mentioned resources (as discussed in the SRS
document) and it is checked if the system recovers satisfactorily.
7. Maintenance testing:
● This addresses testing the diagnostic programs and other procedures that are required to help maintenance of the system.
● It is verified that the artifacts exist and they perform properly.
8. Security testing:
● Security testing is essential for software that handles or processes confidential data that is to be guarded against pilfering.
● It needs to be tested whether the system is foolproof against security attacks such as intrusion by hackers.

III. Error Seeding :


Sometimes customers specify the maximum number of residual errors that can be present in the
delivered software. These requirements are often expressed in terms of maximum number of allowable
errors per line of source code. The error seeding technique can be used to estimate the number of
residual errors in a software.
Error seeding, as the name implies, involves seeding the code with some known errors. In other words, some artificial errors are introduced (seeded) into the program. The number of these seeded errors that are detected in the course of standard testing is determined.
These values in conjunction with the number of unseeded errors detected during testing can be used to
predict the following aspects of a program:
 The number of errors remaining in the product.
 The effectiveness of the testing strategy.
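A standard estimate (the usual error-seeding arithmetic, not spelled out in the notes above) works as follows: if S errors are seeded, and testing detects s of the seeded errors and n unseeded (real) errors, then, assuming seeded and real errors are equally easy to detect, the total number of real errors is estimated as N = n × S / s, and the number of residual errors as N - n. A small arithmetic sketch:

def estimate_residual_errors(seeded, seeded_found, real_found):
    # Assumes seeded and real errors are equally detectable.
    total_real = seeded * real_found / seeded_found
    return total_real - real_found

# 100 errors seeded; testing detects 80 of them plus 40 real errors:
# estimated total real errors = 100 * 40 / 80 = 50, so about 10 remain.
print(estimate_residual_errors(100, 80, 40))  # 10.0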
 Some General Issues Associated With Testing:
In this section, we shall discuss two general issues associated with testing.
These are: how to document the results of testing, and how to perform regression testing.
Test documentation :

A piece of documentation that is produced towards the end of testing is the test summary report. This
report normally covers each subsystem and represents a summary of tests which have been applied to
the subsystem and their outcome. It normally specifies the following:
 The total number of tests that were applied to the subsystem.
 How many of the tests were successful.
 How many were unsuccessful, and the degree to which they were unsuccessful, e.g., whether a test was an outright failure or whether some of the expected results of the test were actually observed.

Regression testing:

Regression testing does not belong to unit, integration, or system testing; rather, it is a separate dimension to these three forms of testing. Regression testing is the practice of running an old test suite after each change to the system or after each bug fix, to ensure that no new bug has been introduced due to the change or the bug fix.
However, if only a few statements are changed, then the entire test suite need not be run; only those test cases that test the functions that are likely to be affected by the change need to be run. Whenever a software is changed to either fix a bug, or to enhance or remove a feature, regression testing is carried out.
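A tiny sketch of this selective rerun idea (the mapping and test names are hypothetical): keep a map from functions to the test cases exercising them, and rerun only the tests of the changed functions:

tests_by_function = {
    "compute_price": ["test_price_basic", "test_price_zero"],
    "login": ["test_login_ok", "test_login_bad_password"],
}
changed_functions = {"login"}  # functions touched by the bug fix
to_rerun = [t for f in changed_functions for t in tests_by_function[f]]
print(to_rerun)  # ['test_login_ok', 'test_login_bad_password']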
