UNIT-4

IMPLEMENTATION
4.1 CODING PRINCIPLES
 Coding principles help programmers write efficient and effective code, which is easier
to test, maintain, and reengineer.
 The following coding principles can make code clear, readable, and understandable.
INFORMATION HIDING
 Information hiding hides the implementation details of data structures from the other
modules.
 Also, it secures data and information from illegal access and alterations.
 An illustration of information hiding is shown in the figure given below.
Fig: An illustration of information hiding (a module encapsulates its data and functions; other modules access the data only through the module's functions)


 Information hiding is supported by data abstraction, which allows creating multiple
instances of an abstract data type.
 Most object-oriented programming languages, such as C++, Java, etc., support the
features of information hiding.
 Structured programming languages, such as C, Pascal, FORTRAN, etc., provide
information hiding in a disciplined manner (a small C sketch is given below).
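 The following is a minimal C sketch of information hiding (illustrative only; the counter variable and the function names are assumed for this example, not taken from the text): the data item is declared static at file scope, so other modules can read or change it only through the module's functions.
#include <stdio.h>

/* counter.c -- illustrative module: the data item is hidden at file scope */
static int counter = 0;                 /* hidden data: not visible to other modules */

void counter_increment(void)            /* access is allowed only through functions  */
{
    counter++;
}

int counter_value(void)
{
    return counter;
}

int main(void)                          /* small driver showing indirect access      */
{
    counter_increment();
    counter_increment();
    printf("%d\n", counter_value());    /* prints 2 */
    return 0;
}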
STRUCTURED PROGRAMMING FEATURES
 Structured programming features linearize the program flow in a sequential way that
the programs follow during execution.
 The organization of program flow is achieved through the following three basic
constructs of structured programming.
(i) Sequence
(ii) Selection
(iii) Iteration

 All the above constructs have single entry and single exit points.
 The flowcharts illustrating single entry and single exit for these constructs are shown
in the following figure.
FIG:- SINGLE ENTRY AND SINGLE EXIT FOR SEQUENCE, SELECTION AND ITERATION (three flowchart panels: sequence, selection with true/false branches, and iteration)
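 As an illustration (a hypothetical C fragment, not from the text), the following program uses all three constructs, each with a single entry and a single exit:
#include <stdio.h>

int main(void)
{
    int n = 5, sum = 0;              /* sequence: statements run one after another  */

    if (n > 0)                       /* selection: one decision, single entry/exit  */
        printf("n is positive\n");
    else
        printf("n is not positive\n");

    for (int i = 1; i <= n; i++)     /* iteration: the loop body repeats, then exits */
        sum += i;

    printf("sum = %d\n", sum);       /* sequence resumes after the loop exits        */
    return 0;
}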

MAXIMIZE COHESION AND MINIMIZE COUPLING


 Writing modular programs with the help of functions, code blocks, classes, etc., may
increase dependency among modules in the software.
 The main reason is the use of shared and global data items.
 Shared data should be used as little as possible.
 Minimizing dependencies among modules will maximize cohesion within modules;
that is, there will be more use of local data rather than global data items.
 Thus, high cohesion and low coupling make a program clear, readable, and
maintainable (a small sketch follows below).
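 A small, hypothetical C sketch of this idea (the names g_total, add_global, and add_local are assumed for illustration): the first function is coupled to a shared global, while the second passes the data explicitly and keeps it local.
#include <stdio.h>

int g_total = 0;                          /* shared global data: increases coupling  */

void add_global(int x)                    /* tightly coupled: depends on g_total     */
{
    g_total += x;
}

int add_local(int total, int x)           /* loosely coupled: all data is passed in  */
{
    return total + x;
}

int main(void)
{
    add_global(5);                        /* any module could also modify g_total    */
    int total = add_local(0, 5);          /* data flow is visible at the call site   */
    printf("%d %d\n", g_total, total);
    return 0;
}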
CODE REUSABILITY
 Code reusability allows the use of existing code several times.
 Similar to built-in library functions, reusable components can be constructed in
modern programming languages and can be reused in later software developments.
 Also, reusability increases program modularity.

KEEP IT SIMPLE, STUPID
 A simple code is easy to debug, write, read, and modify.
 Programmers should write simple programs by breaking a problem into smaller
pieces that they think they understand and then try to implement the solution in code.
SIMPLICITY, EXTENSIBILITY, AND EFFORTLESSNESS
 A simple program always works better than a complicated program.
 Programs should be extendable rather than being lengthy.
 Program should be simple for better programming.
CODE VERIFICATION
 Kent Beck introduced pair programming in extreme programming (XP), which
focuses on program verification.
 A test-driven development (TDD) environment is created for better code writing.
 In TDD, programming is done along with testing the code.
 Pair programming allows two programmers to sit together.
 One may be coding while other may be thinking or verifying the code.
CODE DOCUMENTATION
 Source codes are used by testers and maintainers.
 Therefore, programmers should add comments in the source code as and when required.
 A well commented code helps to understand the code at the time of testing and
maintenance.
SEPARATION OF CONCERNS
 A program generally includes several functionalities in the system.
 These functionalities are often related to each other.
 Therefore, different functional requirements should be managed by distinct and
loosely coupled modules of code.
FOLLOW CODING STANDARDS, GUIDELINES, AND STYLES
 Source code that follows coding standards and a consistent programming style has
fewer adverse effects on the system.
 Therefore, programmers should focus on coding standards, guidelines, and good
programming style.

4.2 CODING PROCESS
 The coding process describes the steps the programmers follow for producing source
codes.
 The coding process allows programmers to write bug-free source codes.
 The traditional coding process and test-driven development (TDD) are two widely
used coding processes.
 The traditional programming process is an iterative and incremental process which
follows the “write-compile-debug” process.
 TDD was introduced by extreme programming (XP) in agile methodologies that
follow the “coding with testing” process.
 In the following subsections, we will explore these two processes in detail.
TRADITIONAL CODING PROCESS
 In traditional coding process, programmers use the design specifications (algorithms,
pseudo code, etc.) for writing source codes in a programming language.
 The source code contains executable instructions, libraries, function calls, comment
lines, etc.
 The source files are created according to the programming language.
 The source files are compiled to ensure that the coding is syntactically correct.
 During compilation, source programs are translated into machine language.
 If there is any error in the program, it is debugged by changing the source code.
 Otherwise, the compiled code is used for testing.
 During testing, debugging is performed to uncover errors in the program.
 Test data and test cases are designed to check that source codes are generated as per
the requirements specified in the requirements specification document.
 If there exists any error after testing, source codes are debugged and the process is
iterated until an executable source code is produced.
 The traditional coding process is illustrated in the figure given below.

FIG:- TRADITIONAL CODING PROCESS (design specifications → writing source codes → source file → compilation and linking → object file; compilation errors are debugged; otherwise the code is tested; if testing is not OK, debugging is performed; finally an executable program is produced)


TEST DRIVEN DEVELOPMENT:
 Test-driven development (TDD) was proposed by Kent Beck for iterative software
development.
 It is the reverse of the traditional programming process.
 While the traditional development cycle is "design-code-test", TDD follows the cycle
"test-code-refactor".
 As the name “TDD” suggests, test cases are designed before coding.
 Traditionally, test cases are written after coding to prove that the program is incorrect.
 In this approach, the programmer thinks about cases in which the code could fail.
 The programmer runs the source code on the test case.

 If the source code passes the test, then another test case is written for some other
feature.
 Otherwise, the bugs are identified, the source code is changed, and the code is run
again on the test case.
 The following figure illustrates the test-driven development process.

FIG:- TEST DRIVEN DEVELOPMENT PROCESS (feature specifications → test case design → write source code → run; an unsuccessful run leads to bug fixing and code change; a successful run leads to refactoring and the software)


 After a successful run, the feature code is extended to add the functionality of other
features using refactoring.
 Refactoring is the disciplined technique for restructuring an existing body of code,
altering its internal structure without changing the external behavior.
 Refactoring removes the duplicate code, adds new features, and refines the code to
produce a better design of the existing source code.
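 A minimal test-first sketch in C (illustrative; the function add and the assert-based test are assumed for this example, not from the text): the test is written first, fails until the feature code is implemented, and is rerun after every change, mirroring the test-code-refactor cycle.
#include <assert.h>
#include <stdio.h>

int add(int a, int b)                 /* feature code written to make the test pass */
{
    return a + b;
}

void test_add(void)                   /* test case designed before the feature code */
{
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);
}

int main(void)
{
    test_add();                       /* a failing assert stops here: fix and rerun */
    printf("all tests passed\n");
    return 0;
}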

4.3 CODE VERIFICATION:
 Code verification is the process of identifying the errors, failures, and faults in source
code that cause the system to fail in performing the specified task.
 Testing is one of the widely used methods for the verification of the work products of
all phases in software life cycle.
 We will discuss the following methods that are widely used for code verification.
1. Code review
2. Static Analysis
3. Testing

4.3.1 CODE REVIEW:


 It mainly aims at discovering and fixing mistakes in source codes.
 The following methods are used for code review.
1. Code walkthrough
2. Code Inspection
3. Pair Programming

CODE WALKTHROUGH:
 Code walkthrough is a technical, peer-review process of finding mistakes in source
code.
 The walkthrough team consists of a reviewee and a team of reviewers.
 The review team may involve a technical leader, a senior member of the project team, a
person from quality assurance, a technical writer, and other interested persons.
 The reviewee provides the review documents (e.g., SRS, design document, etc.)
along with the code to be reviewed.
 The reviewee walks through the code or may provide the review material to the
reviewers in advance.
 The reviewers examine the code either by using a set of test cases or by tracing
through the source code.

CODE INSPECTION
 Code inspection is similar to code walkthrough, which aims at detecting
programming defects in the source code.
 The code inspection team consists of a programmer, a designer, and a tester.
 The inspectors are provided the code and a document of checklists.
 The checklists focus on the important aspects in the code to be inspected.
 They include data referencing, memory referencing, loop conditions, conditional
choices, input/output statements, comments, computation expressions, coding
standards, and memory usage.
PAIR PROGRAMMING
 Pair programming is an extreme programming practice in which two programmers
work together at one work station.
 During pair programming, one programmer operates the keyboard, while the other
watches, learns, asks, talks, and makes suggestions.
 With the help of pair programming, the pair works with better concentration.
 It catches simple mistakes, such as ambiguous variable and method names, easily.
 The pair shares knowledge and provides quick solutions.
 Research shows that a pair provides more design alternatives with increased morale
and confidence than programmers working solo.
 Pair programming improves the quality of software and promotes knowledge sharing
between the team members.
 Because pair programming catches issues early, it can cost far less than code
review.
 There are some hurdles to pair programming, such as perceived wastage of resources,
programmers' preference to work in isolation, egos between team members, etc.
4.3.2 STATIC ANALYSIS
 Program analysis is performed with the help of program analysis tools.
 There are two methods of program analysis, viz., static analysis and dynamic
analysis.
 In static analysis, source codes are not executed rather these are given as input to
some tool that provides program behavior.

 Dynamic program analysis is the analysis of computer software that is performed by
executing programs built from that software on a real or virtual processor.
 Some of the popular static analysis tools are Fortify’s SCA, FindBugs and Fluid,
PREfix, Agitar’s TestOne, and Parfait.
 Static analysis tools help to identify redundancies in source codes.
 They identify idempotent operations, data declared but not used, dead code, missing
data, connections that lead to unreachable code segments, and redundant assignments.
 They also identify errors in interfacing between programs.
 They identify mismatch errors in the parameters used by the modules and assure
compliance with coding standards.
4.3.3 TESTING
 Static analysis helps to observe the structural properties of source codes.
 Dynamic analysis works with test data by executing test cases.
 Unit testing is performed for code verification.
 Each module (i.e., program, function, procedure, routine, etc.) is tested using test
cases.
 Test cases are designed to uncover errors in the code.
4.4 CODE DOCUMENTATION
 Software development, operation, and maintenance processes include various kinds
of documents.
 Documents act as a communication medium between the different team members of
development.
 They help users in understanding the system operations.
 Documents prepared during development are the problem statement, software
requirements specification (SRS) document, design document, documentation in the
source code, and test document.
 Documentation is an important artifact of the system.
 The following categories of documentation are done in the system:
(i) Internal documentation
(ii) System documentation
(iii) User documentation

(iv) Process documentation
(v) Daily documentation
INTERNAL DOCUMENTATION
 Internal documentation is done in the source code.
 It is basically comments included in the source code, which help in understanding the
source code.
 It mainly covers the standard prologue related to the program and its compilation,
comments, and self-documented lines.
 The standard prologue describes the purpose of the program, the compilation units, and
the supporting files for program execution.
 Comments are written for the complex logics and data structures.
 Most of the programming languages also provide self-documentation in programs.
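 An illustrative example of internal documentation in C (the file name, prologue fields, and function below are hypothetical): a standard prologue at the top, self-documenting names, and a comment only where the logic needs it.
/*--------------------------------------------------------------------
 * Program    : payroll.c (hypothetical example)
 * Purpose    : Computes the net salary of an employee
 * Compile    : cc payroll.c -o payroll
 * Depends on : payroll.h, tax_tables.dat (supporting files)
 *-------------------------------------------------------------------*/
#include <stdio.h>

/* Self-documenting names reduce the need for extra comments. */
double net_salary(double gross_salary, double tax_rate)
{
    /* Tax is deducted from the gross pay to obtain the net pay. */
    return gross_salary * (1.0 - tax_rate);
}

int main(void)
{
    printf("%.2f\n", net_salary(50000.0, 0.10));   /* prints 45000.00 */
    return 0;
}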
SYSTEM DOCUMENTATION
 System documentation mainly includes the following documents:
(i) Software requirements specification
(ii) Design document
(iii) System maintenance guide
(iv) Test document
(v) Installation report
USER DOCUMENTATION
 User documentation includes the following documents:
(i) System introductory manual: It describes the system usage. That is, how the system
will start, operate, and use various system resources.
(ii) Operation description: It describes the services provided by a specific operation in
the system.
(iii) System installation document: It specifies the configuration of files, hardware,
and software for installation of the module.
(iv) System reference manual: It specifies the system facilities, error reporting,
recovery, and troubleshooting processes.
(v) System administrator help: It is a manual which includes the commands and
controls that the administrator generates at the time of system interactions.

DAILY DOCUMENTATION
 Each programmer works on a module.
 He is responsible for various activities of the module.
 He maintains a unit development folder, called a notebook, to record the day-to-day
activities assigned to him.
 The unit development folder maintains the documents which contain requirements,
design, architecture, detailed design, source code, test plan, test results, changes,
notes, etc.
 Due dates, completion time, review dates and any unusual things noted by the
programmers are recorded in the unit development folder.
PROCESS DOCUMENTATION
 Process documentation manages the process records such as plan, schedule, process
quality documents, communication documents and standards.
 The communication documents such as emails, working paper, daily documents etc.
are used for managing communication among different team members.
 The standards are laid down by standard organizations such as ISO, ACM, IEEE, etc.
to manage process documents.
4.5 TESTING FUNDAMENTALS
4.5.1 ERRORS, FAULTS, AND FAILURES
 Software failure can have different terms such as defect, fault, error, bug, problem,
anomaly, incident, mistake, inconsistency, and variance.
 IEEE standard defines these terms in the following manner.
(i) Error:
 Error is the discrepancy between the actual value of the output of software and the
theoretically correct value of the output for that given input.
 An error is the human action that produces an incorrect result.
 Error is the unintended behavior of software.
 It is observed that most of the time, errors occur while programmers write
programs.
 The error is also known as variance, mistake, or problem.

(ii) FAULT:
 Fault is also called defect or bug, which is the manifestation of one or more errors.
 It causes a system to fail in achieving the intended task.
 The cause of an error is a fault (either software fault or hardware fault), which resides
temporarily or permanently in the system.
 A fault is a defect that gives rise to an error.
 It is a procedural cause that results in system malfunction.
 The presence of faults needs repair.
 A fault is an accidental condition that causes a functional unit to fail to perform its
required function.
(iii) FAILURE:
 Failure is the deviation of the observed behavior from the specified behavior.
 It occurs when the faulty code is executed leading to an incorrect outcome.
 Thus, the presence of faults may lead to system failure.
 A failure is the manifestation of an error in the system or software.
 A failure may be produced when a fault is encountered.

An error is caused by a fault and may propagate to become a failure.

4.5.2 THE COST OF DEFECTS


 The cost of testing is generally more than the cost of other activities in the software
development life cycle.
 The following figure illustrates the cost of defects at various phases of software
development.
 As development proceeds, the number of defects and their seriousness increase.
 Therefore, the cost of detecting and fixing defects increases through the software
development life cycle, and the maximum number of defects is reported during testing
and release.

Figure 1: The cost of defects at various software development phases (y-axis: cost of defects in $, from 0 to about 1200; x-axis: development phases Requirement, Design, Coding, Testing and Release; the cost rises sharply towards testing and release)
4.5.3 TESTING PROCESS
 Testing is a disciplined process of finding and debugging defects to produce defect-
free software.
 During testing, a test plan is prepared that specifies the name of the module to be
tested, reference modules, date and time, location, name of tester, testing tools, etc.
 Software engineers design test cases while writing the source code.
 The software tester runs the program using test cases according to the test plan and
observes the test results.
 If the test result is similar to the specified values of the software, then a test report is
prepared.
 Otherwise, a list of defects is made and the tester tries to find the causes of the
defects.
 The defects are debugged from the work products.
 Finally, the software tester will fix the errors in the program.

 At the end, a test document is prepared covering the test report, program execution,
test plan, etc.
 The tested program will now be integrated with the other parts of the software.
 The testing process is shown in the figure given below

Figure 2: Testing Process (source program and test plan → design test cases → run program → compare results with expected values; if the result is as expected, a test report and test document are prepared and the tested program goes for integration; otherwise a defect list is made, causes of defects are found, and the program is debugged and the errors fixed)

4.5.4 THE ROLE OF SOFTWARE TESTERS


 Software engineers write source codes and testers test codes.
 The goal of testers is to confirm that the software works properly by finding
defects as early as possible and ensuring that these are fixed.
 A software tester does the following tasks for testing the software:

 Prepares the test plan and test data
 Designs test cases and test scripts
 Sets up the test environment
 Performs testing
 Tracks defects in the defect management system
 Participates in the test case review meetings
 Prepares test report
 Follows software standards

 In order to perform the above technical and nontechnical responsibilities, a
software tester should be equipped with the following skills:
 Understanding of the business scenario
 Background of programming
 Knowledge of testing tools and their environment
 Skills of writing test cases and test case execution
 Skills as a planner for preparing the test plan
 Good analysis skills
 Skills to work with other team members
 Problem-solving skills
 Good communication skills

 Software testers perform testing using software testing tools.


 There are various automated tools for testing, such as LoadRunner, JMeter,
DBMonster, TestTube, WinRunner, Pounder, Marathon, and so on.
 These tools help us in program monitoring, program inspection, performance
measurement, and analysis of software.
4.6 TEST PLANNING
Test planning includes the following activities:
 Create test plan
 Design test cases
 Design test stubs and test drivers

 Test case execution
 Defect tracking and statistics
 Prepare test summary report

 In the following subsections, we discuss the various activities of test planning.


(i) Create a Test Plan
 A test plan is a document that describes the scope and activities of testing.
 A test plan contains the following attributes.
Test plan ID: A unique identifier for the test plan document is assigned for an easy
reference and access.
Purpose: It describes an overview of the test plan, including the goals, objectives, and
constraints of testing.
Test items: It includes the list of test items and their versions.
References: It describes the related documents which are to be referenced, if required,
such as project plan, configuration management plan, etc.
Features to be tested: A list of features of the software/product to be tested is specified
in the test plan. The features which will not be tested should be specified along with the
reasons.
Schedule: A detailed schedule for designing and executing test cases for test items must
be specified against calendar time.
Responsibilities: The responsibilities and roles of each member of the test team should
be specified to determine who will design and execute test cases and repair defects.
Test environment: The test plan describes the required testing tools and the test
environment, including the hardware, software, and network configurations.
Test case libraries and standards: Sometimes a process document is needed for
identifying, writing, and managing test cases.
Test strategy: The test strategy specifies the testing methods, such as white box, black
box. Individual modules may need different testing methods.
Test deliverables: Test deliverables are specified in the test plan before testing the
modules. They include the test cases, test scripts, defect logs, test reports etc.

Release criteria: A pass or fail criteria for the test item on test cases must be designed to
judge completion of testing an item.
Expected risk: A list of risks that have occurred or are expected to occur should
be specified along with a contingency plan for each risk.
(ii) Design Test Cases
 A test case is a set of inputs and expected results under which a program unit is
exercised with the purpose of causing failure and detecting faults.
 The intention of designing sets of test cases is to prove that the program under
test is incorrect.
 Some of the terminologies that are generally used in automated testing and the design
of test cases are as follows:
Test script A test script is a procedure that is performed on a system under test to verify
that the system functions as expected. A test case is the baseline used to create test scripts
with an automated tool.
Test suite A test suite is a collection of test cases. It is the composite of test cases
designed for a system.
Test data Test data are needed when writing and executing test cases for any kind of test.
All the test values and test components are stored separately in a file in the form of test
data. Any such data used in tests are known as test data. Test data are sometimes also
known as test mixture.
Test harness Test harness is the collection of software, tools, input/output data, and
configurations required for test.
Test scenario Test scenario is the set of test cases in which requirements are tested from
end to end. There can be independent test cases or a series of test cases that follow each
other.
 Test case specification is the description of fields that are specified for testing the
program unit.
 A test case includes the following fields:
 Test plan ID
 Test case ID
 Feature to be tested

 Preconditions
 Test script or test procedure
 Test data
 Expected results
 Test status

(iii) Test Stubs and Test Drivers


 A test driver is a simulated module that calls the module under test.
 The test driver specifies the parameters to call the module under test.
 A test stub is a simulated module that is called by the module under test.
 The test stub provides the return results to the module under test.
 The test stub and test drivers are the dummy modules that are basically written for
the purpose of providing input/output or the interface behavior for testing.
 The connection of test driver and test stub modules with the module under test is
shown in the figure given below.
 Here, a driver module calls the module under test and three stub modules are called
by the module under test.
Figure 3: Test driver and test stub modules (a driver module calls the module under test, which in turn calls stub module 1, stub module 2, and stub module 3)
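 A small, hypothetical C sketch of a test driver and a test stub (the names net_price and get_discount_rate_stub are assumed for this example): the stub returns a canned value in place of the real lower-level module, and the driver supplies the parameters and checks the result.
#include <stdio.h>

/* Test stub: simulates the module called by the module under test. */
int get_discount_rate_stub(int customer_id)
{
    (void)customer_id;                 /* ignore the input and return a fixed value    */
    return 10;                         /* canned 10% discount instead of a real lookup */
}

/* Module under test: calls the stub in place of the real lower module. */
int net_price(int price, int customer_id)
{
    return price - (price * get_discount_rate_stub(customer_id)) / 100;
}

/* Test driver: calls the module under test with chosen parameters. */
int main(void)
{
    int result = net_price(200, 42);
    printf("net_price = %d (%s)\n", result, result == 180 ? "pass" : "fail");
    return 0;
}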

(iv) Test Case Execution


 Once the test cases, test drivers, and test stubs are designed for a test plan, the next
task is to execute the test cases.
 Software tester runs the test procedure again and again using valid and invalid data
and observes the result.

 On executing the test cases, the expected results and the observed behavior are
recorded in a test summary report.

(v) Test Summary Report


 A test summary report is prepared to ensure whether the module under test satisfies
the acceptance criteria or not.
 The test summary report covers the result of the items from the test plan which were
planned at the beginning to test the module.
 It includes the number of test cases executed and the types of errors observed.
(vi) Defect Tracking And Statistics
 A project has a lot of defects, which are inspected, retested, and managed in a test log.
 A record is maintained of ignored defects, unresolved defects, defects whose testing
was stopped due to extra effort and resource requirements, etc.
4.7 BLACK BOX TESTING
 Black box testing is performed on the basis of functions or features of the software.
 In black-box testing, only the input values are considered for the design of test cases.
 The output values that the software provides on execution of test cases are observed.
 The internal logic or program structures are not considered during black box testing.
 The requirements specification is the basis of black-box testing.
 It is also known as behavioral or functional testing.
 There are a number of black-box test case design methods but we discuss only the
following methods.
 Equivalence class partitioning
 Boundary value analysis
 Cause-effect graphing
 Error guessing
(i) Equivalence class Partitioning
 Equivalence class partitioning method allows partitioning the input domain into a set
of equivalence classes (i.e., sub-domains).
 Each equivalence class provides different behavior.

 Thus, each equivalence class is disjoint, i.e., the input values of an equivalence class
will not belong to another equivalence class.

 The equivalence class partitioning method has the following two aspects:
 Design of equivalence classes
 Selection of test input data
EXAMPLE: Design test cases to find the characters from ASCII numbers using
equivalence class partitioning method.
The range of ASCII values varies from 0 to 256. The equivalence classes for the ASCII
character set are shown below. An ASCII character may belong to one of the following
equivalence classes (ECs).

EC1: 0-31    EC2: 32-95    EC3: 96-127    EC4: 128-256

EC1: Control Characters


EC2: Basic Printable Characters
EC3: Extended Printable Characters
EC4: Unicode Characters
The test cases for the equivalence classes in the above example are shown in the table
given below
TABLE: Equivalence class partitioning for ASCII characters
Equivalence Class Valid Test Data Invalid Test Data
EC1 13,0,23 -1,33,103
EC2 35,95,59 23,99,31
EC3 97,117,100 91,129,172
EC4 138,246,182 125,258,265
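 As a sketch (the function ascii_class below is hypothetical and based only on the four classes of this example), the classifier maps a value to EC1-EC4, and the valid and invalid test data of the table can be run against it:
#include <stdio.h>

int ascii_class(int code)                        /* returns the EC number, 0 if invalid */
{
    if (code >= 0   && code <= 31)  return 1;    /* EC1: control characters        */
    if (code >= 32  && code <= 95)  return 2;    /* EC2: basic printable           */
    if (code >= 96  && code <= 127) return 3;    /* EC3: extended printable        */
    if (code >= 128 && code <= 256) return 4;    /* EC4: as per the example range  */
    return 0;                                    /* outside the 0-256 input domain */
}

int main(void)
{
    int data[] = {13, 35, 97, 138, -1, 258};     /* one value per EC plus two invalid */
    for (int i = 0; i < 6; i++)
        printf("%d -> EC%d\n", data[i], ascii_class(data[i]));
    return 0;
}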

(ii) Boundary Value Analysis


 The boundary value analysis is the special case of equivalence class partitioning
method that focuses on the boundary of the equivalence classes.

 Here, the test input data are selected at and near the boundaries of the equivalence
classes, whereas in equivalence class partitioning, the test input data are selected
within the equivalence class.
EXAMPLE: A student grading system is used to allot grades in the subjects. The grades
can be A (86 to 100), B (61 to 85), C (46 to 60), D (30 to 45) and F (below 30).
Here, there may be chances of missing marks in the grade calculation, of considering an
invalid range while allotting grades in a subject, or of wrongly calculating grades at the
boundaries of the marks ranges. Therefore, some of the test cases are shown in the table given below:
TABLE: Boundary value analysis for the student grading system
Equivalence Class Test input domain Test Data
EC1 A (>=86 to <=100) 85,87,101
EC2 B (>=61 to <=85) 60,84,86
EC3 C (>=46 to <=60) 45,59,61
EC4 D (>=30 to <=45) 29,31,46
EC5 F (<30) 28,29,30,31
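 A hypothetical C sketch of the grading rule (the function name grade is assumed), with boundary test data chosen at and just beyond each class boundary:
#include <stdio.h>

char grade(int marks)
{
    if (marks >= 86 && marks <= 100) return 'A';
    if (marks >= 61 && marks <= 85)  return 'B';
    if (marks >= 46 && marks <= 60)  return 'C';
    if (marks >= 30 && marks <= 45)  return 'D';
    if (marks >= 0  && marks <  30)  return 'F';
    return '?';                                  /* outside the valid 0-100 range */
}

int main(void)
{
    int boundary_data[] = {29, 30, 45, 46, 60, 61, 85, 86, 100, 101, 0, -1};
    for (int i = 0; i < 12; i++)
        printf("marks %d -> grade %c\n", boundary_data[i], grade(boundary_data[i]));
    return 0;
}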

(iii) Cause-Effect Graphing


 Cause-Effect Graphing technique begins with finding the relationships among input
conditions known as “causes” and the output conditions known as “effects”.
 A cause is any condition in the requirement that affects the program output.
 Similarly, an effect is the outcome of some input conditions.
 The logical relationships among input and output conditions are expressed in terms of
a cause-effect graph.
 For example, “cash withdrawal” depends upon “valid pin”, “valid amount”, and
“cash availability” in the account.
 Using the cause-effect graphing technique, the requirements can be stated in causes
and effects as follows:
CAUSES:
C1: Enter valid amount
C2: Enter valid pin
C3: Cash available in the account
EFFECTS
E1: Cash withdrawal

 E1 will be successful if C1, C2, and C3 are true.
 This can be represented in the cause-effect graph as shown in the figure given below:

FIG: Cause-Effect graph for cash withdrawal (causes C1, C2, and C3 are combined through an AND (^) node into effect E1)
 The notations used for constructing a cause-effect graph are shown in the figure given
below:

FIG: Notations for cause-effect graph (Identity: C1 to E1; NOT: C1 negated to E1; AND: C1 and C2 combined with ^ into E1; OR: C1 and C2 combined with v into E1)


 LOGICAL functions (AND, OR, and NOT) are used for constructing cause-effect
graph.
 Each node has the value either true or false.
 Identity specifies that if C1 is true, then E1 will be true, otherwise false.
 NOT (~) symbol indicates if C1 is true, then E1 will be false, else true.
 OR (v) states that E1 will be true if either C1 or C2 is true, else false.
 In AND (^) function, E1 will be true if both C1 and C2 are true, else false.
 The process of cause-effect graphing testing is as follows:
1. From the requirements, identify causes and effects and assign them a unique
identification number.

2. The relationships among causes and effects are established by combining the causes
and effects, and they are annotated in the cause-effect graph.
3. Transform the cause-effect graph into a decision table and each column in the
decision table represents a test case.
4. Generate tests from the decision table.
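 A hypothetical C sketch of steps 3 and 4 for the cash-withdrawal example (the function name cash_withdrawal is assumed): the effect E1 is true only when C1, C2, and C3 are all true, and each row of the decision table becomes one test case.
#include <stdio.h>

int cash_withdrawal(int c1_valid_amount, int c2_valid_pin, int c3_cash_available)
{
    return c1_valid_amount && c2_valid_pin && c3_cash_available;   /* E1 = C1 ^ C2 ^ C3 */
}

int main(void)
{
    /* decision-table rows: {C1, C2, C3, expected E1} */
    int rows[4][4] = { {1,1,1,1}, {0,1,1,0}, {1,0,1,0}, {1,1,0,0} };
    for (int i = 0; i < 4; i++) {
        int e1 = cash_withdrawal(rows[i][0], rows[i][1], rows[i][2]);
        printf("C1=%d C2=%d C3=%d -> E1=%d (%s)\n",
               rows[i][0], rows[i][1], rows[i][2], e1,
               e1 == rows[i][3] ? "pass" : "fail");
    }
    return 0;
}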
(iv)Error Guessing
 The error guessing technique is based on guessing the error-prone areas in the
program.
 Error guessing is an intuitive and ad-hoc process of testing.
 The possible errors or error-prone situations are listed and then test cases are written
for such errors.
 Software testers use their experiences and knowledge to design test cases for error-
prone situations.
4.8 WHITE –BOX TESTING
 White-box testing is concerned with exercising the source code of a module and
traversing a particular execution path.
 Internal logics, such as control structures, control flow, and data structures are
considered during white-box testing.
 Unit testing is performed to test source codes.
 Thereafter, white-box testing methods are applied at the integration and testing
phases.
 White-box testing is also known as glass-box testing or structural testing.
 The following white-box testing methods are widely used for testing the software:
(i) Control-flow based testing
(ii) Path testing
(iii) Data-flow based testing
(iv) Mutation testing
(I) CONTROL-FLOW BASED TESTING
 The control-flow based criteria concentrate on covering all aspects of the program,
that is, statements, conditions, branches, etc., at least once.
 There are various criteria used to measure test adequacy in a program.

 The following are the control-flow based coverage criteria used in testing a program:
 Statement coverage testing
 Branch coverage testing
 Condition coverage testing

Statement coverage testing


 The aim of statement coverage is to design test cases so that every statement of the
program can be executed at least once during testing.
 Consider the following example written in C language:
#include <stdio.h>

int main(void)
{
    int a, b;
    scanf("%d %d", &a, &b);      /* read the two input values */
    if (a < b)
        printf("%d\n", a + b);   /* executed when a < b  */
    else
        printf("%d\n", a * b);   /* executed when a >= b */
    return 0;
}
 For example, the following two test cases of a test suite will be adequate for statement
coverage of the above program:
Test suite = {(a=1,b=2),(a=2,b=1)}
 This test suite will cover all the statements and even the if-else block of the program.
Branch Coverage Testing
 Branch coverage testing is also known as decision coverage.
 In this coverage criterion, test cases are designed in such a way that all the outcomes
of the decision have been considered.
 Each branch or decision in the program is evaluated as true or false at least once
during testing.
 Consider the following code fragment of C language with a test datum {a = -2}:
int a, b;
if(a<0)
a = a+b;

 On executing the above code with the test data {a = -2}, statement coverage is satisfied
because the condition (a<0) evaluates to true and the assignment is executed.
 However, it is not adequate for branch coverage because the decision (a<0) will not
evaluate to false.
Condition Coverage Testing
 Simple conditions such as (a<0) are covered using branch coverage criteria.
 But there can also exist complex conditions which are made using logical operations,
such as AND, OR, and XOR.
 To understand the test adequacy of complex condition coverage, see the following
program fragment in C language:
int a, b;
if (a>=0 && b>0)
a=a+b;
else
a=a-b;
 Consider the following test cases for the above program:
Test case 1={a=0,b=2};
Test case 2={a=1,b=-1};
 The first test case covers the if part but is not adequate for the else part, and
vice versa.
 Together, the two test cases achieve statement coverage and branch coverage; for full
condition coverage, an additional test case with a < 0 is needed so that the
sub-condition (a>=0) also evaluates to false.
(ii) PATH TESTING
 Path testing is another white-box testing method, which focuses on identifying
independent paths in a program.
 The process of path testing to design test cases from CFG is as follows:
Step 1: Construct CFG.
Step 2: Compute cyclomatic complexity.
Step 3: Identify independent paths.
Step 4: Prepare test cases.
 A control flow graph (CFG) is also known as flow graph or program graph.

 The CFG mainly involves sequence, selection, and iterative representation in the
program flow.
 These basic elements of CFG are shown in the figure given below.

FIG: BASIC CONSTRUCTS OF CFG


 An independent path is any path through the program that introduces at least one new
edge that is not included in any other path before it.
 The cyclomatic complexity number defines the required number of independent paths
in a given CFG.
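 As an illustration (a hypothetical C function count_positives), the CFG of the function below has two decisions (the loop condition and the if), so its cyclomatic complexity is the number of decisions plus one, i.e., 3, and three independent paths need to be exercised.
#include <stdio.h>

int count_positives(const int *a, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++) {      /* decision 1: loop condition */
        if (a[i] > 0)                  /* decision 2: selection      */
            count++;
    }
    return count;
}

int main(void)
{
    int data[] = {3, -1, 7};
    printf("%d\n", count_positives(data, 3));   /* paths 2 and 3: loop entered, if true/false */
    printf("%d\n", count_positives(data, 0));   /* path 1: loop not entered                   */
    return 0;
}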
(iii) DATA-FLOW-BASED TESTING
 Data-flow-based testing concentrates on data flow representation rather than control
flow in the program.
 Data-flow-based testing is performed as follows to reach to the design of test cases:
1. Construct a data flow graph from a program.
2. Select data flow testing criteria.
3. Determine feasible paths.
4. Design test cases.

(iv) MUTATION TESTING
 Mutation testing is a fault-based technique that uses mutation of program to design
test cases.
 A programmer computes the mutation score after mutation testing.
 Let T be a test case, L the number of live mutants, D the number of dead (killed)
mutants, E the number of equivalent mutants, and N the total number of mutants generated.
 The mutation score of test case T, i.e., M(T), is computed as follows:
M(T) = D / (N - E)
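 For illustration (with assumed numbers): if N = 50 mutants are generated, E = 5 of them are equivalent, and D = 36 are killed (dead), then M(T) = 36 / (50 - 5) = 0.8, i.e., a mutation score of 80%, and L = N - D - E = 9 mutants remain live.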
4.9 LEVELS OF TESTING


 Testing is a defect detection technique that is performed at various levels.
 Testing begins once a module is fully constructed.
 Although software engineers test source code after it is written, this alone is not
enough to satisfy the customer's needs and expectations.
 Different levels of testing are shown in the figure given below:

FIG: LEVELS OF TESTING

 Testing levels are discussed in detail in the following subsections.
(i)UNIT TESTING
 Unit testing is the starting level of testing.
 Here, unit means a program unit, module, component, procedure, subroutine of a
system developed by the programmer.
 The aim of unit testing is to find bugs by isolating an individual module using test
stub and test drivers and by executing test cases on it.
 The environment of unit testing is shown in Figure given below:

FIG: UNIT TEST ENVIRONMENT


(ii)INTEGRATION TESTING
 Two or more modules are integrated into subsystems and testing is performed in an
integrated manner.
 The main goal of integration testing is to find interface errors between modules.
 The modules are integrated in an incremental fashion.
 There are various approaches in which the modules are combined together for
integration testing.
 These are as follows:
 Big-bang approach
 Top-down approach
 Bottom-up approach
 Sandwich approach

Big-bang approach
 In this approach, all the modules are first tested individually and then these are
combined together and tested as a single system.
Top-down approach
 The top-down integration testing approach is as follows: main system → subsystems
→ modules at the concrete level.
 An illustration of top-down integration testing is shown in Figure 9.14.
 The main module M will be tested first and then it will integrate and test subsystems
S1, S2, and S3.
 Thereafter, S1 will integrate M1.1 and M1.2 modules; S2 links with M2.1 modules
and S3 integrates M3.1, M3.2, and M3.3 modules.
Bottom-up approach
 The approach of bottom-up integration is as follows: concrete-level modules →
subsystems → main module.
 In Figure 9.14, bottom-level modules M1.1 and M1.2 are integrated to form S1
subsystem; M2.1 forms S2; M3.1, M3.2, and M3.3 are combined with subsystem S3.

Sandwich approach
 Sandwich testing combines both top-down and bottom-up integration approaches.
 During sandwich testing, the top-down approach forces the lower-level modules to be
available and the bottom-up approach requires upper-level modules.
(iii)SYSTEM TESTING

 Once all the modules have been tested, system testing is performed to check whether
the system satisfies the requirements (both functional and nonfunctional).
 To test the functional requirements of the system, functional or black-box testing
methods are used with appropriate test cases.
 For specific nonfunctional requirements, special tests are conducted to ensure the
system functionality.
 Some of the nonfunctional system tests are discussed in the following paragraphs.
Performance Testing
 Performance testing is carried out to check the runtime outcomes of the system, such
as efficiency, accuracy, etc.
Volume Testing
 It deals with the behavior of the system when a heavy amount of data is to be processed
or stored in the system.
 For example, an operating system will be checked to ensure that the job queue will be
able to handle a large number of processes entering into the computer.
 It basically checks the capacity of the data structures.

Stress Testing
 In stress testing, behavior of the system is checked when it is under stress.
 There are several reasons for stress, such as increase in the maximum number of
users, peak demand, extended number of operations, etc.
Security Testing
 Security testing is conducted to ensure security checks at different levels in the
system.
 For example, testing of e-payment system is done to ensure that the money
transaction is happening in a secure manner in e-commerce applications.
Recovery Testing
 Recovery testing is performed to check whether the system will recover the losses
caused by data error, software error, or hardware problems.
 For example, the Windows operating system recovers the currently-running files if
any hardware/software problem occurs in the system.

Compatibility Testing
 Compatibility testing is performed to ensure that the new system will be able to work
with the existing system.
 For example, compatibility testing checks whether Word 2007 files can be opened
in Word 2003 if it is installed in the system.
Configuration Testing
 Configuration testing is performed to check if a system can run on different hardware
and software configurations.
 For example, if you want to run your program on another machine, then you are
required to check the configuration of its hardware and software.
Installation Testing
 Installation testing is conducted to ensure that all modules of the software are
installed properly.
 Installation testing covers various issues, such as automatic execution of the CD; files
and libraries must be allocated and loaded; appropriate hardware configurations must
be present; proper network connectivity; compatibility with the operating system
platform, etc.
Documentation Testing
 Once the system becomes operational, problems may be encountered in the system.
 A systematic documentation or manual can help to solve such problems.
 The system is checked to see whether its proper documentation is available.
(iv)ACCEPTANCE TESTING
 It is performed with the customer to ensure that the system is acceptable for delivery.
 Acceptance testing is performed at two levels, i.e., alpha testing and beta testing.
Alpha Testing
 Alpha testing is pilot testing in which customers are involved in exercising test cases.
 In alpha testing, the customer conducts tests in the development environment.
 The users perform alpha test and try to pinpoint any problem in the system.
 After alpha testing, the system is ready to be transported to the customer site for
deployment.
Beta Testing

 Beta testing is conducted at the customer site where the software is to be deployed
and used by the end users.
 Some other tests are also performed by the client during acceptance testing, such as
shadow testing and benchmark testing to measure customer satisfaction in the system.
Shadow Testing
 In this testing, the new system and the legacy system are run side-by-side and their
results are compared.
 Any unusual results noted by the end user are informed to the developers so that they
take corrective actions to remove the problems.
Benchmark Testing
 Benchmark testing helps to assess product’s performance against other products in a
number of areas including functionality, durability, quality, etc.
4.10 USABILITY TESTING
 Usability refers to the ease of use and comfort that users have while working with
software.
 It is also known as user-centric testing.
 Usability testing concentrates on the testing of user interface design, such as look and
feel of the user interface, format of reports, screen layouts, hardware and user
interactions.
 Usability testing is performed by potential end users in a controlled environment.
 The development organization invites selected end users to test the product in terms of
ease of use, expected functionality, performance, safety and security, and the
outcome.
 There are three types of usability tests:
(i) Scenario test
(ii) Prototype test
(iii) Product test
SCENARIO TEST
 In scenario test, end users are presented with a visionary scenario of the software.
 The end users go through the dialogues of the scenario and the developer observes
how the end user interacts and reacts with the system.

 The developers get immediate feedback by the scenario test.
PROTOTYPE TEST
 In a prototype test, end users are provided a piece of software that implements
important aspects of the system.
 The end user provides a realistic view of the prototype.
 The collected suggestions are incorporated into the system.
PRODUCT TEST
 The product test is just like the prototype test except that the functional version of the
software is used for usability testing instead of the prototype version.
4.11 REGRESSION TESTING
 Regression Testing is also known as program revalidation.
 Regression Testing is performed whenever new functionality is added or an existing
functionality is modified in the program.
 Regression testing is also needed when a subsystem is modified to get the new
version of the system.
 There are various techniques of regression testing, such as
(i) Test-All,
(ii) Test Minimization,
(iii) Test Prioritization, and
(iv) Random Selection.
TEST-ALL
 In test-all technique, test cases for regression testing are selected from the test cases
designed for the existing system.
 Here, all the test cases are executed for regression testing of the new system.
TEST MINIMIZATION
 Test minimization aims to reduce the number of test cases selected for execution.
 It is mainly based upon the code coverage concept.
 Those test cases that cover the changed code are used for regression testing.
 Even the selected test cases are also optimized to reduce their size.
 During test minimization, several test cases are discarded, so test cases should be
reviewed carefully before discarding them.

TEST PRIORITIZATION
 Test case prioritization technique prioritizes and schedules test cases to speed up the
testing process or achieve some objective.
 Therefore, test cases are ranked according to their priority in testing.
 The highest priority test cases are executed first and then the next priority test cases.
RANDOM SELECTION
 There are various other techniques of test prioritization other than ranking.
 Critical test cases can be randomly selected for regression testing.
4.12 DEBUGGING APPROACHES
 Debugging is a post-testing mechanism of locating and fixing errors.
 Once errors are reported by testing methods, these are isolated and removed during
debugging.
 Debugging has two important steps:
(i) Identifying the location and nature of errors
(ii) Correcting or fixing errors
 There are various methods of finding and correcting errors in a program.
 The most popular debugging approaches are as follows:
(i) Brute force
(ii) Backtracking
(iii) Breakpoint
(iv)Debugging by Induction
(v) Debugging by Deduction
(vi)Debugging by Testing
Brute force
 It is the simplest method of debugging but it is inefficient.
 It uses memory dumps or output statements for debugging.
 A memory dump is a machine-level representation of the corresponding variables and
statements.
 The memory dump rarely establishes a correspondence that shows the error at a
particular point in time.
Backtracking

 In this method, debugging begins from where the bug is discovered and the source
code is traced backward through different paths until the exact location of the
cause of the bug is reached or the cause of the bug has disappeared.
 This process is performed with the program logic in reverse direction of the flow of
control.
Breakpoint
 Breakpoint debugging is a method of tracking programs with a breakpoint and
stopping the program execution at the breakpoint.
 A breakpoint is a kind of signal that tells the debugger to temporarily suspend
execution of the program at a certain point.
 Program execution continues up to the breakpoint statement.
 If any error is reported, its location is marked and then the program execution
resumes till the next breakpoint.
 This process is continued until all errors are located in the program.

Debugging by Induction
 It is based on pattern matching and a thought process on some clue.
 The process begins with collecting information about pertinent data where the bug
has been discovered.
 The patterns of successful test cases are observed and data items are organized.
 Thereafter, a hypothesis is derived by relating the pattern to the error to be debugged.
 If the hypothesis is successful, the devised theory explains the occurrence of the bug.
 Otherwise, more data are collected to derive causes of errors.
 Finally, causes are removed and errors are fixed in the program.
Debugging by Deduction
 This is a kind of cause elimination method.
 On the basis of a cause hypothesis, lists of possible causes are enumerated for the
observed failure.
 Now tests are conducted to eliminate causes to remove errors in the system.
 If all the causes are eliminated, then errors are fixed.

 Otherwise, the hypothesis is refined to eliminate the errors.
 Finally, the hypothesis is proved to ensure that all causes have been eliminated and the
system is bug free.
Debugging by Testing
 It uses test cases to locate errors.
 Test cases designed during testing are used in debugging to collect information to
locate the suspected errors.
