Unit4 Se r13 (Completed)
IMPLEMENTATION
4.1 CODING PRINCIPLES
Coding principles help programmers write efficient and effective code that is easier to test, maintain, and reengineer.
The following coding principles help make code clear, readable, and understandable.
INFORMATION HIDING
Information hiding conceals the implementation details of data structures from other modules.
It also protects data and information from unauthorized access and alteration.
An illustration of information hiding is shown in the figure below.
FIG: Information hiding (the internals of a module are hidden from other modules)
SINGLE ENTRY AND SINGLE EXIT
The basic control constructs (sequence, selection, and iteration) each have a single entry point and a single exit point.
Flowcharts illustrating single entry and single exit for these constructs are shown in the figure below.
FIG: Single entry and single exit for sequence, selection, and iteration
KEEP IT SIMPLE, STUPID
Simple code is easier to write, read, debug, and modify.
Programmers should write simple programs by breaking a problem into smaller pieces that they understand and then implementing the solution for each piece in code.
SIMPLICITY, EXTENSIBILITY, AND EFFORTLESSNESS
A simple program generally works better than a complicated one.
Programs should be extensible rather than merely lengthy.
A program should be kept simple for better programming.
CODE VERIFICATION
Kent Beck introduced pair programming in extreme programming (XP), which emphasizes program verification.
A test-driven development (TDD) environment is created for better code writing: in TDD, code is written together with its tests.
In pair programming, two programmers sit together; one writes the code while the other thinks about and verifies it.
CODE DOCUMENTATION
Source code is later read by testers and maintainers.
Therefore, programmers should add comments to source code as and when required.
Well-commented code is easier to understand during testing and maintenance.
SEPARATION OF CONCERNS
A program generally implements several functionalities of the system.
These functionalities are often related to one another.
Therefore, different functional requirements should be handled by distinct, loosely coupled modules of code.
FOLLOW CODING STANDARDS, GUIDELINES, AND STYLES
Source code that follows coding standards and a consistent programming style has fewer adverse effects on the system.
Therefore, programmers should follow coding standards, guidelines, and good programming style.
4.2 CODING PROCESS
The coding process describes the steps programmers follow to produce source code.
A disciplined coding process helps programmers write bug-free source code.
The traditional coding process and test-driven development (TDD) are two widely used coding processes.
The traditional programming process is an iterative and incremental process that follows the “write-compile-debug” cycle.
TDD was introduced by extreme programming (XP) in agile methodologies and follows the “coding with testing” process.
In the following subsections we explore these two processes in detail.
TRADITIONAL CODING PROCESS
In the traditional coding process, programmers use the design specifications (algorithms, pseudocode, etc.) to write source code in a programming language.
The source code contains executable instructions, libraries, function calls, comment lines, etc.
Source files are created according to the rules of the programming language.
The source files are then compiled to ensure that the code is syntactically correct; during compilation, source programs are translated into machine language.
If the compiler reports errors, the program is debugged by changing the source code; otherwise, the compiled code moves on to testing.
Test data and test cases are designed to check that the source code meets the requirements specified in the requirements specification document.
If errors remain after testing, the source code is debugged and the process is iterated until a correct executable program is produced.
The traditional coding process is illustrated in the figure below.
FIG: Traditional coding process (design specifications → source file → compilation and linking → on error, debug → testing → on test failure, debug → executable program)
TEST-DRIVEN DEVELOPMENT PROCESS
In TDD, a test case is written for a feature before the code that implements it.
If the source code passes the test, another test case is written for the next feature.
Otherwise, the bugs are identified, the source code is changed, and the test case is run again.
The figure below illustrates the test-driven development process.
FIG: Test-driven development process (feature specifications → write test → run → on failure, fix bugs and change code → on success, refactor → software)
4.3 CODE VERIFICATION
Code verification is the process of identifying the errors, failures, and faults in source code that cause the system to fail to perform its specified task.
Testing is one of the widely used methods for the verification of the work products of
all phases in software life cycle.
We will discuss the following methods that are widely used for code verification.
1. Code review
2. Static Analysis
3. Testing
CODE WALKTHROUGH
Code walkthrough is a technical, peer-review process for finding mistakes in source code.
The walkthrough team consists of a reviewee and a team of reviewers.
The review team may include a technical leader, a senior member of the project team, a person from quality assurance, a technical writer, and other interested persons.
The reviewee provides the review documents (such as the SRS and the design document) along with the code to be reviewed, or may distribute the review material to the reviewers in advance.
The reviewee walks through the code while the reviewers examine it, either by hand-tracing it against a set of test cases or by reading through the source code.
CODE INSPECTION
Code inspection is similar to code walkthrough and aims at detecting programming defects in the source code.
The code inspection team consists of a programmer, a designer, and a tester.
The inspectors are provided with the code and a checklist document.
The checklists focus on the important aspects of the code to be inspected, including data referencing, memory referencing, loop conditions, conditional choices, input/output statements, comments, computation expressions, coding standards, and memory usage.
PAIR PROGRAMMING
Pair programming is an extreme programming practice in which two programmers work together at one workstation.
During pair programming, one programmer operates the keyboard while the other watches, learns, asks, talks, and makes suggestions.
Working as a pair helps both programmers concentrate better.
It easily catches simple mistakes, such as ambiguous variable and method names.
The pair shares knowledge and arrives at solutions quickly.
Research shows that a pair produces more design alternatives, with higher morale and confidence, than programmers working solo.
Pair programming improves the quality of software and promotes knowledge sharing between team members.
Because pair programming catches issues early, it can cost far less than code review.
There are, however, some hurdles to pair programming, such as the perceived wastage of resources, programmers who prefer to work in isolation, and egos between team members.
4.3.2 STATIC ANALYSIS
Static analysis is performed with the help of program analysis tools.
There are two methods of program analysis: static analysis and dynamic analysis.
In static analysis, the source code is not executed; instead, it is given as input to a tool that reports on the program's behavior.
Dynamic program analysis, in contrast, analyzes software by executing programs built from it on a real or virtual processor.
Some popular static analysis tools are Fortify’s SCA, FindBugs, Fluid, PREfix, Agitar’s TestOne, and Parfait.
Static analysis tools help identify redundancies in source code.
They identify idempotent operations, data declared but never used, dead code, unreachable code segments, and redundant assignments.
They also identify errors in the interfaces between programs, such as mismatched parameters, and check compliance with coding standards.
4.3.3 TESTING
Static analysis observes the structural properties of source code, whereas dynamic analysis executes test cases on test data.
Unit testing is performed for code verification: each module (i.e., program, function, procedure, routine, etc.) is tested using test cases.
Test cases are designed to uncover errors in the code.
4.4 CODE DOCUMENTATION
Software development, operation, and maintenance processes include various kinds
of documents.
Documents act as a communication medium between the different members of the development team.
They also help users understand the system's operations.
Documents prepared during development include the problem statement, the software requirements specification (SRS), the design document, documentation in the source code, and test documents.
Documentation is an important artifact in the system.
There are following categories of documentation done in the system:
(i) Internal documentation
(ii) System documentation
(iii) User documentation
(iv) Process documentation
(v) Daily documentation
INTERNAL DOCUMENTATION
Internal documentation is done in the source code.
It basically consists of the comments included in the source code, which help in understanding it.
It mainly covers the standard prologue for the program and its compilation, comments, and self-documenting lines.
The standard prologue describes the purpose of the program, its compilation units, and the supporting files needed for program execution.
Comments are written for complex logic and data structures.
Most programming languages also support self-documenting names in programs.
SYSTEM DOCUMENTATION
System documentation mainly includes the following documents:
(i) Software requirements specification
(ii) Design document
(iii) System maintenance guide
(iv) Test document
(v) Installation report
USER DOCUMENTATION
User documentation includes the following documents.
(i) System introductory manual: It describes the system usage. That is, how the system
will start, operate, and use various system resources.
(ii) Operation description: It describes the services provided by a specific operation in
the system.
(iii) System installation document: It specifies the configuration of files, hardware,
and software for installation of the module.
(iv) System reference manual: It specifies the system facilities, error reporting, recovery, and troubleshooting processes.
(v) System administrator help: It is a manual which includes the commands and
controls that the administrator generates at the time of system interactions.
DAILY DOCUMENTATION
Each programmer works on a module and is responsible for its various activities.
The programmer maintains a unit development folder, also called a notebook, to record the day-to-day activities assigned to him.
The unit development folder holds documents covering the requirements, design, architecture, detailed design, source code, test plan, test results, changes, notes, etc.
Due dates, completion times, review dates, and anything unusual noted by the programmer are also recorded in the unit development folder.
PROCESS DOCUMENTATION
Process documentation manages process records such as plans, schedules, process quality documents, communication documents, and standards.
Communication documents, such as emails, working papers, and daily documents, are used for managing communication among team members.
Standards laid down by organizations such as ISO, ACM, and IEEE are used to manage process documents.
4.5 TESTING FUNDAMENTALS
4.5.1 ERRORS, FAULTS, AND FAILURES
Software failures are described with different terms, such as defect, fault, error, bug, problem, anomaly, incident, mistake, inconsistency, and variance.
IEEE standard defines these terms in the following manner.
(i) Error:
An error is the discrepancy between the actual value of the output of software and the theoretically correct value of the output for a given input.
An error is the human action that produces an incorrect result; it is the unintended behavior of software.
It is observed that most errors are introduced while programmers write programs.
An error is also known as a variance, mistake, or problem.
(ii) FAULT:
A fault, also called a defect or bug, is the manifestation of one or more errors.
It causes a system to fail to achieve its intended task.
A fault (either a software fault or a hardware fault) may reside temporarily or permanently in the system.
A fault is a defect that gives rise to an error; it is a procedural cause that results in system malfunction, and its presence requires repair.
In other words, a fault is an accidental condition that causes a functional unit to fail to perform its required function.
(iii) FAILURE:
A failure is the deviation of the observed behavior from the specified behavior.
It occurs when faulty code is executed, leading to an incorrect outcome.
Thus, the presence of faults may lead to system failure: a failure is the manifestation of an error in the system and may be produced when a fault is encountered.
FIG: The cost of defects at various software development phases. The cost of fixing a defect (y-axis, in $, on a scale of 0 to 1200) rises steeply across the phases requirements, design, coding, and testing and release.
4.5.3 TESTING PROCESS
Testing is a disciplined process of finding and removing defects to produce defect-free software.
During testing, a test plan is prepared that specifies the name of the module to be tested, reference modules, date and time, location, name of the tester, testing tools, etc.
Software engineers design test cases while writing the source code.
The software tester runs the program on the test cases according to the test plan and observes the test results.
If the test results match the specified values, a test report is prepared.
Otherwise, a list of defects is made and the tester tries to find the causes of the defects.
The defects are then debugged out of the work products, and the errors in the program are finally fixed.
At the end, a test document is prepared covering the test report, program execution, test plan, etc.
The tested program is then integrated with the other parts of the software.
The testing process is shown in the figure below.
FIG: Testing process (source program → prepare test plan → design test cases → run program → is the result as expected? yes: prepare test report and proceed to integration; no: find causes of defects and fix errors)
During the testing process, the software tester:
Prepares the test plan and test data
Designs test cases and test scripts
Sets up the test environment
Performs testing
Tracks defects in the defect management system
Participates in the test case review meetings
Prepares the test report
Follows software standards
Test case execution
Defect tracking and statistics
Prepare test summary report
Release criteria: Pass/fail criteria for the test item and its test cases must be defined in order to judge the completion of testing an item.
Expected risks: A list of risks, both those that have occurred and those expected to occur, should be specified along with a contingency plan for each risk.
(ii) Design Test Cases
A test case is a set of inputs and expected results under which a program unit is exercised with the purpose of causing failures and detecting faults.
The intention in designing test cases is to show that the program under test is incorrect, i.e., a good test case is one that is likely to reveal a fault.
Some of the terminologies generally used in automated testing and in the design of test cases are as follows:
Test script: A test script is a procedure performed on a system under test to verify that the system functions as expected. The test case is the baseline for creating test scripts with an automated tool.
Test suite: A test suite is a collection of test cases; it is the composite of the test cases designed for a system.
Test data: Test data are needed when writing and executing test cases for any kind of test. All test values and test components are stored separately in a file as test data. Any such data used in tests are known as test data (sometimes also called the test mixture).
Test harness: A test harness is the collection of software, tools, input/output data, and configurations required for a test.
Test scenario: A test scenario is a set of test cases in which requirements are tested end to end. It may consist of independent test cases or a series of test cases that follow each other.
A test case specification is the description of the fields that are specified for testing a program unit.
A test case includes the following fields:
Test plan ID
Test case ID
Feature to be tested
Preconditions
Test script or test procedure
Test data
Expected results
Test status
Driver module
On executing the test cases, the expected results and the observed behavior are recorded in a test summary report.
Thus, the equivalence classes are disjoint, i.e., an input value belonging to one equivalence class does not belong to another.
The equivalence class partitioning method has the following two aspects:
Design of equivalence classes
Selection of test input data
EXAMPLE: Design test cases to find the characters corresponding to ASCII codes using the equivalence class partitioning method.
The range of valid (extended) ASCII values is 0 to 255. An input ASCII code therefore belongs to one of the following equivalence classes (ECs):
EC1: valid codes (0 to 255)
EC2: invalid codes below the range (less than 0)
EC3: invalid codes above the range (greater than 255)
In boundary value analysis, the test input data are selected at and near the boundaries of the equivalence classes, whereas in equivalence class partitioning the test input data are selected from within an equivalence class.
EXAMPLE: A student grading system is used to allot grades in the subjects. The grades
can be A (86 to 100), B (61 to 85), C (46 to 60), D (30 to 45) and F (below 30).
Here, there may be chances of missing marks in the calculation of grades, of considering an invalid range when allotting grades in a subject, or of wrongly calculating grades at the boundaries of a marks range. Some of the test cases are shown in the table below:
TABLE: Boundary value analysis for the student grading system
Equivalence Class Test input domain Test Data
EC1 A (>=86 to <=100) 85,87,101
EC2 B (>=61 to <=85) 60,84,86
EC3 C (>=46 to <=60) 45,59,61
EC4 D (>=30 to <=45) 29,31,46
EC5 F (<30) 28,29,30,31
The effect E1 (a successful cash withdrawal) occurs only if the causes C1, C2, and C3 are all true.
This can be represented in a cause-effect graph as shown in the figure below:
FIG: Cause-effect graph for cash withdrawal (C1 ∧ C2 ∧ C3 → E1)
The notations used for constructing a cause-effect graph are shown in the figure below:
FIG: Cause-effect graph notations: identity (C1 → E1), NOT (¬C1 → E1), AND (C1 ∧ C2 → E1), OR (C1 ∨ C2 → E1)
2. Establish the relationships among the causes and effects, and annotate them in the cause-effect graph.
3. Transform the cause-effect graph into a decision table; each column of the decision table represents a test case.
4. Generate tests from the decision table.
(iv) Error Guessing
The error guessing technique is based on guessing the error-prone areas of a program.
Error guessing is an intuitive, ad hoc process of testing.
The possible errors or error-prone situations are listed, and test cases are then written for such errors.
Software testers use their experience and knowledge to design test cases for error-prone situations.
4.8 WHITE-BOX TESTING
White-box testing is concerned with exercising the source code of a module and traversing particular execution paths.
Internal logic, such as control structures, control flow, and data structures, is considered during white-box testing.
White-box testing methods are applied first in unit testing of the source code and thereafter at the integration and system testing phases.
White-box testing is also known as glass-box testing or structural testing.
The following white-box testing methods are widely used for testing the software:
(i) Control-flow based testing
(ii) Path testing
(iii) Data-flow based testing
(iv) Mutation testing
(I) CONTROL-FLOW BASED TESTING
Control-flow based criteria concentrate on covering all aspects of the program, i.e., statements, conditions, branches, etc., at least once.
Various criteria are used to measure test adequacy in a program.
The following control-flow based coverage criteria are used in testing a program:
Statement coverage testing
Branch coverage testing
Condition coverage testing
Statement Coverage Testing
Consider, for example, a program fragment in C language such as:
int a;
if (a < 0)
    a = -a;
On executing this code with test data {-2}, statement coverage is satisfied, because the condition evaluates to true and every statement is executed.
However, this is not adequate for branch coverage, because the decision (a < 0) never evaluates to false; an additional test value such as {2} is needed.
Condition Coverage Testing
Simple conditions, such as (a < 0), are covered using the branch coverage criterion.
But there can also be complex conditions built from logical operators, such as AND, OR, and XOR.
To understand the test adequacy of complex condition coverage, see the following
program fragment in C language:
int a, b;
if (a >= 0 && b > 0)
    a = a + b;
else
    a = a - b;
Consider the following test cases for the above program:
Test case 1={a=0,b=2};
Test case 2={a=1,b=-1};
The first test case covers the if part and the second covers the else part.
Together, the two test cases therefore achieve statement coverage and branch coverage.
However, they do not achieve full condition coverage: the condition (a >= 0) never evaluates to false, so a further test case with a negative value of a is required.
(ii) PATH TESTING
Path testing is another white-box testing method, which focuses on identifying
independent paths in a program.
The process of path testing to design test cases from CFG is as follows:
Step 1: Construct CFG.
Step 2: Compute cyclomatic complexity.
Step 3: Identify independent paths.
Step 4: Prepare test cases.
A control flow graph (CFG) is also known as a flow graph or program graph.
A CFG mainly involves sequence, selection, and iteration constructs in the program flow.
These basic elements of a CFG are shown in the figure below.
(iv) MUTATION TESTING
Mutation testing is a fault-based technique that uses mutations of the program to design test cases.
The programmer computes a mutation score after mutation testing.
Let T be a test set, L the number of live mutants, D the number of dead (killed) mutants, E the number of equivalent mutants, and N the total number of mutants generated.
The mutation score of test set T, i.e., M(T), is computed as follows:
M(T) = D / (N - E)
Testing levels are discussed in detail in the following subsections.
(i) UNIT TESTING
Unit testing is the starting level of testing.
Here, a unit means a program unit: a module, component, procedure, or subroutine of a system developed by the programmer.
The aim of unit testing is to find bugs by isolating an individual module, using test stubs and test drivers, and executing test cases on it.
The environment of unit testing is shown in the figure below:
(ii) INTEGRATION TESTING
After unit testing, the modules are integrated and tested together using one of the following approaches.
Big-bang approach
In this approach, all the modules are first tested individually and then combined together and tested as a single system.
Top-down approach
The top-down integration testing order is: main system → subsystems → modules at the concrete level.
An illustration of top-down integration testing is shown in Figure 9.14.
The main module M is tested first; it then integrates with and tests subsystems S1, S2, and S3.
Thereafter, S1 integrates modules M1.1 and M1.2; S2 links with module M2.1; and S3 integrates modules M3.1, M3.2, and M3.3.
Bottom-up approach
The bottom-up integration order is: concrete-level modules → subsystems → main module.
In Figure 9.14, the bottom-level modules M1.1 and M1.2 are integrated to form subsystem S1; M2.1 forms S2; and M3.1, M3.2, and M3.3 are combined into subsystem S3.
Sandwich approach
Sandwich testing combines the top-down and bottom-up integration approaches.
During sandwich testing, the top-down part requires the lower-level modules to be available, while the bottom-up part requires the upper-level modules.
(iii) SYSTEM TESTING
Once all the modules have been tested, system testing is performed to check whether
the system satisfies the requirements (both functional and nonfunctional).
To test the functional requirements of the system, functional or black-box testing
methods are used with appropriate test cases.
For specific nonfunctional requirements, special tests are conducted to ensure that the system meets them.
Some of the nonfunctional system tests are discussed in the following paragraphs.
Performance Testing
Performance testing is carried out to check runtime attributes of the system, such as efficiency and accuracy.
Volume Testing
It checks the system's behavior when heavy amounts of data are processed or stored in the system.
For example, an operating system will be checked to ensure that the job queue will be
able to handle a large number of processes entering into the computer.
It basically checks the capacity of the data structures.
Stress Testing
In stress testing, behavior of the system is checked when it is under stress.
There are several reasons for stress, such as increase in the maximum number of
users, peak demand, extended number of operations, etc.
Security Testing
Security testing is conducted to ensure security checks at different levels in the
system.
For example, testing of e-payment system is done to ensure that the money
transaction is happening in a secure manner in e-commerce applications.
Recovery Testing
Recovery testing is performed to check whether the system can recover from losses caused by data errors, software errors, or hardware problems.
For example, the Windows operating system recovers the currently-running files if
any hardware/ software problem occurs in the system.
Compatibility Testing
Compatibility testing is performed to ensure that the new system will be able to work
with the existing system.
For example, compatibility testing checks whether files created in a newer version (e.g., Word 2007) can be opened in an older version (e.g., Word 2003) installed on the system.
Configuration Testing
Configuration testing is performed to check if a system can run on different hardware
and software configurations.
For example, if you want to run your program on another machine, then you are
required to check the configuration of its hardware and software.
Installation Testing
Installation testing is conducted to ensure that all modules of the software are
installed properly.
Installation testing covers various issues, such as automatic execution of the installation CD; correct allocation and loading of files and libraries; presence of the appropriate hardware configuration; proper network connectivity; and compatibility with the operating system platform.
Documentation Testing
Once the system becomes operational, problems may be encountered in the system.
A systematic documentation or manual can help to solve such problems.
The system is checked to see whether its proper documentation is available.
(iv) ACCEPTANCE TESTING
It is performed with the customer to ensure that the system is acceptable for delivery.
Acceptance testing is performed at two levels, i.e., alpha testing and beta testing.
Alpha Testing
Alpha testing is pilot testing in which customers are involved in exercising test cases.
In alpha testing, the customer conducts tests in the development environment, and the users try to pinpoint any problems in the system.
After alpha testing, the system is ready to be transported to the customer site for deployment.
Beta Testing
Beta testing is conducted at the customer site where the software is to be deployed
and used by the end users.
Some other tests, such as shadow testing and benchmark testing, are also performed by the client during acceptance testing to measure customer satisfaction with the system.
Shadow Testing
In this testing, the new system and the legacy system are run side-by-side and their
results are compared.
Any unusual results noted by the end users are reported to the developers so that they can take corrective action to remove the problems.
Benchmark Testing
Benchmark testing helps to assess a product's performance against other products in a number of areas, including functionality, durability, and quality.
4.10 USABILITY TESTING
Usability refers to the ease of use and comfort that users have while working with
software.
It is also known as user-centric testing.
Usability testing concentrates on the testing of user interface design, such as look and
feel of the user interface, format of reports, screen layouts, hardware and user
interactions.
Usability testing is performed by potential end users in a controlled environment.
The development organization invites selected end users to test the product in terms of ease of use, expected functionality, performance, safety and security, and outcomes.
There are three types of usability tests:
(i) Scenario test
(ii) Prototype test
(iii) Product test
SCENARIO TEST
In scenario test, end users are presented with a visionary scenario of the software.
The end users go through the dialogues of the scenario while the developer observes how they interact with and react to the system.
The developers get immediate feedback by the scenario test.
PROTOTYPE TEST
In a prototype test, end users are given a piece of software that implements the important aspects of the system.
The prototype gives end users a realistic view of the system, and the suggestions collected from them are incorporated into the system.
PRODUCT TEST
The product test is just like the prototype test except that the functional version of the
software is used for usability testing instead of the prototype version.
4.11 REGRESSION TESTING
Regression Testing is also known as program revalidation.
Regression Testing is performed whenever new functionality is added or an existing
functionality is modified in the program.
Regression testing is also needed when a subsystem is modified to get the new
version of the system.
There are various techniques of regression testing, such as
(i) Test-All,
(ii) Test Minimization,
(iii) Test Prioritization, and
(iv) Random Selection.
TEST-ALL
In test-all technique, test cases for regression testing are selected from the test cases
designed for the existing system.
Here, all the test cases are executed for regression testing of the new system.
TEST MINIMIZATION
Test minimization aims to reduce the number of test cases by selecting only the relevant ones.
It is mainly based on the code coverage concept: those test cases that cover the changed code are used for regression testing.
Even the selected test cases are further optimized to reduce their size.
During test minimization, several test cases are discarded, so test cases should be
reviewed carefully before discarding them.
TEST PRIORITIZATION
Test case prioritization techniques prioritize and schedule test cases to speed up the testing process or to achieve some other objective.
Test cases are therefore ranked according to their priority in testing.
The highest-priority test cases are executed first, followed by those of the next priority.
RANDOM SELECTION
There are also selection techniques other than prioritization by ranking.
For instance, critical test cases can be randomly selected for regression testing.
4.12 DEBUGGING APPROACHES
Debugging is a post-testing mechanism for locating and fixing errors.
Once errors are reported by testing methods, they are isolated and removed during debugging.
Debugging has two important steps:
(i) Identifying the location and nature of errors
(ii) Correcting or fixing errors
There are various methods of finding and correcting errors in a program.
The most popular debugging approaches are as follows:
(i) Brute force
(ii) Backtracking
(iii) Breakpoint
(iv) Debugging by Induction
(v) Debugging by Deduction
(vi) Debugging by Testing
Brute force
It is the simplest method of debugging but it is inefficient.
It uses memory dumps or output statements for debugging.
A memory dump is a machine-level snapshot of the values of the program's variables and statements.
However, a memory dump rarely establishes a clear correspondence between the dumped values and the error at a particular point in time.
Backtracking
In this method, debugging begins at the place where the bug was discovered, and the source code is traced backward through different paths until the exact location of the cause of the bug is reached or the cause has disappeared.
This process traces the program logic in the direction opposite to the flow of control.
Breakpoint
Breakpoint debugging is a method of tracking a program by setting breakpoints and stopping the program's execution at them.
A breakpoint is a kind of signal that tells the debugger to temporarily suspend execution of the program at a certain point.
Program execution proceeds normally until the breakpoint statement is reached.
If an error is observed there, its location is marked, and program execution then resumes until the next breakpoint.
This process continues until all errors in the program have been located.
Debugging by Induction
It is based on pattern matching and a thought process working from clues.
The process begins with collecting pertinent data about where the bug was discovered.
The patterns of successful test cases are observed and the data items are organized.
Thereafter, a hypothesis is derived relating the pattern to the error being debugged.
If the hypothesis holds, the devised theory explains the occurrence of the bug; otherwise, more data are collected to derive the causes of the errors.
Finally, the causes are removed and the errors are fixed in the program.
Debugging by Deduction
This is a kind of cause elimination method.
On the basis of a cause hypothesis, lists of possible causes of the observed failure are enumerated.
Tests are then conducted to eliminate these causes and thereby remove errors from the system.
If all the causes are eliminated, the errors are fixed.
Otherwise, the hypothesis is refined and the process is repeated.
Finally, the hypothesis is proved, ensuring that all causes have been eliminated and the system is bug free.
Debugging by Testing
It uses test cases to locate errors.
Test cases designed during testing are reused in debugging to collect information for locating the suspected errors.