
LESSON 6: SOFTWARE IMPLEMENTATION

Contents
6.0. Aims and Objectives
6.1. Introduction
6.2. Structured Coding Techniques
6.3. Coding Style
6.4. Standards and Guidelines
6.5. Documentation Guidelines
6.6. Programming Environments
6.7. Type Checking
6.8. Scoping Rules
6.9. Concurrency Mechanisms
6.10. Review Questions
6.11. Let us Sum Up
6.12. Lesson End Activities
6.13. Points for Discussion
6.14. References
6.15. Assignments
6.16. Suggested Reading
6.17. Learning Activities
6.18. Keywords
6.0. AIMS AND OBJECTIVES

• To write source code and internal documentation so that conformance of the code to its specifications can be easily verified.
• To understand the psychological and the engineering views of programming language characteristics.
• To understand the connection between programming languages and the different areas of software engineering.
• To understand the coding style and efficiency of a programming language.
6.1. INTRODUCTION
The implementation phase of software development is concerned with
translating design specifications into source code. The primary goal of
implementation is to write source code and internal documentation so that

conformance of the code to its specifications can be easily verified, and so that
debugging, testing, and modification are eased. This goal can be achieved by
making the source code as clear and straightforward as possible. Simplicity,
clarity and elegance are the hallmarks of good programs; unexplained
cleverness and complexity are indications of inadequate design and misdirected
thinking.
6.2. STRUCTURED CODING TECHNIQUES
The goal of structured coding is to sequence control flow through a
computer program so that the execution sequence follows the sequence in
which the code is written. Linear flow of control can be achieved by restricting
the set of allowed program constructs to single entry, single exit formats;
however, strict adherence to nested, single entry, single exit constructs leads to
questions concerning efficiency, questions about "reasonable" violations of
single entry, single exit, and questions about the proper role of goto
statements in structured coding.
Programming Language Characteristics
Programming languages are vehicles for communication between humans
and computers. The coding process, communication through a programming
language, is a human activity.
The Psychological characteristics of a language have an important impact
on the quality of communication. The coding process may also be viewed as
one step in the S/W development project.
The technical characteristics of a language can influence the quality of
design. The technical characteristics can affect both human and S/W
engineering concerns.
A Psychological View
A number of psychological characteristics occur as a result of programming
language design. Although these characteristics are not measurable in any way,
their manifestations in all the programming languages are recognized. The
different characteristics are,
• Uniformity – This indicates the degree to which a language uses consistent
notation, applies arbitrary restrictions, and supports syntactic or semantic
exceptions to the rule.
• Ambiguity – This is the ambiguity of a programming language as perceived by the
programmer. A compiler will always interpret a statement in one way, but
the human reader may interpret the statement differently.
• Compactness – The compactness of a programming language is an indication of the amount
of code-oriented information that must be recalled from human
memory. The characteristics of human memory have a strong impact on
the manner in which a language is used. Human memory and
recognition may be divided into synesthetic and sequential domains.
Synesthetic memory allows remembering and recognizing things as a
whole. Sequential memory provides a means for recalling the next
element in a sequence. Each of these memory characteristics affects
programming language characteristics that are called locality and
linearity.
o Locality – is the synesthetic characteristic of a programming language.
Locality is enhanced when statements may be combined into
blocks, when the structured constructs may be implemented
directly, and when design and resultant code are modular and
cohesive.
o Linearity – is a psychological characteristic that is closely
associated with the concept of maintenance of functional domain.
Extensive branching violates the linearity of processing.
• Tradition – A software engineer with a background in FORTRAN would
have little difficulty learning Pascal or C. The latter languages have a
tradition established by the former: the constructs are similar, the form
is compatible, and a sense of programming language format is maintained.
However, if the same S/W engineer is required to learn APL, LISP or
Smalltalk, tradition would be broken and the time on the learning curve would be
longer.
The psychological characteristics of a programming language have an important
bearing on our ability to learn, apply, and maintain it.
A Syntactic / Semantic Model
When a programmer applies S/W engineering methods that are
programming language-independent, semantic knowledge is used. Syntactic
knowledge, on the other hand, is language dependent, concentrating on the
characteristics of a specific language.
Semantic knowledge is the more difficult to acquire and also more
intellectually demanding to apply. All S/W engineering steps that precede
coding make heavy use of semantic knowledge. The coding step applies
syntactic knowledge that is arbitrary and instructional. When a new
programming language is learned, the new syntactic information is added to
memory. Potential confusion may occur when the syntax of a new
programming language is similar but not equivalent to the syntax of another
language.
An Engineering View
A S/W engineering view of programming language characteristics focuses on
the needs of specific S/W development project. The characteristics are,
• Ease of design to code translation.
• Compiler efficiency.
• Source code portability.
• Availability of development tools.
• Availability of libraries

• Company policy
• External requirements
• Maintainability.
The ease of design to code translation provides an indication of how closely
a programming language reflects a design representation. A language that
directly implements the structured constructs, sophisticated data structures,
specialized I/O, bit manipulation capabilities and object oriented constructs will
make translation from design to source code much easier.
The quality of the compiler is important for the actual implementation
phase. A good compiler should not only generate efficient code, but also provide
support for debugging (e.g. with clear error messages and run-time checks).
Although rapid advances in processor speed and memory density have begun to
satisfy the need for super-efficient code, many applications still require fast
programs with low memory requirements. Particularly in the area of microcomputers,
many compilers have been integrated into development systems. Here the user-
friendliness of such systems must also be considered.
The source code portability can be as follows:
• The source code may be transported from processor to processor and
compiler to compiler with little or no modification.
• The source code remains unchanged even when its environment changes.
• The source code may be integrated into different software packages with
little or no modification required because of programming language
characteristics.
The availability of development tools can shorten the time required to
generate source code and can improve the quality of the code. Many programming
languages may be acquired with a set of tools that includes debugging compilers,
source code formatting aids, built-in editing facilities, tools for source code
control, extensive subprogram libraries, browsers, cross-compilers for
microprocessor development, macroprocessor capabilities, and others.
With modular programming languages, the availability of libraries for
various application domains represents a significant selection criterion. For
example, for practically all FORTRAN compilers, libraries are available with
numerous mathematical functions, and Smalltalk class libraries contain a
multitude of general classes for constructing interactive programs. Such
libraries can also be used from C or Modula-2 if the compiler
supports linking routines from different languages. On the other hand, there
are libraries that are available only in compiled form and usable only in
connection with a certain compiler.
Often a particular company policy influences the choice of a programming
language. Frequently the language decision is made not by the implementers,
but by managers who want to use only a single language company-wide for
reasons of uniformity. Such a decision was made by the U.S. Department of
Defense, which mandates the use of Ada for all programming in the military
sector in the U.S. (and thus also in most other NATO countries). Such global
decisions have also been made in the area of telecommunications, where many
programmers at distributed locations work for decades on the same product.
Even in-house situations, such as the education of the employees or a
module library built up over years, can force the choice of a certain language. A
company might resist switching to a more modern programming language to
avoid training costs, the purchase of new tools, and the re-implementation of
existing software.
Sometimes external requirements force the use of a given programming
language. Contracts for the European Union increasingly prescribe Ada, and
the field of automation tends to require programs in FORTRAN or C. Such
requirements arise when the client’s interests extend beyond the finished
product in compiled form and the client plans to do maintenance work and
further development in an in-house software department. Then the education
level of the client’s programming team determines the implementation language.
Maintainability of source code is important for all nontrivial software
development efforts. Maintenance cannot be accomplished until the S/W is
understood. The earlier elements of the S/W configuration provide a
foundation for understanding, but the source code must be read and modified
according to changes in the design. The self-documenting characteristics of a
language have a strong influence on maintainability.
6.3. CODING STYLE
The readability of a program depends on the programming language used
and on the programming style of the implementer. Writing readable programs is
a creative process. Once the source code has been generated the function of a
module should be apparent without reference to a design specification. The
programming style of the implementer influences the readability of a program
much more than the programming language used. A stylistically well-written
FORTRAN or COBOL program can be more readable than a poorly written
Modula-2 or Smalltalk program. Coding style encompasses a coding philosophy
that stresses simplicity and clarity. The elements of style include the following:
i. Structuredness
ii. Expressiveness
iii. Data declaration
iv. Statement construction
v. Input/Output
vi. Outward form
vii. Efficiency
This refers to both the design and the implementation.
Although efficiency is an important quality attribute, we do not deal with
questions of efficiency here. Only when we have understood a problem and its
solution correctly does it make sense to examine efficiency.

6.3.1. Structuredness
 Decomposing a software system with the goal of mastering its complexity
through abstraction and striving for comprehensibility (Structuring in the
large.)
 Selecting appropriate program components in the algorithmic formulation of
subsolutions (Structuring in the small.)
a. Structuring in the large
 Classes and methods for object-oriented decomposition
 Modules and procedures assigned to the modules.
 During implementation the components defined in the design must be
realized in such a way that they are executable on a computer. The
medium for this translation process is the programming language.
 For the implementation of a software system, it is important that the
decomposition defined in the design be expressible in the programming
language; i.e. that all components can be represented in the chosen
language.
b. Structuring in the small
 The understanding and testing of algorithms require that the algorithms
be easy to read.
 Many complexity problems ensue from the freedom taken in the use of
GOTO statements, i.e., from the design of unlimited control flow
structures.
 The fundamental idea behind structured programming is to use only
control flow structures with one input and one output in the formulation
of algorithms. This leads to a correspondence between the static written
form of an algorithm and its dynamic behavior. What the source code
lists sequentially tends to be executed chronologically. This makes
algorithms comprehensible and easier to verify, modify or extend.
 Every algorithm can be represented by a combination of the control
elements sequence, branch and loop (all of which have one input and one
output) ([Böhm 1966]).
 D-diagrams (named after Dijkstra): diagrams for algorithms consisting of
only the elements sequence, branch and loop.
Note: If we describe algorithms by a combination of these elements, no
GOTO statement is necessary. However, programming without GOTO
statements alone cannot guarantee structuredness. Choosing
inappropriate program components produces poorly structured programs
even if they contain no GOTO statements.
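As a small illustration of this idea, the following C++ sketch (the function and variable names are invented for this example) contrasts an unstructured search written with GOTO-style jumps against an equivalent formulation built only from the D-diagram elements sequence, branch and loop:

// Unstructured version: control jumps around with goto.
int IndexOfUnstructured(const int values[], int count, int key) {
    int i = 0;
loop:
    if (i >= count) goto notFound;
    if (values[i] == key) goto found;
    i = i + 1;
    goto loop;
found:
    return i;
notFound:
    return -1;
}

// Structured version: a single loop with one entry and one exit;
// the written order of the statements matches their execution order.
int IndexOfStructured(const int values[], int count, int key) {
    int position = -1;                    // result of the search
    int i = 0;
    while (i < count && position == -1) { // loop element
        if (values[i] == key) {           // branch element
            position = i;
        }
        i = i + 1;                        // sequence element
    }
    return position;
}

Both functions compute the same result, but only the second can be read from top to bottom without tracing jumps.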
6.3.2. Expressive power (Code Documentation)
 The implementation of software systems encompasses the naming of objects
and the description of actions that manipulate these objects.

90
 The choice of names becomes particularly important in writing an algorithm.
Recommendations (illustrated in the sketch after this list):
• Choose expressive names, even at the price of long identifiers. The
writing effort pays off each time the program must be read, particularly
when it is to be corrected and extended long after its implementation. For
local identifiers (where their declaration and use adjoin) shorter names
suffice.
• If you use abbreviations, then use only ones that the reader of the
program can understand without any explanation. Use abbreviations
only in such a way as to be consistent with the context.
• Within a software system, assign names in only one language (e.g. do not
mix English and Vietnamese).
• Use upper and lower case to distinguish different kinds of identifiers (e.g.,
upper case first letter for data types, classes and modules; lower case for
variables) and to make long names more readable
(e.g. CheckInputValue).
• Use nouns for values, verbs for activities, and adjectives for conditions to
make the meaning of identifiers clear (e.g., width, ReadKey and valid,
respectively).
• Establish your own rules and follow them consistently.
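A brief C++ sketch of these naming conventions (the identifiers are taken from the examples above or invented for illustration):

// Upper-case first letters for data types and classes, lower case for variables;
// nouns for values, verbs for activities, adjectives for conditions.
const int MaxLineWidth = 80;      // expressive name, even though it is longer

class InputChannel {
public:
    char ReadKey();               // verb: an activity
};

void CheckInputValue(int width, bool valid);   // nouns and an adjective

void CopyLine(const char source[], char target[]) {
    // Short names suffice for purely local identifiers whose declaration
    // and use adjoin:
    for (int i = 0; i < MaxLineWidth; ++i) {
        target[i] = source[i];
    }
}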
 Good programming style also finds expression in the use of comments: they
contribute to the readability of a program and are thus important program
components. Correct commenting of programs is not easy and requires
experience, creativity and the ability to express the message concisely and
precisely.
The rules for writing comments:
• Every system component (every module and every class) should begin
with a detailed comment that gives the reader information about several
questions regarding the system component:
- What does the component do?
- How (in what context) is the component used?
- Which specialized methods, etc. are used?
- Who is the author?
- When was the component written?
- Which modifications have been made?
Example:
/* FUZZY SET CLASS: FSET
FSET.CPP 2.0, 5 Sept. 2007

Lists operations on fuzzy sets
Written by Suresh Babu
*/
• Every procedure and method should be provided with a comment that
describes its task (and possibly how it works); a short sketch follows these
rules. This applies particularly to the interface specifications.
• Explain the meaning of variables with a comment.
• Program components that are responsible for distinct subtasks should be
labeled with comments.
• Statements that are difficult to understand (e.g. in tricky procedures or
program components that exploit features of a specific computer) should
be described in comments so that the reader can easily understand them.
• A software system should contain as few and as concise comments as
possible, but as many adequately detailed comments as necessary.
• Ensure that program modifications not only affect declarations and
statements but also are reflected in updated comments. Incorrect
comments are worse than none at all.
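Following the same conventions as the FSET example above, a procedure-level interface comment might look like this (the operation and its parameters are hypothetical):

class FSET;   // fuzzy set class, see the header comment above

/* Union
   Task:   computes the union of the fuzzy sets a and b.
   Input:  a, b - fuzzy sets of equal size
   Output: returns a new fuzzy set whose membership values are the
           element-wise maxima of a and b
   Note:   used by all set-combining operations of class FSET
*/
FSET Union(const FSET& a, const FSET& b);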
Note: These rules have deliberately been kept general because there are no rules
that apply uniformly to all software systems and every application domain.
Commenting software systems is an art, just like the design and implementation
of software systems.
6.3.3. Data declaration
The style of the data description is established when code is generated. A
number of relatively simple guidelines can be established to make data more
understandable and maintenance simpler. The order of data declarations
should be standardized even if the programming language has no mandatory
requirements. Ordering makes attributes easier to find, expediting testing,
debugging and maintenance. When multiple variable names are declared with
a single statement, an alphabetical ordering of names should be used. Similarly,
labeled global data should be ordered alphabetically. If complex data
structures are used in the design, comments should be used to explain
peculiarities that are present in the programming language implementation.
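A minimal C++ sketch of these declaration guidelines (all names are invented):

// Declarations grouped in a fixed order: constants, then data types, then variables.
const int MaxAccounts = 100;

struct Account {
    double balance;   // current balance
    int    number;    // unique account number
};

int count, index, total;          // names declared together, listed alphabetically

// Complex data structure: the comment explains the language-specific peculiarity.
Account accounts[MaxAccounts];    // fixed-size array; only elements 0..count-1 are valid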
6.3.4. Statement Construction
The construction of software logical flow is established during design. The
construction of individual statements is a part of the coding step. Statement
construction should abide by one overriding rule: each statement should be
simple and direct, and code should not be complicated in an effort to gain
efficiency. Many programming languages allow multiple statements per line. The
space-saving aspects of this feature are hardly justified by the poor readability
that results. Individual source code statements can be simplified by the
following (illustrated in the sketch after this list):
• Avoiding the use of complicated conditional tests.

• Eliminating tests on negative conditions.
• Avoiding heavy nesting of loops or conditions.
• Using parentheses to clarify logical or arithmetic expressions.
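A short sketch of these rules applied to a single fragment (the functions and variables are invented):

// Hard to read: a negative condition, nested ifs, several statements per line.
double AverageUnclear(int total, int count) {
    double average = 0.0;
    if (!(count <= 0)) { if (!(total == 0)) { average = (double)total / count; } }
    return average;
}

// Simpler and more direct: a positive condition, one statement per line,
// and parentheses that clarify the logical expression.
double AverageClear(int total, int count) {
    double average = 0.0;
    if ((count > 0) && (total != 0)) {
        average = (double)total / count;
    }
    return average;
}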
6.3.5. Input / Output
The style of input and output is established during S/W requirements
analysis and design, not during coding. Input and output style will vary with the
degree of human interaction. For batch-oriented I/O, logical input organization,
meaningful input/output error checking, good I/O error recovery, and rational
output report formats are desirable characteristics. For interactive I/O, a
simple guided input scheme, extensive error checking and recovery, human-
engineered output, and consistency of I/O format are the primary concerns.
6.3.6. Outward form
 Beyond name selection and commenting, the readability of software systems
also depends on their outward form.
Recommended rules for the outward form of programs (a sketch follows the list):
• For every program component, the declarations (of data types, constants,
variables, etc.) should be distinctly separated from the statement section.
• The declaration sections should have a uniform structure when possible,
e.g. using the following sequence: constants, data types, classes and
modules, methods and procedures.
• The interface description (parameter lists for methods and procedures)
should separate input, output and input/output parameters.
• Keep comments and source code distinctly separate.
• The program structure should be emphasized with indentation.
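A small sketch of these layout rules (the component and its names are invented):

// Declaration section: constants, data types, variables - distinctly
// separated from the statement section below.
const double Pi = 3.14159;

struct Circle {
    double radius;
};

// Statement section; the interface separates input and output parameters,
// and the program structure is emphasized with indentation.
void ComputeArea(const Circle& c,     /* input  */
                 double&       area)  /* output */
{
    if (c.radius > 0.0) {
        area = Pi * c.radius * c.radius;
    } else {
        area = 0.0;
    }
}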
6.3.7. Efficiency
In well-engineered systems, there is a natural tendency to use critical
resources efficiently. Processor cycles and memory locations are viewed as
critical resources. Three areas of efficiency should be considered when code
is developed. They are,
a) Code Efficiency
b) Memory Efficiency
c) Input /Output Efficiency
a. Code Efficiency: The efficiency of source code is directly connected to the
efficiency of the algorithms defined during detailed design. However, coding
style can affect execution speed and memory requirements. The following
guidelines can be applied when the detailed design is translated into code
(see the sketch after the list).
• Simplify arithmetic and logical expressions before committing to code.

• Carefully evaluate nested loops.
• Avoid the use of multi-dimensional arrays.
• Avoid the use of pointers and complex lists.
• Don't mix data types.
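For instance, the first guideline can be illustrated by the following sketch (the functions and variables are invented); the loop-invariant arithmetic is simplified before the code is committed:

// Before: the sub-expression price * taxRate is re-evaluated on every pass.
void ComputeCostsSlow(const double quantity[], double cost[], int n,
                      double price, double taxRate) {
    for (int i = 0; i < n; ++i) {
        cost[i] = quantity[i] * price * taxRate;
    }
}

// After: the arithmetic expression is simplified outside the loop,
// so each iteration does less work.
void ComputeCostsFast(const double quantity[], double cost[], int n,
                      double price, double taxRate) {
    const double unitCost = price * taxRate;
    for (int i = 0; i < n; ++i) {
        cost[i] = quantity[i] * unitCost;
    }
}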
b. Memory Efficiency – Memory restrictions in large machines like
mainframes and workstations are a thing of the past. Low-cost memory provides
a large physical address space, and virtual memory management provides
application software with an enormous logical address space. Memory
efficiency for such environments cannot be equated with minimum memory used.
Memory efficiency must take into account the paging characteristics of an
operating system. Maintenance of functional domain through the structured
constructs is an excellent method for reducing paging and thereby increasing
efficiency.
c. Input/Output Efficiency – Two classes of I/O should be considered when
efficiency is discussed. They are,
• I/O directed to human (user),
• I/O directed to another device.
Input supplied by a user and output produced for a user are efficient when the
information can be supplied or understood with an economy of intellectual effort.
Efficiency of I/O to other hardware is an extremely complicated topic; the
following guidelines improve I/O efficiency (see the sketch after the list).
• The number of I/O requests should be minimized.
• All I/O should be buffered to reduce communication overhead
• I/O to secondary memory devices should be blocked
• I/O to terminals and printers should recognize features of the device that
could improve quality or speed.
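A hedged C++ sketch of the first two guidelines (the file name and data are invented): the second version opens the stream once and lets the stream buffer collect the output, so the number of I/O requests is minimized.

#include <fstream>
#include <vector>

// Inefficient: one open/write/close request per value.
void WriteValuesUnbuffered(const std::vector<double>& samples) {
    for (double s : samples) {
        std::ofstream out("samples.dat", std::ios::app);  // reopens the file every time
        out << s << '\n';
    }
}

// Better: a single stream; output is collected in its buffer and written in larger blocks.
void WriteValuesBuffered(const std::vector<double>& samples) {
    std::ofstream out("samples.dat");
    for (double s : samples) {
        out << s << '\n';
    }
}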
6.4. STANDARDS AND GUIDELINES
Coding standards are specifications for a preferred coding style. Given a
choice of ways to achieve an effect, a preferred way is specified. Coding
standards are often viewed by programmers as mechanisms to constrain and
devalue the programmer’s creative problem solving skills. It is desirable that all
programmers on a software project adopt similar coding styles so that code of
uniform quality is produced. This does not mean that all programmers must
think alike, or that they must slavishly implement all algorithms in exactly the
same manner. Indeed, the individual style of each programmer on a project
remains evident even when strict adherence to standards of programming style is
observed.
A programming standard might specify items such as
1. Goto Statements will not be used.

2. The nesting depth of program constructs will not exceed five levels.
3. Subroutine length will not exceed 30 lines.
A Guideline rephrases these specifications in the following manner:
1. The use of goto statements should be avoided in normal circumstances.
2. The nesting depth of program constructs should be five or less in normal
circumstances.
3. The number of executable statements in a subprogram should not
exceed 30 in normal circumstances.
4. Departure from normal circumstances requires approval by the project
leader.

6.5. DOCUMENTATION GUIDELINES


Computer software includes the source code for a system and all the
supporting documents generated during analysis, design, implementation,
testing, and maintenance of the system. Internal documentation includes
standard prologues for compilation units and subprograms, the self-
documenting aspects of the source code, and the internal comments embedded
in the source code. Program unit notebooks provide mechanisms for organizing
the work activities and documentation efforts of individual programmers. This
section describes some aspects of supporting documents, the use of program
unit notebooks, and some guidelines for internal documentation of source code.
6.5.1. Supporting Documents
Requirements specifications, design documents, test plans, user’s
manuals, installation instructions, and maintenance reports are examples of
supporting documents. These documents are the products that result from
systematic development and maintenance of computer software.
6.5.2. Program Unit Notebooks
A program unit is a unit of source code that is developed and/or
maintained by one person; that person is responsible for the unit. In a well-
designed system, a program unit is a subprogram or group of subprograms that
provides a well-defined function or forms a well-defined subsystem. A program unit
is also small enough, and modular enough, that it can be thoroughly tested in
isolation by the programmer who develops or modifies it. Program unit
notebooks are used by individual programmers to organize their work activities,
and maintain the documentation for their program units.
6.5.3. Internal Documentation
Internal documentation consists of a standard introduction for each
program unit and compilation unit, the self-documenting aspects of the source
code, and the internal comments embedded in the executable portion of the
code.
Check your progress

LESSON 7: SOFTWARE QUALITY ASSURANCE
Contents
7.0. Aim and Objectives
7.1. Introduction
7.2. Quality Assurance
7.3. Walkthroughs and Inspections
7.4. Static Analysis
7.5. Symbolic Execution
7.6. Review Questions
7.7. Let us Sum Up
7.8. Learning Activities
7.9. Points for Discussion
7.10. References

7.0. AIM AND OBJECTIVES

 To introduce the various aspects involved in delivering quality software.


 To introduce different quality standards and how they affect the software
industry.

7.1. INTRODUCTION

The American Heritage Dictionary defines quality as "a characteristic or
attribute of something". As an attribute of an item, quality refers to measurable
characteristics. When we examine an item based on its measurable
characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance.
In software development, quality of design encompasses requirements,
specifications, and the design of the system. Quality of conformance is an
issue focused primarily on implementation. If the implementation follows the
design and the resulting system meets its requirements and performance goals,
conformance quality is high.

7.2. QUALITY ASSURANCE

Quality Assurance is a "planned and systematic pattern of all actions
necessary to provide adequate confidence that the item or product conforms to
established technical requirements." It also consists of the auditing and
reporting functions of management.

Need for Quality Assurance
 Human beings cannot work in error free manner
 Human beings are blind to their own errors
 Cost of fixing errors increases exponentially with time since their
occurrence
 Systems Software itself is not bug free
 Customers should not find bugs when using software
 Post-release debugging is more expensive
 An organization must learn from its mistakes i.e. not repeat its mistakes
 Appropriate resources need to be allotted e.g. people with right skills
 Error detection activity itself is not totally error free
 A group of people cannot work together without standards, processes,
guidelines, etc.,
The goal of quality assurance is to provide management with the data
necessary to be informed about the product quality, thereby gaining insight and
confidence that product quality is meeting its goals. If the data provided
through quality assurance identify problems, it is management’s responsibility
to address the problems and apply the necessary resources to resolve the
quality issues.
7.2.1. Cost of Quality
Cost of quality refers to the total cost of all efforts to achieve
product/service quality, and includes all work to ensure conformance to requirements,
as well as all work resulting from nonconformance to requirements. Quality
costs may be divided into costs associated with prevention, appraisal, and
failure.
• Prevention costs include quality planning, formal technical reviews,
test equipment, and training.
• Appraisal costs include activities to gain insight into product condition
the first time through each process. Examples: in-process and inter-
process inspection, equipment calibration and maintenance, and testing.
• Failure costs are costs that would disappear if no defects appeared
before shipping a product to customers. Failure costs may be subdivided
into internal failure costs and external failure costs.
 Internal failure costs include rework, repair, and failure mode
analysis.
 External failure costs include complaint resolution, product return
and replacement, help line support, and warranty work.

7.2.2. Need for Software Quality
• Competitive pressure: Today’s business is extremely competitive,
and the software industry is no exception. Companies must
continuously make and sustain improvements in cost and quality to
remain in business.
• Customer satisfaction: Acquiring a new customer is far more
expensive than retaining a current one. Further, few unsatisfied
customers might complain but the vast majority simply takes their
business elsewhere.
• Management of change: In an industry like software where new tools
and methods arrive at a faster rate than it takes to train staff in their
use, it is especially important that organizations fundamentally
change their management styles and their workforce’s attitudes so as
to effect significant improvements.
• Cost of defects: Defects in software may lead to huge losses to the
customer. In case of mission-critical systems, defects can cause
serious harm to the end-user.
7.2.3. Software quality attributes
Software quality is a broad and important field of software engineering.
Software quality is addressed by standardization bodies:
ISO, ANSI, IEEE, etc.,
Software quality attributes (see Figure 7.1)
Figure 7.1 Software quality attributes: correctness, dependability, user friendliness (adequacy, learnability, robustness), maintainability (readability, extensibility, testability), efficiency, portability


a. Correctness
The extent to which a program satisfies its specifications and fulfils its user’s
mission and objectives.

b. Reliability
Reliability of a software system derives from
 Correctness, and
 Availability.
The behavior over time for the fulfillment of a given specification depends on
the reliability of the software system.
Reliability of a software system is defined as the probability that this system
fulfills a function (determined by the specifications) for a specified number of
input trials under specified input conditions in a specified time interval
(assuming that hardware and input are free of errors).
A software system can be seen as reliable if a test produces a low error rate,
i.e., a low probability that an error will occur in a specified time interval.
The error rate depends on the frequency of inputs and on the probability that
an individual input will lead to an error.
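This relationship can be written as a small formula (a rough formalization introduced here only for illustration; the symbols do not appear in the original text): if p is the probability that an individual input leads to an error and f is the number of inputs processed per unit time, then the error rate λ is approximately

    λ ≈ f · p

and, under the usual assumption of a constant error rate, the probability of error-free operation over a time interval of length t is roughly e^(−λ·t).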
c. User friendliness:
 Adequacy
 Learnability
 Robustness
Adequacy
Factors for the requirement of Adequacy:
1. The input required of the user should be limited to only what is
necessary. The software system should expect information only if it is
necessary for the functions that the user wishes to carry out. The
software system should enable flexible data input on the part of the user
and should carry out plausibility checks on the input. In dialog-driven
software systems, we give particular importance to the uniformity, clarity
and simplicity of the dialogs.
2. The performance offered by the software system should be adapted to the
wishes of the user with consideration given to extensibility; i.e., the
functions should be limited to those in the specification.
3. The results produced by the software system:
The results that a software system delivers should be output in a clear
and well-structured form and be easy to interpret. The software system
should afford the user flexibility with respect to the scope, the degree of
detail, and the form of presentation of the results. Error messages must
be provided in a form that is comprehensible for the user.

Learnability
Learnability of a software system depends on:
 The design of user interfaces
 The clarity and the simplicity of the user instructions (tutorial or user
manual).
The user interface should present information as close to reality as possible
and permit efficient utilization of the software's functions.
The user manual should be structured clearly and simply and be free of all
dead weight. It should explain to the user what the software system should do,
how the individual functions are activated, what relationships exist between
functions, and which exceptions might arise and how they can be corrected. In
addition, the user manual should serve as a reference that supports the user in
quickly and comfortably finding the correct answers to questions.
Robustness (strong)
Robustness reduces the impact of operational mistakes, erroneous input
data, and hardware errors.
A software system is robust if the consequences of an error in its operation,
in the input, or in the hardware, in relation to a given application, are inversely
proportional to the probability of the occurrence of this error in the given
application.
 Frequent errors (e.g. erroneous commands, typing errors) must be
handled with particular care.
 Less frequent errors (e.g. power failure) can be handled less rigorously, but
still must not lead to irreversible consequences.
d. Maintainability
The effort required to locate and fix an error or introduce new features in an
operational program.
The maintainability of a software system depends on its:
 Readability
 Extensibility
 Testability
Readability
Readability of a software system depends on its:
 Form of representation
 Programming style

 Consistency
 Readability of the implementation programming languages
 Structuredness of the system
 Quality of the documentation
 Tools available for inspection
Extensibility
Extensibility allows required modifications at the appropriate locations to be
made without undesirable side effects.
Extensibility of a software system depends on its:
 Structuredness (modularity) of the software system
 Possibilities that the implementation language provides for this purpose
 Readability (to find the appropriate location) of the code
 Availability of comprehensible program documentation
Testability
Testability: suitability for allowing the programmer to follow program execution
(run-time behavior under given conditions) and for debugging.
The testability of a software system depends on its:
 Modularity
 Structuredness
Modular, well-structured programs prove more suitable for systematic, stepwise
testing than monolithic, unstructured programs.
Testing tools and the possibility of formulating consistency conditions
(assertions) in the source code reduce the testing effort and provide important
prerequisites for the extensive, systematic testing of all system components.
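A brief C++ sketch of such a consistency condition formulated directly in the source code (the function is invented for illustration):

#include <cassert>

// The assertions document and check the conditions under which the function
// may be called; during testing a violated condition stops the program at the
// faulty spot instead of producing a silently wrong result.
double Median(const double values[], int count) {
    assert(count > 0);                         // precondition: non-empty array
    for (int i = 1; i < count; ++i) {
        assert(values[i - 1] <= values[i]);    // consistency condition: sorted input
    }
    if (count % 2 == 1) {
        return values[count / 2];
    }
    return (values[count / 2 - 1] + values[count / 2]) / 2.0;
}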
e. Efficiency
Efficiency: ability of a software system to fulfill its purpose with the best
possible utilization of all necessary resources (time, storage, transmission
channels, and peripherals).
f. Portability
Portability: the ease with which a software system can be adapted to run on
computers other than the one for which it was designed.
The portability of a software system depends on:
 Degree of hardware independence

 Implementation language
 Extent of exploitation of specialized system functions
 Hardware properties
 Structuredness: System-dependent elements are collected in easily
interchangeable program components.
A software system can be said to be portable if the effort required for porting it
proves significantly less than the effort necessary for a new implementation.
7.2.4. The importance of quality criteria
The quality requirements encompass all levels of software production.
Poor quality in intermediate products always proves detrimental to the quality
of the final product.
• Quality attributes that affect the end product
• Quality attributes that affect intermediate products
Quality of end products [Bons 1982]:
• Quality attributes that affect their application: These influence the
suitability of the product for its intended application (correctness,
reliability and user friendliness).
• Quality attributes related to their maintenance: These affect the
suitability of the product for functional modification and extensibility
(readability, extensibility and testability).
• Quality attributes that influence their portability: These affect the
suitability of the product for porting to another environment (portability
and testability).
Quality attributes of intermediate products:
• Quality attributes that affect the transformation: These affect the
suitability of an intermediate product for immediate transformation to a
subsequent (high-quality) product (correctness, readability and
testability).
• Quality attributes that affect the quality of the end product: These
directly influence the quality of the end product (correctness, reliability,
adequacy, readability, extensibility, testability, efficiency and portability).

7.2.5 The effects of quality criteria on each other

Effect on →            Corr  Dep  Adeq  Learn  Rob  Read  Mod/Ext  Test  Eff  Port
Correctness              .    +     0     0     +    0      0       0    0    0
Dependability            0    .     0     0     +    0      0       0    -    0
Adequacy                 0    0     .     +     0    0      0       0    +    -
Learnability             0    0     0     .     0    0      0       0    -    0
Robustness               0    +     +     0     .    0      0       +    -    0
Readability              +    +     0     0     +    .      +       +    -    +
Modifiability/ext.       +    +     0     0     +    0      .       +    -    +
Testability              +    +     0     0     +    0      +       .    -    +
Efficiency               -    -     +     -     -    -      -       -    .    -
Portability              0    0     -     0     0    0      +       0    -    .

Table 1.1 Mutual effects between quality criteria ("+": positive effect, "-": negative effect, "0": no effect; "." marks the attribute itself; column abbreviations follow the row labels: correctness, dependability, adequacy, learnability, robustness, readability, modifiability/extensibility, testability, efficiency, portability)
7.2.6. Software Quality Assurance
• Refers to umbrella activities concerned with ensuring quality
• Covers quality activities associated with all line and staff functions.
• Scope not restricted only to fault detection and correction.

• Oriented towards both prevention and excellence.
• Involves taking care in design, production and servicing
• Assurance comes from:
o Defining appropriate standards, procedures, guidelines, tools, and
techniques.
o Providing people with right skills, adequate supervision and
guidance.
Software Quality Assurance is defined as:
• Conformance to explicitly stated functional and performance
requirements,
• Conformance to explicitly documented development standards, and
• Conformance to implicit characteristics that are expected of all
professionally developed software.
The above definition serves to emphasize three important points:
1. Software requirements are the foundation from which quality is
measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the
manner in which software is engineered. If the criteria are not
followed, lack of quality will almost surely result.
3. There is a set of implicit requirements that often goes unmentioned. If
software conforms to its explicit requirements but fails to meet
implicit requirements, software quality is suspect.
Software Quality Assurance (SQA) Activities
The SQA group has responsibility for quality assurance planning, record
keeping, analysis, and reporting. They assist the software engineering team in
achieving a high quality end product.
The activities performed by an independent SQA group are:
1. Prepare a SQA Plan for a project
The plan is developed during project planning and is reviewed by
all interested parties. The plan identifies:
• Evaluations to be performed
• Audits and reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting

• Documents to be produced by the SQA group
• Amount of feedback provided to software project team
2. Participates in the development of the project’s software process
description.
3. Reviews software engineering activities to verify compliance with the
defined software process.
4. Audits designated software work products to verify compliance with
those defined as part of the software process.
5. Ensures that deviations in software work and work products are
documented and handled according to a documented procedure.
6. Records any noncompliance and reports to senior management.
7. Coordinates the control and management of change.
8. Helps to collect and analyze software metrics.

7.3. WALKTHROUGHS AND INSPECTIONS

The walkthrough is a procedure that is commonly used to check the


correctness of models produced by structured systems analysis, although its
techniques are applicable to other design methodologies. Such checking has
always been necessary in the system life cycle. Walkthroughs differ from earlier
methods in that they recommend a specific checking procedure and
walkthrough team structure.
A walkthrough team usually consists of a reviewee and three to five reviewers.
On a one- or two-person project it may not be cost-effective to assemble a review
team; however, the walkthrough technique can be beneficial with only one or
two reviewers. In this case a walkthrough formalizes the process of
explaining your work to a colleague. The team must check that the model:
 Meets system objectives;
 Is a correct representation of the system;
 Has no omissions or ambiguities;
 Will do the job it is supposed to do; and
 Is easy to understand.
7.3.1. Software Reviews
Software reviews are a filter for the software engineering process.
Reviews are applied at various points during software development to uncover
errors, which can be removed. So Software reviews help to eliminate defects in
the software work products that occur as a result of improper analysis, design,

and coding. Defect implies a quality problem that is discovered after the
software has been shipped to end-users and so needs to be eliminated.
Any review employs the diversity of a group of people to
1. identify needed improvements in the product
2. confirm those parts in which improvement is neither desired nor
needed
3. achieve work of uniform, or at least predictable, quality to make
technical work more manageable.
Reviews can either be formal or informal. Formal technical reviews are more
effective from a quality assurance standpoint and are an effective means for
improving software quality.

7.3.2. Formal Technical Reviews


A Formal Technical Review (FTR) is a software quality assurance activity that
is performed by software engineers. The objectives of the FTR are:
1. To uncover errors in function, logic or implementation
2. To verify the software under review meets requirements
3. To ensure Software is as per predefined standards
4. To achieve uniform development of software
5. To make projects more manageable
The FTR is actually a class of reviews that includes walkthroughs,
inspections, round-robin reviews, and other small group technical assessments
of software. Each FTR is conducted as a meeting, and will be successful only if
it is properly planned, controlled and attended.
7.3.3. The Review Meeting
Every review meeting should focus on a specific part of the overall software. It
should be:
1. Attended by 3-5 people
2. Should be well planned, but not require more than two hours of work per
person
3. Of less than two hours duration
The focus of the FTR is on a work product – a component of the
software. The individual who has developed the work product is the Producer.
The Producer informs the Project Leader that the work product is complete, and
a review is required. The Project Leader contacts a Review Leader who evaluates
the work product for readiness, generates copies, and distributes them to two

or three reviewers for advance preparation. Concurrently, the Review Leader
also reviews the work product and establishes an agenda for the review meeting.
The Review Leader, all reviewers and the Producer attend the Review
Meeting. The producer presents the work product and the reviewers raise issues
if any. One of the reviewers takes on the role of recorder. The recorder records
all important issues raised during the review, like problems and errors.
After the review, all attendees of the FTR must decide whether to
1. Accept the work product without modification,
2. Reject the work product due to severe errors, or
3. Accept the work product provisionally.
Once the decision is made, all FTR attendees complete a Sign-off,
indicating their participation in the review, and their concurrence with the
review team’s findings.
During the FTR, it is important to summarize the issues and produce a
Review Issues List and a Review Summary Report. The Review Summary Report
becomes part of the Project Historical Record, and contains information
about what was reviewed, who reviewed it, and the findings and conclusions of
the review. This report is distributed to the Project Leader and other interested
parties. The Review Issues List serves to identify problem areas within the
product, and to serve as an action item checklist that guides the Producer, as
corrections are made. It is important to establish a follow-up procedure to
ensure that items on the issues list have been properly corrected.
A minimum set of guidelines for FTR is:
1. Review the product, not the Producer.
2. Set an agenda, and maintain it.
3. Limit arguments.
4. List out problem areas, but don’t attempt to solve every problem
noted.
5. Take written notes.
6. Limit the number of participants, and insist on advance preparation.
7. Develop a checklist for each work product that is likely to be reviewed.
8. Allocate resources and time schedules for FTRs.
9. Conduct meaningful training for all reviewers.
10. Review your earlier reviews.
7.3.4. Inspections

Inspections, like walkthroughs, can be used throughout the software life
cycle to assess and improve the quality of the various work products. Inspection
teams consist of one to four members (producer, inspector, moderator, reader)
who are trained for their tasks. The producer is the person whose product is under
review, the inspector evaluates the product, and the moderator controls the
review process. There is also a reader, who may guide the inspectors through the
product. Inspections are conducted in a similar manner to walkthroughs, but
more structure is imposed on the sessions, and each participant has a definite
role to play. Each of the roles in an inspection team would have well-defined
responsibilities within the inspection team. Fagan suggests a procedure made
up of five steps, namely:
1. Overview, where the producers of the work explain their work to inspectors.
2. Preparation, where the inspectors prepare the work and the associated
documentation for inspection.
3. Inspection, which is a meeting moderated by the moderator and guided by a
reader who goes through the work with the inspectors.
4. Rework, which is any work required by the producers to correct any
deficiencies.
5. Follow-up, where a check is made to ensure that any deficiencies have been
corrected.
The important thing here is that the inspections are formal and have a
report that must be acted on. It is also important that any recommendations
made during inspections be acted upon and followed up to ensure that any
deficiencies are corrected.

7.4. STATIC ANALYSIS

Static analysis is a technique for assessing the structural characteristics


of source code, design specifications, or any notational representation that
conforms to well-defined syntactic rules. The present discussion is restricted to
static analysis of source code.
Static program analysis
 Static program analysis seeks to detect errors without direct execution of
the test object.
 The activities involved in static testing concern syntactic, structural and
semantic analysis of the test object.
 The goal is to localize, as early as possible, error-prone parts of the test
object.
 The most important activities of static program analysis are:
• Code inspection

LESSON 8: SOFTWARE TESTING
Contents
8.0. Aims and Objectives
8.1. Introduction
8.2. Levels of Testing
8.3. Unit Testing
8.4. System Testing
8.5. Acceptance Test
8.6. White Box Testing
8.7. Black Box Testing
8.8. Testing for Specialized Environments
8.9. Formal Verification
8.10. Debugging
8.11. Review Questions
8.12. Let us Sum Up
8.13. Lesson End Activities
8.14. Points for Discussion
8.15. References

8.0. AIMS AND OBJECTIVES

• To understand the types of testing done on software


• To understand the different approaches to testing
• To do testing effectively by designing proper test cases

8.1. INTRODUCTION

In a software development project, errors can be injected at any stage


during the development. Techniques are available for detecting and eliminating
errors that originate in each phase. However, some requirement errors and
design errors are likely to remain undetected. Such errors will ultimately be
reflected in the code. Since code is the only product that can be executed and
whose actual behavior can be observed, testing the code forms an important
part of the software development activity.

8.1.1. Software testing process
Software testing is the process used to help identify the correctness,
completeness, security and quality of developed computer software. With that in
mind, testing can never completely establish the correctness of arbitrary
computer software. In computability theory, a field of computer science, a
neat mathematical proof concludes that it is impossible to solve the halting
problem, i.e., the question of whether an arbitrary computer program will enter an
infinite loop or halt and produce output. In other words, testing is nothing but
criticism or comparison, that is, comparing the actual value with the expected one.
Testing presents an interesting anomaly for the software engineer. The engineer
creates a series of test cases that are intended to demolish the software
that has been built. In fact, testing is the only activity in the software
engineering process that could be viewed as “destructive” rather than
“constructive”. Testing performs a very critical role for quality assurance and for
ensuring the reliability of the software.
Glen Meyers [1979] states a number of rules that can serve well as testing
objectives:
1. Testing is the process of executing a program with the intent of finding an
error.
2. A good test case is one that has a high probability of finding an as-yet
undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
The common viewpoint of testing is that a successful test is one in which no
errors are found. But from the above rules, it can be inferred that a successful
test is one that systematically uncovers different classes of errors and that too
with a minimum time and effort. The more the errors detected, the more
successful is the test.
What testing can do?
1. It can uncover errors in the software
2. It can demonstrate that the software behaves according to the specification
3. It can show that the performance requirements have been met
4. It can prove to be a good indication of software reliability and software
quality
What testing cannot do?
Testing cannot show the absence of defects, it can only show that software
errors are present.
Davis [1995] suggests a set of testing principles as given below:

 Identify and catalog reusable modules and components
 Identify areas where programmers and developers need training

8.2. LEVELS OF TESTING

The different levels of testing are used to validate the software at different
levels of the development process.

Fig. 8.1. Levels of testing – each testing level validates the corresponding development phase: unit testing validates the code, integration testing validates the design, system testing validates the requirements analysis, and acceptance testing validates system engineering.


Fig. 8.1 shows the different testing phases and the corresponding development
phases that it validates. Unit Testing is done to validate the code written and is
usually done by the author of the code. Integration testing is done to validate
the design strategies of the software. System testing is done to ensure that all
the functional and non-functional requirements of the software are met.
Acceptance testing is then done by the customer to ensure that the software
works well according to customer specification.

8.3. UNIT TESTING

Unit testing is essentially for verification of the code produced during the
coding phase, and hence the goal is to test the internal logic of the modules.
The unit test is normally white box oriented, and the step can be conducted in
parallel for multiple modules. Unit testing is simplified when a module with
high cohesion is designed. When only one function is addressed by a module,
the number of test cases is reduced and errors can easily be uncovered.

8.3.1. Unit Test Considerations
The tests that occur as part of unit testing are listed below:
• Interface – The module interface is tested to ensure that information
properly enters into and out of the program unit under test. If data does
not enter or exit properly, then all other tests are unresolved.
• Local data structures – The local data structure is examined to ensure
that data stored temporarily maintains its integrity during all steps in an
algorithm’s execution.
• Boundary conditions – Boundary conditions are tested to ensure that
the module operates properly at boundaries established to limit or
restrict processing.
• Independent paths – All independent paths through the control
structure are exercised to ensure that all statements in a module have
been executed at least once.
• Error handling paths – All error handling paths are tested.

Figure 8.2. Unit Test Environment


8.3.2. Unit Test Procedures
Because a module is not a standalone program, driver and/or stub
software must be developed for each unit test. A driver is nothing more than a
“main program” that accepts test case data, passes such data to the module to
be tested, and prints relevant results. Stubs serve to replace modules that are
subordinate to the module to be tested. A stub or “dummy subprogram” uses
the subordinate module's interface, may do minimal data manipulation, prints
verification of entry, and returns. Drivers and stubs represent overhead, because
both are software modules that must be developed to aid in testing but are not
delivered with the final software product.
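A minimal sketch of a driver and a stub in C++ (the module under test and all names are invented): the module converts a raw reading delivered by a subordinate sensor module that is not yet available.

#include <cmath>
#include <iostream>

// Interface of the subordinate module (not yet implemented).
int ReadRawSensor();

// Module under test: converts a raw sensor reading to degrees Celsius.
double ReadTemperature() {
    int raw = ReadRawSensor();
    return raw * 0.5 - 40.0;
}

// Stub ("dummy subprogram"): replaces the subordinate module, prints
// verification of entry, does minimal data manipulation and returns.
int ReadRawSensor() {
    std::cout << "stub: ReadRawSensor called\n";
    return 130;                         // fixed test value
}

// Driver: a "main program" that accepts test case data, passes it to the
// module under test, and prints the relevant result.
int main() {
    const double expected = 25.0;       // 130 * 0.5 - 40.0
    double actual = ReadTemperature();
    std::cout << "expected " << expected << ", got " << actual << '\n';
    return (std::fabs(actual - expected) < 1e-9) ? 0 : 1;
}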
8.3.3. Integration Testing
Integration testing involves checking for errors when units are put
together as described in the design specifications. While integrating, software
can be thought of as a system consisting of several levels. A unit that makes a
function call to another unit is considered to be one level above it (see Fig. 8.3).

Fig. 8.3. Functional units arranged in levels: a single top-level unit calls two units at the level below, which in turn call units at the lowest level.

There are several approaches for integration:


a. Bottom-Up
The bottom-up approach integrates the units at the lowest level (bottom
level) first, then the units at the next level above it, and so on, till the topmost
level is integrated. When integrating, each interface is tested to see whether the
units work together properly.
Method
 First those operations are tested that require no other program
components; then their integration to a module is tested.
 After the module test the integration of multiple (tested) modules to a
subsystem is tested, until finally the integration of the subsystems, i.e.,
the overall system can be tested.
The advantages
 The advantages of bottom-up testing prove to be the drawbacks of top-
down testing (and vice versa).
 The bottom-up test method is solid and proven. The objects to be tested
are known in full detail. It is often simpler to define relevant test cases
and test data.

 The bottom-up approach is psychologically more satisfying because the
tester can be certain that the foundations for the test objects have been
tested in full detail.
The drawbacks
 The characteristics of the finished product are only known after the
completion of all implementation and testing, which means that design
errors in the upper levels are detected very late.
 Testing individual levels also causes high costs for providing a suitable test
environment.
b. Top-Down
Top-Down integration starts with the units at the top level and works downwards, integrating the units at the lower levels. While integrating, if a unit at a lower level is not yet available, a replica of that unit is created which imitates its behavior (a minimal sketch of such a surrogate appears at the end of this subsection).
Method
 The control module is implemented and tested first.
 Imported modules are represented by substitute modules.
 Surrogates have the same interfaces as the imported modules and simulate
their input/output behavior.
 After the test of the control module, all other modules of the software
systems are tested in the same way; i.e., their operations are represented
by surrogate procedures until the development has progressed enough to
allow implementation and testing of the operations.
 The test advances stepwise with the implementation. Implementation and test phases merge, and a separate integration test of subsystems becomes unnecessary.
The advantages
 Design errors are detected as early as possible, saving development time
and costs because corrections in the module design can be made before
their implementation.
 The characteristics of a software system are evident from the start, which
enables a simple test of the development state and the acceptance by the
user.
 The software system can be tested thoroughly from the start with test
cases without providing (expensive) test environments.

The drawbacks
 Strict top-down testing proves extremely difficult because designing
usable surrogate objects can prove very complicated, especially for
complex operations.
 Errors in lower hierarchy levels are hard to localize.
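A minimal sketch of a surrogate (stub) used in top-down integration is shown below; the control module generate_report and the imported module fetch_records are hypothetical names, and the fixed data returned by the surrogate is an assumption made only for illustration.

    # Top-down integration: the control module is implemented and tested
    # first; the imported lower-level module is represented by a surrogate
    # with the same interface that simulates its input/output behavior.

    def fetch_records_surrogate(query):
        # Surrogate for a lower-level module that is not yet implemented.
        print(f"surrogate called with query={query!r}")
        return [{"id": 1, "amount": 40.0}, {"id": 2, "amount": 60.0}]

    def generate_report(query, fetch_records):
        # Control module under test; it works against whichever
        # implementation of fetch_records it is given.
        records = fetch_records(query)
        return {"count": len(records), "total": sum(r["amount"] for r in records)}

    if __name__ == "__main__":
        report = generate_report("status = 'open'", fetch_records_surrogate)
        assert report == {"count": 2, "total": 100.0}
        print("control module behaves correctly against the surrogate")

Once the real fetch_records module is implemented, the surrogate is replaced and the same test is repeated against the real interface.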
c. Sandwich
Sandwich integration is an attempt to combine the advantages of both
the above approaches. A “target” layer is identified somewhere in between and
the integration converges on the layer using a top-down approach above it and
a bottom-up approach below it. Identifying the target layer must be done by people with good experience in similar projects, or else it might lead to serious delays.
d. Big-Bang
A different and somewhat simplistic approach is the big-bang approach,
which consists of putting all unit-tested modules together and testing them in
one go. Chances are that it will not work! This is not a very feasible approach
as it will be very difficult to identify interfacing issues.

8.4. SYSTEM TESTING

System testing is testing conducted on a complete, integrated system to


evaluate the system's compliance with its specified requirements. Software is
only one element of a larger computer-based system. The software developed is
ultimately incorporated with other system elements such as new hardware,
information etc., and a series of system integration and validation tests are
conducted. These tests are not conducted by the software developer alone.
System testing is actually a series of different tests whose primary purpose is to
fully exercise the computer-based system. Although each test has a different
purpose, all work to verify that all system elements have been properly
integrated and perform allocated functions.
System testing falls within the scope of Black box testing, and as such,
should require no knowledge of the inner design of the code or logic.
8.4.1. Alpha and Beta Test
Alpha testing and Beta testing are sub-categories of System testing. If
software is developed as a product (example: Microsoft Word) which is intended
to be used by many end-users, it is not practical to perform formal acceptance
tests with each end-user. In this situation most software products are tested
using the process called alpha and beta testing to allow the end-user to find
defects.
The Alpha test is conducted in the developer’s environment by the end-
users. The environment might be simulated, with the developer and the

typical end-user present for the testing. The end-user uses the software
and records the errors and problems. Alpha test is conducted in a
controlled environment.
The Beta test is conducted in the end-user’s environment. The
developer is not present for the beta testing. The beta testing is always
in the real-world environment which is not controlled by the developer.
The end-user records the problems and reports them back to the developer
at intervals. Based on the results of the beta testing the software is
made ready for the final release to the intended customer base.
As a rule, System testing takes, as its input, all of the "integrated"
software components that have successfully passed Integration testing and also
the software system itself integrated with any applicable hardware system(s).
The purpose of Integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing, in contrast, seeks to detect defects both within the "inter-assemblages" and in the system as a whole.
8.4.2. Finger Pointing
A classic system testing problem is “finger pointing”. This occurs when
an error is uncovered, and each system element developer blames the other for
the problem. The software engineer should anticipate potential interfacing
problems and do the following:
1) Design error-handling paths that test all information coming from
other elements of the system
2) Conduct a series of tests that simulate bad data or other potential
errors at the software interface
3) Record the results of tests to use as “evidence” if finger pointing does
occur
4) Participate in planning and design of system tests to ensure that
software is adequately tested.
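The sketch below illustrates points 1) to 3), assuming a hypothetical interface that receives JSON-encoded readings from another system element; the field names, limits, and log file are illustrative assumptions only.

    # Defensive checks at a software interface: validate all information
    # coming from another system element, simulate bad data, and record
    # the outcome so it can serve as "evidence" later.
    import json
    import logging
    import time

    logging.basicConfig(filename="interface_evidence.log", level=logging.INFO)

    def accept_reading(raw):
        # Error-handling path for data arriving from another system element.
        try:
            reading = json.loads(raw)
            temperature = reading["temperature"]
            if not 0.0 <= temperature <= 150.0:
                raise ValueError(f"temperature out of range: {temperature}")
            logging.info("ACCEPTED %r at %s", raw, time.ctime())
            return reading
        except (KeyError, ValueError) as exc:   # JSONDecodeError is a ValueError
            logging.error("REJECTED %r at %s: %s", raw, time.ctime(), exc)
            return None

    if __name__ == "__main__":
        # Simulated inputs, including deliberately bad data at the interface.
        for raw in ['{"temperature": 21.5}', '{"temperature": 999}', 'not json']:
            print(raw, "->", accept_reading(raw))

The log file produced here is exactly the kind of record that can be used as evidence if finger pointing occurs.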
8.4.3. Types of System Tests
The types of system tests for software-based systems are:
a. Recovery Testing
b. Security Testing
c. Stress testing
d. Sensitivity Testing
e. Performance Testing

a. Recovery Testing
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If recovery is
automatically performed by the system itself, then re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for
correctness. If recovery requires human intervention, the mean time to repair is
evaluated to determine whether it is within acceptable limits.
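The sketch below shows one way such a recovery check might be automated for a hypothetical batch job that writes a checkpoint after every item; the checkpoint file, the simulated crash, and the job itself are all illustrative assumptions.

    # Recovery test sketch: force a failure part-way through a batch job
    # and verify that a restart resumes from the checkpoint and finishes.
    import json
    import os

    CHECKPOINT = "job.ckpt"

    def run_job(items, crash_after=None):
        # Resume from the checkpoint if one exists (data recovery).
        done = 0
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                done = json.load(f)["done"]
        for i in range(done, len(items)):
            if crash_after is not None and i == crash_after:
                raise RuntimeError("simulated failure")    # forced failure
            # ... real processing of items[i] would happen here ...
            with open(CHECKPOINT, "w") as f:
                json.dump({"done": i + 1}, f)               # checkpointing
        return len(items)

    if __name__ == "__main__":
        if os.path.exists(CHECKPOINT):
            os.remove(CHECKPOINT)
        items = list(range(10))
        try:
            run_job(items, crash_after=4)                   # first run fails
        except RuntimeError:
            pass
        assert run_job(items) == 10                         # restart recovers
        print("recovery from checkpoint verified")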
b. Security Testing
Security testing attempts to verify that protection mechanisms built into
a system will in fact protect it from improper penetration. Penetration spans a
broad range of activities: hackers who attempt to penetrate systems for sport;
unhappy employees who attempt to penetrate for revenge; and dishonest
individuals who attempt to penetrate for illegal personal gain.
c. Stress Testing
Stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example (a minimal sketch of the first case follows the list),
1) Special tests may be designed that generate 10 interrupts per second,
when one or two is the average rate
2) Input data rates may be increased by an order of magnitude to determine
how input functions will respond
3) Test cases that require maximum memory or other resources may be
executed
4) Test cases that may cause thrashing in a virtual operating system may
be designed
5) Test cases that may cause excessive hunting for disk resident data may
be created.
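As a rough sketch of case 1), the code below models "interrupts" as events placed on a queue at roughly ten times a notional normal rate and checks that the handler keeps up without dropping events; the rates, the queue size, and the handler are illustrative assumptions.

    # Stress test sketch: generate "interrupts" (events) at roughly ten
    # times a notional normal rate and check that none are lost.
    import queue
    import threading
    import time

    events = queue.Queue(maxsize=1000)
    handled = []

    def handler():
        while True:
            item = events.get()
            if item is None:          # sentinel: stop the handler
                return
            handled.append(item)

    def stress(rate_per_second, duration_s):
        worker = threading.Thread(target=handler)
        worker.start()
        sent = 0
        for _ in range(int(rate_per_second * duration_s)):
            events.put(sent)          # abnormal frequency of events
            sent += 1
            time.sleep(1.0 / rate_per_second)
        events.put(None)
        worker.join()
        return sent

    if __name__ == "__main__":
        sent = stress(rate_per_second=100, duration_s=2)    # ~10x "normal"
        assert len(handled) == sent, "events were dropped under stress"
        print(f"{sent} events generated, {len(handled)} handled")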
d. Sensitivity Testing
A variation of stress testing is a technique called sensitivity testing. In
some situations a very small range of data contained within the bounds of valid
data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data
combinations within valid input classes that may cause instability or improper
processing.
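A minimal sketch of a sensitivity test is shown below; the function under test is a hypothetical numerical routine that is mathematically well-behaved for all small positive inputs but becomes erratic in a narrow band of them, and the scanning range and tolerance are illustrative assumptions.

    # Sensitivity test sketch: all scanned inputs are valid, yet a narrow
    # band of them triggers erratic results because of cancellation in
    # 1 - cos(x) for very small x.
    import math

    def half_angle_ratio(x):
        # Mathematically tends to 0.5 as x -> 0, so any small positive x
        # is valid input; numerically it degrades badly below about 1e-7.
        return (1.0 - math.cos(x)) / (x * x)

    def scan(lo, hi, steps, expected=0.5, tolerance=0.01):
        findings = []
        for i in range(steps + 1):
            x = lo + (hi - lo) * i / steps
            y = half_angle_ratio(x)
            if abs(y - expected) > tolerance:   # erratic result for valid input
                findings.append((x, y))
        return findings

    if __name__ == "__main__":
        bad = scan(lo=1e-9, hi=1e-6, steps=200)
        print(f"{len(bad)} unstable results in a narrow band of valid inputs")

The interesting output is not whether the routine works at all, but how many valid inputs inside the scanned band produce unstable results.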
e. Performance Testing
Performance testing is designed to test run time performance (speed and
response time) of software within the context of an integrated system. It occurs
throughout all steps in the testing process. Performance tests are often coupled
with stress testing and often require both hardware and software

instrumentation. External instrumentation can monitor execution intervals, log
events as they occur, and sample machine states on a regular basis.
Performance testing can be categorized into the following:
 Load Testing is conducted to check whether the system is capable of handling an anticipated load. Here, load refers to the number of concurrent users accessing the system. Load testing is used to determine whether the system is capable of handling various activities performed concurrently by different users (a minimal load-test sketch follows this list).
 Endurance testing deals with the reliability of the system. This type of
testing is conducted for a longer duration to find out the health of the
system in terms of its consistency. Endurance testing is conducted under either a normal load or a stress load; however, the duration of the test is long.
 Stress testing helps to identify the number of users the system can
handle at a time before breaking down or degrading severely. Stress
testing goes one step beyond the load testing and identifies the system’s
capability to handle the peak load.
 Spike testing is conducted to stress the system suddenly for a short duration. This testing checks whether the system remains stable and responsive under an unexpected rise in load.
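A minimal load-test sketch is given below; handle_request stands in for the real system (in practice it would be, for example, an HTTP call), and the user count and the one-second response-time budget are illustrative assumptions.

    # Load test sketch: N concurrent "users" exercise the system at the
    # same time and observed response times are checked against a budget.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(user_id):
        # Stand-in for the system under test (an HTTP call in practice).
        time.sleep(0.05)              # pretend the work takes 50 ms
        return f"ok-{user_id}"

    def one_user(user_id):
        start = time.perf_counter()
        handle_request(user_id)
        return time.perf_counter() - start   # response time for this user

    if __name__ == "__main__":
        concurrent_users = 50                # anticipated load
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            times = list(pool.map(one_user, range(concurrent_users)))
        print(f"mean={statistics.mean(times):.3f}s  max={max(times):.3f}s")
        assert max(times) < 1.0, "response-time budget exceeded under load"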
8.4.4. Regression Testing
This is an important aspect of testing: ensuring that when an error is
fixed in a system, the new version of the system does not fail any test that the
older version passed. Regression testing consists of running the corrected
system against tests which the program had already passed successfully. This
is to ensure that in the process of modifying the existing system, the original
functionality of the system was not disturbed. This is particularly important in
maintenance projects, where changes made to one part of the system can inadvertently affect the program's behavior elsewhere.
Maintenance projects require enhancement or updating of the existing system; enhancements introduce new features to the software and might be released in different versions.
regression testing should be done on the system to ensure that the existing
features have not been disturbed.
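The sketch below illustrates the mechanics of a regression run; the function under test (price_with_tax) and the recorded test cases are hypothetical, the point being only that every previously passing case is re-executed against the new version and any new failure is reported.

    # Regression test sketch: re-run every previously passing test case
    # against the corrected/enhanced version and report any that now fail.

    def price_with_tax(amount, rate):
        # "New version" of the function after a fix or enhancement.
        return round(amount * (1.0 + rate), 2)

    # Test cases the old version already passed (input -> expected output).
    RECORDED_CASES = [
        ((100.0, 0.10), 110.00),
        ((19.99, 0.05), 20.99),
        ((0.0, 0.20), 0.0),
    ]

    def regression_run():
        failures = []
        for args, expected in RECORDED_CASES:
            actual = price_with_tax(*args)
            if actual != expected:
                failures.append((args, expected, actual))
        return failures

    if __name__ == "__main__":
        failed = regression_run()
        if failed:
            for args, expected, actual in failed:
                print(f"REGRESSION: {args} expected {expected}, got {actual}")
        else:
            print("all previously passing cases still pass")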

8.5. ACCEPTANCE TEST

Acceptance testing is the process of testing the entire system, with the
completed software as part of it. This is done to ensure that all the
requirements that the customer specified are met. Acceptance testing (done after System testing) is similar to system testing but is administered by the customer to test whether the system conforms to the agreed-upon requirements.

 If programming language semantics are formally defined, it is possible to
consider a program as a mathematical object.
 Using mathematical techniques, it is possible to demonstrate the
correspondence between a program and a formal specification of that
program.
 A program is proved to be correct with respect to its specification.
 Formal verification may reduce testing costs, but it cannot replace testing as a means of system validation.
 Techniques for proving program correctness include the axiomatic approach, outlined below.

The basis of the axiomatic approach


 Assume that there are a number of points in a program where the software engineer can provide assertions concerning program variables and their relationships. At each of these points, the assertions should be invariably true. Say the points in the program are P(1), P(2), ..., P(n). The associated assertions are a(1), a(2), ..., a(n). Assertion a(1) must be an assertion about the input of the program and a(n) an assertion about the program output.
 To prove that the program statements between points P(i) and P(i+1) are correct, it must be demonstrated that the application of the program statements separating these points causes assertion a(i) to be transformed to assertion a(i+1).
 Given that the initial assertion is true before program execution and the final assertion is true after execution, verification is carried out for adjacent program statements. (An informal, executable illustration follows.)
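The idea can be illustrated informally with executable assertions; the small summation routine below is a hypothetical example, with a(1) as the input assertion, a loop invariant as an intermediate assertion a(i), and a(n) as the output assertion. Executable asserts only check the assertions at run time; the axiomatic approach proper would prove them for all inputs.

    # Assertions a(1) ... a(n) placed at points P(1) ... P(n) of a small
    # program that sums the first n natural numbers.

    def sum_first(n):
        assert n >= 0                            # a(1): input assertion
        total, i = 0, 0
        while i < n:
            assert total == i * (i + 1) // 2     # a(i): invariant at P(i)
            i += 1
            total += i
        assert total == n * (n + 1) // 2         # a(n): output assertion
        return total

    if __name__ == "__main__":
        for n in (0, 1, 5, 100):
            print(n, sum_first(n))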

8.10. DEBUGGING

Debugging, often viewed as a narrow part of software testing, is performed heavily by the programmer to find design defects. The limitations of human nature make it almost impossible to get a moderately complex program correct the first time. Finding the problems and getting them fixed is the purpose of debugging in the programming phase.
Debugging is a methodical process of finding and reducing the number
of bugs, or defects, in a computer program or a piece of electronic hardware
thus making it behave as expected. Debugging tends to be harder when various
subsystems are tightly coupled, as changes in one may cause bugs to emerge in
another.

Although each debugging experience is unique, certain general principles can
be applied in debugging. This section particularly addresses debugging software,
although many of these principles can also be applied to debugging hardware.
The basic steps in debugging are:
• Recognize that a bug exists
• Isolate the source of the bug
• Identify the cause of the bug
• Determine a fix for the bug
• Apply the fix and test it

8.10.1. The Debugging Process

[Figure: execution of test cases produces results; mismatches yield suspected causes, debugging narrows these to identified causes, corrections are applied, and regression tests together with additional test cases follow]
Fig. 8.5 Debugging Process

The debugging process begins with the execution of the test case. Results
are assessed and a lack of correspondence between expected and actual result
is encountered. In many cases, the non-corresponding data is found to be a
symptom of an underlying cause as yet hidden. The debugging process
attempts to match symptom with cause, thereby leading to error correction.
Characteristics of bugs ([Cheung 1990])
1. The symptom and the cause may be geographically remote. That is, the
symptom may appear in one part of a program, while the cause may
actually be located at a site that is far removed. Highly coupled program
structures exacerbate this situation.
2. The symptom may disappear (temporarily) when another error is
corrected.

3. The symptom may actually be caused by non-errors (e.g., round-off
inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing
problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-
time application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in
embedded systems that couple hardware and software.
8. The symptom may be due to causes that are distributed across a number
of tasks running on different processors.
The debugging process will always have one of two outcomes:
(1) The cause will be found, corrected, or removed, or
(2) The cause will not be found.
In general, three categories for debugging approaches are proposed:
 Brute Force Approach
 Backtracking Approach
 Cause Elimination Approach
Each of the debugging approaches can be supplemented with debugging tools such as debugging compilers, dynamic debugging aids, automatic test case generators, memory dumps, and cross-reference maps.
a. Brute Force Approach
The brute force category of debugging is the most common method for
isolating the cause of an error. This method is generally the least efficient and is
used when everything else fails. A philosophy such as “let the computer find the
error” is used in this approach. Memory dumps are taken, run-time traces are
invoked, and the program is loaded with “write” statements, with the hope that
from the mass of information produced, we will find a clue that can lead us to
the cause of an error.
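A minimal sketch of this "let the computer find the error" style is given below; the buggy averaging routine and the TRACE output are illustrative only.

    # Brute force debugging sketch: load the suspect code with trace
    # ("write") statements in the hope that the mass of output produced
    # gives a clue to the cause of the error.

    def average(values):
        print(f"TRACE: entering average with values={values}")
        total = 0
        for v in values:
            total += v
            print(f"TRACE: v={v}, running total={total}")
        result = total / (len(values) - 1)   # bug: should divide by len(values)
        print(f"TRACE: returning {result}")
        return result

    if __name__ == "__main__":
        # The trace shows the running total is correct, so suspicion
        # falls on the divisor used in the final step.
        print("average of [2, 4, 6] =", average([2, 4, 6]))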
b. Back Tracking Approach
Debugging by backtracking involves working backward in the source
code from the point where the error was observed in an attempt to identify the
exact point where the error occurred. It may be necessary to run additional test
cases in order to collect more information. This approach can be used
successfully in small programs. The disadvantage with this approach is that if
the program is too large, then the potential backward paths may become
unmanageably large.

c. Cause Elimination Approach
The cause elimination approach is manifested by induction or deduction. This approach proceeds as follows (a rough illustration appears after the steps):
1. List possible causes for the observed failure by organizing the data
related to the error occurrence.
2. Devise a “cause hypothesis”.
3. Prove or disprove the hypothesis using the data.
4. Implement the appropriate corrections.
5. Verify the correction. Rerun the failure case to be sure that the fix
corrects the observed symptom.
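As a rough illustration of steps 1 to 3, the snippet below organizes some recorded failure data and checks a single cause hypothesis ("the routine fails exactly when the input is blank") against it; the data and the hypothesis are hypothetical.

    # Cause elimination sketch: organize the observed failures, state a
    # cause hypothesis, and use the data to prove or disprove it.

    failure_log = [
        {"input": "", "failed": True},
        {"input": "12,34", "failed": False},
        {"input": "  ", "failed": True},
        {"input": "7", "failed": False},
    ]

    # Cause hypothesis: the routine fails exactly when the input is blank.
    def hypothesis_predicts_failure(record):
        return record["input"].strip() == ""

    consistent = all(hypothesis_predicts_failure(r) == r["failed"]
                     for r in failure_log)
    print("hypothesis", "supported" if consistent else "disproved", "by the data")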

Check your Progress


i. ____________is the process used to help identify the correctness,
completeness, security and quality of developed computer software.
ii. In other words, testing is nothing but __________ or _________, that is, comparing the actual value with the expected one.
iii. _____________ is done to validate the code written and is usually done by the
author of the code.
iv. ________ and __________ are sub-categories of System testing.
v. White box testing is also called _________ and __________
Solutions
i. Software testing
ii. Verification or validation
iii. Unit testing
iv. Alpha testing and Beta testing
v. Glass box testing and structural testing

8.11. REVIEW QUESTIONS


1. What are the Testing objectives, rules in S/W testing fundamentals?
2. Explain the testing information flow.
3. What are the stages available in testing process?
4. Describe art of debugging in software testing strategies.
5. Write short notes on: a. Data flow testing b. Integration testing.
