
Software Engineering
CODING AND TESTING – Part010

CODING AND TESTING

This module tackles software coding and testing.

Course Module Objectives:


At the end of this module, the learner should be able to:
1) Know about the Integration and System Testing Plan.
2) Learn about Testing Activities.
3) Know what Unit Testing is.
4) Know Black-Box Testing.
5) Understand boundary value analysis.
6) Know White-Box Testing.
7) Understand McCabe's Cyclomatic Complexity Metric.
8) Learn how path testing is carried out using the computed McCabe's cyclomatic metric.
9) Know the steps to carry out path coverage-based testing.
10) Understand estimation of the structural complexity of code.
11) Understand estimation of testing effort.
12) Understand estimation of program reliability.
13) Understand Data Flow-based Testing.

Coding

Coding is undertaken once the design phase is complete and the design documents have been successfully reviewed.
Coding phase – every module specified in the design document is coded
and unit tested. During unit testing, each module is tested in isolation
from other modules. A module is tested independently as and when its
coding is complete.

Integration and System Testing Plan


Integration and testing of modules is carried out according to an
integration plan.
The integration plan usually envisages integration of modules through a
number of steps.
During each integration step, a number of modules are added to the
partially integrated system and the resultant system is tested.
The full product takes shape only after all the modules have been
integrated together.

Integration and System Testing Plan


Testing is an important phase in software development and typically
requires the maximum effort among all the development phases.
Testing of professional software is carried out using a large number of test cases.

Integration and System Testing Plan


Many novice engineers bear the wrong impression that testing is a
secondary activity and that it is intellectually not as stimulating as the
activities associated with the other development phases.
Testing a software product is as challenging as the initial development activities such as specification, design, and coding. Testing involves a lot of creative thinking.

Coding
The input to the coding phase is the design document produced at the end
of the design phase.
The detailed design is usually documented in the form of module
specifications where the data structures and algorithms for each module
are specified.

Coding Standards and Guidelines


Good software development organisations usually develop their own
coding standards and guidelines depending on what suits their
organisation best and based on the specific types of software they
develop.

Representative Coding Standards


Rules for limiting the use of globals:
These rules list what types of data can be declared global and what cannot, with a view to limiting the data that needs to be defined with global scope.
Standard headers for different modules:
The header of different modules should have a standard format and information for ease of understanding and maintenance.
Naming conventions for global variables, local variables, and constant
identifiers:
A popular naming convention is that variables are named using mixed
case lettering. Global variable names would always start with a capital
letter and local variable names start with small letters. Constant names
should be formed using capital letters only.
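As an illustration, a short Python sketch of this naming convention is given below; the variable and function names are hypothetical and only meant to show the convention in use:

    # Hypothetical snippet illustrating the representative naming convention:
    # constants in capital letters only, global variables starting with a
    # capital letter, local variables starting with a small letter.

    MAX_RETRIES = 3        # constant identifier: capital letters only
    ErrorCount = 0         # global variable: starts with a capital letter


    def recordFailure(message):
        """Increment the global error count and return a formatted log line."""
        global ErrorCount
        logLine = "failure #" + str(ErrorCount + 1) + ": " + message  # local variable: small initial letter
        ErrorCount += 1
        return logLine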

Code Review
Testing is an effective defect removal mechanism. However, testing is
applicable to only executable code.
Review is a very effective technique to remove defects from source code.
In fact, review has been acknowledged to be more cost-effective in
removing defects as compared to testing.
Code Walkthrough
Code walkthrough is an informal code analysis technique.

A module is taken up for review after the module has been coded, successfully compiled, and all syntax errors have been eliminated.
A few members of the development team are given the code a couple of days before the walkthrough meeting.
Each member selects some test cases and simulates execution of the code
by hand.
The main objective of code walkthrough is to discover the algorithmic
and logical errors in the code.
The members note down their findings from the walkthrough and discuss them in a walkthrough meeting where the coder of the module is present.

Code Inspection
The code is examined for the presence of some common programming
errors.
This is in contrast to the hand simulation of code execution carried out
during code walkthrough.
The principal aim of code inspection is to check for the presence of some
common types of errors that usually creep into code due to programmer
mistakes and oversights and to check whether coding standards have
been adhered to.
The inspection process has several beneficial side effects, other than
finding errors.

Clean Room Testing


This technique reportedly produces documentation and code that is more
reliable and maintainable than other development methods relying
heavily on code execution-based testing.
The main problem with this approach is that the testing effort is increased, since walkthroughs, inspections, and verification are time consuming when used for detecting all simple errors. Also, testing-based error detection is efficient for detecting certain errors that escape manual inspection.

Software Documentation
When a software product is developed, in addition to the executable files and the source code, several kinds of documents such as the users' manual, software requirements specification (SRS) document, design document, test document, installation manual, etc., are developed as part of the software engineering process.
All these documents are considered a vital part of any good software
development practice.

Internal Documentation
Internal documentation comprises the code comprehension features provided in the source code itself. It can be provided in the code in several forms. The important types of internal documentation are the following:
 Comments embedded in the source code.
 Use of meaningful variable names.
 Module and function headers.
 Code indentation.
 Code structuring.
 Use of enumerated types.
 Use of constant identifiers.
 Use of user-defined data types.
Careful experiments suggest that, out of all types of internal documentation, the use of meaningful variable names is the most useful while trying to understand a piece of code.

External Documentation
External documentation is provided through various types of supporting documents such as the users' manual, software requirements specification document, design document, test document, etc. A systematic software development style ensures that all these documents are of good quality and are produced in an orderly fashion. An important feature required of any good external documentation is consistency with the code. If the different documents are not consistent, a lot of confusion is created for somebody trying to understand the software. All the documents developed for a product should be up-to-date, and every change made to the code should be reflected in the relevant external documents. Even if only a few documents are not up-to-date, they create inconsistency and lead to confusion.
Another important feature required of external documents is proper understandability by the category of users for whom the document is designed. For achieving this, Gunning's fog index is very useful.

Gunning’s fog index


Developed by Robert Gunning in 1952, the fog index is a metric designed to measure the readability of a document. The computed metric value (fog index) of a document indicates the number of years of formal education that a person should have in order to be able to comfortably understand that document. That is, if a certain document has a fog index of 12, anyone who has completed his 12th class would not have much difficulty in understanding that document. The Gunning's fog index of a document D can be computed as follows:

Fog index(D) = 0.4 x ((words / sentences) + per cent of words having three or more syllables)

Observe that the fog index is computed as 0.4 times the sum of two different factors.
The first factor computes the average number of words per sentence. This factor therefore accounts for the common observation that long sentences are difficult to understand.
The second factor measures the percentage of complex words in the document.
Note that a syllable is a part of a word that can be independently pronounced. Words having three or more syllables are complex words, and the presence of many such words hampers the readability of a document.
If a users’ manual is to be designed for use by factory workers whose
educational qualification is class 8, then the document should be written
such that the Gunning’s fog index of the document does not exceed 8.
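A minimal Python sketch of the fog index computation is given below; the syllable-counting heuristic (counting groups of consecutive vowels) is an approximation assumed here only for illustration, not part of Gunning's definition:

    import re

    def count_syllables(word):
        # Crude approximation: count groups of consecutive vowels. Accurate
        # syllable counting would need a pronunciation dictionary.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fog_index(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        complex_words = [w for w in words if count_syllables(w) >= 3]
        average_sentence_length = len(words) / len(sentences)
        percent_complex = 100.0 * len(complex_words) / len(words)
        # Gunning's formula: 0.4 x (average sentence length + per cent of complex words)
        return 0.4 * (average_sentence_length + percent_complex)

    # A document whose fog index is about 8 should be comfortable reading
    # for someone who has completed class 8.
    print(round(fog_index("The cat sat on the mat. It was a sunny day."), 1))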

Testing
The aim of program testing is to help identify all defects in a program. However, in practice, even after satisfactory completion of the testing phase, it is not possible to guarantee that a program is error free. This is because the input data domain of most programs is very large, and it is not practical to test the program exhaustively with respect to each value that the input can assume.
Consider a function taking a floating point number as argument. If a tester takes one second to type in a value, then even a million testers would not be able to exhaustively test it after trying for a million years. Even with this obvious limitation of the testing process, we should not underestimate the importance of testing. Careful testing can expose a large percentage of the defects existing in a program, and therefore provides a practical way of reducing defects in a system. Testing a program involves executing the program with a set of test inputs and observing whether the program behaves as expected. If the program fails to behave as expected, then the input data and the conditions under which it fails are noted for later debugging and error correction. The tester inputs several test data to the system and observes the outputs produced by it to check if the system fails on some specific inputs. Unless the conditions under which a software product fails are noted down, it becomes difficult for the developers to reproduce a failure observed by the testers.

IEEE Standard Glossary of Software Engineering Terminology

A mistake is essentially any programmer action that later shows up as an
incorrect result during program execution. A programmer may commit a
mistake in almost any development activity.

An error is the result of a mistake committed by a developer in any of the development activities. An extremely large variety of errors can exist in a program.
The terms error, fault, bug, and defect are considered to be synonyms in
the area of program testing.
Though the terms error, fault, bug, and defect are all used interchangeably by the program testing community, in the domain of hardware testing the term fault is used with a slightly different connotation as compared to the terms error and bug.
A test scenario is an abstract test case in the sense that it only identifies
the aspects of the program that are to be tested without identifying the
input, state, or output. A test case can be said to be an implementation of
a test scenario. In the test case, the input, output, and the state at which the input would be applied are designed such that the scenario can be executed. An important automatic test case design strategy is to first design test scenarios through an analysis of some program abstraction and then implement the test scenarios as test cases.
A test script is an encoding of a test case as a short program. Test scripts
are developed for automated execution of the test cases.
A test case is said to be a positive test case if it is designed to test whether the software correctly performs a required functionality. A test case is said to be a negative test case if it is designed to test whether the software carries out something that is not required of the system.
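The following sketch shows what a simple test script may look like using Python's unittest module; the square_root function is a hypothetical unit under test, assumed here only to illustrate one positive and one negative test case:

    import math
    import unittest

    def square_root(x):
        """Hypothetical unit under test: accepts only non-negative numbers."""
        if x < 0:
            raise ValueError("negative input not allowed")
        return math.sqrt(x)

    class SquareRootTests(unittest.TestCase):
        def test_valid_input(self):
            # Positive test case: checks that a required functionality works.
            self.assertAlmostEqual(square_root(25.0), 5.0)

        def test_negative_input_rejected(self):
            # Negative test case: checks that the software does not do
            # something that is not required of it (accept negative numbers).
            with self.assertRaises(ValueError):
                square_root(-1.0)

    if __name__ == "__main__":
        unittest.main()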

Verification versus Validation


Verification is the process of determining whether the output of one
phase of software development conforms to that of its previous phase;
whereas validation is the process of determining whether a fully
developed software conforms to its requirements specification. Thus, the
objective of verification is to check if the work products produced after a phase conform to that which was input to the phase.
The primary techniques used for verification include review, simulation,
formal verification, and testing. Review, simulation, and testing are
usually considered as informal verification techniques. Formal
verification usually involves use of theorem proving techniques or use of
automated tools such as a model checker.
Validation techniques are primarily based on product testing. Note that
we have categorised testing both under program verification and
validation. The reason is that unit and integration testing can be considered as verification steps where it is verified whether the code is as per the module and module interface specifications.

System testing can be considered as a validation step where it is determined whether the fully developed code is as per its requirements specification.
The primary objective of the verification steps is to determine whether the steps in product development are being carried out correctly, whereas validation is carried out towards the end of the development process to determine whether the right product has been developed.

Testing Activities
Test suite design: The set of test cases using which a program is to be
tested is designed possibly using several test case design techniques.
Running test cases and checking the results to detect failures: Each test case is run and the results are compared with the expected results. A mismatch between the actual and expected results indicates a failure. The test cases for which the system fails are noted down for later debugging.
Locate error: The failure symptoms are analysed to locate the errors. For each failure observed during the previous activity, the statements that are in error are identified.
Error correction: After the error is located during debugging, the code is
appropriately changed to correct the error.
When test cases are designed based on random input data, many of the test cases do not contribute to the significance of the test suite. That is, they do not help detect any additional defects not already being detected by other test cases in the suite.
Testing a software using a large collection of randomly selected test cases
does not guarantee that all of the errors in the system will be uncovered.
A minimal test suite is a carefully designed set of test cases such that each
test case helps detect different errors. This is in contrast to testing using
some random input values.
There are essentially two main approaches to systematically design test
cases:
 Black-box approach
 White-box (or glass-box) approach

Testing the Large versus Testing in the Small


A software product is normally tested in three levels or stages:
 Unit testing
 Integration testing
 System testing
Unit testing is referred to as testing in the small, whereas integration and
system testing are referred to as testing in the large.

After testing all the units individually, the units are slowly integrated and
tested after each step of integration. Finally, the fully integrated system is
tested. Integration and system testing are known as testing in the large.

Unit testing
Unit testing is undertaken after a module has been coded and reviewed.
This activity is typically undertaken by the coder of the module himself in the coding phase.
Before carrying out unit testing, the unit test cases have to be designed
and the test environment for the unit under test has to be developed.
In order to test a single module, we need a complete environment to
provide all relevant code that is necessary for execution of the module.
Besides the module under test, the following are needed to test the
module:
The procedures belonging to other modules that the module under test
calls.
Non-local data structures that the module accesses.
A procedure to call the functions of the module under test with
appropriate parameters.
Stub: A stub procedure is a dummy procedure that has the same I/O parameters as the function called by the unit under test but a highly simplified behaviour.
Driver: A driver module should contain the non-local data structures
accessed by the module under test. Additionally, it should also have the
code to call the different functions of the unit under test with appropriate
parameter values for testing.
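A small sketch of a driver and a stub is given below. The module under test (compute_invoice), the procedure it calls (lookup_tax_rate), and the non-local data structure (PRICE_TABLE) are all hypothetical names assumed only for illustration:

    PRICE_TABLE = {"pen": 10.0, "book": 50.0}   # non-local data structure accessed by the module

    def lookup_tax_rate_stub(item):
        # Stub: same interface as the procedure the unit under test calls,
        # but with a highly simplified behaviour (a fixed tax rate).
        return 0.10

    def compute_invoice(item, quantity, lookup_tax_rate):
        # Module under test (shown here only to keep the sketch self-contained).
        price = PRICE_TABLE[item] * quantity
        return price * (1 + lookup_tax_rate(item))

    def driver():
        # Driver: calls the unit under test with appropriate parameter values
        # and checks the results.
        test_cases = [("pen", 2, 22.0), ("book", 1, 55.0)]
        for item, quantity, expected in test_cases:
            result = compute_invoice(item, quantity, lookup_tax_rate_stub)
            print(item, quantity, "PASS" if abs(result - expected) < 1e-6 else "FAIL")

    if __name__ == "__main__":
        driver()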

Black-Box Testing
Test cases are designed from an examination of the input/output values
only and no knowledge of design or code is required.
The following approaches are available to design black-box test cases:
 Equivalence class partitioning
 Boundary value analysis

Equivalence Class Partitioning


 The domain of input values to the program under test is
partitioned into a set of equivalence classes.
 The partitioning is done such that for every input data value belonging to the same equivalence class, the program behaves similarly.
 Equivalence classes for a unit under test can be designed by
examining the input data and output data.

Boundary Value Analysis


A type of programming error that is frequently committed by
programmers is missing out on the special consideration that should be
given to the values at the boundaries of different equivalence classes of
inputs.
The reason behind programmers committing such errors might purely be
due to psychological factors.
Programmers often fail to properly address the special processing
required by the input values that lie at the boundary of the different
equivalence classes.
To design boundary value test cases, it is required to examine the
equivalence classes to check if any of the equivalence classes contains a
range of values. For those equivalence classes that are not a range of
values no boundary value test cases can be defined. For an equivalence
class that is a range of values, the boundary values need to be included in
the test suite.
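As an illustration, suppose a unit under test accepts an integer in the range 1 to 5000 (an assumed specification used only for this sketch). The input domain then has three equivalence classes, and the boundary value test cases lie at the edges of the valid range:

    # Hypothetical specification: valid inputs are integers from 1 to 5000.
    def in_valid_range(x):
        return 1 <= x <= 5000

    # Equivalence class partitioning: one representative per class.
    equivalence_class_inputs = {
        "below the range (invalid)": -5,
        "inside the range (valid)": 2500,
        "above the range (invalid)": 9000,
    }

    # Boundary value analysis: values at and just beyond the class boundaries.
    boundary_value_inputs = [0, 1, 5000, 5001]

    for name, value in equivalence_class_inputs.items():
        print(name, value, in_valid_range(value))
    for value in boundary_value_inputs:
        print("boundary value", value, in_valid_range(value))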

White-Box Testing
White-box testing is an important type of unit testing.
A large number of white-box testing strategies exist.
Each testing strategy essentially designs test cases based on analysis of
some aspect of source code and is based on some heuristic.

Fault-based testing
Fault-based testing targets the detection of certain types of faults. The faults that a test strategy focuses on constitute the fault model of the strategy.

Coverage-based testing
A coverage-based testing strategy attempts to execute certain elements of a program. Popular examples of coverage-based testing strategies are statement coverage, branch coverage, multiple condition coverage, and path coverage-based testing.

Testing criterion for coverage-based testing


Coverage-based testing typically targets the execution of certain program elements for discovering failures.

The set of specific program elements that a testing strategy targets to execute is called the testing criterion of the strategy.

Stronger versus weaker testing


A white-box testing strategy is said to be stronger than another strategy, if the stronger testing strategy covers all program elements covered by the weaker testing strategy, and the stronger strategy additionally covers at least one program element that is not covered by the weaker strategy.
When neither of two testing strategies fully covers the program elements exercised by the other, the two are called complementary testing strategies.
If a stronger testing has been performed, then a weaker testing need not be carried out.
Coverage-based testing is frequently used to check the quality of testing achieved by a test suite.
It is hard to manually design a test suite to achieve a specific coverage for a non-trivial program.
Statement coverage strategy aims to design test cases so as to execute
every statement in a program at least once.
The principal idea governing the statement coverage strategy is that
unless a statement is executed, there is no way to determine whether an
error exists in that statement.
Without executing a statement, it is difficult to determine whether it causes a failure due to an illegal memory access or wrong computation of results due to an improper arithmetic operation.
It can however be pointed out that a weakness of the statement coverage
strategy is that executing a statement once and observing that it behaves
properly for one input value is no guarantee that it will behave correctly
for all input values.
Statement coverage is a very intuitive and appealing testing technique.
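A small sketch illustrating statement coverage is given below, using Euclid's GCD computation by repeated subtraction as the unit under test; the particular test values shown are only one possible choice:

    def gcd(x, y):
        # Euclid's algorithm by repeated subtraction.
        while x != y:
            if x > y:
                x = x - y
            else:
                y = y - x
        return x

    # The test set below executes every statement of gcd() at least once:
    # (3, 3) skips the loop body, (4, 3) exercises the if-branch, and
    # (3, 4) exercises the else-branch.
    statement_coverage_tests = [((3, 3), 3), ((4, 3), 1), ((3, 4), 1)]
    for (x, y), expected in statement_coverage_tests:
        assert gcd(x, y) == expected
    print("statement coverage test set passed")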

Branch Coverage
A test suite satisfies branch coverage, if it makes each branch condition in the program assume both true and false values in turn.
For branch coverage, each branch in the CFG representation of the program must be taken at least once when the test suite is executed.
Branch testing is also known as edge testing, since in this testing scheme,
each edge of a program’s control flow graph is traversed at least once.

Course Module
Multiple Condition Coverage
Test cases are designed to make each component of a composite
conditional expression to assume both true and false values.
Branch testing can be considered to be a simplistic condition testing strategy where only the compound conditions appearing in the different branch statements are made to assume the true and false values.
It is easy to prove that condition testing is a stronger testing strategy than
branch testing.
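The following sketch contrasts the two strategies on a hypothetical function with a compound branch condition; the function and its test values are assumed only for illustration:

    def grant_access(age, has_id):
        # Compound branch condition with two components.
        if age >= 18 and has_id:
            return "granted"
        return "denied"

    # Branch coverage only needs the whole condition to evaluate to true and
    # to false at least once, so two test cases suffice:
    branch_tests = [(20, True), (10, False)]

    # Multiple condition coverage additionally requires each component
    # (age >= 18 and has_id) to assume both true and false values, so more
    # test cases are needed:
    condition_tests = [(20, True), (20, False), (10, True), (10, False)]

    for age, has_id in branch_tests + condition_tests:
        print(age, has_id, grant_access(age, has_id))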

Path Coverage
A test suite achieves path coverage if it executes each linearly independent path at least once.
A linearly independent path can be defined in terms of the control flow
graph of a program. Therefore, to understand path coverage-based
testing strategy, we need to first understand how the CFG of a program
can be drawn.

Control Flow Graph


A control flow graph describes how the control flows through the
program.
A control flow graph describes the sequence in which the different
instructions of a program get executed.
The different numbered statements serve as nodes of the control flow
graph. There exists an edge from one node to another, if the execution of
the statement representing the first node can result in the transfer of
control to the other node.
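As a sketch, the gcd() function used earlier under statement coverage can have its statements numbered and its CFG represented as an adjacency mapping (the numbering below is one possible choice):

    # Statement numbering assumed for the gcd() function used earlier:
    #   1: while x != y    2: if x > y    3: x = x - y
    #   4: y = y - x       5: return x
    cfg = {
        1: [2, 5],   # loop condition: true -> node 2, false -> node 5
        2: [3, 4],   # if condition: true -> node 3, false -> node 4
        3: [1],      # back to the loop test
        4: [1],      # back to the loop test
        5: [],       # exit node
    }

    nodes = len(cfg)
    edges = sum(len(successors) for successors in cfg.values())
    print("nodes =", nodes, "edges =", edges)   # 5 nodes, 6 edges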

Linearly independent set of paths


A set of paths for a given program is called a linearly independent set of paths, if each path in the set introduces at least one new edge that is not included in any other path in the set.
Even if we find that a path has one new node compared to all other
linearly independent paths, then this path should also be included in the
set of linearly independent paths.
Any path having a new node would automatically have a new edge.
If a set of paths is linearly independent of each other, then no path in the
set can be obtained through any linear operations on the other paths in
the set.

McCabe’s Cyclomatic Complexity Metric


McCabe obtained his results by applying graph-theoretic techniques to the control flow graph of a program. McCabe's cyclomatic complexity defines an upper bound on the number of independent paths in a program.
Method 1: Given a control flow graph G of a program, the cyclomatic
complexity V(G) can be computed as: V(G) = E – N + 2, where, N is the
number of nodes of the control flow graph and E is the number of edges
in the control flow graph.

McCabe’s Cyclomatic Complexity Metric


Method 2: An alternative way of computing the cyclomatic complexity of a program, based on a visual inspection of the control flow graph, is as follows: V(G) = total number of non-overlapping bounded areas + 1. Any region enclosed by nodes and edges can be called a bounded area. This is an easy way to determine the McCabe's cyclomatic complexity.
But what if the graph G is not planar? It can be shown that the control flow representation of structured programs always yields planar graphs. However, the presence of GOTOs can easily add intersecting edges. Therefore, for non-structured programs, this way of computing the McCabe's cyclomatic complexity does not apply.

McCabe’s Cyclomatic Complexity Metric


The number of bounded areas in a CFG increases with the number of
decision statements and loops. Therefore, the McCabe’s metric provides a
quantitative measure of testing difficulty and the ultimate reliability of a
program.
Method 3: The cyclomatic complexity of a program can also be easily
computed by computing the number of decision and loop statements of
the program. If N is the number of decision and loop statements of a
program, then the McCabe’s metric is equal to N + 1.
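A short worked example, based on the gcd() control flow graph sketched earlier (5 nodes, 6 edges, two bounded areas, one while and one if statement), shows that the three methods agree:

    N, E = 5, 6                # nodes and edges of the gcd() control flow graph
    v_method1 = E - N + 2      # Method 1: V(G) = E - N + 2 = 3
    v_method2 = 2 + 1          # Method 2: two bounded areas + 1 = 3
    v_method3 = 2 + 1          # Method 3: two decision/loop statements + 1 = 3
    assert v_method1 == v_method2 == v_method3 == 3
    print("cyclomatic complexity =", v_method1)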

Path testing carried out using the computed McCabe's cyclomatic metric value

Knowing the number of basis paths in a program does not make it any easier to design test cases for path coverage; it only gives an indication of the minimum number of test cases required for path coverage.
For the CFG of a moderately complex program segment of say 20 nodes
and 25 edges, you may need several days of effort to identify all the
linearly independent paths in it and to design the test cases.

It is therefore impractical to require the test designers to identify all the
linearly independent paths in a code, and then design the test cases to
force execution along each of the identified paths.

Steps to carry out path coverage-based testing


Draw the control flow graph for the program.
Determine the McCabe's cyclomatic complexity metric. This gives the minimum number of test cases required to achieve path coverage.
Repeat: test using a randomly designed set of test cases, and perform dynamic analysis to check the path coverage achieved, until at least 90 per cent path coverage is achieved (a sketch of the dynamic analysis step is given below).
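The dynamic analysis step is normally done with a coverage measurement tool. The sketch below assumes the third-party Python coverage package and uses statement and branch coverage as a practical stand-in, since direct path coverage measurement is rarely supported by such tools:

    import random

    import coverage   # third-party package, assumed to be installed

    def gcd(x, y):
        # Unit under test (same as the earlier sketch).
        while x != y:
            if x > y:
                x = x - y
            else:
                y = y - x
        return x

    cov = coverage.Coverage(branch=True)
    cov.start()
    for _ in range(100):                                   # randomly designed test cases
        gcd(random.randint(1, 50), random.randint(1, 50))
    cov.stop()
    cov.report(show_missing=True)   # inspect the coverage achieved; repeat with more tests if inadequate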

Estimation of structural complexity of code


McCabe’s cyclomatic complexity is a measure of the structural complexity
of a program. The reason for this is that it is computed based on the code
structure. Intuitively, the McCabe’s complexity metric correlates with the
difficulty level of understanding a program, since one understands a
program by understanding the computations carried out along all
independent paths of the program.
The cyclomatic complexity of a program is a measure of its psychological complexity, that is, the level of difficulty in understanding the program.

Estimation of testing effort


Cyclomatic complexity is a measure of the maximum number of basic
paths.
It indicates the minimum number of test cases required to achieve path
coverage.
The testing effort and the time required to test a piece of code satisfactorily are proportional to the cyclomatic complexity of the code.
To reduce testing effort, it is necessary to restrict the cyclomatic
complexity of every function to seven.

Estimation of program reliability


Experimental studies indicate there exists a clear relationship between
the McCabe’s metric and the number of errors latent in the code after
testing.
This relationship exists possibly due to the correlation of cyclomatic
complexity with the structural complexity of code.

Usually, the larger the structural complexity, the more difficult it is to test and debug the code.

Data Flow-based Testing


The data flow-based testing method selects test paths of a program according to the definitions and uses of the different variables in the program.
The all-definitions criterion is a test coverage criterion which requires that an adequate test set should cover all definition occurrences in the sense that, for each definition occurrence, the testing paths should cover a path through which the definition reaches a use of the definition.
The all-uses criterion requires that all uses of a definition should be covered.
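The sketch below marks the definition and use occurrences of a variable in a small hypothetical function, which is the kind of information data flow-based testing works from:

    def average(values):
        total = 0.0                     # definition d1 of `total`
        for v in values:
            total = total + v           # use of `total`, followed by a new definition d2
        return total / len(values)      # use of `total`

    # The all-definitions criterion asks for test paths on which each
    # definition (d1, d2) reaches at least one of its uses; the all-uses
    # criterion asks that every use of each definition is covered. Two
    # sample test cases exercising different definition-use pairs:
    print(average([5.0]))               # d1 reaches the use in the loop; d2 reaches the use in return
    print(average([1.0, 3.0]))          # d2 also reaches the use inside the loop (second iteration)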

References
Mall, Rajib (2014). Fundamentals of Software Engineering, Fourth Edition.

