ASE Coding and Testing

The document discusses coding, testing, and documentation practices for software development. It covers: 1) Coding involves transforming a design into code and unit testing code modules. Testing aims to find defects. Coding standards promote uniformity, readability, and best practices. 2) Representative coding standards and guidelines cover naming conventions, error handling, documentation, and limiting global variables. Code walkthroughs and inspections aim to find errors before testing. 3) Software documentation includes internal comments and external documents. Unit testing involves designing test cases and using driver and stub modules to test individual code modules in isolation.


Coding and Testing

Introduction
Testing: The aim of the testing process is to identify all the defects in a
software product. Testing does expose many of the defects in a software
product; therefore, testing provides a practical way of reducing defects in a
system and increasing the users' confidence in the developed system.
Coding: The main objective of the coding phase is to transform the design of
the system as given by its module specification into a high-level language
code and then to unit test this code. During the coding phase, different
modules identified in the design document are coded according to the
module specifications as specified in the design document.
Normally, good software development organizations adhere to some well-
defined and standard style of coding called coding standards. The reasons for
adhering to a standard coding style are the following:
► It gives a uniform appearance to the codes written by different
programmers.
► It enhances code understanding.
► It encourages good programming practices.
After a module has been coded, usually code inspection and code walk-
throughs are carried out to ensure that coding standards are followed and to
detect as many errors as possible before testing.

Coding Standards and Guidelines


Different organizations usually develop their own coding standards and
guidelines depending on what best suits them. Some of the representative
coding standards and guidelines commonly adopted by many software
development organizations are as follows:

Representative Coding Standards


Rules for limiting the use of global variables: These rules list what types of
data can be declared global and what cannot.
Contents of the headers preceding codes for different modules: The
information contained in the headers of different modules should be in a
standard format. The following are some standard header data:
► Name of the module.
► Date on which the module was created.
► Author’s name.
► Modification history.
► Synopsis of the module.
► Different functions supported along with their input/output parameters.
► Global variables accessed/modified by the module.
Naming conventions for global variables, local variables and constant
identifiers: A possible naming convention can be that global variable names
always start with a capital letter, local variable names are made up of small
letters and constant names are always capital letters.
Error return conventions and exception handling mechanisms: The way error
conditions are reported by different functions in a program and the way
common exception conditions are handled should be standardized within an
organization.

Representative Coding Guidelines


The following are some representative coding guidelines recommended by
many software development organizations.
► Do not use a coding style that is too clever or difficult to understand.
► Avoid obscure side effects, e.g., (*p++ != 0 && *q++ != 0).
► Do not use an identifier for multiple purposes.
► The code should be well-documented.
► The length of any function should not exceed 10 source lines.
► Do not use goto statements.

Code Walk-Throughs
A code walk-through is an informal technique for analysis of the code.
► The team performing a code walk-through should be neither too big nor
too small. Ideally, it should consist of three to seven members.
► Discussions should be focused on the discovery of errors and not on how
to fix the discovered errors.

Code Inspections
In contrast to code walk-throughs, code inspections aim explicitly at the
discovery of commonly made mistakes. Most software development
companies collect statistics to identify the type of errors most frequently
committed. Such a list of commonly committed errors can be used during
code inspections to keep a look-out for possible errors.
The following is a list of some classical programming errors which can be
looked for during code inspections:
► Use of uninitialized variables.
► Jumps into loops.
► Non-terminating loops.
► Incompatible assignments.
► Array indices out of bounds.
► Improper storage allocation and deallocation.
► Mismatches between actual and formal parameters in procedure calls.
► Use of incorrect logical operators or incorrect precedence among
operators.
► Improper modifications of loop variables.
► Comparison of floating point values for equality, etc.
Adherence to coding standards is also checked during code inspections.
Software Documentation
While developing a software product, in addition to executable files and the
source code, various kinds of other documents such as user’s manual, SRS
document, design document, test document, installation manual etc., are
also developed. All these documents are a vital part of any good software
development practice. Good documents enhance understandability and
maintainability of a software product.
Different types of software documentation can be broadly classified into:
► Internal documentation.
► External documentation (supporting documents).

Internal documentation is the code comprehension features provided as part


of the source code itself. Internal documentation is provided through
appropriate module headers and comments embedded in the source code.
Internal documentation is also provided through the use of meaningful
variable names, code indentation, code structuring, use of enumerated types
and constant identifiers, use of user-defined data types, etc. Most software
development organizations usually ensure good internal documentation by
appropriately formulating their coding standards and coding guidelines.
External documentation is provided through various types of supporting
documents such as user’s manual, SRS document, design document, test
documents, etc. A systematic software development style ensures that all
these documents are produced in an orderly fashion.
Unit testing
Testing a program consists of providing the program with a set of test inputs
and observing if the program behaves as expected. If the program fails to
behave as expected, then the conditions under which a failure occurs are
noted for debugging and correction. The following are some commonly used
terms associated with testing:
► A failure is a manifestation of an error (defect or bug). However, the mere
presence of an error may not necessarily lead to a failure.
► A fault is an incorrect intermediate state that may have been entered
during program execution, e.g. a variable value is different from what it
should be. A fault may or may not lead to a failure.
► A test case is the triplet [I, S, O], where I is the data input to the system,
S is the state of the system at which the data is input and O is the
expected output of the system.
► A test suite is the set of all test cases with which a given software
product is to be tested.

Verification and Validation


Verification is the process of determining whether one phase of a software
product conforms to its previous phase, whereas validation is the process of
determining whether a fully developed system conforms to its requirements
specification. Thus, while verification is concerned with phase containment of
errors, the aim of validation is to make the final product error-free.
Testing process.
Design of Test Cases
► Exhaustive testing of almost any non-trivial system is impractical.
► An optimal test suite that is of reasonable size must be designed.
► The number of random test cases in a test suite is, in general, not an
indication of the effectiveness of the testing.
The following code segment has a simple programming error.
if (x>y) max = x;
else max = x;
For the above code segment, the test set {(x=3, y=2); (x=2, y=3)} can
detect the error, whereas a larger test set {(x=3, y=2); (x=4, y=3); (x=5,
y=1)} does not detect the error.
The two main approaches to designing test cases are:
► Black-box approach
► White-box approach
In the black-box approach, test cases are designed using only the functional
specification of the software. Black-box testing is also known as functional
testing.
On the other hand, designing white-box test cases requires thorough
knowledge of the internal structure of the software; therefore, white-box
testing is also called structural testing.

Driver and Stub Modules


In order to test a single module, we need a complete environment to provide
all that is necessary for execution of the module. That is, besides the module
under test itself, we need the following in order to be able to test the
module:
► The procedures called by the module under test that do not belong to it.
► Nonlocal data structures that the module accesses.
► A procedure to call the functions of the module under test with
appropriate parameters.
Stubs and drivers are designed to provide the complete environment for a
module. The role of stub and driver modules is shown in Figure. A stub
procedure for a given procedure is a dummy procedure that has the same
I/O parameters as the given procedure but has a highly simplified behavior.
For example, a stub procedure may produce the expected behavior using
table lookup.
A driver module would contain the non local data structures accessed by the
module under test and would also have the code to call the different
functions of the module with appropriate parameter values.

Figure: Unit testing with the help of driver and stub modules (the driver
module calls the module under test, which in turn calls the stub modules).
Black-Box Testing
There are essentially the following two main approaches to designing black-
box test cases.
► Equivalence class partitioning.
► Boundary value analysis.

Equivalence Class Partitioning


In this approach, the domain of input values to a program is partitioned into
a set of equivalence classes. This partitioning is done such that the behavior
of the program is similar for every input data belonging to the same
equivalence class. Equivalence classes for a software can be designed by
examining the input data. The following are some general guidelines for
designing the equivalence classes:
► If input data values to a system can be specified by a range of values,
then one valid and two invalid equivalence classes should be defined.
► If the input data can assume values from a set of discrete members of
some domain, then one equivalence class for valid input values and
another equivalence class for invalid input values should be defined.
Example: For a software that computes the square root of an input integer
which can assume values in the range from 1 to 5000, there are three
equivalence classes: the set of integers less than 1, the set of integers in the
range from 1 to 5000, and the set of integers larger than 5000. Therefore, the
test cases must include representatives from each of the three equivalence
classes, and a possible test set is {-5, 500, 6000}.

Boundary Value Analysis


Some typical programming errors occur at the boundaries of different
equivalence classes of inputs. The reason for such errors might purely be due
to psychological factors. Programmers often fail to see the special processing
required by the input values that lie at the boundary of the different
equivalence classes. For example, programmers may improperly use <
instead of <=, or vice versa. Boundary value analysis leads to the selection of
test cases at the boundaries of the different equivalence classes.
Example: For a software that computes the square root of integer values in
the range from 1 to 5000, the test cases must include the following values:
{0, 1, 5000, 5001}.
White-Box Testing
Specific Instructional Objectives
At the end of this lesson the student would be able to:
• In the context of white box testing strategy, differentiate between stronger
testing and complementary testing.
• Design statement coverage test cases for a code segment.
• Design branch coverage test cases for a code segment.
• Design condition coverage test cases for a code segment .
• Design path coverage test cases for a code segment.
• Draw control flow graph for any program.
• Identify the linearly independent paths.
• Compute cyclomatic complexity from any control flow graph.
• Explain data flow-based testing.
• Explain mutation testing.
Several methodologies are used for white-box testing. Some of the important
methodologies are as follows:
► Statement Coverage
► Branch Coverage
► Condition Coverage
► Path Coverage
Statement Coverage
Example: Consider the Euclid’s GCD computation algorithm:

int compute_gcd(int x, int y)
{
    while (x != y)
    {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

Why is branch coverage not the same thing as statement coverage?
 Consider an if with no else, or a switch with no default case.
 Statement coverage can be achieved without achieving branch coverage.

By choosing the test set {(x=3, y=3); (x=4, y=3); (x=3, y=4)}, we exercise
the program such that all statements are executed at least once.
Branch Coverage

In the branch coverage-based testing strategy, test cases are designed to


make each branch condition to assume true and false values in turn.
► Similar to statement coverage, but stronger
► Test every branch in all possible directions
► If statements
 Test both positive and negative directions
► Switch statements
 Test every branch
 If no default case, test a value that doesn't match any case
► Loop statements
 Test for both 0 and > 0 iterations

For Euclid’s GCD computation algorithm, the test cases for branch coverage
can be {(x=3, y=3), (x=3, y=2), (x=4, y=3), (x=3, y=4)}.
Condition Coverage
In this structural testing, test cases are designed such that each component
of a condition of a composite conditional expression is given both true and
false values. For example, in the conditional expression ((c1 and c2) or c3),
c1, c2 and c3 are each exercised at least once.

► For each compound condition, C


► Find the simple sub-expressions that make up C
 Suppose there are n of them
► Create a test case for all 2^n T/F combinations of the simple expressions
 If (!done && (value < 100 || c == 'X')) …
 Simple sub-expressions: !done, value < 100, c == 'X'
 n = 3
 Need 8 test cases to test all possibilities
 Use a “truth table” to make sure that all possible combinations are
covered by your test cases.
 Doing this kind of exhaustive condition testing everywhere is
usually not feasible.

!done value < 100 c == ‘X’


Case 1: False False False
Case 2: True False False
Case 3: False True False
Case 4: False False True
Case 5: True True False
Case 6: True False True
Case 7: False True True
Case 8: True True True
Path Coverage
The path coverage-based testing strategy requires us to design test cases
such that all linearly independent paths in the program are executed at least
once. A linearly independent path is defined in terms of the control flow
graph (CFG) of a program.
Example program segments for drawing control flow graphs:

Sequence:
1. a = 5
2. b = a * 2 + 1

Selection:
1. if (a > b)
2. big = a;
3. else big = b;
4. print big

Iteration:
1. while (a > b)
2. { a = a - b;
3.   b = b - a; }
4. print a
McCabe’s Cyclomatic Complexity Metric

McCabe’s cyclomatic complexity of a program defines the number of


independent paths in a program. Given a control flow graph G of a program,
the cyclomatic complexity V(G) can be computed as:
V(G) = E – N + 2
Where N is the number of nodes of the control graph and E is the number of
edges in the control graph.
An alternative way of computing the cyclomatic complexity of a program
from an inspection of its control flow graph is as follows:
V(G) = Total number of bounded areas + 1
The cyclomatic complexity of a program provides a lower bound on the
number of test cases that must be designed and executed to guarantee
coverage of all linearly independent paths in a program.

Derivation of Test Cases


The following is the sequence of steps that need to be undertaken for
deriving the path coverage-based test cases of a program.
• Draw the control flow graph.
• Determine V(G).
• Determine the basis set of linearly independent paths.
• Prepare a test case that will force execution of each path in the basis set.

Data Flow-Based Testing

The data flow-based testing method selects the test paths of a program
according to the locations of the definitions and uses of different variables in
a program.

Mutation Testing

In mutation testing, the program is deliberately modified by making small
changes to it; each changed version of the program is called a mutant. The
goal is to make sure that, during the course of testing, each mutant produces
an output different from the output of the original program for at least one
test case; if a mutant cannot be distinguished, additional test cases are
designed.
Debugging and Integration Testing

Specific Instructional Objectives


• Explain why debugging is needed.
• Explain three approaches of debugging.
• Explain three guidelines for effective debugging.
• Explain what is meant by a program analysis tool.
• Explain the functions of a static program analysis tool.
• Explain the functions of a dynamic program analysis tool.
• Explain the type of failures detected by integration testing.
• Identify four types of integration test approaches and explain them.
• Differentiate between phased and incremental testing in the context of
integration testing.
• Identify three types of system testing and differentiate among them.
• Explain what is meant by error seeding.
• Explain what functions are performed by regression testing.
Once errors are identified, it is necessary to first identify the precise location
of the errors and then to fix them.
Debugging Approaches
► Brute Force Method
► Backtracking
► Cause Elimination Method
► Program Slicing

Debugging Guidelines
The following are some general guidelines for effective debugging:

► Many times, debugging requires a thorough understanding of the program


design.
► Debugging may sometimes even require a full redesign of the system.
► One must be aware of the possibility that an error correction may
introduce new errors. Therefore, after every round of error-fixing,
regression testing must be carried out.
Program Analysis Tools

A program analysis tool usually means an automated tool that takes the
source code of a program as input and produces reports regarding several
important characteristics of the program such as the size, complexity,
adequacy of commenting, adherence to programming standards, etc. Also,
some program analysis tools produce reports regarding the adequacy of the
test cases. There are essentially two categories of program analysis tools:
► Static analysis tools.
► Dynamic analysis tools.

Static Analysis Tools

Static analysis tools assess and portray the properties of a software product
without executing it. Typically, static analysis tools analyze some structural
representation of a program to arrive at certain analytical conclusions. The
structural properties that are usually analyzed are:
► Whether the coding standards have been adhered to.
► Certain programming errors such as uninitialized variables, mismatch
between actual and formal parameters, variables that are declared but
never used, etc.
A major practical limitation of the static analysis tools lies in handling
dynamic evaluation of memory references at run-time.
Static analysis tools often summarize the results of analysis of every function
in a polar chart known as Kiviat Chart. A Kiviat Chart typically shows the
analyzed values for cyclomatic complexity, number of source lines,
percentage of comment lines, Halstead’s metrics, etc.

Dynamic Analysis Tools


Dynamic program analysis techniques require the program to be executed
and its actual behavior recorded. A dynamic analyzer usually instruments the
code of the software to be tested in order to record the behavior of the
software for different test cases. Dynamic analysis tools carry out a post-
execution analysis and produce reports that describe the structural
coverage achieved by the complete test suite for the program.
For example, the report might provide the proportion of the program
components executed in terms of statement coverage and branch coverage.
Integration Testing

During integration testing, different modules of a system are integrated using


an integration plan. The primary objective of integration testing is to test the
module interfaces. An important factor that guides the integration plan is the
module dependency graph, which denotes the order in which different
modules call each other. A structure chart is a form of a module dependency
graph. Thus, by examining the structure chart the integration plan can be
developed based on any of the following approaches:

► Big-bang approach
► Top-down approach
► Bottom-up approach
► Mixed approach
Big-Bang Approach

Advantages:
Convenient for small systems.

Disadvantages:
Fault Localization is difficult.
Given the sheer number of interfaces that need to be tested in this
approach, some interface links to be tested could easily be missed.
Since integration testing can commence only after all the modules
are developed, the testing team will have less time for execution in
the testing phase.
Since all modules are tested at once, high-risk critical modules are
not isolated and tested on priority. Peripheral modules which deal with
user interfaces are also not isolated and tested on priority.
Bottom-Up Approach

Advantages:
Fault localization is easier.
No time is wasted waiting for all modules to be developed, unlike in
the Big-bang approach.

Disadvantages:
Critical modules (at the top level of software architecture) which
control the flow of application are tested last and may be prone to
defects.
An early prototype is not possible.
Bottom Up Testing
Top-Down Approach
Advantages:
Fault Localization is easier.
Possibility to obtain an early prototype.
Critical Modules are tested on priority; major design flaws could be found and
fixed first.

Disadvantages:
Needs many Stubs.
Modules at a lower level are tested inadequately.

Phased vs. Incremental Integration Testing


 In incremental integration testing, only one new module is added to the partial
system each time.
 In phased integration, a group of related modules is added to the partial
system each time.
Top Down Testing
Example of Integration Test Case
An integration test case differs from other test cases in the sense that
it focuses mainly on the interfaces and the flow of data/information
between the modules. Here, priority is given to the integrating
links rather than to the unit functions, which have already been tested.

Sample Integration Test Cases for the following scenario:


The application has three modules, say 'Login Page', 'Mailbox' and 'Delete
Emails', and each of them is integrated logically.
Here, do not concentrate much on Login Page testing, as it has already
been done in unit testing; instead, check how it is linked to the Mail Box
page.
Similarly for the Mail Box: check its integration with the Delete Mails module.
Best Practices/ Guidelines for Integration Testing
First, determine the Integration Test Strategy that could be adopted
and later prepare the test cases and test data accordingly.
Study the Architecture design of the Application and identify the
Critical Modules. These need to be tested on priority.
Obtain the interface designs from the Architectural team and create
test cases to verify all of the interfaces in detail. Interface to
database/external hardware/software application must be tested in
detail.
After the test cases, it is the test data which plays the critical role.
Always have the mock data prepared, prior to executing. Do not select
test data while executing the test cases.
System Testing
System tests are designed to validate a fully developed system with a view to
assuring that it meets its requirements. There are essentially three main
kinds of system testing:
► Alpha testing: Alpha testing refers to the system testing that is carried out
by the test team within the organization.
► Beta testing: Beta testing is the system testing performed by a select
group of friendly customers.
► Acceptance testing: Acceptance testing is the system testing performed by
the customer to determine whether or not to accept the delivery of the
system.

Stress Testing
Stress testing is also known as endurance testing. Stress tests are black-box
tests which are designed to impose a range of abnormal and even illegal
input conditions so as to stress the capabilities of the software. Input data
volume, input data rate, processing time, utilization of memory, etc. are
tested beyond the designed capacity.
Stress testing usually involves an element of time and size, such as the
number of records transferred per unit time, the maximum number of users
active at any time, input data size, etc. Therefore, stress testing may not be
applicable to many types of systems.
Error Seeding
Error seeding as the name implies, seeds the code with some known errors.
In other words, some artificial errors are introduced into the program. The
number of these seeded errors detected in the course of the standard testing
procedure is determined. These values in conjunction with the number of
unseeded errors detected can be used to predict:
► The number of errors remaining in the product.
► The effectiveness of the testing strategy.

Let N be the total number of defects in the system and let n of these defects
be found by testing.
Let S be the total number of seeded defects, and let s of these defects be
found during testing.
Therefore, n/N=s/S or N = S * n / s
Remaining defects = N – n = n * ( ( S – s ) / s)
General Issues Associated with Testing

► Testing documentation.
► Regression Testing.

Regression Testing

This type of testing is required when the system being tested is an
upgrade of an already existing system, carried out to fix some bugs or
enhance functionality, performance, etc. Regression testing is the practice of
running
an old test suite after each change to the system or after each bug fix to
ensure that no new bug has been introduced due to the change or the bug
fix. However, if only a few statements are changed, then the entire test suite
need not be run - only those test cases that test the functions that are likely
to be affected by the change need to be run.
Professional Ethics
► Put quality first : Even if you lose the argument, you will gain respect
► If you can’t test it, don’t build it
► Begin test activities early
► Decouple
 Designs should be independent of language
 Programs should be independent of environment
 Couplings are weaknesses in the software!
► Don’t take shortcuts
 If you lose the argument you will gain respect
 Document your objections
 Vote with your feet
 Don’t be afraid to be right!
Standard Test Plan
► ANSI / IEEE Standard 829-1983 is ancient but still used

Test Plan
A document describing the scope, approach, resources, and schedule
of intended testing activities. It identifies test items, the features to be
tested, the testing tasks, who will do each task, and any risks requiring
contingency planning.

• Many organizations are required to adhere to this standard.

• Unfortunately, this standard emphasizes documentation, not actual


testing – often resulting in a well documented vacuum.
Types of Test Plans
► Mission plan – tells “why”
 Usually one mission plan per organization or group
 Least detailed type of test plan

► Strategic plan – tells “what” and “when”


 Usually one per organization, or perhaps for each type of project
 General requirements for coverage criteria to use

► Tactical plan – tells “how” and “who”


 One per product
 More detailed
 Living document, containing test requirements, tools, results and
issues such as integration order
Test Plan Contents – System Testing
► Purpose
► Target audience and application
► Deliverables
► Information included
 Introduction  Hardware and software requirements
 Test items  Responsibilities for severity ratings
 Features tested  Staffing & training needs
 Features not tested  Test schedules
 Test criteria  Risks and contingencies

 Pass / fail standards  Approvals

 Criteria for starting testing


 Criteria for suspending testing
 Requirements for testing restart
Test Plan Contents – Tactical Testing

 Purpose  Testing tasks


 Outline  Environmental needs
 Test-plan ID  Responsibilities
 Introduction  Staffing & training needs
 Test reference items  Schedule
 Features that will be tested  Risks and contingencies
 Features that will not be tested  Approvals
 Approach to testing (criteria)
 Criteria for pass / fail
 Criteria for suspending testing
 Criteria for restarting testing
 Test deliverables
Summary
 Most software development organizations formulate their own coding
standards and expect their engineers to adhere to them. On the other
hand, coding guidelines serve as general suggestions to programmers
regarding good programming styles, but the implementation of the
guidelines is left to the discretion of the individual engineers.

 Code review is an efficient way of removing errors as compared to testing,


because code review identifies errors whereas testing identifies failures.
Therefore, after identifying failures, additional efforts (debugging) must
be done to locate and fix the errors.

 Exhaustive testing of almost any non-trivial system is impractical. Also,


random selection of test cases is inefficient since many test cases become
redundant as they detect the same type of errors. Therefore, we need to
design a minimal set of test cases that would expose as many errors as
possible.
Summary
 There are two well-known approaches to testing—black-box testing and
white-box testing. Black box testing is also known as functional testing.
Designing test cases for black box testing does not require any
knowledge about how the functions have been designed and
implemented. On the other hand, white-box testing requires knowledge
about internals of the software.

 We observed that the system test suite is designed based on the SRS
document. The two major types of system testing are functionality
testing and performance testing. The functionality test cases are
designed based on the functional requirements, and the performance test
cases are designed to check the compliance of the system with the non-
functional requirements documented in the SRS document.

*****
