SE Unit 4 - Final

The document outlines the coding and testing process in software development, emphasizing the importance of coding standards, guidelines, and documentation. It details the phases of coding, unit testing, integration testing, and system testing, along with techniques like code reviews and black-box testing. Additionally, it defines key terminologies such as mistakes, errors, failures, and the difference between verification and validation in the context of software quality assurance.


(19A05404T)- CODING AND TESTING (Unit-4)

Coding Standards and Guidelines

 Coding is undertaken once the design phase is complete and the design documents have been
successfully reviewed.
 In the coding phase, every module specified in the design document is coded and unit tested.
 During unit testing, each module is tested in isolation from other modules. That is, a module is
tested independently as and when its coding is complete.
 After all the modules of a system have been coded and unit tested, the integration and system
testing phase is undertaken.
 Over the years, the general perception of testing as monkeys typing in random data and trying to
crash the system has changed. Now testers are looked upon as masters of specialised concepts,
techniques, and tools.

CODING: The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.

Coding Standards and Guidelines

 Normally, good software development organisations require their programmers to adhere to some well-defined and standard style of coding, which is called their coding standard.
 It is mandatory for the programmers to follow the coding standards.
 Compliance of the code with the coding standards is verified during code inspection. Any code that does not conform to the coding standards is rejected during code review, and the code is reworked by the concerned programmer.
 Good software development organisations usually develop their own coding standards and
guidelines depending on what suits their organization best and based on the specific types of
software they develop.
Coding Standards
1. Rules for limiting the use of globals: These rules list what types of data can be declared global and what cannot, with a view to limiting the data that needs to be defined with global scope.
2. Standard headers for different modules: The header of different modules should have standard
format and information for ease of understanding and maintenance.
The following is an example of header format that is being used in some companies:
a. Name of the module.
b. Date on which the module was created.
c. Author’s name.
d. Modification history.
e. Synopsis of the module.
f. Different functions supported in the module, along with their input/output parameters.
g. Global variables accessed/modified by the module.
3. Naming conventions for global variables, local variables, and constant identifiers: A popular
naming convention is that variables are named using mixed case lettering. Global variable names
would always start with a capital letter (e.g., GlobalData) and local variable names start with small
letters (e.g., localData). Constant names should be formed using capital letters only (e.g.,
CONSTDATA).

Designed by Dr. Penchal, NECN for JNTUA-R19



4. Conventions regarding error return values and exception handling mechanisms: The way error conditions are reported by different functions in a program should be standard within an organization. For example, on encountering an error condition, all functions should consistently return either a 0 or a 1, independent of which programmer has written the code. This facilitates reuse and debugging.
Coding Guidelines
1. Do not use a coding style that is too clever or too difficult to understand: Code should be easy to
understand. Many inexperienced engineers actually take pride in writing cryptic and
incomprehensible code. Clever coding can obscure meaning of the code and reduce code
understandability; thereby making maintenance and debugging difficult and expensive.
2. Avoid obscure side effects: The side effects of a function call include modifications to the
parameters passed by reference, modification of global variables, and I/O operations. An obscure
side effect is one that is not obvious from a casual examination of the code.
3. Do not use an identifier for multiple purposes: Programmers often use the same identifier to
denote several temporary entities.
4. Each variable should be given a descriptive name indicating its purpose.
5. Use of variables for multiple purposes usually makes future enhancements more difficult.
6. Code should be well-documented: As a rule of thumb, there should be at least one comment line on average for every three source lines of code.
7. Length of any function should not exceed 10 source lines: A lengthy function is usually very difficult to understand, as it probably has a large number of variables and carries out many different types of computations. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.
8. Do not use GO TO statements: Use of GO TO statements makes a program unstructured. This makes the program very difficult to understand, debug, and maintain.

CODE REVIEW

 Review is a very effective technique to remove defects from source code. In fact, review has been
acknowledged to be more cost-effective in removing defects as compared to testing.
 Testing is an effective defect removal mechanism. However, testing is applicable to only
executable code.
 The reason code review is a much more cost-effective strategy for eliminating errors than testing is that reviews directly detect errors. Testing, on the other hand, only helps detect failures, and significant additional effort is needed to locate the error during debugging.
 Normally, the following two types of reviews are carried out on the code of a module:
o Code walkthrough.
o Code inspection.

Code walkthrough:

1. The main objective of code walkthrough is to discover the algorithmic and logical errors in the
code.
2. Code walkthrough is an informal code analysis technique.
3. In this technique, a module is taken up for review after the module has been coded,
successfully compiled, and all syntax errors have been eliminated.


4. A few members of the development team are given the code a couple of days before the
walkthrough meeting.
5. Each member selects some test cases and simulates execution of the code by hand (i.e., traces
the execution through different statements and functions of the code).
6. The members note down the findings of their walkthrough and discuss them in a walkthrough meeting where the coder of the module is present.
Code Inspection:
1. The principal aim of code inspection is to check for the presence of some common types of
errors that usually creep into code due to programmer mistakes and oversights and to check
whether coding standards have been adhered to.
2. The programmer usually receives feedback on programming style, choice of algorithm, and
programming techniques.
Following is a list of some classical programming errors which can be checked during code
inspection:
 Use of uninitialized variables.
 Jumps into loops.
 Non-terminating loops.
 Incompatible assignments.
 Array indices out of bounds.
 Improper storage allocation and deallocation.
 Use of incorrect logical operators or incorrect precedence among operators.
 Dangling reference caused when the referenced memory has not been allocated.

SOFTWARE DOCUMENTATION
When software is developed, in addition to the executable files and the source code, several kinds of
documents such as users’ manual, software requirements specification (SRS) document, design
document, test document, installation manual, etc., are developed as part of the software engineering
process. All these documents are considered a vital part of any good software development practice.
Good documents are helpful in the following ways:

 Good documents help enhance understandability of code.
 Documents help the users to understand and effectively use the system.
 Good documents help to effectively tackle the manpower turnover problem.
 Production of good documents helps the manager to effectively track the progress of the project.


Different types of software documents can broadly be classified into the following:
Internal documentation:
1. These are provided in the source code itself.
2. Internal documentation can be provided in the code in several forms.
3. The important types of internal documentation are the following:
a. Comments embedded in the source code.
b. Use of meaningful variable names.
c. Module and function headers.
d. Code indentation.
e. Code structuring (i.e., code decomposed into modules and functions).
f. Use of enumerated types.
g. Use of constant identifiers.
h. Use of user-defined data types.
External documentation: These are the supporting documents, such as the SRS document, installation document, user manual, design document, and test document.
Gunning’s fog index:
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric designed to measure the readability of a document. The computed metric value (fog index) of a document indicates the number of years of formal education that a person should have in order to comfortably understand that document. It is computed as:
Fog index = 0.4 × (average number of words per sentence + percentage of words with three or more syllables)

Example 10.1 Consider the following sentence: “The Gunning’s fog index is based on the premise that use of short sentences and simple words makes a document easy to understand.” Calculate its fog index.
Answer: The sentence contains 23 words, of which three (sentences, document, understand) have three or more syllables. Its fog index is therefore 0.4 × (23/1 + (3/23) × 100) ≈ 0.4 × (23 + 13) ≈ 14.4.

If a users’ manual is to be designed for use by factory workers whose educational qualification is class
8, then the document should be written such that the Gunning’s fog index of the document does not
exceed 8.

TESTING

Definition: Testing a program involves executing the program with a set of test inputs and observing if
the program behaves as expected. If the program fails to behave as expected, then the input data and
the conditions under which it fails are noted for later debugging and error correction.


Terminologies

A few important terminologies have been standardised by the IEEE Standard Glossary of Software Engineering Terminology [IEEE90]:

1. Mistake: A mistake is essentially any programmer action that later shows up as an incorrect
result during program execution. A programmer may commit a mistake in almost any
development activity.
For example, during coding a programmer might commit the mistake of not initialising a certain variable, or might overlook the errors that might arise in some exceptional situations such as division by zero in an arithmetic operation.
2. Error: An error is the result of a mistake committed by a developer in any of the development activities. One example of an error is a call made to a wrong function. The terms error, fault, bug, and defect are considered to be synonyms in the area of program testing.
3. Failure: A failure of a program essentially denotes an incorrect behaviour exhibited by the
program during its execution. Every failure is caused by some bugs present in the program.
Example: A program crashes on an input.

Note: It may be noted that mere presence of an error in a program code may not necessarily lead to a
failure during its execution.

In the above code, if the variable roll assumes zero or some negative value under some circumstances,
then an array index out of bound type of error would result.

4. Test Case: A test case is a specification of the inputs, execution conditions, testing procedure,
and expected results that define a single test to be executed to achieve a particular software
testing objective.
5. Test suite: A test suite is the set of all tests that have been designed by a tester to test a given program.
6. Testability: A program is more testable if it can be adequately tested with a smaller number of test cases. Obviously, a less complex program is more testable.

Verification vs Validation
Barry Boehm described verification and validation as the following:
 Verification: Are we building the product right?
 Validation: Are we building the right product?
1. Verification:
Verification is the process of checking that the software achieves its goals without any bugs. It ensures that the product is being developed correctly, i.e., that the developed product fulfils the specified requirements.
Verification is static testing.
Activities involved in verification:
 Inspections


 Reviews
 Walkthroughs
2. Validation:
Validation is the process of checking whether the software product meets the customer’s actual, high-level requirements, i.e., whether we are developing the right product. It compares the actual product against the expected product.
Validation is dynamic testing.

Activities involved in validation:


1. Black box testing
2. White box testing
3. Unit testing
4. Integration testing
Error detection techniques = Verification techniques + Validation techniques

Testing Activities

1. Test suite design
2. Running test cases and checking the results to detect failures
3. Locate errors
4. Error correction

Levels of Testing
A software product is normally tested in three levels or stages:
1. Unit testing: During unit testing, the individual functions (or units) of a program are tested.
2. Integration testing: After testing all the units individually, the units are slowly integrated and
tested after each step of integration (integration testing).
3. System testing: Finally, the fully integrated system is tested (system testing).


Unit Testing
Unit testing is defined as a type of software testing where individual components of software are tested. Unit testing of a software product is carried out during the development of an application. An individual component may be either an individual function or a procedure. Unit testing is typically performed by the developer.

Objective of Unit Testing:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
Integration testing
Integration testing is the process of testing the interface between two software units or modules. It focuses on determining the correctness of the interface. There are four types of integration testing approaches:

a. Big-Bang Integration Testing: This is the simplest integration testing approach, in which all the modules are combined and the functionality is verified after the completion of individual module testing.
b. Bottom-Up Integration Testing: In bottom-up testing, each module at lower levels is tested
with higher modules until all modules are tested.
c. Top-Down Integration Testing: In top-down testing, integration starts with the top-level module, and stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated.
d. Mixed Integration Testing: Mixed integration testing is also called sandwiched integration testing. It follows a combination of top-down and bottom-up testing approaches.
System Testing
System Testing is carried out on the whole system in the context of either system requirement
specifications or functional requirement specifications or in the context of both. System testing tests
the design and behavior of the system and also the expectations of the customer. It is performed to
test the system beyond the bounds mentioned in the software requirements specification (SRS).

BLACK-BOX TESTING

In black-box testing, test cases are designed from an examination of the input/output values only and no
knowledge of design or code is required. The following are the two main approaches available to design black
box test cases:

 Equivalence class partitioning
 Boundary value analysis

 Equivalence Class Partitioning

In the equivalence class partitioning approach, the domain of input values to the program under test is
partitioned into a set of equivalence classes. The partitioning is done such that for every input data
belonging to the same equivalence class, the program behaves similarly.
Example 1: For a software function that computes the square root of an input integer that can assume values in the range of 0 to 5000, determine the equivalence classes and the black-box test suite.


Answer: There are three equivalence classes—The set of negative integers, the set of integers in the
range of 0 and 5000, and the set of integers larger than 5000. Therefore, the test cases must include
representatives for each of the three equivalence classes. A possible test suite can be: {–5,500,6000}.

Example 2: Design equivalence class partitioning test suite for a function that reads a character string
of size less than five characters and displays whether it is a palindrome.

Answer: The equivalence classes are the leaf level classes shown in Figure 10.4. The equivalence
classes are palindromes, non-palindromes, and invalid inputs. Now, selecting one representative value
from each equivalence class, we have the required test suite: {abc,aba,abcdef}.

 Boundary Value Analysis
Boundary value analysis-based test suite design involves designing test cases using the values at the
boundaries of different equivalence classes.
For example, programmers may improperly use < instead of <=, or conversely <= for <, etc.
Example 10.9 For a function that computes the square root of the integer values in the range of 0 and
5000, determine the boundary value test suite.
Answer: There are three equivalence classes—The set of negative integers, the set of integers in the
range of 0 and 5000, and the set of integers larger than 5000. The boundary value-based test suite is:
{0,-1,5000,5001}.

Important steps in the black-box test suite design approach:
1. Examine the input and output values of the program.
2. Identify the equivalence classes.
3. Design equivalence class test cases by picking one representative value from each equivalence
class.
4. Design the boundary value test cases as follows. Examine if any equivalence class is a range of
values. Include the values at the boundaries of such equivalence classes in the test suite.

WHITE-BOX TESTING

White-box testing is an important type of unit testing. A large number of white-box testing strategies exist. A white-box testing strategy can either be (i) coverage-based or (ii) fault-based.
Coverage-based testing
A coverage-based testing strategy attempts to execute (or cover) certain elements of a program.
Popular examples of coverage-based testing strategies are statement coverage, branch coverage,
multiple condition coverage, and path coverage-based testing.


Fault-based testing
A fault-based testing strategy targets to detect certain types of faults. These faults that a test strategy
focuses on constitute the fault model of the strategy. An example of a fault-based strategy is
mutation testing (testers change specific components of an application's source code to ensure a
software test suite will be able to detect the changes).

Stronger versus weaker testing
We have mentioned that a large number of white-box testing strategies have been proposed. It therefore becomes necessary to compare the effectiveness of different testing strategies in detecting faults. We can compare two testing strategies by determining whether one is stronger than, weaker than, or complementary to the other.

 A stronger testing strategy covers all the program elements covered by a weaker testing strategy, and additionally covers at least one program element that is not covered by the weaker strategy.
 If stronger testing has been performed, then the weaker testing need not be carried out.

COVERAGE-BASED TESTING
Statement Coverage: The statement coverage strategy aims to design test cases so as to execute
every statement in a program at least once.
Example Design statement coverage-based test suite for the following Euclid’s GCD computation
program:

Answer: To design the test cases for the statement coverage, the conditional expression of the while
statement needs to be made true and the conditional expression of the if statement needs to be made
both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y =4)}, all statements of
the program would be executed at least once.

Branch Coverage: A test suite satisfies branch coverage if it makes each branch condition in the program assume true and false values in turn.
Example 2: Design a branch coverage-based test suite for the same Euclid’s GCD computation program.
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x =3, y = 4)} achieves branch coverage.

Note: Branch coverage-based testing is stronger than statement coverage-based testing.

Multiple Condition Coverage: In the multiple condition (MC) coverage-based testing, test cases are
designed to make each component of a composite conditional expression to assume both true and
false values. For example, consider the composite conditional expression ((c1 .and.c2 ).or.c3). A test
suite would achieve MC coverage, if all the component conditions c1, c2 and c3 are each made to
assume both true and false values.
Consider the following C program segment:


The program segment has a bug in the second component condition: it should have been temperature<50. The test suite {temperature=160, temperature=40} achieves branch coverage, but it is not able to detect that setWarningLightOn(); should not be called for temperature values between 50 and 150.

Path Coverage: A test suite achieves path coverage if it executes each linearly independent path (or basis path) at least once. A linearly independent path can be defined in terms of the control flow graph (CFG) of a program.
Control flow graph (CFG): A control flow graph describes the sequence in which the different instructions of a program get executed. We can define a CFG as follows:
A CFG is a directed graph consisting of a set of nodes and edges (N, E), such that each node n ∈ N corresponds to a unique program statement, and an edge exists between two nodes if control can transfer from one node to the other.

McCabe’s Cyclomatic Complexity Metric:
 The cyclomatic complexity metric is a quantitative measure of the number of linearly independent paths in a program.
 Cyclomatic complexity of a program is a measure of the psychological complexity or the
level of difficulty in understanding the program.
 It is a software metric used to indicate the complexity of a program.
 It is computed using the Control Flow Graph of the program.
 The nodes in the graph indicate the smallest group of commands of a program, and a directed edge connects two nodes if the second command can immediately follow the first command.
 McCabe’s cyclomatic complexity defines an upper bound on the number of independent paths
in a program.
For example, if the source code contains no control flow statements, then its cyclomatic complexity will be 1, since the source code contains a single path through it.


Similarly, if the source code contains one if condition then cyclomatic complexity will be 2 because
there will be two paths one for true and the other for false.

There are three different ways to compute the cyclomatic complexity.

Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as:
V(G) = E – N + 2
Where, N is the number of nodes of the control flow graph and E is the number of edges in the control
flow graph.
For the CFG of example shown in the above Figure E = 7 and N = 6. Therefore,
the value of the Cyclomatic complexity = 7 – 6 + 2 = 3.

Method 2: An alternate way of computing the cyclomatic complexity of a program is based on a visual inspection of its control flow graph. In this method, the cyclomatic complexity V(G) for a graph G is given by the following expression:
V(G) = Total number of non-overlapping bounded areas + 1
From a visual examination of the CFG the number of bounded areas is 2. Therefore the cyclomatic
complexity, computed with this method is also 2+1=3.
Method 3: The cyclomatic complexity of a program can also be easily computed by computing the
number of decision and loop statements of the program. If N is the number of decision and loop
statements of a program, then the McCabe’s metric is equal to N + 1.

FAULT-BASED TESTING
Mutation Testing: Mutation test cases are designed to help detect specific types of faults in a
program. The idea behind mutation testing is to make a few arbitrary changes to a program at a time.
Each time the program is changed, it is called a mutated program and the change effected is called a
mutant.

Mutated program is tested against the original test suite of the program. If there exists at least one
test case in the test suite for which a mutated program yields an incorrect result, then the mutant is
said to be dead, since the error introduced by the mutation operator has successfully been detected
by the test suite.

DEBUGGING

After a failure has been detected, it is necessary first to identify the program statement(s) that are in error and are responsible for the failure; the error can then be fixed.

Debugging Approaches

Brute force method: This is the most common method of debugging but is the least efficient method.
In this approach, print statements are inserted throughout the program to print the intermediate
values with the hope that some of the printed values will help to identify the statement in error.

Backtracking: This is also a fairly common approach. In this approach, starting from the statement at
which an error symptom has been observed, the source code is traced backwards until the error is
discovered. Unfortunately, as the number of source lines to be traced back increases, the number of


potential backward paths increases and may become unmanageably large for complex programs,
limiting the use of this approach.
Cause elimination method: In this approach, once a failure is observed, the symptoms of the failure (e.g., a certain variable having a negative value though it should be positive) are noted. Based on the failure symptoms, a list of causes that could possibly have contributed to the symptom is developed, and tests are conducted to eliminate each.
Program slicing: This technique is similar to backtracking. In the backtracking approach, one often has to examine a large number of statements. With program slicing, however, the search space is reduced by defining slices.

PROGRAM ANALYSIS TOOLS

A program analysis tool usually is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to
programming standards, adequacy of testing, etc. We can classify various program analysis tools into
the following two broad categories:
1. Static analysis tools
2. Dynamic analysis tools
Static Analysis Tools:
Static program analysis tools assess and compute various characteristics of a program without
executing it.
Typically, static analysis tools analyse the source code to compute certain metrics characterising the
source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
These also check the conformance of the code with the prescribed coding standards. In this context, it
displays the following analysis results:
 The extent to which the coding standards have been adhered to.
 Whether certain programming errors exist, such as uninitialised variables, mismatches between actual and formal parameters, and variables that are declared but never used.
 A list of all such errors is displayed.

Dynamic Analysis Tools
Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run time behaviour of a program. These tools usually record and analyse the actual
behaviour of a program while it is being executed.
For example, a dynamic analysis tool can report the statement, branch, and path coverage achieved by a test suite. If the coverage achieved is not satisfactory, more test cases can be designed, added to the test suite, and run. Further, dynamic analysis results can help eliminate redundant test cases from a test suite.

SYSTEM TESTING
System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document.

There are essentially three main kinds of system testing depending on who carries out testing:

1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.


2. Beta Testing: Beta testing is the system testing performed by a select group of friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.

Functionality and Performance test cases

The system test cases can be classified into functionality and performance test cases. The
functionality tests are designed to check whether the software satisfies the functional requirements as
documented in the SRS document. The performance tests, on the other hand, test the conformance of
the system with the non-functional requirements of the system.

Smoke Testing:

Before a fully integrated system is accepted for system testing, smoke testing is performed. Smoke
testing is done to check whether at least the main functionalities of the software are working properly.
Unless the software is stable and at least the main functionalities are working satisfactorily, system
testing is not undertaken.

For smoke testing, a few test cases are designed to check whether the basic functionalities are
working. For example, for a library automation system, the smoke tests may check whether books can
be created and deleted, whether member records can be created and deleted, and whether books can
be loaned and returned.
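The library example above can be sketched as a small smoke test. The `Library` class and its methods are hypothetical stand-ins, not part of any real system:

```python
class Library:
    """Minimal stand-in for the library automation system (hypothetical API)."""
    def __init__(self):
        self.books, self.members, self.loans = {}, {}, {}

    def create_book(self, isbn, title):
        self.books[isbn] = title

    def delete_book(self, isbn):
        self.books.pop(isbn)

    def create_member(self, member_id, name):
        self.members[member_id] = name

    def issue_book(self, isbn, member_id):
        self.loans[isbn] = member_id

    def return_book(self, isbn):
        self.loans.pop(isbn)

def smoke_test():
    """Exercise only the mainline functionalities; any exception fails the test."""
    lib = Library()
    lib.create_book("0-13-101", "Software Engineering")
    lib.create_member("M1", "Asha")
    lib.issue_book("0-13-101", "M1")
    lib.return_book("0-13-101")
    lib.delete_book("0-13-101")
    return True

print(smoke_test())   # True
```

Only if such basic checks pass is the build considered stable enough for full system testing.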

Performance Testing

1. Performance testing is carried out to check whether the system meets the nonfunctional
requirements identified in the SRS document. There are several types of performance testing
corresponding to various types of non-functional requirements. All performance tests can be
considered as black-box tests.
2. Stress testing: Stress testing is also known as endurance testing. Stress testing evaluates system
performance when it is stressed for short periods of time.
For example, suppose an operating system is supposed to support fifteen concurrent transactions,
then the system is stressed by attempting to initiate fifteen or more transactions simultaneously.
3. Volume testing: Volume testing checks whether the data structures (buffers, arrays, queues,
stacks, etc.) have been designed to successfully handle extraordinary situations.
4. Configuration testing: Configuration testing is used to test system behaviour in various hardware
and software configurations specified in the requirements.
5. Compatibility testing: This type of testing is required when the system interfaces with external
systems (e.g., databases, servers, etc.). Compatibility testing aims to check whether the interfaces with
the external systems are performing as required.
6. Regression testing: This type of testing is required when a software is maintained to fix some bugs
or enhance functionality, performance, etc.
7. Recovery testing: Recovery testing tests the response of the system to the presence of faults, or
loss of power, devices, services, data, etc.
8. Maintenance testing: This addresses testing the diagnostic programs, and other procedures that
are required to help maintenance of the system.
9. Documentation testing: It is checked whether the required user manual, maintenance manuals,
and technical manuals exist and are consistent.


10. Usability testing: Usability testing concerns checking the user interface to see if it meets all user
requirements concerning the user interface. During usability testing, the display screens,
messages, report formats, and other aspects relating to the user interface requirements are
tested.
11. Security testing: Security testing is essential for software that handles or processes confidential data
    that is to be guarded against theft.
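The idea behind stress testing (item 2 above) can be sketched as follows. The `TransactionServer` class and its capacity figure are hypothetical stand-ins for a system rated to support fifteen concurrent transactions:

```python
class TransactionServer:
    """Toy stand-in for a system rated to support 15 concurrent transactions."""
    def __init__(self, capacity=15):
        self.capacity, self.active, self.rejected = capacity, 0, 0

    def begin_transaction(self):
        # Under stress, a robust system should refuse extra work gracefully
        # rather than crash or corrupt data.
        if self.active >= self.capacity:
            self.rejected += 1
            return False
        self.active += 1
        return True

server = TransactionServer()
# Stress: attempt to initiate more simultaneous transactions than the rated capacity.
results = [server.begin_transaction() for _ in range(20)]
print(sum(results), server.rejected)   # 15 5
```

The stress test passes if the system degrades gracefully (here, by rejecting the five excess transactions) instead of failing unpredictably.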

Regression Testing

Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software
after the modifications have been made.
When to do regression testing?
1. When a new functionality is added to the system and the code has been modified to absorb and
integrate that functionality with the existing code.
2. When some defect has been identified in the software and the code is debugged to fix it.
3. When the code is modified to optimize its working.
Process of Regression testing:
1. First, whenever we make some changes to the source code for any reason, such as adding new
   functionality or optimization, the program, when executed, may fail some test cases of the
   previously designed test suite.
2. After the failure, the source code is debugged in order to identify the bugs in the program.
3. After identification of the bugs in the source code, appropriate modifications are made.
4. Then appropriate test cases are selected from the already existing test suite which covers all the
modified and affected parts of the source code.
5. We can add new test cases if required. In the end regression testing is performed using the
selected test cases.
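Step 4 of the process, selecting the test cases that cover the modified and affected parts, can be sketched as below. The coverage map and function names are assumed illustrative data, not part of any real system:

```python
def select_regression_tests(test_suite, modified_functions, coverage_map):
    """Pick from the existing suite only those test cases that execute
    at least one modified (or affected) function."""
    return [t for t in test_suite
            if coverage_map[t] & modified_functions]

# coverage_map records which functions each existing test case executes
coverage_map = {
    "test_login":  {"authenticate", "load_profile"},
    "test_search": {"search_books"},
    "test_issue":  {"issue_book", "authenticate"},
}
modified = {"authenticate"}   # functions touched by the bug fix
selected = select_regression_tests(coverage_map.keys(), modified, coverage_map)
print(sorted(selected))       # ['test_issue', 'test_login']
```

Only the selected test cases (plus any new ones) need to be re-run, which is what makes regression testing cheaper than re-running the entire suite.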

TESTING OBJECT-ORIENTED PROGRAMS


UNIT TESTING

 Traditional Techniques Considered Not Satisfactory for Testing Object-oriented Programs.


 Adequate testing of individual methods does not ensure that a class has been satisfactorily tested.
 An object is the basic unit of testing of object-oriented programs.

Grey-Box Testing of Object-oriented Programs

1. Model-based testing is important for object-oriented programs.


2. For object-oriented programs, several types of test cases can be designed based on the design
models of object-oriented programs. These are called the grey-box test cases.
3. The following are some important types of grey-box testing that can be carried on based on UML
models:
State-model-based testing
 State coverage: Each method of an object is tested at each state of the object.
 State transition coverage: It is tested whether all transitions depicted in the state model
work satisfactorily.
 State transition path coverage: All transition paths in the state model are tested.
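State transition coverage can be sketched as follows. The `Book` class and its two-state model are hypothetical examples, not drawn from any particular design:

```python
class Book:
    """Hypothetical class under test with two states: AVAILABLE and ISSUED."""
    def __init__(self):
        self.state = "AVAILABLE"

    def issue(self):
        assert self.state == "AVAILABLE", "illegal transition"
        self.state = "ISSUED"

    def return_book(self):
        assert self.state == "ISSUED", "illegal transition"
        self.state = "AVAILABLE"

# State transition coverage: exercise every transition depicted in the state model.
transitions = [("AVAILABLE", "issue", "ISSUED"),
               ("ISSUED", "return_book", "AVAILABLE")]
book = Book()
for start, method, end in transitions:
    assert book.state == start   # object is in the expected source state
    getattr(book, method)()      # fire the transition
    assert book.state == end     # object reached the expected target state
print("all transitions covered")
```

State coverage would additionally invoke every method in every state; state transition path coverage would chain such transitions into complete paths through the model.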


Use case-based testing:

 Scenario coverage: Each use case typically consists of a mainline scenario and several
alternate scenarios. For each use case, the mainline and all alternate sequences are tested to
check if any errors show up.

Class diagram-based testing

 Testing derived classes: All derived classes of the base class have to be instantiated and
tested. In addition to testing the new methods defined in the derived class, the inherited
methods must be retested.
 Association testing: All association relations are tested.
 Aggregation testing: Various aggregate objects are created and tested.

INTEGRATION TESTING

There are two main approaches to integration testing of object-oriented programs:


• Thread-based
• Use based
Thread-based approach: In this approach, all classes that need to collaborate to realise the behaviour
of a single use case are integrated and tested.

Use-based approach: Use-based integration begins by testing classes that either need no service from
other classes or need services from at most a few other classes. After these classes have been
integrated and tested, classes that use the services from the already integrated classes are integrated
and tested. This is continued till all the classes have been integrated and tested.



Software Reliability
Reliability of a software product essentially denotes its trustworthiness or dependability.
Alternatively, reliability of a software product can also be defined as the probability of the product
working “correctly” over a given period of time. It is obvious that a software product having a large
number of defects is unreliable. It is also clear that the reliability of a system improves, if the
number of defects in it is reduced. However, there is no simple relationship between the observed
system reliability and the number of latent defects in the system. For example, removing errors from
parts of a software which are rarely executed makes little difference to the perceived reliability of the
product. It has been experimentally observed by analyzing the behavior of a large number of programs
that 90% of the execution time of a typical program is spent in executing only 10% of the instructions
in the program. These most used 10% instructions are often called the core of the program. The rest
90% of the program statements are called non-core and are executed only for 10% of the total
execution time. It therefore may not be very surprising to note that removing 60% product defects
from the least used parts of a system would typically lead to only 3% improvement to the product
reliability. It is clear that the quantity by which the overall reliability of a program improves due to the
correction of a single error depends on how frequently the corresponding instruction is executed.

Thus, reliability of a product depends not only on the number of latent errors but also on the
exact location of the errors. Apart from this, reliability also depends upon how the product is used,
i.e., on its execution profile. If the input data to the system is selected such that only the “correctly”
implemented functions are executed, none of the errors will be exposed and the perceived
reliability of the product will be high. On the other hand, if the input data is selected such that
only those functions which contain errors are invoked, the perceived reliability of the system will
be very low.
Reasons for software reliability being difficult to measure
The reasons why software reliability is difficult to measure can be summarized as follows:
 The reliability improvement due to fixing a single bug depends on where the bug is
located in the code.
 The perceived reliability of a software product is highly observer-dependent.
 The reliability of a product keeps changing as errors are detected and fixed.
Hardware reliability vs. software reliability
Reliability behavior for hardware and software are very different. For example, hardware
failures are inherently different from software failures. Most hardware failures are due to component
wear and tear. A logic gate may be stuck at 1 or 0, or a resistor might short circuit. To fix hardware
faults, one has to either replace or repair the failed part. On the other hand, a software product would
continue to fail until the error is tracked down and either the design or the code is changed. For this
reason, when a hardware is repaired its reliability is maintained at the level that existed before the
failure occurred; whereas when a software failure is repaired, the reliability may either increase or
decrease (reliability may decrease if a bug introduces new errors). To put this fact in a different
perspective, hardware reliability study is concerned with stability (for example, inter-failure times
remain constant). On the other hand, software reliability study aims at reliability growth (i.e. inter-
failure times increase). The change of failure rate over the product lifetime for a typical hardware and
a software product are sketched in fig. 26.1. For hardware products, it can be observed that failure rate
is high initially but decreases as the faulty components are identified and removed. The system then
enters its useful life. After some time (called product life time) the components wear out, and the
failure rate increases. This gives the plot of hardware reliability over time its characteristic “bathtub”
shape. On the other hand, for software the failure rate is at its highest during integration and test.
As the system is tested, more and more errors are identified and removed resulting in reduced failure
rate. This error removal continues at a slower pace during the useful life of the product. As the
software becomes obsolete no error correction occurs and the failure rate remains unchanged.

Fig: Change in failure rate of a product


Reliability Metrics
The reliability requirements for different categories of software products may be different. For
this reason, it is necessary that the level of reliability required for a software product should be
specified in the SRS (software requirements specification) document. In order to be able to do
this, some metrics are needed to quantitatively express the reliability of a software product. A
good reliability measure should be observer-independent, so that different people can agree on the
degree of reliability a system has. For example, there are precise techniques for measuring
performance, which would result in obtaining the same performance value irrespective of who is
carrying out the performance measurement. However, in practice, it is very difficult to formulate a
precise reliability measurement technique. The next best option is to have measures that correlate
with reliability. There are six reliability metrics which can be used to quantify the reliability of
software products.
 Rate of occurrence of failure (ROCOF)- ROCOF measures the frequency of occurrence
of unexpected behavior (i.e. failures). ROCOF measure of a software product can be
obtained by observing the behavior of a software product in operation over a specified
time interval and then recording the total number of failures occurring during the
interval.
 Mean Time To Failure (MTTF) - MTTF is the average time between two successive
failures, observed over a large number of failures. To measure MTTF, we can record the
failure data for n failures. Let the failures occur at the time instants t1, t2, …, tn.
Then, MTTF can be calculated as the average of the inter-failure times:
MTTF = sum of (t(i+1) − ti) / (n − 1), for i = 1 to n − 1.
It is important to note that only run time is considered in the time measurements, i.e., the
time for which the system is down to fix the error, the boot time, etc. are not taken into
account; the clock is stopped at these times.
 Mean Time To Repair (MTTR) - Once failure occurs, sometime is required to fix the
error. MTTR measures the average time it takes to track the errors causing the failure and
to fix them.
 Mean Time Between Failures (MTBF) - MTTF and MTTR can be combined to get the
MTBF metric: MTBF = MTTF + MTTR. Thus, an MTBF of 300 hours indicates that once a
failure occurs, the next failure is expected after 300 hours. In this case, time measurements
are real time and not the execution time as in MTTF.
 Probability of Failure on Demand (POFOD) - Unlike the other metrics discussed, this
metric does not explicitly involve time measurements. POFOD measures the likelihood
of the system failing when a service request is made. For example, a POFOD of 0.001
would mean that 1 out of every 1000 service requests would result in a failure.
 Availability- Availability of a system is a measure of how likely the system is to be
available for use over a given period of time. This metric not only considers the number
of failures occurring during a time interval, but also takes into account the repair time
(down time) of a system when a failure occurs. This metric is important for systems such
as telecommunication systems, and operating systems, which are supposed to be never
down and where repair and restart time are significant and loss of service during that time
is important.
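A small worked example shows how these metrics relate to one another. All the figures below are assumed illustrative data:

```python
# Failure instants in hours of execution time (assumed sample data)
failure_times = [100, 250, 375, 550, 700]

# MTTF: average time between two successive failures
gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mttf = sum(gaps) / len(gaps)            # (150 + 125 + 175 + 150) / 4 = 150.0

mttr = 10.0                             # average time to track and fix an error (assumed)
mtbf = mttf + mttr                      # 150.0 + 10.0 = 160.0

# POFOD: likelihood of failure per service request
pofod = 2 / 1000                        # 2 failures in 1000 requests = 0.002

# Availability: fraction of time the system is up
availability = mttf / (mttf + mttr)     # 150 / 160 = 0.9375
print(mttf, mtbf, pofod, availability)  # 150.0 160.0 0.002 0.9375
```

Note that MTTF uses execution time, while MTBF as defined here uses real time, which is why repair time enters the MTBF and availability figures but not MTTF itself.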
Classification of software failures
A possible classification of failures of software products into five different types is as follows:
 Transient- Transient failures occur only for certain input values while invoking a
function of the system.
 Permanent- Permanent failures occur for all input values while invoking a function of
the system.
 Recoverable- When recoverable failures occur, the system recovers with or without operator
intervention.
 Unrecoverable- In unrecoverable failures, the system may need to be restarted.
 Cosmetic- These classes of failures cause only minor irritations, and do not lead to
incorrect results. An example of a cosmetic failure is the case where the mouse button has
to be clicked twice instead of once to invoke a given function through the graphical user
interface.

RELIABILITY GROWTH MODELS

A reliability growth model is a mathematical model of how software reliability improves as errors
are detected and repaired. A reliability growth model can be used to predict when (or if at all) a particular
level of reliability is likely to be attained. Thus, reliability growth modeling can be used to determine
when to stop testing to attain a given reliability level. Although several different reliability growth models
have been proposed, in this text we will discuss only two very simple reliability growth models.
Jelinski and Moranda Model -The simplest reliability growth model is a step function model where it is
assumed that the reliability increases by a constant increment each time an error is detected and repaired.
However, this simple model of reliability, which implicitly assumes that all errors contribute equally to
reliability growth, is highly unrealistic since it is already known that corrections of different types of errors
contribute differently to reliability growth.

Fig: Step function model of reliability growth
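The step-function idea can be sketched numerically as follows. The number of latent errors N and the per-error contribution phi are assumed illustrative values, not parameters from any real project:

```python
# Assumed parameters of the step-function model (illustrative values)
N = 10       # number of latent errors initially in the software
phi = 0.05   # contribution of each error to the overall failure rate

def failure_rate(i):
    """Failure rate after i errors have been detected and repaired."""
    return phi * (N - i)

rates = [failure_rate(i) for i in range(N + 1)]
print(rates[0], rates[5], rates[-1])   # 0.5 0.25 0.0
# Every repair lowers the failure rate by the same constant step phi --
# the model's (unrealistic) assumption that all errors contribute equally.
```

This is exactly the assumption that Littlewood and Verall's model relaxes by making each error's contribution a random variable.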


Littlewood and Verall’s Model - This model allows for negative reliability growth, to reflect the fact
that when a repair is carried out, it may introduce additional errors. It also models the fact that as
errors are repaired, the average improvement in reliability per repair decreases. It treats an error’s
contribution to reliability improvement as an independent random variable having a Gamma
distribution. This distribution models the fact that error corrections with large contributions to
reliability growth are removed first. This represents diminishing returns as testing continues.
Statistical testing
Statistical testing is a testing process whose objective is to determine the reliability of software
products rather than discovering errors. Test cases are designed for statistical testing with an
entirely different objective than those of conventional testing.
Operation profile
Different categories of users may use a software for different purposes. For example, a Librarian
might use the library automation software to create member records, add books to the library, etc.
whereas a library member might use to software to query about the availability of the book, or to
issue and return books. Formally, the operation profile of a software can be defined as the
probability distribution of the input of an average user. If the input is divided into a number of
classes {Ci}, the probability value of a class represents the probability of an average user
selecting his next input from this class. Thus, the operation profile assigns a probability value Pi
to each input class Ci.

Steps in statistical testing


Statistical testing allows one to concentrate on testing those parts of the system that are most
likely to be used. The first step of statistical testing is to determine the operation profile of the
software. The next step is to generate a set of test data corresponding to the determined operation
profile. The third step is to apply the test cases to the software and record the time between each
failure. After a statistically significant number of failures have been observed, the reliability can
be computed.
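The second step, generating test data according to the operation profile, can be sketched as below. The profile values and input class names are assumed for illustration:

```python
import random

# Operation profile: probability of an average user's next input falling in
# each input class Ci (assumed values for a library automation system)
profile = {"query_availability": 0.6, "issue_book": 0.25, "return_book": 0.15}

def generate_test_data(n, seed=0):
    """Draw n test inputs distributed according to the operation profile."""
    rng = random.Random(seed)            # fixed seed for repeatability
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n)

inputs = generate_test_data(1000)
# Test inputs are concentrated on the most frequently used operations:
print(inputs.count("query_availability") > inputs.count("return_book"))   # True
```

Because the inputs mirror actual usage, failures observed during such testing directly reflect the reliability a typical user would perceive.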
Advantages and disadvantages of statistical testing

Statistical testing allows one to concentrate on testing parts of the system that are most likely
to be used. Therefore, it results in a system that the users perceive to be more reliable (than it actually is!).
Reliability estimation using statistical testing is more accurate compared to those of other
methods such as ROCOF, POFOD etc. But it is not easy to perform statistical testing properly.
There is no simple and repeatable way of defining operation profiles. Also, it is very
cumbersome to generate test cases for statistical testing because the number of test cases with
which the system is to be tested should be statistically significant.

Software Quality
Traditionally, a quality product is defined in terms of its fitness of purpose. That is, a
quality product does exactly what the users want it to do. For software products, fitness of
purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS
document. Although “fitness of purpose” is a satisfactory definition of quality for many products
such as a car, a table fan, a grinding machine, etc. – for software products, “fitness of purpose” is
not a wholly satisfactory definition of quality. To give an example, consider a software product
that is functionally correct, i.e., it performs all functions as specified in the SRS
document, but has an almost unusable user interface. Even though it may be functionally correct,
we cannot consider it to be a quality product. Another example may be that of a product which
does everything that the users want but has an almost incomprehensible and unmaintainable code.
Therefore, the traditional concept of quality as “fitness of purpose” for software products is not
wholly satisfactory.

The modern view of quality associates with a software product several quality factors
such as the following:
 Portability: A software product is said to be portable, if it can be easily made to work in
different operating system environments, in different machines, with other software
products, etc.
 Usability: A software product has good usability, if different categories of users (i.e. both
expert and novice users) can easily invoke the functions of the product.
 Reusability: A software product has good reusability, if different modules of the product
can easily be reused to develop new products.
 Correctness: A software product is correct, if different requirements as specified in the
SRS document have been correctly implemented.
 Maintainability: A software product is maintainable, if errors can be easily corrected as
and when they show up, new functions can be easily added to the product, and the
functionalities of the product can be easily modified, etc.

Software quality management system


A quality management system (often referred to as quality system) is the principal
methodology used by organizations to ensure that the products they develop have the desired
quality. A quality system consists of the following:
 Managerial Structure and Individual Responsibilities. A quality system is actually
the responsibility of the organization as a whole. However, every organization has a
separate quality department to perform several quality system activities. The quality
system of an organization should have support of the top management. Without support
for the quality system at a high level in a company, few members of staff will take the
quality system seriously.
 Quality System Activities. The quality system activities encompass the following:
---auditing of projects
---review of the quality system
---development of standards, procedures, and guidelines, etc.
---production of reports for the top management summarizing t h e
effectiveness of the quality system in the organization.

Evolution of quality management system


Quality systems have rapidly evolved over the last five decades. Prior to World War II, the usual
method to produce quality products was to inspect the finished products to eliminate defective
products. Since that time, quality systems of organizations have undergone four stages of
evolution as shown in the figure. The initial product inspection method gave way to quality control
(QC). Quality control focuses not only on detecting the defective products and eliminating them but
also on determining the causes behind the defects. Thus, quality control aims at correcting the causes of
errors and not just rejecting the products. The next breakthrough in quality systems was the
development of quality assurance principles.

 The basic premise of modern quality assurance is that if an organization’s processes are good and
are followed rigorously, then the products are bound to be of good quality. The modern quality paradigm
includes guidance for recognizing, defining, analyzing, and improving the production process. Total
quality management (TQM) advocates that the process followed by an organization must be
continuously improved through process measurements.
 TQM goes a step further than quality assurance and aims at continuous process improvement.
TQM goes beyond documenting processes to optimizing them through redesign. A term related to TQM is
Business Process Reengineering (BPR). BPR aims at reengineering the way business is carried out in an
organization. From the above discussion it can be stated that over the years the quality paradigm has
shifted from product assurance to process assurance.

Fig: Evolution of quality system and corresponding shift in the quality paradigm

ISO 9000 certification


ISO (International Standards Organization) is a consortium of 63 countries established to
formulate and foster standardization. ISO published its 9000 series of standards in 1987. ISO
certification serves as a reference for contract between independent parties. The ISO 9000 standard
specifies the guidelines for maintaining a quality system. We have already seen that the quality
system of an organization applies to all activities related to its product or service. The ISO standard
mainly addresses operational aspects and organizational aspects such as responsibilities,
reporting, etc. In a nutshell, ISO 9000 specifies a set of guidelines for repeatable and high
quality product development. It is important to realize that ISO 9000 standard is a set of guidelines
for the production process and is not directly concerned about the product itself.
Types of ISO 9000 quality standards
 ISO 9000 is a series of three standards: ISO 9001, ISO 9002, and ISO 9003. The ISO 9000
series of standards is based on the premise that if a proper process is followed for production, then
good quality products are bound to follow automatically. The types of industries to which the
different ISO standards apply are as follows.

 ISO 9001 applies to the organizations engaged in design, development, production, and
servicing of goods. This is the standard that is applicable to most software development
organizations.

 ISO 9002 applies to those organizations which do not design products but are only involved
in production. Examples of these category industries include steel and car manufacturing
industries that buy the product and plant designs from external sources and are involved in only
manufacturing those products. Therefore, ISO 9002 is not applicable to software development
organizations.

 ISO 9003 applies to organizations that are involved only in installation and testing of the
products.

Software products vs. other products

There are mainly two differences between software products and any other type of products.
 Software is intangible in nature and therefore difficult to control. It is very difficult to
control and manage anything that is not seen. In contrast, in other industries, such as
car manufacturing, one can see a product being developed through
various stages such as fitting the engine, fitting the doors, etc. Therefore, it is easy to
accurately determine how much work has been completed and to estimate how much
more time it will take.
 During software development, the only raw material consumed is data. In contrast,
large quantities of raw materials are consumed during the development of any other
product.
Need for obtaining ISO 9000 certification
There is a mad scramble among software development organizations for obtaining ISO
certification due to the benefits it offers. Some of the benefits that organizations can derive by
obtaining ISO certification are as follows:

 Confidence of customers in an organization increases when the organization qualifies for ISO
certification. This is especially true in the international market. In fact, many organizations
awarding international software development contracts insist that the development organization
have ISO 9000 certification. For this reason, it is vital for software organizations involved in
software export to obtain ISO 9000 certification.
 ISO 9000 requires a well-documented software production process to be in place. A
well-documented software production process contributes to repeatable and higher
quality of the developed software.
 ISO 9000 makes the development process focused, efficient, and cost-effective.
 ISO 9000 certification points out the weak points of an organization and
recommends remedial action.
 ISO 9000 sets the basic framework for the development of an optimal process and
Total Quality Management (TQM)

Summary of ISO 9001 certification


A summary of the main requirements of ISO 9001 as they relate of software development
is as follows. Section numbers in brackets correspond to those in the standard itself:
Management Responsibility (4.1)
 The management must have an effective quality policy.
 The responsibility and authority of all those whose work affects quality must be
defined and documented.
 A management representative, independent of the development process, must be
responsible for the quality system. This requirement probably has been put down so
that the person responsible for the quality system can work in an unbiased manner.
 The effectiveness of the quality system must be periodically reviewed by audits.
Quality System (4.2)
A quality system must be maintained and documented.
Contract Reviews (4.3)
 Before entering into a contract, an organization must review the contract to ensure that it
is understood, and that the organization has the necessary capability for carrying out its
obligations.
Design Control (4.4)
 The design process must be properly controlled; this includes controlling coding also.
This requirement means that a good configuration control system must be in place.
 Design inputs must be verified as adequate.
 Design must be verified.
 Design output must be of required quality.
 Design changes must be controlled.
Document Control (4.5)
 There must be proper procedures for document approval, issue, and removal.
 Document changes must be controlled. Thus, use of some configuration management
tools is necessary.
Purchasing (4.6)
 Purchased material, including bought-in software, must be checked for conformance to
requirements.
Purchaser Supplied Product (4.7)
 Material supplied by a purchaser, for example, client-provided software must be
properly managed and checked.
Product Identification (4.8)
 The product must be identifiable at all stages of the process. In software terms this
means configuration management.
Process Control (4.9)
 The development must be properly managed.
 Quality requirements must be identified in a quality plan.
Inspection and Testing (4.10)
 In software terms this requires effective testing, i.e., unit testing, integration testing,
and system testing. Test records must be maintained.
Inspection, Measuring and Test Equipment (4.11)
 If inspection, measuring, and test equipment are used, they must be properly
maintained and calibrated.
Inspection and Test Status (4.12)
 The status of an item must be identified. In software terms this implies configuration
management and release control.
Control of Nonconforming Product (4.13)
 In software terms, this means keeping untested or faulty software out of the released
product, or other places where it might cause damage.
Corrective Action (4.14)
 This requirement is both about correcting errors when found, and also about investigating
why the errors occurred and improving the process to prevent their recurrence. If an error
occurs despite the quality system, the system needs improvement.
Handling, Storage, Packing, and Delivery (4.15)
 This clause deals with the storage, packing, and delivery of the software product.
Quality records (4.16)
 Recording the steps taken to control the quality of the process is essential in order to
be able to confirm that they have actually taken place.
Quality Audits (4.17)
 Audits of the quality system must be carried out to ensure that it is effective.
Training (4.18)
 Training needs must be identified and met.
Salient features of ISO 9001 certification

The salient features of ISO 9001 are as follows:


 All documents concerned with the development of a software product should be properly
managed, authorized, and controlled. This requires a configuration management system to
be in place.
 Proper plans should be prepared and then progress against these plans should be
monitored.
 Important documents should be independently checked and reviewed for effectiveness and
correctness.
 The product should be tested against specification.
 Several organizational aspects should be addressed, e.g., management reporting of the
quality team.

Shortcomings of ISO 9000 certification


Even though ISO 9000 aims at setting up an effective quality system in an organization, it
suffers from several shortcomings. Some of these shortcomings of the ISO 9000 certification
process are the following:
 ISO 9000 requires a software production process to be adhered to but does not guarantee
the process to be of high quality. It also does not give any guideline for defining an appropriate
process.

 ISO 9000 certification process is not fool-proof and no international accreditation agency
exists. Therefore it is likely that variations in the norms of awarding certificates can exist among
the different accreditation agencies and also among the registrars.

 Organizations getting ISO 9000 certification often tend to downplay domain expertise.
These organizations start to believe that since a good process is in place, any engineer is as
effective as any other engineer in doing any particular activity relating to software development.
However, many areas of software development are so specialized that special expertise and
experience in these areas (domain expertise) is required. In manufacturing industry there is a clear
link between process quality and product quality. Once a process is calibrated, it can be run
again and again producing quality goods. In contrast, software development is a creative process
and individual skills and experience are important.

 ISO 9000 does not automatically lead to continuous process improvement, i.e., it does not
automatically lead to TQM.

SEI Capability Maturity Model


SEI Capability Maturity Model (SEI CMM) helped organizations to improve the quality of
the software they develop and therefore adoption of SEI CMM model has significant business
benefits.
SEI CMM can be used in two ways: capability evaluation and software process assessment.
Capability evaluation and software process assessment differ in motivation, objective, and the final
use of the result. Capability evaluation provides a way to assess the software process capability of
an organization. The results of capability evaluation indicate the likely performance of a contractor
if the contractor is awarded the work. Therefore, the results of software process capability assessment
can be used to select a contractor. On the other hand, software process assessment is used by an
organization with the objective to improve its process capability. Thus, this type of assessment is
for purely internal use.
SEI CMM classifies software development industries into the following five maturity
levels. The different levels of SEI CMM have been designed so that it is easy for an organization
to slowly build its quality system starting from scratch.

Level 1: Initial. A software development organization at this level is characterized by ad hoc
activities. Very few or no processes are defined and followed. Since software production processes
are not defined, different engineers follow their own process and as a result development efforts
become chaotic. Therefore, it is also called chaotic level. The success of projects depends on
individual efforts and heroics. When engineers leave, the successors have great difficulty in
understanding the process followed and the work completed. Since formal project management
practices are not followed, under time pressure short cuts are tried out leading to low quality.
Level 2: Repeatable. At this level, the basic project management practices such as tracking
cost and schedule are established. Size and cost estimation techniques like function point analysis,
COCOMO, etc. are used. The necessary process discipline is in place to repeat earlier success on
projects with similar applications. Please remember that opportunity to repeat a process exists only
when a company produces a family of products.
Level 3: Defined. At this level the processes for both management and development activities
are defined and documented. There is a common organization-wide understanding of activities,
roles, and responsibilities. Although the processes are defined, the process and product qualities
are not yet measured. ISO 9000 aims at achieving this level.
Level 4: Managed. At this level, the focus is on software metrics. Two types of metrics are
collected. Product metrics measure the characteristics of the product being developed, such as its
size, reliability, time complexity, understandability, etc. Process metrics reflect the effectiveness
of the process being used, such as average defect correction time, productivity, average number of
defects found per hour of inspection, average number of failures detected during testing per LOC, etc.
Quantitative quality goals are set for the products. The software process and product quality are
measured and quantitative quality requirements for the product are met. Various tools like Pareto
charts, fishbone diagrams, etc. are used to measure the product and process quality. The process
metrics are used to check if a project performed satisfactorily. Thus, the results of process
measurements are used to evaluate project performance rather than improve the process.
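The two kinds of process metrics named above can be computed directly from project logs. The following sketch uses entirely hypothetical project numbers (defect counts, review hours, and code size are made up for illustration):

```python
# Illustrative computation of SEI CMM level-4 style process metrics.
# All project data below are hypothetical, chosen only to show the arithmetic.
defects_found_in_review = 18     # defects logged during code reviews
review_hours = 6.0               # total hours spent in review meetings
defects_found_in_testing = 42    # defects logged during unit/system testing
lines_of_code = 12_000           # size of the code base under measurement

# Average number of defects found per hour of inspection
defects_per_review_hour = defects_found_in_review / review_hours

# Average number of failures detected during testing per KLOC
test_defects_per_kloc = defects_found_in_testing / (lines_of_code / 1000)

print(defects_per_review_hour)   # → 3.0
print(test_defects_per_kloc)     # → 3.5
```

At level 4 such numbers are compared against quantitative quality goals to judge whether a project performed satisfactorily; at level 5 the same data would additionally feed back into process changes.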
Level 5: Optimizing. At this stage, process and product metrics are collected. Process and
product measurement data are analyzed for continuous process improvement. For example, if from
an analysis of the process measurement results, it was found that the code reviews were not very
effective and a large number of errors were detected only during the unit testing, then the process
may be fine-tuned to make the reviews more effective. Also, the lessons learned from specific
projects are incorporated into the process. Continuous process improvement is achieved both by
carefully analyzing the quantitative feedback from the process measurements and also from
application of innovative ideas and technologies. Such an organization identifies the best software
engineering practices and innovations which may be tools, methods, or processes.
Key process areas (KPA) of a software organization: Except for SEI CMM level 1, each maturity
level is characterized by several Key Process Areas (KPAs) that identify the areas on which an
organization should focus to improve its software process to the next level. The focus of each
level and the corresponding key process areas are shown in the figure below.

CMM Level       Focus                           Key Process Areas
1. Initial      Competent people                —
2. Repeatable   Project management              Software project planning
                                                Software configuration management
3. Defined      Definition of processes         Process definition
                                                Training program
                                                Peer reviews
4. Managed      Product and process quality     Quantitative process metrics
                                                Software quality management
5. Optimizing   Continuous process improvement  Defect prevention
                                                Process change management
                                                Technology change management
Fig: The focus of each SEI CMM level and the corresponding key process areas
SEI CMM provides a list of key areas on which to focus to take an organization from one
level of maturity to the next. Thus, it provides a way for gradual quality improvement over several
stages. Each stage has been carefully designed such that one stage enhances the capability already
built up. For example, it considers that trying to implement a defined process (SEI CMM level 3)
before a repeatable process (SEI CMM level 2) would be counterproductive as it becomes difficult
to follow the defined process due to schedule and budget pressures.
ISO 9000 certification vs. SEI/CMM

For quality appraisal of a software development organization, the characteristics of ISO 9000
certification and the SEI CMM differ in some respects. The differences are as follows:
 ISO 9000 is awarded by an international standards body. Therefore, ISO 9000 certification
can be quoted by an organization in official documents, communication with external
parties, and the tender quotations. However, SEI CMM assessment is purely for internal
use.
 SEI CMM was developed specifically for software industry and therefore addresses many
issues which are specific to software industry alone.
 SEI CMM goes beyond quality assurance and prepares an organization to ultimately
achieve Total Quality Management (TQM). In fact, ISO 9001 aims at level 3 of SEI CMM
model.
 SEI CMM model provides a list of key process areas (KPAs) on which an organization at
any maturity level needs to concentrate to take it from one maturity level to the next. Thus,
it provides a way for achieving gradual quality improvement.

Applicability of SEI CMM to organizations


A highly systematic and measured approach to software development suits large organizations
dealing with negotiated software, safety-critical software, etc. For such large organizations, the
SEI CMM model is perfectly applicable. However, small organizations typically handle applications
such as Internet and e-commerce products, and operate without an established product range,
revenue base, experience on past projects, etc. For such organizations, a CMM-based appraisal is
probably excessive. These organizations need to operate more efficiently at the lower levels of maturity.
For example, they need to practice effective project management, reviews, configuration
management, etc.

Personal software process


Personal Software Process (PSP) is a scaled down version of the industrial software process. PSP
is suitable for individual use. It is important to note that SEI CMM does not tell software
developers how to analyze, design, code, test, or document software products, but assumes that
engineers use effective personal practices. PSP recognizes that the process for individual use is
different from that necessary for a team.

The quality and productivity of an engineer is to a great extent dependent on his process.
PSP is a framework that helps engineers to measure and improve the way they work. It helps in
developing personal skills and methods by estimating and planning, by showing how to track
performance against plans, and provides a defined process which can be tuned by individuals.

Time measurement. PSP advocates that engineers should track the way they spend time,
because boring activities seem to take longer than they actually do, while interesting activities
seem shorter. Therefore, the actual time spent on a task should be measured with the help of a
stop-clock to get an objective picture of the time spent. For example, an engineer may stop the
clock when attending a telephone call, taking a coffee break, etc. An engineer should measure the
time he spends on designing, writing code, testing, etc.
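The stop-clock discipline described above can be sketched as a small timer that is paused during interruptions, so only actual effort is logged. The class name and API below are illustrative, not part of any PSP tooling:

```python
import time

class TaskTimer:
    """Minimal stop-clock for PSP-style time logging (illustrative sketch).

    The engineer starts the clock when beginning a task (e.g. design,
    coding, testing) and pauses it during interruptions such as phone
    calls or coffee breaks, so the log reflects actual effort only.
    """
    def __init__(self, task):
        self.task = task
        self.elapsed = 0.0       # accumulated working seconds
        self._started = None     # monotonic timestamp while running

    def start(self):
        self._started = time.monotonic()

    def pause(self):
        # e.g. a telephone call interrupts the work: stop accumulating
        if self._started is not None:
            self.elapsed += time.monotonic() - self._started
            self._started = None

    def log_entry(self):
        """(task, seconds) tuple suitable for a PSP time log."""
        return (self.task, round(self.elapsed, 1))

timer = TaskTimer("coding")
timer.start()
# ... work happens here ...
timer.pause()                    # interruption: clock stops
print(timer.log_entry())
```

During the postmortem phase, a collection of such log entries can be compared against the planned effort per phase.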
The PSP is schematically shown in the figure below. While carrying out the different phases,
engineers must record the log data using time measurement. During the postmortem phase, they
can compare the log data with their project plan to achieve better planning in future projects, to
improve their process, etc.

Fig: Schematic representation of PSP

The PSP levels are summarized in the figure below. PSP2 introduces defect management via the
use of checklists for code and design reviews. The checklists are developed by gathering and
analyzing defect data from earlier projects.

Fig: Levels of PSP


Six sigma
The purpose of Six Sigma is to improve processes to do things better, faster, and at lower
cost. It can be used to improve every facet of business, from production, to human resources, to
order entry, to technical support. Six Sigma can be used for any activity that is concerned with
cost, timeliness, and quality of results. Therefore, it is applicable to virtually every industry.
Six Sigma at many organizations simply means striving for near perfection. Six Sigma is a
disciplined, data-driven approach to eliminate defects in any process – from manufacturing to
transactional and product to service.
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per million
opportunities. A Six Sigma defect is defined as any system behavior that is not as per customer
specifications. Total number of Six Sigma opportunities is then the total number of chances for a
defect. Process sigma can easily be calculated using a Six Sigma calculator.
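The calculation behind such a Six Sigma calculator can be sketched using only the Python standard library. DPMO is defects divided by total opportunities, scaled to a million; the sigma level is conventionally obtained from the normal distribution with a 1.5-sigma long-term shift (the shift convention is an industry assumption, not part of the text above):

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Process sigma, applying the conventional 1.5-sigma long-term shift."""
    long_term_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + 1.5

# A process producing 3.4 defects per million opportunities is at Six Sigma.
print(round(sigma_level(3.4), 1))   # → 6.0
```

For example, 17 defects found across 1,000 units with 5 defect opportunities each gives a DPMO of 3,400, i.e., roughly a 4.2-sigma process.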
The fundamental objective of the Six Sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation reduction
through the application of Six Sigma improvement projects. This is accomplished through the
use of two Six Sigma sub-methodologies: DMAIC and DMADV. The Six Sigma DMAIC
process (define, measure, analyze, improve, control) is an improvement system for existing
processes falling below specification and looking for incremental improvement. The Six Sigma
DMADV process (define, measure, analyze, design, verify) is an improvement system used to
develop new processes or products at Six Sigma quality levels. It can also be employed if a
current process requires more than just incremental improvement. Both Six Sigma processes are
executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma
Master Black Belts.
Many frameworks exist for implementing the Six Sigma methodology. Six Sigma
consultants all over the world have also developed proprietary methodologies for implementing
Six Sigma quality, based on similar change management philosophies and applications of the
same tools.

CASE tool and its scope


A CASE (Computer Aided Software Engineering) tool is a generic term used to denote any
form of automated support for software engineering. In a more restrictive sense, a CASE tool
means any tool used to automate some activity associated with software development. Many
CASE tools are available. Some of these CASE tools assist in phase-related tasks such as
specification, structured analysis, design, coding, testing, etc.; others assist in non-phase
activities such as project management and configuration management.
Reasons for using CASE tools
The primary reasons for using a CASE tool are:
 To increase productivity
 To help produce better quality software at lower cost
