
UNIT – 4

System Design

Objective
After studying this lesson, students will be able to:
1. Explain the need for system design.
2. Discuss the need for modular design.
3. Describe the strategies for developing a system design.
4. Define the concept of functional independence between modules.
5. Explain the concepts of cohesion and coupling in system modules.

Introduction
Once the requirements have been defined in the software requirements specification (SRS) document during
the analysis phase, it is time to develop the design document, which will act as a blueprint for the
development team. It is just like the architecture of a building, which suggests how the final building
will look. Ease of implementation and maintenance of the software system relies on the quality of the
system design. A good system design also helps improve the reusability of already developed
modules.

System Design and its objectives


System design is concerned with defining how to solve the problem, whereas the analysis phase
was concerned with defining what a solution looks like. The design document acts as a blueprint for the
implementation phase. Software design for complex problems is built iteratively. The principle of
divide and conquer is used to design solutions to complex problems; it helps reduce the overall
complexity of the problem. Partitioning involves the following decisions:
• Defining the boundaries along which to partition.
• Too many partitions are not good, so deciding on an appropriate degree of partitioning is important.
• Identifying the proper level of detail at which design should stop.

The design phase begins as soon as the SRS document is available. Design is concerned with
defining a system as a set of components with clearly defined behavior that interact with each
other in a defined manner to produce services for their environment.
At the first level, the design process focuses on deciding which modules are needed, their specifications,
and their interconnections. This is called top-level design. At the second level, the internal design of
the modules, or how the specifications of each module can be satisfied, is decided. This is called
detailed (logic) design. Detailed design is an extension of system design and contains a more
detailed description of the processing logic and data structures.
The input to the design phase is the specification of the system to be designed. The output of the top-level
design phase is the architectural or system design for the software system to be built. A design can
be object-oriented or function-oriented. In function-oriented design, the design consists of module
definitions, whereas in object-oriented design the modules in the design represent data
abstractions.

Design Principles
Design principles provide the underlying basis for the development and evaluation of software
development techniques. A good software design enables the development of a system that satisfies the
requirements of that system. The following are the fundamental design principles.

Problem Partitioning and Hierarchy


It is not feasible to tackle or solve a large problem all at once, whereas a small problem is
easy to solve in one go. The design process of a software system follows the principle of divide and
conquer. Do not confuse this with the traditional divide-and-conquer approach, where the
decomposed problems are solved together. Here, what we mean by divide and conquer is to solve
the small pieces of the problem one after the other. The main objective of software design is to divide the
problem into small pieces that are easy to manage and can be solved separately. The cost of solving
one large problem is more than the cost of solving its small pieces separately.
Dividing a large problem into a number of small pieces can, however, add to the complexity. It is not possible
to implement the small pieces in total isolation, as they are interconnected with each other. You want
the design to support minimal maintenance effort, i.e., each part of the system can be easily related to the
application and each piece can be modified separately. If one piece can be modified separately
without introducing any unanticipated side effects in a second piece, the former is said to be
independent of the latter. Total independence is not possible.

Problem partitioning leads to hierarchies in the design. A design produced using problem partitioning
can be represented as a tree-structured hierarchy of components. The relationship between
components varies depending on the method used. For example, in the case of the "whole-part of"
relationship, the system consists of some parts; each part consists of subparts, and so on.
For a system based on a hierarchical architecture, the program structure can be partitioned both
horizontally and vertically. Horizontal partitioning defines separate branches of the modular
hierarchy for each major program function. Control modules in horizontal partitioning are used to
coordinate execution and communication between the functions. Horizontal partitioning defines
input, transformation (processing), and output as its three partitions. Partitioning the architecture
horizontally makes it easier to test, maintain, and extend the software. It also results in the propagation
of fewer side effects. One big disadvantage of horizontal partitioning is that it can complicate the
overall control of program flow, especially when more data needs to be passed across modules or
functions.

Figure Horizontal partitioning

In the case of vertical partitioning, also known as factoring, control and processing are distributed
top-down in the program structure. Top-level modules are mainly concerned with control functions
and do little processing. Modules that are low in the structure, called workers, perform
all input, computation, and output tasks.
The probability of propagating side effects to modules low in the structure is much higher for a change
in a control module than for a change in a worker module. In general, changes to computer
programs revolve around changes to input, computation or transformation, and output. Vertically
partitioned structures are less likely to be susceptible to side effects when changes are made, and
are therefore preferred over horizontally partitioned ones.

Figure : Vertical partitioning

Abstraction
The abstraction principle allows you to separate the conceptual aspects of a system from implementation
details during requirements definition and design. For example, you may specify whether to use a
FIFO-based queue or a LIFO-based stack data structure without having to worry about the
representation scheme used to implement the two data structures. You can also specify the
functional characteristics of routines such as PUSH, POP and TOP for a stack, and INSERT, DELETE,
FRONT and REAR for a queue, without concern for their algorithmic details.
Abstraction permits a designer to consider a component at an abstract level without having to worry
about the implementation details of that component. Components of a system, or the system itself,
provide services to their environment, and the abstraction of a component describes the external
behavior of that component without the need to know the internal details that produce that
behavior.
Components of a system are not completely independent and often interact with each other. The
designer has to specify how a component will interact with other components. Abstraction allows
the designer to concentrate on one component at a time.
Three levels of abstraction can be created: procedural abstraction, data abstraction and control
abstraction. A procedural abstraction is a named sequence of instructions that has a specific and
limited function. A data abstraction is a named collection of data that describes a data object.
Control abstraction implies a program control mechanism without specifying internal details.
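As an illustration of procedural and data abstraction, the following sketch (in Python, and purely hypothetical, since these notes do not prescribe any language) exposes only the PUSH, POP and TOP operations mentioned above while hiding the representation from its callers.

```python
class Stack:
    """Data abstraction: callers see only push/pop/top, never the representation."""

    def __init__(self):
        self._items = []          # hidden representation; could be swapped for a linked structure

    def push(self, item):         # procedural abstraction: a named, limited operation
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def top(self):
        return self._items[-1]

    def is_empty(self):
        return not self._items


s = Stack()
s.push(10)
s.push(20)
print(s.top())    # 20 -- the caller never touches s._items directly
```

The caller's code would not change if the internal list were replaced by a linked structure, which is exactly the separation of concept from implementation that the abstraction principle asks for.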

Modularity
The principle of partitioning is successful only if the resulting modules can be solved and modified separately.
It is even better if changes made to one component do not require recompiling the
whole system. In a modular system, a change in one component has no, or minimal, impact on other
components. Modularity also helps in easy debugging of the system.

In a modular system, each module supports a well-defined abstraction and has a clear interface
through which it can interact with other modules. Abstraction and partitioning together result in
modularity.

Top-Down and Bottom-Up Strategies


A hierarchical system can be defined recursively: it is a system that consists of components, which
further consist of components, and so on. The component at the top-most level of the hierarchy
refers to the total system. A hierarchical system can be designed using either a top-down approach
or a bottom-up approach. The top-down approach starts from the highest-level component and then
decomposes it into smaller components, iterating until the desired level of detail is achieved.
The top-down approach is preferred by most researchers, as it makes it easy to identify the major
components of a system.

The bottom-up approach starts with the lowest-level components and proceeds progressively by
integrating them to form the higher levels. You need to identify the most basic or primitive
components first and then work up through layers of abstraction. A top-down approach is followed in
cases where the requirements are very well defined; it is useful, for example, when you want to automate an
already existing system.

Design Concepts
A module is a logically separable part of a program. From the point of view of programming
language constructs, a module can be a macro, a function, a procedure, a process, or a package.
For a modular design, modules must be selected so that each supports a well-defined abstraction.
Cohesion and coupling are two modularization criteria used for characterizing modular
systems.

Functional Independence
Functional independence means that the modules should be developed as if they will be
executed separately, in isolation, with no interaction with the other modules of the
system. Module developers should focus only on the sub-problem at hand. A module's interface should be simple
when viewed from the other modules of the program structure. It is easy to develop software that
comprises functionally independent modules. It is also easy to maintain software with independent
modules, for the following reasons:
1. Secondary effects caused by modifications to the design are limited.
2. Functional independence means that changes in one module do not affect other modules in the
system, and hence error propagation is reduced.
3. Independent modules can be reused in multiple software systems, as their interfaces are simple.

Data Structure
Data structure refers to the logical representation of the relationships between individual elements
of data. Data structure is important to the representation of the software architecture, as the structure
of information invariably affects the final software design.
A data structure represents the organization, access methods, degree of associativity, and processing
alternatives for information.
A scalar item is addressed using an identifier; it can be accessed by specifying a single address in
memory. The size of a scalar item in memory depends on the type of information it represents
and the programming language in use. A sequential vector (known as an array in programming
languages) is a collection of data elements of the same type. A sequential vector can be extended to
n-dimensional spaces.
Data items can be organized in a variety of formats. A linked list data structure is used to organize
data items in a non-contiguous manner, where each data element is represented as a node.
You can construct other, more advanced data structures using vectors or linked lists. A tree, for example,
is a hierarchical data structure. Data structures can also be represented at different levels of
abstraction: there is no need to specify the details of the internal implementation of logical structures
like stacks and queues.
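To make the idea of non-contiguous organization concrete, here is a minimal linked-list sketch in Python; the node and method names are illustrative only and are not taken from the text.

```python
class Node:
    """One data element of a linked list; nodes need not sit next to each other in memory."""
    def __init__(self, value):
        self.value = value
        self.next = None          # reference to the following node, or None at the end


class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        """Insert at the front without moving any existing elements."""
        node = Node(value)
        node.next = self.head
        self.head = node

    def to_list(self):
        """Walk the chain of references to collect the values."""
        values, current = [], self.head
        while current is not None:
            values.append(current.value)
            current = current.next
        return values


numbers = LinkedList()
for n in (3, 2, 1):
    numbers.prepend(n)
print(numbers.to_list())   # [1, 2, 3]
```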

Coupling
The objective of a good software design is to reduce the complexity of the interconnections between the
system modules. Two modules are considered independent if one can function completely without
the presence of the other. Practically, it is difficult to achieve 100% modularity in a system. Coupling
in software design is used to define the strength of, or "how strongly", two or more modules are
interconnected.
Coupling refers to a measure of the interdependence among modules. "Tightly coupled" means that two
modules are strongly interconnected, and "loosely coupled" modules are weakly interconnected. It
is better to have loose coupling between two modules; completely independent modules have
no coupling at all. Coupling between two modules is decided during the design phase and cannot easily
be changed later on.

Coupling is affected by the type of connection between modules, the complexity of the interface,
and the type of information flow between modules. Coupling increases with the complexity
between modules and the number of interfaces per module. The complexity of a module's interface refers
to the number of data items being passed to it by other modules. Passing information only through the
defined entry interface of a module helps to reduce coupling. Passing information directly
using the internals of a module, or through shared variables, increases coupling. Data and control are the
two types of information that flow between modules. Control information is used to control the
actions of the module being invoked, whereas passing data implies a simple input-output function. Interfaces
that pass control information have higher coupling and less abstraction, and interfaces that pass only data
have lower coupling and greater abstraction.
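The hypothetical sketch below contrasts the two kinds of information flow described above: the first module receives only data through a narrow interface (data coupling), while the second receives a control flag that steers its internal logic, so its caller must know more about its internals (control coupling).

```python
# Data coupling: the caller passes only the data the module needs.
def compute_gross_pay(hours_worked, hourly_rate):
    return hours_worked * hourly_rate


# Control coupling: the caller passes a flag that steers the internal logic
# of the called module, so the caller must know about its internals.
def process_pay(hours_worked, hourly_rate, mode):
    if mode == "GROSS":
        return hours_worked * hourly_rate
    elif mode == "NET":
        return hours_worked * hourly_rate * 0.8   # illustrative, invented tax rate
    else:
        raise ValueError("unknown mode")


print(compute_gross_pay(40, 12.5))        # 500.0 -- simple input-output function
print(process_pay(40, 12.5, "NET"))       # 400.0 -- behaviour controlled from outside
```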

Cohesion
Coupling is concerned with measuring the strength of connections between modules, whereas cohesion
is a measure of the strength of binding between the elements within a module. The levels used to describe the
cohesion of elements are defined on a scale from weakest to strongest. In the last section, you learned
that coupling can be reduced by minimizing the connections between modules. Coupling
can also be reduced by achieving strong cohesion, i.e., by strengthening the binding between elements within
the same module. Cohesion tries to determine how closely the elements of a module are related to
each other.
Cohesion and coupling are inversely related to each other: higher cohesion within modules
generally means lower coupling between modules. This is what a designer tries to achieve, but the
correlation is not perfect. The following are the different levels of the scale on which cohesion can be
measured:
• Coincidental
• Logical
• Temporal
• Procedural
• Communicational
• Sequential
• Functional
Coincidental represents the lowest level of cohesion and functional represents the highest level of
cohesion. The cohesion of a module is defined by the highest level of cohesion applicable to all
elements in the module.

Coincidental cohesion generally arises when already existing software is decomposed into
modules and there is no meaningful relationship between the elements of a module. It may
also result in different modules having duplicate code. It results in strong coupling between the
modules, so they cannot be modified separately, which is undesirable.
Logical cohesion exists when the elements are logically related to each other and perform
functions that fall in the same logical class. For example, elements performing input functions fall
in the same logical class.
In the case of temporal cohesion, the elements are not only logically related to each other but are
also executed together. For example, elements involved in activities like initialization and
termination are usually temporally bound.
Procedural cohesion means that the elements belong to some common procedural unit, for
example, elements that belong to the same loop structure.
A module has communicational cohesion if its elements operate on the same input or output data.
Sequential cohesion is achieved when the output of one element becomes the input of the next element
within the same module. There are no general guidelines for combining such elements into modules.
Functional cohesion is the strongest of all forms of cohesion; it means that all elements within the
module are related to performing a single function.
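A small, invented contrast may help: the first module below is functionally cohesive (every element contributes to the single task of computing a sales tax), while the second is only coincidentally cohesive, grouping unrelated elements that happen to share a "utilities" home.

```python
# Functional cohesion: every statement serves the single task "compute sales tax".
def compute_sales_tax(amount, rate=0.07):
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)


# Coincidental cohesion: unrelated elements thrown together in one "utility" module.
class MiscUtils:
    @staticmethod
    def reverse_string(text):
        return text[::-1]

    @staticmethod
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    @staticmethod
    def print_banner():
        print("*" * 40)
```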

Figure Correlation between cohesion and coupling

Software design methodologies

Objective
After studying this lesson, students will be able to:
1. Define the design notations used in the design phase.
2. Discuss the concept of data design.
3. Describe the components and connectors used in a design.
4. Explain procedural design in detail.

1 Introduction
Once the SRS document has been defined, software development moves to the design phase. The
SRS document specifies the problem domain, and the focus of the design phase is to specify the
solution domain. The activities in the design phase may be similar to those in the analysis phase, but the
objective is different. The design phase is concerned with creating a document that is closer to
the implementation and is easy for the coding team to understand.

2 Design Notation
Design notations are used to represent the design, or design decisions, during the design phase.
These notations help the designer represent decisions in a compact manner.
2.1 Structure Charts
Graphical notations are best suited to representing the design document. Structure charts are a
commonly used graphical tool for representing a procedural design. The program structure
consists of the system modules and their interconnections, and the structure chart is used to describe this
structure.
A labelled rectangular box is used to represent a module. An arrow is used to show the parent-child
relationship between two modules: an arrow from module A to module B indicates
that module A invokes module B and that module B is subordinate to module A. Arrow
labels are used to specify the input and output parameters passed between the two modules.
Control information passed as a parameter is indicated by a filled circle at the tail of the label,
and data information by an unfilled circle at the tail of the label.

Figure Top level structure of structure chart

The structure above shows that only data information is being passed between the modules, and that
there are four modules in total. Procedural information such as loops and decisions can be explicitly
specified in a structure chart. For example, if a module repeatedly calls its submodules, this can be
represented using a looping arrow around the arrows used to invoke the submodules.

Figure Looping arrows in structure chart.

Decisions can also be specified explicitly, using the diamond symbol. For example, if a
super-ordinate module invokes a submodule based on some decision, a diamond symbol is
added to the head of the arrow that connects the two modules.

Figure Decision in structure chart.

Modules can be categorized into the following classes:

1. Input module: an input module obtains a data item from its subordinate modules and passes
it to its super-ordinate (calling) module.
2. Output module: an output module receives a data item from its super-ordinate module and
passes it on to its subordinate modules.
Input and output modules are used for the input and output of data from and to the environment.
3. Transform module: transform modules are concerned only with transforming data from one
form into another. Computational modules typically fall in this category.
4. Coordinate module: coordinate modules manage the flow of data to and
from different subordinates.

A module can also perform the functions of more than one type of module. A structure chart is best
suited to representing a design that uses functional abstraction. A typical structure chart is
used to specify the following:
1. The modules and their call hierarchy.
2. The modules and their interfaces.
3. The type of information passed between modules.

Once the structure design is finalized, the modules and their interfaces cannot be changed. The aim
of structured design is that the programs implementing it:
1. also have a hierarchical structure,
2. have functionally cohesive modules, and
3. have very few interconnections between modules.

2.2 Specifications
Design specifications are used to communicate the design to others. They specify the data
structures, the module specifications, and the design decisions. A formal description of all data structures
to be used in the software is given in the design document. Module specifications include a
description of the interfaces between modules, the abstract behaviour of each module and the submodules
used by a module. After the design is approved, it is implemented using a programming
language that best suits the design architecture. The design document also records all the major decisions taken
by the designer, with a brief description of the choices available and an explanation of why the
specified choice was selected.

3 Design Phase

The design phase is carried out to transform the requirements specified during the analysis phase into a
format that is easy to implement. The design is carried out as follows.

3.1 Data design


The first step of the design phase in software development is data design. The main focus of data
design is to define the data structures to be used by the various software components. It helps to
reduce program complexity and makes the program structure modular. The information domain
model developed during the analysis phase is transformed into the data structures needed for
implementing the software. The ER diagram and data dictionary defined in the analysis phase form the
basis for data design. The relationships between the various entities are defined in the ER diagram,
and the data dictionary is used to list all data items that appear in a DFD of the system, i.e. all the data
flows and data stores appearing on the DFD. It is also used to list the purpose of each data item
along with the definition of all composite data items. Data types and data constraints for all data
items listed in the data dictionary are specified during this step. The following principles are
followed during the data design step (a short sketch follows the list):
1. Identify the data structures needed for implementing the various system functions.
2. Develop a data dictionary that defines how different data objects interact and what constraints
should be imposed on the data structures.
3. Represent data in an abstract manner; the representation of a data structure should be known only
to the modules that need to access that data structure directly.
4. Maintain a library of useful data structures along with the possible operations on them.
5. The programming language used for implementation should support abstract data types.
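The sketch below shows one possible shape of a data dictionary produced in this step; the item names, types and constraints are invented for illustration and are not part of any system described in these notes.

```python
# Each entry records the type, constraints and purpose of a data item from the DFD.
data_dictionary = {
    "order_id": {
        "type": "int",
        "constraint": "positive, unique",
        "purpose": "identifies one customer order",
    },
    "order_total": {
        "type": "float",
        "constraint": ">= 0, two decimal places",
        "purpose": "sum of all line-item amounts",
    },
    # Composite item defined in terms of other items in the dictionary.
    "order": {
        "type": "composite",
        "composition": ["order_id", "order_total"],
        "purpose": "data flow from 'enter order' to 'validate order'",
    },
}

print(data_dictionary["order"]["composition"])   # ['order_id', 'order_total']
```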

3.2 Architecture Design


Software architecture may be defined as the combination of the software elements, their external views,
and the relationships between these elements. The following are some of the advantages of defining a
software architecture:
1. Understanding and communication.
2. Reuse.
3. Construction and evolution.
4. Analysis.

Software architecture views


Software architecture provides three types of views:

1. Module
2. Component and connector
3. Allocation
The module view represents the system as a collection of coded transformations that are
used to implement specified system functionality. Modules are the key elements of this
view; examples of module elements are classes, methods, packages, etc. The relationships
between the modules depend on the interactions between the modules.
The component-and-connector view represents the system as a collection of runtime
components. If you are familiar with object-oriented programming, then objects, or sets of objects
belonging to a class, are runtime components. A process is also an example of a runtime
component. Connectors define how two components interact with each other at run time;
examples of connectors are pipes and sockets.
3.2.1 Components of the system
Components in an architecture design refer to computation units or data stores. A component is named
after the function it performs; the name provides a unique identity that is used for
referencing details about the component in the supporting documents.

Figure Components of system

3.2.2 Connectors of the system


Components interact with other components of the system in order to provide services to the
system. Each component provides only a part of the overall functionality of the system, and the results
must be combined to form the overall result of the system. A connector, such as a procedure call
(supported by the runtime environment of a programming language), provides the interaction
between two components. Interaction can also take place with the help of protocols such as HTTP, FTP,
etc. The connector name is used to specify the type of interaction the two components have.
The specification of connectors helps to identify the infrastructure needed to implement an
architecture. Connectors can also be used to provide n-way communication between multiple
components.

Figure Connectors used for connecting components of system

Bus-type connector- Used by system components to broadcast messages to other components of
the system.
Database connector- Used by a functional component when it wants to access the database
component of the system.
RPC- Used by system components to specify a remote procedure call.
Pipe- Used to represent simple message passing between two components.
Request-reply- Used to show a simple connection between two components of the system, in which one
component makes a request and the other replies (a small sketch follows).
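To make the request-reply connector concrete, here is a deliberately simplified, hypothetical sketch in which a connector object carries a request from one component to another and returns the reply; a real system would typically use a procedure call, an RPC library or a protocol such as HTTP instead.

```python
class RequestReplyConnector:
    """Mediates a request from a client component to a server component."""

    def __init__(self, server_component):
        self._server = server_component

    def send(self, request):
        # The connector hides how the server is reached (local call, socket, ...).
        return self._server.handle(request)


class InventoryComponent:
    """Server-side component offering a small, invented service."""

    def __init__(self):
        self._stock = {"bolt": 120, "nut": 75}

    def handle(self, request):
        return {"item": request["item"], "quantity": self._stock.get(request["item"], 0)}


connector = RequestReplyConnector(InventoryComponent())
reply = connector.send({"item": "bolt"})
print(reply)    # {'item': 'bolt', 'quantity': 120}
```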
An allocation view represents how the different software modules are allocated to resources
such as hardware, file systems, etc. It is used to represent the relationship between the various
elements of the software system and the environment in which it is to be executed. In
technical terms, this view is concerned with exposing structural properties such as
which processes run on which processor, and how the files are organized on a file system.

3.3 Procedural Design Methodology

Procedural design is also known as function-oriented design. A design methodology is concerned
with providing guidelines to the teams involved in the design process. These guidelines help to
produce a design that is modular and simple. The procedural design methodology is based on the
principle of problem partitioning: the system is partitioned into subsystems to handle input,
output and transformation.
The idea behind this partitioning is that in many systems, one set of modules deals with input and is
concerned with issues of screens, reading data, formats, errors, exceptions, the structure of the
information, etc., whereas another set of modules deals with output and is concerned with the
preparation of output in presentation formats, charts, reports, etc.
There are four major steps in the procedural design methodology:
1. Restate the problem as a data flow diagram.
2. Identify the most abstract input and output data elements.
3. Perform first-level factoring.
4. Factor the input, output, and transform branches.

3.3.1 Restate the Problem as a Data Flow Diagram


The procedural design methodology starts by developing the DFD for the specified problem. The DFD for
the design phase is different from the DFD for the analysis phase: modelling the problem domain
is the key objective of the DFD in the analysis phase, whereas modelling the solution domain is the key
objective of the DFD in the design phase. The DFD in design represents the data flow in the actual system.
The DFD is concerned with identifying all the major functions and the flow of data between these
functions.

Figure Sample DFD
Source: An integrated approach to software Engineering by Pankaj Jalote, Narosa Publication.

3.3.2 Identify the Most Abstract Input and Output Data Elements
Functions or transformations cannot be applied directly to physical input; the input must first be
converted into a form suitable for applying the transformations. Similarly, the outputs generated by
the transformations are converted into physical output. This step focuses on separating the two
types of transformations: those that perform the actual transformations and those that convert the input
and output formats.
For this, you need to identify the most abstract levels of input and output. Data elements that are
farthest removed from the physical input elements but can still be considered to represent input data
are called the most abstract input data elements. You can recognize the most abstract input data
elements by moving from the physical inputs toward the outputs in the data flow diagram, until
you reach data elements that can no longer be considered incoming.
Similarly, data elements that are farthest removed from the physical output elements but can still
be considered to represent output data are called the most abstract output data elements. You can
recognize the most abstract output data elements by moving from the physical outputs toward the
inputs in the data flow diagram, until you reach data elements that can no longer be
considered outgoing. These data elements represent the logical output data items, and the
transforms after these data items merely convert the logical output into a physical output format.
The actual transformation happens between the most abstract input data elements and the most abstract
output data elements.

3.3.3 First-Level Factoring


After identifying the most abstract input and output data elements, the next step is to identify the
modules. In this step, a main module is specified, whose purpose is to invoke its subordinates.
For each of the most abstract input data items, an immediate subordinate module to the main
module is specified. Each of these modules is called an input module, and is responsible for
delivering its most abstract data item to the main module. Similarly, for each most abstract
output data item, a subordinate output module that accepts data from the main module is
specified.
The abstract data items are used to label the respective arrows connecting these input and output
subordinate modules. Finally, for each central transform, a module subordinate to the main one is
specified. These transform modules accept data from the main module and return the appropriate data
back to the main module.
The incoming arcs on the DFD are used to determine the data items passed to a transform
module from the main module, and the outgoing arcs are used to determine the
data items returned by the transform module.
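Although first-level factoring is normally drawn as a structure chart, the resulting call hierarchy can also be pictured as code. The sketch below is hypothetical (the module names are invented): the main module obtains the most abstract input from an input module, hands it to the central transform, and passes the result to an output module.

```python
def read_physical_input():
    """Stands in for the physical input (keyboard, file, ...); hypothetical."""
    return "7"


def get_validated_record():
    """Input module: delivers the most abstract input item to the main module."""
    return int(read_physical_input())


def compute_square(record):
    """Central transform: works only on logical data, not on physical formats."""
    return record * record


def display_result(result):
    """Output module: converts the logical output into a physical output format."""
    print(f"result = {result}")


def main():
    """Main module: coordinates its input, transform and output subordinates."""
    record = get_validated_record()
    result = compute_square(record)
    display_result(result)


if __name__ == "__main__":
    main()
```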

Figure First level factoring


Source: An integrated approach to software Engineering by Pankaj Jalote, Narosa Publication.

3.3.4 Factoring the Input, Output, and Transform Branches


The first-level factoring leaves a lot of work for each subordinate module to perform. These
modules must be further factored into subordinate modules to reduce the workload on each
module at the higher level. Let us start with the input modules.
An input module is concerned with producing its input data item. An input module can be factored as follows:
1. Treat the transform that produces the module's data item as a central transform.
2. Repeat the first-level factoring process for this new central transform, with the input module
playing the role of the main module.
3. Create a subordinate input module for each input data stream coming into this new central
transform.
4. Repeat the same process until the physical inputs are reached.
No output subordinate modules are produced during the factoring of input modules. Output
modules can be factored by repeating the same process.
Factoring the central transform is a functional decomposition. Factoring of the central transform
can be achieved by creating a subordinate transform module for each of the transforms in its
data flow diagram. This process can be repeated for the new transform modules that are created,
until atomic modules are reached.

Object oriented concepts

Objective
After studying this lesson, students will be able to:
1. Define various terms associated with object-oriented programming.
2. Explain the use of UML in object-oriented design.
3. Discuss the various types of diagrams created in UML.
4. List the steps involved in the OO design methodology.

1 Introduction
The object-oriented (OO) approach is the most popular software development approach today. An
object-oriented design is less affected by changes in requirements. Inheritance, and the close association
of the objects in the design with the problem domain, encourage the reusability of modules, which helps reduce
the overall cost and effort needed to develop the software. The object-oriented approach provides
structural support for implementing abstraction.

2 Object oriented concepts


Class- A class in object-oriented design forms the basis of any OOP language. A class is a
collection, or binding, of data members and member functions. All other objectives of object-oriented
programming can be achieved only with the help of the class type, and a language cannot be said to
support the OOP paradigm if it does not include a class type.

Object- A class is simply a type that forms the basis of OOP. An object may be defined as an instance
of a class; it is the active entity of a class. An object occupies space in memory. When you create
multiple objects of a class, multiple instances of the class members are created.

There was a clear separation between data and functions in procedural or structural languages, and
more emphasis was given to the code than to the data. A class supports the encapsulation feature of
OOP. Encapsulation means binding data and functions (code) together in a single type. A class is a
collection of data members and member functions, and hence it supports encapsulation.
Encapsulation leads to data hiding.

Objects are entities that encapsulate some state and provide services to be used by their environment.
The basic property of an object is encapsulation. The interface of an object refers to the services that
can be requested from the object. Encapsulation allows only limited access to the data, which lets you
achieve data integrity. The state of an object is preserved until the object is destroyed. The attributes and
services provided by an object are defined by the class it belongs to. A class may also be considered
a set of objects that share the same behaviour.

A system consists of a number of objects belonging to different classes. These objects interact with
each other in order to achieve the system objective. A messaging mechanism is used for interaction
between the objects: the object that receives a request executes the appropriate service and
returns the result to the object requesting the service. This is a clear case of the encapsulation and
abstraction supported by objects. Abstraction in OOP means providing only the interface to the
users and hiding the unnecessary details, or code, from the user. Abstraction is also the process of
creating an abstract object, from a class, that depicts a real-life entity. Some examples of
such abstract objects are employee, student, car, account, human, etc. The main objective of abstraction
is to reduce complexity and improve performance. A class that contains only the prototypes
of its data members and member functions is a good example of the abstract view of a class. You can
call a member function of a class through an object of that class without knowing any details about how that
member function is implemented.
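As a concrete, hypothetical example of encapsulation and abstraction, the class below keeps its state hidden and exposes only a small service interface; callers request services by sending messages (calling methods) without seeing how the balance is stored or updated.

```python
class Account:
    """Encapsulates state (the balance) and offers services through its interface."""

    def __init__(self, owner, opening_balance=0.0):
        self._owner = owner
        self._balance = opening_balance      # hidden state; not accessed directly from outside

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")   # protects data integrity
        self._balance -= amount

    def balance(self):
        return self._balance                  # service provided to the environment


acct = Account("Asha", 100.0)     # an object: one instance of the class
acct.deposit(50.0)                # message sent to the object
print(acct.balance())             # 150.0
```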

Relation between objects- Two objects are related in some way if one object invokes a service of the
other, and there is an association between two objects if one object uses a specified service
of another object. Links are used to represent such associations between objects. Association leads
to visibility: suppose that object A wants to send a message to object B, or invoke some service of
object B; then object B must be visible to object A in the final program definition.

Another important type of relationship between objects is aggregation. It is used to represent the
whole/part-of relationship. Aggregation is often referred to as containment. For example, if an
object OBJ1 is an aggregation of objects OBJ2 and OBJ3, then objects OBJ2 and
OBJ3 will normally be contained within object OBJ1.

Inheritance- Inheritance is probably the most powerful feature of object-oriented programming. It lets you
create a new class by reusing, or inheriting, the features of an already existing class and adding new
features to it. Inheritance helps you improve the reusability of your code by reusing
already tested classes. It helps to reduce a lot of programming effort and also improves
performance. Inheritance also helps you break one large class into smaller classes, which helps
improve abstraction.

Inheritance represents an "is a" relation. An inheritance relation can best be represented using a
hierarchical structure. A subclass inherits the features of a superclass. The hierarchy should be such
that an object of a class is also an object of all its superclasses in the problem domain. A subclass is
an extension of its superclass, or, put another way, all common features of the subclasses are accumulated in the
superclass. Features inherited from the superclass can be used in the subclass directly. A
derived class can also be considered a specialized version of an available abstract class. In the case
of strict inheritance, a subclass takes all the features of the superclass and adds additional features
to specialize it: all data members and operations of the base class are available in the derived class.
In the case of non-strict inheritance, the subclass does not inherit all the features of the superclass, or redefines
some of the features of the superclass.

A class may also inherit from multiple classes. This means that the relationship need not necessarily
be a tree-like hierarchical structure. When a subclass inherits features from multiple classes, it is
called multiple inheritance.
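The following hypothetical sketch shows the "is a" relation: Manager inherits all the data members and operations of Employee and adds a specializing feature, redefining one operation in the way described for non-strict inheritance.

```python
class Employee:
    """Superclass holding the features common to all employees."""

    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

    def annual_pay(self):
        return 12 * self.salary


class Manager(Employee):
    """Subclass: a Manager 'is an' Employee with one additional feature."""

    def __init__(self, name, salary, bonus):
        super().__init__(name, salary)     # reuse the superclass features
        self.bonus = bonus

    def annual_pay(self):                  # redefines an inherited operation
        return super().annual_pay() + self.bonus


m = Manager("Ravi", 50000, 60000)
print(isinstance(m, Employee))   # True -- an object of the subclass is also an Employee
print(m.annual_pay())            # 660000
```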

Polymorphism- In yet another key feature of OOP, you can create multiple forms of a single object.
Polymorphism is also known as overloading. Polymorphism is unavoidable in a system that
supports inheritance, because of the "is a" relation supported by inheritance: if B is a subclass of
superclass A, an object of class B can also be used wherever an instance of class A is expected. The static type of an
object is specified in the program text, and it remains unchanged. The dynamic type
of an object can change from time to time and is known only at reference time; the
dynamic type of an object is determined at the time the object is referenced. This type of
polymorphism requires dynamic binding of operations. Dynamic binding means that the code
associated with a given procedure call is not known until the moment of the call.
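A minimal, hypothetical illustration of polymorphism with dynamic binding: each reference in the list is treated simply as a shape, but the code executed for area() is chosen at the moment of the call, based on the dynamic type of the object.

```python
class Shape:
    def area(self):
        raise NotImplementedError


class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side


shapes = [Circle(1.0), Square(2.0)]      # both treated simply as Shape objects
for shape in shapes:
    # Dynamic binding: which area() runs depends on the object's dynamic type.
    print(shape.area())
```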

3 Unified Modelling Language (UML)


UML is a graphical language for representing object-oriented designs. It is called a modelling language, as it tries to
define the relationships and interactions between the classes in the system.

Class diagram- The class diagram is the core of a UML model. A class diagram is used to define the following:
1. The classes that are part of the system.
2. The associations or relationships between the classes.
3. The inheritance relationships between the classes.
A class in UML is represented as a box divided into three parts. The top part specifies the class name, the
middle part lists the data members, or attributes, of the class, and the bottom part specifies the
functions that transform the class state.

Figure : Simple representation of class

It is important to describe the relationships between the classes, as interaction between the
classes is necessary to achieve the system objective. One common relationship is the
generalization-specialization relationship between classes. It can best be represented using an
inheritance hierarchy in which properties of general significance are assigned to a more general
class, the superclass, while properties which specialize an object further are put in the
subclass. A subclass contains its own properties as well as those of the superclass. The
generalization-specialization relationship is shown by an arrow from the subclass to
the superclass, with an empty triangle-shaped arrowhead touching the superclass.

Figure : A class hierarchy.

Association is another relationship that allows objects to communicate with each other; it means
that an object of one class needs services from objects of another class. A line is used to show an
association between two classes, and a label on the line specifies the name of the association.
Association roles, attributes and cardinality can also be defined. A zero-or-many multiplicity is
represented by a "*". The part-whole type of relationship is used when an object is composed of
many objects. It represents containment, which means that an object of one class is contained within an
object of another class.
The aggregation relationship is represented using a line, originating from a little diamond on the
side of the whole, connecting it to the classes which represent the parts.

Figure : Aggregation and association.

Sequence and Collaboration Diagrams


Class diagrams do not represent the dynamic behaviour of the system. Sequence diagrams or
collaboration diagrams are used to represent the system behaviour as it performs some of its
functions. Sequence diagrams and collaboration diagrams are collectively known as interaction
diagrams.
A sequence diagram shows the temporal ordering of the messages exchanged between
objects as they collaborate, or interact, to implement the desired system functionality.
Objects, rather than classes, participate in sequence diagrams, since the diagrams depict the dynamic
behaviour of the system. The objects in a sequence diagram are shown at the top using labelled
boxes. The lifeline of an object is represented by a vertical bar. An arrow from the lifeline of one
object to the lifeline of another represents a message from the first object to the second. The message name
generally refers to a method in the receiving object's class. Each message has a return, which occurs when the operation
finishes and control returns to the invoking object. It is desirable to show the return message explicitly,
even though it is not mandatory; this is done using a dashed arrow.

Figure : Sequence diagram

A collaboration diagram is also a good representation of object communication, and it looks more
like a state diagram. An object is represented as a box, and messages are shown as numbered arrows
between the objects. Message numbering is used to capture the chronological ordering of the messages.

Figure : Collaboration diagram.

Activity diagram- An activity diagram is also used for modelling the dynamic behaviour of the system.
It focuses on modelling the activities during system execution. An activity in an activity diagram
is represented using an oval shape, with the activity name written inside it. The system proceeds
from activity to activity, and which activity comes next depends on some decision. A diamond shape
is used to represent a decision, and it is connected to multiple activities. An activity diagram
somewhat resembles a flow chart. An activity diagram also has notation to specify the parallel
execution of activities in a system.

Figure Activity Diagram

4 OO Design Methodology
An object-oriented design is a specification of the classes and objects that are part of the system
implementation. It is very close to the real code, and the implementation phase should only require you to
add details about methods or attributes to the design. An object-oriented design generally
consists of the following steps:

– Identification of classes and the relationships between them.
– Design of the dynamic model and the definition of functions on classes.
– Design of the functional model and the definition of functions on classes.
– Identification of internal classes and functions.
– Optimization and packaging.

Identification of classes and their relationships requires:

1. identification of the object types in the problem domain,
2. identification of the aggregations and associations between classes,
3. identification of class attributes, and
4. identification of the services that each class will provide to the system.

This step is basically concerned with producing the initial class design.

Design of the dynamic model and the definition of functions on classes
The initial class diagram gives only the module-level design. This design needs further modelling
to ensure that the expected behaviour for the events can be supported. The main aim of this step is to
specify how the state of the various objects changes when events occur. An event occurs when a
request is made to an object for some service. A series of events during system execution is called a
scenario.
A scenario defines the different services being performed by each object. All the scenarios put
together represent the behaviour of the complete system. A design capable of supporting all the
scenarios is also capable of supporting the desired dynamic behaviour of the system.

Functional Modelling
Functional modelling does not consider the control aspects of the computation; it is concerned only with
how the output values are computed from the input values of the system. The functional view represents the
mapping from inputs to outputs and also the steps involved in achieving this mapping. A DFD is used to
represent the functional model. Functional modelling is done to ensure that the object model
can perform the transformations required from the system.

Defining Internal Classes and Operations


It is important to consider implementation issues before the final design of the system. Each
class is evaluated to see whether it is fit, in its current form, to be included in the final design. Some of the
classes may even be removed from the design. Then the implementation of the methods of the classes
is considered.
Once the designer is satisfied with the implementation of each class and its methods, the system
design is complete.

Optimizing and Packaging


The last step is concerned with the issue of efficiency. The focus is on ensuring that the final structures
do not deviate too much from the logical structure. Various optimizations are possible, and
much is left to the experience and judgement of the designer.

UNIT – 5
Software Testing

Introduction
No matter what product you produce, it is imperative to test the product before it is handed over to
the customer. Software testing is a key phase in the development process of any software.

Software must be tested to determine whether it meets the standards and requirements defined in the SRS.
Depending on the nature of the software under development, testing can be performed by the
developers themselves, or by a special team of testers. Testing can be performed right from
the beginning of the software development process or at the end, before the software is handed over
to the customer. In this lesson you will also learn about various software testing techniques
used in practice.

Software testing
Testing refers to the process of evaluating a system to check whether it satisfies the specified
requirements. Testing is performed by executing the system to identify errors, or gaps between the
requirements and the actual product. Software testing can also be automated: special test software
can be designed to find bugs or gaps in another software system.
Testing is the fourth phase in the waterfall model, but it can be started in parallel with the first three
phases so that errors are identified at the very beginning; this helps to reduce the time and cost of correcting
them. When testing starts depends on the model in use; for example, in the case of the incremental waterfall
model, testing can be performed after each iteration or phase. Testing in the requirements gathering phase is done to
analyse and verify the requirements. Reviewing the design in the design phase in order to improve it
is also a type of testing. Parallel reviews can also be performed by the developers during the
implementation.

The testing process mainly includes:

• Executing the software using inputs that satisfy the operational conditions.
• Verifying, or comparing, the actual outputs against the desired outputs to find gaps.
• Verifying, or comparing, the resultant and expected states.
• Measuring various performance metrics such as the amount of memory consumed, the time
taken to execute, the resources used, etc.

Key terminologies used in software testing


Error- The resultant state is incorrect, or not the same as the expected state.
Failure- A failure to provide the expected service to the user.
Bug- Missing or incorrect code that may lead to a failure.
Test case- A set of inputs, operational conditions, and the expected outcome (see the sketch after this list).
Test suite- A set or collection of interrelated test cases with a common testing goal.
Test driver- A software tool that is responsible for applying the test cases.
Test strategy- An algorithm used to select test cases from a representation of the software.
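The sketch below ties several of these terms together; the unit under test and the values are invented for illustration. Each test case bundles an input with its expected outcome, the list of test cases forms a test suite, and the small loop at the end plays the role of a test driver.

```python
def absolute_value(x):              # the (trivial, hypothetical) unit under test
    return x if x >= 0 else -x


# Test suite: a collection of related test cases with a common goal.
test_suite = [
    {"name": "positive input", "input": 5,  "expected": 5},
    {"name": "negative input", "input": -3, "expected": 3},
    {"name": "zero input",     "input": 0,  "expected": 0},
]

# Test driver: applies each test case and compares actual with expected output.
for case in test_suite:
    actual = absolute_value(case["input"])
    verdict = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{case['name']}: expected {case['expected']}, got {actual} -> {verdict}")
```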

A software testing cycle

Figure Software testing cycle

The process starts by designing the test cases, or picking up already designed test cases. Then the
test data, or input, is prepared. The software to be tested is then executed on the test data and
the results are obtained. The results specify the identified errors, bugs and gaps, and also the values of
various measures such as memory used, time taken for successful completion, etc. The results are
then compared with the test cases to identify the gaps and generate the test report.

Objectives of software testing

Following are the key objectives of software testing:


1. Finding errors or bugs that a developer might have unintentionally left in the source code of the
various software modules.
2. Providing information about the level of software quality.
3. Making sure that the end result meets the business and user requirements.
4. Ensuring that the software is reliable enough to be deployed in a real working scenario.
5. Evaluating the capabilities of the system to show that it performs as intended.
6. Verifying the documents created throughout the software development lifecycle.

Principles of software testing


Following are the seven key principles of software testing:
1. Testing shows the presence of bugs- The objective of testing is to find bugs, not to declare a
software product bug-free. Hence, a key principle of software testing is to find as many bugs as possible.
2. Exhaustive testing is impossible- It is not possible to test all possible combinations of data and
scenarios for a large software system. Levels of risk and priorities are used to focus on the most
important aspects to test.
3. Early testing- The sooner you start, the better and easier it is. The testing process should start with the
very first phase of the development cycle. This keeps the cost of fixing errors minimal and also
reduces the time needed to fix them.
4. Defect clustering- The Pareto principle, applied to software testing, states that approximately 80% of the
problems are found in 20% of the modules.
5. The pesticide paradox- If you keep running the same set of tests over and over again, chances
are that no new defects will be discovered by those test cases. Regression testing must be used
after bugs are fixed, to make sure the changed software has not broken any other part of
the software.
6. Testing is context dependent- The methodologies, techniques and types of testing used depend
on the type and nature of the application.

7. Absence-of-errors fallacy- If testing fails to find any bugs in the software, it does not mean
that the software is ready to be delivered. It may be that the tests only checked whether
the software matched the stated requirements, which may not reflect the user's real needs.
Testability
Testability is a software quality characteristic. Some definitions of testability are:
"The degree to which a software system or its modules facilitate testing is called testability."
or
"Testability is the degree of difficulty of testing a system."
Testability is determined both by the system being tested and by its development approach. Higher
testability means better tests at the same cost, and lower testability means fewer, weaker tests at the same
cost. Testability determines the limit to which the risk of costly bugs can be reduced to an
acceptable level. Delivering a system with nasty bugs is a sign of poor testability. Improved testability
means there is a good chance of finding bugs as you do more testing. Designing a software system
with testing in mind is called design for testability.

Following are some of the characteristics of testable software:
1. High-quality software can be tested in a better manner. This is because, if the software is designed
and implemented with quality in mind, comparatively fewer errors will be detected during the
execution of tests.
2. Software is stable when changes made to it are controlled and when the existing
tests can still be performed.
3. Testers can easily identify whether the output generated for a certain input is accurate simply by
observing it.
4. Software that is easy to understand can be tested in an efficient manner. Software can be properly
understood by gathering maximum information about it. For example, to gain proper knowledge
of the software, its documentation can be used, which provides complete information about the
software code, thereby increasing its clarity and making testing easier.
5. By breaking software into independent modules, problems can be easily isolated and the modules
can be easily tested.

Black-Box Testing

In black-box testing, the software is treated as a black box and the testing is performed
without any knowledge of the components or modules of the software. The tester has no
knowledge of the software architecture and cannot view the source code. Black-box testing is
performed from the user interface: the tester provides inputs through the user interface and checks
the outputs without knowing the processing details.

Figure Black-Box testing

Advantages of Black-Box testing
1. It is well suited to testing large software systems.
2. It is suited to situations where there is little time before delivery to the customer and the delivery is incremental.
3. Access to the code is not required in this method; it provides full abstraction.
4. Highly skilled testers are not needed; even moderately skilled testers, who have no knowledge of the
software internals, can be used to test the software.
5. Testing from the user interface is easy; it is just like operating the software.

Disadvantages of Black-Box testing


1. Only a limited set of scenarios can be tested from the user interface.
2. It is not the most efficient method of testing, as the tester has no knowledge of the software modules.
3. There is no difference between highly skilled and moderately skilled testers, as the tester
cannot target any particular module that is prone to errors.
4. It is difficult to design the test cases.

White-Box Testing
The white-box testing methodology refers to the detailed investigation of the internal structure of the software
modules and code. White-box testing is also known as glass-box or open-box testing, as everything is
visible to the tester. Only skilled testers are used to perform white-box testing on
a software system, as they need to have complete knowledge of the software architecture and design. The
tester digs deep into the source code to find out which sections of the code might be coded
inappropriately.

Figure White-Box testing

Advantages of White-Box Testing.


1. It is easy for a skilled tester to identify the type of data that can help in testing the application
effectively.
2. It can help in optimizing the code.
3. The tester can even help in removing ineffective code segments.
4. Maximum coverage is achieved during testing, due to the knowledge level of the skilled tester.

Disadvantages of White-Box Testing.


1. It is a costlier method of testing, in terms of both the time taken and the cost of hiring skilled testers.
2. It is not possible to cover each and every segment of the code to find hidden errors.
3. Specialized tools like code analysers and debugging tools are needed to perform white-box
testing.
4. It suffers from the bias of the skilled tester, who may ignore certain code segments and focus
only on a few.

Validation Testing
Validation Testing is performed to check whether the software satisfies the customer needs or not.
Validation testing is done during the software development process and also at the end of it.

Validation testing is carried out before the software is handed over to the customer. The aim of
validation testing is to ensure that the software is made according to the customer requirements.
The acceptance of the software by the end customer is part of validation testing.

Validation Testing Process


Bugs or errors found during the testing process are logged for the development team to fix them.
Once the bugs are fixed by the developers, the testing process is repeated to ensure that the bugs have
been fixed and that no new bugs have been introduced by the changes made by the developers.

Validation testing can be described as follows:


 Validation Planning
 Define Requirements
 Select Appropriate Team
 Develop Documents
 Evaluation
 Incorporating Changes

The Basic Tests


The validation testing process can best be viewed as the V-Model shown in the figure below:

Figure V-Model of validation testing

Types of Validation Testing


Normally the testers are involved right from the very beginning of the development process, and the
testing starts immediately after a module of the system has been developed. Apart from the basic
tests, the different types of software validation tests are:

Unit Testing
Unit testing is also known as Component testing. Unit testing is performed by the developers on
the individual units of source code. The aim of the tests carried out in this testing type is to search
for defects in the software component. At the same time, it also verifies the functioning of the
different software components, like modules, objects, classes, etc.
It is performed before the setup is handed over to the testing team to execute the test cases. Test
data used by developers for unit testing is different from the test data used by quality assurance
team. The goal of unit testing is to show that individual parts of a program are correct in terms of
requirements and functionality.
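
As a rough illustration only, the sketch below uses Python's standard unittest module to test a single hypothetical function (apply_tax) in isolation; the function and its test data are assumptions made for this example, not part of any particular project.

    import unittest

    # Hypothetical unit under test: one small, self-contained function.
    def apply_tax(amount, rate):
        if amount < 0 or rate < 0:
            raise ValueError("amount and rate must be non-negative")
        return round(amount * (1 + rate), 2)

    class ApplyTaxTests(unittest.TestCase):
        def test_normal_value(self):
            self.assertEqual(apply_tax(100.0, 0.18), 118.0)

        def test_zero_amount(self):
            self.assertEqual(apply_tax(0.0, 0.18), 0.0)

        def test_negative_input_rejected(self):
            with self.assertRaises(ValueError):
                apply_tax(-5.0, 0.18)

    if __name__ == "__main__":
        unittest.main()

Each test case exercises the unit on its own, without depending on any other module, which is exactly what makes faults easy to localize at this level.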

Limitations of Unit Testing


A developer cannot find each and every bug in the program, as there is a limit to the number of test
data sets and scenarios that a developer can use to verify the source code. Once all the options or
scenarios in the mind of the developer have been exhausted, the unit code is integrated with the other units.

Integration Testing
This is an important part of the software validation model, where the interaction between the
different interfaces of the components is tested. Integration testing is done to check the functional
correctness of the units when executed after integration. Integration testing can be done using two
approaches:

1. Bottom-up integration testing


It starts by testing the units in isolation, followed by tests of progressively higher-level
combinations of units called modules or builds.

2. Top-down integration testing

In this testing, the highest-level modules are tested first, followed by testing of the lower-level
modules.

A more comprehensive testing effort performs bottom-up testing, followed by top-down testing.
Multiple tests of the complete application are then performed in scenarios that simulate actual
situations. A short sketch of both approaches is given below.
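
In this minimal Python sketch (all names are hypothetical), a simple driver exercises the low-level unit directly (bottom-up), while a stub temporarily stands in for the low-level unit so the high-level unit can be tested first (top-down), before the real combination is exercised.

    # Two hypothetical units: a lower-level price lookup and a higher-level
    # order module that depends on it.
    def lookup_price(item):                            # lower-level unit
        prices = {"tea": 10.0, "coffee": 20.0}
        return prices[item]

    def order_total(items, price_fn=lookup_price):     # higher-level unit
        return sum(price_fn(item) for item in items)

    # Bottom-up style: the low-level unit is tested first with a simple driver.
    assert lookup_price("tea") == 10.0

    # Top-down style: the high-level unit is tested first, with the lower unit
    # replaced by a stub that returns a fixed value.
    def price_stub(item):
        return 5.0

    assert order_total(["tea", "coffee"], price_fn=price_stub) == 10.0

    # Finally, the real integration is exercised through the actual interface.
    assert order_total(["tea", "coffee"]) == 30.0
    print("Integration checks passed")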

System Testing
System testing is performed on the whole system or complete software. The concern of this testing
is to check the behaviour of the whole system as defined by the scope of the project. Once all the
modules are integrated, the software as a whole is tested rigorously to see that it meets the specified
requirements and standards. Specialized testing teams are used to conduct the system testing. While
carrying out the tests, the tester is not concerned with the internals of the system, but checks if the
system behaves as per expectations.
Following are the key benefits of system testing:
1. It is the phase of the SDLC where it is judged whether the software as a whole is up to
the standards and meets the minimum requirements.
2. The software is tested thoroughly to verify that it meets the functional and technical
specifications.
3. The software is tested in an environment that is very close to the actual environment where
it will eventually be deployed.
4. System testing enables us to test, verify, and validate both the business requirements as well as
the software architecture.
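
As a rough, assumed illustration, the sketch below treats a tiny Python function (place_order) as if it were the fully integrated system and checks its end-to-end behaviour only against stated requirements, without looking inside any individual module.

    # Stands in for the fully integrated system (interface + billing + storage);
    # everything here is hypothetical and exists only to make the sketch runnable.
    def place_order(request):
        prices = {"tea": 10.0, "coffee": 20.0}
        total = sum(prices[item] for item in request["items"])
        return {"items": request["items"], "total": total, "bill_no": 1}

    # System-level checks written directly from the requirements, e.g.
    # "an order response must contain the item list, the total and a bill number".
    response = place_order({"items": ["tea", "coffee"]})
    assert set(response) == {"items", "total", "bill_no"}
    assert response["total"] == 30.0
    print("System-level requirement checks passed")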

Acceptance Testing

A tester has to think like the client, or even interact with the client and test the software with respect
to user needs, requirements, and business processes, and determine whether the software is ready
for the delivery or not. This testing is key to determine the confidence of the client in the system.

Verification and Validation


Objective

This chapter aims at understanding a major and most important part of software
engineering, namely verification and validation, because unless one knows how to verify
and validate a software product, that product is of little use and may not meet the user's
business requirements. In this chapter we will learn the basics of making a quality software
product, which leads to better performance and a flawless product.

Introduction

In software engineering, verification and validation are two very widely used terms which
seem to be the same, but in fact the two terms are quite different.

Two points must be considered about verification and validation


• Fulfils the SRS document’s requirements
• Fulfils the user’s requirements

When we talk about the SRS document's requirements, we know that the SRS is completely
accepted by both the consumer and the producer, so once the product fulfils the SRS requirements,
the final product will be quality oriented. We will discuss both terms in detail, but first let us
try to understand them one by one.

Verification

Verification, in simple words, means that we check the product/software in the intermediate
phases of the SDLC to make sure that we are building the product on the right track.

Verification falls under two categories:

• The right method
• The right product

By the right method we mean the way we have already planned to build the product/software:
are we following that way, so that the final product will be correct? By the right product we
mean that the final software handed over to the end user will perform according to the user's
requirements and fulfil his/her objectives.

Validation

Validation is a method of evaluating whether the end product/software that we are going to
hand over to the user matches the user's business requirements.

We can understand this with a simple example. Suppose we own a restaurant and have hired a
software company to automate its daily operations. As the owner or an employee of the restaurant,
we have nothing to do with the intermediate steps of building the software. We will check the final
product by printing reports, by firing various commands, and by testing it by any means, to ensure
that the final product/software meets our restaurant's business requirements.

If it does not, we will refuse to accept the product. This is real validation and complete
testing.

Need for verification and validation


• Gaining early insight into software performance
• Synchronizing design and test data
• Managing the testing procedure
• Improved software quality
• Reduced assurance cost
• Reduced prototyping cost
• Shortened design cycle time

Understanding the need of verification and validation
Foremost questions of verification and validation

Here are some questions that should be answered before we go into more detail about
verification and validation.

Validation
• “Are we working on the right system?”
• Does our software product precisely capture the actual problem?
• Does our product have justification for the needs of all the participants?

Verification

• “Are we building the system right?”


• Does our software product meet the customer’s specifications?
• Are we implementing the product correctly?
• Does the given system work according to user’s instructions?

Software verification and validation approaches and their applicability

Verification and validation of the software product are carried out throughout the
development of the product. There are many techniques to check software, either on modules
in isolation or on combinations of modules. Broadly, we have classified these approaches into
five categories, which are discussed in the next section.

1. Software Technical Reviews
This approach includes techniques like walk-throughs, audits and inspections.
Software technical reviews can be used to examine all the software work products, and are
especially applicable to products expressed in natural language, such as requirements and design documents.
2. Software Testing
Software testing is the process of checking whether the software we are working on is up to
the mark, i.e. whether the software meets the user's requirements.
There are mainly three types of testing:
a. Alpha testing. Alpha testing refers to the software testing carried out
by the test team within the development environment.
b. Beta testing. Beta testing is the software testing performed by a
selected group of friendly customers.
c. Acceptance testing. This testing is performed by the customer to
determine whether to accept the software product or to reject it.

Levels of testing:
• Module testing
• Integration testing
• System testing
• Regression testing
3. Proof of correctness
Proof of correctness is a mathematical and analytical technique which provides a proof
that the software does its work correctly. For example, showing that a restaurant
automation software always takes correct inputs and produces correct bills would be a
proof of correctness. A lightweight, assertion-based sketch of this idea is given after this list.
4. Simulation and prototyping
Simulation is a model-building technique which helps in understanding the software
product with more clarity. There are many approaches to understanding the needs
and working of software, but simulating the software is one of the best. In simulation
we make a dummy model of the software and test it according to our requirements
and with random sets of inputs.
5. Requirement tracing
User requirements are traced because they may change as time passes. There are
many situations where users ask developers to make changes in the software product
for their convenience, so it is always important to trace the user's requirements and
specifications.
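
As promised under "Proof of correctness" above, the sketch below gives only a lightweight, assertion-based flavour of the idea, using the restaurant-billing example; a real proof would argue mathematically that the postcondition holds for every valid input, rather than checking it at run time.

    # Hypothetical billing routine with its pre- and postconditions made explicit.
    def restaurant_bill(prices):
        # Precondition: every price is a non-negative number.
        assert all(p >= 0 for p in prices), "precondition violated"
        total = 0.0
        for p in prices:
            total += p   # informal loop invariant: total = sum of prices seen so far
        # Postcondition: the bill equals the sum of the inputs and is never negative.
        assert total >= 0 and abs(total - sum(prices)) < 1e-9, "postcondition violated"
        return total

    print(restaurant_bill([120.0, 60.0, 20.0]))   # prints 200.0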

Principle of verification and validation

Verification and validation is the process of reviewing, inspecting, testing, checking, auditing,
or otherwise establishing and documenting whether items, services and processes conform to
specified requirements. It is the act of evaluating the system and its components, checking
whether the software product resulting from a given development phase satisfies the requirements
introduced at the start of that phase.

According to the IEEE/ANSI definition, “It is the process of evaluating a system or
components during or at the end of the development process to determine whether it
satisfies the specified requirement”. Validation is therefore ‘end to end’ verification [IEEE
Education Society].

Difference between Verification and Validation

Up to now we have got to know what verification and validation are. So let us now try to
understand the difference between these two terms.

Definition
• Verification: the process of evaluating the product at intermediate stages to check whether all the phases of the SDLC are up to the mark or not.
• Validation: the checking of the final product before giving it to the end user.

Requirements
• Verification: checks whether the product is made as per the specified requirements and design specifications.
• Validation: checks whether the product is made as per the user's business requirements.

Right product
• Verification: asks “Are we on the right track in making the product?”
• Validation: asks “Have we made the right product?”

Execution
• Verification: carried out without executing the product.
• Validation: carried out by executing the product.

Testing
• Verification: includes all the static testing techniques.
• Validation: involves all the dynamic testing techniques.

Examples
• Verification: reviews, inspections and walkthroughs.
• Validation: all types of testing, such as smoke, regression, functional, system and UAT.

Verification activities include:

• Technical reviews, software inspections and walkthroughs
• Verifying whether the software requirements meet the user's requirements
• Unit testing, so that each unit performs as specified
• Integration testing, i.e. checking that combining two or more units gives satisfactory performance
• System testing, i.e. testing the complete system/software to check its performance
• Acceptance testing, i.e. checking whether the end user accepts the product or not
• Audit

The process of verification can be explained diagrammatically as shown below.

Fig: Verification activities
Validation activities include:
• Assuring that the product meets the user's business requirements
• Data checksums
• Hardware testing and critical system testing
• Checking that the software product meets the user's requirements
• Random testing
• Checking of deficiencies in the software product by the end user
• Analysis of the outputs

System view in terms of verification and validation

Fig: System view of verification and validation
Difficulty & Importance of verification and validation

Fig: Difficulty in verification and validation

Fig: Process of verification and validation with credibility
Verification and Validation in the Life Cycle

Verification and Validation of Requirement Phase

Verification and validation in the requirement phase focus on acquiring
information about the software or product on which the team is going to work. They rely
on requirement reviews, walkthroughs, inspections and prototyping. This phase looks for
problems in the following areas:

• Whether the software requirement specification agreed between the user and the development team
correctly describes the functional requirements for which the system is to be built
or extended.
• The graphical user interface, which includes the appearance, the look and feel, and the output
formats.
• Non-functional requirements, which include security requirements, user friendliness, etc.
• Any ambiguity in the definition, the application-specific content, and the formulation of the
requirements.

Verification and Validation in Design Phase

Verification and validation in the design phase focus on setting up benchmarks for the
correctness, consistency and acceptability of the design with respect to the requirement and
analysis phase.

This phase ensures that:

1. The notation of the specified design language is used correctly.
2. There is no mismatch between the functionality expected by the customer and the design.
3. There is no mismatch between the user interface expected by the customer and the one
developed by the development team.
4. The requirements and the capabilities stated earlier for each requirement will actually be
delivered when the system is implemented.

Verification and Validation in Implementation phase:

This is the most important phase in software engineering, and it takes a lot of time and
effort to implement the actual product/software at the user's workplace and in the user's
environment. This phase focuses on:

1. Inadequate and incomplete implementation of the software.
2. Whether the organization's coding standards are followed while writing the code.
3. Whether the programming language is used properly to meet all the user's needs.
4. Incorrect implementation of data structures and algorithms.
5. Whether the hardware/devices are in working condition and controlled by the software.
6. Whether every module, when integrated together, results in proper functioning of the
whole system.

Verification and Validation in integration phase

The integration phase combines all the modules into the complete software.

Dynamic validation is an important part of this phase. Its purpose is to ensure that the
interfaces between the various modules are correct. This phase includes:

1. Correct assignment of actual parameters to formal parameters.
2. Correct assignment of values to variables.
3. Correct sequence of execution of modules.
4. Module interaction.

Verification and Validation in testing phase

Following the above phases, the testing phase is the most important phase of
verification and validation. In this phase, system testing is performed both in the development
environment and in the environment provided by the user. The objective is to make sure
that the functional and non-functional requirements are met as per the user's business
necessities. This phase ensures:
1. Random testing
2. Use-case based testing
3. Acceptance testing

Verification and Validation in maintenance

Once the system is installed at the user's workplace, it is time to maintain the
software in good condition so that it does not fade away. This phase includes:

1. Corrective maintenance to fix errors in the system.
2. Adding additional capabilities to the system.
3. Improvements in performance, response time, user interface and quality.
4. Easy adaptation to any new hardware change, operating system change or new
technology change.
5. Security of the software from threats like viruses, worms, etc.

Example of Verification and Validation

Suppose we own a software company and we have got a project to develop a college
management system.

Firstly, our company's core team will have a meeting with the core team of the college and
will finalize the software product's basic structure, also known as the Software Requirement
Specification (SRS), which is later signed by both the software producer and the user. Then the
management of our company will nominate a team to develop the software product.

Then the nominated team will conduct regular meetings with the college faculty to
understand their requirements for the software product.

As discussed earlier in the life-cycle section on verification and validation, the nominated team
will conduct regular sessions with the college staff and gather all the information that will
help them develop a good software product. These meetings will help the team gain
knowledge about these phases:

• Understanding the requirements of the user, to better understand the software product
• Understanding the design of the software product
• Understanding the output formats as per the user's requirements
• Implementing the software
• Testing the software

And finally:

Verification and Validation


In this phase the nominated team will check the software product they have made with
random and unexpected inputs. The development team will check the integration between
the various modules to make a better software product.

When the development team installs the software product at the user's workplace, the user will
check the software product according to his/her specified needs; if the needs specified in the
SRS are met by the software product, the user accepts the product.

Then the maintenance phase of the software product is started by the development team, which
keeps on updating the product with new technologies, hardware support and changes suggested
by the user.
