Software Engineering 12 - 04 - 2022

Contents

Software and Software Engineering __________________________________________________________ 6

Program versus Software___________________________________________________________________ 6

Characteristics of software _________________________________________________________________ 7

Changing Nature of Software/Software Application Domains ____________________________________ 10

Software Myths: (Wrong thinking) __________________________________________________________ 14

Role Of Management in Software Development _______________________________________________ 17

SOFTWARE DEVELOPMENT LIFE CYCLE MODEL ______________________________________________________ 19

THE NEED FOR A SOFTWARE LIFE CYCLE MODEL ____________________________________________________ 19

Different software life cycle models _________________________________________________________ 19

1) Classical Waterfall Model _________________________________________________________________ 20

2) Iterative Waterfall Model or Modified Waterfall Model _________________________________________ 24

3) Incremental Process Models _______________________________________________________________ 25

4) Evolutionary Process Models _______________________________________________________________ 26


1) Prototype Model ________________________________________________________________________________ 28
2) Spiral Model____________________________________________________________________________________ 30

5) Unified Process Model ____________________________________________________________________ 34

Software Engineering | Comparison of Different Life Cycle Models ________________________________ 38

Selection Of Appropriate Life Cycle Model for A Project _________________________________________ 39

Unit-II _________________________________________________________________________________ 41

Requirements Engineering _________________________________________________________________ 41

Types of Software Requirement ____________________________________________________________ 44

Functional Requirements: ______________________________________________________________________ 44

Non-Functional Requirements ___________________________________________________________________ 44


1. Product Requirements ___________________________________________________________________________ 45
2. Process Requirements ____________________________________________________________________________ 45
3. External Requirements ___________________________________________________________________________ 46

Domain Requirements _________________________________________________________________________ 46

PAGE NO. 1
Non-Functional vs. Functional Requirements _______________________________________________________ 47

Feasibility Study _________________________________________________________________________ 48

Need of Feasibility Study: ______________________________________________________________________ 49

Requirements Elicitation __________________________________________________________________ 50

Requirements Elicitation Methods: _______________________________________________________________ 51

Requirements Analysis ____________________________________________________________________ 55

Software Requirements Specification (SRS) Document __________________________________________ 58

Characteristics of good SRS _____________________________________________________________________ 58

Properties of a good SRS document ______________________________________________________________ 60


IEEE Software Requirements Specification Template ____________________________________________________ 61

Requirements Validation __________________________________________________________________ 62

Requirements Management _______________________________________________________________ 64

Software Architecture ____________________________________________________________________ 68

Common Software Architectures ________________________________________________________________ 68

Role of Software Architecture ___________________________________________________________________ 70

Role of Software Architect ______________________________________________________________________ 71


Design Expertise ______________________________________________________________________________________ 71
Domain Expertise _____________________________________________________________________________________ 72
Technology Expertise __________________________________________________________________________________ 72
Methodological Expertise ______________________________________________________________________________ 72
Hidden Role of Software Architect _______________________________________________________________________ 72

Software Design Vs Software Architecture _________________________________________________________ 72

Architecture View Model __________________________________________________________________ 75


Why is it called 4+1 instead of 5? _________________________________________________________________________ 76

Component and Connector View and its Architecture Style _______________________________________ 78

Components _________________________________________________________________________________ 78

Connectors __________________________________________________________________________________ 79

N-Tier Architecture _______________________________________________________________________ 83

Deployment View ________________________________________________________________________ 87

Deployment View and Performance Analysis __________________________________________________ 89

Documenting Architecture Design ___________________________________________________________ 92

Evaluating Architectures __________________________________________________________________ 95

Unit -III _______________________________________________________________________________ 102

Software Design ________________________________________________________________________ 102

Principles of Software Design __________________________________________________________________ 102

Modularization _________________________________________________________________________ 104


Advantage of modularization: __________________________________________________________________________ 104

Module-Level Concepts _______________________________________________________________________ 105


Types of Coupling ____________________________________________________________________________________ 106
Data coupling: _______________________________________________________________________________________ 106
Stamp Coupling: _____________________________________________________________________________________ 107
Control Coupling: ____________________________________________________________________________________ 108
External Coupling: ____________________________________________________________________________________ 109
Common Coupling: ___________________________________________________________________________________ 109
Content Coupling: ____________________________________________________________________________________ 110

2. Cohesion ______________________________________________________________________________ 110


Types of cohesion: ___________________________________________________________________________________ 111
Calculate_sale_Tax ___________________________________________________________________________________ 112
Sequential Cohesion __________________________________________________________________________________ 112
Communicational Cohesion ____________________________________________________________________________ 113
Procedural Cohesion __________________________________________________________________________________ 113
Temporal Cohesion ___________________________________________________________________________________ 114
Logical Cohesion _____________________________________________________________________________________ 114
Coincidental Cohesion ________________________________________________________________________________ 115

Software Design Approaches ______________________________________________________________ 118


Bottom-up approach: _________________________________________________________________________________ 118
Advantages:_________________________________________________________________________________________ 118
Disadvantages: ______________________________________________________________________________________ 119
Top-down approach: _________________________________________________________________________________ 119
Advantages:_________________________________________________________________________________________ 120
Disadvantages: ______________________________________________________________________________________ 120
Hybrid Design: _______________________________________________________________________________________ 120
Generic Procedure: ___________________________________________________________________________________ 122
Problem in Top-Down design method: ___________________________________________________________________ 122
Solution to the problem: ______________________________________________________________________________ 122
Function Oriented Design Strategies or Design Notations: ____________________________________________________ 123

1. Data Flow Diagram (DFD): ________________________________________________________________________ 123
2. Data Dictionaries: ______________________________________________________________________________ 123
3. Structure Charts: _______________________________________________________________________________ 123
Symbols used in construction of structured chart ___________________________________________________________ 123
2. Conditional Call ________________________________________________________________________________ 124
3. Loop (Repetitive call of module) ___________________________________________________________________ 124
4. Data Flow _____________________________________________________________________________________ 125
5. Control Flow __________________________________________________________________________________ 125
6. Physical Storage ________________________________________________________________________________ 125
Example : Structure Chart for an Email server ______________________________________________________________ 126

Object-Oriented Design __________________________________________________________________ 127


The other characteristics of Object Oriented Design are as follow: _____________________________________________ 127
Process of Object Oriented Design: ______________________________________________________________________ 128
Concepts of Object Oriented Design: _____________________________________________________________________ 129
Analyze and Design Object Oriented System _______________________________________________________________ 131
https://www.javatpoint.com/uml-use-case-diagram ________________________________________________________ 140

Unit-IV ________________________________________________________________________________________


Dynamic Model _____________________________________________________________________________________ 148

Diagrams for Dynamic Modelling _______________________________________________________________ 148


1) Interaction Diagrams ____________________________________________________________________________ 148
2) State Transition Diagram _________________________________________________________________________ 148

States and State Transitions ___________________________________________________________________ 150


State ______________________________________________________________________________________________ 150
Parts of a state ______________________________________________________________________________________ 150
Initial and Final States _________________________________________________________________________________ 150
Transition __________________________________________________________________________________________ 150

Events _____________________________________________________________________________________ 152


External and Internal Events ___________________________________________________________________________ 153
Deferred Events _____________________________________________________________________________________ 153
Event Classes ________________________________________________________________________________________ 153

Actions ____________________________________________________________________________________ 153


Activity_____________________________________________________________________________________________ 153
Action _____________________________________________________________________________________________ 153
Entry and Exit Actions _________________________________________________________________________________ 154
Scenario ____________________________________________________________________________________________ 154
Examples of State Transition Diagram ___________________________________________________________________ 154

Internal Classes and Operations ________________________________________________________________ 156

Software Design ________________________________________________________________________ 157

Software Design Levels _______________________________________________________________________ 157

Program Design Language (PDL) ________________________________________________________________ 162

Logic/Algorithm Design _______________________________________________________________________ 165

Example of Stepwise Refinement Technique ______________________________________________________ 166


Initial breakdown into steps ____________________________________________________________________________ 167
Revised breakdown of steps ____________________________________________________________________________ 167
Breaking the steps into smaller steps ____________________________________________________________________ 167
Putting it all together _________________________________________________________________________________ 168

Verification ____________________________________________________________________________ 172

1. Design Walkthroughs: - __________________________________________________________________ 172

1. Plan for a Design Walkthrough ____________________________________________________________ 172

2. Get the Right Participants ________________________________________________________________ 173

3. Understand Key Roles and Responsibilities __________________________________________________ 174

4. Prepare for a Design Walkthrough _________________________________________________________ 174

5. Use a Well-Structured Process_____________________________________________________________ 174

6. Review and Assessment the Product, Not the Designer ________________________________________ 174

2. Critical Design Review (CRD): - ____________________________________________________________ 174

A Critical Detailed Design Review (CDR) should: ___________________________________________________ 175

Completion of CDR should provide: _____________________________________________________________ 176

3. Consistency Checkers ____________________________________________________________________ 177

Software and Software Engineering
Software is more than just program code. A program is executable code that serves some
computational purpose. Software is a collection of executable programs, associated libraries,
and documentation. Software made for a specific requirement is called a software product.
Engineering, on the other hand, is about developing products using well-defined scientific
principles and methods. Software engineering is the engineering branch concerned with
developing software products using well-defined scientific principles, methods, and procedures.
The outcome of software engineering is an efficient and reliable software product.
Stephen Schach defined it as “A discipline whose aim is the production of quality software,
software that is delivered on time, within budget, and that satisfies its requirements”.

Program versus Software


Software is more than programs. It comprises programs, documentation of every facet of those
programs, and the procedures used to set up and operate the software system.
A program is a part of software and can be called software only when documentation and operating
procedures are added to it. A program includes both the source code and the object code.

PAGE NO. 6
Operating procedures comprise the instructions required to set up and use the software, as well as
instructions on actions to be taken during failure. A list of operating procedure manuals/documents
is given in the figure.

Characteristics of software
Software has characteristics that are considerably different from those of hardware:
1) Software is developed or engineered; it is not manufactured in the classical sense.
The life of software runs from concept exploration to the retirement of the software product.
It is a one-time development effort followed by a continuous maintenance effort to keep it
operational. Making 1,000 copies, however, is not an issue and involves almost no cost,
whereas every copy of a hardware product costs money for raw material and other processing
expenses. There is no assembly line in software development; hence software is not
manufactured in the classical sense.
2) Software doesn’t “Wear Out”

In reliability studies for hardware products there is a well-known “bath tub curve”. The life
of a hardware product has three phases. The initial phase is the burn-in phase, where failure
intensity is high; through testing and fixing faults, failure intensity comes down and may
stabilise after a certain time. The second phase is the useful-life phase, where failure
intensity is approximately constant. After a few years, failure intensity increases again as
components wear out; this is called the wear-out phase.
Software has no such phase because it does not wear out. The important point is that software
becomes more reliable over time instead of wearing out. Software may nevertheless be retired
because of environment changes, new requirements, new expectations, and so on.

3) Reusability of Components

A software component should be designed and implemented so that it can be reused in many
different programs. Modern reusable components encapsulate both data and the processing
that is applied to the data, enabling the software engineer to create new applications from
reusable parts.
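As a small sketch of such a component (the class name, categories, and rates are all hypothetical), the class below encapsulates both its data and the processing applied to that data, so two different programs can reuse it without knowing its internals:

```python
# A hypothetical reusable component: it encapsulates its data (the rate
# table) together with the processing applied to that data (the tax
# computation), so different applications can reuse it as-is.

class TaxCalculator:
    def __init__(self, rates_percent):
        self.rates = rates_percent      # e.g. {"laptop": 18} means 18%

    def tax_for(self, category, price):
        # Unknown categories are taxed at 0 in this sketch.
        return price * self.rates.get(category, 0) / 100

# Two different "applications" reuse the same component:
retail = TaxCalculator({"book": 5, "laptop": 18})
wholesale = TaxCalculator({"book": 0, "laptop": 12})

print(retail.tax_for("laptop", 1000))     # 180.0
print(wholesale.tax_for("laptop", 1000))  # 120.0
```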
4) Software is flexible

Software can be developed to do almost anything. Sometimes this characteristic is a strength,
because it lets us accommodate almost any kind of change. Most of the time, however, this
“almost anything” characteristic has made software development difficult to plan, monitor, and
control. This unpredictability is the basis of what has been referred to for the past 30 years
as the “software crisis”.

Changing Nature of Software/Software Application Domains
The nature of software is changing. The following broad categories of software are evolving
to dominate the industry today. These categories have developed over the last ten years or so,
and more and more software is being developed within them:
1) System software: A collection of programs written to service other programs,
e.g., compilers, editors, and file-management utilities.
2) Application software: Stand-alone programs that solve a specific business
need. Application software is used to control business functions in real time (e.g.,
point-of-sale transaction processing, real-time manufacturing process control).
3) Engineering/scientific software: It has been characterized by “number
crunching” algorithms. Applications range from astronomy to volcanology, from
automotive stress analysis to space shuttle orbital dynamics, and from molecular
biology to automated manufacturing.
4) Real-time software: This software is used to monitor, control, and analyse real-world
events as they occur. Real-time software deals with a changing environment. An example
is software for weather forecasting: it gathers and processes the status of temperature,
humidity, and other environmental parameters to forecast the weather.
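A toy sketch of that idea, with made-up sensor samples and a deliberately naive forecasting rule (a real system would poll hardware continuously and use proper models):

```python
# A toy sketch of a real-time monitoring loop. The sensor samples and
# the forecasting rule below are invented for illustration only.

def read_sensors(sample):
    # In a real system this would poll hardware; here we replay a
    # pre-recorded sample: (temperature in deg C, relative humidity %).
    return sample

def forecast(temperature, humidity):
    """A deliberately naive rule: warm and humid -> likely rain."""
    if temperature > 25 and humidity > 80:
        return "rain likely"
    return "no rain expected"

samples = [(30, 85), (22, 60), (28, 90)]
for s in samples:
    t, h = read_sensors(s)
    print(t, h, forecast(t, h))
```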
5) Embedded software: This type of software is placed in the “read-only memory (ROM)”
of a product and controls the product's various functions. Embedded software can
perform limited and esoteric functions (e.g., keypad control for a microwave oven)
or provide significant function and control capability (e.g., digital functions in an
automobile such as fuel control, dashboard displays, and braking systems).
6) Product-line software: Designed to provide a specific capability for use by many
different customers. Product-line software can focus on a limited and esoteric
marketplace (e.g., inventory control products) or address mass consumer markets
(e.g., word processing, spreadsheets, computer graphics, multimedia,
entertainment, database management, and personal and business financial
applications).

7) Web applications: In the early days of the World Wide Web (1990 to 1995),
websites consisted of little more than a set of linked hypertext files that presented
information using text and limited graphics.
As time passed, the extension of HTML through development tools (e.g., XML, Java)
enabled web engineers to provide computing capability (dynamic pages) along with
information content. Web-based systems and applications (referred to collectively
as WebApps) were born.
Today, WebApps have evolved into sophisticated computing tools that not only
provide stand-alone functions to the end user but have also been integrated with
corporate databases and business applications. A decade ago, WebApps “involved
a mixture between print publishing and software development, between marketing
and computing, between internal communications and external relations, and
between art and technology”. Today they provide full computing potential in
many of the application categories noted in Software Application Domains.
8) Artificial intelligence software: This software makes use of non-numerical
algorithms to solve complex problems. Applications in this area include robotics,
expert systems, pattern recognition (image and voice), artificial neural networks,
theorem proving, and game playing.

9) Mobile applications: The term app has evolved to signify software that has been
specifically designed to reside on a mobile platform (e.g., iOS (Apple), Android, or
Windows Mobile). Software developed for these platforms is known as mobile apps.
In most instances, mobile applications encompass a user interface that takes
advantage of the unique interaction mechanisms provided by the mobile platform;
interoperability with web-based resources (e.g., GPS) that provide access to a wide
array of information relevant to the app; and local processing capabilities that
collect, analyse, and format information in a manner best suited to the mobile
platform.
In addition, a mobile app provides persistent storage capabilities within the
platform (e.g., Apple provides iCloud).
It is important to recognize the subtle distinction between a mobile web
application and a mobile app. A mobile web application (WebApp) allows a mobile
device to gain access to web-based content via a browser that has been
specifically designed to accommodate the strengths and weaknesses of the
mobile platform.
A mobile app, in contrast, can gain direct access to the hardware characteristics
of the device (e.g., the accelerometer or GPS location) and then provide local
processing and storage capabilities.
10) Cloud Computing
Cloud computing encompasses an infrastructure or “ecosystem” that enables any user,
anywhere, to use a computing device to share computing resources on a broad scale.
The figure shows the usual layering of these resources:
Infrastructure: compute, block storage, network
Platform: object storage, identity, runtime, queue, database
Applications: monitoring, content, collaboration, communication, finance

Referring to the figure, computing devices reside outside the cloud and have
access to a variety of resources within the cloud. These resources encompass
applications, platforms, and infrastructure. In its simplest form, an external
computing device accesses the cloud (Amazon's is one example) via a web browser
or analogous software. The cloud provides access to data that resides in
databases and other data structures.
In addition, devices can access executable applications that can be used in place
of apps that reside on the computing device. An app is designed for a single
purpose and performs a single function, whereas an application is designed to
perform a variety of functions (like Yahoo).
The implementation of cloud computing requires the development of an
architecture that encompasses front-end and back-end services.
The front end includes the client (user) device and the application software
(e.g., a browser) that allows the back end to be accessed.
The back end includes servers and related computing resources, storage systems
(e.g., databases), server-resident applications, and administrative servers that
use middleware to coordinate and monitor traffic by establishing a set of
protocols for access to the cloud and its resident resources.
The cloud architecture can be segmented to provide access at a variety of
levels, from full public access to private cloud architectures accessible only
to those with authorization.
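A minimal sketch of this front-end/back-end split (the resource names, the token, and the in-process "protocol" below are invented; a real cloud uses network protocols and full identity services):

```python
# A toy model of the front-end / back-end architecture described above.
# The middleware enforces a simple access protocol: private resources
# require a token. Everything here is hypothetical.

BACKEND_RESOURCES = {
    "public/docs": "getting-started guide",
    "private/db": "customer records",
}

def middleware(request, token=None):
    """Back-end middleware: checks the access protocol for each request."""
    if request.startswith("private/") and token != "secret-token":
        return "403 Forbidden"
    return BACKEND_RESOURCES.get(request, "404 Not Found")

def front_end(resource, token=None):
    """Front-end client (standing in for a browser) calling the back end."""
    return middleware(resource, token)

print(front_end("public/docs"))                       # public access
print(front_end("private/db"))                        # denied
print(front_end("private/db", token="secret-token"))  # authorized
```

The point of the sketch is the segmentation: the same back end serves both the public and the private level, and the middleware decides which requests pass.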

Software Myths: (Wrong thinking)
Software myths are beliefs about software and the process used to build it; they can be
traced to the earliest days of computing. Myths have a number of attributes that make them
insidious (harmful in effect). They can also be described as misleading attitudes that have
caused serious problems for managers and technical people.
The development of software requires dedication and understanding on the developers'
part. Many software problems arise from myths formed during the initial stages of software
development. Software myths propagate false beliefs and confusion in the minds of
management, users, and developers.
Management Myths:

Managers with software responsibility, like managers in most disciplines, are often
under pressure to maintain budgets, keep schedules, and improve quality. A software
manager often grasps at belief in a software myth.
Myth: We already have a book that's full of standards and procedures for building
software. Won't that provide my people with everything they need to know?
Reality:

• The book of standards may very well exist, but is it used?
• Are software practitioners aware of its existence?
• Does it reflect modern software engineering practice?
• Is it complete?
• Is it adaptable?
• Does it improve time to delivery while still maintaining a focus on quality?
In many cases, the answer to all these questions is no.
Myth: If we get behind schedule, we can add more programmers and catch up

Reality: Software development is not a mechanical process like manufacturing.
“Adding people to a late software project makes it later.” As new people are
added, the people already working must spend time educating the newcomers,
which reduces the time spent on productive development effort.
People can be added, but only in a planned and well-coordinated manner.
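One way to see why: the number of pairwise communication paths in a team grows quadratically with team size, so each newcomer adds coordination overhead for everyone. A quick check (team sizes chosen for illustration):

```python
# Pairwise communication paths in a team of n people: n * (n - 1) / 2.
# Adding people adds paths much faster than it adds hands.

def communication_paths(n):
    return n * (n - 1) // 2

for n in (3, 6, 12):
    print(n, communication_paths(n))
# Going from 6 to 12 people raises the path count from 15 to 66,
# more than quadrupling the coordination overhead for 2x the headcount.
```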

Myth: If we decide to outsource the software project to a third party, I can just relax
and let that firm build it.
Reality: If an organization does not understand how to manage and control software
projects internally, it will invariably struggle when it outsources them.
Customer myths:

A customer who requests computer software may be a person at the next desk, a
technical group, the marketing/sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct the
misinformation. Myths lead to false expectations and, ultimately, dissatisfaction
with the developers.

Myth: A general statement of objectives is sufficient to begin writing programs;
we can fill in the details later.
Reality: An ambiguous statement of objectives is a recipe for disaster.
Unambiguous requirements are developed through effective and continuous
communication between customer and developer.
Myth: Project requirements continually change, but change can be easily
accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of a change
varies with the time at which it is introduced. When requirement changes are
requested early, the cost impact is relatively small. As time passes, however,
the cost impact grows rapidly: resources have been committed, a design
framework has been established, and change can cause disorder that
requires additional resources and major design modification.
Developer Myths

In the early days of software development, programming was viewed as an art; software
development has since gradually become an engineering discipline. Developers, however,
still believe in some myths.
Myth: Once we write the program and get it to work, our job is done.
Reality:
• As experts have observed, “the sooner you begin writing code, the longer
it will take you to get done.”
• Industry data indicate that between 60 and 80 percent of all effort expended
on software is expended after it is delivered to the customer for the first
time.
Myth: Until I get the program "running" I have no way of assessing its quality.
Reality:
• One of the most effective software quality assurance mechanisms can be
applied from the beginning of a project—the formal technical review.

• Software reviews are a "quality filter" that have been found to be more
effective than testing for finding certain classes of software defects.
Myth: The only deliverable work product for a successful project is the working
program.
Reality:
• A working program is only one part of a software configuration that includes
many elements.
• A variety of work products (e.g. Documents, Models, Plans) provides a
foundation for successful engineering and, more important, guidance for
software support.

Myth: Software engineering will make us create huge and unnecessary
documentation and will always slow us down.
Reality:
• Software engineering is not about creating documents; it is about creating
a quality product.
• Better quality leads to reduced rework, and reduced rework results in
faster delivery times.

Role Of Management in Software Development

Management is very important whenever we work on anything, especially in cases when


we are working in a team and the number of co-workers is huge. If we talk specifically
about the software development process, then the main aim of software engineering is
to define a procedure that is applicable to all the software that needs to be developed,
through which we can successfully carry our project through to its deployment stage,
and which ensures that the final product we get is an efficient one. In short, software
engineering directly or indirectly deals with the management part of software
development too.
Factors upon which the Role of Management in Software Development depends

1) People
Of course, the management has to deal with people at every stage of the software
development process. From the ideation phase to the final deployment phase, including
the development and testing phases in between, people are involved in everything,
whether they be the customers or the developers, the designers or the salesmen.
Hence, how they contact and communicate with each other must be managed so that
all the required information is successfully delivered to the relevant person and there is
no communication gap between the customers and the service providers.
2) Project
From the ideation phase to the deployment phase, we term the process as a project.
Many people work together on a project to build a final product that can be delivered to
the customer as per their needs or demands. So, the entire process that goes on while
working on the project must be managed properly so that we can get a worthy result
after completing the project, and also so that the project can be completed on time
without any delay.
3) Process
Every process that takes place while developing the software, or we can say while
working on the project must be managed properly and separately. For example, there
are various phases in a software development process, and every phase has its own
process: the designing process is different from the coding process, and similarly, the
coding process is different from the testing process. Hence, each process is managed
according to its needs, and each needs to be taken special care of.
4) Product
Even after the development process is completed and we reach our final product, it still
needs to be delivered to its customers. Hence the entire process needs a separate
management team, like the sales department.
SOFTWARE DEVELOPMENT LIFE CYCLE MODEL
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities
required to make a software product transit through its life cycle phases. It also captures
the order in which these activities are to be undertaken. In other words, a life cycle model
maps the different activities performed on a software product from its beginning to end.
Different life cycle models may map the basic development activities to phases in different
ways.
THE NEED FOR A SOFTWARE LIFE CYCLE MODEL

The development team must identify a suitable life cycle model for the particular project
and then adhere to it. Without using a particular life cycle model, the development
of a software product would not proceed in a systematic and disciplined manner. When
a software product is being developed by a team, there must be a clear understanding
among team members about when and what to do. Otherwise, it would lead to project
failure.
This problem can be illustrated with an example. Suppose a software development
problem is divided into several parts and the parts are assigned to the team members.
From then on, suppose the team members are allowed the freedom to develop the parts
assigned to them in whatever way they like. It is possible that one member might start
writing the code for his part, another might decide to prepare the test documents first, and
some other engineer might begin with the design phase of the parts assigned to him. This
would be a perfect recipe for project failure.

A software life cycle model defines entry and exit criteria for every phase. A phase can
start only if its phase-entry criteria have been satisfied. So, without a software life cycle
model, the entry and exit criteria for a phase cannot be recognized. Without software life
cycle models, it also becomes difficult for software project managers to monitor the
progress of the project.
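The notion of phase entry and exit criteria can be made concrete with a small sketch. The phases and criteria below are purely illustrative, not taken from any standard:

```python
# Minimal sketch of phase entry criteria; all phase names and criteria are hypothetical.
phases = [
    {"name": "Requirements", "entry": "feasibility report approved",
     "exit": "SRS document signed off"},
    {"name": "Design", "entry": "SRS document signed off",
     "exit": "design document reviewed"},
    {"name": "Coding", "entry": "design document reviewed",
     "exit": "all modules unit tested"},
]

def can_start(phase, completed_artifacts):
    """A phase may start only if its entry criterion has been satisfied."""
    return phase["entry"] in completed_artifacts

done = {"feasibility report approved"}
assert can_start(phases[0], done)       # Requirements may begin
assert not can_start(phases[1], done)   # Design must wait for the SRS sign-off
```

Such an explicit mapping is what lets a project manager check, at any moment, which phase the project is legitimately in.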
Different software life cycle models

Many life cycle models have been proposed so far. Each of them has some advantages
as well as some disadvantages. A few important and commonly used life cycle models
are as follows:
1) Classical Waterfall Model

The name waterfall is borrowed from the idea that water flows downward from a hill. It
is the earliest and simplest process model, introduced by Winston Royce in 1970. It is
also referred to as a linear-sequential life cycle model. As the flow of control is top-down
in the waterfall model, one development stage should be completed before the next
begins. Once a phase is complete, we cannot go back to the previous phase.
The classical waterfall model is intuitively the most obvious way to develop software.
Though the classical waterfall model is elegant and intuitively obvious, it is not a practical
model in the sense that it cannot be used in actual software development projects. Thus,
this model can be considered to be a theoretical way of developing software. But all
other life cycle models are essentially derived from the classical waterfall model. So, in
order to be able to appreciate other life cycle models it is necessary to understand the
classical waterfall model. The classical waterfall model divides the life cycle into the
following phases, as shown in the figure.
Feasibility study: The main aim of the feasibility study is to determine whether it would
be financially and technically feasible to develop the product.
• At first project managers or team leaders try to have a rough understanding of what is
required to be done by visiting the client side. They study different input data to the
system and output data to be produced by the system. They study what kind of
processing is needed to be done on these data and they look at the various constraints
on the behavior of the system.
• After they have an overall understanding of the problem, they investigate the different
solutions that are possible. Then they examine each of the solutions in terms of what
kind of resources are required, what the cost of development would be, and what the
development time for each solution would be.
• Based on this analysis they pick the best solution and determine whether the solution
is feasible financially and technically. They check whether the customer budget would
meet the cost of the product and whether they have sufficient technical expertise in
the area of development.
Requirements analysis and specification: The aim of the requirements analysis and
specification phase is to understand the exact requirements of the customer and to
document them properly. This phase consists of two distinct activities, namely:

• Requirements gathering and analysis
• Requirements specification
The goal of the requirements gathering activity is to collect all relevant information from
the customer regarding the product to be developed. This is done to clearly understand
the customer requirements so that incompleteness and inconsistencies are removed. The
requirements analysis activity is begun by collecting all relevant data regarding the
product to be developed from the users of the product and from the customer through
interviews and discussions. For example, to perform the requirements analysis of a
business accounting software required by an organization, the analyst might interview all
the accountants of the organization to ascertain their requirements. The data collected
from such a group of users usually contain several contradictions and ambiguities, since
each user typically has only a partial and incomplete view of the system. Therefore, it is
necessary to identify all ambiguities and contradictions in the requirements and resolve
them through further discussions with the customer. After all ambiguities, inconsistencies,
and incompleteness have been resolved and all the requirements properly understood,
the requirements specification activity can start. During this activity, the user requirements
are systematically organized into a Software Requirements Specification (SRS)
document. The customer requirements identified during the requirements gathering and
analysis activity are organized into an SRS document. The important components of this
document are functional requirements, the nonfunctional requirements, and the goals of
implementation.
Design: The goal of the design phase is to transform the requirements specified in the
SRS document into a structure that is suitable for implementation in some programming
language. In technical terms, during the design phase the software architecture is derived
from the SRS document. Two distinctly different approaches are available: the traditional
design approach and the object-oriented design approach.

• Traditional design approach - Traditional design consists of two different activities:
first, a structured analysis of the requirements specification is carried out, where the
detailed structure of the problem is examined. This is followed by a structured design
activity. During structured design, the results of structured analysis are transformed
into the software design.
• Object-oriented design approach - In this technique, various objects that occur in the
problem domain and the solution domain are first identified, and the different
relationships that exist among these objects are identified. The object structure is
further refined to obtain the detailed design.
Coding and unit testing: The purpose of the coding phase (sometimes called the
implementation phase) of software development is to translate the software design into
source code. Each component of the design is implemented as a program module. The
end-product of this phase is a set of program modules that have been individually tested.
During this phase, each module is unit tested to determine the correct working of all the
individual modules. It involves testing each module in isolation as this is the most efficient
way to debug the errors identified at this stage.
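As a minimal illustration of unit testing a module in isolation, a hypothetical module and test cases (all names invented here) might look like this using Python's built-in unittest framework:

```python
import unittest

# Hypothetical program module produced during the coding phase.
def compute_grade(marks):
    """Map a percentage score (0-100) to a letter grade."""
    if not 0 <= marks <= 100:
        raise ValueError("marks must be between 0 and 100")
    if marks >= 80:
        return "A"
    if marks >= 60:
        return "B"
    return "C"

class TestComputeGrade(unittest.TestCase):
    """Unit tests exercise the module in isolation from the rest of the system."""

    def test_grade_boundaries(self):
        self.assertEqual(compute_grade(80), "A")
        self.assertEqual(compute_grade(60), "B")
        self.assertEqual(compute_grade(59), "C")

    def test_invalid_marks_are_rejected(self):
        with self.assertRaises(ValueError):
            compute_grade(101)

# Run the suite programmatically (avoids the sys.exit call made by unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestComputeGrade)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Because the module is tested alone, any failure points directly at this module rather than at an interaction with the rest of the system, which is why debugging at this stage is cheapest.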
Integration and system testing: Integration of different modules is undertaken once
they have been coded and unit tested. During the integration and system testing phase,
the modules are integrated in a planned manner. The different modules making up a
software product are almost never integrated in one shot. Integration is normally carried
out incrementally over a number of steps. During each integration step, the partially
integrated system is tested and a set of previously planned modules are added to it.
Finally, when all the modules have been successfully integrated and tested, system
testing is carried out. The goal of system testing is to ensure that the developed system
conforms to its requirements laid out in the SRS document. System testing usually
consists of three different kinds of testing activities:
• α-testing: It is the system testing performed by the development team.
• β-testing: It is the system testing performed by a friendly set of customers.
• Acceptance testing: It is the system testing performed by the customer himself after
the product delivery to determine whether to accept or reject the delivered product.
System testing is normally carried out in a planned manner according to the system
test plan document. The system test plan identifies all testing-related activities that
must be performed, specifies the schedule of testing, and allocates resources. It also
lists all the test cases and the expected outputs for each test case.
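A system test plan's pairing of test cases with expected outputs can be sketched as data. The system and the cases below are hypothetical stand-ins:

```python
# Minimal sketch of a system test plan as data; the system and all cases are hypothetical.
def system_under_test(x, y):
    """Stand-in for the fully integrated system: here it simply adds two numbers."""
    return x + y

test_plan = [
    {"case": "TC-01", "inputs": (2, 3), "expected": 5},
    {"case": "TC-02", "inputs": (-1, 1), "expected": 0},
]

def run_system_tests(plan):
    """Execute every planned test case and report pass/fail per case."""
    return {tc["case"]: system_under_test(*tc["inputs"]) == tc["expected"]
            for tc in plan}

results = run_system_tests(test_plan)
assert all(results.values())  # every planned case conforms to its expected output
```

Keeping the expected output next to each case is what makes system testing checkable against the SRS rather than a matter of opinion.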
Maintenance: Maintaining a typical software product requires much more effort than the
effort necessary to develop the product itself. Many studies carried out in the past confirm
this and indicate that the ratio of the development effort of a typical software product to
its maintenance effort is roughly 40:60. Maintenance involves performing one or more of
the following three kinds of activities:
• Correcting errors that were not discovered during the product development phase.
This is called corrective maintenance.
• Improving the implementation of the system, and enhancing the functionalities of the
system according to the customer’s requirements. This is called perfective
maintenance.
• Porting the software to work in a new environment. For example, porting may be
required to get the software to work on a new computer platform or with a new
operating system. This is called adaptive maintenance.
Shortcomings of the Classical Waterfall Model
The classical waterfall model is an idealistic one since it assumes that no development
error is ever committed by the engineers during any of the life cycle phases. However, in
practical development environments, the engineers do commit a large number of errors
in almost every phase of the life cycle. The source of the defects can be many: oversight,
wrong assumptions, use of inappropriate technology, communication gap among the
project engineers, etc. These defects usually get detected much later in the life cycle. For
example, a design defect might go unnoticed till we reach the coding or testing phase.
Once a defect is detected, the engineers need to go back to the phase where the defect
had occurred and redo some of the work done during that phase and the subsequent
phases to correct the defect and its effect on the later phases. Therefore, in any practical
software development work, it is not possible to strictly follow the classical waterfall
model.
2) Iterative Waterfall Model or Modified Waterfall Model
One of the drawbacks of the strict waterfall model is that water cannot flow upwards:
once a phase is complete, we cannot go back to a previous phase. If a problem is found
at a particular stage in development, there is no way of redoing an earlier stage in order
to rectify it. For example, testing usually finds errors committed in the coding stage, but
in the strict waterfall approach, the coding cannot be corrected.
To overcome this obvious drawback, a variation of the waterfall model provides for
feedback between adjoining stages, so that a problem uncovered at one stage can cause
remedial action to be taken at the previous stages; this is the main difference from the
classical waterfall model. When errors are detected at some later phase, these feedback
paths allow the errors committed during an earlier phase to be corrected.
3) Incremental Process Models

The incremental process model is also known as the Successive Version Model.
Incremental process models are effective in situations where requirements are defined
precisely and there is no confusion about the functionality of the final product, although
the functionality can be delivered in phases. After every cycle, a usable product is given
to the customer. For example, in university automation software, the library automation
module may be delivered in the first phase, the examination automation module in the
second phase, and so on. During the implementation phase, the project is divided into
small subsets known as increments that are implemented individually. This model
comprises several phases where each phase produces an increment. These increments
are identified at the beginning of the development process, and the entire process from
requirements gathering to delivery of the product is carried out for each increment.
Characteristics of an incremental model include:

• System development is broken down into many mini development projects
• Partial systems are successively built to produce a final total system
• The highest priority requirement is tackled first
• Once an increment is developed, the requirements for that increment are frozen
Advantages of the Incremental Model

• The software is generated quickly during the software life cycle
• It is flexible and less expensive to change requirements and scope
• Changes can be made throughout the development stages
• This model is less costly compared to others
• The customer can respond to each build
• Errors are easy to identify
4) Evolutionary Process Models

The evolutionary model is also referred to as the successive versions model and
sometimes as the incremental model. In the evolutionary model, the software requirement
is first broken down into several modules (or functional units) that can be incrementally
constructed and delivered.

The development team first develops the core modules of the system. The core modules
are those that do not need services from the other modules. The initial product skeleton
is refined into increasing levels of capability by adding new functionalities in successive
versions. Each evolutionary version may be developed using an iterative waterfall model
of development.
The evolutionary model is shown in the figure above. Each successive version of the
product is fully functioning software capable of performing more work than the previous
version.
The evolutionary model is normally useful for very large products, where it is easier to
find modules for incremental implementation.
Often, the evolutionary model is used when the customer prefers to receive the product in
increments so that he can start using the different features as and when they are
developed rather than waiting all the time for the full product to be developed and
delivered.
Advantages of the Evolutionary Model
• Large project: Evolutionary model is normally useful for very large products.
• User gets a chance to experiment with a partially developed software much
before the complete version of the system is released.
• Evolutionary model helps to accurately elicit user requirements during the
delivery of different versions of the software.
• The core modules get tested thoroughly, thereby reducing the chances of
errors in the core modules of the final products.
Disadvantages of the Evolutionary Model
• It is difficult to divide the problem into several versions that would be acceptable
to the customer and that can be incrementally implemented and delivered.
There are two common evolutionary process models:

1) Prototype Model
2) Spiral Model
1) Prototype Model

The Prototype Model is a software development model in which a prototype is built,
tested, and reworked until an acceptable prototype is achieved. It also creates a base
for producing the final system or software. It works best in scenarios where the project’s
requirements are not known in detail. It is an iterative, trial-and-error method that takes
place between the developer and the client.

The Prototyping Model has the following six SDLC phases:
Step 1: Requirements gathering and analysis

A prototyping model starts with requirements analysis. In this phase, the requirements
of the system are defined in detail. During the process, the users of the system are
interviewed to learn what their expectations from the system are.
Step 2: Quick design

The second phase is a preliminary design or a quick design. In this stage, a simple
design of the system is created. However, it is not a complete design. It gives a brief
idea of the system to the user. The quick design helps in developing the prototype.
Step 3: Build a Prototype

In this phase, an actual prototype is designed based on the information gathered from
the quick design. It is a small working model of the required system.
Step 4: Initial user evaluation

In this stage, the proposed system is presented to the client for an initial evaluation. It
helps to find out the strengths and weaknesses of the working model. Comments and
suggestions are collected from the customer and provided to the developer.
Step 5: Refining prototype

If the user is not happy with the current prototype, you need to refine the prototype
according to the user’s feedback and suggestions.

This phase is not over until all the requirements specified by the user are met. Once
the user is satisfied with the developed prototype, a final system is developed based
on the approved final prototype.
Step 6: Implement Product and Maintain

Once the final system is developed based on the final prototype, it is thoroughly tested
and deployed to production. The system then undergoes routine maintenance to
minimize downtime and prevent large-scale failures.
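The build-evaluate-refine cycle of steps 3 to 5 can be sketched as a loop. The functions below are illustrative stand-ins for building a prototype and collecting user feedback, not a real implementation:

```python
# Illustrative sketch of the prototyping loop (steps 3-5); all names are hypothetical.
def build_prototype(requirements):
    """Build a small working model covering the currently known requirements."""
    return {"features": sorted(requirements)}

def user_evaluation(prototype, expected_features):
    """Return the features the user found missing during evaluation."""
    return set(expected_features) - set(prototype["features"])

requirements = {"login"}                         # what we think the user wants
expected = {"login", "search", "checkout"}       # what the user actually wants

# Refine the prototype until the user reports nothing missing.
while True:
    prototype = build_prototype(requirements)
    missing = user_evaluation(prototype, expected)
    if not missing:
        break            # user is satisfied; build the final system from this prototype
    requirements |= missing  # incorporate feedback and rework the prototype

assert set(prototype["features"]) == expected
```

The loop makes the model's key property visible: requirements are discovered through evaluation rounds rather than fixed up front.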
Advantages of the Prototyping Model

Here are the important pros/benefits of using the Prototyping Model:
• Users are actively involved in development. Therefore, errors can be detected in
the initial stage of the software development process.
• Missing functionality can be identified, which helps to reduce the risk of failure
as Prototyping is also considered as a risk reduction activity.
• Helps team members communicate effectively
• Customer satisfaction exists because the customer can feel the product at a very
early stage.
• There will be hardly any chance of software rejection.
• Quicker user feedback helps you to achieve better software development
solutions.
• Allows the client to compare if the software code matches the software
specification.
• It helps you to find out the missing functionality in the system.
• It also identifies the complex or difficult functions.
• Encourages innovation and flexible designing.
• It is a straightforward model, so it is easy to understand.
• No need for specialized experts to build the model
• The prototype serves as a basis for deriving a system specification.
• The prototype helps to gain a better understanding of the customer’s needs.
Disadvantages of the Prototyping Model

Here are the important cons/drawbacks of the Prototyping Model:
• Client involvement is greater, and it is not always taken into account by the developer.
• Prototyping is a slow and time-consuming process.
• The cost of developing a prototype can be a waste, as the prototype is ultimately
thrown away.
• Prototyping may encourage excessive change requests.
• There may be far too many variations in software requirements each time the
prototype is evaluated by the customer.
• Documentation tends to be poor because the customers’ requirements keep changing.
• It is very difficult for software developers to accommodate all the changes
demanded by the clients.
• After seeing an early prototype, the customers may think that the actual product
will be delivered to them soon.
2) Spiral Model
Originally proposed by Barry Boehm, the spiral model is an evolutionary software
process model that couples the iterative nature of prototyping with the controlled and
systematic aspects of the waterfall model. It provides the potential for rapid
development of increasingly more complete versions of the software. During the early
iterations, the additional release may be a paper model or prototype. During later
iterations, more and more complete versions of the engineered system are produced.
Spiral model is one of the most important Software Development Life Cycle models,
which provides support for Risk Handling. In its diagrammatic representation, it looks
like a spiral with many loops. The exact number of loops of the spiral is unknown and
can vary from project to project. Each loop of the spiral is called a Phase of the
software development process. The exact number of phases needed to develop the
product can be varied by the project manager depending upon the project risks. Since
the project manager dynamically determines the number of phases, the project
manager has an important role in developing a product using the spiral model.
When looking at a diagram of a spiral model, the radius of the spiral represents the
cost of the project and the angular degree represents the progress made in the
current phase. Each phase begins with a goal for the design and ends when the
developer or client reviews the progress.
To explain in simpler terms, the steps involved in the spiral model are as follows.

Spiral Model Phases
It has four stages or phases: planning of objectives, risk analysis, engineering or
development, and finally review. A project passes through all these stages repeatedly,
and each full pass through them is known as a spiral (or loop) in the model.
1. Determine objectives and find alternate solutions – This phase includes
requirement gathering and analysis. Based on the requirements, objectives are
defined and different alternate solutions are proposed.
2. Risk Analysis and resolving – In this quadrant, all the proposed solutions are
analysed and any potential risk is identified, analysed, and resolved. Risk analysis
should be performed on all possible solutions in order to find any faults or
vulnerabilities, such as running over the budget or areas within the software that
could be open to cyber-attacks. Each risk should then be resolved using the most
efficient strategy.
3. Develop and test: This phase includes the actual implementation of the different
features. All the implemented features are then verified with thorough testing.
4. Review and planning of the next phase – In this phase, the software is evaluated
by the customer. It also includes risk identification and monitoring, such as cost
overrun or schedule slippage; after that, planning of the next phase is started.
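The four quadrants above can be sketched as one loop body. Every function here is a toy stand-in, named only for illustration:

```python
# Toy sketch of one spiral loop; every function below is a hypothetical stub.
def determine_objectives(requirements):
    """Quadrant 1: define objectives and propose alternative solutions."""
    return [{"solution": name, "meets": requirements} for name in ("A", "B")]

def resolve_risks(alternatives):
    """Quadrant 2: analyse the risk of each alternative and pick the safest (toy rule)."""
    return min(alternatives, key=lambda alt: alt["solution"])

def develop_and_test(chosen):
    """Quadrant 3: implement the chosen solution and verify it with testing."""
    return {"increment": chosen["solution"], "tested": True}

def review_and_plan(product):
    """Quadrant 4: customer evaluation and planning of the next loop."""
    return product["tested"]

# One loop of the spiral = one pass through all four quadrants.
requirements = ["login"]
approved = review_and_plan(develop_and_test(resolve_risks(determine_objectives(requirements))))
assert approved
```

Repeating this loop, with growing requirements and shrinking risk each time, is exactly what the widening spiral in the diagram depicts.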
The Spiral Model is also called a Meta-Model.
The Spiral model is called a Meta-Model because it subsumes all the other SDLC models.
For example, a single loop spiral actually represents the Iterative Waterfall Model. The
spiral model incorporates the stepwise approach of the Classical Waterfall Model. The
spiral model uses the approach of the Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique. Also, the spiral model can be considered
as supporting the Evolutionary model – the iterations along the spiral can be considered
as evolutionary levels through which the complete system is built.
Spiral Model Advantages
1. The spiral model is perfect for projects that are large and complex in nature as
continuous prototyping and evaluation help in mitigating any risk.
2. Because of its risk handling ability, the model is best suited for projects which are
very critical like software related to the health domain, space exploration, etc.
3. This model supports the client feedback and implementation of change
requests (CRs) which is not possible in conventional models like a waterfall.
4. Since the customer gets to see a prototype in each phase, there are higher chances
of customer satisfaction.
Spiral Model Disadvantages

1. Because of the prototype development and risk analysis in each phase, it is
very expensive and time-consuming.
2. It is not suitable for simpler and smaller projects because of the multiple phases.
3. It requires more documentation as compared to other models.
4. Project deadlines can be missed since the number of phases is unknown in the
beginning and frequent prototyping and risk analysis can make things worse.
5) Unified Process Model
Unified Process (UP) is an architecture-centric, use-case driven, iterative and incremental
development process. UP is also referred to as the Unified Software Development
Process.
Architecture-Centric Approach
Using this approach, you would be creating a blueprint of the organization of the
software system. It would include taking into account the different technologies,
programming languages, operating systems, development and release environments,
server capabilities, and other such areas for developing the software.
Use-Case Driven Approach
A use-case defines the interaction between two or more entities. The list of
requirements specified by a customer is converted to functional requirements by a
business analyst; these are generally referred to as use-cases. A use-case describes
the operation of a software system as interactions between the customer and the
system, resulting in a specific output or a measurable return. For example, an online
cake shop can be specified in terms of use cases such as 'add cake to cart', 'change
the quantity of added cakes in cart', 'cake order checkout' and so on. Each use case
represents a significant functionality and could be considered for an iteration.
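The cake-shop use cases mentioned above could be sketched as operations on a simple cart class (class and method names are invented for illustration; this is not a prescribed design):

```python
# Illustrative sketch of the cake-shop use cases; all names are hypothetical.
class Cart:
    def __init__(self):
        self.items = {}  # cake name -> quantity

    def add_cake(self, name, qty=1):
        """Use case: 'add cake to cart'."""
        self.items[name] = self.items.get(name, 0) + qty

    def change_quantity(self, name, qty):
        """Use case: 'change the quantity of added cakes in cart'."""
        if qty <= 0:
            self.items.pop(name, None)  # removing everything drops the line item
        else:
            self.items[name] = qty

    def checkout(self, prices):
        """Use case: 'cake order checkout'; the total is the measurable return."""
        return sum(prices[name] * qty for name, qty in self.items.items())

cart = Cart()
cart.add_cake("chocolate")            # actor adds a cake
cart.change_quantity("chocolate", 3)  # actor updates the quantity
total = cart.checkout({"chocolate": 5})
assert total == 15
```

Each method corresponds to one use case, which is what makes a use case a natural unit of work for a single iteration.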
Iterative and Incremental Approach
Using an iterative and incremental approach means treating each iteration as a mini-
project. Therefore, you would develop the software as a number of small mini-projects,
working in cycles. You would develop small working versions of the software at the end
of each cycle. Each iteration would add some functionality to the software according to
the requirements specified by the customer.
The Unified Process is an attempt to draw on the best features and characteristics of
traditional software process models, but characterize them in a way that implements
many of the best principles of agile (ability to move with quick, easy grace) software
development. The Unified Process recognizes the importance of customer
communication and streamlined methods for describing the customer’s view of a system.
It emphasizes the important role of software architecture and “helps the architect focus
on the right goals, such as understandability, support to future changes, and reuse”. It
suggests a process flow that is iterative and incremental, providing the evolutionary feel
that is essential in modern software development.
A Brief History
During the early 1990s James Rumbaugh, Grady Booch, and Ivar Jacobson began
working on a “unified method” that would combine the best features of each of their
individual object-oriented analysis and design methods and adopt additional features
proposed by other experts in object-oriented modelling. The result was UML—a unified
modelling language that contains a robust notation for the modelling and development of
object-oriented systems. They developed the Unified Process, a framework for object-
oriented software engineering using UML.
Phases of the Unified Process

This process divides the development process into five phases:

• Inception
• Elaboration
• Construction
• Transition
• Production
Inception Phase
The inception phase of the UP encompasses both customer communication and planning
activities. By collaborating with stakeholders, business requirements for the software are
identified; a rough architecture for the system is proposed; and a plan for the iterative,
incremental nature of the ensuing project is developed.
The following are typical goals for the Inception phase:

o Establish a justification or business case for the project
o Establish the project scope and boundary conditions
o Outline the use cases and key requirements that will drive the design tradeoffs
o Outline one or more candidate architectures
o Identify risks
o Prepare a preliminary project schedule and cost estimate

The Lifecycle Objective Milestone marks the end of the Inception phase.
Elaboration Phase
The elaboration phase encompasses the communication and modelling activities of the
generic process model. Elaboration refines and expands the preliminary use cases
that were developed as part of the inception phase and expands the architectural
representation to include five different views of the software—the use case model, the
requirements model, the design model, the implementation model, and the deployment
model. Elaboration creates an “executable architectural baseline” that represents a “first
cut” executable system.
Construction Phase
The construction phase of the UP is identical to the construction activity defined for the
generic software process. Using the architectural model as input, the construction phase
develops or acquires the software components that will make each use case
operational for end users. To accomplish this, requirements and design models that
were started during the elaboration phase are completed to reflect the final version of the
software increment. All necessary and required features and functions for the software
increment (i.e., the release) are then implemented in source code.

Transition Phase

The transition phase of the UP encompasses the latter stages of the generic construction
activity and the first part of the generic deployment (delivery and feedback) activity.
Software is given to end users for beta testing, and user feedback reports both
defects and necessary changes. At the conclusion of the transition phase, the software
increment becomes a usable software release.

Production Phase

The production phase of the UP coincides with the deployment activity of the generic
process. During this phase, the ongoing use of the software is monitored, support
for the operating environment (infrastructure) is provided, and defect reports and
requests for changes are submitted and evaluated. It is likely that at the same time
the construction, transition, and production phases are being conducted, work may have
already begun on the next software increment. This means that the five UP phases do
not occur in a sequence, but rather with staggered concurrency.

UP has the following major characteristics:

• It is use-case driven

• It is architecture-centric

• It is risk focused

• It is iterative and incremental

Comparison of Different Life Cycle Models

Classical Waterfall Model: The Classical Waterfall model can be considered as the
basic model and all other life cycle models are based on this model. It is an ideal model.
However, the Classical Waterfall model cannot be used in practical project development,
since this model does not support any mechanism to correct the errors that are committed
during any of the phases but detected at a later phase. This problem is overcome by the
Iterative Waterfall model through the inclusion of feedback paths.

Iterative Waterfall Model: The Iterative Waterfall model is probably the most used
software development model. This model is simple to use and understand. But this model
is suitable only for well-understood problems and is not suitable for the development of
very large projects and projects that suffer from a large number of risks.

Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model
is widely used in object-oriented development projects. This model is only used if
incremental delivery of the system is acceptable to the customer.

Prototyping Model: The Prototyping model is suitable for projects in which either the
customer requirements or the technical solutions are not well understood. These risks
must be identified before the project starts. This model is especially popular for the
development of the user interface part of the project.

Spiral Model: The Spiral model is considered a meta-model as it includes all other life
cycle models. Flexibility and risk handling are the main characteristics of this model. The
spiral model is suitable for the development of technically challenging and large software
that is prone to various risks that are difficult to anticipate at the start of the project. But
this model is more complex than the other models.

Unified Process Model: Unified process (UP) is an architecture centric, use case driven,
iterative and incremental development process. This process divides the development
process into inception, elaboration, construction, transition and production phases. The
Unified Process insists that architecture sit at the heart of the project team's efforts to
shape the system. The Unified Process requires the project team to focus on addressing
the most critical risks early in the project life cycle.

Selection Of Appropriate Life Cycle Model for A Project

Selecting a proper life cycle model to complete a project is an important task. The model
can be selected by keeping the advantages and disadvantages of the various models in mind.
The different issues that are analysed before selecting a suitable life cycle model are
given below:

• Characteristics of the software to be developed: The choice of the life cycle model
largely depends on the type of the software that is being developed. For small services
projects, the agile model is favored. On the other hand, for product and embedded
development, the Iterative Waterfall model can be preferred. The evolutionary model
is suitable to develop an object-oriented project. User interface part of the project is
mainly developed through prototyping model.

• Characteristics of the development team: The team members’ skill level is an important
factor in deciding which life cycle model to use. If the development team is experienced
in developing similar software, then even an embedded software can be developed
using the Iterative Waterfall model. If the development team is entirely novice, then
even a simple data processing application may require a prototyping model.

• Risk associated with the project: If the risks are few and can be anticipated at the
start of the project, then prototyping model is useful. If the risks are difficult to
determine at the beginning of the project but are likely to increase as the development
proceeds, then the spiral model is the best model to use.

• Characteristics of the customer: If the customer is not quite familiar with computers,
then the requirements are likely to change frequently as it would be difficult to form
complete, consistent and unambiguous requirements. Thus, a prototyping model may
be necessary to reduce later change requests from the customers. Initially, the
customer’s confidence is high on the development team. During the lengthy
development process, customer confidence normally drops off as no working software
is yet visible. So, the evolutionary model is useful as the customer can experience
partially working software much earlier than the complete software. Another
advantage of the evolutionary model is that it reduces the customer’s trauma of getting
used to an entirely new system.

Unit-II

Requirements Engineering
Requirements analysis, also called requirements engineering, is the process of
determining user expectations for a new or modified product. Requirements
engineering is a major software engineering action that begins during the
communication activity and continues into the modelling activity. It must be adapted
to the needs of the process, the project, the product, and the people doing the work.
Requirements engineering builds a bridge to design and construction.

According to IEEE standard 729, a requirement is defined as follows:

• A condition or capability needed by a user to solve a problem or achieve an
objective

• A condition or capability that must be met or possessed by a system or system
component to satisfy a contract, standard, specification or other formally
imposed documents

• A documented representation of a condition or capability as in the two
definitions above

Requirements engineering provides the appropriate mechanism for understanding what
the customer wants, analysing need, assessing feasibility, negotiating a reasonable
solution, specifying the solution unambiguously, validating the specification, and
managing the requirements as they are transformed into an operational system. It
encompasses seven distinct tasks: inception, elicitation, elaboration, negotiation,
specification, validation, and management.

Inception: This task establishes a basic understanding of the problem, the people who want a
solution, the nature of the solution that is desired, and the effectiveness of preliminary
communication and collaboration between the other stakeholders and the software team.

Elicitation: In this stage, proper information is extracted to prepare to document the
requirements. It certainly seems simple enough—ask the customer, the users, and
others what the objectives for the system or product are, what is to be accomplished, how
the system or product fits into the needs of the business, and finally, how the system or
product is to be used on a day-to-day basis.

• Problems of scope: The boundary of the system is ill-defined, or the
customers/users specify unnecessary technical detail that may confuse, rather
than clarify, overall system objectives.
• Problems of understanding: The customers/users are not completely sure of
what is needed, have a poor understanding of the capabilities and limitations of
their computing environment, don’t have a full understanding of the problem
domain, have trouble communicating needs to the system engineer, omit
information that is believed to be “obvious,” specify requirements that conflict
with the needs of other customers/users, or specify requirements that are
ambiguous or untestable.
• Problems of volatility: The requirements change over time. The rate of change
is sometimes referred to as the level of requirement volatility.

Elaboration: The information obtained from the customer during inception and elicitation
is expanded and refined during elaboration. This task focuses on developing a refined
requirements model that identifies various aspects of software function, behavior, and
information. Elaboration is driven by the creation and refinement of user scenarios that
describe how the end user (and other actors) will interact with the system.

Negotiation: To negotiate the requirements of a system to be developed, it is necessary
to identify conflicts and to resolve those conflicts. You have to reconcile these
conflicts through a process of negotiation. Customers, users, and other stakeholders are
asked to rank requirements and then discuss conflicts in priority. Using an iterative
approach that prioritizes requirements, assesses their cost and risk, and addresses
internal conflicts, requirements are eliminated, combined, and/or modified so that each
party achieves some measure of satisfaction.

Specification: The term specification means different things to different people. A
specification can be a written document, a set of graphical models, a formal mathematical
model, a collection of usage scenarios, a prototype, or any combination of these.

Validation: The work products produced as a consequence of requirements engineering
are assessed for quality during a validation step. Requirements validation examines the
specification to ensure that all software requirements have been stated
specification to ensure that all software requirements have been stated
unambiguously; that inconsistencies, omissions, and errors have been detected and
corrected; and that the work products conform to the standards established for the
process, the project, and the product.

The primary requirements validation mechanism is the technical review. The review team
that validates requirements includes software engineers, customers, users, and other
stakeholders who examine the specification looking for errors in content or interpretation,
areas where clarification may be required, missing information, inconsistencies,
conflicting requirements, or unrealistic requirements.
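Part of such a review can be mechanised: scanning requirement statements for vague, untestable wording. A minimal sketch (the word list is an illustrative assumption, not taken from any standard):

```python
# Sketch: flag requirement statements that contain vague, untestable wording.
# The AMBIGUOUS word list is an illustrative assumption.
AMBIGUOUS = {"fast", "user-friendly", "flexible", "robust", "approximately"}

def review_requirement(text: str) -> list:
    """Return the ambiguous words found in one requirement statement."""
    words = {w.strip(".,").lower() for w in text.split()}
    return sorted(words & AMBIGUOUS)

findings = review_requirement("The system shall be fast and user-friendly.")
print(findings)  # ['fast', 'user-friendly']
```

A tool like this only supplements the technical review; judging whether a requirement conflicts with another still needs human reviewers.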

Requirements Management: Requirements for computer-based systems change, and
the desire to change requirements persists throughout the life of the system.
Requirements management is a set of activities that help the project team identify,
control, and track requirements and changes to requirements at any time as the
project proceeds. Many of these activities are identical to the software configuration
management (SCM) techniques.
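The identify, control, and track activities above can be supported by even a very simple record structure. A minimal sketch (field names and workflow states are illustrative assumptions):

```python
# Sketch: a minimal requirements-management record that tracks status and a
# change history, in the spirit of the identify/control/track activities above.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                   # unique identifier, e.g. "REQ-001"
    text: str                     # the requirement statement
    status: str = "proposed"      # e.g. proposed -> approved -> implemented
    history: list = field(default_factory=list)  # audit trail of changes

    def change(self, new_text: str, reason: str) -> None:
        """Record a change request instead of silently overwriting."""
        self.history.append((self.text, reason))
        self.text = new_text

r = Requirement("REQ-001", "System shall export reports as PDF.")
r.change("System shall export reports as PDF and CSV.", "customer request")
print(len(r.history))  # 1
```

Real projects use dedicated requirements-management or SCM tooling, but the underlying record keeping follows this shape.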

Types of Software Requirement

A software requirement can be of three types:

• Functional requirements
• Non-functional requirements
• Domain requirements

Functional Requirements:

These are the requirements that the end user specifically demands as basic facilities
that the system should offer. All these functionalities need to be necessarily
incorporated into the system as a part of the contract. These are represented or stated
in the form of the input to be given to the system, the operation performed, and the
output expected. They are basically the requirements stated by the user which one can
see directly in the final product.

A functional requirement defines a function of a system or its component, where a function
is described as a specification of behavior between inputs and outputs.

Functional requirements may involve calculations, technical details, data manipulation
and processing, and other specific functionality that define what a system is supposed to
accomplish. Behavioral requirements describe all the cases where the system uses the
functional requirements; these are captured in use cases.

Non-Functional Requirements
Non-functional requirement (NFR) is a requirement that specifies criteria that can be used
to judge the operation of a system, rather than specific behaviors. These are basically the
quality constraints that the system must satisfy according to the project contract. The
priority or extent to which these factors are implemented varies from one project to another.
They are also called non-behavioral requirements. The plan for implementing non-
functional requirements is detailed in the system architecture, because they are usually
architecturally significant requirements.

Three main classes of non-functional requirements:


1. Product Requirements: These are requirements directly concerning the software
system to be built. They include requirements relevant to the customer, such as
usability, efficiency, and reliability requirements, but also portability requirements,
which are more relevant to the organisation developing the software.

• Usability Requirements: Describe the ease with which users are able to
operate the software. For example, the software should be able to provide
access to functionality with fewer keystrokes and mouse clicks.
• Efficiency Requirements: Describe the extent to which the software makes
optimal use of resources, the speed with which the system executes, and the
memory it consumes for its operation. For example, the system should be
able to operate at least three times faster than the existing system.
• Reliability Requirements: Describe the acceptable failure rate of the
software. For example, the software should be able to operate even if a
hazard occurs.
• Portability Requirements: Describe the ease with which the software can
be transferred from one platform to another. For example, it should be easy
to port the software to a different operating system without the need to
redesign the entire software.

2. Process Requirements: Sometimes also called organisational requirements, these
requirements "[...] are a consequence of organisational policies and procedures." They
include requirements concerning programming language, design methodology, and
similar requirements defined by the developing organisation.

• Delivery Requirements: Specify when the software and its documentation
are to be delivered to the user.
• Implementation Requirements: Describe requirements such as
programming language and design method.
• Standards Requirements: Describe the process standards to be used during
software development. For example, the software should be developed using
standards specified by ISO and IEEE.

3. External Requirements: These requirements come neither from the customer nor
from the organisation developing the software. They include, for example,
requirements derived from legislation relevant to the field for which the software is
being produced.

• Interoperability Requirements: Define the way in which different computer-
based systems will interact with each other in one or more organizations.
• Ethical Requirements: Specify the rules and regulations of the software so that
they are acceptable to users.
• Legislative Requirements: Ensure that the software operates within the legal
jurisdiction. For example, pirated software should not be sold.

Non-functional requirements are difficult to verify. Hence, it is essential to write non-
functional requirements quantitatively, so that they can be tested.
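For instance, "search must respond within 200 ms" is quantitative and therefore directly testable. A minimal sketch (search_catalog is a hypothetical stand-in for the real function under test, and the 200 ms threshold is an assumed requirement):

```python
# Sketch: verifying a quantitative non-functional requirement.
# search_catalog is a hypothetical stand-in for the function under test.
import time

def search_catalog(term):
    return [item for item in ["pen", "pencil", "paper"] if term in item]

start = time.perf_counter()
results = search_catalog("pen")
elapsed_ms = (time.perf_counter() - start) * 1000

# NFR: "search results must be returned within 200 ms"
assert elapsed_ms < 200, f"NFR violated: took {elapsed_ms:.1f} ms"
print(results)  # ['pen', 'pencil']
```

A vague statement such as "search must be fast" admits no such check, which is exactly why quantitative wording matters.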

Domain Requirements:
Domain requirements are the requirements which are characteristic of a particular
category or domain of projects. The basic functions that a system of a specific domain
must necessarily exhibit come under this category. For instance, in academic software
that maintains the records of a school or college, the functionality of being able to access
the list of faculty and the list of students of each grade is a domain requirement. These
requirements are therefore identified from that domain model and are not user specific.

Non-Functional vs. Functional Requirements
Here are the key differences between functional and non-functional requirements in
Software Engineering:

Parameters      | Functional Requirement             | Non-Functional Requirement
--------------- | ---------------------------------- | ------------------------------------
Requirement     | It is mandatory                    | It is non-mandatory
Capturing type  | Captured in use cases              | Captured as a quality attribute
End result      | Product feature                    | Product properties
Capturing       | Easy to capture                    | Hard to capture
Objective       | Helps you verify the functionality | Helps you verify the performance
                | of the software                    | of the software
Area of focus   | Focuses on the user requirement    | Concentrates on the user's
                |                                    | expectation
Documentation   | Describes what the product does    | Describes how the product works
Type of Testing | Functional testing: system,        | Non-functional testing: performance,
                | integration, end-to-end, API, etc. | stress, usability, security, etc.
Test Execution  | Done before non-functional testing | Done after functional testing
Product Info    | Product features                   | Product properties

Feasibility Study
A feasibility study in Software Engineering is a study to evaluate the feasibility of a
proposed project or system. As the name suggests, a feasibility study is a feasibility
analysis: a measure of how beneficial the development of the software product will be
for the organization from a practical point of view. A feasibility study is carried out for
many purposes: to analyse whether the software product will be right in terms of
development, implementation, contribution of the project to the organization, etc.

Types of Feasibility Study:

The feasibility study mainly concentrates on the five areas mentioned below. Among
these, the Economic Feasibility Study is the most important part of the feasibility
analysis, and the Legal Feasibility Study is the least considered.

Technical Feasibility:

In Technical Feasibility, the current resources (both hardware and software) along with
the required technology are analysed/assessed to develop the project. This technical
feasibility study reports whether the required resources and technologies for project
development exist. Along with this, the feasibility study also analyses the technical skills
and capabilities of the technical team, whether the existing technology can be used or
not, and whether maintenance and up-gradation are easy or not for the chosen
technology.

Operational Feasibility:

In Operational Feasibility, the degree to which the product will satisfy the stated
requirements is analysed, along with how easy the product will be to operate and
maintain after deployment. Other operational scopes include determining the usability of
the product and determining whether the solution suggested by the software
development team is acceptable or not.

Economic Feasibility:

In an Economic Feasibility study, the cost and benefit of the project are analysed. Under
this feasibility study, a detailed analysis is carried out of what the cost of the project will
be, including all the costs of final development such as the hardware and software
resources required, design and development cost, operational cost, and so on. After
that, it is analysed whether the project will be financially beneficial for the organization
or not.
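The cost-benefit comparison described above can be sketched in a few lines (all figures are hypothetical):

```python
# Sketch: simple cost-benefit analysis for an economic feasibility check.
# All figures are hypothetical illustration values.
development_cost = 50_000      # hardware, software, design and development
annual_operating_cost = 5_000
annual_benefit = 22_000        # savings / revenue attributed to the system
years = 4                      # evaluation horizon

total_cost = development_cost + annual_operating_cost * years
total_benefit = annual_benefit * years
print(total_benefit - total_cost)            # net benefit: 18000
print(round(total_benefit / total_cost, 2))  # benefit-cost ratio: 1.26
```

A benefit-cost ratio above 1 suggests the project is economically feasible over the chosen horizon; real studies would also discount future amounts to present value.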

Legal Feasibility:

In a Legal Feasibility study, the project is analysed from a legal point of view. This
includes analysing barriers to the legal implementation of the project, data protection
acts or social media laws, project certificates, licenses, copyright, etc. Overall, it can be
said that a Legal Feasibility Study is a study to know whether the proposed project
conforms to legal and ethical requirements.

Schedule Feasibility:

In a Schedule Feasibility study, the timelines/deadlines of the proposed project are
analysed, including how much time the team will take to complete the final project. This
has a great impact on the organization, as the purpose of the project may fail if it cannot
be completed on time.

Need for a Feasibility Study:

The feasibility study is an important stage of the software project management process,
as on its completion it gives a conclusion: whether to go ahead with the proposed
project because it is practically feasible, to stop the proposed project because it is not
feasible to develop, or to analyse the proposed project again.

Along with this, the feasibility study helps in identifying the risk factors involved in
developing and deploying the system and in planning for risk analysis. It also narrows
the business alternatives and enhances the success rate by analysing the different
parameters associated with the proposed project development.

Requirements Elicitation
Requirements elicitation is the practice of researching and discovering the requirements
of a system from users, customers, and other stakeholders. The practice is also
sometimes referred to as "requirement gathering".

The term elicitation is used in research to emphasize the fact that good requirements
cannot just be collected from the customer, as would be indicated by the name
requirements gathering. Requirements elicitation is non-trivial because you can never be
sure you get all requirements from the user and customer by just asking them what the
system should do or not do (for safety and reliability). Requirements elicitation practices
include
interviews, questionnaires, user observation, workshops, brainstorming, use cases, role
playing and prototyping.

Before requirements can be analysed, modelled, or specified, they must be gathered
through an elicitation process. Requirements elicitation is a part of the requirements
engineering process, usually followed by analysis and specification of the requirements.

Commonly used elicitation processes are the stakeholder meetings or interviews. For
example, an important first meeting could be between software engineers and customers
where they discuss their perspective of the requirements.

In 1992, Christel and Kang identified problems that indicate the challenges for
requirements elicitation:

1. Problems of scope: The boundary of the system is ill-defined, or the
customers/users specify unnecessary technical details that may confuse, rather
than clarify, overall system objectives.

2. Problems of understanding: The customers/users are not completely sure of what
is needed, have a poor understanding of the capabilities and limitations of their
computing environment, don’t have a full understanding of the problem domain,
have trouble communicating needs to the system engineer, omit information that is
believed to be “obvious,” specify requirements that conflict with the needs of other
customers/users, or specify requirements that are ambiguous or untestable.

3. Problems of volatility: The requirements change over time. The rate of change is
sometimes referred to as the level of requirement volatility.

Requirements Elicitation Methods:

There are a number of requirements elicitation methods. A few of them are listed below:

1. Interviews

2. Brainstorming Sessions

3. Facilitated Application Specification Technique (FAST)

4. Quality Function Deployment (QFD)

5. Use Case Approach

The success of the elicitation technique used depends on the maturity of the analyst,
developers, users, and the customer involved.

1. Interviews:

The objective of conducting an interview is to understand the customer’s expectations
from the software.

It is impossible to interview every stakeholder; hence, representatives from groups are
selected based on their expertise and credibility.

Interviews may be either open-ended or structured:

1. In open-ended interviews there is no pre-set agenda. Context-free questions
may be asked to understand the problem.

2. In a structured interview, an agenda of fairly open questions is prepared.
Sometimes a proper questionnaire is designed for the interview.

2. Brainstorming Sessions:

• It is a group technique.
• It is intended to generate lots of new ideas, hence providing a platform to share
views.
• A highly trained facilitator is required to handle group preference and group
conflicts.
• Every idea is documented so that everyone can see it.
• Finally, a document is prepared which consists of the list of requirements and
their priority, if possible.

3. Facilitated Application Specification Technique (FAST):

Its objective is to bridge the expectation gap – difference between what the developers
think they are supposed to build and what customers think they are going to get.

A team-oriented approach is developed for requirements gathering.

Each attendee is asked to make a list of objects that are:

1. Part of the environment that surrounds the system

2. Produced by the system

3. Used by the system

Each participant prepares his/her list, different lists are then combined, redundant entries
are eliminated, team is divided into smaller sub-teams to develop mini-specifications and
finally a draft of specifications is written down using all the inputs from the meeting.
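The consolidation step (combining the attendees' lists and eliminating redundant entries) can be sketched as follows; the object lists are hypothetical examples for a result management system:

```python
# Sketch: combining attendees' object lists and removing redundant entries,
# as in the FAST consolidation step. The lists are hypothetical examples.
attendee_lists = [
    ["student record", "report card", "printer"],
    ["report card", "grade database", "printer"],
    ["student record", "teacher"],
]

combined = []
for lst in attendee_lists:
    for obj in lst:
        if obj not in combined:   # eliminate redundant entries, keep order
            combined.append(obj)
print(combined)
# ['student record', 'report card', 'printer', 'grade database', 'teacher']
```

The combined list then feeds the sub-teams that write the mini-specifications.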

4. Quality Function Deployment:

In this technique, customer satisfaction is of prime concern; hence, it emphasizes the
requirements which are valuable to the customer. Three types of requirements are
identified:

• Normal requirements – The objectives and goals of the proposed software
are discussed with the customer. Example: normal requirements for a result
management system may be entry of marks, calculation of results, etc.

• Expected requirements – These requirements are so obvious that the
customer need not explicitly state them. Example: protection from unauthorized
access.

• Exciting requirements – These include features that are beyond the customer’s
expectations and prove to be very satisfying when present. Example: when
unauthorized access is detected, the system should back up and shut down all
processes.

The major steps involved in this procedure are:

1. Identify all the stakeholders, e.g. users, developers, customers, etc.

2. List out all the requirements from the customer.

3. A value indicating the degree of importance is assigned to each requirement.

4. In the end, the final list of requirements is categorized as:

• Possible to achieve
• To be deferred, with the reason why
• Impossible to achieve, and should be dropped off
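Steps 3 and 4 above can be sketched as a simple prioritisation pass; the requirements, importance values, and thresholds are all hypothetical:

```python
# Sketch: QFD-style prioritisation. Each requirement has an importance value
# (step 3) and is then categorised (step 4). Values/thresholds are hypothetical.
requirements = {
    "entry of marks": 9,
    "calculation of results": 10,
    "auto-backup on intrusion": 3,
    "voice-controlled grading": 1,
}

achievable = [r for r, v in requirements.items() if v >= 7]
deferred   = [r for r, v in requirements.items() if 3 <= v < 7]
dropped    = [r for r, v in requirements.items() if v < 3]
print(achievable)  # ['entry of marks', 'calculation of results']
print(deferred)    # ['auto-backup on intrusion']
print(dropped)     # ['voice-controlled grading']
```

In practice the importance values come from stakeholder ranking sessions, not from the analyst alone.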

5. Use Case Approach: This technique combines text and pictures to provide a better
understanding of the requirements. The use cases describe the ‘what’ of a system, not
the ‘how’; hence, they only give a functional view of the system. The components of use
case design include three major things: actors, use cases, and the use case diagram.

1. Actor – An actor is an external agent that lies outside the system but interacts with it
in some way. An actor may be a person, a machine, etc. It is represented as a stick
figure. Actors can be primary actors or secondary actors.

• Primary actor – Requires assistance from the system to achieve a goal.

• Secondary actor – An actor from which the system needs assistance.

2. Use cases – They describe the sequence of interactions between actors and the
system. They capture who (actors) does what (interaction) with the system. A
complete set of use cases specifies all possible ways to use the system.

3. Use case diagram –A use case diagram graphically represents what happens
when an actor interacts with a system. It captures the functional aspect of the
system.

• A stick figure is used to represent an actor.

• An oval is used to represent a use case.

• A line is used to represent a relationship between an actor and a use
case.
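The actor/use-case relationships behind such a diagram can also be captured in plain data; a minimal sketch (the result-management actors and use cases are hypothetical examples):

```python
# Sketch: the actor/use-case relationships that a use case diagram depicts,
# for a hypothetical result management system.
use_cases = {
    "Enter marks": {"primary": ["Teacher"], "secondary": ["Grade database"]},
    "View result": {"primary": ["Student"], "secondary": ["Grade database"]},
}

# Render each relationship as "PrimaryActor --> (Use case)", mimicking the
# stick-figure/oval notation textually.
for name, actors in use_cases.items():
    for actor in actors["primary"]:
        print(f"{actor} --> ({name})")
```

The diagram itself adds the visual notation, but the information content is exactly this mapping of actors to use cases.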

Requirements Analysis
Requirement analysis is a significant and essential activity after elicitation. We analyse,
refine, and scrutinize the gathered requirements to make them consistent and
unambiguous. This activity reviews all requirements and may provide a graphical view of
the entire system. After the completion of the analysis, it is expected that the
understandability of the project may improve significantly. Here, we may also use the
interaction with the customer to clarify points of confusion and to understand which
requirements are more important than others.

(i) Draw the context diagram: The context diagram is a simple model that defines the
boundaries and interfaces of the proposed systems with the external world. It identifies
the entities outside the proposed system that interact with the system. The context
diagram of a student result management system is given below:

(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype: something that looks, and preferably acts,
like part of the system they say they want.

We can use their feedback to modify the prototype continuously until the customer is
satisfied. Hence, the prototype helps the client to visualize the proposed system and
increases their understanding of the requirements. When developers and users are not
sure about some of the elements, a prototype may help both parties to take a final decision.

Some projects are developed for the general market. In such cases, the prototype should
be shown to some representative sample of the population of potential purchasers. Even
though a person who tries out a prototype may not buy the final system, their feedback
may allow us to make the product more attractive to others.

The prototype should be built quickly and at a relatively low cost. Hence it will always
have limitations and would not be acceptable in the final system. This is an optional
activity.

(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships
between them. The graphical view may help to find incorrect, inconsistent, missing, and
superfluous requirements. Such models include the Data Flow diagram, Entity-
Relationship diagram, Data Dictionaries, etc.

• Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for
modelling the requirements. DFD shows the flow of data through a system.
The system may be a company, an organization, a set of procedures, a
computer hardware system, a software system, or any combination of the
preceding. The DFD is also known as a data flow graph or bubble chart.
• Data Dictionaries: Data Dictionaries are simply repositories to store
information about all data items defined in DFDs. At the requirements stage,
the data dictionary should at least define customer data items, to ensure that
the customer and developers use the same definition and terminologies.
• Entity-Relationship Diagrams: Another tool for requirement specification is
the entity-relationship diagram, often called an "E-R diagram." It is a detailed
logical representation of the data for the organization and uses three main
constructs i.e. data entities, relationships, and their associated attributes.
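A data dictionary of the kind described above can be sketched as a simple mapping; the entries below are hypothetical examples for a result management system:

```python
# Sketch: a minimal data dictionary storing definitions of data items that
# appear in the DFDs. All entries are hypothetical examples.
data_dictionary = {
    "roll_number": {"type": "string", "description": "unique student id",
                    "appears_in": ["Enter marks", "View result"]},
    "total_marks": {"type": "integer", "description": "sum of subject marks",
                    "appears_in": ["Calculate result"]},
}

def lookup(item: str) -> str:
    """Return a one-line definition so customer and developers share terms."""
    entry = data_dictionary[item]
    return f"{item} ({entry['type']}): {entry['description']}"

print(lookup("roll_number"))  # roll_number (string): unique student id
```

Keeping such a shared glossary is what ensures the customer and the developers use the same definitions and terminology.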

(iv) Finalise the requirements: After modelling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analysed.
Elicitation and analysis activities have provided better insight into the system. Now we
finalize the analysed requirements, and the next step is to document these requirements
in a prescribed format.

Software Requirements Specification (SRS) Document
A software requirements specification (SRS) is a document that describes what the
software will do and how it will be expected to perform.

This document lays a foundation for software engineering activities and is constructed
once the entire set of requirements has been elicited and analysed. The SRS is a formal
report that acts as a representation of the software, enabling customers to review whether
it meets their requirements. It comprises the user requirements for the system as well as
detailed specifications of the system requirements.

The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who writes it. First, the SRS could be written by the client of the
system; second, it could be written by a developer of the system. The two approaches
create entirely different situations and establish different purposes for the document.
In the first case, the SRS is used to define the needs and expectations of the users.
In the second case, the SRS is written for various purposes and serves as a contract
document between customer and developer.

Characteristics of good SRS


1. Correctness: User review is used to verify the accuracy of the requirements stated in
the SRS. The SRS is said to be correct if it covers all the needs that are truly expected
from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1) All essential requirements, whether relating to functionality, performance,
design, constraints, attributes, or external interfaces.

(2) Full labels and references to all figures, tables, and diagrams in the SRS and
definitions of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual
requirements described in it conflict. There are three types of possible conflict in
the SRS:

(1) The specified characteristics of real-world objects may conflict. For example,

PAGE NO. 58
(a) The format of an output report may be described in one requirement as
tabular but in another as textual.

(b) One condition may state that all lights shall be green while another
states that all lights shall be blue.

(2) There may be a logical or temporal conflict between two specified
actions. For example,
(a) One requirement may state that the program will add two inputs,
and another may state that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another
requires that "A" and "B" occur simultaneously.
(3) Two or more requirements may describe the same real-world object but use
different terms for it. For example, a program's request for user input
may be called a "prompt" in one requirement and a "cue" in another. The
use of standard terminology and descriptions promotes consistency.
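The terminology conflict in point (3) can even be checked mechanically. The following sketch is a toy illustration: the synonym table and requirement texts are invented, and a real check would need proper tokenization rather than substring matching:

```python
# Map each known synonym to a single canonical term (illustrative table).
CANONICAL = {"prompt": "prompt", "cue": "prompt"}

def terms_used(requirements, synonyms=CANONICAL):
    """Return the set of raw terms from the synonym table found in the texts."""
    found = set()
    for text in requirements:
        for term in synonyms:
            if term in text.lower():
                found.add(term)
    return found

def has_terminology_conflict(requirements, synonyms=CANONICAL):
    """True if two different raw terms map to the same canonical term."""
    canon = [synonyms[t] for t in terms_used(requirements, synonyms)]
    return len(canon) != len(set(canon))

reqs = ["The system shall display a prompt for user input.",
        "After a timeout, the cue shall be repeated."]
```

Here `has_terminology_conflict(reqs)` flags the "prompt"/"cue" pair, which is exactly the kind of inconsistency a standard glossary is meant to prevent.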

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. If a term
with multiple meanings is used, the SRS should specify the intended meaning so that
the document remains clear and simple to understand.

5. Modifiability: The SRS should be made as modifiable as possible and should be able
to accommodate changes to the system quickly. Modifications should be properly
indexed and cross-referenced.

6. Verifiability: The SRS is verifiable when the specified requirements can be checked
by a cost-effective process to determine whether the final software meets them.
The requirements are verified with the help of reviews.

7. Traceability: The SRS is traceable if the origin of each requirement is clear
and if it facilitates the referencing of each requirement in future development or
enhancement documentation.

There are two types of Traceability:

PAGE NO. 59
1. Backward Traceability: This depends upon each requirement explicitly
referencing its source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having
a unique name or reference number. The forward traceability of the SRS is
especially crucial when the software product enters the operation and
maintenance phase. As code and design documents are modified, it is
necessary to be able to ascertain the complete set of requirements that may
be affected by those modifications.
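Both kinds of traceability rest on unique requirement IDs and explicit source links. A minimal sketch of the idea (all IDs, documents, and design elements below are invented for illustration):

```python
# Each requirement has a unique ID (enabling forward traceability) and records
# the earlier document it came from (backward traceability).
requirements = {
    "SRS-001": {"text": "Display login prompt", "source": "UserNeeds-3.2"},
    "SRS-002": {"text": "Lock account after 5 failures", "source": "Security-1.1"},
}

# Design elements link back to the requirement IDs they realize.
design_elements = {
    "LoginModule": ["SRS-001", "SRS-002"],
    "AuditLog":    ["SRS-002"],
}

def impacted_design(req_id):
    """Forward trace: which design elements must be revisited if req_id changes?"""
    return sorted(name for name, ids in design_elements.items() if req_id in ids)

def origin(req_id):
    """Backward trace: the earlier document a requirement came from."""
    return requirements[req_id]["source"]
```

`impacted_design` is the query needed during operation and maintenance; `origin` answers where a requirement came from when it is challenged.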

8. Testability: An SRS should be written in such a manner that it is simple to generate
test cases and test plans from it.

9. Understandable by the customer: An end user may be an expert in his/her own
domain but might not be trained in computer science. Hence, the use of formal
notations and symbols should be avoided as far as possible. The language should
be kept simple and clear.

Properties of a good SRS document


The essential properties of a good SRS document are the following:

1. Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Irrelevant descriptions decrease readability and also
increase error possibilities.

2. Structured: It should be well-structured. A well-structured document is simple to
understand and modify.

3. Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the external
behavior of the system and not discuss the implementation issues. The SRS report
should view the system to be developed as a black box and should define the
externally visible behavior of the system. For this reason, the SRS report is also known
as the black-box specification of a system.

PAGE NO. 60
4. Conceptual integrity: Conceptual integrity is the principle that anywhere you look in
your system, you can tell that the design is part of the same overall design. This
includes low-level issues such as formatting and identifier naming, but also issues
such as how modules and classes are designed, etc.

SRS should show conceptual integrity so that the reader can easily understand it.

5. Response to undesired events: It should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.

6. Verifiable: All requirements of the system, as documented in the SRS document,
should be correct. This means that it should be possible to decide whether or not
the requirements have been met in an implementation.


Requirements Validation
Requirements validation is the process of checking that requirements define the system
that the customer really wants. It overlaps with elicitation and analysis, as it is concerned
with finding problems with the requirements. Requirements validation is critically
important because errors in a requirements document can lead to extensive rework costs
when these problems are discovered during development or after the system is in service.

The cost of fixing a requirements problem by making a system change is usually much
greater than repairing design or coding errors. A change to the requirements usually
means that the system design and implementation must also be changed. Furthermore,
the system must then be retested.

During the requirements validation process, different types of checks should be carried
out on the requirements in the requirements document. These checks include:

1. Validity checks: These check that the requirements reflect the real needs of system
users. Because of changing circumstances, the user requirements may have changed
since they were originally elicited.

2. Consistency checks: Requirements in the document should not conflict. That is,
there should not be contradictory constraints or different descriptions of the same
system function.

3. Completeness checks: The requirements document should include requirements
that define all functions and the constraints intended by the system user.

4. Realism checks: By using knowledge of existing technologies, the requirements
should be checked to ensure that they can be implemented within the proposed budget
for the system. These checks should also take account of the budget and schedule for
the system development.

5. Verifiability: To reduce the potential for dispute between customer and contractor,
system requirements should always be written so that they are verifiable. This means
that you should be able to write a set of tests that can demonstrate that the delivered
system meets each specified requirement.

A number of requirements validation techniques can be used individually or in conjunction
with one another:

1. Requirements Reviews: The requirements are analysed systematically by a team of
reviewers who check for errors and inconsistencies.

2. Prototyping: This involves developing an executable model of a system and using
this with end-users and customers to see if it meets their needs and expectations.
Stakeholders experiment with the system and feed requirements changes back to the
development team.

3. Test-case generation: Requirements should be testable. If the tests for the
requirements are developed as part of the validation process, this often reveals
requirements problems. If a test is difficult or impossible to design, this usually means
that the requirements will be difficult to implement and should be reconsidered.
Developing tests from the user requirements before any code is written is an integral
part of test-driven development.
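To illustrate how writing a test from a requirement exposes problems early, here is a hedged sketch; the requirement, function name, and length threshold are invented examples, not part of any particular system:

```python
# Requirement (invented example): "The system shall reject passwords shorter
# than 8 characters." Writing the tests first forces the requirement to be
# precise enough to verify.

def is_acceptable_password(password):
    """Candidate implementation the tests will be run against."""
    return len(password) >= 8

def test_rejects_short_password():
    assert not is_acceptable_password("abc123")   # 6 chars: must be rejected

def test_accepts_minimum_length():
    assert is_acceptable_password("abcd1234")     # exactly 8 chars: accepted

# If a test like this is hard to write, the requirement is probably too vague
# ("passwords must be strong") and should be reworded before implementation.
```

The point is not the password rule itself but the discipline: an untestable requirement fails at this step, before any design or code exists.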

In practice, you rarely find all requirements problems during the requirements validation
process. Further requirements changes will be needed to correct omissions and
misunderstandings after agreement has been reached on the requirements document.

Requirements Management
The purpose of requirements management is to ensure product development goals are
successfully met. It is a set of techniques for documenting, analysing, prioritizing, and
agreeing on requirements so that engineering teams always have current and approved
requirements. Requirements management provides a way to avoid errors by keeping
track of changes in requirements and maintaining communication with stakeholders from
the start of a project throughout the engineering lifecycle.

With requirements management, we can overcome the complexity and
interdependencies that exist in today’s engineering lifecycles to streamline product
development and accelerate deployment.

Issues in requirements management are often cited as major causes of project failures.
Requirements management software provides the tools to execute a requirements
management plan, helping to reduce costs, accelerate time to market, and improve
quality control.

Requirements Management Plan (RMP)

A requirements management plan (RMP) helps explain how you will receive, analyse,
document and manage all of the requirements within a project. The plan usually covers
everything from initial information gathering of the high-level project to more detailed
product requirements that could be gathered throughout the lifecycle of a project. Key
items to define in a requirements management plan are the project overview,
requirements gathering process, roles and responsibilities, tools, and traceability.

Requirements Management Process

A typical requirements management process includes the following steps:

• Collect initial requirements from stakeholders
• Analyse requirements
• Define and record requirements
• Prioritize requirements
• Agree on and approve requirements
• Trace requirements to work items

• Query stakeholders after implementation on needed changes to requirements
• Utilize test management to verify and validate system requirements
• Assess impact of changes
• Revise requirements
• Document changes
By following these steps, engineering teams are able to tackle the complexity inherent in
developing smart connected products. Using a requirements management solution helps
to streamline the process so we can optimize speed to market and expand our
opportunities while improving quality.
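The steps above can be sketched as a minimal in-memory workflow. The state names and fields below are illustrative assumptions, not any particular tool's API:

```python
from dataclasses import dataclass, field

# Simplified pipeline of states a requirement moves through (illustrative).
STATES = ["collected", "analysed", "defined", "prioritized", "approved", "traced"]

@dataclass
class Requirement:
    req_id: str
    text: str
    state: str = "collected"
    history: list = field(default_factory=list)   # documented state changes

    def advance(self):
        """Move to the next state in the process, recording the change."""
        i = STATES.index(self.state)
        if i + 1 < len(STATES):
            self.history.append(self.state)
            self.state = STATES[i + 1]
        return self.state

r = Requirement("REQ-7", "Export report as PDF")
r.advance()   # collected -> analysed
r.advance()   # analysed -> defined
```

The `history` list is the "document changes" step in miniature: every transition stays auditable, which is what keeps the team working from current, approved requirements.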

Digital Requirements Management

Digital requirements management is a beneficial way to capture, trace, analyse, and
manage requirements changes. Digital management ensures changes are tracked in a
secure, central location, and it allows for strengthened collaboration between team
members. Increased transparency minimizes duplicate work and enhances agility while
helping to ensure requirements adhere to standards and compliance.

Requirements Attributes

In order to be considered a “good” requirement, a requirement should have certain
characteristics, which include being:

• Specific
• Testable
• Clear and concise
• Accurate
• Understandable
• Feasible and realistic
• Necessary
Benefits Of Requirements Management

Some of the benefits of requirements management include:

• Lower cost of development across the lifecycle
• Fewer defects

• Minimized risk for safety-critical products
• Faster delivery
• Reusability
• Traceability
• Requirements being tied to test cases
• Global configuration management
Who is responsible for requirements management?

The product manager is typically responsible for curating and defining requirements.
However, requirements can be generated by any stakeholder, including customers,
partners, sales, support, management, engineering, operations and product team
members. Constant communication is necessary to ensure the engineering team
understands changing priorities.

What is Requirements Management Software?

Requirements management software helps project teams manage, document, analyse,
prioritize, and set requirements for new products or services. It also connects
development teams with relevant stakeholders and other interested parties, creating an
avenue of communication about requirements and changes needed for the product or
service.

Requirements management tools provide businesses with a complete, top-down
understanding of all factors contributing to the scope of a new product or service.
Businesses can utilize this software to verify product or service development meets the
company’s standards, stays within constraints, and also meets the targeted needs of the
consumers. Requirements management software facilitates a more organized approach
to creating and implementing new products or services and fits in well alongside other
development and application lifecycle management tools.

To qualify for inclusion in the Requirements Management category, a product must:

• Document all requirements and steps toward a product or service creation
• Analyse product or service needs, objectives, and constraints
• Allow requirement flexibility as product or service development matures

• Facilitate continuous communication between development teams, stakeholders,
and interested parties

Ten Best Requirements Management Tools & Software Of 2022

1. Jama Software - Best requirements management software for enterprises
2. Modern Requirements - Best rated requirements management software
3. Visure Requirements - Best for configuration management
4. ReqSuite® RM - Best for quick startup and high level of customization
5. Doc Sheets - Best intuitive enterprise requirements management software
6. Orcanos - Best for visualization and reporting
7. IBM Engineering Requirements Management DOORS Next - Best for engineering requirements management
8. Accompa - Best for ease of implementation and use
9. Caliber - Best for storyboards and simulations
10. Pearls - Best for team collaboration features

Software Architecture
It refers to the high-level structure of the software and the discipline of creating such
structures. It serves as a blueprint of the system, defining a structure that meets the
technical requirements.

Software architecture consists of

• Software Components
• Details about data structures and algorithms
• Relationship among components
• Data flow, control flow, and dependencies from one component to another

Software architecture directly impacts software quality in every sense.

Don’t confuse it with software design: although the architecture sometimes serves as
part of the design, there is still a difference between software architecture and
software design.

Why do we use software architecture?

• It helps tackle real-world complex systems by divide and conquer.
• With the help of software architecture, we can distribute work among team members,
which allows us to work in parallel.
• It helps in planning and in defining strategies.
• It helps in understanding the overall picture of the system.

Users of Software Architecture

• Project Manager
• Software Developer
• Security Expert
• Tester
• Anyone else who wants to make some improvement by looking at the architecture

Common Software Architectures


There are many different types of architectures, but some architectural patterns occur
more commonly than others. Here is a list of common software architecture patterns:

• Single process.

• Client / Server (2 processes collaborating).

• 3 Tier systems (3 processes collaborating in chains).

• N Tier systems (N processes collaborating in chains).

• Service oriented architecture (lots of processes interacting with each other).

• Peer-to-peer architecture (lots of processes interacting without a central server).

• Hybrid architectures - combinations of the above architectures.

Here is a simple illustration of these architectures.
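As a concrete taste of the client/server pattern (two processes collaborating), here is a minimal request/reply exchange using Python's standard socket module. Threads stand in for the two processes, and the message format is arbitrary; a real system would add framing, error handling, and concurrency:

```python
import socket
import threading

def run_server(ready, state):
    """Tiny server: accepts one client, replies to its request, then exits."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    ready.set()                         # tell the client we are accepting
    conn, _ = srv.accept()
    request = conn.recv(1024)
    conn.sendall(b"ack: " + request)    # the server's response to the client
    conn.close()
    srv.close()

def client_request(port, message):
    """Client side: connect to the server, send a request, await the reply."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(message)
        return cli.recv(1024)

# Two collaborating parties (threads stand in for separate processes here).
ready, state = threading.Event(), {}
server = threading.Thread(target=run_server, args=(ready, state))
server.start()
ready.wait()                            # don't connect before the server listens
reply = client_request(state["port"], b"hello")
server.join()
```

The 3-tier and N-tier patterns simply chain more such request/reply hops, with each tier acting as a client of the next.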

Design

• Software architecture is, to some level, part of design.
• But we cannot always say that architecture is design.
• Software architecture is the skeleton of the software, and software design is how
the components are internally planned, individually, to work.
• In short, the architecture is about the software as a whole, while design is about
a specific module.

Role of Software Architecture


Some of the important roles that software architecture descriptions play are:

1. Understanding and communication:

An architecture description primarily serves to communicate the architecture to its various
stakeholders, which include the users who will use the system, the clients who
commissioned the system, the builders who will build the system, and, of course, the
architects. An architecture description is an important means of communication between
these stakeholders. Through this description, the stakeholders gain an understanding of
some macro properties of the system and how the system intends to fulfill the functional
and quality requirements.

2. Reuse:
Architecture descriptions can help software reuse. Reuse is considered one of the main
techniques by which productivity can be improved, thereby reducing the cost of software.
The software engineering world has, for a long time, been working towards a discipline
where software can be assembled from parts that are developed by different people and
are available for others to use. If one wants to build a software product in which existing
components may be reused, then architecture becomes the key point at which reuse at
the highest level is decided. The architecture has to be chosen in a manner such that the
components that have to be reused can fit properly and together with other components
that may be developed, they provide the features that are needed.
3. Construction and Evolution
As the architecture partitions the system into parts, this partitioning can naturally be
used for constructing the system, which also requires that the system be broken into
parts such that different teams (or individuals) can separately work on different parts.
A suitable partitioning in the architecture can provide the project with the parts that
need to be built. As, almost by definition, the parts specified in an architecture are
relatively independent (the dependence between parts coming through their
relationships), they can be built independently. Not only does the architecture guide the
development, it also establishes constraints: the system should be constructed in a
manner that preserves the structures chosen during architecture creation. That is, the
chosen parts are present in the final system and they interact in the specified manner.
4. Analysis
It is highly desirable if some important properties about the behavior of the system can
be determined before the system is actually built. This will allow the designers to consider
alternatives and select the one that will best suit the needs. Many engineering disciplines
use models to analyse design of a product for its cost, reliability, performance, etc.
Architecture opens such possibilities for software also. It is possible (thought the methods
are not fully developed or standardized yet) to analyse or predict the properties of the
system being built from its architecture. For example, the reliability or the performance of
the system can be analysed. Such an analysis can help determine whether the system
will meet the quality and performance requirements, and if not, what needs to be done to
meet the requirements.

Role of Software Architect


A software architect provides the solution design that the technical team can build
for the entire application. A software architect should have expertise in the following
areas −

Design Expertise

• Expert in software design, including diverse methods and approaches such as
object-oriented design, event-driven design, etc.

• Lead the development team and coordinate the development efforts for the integrity
of the design.

• Should be able to review design proposals and the tradeoffs among them.


PAGE NO. 71
Domain Expertise

• Expert on the system being developed and its plan for software evolution.

• Assist in the requirements investigation process, assuring completeness and
consistency.

• Coordinate the definition of the domain model for the system being developed.

Technology Expertise

• Expert on available technologies that help in the implementation of the system.

• Coordinate the selection of programming languages, frameworks, platforms,
databases, etc.

Methodological Expertise

• Expert on software development methodologies that may be adopted during the
SDLC (Software Development Life Cycle).

• Choose the appropriate approaches for development that help the entire team.

Hidden Role of Software Architect

• Facilitates the technical work among team members and reinforces the trust
relationship in the team.

• Information specialist who shares knowledge and has vast experience.

• Protects the team members from external forces that would distract them and bring
less value to the project.

Software Design Vs Software Architecture


Software architecture and software design are two main parts, or phases, of software
development. Software architecture focuses more on the interaction between the
externally visible components of the system, whereas design is about how the internal
components of the system interact with each other. Software architecture is more about
what we want the system to do, and software design is about how we want to achieve
that. Software architecture is at a higher level of abstraction than software design, and
it is concerned with issues beyond the data structures and algorithms used in the
system.

Software Architecture shows how the different modules of the system communicate with
each other and other systems. What language is to be used? What kind of data storage
is present, what recovery systems are in place? Like design patterns, there are
architectural patterns, such as the 3-tier layered design.

Software design is about designing the individual modules / components. What are the
responsibilities, functions, of module x or class Y? What can it do, and what not? What
design patterns can be used? UML diagram/flow chart/simple wireframes (for UI) for a
specific module/part of the system.

Software Architecture is the design of the entire system, while Software Design
emphasizes on a specific module / component / class level.

All architecture is design but not all design is architecture.

Software Architecture is “what” we are building. Software Design is “how” we are building.

The differences between software design and software architecture can be summarized
point by point:

• Design is about how the internal components of the system interact with each
other; architecture focuses more on the interaction between the externally visible
components of the system.
• Software design is about designing the individual modules/components: the
responsibilities and functions of module X or class Y, what it can and cannot do,
which design patterns can be used, and UML diagrams, flow charts, or simple
wireframes (for UI) for a specific module or part of the system. Software
architecture shows how the different modules of the system communicate with
each other and with other systems: which language is to be used, what kind of
data storage is present, and what recovery systems are in place.
• Software design emphasizes a specific module/component/class level; software
architecture is the design of the entire system.
• Software design is about how we want to achieve the goal; software architecture
is more about what we want the system to do.
• Design operates at the implementation level; architecture at the structure level.
• Design deals with detailed properties; architecture with fundamental properties.
• Design uses guidelines; architecture defines guidelines.
• Design is communicated to developers; architecture is communicated to business
stakeholders.
• Design avoids uncertainty; architecture manages uncertainty.
• Design helps to implement the software; architecture helps to define the
high-level infrastructure of the software.
• In one word, the level of software design is implementation, and the level of
software architecture is structure.

Architecture View Model
A model is a complete, basic, and simplified description of software architecture which
is composed of multiple views from a particular perspective or viewpoint.

A view is a representation of an entire system from the perspective of a related set of
concerns. It is used to describe the system from the viewpoint of different stakeholders
such as end-users, developers, project managers, and testers.

4+1 View Model

The 4+1 View Model was designed by Philippe Kruchten to describe the architecture of
a software-intensive system based on the use of multiple, concurrent views. End-users,
developers, system engineers, and project managers all have unique views on the
system, hence the viewpoints are used to describe it from their perspectives. It is a
multiple-view model that addresses different features and concerns of the system. It
standardizes the software design documents and makes the design easy to understand
by all stakeholders.

It is an architecture verification method for studying and documenting software
architecture design and covers all the aspects of software architecture for all
stakeholders. It provides four essential views −

• The logical view or conceptual view − The logical view is concerned with the
functionality that the system provides to end-users. It describes the object model of
the design. Class diagrams and state diagrams are examples of UML diagrams that
are used to depict the logical view.

• The Process View

The process view focuses on the system’s run-time behavior and deals with the
system’s dynamic elements. It explains the system processes and how they
communicate. Concurrency, distribution, system integrity, performance, and scalability
are all addressed in the process view. The sequence diagram, communication diagram,
and activity diagram are all UML diagrams that can be used to describe a process
view.

• The Physical View

The physical view depicts the system from a system engineer's point of view. It
describes the mapping of software onto hardware and reflects its distributed aspect.
It is concerned with the topology of software components on the physical layer as well
as the physical connections between these components. UML diagrams used to
represent the physical view include the deployment diagram.

• The Development View

The development view illustrates a system from a programmer's perspective and is
concerned with software management. It describes the static organization or structure
of the software in its development environment. This view is also known as the
implementation view. UML diagrams used to represent the development view include
the package diagram and the component diagram.

• Scenario View

This view model can be extended by adding one more view, called the scenario
view or use case view, for end-users or customers of software systems. It is coherent
with the other four views and is utilized to illustrate the architecture, serving as the
“plus one” view of the (4+1) view model.

Why is it called 4+1 instead of 5?


The use case view has a special significance as it details the high-level requirements of
a system, while the other views detail how those requirements are realized. When all
other four views are completed, it is effectively redundant. However, none of the other
views would be possible without it. The following table shows the 4+1 views in detail −

• Logical view − Description: shows the components (objects) of the system as well
as their interactions. Viewers/stakeholders: end-users, analysts, and designers.
Considers: functional requirements. UML diagrams: class, state, object, sequence,
and communication diagrams.

• Process view − Description: shows the processes/workflow rules of the system and
how those processes communicate; focuses on the dynamic view of the system.
Viewers/stakeholders: integrators and developers. Considers: non-functional
requirements. UML diagrams: activity diagram.

• Development view − Description: gives building-block views of the system and
describes the static organization of the system modules. Viewers/stakeholders:
programmers and software project managers. Considers: software module
organization (software management, reuse, constraints of tools). UML diagrams:
component and package diagrams.

• Physical view − Description: shows the installation, configuration, and deployment
of the software application. Viewers/stakeholders: system engineers, operators,
system administrators, and system installers. Considers: non-functional
requirements regarding the underlying hardware. UML diagrams: deployment
diagram.

• Scenario view − Description: shows that the design is complete, by performing
validation and illustration. Viewers/stakeholders: all stakeholders, including
evaluators. Considers: system consistency and validity. UML diagrams: use case
diagram.

Component and Connector View and its Architecture Style
Component-and-connector (C&C) views define models consisting of elements that have
some runtime presence, such as processes, objects, clients, servers, and data stores.

Component and Connector (C&C) architecture view of a system has two main elements—
components and connectors. Components are usually computational elements or data
stores that have some presence during the system execution. Connectors define the
means of interaction between these components.

A C&C view of the system defines the components, and which component is connected
to which and through what connector. A C&C view describes a runtime structure of the
system—what components exist when the system is executing and how they interact
during the execution. The C&C structure is essentially a graph, with components as nodes
and connectors as edges. C&C view is perhaps the most common view of architecture
and most box-and-line drawings representing architecture attempt to capture this view.
Most often when people talk about the architecture, they refer to the C&C view. Most
architecture description languages also focus on the C&C view.

Components
Components are generally units of computation or
data stores in the system. A component has a
name, which is generally chosen to represent the
role of the component or the function it performs.

In a diagram representing a C&C architecture view


of a system, it is highly desirable to have a different
representation for different component types, so
the different types can be identified visually. It is
much better to use a different symbol/notation for
each different component type. If there are multiple
components of the same type, then each of them is
represented using the same symbol and distinguished
from the others by its name. Components use interfaces to

communicate with other components. The interfaces are sometimes called ports.

It would be useful if there was a list of standard symbols that could be used to build an
architecture diagram. However, as there is no standard list of component types, there
is no such standard list.

Connectors
The different components of a system are likely to interact while the system is in operation
to provide the services expected of the system. After all, components exist to provide
parts of the services and features of the system, and these must be combined to deliver
the overall system functionality. For composing a system from its components,
information about the interaction between components is necessary.

Interaction between components may be through a simple means supported by the


underlying process execution infrastructure of the operating system. For example, a
component may interact with another using the procedure call mechanism (a
connector), which is provided by the runtime environment for the programming language.
However, the interaction may involve more complex mechanisms as well. Examples of
such mechanisms are remote procedure call, TCP/IP ports, and a protocol like HTTP.
These mechanisms require a fair amount of underlying runtime infrastructure, as well as
special programming within the components to use the infrastructure.

Consequently, it is extremely important to identify and explicitly represent these


connectors. Specification of connectors will help identify the suitable infrastructure
needed to implement an architecture, as well as clarify the programming needs for
components using them. Without a proper understanding of the connectors, a realization
of the components using the connectors may not be possible.

Note that connectors need not be binary; a connector may provide n-way
communication between multiple components. For example, a broadcast bus may be
used as a connector, which allows a
component to broadcast its message
to all the other components.

A connector also has a name that


should describe the nature of
interaction the connector supports. A
connector also has a type, which is a
generic description of the interaction,
specifying properties like whether it is
a binary or n-way, types of interfaces
it supports, etc. Sometimes, the
interaction supported by a connector
is best represented as a protocol. A
protocol implies that when two or more
components communicate through the connector, they must follow some
conventions about the order of events or commands, the order in which data is to be grouped for
sending, error conditions etc. For example, if TCP ports are to be used to send information
from one process to another (TCP ports are the connector between the two components
of process type), the protocol requires that a connection must first be established and a
port number obtained before sending the information, and that the connection should be
closed in the end. A protocol description makes all these constraints explicit, and defines
the error conditions and special scenarios. If a protocol is used by a connector type, it
should be explicitly stated.
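The TCP-port protocol just described (establish a connection, obtain a port, send the information, then close) can be sketched in Python; the message content and the use of a loopback socket are purely illustrative:

```python
import socket
import threading

def receiver(listener, inbox):
    """One component: accepts a connection, reads the message; the
    'with' block then closes the connection."""
    conn, _ = listener.accept()
    with conn:
        inbox.append(conn.recv(1024).decode())

# Step 0: the receiving component obtains a port number.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

inbox = []
t = threading.Thread(target=receiver, args=(listener, inbox))
t.start()

# The sending component follows the connector's protocol:
with socket.socket() as sender:
    sender.connect(("127.0.0.1", port))   # 1. establish the connection
    sender.sendall(b"balance-update")     # 2. send the information
                                          # 3. close (done by 'with')
t.join()
listener.close()
print(inbox[0])  # -> balance-update
```

The deviations a protocol description must cover (a refused connection, a dropped peer) surface here as exceptions such as ConnectionRefusedError.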

Example: The figure below illustrates a primary presentation of a C&C view as one might
encounter it in a typical description of a system's runtime architecture.

A bird's-eye view of a system
as it might appear during
runtime. This system contains
a shared repository that is
accessed by servers and an
administrative component. A
set of client tellers can interact
with the account repository
servers and communicate
among themselves through a
publish-subscribe connector.

What is this diagram? The


system contains a shared
repository of customer
accounts (Account Database)
accessed by two servers and an administrative component. A set of client tellers can
interact with the account repository servers, embodying a client-server style. These
client components communicate among themselves by publishing and subscribing to
events. We learn from the supporting documentation that the purpose of the two servers
is to enhance reliability: If the main server goes down, the backup can take over. Finally,
a component allows an administrator to access, and presumably maintain, the shared-
data store.

Each of the three types of connectors shown in Figure represents a different form of
interaction among the connected parts. The client-server connector allows a set of
concurrent clients to retrieve data synchronously via service requests. This variant of
the client-server style supports transparent failover to a backup server. The database
access connector supports authenticated administrative access for monitoring and
maintaining the database. The publish-subscribe connector supports asynchronous
event announcement and notification.

Each of these connectors represents a complex form of interaction and will likely require

nontrivial implementation mechanisms. For example, the client-server connector type
represents a protocol of interaction that prescribes how clients initiate a client-server
session, constraints on ordering of requests, how/when failover is achieved, and how
sessions are terminated. Implementation of this connector will probably involve runtime
mechanisms that detect when a server has gone down, queue client requests, handle
attachment and detachment of clients, and so on. Note also that connectors need not
be binary: Two of the three connector types in Figure can involve more than two
participants.
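The publish-subscribe connector joining the teller clients can be given a rough sketch; the class name, event names, and synchronous notification are simplifying assumptions made for illustration:

```python
class PublishSubscribeConnector:
    """An n-way connector: a publisher announces an event and every
    component subscribed to that event type is notified."""

    def __init__(self):
        self._subscribers = {}  # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self._subscribers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, payload):
        for callback in self._subscribers.get(event_type, []):
            callback(payload)

# Two teller components subscribe; a third component publishes an event.
bus = PublishSubscribeConnector()
notified = []
bus.subscribe("account-updated", lambda p: notified.append(("teller-1", p)))
bus.subscribe("account-updated", lambda p: notified.append(("teller-2", p)))
bus.publish("account-updated", "A-100")
print(notified)  # -> [('teller-1', 'A-100'), ('teller-2', 'A-100')]
```

A production connector would, as the text notes, need real asynchronous event delivery and nontrivial runtime infrastructure; the point here is only the n-way interaction pattern.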

N-Tier Architecture
Definition of N-Tier Architecture
N-tier architecture is also called multi-tier architecture because the software is
engineered to have the processing, data management, and presentation
functions physically and logically separated. That means that these different
functions are hosted on several machines or clusters, ensuring that services are
provided without resources being shared and, as such, these services are delivered
at top capacity. This separation makes managing each separately easier since doing
work on one does not affect the others, isolating any problems that might occur.
Not only does your software gain from being able to get services at the best possible
rate, but it’s also easier to manage. This is because when you work on one section, the
changes you make will not affect the other functions. And if there is a problem, you can
easily pinpoint where it originates.
A More In-Depth Look at N-Tier Architecture
In its most common form, n-tier architecture involves dividing an application into three
different tiers. It is the physical separation of the different parts of the application. These
are
1. the presentation tier,
2. the logic tier, and
3. the data tier.

How It Works and Examples of N-Tier Architecture

When it comes to n-tier architecture, a three-tier architecture is fairly common. In this


setup, you have the presentation or GUI tier, the data layer, and the application logic tier.

The presentation tier. The presentation tier is the user interface. This is what the
software user sees and interacts with. This is where they enter the needed information.
This tier also acts as a go-between for the data tier and the user, passing on the user’s
different actions to the logic tier.

The application logic tier. The application logic tier is where all the “thinking” happens,
and it knows what is allowed by your application and what is possible, and it makes other
decisions. This logic tier is also the one that writes and reads data into the data tier.

The data tier. The data tier is where all the data used in your application are stored. You
can securely store data on this tier, perform transactions, and even search through volumes
and volumes of data in a matter of seconds.

Just imagine surfing on your favorite website. The presentation tier is the Web application
that you see. It is shown on a Web browser you access from your computer, and it has
the CSS, JavaScript, and HTML codes that allow you to make sense of the Web
application. If you need to log in, the presentation tier will show you boxes for username,
password, and the submit button. After filling out and then submitting the form, all that will
be passed on to the logic tier. The logic tier will have the JSP, Java Servlets, Ruby, PHP
and other programs. The logic tier would be run on a Web server. And in this example,
the data tier would be some sort of database, such as a MySQL, NoSQL, or PostgreSQL
database. All of these are run on a separate database server. Rich Internet applications
and mobile apps also follow the same three-tier architecture.
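The login example above can be condensed into a minimal sketch of the three tiers; the class names, the in-memory dictionary standing in for the database, and the hard-coded credentials are all illustrative assumptions:

```python
class DataTier:
    """Data tier: stores the data (a dict stands in for the database)."""
    def __init__(self):
        self._users = {"alice": "secret"}

    def get_password(self, username):
        return self._users.get(username)

class LogicTier:
    """Logic tier: decides what is allowed; reads from the data tier."""
    def __init__(self, data):
        self._data = data

    def login(self, username, password):
        return self._data.get_password(username) == password

class PresentationTier:
    """Presentation tier: collects the user's input, passes it to the
    logic tier, and shows the result."""
    def __init__(self, logic):
        self._logic = logic

    def submit_login_form(self, username, password):
        ok = self._logic.login(username, password)
        return "Welcome!" if ok else "Login failed."

app = PresentationTier(LogicTier(DataTier()))
print(app.submit_login_form("alice", "secret"))  # -> Welcome!
print(app.submit_login_form("alice", "wrong"))   # -> Login failed.
```

Note that each tier talks only to the tier below it, which is exactly what lets the tiers be replaced or redeployed independently.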

Benefits of N-Tier Architecture

There are several benefits to using n-tier architecture for your software. These are
scalability, ease of management, flexibility, and security.

• Secure: You can secure each of the three tiers separately using different methods.
• Easy to manage: You can manage each tier separately, adding or modifying each
tier without affecting the other tiers.

• Scalable: If you need to add more resources, you can do it per tier, without affecting
the other tiers.
• Flexible: Apart from isolated scalability, you can also expand each tier in any
manner that your requirements dictate.

In short, with n-tier architecture, you can adopt new technologies and add more
components without having to rewrite the entire application or redesign your whole
software, thus making it easier to scale or maintain. Meanwhile, in terms of security, you
can store sensitive or confidential information in the logic tier, keeping it away from the
presentation tier, thus making it more secure.

Other benefits include:

• More efficient development. N-tier architecture is very friendly for development,


as different teams may work on each tier. This way, you can be sure the design
and presentation professionals work on the presentation tier and the database
experts work on the data tier.
• Easy to add new features. If you want to introduce a new feature, you can add it
to the appropriate tier without affecting the other tiers.
• Easy to reuse. Because the application is divided into independent tiers, you can
easily reuse each tier for other software projects. For instance, if you want to use
the same program, but for a different data set, you can just replicate the logic and
presentation tiers and then create a new data tier.

And there are n-tier architecture models that have more than three tiers. Examples are
applications that have these tiers:

• Services – such as print, directory, or database services

• Business domain – the tier that would host Java, DCOM, CORBA, and other
application server objects.

• Client tier – or the thin clients

Considerations for Using N-Tier Architecture for Your Applications

Because you are going to work with several tiers, you need to make sure that you have
sufficient network bandwidth and fast hardware. If not, the application’s performance might be slow.
Also, this would mean that you would have to pay more for the network, the hardware,
and the maintenance needed to ensure that you have better network bandwidth.

Also, use as few tiers as possible. Remember that each tier you add to your software
or project means an added layer of complexity, more hardware to purchase, as well as
higher maintenance and deployment costs. For an n-tier application to make sense,
it should have the minimum number of tiers needed to still enjoy the scalability, security,
and other benefits brought about by using this architecture. If you need only three tiers,
don’t deploy four or more tiers.

Deployment View
The Deployment view focuses on aspects of the system that are important after the
system has been tested and is ready to go into live operation. This view defines the
physical environment in which the system is intended to run, including the hardware
environment your system needs (e.g., processing nodes, network interconnections, and
disk storage facilities), the technical environment requirements for each node (or node
type) in the system, and the mapping of your software elements to the runtime
environment that will execute them.

The deployment view shows the physical distribution of processing within the system.

The Deployment viewpoint applies to any information system with a required deployment
environment that is not immediately obvious to all of the interested stakeholders. This
includes the following scenarios:
• Systems with complex runtime dependencies (e.g., particular third-party software
packages are needed to support the system)
• Systems with complex runtime environments (e.g., elements are distributed over a
number of machines)
• Situations where the system may be deployed into a number of different environments
and the essential characteristics of the required environments need to be clearly
illustrated (which is typically the case with packaged software products)
• Systems that need specialist or unfamiliar hardware or software in order to run.
Most large information systems fall into one of these groups, so you will almost always
need to create a Deployment view.

Definition Describes the environment into which the system will be deployed,
including the dependencies the system has on its runtime
environment
Concerns • runtime platform required
• specification and quantity of hardware or hosting required
• third-party software requirements
• technology compatibility
• network requirements
• network capacity required
• physical constraints
Models • runtime platform models
• network models
• technology dependency models
• intermodel relationships

Deployment Deployment diagrams are used to represent the deployment view of a
diagram system. Deployment diagrams are useful for system engineers. An
efficient deployment diagram is very important because it controls the
following parameters:
• Performance
• Scalability
• Maintainability
• Portability
Stakeholders • System administrators, developers, testers, communicators, and
assessors

Deployment Diagram for Library Management System

Deployment View and Performance Analysis


Introduction: -There are different views through which software architectures can be
represented and which views we use depends on the types of analysis we want to do at
the architecture design time. If we want to analyse the performance of the architecture (to
be exact, the performance of the system that will have the proposed architecture), then even
though the C&C view represents the run-time structure, it is not enough.
The reason is that though performance depends on the runtime structure of the
software, it also depends on the hardware and other resources that will be used to
execute the software. For example, the performance of an n-tier system will be very
different if all the tiers reside on the same machine as compared to if they reside on
different machines. In other words, the performance of a system whose architecture
remains the same, can change depending on how the components of the architecture are
allocated on the hardware. Hence, to do any meaningful performance analysis, we must
specify the allocation as well. The same holds true if we want to do any reliability or
availability analysis, as they also depend on the reliability of the hardware components
involved in running the system. To facilitate such an analysis, a deployment view needs
to be provided. In a deployment view, the elements of a C&C style are allocated to
execution resources like CPU and communication channels. Hence, the elements of this
view are the software components and connectors from the C&C view, and the hardware
elements like CPU, memory, and bandwidth. This view shows which software
components are allocated to which hardware element. This allocation can be dynamic
and this dynamism can also be represented.
Note that even the allocation view, which is necessary to do performance analysis, is not
sufficient. To analyse the performance, besides the allocation, we will need to properly
characterize the hardware elements in terms of their capacities, and the software
elements in terms of their resource requirements and usage. Using this information, models
can be built to determine bottlenecks, optimal allocation, etc. This is an active area of
research, and surveys of the area are available in the literature.
For doing any performance analysis, some models will have to be built, and these models
will need information about the hardware and software. The level of detail that can be
obtained depends on the model used. At the basic level, some experience-based analysis
can be done to see if there are any performance bottlenecks. The allocation of software
can also be examined for optimality of performance, and if needed that allocation can be
changed. For example, in an n-tier system, it may be found that the overhead of
communication is too heavy between two tiers, and it may be better to allocate both of
them to one machine. Or the analysis may reveal the reverse—allocating both the
database and the business layer on the same machine might degrade the response time
as concurrency will be lost, and it may be decided to add another machine to host the
business layer and connect it to the machine hosting the database layer with a high-speed
connection.
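The allocation trade-off just described can be sketched numerically; every timing below is an invented assumption, not a measurement:

```python
# Timings in milliseconds per request -- all invented for illustration.
compute_business = 20.0    # business-layer processing time
compute_db = 30.0          # database processing time
network_hop = 15.0         # extra cost of a remote call between machines
contention_penalty = 25.0  # concurrency lost when both tiers share one host

one_machine = compute_business + compute_db + contention_penalty
two_machines = compute_business + compute_db + network_hop

choice = "one machine" if one_machine < two_machines else "two machines"
print(f"one machine: {one_machine} ms, two machines: {two_machines} ms "
      f"-> allocate on {choice}")
```

With these particular numbers the lost concurrency outweighs the network hop, so the analysis favors separate machines; different measured values could just as easily reverse the decision, which is the point of doing the allocation analysis at all.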
Many such possibilities exist for performance analysis. Consequently, for a C&C view of
the architecture, it may be desirable to look at an allocation view at the time of creating
the architecture. It may be added that not all C&C views lend themselves easily or
fruitfully to an allocation view. The n-tier style (or client-server style) and the process view
clearly lend themselves to an allocation view. However, it is not clear if the allocation view of
a publish-subscribe view will be very useful. When giving an allocation view, it is best to
choose a C&C view that lends itself naturally to the allocation view. If the views obtained
so far do not render themselves to an allocation view, but an allocation view is essential

for the desired analysis, then a view should be created that can be used for such an
allocation and analysis.

Documenting Architecture Design
Introduction: -When the design is complete, the architecture has to be properly communicated to
all stakeholders for negotiation and agreement. This requires that architecture be precisely
documented with enough information to perform the types of analysis the different stakeholders
wish to make to satisfy themselves that their concerns have been adequately addressed. Without
a properly documented description of the architecture, it is not possible to have a clear common
understanding. Hence, properly documenting architecture is as important as creating one.

Just like different projects require different views, different projects will need different levels of
detail in their architecture documentation. In general, however, a document describing the
architecture should contain the following:

• System and architecture context

• Description of architecture views

• Across views documentation

A pictorial representation is not a complete description of the view. It gives an intuitive idea of
the design, but is not sufficient for providing the details. For example, the purpose and
functionality of a module or a component are indicated only by its name, which is not sufficient.
Hence, supporting documentation is needed for the view diagrams. This supporting
documentation should have some or all of the following: -

• Element Catalog: Provides more information about the elements shown in the
primary representation. Besides describing the purpose of the element, it should
also describe the element's interfaces (remember that all elements have interfaces
through which they interact with other elements). All the different interfaces
provided by the elements should be specified. Interfaces should have unique
identity, and the specification should give both syntactic and semantic
information. Syntactic information is often in terms of signatures, which describe
all the data items involved in the interface and their types. Semantic information
must describe what the interface does. The description should also clearly state
the error conditions that the interface can return.

• Architecture Rationale: Though a view specifies the elements and the


relationship between them, it does not provide any insight into why the architect
chose the particular structure. Architecture rationale gives the reasons for
selecting the different elements and composing them in the way it was done.

• Behaviour: A view gives the structural information. It does not represent the actual
behaviour or execution. Consequently, in a structure, all possible interactions
during an execution are shown. Sometimes, it is necessary to get some idea of
the actual behaviour of the system in some scenarios. Such a description is
useful for arguing about properties like deadlock. Behaviour description can be
provided to help aid understanding of the system execution. Often diagrams like
collaboration diagrams or sequence diagrams are used.

• Other Information: This may include a description of all those decisions that
have not been taken during architecture creation but have been deliberately
left for the future, for example, the choice of a server or protocol. If this is done, then
it must be specified, as fixing these decisions later will have an impact on the architecture.
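An element-catalog interface entry combines syntactic information (the signature), semantic information (what the interface does), and error conditions; a minimal sketch follows, in which the AccountRepository element and every name in it are hypothetical:

```python
def get_balance(account_id: str) -> int:
    """Interface 'get_balance' of the hypothetical AccountRepository element.

    Syntactic information: the signature above (an account identifier in,
    the balance in cents out).
    Semantic information: returns the current balance of the account.
    Error conditions: raises KeyError if the account does not exist.
    """
    accounts = {"A-100": 2500}  # stand-in for the element's data store
    if account_id not in accounts:
        raise KeyError(account_id)
    return accounts[account_id]

print(get_balance("A-100"))  # -> 2500
```

The docstring carries exactly the three kinds of information the element catalog asks for, so a reader of the view diagram can consult it without reading the implementation.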

Architecture documentation is in many ways similar to the documentation we write in other facets
of our software development projects.

1. Document should be written from the point of view of the reader, not the
writer. The document’s efficiency is optimized if we make things easier for the reader.
2. Avoid Repetition: Each kind of information should be recorded in exactly one
place. This makes documentation easier to use and much easier to maintain as it evolves. It
also avoids confusion, because information that is repeated is often repeated
in a slightly different form, thus confusing things.

3. Avoid Unintentional Ambiguity: In some sense, the point of architecture is to be


ambiguous. A primary reason architecture is useful is because it suppresses or
defers the mass of details that must be resolved before bringing a system to
the field. The architecture is therefore ambiguous, one might argue, with respect to
these suppressed details.

4. Use Standard Organization: Each document should conform to a standard,


planned organization scheme, and this scheme should be made known to the
reader.
5. Record Rationale: While documenting the results of decisions, record the
decisions you avoided and say why. Next time when those decisions come under
scrutiny, you will find yourself revisiting the same arguments and wondering why
you did not take some other path. Recording rationale will save you enormous time
in the long run, although it requires discipline to record in the heat of the moment.

6. Keep it Current: Documentation that is incomplete or out of date, does not reflect the
truth, or does not obey its own rules for form and internal consistency will not be
used. Documentation that is kept current and accurate will be used.

7. Review Documentation for Fitness of Purpose: Only the intended users of a
document will be able to tell you if it contains the right information presented in the right
way. Before a document is released, have it reviewed by representatives of the
community or communities for whom it was written.

Evaluating Architectures
Because architecture enables the accomplishment of certain quality attributes, its evaluation at an early
stage is a crucial task in a software development project. It is possible to verify whether the
architectural decisions are appropriate at an early stage, without waiting for the system to be
developed and deployed, and to predict whether a system will have the required quality
attributes. The goal is to determine the degree to which a software architecture or an
architectural style satisfies the quality requirements. Architectural evaluation has
saved significant amounts of money by detecting, early in development, that the
system under development could not achieve the quality requirements it was
supposed to.

The software architecture evaluation methods considered here address one or more of the following
quality attributes: performance, maintainability, testability, and portability. The IEEE
standard 610.12-1990 defines the four quality attributes as follows:

Maintainability. This is defined as:

“The ease with which a software system or component can be modified to correct faults,
improve performance or other attributes, or adapt to a changed environment.”

Maintainability is a multifaceted quality requirement. It incorporates aspects such as


readability and understandability of the source code. Maintainability is also concerned
with testability to some extent as the system has to be re-validated during the
maintenance.

Performance. Performance is defined as:

“The degree to which a system or component accomplishes its designated functions


within given constraints, such as speed, accuracy, or memory usage.”

There are many aspects of performance, e.g., latency, throughput, and capacity.

Testability. Testability is defined as:

“The degree to which a system or component facilitates the establishment of test criteria
and the performance of tests to determine whether those criteria have been met.”

We interpret this as the effort needed to validate the system against the requirements. A

system with high testability can be validated quickly.

Portability. Portability is defined as:

“The ease with which a system or component can be transferred from one hardware or
software environment to another.”

We interpret this as portability not only between different hardware platforms and
operating systems, but also between different virtual machines and versions of
frameworks.

These four quality attributes are selected, not only for their importance for software
developing organizations in general, but also for their relevance for organizations
developing software in the real-time system domain in a cost effective way, e.g., by using
a product-line approach. Performance is important since a system must fulfil its
performance requirements; if not, the system will be of limited use, or not used at all. The long-
term focus forces the system to be maintainable and testable; it also makes portability
important, since computer hardware technology moves quickly and the initial
hardware is not always available after a number of years.

Architecture Evaluation Methods

The following methods and approaches can be applied for architecture-level evaluation
of performance, maintainability, testability, or portability.

1. SAAM — Software Architecture Analysis Method

Software Architecture Analysis Method (SAAM) is a scenario-based software


architecture evaluation method, targeted for evaluating a single architecture or making
several architectures comparable using metrics such as coupling (which measures the
relative interdependence among modules) between architecture components. SAAM was
originally focused on comparing modifiability of different software architectures in an
organization’s domain. It has since then evolved to a structured method for scenario-
based software architecture evaluation. Several quality attributes can be addressed,
depending on the type of scenarios that are created during the evaluation process.
The method consists of five steps.

1. It starts with the documentation of the architecture in a way that all
participants of the evaluation can understand.

2. Scenarios are then developed that describe the intended use of the system.
The scenarios should represent all stakeholders that will use the system.

3. The scenarios are then evaluated and a set of scenarios that represents the
aspect that we want to evaluate is selected.

4. Interacting scenarios are then identified as a measure of the modularity of


the architecture.

5. The scenarios are then ordered according to priority, and their expected
impact on the architecture.
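Steps 3 to 5 above, evaluating, selecting, and ordering scenarios, can be sketched as a simple ranking; the scenarios, priorities, and impact counts below are invented for illustration:

```python
# Each scenario: (description, priority 1-5, components it would touch).
scenarios = [
    ("Change the database vendor",     4, 3),
    ("Add a new report format",        5, 1),
    ("Port the UI to a mobile client", 3, 5),
]

# Order by priority first, then by expected impact on the architecture;
# scenarios touching many components interact more, which in SAAM hints
# at weaker modularity.
ranked = sorted(scenarios, key=lambda s: (-s[1], -s[2]))
for description, priority, impact in ranked:
    print(f"priority={priority} impact={impact}  {description}")
```

In a real SAAM evaluation, priorities come from stakeholder voting and impact is judged against the documented architecture; the sorting step merely makes the ordering criterion explicit.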

2. ATAM — Architecture Trade-off Analysis Method

Architecture Trade-off Analysis Method (ATAM) is a scenario-based software


architecture evaluation method. The goals of the method are to evaluate architecture
level designs that considers multiple quality attributes and to gain insight as to whether
the implementation of the architecture will meet its requirements. ATAM builds on
SAAM and extends it to handle trade-offs between several quality attributes. The
architecture evaluation is performed in six steps.

1. The first one is to collect scenarios that operationalize the requirements for
the system (both functional and quality requirements).

2. The second step is to gather information regarding the constraints and


environment of the system. This information is used to validate that the
scenarios are relevant for the system.

3. The third step is to describe the architecture using views that are relevant for
the quality attributes that were identified in step one.

4. Step four is to analyse the architecture with respect to the quality attributes.
The quality attributes are evaluated one at a time.

5. Step five is to identify sensitive points in the architecture, i.e., identifying


those points that are affected by variations of the quality attributes.

6. The sixth and final step is to identify and evaluate trade-off points, i.e.,
variation points that are common to two or more quality attributes.

3. ALMA — Architecture-Level Modifiability Analysis

Architecture-Level Modifiability Analysis (ALMA) is a scenario-based software


architecture evaluation method with the following characteristics: focus on modifiability,
distinguish multiple analysis goals, make important assumptions explicit, and provide
repeatable techniques for performing the steps. The goal of ALMA is to provide a
structured approach for evaluating three aspects of the maintainability of software
architectures, i.e., maintenance prediction, risk assessment, and software
architecture comparison. ALMA is an evaluation method that follows SAAM in its
organization. The method specifies five steps:

1. determine the goal of the evaluation,

2. describe the software architecture,

3. elicit a relevant set of scenarios,

4. evaluate the scenarios, and

5. interpretation of the results and draw conclusions from them.

The method provides more detailed descriptions of the steps involved in the process than
SAAM does, and tries to make it easier to repeat evaluations and compare different
architectures. It makes use of structural metrics and bases the evaluation of the scenarios
on quantification of the architecture.

4. RARE/ARCADE

RARE and ARCADE are part of a toolset called SEPA (Software
Engineering Process Activities). RARE (Reference Architecture Representation
Environment) is used to specify the software architecture and ARCADE is used for
simulation-based evaluation of it. The goal is to enable automatic simulation and
interpretation of a software architecture that has been specified using the RARE
environment. An architecture description is created using the RARE environment. The
architecture description together with descriptions of usage scenarios are used as input
to the ARCADE tool. ARCADE then interprets the description and generates a simulation
model. The simulation is driven by the usage scenarios. RARE is able to perform static
analysis of the architecture, e.g., coupling. ARCADE makes it possible to evaluate
dynamic attributes such as performance and reliability of the architecture. The RARE and
ARCADE tools are tightly integrated to simplify an iterative refinement of the software
architecture. The method has, as far as we know, only been used by the authors.

5. Argus-I

Argus-I is a specification-based evaluation method. Argus-I makes it possible to evaluate a number of aspects of an architecture design. It is able to perform structural analysis, static behavioural analysis, and dynamic behavioural analysis of components. It is also possible to perform dependence analysis, interface mismatch, model checking, and simulation of an architecture.

6. LQN — Layered Queuing Networks

Layered queuing network models are very general and can be used to evaluate many
types of systems. The model describes the interactions between components in the
architecture and the processing times required for each interaction. The creation of
the models requires detailed knowledge of the interaction of the components, together
with behavioural information, e.g., execution times or resource requirements. The
execution times can either be identified by, e.g., measurements, or estimated. The more
detailed the model is, the more accurate the simulation result will be. The goal when using
a queuing network model is often to evaluate the performance of a software architecture
or a software system. Important measures are usually response times, throughput,
resource utilization, and bottleneck identification.
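The performance measures named above can be illustrated with a minimal single-server queuing calculation — an M/M/1-style sketch, far simpler than a full layered queuing network; the arrival rate and service time used here are hypothetical:

```python
def mm1_metrics(arrival_rate: float, service_time: float) -> dict:
    """Basic single-server queuing formulas (M/M/1 sketch).

    arrival_rate: requests per second entering the component.
    service_time: seconds of processing each request needs.
    """
    utilization = arrival_rate * service_time            # fraction of time busy
    if utilization >= 1.0:
        raise ValueError("system is saturated: a bottleneck")
    response_time = service_time / (1.0 - utilization)   # queuing wait + service
    throughput = arrival_rate                            # stable system: out = in
    return {"utilization": utilization,
            "response_time": response_time,
            "throughput": throughput}

# 8 requests/s against a 0.1 s service time: 80% utilized, and queuing
# delay stretches the response time well beyond the bare service time.
metrics = mm1_metrics(arrival_rate=8.0, service_time=0.1)
```

A full LQN extends this idea to layered chains of such servers, but the way utilization drives response time and exposes bottlenecks is the same.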

7. SAM

SAM is a formal systematic methodology for software architecture specification and analysis. SAM is mainly targeted at analysing the correctness and performance of a
system. SAM has two major goals. The first goal is the ability to precisely define software
architectures and their properties, and then perform formal analysis of them using formal
methods. Further, SAM also supports an executable software architecture specification
using time Petri nets and temporal logic. The second goal is to facilitate scalable software
architecture specification and analysis, using hierarchical architectural decomposition.
8. EBAE

Empirically-Based Architecture Evaluation (EBAE) is described by Lindvall et al. in a case study of a redesign/reimplementation of a software system developed more or less in-house. The
main goal was to evaluate the maintainability of the new system as compared to the
previous version of the system. The paper outlines a process for empirically-based
software architecture evaluation. The paper defines and uses a number of architectural
metrics that are used to evaluate and compare the architectures. The basic steps in the
process are: select a perspective for the evaluation, define/select metrics, collect metrics,
and evaluate/compare the architectures. In this study the evaluation perspective was to
evaluate the maintainability, and the metrics were structure, size, and coupling. The
evaluations were done in a late development stage, i.e., when the systems already were
implemented. The software architecture was reverse engineered using source code
metrics.

9. ABAS — Attribute-Based Architectural Styles

Attribute-Based Architectural Styles (ABASs) build on the concept of architectural styles [9, 35], and extend it by associating a reasoning framework with an architectural style.
The method can be used to evaluate various quality attributes, e.g., performance or
maintainability, and is thus not targeted at a specific set of quality attributes. The reasoning framework for an architectural style can be qualitative or quantitative, and is based on
models for specific quality attributes. Thus, ABASs enable analysis of different quality
aspects of software architectures based on ABASs. The method is general and several
quality attributes can be analyzed concurrently, given that quality models are provided
for the relevant quality attributes. One strength of ABASs is that they can be used also
for architectural design.

10. SPE — Software Performance Engineering

Software performance engineering (SPE) is a general method for building performance into software systems. A key concept is that performance shall be taken into
consideration during the whole development process, not only evaluated or optimized
when the system already is developed.

SPE relies on two different models of the software system, i.e., a software execution
model and a system execution model. The software execution model models the
software components, their interaction, and the execution flow. In addition, key resource
requirements for each component can also be included, e.g., execution time, memory
requirements, and I/O operations. The software execution model predicts the
performance without taking contention for hardware resources into account. The system execution model is a model of the underlying hardware. Examples of hardware
resources that can be modelled are processors, I/O devices, and memory. Further, the
waiting time and competition for resources are also modelled. The software execution
model generates input parameters to the system execution model. The system execution
model can be solved by using either mathematical methods or simulations. The method
can be used to evaluate various performance measures, e.g., response times,
throughput, resource utilization, and bottleneck identification. The method is primarily targeted at performance evaluation. However, the authors argue that their method can
be used to evaluate other quality attributes in a qualitative way as well [39]. The method
has been used in several studies by the authors, but does not seem to have been used by
others.



Unit -III
Software Design
Software design is a process to transform user requirements into some suitable form,
which helps the programmer in software coding and implementation. It is a process to
conceptualize the software requirements into software implementation. For assessing
user requirements, SRS (Software Requirement Specification) document is created
whereas for coding and implementation, there is a need for more specific and detailed requirements in software terms. Software design takes the user requirements as challenges and tries to find an optimum solution. While the software is being
conceptualized, a plan is written out to find the best possible design for implementing
the intended solution.

Principles of Software Design


Software Design is a process to plan or convert the software requirements into the steps that need to be carried out to develop a software system. There are several
principles that are used to organize and arrange the structural components of Software
design. Software Designs in which these principles are applied affect the content and
the working process of the software from the beginning.



1. Should not suffer from “Tunnel Vision”

While designing the process, it should not suffer from “tunnel vision”, which means that it should not only focus on completing or achieving the aim but should also consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.

2. The design should be traceable to the analysis model.

The design process should be traceable to the analysis model, which means it should satisfy all the requirements captured during analysis in order to develop a high-quality product.

3. The design should not “Reinvent the Wheel”


The design process should not reinvent the wheel, which means it should not waste time or effort in creating things that already exist; doing so would needlessly increase overall development time.
4. The design should Minimize Intellectual distance
The design process should reduce the gap between real-world problems and
software solutions for that problem meaning it should simply minimize
intellectual distance.
5. The design should Exhibit uniformity and integration
The design should display uniformity which means it should be uniform
throughout the process without any change. Integration means it should mix or
combine all parts of software i.e. subsystems into one system.
6. The design should Accommodate change
The software should be designed in such a way that it accommodates the change
implying that the software should adjust to the change that is required to be done
as per the user’s need.
7. The design should Degrade gently
The software should be designed in such a way that it degrades gracefully which
means it should work properly even if an error occurs during the execution
or accommodate unusual circumstances.
8. The design should be assessed for quality
The design should be assessed for quality as it is being created not after the fact.
9. The design should be reviewed to minimize conceptual (semantic) errors
The design should be reviewed which means that the overall evaluation should
be done to check if there is any error present or if it can be minimized.
10. Design is not coding and coding is not design
Design means describing the logic of the program to solve any problem and
coding is a type of language that is used for the implementation of a design.

Modularization
Modularization is a technique to divide a software system into multiple discrete and
independent modules, which are expected to be capable of carrying out task(s) independently.
These modules may work as basic constructs for the entire software. Designers tend to design
modules such that they can be executed and/or compiled separately and independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and there are many other benefits attached to the modular design of software. As the number of modules grows, however, the effort associated with integrating the modules also grows.

Advantage of modularization:

• Smaller components are easier to maintain


• Program can be divided based on functional aspects
• Desired level of abstraction can be brought in the program
• Components with high cohesion can be re-used again
• Concurrent execution can be made possible
• Desired from security aspect
• Easy to understand the system.
• System maintenance is easy.
• A module can be reused many times as per requirements; there is no need to write it again and again.
• It allows large programs to be written by several or different people



Module-Level Concepts
Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation easier and faster. Independent modules are easier to maintain and test, reduce error propagation, and can be
reused in other programs as well. Thus, functional independence is a good design feature which
ensures software quality.

Functional Independence is measured using two criteria:

1) Coupling: It measures the relative interdependence among modules.


2) Cohesion: It measures the relative functional strength of a module.

1) Coupling: -

Two modules are considered independent if one can function completely without the presence
of other. Obviously, if two modules are independent, they are solvable and modifiable
separately. However, all the modules in a system cannot be independent of each other, as they
must interact so that together they produce the desired external behavior of the system. The
more connections between modules, the more dependent they are in the sense that more
knowledge about one module is required to understand or solve the other module. Hence, the
fewer and simpler the connections between modules, the easier it is to understand one without
understanding the other. The notion of coupling attempts to capture this concept of "how
strongly" different modules are interconnected.
Coupling is a measure of interdependence among modules. In general, the more we must
know about module A in order to understand module B, the more closely connected A is to B.
"Highly coupled" modules are joined by strong interconnections, while "loosely coupled"
modules have weak interconnections. Independent modules or uncoupled modules have no
interconnections.



The loosely coupled system has distributed memory, which lowers the data rate, whereas the tightly coupled system has shared memory, which increases the data rate.
A good design will have low coupling; a design with high coupling will have more errors. Loose coupling, on the other hand, minimizes the interdependence amongst modules.

Types of Coupling

Different types of coupling are content, common, external, control, stamp and data coupling. The strength of coupling, from lowest (best) to highest (worst), is: data, stamp, control, external, common, and content coupling.

Data coupling:

In this type of coupling, two modules interact by exchanging or passing data as parameters. The dependency between modules A and B is said to be data coupling if they communicate by only passing data. Other than communicating through data, the two modules are independent.
e.g., a “Retrieve customer address” module receives a Customer ID as its input parameter and returns the Customer Address; only these data values cross the module boundary.

• Two modules are data coupled if they communicate by passing parameters.
• It is good because maintenance is easier, and a good design has higher cohesion and weak coupling.
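The customer-address example can be sketched as two data-coupled functions that communicate only through simple values (the function names and the stand-in data store are illustrative, not from the text):

```python
def get_customer_address(customer_id: int) -> str:
    """Returns the address for a customer; depends on the caller only
    for a plain value (the customer ID)."""
    addresses = {101: "12 Park Lane", 102: "7 Hill Road"}  # stand-in data store
    return addresses[customer_id]

def print_shipping_label(customer_id: int) -> str:
    # Data coupling: only the customer_id value and the returned address
    # cross the module boundary; the modules are otherwise independent.
    address = get_customer_address(customer_id)
    return f"Ship to: {address}"
```

Because only plain values are exchanged, either function can be changed internally or reused elsewhere without affecting the other.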

Stamp Coupling:

Two modules are stamp coupled if they communicate via a passed data structure which contains
more information than necessary for the module to perform its function.

Here student records contain name, roll number, address, outside activities, medical
information, contact number, date of birth etc. in addition to academic performance
information. When we pass Student Record data structure to Calculate CGPA module we pass
so many unnecessary information in addition to required information i.e. academic
performance information.

Problem with stamp coupling



• It affects understanding: without reading the entire module it is not clear which fields of the record are accessed or changed.
• Not reusable; other products have to use the same higher-level data structure.
• Passes more data than necessary.

We can reduce stamp coupling by passing simple variables.
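The student-record example can be sketched as follows; passing the whole record is stamp coupling, while the second variant reduces it to data coupling by passing only what is needed (the record fields are illustrative):

```python
# A student record with far more fields than CGPA calculation needs.
student = {
    "name": "A. Kumar", "roll_no": 17, "address": "Hostel B",
    "medical_info": "none",
    "grades": [9, 8, 10],   # the only field the calculation actually uses
}

def calculate_cgpa_stamp(record: dict) -> float:
    # Stamp coupling: receives the full record but uses just one field,
    # so it is tied to the record's whole layout.
    return sum(record["grades"]) / len(record["grades"])

def calculate_cgpa_data(grades: list) -> float:
    # Data coupling: receives only the values it needs; reusable anywhere
    # a list of grades exists.
    return sum(grades) / len(grades)
```

The second function no longer breaks if unrelated fields of the student record change.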

Control Coupling:

Two modules are control coupled if they communicate using at least one “control flag”, i.e., Module A passes a flag to Module B that directs its internal logic.

e.g. When one module must perform operations in a fixed order, but the order is controlled elsewhere.
Problem with control coupling

• Modules are not independent; the calling module must know the internal structure and logic of the called module.
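A minimal sketch of control coupling, where a flag passed by the caller steers the callee's internal branching (the function and flag names are illustrative):

```python
def format_report(data: list, sort_first: bool) -> str:
    # Control coupling: the boolean flag forces the caller to know about
    # this module's internal branching before it can use the module.
    if sort_first:
        data = sorted(data)
    return ", ".join(str(x) for x in data)

# The caller must understand what the flag does inside format_report.
report = format_report([3, 1, 2], sort_first=True)
```

Splitting the module into two flag-free functions (one that sorts, one that does not) would remove the control coupling.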



External Coupling:

A form of coupling in which a module has a dependency on something external to the software being developed, or on a particular type of hardware. It is basically related to communication with external tools and devices, e.g., the OS, shared libraries, or the hardware. External coupling also occurs when two or more modules access the same global data variable (not a global data structure).

Problems:

• High potential for side effects
• Missing access control
• Modules are bound to the global structure

Common Coupling:

When multiple modules have read and write access to some global data (a global data structure), it is called common or global coupling. e.g., two modules (Module A and Module B) have access to the same database and can both read and write the same record; both modules can access and change the value of the global structure.

Problems:
• Difficult to reuse, because the modules depend on one another through the shared data.
• The resulting code is hard to read; one must read the entire module to understand it.
• Difficult to determine all the modules that affect a data element, which reduces maintainability: if we want to make changes in one module, we have to check all the modules that use this global data structure element.
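Common coupling can be sketched as two functions that both read and write one global structure (the global and its fields are hypothetical):

```python
# Global data structure shared by both modules: common coupling.
ACCOUNT = {"balance": 100}

def deposit(amount: int) -> None:
    ACCOUNT["balance"] += amount   # writes the shared global

def apply_fee() -> None:
    ACCOUNT["balance"] -= 5        # also writes the same global

# Any change to ACCOUNT's layout forces changes in every module
# that touches it, and either function can silently affect the other.
deposit(50)
apply_fee()
```

Passing the account explicitly as a parameter instead would turn this into data coupling.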

Content Coupling:

This type of coupling occurs when one module modifies local data or instructions in another module, e.g., class A jumps directly into the middle of class B’s code and starts running that functionality. This form of coupling should never be used; it is the worst coupling. e.g.

• One module changes a statement in another module (LISP)


• One module references or alters data contained inside another module (Pascal)
• One module branches into another module (Pascal)

Problem:
Almost any change to one module requires changes to the other module if they are content coupled.
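In Python, a comparable effect can be sketched by one module rewriting another class's internals at run time (the class and function names are hypothetical; this is an illustration of why content coupling is fragile, not a recommended practice):

```python
class Printer:
    """Stands in for a class defined in a separate module."""
    def render(self) -> str:
        return "v1"

def patch_printer() -> None:
    # Content coupling: this function reaches inside Printer and rewrites
    # its internals. Any change to Printer can break this patch, and the
    # patch silently changes the behaviour of every Printer instance.
    Printer.render = lambda self: "patched"

before = Printer().render()
patch_printer()
after = Printer().render()
```

After the patch, callers of `Printer` observe changed behaviour without any visible interface change — exactly the maintenance hazard described above.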

2. Cohesion

Cohesion is a measure of the degree to which the elements of the module are
functionally related. A strongly cohesive module implements functionality that is related to
one feature of solution and requires little or no interaction with other module. Basically,



cohesion is the internal glue that keeps the module together. A good software design will
have high cohesion.

Cohesion strengthens the bond between elements of the same module by maximizing the relationship between them. Cohesion is the concept that tries to capture this intra-module relationship. With cohesion, we are interested in determining how closely the elements of a module are related to each other. The cohesion of a module represents how tightly bound the internal elements of the module are to one another. Cohesion and coupling are clearly related: usually, the greater the cohesion of each module in the system, the lower the coupling between modules.

Types of cohesion:

Functional Cohesion

A module with functional cohesion focuses on exactly one goal or function. All of the elements of the module contribute to the performance of a single specific task; in other words, every essential element for a single computation is contained in the component. Mathematical subroutines such as calculating the CGPA or grade of a student, or calculating sales tax, are typical examples of functional cohesion.



e.g.

Calculate_sale_Tax
    If product is sale exempt then
        Sale_tax = 0
    Else
        If product_price < 50 then
            Sale_tax = product_price * 0.25
        Else
            If product_price < 100 then
                Sale_tax = product_price * 0.35
            Else
                Sale_tax = product_price * 0.5
            End if
        End if
    End if

It is considered to be the highest degree of cohesion, and it is highly desirable. Elements of a module with functional cohesion are grouped because they all contribute to a single well-defined function. Such a module can also be reused.
Functional cohesion is best because:

• More Reusable
• Corrective Maintenance easier
• Easier to extend product
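The pseudocode above translates directly into one single-purpose function — every statement serves the single task of computing the tax (the thresholds and rates are taken from the pseudocode; the parameter names are illustrative):

```python
def calculate_sale_tax(product_price: float, sale_exempt: bool = False) -> float:
    """Functionally cohesive: the whole module does exactly one job."""
    if sale_exempt:
        return 0.0
    if product_price < 50:
        return product_price * 0.25
    if product_price < 100:
        return product_price * 0.35
    return product_price * 0.5
```

Because it does one thing and depends only on its parameters, the function is easy to test and reuse.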

Sequential Cohesion

Sequential cohesion occurs when a module contains elements that depend on the processing output of a previous element: the module's operations must be performed in a specific order, with the output of one operation being the input to the next (e.g., searching for a value in data and then processing it). Sequential cohesion is acceptable. It is stronger than communicational cohesion because it is more problem-oriented. Its weakness lies only in the fact that the module may perform multiple functions or fragments of functions.
Problem:

• Fixed combination and order of tasks



• Bad reusability and maintainability
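A sequential-cohesion sketch, where each operation consumes the previous one's output (the processing steps are illustrative):

```python
def clean_scores(raw: str) -> list:
    """Sequentially cohesive: parse -> filter -> scale, in a fixed order."""
    parsed = [int(tok) for tok in raw.split(",")]   # output of step 1...
    valid = [s for s in parsed if 0 <= s <= 100]    # ...is the input to step 2
    return [s / 100 for s in valid]                 # ...and then to step 3
```

The three steps cannot be reordered — the data must be parsed before it can be filtered, and filtered before it can be scaled — which is exactly the fixed-order weakness noted above.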

Communicational Cohesion

A communicationally cohesive module is one whose elements perform different functions, but each function references the same input or output information.
e.g. All operations that access the same data are defined within one class, like a student record class whose add, remove, and update functions access various fields of a student record. In this example the add, remove, and update functions perform different work but access the same data.
Problem:
Weak cohesion and lack of reusability
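The student-record example can be sketched as a class whose methods do different jobs but all operate on the same underlying data (the class and field names are illustrative):

```python
class StudentRecord:
    """Communicational cohesion: add, remove and update perform
    different work, but all of them access the same student data."""
    def __init__(self):
        self.fields = {}

    def add(self, key, value):
        self.fields[key] = value

    def remove(self, key):
        self.fields.pop(key, None)

    def update(self, key, value):
        if key in self.fields:
            self.fields[key] = value

record = StudentRecord()
record.add("name", "Asha")
record.add("roll_no", 5)
record.update("name", "Asha R.")
record.remove("roll_no")
```

The methods are grouped by shared data rather than by a single function, which is why this form is weaker than functional cohesion.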

Procedural Cohesion

A module has procedural cohesion if all the operations it performs are related to a sequence of
steps performed in the program.
e.g. A module has three functions

1) Read data from keyboard


2) Validate Input
3) Store answer in global variable

The functionality of the above three is different, but they must follow the sequence: data cannot be validated until it is input, and it cannot be stored in the global variable until it is validated.
e.g. Sequence in Report Module of Examination System

1) Calculate SGPA
2) Calculate CGPA
3) Print Student Record and CGPA

Procedures that are used one after another are kept together, even if one does not necessarily provide input to the next.
Problem:

• Actions are still weakly connected


• Not Reusable
• Here elements are related only by sequence; otherwise the activities are unrelated.
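The three-step read/validate/store module above can be sketched as one procedurally cohesive function; the steps differ in purpose but must run in a fixed order (keyboard input is simulated with a string parameter, and the global store is illustrative):

```python
ANSWERS = []  # the global variable from step 3 of the example

def process_input(raw: str) -> bool:
    """Procedural cohesion: read, validate and store are distinct tasks,
    held together only by the order in which they must execute."""
    value = raw.strip()            # 1) read (simulated; not a real keyboard)
    if not value.isdigit():        # 2) validate
        return False
    ANSWERS.append(int(value))     # 3) store in the global variable
    return True
```

Each step could serve other purposes elsewhere, which is why the grouping is weaker than functional cohesion.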
Temporal Cohesion

Here elements of a component are related by timing. A module has temporal cohesion when it performs a series of operations related in time, i.e., operations that must all be performed around the same time.
Functions that are related by time are all placed in the same module. e.g., in a security system, the alarm and the automatic telephone dialing unit are both placed in the same module because they are related by time: as soon as the alarm rings, the automatic telephone dialer gets connected, so both must be activated at the same time.
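The security-system example can be sketched as one handler that bundles functionally unrelated actions because they must all happen at the same moment (the action names are illustrative):

```python
def on_alarm() -> list:
    """Temporal cohesion: these actions are unrelated in function, but
    they are grouped because they all run when the alarm is triggered."""
    actions = []
    actions.append("ring alarm bell")
    actions.append("dial security phone number")
    actions.append("log alarm timestamp")
    return actions
```

Ringing a bell, dialing a phone, and logging have nothing in common functionally — only their timing binds them into one module.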

Logical Cohesion

In logical cohesion, several logically related functions or data elements are placed in the same module or component. Here the elements of the component are related logically but not functionally: several logically related elements are in the same component, and one of the elements is selected by the caller.
e.g. Module Display Record
Display_Record
    If record type is student then
        Display Student Record
    Else if record type is staff then
        Display Staff Record
    End if
End
They are logically related by the display task, so they are placed in the same component, and which type of record is to be displayed depends on the caller. e.g., built-in library functions are placed in different header files according to logical cohesion: all the input and output functions are placed in stdio.h and all mathematical functions in math.h. The different functions perform different types of tasks, and their calling depends upon the caller.
Problem:
Logical cohesion can be bad because you end up grouping functionality by technical characteristics rather than functional characteristics.
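The Display_Record pseudocode corresponds to a dispatch on a type tag chosen by the caller (the record shapes are illustrative):

```python
def display_record(record_type: str, record: dict) -> str:
    # Logical cohesion: both branches "display" something, but which
    # branch runs is selected entirely by the caller via record_type.
    if record_type == "student":
        return f"Student: {record['name']} (roll {record['roll_no']})"
    if record_type == "staff":
        return f"Staff: {record['name']} ({record['dept']})"
    raise ValueError("unknown record type")
```

Splitting this into `display_student` and `display_staff` would remove the selector argument and raise the cohesion of each piece.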

Coincidental Cohesion

The elements of a module are essentially unrelated by any common function, procedure, data
or anything.
e.g. File Processing Module
File_Processing
    Open Employee update file
    Read Employee Record
    Print Page Heading
    Open Employee Master File
    Set Page Count to One
End
This is the weakest form of cohesion: the elements have no meaningful relationship.
Problem:
Difficult to maintain and understand, and not reusable
Solution: Break the module into separate modules, each performing one task

Difference Between Loosely Coupled and Tightly Coupled Systems (Tabular Form)

Basis of Comparison | Loosely Coupled Systems | Tightly Coupled Systems
Memory Concept | Loosely coupled systems have a distributed memory concept. | Tightly coupled systems have a shared memory concept.
Interconnection | The interconnection network in a loosely coupled system is the Message Transfer System (MTS). | The interconnections in a tightly coupled system are the processor-memory interconnection network (PMIN), the I/O-processor interconnection network (IOPIN) and the interrupt-signal interconnection network (ISIN).
Data Rate | The data rate of the loosely coupled system is low. | The data rate of the tightly coupled system is high.
Cost | The loosely coupled system is less expensive but larger in size. | The tightly coupled system is more expensive but compact in size.
Efficiency | Efficient when the tasks running on different processors have minimal interaction between them. | Can take a higher degree of interaction between processes; efficient for high-speed and real-time processing.
Application | Widely used in distributed computing systems. | Widely used in parallel processing systems.
Throughput | Throughput in this type of system is low. | Throughput in this type of system is high.
Power | Power consumption is high. | Power consumption is low.
Cache Memory | Each processor has its own cache memory. | System cache memory assigns processes according to the need of processing.
Security | Security is low in this type of system. | Security is high.
Operating System | It operates on multiple operating systems. | It operates on a single operating system.
Scalability | It has high scalability. | It has low scalability.
Delay | It has high delay. | It has low delay.



Software Design Approaches
A good system design organizes the program modules in such a way that they are easy to develop and change. Structured design techniques help developers to deal with the size and
complexity of programs. Analysts create instructions for the developers about how code
should be written and how pieces of code should fit together to form a program. It is important
for two reasons:

1. If any pre-existing code needs to be understood, organized and pieced together.
2. It is common for the project team to have to write some code and produce
original programs that support the application logic of the system.

There are many strategies or techniques for performing system design. They are:

Bottom-up approach:

The design starts with the lowest level components and subsystems. By using these
components, the next immediate higher level components and subsystems are created
or composed. The process is continued till all the components and subsystems are
composed into a single component, which is considered as the complete system. The
amount of abstraction grows high as the design moves to more high levels.
When a new system needs to be created by using the basic information of an existing system, the bottom-up strategy suits the purpose.

Advantages:

• Economies can result when general solutions can be reused. If a system is to be built from an existing system, this approach is more suitable, since the basic primitives can be used in the newer system.
• It can be used to hide the low-level details of implementation and be
merged with top-down technique.

Disadvantages:

• It is not so closely related to the structure of the problem.


• High quality bottom-up solutions are very hard to construct.
• If we get it wrong, we will find at a higher level that the system is not as per requirements, and then we have to redesign at a lower level.

Top-down approach:

Each system is divided into several subsystems and components. Each of the subsystem
is further divided into set of subsystems and components. This process of division
facilitates in forming a system hierarchy structure. The complete software system is
considered as a single entity and in relation to the characteristics, the system is split into
sub-system and component. The same is done with each of the sub-system.
This process is continued until the lowest level of the system is reached. The design is
started initially by defining the system as a whole and then keeps on adding definitions
of the subsystems and components. When all the definitions are combined together, it
turns out to be a complete system.
For the solutions of the software need to be developed from the ground level, top- down
design best suits the purpose.



Advantages:

• The main advantage of the top-down approach is that its strong focus on requirements helps to make the design responsive to those requirements.
• Top-down design is more suitable when the software solution needs to
be designed from scratch and specific details are unknown.

Disadvantages:

• Project and system boundaries tend to be application-specification-oriented. Thus it is more likely that the advantages of component reuse will be missed.
• The system is likely to miss, the benefits of a well-structured, simple
architecture.

Hybrid Design:

It is a combination of both the top – down and bottom – up design strategies. In this we
can reuse the modules.
Pure top-down or pure bottom-up approaches are often not practical. For a bottom-up approach to be successful, we must have a good notion of the top towards which the design should be heading. Without a good idea of the operations needed at the higher layers, it is difficult to determine what operations the current layer should support.
For top-down approach to be effective, some bottom-up approach is essential for the
following reasons:

• To permit common sub modules


• Near the bottom of the hierarchy, intuition is simpler and the need for bottom-up testing is greater, because there are more modules at the low levels than at the higher levels.
• In the use of pre-written library modules, in particular, reuse old modules.

The hybrid approach became really popular after the acceptance of module reusability. Standard libraries, Microsoft Foundation Classes (MFC), and object-oriented concepts are steps in this direction.

Function Oriented Design


The design activity begins when the SRS document for the software to be developed is available. The design process for a software system often has two levels. At the first level the focus is on deciding which modules are needed for the system, the specifications of these modules, and how the modules should be interconnected. This is called the system design or top-level design. At the second level, the internal design of the modules, or how the specifications of each module can be satisfied, is decided. This design level is often called detailed design or logic design.
Function oriented design is an approach to software design where the design is decomposed into a set of interacting units, where each unit has a clearly defined function. Thus the system is
designed from a functional viewpoint. These functions are capable of performing significant
task in the system. The system is considered as top view of all functions.
Function oriented design inherits some properties of structured design where divide and
conquer methodology is used.
This design mechanism divides the whole system into smaller functions, which provides
means of abstraction by concealing the information and their operation. These functional
modules can share information among themselves by means of information passing and using
information available globally.



Generic Procedure:

Start with a high-level description of what the software/program does. Refine each part of the description one by one by specifying in greater detail the functionality of each part. This leads to a top-down structure.

Problem in Top-Down design method:

Mostly each module is used by at most one other module and that module is called its Parent
module.

Solution to the problem:

Design reusable modules, i.e., modules that can be used by several other modules to perform their required functions.



Function Oriented Design Strategies or Design Notations:

Function Oriented Design Strategies are as follows:

1. Data Flow Diagram (DFD):

A data flow diagram (DFD) maps out the flow of information for any process or system. It
uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data
inputs, outputs, storage points and the routes between each destination.

2. Data Dictionaries:

Data dictionaries are simply repositories that store information about all data items defined
in DFDs. At the requirements stage, data dictionaries contain the data items. A data dictionary
entry includes the name of the item, aliases (other names for the item), description/purpose,
related data items, range of values, and data structure definition/form.
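For illustration, one data-dictionary entry might be recorded as a simple record; the `order_id` item and its values below are hypothetical:

```python
# A minimal data dictionary keyed by item name; the fields follow the
# list above (aliases, description, related items, range, structure).
data_dictionary = {
    "order_id": {
        "aliases": ["order_no"],
        "description": "Unique identifier assigned to a customer order",
        "related_items": ["customer_id", "order_date"],
        "range_of_values": "1 to 999999",
        "structure": "integer",
    }
}

entry = data_dictionary["order_id"]
print(entry["aliases"])  # -> ['order_no']
```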

3. Structure Charts:

A structure chart represents the hierarchical structure of modules. It breaks the entire
system down into the lowest-level functional modules and describes the functions and
sub-functions of each module of the system in greater detail. A structure chart partitions the
system into black boxes (the functionality of the system is known to the users, but the inner
details are unknown). Inputs are given to the black boxes and appropriate outputs are generated.
Modules at the top level call modules at lower levels. Components are read from top to bottom
and left to right. When a module calls another, it views the called module as a black box,
passing the required parameters and receiving results.

Symbols used in construction of structured chart

1. Module

It represents the process or task of the system. It is of three types.

• Control Module: A control module branches to more than one sub-module.
• Sub-Module: A sub-module is a module which is part (a child) of another
module.
• Library Module: Library modules are reusable and can be invoked from any
module.

2. Conditional Call

It represents that the control module can select any one of its sub-modules on the basis of some
condition.

3. Loop (Repetitive call of module)

It represents the repeated invocation of one or more sub-modules by a module.

A curved arrow represents the loop in the module. All the sub-modules covered by the loop
are executed repeatedly.

4. Data Flow

It represents the flow of data between the modules. It is represented by a directed arrow with
an empty circle at the end.

5. Control Flow

It represents the flow of control between the modules. It is represented by a directed arrow with
a filled circle at the end.

6. Physical Storage

Physical storage represents where all the system's information is stored.
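The calling patterns these symbols describe can be sketched in code (the module names are invented): a control module selects between sub-modules on a condition, calls a sub-module repeatedly in a loop, and both sub-modules reuse a library module:

```python
def log(msg):
    """Library module: reusable, invokable from any module."""
    return f"[log] {msg}"

def read_input(record):
    """Sub-module (child of the control module)."""
    return log(f"read {record}")

def write_output(record):
    """Another sub-module; also reuses the library module."""
    return log(f"wrote {record}")

def control(records, mode):
    """Control module: branches to more than one sub-module."""
    results = []
    for r in records:              # repetitive (loop) call of a sub-module
        if mode == "read":         # conditional call: selection by condition
            results.append(read_input(r))
        else:
            results.append(write_output(r))
    return results

print(control(["a", "b"], "read"))  # -> ['[log] read a', '[log] read b']
```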



Example : Structure Chart for an Email server

4. Pseudo Code:

Pseudo-code is an informal way to express the design of an algorithm. Pseudocode often
uses the structural conventions of a normal programming language, but is intended for human
reading rather than machine reading. Pseudo-code describes the system in short English-like
phrases, using keywords and indentation. Pseudo-codes are often used as a replacement for
flow charts, and they decrease the amount of documentation required. Pseudocode generally
does not obey the syntax rules of any particular language; there is no systematic standard form.
procedure bubbleSort( list : array of items, size )
    for i = 0 to size-1 do
        for j = 0 to size-i-2 do
            if list[j] > list[j+1] then
                swap( list[j], list[j+1] )
            end if
        end for
    end for
end procedure
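For comparison, here is a runnable Python version of the same algorithm; this translation is illustrative and is not part of the original pseudocode:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    size = len(items)
    for i in range(size - 1):
        # After pass i, the largest remaining value has bubbled to the end,
        # so the last i positions already hold their final values.
        for j in range(size - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # -> [1, 2, 4, 5, 8]
```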



Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects
(i.e., entities). The state is distributed among the objects, and each object handles its
state data. Object Oriented Design (OOD) is defined as the process of planning a
system of interacting objects for the purpose of solving a software problem.
Object Oriented Design (OOD) serves as part of the object-oriented programming
(OOP) process or lifecycle. It is mainly the process of using an object methodology to
design a computing system or application. This technique enables the implementation
of software based on the concept of objects. Additionally, it forces
programmers to plan out their code in order to produce a better-flowing program.
The origins of Object Oriented Design (OOD) are debated, but the first languages that
supported it included Simula and Smalltalk. The term did not become popular until
Grady Booch wrote the first paper titled Object-Oriented Design, in 1982. The chief
objective of this type of software design is to define the classes and their relationships,
which are needed to build a system that meets the requirements contained in the
Software Requirement Specifications.
Moreover, it is the discipline of defining the objects and their interactions to solve a
problem that was identified and documented during the Object Oriented Analysis
(OOA). In short, Object Oriented Design (OOD) is a method of design encompassing
the process of object oriented decomposition and a notation for depicting both logical
and physical models of the system under design.

The other characteristics of Object Oriented Design are as follow:

• Objects are abstractions of real-world or system entities and manage
themselves.
• Objects are independent and encapsulate their state and representation
information.
• System functionality is expressed in terms of object services.
• Shared data areas are eliminated.
• Communication between objects is through message passing.
• The objects may be distributed and may execute sequentially or in parallel.



Process of Object Oriented Design:

Understanding the process of any type of software related activity simplifies its
development for the software developer, programmer and tester. Whether you are
executing functional testing, or making a test report, each and every action has a
process that needs to be followed by the members of the team. Similarly, Object
Oriented Design (OOD) too has a defined process, which if not followed rigorously,
can affect the performance as well as the quality of the software. Therefore, to assist
the team of software developers and programmers, here is the process of Object
Oriented Design (OOD):

1. To design classes and their attributes, methods, associations, structures, and
even protocols, design principles are applied.
o The static UML class diagram is refined and completed by adding
details.
o Attributes are refined.
o Protocols and methods are designed by utilizing a UML activity diagram
to represent each method's algorithm.
o If required, redefine associations between classes, and refine class
hierarchy and design with inheritance.
o Iterate and refine again.
2. Design the access layer.

o Create mirror classes i.e., for every business class identified and created,
create one access class.

3. Identify access layer class relationship.


4. Simplify classes and their relationships. The main objective here is to eliminate
redundant classes and structures.
5. Iterate and refine again.
6. Design the view layer classes.
o Design the macro level user interface, while identifying the view layer
objects.
o Design the micro level user interface.
o Test usability and user satisfaction.
o Iterate and refine.
7. At the end of the process, iterate the whole design. Re-apply the design
principle, and if required repeat the preceding steps again.

Concepts of Object Oriented Design:

In Object Oriented Design (OOD), the technology-independent concepts in the
analysis model are mapped onto implementing classes, constraints are identified, and
the interfaces are designed, which results in a model for the solution domain. In short,
a detailed description is constructed to specify how the system is to be built on
concrete technologies. Moreover, Object Oriented Design (OOD) follows some
concepts to achieve these goals, each of which has a specific role and carries a lot of
importance. These concepts are defined in detail below:

1. Objects: All entities involved in the solution design are known as objects. For
example, persons, banks, companies, and users are considered objects. Every
entity has some attributes associated with it and some methods to perform
on those attributes.
2. Classes: A class is a generalized description of an object. An object is an
instance of a class. A class defines all the attributes, which an object can have
and methods, which represents the functionality of the object.

3. Encapsulation: This is the tight coupling or association of a data structure with the
methods or functions that act on that data. The combination is basically known as a class, or
object (an object is often the implementation of a class).
4. Data Protection: The ability to protect some components of the object from
external entities. This is realized by language keywords to enable a variable to
be declared as private or protected to the owning class.
5. Inheritance: This is the ability of a class to extend or override the functionality
of another class. The so-called child class inherits everything defined in the parent
class and then adds its own set of functions and data.

6. Interface: A definition of the functions or methods, and their signatures, that are
available for use to manipulate a given instance of an object.
7. Polymorphism: OOD languages provide a mechanism whereby methods
performing similar tasks but varying in arguments can be assigned the same
name. This is known as polymorphism, and it allows a single interface to
perform functions for different types. Depending upon how the service is
invoked, the respective portion of the code gets executed.
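A brief Python sketch tying several of these concepts together; the `Account` example is invented for illustration:

```python
class Account:
    """A class: a generalized description; each instance is an object."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance  # data protection: leading underscore marks it private by convention

    def deposit(self, amount):
        """Encapsulation: methods are bundled with the data they act on."""
        self._balance += amount

    def interest(self):
        """Part of the class's interface; base accounts earn nothing."""
        return 0.0

class SavingsAccount(Account):
    """Inheritance: extends Account and overrides interest()."""
    def interest(self):
        # Polymorphism: same method name, behavior depends on the actual type.
        return self._balance * 0.03

accounts = [Account("Ann", 100), SavingsAccount("Bob", 200)]
rates = [a.interest() for a in accounts]  # single interface, different behaviors
```

Calling `interest()` through the common interface yields `0.0` for the plain account and the savings rate for the subclass, without the caller knowing the concrete type.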

Analyze and Design Object Oriented System

The various steps in the analysis and design of an object-oriented system are
given in the figure below:



UML-Building Blocks

UML is composed of three main building blocks: things, relationships, and
diagrams. These building blocks combine to form complete UML model diagrams
and play an essential role in developing them.
The basic UML building blocks are listed below:

1. Things
2. Relationships
3. Diagrams
1. Things

Anything that is a real-world entity or object is termed a thing. Things can be
divided into several categories:

a) Structural things
b) Behavioral things
c) Grouping things
d) Annotational things

a) Structural things

Structural things are the nouns of a model, depicting its static parts.
They display the physical and conceptual components, and
include the class, object, interface, node, collaboration, component, and use
case.



Class: A class is a set of objects with similar attributes and operations; it outlines the
functionality and properties of an object. It can also represent an abstract class whose
functionalities are not defined. Its notation is as follows:

Object: An individual entity that describes the behavior and the functions of a
system. The notation of the object is similar to that of the class; the only
difference is that the object name is always underlined. Its notation is
given below:

Interface: A set of operations that describes the functionality of a class,
and that is implemented whenever the interface is implemented.



Collaboration: It represents the interaction between things that is done
to meet the goal. It is symbolized as a dotted ellipse with its name written
inside it.

Use case: The use case is a core concept of object-oriented modeling. It
portrays a set of actions executed by a system to achieve a goal.

Actor: It comes under the use case diagrams. It is an object that interacts
with the system, for example, a user.



Component: It represents the physical part of the system.

Node: A physical element that exists at run time.

Behavioral Things

They are the verbs that encompass the dynamic parts of a model, depicting the
behavior of a system. They involve the state machine, activity diagram, and
interaction diagram.
State Machine: It defines the sequence of states that an entity goes through during
its lifetime in response to events. It keeps a record of the several distinct states of a
system component.

Activity Diagram: It portrays all the activities accomplished by different entities of
a system. It is represented similarly to a state machine diagram and consists
of an initial state, a final state, decision boxes, and action notations.

Interaction Diagram: It is used to envision the flow of messages between the several
components of a system.



Grouping Things

It is a mechanism that binds the elements of the UML model together. In UML, the
package is the only thing used for grouping.
Package: The package is the only thing available for grouping behavioral and
structural things.

Annotation Things

It is a mechanism that captures the remarks, descriptions, and comments of UML
model elements. In UML, the note is the only annotational thing.
Note: It is used to attach constraints, comments, and rules to the elements
of the model. It is a kind of yellow sticky note.
Relationships
It illustrates the meaningful connections between things. It shows the association
between the entities and defines the functionality of an application. There are four
types of relationships given below:
Dependency: Dependency is a kind of relationship in which a change in target
element affects the source element, or simply we can say the source element is
dependent on the target element. It is one of the most important notations in UML. It
depicts the dependency from one entity to another.
It is denoted by a dotted line followed by an arrow at one side as shown below,

Association: A set of links that associates the entities of a UML model. It tells
how many elements are actually taking part in forming the relationship.
It is denoted by a solid line connecting the elements on both sides; arrowheads
may be added to show navigability.

Generalization: It portrays the relationship between a general thing (a parent class
or superclass) and a specific kind of that thing (a child class or subclass). It is used
to describe the concept of inheritance.
It is denoted by a straight line with an empty arrowhead at one side, pointing to the parent.

Realization: It is a semantic relationship between two things, where one
defines the behavior to be carried out and the other implements the mentioned
behavior. It typically exists between an interface and the class that realizes it.
It is denoted by a dotted line with an empty arrowhead at one side.

Diagrams
The diagrams are the graphical realization of the models, incorporating
symbols and text. Each symbol has a different meaning in the context of a UML
diagram. There are thirteen different types of UML diagrams available in
UML 2.0, and each diagram has its own set of symbols. Each diagram
manifests a different dimension, perspective, and view of the system.
UML diagrams are classified into three categories that are given below:

1. Structural Diagram
2. Behavioral Diagram
3. Interaction Diagram

Structural Diagram: It represents the static view of a system by portraying its
structure. It shows the several objects residing in the system. The
structural diagrams are given below:

o Class diagram
o Object diagram
o Package diagram
o Component diagram
o Deployment diagram

Behavioral Diagram: It depicts the behavioral features of a system and deals with
the dynamic parts of the system. It encompasses the following diagrams:

o Activity diagram
o State machine diagram
o Use case diagram

Interaction diagram: It is a subset of the behavioral diagrams. It depicts the interaction
between two objects and the data flow between them. The following are the
interaction diagrams in UML:

o Timing diagram
o Sequence diagram
o Collaboration diagram



Object Oriented Methodology

Object-oriented design (OOD) is the process of using an object-oriented methodology
to design a computing system or application. This technique enables the
implementation of a software solution based on the concept of objects.
OOD serves as part of the object-oriented programming (OOP) process or lifecycle.

It is a system development approach that encourages and facilitates the re-use of
software components. It employs the international standard Unified Modelling Language
(UML) from the Object Management Group (OMG). Using this methodology, a system
can be developed on a component basis, which enables effective re-use of existing
components and facilitates the sharing of components with other systems.
Object-oriented methodology asks the analyst to determine: What are the objects of the
system? What responsibilities and relationships does an object have with the other
objects? And how do they behave over time?
There are three types of Object Oriented Methodologies
1. Object Modelling Techniques (OMT)
2. Object Process Methodology (OPM)
3. Rational Unified Process (RUP)

1. Object Modelling Techniques (OMT)


It was one of the first object-oriented methodologies and was introduced by Rumbaugh
in 1991.
OMT uses three different models that are combined in a way that is analogous to the
older structured methodologies.



a. Analysis
The main goal of the analysis is to build models of the miniworld, i.e., the portion of the real world that the system deals with.
The requirements of the users, developers and managers provide the information
needed to develop the initial problem statement.
b. OMT Models
I. Object Model
• It depicts the object classes and their relationships as a class
diagram, which represents the static structure of the system.
• It observes all the objects as static and does not pay any attention to
their dynamic nature.
II. Dynamic Model
• It captures the behaviour of the system over time and the flow control
and events in the Event-Trace Diagrams and State Transition
Diagrams.
• It portrays the changes occurring in the states of various objects with
the events that might occur in the system.
III. Functional Model
• It describes the data transformations of the system.
• It describes the flow of data and the changes that occur to the data
throughout the system.



c. Design
• It specifies all of the details needed to describe how the system will be
implemented.
• In this phase, the details of the system analysis and system design are
implemented.
• The objects identified in the system design phase are designed.

2. Object Process Methodology (OPM)


It is also called a second-generation methodology. It was first introduced in 1995. It
has only one diagram that is the Object Process Diagram (OPD) which is used for
modelling the structure, function and behaviour of the system. It has a strong emphasis
on modelling but has a weaker emphasis on process. It consists of three main
processes:
I. Initiating:
It determines high level requirements, the scope of the system and the resources
that will be required.
II. Developing:
It involves the detailed analysis, design and implementation of the system.
III. Deploying:
It introduces the system to the user and subsequent maintenance of the system.

3. Rational Unified Process (RUP)


The Rational Unified Process (RUP) is an iterative software development process
framework created by the Rational Software Corporation, a division of IBM since
2003. It divides the development process into four phases which can be broken
down into iterations.
1. Inception - The idea for the project is stated. The development team
determines if the project is worth pursuing and what resources will be
needed.



2. Elaboration - The project's architecture and required resources are
further evaluated. Developers consider possible applications of the
software and costs associated with the development.
3. Construction - The project is developed and completed. The software is
designed, written, and tested.
4. Transition - The software is released to the public. Final adjustments or
updates are made based on feedback from end users.

The RUP development methodology provides a structured way for companies to
envision and create software programs. Since it provides a specific plan for each step
of the development process, it helps prevent resources from being wasted and
reduces unexpected development costs.

Each iteration consists of nine work areas called disciplines.
The emphasis given to each discipline depends on the phase in which the iteration is taking place.
For each discipline, RUP defines a set of artefacts (work products), activities
(work undertaken on the artefacts), and roles (the responsibilities of the members
of the development team).

Objectives of Object Oriented Methodologies


• To encourage greater re-use.
• To produce a more detailed specification of system constraints.
• To have fewer problems.

Benefits of Object Oriented Methodologies


1. It closely represents the problem domain, which makes designs easier to produce
and understand.
2. It allows changes more easily.
3. It provides nice structures for thinking, abstracting and leads to modular design.
4. Simplicity:
• The software object's model complexity is reduced and the program
structure is very clear.
5. Reusability:
• It is a desired goal of all development process.
• It contains both data and functions which act on data.
• It makes easy to reuse the code in a new system.
• Messages provide a predefined interface to an object's data and
functionality.
6. Increased Quality:
• The increase in quality is largely a by-product of program reuse.
7. Maintainable:
• The OOP method makes code more maintainable.
• The objects can be maintained separately, making locating and fixing
problems easier.
8. Scalable:
• Object-oriented applications are more scalable than those built with the
structured approach.
• It makes easy to replace the old and aging code with faster algorithms and
newer technology.
9. Modularity:
• The OOD systems are easier to modify.
• It can be altered in fundamental ways without ever breaking up since
changes are neatly encapsulated.
10. Modifiability:
• It is easy to make minor changes in the data representation or the
procedures in an object-oriented program.
11. Client/Server Architecture:
• It involves the transmission of messages back and forth over a network.



Functional Modelling
Functional Modelling gives the process perspective of the object-oriented analysis
model and an overview of what the system is supposed to do. It defines the function of
the internal processes in the system with the aid of Data Flow Diagrams (DFDs). It
depicts the functional derivation of the data values without indicating how they are
derived when they are computed, or why they need to be computed.
Data Flow Diagrams: Function modelling is represented with the help of DFDs. A DFD is
a graphical representation of the flow of data; it shows the inputs, outputs, and processing
of the system. Whenever we build a business, website, system, or project, we need to find
out how information passes from one process to another, and this is what a DFD shows.
There can be a number of levels in a DFD, but up to the third level is usually sufficient
for understanding any system.
The basic components of the DFD are:

1. External Entity: An external entity gives information to and takes information from
the system. It is represented by a rectangle.
2. Data Flow: Data passing from one place to another is shown by a data flow. A data
flow is represented by an arrow with some information written over it.
3. Process: Also called the function symbol, it processes the information; any
calculations are done in the process part. It is represented by a circle with the name of
the process and the level of the DFD written inside it.
4. Data Store: It is used to store information and retrieve it later. It
is represented by double parallel lines.

Some Guidelines for creating a DFD:

1. Every process must have a meaningful name and number.
2. A level 0 DFD must have only one process.
3. Every data flow arrow is given a name.
4. The DFD should be logically consistent.
5. The DFD should be organised in such a way that it is easy to understand.
6. There should be no loops in the DFD.
7. Each DFD should not have more than 6 processes.
8. A process can connect only with processes, external entities, and data stores.
9. An external entity cannot be directly connected to another external entity.
10. A DFD is read left to right and top to bottom.
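As an illustrative sketch, some of these guidelines can even be checked mechanically; the encoding below is invented and covers only guidelines 7 and 9:

```python
# Node kinds for a DFD modeled as named nodes plus (source, target) flows.
PROCESS, ENTITY, STORE = "process", "external_entity", "data_store"

def check_dfd(nodes, flows):
    """nodes: {name: kind}; flows: [(source, target), ...]; returns violations."""
    errors = []
    processes = [n for n, kind in nodes.items() if kind == PROCESS]
    if len(processes) > 6:  # guideline 7: at most 6 processes per DFD
        errors.append("more than 6 processes")
    for src, dst in flows:
        if nodes[src] == ENTITY and nodes[dst] == ENTITY:  # guideline 9
            errors.append(f"entity-to-entity flow: {src} -> {dst}")
    return errors

nodes = {"Customer": ENTITY, "Process Order": PROCESS,
         "Orders": STORE, "Supplier": ENTITY}
flows = [("Customer", "Process Order"), ("Process Order", "Orders"),
         ("Customer", "Supplier")]  # the last flow violates guideline 9
print(check_dfd(nodes, flows))  # -> ['entity-to-entity flow: Customer -> Supplier']
```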



Dynamic Model
Dynamic modelling describes those aspects of the system that are concerned with time
and the sequencing of operations. It is concerned with temporal changes in the
states of the objects in a system. It is used to specify and implement the control aspects
of the system.
Static modelling captures the view of the system that does not change with time; dynamic
modelling captures the view of the system that does change with time.

Diagrams for Dynamic Modelling

There are two primary diagrams that are used for dynamic modelling −

1) Interaction Diagrams

Interaction diagrams describe the dynamic behaviour among different objects. An
interaction diagram comprises a set of objects, their relationships, and the messages that
the objects send and receive. Thus, an interaction models the behaviour of a group of
interrelated objects. The two types of interaction diagrams are −
• Sequence Diagram − It represents the temporal ordering of messages in a
tabular manner.
• Collaboration Diagram − It represents the structural organization of objects
that send and receive messages through vertices and arcs.

2) State Transition Diagram

State transition diagrams or state machines describe the dynamic behaviour of a
single object. A state transition diagram illustrates the sequence of states that an object
goes through in its lifetime, the transitions between the states, the events and conditions
causing the transitions, and the responses due to the events.
The main concepts are −
• State, which is the situation at a particular condition during the lifetime of an
object.
• Transition, a change in the state
• Event, an occurrence that triggers transitions



• Action, an uninterrupted and atomic computation that occurs due to some
event.
A state machine models the behaviour of an object as it passes through a number
of states in its lifetime due to some events as well as the actions occurring due to
the events. A state machine is graphically represented through a state transition
diagram.



States and State Transitions

State

The state is an abstraction given by the values of the attributes that the object
has at a particular time period. It is a situation occurring for a finite time period in
the lifetime of an object, in which it fulfils certain conditions, performs certain
activities, or waits for certain events to occur. In state transition diagrams, a state
is represented by rounded rectangles.

Parts of a state

• Name − A string differentiates one state from another. A state may not have
any name.
• Entry/Exit Actions − It denotes the activities performed on entering and on
exiting the state.
• Internal Transitions − The changes within a state that do not cause a change
in the state.
• Sub–states − States within states.

Initial and Final States

The default starting state of an object is called its initial state. The final state
indicates the completion of execution of the state machine. In state transition
diagrams, the initial state is represented by a filled black circle. The final state is
represented by a filled black circle encircled within another unfilled black circle.

Transition

A transition denotes a change in the state of an object. If an object is in a certain
state when an event occurs, the object may perform certain activities subject to
specified conditions and change its state; in this case, a state transition is said
to have occurred. The transition gives the relationship between the first state and
the new state. A transition is graphically represented by a solid directed arc from
the source state to the destination state.
The five parts of a transition are −
• Source State − The state affected by the transition.



• Event Trigger − The occurrence due to which an object in the source state
undergoes a transition if the guard condition is satisfied.
• Guard Condition − A Boolean expression which if True, causes a transition
on receiving the event trigger.
• Action − An un-interruptible and atomic computation that occurs on the
source object due to some event.
• Target State − The destination state after completion of transition.



Example
Suppose a person is taking a taxi from place X to place Y. The states of the
person may be: Waiting (waiting for taxi), Riding (he has got a taxi and is travelling
in it), and Reached (he has reached the destination). The following figure depicts
the state transition.

Events

Events are some occurrences that can trigger state transition of an object or a
group of objects. Events have a location in time and space but do not have a time
period associated with them. Events are generally associated with some actions.
Examples of events are mouse click, key press, an interrupt, stack overflow, etc.
Events that trigger transitions are written alongside the arc of transition in state
diagrams.
Example
Considering the example shown in the above figure, the transition from Waiting
state to Riding state takes place when the person gets a taxi. Likewise, the final
state is reached, when he reaches the destination. These two occurrences can
be termed as events Get_Taxi and Reach_Destination. The following figure
shows the events in a state machine.
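The taxi example can be sketched as a small state machine in Python; this is an illustrative encoding of the states and events named above:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("Waiting", "Get_Taxi"): "Riding",
    ("Riding", "Reach_Destination"): "Reached",
}

class Person:
    def __init__(self):
        self.state = "Waiting"  # initial state

    def handle(self, event):
        """Move to the next state if (state, event) is a valid transition."""
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]
        # events with no matching transition are ignored in this sketch

p = Person()
p.handle("Get_Taxi")            # Waiting -> Riding
p.handle("Reach_Destination")   # Riding -> Reached
print(p.state)                  # -> Reached
```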



External and Internal Events

External events are those events that pass from a user of the system to the
objects within the system. For example, mouse click or key−press by the user are
external events.
Internal events are those that pass from one object to another object within a
system. For example, stack overflow, a divide error, etc.

Deferred Events

Deferred events are those which are not immediately handled by the object in the
current state but are lined up in a queue so that they can be handled by the object
in some other state at a later time.

Event Classes

Event class indicates a group of events with common structure and behaviour.
As with classes of objects, event classes may also be organized in a hierarchical
structure. Event classes may have attributes associated with them, time being an
implicit attribute. For example, we can consider the events of departure of a flight
of an airline, which we can group into the following class −
Flight_Departs (Flight_No, From_City, To_City, Route)
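Such an event class might be sketched in Python as follows; the field values are hypothetical, and time is modeled as the implicit attribute mentioned above:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Flight_Departs:
    """Event class grouping flight-departure events (fields from the example)."""
    Flight_No: str
    From_City: str
    To_City: str
    Route: str
    # Time is an implicit attribute of every event instance.
    timestamp: float = field(default_factory=time.time)

ev = Flight_Departs("AI-101", "Delhi", "Mumbai", "DEL-BOM")
```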

Actions

Activity

Activity is an operation upon the states of an object that requires some time
period. They are the ongoing executions within a system that can be interrupted.
Activities are shown in activity diagrams that portray the flow from one activity to
another.

Action

An action is an atomic operation that executes as a result of certain events. By
atomic, it is meant that actions are un-interruptible, i.e., once an action starts
executing, it runs to completion without being interrupted by any event. An
action may operate upon the object on which an event has been triggered or on
other objects that are visible to that object. A set of actions comprises an activity.
Entry and Exit Actions

Entry action is the action that is executed on entering a state, irrespective of the
transition that led into it.
Likewise, the action that is executed while leaving a state, irrespective of the
transition that led out of it, is called an exit action.

Scenario

A scenario is a description of a specified sequence of actions. It depicts the
behaviour of objects undergoing a specific series of actions. Primary scenarios
depict the essential sequences, and secondary scenarios depict the
alternative sequences.

Examples of State Transition Diagram


Case Study:
You need to develop a web-based application in which users can search for other users
and, once a search is complete, send a friend request to another user. If the request is
accepted, both users are added to each other's friend lists. If a user does not accept the
friend request, the requesting user can send another request. Users can also block each
other.
Solution:
1. First of all, identify the objects that you will create during the development of
classes in OOP.
2. Identify the actions or events.
3. Identify the possible states of an object.
4. Draw the diagram.
Object: friends
Events or actions: Search to add a friend, add a friend, accept a friend, reject a friend,
again add, block user and close.
States: Start, the friend added, friend rejected, user blocked and end.
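The states and events listed above can be captured as a transition table; the sketch below is an illustrative model of the object's state machine only (the event and state names are hypothetical encodings of the case study, not part of the web application itself):

```python
# Transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    ("start", "search_to_add"):        "searching",
    ("searching", "add_friend"):       "request_sent",
    ("request_sent", "accept"):        "friend_added",
    ("request_sent", "reject"):        "friend_rejected",
    ("friend_rejected", "add_again"):  "request_sent",  # a second request is allowed
    ("request_sent", "block"):         "user_blocked",
    ("friend_added", "close"):         "end",
}

def fire(state, event):
    """Apply an event; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "start"
for ev in ["search_to_add", "add_friend", "reject", "add_again", "accept"]:
    state = fire(state, ev)
```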

Internal Classes and Operations
The classes identified so far are the ones that come from the problem domain. The
methods identified on the objects are the ones needed to satisfy all the interactions with
the environment and the user and to support the desired functionality. However, the
final design is a blueprint for implementation. Hence, implementation issues have to be
considered. While considering implementation issues, algorithm and optimization
issues arise. These issues are handled in this step.
First, each class is critically evaluated to see if it is needed in its present form in the final
implementation. Some of the classes might be discarded if the designer feels they are
not needed during implementation. Then the implementation of operations on the
classes is considered, for which rough algorithms might be sketched. While doing this,
a complex operation may get defined in terms of lower-level operations on simpler
classes. In other words, effective implementation of an operation may require heavy
interaction with some data structure, and that data structure may need to be considered
an object in its own right. The classes identified while considering such implementation
concerns are largely support classes, needed to store intermediate results or to model
some aspect of the object whose operation is to be implemented. The classes for these
objects are called container classes. Once the implementation of each class and each of
its operations has been considered, and the designer is satisfied that they can be
implemented, the system design is complete. The detailed design might also uncover
some very low-level objects, but most such objects should be identified during system
design.



Software Design

Software design is a process to transform user requirements into some suitable form,
which helps the programmer in software coding and implementation.
For recording user requirements, an SRS (Software Requirement Specification)
document is created, whereas coding and implementation need more specific and
detailed requirements in software terms. The output of the design process can be used
directly in implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle) that
moves the concentration from the problem domain to the solution domain. It tries to
specify how to fulfil the requirements mentioned in the SRS.

Software Design Levels

Software design yields three levels of results:

1) Architectural Design - The architectural design is the highest abstract version of
the system. It identifies the software as a system with many components
interacting with each other. At this level, the designers get an idea of the
proposed solution domain.
Architecture design will be focusing on:

o Identifying the right technology tools for implementation
o Sizing the hardware, planning for backup/restore and disaster recovery
o Topologies, security modelling, etc.
o Identifying the right patterns for design and development
o Processes for managing the environments (testing environment, production
environment)
o Mainly, architectural design focuses on the environment level.

2) High-level Design - The high-level design breaks the ‘single entity-multiple
component’ concept of architectural design into a less-abstracted view of sub-
systems and modules and depicts their interaction with each other. High-level
design focuses on how the system, along with all of its components, can be
implemented in the form of modules. It recognizes the modular structure of each
sub-system and the relations and interactions among them.

High-level design will be focusing on:

o Database design
o Brief mention of all the platforms, systems, services, and processes the
product would depend on
o Brief description of relationships between the modules and system
features
o Application-related components
o Interaction between the components
o Identifying the business case with the related users
o Identifying the flow of the business case process
o Mainly, high-level design focuses on the application level.

All the data flows, flowcharts, data structures, etc. are in these docs, so that
developers can understand how the system is expected to work with regard to
the features and the database design.

3) Detailed Design- Detailed design deals with the implementation part of what is
seen as a system and its sub-systems in the previous two designs. It is more
detailed towards modules and their implementations. It defines logical structure of
each module and their interfaces to communicate with other modules.

Detailed design is the specification of the internal elements of all major system
components, their properties, relationships, processing, and often their algorithms
and data structures.
According to the IEEE, detailed design is "the process of refining and expanding
the preliminary design phase (software architecture) of a system or component to
the extent that the design is sufficiently complete to be implemented."
During detailed design, designers go deep into each component to define its
internal structure and behavioural capabilities, and the resulting design leads to
natural and efficient construction of software.



Detailed design is closely related to architecture and construction; therefore,
successful designers (during detailed design) are required to have or acquire full
understanding of the system’s requirements and architecture.
They must also be proficient in particular design strategies (e.g., object-oriented),
programming languages, and methods and processes for software quality control.
Just as architecture provides the bridge between requirements and design,
detailed design provides the bridge between design and code.

KEY TASKS IN DETAILED DESIGN


The major tasks identified for carrying out the detailed design activity include:
1. Understanding the architecture and requirements
2. Creating detailed designs
3. Evaluating detailed designs
4. Documenting software design
5. Monitoring and controlling implementation

1. UNDERSTANDING THE ARCHITECTURE AND REQUIREMENTS


Unlike software architecture, where the complete set of requirements is evaluated
and well understood, designers during the detailed design activity focus on
requirements allocated to their specific components.

2. CREATING DETAILED DESIGNS

➢ After the architecture and requirements for assigned components are well
understood, the detailed design of software components can begin.

Detailed design consists of both structural and behavioural designs.

➢ When creating detailed designs, focus is placed on the following:

1. Interface Design - Internal & External
2. Graphical User Interface (GUI) Design
3. Internal Component Design
   • Structural
   • Behavioural
4. Data Design
   • Database

3. EVALUATING DETAILED DESIGNS

➢ The most popular technique for evaluating detailed designs involves Technical
Reviews. When conducting technical reviews, keep in mind the following:
• Send the review notice early enough for participants to thoroughly review
the design.
• Include a technical expert in the review team, as well as stakeholders of
your design.
• Include a member of the software quality assurance or testing team in the
review.
• During the review, focus on the important aspects of your designs; those
that show how your design helps meet functional and non-functional
requirements.
• Document the review process.
o Make sure that any action items generated during the review are
captured and assigned for processing.

4. DOCUMENTING DETAILED DESIGNS

➢ Documentation of a project’s software design is mostly captured in the
software design document (SDD), also known as the software design
description. The SDD is used widely throughout the development of the
software.
• Used by programmers, testers, maintainers, systems integrators, etc.
➢ Other forms of documentation include:
• Interface Control Document

Serves as a written contract between the components of the system software
as to how they communicate.



• Version Control Document

Contains information about what is included in a software release, including
the different files, scripts, and executables. Different versions of the design
correspond to specific software releases.
5. MONITORING AND CONTROLLING IMPLEMENTATION

➢ Monitor and control detailed design synchronicity (events and occurrences)
➢ Detailed design synchronicity is concerned with how well detailed designs
adhere to the software architecture and how well software code adheres to
the detailed design.
• Forward & backward traceability
• A low degree of synchronicity points to a flaw in the process and can lead
to software project failure.
➢ Particular attention needs to be paid when projects enter the maintenance
phase or when new engineers are brought into the project.
➢ Processes must be in place to ensure that overall synchronicity is high.



Program Design Language (PDL)
Program Design Language (or PDL, for short) is a method for designing and
documenting methods and procedures in software. It is related to pseudocode, but
unlike pseudocode, it is written in plain language without any terms that could suggest
the use of any programming language or library.

PDL was originally developed by the company Caine, Farber & Gordon and has been
modified substantially since they published their initial paper on it in 1975. It has been
described in some detail by Steve McConnell in his book Code Complete.
PDL is used to express the design in a language that is as precise and unambiguous
as possible without having too much detail and that can be easily converted into an
implementation.
PDL has the overall outer syntax of a structured programming language and the
vocabulary of a natural language.
PDL Example:
Consider the problem of reading records from a file. If file reading is not complete and
there is no error in the record, then print the information in the record; otherwise print
that there is an error in reading the record. This process continues till the whole file
is read.
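The PDL for this procedure (shown as a figure in the original) might look roughly like the following sketch, with a structured outer syntax and natural-language inner statements:

```
PROCEDURE print_file_records
    DO WHILE file reading is not completed
        read the next record from the file
        IF there is no error in the record THEN
            print the information of the record
        ELSE
            print that there is an error in reading the record
        ENDIF
    ENDDO
END
```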



The PDL contains the entire logic of the procedure but little about the details of
implementation in a particular language.
To implement this in a language, each of the PDL statements will have to be converted
into programming language statements.
PDL Constructs
The basic constructs of PDL are similar to those of a structured language. The following
are the constructs of PDL:
1. Sequence Construct: It is the simplest; statements are executed in the order
they are found in the procedure.

Sequence Construct
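A sketch of what the sequence construct looks like in PDL (the statements themselves are illustrative):

```
read the customer record
compute the total amount due
print the invoice
```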

2. If construct: The if construct is used to control the flow of execution down one
of two or more paths, depending on the result of a given condition.


if-then-else construct
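An illustrative sketch of the if-then-else construct in PDL:

```
IF the balance is sufficient THEN
    approve the withdrawal
ELSE
    reject the withdrawal
ENDIF
```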

3. Selection Constructs: The selection construct is used when the flow of
execution may flow down two or more paths. When there are many conditions and
the values are discrete, selection constructs are used. There are two or more
conditions in the selection construct. Each condition statement is an entry point,
and execution will continue from that point unless a break statement is used. The
break statement causes the program to continue from the end of the selection
construct. Only discrete values may be used for the conditions.
Selection Construct
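An illustrative sketch of a selection construct in PDL, with hypothetical transaction codes as the discrete values:

```
CASE OF transaction code
    WHEN "D": process a deposit
    WHEN "W": process a withdrawal
    OTHERWISE: report an invalid code
ENDCASE
```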

4. Repetition Constructs: The repetition construct is used when a block of code
is required to be executed repeatedly until a condition is met. The block executes
as long as the condition remains true.

Repetition construct
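An illustrative sketch of a repetition construct in PDL:

```
DO WHILE there are more items in the order
    read the next item
    add its price to the total
ENDDO
```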

Advantages of PDL:
• It can be embedded with source code, therefore easy to maintain.
• It enables declaration of data as well as procedures.
• It is the cheapest and most effective way to change program architecture.



Logic/Algorithm Design

The basic goal in detailed design is to specify the logic for the different modules that
have been specified during system design. Specifying the logic will require developing
an algorithm that will implement the given specifications. The term algorithm is quite
general and is applicable to a wide variety of areas. Essentially, an algorithm is a
sequence of steps that need to be performed to solve a given problem.
There are a number of steps that one has to perform while developing an algorithm.
• The starting step in the design of algorithms is statement of the problem. The
problem for which an algorithm is being devised has to be precisely and clearly
stated and properly understood by the person responsible for designing the
algorithm. For detailed design, the problem statement comes from the system
design. That is, the problem statement is already available when the detailed
design of a module commences.
• The next step is development of a mathematical model for the problem. In
modelling, one has to select the mathematical structures that are best suited for
the problem. It can help to look at other similar problems that have been solved.
In most cases, models are constructed by taking models of similar problems and
modifying the model to suit the current problem.
• The next step is the design of the algorithm. During this step the data structure
and program structure are decided.
• Once the algorithm is designed, its correctness should be verified.
Stepwise Refinement
The most common method for designing algorithms or the logic for a module is to use
the stepwise refinement technique.



An effective way to solve a complex problem is to break it down into successively
simpler subproblems. We start by breaking the whole task down into simpler parts.
Some of those tasks may themselves need subdivision. This process is called stepwise
refinement or decomposition.

• Start with the initial problem statement
• Break it into a few general steps
• Take each "step", and break it further into more detailed steps
• Keep repeating the process on each "step", until you get a breakdown that is pretty
specific, and can be written more or less in pseudocode
• Translate the pseudocode into real code

The stepwise refinement technique is a top-down method for developing detailed
design. To perform stepwise refinement, PDL is very suitable. Its formal outer syntax
ensures that the design being developed is a "computer algorithm" whose statements
can later be converted into statements of a programming language. Its flexible natural-
language-based inner syntax allows statements to be expressed with varying degrees
of precision and aids the refinement process.

Example of Stepwise Refinement Technique


Problem Statement: Determine the average for a set of test grades, input by the user.
The number of test grades is not known in advance (so the user will have to enter a
special code -- a "sentinel" value -- to indicate that he/she is finished typing in grades).
(A sentinel value (also referred to as a flag value, trip value, rogue value, signal value,
or dummy data) is a special value in the context of an algorithm which uses its presence
as a condition of termination, typically in a loop or recursive algorithm.)

Initial breakdown into steps

Declare and initialize variables
Input grades (prompt user and allow input)
Compute class average and output result

Now, breaking down the "compute" step further, we get:

Compute:
add the grades
count the grades
divide the sum by the count

We realized this would be a problem, because to do all input before doing the
sum and the count would require us to have enough variables for all the grades
(but the number of grades to be entered is not known in advance). So, we revised
our breakdown of "steps".

Revised breakdown of steps

Declare and initialize variables
Input grades -- count and add them as they are input
Compute class average

Breaking the steps into smaller steps

So, now we can break down these 3 steps into more detail. The input
step can roughly break down this way:

loop until the user enters the sentinel value (-1 would be good)
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)



We could specifically write this as a while loop or as a do-while loop. So, one more
refining step would be a good idea, to formulate the pseudo-code more like the actual
code we would need to write. For example:

do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)

If we look at this format, we realize that the "adding" and "counting" steps should only
be done if the user entry is a grade, and NOT when it's the sentinel value. So, we can
add one more refinement:

do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
if the entered value is a GRADE (not the sentinel value)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)

This breakdown helps us see what variables are needed, so the declare and initialize
variables step can now be made more specific:

initialize variables:
a grade variable (to store user entry)
a sum variable (initialized to 0)
a counter (initialized to 0)

And the compute answer and print step becomes:

divide sum by counter and store result
print result

Putting it all together

The complete refined breakdown looks like this:


initialize variables:
---------
a grade variable (to store user entry)
a sum variable (initialized to 0)
a counter (initialized to 0)

grade entry:
---------
do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
if the entered value is a GRADE (not the sentinel value)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)

Compute average:
---------
divide the sum by the counter
print the answer
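The last step of stepwise refinement is translating the pseudocode into real code. One possible translation is sketched below (Python is used purely for illustration; the grades are passed in as a sequence standing in for the user's typed input, so the logic is easy to test):

```python
def average_grades(entries):
    """Average a sequence of grades terminated by the sentinel value -1."""
    total = 0   # sum variable (initialized to 0)
    count = 0   # counter (initialized to 0)
    for grade in entries:   # stands in for the do-while input loop
        if grade == -1:     # sentinel: the user is finished
            break
        total += grade      # add the grade into the sum
        count += 1          # add 1 to the counter
    # compute average: divide the sum by the counter
    return total / count if count else None

print(average_grades([80, 90, 100, -1]))  # 90.0
```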

State Modelling of Classes


The technique for getting a more detailed understanding of the class as a whole, without talking
about the logic of different methods, has to be fundamentally different from the PDL-based
approach. An object of a class has some state and many operations on it. To better understand
a class, the relationship between the state and various operations and the effect of interaction
of various operations have to be understood. This can be viewed as one of the objectives of
the detailed design activity for object-oriented development. Once the overall class is better
understood, the algorithms for its various methods can be developed. Note that the axiomatic
specification approach for a class also takes this view. Instead of specifying the functionality
of each operation, it specifies, through axioms, the interaction between different operations.
The Algebraic Specification describes functions in the form of an algebra.

A method to understand the behaviour of a class is to view it as a finite state
automaton (FSA). An FSA consists of states and transitions between states, which
take place when some events occur. When modelling an object, the state is the value
of its attributes, and an event is the performing of an operation on the object. A
state diagram relates events and states by showing how the state changes when an
event is performed. A state diagram for an object will generally have an initial state,
from which all states in the FSA are reachable (i.e., there is a path from the initial state
to all other states).

A state diagram for an object does not represent all the actual states of the
object, as there are many possible states. A state diagram attempts to represent only the
logical states of the object. A logical state of an object is a combination of all those states
from which the behaviour of the object is similar for all possible events. Two logical states
will have different behaviour for at least one event. For example, for an object that represents
a stack, all states that represent a stack of size more than 0 and less than some defined
maximum are similar as the behaviour of all operations defined on the stack will be similar in
all such states (e.g., push will add an element, pop will remove one, etc.). However, the state
representing an empty stack is different, as the behaviour of the push and pop operations is
different there (an error message may be returned in case of pop). Similarly, the state
representing a full stack is different. The state model for this bounded-size stack is shown in
the figure.
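The figure itself is not reproduced here, but the three logical states (empty, partial, full) and the effect of the push and pop operations can be sketched as follows (the bound MAX and the error behaviour are illustrative assumptions):

```python
MAX = 3  # illustrative maximum stack size

class BoundedStack:
    """Stack whose logical states are 'empty', 'partial', and 'full'."""
    def __init__(self):
        self.items = []

    @property
    def state(self):
        # The logical state is derived from the attribute values.
        if not self.items:
            return "empty"
        return "full" if len(self.items) == MAX else "partial"

    def push(self, x):
        if self.state == "full":     # push behaves differently when full
            raise OverflowError("stack is full")
        self.items.append(x)

    def pop(self):
        if self.state == "empty":    # pop behaves differently when empty
            raise IndexError("stack is empty")
        return self.items.pop()

s = BoundedStack()
s.push(1)              # empty -> partial
s.push(2); s.push(3)   # partial -> full
```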



The finite state modelling of objects is an aid to understand the effect of various operations
defined on the class on the state of the object. A good understanding of this can aid in
developing the logic for each of the operations. To develop the logic of operations, regular
approaches for algorithm development can be used. The model can also be used to validate if
the logic for an operation is correct. As we have seen, for a class, typically the input-output
specification of the operations is not provided. Hence, the FSA model can be used as a
reference for validating the logic of the different methods.



Verification
There are a few techniques available to verify that the detailed design is consistent with the
system design. The focus of verification in the detailed design phase is on showing that the
detailed design meets the specifications laid down in the system design. Validating that the
system as designed is consistent with the requirements of the system is not stressed during
detailed design. The three verification methods we consider are:

1. Design Walkthroughs
2. Critical Design Review
3. Consistency Checkers.

1. Design Walkthroughs: -

A design walkthrough is a manual method of verification. The definition and use of
walkthroughs change from organization to organization. A design walkthrough is done
in an informal meeting called by the designer or the leader of the designer's group. The
walkthrough group is usually small and contains, along with the designer, the group
leader and/or another designer of the group. The designer might just get together with a
colleague for the walkthrough or the group leader might require the designer to have the
walkthrough with him. In a walkthrough the designer explains the logic step by step,
and the members of the group ask questions, point out possible errors or seek
clarification.
A beneficial side effect of walkthroughs is that in the process of articulating and
explaining the design in detail, the designer himself can uncover some of the errors.
Walkthroughs are essentially a form of peer review. Due to its informal nature, they are
usually not as effective as the design review.
For a design walkthrough to be effective, it needs to include specific components. The
following guidelines highlight these key components. Use these guidelines to plan,
conduct, and participate in design walkthroughs and increase their effectiveness.

1. Plan for a Design Walkthrough

Time and effort of every participant should be built into the project plan so that
participants can schedule their personal work plans accordingly. The plan should
include time for individual preparation, the design walkthrough (meeting), and the
likely rework.

2. Get the Right Participants

It is important to invite the right participants to a design walkthrough. The
reviewers/experts should have the appropriate skills and knowledge to make the
walkthrough meaningful for all. It is imperative that participants add quality and value
to the product and not simply 'add to their learning.'



3. Understand Key Roles and Responsibilities

All participants in the design walkthrough should clearly understand their role and
responsibilities so that they can consistently practice effective and efficient reviews.

4. Prepare for a Design Walkthrough

Besides planning, all participants need to prepare for the design walkthrough. One
cannot possibly find all high-impact mistakes in a work product that they have looked
at only 10 minutes before the meeting. If all participants are adequately prepared as per
their responsibilities, the design walkthrough is likely to be more effective.

5. Use a Well-Structured Process

A design walkthrough should follow a well-structured, documented process. This
process should help define the key purpose of the walkthrough and should provide
systematic practices and rules of conduct that can help participants collaborate with
one another and add value to the review.

6. Review and Assess the Product, Not the Designer

The design walkthrough should be used as a means to review and assess the
product, not the person who created the design. Use the collective understanding to
improve the quality of the product, add value to the interactions, and encourage
participants to submit their products for a design walkthrough.

2. Critical Design Review (CDR): -

The purpose of critical design review is to ensure that the detailed design satisfies the
specifications laid down during system design.
A Critical Design Review is a multi-disciplined technical review to ensure that a system
can proceed into fabrication, demonstration, and test and can meet stated performance
requirements within cost, schedule, and risk. A successful CDR is predicated upon
a determination that the detailed design satisfies the Capabilities Development
Document (CDD). Multiple CDRs may be held for key Configuration Items (CI) and/or
at each subsystem level, culminating in a system-level CDR.
The critical design review process is the same as the inspection process, in which a
group of people get together to discuss the design with the aim of uncovering design
errors or undesirable properties. The review group includes, besides the author of the
detailed design, a member of the system design team, the programmer responsible for
ultimately coding the module(s) under review, and an independent software quality
engineer. While doing a design review it should be kept in mind that the aim is to
uncover design errors, not try to fix them.

A Critical Detailed Design Review (CDR) should:



• Determine that the detailed design of the configuration item under review
satisfies cost (for cost-type contracts), schedule, and performance
requirements.
• Establish detail design compatibility among the configuration item and
other items of equipment, facilities, computer software, and personnel.
• Assess configuration item risk areas (on a technical, cost, and schedule
basis).
• Assess the results of producibility analyses conducted on system
hardware.
• Review preliminary hardware product specifications.
• Determine the acceptability of the detailed design, performance, and test
characteristics, and the adequacy of the operation and support
documents.

The use of checklists, as with other reviews, is considered important for the success of
the review. The checklist is a means of focusing the discussion or the "search" for errors.
Checklists can be used by each member during private study of the design and during
the review meeting. For best results, the checklist should be tailored to the project at
hand, to uncover project specific errors.

Completion of CDR should provide:

1. A system initial Product Baseline (the Product Baseline is the documentation
describing all of the necessary functional and physical characteristics of a
configuration item. The initial product baseline includes “build-to”
specifications for hardware (product, process, material specifications,
engineering drawings, and other related data) and software (software
module design— “code-to” specifications))
2. An updated risk assessment for Engineering, Manufacturing, and
Development (EMD),
3. An updated Cost Analysis Requirements Description (CARD) based on
the system product baseline,
4. An updated program development schedule including fabrication, test,
and evaluation, and software coding, critical path drivers, and
5. An approved Life-Cycle Sustainment Plan (plan for formulating,
implementing, and executing the maintenance strategy) updating
program sustainment development efforts and schedules based on
current budgets, test evaluation results, and firm supportability design
features.

3. Consistency Checkers

If the design is specified in PDL or some other formally defined design language, it is
possible to detect some design defects by using consistency checkers. Consistency
checkers are essentially compilers that take as input the design specified in a design
language (PDL in our case). Clearly, they cannot produce executable code because the
inner syntax of PDL allows natural language and many activities are specified in
natural language. However, the module interface specifications (which belong to the
outer syntax) are specified formally. A consistency checker can ensure that any
modules invoked or used by a given module actually exist in the design and that the
interface used by the caller is consistent with the interface definition of the called
module. It can also check whether the global data items used are indeed defined
globally in the design.
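As a toy illustration of the idea, consider a design recorded as a table of modules, each with an interface (here just a parameter count) and the calls it makes; the module names and interfaces below are hypothetical:

```python
# Each module lists its interface (parameter count) and the calls it makes.
design = {
    "main":       {"params": 0, "calls": [("read_input", 1), ("report", 2)]},
    "read_input": {"params": 1, "calls": []},
    "report":     {"params": 2, "calls": []},
}

def check_consistency(design):
    """Report calls to missing modules or calls with a mismatched interface."""
    defects = []
    for name, mod in design.items():
        for callee, argc in mod["calls"]:
            if callee not in design:
                defects.append(f"{name} calls undefined module {callee}")
            elif design[callee]["params"] != argc:
                defects.append(f"{name} calls {callee} with {argc} args, "
                               f"expected {design[callee]['params']}")
    return defects

print(check_consistency(design))  # [] -- this design is consistent
```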

Depending on the precision and syntax of the design language, consistency checkers
can produce other information as well. In addition, these tools can be used to compute
the complexity of modules and other metrics, because these metrics are based on
alternate and loop constructs, which have a formal syntax in PDL. The trade-off here
is that the more formal the design language, the more checking can be done during
design, but the cost is that the design language becomes less flexible and tends
towards a programming language.
