Software Engineering 12 - 04 - 2022
Software and Software Engineering
Software is more than just program code. A program is executable code that serves some
computational purpose. Software is a collection of executable programming code,
associated libraries, and documentation. Software made for a specific requirement is
called a software product.
Engineering, on the other hand, is all about developing products using well-defined
scientific principles and methods. Software engineering is the engineering branch
concerned with developing software products using well-defined scientific principles,
methods, and procedures. The outcome of software engineering is an efficient and
reliable software product.
Stephen Schach defined it as "A discipline whose aim is the production of quality
software, software that is delivered on time, within budget, and that satisfies its
requirements".
Software is more than programs. It comprises programs, the documentation needed to use
those programs, and the procedures that operate on the software system. A program is
only a part of software; it can be called software only when documentation and operating
procedures are added to it. A program includes both the source code and the object code.
Operating procedures comprise the instructions required to set up and use the software
and the actions to be taken on failure; they are typically documented in user manuals
and operational manuals.
Characteristics of software
Software has characteristics that are considerably different from those of hardware:
1) Software is developed or engineered; it is not manufactured in the classical sense.
The life of software runs from concept exploration to the retirement of the software
product. It is a one-time development effort followed by a continuous maintenance effort
to keep it operational. Making 1,000 copies, however, is trivial and involves almost no
cost, whereas every unit of a hardware product costs money for raw material and other
processing expenses. There is no assembly line in software development; hence software is
not manufactured in the classical sense.
2) Software doesn’t “Wear Out”
There is a well-known "bath tub curve" in reliability studies for hardware products. The
life of a hardware product has three phases. The initial phase is the burn-in phase,
where failure intensity is high; as faults are found and fixed, failure intensity comes
down and may stabilise after a certain time. The second phase is the useful-life phase,
where failure intensity is approximately constant. After a few years, failure intensity
rises again as components wear out; this is called the wear-out phase.

Software has no wear-out phase because it does not wear out. The important point is that
software becomes more reliable over time instead of wearing out. Software may still be
retired due to environmental changes, new requirements, new expectations, and so on.
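This contrast can be sketched in a few lines of code. The piecewise rates below are illustrative assumptions, not measured data; they only mimic the shapes of the two curves:

```python
def hardware_failure_intensity(age_years: float) -> float:
    """Illustrative bath-tub curve for a hardware product."""
    if age_years < 1.0:        # burn-in phase: early failures dominate
        return 10.0 - 8.0 * age_years
    elif age_years < 8.0:      # useful-life phase: roughly constant
        return 2.0
    else:                      # wear-out phase: components degrade
        return 2.0 + 3.0 * (age_years - 8.0)


def software_failure_intensity(age_years: float) -> float:
    """Software does not wear out: failure intensity falls as faults are
    fixed and stays low (it may rise only when the software is changed)."""
    return 2.0 + 8.0 / (1.0 + 2.0 * age_years)


for t in [0.0, 0.5, 2.0, 8.0, 12.0]:
    print(f"age {t:4.1f} yrs: hardware {hardware_failure_intensity(t):5.1f},"
          f" software {software_failure_intensity(t):5.1f}")
```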
3) Reusability of Components
A software component should be designed and implemented so that it can be reused in many
different programs. Modern reusable components encapsulate both data and the processing
that is applied to the data, enabling the software engineer to create new applications from
reusable parts.
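As a small, hypothetical illustration of this idea, the component below bundles its data together with the processing applied to that data, so two otherwise unrelated programs can reuse it unchanged:

```python
class TemperatureLog:
    """A reusable component: it encapsulates its data (the readings)
    together with the processing that is applied to that data."""

    def __init__(self) -> None:
        self._readings: list[float] = []   # encapsulated data

    def record(self, celsius: float) -> None:
        self._readings.append(celsius)

    def average(self) -> float:
        return sum(self._readings) / len(self._readings)


# Two different applications can reuse the same component as-is:
weather_station = TemperatureLog()   # e.g. a weather-forecasting program
oven_monitor = TemperatureLog()      # e.g. embedded oven-control software
weather_station.record(21.5)
weather_station.record(23.0)
oven_monitor.record(180.0)
print(weather_station.average())     # 22.25
print(oven_monitor.average())        # 180.0
```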
4) Software is flexible
Software can be developed to do almost anything. Sometimes this characteristic is a
strength, helping us accommodate almost any kind of change. Most of the time, however,
this "almost anything" characteristic has made software development difficult to plan,
monitor, and control. This unpredictability is the basis of what has been referred to for
the past 30 years as the "software crisis".
Changing Nature of Software/Software Application Domains
The nature of software is changing. The following broad categories of software are
evolving to dominate the industry today. These categories have developed over the last
ten years, and more and more software is being developed in them. They are:
1) System software: A collection of programs written to service other programs.
Examples of system software include compilers, editors, file management utilities,
operating system components, and device drivers.
2) Application software: Stand-alone programs that solve a specific business
need. Application software is used to control business functions in real time (e.g.,
point-of-sale transaction processing, real-time manufacturing process control).
3) Engineering/scientific software: It has been characterized by “number
crunching” algorithms. Applications range from astronomy to volcanology, from
automotive stress analysis to space shuttle orbital dynamics, and from molecular
biology to automated manufacturing.
4) Real-time software: This software is used to monitor, control, and analyze real-world
events as they occur. Real-time software deals with a changing environment.
An example may be software required for weather forecasting. Such software will
gather and process the status of temperature, humidity and other environmental
parameters to forecast the weather.
5) Embedded software: This type of software is placed in the read-only memory (ROM)
of a product and controls the product's various functions. Embedded
software can perform limited and esoteric functions (e.g., key pad control for a
microwave oven) or provide significant function and control capability (e.g., digital
functions in an automobile such as fuel control, dashboard displays, and braking
systems).
6) Product-line software: Designed to provide a specific capability for use by many
different customers. Product-line software can focus on a limited and esoteric
marketplace (e.g., inventory control products) or address mass consumer markets
(e.g., word processing, spreadsheets, computer graphics, multimedia,
entertainment, database management, and personal and business financial
applications).
7) Web applications: In the early days of the World Wide Web (1990 to 1995),
websites consisted of little more than a set of linked hypertext files that presented
information using text and limited graphics.
As time passed, HTML was augmented by development tools (e.g., XML, Java) that
enabled Web engineers to provide computing capability (dynamic pages) along
with information content. Web-based systems and applications (we refer to these
collectively as WebApps) were born.
Today, WebApps have evolved into sophisticated computing tools that not only
provide stand-alone functions to the end user but are also integrated with
corporate databases and business applications. A decade ago, WebApps "involved
a mixture between print publishing and software development, between marketing
and computing, between internal communications and external relations, and
between art and technology". Today they provide full computing potential in
many of the application categories noted in Software Application Domains.
8) Artificial intelligence software: This category makes use of non-numerical
algorithms to solve complex problems. Applications within this area include robotics,
expert systems, pattern recognition (image and voice), artificial neural networks,
theorem proving, and game playing.
9) Mobile Applications: The term app has evolved to signify software that has been
specifically designed to reside on a mobile platform (e.g., iOS (Apple), Android, or
Windows Mobile). Software developed for these platforms is known as mobile apps.
In most instances, mobile applications encompass a user interface that takes
advantage of the unique interaction mechanisms provided by the mobile platform,
interoperability with web-based resources (e.g., GPS data) that provide access to a wide
array of information that is relevant to the app, and local processing capabilities
that collect, analyse, and format information in a manner that is best suited to the
mobile platform.
In addition, a mobile app provides persistent storage capabilities within the
platform (e.g., Apple provides iCloud).
It is important to recognize that there is a subtle distinction between mobile web
application and mobile apps. A mobile web application (WebApp) allows a mobile
device to gain access to web-based content via a browser that has been
specifically designed to accommodate the strengths and weaknesses of the
mobile platform.
A mobile app can gain direct access to the hardware characteristics of the device
(e.g. accelerometer or GPS location) and then provide the local processing and
storage capabilities.
10) Cloud Computing
Cloud computing encompasses an infrastructure or "ecosystem" that enables any
computing device to access a wide array of computing resources over the network.

[Figure: the cloud ecosystem, with an Applications layer (monitoring, content,
collaboration, communication, finance) and a Platform layer (object storage, identity,
runtime, queue, database) sitting on the underlying infrastructure.]
Referring to the figure, computing devices reside outside the cloud and have
access to a variety of resources within the cloud. These resources encompass
applications, platforms, and infrastructure. In its simplest form, an external
computing device accesses the cloud (Amazon's is one example) via a Web browser or
analogous software. The cloud provides access to data that resides within databases
and other data structures.

In addition, devices can access executable applications that can be used in place
of apps that reside on the computing device. An app is designed for a single purpose
and performs a single function, whereas an application is designed to perform a variety
of functions (like Yahoo).
The implementation of cloud computing requires the development of an
architecture that encompasses front-end and back-end services.
The front-end includes the client (user) device and the application software (e.g.
a browser) that allows the back-end to be accessed.
The back-end includes servers and related computing resources, storage systems
(e.g., databases), server-resident applications, and administrative servers that
use middleware to coordinate and monitor traffic by establishing a set of
protocols for access to the cloud and its resident resources.
The cloud architecture can be segmented to provide access at a variety of
different levels from full public access to private cloud architectures accessible
only to those with authorization.
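A minimal sketch of this front-end/back-end split, using only Python's standard library; the endpoint URL is a made-up placeholder, not a real service:

```python
import json
import urllib.request

# Hypothetical back-end endpoint; a real cloud service would sit here.
CLOUD_ENDPOINT = "https://cloud.example.com/api/documents/42"


def fetch_document(url: str) -> dict:
    """Front-end role: ask a back-end server for data that resides in the
    cloud. The back-end's middleware handles the access protocol and routes
    the request to the storage system."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)


if __name__ == "__main__":
    # With a real endpoint this would print the stored document.
    print(fetch_document(CLOUD_ENDPOINT))
```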
Software Myths: (Wrong thinking)
Software myths are beliefs about software and the process used to build it that can
be traced to the earliest days of computing. Myths have a number of attributes that
make them insidious (harmful in effect). They can also be described as misleading
attitudes that cause serious problems for managers and technical people.
The development of software requires dedication and understanding on the developers’
part. Many software problems arise due to myths that are formed during the initial stages
of software development. Software myths propagate false beliefs and confusion in the
minds of management, users and developers.
Management Myths:
Managers with software responsibility, like managers in most disciplines, are often
under pressure to maintain budgets, keep schedules, and improve quality. A software
manager often grasps at belief in a software myth.
Myth: We already have a book that's full of standards and procedures for building
software. Won't that provide my people with everything they need to know?
Reality: The book of standards may well exist, but is it actually used? Do practitioners
know it exists, and does it reflect modern practice? In many cases it does not.

Myth: If we get behind schedule, we can add more programmers and catch up.

Reality: Software development is not a mechanistic process like manufacturing. Adding
people to a late software project tends to make it later, because the existing developers
must spend time educating the newcomers, thereby reducing the amount of time spent on
productive development effort. People can be added, but only in a planned and
well-coordinated manner.
Myth: If we decide to outsource the software project to a third party, I can just relax
and let that firm build it.
Reality: If an organization does not understand how to manage and control software
projects internally, it will invariably struggle when it outsources software projects.
Customer Myths:
A customer who requests computer software may be a person at the next desk, a
technical group, the marketing/sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct the
misinformation. Myths lead to false expectations and, ultimately, dissatisfaction
with the developers.
Practitioner's Myths:
In the early days of software development, programming was viewed as an art, but now
software development has gradually become an engineering discipline. Nevertheless,
developers still believe in some myths.
Myth: Once we write the program and get it to work, our job is done.
Reality:
• Experts say, "the sooner you begin 'writing code', the longer it'll take you to
get done."
• Industry data indicate that between 60 and 80 percent of all effort expended
on software will be expended after it is delivered to the customer for the first
time.
Myth: Until I get the program "running" I have no way of assessing its quality.
Reality:
• One of the most effective software quality assurance mechanisms can be
applied from the beginning of a project—the formal technical review.
• Software reviews are a "quality filter" that have been found to be more
effective than testing for finding certain classes of software defects.
Myth: The only deliverable work product for a successful project is the working
program.
Reality:
• A working program is only one part of a software configuration that includes
many elements.
• A variety of work products (e.g. Documents, Models, Plans) provides a
foundation for successful engineering and, more important, guidance for
software support.
Role Of Management in Software Development
1) People
The people involved in a project (managers, developers, customers, and end users) are
its most important component, and they must be organized so that they can work together
effectively.
2) Project
From the ideation phase to the deployment phase, we term the entire undertaking a
project. Many people work together on a project to build a final product that can be
delivered to the customer as per their needs or demands. So, the entire process that
goes on while working on the project must be managed properly so that we can get a
worthy result
after completing the project and also so that the project can be completed on time
without any delay.
3) Process
Every process that takes place while developing the software, or we can say while
working on the project must be managed properly and separately. For example, there
are various phases in a software development process and every phase has its process
like the designing process is different from the coding process, and similarly, the coding
process is different from the testing process. Hence, each process is managed according to its
needs and each needs to be taken special care of.
4) Product
Even after the development process is completed and we reach our final product, still,
it needs to be delivered to its customers. Hence the entire process needs a separate
management team like the sales department.
SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) MODELS
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities
required to make a software product transit through its life cycle phases. It also captures
the order in which these activities are to be undertaken. In other words, a life cycle model
maps the different activities performed on a software product from its beginning to end.
Different life cycle models may map the basic development activities to phases in different
ways.
THE NEED FOR A SOFTWARE LIFE CYCLE MODEL
The development team must identify a suitable life cycle model for the particular project
and then adhere to it. Without a particular life cycle model, the development
of a software product would not proceed in a systematic and disciplined manner. When
a software product is being developed by a team there must be a clear understanding
among team members about when and what to do. Otherwise, it would lead to project
failure.
A software life cycle model defines entry and exit criteria for every phase. A phase can
start only if its phase-entry criteria have been satisfied. So, without a software life
cycle model, the entry and exit criteria for a phase cannot be recognized, and it becomes
difficult for software project managers to monitor the progress
of the project.
1) Classical Waterfall Model
The name waterfall has been borrowed from the idea of water flowing down a hill. It is
the earliest and one of the simplest process models, introduced by Winston Royce in
1970. It is also referred to as the linear-sequential life cycle model. As the flow of
control is top-down in the waterfall model, one development stage must be completed
before the next begins; once a phase is complete, we cannot go back to a previous phase.
The classical waterfall model is intuitively the most obvious way to develop software.
Though the classical waterfall model is elegant and intuitively obvious, it is not a practical
model in the sense that it cannot be used in actual software development projects. Thus,
this model can be considered to be a theoretical way of developing software. But all
other life cycle models are essentially derived from the classical waterfall model. So, in
order to be able to appreciate other life cycle models it is necessary to understand the
classical waterfall model. The classical waterfall model divides the life cycle into the
following phases: feasibility study; requirements analysis and specification; design;
coding and unit testing; integration and system testing; and maintenance.
Feasibility study - The main aim of feasibility study is to determine whether it would be
financially and technically feasible to develop the product.
• At first project managers or team leaders try to have a rough understanding of what is
required to be done by visiting the client side. They study different input data to the
system and output data to be produced by the system. They study what kind of
processing is needed to be done on these data and they look at the various constraints
on the behavior of the system.
• After they have an overall understanding of the problem, they investigate the different
solutions that are possible. Then they examine each of the solutions in terms of what
kind of resources would be required, what the cost of development would be, and what
the development time would be for each solution.
• Based on this analysis they pick the best solution and determine whether the solution
is feasible financially and technically. They check whether the customer budget would
meet the cost of the product and whether they have sufficient technical expertise in
the area of development.
Requirements analysis and specification: - The aim of the requirements analysis and
specification phase is to understand the exact requirements of the customer and to
document them properly. This phase consists of two distinct activities: requirements
gathering and analysis, and requirements specification.
The goal of the requirement’s gathering activity is to collect all relevant information from
the customer regarding the product to be developed. This is done to clearly understand
the customer requirements so that incompleteness and inconsistencies are removed. The
requirements analysis activity is begun by collecting all relevant data regarding the
product to be developed from the users of the product and from the customer through
interviews and discussions. For example, to perform the requirements analysis of a
business accounting software required by an organization, the analyst might interview all
the accountants of the organization to ascertain their requirements. The data collected
from such a group of users usually contain several contradictions and ambiguities, since
each user typically has only a partial and incomplete view of the system. Therefore, it is
necessary to identify all ambiguities and contradictions in the requirements and resolve
them through further discussions with the customer. After all ambiguities, inconsistencies,
and incompleteness have been resolved and all the requirements properly understood,
the requirements specification activity can start. During this activity, the user requirements
are systematically organized into a Software Requirements Specification (SRS)
document. The customer requirements identified during the requirements gathering and
analysis activity are organized into a SRS document. The important components of this
document are functional requirements, the nonfunctional requirements, and the goals of
implementation.
Design: - The goal of the design phase is to transform the requirements specified in the
SRS document into a structure that is suitable for implementation in some programming
language. In technical terms, during the design phase the software architecture is derived
from the SRS document. Two distinctly different approaches are available: the traditional
design approach and the object-oriented design approach.
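The difference between the two approaches can be hinted at with a toy contrast (the bank-account code is invented for illustration): traditional, function-oriented design keeps data and functions separate, while object-oriented design groups the data with the operations on it:

```python
# Traditional (function-oriented) design: functions operate on shared data.
def deposit(balances: dict, account: str, amount: float) -> None:
    balances[account] = balances.get(account, 0.0) + amount


# Object-oriented design: data and its operations form one abstraction.
class Account:
    def __init__(self, owner: str) -> None:
        self.owner = owner
        self.balance = 0.0

    def deposit(self, amount: float) -> None:
        self.balance += amount


balances: dict = {}
deposit(balances, "alice", 100.0)        # function-oriented style
acct = Account("alice")                  # object-oriented style
acct.deposit(100.0)
print(balances["alice"], acct.balance)   # 100.0 100.0
```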
Coding and unit testing: -The purpose of the coding phase (sometimes called the
implementation phase) of software development is to translate the software design into
source code. Each component of the design is implemented as a program module. The
end-product of this phase is a set of program modules that have been individually tested.
During this phase, each module is unit tested to determine the correct working of all the
individual modules. It involves testing each module in isolation as this is the most efficient
way to debug the errors identified at this stage.
Integration and system testing: - Once the modules have been coded and unit tested,
the modules are integrated in a planned manner. The different modules making up a
software product are almost never integrated in one shot. Integration is normally carried
out incrementally over a number of steps. During each integration step, the partially
integrated system is tested and a set of previously planned modules are added to it.
Finally, when all the modules have been successfully integrated and tested, system
testing is carried out. The goal of system testing is to ensure that the developed system
conforms to the requirements laid out in the SRS document. System testing usually
consists of three different kinds of testing activities: α-testing (performed by the
development team), β-testing (performed by a friendly set of customers), and acceptance
testing (performed by the customer to decide whether to accept the delivered product).
Maintenance: - Maintenance of a typical software product requires much more effort than
the effort necessary to develop the product itself. Many past studies confirm this and
indicate that the relative effort of developing a typical software product to maintaining
it is roughly 40:60; for example, a product that took 400 person-months to develop can
be expected to absorb roughly 600 person-months of maintenance over its lifetime.
Maintenance involves performing any one or more of the following three kinds of
activities:
• Correcting errors that were not discovered during the product development phase.
This is called corrective maintenance.
• Improving the implementation of the system, and enhancing the functionalities of the
system according to the customer’s requirements. This is called perfective
maintenance.
• Porting the software to work in a new environment. For example, porting may be
required to get the software to work on a new computer platform or with a new
operating system. This is called adaptive maintenance.
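The three categories can be made concrete with a toy dispatcher; the keyword rules and request texts below are hypothetical, chosen only to illustrate the taxonomy:

```python
from enum import Enum


class MaintenanceKind(Enum):
    CORRECTIVE = "fix an error missed during development"
    PERFECTIVE = "improve or enhance existing functionality"
    ADAPTIVE = "port the software to a new environment"


def classify(request: str) -> MaintenanceKind:
    """Toy classifier for incoming maintenance requests."""
    text = request.lower()
    if "crash" in text or "wrong result" in text:
        return MaintenanceKind.CORRECTIVE
    if "new platform" in text or "new operating system" in text:
        return MaintenanceKind.ADAPTIVE
    return MaintenanceKind.PERFECTIVE


print(classify("report screen shows wrong result").name)   # CORRECTIVE
print(classify("support the new operating system").name)   # ADAPTIVE
print(classify("add export to spreadsheet").name)           # PERFECTIVE
```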
Shortcomings Of the Classical Waterfall Model
The classical waterfall model is an idealistic one since it assumes that no development
error is ever committed by the engineers during any of the life cycle phases. However, in
practical development environments, the engineers do commit a large number of errors
in almost every phase of the life cycle. The source of the defects can be many: oversight,
wrong assumptions, use of inappropriate technology, communication gap among the
project engineers, etc. These defects usually get detected much later in the life cycle. For
example, a design defect might go unnoticed till we reach the coding or testing phase.
Once a defect is detected, the engineers need to go back to the phase where the defect
had occurred and redo some of the work done during that phase and the subsequent
phases to correct the defect and its effect on the later phases. Therefore, in any practical
software development work, it is not possible to strictly follow the classical waterfall
model.
2) Iterative Waterfall Model or Modified Waterfall Model
One drawback of the strict waterfall model is that the water cannot flow upwards: once
a phase is complete, we cannot come back to a previous phase. If a problem is found at a
particular stage of development, there is no way of redoing an earlier stage to rectify
it. For example, testing usually finds errors made in the coding stage, but in the strict
waterfall approach the coding cannot be corrected.

To overcome this obvious drawback, a variation of the waterfall model provides feedback
paths between adjoining stages, so that a problem uncovered at one stage can cause
remedial action to be taken at previous stages; this is the main difference from the
classical waterfall model. When errors are detected at some later phase, these feedback
paths allow errors committed during earlier phases to be corrected.
3) Incremental Process Models
In incremental process models, the requirements are divided into small subsets known as
increments that are implemented individually. The model comprises several phases, where
each phase produces an increment. These increments are identified at the beginning of
the development process, and the entire process from requirements gathering to delivery
of the product is carried out for each increment.
Characteristics of an incremental model include:
• The software is generated quickly during the software life cycle
• It is flexible and less expensive to change requirements and scope
• Changes can be made throughout the development stages
• This model is less costly compared to others
• A customer can respond to each build
• Errors are easy to identify
4) Evolutionary Model
The development team first develops the core modules of the system. The core modules are
those that do not need services from the other modules. The initial product skeleton is
refined into increasing levels of capability by adding new functionality in successive
versions. Each evolutionary version may be developed using an iterative waterfall model
of development.
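A minimal sketch of this delivery pattern, with invented module names: the core modules ship first, and every later version contains everything delivered before it plus new functionality:

```python
# Core modules need no services from other modules, so they ship first.
core = ["login", "data entry"]
increments = [
    ["report generation"],                    # version 2
    ["result calculation", "notifications"],  # version 3
]

delivered: list[str] = []
for version, new_modules in enumerate([core, *increments], start=1):
    delivered.extend(new_modules)
    # Each version is fully working software that does more than the last.
    print(f"Version {version} delivers: {', '.join(delivered)}")
```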
Each successive version of the product is fully functioning software, capable of
performing more work than the previous version.
The evolutionary model is normally useful for very large products, where it is easier to
find modules for incremental implementation.
Often, the evolutionary model is used when the customer prefers to receive the product
in increments, so that they can start using the different features as and when they are
developed, rather than waiting for the full product to be developed and delivered.
Advantages of Evolutionary Model
• Large project: Evolutionary model is normally useful for very large products.
• User gets a chance to experiment with a partially developed software much
before the complete version of the system is released.
• Evolutionary model helps to accurately elicit user requirements during the
delivery of different versions of the software.
• The core modules get tested thoroughly, thereby reducing the chances of
errors in the core modules of the final products.
Disadvantages of Evolutionary Model
• Difficult to divide the problem into several versions that would be acceptable
to the customer and which can be incrementally implemented and delivered.
There are two common evolutionary process models
1) Prototype Model
2) Spiral Model
1) Prototype Model
Prototype Model is a software development model in which a prototype is built, tested,
and reworked until an acceptable prototype is achieved. It also creates a base from
which to produce the final system or software. It works best in scenarios where the
project's requirements are not known in detail. It is an iterative, trial-and-error
method that takes place between the developer and the client.
Step 1: Requirements gathering and analysis
The prototyping model starts with requirements analysis. In this phase, the requirements
of the system are defined in detail, and the users of the system are interviewed to
learn what they expect from the system.
Step 2: Quick design
The second phase is a preliminary design or quick design. In this stage, a simple
design of the system is created. However, it is not a complete design. It gives a brief
idea of the system to the user. The quick design helps in developing the prototype.
Step 3: Build a prototype
In this phase, an actual prototype is designed based on the information gathered from the
quick design. It is a small working model of the required system.
Step 4: Initial user evaluation
In this stage, the proposed system is presented to the client for an initial evaluation. It
helps to find out the strength and weakness of the working model. Comment and
suggestion are collected from the customer and provided to the developer.
Step 5: Refining prototype
If the user is not happy with the current prototype, you need to refine the prototype
according to the user’s feedback and suggestions.
This phase does not end until all the requirements specified by the user are met. Once
the user is satisfied with the developed prototype, a final system is developed based
on the approved final prototype.
Step 6: Implement product and maintain
Once the final system is developed based on the final prototype, it is thoroughly tested
and deployed to production. The system then undergoes routine maintenance to minimize
downtime and prevent large-scale failures.
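The iterative trial-and-error cycle between developer and client can be summarised in a short sketch; the feedback function is a stand-in for a real customer evaluation, and the requirement names are invented:

```python
def build_prototype(requirements: set) -> str:
    return f"prototype covering {sorted(requirements)}"


def customer_feedback(prototype: str, round_no: int) -> set:
    """Stand-in for the client's evaluation: returns newly discovered
    requirements, and an empty set once the customer is satisfied."""
    return {"print mark sheet"} if round_no == 0 else set()


requirements = {"enter marks", "view results"}   # from initial interviews
for round_no in range(10):                       # refine until satisfied
    prototype = build_prototype(requirements)
    missing = customer_feedback(prototype, round_no)
    if not missing:                              # customer is satisfied
        break
    requirements |= missing                      # refine and rebuild
print("Develop the final system from:", prototype)
```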
Advantages of the Prototyping Model
• Customer satisfaction exists because the customer can feel the product at a very
early stage.
• There will be hardly any chance of software rejection.
• Quicker user feedback helps you to achieve better software development
solutions.
• Allows the client to compare if the software code matches the software
specification.
• It helps you to find out the missing functionality in the system.
• It also identifies the complex or difficult functions.
• Encourages innovation and flexible designing.
• It is a straightforward model, so it is easy to understand.
• No need for specialized experts to build the model
• The prototype serves as a basis for deriving a system specification.
• The prototype helps to gain a better understanding of the customer’s needs.
Disadvantages of the Prototyping Model
• Prototyping is a slow and time-consuming process, and the effort spent on a
throwaway prototype adds to the overall cost.
• After seeing an early prototype, clients may expect the finished product just as
quickly, or may mistake the prototype for the final system.
2) Spiral Model
The spiral model is one of the most important Software Development Life Cycle models,
providing support for risk handling. In its diagrammatic representation, it looks
like a spiral with many loops. The exact number of loops of the spiral is unknown and
can vary from project to project. Each loop of the spiral is called a Phase of the
software development process. The exact number of phases needed to develop the
product can be varied by the project manager depending upon the project risks. As the
project manager dynamically determines the number of phases, so the project
manager has an important role to develop a product using the spiral model.
When looking at a diagram of a spiral model, the radius of the spiral represents the
cost of the project and the angular degree represents the progress made in the
current phase. Each phase begins with a goal for the design and ends when the
developer or client reviews the progress.
To explain in simpler terms, the steps involved in the spiral model are described below.
Spiral Model Phases
It has four stages or phases: The planning of objectives, risk analysis, engineering or
development, and finally review. A project passes through all these stages repeatedly
and the phases are known as a Spiral in the model.
1. Determine objectives and find alternate solutions – This phase includes
requirement gathering and analysis. Based on the requirements, objectives are
defined and different alternate solutions are proposed.
2. Risk Analysis and resolving – In this quadrant, all the proposed solutions are
analysed and any potential risk is identified, analysed, and resolved. Risk analysis
should be performed on all possible solutions in order to find any faults or
vulnerabilities -- such as running over the budget or areas within the software that
could be open to cyber-attacks. Each risk should then be resolved using the most
efficient strategy.
3. Develop and test: This phase includes the actual implementation of the different
features. All the implemented features are then verified with thorough testing.
4. Review and planning of the next phase – In this phase, the software is evaluated
by the customer. It also includes risk identification and monitoring like cost overrun
or schedule slippage and after that planning of the next phase is started.
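One pass around the spiral touches all four quadrants. The sketch below (with invented activities and a toy approval rule) shows how the loop repeats for as many phases as the project manager deems necessary:

```python
def spiral_phase(phase_no: int, objectives: list) -> bool:
    """One loop of the spiral; returns True when the review approves release."""
    print(f"Phase {phase_no}: objectives = {objectives}")      # quadrant 1
    risks = [f"risk in {o}" for o in objectives]               # quadrant 2
    print("  risks analysed and resolved:", risks)
    print("  developed and tested:", objectives)               # quadrant 3
    return phase_no >= 3       # quadrant 4: review/plan (toy approval rule)


phase, approved = 1, False
while not approved:            # the number of loops varies per project
    approved = spiral_phase(phase, [f"feature set {phase}"])
    phase += 1
```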
Spiral Model is also called Meta Model.
The Spiral model is called a Meta-Model because it subsumes all the other SDLC models.
For example, a single loop spiral actually represents the Iterative Waterfall Model. The
spiral model incorporates the stepwise approach of the Classical Waterfall Model. The
spiral model uses the approach of the Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique. Also, the spiral model can be considered
as supporting the Evolutionary model – the iterations along the spiral can be considered
as evolutionary levels through which the complete system is built.
Spiral Model Advantages
1. The spiral model is perfect for projects that are large and complex in nature as
continuous prototyping and evaluation help in mitigating any risk.
2. Because of its risk handling ability, the model is best suited for projects which are
very critical like software related to the health domain, space exploration, etc.
3. This model supports the client feedback and implementation of change
requests (CRs) which is not possible in conventional models like a waterfall.
4. Since the customer gets to see a prototype in each phase, there are higher chances
of customer satisfaction.
Spiral Model Disadvantages
1. The spiral model is much more complex than the other SDLC models, and its cost makes
it unsuitable for small projects.
2. Its success depends heavily on risk analysis, which requires specific expertise; the
total time and number of phases are also hard to estimate in advance.
5) Unified Process Model
Unified process (UP) is an architecture centric, use case driven, iterative and incremental
development process. UP is also referred to as the unified software development
process.
Architecture-Centric Approach
Using this approach, you would be creating a blueprint of the organization of the
software system. It would include taking into account the different technologies,
programming languages, operating systems, development and release environments,
server capabilities, and other such areas for developing the software.
Use-Case Driven Approach
A use-case defines the interaction between two or more entities. The list of
requirements specified by a customer are converted to functional requirements by a
business analyst and generally referred to as use-cases. A use-case describes the
operation of a software as interactions between the customer and the system, resulting
in a specific output or a measurable return. For example, the online cake shop can be
specified in terms of use cases such as 'add cake to cart', 'change the quantity of added
cakes in cart', 'cake order checkout' and so on. Each use case represents a significant
functionality and could be considered for an iteration.
Iterative and Incremental Approach
Using an iterative and incremental approach means treating each iteration as a mini-
project. Therefore, you would develop the software as a number of small mini-projects,
working in cycles. You would develop small working versions of the software at the end
of each cycle. Each iteration would add some functionality to the software according to
the requirements specified by the customer.
The Unified Process is an attempt to draw on the best features and characteristics of
traditional software process models, but characterize them in a way that implements
many of the best principles of agile (ability to move with quick, easy grace) software
development. The Unified Process recognizes the importance of customer
communication and streamlined methods for describing the customer’s view of a system.
It emphasizes the important role of software architecture and “helps the architect focus
on the right goals, such as understandability, support to future changes, and reuse”. It
suggests a process flow that is iterative and incremental, providing the evolutionary feel
that is essential in modern software development.
A Brief History
During the early 1990s James Rumbaugh, Grady Booch, and Ivar Jacobson began
working on a “unified method” that would combine the best features of each of their
individual object-oriented analysis and design methods and adopt additional features
proposed by other experts in object-oriented modelling. The result was UML—a unified
modelling language that contains a robust notation for the modelling and development of
object-oriented systems. They developed the Unified Process, a framework for object-
oriented software engineering using UML.
This process divides the development process into five phases:
• Inception
• Elaboration
• Construction
• Transition
• Production
Inception Phase
The inception phase of the UP encompasses both customer communication and planning
activities. By collaborating with stakeholders, business requirements for the software are
identified; a rough architecture for the system is proposed; and a plan for the iterative,
incremental nature of the ensuing project is developed.
Elaboration Phase
The elaboration phase encompasses the communication and modelling activities of the
generic process model. Elaboration refines and expands the preliminary use cases
that were developed as part of the inception phase and expands the architectural
representation to include five different views of the software—the use case model, the
requirements model, the design model, the implementation model, and the deployment
model. Elaboration creates an “executable architectural baseline” that represents a “first
cut” executable system.
Construction Phase
The construction phase of the UP is identical to the construction activity defined for the
generic software process. Using the architectural model as input, the construction phase
develops or acquires the software components that will make each use case
operational for end users. To accomplish this, requirements and design models that
were started during the elaboration phase are completed to reflect the final version of the
software increment. All necessary and required features and functions for the software
increment (i.e., the release) are then implemented in source code.
Transition Phase
The transition phase of the UP encompasses the latter stages of the generic construction
activity and the first part of the generic deployment (delivery and feedback) activity.
Software is given to end users for beta testing and user feedback reports both
defects and necessary changes. At the conclusion of the transition phase, the software
increment becomes a usable software release.
Production Phase
The production phase of the UP coincides with the deployment activity of the generic
process. During this phase, the ongoing use of the software is monitored, support
for the operating environment (infrastructure) is provided, and defect reports and
requests for changes are submitted and evaluated. It is likely that at the same time
the construction, transition, and production phases are being conducted, work may have
already begun on the next software increment. This means that the five UP phases do
not occur in a sequence, but rather with staggered concurrency.
In summary, the Unified Process has three key characteristics:
• It is use-case driven
• It is architecture-centric
• It is risk focused
Comparison of Different Life Cycle Models
Classical Waterfall Model: The Classical Waterfall model can be considered as the
basic model and all other life cycle models are based on this model. It is an ideal model.
However, the Classical Waterfall model cannot be used in practical project development,
since this model does not support any mechanism to correct the errors that are committed
during any of the phases but detected at a later phase. This problem is overcome by the
Iterative Waterfall model through the inclusion of feedback paths.
Iterative Waterfall Model: The Iterative Waterfall model is probably the most used
software development model. This model is simple to use and understand. But this model
is suitable only for well-understood problems and is not suitable for the development of
very large projects and projects that suffer from a large number of risks.
Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model
is widely used in object-oriented development projects. This model is only used if
incremental delivery of the system is acceptable to the customer.
Prototyping Model: The Prototyping model is suitable for projects in which either the
customer requirements or the technical solutions are not well understood. These risks
must be identified before the project starts. This model is especially popular for the
development of the user interface part of the project.
Spiral Model: The Spiral model is considered as a meta-model as it includes all other life
cycle models. Flexibility and risk handling are the main characteristics of this model. The
spiral model is suitable for the development of technically challenging and large software
that is prone to various risks that are difficult to anticipate at the start of the project. But
this model is more complex than the other models.
Unified Process Model: Unified process (UP) is an architecture centric, use case driven,
iterative and incremental development process. This process divides the development
process into inception, elaboration, construction, transition, and production phases. The
Unified Process insists that architecture sit at the heart of the project team's efforts to
shape the system. The Unified Process requires the project team to focus on addressing
the most critical risks early in the project life cycle.
Selection of a Suitable Life Cycle Model
Selection of the proper life cycle model to complete a project is a very important task.
It can be selected by keeping the advantages and disadvantages of the various models in
mind.
The different issues that are analysed before selecting a suitable life cycle model are
given below:
• Characteristics of the software to be developed: The choice of the life cycle model
largely depends on the type of the software that is being developed. For small services
projects, the agile model is favored. On the other hand, for product and embedded
development, the Iterative Waterfall model can be preferred. The evolutionary model
is suitable to develop an object-oriented project. User interface part of the project is
mainly developed through prototyping model.
• Risk associated with the project: If the risks are few and can be anticipated at the
start of the project, then prototyping model is useful. If the risks are difficult to
determine at the beginning of the project but are likely to increase as the development
proceeds, then the spiral model is the best model to use.
• Characteristics of the customer: If the customer is not quite familiar with computers,
the requirements are likely to change frequently, as it would be difficult to form
complete, consistent, and unambiguous requirements. A prototyping model may thus be
necessary to reduce later change requests from the customer. Initially, the customer's
confidence in the development team is high, but during a lengthy development process it
normally drops off, as no working software is yet visible. The evolutionary model is
therefore useful, as the customer can experience partially working software much earlier
than the complete software. Another advantage of the evolutionary model is that it
reduces the customer's trauma of getting used to an entirely new system.
Unit-II
Requirements Engineering
Requirements analysis, also called requirements engineering, is the process of
determining user expectations for a new or modified product. Requirements
engineering is a major software engineering action that begins during the
communication activity and continues into the modelling activity. It must be adapted
to the needs of the process, the project, the product, and the people doing the work.
Requirements engineering builds a bridge to design and construction.
Inception: This task establishes a basic understanding of the problem, the people who
want a solution, the nature of the solution that is desired, and the effectiveness of
preliminary communication and collaboration between the stakeholders and the software
team.
Elaboration: The information obtained from the customer during inception and elicitation
is expanded and refined during elaboration. This task focuses on developing a refined
requirements model that identifies various aspects of software function, behavior, and
information. Elaboration is driven by the creation and refinement of user scenarios that
describe how the end user (and other actors) will interact with the system.
Negotiation: Customers, users, and other stakeholders often demand more than can be
achieved with the available resources, so conflicting requirements must be reconciled.
Using an iterative approach that prioritizes requirements, assesses their cost and risk,
and addresses internal conflicts, requirements are eliminated, combined, and/or modified
so that each party achieves some measure of satisfaction.
Validation: The primary requirements validation mechanism is the technical review. The
review team
that validates requirements includes software engineers, customers, users, and other
stakeholders who examine the specification looking for errors in content or interpretation,
areas where clarification may be required, missing information, inconsistencies,
conflicting requirements, or unrealistic requirements.
Types of Software Requirement
A software requirement can be of three types:
• Functional requirements
• Non-functional requirements
• Domain requirements

Functional Requirements:
These are the requirements that the end user specifically demands as basic facilities
that the system should offer. All of these functionalities must be incorporated into the
system as part of the contract. They are represented or stated in the form of the input
to be given to the system, the operation performed, and the output expected. They are
basically the requirements stated by the user, which one can see directly in the final
product. For example, in a result management system, entry of marks and calculation of
results are functional requirements.
Non-Functional Requirements
Non-functional requirement (NFR) is a requirement that specifies criteria that can be used
to judge the operation of a system, rather than specific behaviors. These are basically the
quality constraints that the system must satisfy according to the project contract. The
priority or extent to which these factors are implemented varies from one project to other.
They are also called non-behavioral requirements. The plan for implementing non-
functional requirements is detailed in the system architecture, because they are usually
architecturally significant requirements.
• Usability Requirements: Describe the ease with which users are able to
operate the software. For example, the software should be able to provide
access to functionality with fewer keystrokes and mouse clicks.
• Efficiency Requirements: Describe the extent to which the software makes
optimal use of resources, the speed with which the system executes, and the
memory it consumes for its operation. For example, the system should be
able to operate at least three times faster than the existing system.
• Reliability Requirements: Describe the acceptable failure rate of the
software. For example, the software should be able to operate even if a
hazard occurs.
• Portability Requirements: Describe the ease with which the software can
be transferred from one platform to another. For example, it should be easy
to port the software to a different operating system without the need to
redesign the entire software.
Non-functional requirements are often classified as follows:
1. Product Requirements: These specify how the delivered product must behave, for
example its execution speed, reliability, or usability.
2. Organisational Requirements: These are derived from the policies and procedures of
the customer's and the developer's organisation. They include requirements concerning
programming language, design methodology, and similar requirements defined by the
developing organisation.
3. External Requirements: These requirements come neither from the customer nor
from the organisation developing the software. They include, for example,
requirements derived from legislation relevant to the field for which the software is
being produced.
Domain Requirements:
Domain requirements are the requirements which are characteristic of a particular
category or domain of projects. The basic functions that a system of a specific domain
must necessarily exhibit come under this category. For instance, in an academic software
that maintains records of a school or college, the functionality of being able to access the
list of faculty and list of students of each grade is a domain requirement. These
requirements are therefore identified from that domain model and are not user specific.
Non-Functional vs. Functional Requirements
The key differences between functional and non-functional requirements in software
engineering are, in outline:
• A functional requirement defines what a system or one of its components must do; a
non-functional requirement defines the quality attributes and constraints the system
must satisfy.
• Functional requirements are usually specified by the end user, while non-functional
requirements are typically specified by technical people such as architects and team
leads.
• Functional requirements are verified through functional testing, whereas
non-functional requirements are verified through performance, usability, security, and
similar testing.
• Functional requirements describe individual behaviours of the system; non-functional
requirements usually apply to the system as a whole.
Feasibility Study
A feasibility study in software engineering is a study to evaluate the feasibility of a
proposed project or system. As the name suggests, a feasibility study is an analysis of
how beneficial product development will be for the organization from a practical point
of view. It is carried out for many purposes: to analyse whether the software product
will be right in terms of development, implementation, contribution of the project to
the organization, and so on.

The feasibility study mainly concentrates on the five areas mentioned below. Among
these, the Economic Feasibility Study is the most important part of the feasibility
analysis, and the Legal Feasibility Study is the least considered.
Technical Feasibility:
In technical feasibility, the current resources (both hardware and software) along with
the required technology are analysed/assessed to develop the project. This study reports
on whether the correct required resources and technologies exist for project
development. It also analyses the technical skills and capabilities of the technical
team, whether the existing technology can be used, and whether maintenance and upgrading
of the chosen technology is easy.
Operational Feasibility:
In operational feasibility, the degree to which the proposed system will solve the
business problems, satisfy user requirements, and be easy to operate and maintain after
deployment is analysed.
Economic Feasibility:
In the economic feasibility study, the cost and benefit of the project are analysed.
This means a detailed analysis is carried out of what the cost of developing the project
will be, including all the costs of final development such as hardware and software
resources, design and development cost, operational cost, and so on. After that, it is
analysed whether the project will be financially beneficial for the organization or not.
Legal Feasibility:
In the legal feasibility study, the project is analysed from a legal point of view. This
includes analysing barriers to legal implementation of the project, data protection acts
or social media laws, project certificates, licenses, copyrights, etc. Overall, the
legal feasibility study determines whether the proposed project conforms to legal and
ethical requirements.
Schedule Feasibility:
In the schedule feasibility study, the timelines and deadlines of the proposed project
are analysed to determine whether the project can be completed within the given
schedule.

Beyond these areas, a feasibility study helps in identifying the risk factors involved
in developing and deploying the system, supports planning for risk analysis, narrows the
business alternatives, and enhances the success rate by analysing the different
parameters associated with the proposed project's development.
Requirements Elicitation
Requirements elicitation is the practice of researching and discovering the requirements
of a system from users, customers, and other stakeholders. The practice is also
sometimes referred to as "requirement gathering".
The term elicitation is used to highlight the fact that good requirements cannot just be
collected from the customer, as would be indicated by the name requirements gathering.
Requirements elicitation is non-trivial because you can never be sure you have obtained
all requirements from the user and customer by just asking them what the system should
or should not do (for safety and reliability). Requirements elicitation practices include
interviews, questionnaires, user observation, workshops, brainstorming, use cases, role
playing and prototyping.
Commonly used elicitation processes are the stakeholder meetings or interviews. For
example, an important first meeting could be between software engineers and customers
where they discuss their perspective of the requirements.
In 1992, Christel and Kang identified problems that indicate the challenges for
requirements elicitation:
1. Problems of scope. The boundary of the system is ill-defined, or the customers and
users specify unnecessary technical detail that confuses rather than clarifies the
objectives.
2. Problems of understanding. Customers and users are not completely sure of what is
needed, have a poor understanding of the capabilities and limitations of their computing
environment, or have trouble communicating their needs to the engineers.
3. Problems of volatility. The requirements change over time. The rate of change is
sometimes referred to as the level of requirement volatility.
The commonly used requirements elicitation techniques are:
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach
The success of an elicitation technique used depends on the maturity of the analyst,
developers, users, and the customer involved.
1. Interviews:
The objective of conducting an interview is to understand the customer's expectations
of the software. Interviews may be open-ended or structured.
2. Brainstorming Sessions:
• It is a group technique
• Every idea is documented so that everyone can see it.
3. Facilitated Application Specification Technique (FAST):
Its objective is to bridge the expectation gap: the difference between what the
developers think they are supposed to build and what customers think they are going to
get.
Each participant prepares his/her list, different lists are then combined, redundant entries
are eliminated, team is divided into smaller sub-teams to develop mini-specifications and
finally a draft of specifications is written down using all the inputs from the meeting.
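The combine-and-deduplicate step lends itself to a direct sketch; the participants' lists below are invented:

```python
# Each participant brings a list of proposed requirements to the meeting.
participant_lists = [
    ["enter marks", "calculate results", "print mark sheet"],
    ["calculate results", "email results", "enter marks"],
    ["print mark sheet", "archive old results"],
]

combined: list[str] = []
for entries in participant_lists:          # combine the individual lists
    for entry in entries:
        if entry not in combined:          # eliminate redundant entries
            combined.append(entry)

print(combined)
# Sub-teams would now expand each entry into a mini-specification, and a
# draft specification would be written from all the inputs.
```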
4. Quality Function Deployment (QFD):
In this technique, customer satisfaction is of prime concern; it emphasizes the
requirements that are most valuable to the customer.
• Normal requirements – The objectives and goals of the proposed software are discussed
with the customer. For example, normal requirements for a result management system may
be entry of marks, calculation of results, etc.
The major steps involved in this procedure are to identify all the stakeholders, list
out the requirements obtained from the customer, and assign each requirement a value
indicating its degree of importance.
5. Use Case Approach: This technique combines text and pictures to provide a better
understanding of the requirements. Use cases describe the 'what' of a system, not the
'how'; hence, they give only a functional view of the system. A use-case design has
three major components – actors, use cases, and the use case diagram – described below
(a short code sketch follows the list).
1. Actor – An actor is an external agent that lies outside the system but interacts
with it in some way. An actor may be a person, a machine, etc., and is represented as a
stick figure. Actors can be primary actors or secondary actors.
2. Use cases – They describe the sequence of interactions between actors and the
system. They capture who (actors) does what (interaction) with the system. A complete
set of use cases specifies all the possible ways to use the system.
3. Use case diagram –A use case diagram graphically represents what happens
when an actor interacts with a system. It captures the functional aspect of the
system.
• A line is used to represent a relationship between an actor and a use
case.
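A use-case model can be recorded in a few data declarations. The structure below is an illustrative assumption, not a standard notation; the names echo the online cake shop example above:

```python
from dataclasses import dataclass, field


@dataclass
class Actor:
    name: str
    primary: bool = True        # primary actors initiate the interaction


@dataclass
class UseCase:
    name: str                   # describes the 'what', not the 'how'
    actors: list = field(default_factory=list)


customer = Actor("Customer")
use_cases = [
    UseCase("add cake to cart", [customer]),
    UseCase("change the quantity of added cakes in cart", [customer]),
    UseCase("cake order checkout", [customer]),
]
for uc in use_cases:   # a complete set covers every way to use the system
    print(f"{uc.actors[0].name} -> {uc.name}")
```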
Requirements Analysis
Requirements analysis is a significant and essential activity that follows elicitation.
We analyse, refine, and scrutinize the gathered requirements to produce consistent and
unambiguous
requirements. This activity reviews all requirements and may provide a graphical view of
the entire system. After the completion of the analysis, it is expected that the
understandability of the project may improve significantly. Here, we may also use the
interaction with the customer to clarify points of confusion and to understand which
requirements are more important than others.
(i) Draw the context diagram: The context diagram is a simple model that defines the
boundaries and interfaces of the proposed system with the external world. It identifies
the entities outside the proposed system that interact with the system. The context
diagram of a student result management system is given below:
(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably acts as
part of the system they say they want.
We can use the customer's feedback to continuously modify the prototype until the
customer is satisfied. Hence, the prototype helps the client to visualize the proposed system and
increase the understanding of the requirements. When developers and users are not sure
about some of the elements, a prototype may help both the parties to take a final decision.
Some projects are developed for the general market. In such cases, the prototype should
be shown to some representative sample of the population of potential purchasers. Even
though a person who tries out a prototype may not buy the final system, their feedback
may allow us to make the product more attractive to others.
The prototype should be built quickly and at a relatively low cost. Hence it will always
have limitations and would not be acceptable in the final system. This is an optional
activity.
(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships
between them. The graphical view may help to find incorrect, inconsistent, missing, and
superfluous requirements. Such models include the Data Flow diagram, Entity-
Relationship diagram, Data Dictionaries, etc.
• Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for
modelling the requirements. DFD shows the flow of data through a system.
The system may be a company, an organization, a set of procedures, a
computer hardware system, a software system, or any combination of the
preceding. The DFD is also known as a data flow graph or bubble chart.
• Data Dictionaries: Data Dictionaries are simply repositories to store
information about all data items defined in DFDs. At the requirements stage,
the data dictionary should at least define customer data items, to ensure that
the customer and developers use the same definition and terminologies.
• Entity-Relationship Diagrams: Another tool for requirement specification is
the entity-relationship diagram, often called an "E-R diagram." It is a detailed
logical representation of the data for the organization and uses three main
constructs i.e. data entities, relationships, and their associated attributes.
(iv) Finalise the requirements: After modelling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analysed.
The elicitation and analysis activities have provided better insight into the system. Now we
finalize the analysed requirements, and the next step is to document these requirements
in a prescribed format.
Software Requirements Specification (SRS) Document
A software requirements specification (SRS) is a document that describes what the
software will do and how it will be expected to perform.
This report lays a foundation for software engineering activities and is constructed when
the entire set of requirements has been elicited and analysed. SRS is a formal report, which acts as a
representation of software that enables the customers to review whether it is according
to their requirements. Also, it comprises user requirements for a system as well as
detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS could be written by the client of a
system. Second, the SRS could be written by a developer of the system. The two
approaches create entirely different situations and establish different purposes for the
document altogether. In the first case, the SRS is used to define the needs and expectations
of the users. In the second case, the SRS is written for various purposes and serves as a
contract document between customer and developer.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(2) Full labels and references to all figures, tables, and diagrams in the SRS and
definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual
requirements described in it conflict. There are three types of possible conflict in
the SRS:
(1) The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as
tabular but in another as textual.
(b) One requirement may state that all lights shall be green while another
states that all lights shall be blue.
(2) There may be a logical or temporal conflict between two specified
actions. For example,
(a) One requirement may specify that the program will add two inputs,
and another may specify that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while
another requires that "A" and "B" occur simultaneously.
(3) Two or more requirements may define the same real-world object but use
different terms for that object. For example, a program's request for user input
may be called a "prompt" in one requirement and a "cue" in another. The
use of standard terminology and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only
one interpretation. This suggests that each element is uniquely interpreted. If a term
used in the document could have multiple meanings, the SRS should clarify which
meaning is intended so that the document remains clear and simple to understand.
6. Verifiability: The SRS is verifiable when the specified requirements can be checked by
some cost-effective process to determine whether the final software meets them.
Requirements are verified with the help of reviews.
7. Traceability: The SRS is traceable if the origin of each of the requirements is clear
and if it facilitates the referencing of each requirement in future development or
enhancement documentation.
1. Backward Traceability: This depends upon each requirement explicitly
referencing its source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having
a unique name or reference number. The forward traceability of the SRS is
especially crucial when the software product enters the operation and
maintenance phase. As the code and design documents are modified, it is
necessary to be able to ascertain the complete set of requirements that may
be affected by those modifications.
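To make the two directions concrete, traceability can be pictured as simple mappings between requirement identifiers and other artifacts. The Python sketch below is illustrative only; every identifier in it is invented.

# Hypothetical traceability links, for illustration only.
# Backward traceability: each requirement references its source.
backward = {
    "SRS-R12": "customer interview notes, 2022-03-01",
    "SRS-R13": "use case UC-4 (generate result report)",
}
# Forward traceability: each requirement references the design and
# test artifacts that realize and verify it.
forward = {
    "SRS-R12": {"design": ["DD-3.2"], "tests": ["TC-21", "TC-22"]},
    "SRS-R13": {"design": ["DD-4.1"], "tests": ["TC-30"]},
}
# Impact analysis during maintenance: when design element DD-3.2 is
# modified, find the requirements that may be affected.
affected = [req for req, links in forward.items() if "DD-3.2" in links["design"]]
print(affected)  # ['SRS-R12']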
1. Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Irrelevant descriptions decrease readability and also
increase error possibilities.
3. Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the external
behavior of the system and not discuss the implementation issues. The SRS report
should view the system to be developed as a black box and should define the
externally visible behavior of the system. For this reason, the SRS report is also known
as the black-box specification of a system.
4. Conceptual integrity: Conceptual integrity is the principle that anywhere you look in
your system, you can tell that the design is part of the same overall design. This
includes low-level issues such as formatting and identifier naming, but also issues
such as how modules and classes are designed, etc.
The SRS should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: The SRS should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.
Requirements Validation
Requirements validation is the process of checking that requirements define the system
that the customer really wants. It overlaps with elicitation and analysis, as it is concerned
with finding problems with the requirements. Requirements validation is critically
important because errors in a requirements document can lead to extensive rework costs
when these problems are discovered during development or after the system is in service.
The cost of fixing a requirements problem by making a system change is usually much
greater than repairing design or coding errors. A change to the requirements usually
means that the system design and implementation must also be changed. Furthermore,
the system must then be retested.
During the requirements validation process, different types of checks should be carried
out on the requirements in the requirements document. These checks include:
1. Validity checks: These check that the requirements reflect the real needs of system
users. Because of changing circumstances, the user requirements may have changed
since they were originally elicited.
2. Consistency checks: Requirements in the document should not conflict. That is,
there should not be contradictory constraints or different descriptions of the same
system function.
5. Verifiability: To reduce the potential for dispute between customer and contractor,
system requirements should always be written so that they are verifiable. This means
that you should be able to write a set of tests that can demonstrate that the delivered
system meets each specified requirement.
A number of requirements validation techniques can be used individually or in conjunction
with one another:
In practice, you rarely find all requirements problems during the requirements validation
process. Further requirements changes will be needed to correct omissions and
misunderstandings after agreement has been reached on the requirements document.
Requirements Management
The purpose of requirements management is to ensure product development goals are
successfully met. It is a set of techniques for documenting, analysing, prioritizing, and
agreeing on requirements so that engineering teams always have current and approved
requirements. Requirements management provides a way to avoid errors by keeping
track of changes in requirements and by fostering communication with stakeholders from the
start of a project throughout the engineering lifecycle.
Issues in requirements management are often cited as major causes of project failures.
Requirements management software provides the tools for us to execute that plan,
helping to reduce costs, accelerate time to market and improve quality control.
A requirements management plan (RMP) helps explain how you will receive, analyse,
document and manage all of the requirements within a project. The plan usually covers
everything from initial information gathering of the high-level project to more detailed
product requirements that could be gathered throughout the lifecycle of a project. Key
items to define in a requirements management plan are the project overview,
requirements gathering process, roles and responsibilities, tools, and traceability.
When looking for requirements management tools, there are a few key features to look
for.
• Query stakeholders after implementation on needed changes to requirements
• Utilize test management to verify and validate system requirements
• Assess impact of changes
• Revise requirements
• Document changes
By following these steps, engineering teams are able to tackle the complexity inherent in
developing smart connected products. Using a requirements management solution helps
to streamline the process, optimizing speed to market and expanding opportunities while
improving quality.
Requirements Attributes
Well-written requirements are typically:
• Specific
• Testable
• Clear and concise
• Accurate
• Understandable
• Feasible and realistic
• Necessary
Benefits Of Requirements Management
• Minimized risk for safety-critical products
• Faster delivery
• Reusability
• Traceability
• Requirements being tied to test cases
• Global configuration management
Who is responsible for requirements management?
The product manager is typically responsible for curating and defining requirements.
However, requirements can be generated by any stakeholder, including customers,
partners, sales, support, management, engineering, operations and product team
members. Constant communication is necessary to ensure the engineering team
understands changing priorities.
• Facilitate continuous communication between development teams, stakeholders,
and interested parties
Software Architecture
It refers to the high-level structure of the software and to the discipline of creating such
structures. It serves as a blueprint of the system: it defines the structure that meets the
technical requirements. A software architecture typically describes:
• Software components
• Details about data structures and algorithms
• Relationships among components
• Data flow, control flow and dependencies from one component to another
Software architecture directly impacts software quality in each and every sense. Do not
confuse it with software design; although the architecture sometimes serves as the
software design, there is still a difference between software architecture and software
design. Typical users of an architecture description include:
• Project Manager
• Software Developer
• Security Expert
• Tester
• Anyone else who wants to make some improvement by looking at the architecture
can also use it.
• Single process.
1. Design
2. Reuse:
Architecture descriptions can help software reuse. Reuse is considered one of the main
techniques by which productivity can be improved, thereby reducing the cost of software.
The software engineering world has, for a long time, been working towards a discipline
where software can be assembled from parts that are developed by different people and
are available for others to use. If one wants to build a software product in which existing
components may be reused, then architecture becomes the key point at which reuse at
the highest-level is decided. The architecture has to be chosen in a manner such that the
components that have to be reused can fit properly and together with other components
that may be developed, they provide the features that are needed.
3. Construction and Evolution
As architecture partitions the system into parts, the partitioning provided by the
architecture can naturally be used for constructing the system, which also requires that the
system be broken into parts such that different teams (or individuals) can separately work
on different parts. A suitable partitioning in the architecture can provide the project with
the parts that need to be built to build the system. As, almost by definition, the parts
specified in an architecture are relatively independent (the dependence between parts
coming through their relationships), they can be built independently. Not only does
architecture guide the development, it also establishes constraints: the system should be
constructed in a manner that preserves the structures chosen during architecture
creation. That is, the chosen parts are present in the final system and they interact in the
specified manner.
4. Analysis
It is highly desirable if some important properties about the behavior of the system can
be determined before the system is actually built. This will allow the designers to consider
alternatives and select the one that will best suit the needs. Many engineering disciplines
use models to analyse design of a product for its cost, reliability, performance, etc.
Architecture opens such possibilities for software also. It is possible (though the methods
are not fully developed or standardized yet) to analyse or predict the properties of the
system being built from its architecture. For example, the reliability or the performance of
the system can be analysed. Such an analysis can help determine whether the system
will meet the quality and performance requirements, and if not, what needs to be done to
meet the requirements.
Design Expertise
• Lead the development team and coordinate the development efforts for the integrity
of the design.
• Expert on the system being developed and plan for software evolution.
• Coordinate the definition of domain model for the system being developed.
Technology Expertise
Methodological Expertise
• Choose the appropriate approaches for development that helps the entire team.
• Facilitate the technical work among team members and reinforce the trust
relationship in the team.
• Protect the team members from external forces that would distract them and bring
less value to the project.
Software Architecture is at a higher level of abstraction than the Software Design.
Software Architecture is concerned with issues beyond the data structures and
algorithms used in the system.
Software Architecture shows how the different modules of the system communicate with
each other and with other systems: What language is to be used? What kind of data
storage is present? What recovery systems are in place? Like design patterns, there are
architectural patterns, such as the 3-tier layered design.
Software design is about designing the individual modules / components: What are the
responsibilities and functions of module X or class Y? What can it do, and what can it not?
What design patterns can be used? It produces UML diagrams, flow charts, or simple
wireframes (for the UI) for a specific module / part of the system.
Software Architecture is the design of the entire system, while Software Design
emphasizes on a specific module / component / class level.
Software Architecture is “what” we are building. Software Design is “how” we are building.
The differences can be summarized as follows:
• Software Design is about how we want to achieve the goal; Software Architecture is
more about what we want the system to do.
• Software Design works at the implementation level; Software Architecture works at the
structure level.
• Software Design deals with detailed properties; Software Architecture deals with
fundamental properties.
• Software Design uses guidelines; Software Architecture defines guidelines.
• Software Design is communication with the developers; Software Architecture is
communication with the business stakeholders.
• Software Design avoids uncertainty; Software Architecture manages uncertainty.
• Software Design helps to implement the software; Software Architecture helps to define
the high-level infrastructure of the software.
• In one word, the level of software design is implementation; the level of software
architecture is structure.
Architecture View Model
A model is a complete, basic, and simplified description of the software architecture,
composed of multiple views, each from a particular perspective or viewpoint.
• The logical view or conceptual view − The logical view is concerned with the
functionality that the system provides to end-users. It describes the object model of
the design. Class diagrams and state diagrams are examples of UML diagrams
used to depict the logical view.
• The physical view − The physical view depicts the system from a system engineer's
point of view. It describes the mapping of software onto hardware and reflects its
distributed aspect. It is concerned with the topology of software components on the
physical layer as well as the physical connections between these components. UML
diagrams used to represent the physical view include the deployment diagram.
• Scenario view − This view model can be extended by adding one more view,
called the scenario view or use case view, for end-users or customers of software
systems. It is coherent with the other four views and is utilized to illustrate the
architecture, serving as the "plus one" view of the (4+1) view model.
The five views can be summarized as follows:
• Logical view: shows the components (objects) of the system as well as their
interactions.
• Process view: shows the processes / workflow rules of the system and how those
processes communicate; it focuses on the dynamic view of the system.
• Development view: gives the building-block view of the system and describes the
static organization of the system modules.
• Physical view: shows the installation, configuration and deployment of the software
application.
• Scenario view: shows that the design is complete by performing validation and
illustration.
Component and Connector View and its Architecture Style
Component-and-connector (C&C) views define models consisting of elements that have
some runtime presence, such as processes, objects, clients, servers, and data stores.
Component and Connector (C&C) architecture view of a system has two main elements—
components and connectors. Components are usually computational elements or data
stores that have some presence during the system execution. Connectors define the
means of interaction between these components.
A C&C view of the system defines the components, and which component is connected
to which and through what connector. A C&C view describes a runtime structure of the
system—what components exist when the system is executing and how they interact
during the execution. The C&C structure is essentially a graph, with components as nodes
and connectors as edges. C&C view is perhaps the most common view of architecture
and most box-and-line drawings representing architecture attempt to capture this view.
Most often when people talk about the architecture, they refer to the C&C view. Most
architecture description languages also focus on the C&C view.
Components
Components are generally units of computation or
data stores in the system. A component has a
name, which is generally chosen to represent the
role of the component or the function it performs.
A component has interfaces through which it communicates with other components.
The interfaces are sometimes called ports.
It would be useful if there were a standard list of symbols that could be used to build an
architecture diagram. However, as there is no standard list of component types, no
such list of symbols exists.
Connectors
The different components of a system are likely to interact while the system is in operation
to provide the services expected of the system. After all, components exist to provide
parts of the services and features of the system, and these must be combined to deliver
the overall system functionality. For composing a system from its components,
information about the interaction between components is necessary.
Note that connectors need not be binary; a connector may provide an n-way
communication between multiple components. For example, a broadcast bus may be
used as a connector, which allows a component to broadcast its message to all the other
components.
The figure below illustrates a primary presentation of a C&C view as one might
encounter it in a typical description of a system's runtime architecture.
A bird's-eye view of a system as it might appear during runtime. This system contains a
shared repository that is accessed by servers and an administrative component. A set of
client tellers can interact with the account repository servers and communicate among
themselves through a publish-subscribe connector.
Each of the three types of connectors shown in Figure represents a different form of
interaction among the connected parts. The client-server connector allows a set of
concurrent clients to retrieve data synchronously via service requests. This variant of
the client-server style supports transparent failover to a backup server. The database
access connector supports authenticated administrative access for monitoring and
maintaining the database. The publish-subscribe connector supports asynchronous
event announcement and notification.
Each of these connectors represents a complex form of interaction and will likely require
nontrivial implementation mechanisms. For example, the client-server connector type
represents a protocol of interaction that prescribes how clients initiate a client-server
session, constraints on the ordering of requests, how/when failover is achieved, and how
sessions are terminated. Implementation of this connector will probably involve runtime
mechanisms that detect when a server has gone down, queue client requests, handle
attachment and detachment of clients, and so on. Note also that connectors need not
be binary: two of the three connector types in the figure can involve more than two
participants.
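To illustrate the last point, a minimal publish-subscribe connector is sketched below in Python. It is a synchronous, in-process toy with invented names; a real connector of this type would normally be asynchronous and distributed.

# A toy publish-subscribe connector: any number of components can
# subscribe to an event, so the connector is not binary.
class PubSubConnector:
    def __init__(self):
        self._subscribers = {}                # event name -> callbacks

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        for callback in self._subscribers.get(event, []):
            callback(payload)                 # notify every subscriber

bus = PubSubConnector()
bus.subscribe("account_updated", lambda p: print("teller A sees", p))
bus.subscribe("account_updated", lambda p: print("teller B sees", p))
bus.publish("account_updated", {"account": 42})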
N-Tier Architecture
Definition of N-Tier Architecture
N-tier architecture is also called multi-tier architecture because the software is
engineered to have the processing, data management, and presentation
functions physically and logically separated. That means that these different
functions are hosted on several machines or clusters, ensuring that services are
provided without resources being shared and, as such, these services are delivered
at top capacity. This separation makes managing each separately easier since doing
work on one does not affect the others, isolating any problems that might occur.
Not only does your software gain from being able to get services at the best possible
rate, but it’s also easier to manage. This is because when you work on one section, the
changes you make will not affect the other functions. And if there is a problem, you can
easily pinpoint where it originates.
A More In-Depth Look at N-Tier Architecture
The most common form of n-tier architecture divides an application into three tiers; it is
the physical separation of the different parts of the application. These are:
1. the presentation tier,
2. the logic tier, and
3. the data tier.
How It Works and Examples of N-Tier Architecture
The presentation tier. The presentation tier is the user interface. This is what the
software user sees and interacts with. This is where they enter the needed information.
This tier also acts as a go-between for the data tier and the user, passing on the user’s
different actions to the logic tier.
The application logic tier. The application logic tier is where all the “thinking” happens,
and it knows what is allowed by your application and what is possible, and it makes other
decisions. This logic tier is also the one that writes and reads data into the data tier.
The data tier. The data tier is where all the data used in your application is stored. You
can securely store data on this tier, perform transactions, and even search through
volumes and volumes of data in a matter of seconds.
Just imagine surfing on your favorite website. The presentation tier is the Web application
that you see. It is shown on a Web browser you access from your computer, and it has
the CSS, JavaScript, and HTML codes that allow you to make sense of the Web
application. If you need to log in, the presentation tier will show you boxes for username,
password, and the submit button. After filling out and then submitting the form, all that will
be passed on to the logic tier. The logic tier will have the JSP, Java Servlets, Ruby, PHP
and other programs. The logic tier would be run on a Web server. And in this example,
the data tier would be some sort of database, such as a MySQL, NoSQL, or PostgreSQL
database. All of these are run on a separate database server. Rich Internet applications
and mobile apps also follow the same three-tier architecture.
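The separation of concerns can be sketched in a few lines of Python. This is an in-process toy with invented names; in a real deployment each tier would run on its own server.

# Data tier: stores and retrieves data (a dict stands in for a database).
USERS = {"alice": "secret"}

def find_password(username):
    return USERS.get(username)

# Logic tier: enforces the rules; only this tier touches the data tier.
def authenticate(username, password):
    stored = find_password(username)
    return stored is not None and stored == password

# Presentation tier: collects input and shows results; no business rules.
def login_form(username, password):
    if authenticate(username, password):
        return "Welcome, " + username
    return "Invalid username or password"

print(login_form("alice", "secret"))   # Welcome, alice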
There are several benefits to using n-tier architecture for your software. These are
scalability, ease of management, flexibility, and security.
• Secure: You can secure each of the three tiers separately using different methods.
• Easy to manage: You can manage each tier separately, adding or modifying each
tier without affecting the other tiers.
• Scalable: If you need to add more resources, you can do it per tier, without affecting
the other tiers.
• Flexible: Apart from isolated scalability, you can also expand each tier in any
manner that your requirements dictate.
In short, with n-tier architecture, you can adopt new technologies and add more
components without having to rewrite the entire application or redesign your whole
software, thus making it easier to scale or maintain. Meanwhile, in terms of security, you
can store sensitive or confidential information in the logic tier, keeping it away from the
presentation tier, thus making it more secure.
And there are n-tier architecture models that have more than three tiers. Examples are
applications that have these tiers:
• Business domain – the tier that would host Java, DCOM, CORBA, and other
application server objects.
Because you are going to work with several tiers, you need to make sure that network
bandwidth and hardware are fast. If not, the application’s performance might be slow.
Also, this would mean that you would have to pay more for the network, the hardware,
and the maintenance needed to ensure that you have better network bandwidth.
Also, use as few tiers as possible. Remember that each tier you add to your software
or project means an added layer of complexity, more hardware to purchase, and
higher maintenance and deployment costs. For an n-tier application to make sense,
it should have the minimum number of tiers needed to still enjoy the scalability, security
and other benefits brought about by using this architecture. If you need only three tiers,
don't deploy four or more tiers.
Deployment View
The Deployment view focuses on aspects of the system that are important after the
system has been tested and is ready to go into live operation. This view defines the
physical environment in which the system is intended to run, including the hardware
environment your system needs (e.g., processing nodes, network interconnections, and
disk storage facilities), the technical environment requirements for each node (or node
type) in the system, and the mapping of your software elements to the runtime
environment that will execute them.
The deployment view shows the physical distribution of processing within the system.
The Deployment viewpoint applies to any information system with a required deployment
environment that is not immediately obvious to all of the interested stakeholders. This
includes the following scenarios:
• Systems with complex runtime dependencies (e.g., particular third-party software
packages are needed to support the system)
• Systems with complex runtime environments (e.g., elements are distributed over a
number of machines)
• Situations where the system may be deployed into a number of different environments
and the essential characteristics of the required environments need to be clearly
illustrated (which is typically the case with packaged software products)
• Systems that need specialist or unfamiliar hardware or software in order to run.
Most large information systems fall into one of these groups, so you will almost always
need to create a Deployment view.
Definition: Describes the environment into which the system will be deployed, including
the dependencies the system has on its runtime environment.
Concerns:
• required runtime platform
• specification and quantity of hardware or hosting required
• third-party software requirements
• technology compatibility
• network requirements
• network capacity required
• physical constraints
Models:
• runtime platform models
• network models
• technology dependency models
• intermodel relationships
Deployment Diagram for Library Management System
for the desired analysis, then a view should be created that can be used for such an
allocation and analysis.
Documenting Architecture Design
Introduction: When the design is over, the architecture has to be properly communicated to
all stakeholders for negotiation and agreement. This requires that the architecture be precisely
documented with enough information to perform the types of analysis the different stakeholders
wish to make to satisfy themselves that their concerns have been adequately addressed. Without
a properly documented description of the architecture, it is not possible to have a clear common
understanding. Hence, properly documenting architecture is as important as creating one.
Just as different projects require different views, different projects will need different levels of
detail in their architecture documentation. In general, however, a document describing the
architecture should contain the following:
A pictorial representation is not a complete description of the view. It gives an intuitive idea of
the design, but is not sufficient for providing the details. For example, the purpose and
functionality of a module or a component is indicated only by its name, which is not sufficient.
Hence, supporting documentation is needed for the view diagrams. This supporting
documentation should have some or all of the following: -
• Element Catalog: Provides more information about the elements shown in the
primary representation. Besides describing the purpose of the element, it should
also describe the element's interfaces (remember that all elements have interfaces
through which they interact with other elements). All the different interfaces
provided by the elements should be specified. Interfaces should have unique
identity, and the specification should give both syntactic and semantic
information. Syntactic information is often in terms of signatures, which describe
all the data items involved in the interface and their types. Semantic information
must describe what the interface does. The description should also clearly state
the error conditions that the interface can return. (A sketch of one such
interface entry is given after this list.)
• Behaviour: A view gives the structural information; it does not represent the actual
behaviour or execution. In a structure, all possible interactions during an execution
are shown, but sometimes it is necessary to get some idea of the actual behaviour
of the system in particular scenarios. Such a description is useful for arguing about
properties like deadlock. A behaviour description can therefore be provided to help
aid understanding of the system execution. Often diagrams like collaboration
diagrams or sequence diagrams are used.
• Other Information: This may include a description of all those decisions that
have not been taken during architecture creation but have been deliberately
left for the future, for example, the choice of a server or protocol. If this is done,
it must be specified, as fixing these decisions will have an impact on the architecture.
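As a sketch only, an element-catalog entry for a single interface might look like the Python specification below. The component and interface names are invented; the point is that the signature carries the syntactic information while the description carries the semantics and error conditions.

# Hypothetical catalog entry for one interface (port) of a ResultServer
# component; this is a specification, deliberately not an implementation.
class InvalidRollNumberError(Exception):
    """Error condition the interface can signal."""

def get_cgpa(roll_number: str) -> float:
    """Syntactic information: the signature above names the data items
    and their types.

    Semantic information: returns the cumulative grade point average of
    the student identified by roll_number, on a 0.0-10.0 scale.

    Error condition: raises InvalidRollNumberError if roll_number does
    not identify a registered student.
    """
    raise NotImplementedError  # behaviour is specified, not implemented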
Architecture documentation is in many ways similar to the documentation we write in other facets
of our software development projects.
1. Write for the reader: The document should be written from the point of view of the
reader, not the writer. The document's efficiency is optimized if we make things
easy for the reader.
2. Avoid Repetition: Each kind of information should be recorded in exactly one
place. This makes the documentation easier to use and much easier to keep
current as it evolves. It also avoids confusion, because information that is repeated
is often repeated in a slightly different form, thus confusing things.
6. Keep it Current: Documentation that is incomplete, out of date, does not reflect
the truth, and does not obey its own rules for form and internal consistency will not be
used. Documentation that is kept current and accurate will be used.
Evaluating Architectures
Because the architecture determines whether certain quality attributes can be
accomplished, its evaluation at an early stage is a crucial task in a software development
project. It is possible to verify whether the architectural decisions are appropriate at an
early stage, without waiting for the system to be developed and deployed, and to predict
whether a system will have the required quality attributes. The goal is to determine the
degree to which a software architecture or an architectural style satisfies the quality
requirements. Architectural evaluation has saved significant amounts of money by
detecting, in the early stages of development, that the system under development could
not achieve the quality requirements it was supposed to.
There are software architecture evaluation methods that address one or more of the
following quality attributes: performance, maintainability, testability, and portability. The
IEEE standard 610.12-1990 defines them as follows.
Maintainability: "The ease with which a software system or component can be modified
to correct faults, improve performance or other attributes, or adapt to a changed
environment."
Performance: there are many aspects of performance, e.g., latency, throughput, and
capacity.
Testability: "The degree to which a system or component facilitates the establishment
of test criteria and the performance of tests to determine whether those criteria have
been met." We interpret this as the effort needed to validate the system against the
requirements; a system with high testability can be validated quickly.
Portability: "The ease with which a system or component can be transferred from one
hardware or software environment to another." We interpret portability as covering not
only different hardware platforms and operating systems, but also different virtual
machines and versions of frameworks.
These four quality attributes are selected not only for their importance to software
developing organizations in general, but also for their relevance to organizations
developing software in the real-time systems domain in a cost-effective way, e.g., by using
a product-line approach. Performance is important since a system must fulfil its
performance requirements; if it does not, the system will be of limited use, or not used at
all. The long-term focus forces the system to be maintainable and testable. It also makes
portability important, since computer hardware technology moves quickly and it is not
always the case that the initial hardware is available after a number of years.
The following methods and approaches can be applied for architecture-level evaluation
of performance, maintainability, testability, or portability.
1. It starts with the documentation of the architecture in a way that all
participants of the evaluation can understand.
2. Scenarios are then developed that describe the intended use of the system.
The scenarios should represent all stakeholders that will use the system.
3. The scenarios are then evaluated and a set of scenarios that represents the
aspect that we want to evaluate is selected.
5. The scenarios are then ordered according to priority, and their expected
impact on the architecture.
1. The first one is to collect scenarios that operationalize the requirements for
the system (both functional and quality requirements).
3. The third step is to describe the architecture using views that are relevant for
the quality attributes that were identified in step one.
4. Step four is to analyse the architecture with respect to the quality attributes.
The quality attributes are evaluated one at a time.
6. The sixth and final step is to identify and evaluate trade-off points, i.e.,
variation points that are common to two or more quality attributes.
The method provides more detailed descriptions of the steps involved in the process than
SAAM does, and tries to make it easier to repeat evaluations and compare different
architectures. It makes use of structural metrics and bases the evaluation of the scenarios
on quantification of the architecture.
RARE/ARCADE RARE and ARCADE are part of a toolset called SEPA (Software
Engineering Process Activities). RARE (Reference Architecture Representation
Environment) is used to specify the software architecture and ARCADE is used for
simulation-based evaluation of it. The goal is to enable automatic simulation and
interpretation of a software architecture that has been specified using the RARE
environment. An architecture description is created using the RARE environment. The
architecture description together with descriptions of usage scenarios are used as input
to the ARCADE tool. ARCADE then interprets the description and generates a simulation
model. The simulation is driven by the usage scenarios. RARE is able to perform static
analysis of the architecture, e.g., coupling. ARCADE makes it possible to evaluate
dynamic attributes such as performance and reliability of the architecture. The RARE and
ARCADE tools are tightly integrated to simplify an iterative refinement of the software
architecture. The method has, as far as we know, only been used by the authors.
5. Argus-I
Layered queuing network models are very general and can be used to evaluate many
types of systems. The model describes the interactions between components in the
architecture and the processing times required for each interaction. The creation of
the models requires detailed knowledge of the interaction of the components, together
with behavioural information, e.g., execution times or resource requirements. The
execution times can either be identified by, e.g., measurements, or estimated. The more
detailed the model is the more accurate the simulation result will be. The goal when using
a queuing network model is often to evaluate the performance of a software architecture
or a software system. Important measures are usually response times, throughput,
resource utilization, and bottleneck identification.
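A full layered queuing network is too large for a short example, but the flavour of such models can be shown with a single queue. The Python sketch below computes the classic M/M/1 response-time formula; the arrival rate and service time are made-up numbers.

# One M/M/1 queue, the simplest building block of a queuing network.
arrival_rate = 40.0            # requests per second (assumed)
service_time = 0.020           # seconds of processing per request (assumed)

utilization = arrival_rate * service_time        # fraction of time busy
assert utilization < 1.0, "server is saturated"

# Mean response time (queueing delay plus service) for an M/M/1 queue.
response_time = service_time / (1.0 - utilization)

print(f"utilization   = {utilization:.0%}")              # 80%
print(f"response time = {response_time * 1000:.1f} ms")  # 100.0 ms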
7. SAM
SPE relies on two different models of the software system, i.e., a software execution
model and a system execution model. The software execution model models the
software components, their interaction, and the execution flow. In addition, key resource
requirements for each component can also be included, e.g., execution time, memory
requirements, and I/O operations. The software execution model predicts the
performance without taking contention for hardware resources into account. The system
execution model is a model of the underlying hardware. Examples of hardware
resources that can be modelled are processors, I/O devices, and memory. Further, the
waiting time and competition for resources are also modelled. The software execution
model generates input parameters to the system execution model. The system execution
model can be solved by using either mathematical methods or simulations. The method
can be used to evaluate various performance measures, e.g., response times,
throughput, resource utilization, and bottleneck identification. The method is primarily
targeted at performance evaluation. However, the authors argue that their method can
be used to evaluate other quality attributes in a qualitative way as well [39]. The method
has been used in several studies by the authors, but does not seem to have been used
by others.
While designing, the process should not suffer from "tunnel vision", which
means that it should not focus only on completing or achieving the aim, but
should consider alternative approaches, judging each based on the
requirements of the problem and the resources available to do the job.
The design process should be traceable to the analysis model, which means it
should satisfy all the requirements captured during analysis so that a high-quality
product can be developed.
Modularization
Modularization is a technique to divide a software system into multiple discrete and
independent modules, which are expected to be capable of carrying out task(s) independently.
These modules may work as basic constructs for the entire software. Designers tend to design
modules such that they can be executed and/or compiled separately and independently.
Modular design naturally follows the rules of the 'divide and conquer' problem-solving
strategy, and there are many other benefits attached to the modular design of software.
However, as the number of modules grows, the effort associated with integrating the
modules also grows.
Advantage of modularization:
1) Coupling: -
Two modules are considered independent if one can function completely without the presence
of the other. Obviously, if two modules are independent, they are solvable and modifiable
separately. However, all the modules in a system cannot be independent of each other, as they
must interact so that together they produce the desired external behavior of the system. The
more connections between modules, the more dependent they are in the sense that more
knowledge about one module is required to understand or solve the other module. Hence, the
fewer and simpler the connections between modules, the easier it is to understand one without
understanding the other. The notion of coupling attempts to capture this concept of "how
strongly" different modules are interconnected.
Coupling is a measure of interdependence among modules. In general, the more we must
know about module A in order to understand module B, the more closely connected A is to B.
"Highly coupled" modules are joined by strong interconnections, while "loosely coupled"
modules have weak interconnections. Independent modules or uncoupled modules have no
interconnections.
Types of Coupling
The different types of coupling are content, common, external, control, stamp and data
coupling. The strength of coupling, from the lowest (best) to the highest (worst), is given
in the figure below:
Data coupling:
In this type of coupling, two modules interact by exchanging or passing data as parameters.
Stamp Coupling:
Two modules are stamp coupled if they communicate via a passed data structure which contains
more information than necessary for the modules to perform their functions.
Here the student record contains name, roll number, address, outside activities, medical
information, contact number, date of birth, etc., in addition to the academic performance
information. When we pass the Student Record data structure to the Calculate CGPA
module, we pass a great deal of unnecessary information in addition to the required
information, i.e., the academic performance information.
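The difference can be sketched in Python; the record fields below are invented for illustration.

# Stamp coupling: the whole record is passed, though only the grade
# points are needed.
def calculate_cgpa_stamp(student_record):
    grades = student_record["academic"]["grade_points"]
    return sum(grades) / len(grades)

# Data coupling: only the elementary data actually required is passed.
def calculate_cgpa(grade_points):
    return sum(grade_points) / len(grade_points)

record = {
    "name": "A. Student", "roll_no": 17, "address": "...",
    "academic": {"grade_points": [8, 9, 7, 10]},
}
print(calculate_cgpa_stamp(record))     # stamp coupled: 8.5
print(calculate_cgpa([8, 9, 7, 10]))    # data coupled: 8.5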
Control Coupling:
Two modules are control coupled if they communicate using at least one "control flag".
e.g., when one module must perform operations in a fixed order, but the order is controlled
elsewhere.
Problems with control coupling:
• Modules are not independent: the called module must know the internal structure and
logic of the calling module.
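A minimal Python sketch of a control flag follows; the names are invented.

# report_type is a control flag: the caller steers the callee's logic.
def print_report(record, report_type):
    if report_type == "summary":
        print(record["name"], record["cgpa"])
    elif report_type == "detailed":
        print(record)

# The caller must know which flag values the callee understands:
print_report({"name": "A. Student", "cgpa": 8.5}, "summary")

# A less coupled alternative exposes one function per task, so neither
# module depends on the other's internal branching:
def print_summary(record):
    print(record["name"], record["cgpa"])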
External Coupling:
A form of coupling in which a module has a dependency on another module external to
the software being developed, or on a particular type of hardware. This is basically related
to communication with external tools and devices, e.g., the OS, shared libraries, or the
hardware.
Common Coupling:
Common coupling occurs when two or more modules access the same global data
variable or global data structure.
Problems:
• High potential for side effects
• Missing access control
• Modules are bound to the global structure
• Difficult to reuse, because one module is dependent on another through the
shared data
• The resulting code is unreadable; one must read the entire module to understand it
• It is difficult to determine all the modules that affect a data element, which reduces
maintainability: if we want to make changes in one module, we have to check all
the modules using the global data structure element
Content Coupling:
Content coupling occurs when one module directly refers to or modifies the inner
workings (code or data) of another module.
Problem:
Almost any change to one module requires changes to another module if they are
content coupled.
2. Cohesion
Cohesion is a measure of the degree to which the elements of a module are
functionally related. A strongly cohesive module implements functionality that is related to
one feature of the solution and requires little or no interaction with other modules.
Types of cohesion:
Functional Cohesion
A functionally cohesive module performs a single, well-defined task, e.g., Calculate_Sale_Tax.
Its advantages:
• More reusable
• Corrective maintenance is easier
• Easier to extend the product
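A minimal sketch of such a module in Python (the tax rate is a made-up constant):

TAX_RATE = 0.18   # assumed rate for this sketch

def calculate_sale_tax(amount):
    # Every element of this module contributes to the single task of
    # computing sales tax, so the module is functionally cohesive.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * TAX_RATE

print(calculate_sale_tax(100.0))  # 18.0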
Sequential Cohesion
Communicational Cohesion
Procedural Cohesion
A module has procedural cohesion if all the operations it performs are related to a sequence of
steps performed in the program.
e.g., a module has three functions: input data, validate data, and store data.
The functionality of the three is different, but they must follow the sequence: data cannot be
validated until it has been input, and it cannot be stored into the global variable until it has
been validated.
e.g., the sequence in the Report module of an examination system:
1) Calculate SGPA
2) Calculate CGPA
3) Print Student Record and CGPA
Procedures that are used one after another are kept together, even if one does not necessarily
provide input to the next.
Problem:
Temporal Cohesion
Here the elements of a component are related by timing. A module has temporal cohesion
when it performs a series of operations related in time: operations X and Y must both be
performed at around the same time.
Functions that are related by time are all placed in the same module. e.g., in a security
system, the alarm and the automatic telephone dialing unit are both placed in the same
module because they are related in time. As soon as the alarm rings, the automatic
telephone dialer gets connected, so both must be activated at the same time.
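A small Python sketch of this example, with hypothetical stand-ins for the device functions:

def ring_alarm():
    print("alarm ringing")

def dial_security():
    print("dialing the security number")

def on_intrusion():
    # Temporally cohesive module: the two operations are unrelated in
    # function but must run at the same moment.
    ring_alarm()
    dial_security()

on_intrusion()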
Logical Cohesion
In logical cohesion, several logically related functions or data elements are placed in the same
module or component. Here the elements of the component are related logically, but not
functionally. Several logically related elements are in the same component, and one of the
elements is selected by the caller.
e.g. Module Display Record
Display_Record
If record type is student then
Display Student Record
Else if record type is staff then
Display Staff Record
End if
End
They are logically related by their display function, so they are placed in the same component;
which type of record is to be displayed depends on the caller.
e.g., built-in library functions are placed in different header files according to logical cohesion:
all the input and output functions are placed in stdio.h and all mathematical functions in
math.h. The different functions perform different types of tasks, and which one is called
depends on the caller.
Problem:
Logical cohesion can be bad because you end up grouping functionality by its technical
kind rather than by the task it supports.
Coincidental Cohesion
The elements of a module are essentially unrelated by any common function, procedure, data
or anything.
e.g. File Processing Module
File_Processing
Open Employee update file
Read Employee Record
Print Page Heading
Open Employee Master File
Set Page Count to One
End
This is the weakest form of cohesion: the elements have no meaningful relationship.
Problem:
Difficult to maintain and understand, and not reusable.
Solution: Break module into separate modules each performing one task
Data Rate: The data rate of a loosely coupled system is low, while the data rate of a
tightly coupled system is high.
Cache Memory: In a loosely coupled system each process has its own cache memory;
in a tightly coupled system, cache memory is assigned to processes according to the
needs of processing.
There are many strategies or techniques for performing system design. They are:
Bottom-up approach:
The design starts with the lowest level components and subsystems. By using these
components, the next immediate higher level components and subsystems are created
or composed. The process is continued till all the components and subsystems are
composed into a single component, which is considered the complete system. The level
of abstraction grows as the design moves to higher levels.
When a new system needs to be created using the basic information of an existing
system, the bottom-up strategy suits the purpose.
Advantages:
Disadvantages:
Top-down approach:
Each system is divided into several subsystems and components. Each of the subsystem
is further divided into set of subsystems and components. This process of division
facilitates in forming a system hierarchy structure. The complete software system is
considered as a single entity and in relation to the characteristics, the system is split into
sub-system and component. The same is done with each of the sub-system.
This process is continued until the lowest level of the system is reached. The design is
started initially by defining the system as a whole and then keeps on adding definitions
of the subsystems and components. When all the definitions are combined together, it
turns out to be a complete system.
When the software solution needs to be developed from the ground level, top-down
design best suits the purpose.
• The main advantage of top down approach is that its strong focus on
requirements helps to make a design responsive according to its
requirements.
• Top-down design is more suitable when the software solution needs to
be designed from scratch and specific details are unknown.
Disadvantages:
Hybrid Design:
It is a combination of both the top-down and bottom-up design strategies. In this
approach, we can reuse the modules.
Pure top-down or pure bottom-up approaches are often not practical. For a bottom-up
approach to be successful, we must have a good notion of the top to which the design
should be heading. Without a good idea of the operations needed at the higher
layers, it is difficult to determine what operations the current layer should support.
For top-down approach to be effective, some bottom-up approach is essential for the
following reasons:
The hybrid approach became popular after the acceptance of reusability of modules.
Standard libraries, the Microsoft Foundation Classes (MFC), and object-oriented concepts
are steps in this direction.
Start with a high-level description of what the software / program does. Refine each part of the
description one by one by specifying in greater detail the functionality of each part. These
points lead to a top-down structure.
Mostly each module is used by at most one other module, and that module is called its parent
module.
Designing reusable modules means that modules may use several other modules to do their
required functions.
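As a small illustration of this top-down refinement, the Python sketch below starts from one high-level function and refines it into sub-functions; all names and the pass mark are invented.

def produce_result_report(marks):
    grades = compute_grades(marks)   # refinement of part 1
    print_report(grades)             # refinement of part 2

def compute_grades(marks):
    # Refines "compute grades"; a pass mark of 40 is assumed here.
    return {name: ("PASS" if m >= 40 else "FAIL") for name, m in marks.items()}

def print_report(grades):
    for name, grade in grades.items():
        print(name, grade)

produce_result_report({"A. Student": 72, "B. Student": 35})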
1. Data Flow Diagram (DFD):
A data flow diagram (DFD) maps out the flow of information for any process or system. It
uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data
inputs, outputs, storage points and the routes between each destination.
2. Data Dictionaries:
Data dictionaries are simply repositories to store information about all data items defined
in DFDs. At the requirements stage, data dictionaries contain data items. A data dictionary
entry includes the name of the item, aliases (other names for the item), description/purpose,
related data items, range of values, and data structure definition/form.
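One illustrative entry, written as a Python dictionary with invented values:

marks_entry = {
    "name": "marks",
    "aliases": ["score"],
    "description": "Marks obtained by a student in one subject",
    "related_data_items": ["roll_number", "subject_code"],
    "range_of_values": "0 to 100 (integer)",
    "structure": "marks = integer",
}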
3. Structure Charts:
Structure Chart represent hierarchical structure of modules. It breaks down the entire
system into lowest functional modules, describe functions and sub-functions of each
module of a system to a greater detail. Structure Chart partitions the system into black
boxes (functionality of the system is known to the users but inner details are unknown).
Inputs are given to the black boxes and appropriate outputs are generated.
Modules at the top level call modules at lower levels. Components are read from top to bottom
and left to right. When a module calls another, it views the called module as a black box,
passing the required parameters and receiving the results.
1. Module
• Control Module: A control module branches to more than one sub-module.
• Sub Module: A sub-module is a module which is part (a child) of another
module.
• Library Module: Library modules are reusable and can be invoked from any
module.
2. Conditional Call
It represents that the control module can select any of the sub-modules on the basis of some
condition.
4. Data Flow
It represents the flow of data between the modules. It is represented by a directed arrow with
an empty circle at the end.
5. Control Flow
It represents the flow of control between the modules. It is represented by a directed arrow
with a filled circle at the end.
6. Physical Storage
4. Pseudo Code:
Understanding the process of any type of software related activity simplifies its
development for the software developer, programmer and tester. Whether you are
executing functional testing, or making a test report, each and every action has a
process that needs to be followed by the members of the team. Similarly, Object
Oriented Design (OOD) too has a defined process, which if not followed rigorously,
can affect the performance as well as the quality of the software. Therefore, to assist
the team of software developers and programmers, here is the process of Object
Oriented Design (OOD):
o Create mirror classes i.e., for every business class identified and created,
create one access class.
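A minimal Python sketch of a business class and its mirror access class; the names are invented and the dictionary stands in for real persistence code.

class Student:                         # business class
    def __init__(self, roll_no, name):
        self.roll_no = roll_no
        self.name = name

class StudentAccess:                   # mirror (access) class
    def __init__(self):
        self._store = {}               # stand-in for a database

    def save(self, student):
        self._store[student.roll_no] = student

    def load(self, roll_no):
        return self._store[roll_no]

access = StudentAccess()
access.save(Student(17, "A. Student"))
print(access.load(17).name)            # A. Student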
1. Objects: All entities involved in the solution design are known as objects. For
example, person, banks, company, and users are considered as objects. Every
entity has some attributes associated with it and has some methods to perform
on the attributes.
2. Classes: A class is a generalized description of an object. An object is an
instance of a class. A class defines all the attributes, which an object can have
and methods, which represents the functionality of the object.
The various steps in the analysis and design of an object-oriented system are given in
the figure below:
UML is composed of three main building blocks: things, relationships, and
diagrams. These building blocks are combined to produce a complete UML model
diagram, and they play an essential role in developing UML diagrams.
The basic UML building blocks are listed below:
1. Things
2. Relationships
3. Diagrams
1. Things
a) Structural things
b) Behavioral things
c) Grouping things
d) Annotational things
a) Structural things
Actor: It is used in use case diagrams. It is an entity that interacts
with the system, for example, a user.
b) Behavioral things
They are the verbs that encompass the dynamic parts of a model and depict the
behaviour of a system. They involve state machines and interactions, which appear
in state machine, activity, and interaction diagrams.
State Machine: It defines a sequence of states that an entity goes through in
response to events during its lifetime.
c) Grouping things
It is a mechanism that binds the elements of the UML model together. In UML, the
package is the only thing which is used for grouping.
Package: A package is the only thing that is available for grouping behavioral and
structural things.
d) Annotational things
They are the explanatory parts of a UML model; a note is the only annotational thing,
used to attach comments, constraints and remarks to model elements.
2. Relationships
Association: A set of links that associates the entities of a UML model. It tells
how many elements are actually taking part in forming the relationship. An
association is denoted by a solid line connecting the related elements.
3. Diagrams
The diagrams are the graphical representation of the models, incorporating
symbols and text. Each symbol has a different meaning in the context of a UML
diagram. There are thirteen different types of UML diagram available in
UML 2.0, each with its own set of symbols, and each diagram falls into one
of the following three categories:
1. Structural Diagrams
o Class diagram
o Object diagram
o Package diagram
o Component diagram
o Deployment diagram
2. Behavioral Diagrams
o Activity diagram
o State machine diagram
o Use case diagram
3. Interaction Diagrams
o Timing diagram
o Sequence diagram
o Collaboration diagram
1. External Entity: An external entity is an entity that takes information from and gives
information to the system. It is represented by a rectangle.
2. Data Flow: The data passing from one place to another is shown by a data flow. A data
flow is represented by an arrow with some information written over it.
3. Process: It is also called the function symbol. It is used to process all the information. If
there are calculations, they are all done in the process part. It is represented by a circle,
with the name of the process and the level of the DFD written inside it.
4. Data Store: It is used to store information and retrieve the stored information. It
is represented by double parallel lines.
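For example, in a DFD for a simple banking scenario (illustrative, not from the text): the
Customer is an external entity that sends a "withdrawal request" data flow to a "Withdraw
Cash" process; the process reads and updates an "Accounts" data store and sends a
"cash / receipt" data flow back to the Customer.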
There are two primary diagrams that are used for dynamic modelling −
1) Interaction Diagrams
2) State Transition Diagrams
State
The state is an abstraction given by the values of the attributes that the object
has at a particular time period. It is a situation occurring for a finite time period in
the lifetime of an object, in which it fulfils certain conditions, performs certain
activities, or waits for certain events to occur. In state transition diagrams, a state
is represented by rounded rectangles.
Parts of a state
• Name − A string differentiates one state from another. A state may not have
any name.
• Entry/Exit Actions − It denotes the activities performed on entering and on
exiting the state.
• Internal Transitions − The changes within a state that do not cause a change
in the state.
• Sub–states − States within states.
The default starting state of an object is called its initial state. The final state
indicates the completion of execution of the state machine. In state transition
diagrams, the initial state is represented by a filled black circle. The final state is
represented by a filled black circle encircled within another unfilled black circle.
Transition
A transition denotes a change from one state of an object to another. It is triggered by an
event and is represented in a state diagram by an arrow from the source state to the target
state, labelled with the triggering event.
Events
Events are occurrences that can trigger a state transition of an object or a
group of objects. They have a location in time and space but do not have a time
period associated with them. Events are generally associated with some actions.
Examples of events are mouse click, key press, an interrupt, stack overflow, etc.
Events that trigger transitions are written alongside the arc of transition in state
diagrams.
Example
Considering the example shown in the above figure, the transition from Waiting
state to Riding state takes place when the person gets a taxi. Likewise, the final
state is reached when the person reaches the destination. These two occurrences can
be termed the events Get_Taxi and Reach_Destination. The following figure
shows the events in a state machine.
External events are those events that pass from a user of the system to the
objects within the system. For example, mouse click or key−press by the user are
external events.
Internal events are those that pass from one object to another object within a
system. For example, stack overflow, a divide error, etc.
Deferred Events
Deferred events are those which are not immediately handled by the object in the
current state but are lined up in a queue so that they can be handled by the object
in some other state at a later time.
Event Classes
Event class indicates a group of events with common structure and behaviour.
As with classes of objects, event classes may also be organized in a hierarchical
structure. Event classes may have attributes associated with them, time being an
implicit attribute. For example, we can consider the events of departure of a flight
of an airline, which we can group into the following class −
Flight_Departs (Flight_No, From_City, To_City, Route)
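In code, such an event class could be modelled as a simple record type; a hedged Python
sketch (the field values are invented, and the implicit time attribute is made explicit here):

    from dataclasses import dataclass

    # Event class: every flight-departure event shares this structure.
    @dataclass
    class FlightDeparts:
        flight_no: str
        from_city: str
        to_city: str
        route: str
        time: float = 0.0    # time is an implicit attribute of every event

    # One concrete event: an instance of the event class.
    event = FlightDeparts("AI-101", "Delhi", "Mumbai", "direct", time=9.5)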
Actions
Activity
Activity is an operation upon the states of an object that requires some time
period. They are the ongoing executions within a system that can be interrupted.
Activities are shown in activity diagrams that portray the flow from one activity to
another.
Action
An action is an atomic operation performed as a result of a transition; unlike an activity,
it completes instantaneously and cannot be interrupted.
Entry action is the action that is executed on entering a state, irrespective of the
transition that led into it.
Likewise, the action that is executed while leaving a state, irrespective of the
transition that led out of it, is called an exit action.
Scenario
A scenario is a description of a specified sequence of actions that illustrates one
particular behaviour of the system.
Software Design
Software design is a process to transform user requirements into some suitable form,
which helps the programmer in software coding and implementation.
For assessing user requirements, an SRS (Software Requirement Specification)
document is created whereas for coding and implementation, there is a need of more
specific and detailed requirements in software terms. The output of this process can
directly be used into implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves
the concentration from the problem domain to the solution domain. It tries to specify how to
fulfil the requirements mentioned in the SRS.
All the data flows, flowcharts, data structures, etc. are in these documents, so that
developers can understand how the system is expected to work with regard to
the features and the database design.
3) Detailed Design- Detailed design deals with the implementation part of what is
seen as a system and its sub-systems in the previous two designs. It is more
detailed towards modules and their implementations. It defines logical structure of
each module and their interfaces to communicate with other modules.
Detailed design is the specification of the internal elements of all major system
components: their properties, relationships, processing, and often their algorithms
and data structures.
According to the IEEE, detailed design is "the process of refining and expanding the
preliminary design phase (software architecture) of a system or component to the extent
that the design is sufficiently complete to be implemented."
During Detailed Design designers go deep into each component to define its
internal structure and behavioural capabilities, and the resulting design leads to
natural and efficient construction of software.
➢ After the architecture and requirements for assigned components are well
understood, the detailed design of software components can begin.
• Structural
• Behavioural
4. Data Design
• Database
➢ The most popular technique for evaluating detailed designs involves Technical
Reviews. When conducting technical reviews, keep in mind the following:
• Send the review notice early, so that others have appropriate time
to thoroughly review the design.
• Include a technical expert in the review team, as well as stakeholders of
your design.
• Include a member of the software quality assurance or testing team in the
review.
• During the review, focus on the important aspects of your designs; those
that show how your design helps meet functional and non-functional
requirements.
• Document the review process.
o Make sure that any action items generated during the review are
captured and assigned for processing.
Program Design Language (PDL)
PDL was originally developed by the company Caine, Farber & Gordon and has been
modified substantially since they published their initial paper on it in 1975. It has been
described in some detail by Steve McConnell in his book Code Complete.
PDL is used to express the design in a language that is as precise and unambiguous
as possible without having too much detail and that can be easily converted into an
implementation.
PDL has the overall outer syntax of a structured programming language and a
vocabulary of a natural language.
PDL Example:
Consider the problem of reading records from a file. If file reading is not completed
and there is no error in the record, then print the information in the record; otherwise print
that there is an error in reading the record. This process continues until the whole file
has been read.
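A sketch of this design in PDL might read as follows (the wording and layout are
illustrative, not taken from the original paper):

    PROCEDURE print_file_records
        DO WHILE file reading is not completed
            read the next record from the file
            IF there is no error in the record THEN
                print the information of the record
            ELSE
                print "error in reading the record"
            ENDIF
        ENDDO
    END print_file_records

Note how the outer keywords (PROCEDURE, DO WHILE, IF ... THEN ... ELSE) are formal, while
the statements inside them are plain natural language. The sketch uses the three basic PDL
constructs: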
1. Sequence construct: Statements are executed one after another, in the order in which
they are written.
2. If construct: The if construct is used to control the flow of execution down one
of several paths, depending on a condition (IF ... THEN ... ELSE ... ENDIF).
3. Repetition construct: A block of statements is executed repeatedly as long as a
condition holds (DO WHILE ... ENDDO).
Advantages of PDL :
• It can be embedded with source code, therefore easy to maintain.
• It enables declaration of data as well as procedure.
• It is the cheapest and most effective way to change the program architecture,
since changes are made before any code is written.
The basic goal in detailed design is to specify the logic for the different modules that
have been specified during system design. Specifying the logic will require developing
an algorithm that will implement the given specifications. The term algorithm is quite
general and is applicable to a wide variety of areas. Essentially, an algorithm is a
sequence of steps that need to be performed to solve a given problem.
There are a number of steps that one has to perform while developing an algorithm.
• The starting step in the design of algorithms is statement of the problem. The
problem for which an algorithm is being devised has to be precisely and clearly
stated and properly understood by the person responsible for designing the
algorithm. For detailed design, the problem statement comes from the system
design. That is, the problem statement is already available when the detailed
design of a module commences.
• The next step is development of a mathematical model for the problem. In
modelling, one has to select the mathematical structures that are best suited for
the problem. It can help to look at other similar problems that have been solved.
In most cases, models are constructed by taking models of similar problems and
modifying the model to suit the current problem.
• The next step is the design of the algorithm. During this step the data structure
and program structure are decided.
• Once the algorithm is designed, its correctness should be verified.
Stepwise Refinement
The most common method for designing algorithms or the logic for a module is the
stepwise refinement technique, in which the solution is developed through repeated
refinement or decomposition into smaller, more detailed steps.
As an example, consider computing the average of a set of grades entered by the user.
A first refinement of the "compute the average" step might be:
Compute:
add the grades
count the grades
divide the sum by the count
We realized this would be a problem, because doing all the input before doing the
sum and the count would require us to have enough variables for all the grades
(but the number of grades to be entered is not known in advance). So, we revised
our breakdown of "steps" into three: initialize variables, grade entry, and
compute average.
So, now we can break down these 3 steps into more detail. The input
step can roughly break down this way:
loop until the user enters the sentinel value (-1 would be good)
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
Written as a post-test (do ... while) loop, this becomes:
do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)
If we look at this format, we realize that the "adding" and "counting" steps should only
be done if the user entry is a grade, and NOT when it's the sentinel value. So, we can
add one more refinement:
do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
if the entered value is a GRADE (not the sentinel value)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)
This breakdown helps us see what variables are needed, so the declare and initialize
variables step can be now made more specific:
initialize variables:
a grade variable (to store user entry)
a sum variable (initialized to 0)
a counter (initialized to 0)
grade entry:
---------
do
prompt user to enter a grade (give them needed info, like -1 to quit)
allow user to type in a grade (store in a variable)
if the entered value is a GRADE (not the sentinel value)
add the grade into a variable used for storing the sum
add 1 to a counter (to track how many grades)
while user has NOT entered the sentinel value (-1 would be good)
Compute average:
---------
divide the sum by the counter
print the answer
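The fully refined pseudocode now translates almost line by line into a runnable program.
A minimal Python version (our own illustrative code, mirroring the structure above):

    SENTINEL = -1                          # value the user enters to quit

    # initialize variables
    total = 0.0                            # the sum variable (initialized to 0)
    count = 0                              # the counter (initialized to 0)

    # grade entry
    while True:
        grade = float(input("Enter a grade (-1 to quit): "))
        if grade != SENTINEL:              # only real grades are summed and counted
            total = total + grade
            count = count + 1
        if grade == SENTINEL:              # mirrors "while user has NOT entered the sentinel"
            break

    # compute average
    if count > 0:
        print("Average:", total / count)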
A state diagram for an object does not represent all the actual states of the
object, as there are many possible states. A state diagram attempts to represent only the
logical states of the object. A logical state of an object is a combination of all those states
from which the behaviour of the object is similar for all possible events. Two logical states
will have different behaviour for at least one event. For example, for an object that represents
a stack, all states that represent a stack of size more than 0 and less than some defined
maximum are similar as the behaviour of all operations defined on the stack will be similar in
all such states (e.g., push will add an element, pop will remove one, etc.). However, the state
representing an empty stack is different, as the behaviour of the push and pop operations is
different there (an error message may be returned in case of pop). Similarly, the state
representing a full stack is different. The state model for this bounded-size stack is shown in
the figure.
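A minimal sketch of such a bounded stack in Python (illustrative code, not from the text):
the empty and full states respond to pop and push differently from every in-between state,
which is exactly why the model needs only these three logical states.

    class BoundedStack:
        def __init__(self, maximum):
            self.items = []                # current contents
            self.maximum = maximum         # the defined maximum size

        def push(self, x):
            if len(self.items) == self.maximum:    # "full" state: push behaves differently
                raise OverflowError("stack is full")
            self.items.append(x)

        def pop(self):
            if not self.items:                     # "empty" state: pop behaves differently
                raise IndexError("stack is empty")
            return self.items.pop()                # same behaviour in all "partial" states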
There are three common ways of verifying a detailed design:
1. Design Walkthroughs
2. Critical Design Review
3. Consistency Checkers
1. Design Walkthroughs:
Time and effort of every participant should be built into the project plan so that
participants can schedule their personal work plans accordingly. The plan should
include time for individual preparation, the design walkthrough (meeting), and the
likely rework.
All participants in the design walkthrough should clearly understand their role and
responsibilities so that they can consistently practice effective and efficient reviews.
Besides planning, all participants need to prepare for the design walkthrough. No one
can find all the high-impact mistakes in a work product that they first looked
at only 10 minutes before the meeting. If all participants are adequately prepared for
their responsibilities, the design walkthrough is likely to be far more effective.
The design walkthrough should be used as a means to review and assess the
product, not the person who created the design. Use the collective understanding to
improve the quality of the product, add value to the interactions, and encourage
participants to submit their products for design walkthroughs.
2. Critical Design Review:
The purpose of a critical design review is to ensure that the detailed design satisfies the
specifications laid down during system design.
A Critical Design Review is a multi-disciplined technical review to ensure that a system
can proceed into fabrication, demonstration, and test and can meet stated performance
requirements within cost, schedule, and risk. A successful CDR is predicated upon
a determination that the detailed design satisfies the Capabilities Development
Document (CDD). Multiple CDRs may be held for key Configuration Items (CI) and/or
at each subsystem level, culminating in a system-level CDR.
The critical design review process is the same as the inspection process, in which a
group of people get together to discuss the design with the aim of revealing design
errors or undesirable properties.
The use of checklists, as with other reviews, is considered important for the success of
the review. The checklist is a means of focusing the discussion or the "search" of errors.
Checklists can be used by each member during private study of the design and during
the review meeting. For best results, the checklist should be tailored to the project at
hand, to uncover project specific errors.
3. Consistency Checkers
If the design is specified in PDL or some other formally defined design language, it is
possible to detect some design defects by using consistency checkers. Consistency
checkers are essentially compilers that take as input the design specified in a design
language (PDL in our case). Clearly, they cannot produce executable code, because the
inner syntax of PDL allows natural language and many activities are specified in
natural language. However, the module headers, data declarations, and module calls follow
the formal outer syntax of PDL, so a consistency checker can verify, for example, that every
module invoked is actually defined and that each call is consistent with the module's
definition.
Depending on the precision and syntax of the design language, consistency checkers
can produce other information as well. In addition, these tools can be used to compute
the complexity of modules and other metrics, because these metrics are based on
alternate and loop constructs, which have a formal syntax in PDL. The trade-off here
is that the more formal the design language, the more checking can be done during
design, but the cost is that the design language becomes less flexible and tends
towards a programming language.