SEN Notes (Unit 1 to 5)

Course Name : Computer Engineering Subject Title : Software Engineering

Course Code : CO/CM/IF/CD Subject Code : 22413

Table of Contents
Chapter 1 Software Development Process
1.1 Software, Software Engineering as layered approach and its characteristics, Types of Software
1.2 Software Development Framework
1.3 Software Process Framework, Process Models: Prescriptive Process Models, Specialized Process Models
1.4 Agile Software Development: Agile Process and its importance, Extreme Programming, Adaptive Software Development, Scrum, Dynamic Systems Development Method (DSDM), Crystal
1.5 Selection Criteria for Software Process Model
Chapter 2 Software Engineering Practices and Software Requirements Engineering
2.1 Software Engineering Practices and importance, Core Principles
2.2 Communication Practices, Planning Practices, Modelling Practices, Construction Practices, Software Deployment
2.3 Requirement Engineering
2.4 Software Requirement Specification
Chapter 3 Software Modelling and Design
3.1 Translating Requirement Model into Design Model: Data Modelling
3.2 Analysis Modelling: Elements of Analysis Model
3.3 Design Modelling: Fundamental Design Concepts
3.4 Design Notations
3.5 Testing
3.6 Test Documentation
Chapter 4 Software Project Estimation
Introduction to Software Project Management and its need
4.1 The Management Spectrum – the 4 P's and their Significance
4.2 Metrics for Size Estimation
5.1 Project Scheduling
4.3 Project Cost Estimation Techniques
4.4 COCOMO Model (Constructive Cost Model)
4.5 Risk Management
Chapter 5 Software Quality Assurance and Security
5.1 Project Scheduling
5.2 Project Tracking
5.3 Software Quality Management vs. Software Quality Assurance
5.4 Quality Evaluation Standards
5.5 Software Security


Chapter 1 Software Development Process


1.1 Software, Software Engineering as layered approach and its characteristics, Types of
Software
Computer Software is the product that software professionals build and then support over the long
term. Software is a set of instructions to acquire inputs and manipulate them to produce the desired
output in terms of functions and performance as determined by the user of the software.
Software Engineering is defined as a discipline that addresses the following aspects of the software
and its development. They are:
1. Economic : Cost, Benefits, and Returns on Investment (ROI).
2. Design : Ease of development and ensuring delivery of customer requirements.
3. Maintenance : Ease of effecting changes and modifications.
4. Implementation : Ease of installation, Demonstration, and implementation of software by the
customer and users.
It is an engineering discipline that is systematic, scientific, and methodical, and that uses standards, models, and algorithms in design and development.
The IEEE (Institute of Electrical and Electronics Engineers) defines Software Engineering as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software.
Today, software performs a dual role. It is both a product and a vehicle for delivering a product. As
a product, it delivers the computing potential embodied by computer hardware or, more broadly, by a
network of computers that are accessible by local hardware. Software is an information transformer
- producing, managing, acquiring, modifying, displaying, transmitting information that can be as
simple as a single bit or as complex as a multimedia presentation.
As a vehicle for delivering the product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks), and the creation and control of other programs (software tools and environments).
Characteristics of Software:
Software is written to handle an Input – Process – Output system to achieve predetermined goals. Software is a logical rather than a physical system element. Therefore software has characteristics that are different from those of hardware.
1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software doesn’t “wear out” like hardware and it is not degradable over a period.
3. Although the industry is moving toward component – based construction, most software
continues to be custom built.
4. A software component should be designed and implemented so that it can be reused in many
different programs.
5. The following figure depicts failure rate as a function of time for hardware. The relationship, often called the "bathtub curve", indicates that hardware exhibits relatively high failure rates early in its life.


6. Software is not susceptible to the environmental maladies that cause hardware to wear out.
Hence, the failure rate curve for software should take the form of “idealized curve” as shown in
the figure. Undiscovered defects will cause high failure rates early in the life of a program.
However, these are corrected and the curve flattens as shown. Hence the software doesn’t wear
out, but it does deteriorate.

[Figure: Failure rate versus time – the "bathtub" failure curve for hardware and the idealized failure curve for software]


The classical and conventional definition of software is that it is a set of instructions which when
executed through a computing device produces the desired result by the execution of functions and
processes. It also includes a set of documents, such as the software manual, meant for users to
understand the software system. Today's software comprises the source code, executables, design documents, operations manual, system manual, and installation/implementation manuals.
Software is described by its capabilities. The capabilities relate to the functions it executes, the
features it provides and the facilities it offers.


Software Engineering- Definition, Need


According to Fritz Bauer, software engineering is the establishment and use of sound engineering principles in order to obtain economical software that is reliable and works efficiently on real machines.
“More than a discipline or a body of knowledge, engineering is a verb, an action word, a way of approaching a problem.” – Scott Whitmire
Relationship between Systems Engineering and Software Engineering
Software Engineering
Software engineering deals with designing and developing software of the highest quality. A
software engineer analyzes, designs, develops, and tests software. Software engineers carry out software engineering projects, which usually follow a standard software life cycle. For example, the waterfall software life cycle will include an analysis phase, design phase,
development phase, testing and verification phase and finally the implementation phase. Analysis
phase looks at the problem to be solved or the opportunities to be seized by developing the software.
Sometimes, a separate business analyst carries out this phase. However, in small companies,
software engineers may do this task. Design phase involves producing the design documents such as
UML diagrams and ER diagrams depicting the overall structure of the software to be developed and
its components. Development phase involves programming or coding using a certain programming
environment. Testing phase deals with verifying that software is bug free and also satisfies all the
customer requirements. Finally, the completed software is implemented at the customer site
(sometimes by a separate implementation engineer). In recent years, there has been a rapid growth of
other software development methodologies in order to further improve the efficiency of the software
engineering process. For example, Agile methods focus on incremental development with very short
development cycles. Software Engineering profession is a highly rated job because of its very high
salary range.
System Engineering
System Engineering is the sub discipline of engineering which deals with the overall management of
engineering projects during their life cycle focusing more on physical aspects. It deals with logistics,
team coordination, automatic machinery control, work processes and similar tools. Most of the time,
System Engineering overlaps with the concepts of Industrial Engineering, Control Engineering,
Organizational and Project Management and even Software Engineering. System Engineering is
identified as an interdisciplinary engineering field for this reason. A systems engineer may carry out system design, requirements development, requirements verification, system testing and other engineering studies.
Software engineering layers: A Layered Technology Approach

Software engineering is a layered technology. The layers of software engineering are:

1. A Quality Focus:
Any engineering approach including software engineering must rest on an organizational
commitment to quality. Total quality management, six sigma and similar philosophies foster a
continuous process improvement culture, and it is this culture that ultimately leads to the
development of increasingly more effective approaches to software engineering. The bedrock that
supports software engineering is a quality focus.
2. Process Layer:
The foundation for software engineering is the process layer. Software Engineering process is the
glue that holds the technology layers together and enables rational and timely development of
computer software. Process defines a framework that must be established for effective delivery of
software engineering technology. The software process forms the basis for management control of
software projects and establishes the context in which technical methods are applied, work products
(models, documents, data, reports, forms etc.) are produced, milestones are established, quality is
ensured and change is properly managed.
3. Methods:
Software Engineering methods provide the technical know-how for building software. Methods
encompass a broad array of tasks that include communication, requirements analysis, design
modeling, program construction, testing and support.
4. Tools:
Software Engineering tools provide automated or semi-automated support for the process and the
methods. When tools are integrated so that information created by one tool can be used by another, a
system for the support of software development, called computer–aided software engineering is
established.
Types/ Categories of Software:
Today, seven broad categories of computer software present continuing challenges for software
engineers.
1. System Software:
System Software is a collection of programs written to serve other programs. Some system software
(e.g., compilers, editors, and file management utilities) processes complex but determinate
information structures. Other system applications (e.g.- operating system components, drivers,
networking software, telecommunications processors) process largely indeterminate data. In either
case, the systems software area is characterized by heavy interaction with computer hardware; heavy
usage by multiple users; concurrent operation that requires scheduling, resource sharing, and
sophisticated process management; complex data structures and multiple external interfaces.
2. Application Software:
Application Software consists of standalone programs that solve a specific business need.
Applications in this area process business or technical data in a way that facilitates business
operations, management, and technical decision-making. In addition to conventional data processing
applications, application software is used to control business functions in real time (e.g., point-of-
sale transaction processing, real-time manufacturing process control).


3. Engineering / Scientific Software:


Formerly characterized by “number crunching” algorithms, engineering and scientific software
applications range from astronomy to volcanology, from automotive stress analysis to space shuttle
orbital dynamics, and from molecular biology to automated manufacturing. Computer-aided design,
system simulation, and other interactive applications have begun to take on real–time and even
system software characteristics.
4. Embedded Software:
Embedded Software resides within a product or system and is used to implement and control features
and functions for the end-user and for the system itself. Embedded software can perform limited and
esoteric functions (e.g. keypad control for a microwave oven) or provide significant function and
control capability (e.g. digital functions in an automobile such as fuel control, dashboard displays,
braking systems, etc.)
5. Product–line Software:
Designed to provide a specific capability for use by many different customers, product–line software
can focus on a limited & esoteric market place (e.g. – inventory control products) or address mass
consumer markets (e.g. – word processing, spreadsheets, computer graphics, multimedia,
entertainment, database management, personal and business financial applications.)
6. Web – applications:
Web applications ("WebApps") span a wide array of applications. WebApps are evolving into sophisticated computing
environments that not only provide standalone features, computing functions and content to the end
user, but also are integrated with corporate databases and business applications.
7. Artificial Intelligence(AI) Software:
AI Software makes use of non–numerical algorithms to solve complex problems that are not
amenable to computation or straight forward analysis. Applications within this area include robotics,
expert systems, pattern recognition (image and voice), artificial neural networks, theorem proving,
and game playing.
Due to changing nature of software and rapid growth of technology, the challenge for software
engineers will be –
a) To develop systems and application software that will allow small devices, personal computers, and enterprise systems to communicate across vast networks, to meet the rapid growth of wireless networking.
b) To architect simple (e.g.- personal financial planning) and sophisticated applications that provide
benefit to targeted end-user markets worldwide to meet rapid growth of net sourcing (World
Wide Web)
c) To build source code that is self-descriptive, but, more importantly, to develop techniques that
will enable both customers and developers to know what changes have been made and how those
changes manifest themselves within the software.
d) To build applications that will facilitate mass communication and mass product distribution using
concepts that are only now forming.
e) The computer itself will make a historic transition from something that is used for analytic tasks
to something that can elicit emotion. - David Vaskevitch


1.2 Software Development Framework


A process defines who is doing what, when and how to reach a certain goal. The following generic
process framework is applicable to the vast majority of software projects.
1. Communication :
This framework activity involves heavy communication & collaboration with the customer (and the
stakeholders) and encompasses requirements gathering and other related activities.
2. Planning :
This activity establishes a plan for the software engineering work that follows. It describes the
technical tasks to be conducted, the risks that are likely, the resources that will be required, the work
products to be produced and a work schedule.
3. Modeling :
This activity encompasses the creation of models that allow the developer & the customer to better
understand software requirements & the design that will achieve those requirements.
4. Construction :
This activity combines code generation and the testing that is required to uncover errors in the code.
5. Deployment :
The software is delivered to the customer who evaluates the delivered product and provides feedback
based on the evaluation.

1.3 Software Process Framework, Process Models: Prescriptive Process Models, Specialized Process Models
A process framework establishes the foundation for a complete software process by identifying a
small number of framework activities that are applicable to all software projects, regardless of their
size or complexity. In addition, the process framework encompasses a set of umbrella activities that
are applicable across the entire software process.


[Figure: Software process framework – the software process consists of a process framework plus umbrella activities; each framework activity (#1 to #n) is populated by software engineering actions, and each action has a task set made up of work tasks, work products, quality assurance points, and project milestones]

From the above figure, each framework activity is populated by a set of software engineering
actions- a collection of related tasks that produces a major software engineering work product (e.g.
design is a SE action). Each action is populated with individual work tasks that accomplish some
part of the work implied by the action.
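To visualize the hierarchy just described (framework activity → software engineering action → task set), here is a minimal Python sketch; the activity, action and task names are invented for illustration and are not prescribed by the framework itself.

```python
# Illustrative only: the process-framework hierarchy as nested data structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    """A software engineering action (e.g. design) together with its task set."""
    name: str
    work_tasks: List[str] = field(default_factory=list)
    work_products: List[str] = field(default_factory=list)
    qa_points: List[str] = field(default_factory=list)
    milestones: List[str] = field(default_factory=list)

@dataclass
class FrameworkActivity:
    """A framework activity populated by one or more actions."""
    name: str
    actions: List[Action] = field(default_factory=list)

# Example: the "modeling" activity populated with a "design" action.
modeling = FrameworkActivity(
    name="modeling",
    actions=[Action(
        name="design",
        work_tasks=["create architectural design", "review the design"],
        work_products=["design model"],
        qa_points=["design review"],
        milestones=["design model approved"],
    )],
)
print(f"{modeling.name} -> {[a.name for a in modeling.actions]}")
```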
Umbrella Activities: The generic view of software engineering includes a set of umbrella activities. They are:
Software Project tracking and control:
The framework described in the generic view of SE is complemented by a number of umbrella
activities, one of which is software project tracking and control. It allows the software team to assess progress against the project plan and take necessary action to maintain the schedule. Umbrella
activities occur throughout the software process and focus primarily on project management,
tracking and control.
Risk Management:
Assess risks that are likely to affect performance and quality of project.
Software quality assurance:
Define and conduct activities to ensure software quality.
Formal Technical Review:
Reviews work products to uncover and remove errors before they are passed on to the next activity.
Measurement:
Defines and collects process, project and product measures that assist the team in delivering software that meets customer needs; measurement can be used in conjunction with all framework and umbrella activities.
Software Configuration Management (SCM):
Manages the effects of change throughout the software process.
Reusability management:
Defines criteria for work product reuse (including software components) and establishes the
mechanism to achieve reusable components.
Work product preparation and production:
Includes activities for creating work products such as models, documents, data, reports, etc.
Process Models: Prescriptive Process Models and Specialized Process Models
The generic process models must be adapted for use by a software project team. To accomplish this,
process technology tools have been developed to help software organizations analyze their current
process, organize work tasks, control, monitor progress and manage technical quality.
Process technology tools allow a software organization to build an automated model of the common
process framework, task sets and umbrella activities. The model, normally represented as a network
can then be analyzed to determine typical workflow and examine alternative process structures that
might lead to reduced development time and cost.
Once, an acceptable process is created, other process technology tools can be used to allocate,
monitor and even control all software engineering tasks defined as part of the process model, to
develop a checklist of work tasks to be performed, work products to be produced and quality
assurance activities to be conducted, to coordinate the use of other computer-aided software
engineering tools that are appropriate for a particular work task.
In some cases, the process technology tools incorporate standard project management tasks such as
estimating, scheduling, tracking and control.
Prescriptive Process Models:
Irrespective of which level of CMM the organization has, the software engineer has five choices for
selection of software process models.


They are –
1. Waterfall Model
2. Incremental Model
3. RAD Model
4. Prototype Model
5. Spiral Model

1. The Waterfall Model:

There are times when the requirements of a problem are reasonably well understood – when work
flows from communication through deployment in a reasonably linear fashion.
The waterfall model is a traditional method, sometimes called the classic life cycle. This is one of the
initial models. As the figure implies stages are cascaded and shall be developed one after the other.
In other words one stage should be completed before the other begins. Hence, when all the
requirements are elicited by the customer, analyzed for completeness and consistency, documented
as per requirements, the development and design activities commence.
One of the main needs of this model is the user's explicit statement of complete requirements at the start of development. For developers it is useful to lay out what they need to do in the initial stages. Its simplicity makes it easy to explain to customers who may not be aware of the software development process. It makes explicit the intermediate products to be produced at every stage of development.
One of the biggest limitations is that it does not reflect the way code is really developed: even when the problem is well understood, software is developed with a great deal of iteration. Often the software is a solution to a problem that has not been solved before, so developers need extensive experience to build such an application; neither the users nor the developers may be aware of the key factors affecting the desired outcome and the time needed. Hence, at times the software development process may remain uncontrolled.
Today software work is fast paced and subject to a never-ending stream of changes in features,
functions and information content. Waterfall model is inappropriate for such work. This model is
useful in situation where the requirements are fixed and work proceeds to completion in a linear
manner.
Among the problems that are sometimes encountered when the waterfall model is applied are


1. Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The Waterfall Model
requires this and has difficulty accommodating the natural uncertainty that exists at the beginning
of many projects.
3. The customer must have patience. A working version of the program will not be available until
late in the project time-span. A major blunder, if undetected until the working program is
received, can be disastrous.
The waterfall model is often inappropriate for such work. However, it can serve as a useful process
model in situations where requirements are fixed and work is to proceed to completion in a linear
manner.
2. The Incremental Model:
The incremental model combines elements of the waterfall model applied in an iterative fashion.
The incremental model delivers a series of releases, called increments, that provide progressively more functionality for the customer as each increment is delivered. In each increment, additional functions and features are added after confirming the utility of earlier increments.
In the early years of development users were willing to wait for software projects to be ready.
Today’s business does not tolerate long delays. Software helps to distinguish products in the market
place and customers are always looking for new quality and functions. One of the ways to reduce
time is the phased development. The system is developed such that it can be delivered in parts
enabling the users to have few functions while the rest are being developed. Thus development and
usage will happen in parallel.
In incremental development the system is partitioned into subsystems or increments. The releases are defined at the beginning, starting with an initial set of functions and then adding functionality with subsequent releases. Incremental development thus slowly builds up to full functionality.
This model combines the elements of the waterfall model in an iterative fashion. The model applies linear sequences in a staggered manner as calendar time progresses. In this model, the first increment is the core product or primary function. The implemented core product undergoes detailed evaluation by the user, and this evaluation becomes an advantage for future increments. The feedback also identifies future modifications, which are included in the next increments as additional features and functionality. The process is repeated, increment after increment, until the final product is delivered.
This is useful when the software team is smaller in size. Additional increments can be planned and
managed to address technical risks. This has the advantage of prompt system delivery to users
without hassle.
When the availability of new hardware is delayed, early increments can be executed on existing systems, providing partial functionality and preventing inordinate delays.


From this diagram, the incremental model applies linear sequences in a staggered fashion as calendar
time progresses. Each linear sequence produces deliverable “Increments” of the software.
For example, word-processing software developed using the incremental paradigm might deliver
basic file management, editing and document production functions in the first increment; more
sophisticated editing and document production capabilities in the second increment; spelling and
grammar checking in the third increment; and advanced page layout capability in the fourth
increment.
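Purely as an illustration of how functionality accumulates across releases in the word-processing example above, the following Python sketch lays the increments out as a simple release plan (the grouping of features per increment follows the text; the data structure itself is invented).

```python
# Illustrative release plan: each increment adds functionality on top of the
# functionality already delivered by earlier increments.
increments = {
    1: ["basic file management", "editing", "document production"],
    2: ["more sophisticated editing and document production"],
    3: ["spelling and grammar checking"],
    4: ["advanced page layout"],
}

delivered = []
for number, features in sorted(increments.items()):
    delivered.extend(features)  # functionality grows with every release
    print(f"Increment {number} delivers: {delivered}")
```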
3. The RAD Model:

Rapid Application Development (RAD) is a modern software process model that emphasizes a short
development cycle. The RAD Model is a “high-speed” adaptation of the waterfall model, in which
rapid development is achieved by using a component based construction approach. If requirements
are well understood and project scope is considered, the RAD process enables a development team to
create a “Fully Functional System” within a very short period of time (e.g. 60 to 90 days).

One of the distinct features of the RAD model is the possibility of cross-life-cycle activities assigned to multiple teams (team #1 to team #n), leading to each module being developed almost simultaneously.
This approach is very useful if the business application requirements can be modularized as functions to be completed by individual teams and finally integrated into a complete system. Compared to the waterfall model, the team will therefore be larger and must function with proper coordination.
RAD model distributes the analysis and construction phases into a series of short iterative
development cycles. The activities of each phase per team are Business modeling, Data modeling
and process modeling.
This model is useful for projects with possibility of modularization. RAD may fail if modularization
is difficult. This model should be used if domain experts are available with relevant business
knowledge.
Advantages:
1. Changing requirements can be accommodated and progress can be measured.
2. Powerful RAD tools can reduce development time.
3. Productivity with a small team, short development time, quick reviews, risk control, increased reusability of components, and better quality.
4. Due to risks in new approach only modularized systems are recommended through RAD.
5. Suitable for scalable component based systems.
Limitations:
1. Success of RAD model depends on strong technical team expertise and skills.
2. Highly skilled developers needed with modeling skills.
3. User involvement throughout life cycle. If developers & customers are not committed to the
rapid fire activities necessary to complete the System in a much-abbreviated time frame, RAD
projects will fail.
4. May not be appropriate for very large scale systems where the technical risks are high.
4. The Prototype Model:
The prototyping paradigm begins with communication as shown in the diagram below.


The software development process can be controlled better by including activities and sub-processes that enhance understanding. A prototype is a sub-process or a partially developed product that enables customers and developers to examine aspects of a proposed system and decide if it is suitable or appropriate for the finished product.
Developers may build a system to implement a small portion of some of the key requirements to
ensure that the requirements are consistent, feasible and practical. In case of changes, revisions are
made at the requirements stage by prototyping parts of the design.
Design prototyping helps the developers assess alternative strategies and decide which best suits the project. There may be radically different designs to get the best performance. Often the user interface is
built and tested as a prototype for users to understand the new system and developers to get the idea
of user’s reaction/response to the system.
In business, needs and requirements change very often, making earlier methods unrealistic and redundant.
Short market deadlines make it difficult to complete comprehensive software products. The
evolutionary models are iterative and help the developers to complete short version within the given
deadlines.
Ideally, the prototype serves as a mechanism for identifying software requirements. To build a working prototype, the developer attempts to make use of existing program fragments and applies tools such as report generators that enable working programs to be generated quickly.
The software engineer & customer meet and define the overall objectives for the software, identify
whatever requirements are known and outline areas where further definition is mandatory.
Prototyping iteration is planned quickly and modeling (in the form of quick design) occurs. The
quick design focuses on a representation of those aspects of the software that will be visible to the
customer/end-user (e.g. human interface layout or output display formats). The quick design leads to
the construction of a prototype. The prototype is deployed & then evaluated by the customer/user.
Feedback is used to refine requirements for the software.

5. The Spiral Model:

Boehm (1988) viewed the software development process in light of the risks involved; the spiral model combines development activities with risk management to minimize and control the impact of risk.
It is an evolutionary model which couples iterative nature of prototyping with controlled and
systematic aspects of the waterfall model. It also provides scope for RAD for increasingly complete
software.

The spiral development model is a risk-driven process model generator that is used to guide multi-
stakeholder concurrent engineering of software intensive systems. It has two main distinguishing
features. One is a cyclic approach for incrementally growing a system’s degree of definition and
implementation while decreasing its degree of risk. The other is a set of anchor point milestones for
ensuring stakeholder commitment to feasible and mutually satisfactory system solutions.
From the figure given above, a spiral model is divided into a set of framework activities defined by
the software engineering team. As this evolutionary process begins, the software team performs
activities that are implied by a circuit around the spiral in a clockwise direction, beginning at the
center. Risk is considered as each revolution is made. Anchor point milestones – a combination of
work products and conditions that are attained along the path of the spiral – are noted for each
evolutionary pass.
Each pass through the planning region results in adjustments to the project plan. Cost & schedule are
adjusted based on feedback derived from the customer after delivery. In addition, the project
manager adjusts the planned number of iterations required to complete the software.
The initial circuit around the spiral can be used for concept development, possibly with multiple iterations. The spiral then traverses outward for new product development, and spiral development remains operative for the life span of the software. This may be a realistic approach for large-scale software development. As the process progresses, both users and developers better understand the system. However, the approach demands risk identification and monitoring to prevent hurdles.
Advantages:
1. One is a cyclic approach for incrementally growing a system‘s degree of definition and
implementation while decreasing its degree of risk.
2. The set of anchor point milestones for ensuring stakeholder commitment to obtain feasible and
mutually satisfactory system solutions.

Limitations:
1. The approach demands risk identification and monitoring to prevent hurdles.
2. System can get into infinite iterations.
Specialized Process Models
Special process models take many features from one or more conventional models. However these
special models tend to be applied when a narrowly defined software engineering approach is chosen.
Types in Specialized process models:
1. Component based development (Promotes reusable components)
Commercial off-the-shelf (COTS) software components, developed by vendors who offer them
as products, provide targeted functionality with well-defined interfaces that enable the
component to be integrated into the software that is to be built. The component-based
development model incorporates many of the characteristics of the spiral model. It is
evolutionary in nature, demanding an iterative approach to the creation of software. However, the
component-based development model constructs applications from prepackaged software
components. Modeling and construction activities begin with the identification of candidate
components. These components can be designed as either conventional software modules or
object-oriented classes or packages of classes. Regardless of the technology that is used to create
the components, the component-based development model incorporates the following steps
(implemented using an evolutionary approach):
a. Available component-based products are researched and evaluated for the application domain in question.
b. Component integration issues are considered.
c. A software architecture is designed to accommodate the components.
d. Components are integrated into the architecture.
e. Comprehensive testing is conducted to ensure proper functionality.
The component-based development model leads to software reuse, and reusability provides software
engineers with a number of measurable benefits.
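As a rough illustration of the idea of assembling an application from prepackaged components with well-defined interfaces, the hedged Python sketch below plugs a stand-in for a vendor-supplied (COTS) component into an interface defined by the application architecture; every class and method name here is hypothetical.

```python
# Illustrative sketch only: integrating a prepackaged component behind an
# interface defined by the application architecture. All names are hypothetical.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Interface the architecture defines for any payment component."""
    @abstractmethod
    def charge(self, amount: float, account: str) -> bool: ...

class VendorPaymentComponent(PaymentGateway):
    """Stand-in for a COTS component purchased from a vendor."""
    def charge(self, amount: float, account: str) -> bool:
        # A real component would call the vendor's library here.
        print(f"Charging {amount} to {account} via vendor component")
        return True

class OrderService:
    """Application code depends only on the interface, so the component
    can be replaced without changing this class."""
    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway

    def checkout(self, amount: float, account: str) -> bool:
        return self.gateway.charge(amount, account)

# Integration step: plug the candidate component into the architecture.
service = OrderService(VendorPaymentComponent())
service.checkout(49.99, "ACCT-001")
```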

2. The formal methods model (Mathematical formal methods are backbone here)
 The formal methods model encompasses a set of activities that leads to formal mathematical
specification of computer software. Formal methods enable you to specify, develop, and
verify a computer-based system by applying a rigorous, mathematical notation.
 A variation on this approach, called cleanroom software engineering, is currently applied by
some software development organizations. When formal methods are used during
development, they provide a mechanism for eliminating many of the problems that are
difficult to overcome using other software engineering paradigms.
 Ambiguity, incompleteness, and inconsistency can be discovered and corrected more easily—
not through ad hoc review, but through the application of mathematical analysis. When
formal methods are used during design, they serve as a basis for program verification and
therefore enable you to discover and correct errors that might otherwise go undetected. Although
not a mainstream approach, the formal methods model offers the promise of defect-free
software.

 The development of formal models is currently quite time consuming and expensive.
 Because few software developers have the necessary background to apply formal methods,
extensive training is required.
 It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.
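To give a flavour of the rigorous notation formal methods rely on (this is a generic precondition/postcondition example, not taken from any particular method such as Z, VDM or cleanroom), a search routine might be specified as follows.

```latex
% Illustrative Hoare-style specification of a hypothetical find routine:
% if the array a is sorted, then after r := find(a, k) either r = -1 and k is
% not in a, or r is a valid index with a[r] = k.
\{\, \mathit{sorted}(a) \,\}\;\; r := \mathit{find}(a, k) \;\;
\{\, (r = -1 \wedge k \notin a) \,\vee\, (0 \le r < |a| \wedge a[r] = k) \,\}
```

A tool-supported formal method would then attempt to prove, mathematically, that a given implementation satisfies such a specification.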
3. Aspect oriented software development (Uses crosscutting technology)
 Aspect-Oriented Software Development (AOSD), often referred to as aspect-oriented programming (AOP), is a relatively new paradigm that provides a process and methodology for defining, specifying, designing and constructing aspects.
 It addresses limitations inherent in other approaches, including object-oriented programming.
AOSD aims to address crosscutting concerns by providing means for systematic
identification, separation, representation and composition.
 This results in better support for modularization hence reducing development, maintenance
and evolution costs.
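To make the idea of a crosscutting concern concrete, the following hedged Python sketch factors logging (a concern that would otherwise be scattered across many modules) into a single reusable wrapper; it illustrates separation of crosscutting concerns in spirit only and does not use a dedicated AOP framework.

```python
# Illustrative sketch: logging as a crosscutting concern factored out of the
# business logic into one reusable "aspect" (a decorator here).
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Aspect-like wrapper: applies the logging concern to any function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s with %s %s", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def transfer(amount, source, target):
    # Core business logic stays free of logging code.
    return f"moved {amount} from {source} to {target}"

transfer(100, "A-01", "B-02")
```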
1.4 Agile Software Development: Agile Process and its importance, Extreme Programming,
Adaptive Software Development, Scrum, Dynamic Systems Development Method
(DSDM), Crystal
Agile programming is an approach to project management, typically used in software development.
It helps teams react to the instability of building software through incremental, iterative work cycles,
known as sprints.
Features of the Agile Software Development Approach
The name "agile software process" first originated in Japan. The Japanese faced competitive pressures, and many of their companies, like their American counterparts, promoted cycle-time reduction as the most important characteristic of software process improvement efforts.
Modularity
Modularity is a key element of any good process. Modularity allows a process to be broken into
components called activities. A software development process prescribes a set of activities capable of
transforming the vision of the software system into reality.
Activities are used in the agile software process like a good tool. They are to be wielded by software
craftsmen who know the proper circumstances for their use. They are not utilized to create a
production-line atmosphere for manufacturing software.
Iterative
Agile software processes acknowledge that we get things wrong before we get them right. Therefore,
they focus on short cycles. Within each cycle, a certain set of activities is completed. These cycles
will be started and completed in a matter of weeks. However, a single cycle called iteration will
probably not be enough to get the element 100% correct.
Time-Bound
Iterations become the perfect unit for planning the software development project. One can set time
limits (between one and six weeks is normal) on each iteration and schedule them accordingly.
Chances are that the designer will not (unless the process contains very few activities) schedule all of the activities of the process in a single iteration. Instead, the team will only attempt those activities necessary to
achieve the goals set out at the beginning of the iteration. Functionality may be reduced or activities
may be rescheduled if they cannot be completed within the allotted time period.
Parsimony
Agile Process is more than a traditional software development process with some time constraints.
Attempting to create impossible deadlines under a process not suited for rapid delivery puts the onus
on the software developers. This leads to burnout and poor quality. Instead, agile software processes
focus on parsimony. That is, they require a minimal number of activities necessary to mitigate risks
and achieve their goals.
Adaptive
During an iteration, new risks may be exposed which require some activities that were not planned.
The agile process adapts the process to attack these new found risks. If the goal cannot be achieved
using the activities planned during the iteration, new activities can be added to allow the goal to be
reached. Similarly, activities may be discarded if the risks turn out to be ungrounded.
Incremental
An agile process does not try to build the entire system at once. Instead, it partitions the nontrivial
system into increments which may be developed in parallel, at different times, and at different rates.
We unit test each increment independently. When an increment is completed and tested, it is
integrated into the system.
Convergent
Convergence states that we are actively attacking all of the risks worth attacking. As a result, the
system becomes closer to the reality that we seek with each iteration. As risks are being proactively
attacked, the system is being delivered in increments. We are doing everything within our power to
ensure success in the most rapid fashion.
People-Oriented
Agile processes favor people over process and technology. They evolve through adaptation in an
organic manner. Developers that are empowered raise their productivity, quality, and performance.
Collaborative
Agile processes foster communication among team members. Communication is a vital part of any
software development project. When a project is developed in pieces, understanding how the pieces
fit together is vital to creating the finished product. There is more to integration than simple
communication. Quickly integrating a large project while increments are being developed in parallel
requires collaboration.
Concept of Extreme Programming
Extreme Programming is an instance of an Agile Software Development method. XP is a method
that is optimized for small to medium-sized project teams that fit a certain profile. It promotes rapid
feedback and response to continual change. It is based upon the four values of simplicity,
communication, feedback, and courage and is consistent with the values of agile software
development.
Characteristics of an XP Project
Extreme Programming or XP is a development process that can be used by small to medium-sized
teams to develop high quality software within a predictable schedule and budget and with a
minimum of overhead. Since XP relies heavily on direct and frequent communication between the
team members, the team should be co-located. An ideal project for using XP would be one that has
most of the following characteristics:
 A small to medium-sized team (fewer than 20 people on the complete team)
 Co-located, preferably in a single area with a large common space
 A committed, full-time, on-site customer or customer representative

The Extreme Programming Process


Goals
Extreme Programming Explained describes Extreme Programming as a software-development
discipline that organizes people to produce higher-quality software more productively.
XP attempts to reduce the cost of changes in requirements by having multiple short development
cycles, rather than a long one. In this doctrine, changes are a natural, inescapable and desirable
aspect of software-development projects, and should be planned for, instead of attempting to define a
stable set of requirements.
Extreme programming also introduces a number of basic values, principles and practices on top of
the agile programming framework.
Activities
XP describes four basic activities that are performed within the software development process:
coding, testing, listening, and designing. Each of those activities is described below.
Coding
The advocates of XP argue that the only truly important product of the system development process
is code – software instructions that a computer can interpret. Without code, there is no working
product.
Coding can also be used to figure out the most suitable solution. Coding can also help to
communicate thoughts about programming problems. A programmer dealing with a complex
programming problem, or finding it hard to explain the solution to fellow programmers, might code
it in a simplified manner and use the code to demonstrate what he or she means. Code, say the
proponents of this position, is always clear and concise and cannot be interpreted in more than one
way. Other programmers can give feedback on this code by also coding their thoughts.


Testing
Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of testing
can eliminate many more flaws.
Unit tests determine whether a given feature works as intended. A programmer writes as many
automated tests as they can think of that might "break" the code; if all tests run successfully, then the
coding is complete. Every piece of code that is written is tested before moving on to the next feature.
Acceptance tests verify that the requirements as understood by the programmers satisfy the
customer's actual requirements. System-wide integration testing was encouraged, initially, as a daily
end-of-day activity, for early detection of incompatible interfaces, to reconnect before the separate
sections diverged widely from coherent functionality. However, system-wide integration testing has
been reduced, to weekly, or less often, depending on the stability of the overall interfaces in the
system.
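A minimal sketch of the automated unit-testing style described above, using Python's built-in unittest module; the word_count function and its expected behaviour are invented purely for illustration.

```python
# Illustrative unit tests in the XP spirit: the tests try to "break" the code,
# and the feature is only considered done when all such tests pass.
import unittest

def word_count(text: str) -> int:
    """Feature under test (hypothetical): count whitespace-separated words."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_counts_simple_sentence(self):
        self.assertEqual(word_count("extreme programming values feedback"), 4)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace_is_ignored(self):
        self.assertEqual(word_count("  two   words  "), 2)

if __name__ == "__main__":
    unittest.main()
```

In XP, such tests are typically written before or alongside the code, and the feature is considered complete only when every test passes.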
Listening
Programmers must listen to what the customers need the system to do, what "business logic" is
needed. They must understand these needs well enough to give the customer feedback about the
technical aspects of how the problem might be solved, or cannot be solved. Communication between
the customer and programmer is further addressed in the Planning Game.
Designing
From the point of view of simplicity, of course one could say that system development doesn't need
more than coding, testing and listening. If those activities are performed well, the result should
always be a system that works. In practice, this will not work. One can come a long way without
designing but at a given time one will get stuck. The system becomes too complex and the
dependencies within the system cease to be clear. One can avoid this by creating a design structure
that organizes the logic in the system. Good design will avoid lots of dependencies within a system;
this means that changing one part of the system will not affect other parts of the system.
Adaptive Software Development (ASD)
Adaptive Software Development (ASD) has been proposed by Jim Highsmith as a technique for
building complex software and systems. The philosophical underpinnings of ASD focus on human
collaboration and team self-organization. Highsmith argues that an agile, adaptive development
approach based on collaboration is “as much a source of order in our complex interactions as
discipline and engineering.” He defines an ASD “life cycle” that incorporates three phases,
speculation, collaboration, and learning.


Adaptive Software Development


During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle
planning uses project initiation information—the customer’s mission statement, project constraints
(e.g., delivery dates or user descriptions), and basic requirements—to define the set of release cycles
(software increments) that will be required for the project. No matter how complete and farsighted
the cycle plan, it will invariably change. Based on information obtained at the completion of the first
cycle, the plan is reviewed and adjusted so that planned work better fits the reality in which an ASD
team is working.
Motivated people use collaboration in a way that multiplies their talent and creative output beyond
their absolute numbers. This approach is a recurring theme in all agile methods. But collaboration is
not easy. It encompasses communication and teamwork, but it also emphasizes individualism,
because individual creativity plays an important role in collaborative thinking. It is, above all, a
matter of trust. People working together must trust one another to (1) criticize without animosity,
(2) assist without resentment, (3) work as hard as or harder than they do, (4) have the skill set to
contribute to the work at hand, and (5) communicate problems or concerns in a way that leads to
effective action.
As members of an ASD team begin to develop the components that are part of an adaptive cycle, the
emphasis is on “learning” as much as it is on progress toward a completed cycle.
Scrum
Scrum (the name is derived from an activity that occurs during a rugby match) is an agile software
development method that was conceived by Jeff Sutherland and his development team in the early
1990s. In recent years, further development on the Scrum methods has been performed by Schwaber
and Beedle. Scrum principles are consistent with the agile manifesto and are used to guide
development activities within a process that incorporates the following framework activities:
requirements, analysis, design, evolution, and delivery. Within each framework activity, work tasks
occur within a process pattern called a sprint. The work conducted within a sprint (the number of
sprints required for each framework activity will vary depending on product complexity and size) is
adapted to the problem at hand and is defined and often modified in real time by the Scrum team.
The overall flow of the Scrum process is illustrated in figure below. Scrum emphasizes the use of a
set of software process patterns that have proven effective for projects with tight timelines, changing
requirements, and business criticality. Each of these process patterns defines a set of development actions:
Backlog—a prioritized list of project requirements or features that provide business value for the customer. Items can be added to the backlog at any time (this is how changes are introduced). The product manager assesses the backlog and updates priorities as required.
Sprints—consist of work units that are required to achieve a requirement defined in the backlog and that must be fit into a predefined time-box (typically 30 days). Changes (e.g., backlog work items) are not introduced during the sprint. Hence, the sprint allows team members to work in a short-term, but stable environment.
Scrum meetings—are short (typically 15 minutes) meetings held daily by the Scrum team. Three key
questions are asked and answered by all team members:
• What did you do since the last team meeting?
• What obstacles are you encountering?
• What do you plan to accomplish by the next team meeting?

A team leader, called a Scrum master, leads the meeting and assesses the responses from each
person. The Scrum meeting helps the team to uncover potential problems as early as possible. Also,
these daily meetings lead to “knowledge socialization” and thereby promote a self-organizing team
structure.
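To make the backlog and sprint structure described above concrete, here is a minimal Python sketch; the item names, priorities and dates are invented for illustration and are not part of Scrum itself.

```python
# Illustrative sketch (names invented): a prioritized backlog and a 30-day
# time-boxed sprint into which the highest-priority items are pulled.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

@dataclass(order=True)
class BacklogItem:
    priority: int                       # lower number = higher business value
    description: str = field(compare=False)

@dataclass
class Sprint:
    start: date
    items: List[BacklogItem] = field(default_factory=list)
    length_days: int = 30               # the predefined time-box

    @property
    def end(self) -> date:
        return self.start + timedelta(days=self.length_days)

backlog = [
    BacklogItem(2, "export report as PDF"),
    BacklogItem(1, "user login"),
    BacklogItem(3, "email notifications"),
]
backlog.sort()                          # product manager's priority order

# No new backlog items are pulled in once the sprint has started.
sprint = Sprint(start=date(2024, 1, 1), items=backlog[:2])
print(f"Sprint runs {sprint.start} to {sprint.end}")
print("Committed items:", [i.description for i in sprint.items])
```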

Scrum

Dynamic Systems Development Method (DSDM)


The Dynamic Systems Development Method (DSDM) is an agile software development approach
that “provides a framework for building and maintaining systems which meet tight time constraints
through the use of incremental prototyping in a controlled project environment”.

The DSDM philosophy is borrowed from a modified version of the Pareto principle—80 percent of
an application can be delivered in 20 percent of the time it would take to deliver the complete (100
percent) application.
DSDM is an iterative software process in which each iteration follows the 80 percent rule. That is,
only enough work is required for each increment to facilitate movement to the next increment. The
remaining detail can be completed later when more business requirements are known or changes
have been requested and accommodated.
The DSDM Consortium (www.dsdm.org) is a worldwide group of member companies
that collectively take on the role of “keeper” of the method. The consortium has defined an agile
process model, called the DSDM life cycle that defines three different iterative cycles, preceded by
two additional life cycle activities:
Feasibility study—establishes the basic business requirements and constraints associated with the
application to be built and then assesses whether the application is a viable candidate for the DSDM
process.
Business study—establishes the functional and information requirements that will allow the
application to provide business value; also, defines the basic application architecture and identifies
the maintainability requirements for the application.
Functional model iteration—produces a set of incremental prototypes that demonstrate functionality
for the customer. (Note: All DSDM prototypes are intended to evolve into the deliverable
application.) The intent during this iterative cycle is to gather additional requirements by eliciting
feedback from users as they exercise the prototype.
Design and build iteration—revisits prototypes built during functional model iteration to ensure that
each has been engineered in a manner that will enable it to provide operational business value for
end users. In some cases, functional model iteration and design and build iteration occur
concurrently.
Implementation—places the latest software increment (an “operationalized” prototype) into the
operational environment. It should be noted that (1) the increment may not be 100 percent complete
or (2) changes may be requested as the increment is put into place. In either case, DSDM
development work continues by returning to the functional model iteration activity.
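The ordering of the DSDM activities described above can be summarised in a minimal sketch; the activity names follow the text, while the control flow (returning to functional model iteration after implementation) is deliberately simplified.

# Simplified DSDM life-cycle flow, based on the activities listed above.
DSDM_ACTIVITIES = [
    "feasibility study",           # is the application a viable DSDM candidate?
    "business study",              # functional/information requirements, architecture
    "functional model iteration",  # incremental prototypes, user feedback
    "design and build iteration",  # engineer prototypes for operational use
    "implementation",              # place the increment into operation
]

def next_activity(current, increment_complete):
    """After implementation, DSDM returns to functional model iteration
    unless the delivered increment is 100 percent complete."""
    if current == "implementation" and not increment_complete:
        return "functional model iteration"
    position = DSDM_ACTIVITIES.index(current)
    return DSDM_ACTIVITIES[min(position + 1, len(DSDM_ACTIVITIES) - 1)]

print(next_activity("implementation", increment_complete=False))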
DSDM can be combined with XP (Section 1.4) to provide a combination approach that defines a
solid process model (the DSDM life cycle) with the nuts and bolts practices (XP) that are required to
build software increments. In addition, the ASD concepts of collaboration and self-organizing teams
can be adapted to a combined process model.
Crystal
Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order to achieve
a software development approach that puts a premium on “maneuverability” during what Cockburn
characterizes as “a resource limited, cooperative game of invention and communication, with a
primary goal of delivering useful, working software and a secondary goal of setting up for the next
game”.
To achieve maneuverability, Cockburn and Highsmith have defined a set of methodologies, each
with core elements that are common to all, and roles, process patterns, work products, and practices
that are unique to each. The Crystal family is actually a set of example agile processes that have been
proven effective for different types of projects. The intent is to allow agile teams to select the
member of the Crystal family that is most appropriate for their project and environment.
1.5 Selection Criteria for Software Process Model
The software process model framework is specific to the project. Thus, it is essential to select the
software process model according to the software which is to be developed. The software project is
considered efficient if the process model is selected according to the requirements. It is also essential
to consider time and cost while choosing a process model as cost and/ or time constraints play an
important role in software development. The basic characteristics required to select the process
model are project type and associated risks, requirements of the project, and the users.
Following are the parameters which are used to select a process model (a small illustrative checklist is sketched after this list):
1. Requirements Characteristics
• Reliability of Requirements
• How often the requirements can change
• Types of requirements
• Number of requirements
• Can the requirements be defined at an early stage
• Requirements indicate the complexity of the system
2. Development team :
• Team size
• Experience of developers on similar type of projects
• Level of understanding of user requirements by the developers
• Environment
• Domain knowledge of developers
• Experience on technologies to be used
• Availability of training
3. User involvement in the project :
• Expertise of user in project
• Involvement of user in all phases of the project
• Experience of user in similar project in the past
4. Project type and associated risk :
• Stability of funds
• Tightness of project schedule
• Availability of resources
• Type of project
• Size of the project
• Expected duration for the completion of project
• Complexity of the project
• Level and the type of associated risk
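As a rough illustration of how these parameters might be recorded and used, the sketch below scores a hypothetical project; the thresholds and the suggested models are only examples, not a formal selection algorithm.

# Hypothetical project profile capturing some of the parameters above.
project = {
    "requirements_stable": False,        # requirements change often
    "requirements_clear_early": False,
    "team_experienced": True,
    "user_available_throughout": True,
    "schedule_tight": True,
    "risk_level": "high",
}

# A very rough rule of thumb for discussion purposes only.
if not project["requirements_stable"] and project["user_available_throughout"]:
    suggestion = "iterative / agile model (e.g., Scrum or XP)"
elif project["risk_level"] == "high":
    suggestion = "risk-driven model (e.g., Spiral)"
else:
    suggestion = "plan-driven model (e.g., Waterfall or Incremental)"

print("Suggested process model:", suggestion)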
Question Bank
1. Define Software and Software Engineering.
2. State four characteristics of software.
3. Explain and differentiate between hardware and software.
4. State and explain broad categories of software (Changing nature of software)
5. Explain challenges faced by Software developers due to changing nature.
6. Explain Software Engineering as layered technology approach.
7. Using schematic diagram explain software process framework.
8. State and define generic process framework activities.
9. Enlist and define Umbrella activities in a Software Process framework.
10. Explain Waterfall process model with their advantages and limitations
11. Explain Incremental process model with its advantages and limitations
12. Explain RAD process model with advantages and limitations.
13. Explain Prototype model with advantages and limitations.
14. Explain Spiral model with advantages and limitations.
15. Write a note on Component based process model
16. Explain in brief Specialized process models AOSP/AOP
17. State features of Agile Software development.
18. Explain concept of Extreme Programming (XP).
19. Write a note on Adaptive Software Development
20. Describe the Scrum process with the help of a schematic diagram
21. Write a detailed note on Dynamic Systems Development Method
22. Write a note on Crystal family of agile software development
23. State various parameters of selection of software process model
Chapter 2 Software Engineering Practices And Software Requirements Engineering
2.1 Software Engineering Practices and importance, Core Principles
 Software engineering deals with processes to ensure delivery of the software through
management control of development process and production of requirement analysis models,
data models, process models, information products, reports and software documentation.
 Software Engineering practices consist of collection of concepts, principles, methods and tools
that a software engineer calls upon on a daily basis.
 It equips managers to manage software projects and software engineers to build computer
programs.
 Provides necessary technical and management know-how for getting the job done.
 Transforms a haphazard, unfocused approach into something that is more organized, more
effective and more likely to achieve success.
Importance of Software Engineering practices:
Software engineering practices consider various issues such as the hardware platform, performance,
scalability and upgrades.
The Essence of software engineering practices:
The essence includes understanding the problem, planning a solution, carrying out the plan and
examining the results for accuracy.
1. Understand the problem (communication and analysis)
 Who has a stake in the solution to the problem?
 What are the unknowns (data, function, behavior)?
 Can the problem be compartmentalized?
 Can the problem be represented graphically?
2. Plan a solution (planning, modeling and software design)
 Have you seen similar problems like this before?
 Has a similar problem been solved and is the solution reusable?
 Can sub problems be defined and are solutions available for the sub problems?
3. Carry out the plan: The design you’ve created serves as a road map for the system you want to
build. (Construction, Code generation)
 Does the solution conform to the plan? Is source code traceable to the design model?
 Is each component part of the solution provably correct? Have the design and code been
reviewed, or have correctness proofs been applied to the algorithm?
4. Examine the result for accuracy (testing and quality assurance)
 Is it possible to test each component part of the solution?
 Does the solution produce results that conform to the data, functions and features that are
required? Has the software been validated against all stakeholder requirements?
Core Principles of Software Engineering (Statements & Meaning of each Principle)
1. The reason it all exists:
The software system exists in the organization to provide value to its users, within the constraints of
the available hardware and software. Hence all decisions should be made by keeping this in mind.
2. Keep it Simple, Stupid (KISS)
Software design is not a haphazard process. There are many factors considered in the design effort.
The design should be straight forward and as simple as possible. This facilitates having a system
which can be easily understood and easy to maintain.
Simple doesn’t mean quick and dirty. In fact, it requires a lot of thought and effort over multiple
iterations to simplify a complex design. The advantage is that the resulting software is less error prone
and easily maintainable.
3. Maintain the vision
A clear vision is essential for the success of a software project. If the vision is missing, the project
may end up being of two or more minds. The team leader has a critical role to play in maintaining the
vision and enforcing compliance, with the help of the team members.
4. What you produce, others will consume
The design and implementation should be done keeping in mind that others will consume what you
produce. The code should permit extension of the system, and it should be written so that other
programmers who must debug, maintain or extend it can understand it, while satisfying all the user needs.
5. Be open to future
A system with a long lifetime has more value. Industry-standard software systems tend to endure
longer, so the system should be ready to accept and adapt to new changes. Systems that are designed
keeping future needs in mind will be more successful and acceptable to the users.
6. Plan ahead for reuse
Reuse saves time and effort. The reuse of code and design is one of the advantages of object-oriented
technologies. The reuse of parts of the code helps in reducing the cost and time involved in new
software development.
7. Think
Placing clear and complete thought before action almost always produces better results. With proper
thinking, we are most likely to do it right, and we also gain knowledge about how to do it right again.
Even if something goes wrong, the effort becomes a valuable experience because adequate thought
went into it. When clear thought has gone into the system, value comes out of it, and this provides
potential rewards.
2.2 Communication Practices, Planning Practices, Modelling Practices, Construction Practices,
Software Deployment
Effective communication among the technical peers, customers and other stakeholders, project
managers etc. is among the most challenging activities that confront software engineers.
Before customers’ requirements can be analyzed, modeled, or specified, they must be gathered
through a communication activity.
Effective communication is among the most challenging activities that designers will confront.
Communication Principles are
1. Listen: Try to focus on the speaker’s words, rather than formulating your response to those
words. Ask for clarification if something is unclear, but avoid constant interruptions.
2. Prepare before you communicate: Spend time to understand the problem before you meet
with others. If necessary, do some research to understand business domain jargon.
3. Someone should facilitate the activity: Every communication meeting should have a leader to
keep the conversation moving in a productive direction, to mediate any conflict that does occur
and to ensure that the other principles are followed.
4. Face-to-face communication is best: It usually works even better when some other
representation of the relevant information is present. For example, a participant may create a drawing
or a “strawman” document that serves as a focus for discussion.
5. Take notes and document decisions: Things have a way of falling through the cracks. Someone
participating in the communication should serve as a “recorder” and write down all important
points and decisions.
6. Strive for collaboration: Collaboration and consensus occur when the collective knowledge of
members of the team is used to describe product or system functions or features. Collaboration
serves to build trust among team members and creates a common goal for the team.
7. Stay focused; modularize your discussion: The more people involved in any communication,
the more likely that discussion will bounce from one topic to the next. The facilitator should keep the
conversation modular, leaving one topic only after it has been resolved.
8. If something is unclear, draw a picture: Verbal communication goes only so far. A sketch or
drawing can often provide clarity when words fail to do the job.
9. (a) Once you agree to something, move on. (b) If you can’t agree to something, move on. (c) If a
feature or function is unclear and cannot be clarified at the moment, move on.
Communication, like any software engineering activity, takes time. Rather than iterating
endlessly, the people who participate should recognize that many topics require discussion and
that “moving on” is sometimes the best way to achieve communication agility.
10. Negotiation is not a contest or a game. It works best when both parties win: There are many
instances in which the designer and other stakeholders must negotiate functions and features,
priorities, and delivery dates. If the team has collaborated well, all parties have a common goal;
still, negotiation will demand compromise from all parties.
Planning Practices: Concept, Need of planning, basic activities included, statements and
meaning of each principle.
The planning activity encompasses a set of management and technical practices that enable the
software team to define a road map as it travels towards its strategic goals and tactical objectives.
Like most things in life, planning should be conducted in moderation, just enough to provide useful
guidance to the team.
Planning Principles:
1. Understand the scope of the project: It is impossible to use a road map if one doesn’t know
where to go. Scope provides the software team with a destination.
2. Involve stakeholders in the planning activity: Stakeholders define priorities and establish
project constraints. To accommodate these realities, software engineers must often negotiate
order of delivery, timelines and other project related issues.
3. Recognize that planning is iterative: When the project work begins, it is likely that things
will change. To accommodate these changes the plan must be adjusted as a consequence.
The iterative and incremental models may also dictate re-planning based on the feedback received
from users.
4. Estimate based on what you know: The purpose of estimation is to provide an indication of the
efforts, cost, task duration and skillsets based on the team’s current understanding of the work
and past experience. If the information is vague or unreliable, estimates will be equally unreliable.
5. Consider the risk as you define the plan: The team should define the risks of high impact and
high probability. It should also provide contingency plan if the risks become a reality. The
project plan should be adjusted to accommodate the likelihood of the risks.
6. Be realistic: A realistic plan accounts for inefficiencies and change while still aiming to complete
the project on time. Even the best software engineers commit mistakes and then correct
them. Such realities should be considered while establishing a project plan.
7. Adjust granularity as you define the plan: Granularity refers to the level of details that is
introduced as a project plan is developed. It is the representation of the system from macro to
micro level. A “high-granularity” plan provides significant work task detail that is planned over
relatively short time increments. A “low-granularity” plan provides broader work tasks that are
planned over longer time periods. In general, granularity moves from high to low as the project
time line moves away from the current date.
8. Define how you intend to ensure quality: The plan should identify how the software team
intends to ensure quality. If technical reviews are to be conducted, they should be scheduled.
9. Describe how you intend to accommodate change: Even the best planning can be obviated by
uncontrolled change. The software team should identify how the changes are to be
accommodated as the software engineering work proceeds. If a change is requested, the team
may decide on the possibility of implementing the changes or suggest alternatives. The team
should also assess the impact of change on the development process and the resulting changes in cost.
10. Track and monitor the plan frequently and make adjustments if required: Software projects
fall behind schedule one day at a time. Therefore, it makes sense to track progress on a daily basis,
looking for problem areas and situations in which scheduled work does not conform to actual
work conducted. When slippage is encountered, the plan is adjusted accordingly.
The W5HH Principle:
Barry Boehm suggests an approach that addresses project objectives, milestones and schedules,
responsibilities, management and technical approaches, and required resources. It is the W5HH
principle, which includes a series of questions (a small illustrative template follows this list):
 Why is the system being developed?
 What will be done?
 When will it be accomplished?
 Who is responsible for a function?
 Where are they located organizationally?
 How will the job be done technically and managerially?
 How much of each resource is needed?
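A hypothetical project's answers to the W5HH questions could be recorded in a simple template such as the one below; every value shown is a placeholder.

# Illustrative W5HH record for a fictitious project.
w5hh = {
    "Why is the system being developed?": "Automate manual order tracking",
    "What will be done?": "Milestones: requirements, design, two increments",
    "When will it be accomplished?": "Increment 1 in 8 weeks, increment 2 in 16 weeks",
    "Who is responsible for a function?": {"order entry": "Team A", "reporting": "Team B"},
    "Where are they located organizationally?": "Development centre and client IT department",
    "How will the job be done technically and managerially?": "Incremental model, weekly reviews",
    "How much of each resource is needed?": {"developers": 4, "testers": 1},
}

for question, answer in w5hh.items():
    print(question, "->", answer)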
Modeling Principles
Concept of Software Modeling
In software engineering designers create models to gain a better understanding of the actual entity to
be built. When the entity is a physical thing, such as machine, we can build a model that is identical
in form and shape but smaller in scale. However, when the entity to be built is software, the model
must take a different form. The model must be capable of representing the information that the
software transforms, the architecture and functions that enable the transformation to occur, the
features that the user desires and the behavior of the system as the transformation is taking place. The
models must accomplish these objectives at different levels of abstraction. Initially the system is
represented by depicting the software from the customer’s point of view (Analysis model). Later,
the system is represented at a more technical level, providing a concrete specification for the
construction of the software (Design model). The Design model represents the characteristics of the
software which help the professionals to construct the software effectively.
In software engineering work, two classes of models are created viz. Analysis Models and
Design Models.
Analysis Models
This model represents the customer requirements by depicting the software in three domains:
• The information domain
• The functional domain
• The behavioral domain
Analysis Modeling Principles:
Requirement models (also called analysis models) represent customer requirements by depicting the
software three different domains: the information domain, the functional domain and the behavioral
domain.
1. The information domain of a problem must be represented and understood
The information domain encompasses the data that flow into the system from end users, other
systems or external devices, the data that flow out of the system via the user interface, network
interfaces, reports, graphics, and other means and the data stores that collect and organize persistent
data objects i.e. data that are maintained permanently.
2. The functions that the software performs must be defined.
Software functions provide direct benefit to end users and also provide internal support for those
features that are user visible. Some functions transform data that flow into the system. In other cases,
functions affect some level of control over internal software processing or external system elements.
Functions can be described at many different levels of abstraction.
3. The behavior of the software as a consequence of external events must be represented.
The behavior of computer software is driven by its interaction with the external environment. Input
provided by end users, control data provided by an external system, or monitoring data collected
over a network all cause the software to behave in a specific way.
4. The models that depict information function and behavior must be partitioned in a manner
that uncovers detail in a layered fashion.
Requirements modeling are the first step in software engineering problem solving. It allows you to
better understand the problem and establishes a basis for the solution (design). Complex problems
are difficult to solve in their entirety.
5. The analysis task should move from essential information toward implementation detail.
Requirements modeling begin by describing the problem from the end-user’s perspective. The
“essence” of the problem is described without any consideration of how a solution will be
implemented. For example, a video game requires that the player “instruct” its protagonist on what
direction to proceed as she moves into a dangerous maze. Implementation detail indicates how the
essence will be implemented.
Design Modeling Principles: The software design model is similar to the architect’s plan or drawing
for a house. It begins by representing the totality of the thing to be built and slowly refines the thing
to provide guidance for constructing each detail. The design model created for the software provides
variety of views of the system. This includes the architecture, the user interface and component –
level detail. Design models provide a concrete specification for the construction of the software. It
represents characteristics of the software that help practitioners to construct it effectively.
The Design Modeling principles are:
1. Design should be traceable to the requirements model.
The design model should translate the information into an architecture, a set of subsystems that
implement major functions, and a set of component-level designs that are the realization of the
analysis classes.
2. Always consider the architecture of the system to be built
Software architecture is the skeleton of the system to be built. It affects interfaces, data structures,
program control flow and behavior, the manner in which testing can be conducted, the
maintainability of the resultant system.
3. Design of data is as important as design of processing functions
The data design is an essential element of architectural design. The manner in which data objects are
realized within the design cannot be left to chance. A well-structured data design helps to simplify
program flow, makes the design and implementation of software components easier, and makes
overall processing more efficient.
4. Both internal and external interfaces must be designed with care
The manner in which data flows between the components of a system has much to do with
processing efficiency, error propagation, and design simplicity. A well-designed interface makes
integration easier and assists the tester in validating component functions.
5. User interface design should be tuned to the needs of the end user.
The user interface is the visible manifestation of the software. No matter how sophisticated its
internal functions, no matter how comprehensive its data structures, no matter how well designed its
architecture, a poor interface design often leads to the perception that the software is “bad”.
6. Component-level design should be functionally independent.
Functional independence is a measure of the “Single-mindedness” of a software component. The
functionality that is delivered by a component should be cohesive-that is, it should focus on one and
only one function or sub function.
7. Components should be loosely coupled to one another and to the external environment.
Coupling is achieved in many ways: via a component interface, by messaging, or through global data.
As the level of coupling increases, the likelihood of error propagation also increases and the overall
maintainability of the software decreases. Therefore, component coupling should be kept as low as is
reasonable (a small sketch contrasting tight and loose coupling follows this list of principles).
8. Design models should be easily understandable
The purpose of design is to communicate information to practitioners who will generate code to
those who will test the software and to others who may maintain the software in the future. If the
design is difficult to understand, it will not serve as an effective communication medium.
9. The design should be developed iteratively. With each iteration, the designer should strive
for greater simplicity
Like almost all creative activities, design occurs iteratively. The first iterations work to refine the
design and correct errors, but later iterations should strive to make the design as simple as possible.
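To make principles 6 and 7 concrete, here is a minimal, hypothetical sketch: the first version couples a component to global data, while the second passes the same information through a narrow interface, keeping the component cohesive and loosely coupled.

# Tightly coupled version: the component depends on shared global data.
TAX_RATE = 0.18                          # global data couples every user of it

def total_with_tax_tight(amount):
    return amount + amount * TAX_RATE    # hidden dependency on the global

# Loosely coupled, cohesive version: the dependency is passed explicitly
# through the component's interface, so it can be tested and reused alone.
def total_with_tax(amount, tax_rate):
    """Single-minded component: computes one thing, from its inputs only."""
    return amount + amount * tax_rate

print(total_with_tax(100.0, 0.18))       # 118.0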
Construction Practices
The construction activity encompasses a set of coding and testing tasks that lead to operational
software that is ready for delivery to the customer or the end user.
Even the software development process has undergone a radical change over the years.
In modern software engineering work, coding may be:
1. The direct creation of source code using a programming language.
2. The automatic generation of source code using an intermediate, design-like representation of the
components to be built.
3. The automatic generation of executable code using a fourth-generation language (4GL).
Coding Principles and concept
The principles and concepts that guide the coding task are closely aligned with programming style,
programming languages, and programming methods. However, there are a number of fundamental
principles that can be stated.
Preparation Principles:
Before writing the lines of code,
1. Understand the problem to be solved
2. Understand basic design principles and concepts
3. Pick a programming language that meets the needs of the software to be built and the
environment in which it will operate
4. Select a programming environment that provides tools that will make the work easier
5. Create a set of unit tests that will be applied once the component code is completed
Coding Principles:
While writing the code, consider the following points (a short illustrative fragment follows this list):
1. Construct algorithms by following structured programming practice.
2. Consider the use of proper programming language.
3. Select data structures that will meet the needs of the design.
4. Understand the software architecture and create interfaces that are consistent with it.
5. Keep conditional logic as simple as possible.
6. Create nested loops in a way that makes them easily testable.
7. Select meaningful variable names and follow other local coding standards
8. Write code that is self-documenting
9. Create a visual layout that aids understanding
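A small, hypothetical fragment illustrating several of these points (meaningful names, simple conditional logic, self-documenting code and a layout that aids understanding):

def classify_marks(marks):
    """Return a grade label for a marks value between 0 and 100."""
    if marks < 0 or marks > 100:
        raise ValueError("marks must be between 0 and 100")
    if marks >= 75:
        return "distinction"
    if marks >= 60:
        return "first class"
    if marks >= 40:
        return "pass"
    return "fail"

print(classify_marks(68))    # first class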
Validation Principles: After completing the first coding pass, consider the following points
1. Conduct a code walkthrough when appropriate
2. Perform unit tests and correct errors that are uncovered
3. Refactor the code
Testing principles and concept:
Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet undiscovered error.
A successful test is one which uncovers an as-yet undiscovered error (a minimal example of such a
unit test is sketched below).
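A minimal example of a unit test written with this intent, using Python's standard unittest module (the divide function is a toy component invented for illustration):

import unittest

def divide(numerator, denominator):
    """Toy component under test."""
    if denominator == 0:
        raise ZeroDivisionError("denominator must not be zero")
    return numerator / denominator

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertAlmostEqual(divide(10, 4), 2.5)

    def test_zero_denominator_is_rejected(self):
        # A good test case targets the input most likely to expose a defect.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()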
Davis suggests a set of testing principles as follows:
1. All tests should be traceable to customer requirements.
The objective of software testing is to uncover errors. It follows that the most severe defects from the
user’s point of view are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins
Test planning can begin as soon as the requirements model is complete. Detailed definition of test
cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned
and designed before any code has been generated.
3. The Pareto principle applies to software testing
In this context the Pareto principle implies that 80 percent of all errors uncovered during testing will
likely be traceable to 20 percent of all program components. The problem is to isolate these suspect
components and to thoroughly test them.
4. Testing should begin “in the small” and progress toward testing “in the large”
The initial testing should be on small individual components. As testing progresses, focus shifts to
find errors in integrated clusters of programs and finally in the entire system.
5. Exhaustive testing is not possible
It may be noted that the number of path permutations for even a moderately sized program is
exceptionally large. Hence for this reason it is impossible to execute every combination of the paths
during testing.
Software Deployment
The deployment phase includes three actions, namely: (1) delivery, (2) support, and (3) feedback.
1. The delivery cycle provides the customer and the end user with an operational software
increment that provides usable functions and features.
2. The support cycle provides documentation, human assistance for all functions and features
introduced during all deployment cycles to date.
3. Each feedback cycle provides the software team with useful inputs. The feedback can help in
modifications to the functions, features and even the approach for the next increments.
The delivery of the software increment is an important milestone of any software project. A number
of key principles should be followed as the team prepares to deliver an increment.
1. Customer expectations for the software must be managed
Before the software delivery the project team should ensure that all the requirements of the users are
satisfied.
2. A complete delivery package should be assembled and tested
The system containing all executable software, support data files, tools and support documents
should be provided with beta testing at the actual user side.
3. A support regime must be established before the software is delivered
This includes assigning the responsibility to the team members to provide support to the users in case
of problem.
4. Appropriate instructional materials must be provided to end users
At the end of construction various documents such as technical manual, operations manual, user
training manual, user reference manual should be kept ready. These documents will help in providing
proper understanding and assistance to the user.
5. Buggy software should be fixed first, delivered later.
Sometimes under time pressure, the software delivers low-quality increments with a warning to the
customer that bugs will be fixed in the next release. Customers will forget you delivered a high-
quality product a few days late, but they will never forget the problems that a low quality product
caused them. The software reminds them every day.
2.3 Requirement Engineering
Requirement Engineering means that requirements for a product are defined, managed and tested
systematically.
Requirements engineering builds a bridge to design and construction.
Requirements engineering provides the appropriate mechanism for understanding what the customer
wants, analyzing need, assessing feasibility, negotiating a reasonable solution, specifying the
solution unambiguously, validating the specification, and managing the requirements as they are
transformed into an operational system.
Requirement Engineering helps software engineers to better understand the problem they will work
to solve. It includes the set of tasks that lead to an understanding of:
1. What will be the business impact of the software?
2. What exactly does the customer want?
3. How will the end user interact with the software?
Software engineers and other project stakeholders all participate in requirements engineering.
Collaborative Requirement Gathering and Analysis
Many different approaches to collaborative requirements gathering have been proposed. Each makes
use of a slightly different scenario, but all apply some variation on the following basic guidelines:
• Meetings are conducted and attended by both software engineers and other stakeholders.
• Rules for preparation and participation are established.
• An agenda is suggested that is formal enough to cover all important points but informal
enough to encourage the free flow of ideas.
• A “facilitator” (can be a customer, a developer, or an outsider) controls the meeting.
• A “definition mechanism” (can be work sheets, flip charts, or wall stickers or an electronic
bulletin board, chat room, or virtual forum) is used.
The goal is to identify the problem, propose elements of the solution, negotiate different approaches,
and specify a preliminary set of solution requirements in an atmosphere that is conducive to the
accomplishment of the goal.
To better understand the flow of events as they occur, a brief scenario can be presented that outlines
the sequence of events that lead up to the requirements gathering meeting, occur during the meeting,
and follow the meeting.
Analysis
Anyone who has done requirements engineering on more than a few software projects begins to
notice that certain problems reoccur across all projects within a specific application domain.
These analysis patterns suggest solutions (e.g., a class, a function, a behavior) within the application
domain that can be reused when modeling many applications.
 Geyer-Schulz and Hahsler suggest two benefits that can be associated with the use of analysis
patterns:
1. Firstly, Analysis patterns speed up the development of abstract analysis models that capture
the main requirements of the concrete problem by providing reusable analysis models with
examples as well as a description of advantages and limitations.
2. Second, Analysis patterns facilitate the transformation of the analysis model into a design
model by suggesting design patterns and reliable solutions for common problems.
 Analysis patterns are integrated into the analysis model by reference to the pattern name.
 They are also stored in a repository so that requirements engineers can use search facilities to
find and apply them. Information about an analysis pattern is presented in a standard template.
Types of Requirements
• Functional Requirements
A Functional Requirement (FR) is a description of the service that the software must offer. It
describes a software system or its component.
A function is nothing but inputs to the software system, its behavior, and outputs. It can be a
calculation, data manipulation, business process, user interaction, or any other specific functionality
which defines what function a system is likely to perform.
Functional Requirements in Software Engineering are also called Functional Specification.
Functional Requirements of a system should include the following things:
 Details of operations conducted in every screen
 Data handling logic should be entered into the system
 It should have descriptions of system reports or other outputs
 Complete information about the workflows performed by the system
 It should clearly define who will be allowed to create/modify/delete the data in the system
 How the system will fulfill applicable regulatory and compliance needs should be captured in the
functional document
Non-Functional Requirements
 The non-functional requirements (also known as quality requirements) are related to system
attributes such as reliability and response time.
 Non-functional requirements arise due to user requirements, budget constraints, organizational
policies, and so on. These requirements are not related directly to any particular function
provided by the system.
 Non-functional requirements should be accomplished in software to make it perform efficiently.
 For example, if an aeroplane is unable to fulfill reliability requirements, it is not approved for
safe operation. Similarly, if a real time control system is ineffective in accomplishing non-
functional requirements, the control functions cannot operate correctly.
The description of different types of non-functional requirements is listed below
1. Product requirements: These requirements specify how software product performs. Product
requirements comprise the following.
• Efficiency requirements: Describe the extent to which the software makes optimal use of
resources, the speed with which the system executes, and the memory it consumes for its
operation. For example, the system should be able to operate at least three times faster than
the existing system.
• Reliability requirements: Describe the acceptable failure rate of the software. For example,
the software should be able to operate even if a hazard occurs.
• Portability requirements: Describe the ease with which the software can be transferred from
one platform to another. For example, it should be easy to port the software to a different
operating system without the need to redesign the entire software.
• Usability requirements: Describe the ease with which users are able to operate the software. For
example, the software should be able to provide access to functionality with fewer keystrokes
and mouse clicks.
2. Organizational requirements: These requirements are derived from the policies and procedures
of an organization. Organizational requirements comprise the following.
• Delivery requirements: Specify when the software and its documentation are to be delivered to
the user.
• Implementation requirements: Describe requirements such as programming language and
design method.
• Standards requirements: Describe the process standards to be used during software
development. For example, the software should be developed using standards specified by the
ISO and IEEE standards.
3. External requirements: These requirements include all the requirements that affect the software
or its development process externally. External requirements comprise the following.
• Interoperability requirements: Define the way in which different computer based systems will
interact with each other in one or more organizations.
• Ethical requirements: Specify the rules and regulations of the software so that they are
acceptable to users.
• Legislative requirements: Ensure that the software operates within the legal jurisdiction. For
example, pirated software should not be sold.
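A minimal way to visualise the split between functional and non-functional requirements described above; the example requirements themselves are invented.

from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    kind: str        # "functional" or "non-functional"
    category: str    # e.g. "user interaction", "efficiency", "portability"

requirements = [
    Requirement("The system shall let a user reset a forgotten password",
                "functional", "user interaction"),
    Requirement("The system shall respond to a search within 2 seconds",
                "non-functional", "efficiency"),
    Requirement("The software shall run on both Windows and Linux",
                "non-functional", "portability"),
]

for r in requirements:
    print(r.kind, "-", r.category, ":", r.text)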
Eliciting Requirements
• Requirements elicitation (also called requirements gathering) combines elements of problem
solving, elaboration, negotiation, and specification.
• In order to encourage a collaborative, team-oriented approach to requirements gathering,
stakeholders work together to identify the problem, propose elements of the solution, negotiate
different approaches and specify a preliminary set of solution requirements.
The work products produced as a consequence of requirements elicitation will vary depending on the
size of the system or product to be built. For most systems, the work products include
• A statement of need and feasibility.
• A bounded statement of scope for the system or product.
• A list of customers, users, and other stakeholders who participated in requirements elicitation.
• A description of the system’s technical environment.
• A list of requirements (preferably organized by function) and the domain constraints that apply to
each.
• A set of usage scenarios that provide insight into the use of the system or product under different
operating conditions.
• Any prototypes developed to better define requirements.
Each of these work products is reviewed by all people who have participated in requirements
elicitation.
Developing Use Cases
 In software engineering, a Use Case is a list of actions or event steps, typically defining the
interactions between a role (known in the Unified Modeling Language as an actor) and a system,
to achieve a goal.
 It tells a stylized story about how an end user (playing one of a number of possible roles)
interacts with the system under a specific set of circumstances.
 The story may be narrative text, an outline of tasks or interactions, a template-based description,
or a diagrammatic representation. Regardless of its form, a use case depicts the software or
system from the end user’s point of view.
 Step 1: Identify who is going to be using the system directly.
The main component of use case development is actors. An actor is a specific role played by a
system user and represents a category of users that demonstrates similar behaviors when using the
system. The actors may be people or computer systems.
A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one
from which the system needs assistance to satisfy its goal. One of the actors is designated as the
system under discussion. A person can play several roles and thereby represent several actors, such
as computer-system operator or end user.
 Step 2: Pick one of those Actors.
To identify a target system’s use case, we identify the system actors. A good starting point is to
check the system design and identify who it is supposed to help.
 Step 3: Define what that Actor wants to do with the system. Each of these things that the
actor wants to do with the system becomes a Use Case.
• The things that the actors want to do with the system become goals.
• The goal is the end outcome of the actions of the user.
• There are two types of goals. The first type is a rigid goal. This goal must be completely
satisfied and describes a target system’s minimum requirement.
 Step 4: For each of those Use Cases decide on the most usual course when that Actor is
using the system. What normally happens.
A use case has one basic course and several alternative courses. The basic course is the simplest
course, the one in which a request is delivered without any difficulty.
There may be alternative courses that describe variants of the basic course and the errors that can
occur. These are documented as extensions to the use case.
 Step 5: Describe that basic course in the description for the use case.
The usage scenario is written from the user’s point of view in easy-to-understand language. This
step is very similar to documenting a process flow. The steps necessary to achieve the identified goal
are written out.
 Step 6: Once you’re happy with the basic course, consider the alternatives and add those as
extending use cases. The extensions are written in the same manner as the original use case, but they
provide alternatives to the simplest path. (A simple structured sketch of a use case follows below.)
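The outcome of steps 1 to 6 can be captured in a simple structured form. The actor, goal and courses below are hypothetical, loosely inspired by the SafeHome example used later in this chapter.

# Illustrative use-case record (all details are hypothetical).
use_case = {
    "name": "Arm the security system",
    "primary_actor": "homeowner",
    "goal": "put the house into 'armed' mode before leaving",
    "basic_course": [
        "homeowner enters the four-digit password",
        "system validates the password",
        "system arms all sensors and confirms with a beep",
    ],
    "extensions": [
        "password is invalid: system prompts for re-entry (maximum three attempts)",
    ],
}

for step_number, step in enumerate(use_case["basic_course"], start=1):
    print(step_number, step)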
Building the Requirements Model
The intent of the analysis model is to provide a description of the required informational, functional,
and behavioral domains for a computer-based system.
The model changes dynamically as we learn more about the system to be built, and other
stakeholders understand more about what they really require. For that reason, the analysis model is a
snapshot of requirements at any given time.
Elements of the Requirements Model
The specific elements of the requirements model are dictated by the analysis modeling method that is
to be used. However, a set of generic elements is common to most requirements models.
Scenario-based elements :
• The system is described from the user’s point of view using a scenario-based approach.
• Scenario-based elements of the requirements model are often the first part of the model that is
developed. As such, they serve as input for the creation of other modeling elements.
• The below figure depicts a UML activity diagram for eliciting requirements and representing
them using use cases. Three levels of elaboration are shown, culminating in a scenario-based
representation.
Figure: UML activity diagrams for eliciting requirements
Class-based elements :
• Each usage scenario implies a set of objects that are manipulated as an actor interacts with the
system. These objects are categorized into classes—a collection of things that have similar
attributes and common behaviors.
• For example, a UML class diagram given below can be used to depict a Sensor class for the
SafeHome security function.
• Note that the diagram lists the attributes of sensors (e.g., name, type) and the operations (e.g.,
identify, enable) that can be applied to modify these attributes.
• In addition to class diagrams, other analysis modeling elements depict the manner in which
classes collaborate with one another and the relationships and interactions between classes.
Figure: Class diagram for Sensor
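The same Sensor abstraction can be sketched in code. The attributes (name, type) and operations (identify, enable) come from the class diagram above; their bodies are only placeholders.

class Sensor:
    """Analysis class for the SafeHome security function (sketch only)."""

    def __init__(self, name, sensor_type):
        self.name = name              # attribute: name
        self.type = sensor_type       # attribute: type
        self.enabled = False

    def identify(self):               # operation: identify
        return self.name + " (" + self.type + ")"

    def enable(self):                 # operation: enable
        self.enabled = True

door_sensor = Sensor("front door", "magnetic contact")
door_sensor.enable()
print(door_sensor.identify(), "enabled:", door_sensor.enabled)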
Behavioral elements :
• The behavior of a computer-based system can have a profound effect on the design that is
chosen and the implementation approach that is applied.
• Therefore, the requirements model must provide modeling elements that depict behavior.
• The state diagram is one method for representing the behavior of a system by depicting its
states and the events that cause the system to change state.
• A state is any externally observable mode of behavior. In addition, the state diagram
indicates actions (e.g., process activation) taken as a consequence of a particular event.
To illustrate the use of a state diagram, consider software embedded within the SafeHome control
panel that is responsible for reading user input. A simplified UML state diagram is shown in the
figure below.
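A state diagram of this kind can be approximated in code as a table of states and events. The state and event names below are assumptions chosen for illustration, not the exact contents of the SafeHome diagram.

# Hypothetical states and events for control-panel input handling.
TRANSITIONS = {
    ("reading", "key pressed"): "reading",
    ("reading", "password entered"): "comparing",
    ("comparing", "password ok"): "selecting action",
    ("comparing", "password bad"): "locked",
    ("locked", "timer expired"): "reading",
}

def next_state(state, event):
    """Return the new state, or stay in the current state if the event is unknown."""
    return TRANSITIONS.get((state, event), state)

state = "reading"
for event in ["key pressed", "password entered", "password ok"]:
    state = next_state(state, event)
    print(event, "->", state)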
Flow-oriented elements:
 Information is transformed as it flows through a computer-based system. The system accepts
input in a variety of forms, applies functions to transform it, and produces output in a variety of
forms. Input may be a control signal transmitted by a transducer, a series of numbers typed by a
human operator, a packet of information transmitted on a network link, or a voluminous data file
retrieved from secondary storage.
 The transform may comprise a single logical comparison, a complex numerical algorithm, or a
rule-inference approach of an expert system. Output may light a single LED or produce a 200-
page report. In effect, we can create a flow model for any computer-based system, regardless of
size and complexity.
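The input, transform, output view can be illustrated with a trivial flow: numbers typed by an operator flow in, a single transform is applied, and a report-style line flows out. All names are illustrative.

def read_input():
    # Stands in for any external producer: an operator, a network link, a file.
    return [12.5, 7.25, 30.0]

def transform(readings):
    # The transform may be as simple as a sum or as complex as a numerical algorithm.
    return sum(readings)

def produce_output(total):
    # Stands in for any consumer: an LED, a report, a network message.
    print("Daily total:", round(total, 2))

produce_output(transform(read_input()))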
Requirement Negotiation
 In an ideal requirements engineering context, the inception, elicitation, and elaboration tasks
determine customer requirements in sufficient detail to proceed to subsequent software
engineering activities.
 In reality, we may have to enter into a negotiation with one or more stakeholders.
 In most cases, stakeholders are asked to balance functionality, performance, and other product or
system characteristics against cost and time-to-market.
 The intent of this negotiation is to develop a project plan that meets stakeholder needs while at
the same time reflecting the real-world constraints (e.g., time, people, budget) that have been
placed on the software team.
 The best negotiations strive for a “win-win” result.
 Stakeholders win by getting the system or product that satisfies the majority of their needs and
you (as a member of the software team) win by working to realistic and achievable budgets and
deadlines.
Following are the set of negotiation activities at the beginning of each software process iteration
1. Identification of the system or subsystem’s key stakeholders.
2. Determination of the stakeholders’ “win conditions.”
3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of win-win
conditions for all concerned (including the software team).
Successful completion of these initial steps achieves a win-win result, which becomes the key
criterion for proceeding to subsequent software engineering activities.
Requirement Validation
 Review the requirement specification for errors, ambiguities, omissions, and conflicts.
 Requirements validation examines the specification to ensure that all software requirements have
been stated unambiguously; that inconsistencies, omissions, and errors have been detected and
corrected; and that the work products conform to the standards established for the process, the
project, and the product.
 The primary requirements validation mechanism is the technical review.
 The review team that validates requirements includes software engineers, customers, users, and
other stakeholders who examine the specification looking for errors in content or interpretation,
areas where clarification may be required, missing information, inconsistencies (a major problem
when large products or systems are engineered), conflicting requirements, or unrealistic
(unachievable) requirements.
2.4 Software Requirement Specification
 It contains a complete information description, a detailed functional description, a representation
of system behaviour, an indication of performance requirements and design constraints,
appropriate validation criteria, and other information pertinent to requirements.
 Software requirement specification (SRS) is a document that completely describes what the
proposed software should do without describing how software will do it.
 The basic goal of the requirement phase is to produce the SRS, Which describes the complete
behaviour of the proposed software.
 The SRS also helps the clients to understand their own needs.
Need for Software Requirement Specification
 An SRS minimizes the time and effort required by developers to achieve desired goals and also
minimizes the development cost.
 A good SRS defines how an application will interact with system hardware, other programs and
human users in a wide variety of real-world situations.
 Parameters such as operating speed, response time, availability, portability, maintainability,
footprint, security and speed of recovery from adverse events are evaluated.
Characteristics of an SRS
 Software requirements specification should be accurate, complete, efficient, and of high quality,
so that it does not affect the entire project plan.
 An SRS is said to be of high quality when the developer and user easily understand the prepared
document.
 Other characteristics of an SRS are discussed below.
Correct
 SRS is correct when all user requirements are stated in the requirements document.
 The stated requirements should be according to the desired system.
 This implies that each requirement is examined to ensure that it (SRS) represents user
requirements.
 Note that there is no specified tool or procedure to assure the correctness of SRS. Correctness
ensures that all specified requirements are performed correctly.
Unambiguous
 SRS is unambiguous when every stated requirement has only one interpretation.
 In case there is a term used with multiple meanings, the requirements document should specify
the meanings in the SRS so that it is clear and easy to understand.
Complete
 SRS is complete when the requirements clearly define what the software is required to do.
 This includes all the requirements related to performance, design and functionality.
Ranked for importance/stability
 All requirements are not equally important, hence each requirement is identified to make
differences among other requirements.
 For this, it is essential to clearly identify each requirement. Stability implies the probability of
changes in the requirement in future.
Modifiable
 The requirements of the user can change; hence the requirements document should be created in
such a manner that those changes can be incorporated easily, while consistently maintaining the
structure and style of the SRS.
Traceable
 SRS is traceable when the source of each requirement is clear and facilitates the reference of each
requirement in future.
 For this, forward tracing and backward tracing are used.
 Forward tracing implies that each requirement should be traceable to design and code elements.
 Backward tracing implies defining each requirement explicitly referencing its source.
Verifiable
 SRS is verifiable when the specified requirements can be verified with a cost-effective process to
check whether the final software meets those requirements.
 The requirements are verified with the help of reviews. Note that unambiguity is essential for
verifiability.
Consistent
 SRS is consistent when the subsets of individual requirements defined do not conflict with each
other.
 For example, there can be a case when different requirements can use different terms to refer to
the same object.
 There can be logical or temporal conflicts between the specified requirements and some
requirements whose logical or temporal characteristics are not satisfied.
 For instance, a requirement states that an event ‘a’ is to occur before another event ‘b’. But then
another set of requirements states (directly or indirectly by transitivity) that event ‘b’ should
occur before event ‘a’.
Format of SRS
In order to form a good SRS, the following points should be considered when forming the structure
of the document. These are as follows :
1. Introduction
(i) Purpose of this document
(ii) Scope of this document
(iii) Overview
2. General description
3. Functional Requirements
4. Interface Requirements
5. Performance Requirements
6. Design Constraints
7. Non-Functional Attributes
8. Preliminary Schedule and Budget
9. Appendices
Depending upon the information gathered after interaction with the stakeholders, the SRS is developed.
It describes the requirements of the software, including the changes and modifications needed to
increase the quality of the product and to satisfy the customer’s demands.
1. Introduction :
(i) Purpose of this Document –
First, the reason why this document is necessary and the purpose it serves are explained and
described.
(ii) Scope of this document –
Here, the overall working and main objective of the document and the value it will provide to the
customer are described and explained. It also includes a description of the development cost and the
time required.
(iii) Overview –
Here, a description of the product is given. It is simply a summary or overall review of the product.
2. General description :
General functions of product which includes objective of user, a user characteristic, features,
benefits, about why its importance is mentioned. It also describes features of user community.
3. Functional Requirements :
In this, possible outcome of software system which includes effects due to operation of program is
fully explained. All functional requirements which may include calculations, data processing, etc. are
placed in a ranked order.
4. Interface Requirements :
This section describes the software interfaces, i.e. how the software communicates with other programs or with users, whether in the form of a language, code, or messages. Examples can be shared memory, data streams, etc.
5. Performance Requirements :
This section explains how the software system performs the desired functions under specific conditions. It also states the required response time, required memory, maximum error rate, etc.
6. Design Constraints :
This section specifies the constraints, i.e. the limitations or restrictions placed on the design team. Examples may include the use of a particular algorithm, and hardware and software limitations.
7. Non-Functional Attributes :
This section explains the non-functional attributes required of the software system for better performance. Examples include security, portability, reliability, reusability, application compatibility, data integrity, scalability, etc.
8. Preliminary Schedule and Budget :
This section gives the initial version of the project plan and budget, including the overall time duration and overall cost required for the development of the project.
9. Appendices :
This section provides additional information such as references from which information was gathered, and definitions of specific terms, acronyms, and abbreviations.

Question Bank
1. Define SE practices, its importance.
2. State briefly the essence of SE Practices.
3. State and describe Seven core Principles of software Engineering
4. State and explain the communication Principles
5. State and explain ten planning Principles
6. Briefly explain Barry Boehm’s W5HH Principle.
7. Explain five Analysis modeling Principles.
8. Explain nine design modeling Principles.
9. Write a note on Construction practices and principles
10. Explain S/W Deployment phases and state five principles.
11. State and briefly discuss seven RE tasks.
12. Explain the Functional and Non-functional requirements.
13. Explain with an example a Use-Case representation
14. Describe SRS Template with its contents.
15. List and define six characteristics of a SRS.
Chapter 3 Software Modelling and Design


3.1.Translating Requirement model into Design model: Data Modelling
 A software engineer (sometimes called an analyst) builds the model using requirements
elicited from the customer.
 To validate software requirements, you need to examine them from a number of different
points of view.
 Analysis modelling represents requirements in three “dimensions” thereby increasing the
probability that errors will be found, that inconsistency will surface, and that omissions will
be uncovered.
 Data, functional, and behavioural requirements are modeled using a number of different
diagrammatic formats. Data modelling defines data objects, attributes, and relationships.
Data modelling :
Data modeling answers a set of specific questions that are relevant to any data processing
application.
1. What are the primary data objects to be processed by the system?
2. What is the composition of each data object and what attributes describe the object?
3. Where do the objects currently reside?
4. What are the relationships between each object and other objects?
5. What are the relationships between the objects and the processes that transform them?
To answer these questions, data modeling methods make use of the entity relationship diagram (ERD).
The ERD enables a software engineer to identify data objects and their relationships using a graphical notation. In the context of structured analysis, the ERD defines all data that are entered, stored, transformed, and produced within an application.
The entity relationship diagram focuses solely on data (and therefore satisfies the first operational analysis principle), representing a "data network" that exists for a given system. The ERD is especially useful for applications in which data and the relationships that govern data are complex.
Data modelling considers data independently of the processing that transforms the data.
Data Objects
A Data object is a representation of any composite information that must be understood by the
software. A data object can be an external entity, a thing, an occurrence of event, a role, a unit, a
place, a structure etc.
A person or a car can be viewed as a data object in the sense that either can be defined in terms of a set of attributes. A data object encapsulates data only – there is no reference within a data object to operations that act on the data. The data object can be represented in a table as follows:
Naming attributes (identifier)    Descriptive attributes    Referential attributes
Make, Model, ID#                  Body type, Color          Owner
The body of the table represents specific instances of the data object.
Data Attributes
Attributes define the properties of a data object and take on one of three different characteristics. They can be used to:
1. Name an instance of the data object
2. Describe the instance
3. Make reference to another instance in another table
For example, the attributes of a car data object may be id-number, body type, colour, etc.
Data relationships

Cardinality and Modality with example


Cardinality
 Cardinality is the specification of the number of occurrences of one [object] that can be related to
the number of occurrences of another [object]
 Cardinality is usually expressed as simply ‘one’ or ‘many’ i.e. 1:1 or 1:N or M:N
 It also defines the maximum number of objects that can participate in a relationship.
 Cardinality does not however indicate whether or not a particular data object must participate in
the relationship.
Modality
 To specify this information, the data model adds modality to the object/relationship pair.
 The modality of a relationship is 0 if there is no explicit need for the relationship to occur or the
relationship is optional.
 The modality is 1 if an occurrence of the relationship is mandatory.
Example
Consider software that is used by a local telephone company to process requests for field service. A
customer indicates that there is a problem.
If the problem is diagnosed as relatively simple, a single repair action occurs. However, if the
problem is complex, multiple repair actions may be required.
Following figure illustrates the relationship, cardinality and modality between the data objects
customer and repair action.
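As a rough illustration only (not part of the original figure), the relationship can be sketched in code; the class names Customer and RepairAction and the field repair_actions are assumptions made for this sketch:

from dataclasses import dataclass, field
from typing import List

@dataclass
class RepairAction:
    description: str                      # one occurrence of the "repair action" data object

@dataclass
class Customer:
    name: str
    # Cardinality 1:N -- one customer may be related to many repair actions.
    # Modality 0 on the repair-action side -- the list may be empty, because a
    # reported problem may turn out to need no repair action at all.
    repair_actions: List[RepairAction] = field(default_factory=list)

simple_case = Customer("A. Kumar", [RepairAction("replace line filter")])   # one repair action
no_repair = Customer("B. Singh")                                            # relationship does not occur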

3.2.Analysis Modelling : Elements of Analysis model


The analysis model and requirements specification provide a means for assessing quality once the
software is built.
Requirements analysis results in the specification of software’s operational characteristics.

System description  ->  Analysis model  ->  Design model

The analysis model is a bridge between the system description and the design model.
Objectives
Analysis model must achieve three primary objectives:
Describe Customer needs
Establish a basis for software design
Define a set of requirements that can be validated once the software is built.
Elements of the analysis model
1. Scenario based elements
 These elements represent the system from the user’s point of view.
 Scenario based elements include use case diagrams and user stories.
2. Class based elements
 These elements model the objects that the system manipulates.
 They define the objects, their attributes and the relationships between them.
 They also show how the classes collaborate with one another.
 Class based elements include the class diagram and the collaboration diagram.
3. Behavioral elements
 Behavioral elements represent the states of the system and how the states are changed by external events.
 Behavioral elements include the sequence diagram and the state diagram.
4. Flow oriented elements
 As information flows through a computer-based system, it gets transformed.
 These elements show how data objects are transformed while they flow between the various system functions.
 Flow oriented elements include the data flow diagram and the control flow diagram.
3.3.Design Modelling : Fundamental Design Concepts


Design Process
 Software design is an iterative process through which requirements are translated into a
“blueprint” for constructing the software.
 Initially, the design is represented at a high level of abstraction that can be traced directly to the data, functional, and behavioral requirements.
 As design iterations occur, subsequent refinement leads to design representations at much lower
levels of abstraction.
There are three characteristics that serve as a guide for the evaluation of a good design:
1. The design must implement all the explicit requirements contained in the requirements model,
and it must accommodate all the implicit requirements desired by stakeholders.
2. The design must be a readable, understandable guide for those who generate code and for those
who test and subsequently support the software.
3. The design should provide a complete picture of the software, addressing the data, functional and
behavioral domains for implementation.
Design Quality Guidelines
1. A design should exhibit an architecture that a) has been created using recognizable architectural
styles or patterns, b) is composed of components that exhibit good design characteristics and c)
can be implemented in an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular, i.e. the software should be logically partitioned into elements or
subsystems.
3. A design should contain distinct representations of data, architecture, interfaces and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and
are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained
during software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.
Design Concepts
A set of fundamental software design concepts has evolved over the history of software engineering.
Although the degree of interest in each concept has varied over the years, each has stood the test of
time.
Following are the software concepts that span both traditional and object- oriented software
development.
Abstraction :
At the highest level of abstraction, a solution is stated in broad terms using the language of the
problem environment. At lower levels of abstraction, a more detailed description of the solution is
provided.
As we move through different levels of abstraction, we work to create procedural and data
abstractions. A procedural abstraction refers to a sequence of instructions that have a specific and
limited function. An example of a procedural abstraction would be the word open for a door.
A data abstraction is a named collection of data that describes a data object. In the context of the
procedural abstraction open, we can define a data abstraction called door. Like any data object, the
data abstraction for door would encompass a set of attributes that describe the door (e.g. door type,
swing direction, weight, etc.).
Information Hiding :
• Information hiding is about controlled interfaces. Modules should be specified and designed so that the information (algorithms and data) contained within a module is inaccessible to other modules that have no need for such information.
• Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software function.
• The use of Information Hiding as a design criterion for modular systems provides the greatest
benefits when modifications are required during testing and later, during software maintenance.
Because most data and procedures are hidden from other parts of the software, inadvertent errors
introduced during modifications are less likely to propagate to other location within the software.
Structure:
• The complete structure of the software is known as software architecture.
• Structure provides conceptual integrity for a system in a number of ways.
• The architecture is the structure of program modules where they interact with each other in a
specialized way.
• The components use the structure of data.
• The aim of the software design is to obtain an architectural framework of a system.
• The more detailed design activities are conducted from the framework.

Modularity :
• Software architecture and design patterns embody modularity i.e. software is divided into
separately named and addressable components, sometimes called modules that are integrated to
satisfy problem requirements.
• Monolithic software i.e. large program composed of a single module cannot be easily grasped by
a software engineer. The number of control paths, span of reference, number of variables, and
overall complexity would make understanding close to impossible.
• It is the compartmentalization of data and function. It is easier to solve a complex problem when
you break it into manageable pieces. “Divide-and-Conquer”
• Don’t over-modularize. The simplicity of each small module will be overshadowed by the
complexity of Integration Cost.

Functional Independence :

The concept of functional Independence is a direct outgrowth of modularity and the concepts of
abstraction and information hiding.
Design software such that each module addresses a specific sub-function of requirements and has a
simple interface when viewed from other parts of the program structure. Functional independence is
a key to good design, and to software quality.
Independence is assessed using two qualitative criteria: cohesion and coupling.
Cohesion is an indication of the relative functional strength of a module.
Coupling is an indication of the relative interdependence among modules.
Coupling is a qualitative indication of the degree to which a module is connected to other modules and to the outside world; it should be kept as low as possible.
3.4.Design Notations
Data Flow Diagram(DFD)
A Data Flow Diagram (DFD) is a graphical representation that depicts the information flow and the
processes used for transformation as the data moves from input to output.
Use
• The data flow diagram may be used to represent a system or software at any level of abstraction.
• DFD provides a mechanism for functional modeling as well as information flow modeling.
• A DFD shows what kinds of data will be input to and output from the system, where the data will
come from and go to, and where the data will be stored.
• It does not provide information about the timing of processes, or information about whether
processes will operate in sequence or in parallel (which is shown in a flowchart).
Standard Notations

• A circle (bubble) represents a process or transformation which is applied to data (or control).
• The double line represents a data store - information that is used by the software.
• An arrow represents one or more data items (data objects). All arrows on a data flow diagram
should be labeled.
Rules followed for preparing a Data Flow Diagram.
The level 0 Data Flow Diagram (Context Diagram) should depict the software/system as a single
bubble. For any application before drawing the detailed DFD, Context Diagram should be drawn.
1. Primary input and output should be carefully noted.


2. All arrows and bubbles should be labeled with meaningful names.
3. Information flow continuity must be maintained from level-to-level.
4. One bubble at a time should be refined.
5. Refinement should begin by isolating candidate processes, data objects, and data stores to be
represented at the next level.
Safe Home Application
SafeHome software enables the homeowner to configure the security system when it is installed,
monitors all sensors connected to the security system, and interacts with the homeowner through a
keypad and function keys contained in the SafeHome control panel.
During installation, the SafeHome control panel is used to “program” and configure the system. Each
sensor is assigned a number and type, a master password is programmed for arming and disarming
the system, and telephone numbers are input for dialing when a sensor event occurs.
When a sensor event is recognized, the software invokes an audible alarm attached to the system. After a delay time that is specified by the homeowner during system configuration activities, the software dials the telephone number of a monitoring service, provides information about the location, and reports the nature of the event that has been detected. The telephone number will be redialed every 20 seconds until a telephone connection is obtained. All interaction with SafeHome is managed by a user interaction subsystem that reads input provided through the keypad and function keys and displays prompting messages on the LCD display.

Level 0 DFD for the SafeHome security function
Level 1 DFD for the SafeHome security function

Level 2 DFD that refines the monitor sensors transform
Examples:
DFD for Bank Account

(Diagram: the account holder submits a cheque/withdrawal slip to a Verify Account process, which checks the current balance held in the Account Master store; the validated amount flows to a Debit Account process, which writes the new balance back to Account Master and returns a withdrawal acknowledgement to the account holder.)
DFD for Payroll System

Structured Flowcharts
There are many complex design methodologies for implementing large hardware and software
projects such as a new corporate database or operating system. Projects implemented in embedded
control, however, usually require much less code and thus need an appropriate design level. The
technique discussed in this document is a top down, structured flowchart methodology.
Basic Blocks
The basic elements of a flowchart are shown in Figure 1. The START block represents the beginning of a process. It always has exactly one output. The START block is labeled with a brief description of the process carried out by the flowchart that follows. The END block represents the end of a process. It always has exactly one input and generally contains either END or RETURN, depending on its function in the overall process of the flowchart.

Figure 1: Basic Flowchart Blocks


A PROCESS block represents some operation carried out on an element of data. It contains a brief
descriptive label describing the process being carried out on the data. It may itself be further broken
down into simpler steps by another complete flowchart representing that process. If it is broken down
further, the flowchart that represents the process will have the same label in the start block as the
description in the process block at the higher level. A process always has exactly one input and one
output.
A DECISION block always makes a binary choice. The label in a decision block should be a
question that clearly has only two possible answers. The decision block will have exactly one input
and two outputs. The two outputs will be labeled with the two answers to the question in order to
show the direction of the logic flow depending upon the decision made.
On-page and off-page CONNECTORS may also appear in some flowcharts. For this document we
will restrict ourselves to flowcharts that can be represented on a single page.
Basic Structures
A structured flowchart is one in which all of the processes and decisions must fit into one of a few
basic structured elements. The basic elements of a structured flowchart are shown in Figure. It
should be possible to take any structured flowchart and enclose all of the blocks within one of the
following process structures. Note that each of the structures shown below has exactly one input and
one output. Thus the structure itself can be represented by a single process block.

The SEQUENCE process is just a series of processes carried out one after the other. Most programs are represented at the highest level by a SEQUENCE, possibly with a loop from the end back to the beginning.
Basic Structures
• The IF-THEN-ELSE process logically completes the binary decision block by providing two separate processes. One of the processes will be carried out in each path from the binary decision.
• The WHILE process allows for the representation of a conditional loop structure within a
program. The decision to execute the process in the loop is made prior to the first execution of
the process.
Derived Structures
Although all flowcharts can be represented by the above basic structures, it is sometimes useful to
employ some additional structures, each of which can themselves be constructed from the above
structures. These derived structures are shown in Figure.

The DO-WHILE structure differs from the WHILE structure in that the process contained within the
loop is always executed at least one time. This is equivalent to performing the process once before
going into a WHILE loop. In the WHILE structure the process may never be executed. Although the
WHILE structure is preferred, the DO-WHILE structure is sometimes more intuitive.
Similarly, the CASE structure is useful in representing a series of IF-THEN-ELSE statements where
there are more than two choices to be made. Hence the DECISION blocks are identical except for the
choice being compared. For example, the DECISION could be ‘is the color of the sock …’ Each
DECISION block would then have a different color as the choice. The true result always flows to the
right, with the false result flowing into the next DECISION block. There will always be one less
DECISION block than the number of choices.
Decision Tree
• Decision tree is the most powerful and popular tool for classification and prediction.
• A Decision tree is a flowchart like tree structure, where each internal node denotes a test on an
attribute, each branch represents an outcome of the test, and each leaf node (terminal node) holds
a class label.
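A decision tree maps naturally onto nested conditional code. The sketch below is a hypothetical Python example; the attributes (outlook, windy) and the class labels are invented purely for illustration:

def classify_play(outlook: str, windy: bool) -> str:
    """A tiny hand-built decision tree: internal nodes test attributes,
    branches are the test outcomes, and each return is a leaf (class label)."""
    if outlook == "sunny":        # internal node: test on the 'outlook' attribute
        return "play"             # leaf node holding a class label
    if outlook == "rainy":
        if windy:                 # internal node: test on the 'windy' attribute
            return "do not play"
        return "play"
    return "play"                 # any other outlook, e.g. 'overcast'

print(classify_play("rainy", windy=True))    # -> do not play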
Advantages of Decision Tree


• Decision Trees are easy to understand and interpret.
• With the graphical representation of decision trees, a non-specialist can understand what is taking
place.
• It can help provide essential insights even with little complex data.
• It helps determine quickly the best, worst, and expected values for various conditions.
• Decision trees can be quickly combined with other decision analysis techniques.
Disadvantages of Decision Tree
• Given its structure, a small change anywhere may lead to cascading effects on a large section of
the decision tree.
• Decision trees are also considered relatively inaccurate in comparison to other techniques.
• Certain calculations can get pretty complex, particularly when many outcomes are linked.
Decision Tables
A decision table is a brief visual representation for specifying which actions to perform depending on
given conditions. The information represented in decision tables can also be represented as decision
trees or in a programming language using if-then-else and switch-case statements.
A decision table is a good way to settle with different combination inputs with their corresponding
outputs and is also called a cause-effect table. The reason to call cause-effect table is a related logical
diagramming technique called cause-effect graphing that is basically used to obtain the decision
table.
Importance of Decision Table:
• Decision tables are very much helpful in test design techniques.
• It helps testers to search the effects of combinations of different inputs and other software states
that must correctly implement business rules.
• It provides a regular way of stating complex business rules, which is helpful for developers as well as for testers.
• It assists in the development process and helps the developer to do a better job, since testing all combinations of inputs might otherwise be impractical.
• A decision table is basically an outstanding technique used in both testing and requirements
management.
• It is a structured exercise to prepare requirements when dealing with complex business rules.
• It is also used to model complicated logic.
Advantages of Decision Table in Software Testing


• Any complex business flow can be easily converted into the test scenarios & test cases using this
technique.
• Decision tables work iteratively. Therefore, the table created at the first iteration is used as the
input table for the next tables. The iteration is done only if the initial table is not satisfactory.
• Simple to understand and everyone can use this method to design the test scenarios & test cases.
• It provides complete coverage of test cases which help to reduce the rework on writing test
scenarios & test cases.
• These tables guarantee that we consider every possible combination of condition values. This is
known as its completeness property.
Example of Decision Table
• A Decision Table is a tabular representation of inputs versus rules, cases or test conditions. Let’s
take an example and see how to create a decision table for a login screen:

Conditions Rule 1 Rule 2 Rule 3 Rule 4

Username F T F T

Password F F T T

Output E E E H

In the above example,


• T – Correct username/password
• F – Wrong username/password
• E – Error message is displayed
• H – Home screen is displayed
• Case 1 – Username and password both were wrong. The user is shown an error message.
• Case 2 – Username was correct, but the password was wrong. The user is shown an error
message.
• Case 3 – Username was wrong, but the password was correct. The user is shown an error
message.
• Case 4 – Username and password both were correct, and the user is navigated to the homepage.
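The same decision table can be implemented directly in code. The following Python sketch is illustrative only; the function name and the single-letter outputs simply mirror the table above:

def login_outcome(username_ok: bool, password_ok: bool) -> str:
    """Implements the four rules of the login decision table:
    output H (home screen) only when both conditions are T, otherwise E (error)."""
    if username_ok and password_ok:          # Rule 4: T, T
        return "H"                           # home screen is displayed
    return "E"                               # Rules 1-3: error message is displayed

# One test case per rule (column) of the decision table.
assert login_outcome(False, False) == "E"    # Rule 1
assert login_outcome(True, False) == "E"     # Rule 2
assert login_outcome(False, True) == "E"     # Rule 3
assert login_outcome(True, True) == "H"      # Rule 4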
3.5.Testing
Software testing is a planned series of steps that results in the successful construction of software. Testing is an individualistic process, and the types of tests vary with the development approach. It is also a defense against programming errors. To avoid any inherent coding errors, several distinct approaches or philosophies are used; this is called the strategic approach to software testing.
Good Test:
1. A good test has a high probability of finding an error.
2. The tester must understand the software and how it might fail.
3. A good test is not redundant.
4. Testing time is limited; one test should not serve the same purpose as another test.
5. A good test should be the “best of the breed”.


6. Tests that have the highest likelihood of uncovering a whole class of errors should be used.
7. A good test should be neither too simple nor too complex.
8. Each test should be executed separately; combining a series of tests could cause side effects and
mask certain errors.
Purpose of Testing
 Finding defects which may get created by the programmer while developing the software.
 Gaining confidence in and providing information about the level of quality.
 To prevent defects.
 To make sure that the end result meets the business and user requirements.
 To ensure that it satisfies the BRS that is Business Requirement Specification and SRS that is
System Requirement Specifications.
 To gain the confidence of the customers by providing them a quality product
Testing Methods - White- Box Testing
• White-box testing is also called glass-box testing. It is a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases. Using this method, one can derive test cases that
1) Guarantee that all independent paths within a module have been exercised at least once
2) Exercise all logical decisions on their true and false sides
3) Execute all loops at their boundaries and within their operational bounds and
4) Exercise internal data structures to ensure their validity.
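As a small, hedged illustration of points 1 and 2 above, the hypothetical function and tests below choose test cases from the control structure so that both sides of the decision are exercised:

def classify_grade(marks: int) -> str:
    # Two paths through the control structure: marks >= 40 and marks < 40.
    if marks >= 40:
        return "pass"
    return "fail"

# Test cases chosen by looking at the control structure (white-box view):
assert classify_grade(75) == "pass"    # exercises the true side of the decision
assert classify_grade(10) == "fail"    # exercises the false side of the decision
assert classify_grade(40) == "pass"    # exercises the decision at its boundary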
Advantages
1. A side effect of having the knowledge of the source code is beneficial to thorough testing.
2. Optimization of code by revealing hidden errors and being able to remove these possible defects.
3. Gives the programmer introspection because developers carefully describe any new
implementation.
4. Provides traceability of tests from the source, allowing future changes to the software to be easily
captured in changes to the tests.
5. White box tests are easy to automate.
6. White box testing gives clear, engineering-based, rules for when to stop testing.
Disadvantages
1. White-box testing brings complexity to testing because the tester must have knowledge of the
program, including being a programmer. White-box testing requires a programmer with a high-
level of knowledge due to the complexity of the level of testing that needs to be done.
2. On some occasions, it is not realistic to be able to test every single existing condition of the
application and some conditions will be untested.
3. The tests focus on the software as it exists, and missing functionality may not be discovered.
Black-Box Testing
It is also called behavioral testing because it focuses on the functional requirements of the software.
Black-box testing attempts to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures or external database access
4. Initialization and termination errors
5. Behavior and performance errors
Advantages
1. Tests are done from a user’s point of view and will help in exposing discrepancies in the
specification
2. Tester need not know programming languages or how the software has been implemented
3. Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias
4. Test cases can be designed as soon as the specifications are complete
Disadvantages
1. Only a small number of possible inputs can be tested and many program paths will be left
untested
2. Without clear specifications, which is the situation in many projects, test cases will be difficult to
design
3. Tests can be redundant if the software designer/ developer has already run a test case.
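For contrast with the earlier white-box example, a black-box test derives its cases purely from the specification rather than from the code. A minimal sketch, assuming a hypothetical specification "return the absolute value of the argument":

def absolute(x: int) -> int:
    # The implementation is irrelevant to the black-box tester;
    # test cases are chosen only from the specified behaviour.
    return -x if x < 0 else x

# Specification-based test cases: typical values and a boundary value.
assert absolute(7) == 7      # positive input
assert absolute(-7) == 7     # negative input
assert absolute(0) == 0      # boundary value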
Differentiate between Black Box and White box testing
Criteria                  Black Box Testing                                   White Box Testing
Definition                Testing method in which the internal structure /    Testing method in which the internal structure /
                          design / implementation of the item being tested    design / implementation of the item being tested
                          is NOT known to the tester.                         is known to the tester.
Levels applicable to      Mainly higher levels of testing:                    Mainly lower levels of testing:
                          Acceptance testing, System testing                  Unit testing, Integration testing
Responsibility            Generally independent software testers              Generally software developers
Programming knowledge     Not required                                        Required
Implementation knowledge  Not required                                        Required
Basis for test cases      Requirement specifications                          Detail design
Testing Strategies
A software process may be viewed as the spiral illustrated in the diagram below. Initially, system engineering defines the role of software and leads to software requirements analysis, in which the information domain, function, behavior, performance, constraints and validation criteria for software are established.

 A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in source code.
 The testing progresses by moving outward along the Spiral to integration testing. In this the
focus is on design and the construction of the software architecture.
 Taking another turn outward on the Spiral, validation testing is a part where requirements
established as part of requirements modeling which are validated against the software that has
been constructed.
 Finally, there is system testing, where the software and other system elements are tested as a whole.

 Considering the process from a procedural point of view, testing within the context of software
engineering is actually a series of four steps that are implemented sequentially.
 Initially, the tests focus on each component individually, ensuring that they function properly as a
unit.
 Unit testing makes heavy use of testing techniques that exercise specific paths in a component’s
control structure to ensure complete coverage and maximum error detection. Integration testing
addresses the issues associated with the dual problems of verification and program construction.
 Test case design techniques that focus on inputs and outputs are more prevalent during
integration, although techniques that exercise specific program paths may be used to ensure
coverage of major control paths.
 After the software has been integrated, a set of high-order tests is conducted. Validation testing
provides final assurance that software meets all informational, functional, behavioral and
performance requirements.
 Software once validated, must be combined with other system elements. System testing verifies
that all elements mesh properly and that the overall system functions are achieved.
Unit Testing
 A unit is the smallest testable part of a software system; it may include code files, classes and methods, which can be tested individually for correctness.
 Unit testing is the process of validating such small building blocks of a complex system, well before testing an integrated large module or the system as a whole.
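A minimal sketch of a unit test, assuming Python's built-in unittest framework and a hypothetical add() function as the unit under test:

import unittest

def add(a, b):
    """The 'unit' under test: the smallest testable building block."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()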
3.6.Test Documentation
 Test documentation is documentation of artifacts created before or during the testing of software.
It helps the testing team to estimate testing effort needed, test coverage, resource tracking,
execution progress, etc.
 It is a complete suite of documents that allows you to describe and document test planning, test
design, test execution, test results that are drawn from the testing activity
Test Plan
 It is a document that is prepared by the managers or test lead. It consists of all information about
the testing activities. The test plan consists of multiple components such as Objectives, Scope,
Approach, Test Environments, Test methodology, Template, Role & Responsibility, Effort
estimation, Entry and Exit criteria, Schedule, Tools, Defect tracking, Test Deliverable,
Assumption, Risk, and Mitigation Plan or Contingency Plan.
 The Level of Test Plan defines what the test plan is being created for e.g. subsections of testing:
Integration, Unit, Acceptance
 A Test Plan document will follow the same structure for each level of test plan. The only
difference being the content and detail.
 A hierarchy of test plans will exist, headed by a Master Test Plan for the overall testing effort, with lower-level plans for individual test levels.
 Note: all test plans in the hierarchy must agree with one another.
Test Plans follow a strict structure to ensure all aspects of testing are covered.
1. Plan Identifier
2. Test Items
3. Risk Issues
4. Features to be Tested
5. Features not to be Tested
6. Test Approach
7. Pass/Fail Criteria
8. Suspension Criteria
9. Test Deliverables
10. Environmental Requirements
11. Staffing/Training Needs
12. Schedule of Test
13. Planning for Risks
14. Approvals

Test Case
 It is a detailed document that describes the step-by-step procedure to test an application. It consists of the complete navigation steps, the inputs, and all the scenarios that need to be tested for the application. Test cases are written to maintain consistency, so that every tester follows the same approach for organizing the test document.
Defect Report
 Defect report is a documented report of any flaw in a Software System which fails to perform its
expected function.
Test Summary Report
 Test summary report is a high-level document which summarizes testing activities conducted as
well as the test result.
 Test report is an assessment of how well the Testing is performed. Based on the test report,
stakeholders can evaluate the quality of the tested product and make a decision on the software
release.
 For example, if the test report informs that there are many defects remaining in the product,
stakeholders can delay the release until all the defects are fixed.
Question Bank
1. Write a note on data modelling.
2. Define data objects, attributes and relationship with examples.
3. Define Cardinality and Modality with examples
4. State three objectives of analysis Model.
5. Write a detailed note on Elements of analysis Model using a schematic diagram.
6. Explain the Design Process and three characteristics of a good design.
7. Enlist six guidelines for a quality design.
8. In Design Modeling discuss Abstraction, Architecture, Patterns, Modularity, Information Hiding,
Functional Independence.
9. Explain DFD with symbols and their meaning
10. Enlist rules for drawing a DFD
11. Draw a DFD for Library Book Issue and Return System
12. Write a note on Structured Flow Charts.
13. Explain Decision Tree with an example
14. Explain Decision Table with an example
Chapter 4 Software Project Estimation


Introduction to Software Project Management & its need
A project is a temporary sequence of unique, complex and connected activities that have a goal or purpose and must be completed within a specific time and budget, and according to specifications.
Management is the art of getting things done through and with a team of individuals in formally
organized groups. Project Manager is a person responsible for supervising the project development
from start to completion.
Software project management includes basic functions such as scoping, planning, estimating, scheduling, organizing, directing, coordinating, controlling and closing. Effective software project management focuses on the four P’s, viz. People, Product, Process and Project.
4.1.The Management Spectrum – the 4 P’s and their Significance
The People:
The “people factor” is so important that the software engineering institute has developed a people
management capability maturity model (PM-CMM), “to enhance the readiness of software
organizations to undertake increasingly complex applications by helping to attract, grow, motivate,
deploy, and retain the talent needed to improve their software development capability”.
The people management maturity model defines the following key practice areas for software
people: recruiting, selection, performance management, training, compensation, career development,
organization and work design, and team/culture development. The organizations that achieve high
levels of PM-CMM have higher likelihood of implementing effective software management.
The software process is populated by stakeholders who can be categorized into one of five constituencies, viz. senior managers, project (technical) managers, practitioners, customers, and end-users.
The Product :
Before a project can be planned, product objectives and scope should be established, alternative solutions should be considered, and technical and management constraints should be identified. Without this information, it is impossible to define reasonable and accurate estimates of the cost, an effective assessment of risk, a realistic breakdown of project tasks, or a manageable project schedule that provides a meaningful indication of progress.
The software developers and customer must meet to define product objectives and scope. Objectives
identify the overall goals for the product (from customer’s point of view) without considering how
the goal will be achieved. Scope identifies the primary data, function and behaviors that characterize
the product, and more importantly, attempt to bind these characteristics in a quantitative manner.
We must examine the product and the problem intended to solve at the very beginning of the project.
For this reason, the scope of the product must be established and bounded. The first software project
management activity is the determination of software scope. Software project scope must be
unambiguous and understandable at the management and technical levels. Problem Decomposition,
sometime called partitioning or problem elaboration is an activity that sits at the core of software
require analysis.
The Process :
A software process provides the framework from which a comprehensive plan for software development can be established. A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
The job of the project manager is to estimate resource requirements for each matrix cell, start and end dates for the tasks associated with each cell, and the work products to be produced as a consequence of each task.
Process decomposition commences when the project manager asks, “How do we accomplish this framework activity?” For example, a small, relatively simple project and a more complex project with a broader scope and more significant business impact require different work tasks for the communication activity.
The Project:
“A project is like a road trip. Some projects are simple and routine, like driving to the store in broad
daylight. But most projects worth doing are more like driving a truck off-road, in the mountains, at
night.” - Cem Kaner, James Bach, and Bret Pettichord
To avoid project failure, a software project manager and the software engineers who build the project
must heed a set of common warning signals, understand the critical success factors that lead to good
project management, and develop a common sense approach for planning, monitoring and
controlling the project.
Reel suggests a five-part common sense approach to software projects:
1. Start on the right foot
2. Maintain momentum
3. Track progress
4. Make smart decisions
5. Conduct a postmortem analysis
Decomposition is applied into two major areas:
1. The functionality that must be delivered and
2. The process that will be used to deliver it.
4.2. Metrics for Size Estimation
Accurate estimation of the problem size is fundamental to satisfactory estimation of effort, time
duration and cost of a software project. In order to be able to accurately estimate the project size,
some important metrics should be defined in terms of which the project size can be expressed. The
project size is a measure of the problem complexity in terms of the effort and time required to
develop the product.
Currently two metrics are popularly being used widely to estimate size: lines of code (LOC) and
function point (FP). The usage of each of these metrics in project size estimation has its own
advantages and disadvantages.
Lines of Code (LOC)
LOC is the simplest among all metrics available to estimate project size. This metric is very popular
because it is the simplest to use. Using this metric, the project size is estimated by counting the
number of source instructions in the developed program. Obviously, while counting the number of
source instructions, lines used for commenting the code and the header lines should be ignored.
Determining the LOC count at the end of a project is a very simple job. However, accurate
estimation of the LOC count at the beginning of a project is very difficult. In order to estimate the
LOC count at the beginning of a project, project managers usually divide the problem into modules,
and each module into submodules and so on, until the sizes of the different leaf-level modules can
be approximately predicted.
To be able to do this, past experience in developing similar products is helpful. By using the
estimation of the lowest level modules, project managers arrive at the total size estimation.
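A rough sketch of how a LOC count could be automated for finished code, ignoring blank lines and comment lines as described above; the '#' comment convention and the module names in the usage note are assumptions made for this sketch:

def count_loc(path: str) -> int:
    """Count source lines of code, ignoring blank lines and whole-line comments."""
    loc = 0
    with open(path, encoding="utf-8") as source:
        for line in source:
            stripped = line.strip()
            if not stripped:                 # blank line
                continue
            if stripped.startswith("#"):     # comment / header line
                continue
            loc += 1
    return loc

# Total size estimate = sum of the LOC of the leaf-level modules, e.g.:
# total_loc = count_loc("billing.py") + count_loc("reports.py")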
Function point (FP)
• Function point metrics provide a standardized method for measuring the various functions of a
software application.
• Function point metrics, measure functionality from the user’s point of view, that is, on the basis of
what the user requests and receives in return.
Information domain values:
• Number of user inputs – Distinct input from user
• Number of user outputs – Reports, screens, error messages, etc.
• Number of user inquiries – On line input that generates some result
• Number of files – Logical file (database)
• Number of external interfaces – Data files/connections as interface to other systems
Formula to count FP is
• FP = Total Count * [0.65 + 0.01*Σ(Fi)]
Where Total Count is the sum of all information domain counts, each multiplied by a weighting factor that is determined for the organization from empirical data, and Fi (i = 1 to 14) are the complexity (value) adjustment factors.

5.1 Project Scheduling


 Concept of Project Scheduling
Software project scheduling is an action that distributes estimated effort across the planned project
duration by allocating the effort to specific software engineering tasks. During early stages of project
planning, a macroscopic schedule is developed. This type of schedule identifies all major process
framework activities and the product function to which they are applied. As the project gets under
way, each entry on the macroscopic schedule is refined into a detailed schedule. Here, specific
software actions and tasks are identified and scheduled.

Value Adjustment Factors (Fi):


 F1. Data Communication
 F2. Distributed Data Processing
 F3. Performance
 F4. Heavily Used Configuration
 F5. Transaction Rate
 F6. On-line Data Entry
 F7. End-user Efficiency
 F8. On-line Update
 F9. Complex Processing
 F10. Reusability
 F11. Installation Ease
 F12. Operational Ease
 F13. Multiple Sites
 F14. Facilitate Change
 Example: a simple FP calculation
 inputs
 3 simple X 3 = 9
 4 average X 4 = 16
 1 complex X 6 = 6
 outputs
 6 average X 5 = 30
 2 complex X 7 = 14
 files
 5 complex X 15 = 75
 inquiries
 8 average X 4 = 32
 interfaces
 3 average X 7 = 21
 4 complex X 10 = 40
 Unadjusted function points = 243
 F09. Complex internal processing = 3
 F10. Code to be reusable = 2
 F03. High performance = 4
 F13. Multiple sites = 3
 F02. Distributed processing = 5
 Project adjustment factor = 17
 Adjustment calculation:
 Adjusted FP = Unadjusted FP X [0.65 + (adjustment factor X 0.01)]
 = 243 X [0.65 + ( 17 X 0.01)]
 = 243 X [0.82]
 = 199.26 Adjusted function points
Shortcomings of the Function Point Metric


A major shortcoming of the function point measure is that it does not take into account the
algorithmic complexity of a software. That is, the function point metric implicitly assumes that the
effort required to design and develop any two functionalities of the system is the same.
But we know that this is normally not true; the effort required to develop any two functionalities may vary widely. The metric only takes into consideration the number of functions that the system supports, without distinguishing the difficulty level of developing the various functionalities.
To overcome this problem, an extension of the function point metric called feature point
metric is proposed.
4.3. Project Cost Estimation Techniques
Estimation of various project parameters is a basic project planning activity. The important project
parameters that are estimated include: project size, effort required to develop the software, project
duration, and cost. These estimates not only help in quoting the project cost to the customer, but are
also useful in resource planning and scheduling. There are three broad categories of estimation
Techniques:
• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques
Empirical Estimation Techniques
Empirical estimation techniques are based on making an educated guess of the project parameters.
While using this technique, prior experience with development of similar products is helpful.
Although empirical estimation techniques are based on common sense, different activities involved
in estimation have been formalized over the years. Two popular empirical estimation techniques are:
Expert judgment technique and Delphi cost estimation.
Expert Judgment Technique
Expert judgment is one of the most widely used estimation techniques. In this approach, an expert
makes an educated guess of the problem size after analyzing the problem thoroughly. Usually, the
expert estimates the cost of the different components (i.e. modules or subsystems) of the system and
then combines them to arrive at the overall estimate.
However, this technique is subject to human errors and individual bias. Also, it is possible that the
expert may overlook some factors inadvertently. Further, an expert making an estimate may not have
experience and knowledge of all aspects of a project. For example, he may be conversant with the
database and user interface parts but may not be very knowledgeable about the computer
communication part.
A more refined form of expert judgment is the estimation made by group of experts. Estimation by a
group of experts minimizes factors such as individual oversight, lack of familiarity with a particular
aspect of a project, personal bias, and the desire to win contract through overly optimistic estimates.
However, the estimate made by a group of experts may still exhibit bias on issues where the entire
group of experts may be biased due to reasons such as political considerations. Also, the decision
made by the group may be dominated by overly assertive members.
Delphi Cost Estimation


Delphi cost estimation approach tries to overcome some of the shortcomings of the expert judgment
approach. Delphi estimation is carried out by a team comprising a group of experts and a
coordinator. In this approach, the coordinator provides each estimator with a copy of the software
requirements specification (SRS) document and a form for recording his cost estimate.
Estimators complete their individual estimates anonymously and submit to the coordinator.
In their estimates, the estimators mention any unusual characteristic of the product which has
influenced his estimation.
The coordinator prepares and distributes the summary of the responses of all the estimators, and
includes any unusual rationale noted by any of the estimators.
Based on this summary, the estimators re-estimate. This process is iterated for several rounds.
However, no discussion among the estimators is allowed during the entire estimation process. The
idea behind this is that if any discussion is allowed among the estimators, then many estimators may
easily get influenced by the rationale of an estimator who may be more experienced or senior.
After the completion of several iterations of estimations, the coordinator takes the responsibility of
compiling the results and preparing the final estimate.
Heuristic Techniques:
Heuristic techniques assume that the relationships among the different project parameters can be
modelled using suitable mathematical expressions. Once the basic (independent) parameters are
known, the other (dependent) parameters can be easily determined by substituting the value of the
basic parameters in the mathematical expression.
Different heuristic estimation models can be divided into the following two classes: single variable
model and the multi variable model.
Single variable estimation models provide a means to estimate the desired characteristics of a problem, using some previously estimated basic (independent) characteristic of the software product such as its size. A single variable estimation model takes the following form:
Estimated Parameter = c1 * e^d1
In the above expression, e is the characteristic of the software which has already been estimated (the independent variable), and Estimated Parameter is the dependent parameter to be estimated, which could be effort, project duration, staff size, etc. The constants c1 and d1 are usually determined using data collected from past projects (historical data). The basic COCOMO model is an example of a single variable cost estimation model.
A multivariable cost estimation model takes the following form:
Estimated Resource = c1 * e1^d1 + c2 * e2^d2 + ...
where e1, e2, ... are the basic (independent) characteristics of the software already estimated, and c1, c2, d1, d2, ... are constants.
Values of these constants are usually determined from historical data. The intermediate COCOMO
model can be considered to be an example of a multivariable estimation model.
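A hedged numerical sketch of a single variable model in Python. The constants c1 = 2.4 and d1 = 1.05 are the commonly quoted basic COCOMO coefficients for organic projects and are used here only as illustrative values of c1 and d1:

def single_variable_estimate(size_kloc: float, c1: float = 2.4, d1: float = 1.05) -> float:
    """Estimated Parameter = c1 * e^d1, where e is the already-estimated size in KLOC
    and the estimated parameter here is effort in person-months."""
    return c1 * (size_kloc ** d1)

print(round(single_variable_estimate(32.0), 1))    # about 91.3 person-months for a 32 KLOC product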
Analytical Estimation Techniques


Analytical estimation techniques derive the required results starting with basic assumptions
regarding the project. Thus, unlike empirical and heuristic techniques, analytical techniques have a scientific basis. Halstead’s software science is an example of an analytical technique.
Halstead’s software science can be used to derive some interesting results starting with a few simple
assumptions. Halstead’s software science is especially useful for estimating software maintenance
efforts. In fact, it outperforms both empirical and heuristic techniques when used for predicting
software maintenance efforts.
Halstead’s Software Science – An Analytical Technique
For a given program, let:
• η1 be the number of unique operators used in the program,
• η2 be the number of unique operands used in the program,
• N1 be the total number of operators used in the program,
• N2 be the total number of operands used in the program.
• Length and Vocabulary
• The length of a program, as defined by Halstead, quantifies the total usage of all operators and operands in the program. Thus, length N = N1 + N2. Halstead's definition of the length of a program as the total number of operators and operands roughly agrees with the intuitive notion of program length as the total number of tokens used in the program.
• The program vocabulary is the number of unique operators and operands used in the program.
Thus, program vocabulary η = η1+ η2 .
• Program Volume
The length of a program (i.e. the total number of operators and operands used in the code) depends
on the choice of the operators and operands used. In other words, for the same programming
problem, the length would depend on the programming style.
This type of dependency would produce different measures of length for essentially the same
problem when different programming languages are used. Thus, while expressing program size, the
programming language used must be taken into consideration:
V = N log2(η)
Here the program volume V is the minimum number of bits needed to encode the program.
For example, consider a simple C program that reads three numbers, computes their average, and prints it.
The unique operators are:
main, (), {}, int, scanf, &, ",", ";", =, +, /, printf
The unique operands are:
a, b, c, &a, &b, &c, a+b+c, avg, 3, "%d %d %d", "avg = %d"
Therefore,
η1 = 12, η2 = 11
Halstead also showed that the length of a program can be estimated from the vocabulary alone as N^ = η1 log2(η1) + η2 log2(η2). Hence,
Estimated Length = (12 * log2(12) + 11 * log2(11)) = (12 * 3.58 + 11 * 3.45) = (43 + 38) = 81
Volume = Estimated Length * log2(23) = 81 * 4.52 = 366
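The same calculation can be reproduced with a short Python sketch using the counts from the example above; because the hand calculation rounds the log values, the printed volume may differ by one:

import math

eta1, eta2 = 12, 11                                              # unique operators, unique operands
vocabulary = eta1 + eta2                                         # program vocabulary
est_length = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)     # estimated length
volume = est_length * math.log2(vocabulary)                      # volume from the estimated length

print(round(est_length))   # 81
print(round(volume))       # 367 (the rounded hand calculation above gives 366)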
4.4. COCOMO Model (Constructive Cost Model)
• As with all estimation models, COCOMO requires sizing information and accepts it in three forms: object points, function points, and lines of source code
• Application composition model - Used during the early stages of software engineering when the
following are important
– Prototyping of user interfaces
– Consideration of software and system interaction
– Assessment of performance
– Evaluation of technology maturity
• Early design stage model – Used once requirements have been stabilized and basic software
architecture has been established
• Post-architecture stage model – Used during the construction of the software
Organic, Semidetached and Embedded software projects
• Organic: A development project can be considered of organic type, if the project deals with
developing a well understood application program, the size of the development team is reasonably
small, and the team members are experienced in developing similar types of projects.
• Semidetached: A development project can be considered of semidetached type, if the development
consists of a mixture of experienced and inexperienced staff. Team members may have limited
experience on related systems but may be unfamiliar with some aspects of the system being
developed.
• Embedded: A development project is considered to be of embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.
The basic COCOMO model gives an approximate estimate of the project parameters. The basic
COCOMO estimation model is given by following expressions:
Effort = a1 x (KLOC)^a2 PM (Person-Months)
Time of Development = b1 x (Effort)^b2 Months
Where, a1,a2,b1,b2 are constants for each category of software products.
Estimation of Effort
Organic: Effort = 2.4 (KLOC)^1.05 PM
Semi-detached: Effort = 3.0 (KLOC)^1.12 PM
Embedded: Effort = 3.6 (KLOC)^1.20 PM
Estimation of Time of Development
Organic: Time of Development = 2.5 (Effort)^0.38 Months
Semi-detached: Time of Development = 2.5 (Effort)^0.35 Months
Embedded: Time of Development = 2.5 (Effort)^0.32 Months
Example:
Assume that the size of an organic software product has been estimated to be 32,000 lines of source code. Assume that the average salary of a software engineer is Rs. 15,000 per month. Determine the effort required to develop the software product and the nominal development time.
Effort = 2.4 x (32)^1.05 = 91 PM
Time of development = 2.5 x (91)^0.38 = 14 months
Cost = 14 x 15,000 = Rs. 2,10,000/-
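The worked example above can be reproduced with a short Python sketch using the standard basic COCOMO constants listed earlier (the salary figure is the one assumed in the example):

# Basic COCOMO: effort and nominal development time for the three project classes.
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * (kloc ** a2)     # person-months
    tdev = b1 * (effort ** b2)     # months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(round(effort), "PM")                    # ~91 PM
print(round(tdev), "months")                  # ~14 months
print("Cost = Rs.", round(tdev) * 15000)      # 14 x 15,000 = Rs. 2,10,000, as in the example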
4.5. Risk Management
What is a software risk?
Risk refers to the uncertainties related to future happenings with the project. A risk is any uncertain
event that may or may not happen, which will impact the project.
This means the risks can be predicted and brought within control. However, when a risk becomes a reality, it leads to unwanted consequences and possibly losses.
Risk Identification
The risk strategy may be proactive or reactive
Reactive Risk Strategy
A reactive risk strategy monitors the project for likely risks. It can be called the fire-fighting mode or the crisis management mode.
The team gets into action in an attempt to correct the problem rapidly
Proactive Risk Strategy
A proactive strategy begins long before technical work is initiated. Potential risks are identified, their
probability and impact are assessed, priorities are ranked and a plan is established to avoid such
risks.
This is called a contingency plan, which will enable the team to respond in a controlled and effective manner.
Project Risk
Project risks threaten the project plan. That is, if project risks become real, it is likely that the project
schedule will slip and that costs will increase.
Project risks identify potential budgetary, schedule, personnel (staffing and organization), resource,
stakeholder, and requirements problems and their impact on a software project.
Technical Risks
Technical risks threaten the quality and timeliness of the software to be produced. If a technical risk
becomes reality, implementation may become difficult or impossible.
Technical risks identify potential design, implementation, interface, verification and maintenance
problems.
In addition, specification ambiguity, technical uncertainty, technical obsolescence and “leading-
edge” technology are also risk factors.
Business Risk
Business risks threaten the viability of the software to be built and often jeopardize the project or the product. Candidates for the top five business risks are:
(1) building an excellent product or system that no one really wants (market risk),
(2) building a product that no longer fits into the overall business strategy for the company (strategic risk),
(3) building a product that the sales force does not understand how to sell (sales risk),
(4) losing the support of senior management due to a change in focus or a change in people (management risk), and
(5) losing budgetary or personnel commitment (budget risk).
Risk Assessment
Risk Identification
Risk identification is a systematic attempt to specify threats to the project plan. By identifying known
and predictable risks, the project manager takes a first step toward avoiding them when possible and
controlling them when necessary.
There are two distinct types of risks for each of the categories:
Generic risks and Product-specific risks.
Generic risks are a potential threat to every software project.
Product-specific risks can be identified only by those with a clear understanding of the technology, the people, and the environment that is specific to the software that is to be built.
One method for identifying risks is to create a risk item checklist. The checklist can be used for risk identification and focuses on some subset of known and predictable risks in the following generic subcategories.
Product size – risks associated with the overall size of the software to be built or modified.
Business impact – risks associated with constraints imposed by management or the marketplace.
Stakeholder characteristics – risks associated with the sophistication of the stakeholders and the
developer’s ability to communicate with stakeholders in a timely manner.
Process definition - Risks associated with the degree to which the software process has been defined
and is followed by the development organization.
Development environment- risks associated with the availability and quality of the tools to be used
to build the product.
Technology to be built – risks associated with the complexity of the system to be built and the
“newness” of the technology that is packaged by the system.
Staff size and experience – risks associated with the overall technical and project experience of the
software engineers who will do the work.
The risk item checklist can be organized in different ways. Questions relevant to each of the topics
can be answered for each software project. The answer to these questions allows you to estimate the
impact of risk.
Risk Analysis
The following questions are to be used for analyzing project risk:
• Have top software and customer managers formally committed to support the project?
• Are end users enthusiastically committed to the project and the system/product to be built?
• Are requirements fully understood by the software engineering team and its customers?
• Have customers been involved fully in the definition of requirements?
• Do end users have realistic expectations?
• Is the project scope stable?
• Does the software engineering team have the right mix of skills?
• Are project requirements stable?
• Does the project team have experience with the technology to be implemented?
• Is the number of people on the project team adequate to do the job?
• Do all customer/user constituencies agree on the importance of the project and on the requirements for the system/product to be built?
• If any one of these questions is answered negatively, mitigation, monitoring, and management steps should be instituted without fail. The degree to which the project is at risk is directly proportional to the number of negative responses to these questions.
• The risk components are defined in the following manner
• Performance risk: The degree of uncertainty that the product will meet its requirements and be
fit for its intended use.
• Cost risk: the degree of uncertainty that the project budget will be maintained.
• Support risk: the degree of uncertainty that the resultant software will be easy to correct, adapt
and enhance.
• Schedule risk: The degree of uncertainty that the project schedule will be maintained and that the
product will be delivered on time.
Impact Assessment
The impact of each risk driver on the risk component is divided into one of four impact categories –
negligible, marginal, critical or catastrophic.
Risk Projection/Prioritization
There are four risk projection steps:
1. Establish a scale that reflects the perceived likelihood of risk
2. Delineate the consequence of the risk
3. Estimate the impact of the risk on the project and the product
4. Assess the overall accuracy of the risk projection so there will be no misunderstandings.
The intent of these steps is to consider risks in a manner that leads to prioritization.
No software team has the resources to address every possible risk with the same degree of rigor. By
prioritizing risks, we can allocate resources where they will have the most impact.
A risk table provides with a simple technique for risk projection.
Sample Risk table prior to sorting
The table lists all risks in the first column. Each risk is categorized in the second column.
The probability of occurrence of each risk is entered in the next column of the table.
The probability value for each risk can be estimated by team members individually. The impact of
each risk is assessed.
The categories for each of the four risk components – performance, support, cost and schedule – are
averaged to determine an overall impact value.
After the four columns are completed, the table is sorted by probability and by impact. High-probability, high-impact risks percolate to the top of the table, and low-probability risks drop to the bottom. This accomplishes first-order risk prioritization.
The cutoff line (drawn horizontally at some point in the table) implies that only risks that lie above
the line will be given further attention. Risks that fall below the line are reevaluated to accomplish
second-order prioritization.
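A first-order prioritization of this kind can be sketched in a few lines of Python (the risks, probabilities, impact values, and cutoff below are all hypothetical):

# Sort a risk table by probability and impact, then apply a cutoff.
# Impact scale: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible.
risks = [
    {"risk": "Size estimate may be significantly low", "category": "PS", "probability": 0.60, "impact": 2},
    {"risk": "Larger number of users than planned",    "category": "PS", "probability": 0.30, "impact": 3},
    {"risk": "End users resist the system",            "category": "BU", "probability": 0.40, "impact": 3},
    {"risk": "Staff turnover will be high",            "category": "ST", "probability": 0.70, "impact": 2},
]

# Highest probability first; for equal probability, the more severe impact (smaller value) first.
risks.sort(key=lambda r: (-r["probability"], r["impact"]))

CUTOFF = 0.40   # hypothetical cutoff: only risks at or above it get further attention
for r in risks:
    status = "manage" if r["probability"] >= CUTOFF else "below cutoff (reevaluate)"
    print(r["probability"], "impact", r["impact"], "-", r["risk"], "-", status)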
Risk and Management Concern
Risk impact and probability have a distinct influence on management concern.
A risk factor that has a high impact but a very low probability of occurrence should not absorb a
significant amount of management time.
However, high-impact risks with moderate to high probability and low-impact risks with high
probability should be carried forward into the risk analysis steps that follow.
All risks that lie above the cutoff line should be managed. The column labeled RMMM contains a
pointer into a risk mitigation, monitoring, and management plan or, alternatively, a collection of risk
information sheets developed for all risks that lie above the cutoff.
Risk probability can be determined by making individual estimates and then developing a single
consensus value.
Although that approach is workable, more sophisticated techniques for determining risk probability
have been developed. Risk drivers can be assessed on a qualitative probability scale that has the
following values: impossible, improbable, probable, and frequent. Mathematical probability can then
be associated with each qualitative value.
Assessing Risk Impact
Three factors affect the consequences that are likely if a risk does occur: its nature, its scope, and its
timing. The nature of the risk indicates the problems that are likely if it occurs.
The overall risk exposure RE is determined using the following relationship
RE = P X C
Where P is the probability of occurrence for a risk, and C is the cost to the project should the risk
occur.
For example: assume that the software team defines a project risk in the following manner:
Risk identification: Only 70 percent of the software components scheduled for reuse will, in fact, be
integrated into the application. The remaining functionality will have to be custom developed.
Risk probability: 80 percent (likely).
Risk impact: Sixty reusable software components were planned. If only 70 percent can be used, 18
components would have to be developed from scratch (in addition to other custom software that has
been scheduled for development). Since the average component is 100 LOC and local data indicate
that the software engineering cost for each LOC is $14.00, the overall cost (impact) to develop the
components would be
18 X 100 X 14 = $25,200.
Risk exposure: RE = 0.80 X 25,200 ~ $20,200
Risk exposure can be computed for each risk in the risk table, once an estimate of the cost of the risk
is made. The total risk exposure for all risks (above the cutoff in the risk table) can provide a means
for adjusting the final cost estimate for a project. It can also be used to predict the probable increase
in staff resources required at various points during the project schedule.
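As a small sketch, the RE calculation for the reuse risk above, and the total exposure over a risk table, look like this (the last two (P, C) pairs in the table are hypothetical):

# Risk exposure: RE = P x C
def risk_exposure(probability, cost):
    return probability * cost

# Reuse risk from the example: 18 components x 100 LOC x $14 per LOC
cost_impact = 18 * 100 * 14                   # $25,200
print(risk_exposure(0.80, cost_impact))       # 20160, i.e. roughly the $20,200 quoted above

# Total exposure for all risks above the cutoff can adjust the final cost estimate.
risk_table = [(0.80, 25200), (0.40, 12000), (0.30, 8000)]    # (P, C) pairs
print(sum(risk_exposure(p, c) for p, c in risk_table))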
Risk Control – Need and RMMM Strategy
During early stages of project planning, a risk may be stated quite generally. As time passes and
more is learned about the project and the risk, it may be possible to refine the risk into a set of more
detailed risks, each somewhat easier to mitigate, monitor, and manage.
One way to do this is to represent the risk in condition-transition-consequence (CTC) format.
Using the CTC format for the reuse risk, we can write:
Given that all reusable software components must conform to specific design standards and that
some do not conform, then there is concern that (possibly) only 70 percent of the planned reusable
modules may actually be integrated into the as-built system, resulting in the need to custom engineer
the remaining 30 percent of components.
This general condition can be refined in the following manner:
Sub condition 1.
Certain reusable components were developed by a third party with no knowledge of internal design
standards.
Sub condition 2.
The design standard for component interfaces has not been solidified and may not conform to certain
existing reusable components.
Sub condition 3.
Certain reusable components have been implemented in a language that is not supported on the target
environment.
The consequences associated with these refined subconditions remain the same (i.e., 30 percent of
software components must be custom engineered), but the refinement helps to isolate the underlying
risks and might lead to easier analysis and response.
All of the risk analysis activities presented to this point have a single goal—to assist the project team
in developing a strategy for dealing with risk. An effective strategy must consider three issues: risk
avoidance, risk monitoring, and risk management and contingency planning.
If a software team adopts a proactive approach to risk, avoidance is always the best strategy. This is
achieved by developing a plan for risk mitigation.
For example,
Assume that high staff turnover is noted as a project risk r1 . Based on past history and management
intuition, the likelihood l1 of high turnover is estimated to be 0.70
(70 percent, rather high) and the impact x1 is projected as critical. That is, high turnover will have a
critical impact on project cost and schedule
• To mitigate this risk, you would develop a strategy for reducing turnover. Among the possible
steps to be taken are:
• Meet with current staff to determine causes for turnover (e.g., poor working conditions, low pay,
competitive job market).
• Mitigate those causes that are under your control before the project starts. Once the project
commences, assume turnover will occur and develop techniques to ensure continuity when people
leave.
• Organize project teams so that information about each development activity is widely dispersed.
• Define work product standards and establish mechanisms to be sure that all models and
documents are developed in a timely manner.
• Conduct peer reviews of all work (so that more than one person is “up to speed”).
• Assign a backup staff member for every critical technologist.
As the project proceeds, risk-monitoring activities commence. The project manager monitors factors
that may provide an indication of whether the risk is becoming more or less likely.
In the case of high staff turnover, the general attitude of team members based on project pressures,
the degree to which the team has jelled, interpersonal relationships among team members, potential
problems with compensation and benefits, and the availability of jobs within the company and
outside it are all monitored.
In addition to monitoring these factors, a project manager should monitor the effectiveness of risk
mitigation steps.
For example, a risk mitigation step noted here called for the definition of work product standards and
mechanisms to be sure that work products are developed in a timely manner. This is one mechanism
for ensuring continuity, should a critical individual leave the project.
The project manager should monitor work products carefully to ensure that each can stand on its own
and that each imparts information that would be necessary if a newcomer were forced to join the
software team somewhere in the middle of the project.
Risk management and contingency planning assumes that mitigation efforts have failed and that the
risk has become a reality. Continuing the example, the project is well under way and a number of
people announce that they will be leaving. If the mitigation strategy has been followed, backup is
available, information is documented, and knowledge has been dispersed across the team.
It is important to note that risk mitigation, monitoring, and management (RMMM) steps incur
additional project cost.
For example, spending the time to back up every critical technologist costs money. Part of risk
management, therefore, is to evaluate when the benefits accrued by the RMMM steps are outweighed
by the costs associated with implementing them.
In essence, you perform a classic cost-benefit analysis. If risk aversion steps for high turnover will
increase both project cost and duration by an estimated 15 percent, but the predominant cost factor is
“backup,” management may decide not to implement this step. On the other hand, if the risk aversion
steps are projected to increase costs by 5 percent and duration by only 3 percent, management will
likely put all into place.
A risk management strategy can be included in the software project plan, or the risk management
steps can be organized into a separate risk mitigation, monitoring, and management plan (RMMM).
The RMMM plan documents all work performed as part of risk analysis and are used by the project
manager as part of the overall project plan.
Some software teams do not develop a formal RMMM document. Rather, each risk is documented
individually using a risk information sheet (RIS). In most cases, the RIS is maintained using a
database system so that creation and information entry, priority ordering, searches, and other analysis
may be accomplished easily.
Once RMMM has been documented and the project has begun, risk mitigation and monitoring steps
commence. Risk mitigation is a problem avoidance activity.
Risk monitoring is a project tracking activity with three primary objectives:
(1)to assess whether predicted risks do, in fact, occur;
(2) to ensure that risk aversion steps defined for the risk are being properly applied;
(3) to collect information that can be used for future risk analysis.
In many cases, the problems that occur during a project can be traced to more than one risk
Question Bank
1. Explain 4 P’s of Software Project Management
2. Write a note on five part common sense approach for software projects development.
3. Explain the Function Point metric.
4. List limitations of the FP metric.
5. Write a note on Project Cost Estimation techniques
6. Enlist and explain in brief three methods of Project Cost estimation.
7. Write a detailed note on COCOMO Model
8. Define Software Risk, Proactive and Reactive Risk Strategy.
9. Write a note on Project Risk, Technical Risk and Business Risk.
10. Define Generic Risk and Product specific Risk.
11. Explain the process of Risk Identification.
12. Write a note on Risk Components, Risk Analysis, Risk Projection.
13. Explain the process of assessing Risk Impact and Risk Control.
14. Describe RMMM Strategy.
Chapter 5 Software Quality Assurance and Security
5.1. Project Scheduling
• Concept of Project Scheduling
Software project scheduling is an action that distributes estimated effort across the planned project
duration by allocating the effort to specific software engineering tasks.
During early stages of project planning, a macroscopic schedule is developed.
This type of schedule identifies all major process framework activities and the product function to
which they are applied. As the project gets under way, each entry on the macroscopic schedule is
refined into a detailed schedule.
Here, specific software actions and tasks are identified and scheduled.
• Factors that delay Project Schedule
Scheduling for software engineering can be viewed from two perspectives.
In first, an end date for release of a computer-based system has already been established. The
software organization is constrained to distribute effort within the prescribed time frame.
The second view of software scheduling assumes that rough chronological bounds have been
discussed but that the end date is set by the software engineering organization. Effort is distributed to
make best use of resources, and an end date is defined after careful analysis of software.
• Principles of Project Scheduling
Compartmentalization
The project must be compartmentalized into a number of manageable activities and tasks. To
accomplish compartmentalization, both the product and the process are refined.
Interdependency
The interdependency of each compartmentalized activity or task must be determined. Some tasks
must occur in sequence, while others can occur in parallel. Some activities cannot commence until
the work product produced by another is available. Other activities can occur independently.
Time allocation
Each task to be scheduled must be allocated some number of work units. In addition, each task must
be assigned a start date and a completion date that are a function of the interdependencies and
whether work will be conducted on a full-time or part-time basis.
Effort Validation
Every project has a defined number of people on the software team. As time allocation occurs, you
must ensure that no more than the allocated number of people has been scheduled at any given time.
Defined responsibilities
Every task that is scheduled should be assigned to a specific team member.
Defined outcomes
Every task that is scheduled should have a defined outcome. For software projects, the outcome is
normally a work product.
Every task or group of tasks should be associated with a project milestone. A milestone is
accomplished when one or more work products has been reviewed for quality and has been
approved. Each of these principles is applied as the project schedule evolves.
• Work Breakdown Structure (WBS)
Dividing complex projects into simpler and manageable tasks is the process identified as Work Breakdown Structure (WBS). Usually, project managers use this method to simplify project execution. In a WBS, much larger tasks are broken down into manageable chunks of work. These chunks can be easily supervised and estimated. WBS is not restricted to a specific field when it comes to application. This methodology can be used for any type of project management.
Following are a few reasons for creating a WBS in a project:
• Accurate and readable project organization.
• Accurate assignment of responsibilities to the project team.
• Indicates the project milestones and control points.
• Helps to estimate the cost, time and risk.
• Illustrate the project scope, so the stakeholders can have a better understanding of the same.
Construction of a WBS
Identifying the main deliverables of a project is the starting point for deriving a work breakdown
structure. This important step is usually done by the project managers and the subject matter experts
(SMEs) involved in the project. Once this step is completed, the subject matter experts start breaking
down the high-level tasks into smaller chunks of work. In the process of breaking down the tasks,
one can break them down into different levels of detail. One can detail a high-level task into ten sub-
tasks while another can detail the same high-level task into 20 sub-tasks.
Therefore, there is no hard and fast rule on how you should break down a task in a WBS. Rather, the level of breakdown is a matter of the project type and the management style followed for the project.
In general, there are a few "rules" used for determining the smallest task chunk. In the "two weeks" rule, nothing is broken down smaller than two weeks' worth of work; that is, the smallest task of the WBS is at least two weeks long. 8/80 is another rule used when creating a WBS. This rule implies that no task should be smaller than 8 hours of work and should not be larger than 80 hours of work.
One can use many forms to display their WBS. Some use tree structure to illustrate the WBS, while
others use lists and tables. Outlining is one of the easiest ways of representing a WBS.
• The following is an example of an outlined WBS:
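(A minimal illustration only; the project and its deliverables below are hypothetical and are chosen simply to show the outline form.)

1.0 Online Examination System
  1.1 Requirements
    1.1.1 Gather requirements from stakeholders
    1.1.2 Prepare and review the SRS
  1.2 Design
    1.2.1 Database design
    1.2.2 User interface design
  1.3 Construction
    1.3.1 Question bank module
    1.3.2 Examination and result modules
  1.4 Testing
    1.4.1 Unit and integration testing
    1.4.2 User acceptance testing
  1.5 Deployment and user training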
There are many design goals for WBS. Some important goals are as follows:
• Giving visibility to important work efforts.
• Giving visibility to risky work efforts.
• Illustrating the correlation between the activities and deliverables.
• Showing clear ownership by task leaders.
WBS Diagram
• In a WBS diagram, the project scope is graphically expressed. Usually the diagram starts with a
graphic object or a box at the top, which represents the entire project. Then, there are sub-
components under the box.
• These boxes represent the deliverables of the project. Under each deliverable, there are sub-
elements listed. These sub-elements are the activities that should be performed in order to achieve
the deliverables.
• Although most WBS diagrams are designed based on the deliverables, some WBSs are created based on the project phases. Usually, information technology projects fit perfectly into the WBS model; therefore, almost all information technology projects make use of a WBS. In addition to the general use of a WBS, there is a specific objective for deriving it as well: the WBS is the input for Gantt charts, a tool used for project management. A Gantt chart is used for tracking the progression of the tasks derived from the WBS.
Following is a sample WBS diagram:
• Project Scheduling Techniques – PERT, CPM
• Scheduling of a software project does not differ greatly from scheduling of any multitask
engineering effort. Therefore, generalized project scheduling tools and techniques can be applied
with little modification for software projects.
• Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM) are two project scheduling methods that can be applied to software development.
• Both techniques are driven by project planning activities such as estimates of effort, a decomposition of the product function, the selection of the appropriate process model and task set, and decomposition of the tasks that are selected.
PERT: Program Evaluation and Review Technique
The Program Evaluation and Review Technique was developed in the 1950s to plan and control large weapons development projects for the US Navy. It is a graphic networking technique. In Microsoft Project and other PM software packages, the PERT chart represents another view of the project and represents inter-task relationships more effectively. Tasks and milestones are included in the chart; symbols such as circles and squares are used to depict tasks and milestones. Microsoft Project uses rectangles to represent tasks. Each task rectangle is divided into sections, with the task name at the top and the task id/duration in the middle.
A PERT plan diagrammatically represents the network of tasks required to complete a project (the task network). It explicitly establishes sequential dependencies and relationships among the tasks.
A PERT diagram consists of both activities and events.
Activity → a time- and resource-consuming effort required to complete a segment of the total project. Activities are represented using solid lines with directional arrows.
Event → represents the completion of a segment/part of the project. Events are represented by circles.
Activities and events are coded as described to designate their functions in the overall project.
The PERT chart and accompanying table define estimated and actual times, costs, and responsible personnel for monitoring and control of project performance.
The total time required to complete the project can be determined by locating the longest path (in terms of time) in the chart. This path is the "Critical Path".
Both PERT and CPM provide quantitative tools that allow you to
1) Determine the critical path – the chain of tasks that determines the duration of the project,
2) Establish “most likely” time estimates for individual tasks by applying statistical models and
3) Calculate “boundary times” that define a time “window” for a particular task.
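As a minimal illustration (the task names, durations, and dependencies below are hypothetical), the critical-path duration of a small task network can be found with a simple forward pass:

# Forward pass over a small task network to find the project (critical-path) duration.
tasks = {
    "A": {"duration": 3, "predecessors": []},
    "B": {"duration": 5, "predecessors": ["A"]},
    "C": {"duration": 2, "predecessors": ["A"]},
    "D": {"duration": 4, "predecessors": ["B", "C"]},
}

earliest_finish = {}
def ef(task):
    # earliest finish = duration + latest earliest-finish among the predecessors
    if task not in earliest_finish:
        start = max((ef(p) for p in tasks[task]["predecessors"]), default=0)
        earliest_finish[task] = start + tasks[task]["duration"]
    return earliest_finish[task]

project_duration = max(ef(t) for t in tasks)
print(project_duration)        # 12 days: the critical path is A -> B -> D
print([t for t in tasks if ef(t) == project_duration])   # tasks finishing at the project end ('D')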
• Critical Path Method (CPM)
A project of any kind involves a number of activities. Some of them are interdependent while others
are independent. It is important that project management should effectively plan, schedule, co-
ordinate and optimize the activities of the various participants in the project.
There are certain activities which are to be completed within the stipulated time. If those critical
activities are not completed within the prescribed time line, the completion of the whole project is
hampered.
If the project is quite large, effective control over all the activities is difficult. To control such projects, network techniques have been developed.
Advantages of the Critical Path Method:
1. It helps in ascertaining the time schedule.
2. Control becomes easy for management.
3. It helps in preparing a detailed plan of action/operations/ activities
4. It helps in enforcing the plan of actions/operations/activities.
5. It gives a standard method for communicating project plans, schedules and time and cost
performances.
6. It identifies the most critical elements.
7. It shows ways to enforce strict supervision over the entire project programme.
Concept of Task Network
A task set is a collection of software engineering work tasks, milestones, work products, and quality
assurance filters that must be accomplished to complete a particular project. The task set must
provide enough discipline to achieve high software quality. But, it must not burden the project team
with unnecessary work. In order to develop a project schedule, a task set must be distributed on the
project time line. The task set will vary depending upon the project type and the degree of rigor with
which the software team decides to do its work.
A task set example
Concept development projects are initiated when the potential for some new technology must be
explored. There is no certainty that the technology will be applicable, but a customer believes that
potential benefit exists. Concept development projects are approached by applying the following
actions:
Concept scoping determines the overall scope of the project.
Preliminary concept planning establishes the organization’s ability to undertake the work implied
by the project scope.
Technology risk assessment evaluates the risk associated with the technology to be implemented as
part of the project scope.
Proof of concept demonstrates the viability of a new technology in the software context.
Concept implementation implements the concept representation in a manner that can be reviewed
by a customer and is used for “marketing” purposes when a concept must be sold to other customers
or management.
Customer reaction to the concept solicits feedback on a new technology concept and targets
specific customer applications.
A task network, also called an activity network, is a graphic representation of the task flow for a project. It is sometimes used as the mechanism through which task sequence and dependencies are
input to an automated project scheduling tool. The concurrent nature of software engineering actions
leads to a number of important scheduling requirements. Because parallel tasks occur
asynchronously, it is important to determine dependencies to ensure continuous progress toward
completion.
In addition, one should be aware of those tasks that lie on the critical path. It means tasks that must
be completed on schedule if the project as a whole is to be completed on schedule. It is important to
note that the task network is macroscopic.
5.2. Project Tracking
Project tracking consists of comparing the project plan with the actual progress of the project. Project tracking is particularly important for organizations with a track record of time and cost overruns, particularly in the IT industry.
Process Activities
1. Determine Work Done
2. Determine Resources Spent
3. Compare Work Done vs. Resources Spent - Earned Value Analysis
4. Track Milestones
Milestones represent important achievements in a project
Earned Value Analysis
Earned Value Analysis (EVA) is an industry standard method of measuring a project's progress at
any given point in time, forecasting its completion date and final cost, and analyzing variances in the
schedule and budget as the project proceeds.
It compares the planned amount of work with what has actually been completed, to determine if the
cost, schedule, and work accomplished are progressing in accordance with the plan. As work is
completed, it is considered "earned".
Calculating Earned Value
Earned Value Management measures progress against a baseline. It involves calculating three key
values for each activity in the WBS:
1. The Planned Value (PV), (formerly known as the budgeted cost of work scheduled or BCWS)—
that portion of the approved cost estimate planned to be spent on the given activity during a given
period.
2. The Actual Cost (AC), (formerly known as the actual cost of work performed or ACWP)—the
total of the costs incurred in accomplishing work on the activity in a given period. This Actual
Cost must correspond to whatever was budgeted for the Planned Value and the Earned Value
(e.g. all labor, material, equipment, and indirect costs).
3. The Earned Value (EV), (formerly known as the budget cost of work performed or BCWP)—the
value of the work actually completed.
These three values are combined to determine at that point in time whether or not work is being
accomplished as planned.
The most commonly used measures are the cost variance and the schedule variance:
Cost Variance (CV) = EV – AC
Schedule Variance (SV) = EV – PV
These two values can be converted to efficiency indicators to reflect the cost and schedule
performance of the project. The most commonly used cost-efficiency indicator is the cost
performance index (CPI).
It is calculated thus:
CPI = EV / AC
The sum of all individual EV budgets divided by the sum of all individual AC's is known as the
cumulative CPI, and is generally used to forecast the cost to complete a project. The schedule
performance index (SPI) is calculated as:
SPI = EV / PV
The SPI is often used with the CPI to forecast overall project completion estimates.
A negative schedule variance (SV) calculated at a given point in time means the project is behind
schedule, while a negative cost variance (CV) means the project is over budget.
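As a minimal sketch (the monetary figures are hypothetical), these indicators can be computed directly from PV, EV, and AC:

# Earned value indicators for one reporting period (hypothetical figures, same currency units).
PV = 100000   # planned value (BCWS)
EV = 80000    # earned value (BCWP)
AC = 90000    # actual cost (ACWP)

CV = EV - AC          # cost variance: negative means over budget
SV = EV - PV          # schedule variance: negative means behind schedule
CPI = EV / AC         # cost performance index (< 1 indicates cost overrun)
SPI = EV / PV         # schedule performance index (< 1 indicates schedule slippage)

print(CV, SV)                          # -10000 -20000
print(round(CPI, 2), round(SPI, 2))    # 0.89 0.8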
Time-Line Charts/ Gantt Charts
When creating a software project schedule, we begin with a set of tasks. If automated tools are used, the work breakdown is input as a task network or task outline. Effort, duration and start date are then input for each task. In addition, tasks may be assigned to specific individuals.
As a consequence of this input, a time-line chart, also called a Gantt chart, is generated. A time-line chart can be developed for the entire project.
The figure below depicts a part of a software project schedule that emphasizes scoping task for a
word-processing (WP) software product. All project tasks are listed in the left-hand column.
The horizontal bars indicate the duration of each task. When multiple bars occur at the same time on the calendar, task concurrency is implied. The diamonds indicate milestones.
Once the information necessary for the generation of a time-line chart has been input, the majority of
software project scheduling tools produce project tables – a tabular listing of all project tasks, their
planned and actual start and end dates, and a variety of related information. Used in conjunction with
the time-line chart, project tables enable you to track progress.
5.3. Software Quality Management vs. Software Quality Assurance
Basic Quality Concepts
• Quality: - Fit for use & meeting customer’s requirements
• Process: - A sequence of steps performed for a given purpose
• Quality assurance: All those planned and systematic actions necessary to provide adequate
confidence that a product or service will satisfy given requirements for quality.
• Quality control: Includes review & testing
Quality Management
• Quality: A characteristic attribute of something. As an attribute of an item, it refers to measurable characteristics such as length, colour, electrical properties, etc. Software is largely an intellectual entity, which is more challenging to characterize than physical objects.
• User satisfaction: a compliant product, good quality, and delivery within budget and schedule.
• Variation control: inspection, review, and tests.
• Quality assurance: audit and reporting on the effectiveness and completeness of quality activities.
• Cost of quality: prevention costs, appraisal costs, and failure costs.
• Software Quality Assurance
• Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
1. Software requirements are the basis from which quality is measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If these criteria are not followed, lack of quality will be the result.
3. There is a set of implicit requirements, e.g., the desire for ease of use and good maintainability. If software conforms to explicit requirements but fails to meet implicit requirements, software quality is suspect.
SQA activities
Software engineers do the technical work, while an independent SQA group:
1. Prepares an SQA plan for the project.
2. Participates in the development of the project's software process description.
3. Reviews software engineering activities to verify compliance with the defined software process.
4. Audits designated software work products to verify compliance with those defined as part of the software process.
5. Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
6. Records any non-compliance and reports it to senior management.
Software Reviews
A review acts as a filter for the software process. A formal technical review is essential from the QA point of view for uncovering errors and improving software quality.
Cost impact of software defects: an FTR helps avoid the much larger cost of fixing errors after software release. When errors are found early through reviews, the cost impact is minimal and the errors are removed easily. Defect amplification models examine the generation and detection of errors beginning at the preliminary design stage, which is why early detection matters most.
1. Review Meeting
• 3-5 people are involved
• Advance preparation is required
• The meeting duration is kept moderate (typically no more than two hours)
• The review summary report answers: what was reviewed, who reviewed it, and what were the findings and conclusions
2. FTR Guidelines
• Review the product, not the producer
• Set an agenda and maintain it
• Limit debate and rebuttal
• Enunciate problem areas, but do not attempt to solve every problem noted
• Take written notes
• Limit the number of participants and insist on advance preparation
• Develop a checklist for each product that is likely to be reviewed
• Allocate resources and schedule time for reviews
• Conduct meaningful training for all reviewers
• Review your early reviews
Statistical quality assurance reflects a growing trend throughout industry to become more quantitative about quality. For software, statistical quality assurance implies the following steps:
1. Information about software errors and defects is collected and categorized.
2. An attempt is made to trace each error and defect to its underlying cause (e.g., nonconformance to
specifications, design error, violation of standards, poor communication with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible
causes), isolate the 20 percent (the vital few).
4. Once the vital few causes have been identified, move to correct the problems that have caused the
errors and defects.
This relatively simple concept represents an important step toward the creation of an adaptive
software process in which changes are made to improve those elements of the process that introduce
error.
5.4. Quality Evaluation Standards
Six Sigma for Software
Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
Originally popularized by Motorola in the 1980s, the Six Sigma strategy “is a rigorous and
disciplined methodology that uses data and statistical analysis to measure and improve a company’s
operational performance by identifying and eliminating defects’ in manufacturing and service-related
processes”.
The term Six Sigma is derived from six standard deviations—3.4 instances (defects) per million
occurrences—implying an extremely high quality standard
The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well-defined methods of customer communication.
• Measure the existing process and its output to determine current quality performance (collect
defect metrics).
• Analyze defect metrics and determine the vital few causes.
If an existing software process is in place but improvement is required, Six Sigma suggests two additional steps:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.
These core and additional steps are sometimes referred to as the DMAIC (define, measure, analyze,
improve, and control) method.
If an organization is developing a software process (rather than improving an existing process), the
core steps are augmented as follows:
• Define customer requirements and deliverables and project goals via well-defined methods of
customer communication.
• Measure the existing process and its output to determine current quality performance (collect defect
metrics).
• Analyze defect metrics and determine the vital few causes.
• Design the process to (1) avoid the root causes of defects and (2) to meet customer requirements.
• Verify that the process model will, in fact, avoid defects and meet customer requirements.
This variation is sometimes called the DMADV (define, measure, analyze, design, and verify)
method.
ISO 9000 for Software – Concept and major considerations
• ISO (International Organization for Standardization) is the world’s largest developer of standards.
• ISO is a network of the national standards institutes of 148 countries, on the basis of one member
per country, with a central secretariat in Geneva, Switzerland, that coordinates the system.
• ISO is a non-government organization: its members are not, as is the case in the United Nations
system, delegations of national governments
• ISO is able to act, as a bridging organization in which a consensus can be reached on solutions
that meet both the requirements of business and the broader needs of society
ISO Standards benefits to society:
• For customers, the worldwide compatibility of technology
• For governments, International Standards provide the technological and scientific bases
underpinning health, safety and environmental legislation.
• For trade officials negotiating the emergence of regional and global markets, International Standards create "a level playing field" for all competitors on those markets.
• For developing countries, International Standards that represent an international consensus on the state of the art constitute an important source of technological know-how.
• For consumers, conformity of products and services to International Standards provides assurance about their quality, safety and reliability.
• For everyone, International Standards can contribute to the quality of life in general by ensuring that the transport, machinery and tools we use are safe.
• For the planet it inhabits, International Standards on air, water and soil quality, and on emission of
gasses and radiation, can contribute to efforts to preserve the environment.
ISO 9000 Standards
ISO 9000: 2000 – Guidelines for selection & use
ISO 9001: 2000 – Quality Assurance Model
ISO 9004: 2000 – Guidelines for process improvements
Model of a process-based Quality Management System
CMMI – CMMI levels, Process Area Considered
The Capability Maturity Model Integration (CMMI), a comprehensive process meta-model that is
predicated on a set of system and software engineering capabilities that should be present as
organizations reach different levels of process capability and maturity.
The CMMI represents a process meta-model in two different ways: (1) as a "continuous" model and (2) as a "staged" model.
The continuous CMMI meta-model describes a process in two dimensions. Each process area (e.g., project planning or requirements management) is formally assessed against specific goals and practices and is rated according to the following capability levels:
Level 0: Incomplete—the process area (e.g., requirements management) is either not performed or
does not achieve all goals and objectives defined by the CMMI for level 1 capability for the process
area.
Level 1: Performed—all of the specific goals of the process area (as defined by the CMMI) have
been satisfied. Work tasks required to produce defined work products are being conducted.
Level 2: Managed—all capability level 1 criteria have been satisfied. In addition, all work associated
with the process area conforms to an organizationally defined policy; all people doing the work have
access to adequate resources to get the job done; stakeholders are actively involved in the process
area as required; all work tasks and work products are “monitored, controlled, and reviewed; and are
evaluated for adherence to the process description”.
Level 3: Defined—all capability level 2 criteria have been achieved. In addition, the process is
“tailored from the organization’s set of standard processes according to the organization’s tailoring
guidelines, and contributes work products, measures, and other process-improvement information to
the organizational process assets”.
Level 4: Quantitatively managed—all capability level 3 criteria have been achieved. In addition, the
process area is controlled and improved using measurement and quantitative assessment.
“Quantitative objectives for quality and process performance are established and used as criteria in
managing the process”.
Level 5: Optimized—all capability level 4 criteria have been achieved. In addition, the process area
is adapted and optimized using quantitative (statistical) means to meet changing customer needs and
to continually improve the efficacy of the process area under consideration.
The CMMI defines each process area in terms of “specific goals” and the “specific practices”
required to achieve these goals. Specific goals establish the characteristics that must exist if the
activities implied by a process area are to be effective. Specific practices refine a goal into a set of
process-related activities.
The staged CMMI model defines the same process areas, goals, and practices as the continuous
model. The primary difference is that the staged model defines five maturity levels, rather than five
capability levels. To achieve a maturity level, the specific goals and practices associated with a set of
process areas must be achieved.
The relationship between maturity level and Process level is shown in the diagram below:
Process Areas required to achieve a Maturity level

5.5. Software Security
Software security is the application of techniques that assess, mitigate, and protect software systems from vulnerabilities. These techniques ensure that software continues to function correctly and is safe from attack. Developing secure software involves considering security at every stage of the life cycle. The major goal is to identify flaws and defects as early as possible.
Framework for securing software
Applying software security techniques to software development produces higher levels of quality.
Safer software has correct and predictable behavior.
• Code review using tools to find bugs, vulnerabilities, and weaknesses
• Architectural risk analysis to identify flaws
• Penetration testing
• Risk-based security testing
• Abuse cases to examine how a system behaves under attack
• Defensive programming
• Secure coding
• Threat modeling (e.g., STRIDE)
• Understanding your attack surface
• Sandboxing
• Code auditing
• Application security (e.g., OWASP Top Ten)
• Defense in depth
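As one small, generic illustration of the defensive-programming and secure-coding practices listed above (the function and its validation rules are hypothetical, not tied to any particular framework), untrusted input should be validated before it reaches the rest of the system:

# Defensive programming: validate untrusted input before using it.
import re

USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,20}")   # whitelist of allowed characters

def register_user(username: str, age: str) -> dict:
    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("username must be 3-20 characters: letters, digits, underscore")
    try:
        age_value = int(age)
    except ValueError:
        raise ValueError("age must be an integer") from None
    if not 0 < age_value < 150:
        raise ValueError("age out of the accepted range")
    # Only validated, well-typed values are passed on.
    return {"username": username, "age": age_value}

print(register_user("student_01", "21"))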
Introduction to DevOps
• DevOps (development and operations) is a collection of tools and technologies combined to carry
out various business processes. It aims to bridge the gap between two of the most significant
departments in any IT organization, the development department and the operations department.
• DevOps is not a tool or a team; it is a process or methodology of using various tools to solve the problems between the development and operations teams, hence the term "Dev-Ops."
• The development team always had the pressure of completing old, pending work that was considered faulty by the operations team. With DevOps, there is no wait time to deploy the code and get it tested. Hence, the developer gets instantaneous feedback on the code, can close bugs quickly, and can make the code production ready faster.
Life cycle of DevOps
DEVOPS (Development Operations)
• It is a continuous process – continuous development, testing, integration, deployment and
monitoring. Let me explain this with an example.
• Instagram is a widely used application all over the world. For it to work the way it does, there is a huge team behind the scenes continually developing, testing and releasing features for it.
• The developer plans and builds the code for the application, which undergoes testing using test suites; if the test suites pass, the code is sent to production.

96
Course Name : Computer Engineering Subject Title : Software Engineering
Course Code : CO/CM/IF/CD Subject Code : 22413

1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided
during the planning phase. And the developers begin developing the code for the application. There
are no DevOps tools that are required for planning, but there are several tools for maintaining the
code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which the developers are required to commit changes to the source code more frequently. This may be on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code. Therefore,
there is continuous development of software. The updated code needs to be integrated continuously
and smoothly with the systems to reflect changes to the end-users.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automation testing tools such as TestNG, JUnit, Selenium, etc. are used. These tools allow QAs to
test multiple code-bases thoroughly in parallel to ensure that there is no flaw in the functionality. In
this phase, Docker Containers can be used for simulating the test environment.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where
important information about the use of the software is recorded and carefully processed to find out
trends and identify problem areas. Usually, the monitoring is integrated within the operational
capabilities of the software application.
5) Continuous Feedback
The application development is consistently improved by analyzing the results from the operations of
the software. This is carried out by placing the critical phase of constant feedback between the
operations and the development of the next version of the current software application.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is essential to ensure that the
code is correctly used on all the servers.
7) Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, and allow the organization to continually accelerate the overall time to market.
It is clear from the discussion that continuity is the critical factor in DevOps, removing steps that often distract the development, make it take longer to detect issues, and delay a better version of the product by several months. With DevOps, we can make any software product more efficient and increase the overall count of customers interested in the product.
Question Bank
1. Explain the concept of Project scheduling and the factors that delay a Project schedule.
2. Explain the Principles of Project scheduling
3. Explain PERT with an example
4. Explain CPM with an example
5. Explain EVA with an example
6. Explain Timeline/Gantt Chart with an example
7. Define Quality, Quality control, Quality Assurance.
8. Write a note on basic quality concepts
9. Define and differentiate between Quality Control and Quality Assurance( At least 4 points )
10. Write a note on Software Quality Assurance (SQA)
11. Explain the Formal Technical Review (FTR)
12. Write a note on Statistical Software Quality Assurance.(SSQA)
13. Explain Six Sigma approach DMADV for new system development
14. Describe Six Sigma approach DMAIC for upgraded systems.( For an existing system)
15. Explain in brief ISO standards for Software
16. List advantages of ISO for Consumers, Organization and Society.
17. Explain CMMI and its levels.
18. Write a note on Software security technique
19. Write a note on Devops and its life cycle