SE Moodle Notes

This document provides an introduction to software engineering and process models. It defines software and discusses the attributes of good software. Software engineering is introduced as the application of engineering principles to software development. The document also covers the essential attributes of good software, including maintainability, dependability, efficiency, and acceptability, and gives an overview of the Capability Maturity Model (CMM) and its successor, CMM Integration (CMMI), which are used to measure the maturity of an organization's software processes.

Introduction to Software Engineering and Process Models

Prof. Krishnapriya S, APSIT


What is Software?
Computer Software is the product that software professionals build and
then support over the long term.
Software encompasses: (1) instructions (computer programs) that when
executed provide desired features, function, and performance; (2) data
structures that enable the programs to adequately store and
manipulate information and (3) documentation that describes the
operation and use of the programs.
Software Engineering encompasses a process, a collection of methods
(practice) and an array of tools that allow professionals to build high-
quality computer software.

Software Application Domains
• System Software – programs written to service other programs, e.g. compilers, editors, file management utilities.
• Application Software – stand-alone programs that solve a specific business need.
• Engineering / Scientific Software – applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.
• Embedded Software – resides within a product or system and is used to implement and control features and functions for the end user and for the system itself.

Software Application Domains (cont..)
• Product-line Software – provides a specific capability for use by many different customers. It can focus on a limited and esoteric marketplace or address mass consumer markets.
• Web applications – a network-centric software category that spans a wide array of applications. A Web application is a piece of software that is accessed through a browser.
• Artificial intelligence software – makes use of non-numerical algorithms to solve complex problems. Applications include robotics, expert systems, pattern recognition, artificial neural networks, etc.

Software—New Categories
• Open-world computing—pervasive, ubiquitous, distributed computing with the help of wireless networking; application software that allows mobile devices, personal computers, and enterprise systems to communicate across vast networks.
• Netsourcing—the Web as a computing engine as well as a content provider; how to architect simple (e.g. personal financial planning) and sophisticated applications to target end users worldwide.
• Open source—"free" source code open to the computing community.

Software costs
• Software costs often dominate computer system costs. The costs of
software on a PC are often greater than the hardware cost.
• Software costs more to maintain than it does to develop. For systems
with a long life, maintenance costs may be several times development
costs.
• Software engineering is concerned with cost-effective software
development.

Features of Software?
• Software is developed or engineered; it is not manufactured in the classical sense.
• Software doesn't "wear out," but it does deteriorate (due to change). Hardware has a bathtub curve of failure rate: a high failure rate at the beginning, then a drop to a steady state, then a rise as the cumulative effects of dust, vibration, and abuse take hold.
• Although the industry is moving toward component-based construction (hardware has long been built from standard components such as screws and off-the-shelf integrated circuits), most software continues to be custom-built. Modern reusable components encapsulate data and the processing applied to that data, enabling the software engineer to create new applications from reusable parts, e.g. graphical user interfaces, windows, and pull-down menus from a library.
Failure curves for Software



Software Engineering Definition
Definition by Fritz Bauer at the seminal NATO Software Engineering conference (1968):
• [Software engineering is] the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works
efficiently on real machines.
• The IEEE definition:
• Software Engineering: (1) The application of a systematic, disciplined,
quantifiable approach to the development, operation, and maintenance of
software; that is, the application of engineering to software. (2) The study of
approaches as in (1).

Importance of Software Engineering

• More and more, individuals and society rely on advanced software systems. We need to be able to produce reliable and trustworthy systems economically and quickly.
• It is usually cheaper, in the long run, to use software engineering methods and techniques for software systems rather than just write the programs as if they were a personal programming project. For most types of system, the majority of costs are the costs of changing the software after it has gone into use.

• What is software?
• Computer programs, data structures and associated documentation. Software
products may be developed for a particular customer or may be developed for a
general market.
• What are the attributes of good software?
• Good software should deliver the required functionality and performance to the
user and should be maintainable, dependable and usable.
• What is software engineering?
• Software engineering is an engineering discipline that is concerned with all
aspects of software production.
• What is the difference between software engineering and computer science?
• Computer science focuses on theory and fundamentals; software engineering is
concerned with the practicalities of developing and delivering useful software.
• What is the difference between software engineering and system engineering?
• System engineering is concerned with all aspects of computer-based systems development including hardware, software and process engineering. Software engineering is part of this more general process.
Essential attributes of good software
Product characteristics
• Maintainability
Software should be written in such a way that it can evolve to meet the changing needs of
customers. This is a critical attribute because software change is an inevitable requirement of a
changing business environment.
• Dependability and security
Software dependability includes a range of characteristics including reliability, security and
safety. Dependable software should not cause physical or economic damage in the event of
system failure. Malicious users should not be able to access or damage the system.
• Efficiency
Software should not make wasteful use of system resources such as memory and processor
cycles. Efficiency therefore includes responsiveness, processing time, memory utilisation, etc.
• Acceptability
Software must be acceptable to the type of users for which it is designed. This means that it
must be understandable, usable and compatible with other systems that they use.
Software Engineering
A Layered Technology

Any engineering approach must rest on an organizational commitment to quality, which fosters a continuous process improvement culture.
The process layer is the foundation: it defines a framework of activities for the effective delivery of software engineering technology. It establishes the context in which products (models, data, reports, and forms) are produced, milestones are established, quality is ensured, and change is managed.
Methods provide the technical how-to's for building software. They encompass a broad array of tasks including communication, requirements analysis, design modeling, program construction, testing, and support.
Tools provide automated or semi-automated support for the process and the methods.
CAPABILITY MATURITY MODELS - CMM
• A process meta-model developed by the Software Engineering Institute (SEI).
• Used for measuring the maturity of an organization's software process.
• 5 levels of maturity
• Initial
• Repeatable
• Defined
• Managed
• Optimized

CMMI (CMM Integration)
• A successor of CMM
• Incorporates best components of individual disciplines of CMM
There are 2 representations for CMMI
-> Staged Representation
-> Continuous Representation
Staged Representation has 5 Maturity levels
1) Initial
2) Managed
3) Defined
4) Quantitatively managed
5) Optimizing
CMMI (CMM Integration) (cont.)
Continuous Representation – 6 Capability levels
1) Level 0 – Incomplete
2) Level 1 – Performed
3) Level 2 – Managed
4) Level 3 – Defined
5) Level 4 – Quantitatively Managed
6) Level 5 - Optimizing



Software Process
• A framework for the activities, actions, and tasks that are required to build
high-quality software
• A process is a collection of activities, actions and tasks that are performed
when some work product is to be created.
• Each framework activity is populated by a set of software engineering actions.
• Each software engineering action is defined by a task set
• Task set – work tasks to be completed, work products that will be produced,
quality assurance points that will be required, milestones that will be used to
indicate progress.
• Purpose of process is to deliver software in a timely manner and with
sufficient quality to satisfy those who have sponsored its creation and those
who will use it.
A Generic Process Model

Five Activities of a Generic Process framework
• Communication: communicate with the customer to understand objectives and gather requirements.
• Planning: creates a "map" that defines the work by describing the tasks, risks, resources, work products, and work schedule.
• Modeling: create a "sketch" of the system showing what it looks like architecturally, how the constituent parts fit together, and other characteristics.
• Construction: code generation and testing.
• Deployment: the software is delivered to the customer, who evaluates the product and provides feedback based on the evaluation. This includes delivery, support, and customer feedback.
• These five framework activities apply to all software development regardless of application domain, project size, complexity of effort, etc., though the details differ in each case.
• For many software projects, the framework activities are applied iteratively as the project progresses. Each iteration produces a software increment that provides a subset of the overall software features and functionality.

Umbrella Activities
Complement the five process framework activities and help the team manage and control progress, quality, change, and risk.
• Software project tracking and control: assess progress against the plan and take actions to maintain the schedule.
• Risk management: assesses risks that may affect the outcome and quality.
• Software quality assurance: defines and conducts activities to ensure quality.
• Technical reviews: assess work products to uncover and remove errors before they propagate to the next activity.
• Measurement: defines and collects process, project, and product measures to ensure stakeholders' needs are met.
• Software configuration management: manages the effects of change throughout the software process.
• Reusability management: defines criteria for work product reuse and establishes mechanisms to achieve reusable components.
• Work product preparation and production: create work products such as models, documents, logs, forms, and lists.

Process Flow
A linear process flow executes each of the five activities in sequence.
An iterative process flow repeats one or more of the activities before proceeding to the next.
An evolutionary process flow executes the activities in a circular manner. Each circuit leads to a more complete version of the software.
A parallel process flow executes one or more activities in parallel with other activities (e.g., modeling one aspect of the software in parallel with construction of another aspect of the software).



Software Process Models (SDLC models)
• A software process model is an abstract representation of the development
process.
• The models specify the stages and order of a process.
• Also referred to as SDLC models (Software Development Life Cycle models).
The most popular and important SDLC models are as follows:
• Waterfall model
• V model
• Incremental model
• RAD model
• Agile model
• Prototype model
• Spiral model

Prescriptive and Agile Process Models
Prescriptive process models
• Stress detailed definition, identification, and application of process activities and tasks.
• The intent is to improve system quality, make projects more manageable, make delivery dates and costs more predictable, and guide teams of software engineers.
Unfortunately, there have been times when these objectives were not achieved. If prescriptive models are applied dogmatically and without adaptation, they can increase the level of bureaucracy.

Agile process models
• Emphasize project "agility" and follow a set of principles that lead to a more informal approach to software process. They emphasize maneuverability and adaptability, and are particularly useful when Web applications are engineered.
The Waterfall Model

It is the oldest paradigm for software engineering. When requirements are well defined and reasonably stable, development proceeds in a linear fashion.
Each of these phases produces one or more documents that need to be approved before the next phase begins.
It is a linear, sequential approach to software development, with distinct phases such as requirements gathering, design, implementation, testing, and maintenance.

1) The Waterfall Model (Cont.)
Advantages:
The waterfall model is easy to understand and follow.
It doesn't require a lot of customer involvement after the specification is done.
Clear and defined phases of development make it easy to plan and manage the project.
It is well suited for projects with well-defined and unchanging requirements.
Disadvantages:
It can be difficult to know how long each phase will take, making it difficult to estimate the overall time and cost of the project.
It does not have much room for iteration and feedback throughout the development process.
A working version of the program(s) will not be available until late in the project time span.
It leads to "blocking states" in which some project team members must wait for other members of the team to complete dependent tasks.
A major mistake, if undetected until the working program is reviewed, can be disastrous.

The V-Model

The V-Model (cont.)
• A variation of the waterfall model: the V model (Verification and Validation model).
• It is based on the association of a testing phase with each corresponding development stage.
• Verification: involves static analysis techniques (reviews) done without executing code. It is the process of evaluating each product development phase to find out whether the specified requirements are met.
• Validation: the process of evaluating the software after the completion of the development phase to determine whether it meets the customer's expectations and requirements.
• The V-shaped model is recommended for small to medium-sized projects with stable, clearly specified requirements.

The Incremental Models
This model is applied when initial requirements are reasonably well defined, but the overall scope of the development effort precludes a purely linear process, or when there is a compelling need to provide a limited set of functions quickly and expand on them in a later system release.
It combines elements of linear and parallel process flows. Each linear sequence produces a deliverable increment of the software, in a manner similar to the increments produced by an evolutionary process flow.
The first increment is often a core product: basic requirements are addressed, while many supplementary features remain undelivered. Users work with the core product and evaluate it, and subsequent increments add modifications that better meet their needs.

The Incremental Models (cont..)
Advantages
• Errors are easier to recognize.
• Easier to test and debug.
• More flexible.
• Risk is simpler to manage because it is handled during each iteration.
• The client gets important functionality early.
Disadvantages
• Needs good planning.
• Total cost is high.
• Well-defined module interfaces are needed.

RAD model
• Rapid Application Development model – a high-speed adaptation of the waterfall model.
• The product can be developed within a short period of time (60 to 90 days).
Instead of lots of detailed planning, you break the project into five phases:
business modeling,
data modeling,
process modeling,
application generation,
and testing and turnover.
RAD model
Business modeling – find out how information flows across your business.
Data modeling – analyze the information. Identify and refine key data sets across your business, define them clearly, then group them in ways that might be useful later.
Process modeling – understand the flow of information around key objectives and how that data supports specific business functions.
Application generation – the developer team begins to actually construct your software. With process and data models in hand, they create components and full prototypes to be tested in the next phase.
Testing and turnover – prototypes are tested separately, so clients and users can scrutinize each new component and examine it carefully to identify issues.

RAD model
Advantages of the rapid application development model include faster development and delivery, a more flexible end product, and reduced risk.
Some disadvantages are that it is not ideal for small projects or for technically risky ones, the system must be modular, and it requires a highly skilled team that is willing to stay engaged throughout the project.

Software Requirement Analysis and Modelling


Requirement Engineering
Requirement – information that describes the user's expectations about the system's performance.
Characteristics of requirements
Should be unambiguous: requirements should not be confusing; each should have a single meaning from every perspective.
Should be testable: a tester should be able to easily verify whether the requirements have been implemented successfully.
Should be clear: concise, simple, precise, and free of unnecessary information.
Should be understandable: anyone who reads a requirement should understand it easily. Proper conventions have to be used, and it should be grammatically correct.
Should be feasible: realistic and possible. The requirement should be achievable within the given time and budget; it is realistic if it can be implemented using existing technology with the estimated budget and time.
Should be consistent: it should not happen that processes produce different outputs for the same inputs coming from different sources.
Requirement Engineering
Requirement Engineering:
- The procedure of collecting software requirements from the customer, then analyzing and documenting them.
- The purpose is to create and maintain the 'System Requirement Specification' document.
- The process of understanding and defining which services are required and identifying the constraints on these services.
- Ensures your software will meet the user's expectations.
- A very critical stage of the software process, as any errors at this stage will be reflected in later stages, causing higher costs.
- At the end of this stage, a requirement document that specifies the requirements is produced and validated with the stakeholders.
Activities involved in Requirement Engineering
- Requirement inception
- Requirement elicitation
- Requirement analysis and negotiation
- System modeling
- Requirements specification
- Requirements validation
- Requirements management
Activities involved in Requirement Engineering
- Requirement inception
- A set of questions is asked to establish the basis for the software process.
- The goal is to understand the problem and evaluate the proper solution.
- The customer and developer meet and decide the overall scope and nature of the problem statement.
- Requirement elicitation
- It is the practice of researching and discovering the requirements of a system
from users, customers and other stakeholders.
- Also referred to as “ requirement gathering”
- Requirement analysis and negotiation (Elaboration)
- Requirements are identified and conflicts with stakeholders are solved.
- Both written and graphical tools can be used
- graphical tools : Unified modelling language (UML), Lifecycle Modelling
language (LML)
- Written analysis tools : use cases, user stories
Activities involved in Requirement Engineering
- System modeling
- System modelling is the process of developing abstract models of a system. Each model
presents a different view or perspective of that system.
- System modeling may represent a system using graphical notation, e.g. the Unified
Modeling Language (UML).
- Requirements specification
- Requirements are documented in a formal artifact called Requirements Specification (RS).
- It will become official after validation.
- RS can contain both written and graphical information if necessary.
- Requirements validation
- The process of checking whether the documented requirements and models are
consistent and meet the needs of the stakeholder.
- Only after validation, the RS becomes official.
- Requirements management
- Managing all activities related to the requirements from inception, supervising them as the system is developed, until it is put into use.
Requirement Elicitation
- Requirement elicitation practices include interviews, questionnaires, user observation,
workshops, brainstorming, use cases, role playing and prototyping.
- Requirement elicitation is usually followed by analysis and specification of the
requirements
There are different ways to identify customer requirements:
1) Interviews
2) Surveys
3) Questionnaires
4) Task analysis
5) Domain analysis
6) Brainstorming
7) Prototyping
8) Observation
Requirement Elicitation
There are different ways to identify customer requirements:
1) Interviews:
- Structured or closed interviews (in which the information to be collected from the customer is decided in advance)
- Non-structured or open interviews
- Oral interviews
- Written interviews
- Face-to-face interviews
- Group interviews, which help to cover any missing requirements, as a number of people participate in the process.
Requirement Elicitation

The different ways to identify customer requirements:

2) Surveys
A survey is a method of gathering information using relevant questions from a sample of people, with the aim of understanding a population as a whole.
It allows an analyst to collect information from many people in a relatively short amount of time.
This is especially helpful when stakeholders are spread out geographically, or when there are dozens to hundreds of respondents whose input will be needed to help establish system requirements.
The advantage is that collecting requirements is economically beneficial, because requirements are collected from a large number of people at the same time.
Surveys are, however, a less effective method of data discovery.
Requirement Elicitation

Different ways to identify customer requirements:

3) Questionnaires
A questionnaire is a document containing a predefined set of objective questions and their answer options.
Questionnaires should not be too long, to ensure that users will complete them.
The document is given to all stakeholders, and their answers are gathered and compiled.
If the answer to some question is not covered in the questionnaire, the issue may remain unattended.
The advantage is that collecting requirements is economically beneficial, because requirements are collected from a large number of people at the same time.
Questionnaires are a less effective method of data discovery.
Requirement Elicitation
Different ways to identify customer requirements:
4) Task analysis
Here, a team of software developers and engineers identifies the functional specifications for which the new system has to be developed.
If the customer already has software that performs particular operations, this team analyzes it to find the requirements for the proposed system.
5) Domain analysis
Every software product falls into some domain category. Persons experienced in that domain can study its general and specific requirements.
6) Brainstorming
An informal debate is held between the different stakeholders, and all their suggestions are documented for further requirement analysis.
Brainstorming is used in requirement gathering to get as many ideas as possible from a group of people. It is generally used to identify possible solutions to problems.
Requirement Elicitation
Different ways to identify customer requirements:
7) Prototyping
Here we create a user interface, without detailed functionality, for the user to interpret the features of the desired software product.
It helps provide details about the requirements.
If the client is not clear about the requirements, the developer builds a prototype based on the requirements provided at the initial stage.
The prototype is shown to the client and feedback is collected.
8) Observation
A team of experienced persons visits the client's organization or workplace.
They observe how the existing system works, the flow of control at the client's end, and how execution problems are dealt with. The team then draws conclusions that help form the expected requirements.
Requirement Analysis

Here, we understand and refine the collected requirements to make them consistent and unambiguous.
The aim is to improve the understanding of the requirements.
The developer can communicate with the customer to clear up confusing points and to understand which requirements are more vital.
The important factors in requirement analysis are:
1) Identify and resolve conflicts among requirements at the same level and among different levels
2) Identify the boundaries of the proposed software system and the way in which it communicates with its environment
3) Evaluate customer requirements for the overall system and then break them down to component level
Requirement Analysis

The process of requirement analysis includes:
- Analysis of requirements
- Description of the solution
- Cost estimation and prioritization
Problems that can occur in requirement analysis:
- Stakeholders don't always know what they really require.
- Stakeholders state the requirements in their own words.
- Various stakeholders may have different and conflicting requirements.
- Organizational and political factors may affect the software needs.
- Requirements are modified during the requirement analysis process itself.
- New stakeholders may be included, and the business environment may change.
Types of Requirements
Requirements in general are divided into 3 categories:
- Functional requirements
- Non- functional requirements
- Domain requirements
Functional Requirements
This specifies the operations as well as activities which a system must be able
to carry out.
A Functional requirement is used to define a function regarding a system or its
component, in which a function is described as a specification of behavior
between outputs and inputs.
May involve calculations, technical details, data manipulation and processing.
It is used to mention specific results of a system.
Types of Requirements
Functional Requirements
It specifies the application architecture of a system.
Functional requirement must include:
- Description of data to be entered into the system
- Descriptions of operations performed
- Descriptions of work-flows
- Descriptions of system reports or other outputs
- Persons who can enter the data into the system
Types of Requirements
Functional Requirements
Example:
• Interface requirements: Field 1 accepts numeric data entry.
• Business requirements: Data must be entered before a request can be approved. Clicking the Approve button moves the request to the Approval Workflow.
• Regulatory/Compliance requirements: The spreadsheet can secure data with electronic signatures. The database will have a functional audit trail.
• Security requirements: Managers can enter or approve a request but cannot delete requests. Administrators cannot enter or approve requests but can delete requests.
Types of Requirements
Non Functional Requirements
These are basically the quality constraints that the system must satisfy
according to the project contract.
Also called as non- behavioral requirements.
- Portability
- Security
- Performance
- Flexibility
- Scalability
They are classified into:
Interface constraints, Operating constraints, Economic constraints,
Performance constraints, Life cycle constraints (maintainability, portability)
Types of Requirements
Type of Non Functional Requirements
Product requirements : Execution speed, reliability etc.
Organizational requirements : requirements that are a result of
organizational policies as well as procedures such as standards,
implementation necessities etc.
External requirements: requirements that are generated from the factors
that are external to the system and its development process. Eg.
Interoperability requirements, legislative requirements.
Functional Requirements vs. Non-Functional Requirements

• A functional requirement defines a system or its component; a non-functional requirement defines a quality attribute of a software system.
• A functional requirement specifies "What should the software system do?"; a non-functional requirement places constraints on "How should the software system fulfill the functional requirements?"
• Functional requirements are specified by the user; non-functional requirements are specified by technical people, e.g. architects, technical leaders, and software developers.
• Functional requirements are mandatory; non-functional requirements are not mandatory.
• A functional requirement is captured in a use case; a non-functional requirement is captured as a quality attribute.
• Functional requirements are defined at a component level; non-functional requirements are applied to the system as a whole.
• Functional requirements help you verify the functionality of the software; non-functional requirements help you verify its performance.
• Functional testing (system, integration, end-to-end, API testing, etc.) is performed for functional requirements; non-functional testing (performance, stress, usability, security testing, etc.) for non-functional ones.
• Functional requirements are usually easy to define; non-functional requirements are usually more difficult to define.


Types of Requirements
Domain Requirements
Domain requirements are the requirements which are characteristic of a
particular category or domain of projects.
Describe system characteristics as well as features which impact the
domain.
Domain requirements can be functional or nonfunctional.
The basic functions that a system of a specific domain must necessarily
exhibit come under this category.
For instance, in an academic software that maintains records of a school
or college, the functionality of being able to access the list of faculty and
list of students of each grade is a domain requirement.
Requirement Modelling
Requirements Modeling is a process of documenting, analyzing, and
managing Requirements.
Requirements change throughout the project, so it is important to have a
way to track them and make sure everyone understands them.
The analysis model uses a combination of text and diagrammatic forms to depict requirements for data, functions, and behavior.
The analysis model validates the software requirements and represents the requirements in multiple dimensions.
The basic aim of analysis modeling is to create a model that represents the information, functions, and behavior of the system to be built.
Requirement Modelling
The requirement modeling action results in one or more of the following
types of models:
• Flow-oriented modeling – represents the functional elements of the
system and how they transform data as it moves through the system.
• Scenario-based modeling – represents the system from the various
system “actors” point of view
• Class-based modeling – represents the object-oriented classes
(attributes and operations) and the manner in which classes
collaborate to achieve system requirements.
• Behavioral modeling – depicts how the software behaves as a
consequence of external events.
Requirement Modelling
Elements of the Analysis Model

Scenario-based modeling: use case text, use case diagrams, activity diagrams, swim lane diagrams
Flow-oriented modeling: data flow diagrams, control-flow diagrams, control specifications, process specifications
Class-based modeling: class diagrams, analysis packages, CRC models
Behavioral modeling: state diagrams, sequence diagrams, collaboration diagrams
Flow-oriented modeling
– Provides the necessary information about how data objects are transformed by processing functions.
• Data Flow Model
• Control Flow Model
• Control Specification
• Process Specification
Data Flow Diagram
• Depicts how input is transformed into output as data objects move through a system.
• A DFD shows the flow of information (data) between various business processes.
• It focuses on where information comes from, where it goes, and how it gets stored in the system.
• A DFD may be further partitioned into different levels to show detailed information flow, e.g. level 0, level 1, level 2, etc.
• A DFD is a graphical representation of the "flow" of data through an information system, modelling its process aspects.
• It does not show information about process timing or whether processes operate in sequence or in parallel.
Flow-oriented modeling
Data Flow Diagram
-Consists of 4 main components:
i) External entities
Entities that are outside the system boundary and communicate with the system, either providing information to it or receiving information from it, are called external entities.
ii) Processes
All the necessary activities that make use of system information and are carried out within the system boundary are called processes.
A process transforms the system's input into output.
iii) Data flow
A data flow shows the flow of information between two objects or external entities.
The label of a data flow shows the transformed status of the data.
iv) Data store
A place where data is stored and retrieved within the system is called a data store. It can be a file, log book, database system, etc.
Data Flow Diagram (cont.)
Physical vs. Logical DFD
Logical data flow diagrams focus on what happens in a particular
information flow: what information is being transmitted, what entities
are receiving that info, what general processes occur, etc
The processes described in a logical DFD are business activities
A logical DFD doesn’t delve into the technical aspects of a process or
system, such as how the process is constructed and implemented
Physical data flow diagrams focus on how things happen in an
information flow. These diagrams specify the software, hardware, files,
and people involved in an information flow
Data Flow Diagram Levels
Data flow diagrams are also categorized by level. Starting with the most
basic, level 0, DFDs get increasingly complex as the level increases.
Level 0 DFDs, also known as context diagrams, are the most basic data
flow diagrams. They provide a broad view that is easily digestible but
offers little detail. Level 0 data flow diagrams show a single process node
and its connections to external entities.
Level 1 DFDs are still a general overview, but they go into more detail
than a context diagram. In level 1 DFD, the single process node from the
context diagram is broken down into sub-processes. As these processes
are added, the diagram will need additional data flows and data stores to
link them together.
Level 2+ DFDs simply break processes down into more detailed sub-
processes.
Data Flow Diagram
• Symbols and Notations Used in DFDs
• Three common systems of symbols are named after their creators:
• Yourdon and Coad
• Yourdon and DeMarco
• Gane and Sarson
• One main difference in their symbols is that Yourdon-Coad and Yourdon-DeMarco
use circles for processes, while Gane and Sarson use rectangles with rounded
corners, sometimes called lozenges
DFD rules and tips
• Each process should have at least one input and an output.
• Each data store should have at least one data flow in and one data flow out.
• Data stored in a system must go through a process.
• All processes in a DFD go to another process or a data store.
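
These rules are mechanical enough to check automatically. The sketch below is illustrative only (the node and flow representation is an assumption, not something from these notes): it stores a DFD as a set of typed nodes and directed flows, then flags any process or data store that lacks an input or an output.

# Sketch: a DFD as typed nodes plus directed data flows, checking two of the
# rules above: every process needs at least one input and one output, and
# every data store needs at least one flow in and one flow out.
# Node kinds "entity", "process", "store" are an assumed representation.

from collections import defaultdict

nodes = {
    "Customer": "entity",
    "Process Order": "process",
    "Orders": "store",
}
flows = [
    ("Customer", "Process Order"),   # order details
    ("Process Order", "Orders"),     # stored order
    ("Orders", "Process Order"),     # order lookup
    ("Process Order", "Customer"),   # confirmation
]

inputs, outputs = defaultdict(int), defaultdict(int)
for src, dst in flows:
    outputs[src] += 1
    inputs[dst] += 1

for name, kind in nodes.items():
    if kind in ("process", "store"):
        if inputs[name] == 0 or outputs[name] == 0:
            print(f"Rule violation: {kind} '{name}' lacks an input or an output")
        else:
            print(f"OK: {kind} '{name}' has {inputs[name]} in, {outputs[name]} out")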
Data Flow Diagram Levels – example level 0 (context) and level 1 diagrams.
Scenario-based model: UML models
Analysis modeling with UML begins with the creation of scenarios.
In scenario-based modeling, the system is represented from the user's point of view. Scenario-based elements are:
Use case diagrams
Activity diagrams
Swim lane diagrams
Use Case Diagrams
i) It describes how user interacts with the system to achieve certain goal.
ii) Consists of 3 basic elements – Actors, System and Goal
iii) Use case diagram describes various business activities carried out by a system.
Components of a use case diagram
1) Use case
- Represents various business activities performed in a system.
- Represented by an elliptical shape labeled with use-case name
2) Actor
- Any entity or real-world object that performs different functions in the given system.
- An actor in a use case diagram interacts with the use cases of the diagram.
- Represented by a stick figure outside the system boundary.
3) System Boundary
Defines the scope of system or limits of system
Represented by solid line rectangular box.
Use cases are drawn within the system boundary, whereas actors are outside of the system boundary.
Associations: A line between actors and use cases. In complex diagrams, it is
important to know which actors are associated with which use cases.
Relationships in use case diagram
i) Include
Use cases may contain the functionality of another use case as part of their normal processing.
In general, it is assumed that an included use case will be called every time the basic path is run. An include relationship is denoted by a dotted arrow with the arrowhead pointing towards the included use case.
<<include>>
Withdraw ---------------------------> Verify_pin

ii) Extend
One use case may be used to extend the behavior of another. It is represented by a dotted arrow with the arrowhead pointing towards the parent use case.
<<extend>>
Cancel order <-------------------------- Get approval
Relationships in use case diagram
iii) Generalization
It is a parent-child relationship. The child use case is an underlying process of the system, but it enhances the parent use case.
Represented by an arrow with a triangular arrowhead pointing towards the parent use case.
Use case diagram
DFD vs. Unified Modeling Language (UML)
• While a DFD illustrates how data flows through a system, UML is a
modeling language used in Object Oriented Software Design to
provide a more detailed view. A DFD may still provide a good starting
point, but when actually developing the system, developers may turn
to UML diagrams such as class diagrams and structure diagrams to
achieve the required specificity.
Activity diagram
Activity diagrams are similar to flowcharts.
Rounded rectangles imply specific system functions.
Arrows represent flow through the system.
Decision diamonds are used to depict branching decisions.
An activity diagram is used by developers to understand the flow of programs on a high level.
Basic components of an activity diagram
• Action: A step in the activity wherein the users or software perform a given task. Actions are
symbolized with round-edged rectangles.
• Decision node: A conditional branch in the flow that is represented by a diamond. It includes
a single input and two or more outputs.
• Control flows: Another name for the connectors that show the flow between steps in the
diagram.
• Start node: Symbolizes the beginning of the activity. The start node is represented by a black
circle.
• End node: Represents the final step in the activity. The end node is represented by an
outlined black circle.
Activity diagram
Join Symbol
Combines two concurrent activities and re-introduces them to a flow where only one activity occurs at a time. Represented with a thick vertical or horizontal line.

Fork Symbol
Splits a single activity flow into two concurrent activities. Represented with a thick bar with multiple outgoing arrowed lines.

Note Symbol
Allows the diagram creators or collaborators to communicate additional messages that
don't fit within the diagram itself. Leave notes for added clarity and specification.
Activity diagram
Example – Client login page activity diagram
Swimlane diagram
A swimlane diagram is a type of flowchart that delineates who does what
in a process.
Using the metaphor of lanes in a pool, a swimlane diagram provides clarity
and accountability by placing process steps within the horizontal or vertical
“swimlanes” of a particular employee, work group or department.
It shows connections, communication and handoffs between these lanes,
and it can serve to highlight waste, redundancy and inefficiency in a
process.
Swimlane diagram
Software Estimation Metrics



Management Spectrum
- Describes the management of a software project, i.e. how to make a project successful.
- The focus is on the 3 P's: People, Product, Process.
1) The People
The people of a project range from the manager to the developer, and from the customer to the end user.
People are key to the success of any project.
We need to remember the importance of each member of a project team.
This includes understanding all stakeholders, their personas, cultures, backgrounds, previous experiences with similar projects, and expectations.
Enterprises that attain high levels of maturity in people management have a greater probability of carrying out effective software engineering practices.
Management Spectrum
2) The Product
The product is the software that has to be designed and developed.
Product objectives and scope must be set for the successful development of the product.
The product can consist of both tangible and intangible components, such as shifting the company to a new place or getting new software into a company.
The project manager should clearly define the product scope to ensure a successful result, and manage the team members as well as the technical hurdles that he or she may encounter while building the product.
Management Spectrum
3) The Process
A software process provides the framework from which a complete plan for software development can be established.
A clearly defined process is the key to the success of any product.
It regulates how the team will go about its development in the respective time period.
The process involves several steps, such as the documentation, implementation, deployment, and interaction phases.
Software Metrics
A software metric is a measure of software characteristics which are
measurable or countable.
Software metrics are valuable for
-measuring software performance,
-planning work items,
-measuring productivity etc.
• There are 4 functions related to software metrics:
• Planning
• Organizing
• Controlling
• Improving
Software Metrics
Characteristics of Software metrics
• Quantitative: Metrics must possess quantitative nature. It means
metrics can be expressed in values.
• Understandable: Metric computation should be easily understood, and
the method of computing metrics should be clearly defined.
• Applicability: Metrics should be applicable in the initial phases of the
development of the software.
• Repeatable: The metric values should be the same when measured
repeatedly and consistent in nature.
• Economical: The computation of metrics should be economical.
• Language Independent: Metrics should not depend on any
programming language.
Software Metrics
Advantages of Software metrics
• Reduction in cost or budget.
• It helps to identify the particular area for improvising.
• It helps to increase the product quality.
• Managing the workloads and teams.
• Reduction in overall time to produce the product.
• It helps to determine the complexity of the code and to test the code
with resources.
• It helps in providing effective planning, controlling and managing of the
entire product.
Software Metrics
Disadvantages of Software metrics
• It is expensive and difficult to implement the metrics in some cases.
• The performance of the entire team, or of an individual from the team, can't be determined; only the performance of the product is determined.
• It can lead to measuring unwanted data, which is a waste of time.
• Measuring incorrect data leads to wrong decision making.
Software Metrics
Classification of Software metrics
• Product Metrics: Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective problem areas.
• It describes the characteristics of the product such as size, complexity, design features,
performance, and quality level.
• The ability of the team to control quality is evaluated.
• A working product is created at the end of each successive phase of the software
development process. At any step of development, a product can be measured. Metrics are
built for these items to determine whether a product is being developed in accordance
with user requirements.
• If a product fails to satisfy consumer expectations, the relevant steps are made in the
appropriate phase.
• Product metrics assist software engineers in detecting and correcting possible issues before
they cause catastrophic failures.
• Examples include lines of code, cyclomatic complexity, code coverage, defect density, and
code maintainability index.
Software Metrics
Classification of Software metrics
• Process Metrics: Process metrics pay particular attention to enhancing the long-
term process of the team or organization.
• Examples include the effectiveness of defect removal during development, the
pattern of testing defect arrival, and the response time of the fix process.
• They are used to measure the characteristics of methods, techniques, and tools
that are used for developing software.
• To improve any process
• measure specific attributes of the process
• develop a set of meaningful metrics based on these attributes, and
• then use the metrics to provide indicators
Software Metrics
Classification of Software metrics
• Process Metrics: (cont.)
To measure the efficiency and effectiveness of the software process, a set of metrics is
formulated based on the outcomes derived from the process. These outcomes are
• Number of errors found before the software release
• Defect detected and reported by the user after delivery of the software
• Time spent in fixing errors
• Work products delivered
• Human effort used
• Time expended
Process metrics are of two types, namely, private and public. Private Metrics are private to
the individual and serve as an indicator only for the specified individual(s). Defect rates by a
software module and defect errors by an individual are examples of private process metrics.
Some process metrics are public to all team members but private to the project. These
include errors detected while performing formal technical reviews and defects reported
about various functions included in the software
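
To make the first two outcomes concrete: a widely used process metric built from them is defect removal efficiency, commonly defined as DRE = E / (E + D), where E is the number of errors found before release and D is the number of defects reported by users after delivery. A minimal sketch, with purely illustrative counts (not from these notes):

# Sketch: defect removal efficiency (DRE), a common process metric derived
# from the outcomes listed above. E = errors found before release,
# D = defects reported after delivery. DRE approaches 1.0 as the process
# filters out more errors before the software ships.

def defect_removal_efficiency(errors_before_release, defects_after_delivery):
    e, d = errors_before_release, defects_after_delivery
    return e / (e + d)

# Illustrative numbers only:
print(round(defect_removal_efficiency(120, 15), 3))  # 0.889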
Software Metrics
Classification of Software metrics
• Process Metrics: (cont.)
Process is considered one of the factors that can improve software quality and organizational performance.
The figure shows that Product, People, and Technology have a deep impact on software quality and organizational performance. The skill and motivation of the people affect quality and performance; the complexity of the product affects team performance; and technology also has an impact.
The surrounding circle shows the environmental conditions, which include the
- development environment (e.g., integrated software tools),
- business conditions (e.g., deadlines, business rules), and
- customer characteristics (e.g., ease of communication and collaboration).
Software Metrics
Classification of Software metrics
• Project Metrics: Project metrics describe the project's characteristics and execution process. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity, as well as:
• Number of software developers
• Staffing patterns over the life cycle of the software
• Cost and schedule
• Productivity
The project manager monitors the project's progress using measures known as project
metrics. Various metrics, such as time, cost, and so on, are collected using data from previous
projects and utilized as an estimate for the new initiative.
Metrics which have been gathered from previous projects are used as a basis for new
projects.
The project manager monitors the project's progress on a regular basis, and effort, time, and cost are compared to the initial estimates of effort, time, and cost.
These indicators help lower development cost, effort, risk, and time.
The project's quality can also be improved. With an improvement in quality, there is a decrease in the number of errors, time, cost, and so on.
Software Metrics
Classification of Software metrics
• Project Metrics: (cont.)
The aims of project metrics are:
- to help reduce the development schedule through adjustments that avoid delays and lessen potential problems and risks;
- to help assess product quality on a regular basis and, when needed, change the technical approach to enhance quality.
Software Metrics
• Types of Software Metrics
• Internal metrics: Internal metrics are used to measure properties that
are deemed more important to a software developer than the users.
Lines of Code (LOC) is one example.
• External metrics: External metrics are used to measure features that
are deemed more important to the user than the software developers,
such as portability, reliability, functionality, usability, and so on.
• Hybrid Metrics: Metrics that mix product, process, and resource
metrics are known as hybrid metrics. Cost per FP is an example, where
FP stands for Function Point Metric.
Software Project Estimate
• Effective software project estimation is one of the most challenging and
important activities in software development.
• Proper project planning and control is not possible without a sound and
reliable estimate.
• Under-estimating a project leads to
• under-staffing it (resulting in staff burnout),
• under-scoping the quality assurance effort (results in low quality deliverables), and
• setting too short a schedule (resulting in loss of credibility as deadlines are missed)
• Over-estimating a project (giving it more resources than it really needs without sufficient scope) is likely to make the project
cost more than it should,
take longer to deliver than necessary (resulting in lost opportunities),
and delay the use of your resources on the next project.
Software Project Estimate
• Measurement can be divided into 2 categories:
• Direct measures (e.g. weight)
• Indirect measures (e.g. quality)
Similarly, direct measures of the software process involve the cost and effort applied.
Direct measures of the product include LOC (lines of code generated), speed of execution, memory size, etc.
Indirect measures of the product include functionality, quality, complexity, reliability, maintainability, etc.
Metrics for Size Estimation
• Estimation of the size of the software is an essential part of Software Project
Management. It helps the project manager to further predict the effort and
time which will be needed to build the project. Various measures are used in
project size estimation.
• The project size is a measure of the problem complexity in terms of the effort
and time required to develop the product. Currently two metrics are popularly
being used widely to estimate size:
1- lines of code (LOC)
2- function point (FP)
Line of Code (LOC)
• Simplest among all metrics available to estimate project size.
• Very popular because it is simplest to use.
• Project size is estimated by counting the number of source instructions
in the development program.
• Lines used for commenting the code and the header lines should be
ignored.
• Determining the LOC count at the end of a project is a very simple job.
• But accurate estimation of the LOC count at the beginning of a project is
very difficult.
• To estimate LOC count at the beginning, usually we divide the problem
into modules, each modules into sub-modules and so on, until the sizes
of the different leaf-level modules can be approximately predicted.
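
As a rough illustration of the counting rule above (ignore comment lines and blank lines), here is a minimal LOC-counter sketch for Python source files. The function name is an assumption, and the sketch deliberately ignores multi-line strings and docstrings, which a real counting tool would have to handle with a tokenizer:

# count_loc.py - minimal sketch of an LOC counter for Python source files.
# Counts only source instructions: blank lines and comment-only lines are
# ignored, per the LOC definition above.

def count_loc(path):
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

if __name__ == "__main__":
    import sys
    for filename in sys.argv[1:]:
        print(filename, count_loc(filename))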
Line of Code (LOC)
• Shortcomings of LOC
• Estimating LOC by analyzing the problem specification is difficult. Accurate estimation of LOC is possible only once the complete code has been developed. Since project planning needs to be done before development work begins, this metric is of little use to project managers.
• Two different source files having the same number of lines may not require the same effort. A file with complex logic requires more effort than one with simple logic. Based on LOC, proper estimation may not be possible.
• LOC is a numerical measurement of problem size. This metric varies to a large extent from programmer to programmer. An experienced professional may write the same logic in fewer lines than a novice programmer.
Functional point metrics
• One of the important advantages of using the function point metric is that
it can be used to easily estimate the size of a software product directly
from the problem specification.
• This is in contrast to the LOC metric, where the size can be accurately
determined only after the product has fully been developed.
• The conceptual idea behind the function point metric is that the size of a
software product is directly dependent on the number of different
functions or features it supports.
• A software product supporting many features would certainly be of larger
size than a product with fewer features. Each function, when
invoked, reads some input data and transforms it into the corresponding
output data.
Functional point metrics
• There are two types of functions − Data Functions and Transaction Functions
• There are five parameters in the function point analysis calculation:
• External Inputs (EI) – Transactional Function type
• External Outputs (EO) – Transactional Function type
• External Inquiries (EQ) – Transactional Function type
• Internal Logical Files (ILF) – Data Function type
• External Interface Files (EIF) – Data Function type

• External Input (EI): EI processes data or control information that comes from outside the application’s boundary. The EI is
an elementary process.
• External Output (EO): EO is an elementary process that generates data or control information sent outside the
application’s boundary.
• External Inquiry (EQ): EQ is an elementary process made up of an input-output combination that results in data
retrieval.
• Internal Logical File (ILF): A user-identifiable group of logically related data or control information maintained
within the boundary of the application.
• External Interface File (EIF): A user-identifiable group of logically related data referenced by the application but
maintained within the boundary of another application.
Functional point metrics

Measurement Parameter                        Examples
Number of External Inputs (EI)               Input screens and tables
Number of External Outputs (EO)              Output screens and reports
Number of External Inquiries (EQ)            Prompts and interrupts
Number of Internal Logical Files (ILF)       Databases and directories
Number of External Interface Files (EIF)     Shared databases and shared routines

Weights of 5 Functional Point Attributes

Measurement Parameter                        Low   Average   High
Number of external inputs (EI)               3     4         6
Number of external outputs (EO)              4     5         7
Number of external inquiries (EQ)            3     4         6
Number of internal logical files (ILF)       7     10        15
Number of external interface files (EIF)     5     7         10

FP = UFP * CAF

Functional point metrics
Calculate Function Point (FP) by using given formula.
• Final FP = UFP × CAF, where UFP (Unadjusted Function Points) is the
sum, over all five parameters, of (count of the parameter × its weight factor),
and CAF (Complexity Adjustment Factor) = [0.65 + 0.01 * ∑(Xi)],
i.e. FP = UFP * [0.65 + 0.01 * ∑(Xi)], where Xi (i = 1 to 14) are complexity adjustment
values depending upon responses to 14 questions. These 14 questions are
answered on a scale ranging from 0 to 5.
To calculate the productivity, documentation, and cost per
function of the software application:
Productivity (P) = FP / Effort
Documentation (D) = PD / FP, where PD is the total Pages of Documentation
Cost per Function = Cost / Productivity
Functional point metrics
• Calculate the function point, productivity, documentation, and cost per function
for a software application with the 14 complexity adjustment factors 5, 1, 0, 4, 3, 5, 4, 3, 4, 5,
2, 3, 4, 2, using the following data: the number of EI (Avg): 22, the number of
EO (Avg): 45, the number of EQ (High): 06, the number of ILF (Avg): 05, the number
of EIF (Low): 02, Effort: 37 PM, software technical documents: 250 pages, user-related
documents: 120 pages, and budgeting/cost: $7520 per month.

Solution: UFP = (22*4) + (45*5) + (06*6) + (05*10) + (02*5)
= 409
Function Point (FP) = UFP * [0.65 + 0.01*∑ (Xi)]
= 409 *[0.65 + 0.01*(5+1+0+4+3+ 5+4+3+4+5+ 2+3+ 4+2)]
= 409 * [0.65 + 0.01*45]
= 409 * [0.65 + 0.45]
= 409* 1.10
= 450
Functional point metrics
Productivity (P) = FP/ Effort
=450/37
= 12.16

Total Pages of Documentation (PD) = Software Technical Documents + User-related documents
= 250 + 120
= 370 pages
Documentation (D) = PD/FP
= 370/450
=0.82
Cost per Function = Cost / Productivity
= 7520 / 12.16
= $618.42
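
The arithmetic above is easy to mechanize. A minimal sketch in Python, using the counts, weights, and complexity adjustment values from this worked example:

```python
# function_point.py - sketch of the FP computation from the worked example.

# (count, weight) pairs: EI(Avg), EO(Avg), EQ(High), ILF(Avg), EIF(Low)
counts_and_weights = [(22, 4), (45, 5), (6, 6), (5, 10), (2, 5)]
xi = [5, 1, 0, 4, 3, 5, 4, 3, 4, 5, 2, 3, 4, 2]  # 14 complexity adjustment values

ufp = sum(count * weight for count, weight in counts_and_weights)  # 409
caf = 0.65 + 0.01 * sum(xi)                                        # 1.10
fp = ufp * caf                                                     # ~450

productivity = fp / 37                    # FP per person-month, ~12.16
documentation = (250 + 120) / fp          # pages per FP, ~0.82
cost_per_function = 7520 / productivity   # ~$618.42

print(round(fp), round(productivity, 2), round(documentation, 2),
      round(cost_per_function, 2))
```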

COCOMO Model
• COCOMO (Constructive Cost Model) is a regression model based on
LOC, i.e number of Lines of Code.
• It is a procedural cost estimate model for software projects
• It was proposed by Barry Boehm in 1981 and is based on the study of
63 projects, which makes it one of the best-documented models.
• The COCOMO model uses two basic parameters, effort and development time, to characterize any software project:
• Effort: the amount of work required to complete the task successfully. It is
expressed in the unit person-month (PM).
• Development time: the time required to complete the task. It is
expressed in units of time such as months, weeks, or days. It depends on the
effort: if the number of persons working is greater, the
development time is correspondingly lower.
COCOMO Model
Software projects under COCOMO model strategies are classified into 3 categories, organic,
semi-detached, and embedded.
Organic: A software project is said to be an organic type if-
• Project is small and simple.
• Project team is small with prior experience.
• The problem is well understood and has been solved in the past.
• Requirements of the project are not rigid; an example of such a mode is a payroll processing system.
Examples: simple business systems, simple inventory management systems, and data processing systems.
Semi-Detached Mode: A software project is said to be a Semi-Detached type if-
• Project is of intermediate size and complexity.
• Project team requires more experience, better guidance and creativity.
• The project has mixed rigid and flexible requirements; an example of such a mode is a transaction processing
system which has some fixed requirements.
• It also combines elements of the organic mode and the embedded mode.
Examples: developing a new operating system (OS), a Database Management System (DBMS), or a difficult
inventory management system.
COCOMO Model
Embedded Mode: A software project is said to be an Embedded mode type if-
• A software project has fixed requirements of resources .
• Product is developed within very tight constraints.
• A software project requiring the highest level of complexity, creativity, and experience requirement fall under this
category.
• Such mode software requires a larger team size than the other two models.
For Example: ATM, Air Traffic control.
COCOMO Model
Types Of COCOMO models
According to Boehm, software cost estimation should be done through three stages:
COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. Any of the three forms can
be adapted according to our requirements.
1.Basic Model
2.Intermediate Model
3.Detailed Model
• Basic COCOMO Model: The following expressions give the basic COCOMO estimation model:

Effort (E) = a * (KLOC)^b
Time (T) = c * (E)^d
People required (P) = E / T
Where,
• E is the effort applied in person-months,
• T is the development time in months,
• P is the total number of persons required to accomplish the project.
• The constants a, b, c, and d for the Basic Model take different values for the different categories of system.
COCOMO Model
Basic Model
The constants a,b,c,and d vary for each model type. The following are the constant values for the basic model:
Software Project    a     b      c     d
Organic             2.4   1.05   2.5   0.38
Semi-Detached       3.0   1.12   2.5   0.35
Embedded            3.6   1.20   2.5   0.32

Eg: Suppose a project was estimated to be 400 KLOC. Let us calculate its effort, time, and the number of
people required, considering the project is of organic type:
Effort (E) = 2.4 * (400)^1.05 = 1295.31 person-months
Time (T) = 2.5 * (1295.31)^0.38 = 38.07 months
People required (P) = 1295.31 / 38.07 = 34.02 persons
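
A minimal sketch of the basic model, using the constants from the table above, reproduces this example:

```python
# basic_cocomo.py - sketch of the Basic COCOMO model (constants from the table).

CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = CONSTANTS[mode]
    effort = a * kloc ** b   # person-months
    time = c * effort ** d   # months
    people = effort / time   # average staffing level
    return effort, time, people

e, t, p = basic_cocomo(400, "organic")
print(f"Effort = {e:.2f} PM, Time = {t:.2f} months, People = {p:.2f}")
# -> roughly 1295 PM, 38 months, 34 persons
```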
COCOMO Model
Intermediate Model
• The intermediate model is an extension of the basic model and includes a set of cost drivers to calculate the estimates with better
accuracy. The effort equation includes an effort adjustment factor (EAF) that is calculated from the cost drivers.
• The formulae to calculate these entities are:
Effort (E) = a * (KLOC)^b * EAF
Time (T) = c * (E)^d
• The effort is measured in person-months and time in months. The constants a, b, c, and d vary for each model type. The following are the
constant values for the intermediate model:

Project Type      a     b      c     d
Organic           3.2   1.05   2.5   0.38
Semi-detached     3.0   1.12   2.5   0.35
Embedded          2.8   1.20   2.5   0.32


COCOMO Model
Intermediate Model
Cost drivers of intermediate model
1) Product attributes
The product attributes are as follows:
• Required software reliability extent
• Size of the application database
• The complexity of the product

Product Attribute   Very Low   Low    Nominal   High   Very High   Extra High
RELY                0.75       0.88   1.00      1.15   1.40        ...
DATA                ...        0.94   1.00      1.08   1.16        ...
CPLX                0.70       0.85   1.00      1.15   1.30        1.65


COCOMO Model
Intermediate Model
Cost drivers of intermediate model
2) Hardware attributes
The hardware attributes are as follows:
• Run time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time

Hardware Attribute   Very Low   Low    Nominal   High   Very High   Extra High
TIME                 ...        ...    1.00      1.11   1.30        1.66
STOR                 ...        ...    1.00      1.06   1.21        1.56
VIRT                 ...        0.87   1.00      1.15   1.30        ...
TURN                 ...        0.87   1.00      1.07   1.15        ...


COCOMO Model
Intermediate Model
Cost drivers of intermediate model
3) Personnel attributes
The personnel attributes are as follows:
• Analyst capabilities
• Software engineering capabilities
• Applications experience
• Virtual machine experience
• Programming language experience

Personnel Attribute   Very Low   Low    Nominal   High   Very High   Extra High
ACAP                  1.46       1.19   1.00      0.86   0.71        ...
AEXP                  1.29       1.13   1.00      0.91   0.82        ...
PCAP                  1.42       1.17   1.00      0.86   0.70        ...
VEXP                  1.21       1.10   1.00      0.90   ...         ...
LEXP                  1.14       1.07   1.00      0.95   ...         ...


COCOMO Model
Intermediate Model
Cost drivers of intermediate model
4) Project attributes
The project attributes are as follows:
• Use of software tools
• Application of software engineering methods
• Required development schedule

Project Attribute   Very Low   Low    Nominal   High   Very High   Extra High
MODP                1.24       1.10   1.00      0.91   0.82        ...
TOOL                1.24       1.10   1.00      0.91   0.83        ...
SCED                1.23       1.08   1.00      1.04   1.10        ...


COCOMO Model
Intermediate Model
Example: 1
For a given project estimated with a size of 300 KLOC, calculate the effort and the scheduled development time,
considering a developer having very high application experience and very low experience in programming
(semi-detached mode).
• Solution –
Given: the estimated size of the project is 300 KLOC.
Developer having very high application experience (AEXP): 0.82 (as per the table above)
Developer having very low experience in programming (LEXP): 1.14 (as per the table above)
EAF = 0.82 * 1.14 = 0.9348
Effort (E) = a * (KLOC)^b * EAF = 3.0 * (300)^1.12 * 0.9348 = 1668.07 PM
Scheduled Time (T) = c * (E)^d = 2.5 * (1668.07)^0.35 = 33.55 months
Example 2:
Suppose a project was estimated at 400 KLOC. Let us calculate its effort, time, and the number of
people required, considering the project is of organic type with nominal complexity, where the developer
has high virtual machine experience. (A worked sketch follows below.)
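
A minimal sketch of the intermediate model that works through Example 2; the multipliers 1.00 (nominal complexity, CPLX) and 0.90 (high virtual machine experience, VEXP) are read from the rating tables above:

```python
# intermediate_cocomo.py - sketch of the Intermediate COCOMO model.

CONSTANTS = {
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, multipliers):
    """multipliers: cost-driver values read from the rating tables."""
    a, b, c, d = CONSTANTS[mode]
    eaf = 1.0
    for m in multipliers:  # EAF is the product of all selected cost drivers
        eaf *= m
    effort = a * kloc ** b * eaf   # person-months
    time = c * effort ** d         # months
    return effort, time, effort / time

# Example 2: organic, 400 KLOC, CPLX = 1.00 (nominal), VEXP = 0.90 (high)
e, t, p = intermediate_cocomo(400, "organic", [1.00, 0.90])
print(f"Effort = {e:.2f} PM, Time = {t:.2f} months, People = {p:.2f}")
# -> roughly 1554 PM, 41 months, 38 persons
```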
COCOMO Model
• Detailed model
• The detailed model is a combination of both the basic model and the intermediate model. The model is decomposed into multiple
modules, and the COCOMO model is applied to them individually. This model uses various effort multipliers for each cost driver attribute,
and the cost is calculated at each stage separately.
• The six stages of the detailed model are as follows:
• Planning and requirements
• System design
• Detailed design
• Module code and test
• Integration and test
• Cost constructive model
Project Scheduling and Tracking
It is the technique that decides the sequence of tasks to be done and assigns
organizational resources to complete those tasks in a predefined timeframe.
Effective project scheduling leads to project success, reduced cost,
and increased customer satisfaction. Eg: Gantt chart.
Project scheduling establishes the ‘Road Map’ for the project manager.
• Advantages of Project Scheduling :
• It ensures that everyone remains on the same page as tasks get completed.
• It helps in identifying issues and concerns early, such as lack or
unavailability of resources.
• It also helps to identify task relationships and to monitor progress.
• It provides effective budget management and risk mitigation.
Work Breakdown Structure (WBS)
⚫A work breakdown structure (WBS) is a way to organize the work into
smaller, more manageable pieces.
⚫A Work Breakdown Structure generally breaks down the main project
objective into smaller, manageable parts (work items) for specific
departments to produce their tasks, with details including budget, required
resources, and the people in charge of each task.
⚫A work item is also called a task. An example of a work item is creating a
database table with test data.
⚫The WBS may also picture a project subdivided into hierarchical units of
tasks, subtasks, work packages etc
⚫WBS is not limited to a specific field, this methodology can be used for
any type of project management.
Work Breakdown Structure (WBS)
Reasons for creating the WBS in project:
-> do accurate project organization
-> assign accurate task of responsibilities to the project team
-> do accurate estimation of the cost, time and risk involved in the project.
-> illustrate the project scope
-> plan the project according to the availability of resources
Work Breakdown Structure (WBS)
Process to create WBS
1. List of all tasks
o List of all tasks to be performed within the project in the form of work packages.
2. The tasks clusters
o The defined tasks are clustered according to subject areas or time schedule
3. Define work packages
o Following clustering, the identified tasks are summarized in work packages.
4. Assignment of responsibilities to the work packages
o The assignment of responsibilities to the work packages takes place in the team
with the technical experts.
5. Define start and end dates of work packages
6. Documentation of the created Work Breakdown Structure and
assignment of unique work package numbers
Scheduling Techniques
There are 2 primary scheduling techniques available in software engineering.
CPM – Critical Path Method
PERT – Program Evaluation Review Technique
Critical Path Method
▪ Critical path is the longest sequence of activities in a project plan, which must be completed on
time for the project to complete on due date.
▪ Although many projects have only one critical path, some projects may have more than one critical
path depending on the flow logic used in the project.
▪ Critical path method is based on mathematical calculations and it is used for scheduling project
activities.
▪ In the critical path method, the critical activities of a program or a project are identified.
▪ These are the activities that have a direct impact on the completion date of the project.
▪ If a delay occurs in any of the activities on the critical path, it will ultimately lead to a delay
in the project deliverables.

Advantages of CPM
-> visual representation of the project activities
-> to calculate Project deadlines
-> to track the critical activities.
Scheduling Techniques
Critical Path Method (cont.)
Key Steps in Critical Path Method :
Step 1: Activity specification
WBS is used to find out which activities are involved in the project and hence it is the main input
of CPM. Generally, higher-level activities are selected for critical path method to reduce the complexity
of CPM.
Step 2: Activity sequence establishment
The correct activity sequence is recognized. The steps to recognize the sequence in CPM are as below:
-> identify the tasks that take place before the critical task happens,
-> identify the tasks that should be completed at the same time,
-> identify the tasks that should happen immediately after the critical task.
Step 3: Network diagram
Once the activity sequence is identified, the network diagram can be drawn.
A network diagram is a graphical representation of the project and is composed of a series
of connected arrows and boxes to describe the inter-relationship between the activities
involved in the project.
Step 4: Estimates for each activity
This is the direct input from the WBS based estimation sheet.
Scheduling Techniques
Critical Path Method (cont.)

Step 5: Identification of the critical path


▪ Earliest start time (ES) - The earliest time an activity can start once the previous
dependent activities are over.
▪ Earliest finish time (EF) - ES + activity duration.
▪ Latest finish time (LF) - The latest time an activity can finish without delaying the
project.
▪ Latest start time (LS) - LF - activity duration.
The float (slack) of an activity is the time between its earliest (ES) and latest (LS)
start times, or equivalently between its earliest (EF) and latest (LF) finish times. Within its
float, an activity can be delayed without delaying the project end date; activities with
zero float lie on the critical path.
Step 6: Critical path diagram to show project progresses
Scheduling Techniques
Critical Path Method (cont.)

The longest path in the network above is S-B-C-E-G-E’ with a duration of 22 weeks. Hence,
path S-B-C-E-G-E’ is the critical path of the above schedule network diagram
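
To make the forward and backward passes of Step 5 concrete, here is a minimal sketch of the CPM computation; the activity graph and durations below are invented for illustration:

```python
# cpm.py - minimal sketch of the Critical Path Method on a made-up graph.

# activity -> (duration, list of predecessors); a hypothetical small project
activities = {
    "A": (2, []),
    "B": (5, []),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
    "E": (1, ["D"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for act, (dur, preds) in activities.items():  # insertion order is topological here
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    succs = [s for s, (_, ps) in activities.items() if act in ps]
    lf[act] = min((ls[s] for s in succs), default=project_end)
    ls[act] = lf[act] - dur

# Critical activities have zero float (LS - ES == 0).
critical = [a for a in activities if ls[a] == es[a]]
print("Project duration:", project_end)  # 10
print("Critical path:", critical)        # ['B', 'D', 'E']
```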
Scheduling Techniques
Project Evaluation and Review technique (PERT)
▪ PERT is a project management technique used to plan, organize and coordinate tasks during the
project.
▪ The Program Evaluation Review Technique (PERT) breaks down the individual tasks of a project for
analysis.
▪ PERT charts are considered preferable to Gantt charts because they identify task dependencies, but
they're often more difficult to interpret.
▪ A PERT chart is a project management tool that provides a graphical representation of a project's
timeline.
▪ A PERT chart consists of numbered nodes (circles or squares) that represent events. The nodes are
linked by directional lines which represent the tasks in the project. The direction of the arrow represents
the sequence of the tasks.

PERT has defined four types of time required to accomplish an activity:


▪ Optimistic time: The minimum possible time during which the activity can be finished. The assumption
is made that all the required resources are available and all the previous activities are completed as
planned.
▪ Pessimistic time: The maximum possible time. Resource unavailability, a lot of rework to be done, etc.
are considered when such an estimate is derived.
▪ Most likely time: The best estimate of the time. Indicates a reasonable estimate of the best-case
scenario.
▪ Expected time: The average time the task would require if it were repeated a number of
times over an extended period; commonly computed as TE = (Optimistic + 4 × Most likely + Pessimistic) / 6.
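
A tiny sketch of the classic three-point PERT estimate built from these values (the task durations are hypothetical):

```python
# pert.py - sketch of the PERT three-point (beta) estimate.

def pert_expected_time(optimistic, most_likely, pessimistic):
    """Weighted average used in classic PERT: TE = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Rough measure of the uncertainty of the estimate."""
    return (pessimistic - optimistic) / 6

# hypothetical task: 4 days best case, 6 days most likely, 14 days worst case
te = pert_expected_time(4, 6, 14)
print(f"Expected time = {te:.1f} days, std dev = {pert_std_dev(4, 14):.1f}")
# -> Expected time = 7.0 days, std dev = 1.7
```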
Scheduling Techniques
Project Evaluation and Review technique (PERT)

In figure, the tasks between nodes 1,2,4,8 and 10 have to be completed in series. These are called as
dependent or serial tasks.
The tasks between nodes 1 and 2 , and nodes 1 and 3 are not dependent on each other and hence they
can be started simultaneously. These tasks are called parallel or concurrent tasks.
Dotted lines indicate dependent tasks that do not require resources.
Tracking the Schedule
Tracking can be done in a number of ways:
➢By holding periodic project status meetings.
➢Evaluating the reviews conducted during the software engineering process.
➢Setting tentative project deadlines.
➢Comparing the actual start date to the planned start date for every project task.
➢Informal meetings with the resources to obtain their progress on the given
assignment.
Project tracking is done by experienced project managers.
If any problem occurs, the PM has to control it by identifying the issue, adding more
resources to the problem area, redeploying staff, or changing the project schedule.
If a problem related to the project deadline occurs, the PM may apply a control technique
called time-boxing.
The time-boxing approach recognizes that the whole product may not be deliverable by the
predefined target date, so an incremental software process is taken into consideration.
Tracking the Schedule
Time Line Charts
Timeline diagrams help managers to get a high-level look at their tasks or to
view any time-related activities.
They help to visualize 3 main timeframes:
- Planned time
- Actual time: the in-progress time that shows how long the tasks have been in progress
- Forecasted time: a projection of when the remaining tasks will complete
Usually managers use timeline charts for project and task planning, road
mapping and task management. With the help of timeline charts, they can plan
many targets to be achieved and get forecasts.
Timeline Components: Timeline components are the diagrammatic
representations of the tasks. Timelines may be highly detailed or simple. They can
contain hundreds of tasks and subtasks or have only a few deliverables.
Tracking the Schedule
Time Line Charts (cont.)
To build a timeline chart, managers need to pay attention to following points:
• The set of tasks and objectives to be completed.
• Approved dates and deadlines
• Dependencies between tasks
• Expected duration of tasks
Timelines come in many forms, but the most popular option is the Gantt chart.
A Gantt chart includes horizontal bars which represent the duration of tasks.
Purpose of Timeline chart
- Makes sure everything is planned properly and runs according to the given schedule.
- Works well even for projects with many people, and does not require much time or effort to
form a chart.
Gantt Chart
▪ One of the oldest but still one of the most useful methods of presenting
project schedule information is the Gantt chart, developed around 1917 by Henry L.
Gantt, a pioneer in the field of scientific management.
▪ The Gantt chart was invented as a scheduling aid. Planners sometimes attempt to
plan directly with Gantt charts, a charting device commonly used to display project
schedules.
▪ In essence, the project’s activities are shown on a horizontal bar chart with the
horizontal bar lengths proportional to the activity durations.
▪ The Gantt chart shows planned and actual progress for a number of tasks displayed
as bars against a horizontal time scale. The activity bars are connected to
predecessor and successor activities with arrows.
▪ It is a particularly effective and easy-to-read method of indicating the actual current
status for each of a set of tasks compared to the planned progress for each item of
the set.
Gantt Chart
Earned Value Analysis
• EVA is used to understand how the project is progressing.
• It is a measure of progress used to evaluate the percentage of completeness.
• It is used to estimate the progress of a project based on earnings (money), and
schedules are evaluated on the basis of EVA.
Key elements of EVA
1. Planned Value (PV): The allocated cost for the project which is approved. Also known as
Budgeted Cost of Work Scheduled (BCWS)
2. Earned Value (EV) : the budgeted value of the completed work packages. Also known as
Budgeted Cost of Work Performance at a specified point (BCWP)
3. Actual Cost (AC): The actual cost incurred during the execution of the project work. Also
known as the Actual Cost of Work Performed (ACWP).
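
From these three elements the standard earned-value indicators can be derived. A minimal sketch (the PV/EV/AC figures are hypothetical):

```python
# eva.py - sketch of standard earned-value indicators from PV, EV and AC.

def earned_value_indicators(pv, ev, ac):
    return {
        "schedule_variance": ev - pv,  # SV > 0 means ahead of schedule
        "cost_variance": ev - ac,      # CV > 0 means under budget
        "spi": ev / pv,                # Schedule Performance Index
        "cpi": ev / ac,                # Cost Performance Index
    }

# hypothetical status: $50,000 of work planned, $45,000 earned, $48,000 spent
print(earned_value_indicators(50_000, 45_000, 48_000))
# -> SV = -5000 (behind schedule), CV = -3000 (over budget), SPI = 0.9, CPI ~ 0.94
```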
Module 4
Software Design

APSIT Prof. Krishnapriya S


Software Design
- It is a phase in software engineering which develops the blueprint for the
construction of the software system.
- IEEE definition for software design
- a process of defining the architecture, components, interfaces,
and other characteristics of a system or component and the result of that
process.
- Design concepts must be clear before design practices are applied.
- Good software design should exhibit:
- Firmness – a program should not have any bugs that inhibit its function.
- Commodity – a program should be suitable for the purposes for which it was
intended.
- Delight – The experience of using the program should be pleasurable one.

Software design model
Software design model consists of 4 designs.
1. Data/class design
2. Architectural design
3. Interface design
4. Component design

translating requirement model into design model

Qualities of good design
Innovative
- a design can be a completely new design or a redesign of an existing
product. (new design – market value; redesign – improves quality)
Functional
- good design fulfils all its intended functions.
Honest
- an honest design expresses the functions and values it offers.
User – Oriented
- intended to improve solution to user problem.
Correctness
should correctly achieve all required functionalities as per SRS

1. Should not suffer from “Tunnel Vision” –

While designing, the process should not suffer from “tunnel vision”, which means
it should not focus only on completing or achieving the aim but should also
consider other effects.
2. Traceable to analysis model –

The design process should be traceable to the analysis model which means it should
satisfy all the requirements that software requires to develop a high-quality product.
3. Should not “Reinvent The Wheel” –

The design process should not reinvent the wheel, which means it should not waste
time or effort in creating things that already exist. Doing so would needlessly
increase the overall development effort.
4. Minimize Intellectual distance –

The design process should reduce the gap between real-world problems and
software solutions for that problem – design should be self explanatory.
5. Exhibit uniformity and integration –

The design should display uniformity, which means it should be uniform throughout,
without change in style. Before design work begins, rules of style and format must be
defined for the design team. Integration means the design should combine all parts of the
software, i.e. subsystems, into one system.
6. Accommodate change –

The software should be designed in such a way that it accommodates the change
implying that the software should adjust to the change that is required to be done as
per the user’s need.
7. Degrade gently –

The software should be designed in such a way that it degrades gracefully which
means it should work properly even if an error occurs during the execution.
8. Assessed or quality –

The design should be assessed or evaluated for the quality meaning that during the
evaluation, the quality of the design needs to be checked and focused on.
9. Review to discover errors –

The design should be reviewed which means that the overall evaluation should be
done to check if there is any error present or if it can be minimized
10. Design is not coding and coding is not design –

Design means describing the logic of the program to solve the problem, while coding
is the implementation of that design in a programming language.


Design Concepts
• Abstraction
• Architecture
• Patterns
• Modularity
• Information hiding
• Concurrency
• Functional independence
• Refinement
• Refactoring
• Design classes

Design Concepts
Abstraction
• - multiple levels of abstraction when we are considering modular
solution for a problem.
• - highest level of abstraction – solution in broad terms
• - lower levels of abstraction – detailed description of solution.
Procedural abstraction
Refers to a sequence of instructions that have a specific and limited
function.
Data abstraction
It is a named collection of data that describes a data object.
Procedural abstraction makes use of the information contained in the
attributes of data abstraction.
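
A small illustration of the two kinds of abstraction (the Account example and all names in it are invented for illustration):

```python
# abstraction_demo.py - procedural vs. data abstraction (illustrative).

from dataclasses import dataclass

# Data abstraction: a named collection of data that describes a data object.
@dataclass
class Account:
    owner: str
    balance: float

# Procedural abstraction: a named sequence of instructions with a specific,
# limited function; it uses the attributes of the data abstraction.
def deposit(account: Account, amount: float) -> None:
    account.balance += amount

acct = Account("Asha", 100.0)
deposit(acct, 50.0)
print(acct.balance)  # 150.0
```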
Design Concepts
Architecture
Software architecture is the
- structure of program components (modules)
- the manner in which these components interact
- the structure of data that are used by the components
It can be represented using a number of different models.
Structural models
- represent architecture as an organized collection of program
components.
Framework models
increase the level of design abstraction by identifying repeatable design
frameworks in similar types of application.

Design Concepts
Architecture (cont.)
Dynamic models
- address the behavioural aspects of the program architecture.
Process models
- focus on the design of the business or technical process that the
system must accommodate.
Functional models
- used to represent the functional hierarchy of a system.

Design Concepts
Patterns
- describes a design structure that solves a particular design problem
within a specific context.
Design pattern enables a designer to determine whether the pattern :
- is applicable to the current work
- can be reused
- can serve as a guide for developing a similar, but functionally or
structurally different pattern.

Design Concepts
Patterns
- describes a design structure that solves a particular design problem
within a specific context.
Design pattern enables a designer to determine whether the pattern :
- is applicable to the current work
- can be reused
- can serve as a guide for developing a similar, but functionally or
structurally different pattern.

Design Concepts
Modularity
- Software is divided into separately named and addressable components,
called modules that are integrated to satisfy problem requirements.
- Modularity allows a program to be intellectually manageable.
Modular design
- easier to change
- easier to plan, build
- easier to maintain
- software increments can be defined and delivered
- changes can be easily accommodated
- testing and debugging can be conducted more efficiently.
Design Concepts
Modularity
Modularity and software cost

The effort (cost) to develop individual software modules decreases as the number of
software modules increases. However, as the number of modules grows, the effort
(cost) to integrate the modules also grows. As shown in the figure, there is a number,
M, of modules that results in the minimum total development cost.

Design Concepts
Information Hiding
- Information hiding states that modules should be specified and
designed so that the information (algorithm and data) contained within a
module is inaccessible to other modules that have no need of such
information.
Hiding defines and enforces access constraints to both procedural details
within a module and any data structure used by the module.
Information hiding is beneficial when modification is required during
testing and later in maintenance. Since data and procedures are hidden
from other parts of software, inadvertent errors introducing during
modification are less likely to propagate.
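
A minimal sketch of information hiding (the Stack example is illustrative; note that Python enforces hiding only by convention):

```python
# info_hiding_demo.py - sketch of information hiding (illustrative).

class Stack:
    """Clients use push/pop; the internal list representation stays hidden."""

    def __init__(self):
        self._items = []  # leading underscore marks an internal detail

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Callers depend only on the interface; replacing the internal list with a
# linked structure later would not affect them.
s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```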

Design Concepts
Information Hiding
-Advantages
• Results in low coupling
• Put emphasis on communication by controlled interfaces
• Reduces the possibility of adverse effects
• Controls the impacts of changes in one components to others
• Results in higher quality software

Design Concepts
Concurrency
Multiple tasks must be executed concurrently to utilize the resources
efficiently.
Every system should be designed in such a way that it should facilitate
multiple processes to execute concurrently.
For example, if the currently executing process is waiting for some
resources, the system must be able to execute any other process in the
mean time.

Design Concepts
Functional independence
The concept of functional independence is a direct outgrowth of modularity, abstraction,
and information hiding.
Functional independence is achieved by developing functions that perform only one
kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation more accessible and
faster.
The independent modules are easier to maintain, test, and reduce error propagation
and can be reused in other programs as well.
Thus, functional independence is a good design feature which ensures software quality.
• It is measured using two criteria:
• Cohesion: It measures the relative function strength of a module. A cohesive module
performs a single task and it requires a small interaction with the other components
in other parts of the program.
• Coupling: It measures the relative interdependence among modules.
Design Concepts
Refinement
Refinement is a top-down design approach.
It is a process of elaboration.
A program is developed by successively refining levels of procedural details.
A hierarchy is established by decomposing a statement of function in a stepwise manner
till the programming language statements are reached.

Refactoring
It is a reorganization technique which simplifies the design of components without
changing its function behaviour.
Refactoring is the process of changing a software system in a way that does not
change the external behaviour of the code yet improves its internal structure.
When software is refactored, the existing design is examined for redundancy, unused
design elements, inefficient or unnecessary algorithms, poorly constructed data
structures, or any other design failure that can be corrected to get a better design.

Design Concepts
Design Classes
The model of software is defined as a set of design classes.
Every class describes the elements of problem domain and that focus on features of
the problem which are user visible.
Five different types of design classes.
User Interface classes – Define all abstractions that are necessary for human-computer interaction.
Business domain classes – Identify the attributes and services (methods) that are required to implement
business domain.
Process classes – implement lower-level business abstractions required to fully manage the business
domain classes.
Persistent classes – represent data stores (eg: database) that will persist beyond the execution of the
software.
System classes – implement software management and control functions that enable the system to
operate and communicate within its computing environment and with the outside world.

Effective Modular Design
Modularity can be defined as dividing the software into distinctively
named and addressable components which are also called as modules.
A large program is divided into a set of distinct modules in such a way that
each module can be developed independent of other modules.
After developing the individual modules, all these modules are
integrated together to fulfill the software requirements.
If the number of modules is very large, more effort is required to
integrate them.
Modularizing a design helps to plan the development in a more efficient
manner, accommodate changes easily, carry out testing and debugging
effectively, and conduct maintenance work without adversely affecting the
functioning of the software.

Effective Modular Design
Advantages of Modularization
Maintaining smaller components is very easy.
Program can be divided according to the functional aspects.
Level of abstraction can be carried out as per the requirement.
Components with high cohesion can be re-used again.
Concurrent execution can be made possible.
Security can be achieved.

Effective Modular Design
Functional Independence
In order to build a software with effective modular design there is a factor “Functional
Independence” which comes into play.
The meaning of Functional Independence is that a function is atomic in nature so that it
performs only a single task of the software without or with least interaction with other
modules.
Functional Independence is considered as a sign of growth in modularity i.e., presence of larger
functional independence results in a software system of good design and design further affects
the quality of the software.
• Benefits of Independent modules/functions in a software design:
Since the functionality of the software has been broken down to an atomic level,
developers get a clear requirement for each and every function, and hence designing the
software becomes easier and less error-prone.
• As the modules are independent they have limited or almost no dependency on other
modules. So, making changes in a module without affecting the whole system is possible in
this approach.
Error propagation from one module to another and further in whole system can be neglected
and it saves time during testing and debugging.
• Independence of modules of a software system can be measured using 2 criteria : Cohesion,
and Coupling.

Effective Modular Design
Functional Independence
Cohesion is a measure of the strength of the relationship between various
functions within a module. It is of 7 types, which are listed below in the
order of high to low cohesion.

Cohesion
Cohesion refers to the degree to which elements within a module work together to fulfill a
single, well-defined purpose.
Good system design must have high cohesion between the components of the systems.
High cohesion means that elements are closely related and focused on a single purpose, while
low cohesion means that elements are loosely related and serve multiple purposes.

It is of 7 types which are listed below in the order of high to low cohesion.

• Functional
• Layer
• Communicational
• Sequential
• Procedural
• Temporal
• Utility

• Functional Cohesion
This level of cohesion occurs when a module performs one and only one computation and then
returns a result.
• Layer
This type of cohesion occurs when a higher layer accesses the services of a lower layer, but lower
layers do not access higher layers. Exhibited by packages, components and classes.
Eg. Consider a security system that makes a phone call when an alarm is sensed.
Here the access is from control panel package downwards.

• Communicational
All operations that access the same data are defined within one class. Such classes focus solely on the
data, accessing it and storing it. Example- update record in the database and send it to the printer.
Classes and components that exhibit functional, layer and communicational cohesion are relatively
easy to implement, test and maintain.
Cohesion
• Sequential
Components and operations are grouped in a manner that allows the first to provide input to the next
and so on. The intent is to implement a sequence of operations.
• Procedural
Components or operations are grouped in a manner that allows one to be invoked immediately after the
preceding one was invoked, even when there is no data passed between them.
When the components of a system are related to each other only by sequence of invocation,
there exists procedural cohesion.
• Temporal
Operations that are performed to reflect a specific behavior or state.
Temporal cohesion occurs when a component of a system performs more than one function
and these functions must occur within the same time span.
Eg. An operation performed at start-up or all operations performed when an error is detected.
• Utility
Components, classes or operations that exist within the same category but otherwise unrelated are
grouped together. For example, a class MathOperations which contains attributes and operations to
compute different mathematical operations.
Cohesion
Advantages of Cohesion
High cohesion -> better program design
High cohesion -> components can be easily reused
High cohesion -> components are more reliable

Disadvantages of Cohesion
Low cohesion components are difficult to maintain
Low cohesion -> components cannot be reused
Low cohesion -> difficult to understand, less reliable

Coupling
Coupling is the degree to which the classes are connected to one another.
As classes and components become more interdependent, coupling increases.
Coupling categories
Content Coupling
Occurs when one component modifies the data that is internal to another component. This
violates information hiding.
Common coupling
Occurs when a number of components all make use of a global variable. This may lead to
uncontrolled error propagation and unforeseen side effects when changes are made.
Control coupling
When data are passed between two components and that affects internal logic of a component
then there exists control coupling between those two components.
Eg. When operation A() invokes operation B() and passes a control flag to B. Here, an unrelated
change in B can result in the necessity to change the meaning of the control flag that A passes.

Coupling
Stamp Coupling
In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. (data which is passed to a function only to
be passed on to another function.)

Data Coupling: If the dependency between the modules is based on the fact
that they communicate by passing only data, then the modules are said to be
data coupled. A component becomes difficult to maintain if too many
parameters are passed.
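
A small sketch contrasting stamp coupling with data coupling (the Employee example is invented for illustration):

```python
# coupling_demo.py - stamp coupling vs. data coupling (illustrative).

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    address: str
    salary: float

# Stamp coupling: the whole structure is passed although only salary is used.
def annual_salary_stamp(emp: Employee) -> float:
    return emp.salary * 12

# Data coupling: only the data actually needed is passed.
def annual_salary_data(salary: float) -> float:
    return salary * 12

e = Employee("Ravi", "Thane", 50_000.0)
print(annual_salary_stamp(e), annual_salary_data(e.salary))
```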
Routine call coupling
Occurs when one operation invokes another. This level of coupling is common but it
increases the connectedness of a system.
Type use coupling
Occurs when component A uses a data type defined in component B. If the type
definition changes, every component that uses the definition must also change.

Coupling
Inclusion or import coupling
Occurs when component A imports or includes a package or the content of
components B.
External coupling
Occurs when a component communicates or collaborates with infrastructure
components (eg: operating system functions, telecommunication functions etc.)
Advantages of Low Coupling
Components can be reused.
Do not cause ripple effect to other components
System can be built faster.
Disadvantages of high coupling
Difficult to understand, causes changes in other components, slows down
development process.

Difference between Coupling and Cohesion

Parameter      Cohesion                                        Coupling
Concept        Indicates the relationships within a module     Indicates the relationship between two or more modules
Represents     Relative functional strength of a module        Relative interdependence among the modules
Degree         The degree to which a module focuses on a       The degree to which one module depends on another
               single function/component                       module
High or Low    High cohesion is good for system design         Low coupling is good for system design
Architectural design
IEEE definition –
Architectural design is the process of defining a collection of hardware and software components
and their interfaces to establish the framework for the development of a computer system.
Architectural Design Representation
Architectural design can be represented using the following models.

1.Structural model:
Illustrates architecture as an ordered collection of program components
2. Dynamic model:
- Specifies the behavioral aspect of the software architecture
- indicates how the structure changes due to change in the external environment.
3. Process model:
- Focuses on the design of the business or technical process which must be
implemented in the system

Architectural design
4. Functional model:
- Represents the functional hierarchy of a system
5. Framework model:
- Attempts to identify repeatable architectural design patterns encountered in
similar types of application. This leads to an increase in the level of abstraction.

Architectural Styles
The objective of using architectural styles is to establish a structure for all the components present in a
system.
Every architectural style describes a system category that includes the following.
• A set of components(eg: a database, computational modules)
• The set of connectors (eg. Procedure calls) to provide communication.
• Conditions that how components can be integrated to form the system.
• Semantic models that help the designer to understand the overall properties of the
system

Architectural design
Architectural Styles
Some of the commonly used architectural styles are

1. Data-centered Architecture
A data store will reside at the center of this architecture and is accessed frequently by the other
components that update, add, delete or modify the data present within the store. Eg. A client software
accessing a central repository.
Advantage of Data centered architecture
• Repository of data is independent of clients
• Clients work independently of each other
• It may be simple to add additional clients.
• Modification can be very easy

Architectural design
Architectural Styles
Some of the commonly used architectural styles are

2. Data-flow Architecture

This is used in the systems that accept some inputs and transform it into the desired outputs by applying
a series of transformations.
Each component called as filter transforms the data and sends this transformed data to other filters with
the help of the connector called as pipe.

Each filter works as an independent entity, that is, it is not concerned with the filter which is producing or
consuming the data.
A pipe is a unidirectional channel which transports the data received on one end to the other end.
It does not change the data in anyway; it merely supplies the data to the filter on the receiver end.

If the data flow degenerates into a single line of transforms, then it is termed as batch sequential. This
structure accepts the batch of data and then applies a series of sequential components to transform it.
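
A minimal sketch of the pipe-and-filter idea (the filters and data are invented for illustration): each filter is an independent transformation, and the "pipe" is simply the unidirectional hand-off of data from one filter to the next.

```python
# pipe_filter_demo.py - sketch of a pipe-and-filter (data-flow) style.

def read_source():
    return ["  Hello ", "WORLD  ", ""]

def strip_filter(lines):
    return [ln.strip() for ln in lines]

def nonempty_filter(lines):
    return [ln for ln in lines if ln]

def lowercase_filter(lines):
    return [ln.lower() for ln in lines]

# Batch-sequential composition: a single line of transforms.
output = lowercase_filter(nonempty_filter(strip_filter(read_source())))
print(output)  # ['hello', 'world']
```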

Architectural design
2. Data-flow Architecture (cont.)

Advantages
1. It supports reusability, maintainability and modifiability.
2. It supports concurrent execution.
Disadvantages
1. It often degenerates to batch sequential system.
2. It does not provide enough support for applications that require more user interaction.
3. It is difficult to synchronize two different but related streams.

Architectural design
3. Object-oriented Architecture

In this style, components of a system encapsulate data and operations, which are
applied to manipulate the data.
Components are represented as objects and they interact with each other through
methods (connectors).

Characteristics
Objects maintain the integrity of the system.
An object is not aware of the representation of other objects.

Advantages
It allows designers to decompose a problem into a collection of independent objects.
The implementation detail of objects is hidden from each other and hence, they can be
changed without affecting other objects.

Architectural design
4. Layered Architecture
In layered architecture, several layers (components) are defined in which each layer
perform a well-defined set of operations.
These layers are arranged in a hierarchical manner, each one built upon the one below
it.
Each layer provides a set of services to the layer above it and acts as a client to the
layer below it.
One common example of this architectural style is OSI-ISO (Open Systems
Interconnection-International Organisation for Standardisation) communication
system.

Architectural design
5.Call and Return Architecture
This style enables software designers to achieve a program structure, which can be easily
modified.
Two sub-styles
a) Main program/subprogram architecture: In this, function is decomposed into a control hierarchy where the main
program invokes a number of program components, which in turn may invoke other components.

b) Remote procedure call architecture: In this, components of the main or subprogram architecture are distributed over a
network across multiple computers.


Module 5

Software Testing
• To verify whether the actual results are same as of expected results.
• To assure that the software system does not contain any defects.
• To ensure that all user requirements are fulfilled by software.
• To give assurance that we deliver quality product to customer.
Software quality depends on the extent to which the software fulfills user requirements and the
number of defects that occur in the software.
Test cases and test data are created to perform software testing. A test case refers to the
actions required to verify a specific feature or functionality in software testing.
A collection of test cases is called a test suite.
Test Data is the input given to a software program during test execution. It represents data that
affects or affected by software execution while testing. Test data is used for both positive
testing to verify that functions produce expected results for given inputs and for negative
testing to test software ability to handle unusual, exceptional or unexpected inputs.
Software testing is one element of a broader topic that is often referred to as verification and
validation
Verification
Verification refers to the set of activities that ensure that software correctly implements a
specific function. Verification is the process of evaluating the intermediary work products of a
software development lifecycle to check if we are in the right track of creating the final product.
Verification: "Are we building the product right?"
Validation
Validation refers to a different set of activities that ensure that the software that has been built
is traceable to customer requirements. Validation is the process of evaluating the final product


to check whether the software meets the business needs. Validation: "Are we building the right
product?"

Advantages of Software Testing

• Earlier Defects Detection
• Increased Customer Satisfaction
• Cost Reduction
• Improved Product Quality and Reliability
• Quicker Development Process
• Easier Addition of New Features
• Enhanced Security
• Easier Recovery
• Enhanced Agility

1. Earlier Defects Detection

Early defect discovery is made simple, and the development team is assisted in fixing them
if the software testing team, also known as the quality assurance team, works parallel from
the start of the software development.


If testing begins after the software has been fully developed, the developer will need to
redesign all interconnected modules to fix any module-specific flaws.

2. Increased Customer Satisfaction


By guaranteeing a defect-free application, software testing helps in increasing the
customer's trust and satisfaction. It aims to find all potential flaws and test a program in
accordance with client needs and requirements.
3. Cost Reduction

Several phases make up the software testing process. If a flaw is discovered earlier,
correcting it will be less expensive.

Therefore, you must complete the testing as soon as possible. When you hire quality
analysts and testers with a lot of expertise and technical training for the projects, they are
investments, and the project will benefit from them.

Also, all applications need to be maintained, and the program owner invests a significant
amount to keep it running and functioning properly. By testing the application, the
investment in the maintenance area is reduced.

4. Improved Product Quality and Reliability

It’s necessary to follow the requirements of the product as it helps in obtaining the
necessary outcomes. Products should help the user in some way, deliver on the promise,
and add value.

Therefore, a product should operate properly to guarantee a positive consumer


experience. Additionally, the compatibility of a device needs to be verified. For example,
if you’re preparing to release an application, you must ensure that it’s compatible with a
variety of devices and operating systems.

A product is reliable only if it meets user requirements and can build customer trust. Using
performance testing, security testing, and other methods, software testing improves the
reliability of an application.

5. Quicker Development Process

Only when an application's development is quick can it be delivered early. By identifying
defects and informing the development team, software testing helps the team produce


software quickly. Earlier in the system's development, the found flaw can be quickly
repaired without affecting the operation of other capabilities.

The development process for an application is improved by finding defects and repairing
them concurrently with system development since the development team does not have
to wait for bug discovery and correction.

6. Easier Addition of New Features

The older and more complex the code, the more challenging it is to modify. Luckily, tests
enable developers to confidently add new features.

Changing older parts of your codebase can be daunting, but with tests, you'll at least be
able to see if you've broken anything crucial. This helps your software stand out from the
competitors and dominate the market.

7. Enhanced Security

Security is the fundamental issue in the digital age and customers' top priorities always
include a secure system. That’s why owners spend a lot of money to protect their systems
from hackers, malicious attacks, and other types of theft.

The development team strives to cover the application with numerous security layers
while the testing team utilizes security testing to find defects. An application's security
level is determined using a security testing technique, and testers look for vulnerabilities
to break an application's security.

8. Easier Recovery

Recovery describes the process by which an application restarts more quickly after failing.
When an application recovers fast and carries out its regular functions, it’s successful.

Software testing helps determine an application's rate of recovery and the total amount
of time it takes to recover. When an application is being tested, testers look for scenarios
where it’s most likely to fail and measure how long it takes for it to recover. Testers provide
feedback to the development team, so they can alter the internal coding to speed up
application recovery.

9. Enhanced Agility


Agile and testing follow ideas like collaboration, iteration, feedback, and continuous
improvement to provide value to the client faster and more effectively. These methods
make it possible to conduct testing frequently and early, which lowers the likelihood of
errors, rework, and waste.

Manual and Automation Testing


Manual testing is testing of the software where tests are executed manually by a QA
Analyst. It is performed to discover bugs in software under development.

In Manual testing, the tester checks all the essential features of the given application
or software. In this process, the software testers execute the test cases and generate
the test reports without the help of any automation software testing tools.

It is a classical method of all testing types and helps find bugs in software systems. It
is generally conducted by an experienced tester to accomplish the software testing
process.

First we do manual testing for any new application before going for automation
testing. This is to analyze the benefits of doing automation testing and to decide
whether to do automation testing or not.

Manual testing requires more effort, but is necessary to check automation feasibility.

Goals of Manual Testing

- to assure that our software product does not contain any defect and it fulfills all
functional requirements of end user.
- test suits (collection of test cases) or cases are designed in the testing phase to test all
functionalities of the product.
- to assure that reported defects are fixed by the developer and tester performs
retesting on it after fixing the defects.
- mainly manual testing verifies the quality of the software product and deploys bug-
free software to the customer.
List of manual testing :
Unit testing
Integration testing
System testing
Acceptance testing


In Automated Software Testing, testers use automation tools to run the test cases and find bugs in the software product.

Automated testing relies entirely on pre-scripted tests that run automatically and compare actual results with expected results. This helps the tester determine whether or not the application performs as expected.

Automated testing allows you to execute repetitive tasks and regression tests without the intervention of a manual tester. Even though the tests run automatically, automation still requires some manual effort to create the initial testing scripts.

Automation testing requires a considerable amount of money and resources, such as employees, testing tools, etc.

We can record test suites using test automation tools and replay them when needed.

The aim of automation testing is to decrease the number of test cases to be executed manually, not to replace manual testing completely.

Automation testing is performed for a project if its requirements are stable to some extent, i.e., the requirements are not changing frequently.

List of some automation tools:

Selenium

Mantis

QuickTest Professional (QTP)

Bugzilla

HP ALM (Application Lifecycle Management)
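As an illustration, here is a minimal sketch of an automated test using Selenium's Python bindings (Selenium is the first tool listed above). The URL and the expected title are hypothetical stand-ins, not part of these notes.

# Minimal Selenium sketch: open a page and check its title.
# The URL and expected title are illustrative assumptions.
from selenium import webdriver

driver = webdriver.Chrome()                  # start a browser session
try:
    driver.get("https://example.com")        # open the page under test
    assert "Example" in driver.title         # compare actual vs. expected
finally:
    driver.quit()                            # always release the browser

Once such a script exists, it can be replayed on every build without a manual tester's intervention, which is exactly the repetitive work automation is meant to absorb.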


Software Testing Process


Software Testing process activities
• Requirement analysis
• Test Planning
• Test Case development
• Environment set up
• Test execution
• Test cycle closures
Each activity of the testing process has defined entry and exit criteria.
Entry criteria
Entry criteria of testing are the prerequisite conditions that must be satisfied before testing begins.
Exit criteria
Exit criteria of testing describe the conditions that must be satisfied before testing is concluded.
Software testing levels are the different stages of the software development lifecycle at which testing is conducted.
There are 4 levels of software testing: Unit Testing -> Integration Testing -> Validation Testing -> System Testing.

Unit Testing
- Individual units/components of the software are tested.
- The goal is to validate that each unit of the software performs as designed.
- A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output.
It is the first level of software testing and is performed prior to integration testing.
It is normally performed by the software developers themselves or their peers. In some cases, it may also be performed by independent software testers.


In procedural programming, a unit may be an individual program, function, procedure, etc.
In object-oriented programming, the smallest unit is a method, which may belong to a base class, abstract class, or derived class.
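For illustration, a minimal sketch of a unit test written with Python's built-in unittest module; the add function is a hypothetical unit, not taken from these notes.

# Minimal unit-test sketch: one small unit, tested in isolation.
import unittest

def add(a, b):
    # Hypothetical unit under test: one input pair, one output.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)     # expected vs. actual

    def test_negative_number(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()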
Unit Testing Method
It is performed by using the White Box Testing method.
Unit Test environment
STUBS
Assume we have three modules: A, B, and C.
Module A is ready and we need to test it, but module A calls functions from module B and module C, which are not ready.
So the developer writes a dummy module that simulates B and C and returns values to module A. This dummy module code is known as a stub.

DRIVERS
Now suppose modules B and C are ready, but module A, which calls functions from module B and module C, is not ready. The developer writes a dummy piece of code in place of module A that calls modules B and C with test values. This dummy piece of code is known as a driver.
Logically, both drivers and stubs are software that is written but not delivered to the customer, and so they are considered overhead. It is therefore recommended to keep this overhead code simple to reduce cost.
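A minimal Python sketch of the stub and driver idea described above; the module functions and the values they exchange are all hypothetical.

# --- Stub: module A is ready; modules B and C are not ---
def module_b_stub(x):
    return 2 * x              # dummy value in place of B's real logic

def module_c_stub(x):
    return x + 1              # dummy value in place of C's real logic

def module_a(x, b=module_b_stub, c=module_c_stub):
    # Module A under test; it calls into the (stubbed) B and C.
    return b(x) + c(x)

# --- Driver: modules B and C are ready; their caller A is not ---
def driver_for_b_and_c(real_b, real_c):
    # Dummy caller that feeds test inputs to B and C and checks outputs.
    assert real_b(3) == 6
    assert real_c(3) == 4

if __name__ == "__main__":
    assert module_a(3) == 10              # 2*3 + (3+1), via the stubs
    driver_for_b_and_c(module_b_stub, module_c_stub)
    print("stub and driver checks passed")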


Integration Testing
- a method of software testing in which all units of software under test are
integrated and tested as a single group.
- always done after unit testing
- focuses on testing data communication among all units of system.
- main objective is to determine faults in communication between integrated
units.
- make use of methodologies like Big bang approach , incremental approach
(top-down, bottom-up or combination of both)

Types of Integration Testing


Big bang approach
In this method, all modules of the software product are developed first; they are then combined together, and the whole software is tested at once.
Advantage
Suitable for small software projects.
Disadvantages
- Finding a defect is difficult because the whole software is tested at once.
- There are many interfaces that need to be tested.
- There is a possibility that some interface links remain untested.
- Critical modules are not separated and tested on a priority basis, because all modules are tested at the same time.


- The testing team gets only a small amount of time, because in this approach integration testing starts only after all the modules are developed.

Incremental Approach
- In this method, two or more modules that are logically related are merged and tested together.
- Other associated modules are then added to this group and tested to check whether they function correctly.
- This procedure continues until all modules are grouped together and tested successfully.
- The procedure is carried out with the help of dummy programs known as stubs and drivers.
Stubs and drivers do not contain the complete programming logic of a module, but only the code needed to communicate with other modules.
A stub is a dummy program that is called by the module under test.
A driver is a dummy program that calls another module.
The incremental approach is performed using three different methods:
i) Top-down ii) Bottom-up iii) Sandwich


i) Top-down integration

Testing proceeds from the modules at the top of the hierarchy to the modules at the bottom.
In the top-down approach, stubs are used for testing.

Advantages
- Identification of bugs is easy.
- Critical modules are tested on a priority basis, so critical design defects can be found and fixed early.
Disadvantages
- Top-down testing requires many stubs, because stubs are needed to replace the lower-level modules.
- Modules at the lower levels may be tested insufficiently because of lack of time.
ii) Bottom-up integration
In the bottom-up approach, testing starts with the modules at the lowest level; these are then combined with the modules at higher levels until all modules are tested.
- Drivers are used while performing bottom-up incremental testing.


Advantages
- Finding defects is easy.
- Testing starts as soon as a module is developed; there is no need to wait for all modules to be developed, unlike the Big Bang approach.
Disadvantages
- Critical modules at the top level of the software architecture, which control the flow of the software, are tested last, and defects may well occur in these critical modules.
iii) Sandwich integration
• Consists of a combination of both top-down and bottom-up integration
• Occurs both at the highest level modules and also at the lowest level
modules
• Proceeds using functional groups of modules, with each group completed
before the next
• High and low-level modules are grouped based on the control and
data processing they provide for a specific program feature
• Integration within the group progresses in alternating steps between
the high and low level modules of the group
• When integration for a certain functional group is complete,
integration and testing moves onto the next group
• Reaps the advantages of both types of integration while minimizing the
need for drivers and stubs
• Requires a disciplined approach so that integration doesn’t tend towards
the “big bang” scenario
Smoke Testing
Smoke testing, also called build verification testing or confidence testing, is a
software testing method that is used to determine if a new software build is
ready for the next testing phase.
This testing method determines if the most crucial functions of a program
work but does not delve into finer details.
As a preliminary check of software, smoke testing finds basic and critical issues
in an application before more in-depth testing is done


The goal of smoke testing is to discover simple but severe failures using test
cases that cover the most important functionalities of a software. Smoke tests
are performed by QA teams using a minimal set of tests on each build that
focuses on software functionality.
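A minimal sketch of a smoke suite in Python's unittest; the two "most crucial functions" checked here (start-up and login) are hypothetical stand-ins for a real build's critical features.

# Minimal smoke-test sketch: a few fast checks on a build's
# most critical functions, run before deeper testing begins.
import unittest

def app_starts():
    # Hypothetical stand-in for "the application launches".
    return True

def login(user, password):
    # Hypothetical stand-in for the most critical user-facing feature.
    return user == "admin" and password == "secret"

class SmokeTests(unittest.TestCase):
    def test_application_starts(self):
        self.assertTrue(app_starts())

    def test_login_works(self):
        self.assertTrue(login("admin", "secret"))

if __name__ == "__main__":
    unittest.main()

If either check fails, the build is rejected immediately instead of entering the next, more expensive testing phase.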

Validation Testing
Validation testing begins after integration testing.
- focused on actions which are directly visible by user.
- Validation is successful when the functionality of software is as per expectations
of the customer.
- main concern is user requirements.
Validation test criteria
- All of the expected functional requirements are fulfilled.
- Behavioral characteristics are achieved.
- Content is accurate and properly presented.
- All performance requirements are met.
- Documentation is accurate.
Alpha and Beta testing
Alpha Testing
- the testing done to find out bugs before deploying the software application to
end user.
- it is a type of acceptance testing.
- objective is to find and fix bugs that were not discovered through previous tests.
- performed by in-house software engineers or QA staff.
- It is the final testing stage before the software is released into the real world.
Performed in 2 phases.
i) In first phase, software is tested by development team members. They
perform debugging of software to catch bugs quickly.
ii) In second phase, software is tested by software quality analyst team for
additional testing in actual user’s environment setup.
Advantages
Better insight about software reliability at its early stages.
Reduce delivery time, free up team for other projects.
Early feedback helps to improve software quality.


Beta Testing
- performed at the location of customer.
- Actual as well as intended users will test the software to determine whether the
software is satisfying their expectations.
- allows users to test software before it is released to public.
- minimizes the product failure risks.
- ensures reliability, security, robustness etc. from user’s perspective.
Types of beta testing: traditional, public, technical, focused, and post-release.
Beta testing is also called user acceptance testing, customer acceptance testing, customer validation testing, or pre-release testing.
Advantages
Decreases product failure risks.
Improves software quality by using feedback of customer.
Cost effective as compared to other data collection techniques.
Improves customer satisfaction.
Types of Beta Testing
1. Traditional Beta testing
software product is provided to the targeted end user and associated data
is collected. This data is useful for product improvement.
2. Public beta testing
product is released publicly in real world using online channels and data can
be collected from anyone. Feedback is used for product improvements.
3. Technical Beta testing
The software product is released to an internal group of the organization, and data is collected from the employees of the organization.
4. Focused Beta
Product is released in the market and data is collected about specific features of
the program.
5. Post release Beta
Product is released in the market and data is collected to improve the software
product for the next release.


System Testing
Testing of the entire, completely integrated software.
System testing is end-to-end testing, i.e., the system is tested from the login module through to the logout module.
It contains both functional as well as non-functional testing.
System testing falls under black box testing.
A few types of system testing:
Usability testing
Here, a group of end users of the software use the product to check its user-friendliness.
It is non-functional testing.
It tests how easy the application is to handle.
Also known as User Experience Testing.
Load Testing
A type of Performance Testing under real life load condition.
Checks how the application behaves when multiple users access it at same time.
It is non-functional testing.


Normally used when we test Client/Server based applications and Web based
applications.
Used to find out:
➔ The maximum number of users who can access the application at the same time.
➔ Whether the currently available infrastructure (software and hardware) is adequate to run the application.
➔ What happens when the maximum number of users access the system simultaneously.
➔ Scalability, i.e., whether capacity can be increased to permit more users.
Regression Testing
Used to check whether changes made in code due to some error or change in
requirement affects existing working functionality.
Here, we perform already executed test cases to give assurance that old
functionalities work well after performing changes in code.
To give assurance that new code added in software does not disturb existing
functionalities.
Recovery Testing
Done to check whether the system recovers by itself after a crash caused by a disaster such as a power or network failure.
Checks whether the system performs a rollback.
Migration Testing
Gives assurance that the software can be moved from an older system infrastructure to the current system infrastructure without any problem.

Functional testing
Checks that every function present in the software application works as per
requirements of user.
Includes black box testing and it does not focus on the source code of the
software.
Functionality is verified by tester using appropriate test data and actual result is
compared with expected result.
Done using Requirement Specification document.
Hardware/Software Testing
Perform testing of communication between the hardware and software used in
the system.

Security testing
Verifies that protection mechanisms built into a system will, in fact, protect it
from improper access
Stress testing
Executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
Performance testing
Tests the run-time performance of software within the context of an integrated
system
Often coupled with stress testing and usually requires both hardware and
software instrumentation
Can uncover situations that lead to degradation and possible system failure


White Box Testing


In this testing process, we verify the internal coding and infrastructure of the software product under test.
Programming knowledge or detailed functional knowledge is a prerequisite for the tester.
Also called clear-box testing, open-box testing, structural testing, transparent-box testing, code-based testing, or glass-box testing.
It uses following methods:
Statement Coverage: Testing all programming statements using minimum
number of tests.
Branch Coverage: to ensure that all branch conditions in system are tested at
least once.
Path Coverage: to ensure that each statement and branch in system is tested at
least once.
By using Statement and Branch coverage, we can perform 80-90% code coverage.
Other coverage types are Condition coverage, Multiple condition coverage, Path
coverage, function coverage etc.
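As an illustration of statement versus branch coverage, here is a sketch with a hypothetical two-branch function; the function and test values are assumptions, not from the notes.

# Sketch: statement vs. branch coverage on a two-branch function.
def classify(n):
    if n >= 0:                  # predicate with two branches
        return "non-negative"
    else:
        return "negative"

# Branch coverage requires both outcomes of the predicate;
# together these two calls also execute every statement.
assert classify(1) == "non-negative"    # True branch
assert classify(-1) == "negative"       # False branch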
White box testing involves two steps:
Step 1: Understand the source code.
Testers understand the source code of the product.
Tester must have good programming knowledge. They should be able to detect
security problems and protect the software from hackers and naïve users who
may add malicious code in software product.
Step 2: Create test cases and execute
This includes testing the source code of software product under the test for
checking of proper flow of control and structure.
Tester will generate test cases for every process or group of processes in product.
May be performed by developers.
Advantages:
1. White box testing is thorough as the entire code and structures are tested.
2. It results in the optimization of code removing errors and helps in removing
extra lines of code.
3. It can start at an earlier stage as it doesn’t require any interface as in the
case of black box testing.

4. Easy to automate.
5. White box testing can be easily started in Software Development Life Cycle.
6. Easy Code Optimization
7. Testers can identify defects that cannot be detected through other testing
techniques.
8. Testers can create more comprehensive and effective test cases that cover all
code paths
Disadvantages:
• Testers need to have programming knowledge and access to the source
code to perform tests.
• Testers may focus too much on the internal workings of the software and
may miss external issues.
• Testers may have a biased view of the software since they are familiar with
its internal workings.

White Box Testing Techniques


1. Basis Path Testing
Basis path testing is a white-box testing technique first proposed by Tom McCabe.
These tests guarantee that every statement in the program is executed at least once during testing. The basis set is a set of independent execution paths through a procedure.
Flow Graph notation
Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced.
A flow graph depicts control flow using a set of individual constructs, which are combined together to produce the flow graph for a particular procedure.


Flow Graph terminology


Node: Each circle, called a flow graph node, represents one or more procedural
statements. A sequence of process boxes and a decision diamond can map into a
single node. Each node that contains a condition is called a predicate node
Edge: Edge is the connection between two nodes. It represent flow of control and
are analogous to flowchart arrows. An edge must terminate at a node, even if the
node does not represent any procedural statements.
Region: Areas bounded by edges and nodes are called regions.

(Figure: a flowchart and the flow graph derived from it.)

Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure
of the logical complexity of a program.
The value computed for cyclomatic complexity defines the number of
independent paths.
Independent path is an execution flow from the start point to the end point.
There can be various execution paths depending upon decision taken on the
control statement.

Department of Computer Engineering Prof. Krishnapriya S


Subject: Software Engineering Semester: V

It provides us with an upper bound for the number of tests that must be
conducted, because for each independent path, a test should be conducted to see
if it actually reaches the end point or not.
Cyclomatic Complexity for a flow graph is computed in one of three ways:
1. The number of regions of the flow graph correspond to the cyclomatic
complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as
V(G) = E - N + 2
where E is the number of flow graph edges, N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the flow graph in figure, the cyclomatic complexity can be
computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
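The same computation can be expressed as a short Python sketch, using the numbers quoted above for the example flow graph (11 edges, 9 nodes, 3 predicate nodes, 4 regions).

# Sketch: V(G) computed two ways for the example flow graph.
def cyclomatic_by_edges(edges, nodes):
    return edges - nodes + 2        # V(G) = E - N + 2

def cyclomatic_by_predicates(predicates):
    return predicates + 1           # V(G) = P + 1

assert cyclomatic_by_edges(11, 9) == 4
assert cyclomatic_by_predicates(3) == 4
# Both agree with the region count (4): there are 4 independent
# paths, so at most 4 basis-path test cases are needed.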


Deriving the test cases


The main objective of basis path testing is to derive test cases for the procedure under test. The process of deriving test cases is as follows:
1. From the source code, derive the flow graph.
2. Determine the cyclomatic complexity, V(G). Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code and adding 1 to it.
3. Prepare the test cases. Each test case is executed and compared to the expected results.

Graph Matrices
• A graph matrix is a square matrix whose rows and columns are equal to the
number of nodes in the flow graph. Each row and column identifies a
particular node and matrix entries represent a connection between the
nodes.
The following points describe a graph matrix:
• Each cell in the matrix can represent a direct connection or link from one node to another node.
• If there is a connection from node 'a' to node 'b', then it does not mean
that there is connection from node 'b' to node 'a'.
• Conventionally, to represent a graph matrix, digits are used for nodes and
letter symbols for edges or connections.

Connection matrices
Each link between two nodes is assigned a link weight, which becomes the entry in the corresponding cell of the matrix.
When a connection exists, the link weight is 1.


A matrix defined with link weights is called a connection matrix.

(Figures: a flow graph and its corresponding connection matrix.)

Given below is the procedure to find the cyclomatic number from the connection matrix:
Step 1: For each row, count the number of 1s and write it in front of that row.
Step 2: Subtract 1 from that count. Ignore blank rows, if any.
Step 3: Add up the final count of each row.
Step 4: Add 1 to the sum calculated in Step 3.
Step 5: The final sum in Step 4 is the cyclomatic number of the graph.
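A minimal Python sketch of this five-step procedure; the 4-node connection matrix is a hypothetical example (node 1 branches to nodes 2 and 3, both of which reach node 4), and the result, 2, agrees with V(G) = E - N + 2 = 4 - 4 + 2.

# Sketch: cyclomatic number from a connection matrix
# (rows of 1s and 0s; 1 means a link is present).
def cyclomatic_from_connection_matrix(matrix):
    total = 0
    for row in matrix:
        ones = sum(row)           # Step 1: count the 1s in each row
        if ones == 0:
            continue              # ignore blank rows
        total += ones - 1         # Steps 2-3: subtract 1, accumulate
    return total + 1              # Steps 4-5: add 1 to the sum

matrix = [
    [0, 1, 1, 0],   # node 1 links to nodes 2 and 3
    [0, 0, 0, 1],   # node 2 links to node 4
    [0, 0, 0, 1],   # node 3 links to node 4
    [0, 0, 0, 0],   # node 4: blank row, ignored
]
assert cyclomatic_from_connection_matrix(matrix) == 2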


2. Control Structure Testing


Control structure testing is a group of white-box testing methods.
Different types of testing performed in control structure
1. Condition Testing 2. Data Flow Testing 3. Loop Testing
1. Condition Testing
Condition testing is a test case design method, which ensures that the logical
condition and decision statements are free from errors.
The errors found in logical conditions include incorrect Boolean operators, missing parentheses in a Boolean expression, errors in relational operators, and errors in arithmetic expressions.
The common types of logical conditions that are tested using condition testing are:
- A relational expression, like E1 op E2, where 'E1' and 'E2' are arithmetic expressions and 'op' is a relational operator.
- A simple condition, i.e., a relational expression possibly preceded by a NOT (~) operator. For example, (~E1), where 'E1' is an arithmetic expression and '~' denotes the NOT operator.
- A compound condition, consisting of two or more simple conditions, Boolean operators, and parentheses. For example, (E1 & E2) | (E2 & E3), where E1, E2, E3 denote arithmetic expressions and '&' and '|' denote the AND and OR operators (see the sketch below).
- A Boolean expression, consisting of operands and Boolean operators like AND, OR, NOT. For example, 'A | B' is a Boolean expression where 'A' and 'B' denote operands and '|' denotes the OR operator.
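A minimal sketch of condition-testing cases for the compound condition above; the guard function and the chosen truth-value combinations are illustrative assumptions.

# Sketch: condition testing of (E1 and E2) or (E2 and E3),
# exercising each simple condition as both true and false.
def guard(e1, e2, e3):
    return (e1 and e2) or (e2 and e3)

cases = [
    (True,  True,  False, True),    # left clause true
    (False, True,  True,  True),    # right clause true
    (True,  False, True,  False),   # e2 false defeats both clauses
    (False, False, False, False),   # everything false
]
for e1, e2, e3, expected in cases:
    assert guard(e1, e2, e3) == expected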

2. Data Flow testing


The data flow test method chooses the test paths of a program based on the locations of the definitions and uses of the variables in the program.

- The data flow testing method is effective for error detection because it is based on the relationships between statements in the program according to the definitions and uses of variables.
For a statement numbered S, let
DEF(S) = {X | statement S contains a definition of X}, and
USES(S) = {X | statement S contains a use of X}.
For the statement S: a = b + c;
DEF(S) = {a} and USES(S) = {b, c}.
The definition of variable X at statement S is said to be live at statement S1 if there exists a path from statement S to S1 that does not contain any other definition of X.
3. Loop testing
Loop testing method concentrates on validity of the loop structures.
- Loops are fundamental to many algorithms and need thorough testing.
- Loops can be defined as simple, concatenated, nested, and unstructured.
i) Simple loops:
The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop (a sketch follows at the end of this loop-testing section):
• Skip the loop entirely.
• Only one pass through the loop.
• Two passes through the loop.
• m passes through the loop, where m < n.
• n - 1, n, and n + 1 passes through the loop.
ii) Nested loops: loops within loops are called nested loops.
When testing nested loops, the number of test cases increases as the level of nesting increases. The steps for testing nested loops are as follows:
1. Start with the inner loop. Set all other loops to minimum values.
2. Conduct simple-loop testing on the inner loop.

3. Work outwards.
4. Continue until all loops are tested.

iii) Concatenated loops:

If the loops are independent of one another, concatenated loops can be tested using the approach defined for simple loops.
However, if the loop counter of the first loop is used as the starting value of the second loop, the loops are dependent.
In that case, the approach applied to nested loops is recommended.
iv) Unstructured loops:
Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
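Here is the promised sketch of the simple-loop schedule (0, 1, 2, m, n - 1, n, and n + 1 passes), applied to a hypothetical function that allows at most n = 10 passes.

# Sketch: simple-loop testing of a function capped at n = 10 passes.
def process(items, n_max=10):
    if len(items) > n_max:
        raise ValueError("too many items")
    total = 0
    for item in items:          # the simple loop under test
        total += item
    return total

for passes in (0, 1, 2, 5, 9, 10):      # 0, 1, 2, m (< n), n-1, n
    assert process([1] * passes) == passes

try:
    process([1] * 11)                   # n + 1 passes must be rejected
except ValueError:
    pass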

Black box Testing
Test the functionality of an application without knowing its internal structure, coding
information and knowledge of internal paths of the software.
Test cases are built based on what the application is supposed to do.
Also known as behavioral testing or specification testing.
Focused on the inputs to the software and the outputs from the software.
Black box testing can be applied at every level of the software testing process.
The tester selects valid test data for positive test cases.
The tester also selects invalid test data for negative test scenarios.
A defect is detected if the actual result is not the same as the expected result.
Done using different testing techniques like Boundary Value Analysis (BVA), equivalence class partitioning, decision tables, etc.

Black box Testing techniques
Methods for developing test cases
i) Boundary Value Analysis (BVA)
It is a process of testing boundaries of the input values.
It is most commonly used technique in Black Box Testing.
Basic idea is to select input values : minimum, just above minimum, normal value,
maximum value and just below maximum value.
It checks for the input values near the boundary that have a higher chance of error.

Boundary Value Analysis (age field accepts 18 to 56)

Invalid (min - 1): 17
Valid (min, min + 1, nominal, max - 1, max): 18, 19, 37, 55, 56
Invalid (max + 1): 57
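A minimal sketch of these boundary-value cases in Python; accepts_age is a hypothetical validator for the age field above.

# Sketch: boundary-value tests for an age field that accepts 18-56.
def accepts_age(age):
    return 18 <= age <= 56

valid = [18, 19, 37, 55, 56]        # min, min+1, nominal, max-1, max
invalid = [17, 57]                  # min-1 and max+1
assert all(accepts_age(a) for a in valid)
assert not any(accepts_age(a) for a in invalid)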

Black box Testing techniques
Methods for developing test cases
ii) Equivalence class partitioning
- Reduces the number of possible inputs by dividing them into classes.
- Tests the application thoroughly while avoiding redundant input values.
- BVA and equivalence partitioning are closely related and are used together.
Example: below, equivalence partitioning and boundary values are combined.
Consider a field that accepts a minimum of 6 characters and a maximum of 10 characters. The test-case partitions are then the ranges 0-5, 6-10, and 11-14.

Test Scenario 1: enter 0 to 5 characters; expected outcome: not accepted.
Test Scenario 2: enter 6 to 10 characters; expected outcome: accepted.
Test Scenario 3: enter 11 to 14 characters; expected outcome: not accepted.
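A minimal sketch of these three equivalence-class scenarios; the accepts function is a hypothetical validator for the 6-to-10-character field.

# Sketch: one representative test per equivalence class.
def accepts(value):
    return 6 <= len(value) <= 10

assert not accepts("abc")         # class 0-5 characters: rejected
assert accepts("abcdefg")         # class 6-10 characters: accepted
assert not accepts("a" * 12)      # class 11-14 characters: rejected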

Black box Testing techniques
Methods for developing test cases
iii) Graph Based
This black box technique involves drawing a graph that depicts the links between the causes (inputs) and the effects (outputs) that they trigger.
The cause-effect graph technique is based on a collection of requirements and is used to determine the minimum possible number of test cases that can cover the maximum test area of the software.
The main advantage of cause-effect graph testing is that it reduces test execution time and cost.

Black box Testing techniques
Methods for developing test cases
iii) Graph Based (cont.)
Graph Representation
• A collection of nodes that represent objects,
• Links that represent the relationships between objects,
• Node weights that describe the properties of a node (e.g., a specific data value
or state behavior),
• Link weights that describe some characteristic of a link.
• Nodes are represented as circles connected by links that take a number of
different forms.
• A directed link (represented by an arrow) indicates that a relationship moves in
only one direction.
• A bidirectional link, also called a symmetric link, implies that the relationship
applies in both directions.
• Parallel links are used when a number of different relationships are established
between graph nodes.
Black box Testing techniques
Methods for developing test cases
iv)Error- Guessing
This method makes use of tester’s experience and skill for testing similar
applications to find out defects which may not be determined by formal techniques.
It is solely based on judgment and perception of the earlier end user experience.
It is usually done after formal testing techniques are applied.

Black box Testing
Types of Black Box Testing
1. Functional testing
2. Non-Functional Testing
3. Regression Testing
Advantages
- Efficient for large systems.
- Identifies contradictions in functional specifications.
- Detailed functional knowledge of the system is not a prerequisite for the tester.
- Testers and developers can work independently of each other.
Disadvantages
- Difficult to identify all possible inputs for test cases in limited time.
- The test method cannot easily be used for complex code.
- Test cases cannot be designed without knowledge of the functional specification.
Software Maintenance
- gives assurance that the software has the ability to satisfy the changing requirements of its users.
Needs of maintenance
To correct defects found in product
To improve design of software
To improve performance of software product
To add new features
To improve communication with other software
To transfer legacy software system into new software system

Types of Software Maintenance
1. Corrective Maintenance
This maintenance consists of the changes and updates performed to fix defects in the software product that are detected by end users or testers.
It is emergency maintenance, in which unscheduled changes are made to temporarily keep the software in operation.
Because of pressure from management, the maintenance team may release small pieces of code for emergency corrections; this is called patching.
About 20% of all maintenance activities are corrective maintenance.
2. Adaptive Maintenance
This maintenance involves making changes to the software product to keep it up to date with its environment.
Adaptive maintenance is the modification of software to keep it usable after a change to its operating environment. Many factors can change an application's environment, including new technical knowledge, new hardware, and new security threats.
It can also include applying changes to a component of the software application that no longer works properly because of a change made in some other component of the application.

Types of Software Maintenance
3. Perfective Maintenance
This contains the changes and updates performed to keep the software useful over a long duration.
These include new user requirements and new features that increase the reliability and performance of the software.
Perfective maintenance improves the software's functionality and usability.
Changes to the software's interface and user journey are part of perfective maintenance.
It is basically associated with incorporating new or modified user requirements.
Perfective maintenance contributes about 50% of total maintenance, the largest share of all the maintenance activities.

Types of Software Maintenance
4. Preventive Maintenance
This contains the changes and updates made to avoid future problems in the software application.
The objective is to address problems that are not major at this point but may cause serious issues in the future.
This type of maintenance is also commonly known as future-proofing. It includes making the software easier to scale in response to increased demand and fixing latent faults before they become operational faults.
Preventive maintenance contributes about 5% of all the maintenance activities.

Software Re-Engineering
Updating a software application without affecting its functionality, so that it can stay current in the market, is known as software re-engineering.
Sometimes developers observe that some components of a software product require more maintenance than others; such components are candidates for re-engineering.
The re-engineering procedure involves the following steps:
Decide which components of the software to re-engineer: the complete software, or just some components?
Perform reverse engineering to learn about the existing software's functionality.
Restructure the source code if needed, for example by converting function-oriented programs into object-oriented programs.
Restructure the data if required.
Use forward engineering ideas to generate the re-engineered software.
Software Re-Engineering
The software re-engineering process basically undergoes the following phases:
(1) Reverse engineering, (2) Restructuring, (3) Forward engineering, (4) Component reusability
1. Reverse Engineering
It is a procedure for obtaining the system specification by analyzing and understanding the existing system.
This procedure can be seen as the software development life cycle run in reverse, i.e., going from the maintenance phase back to the requirements-gathering phase.
2. Restructuring of source code
Restructuring of source code is the procedure of re-structuring and re-constructing already existing software.
It is associated with re-arranging the source code; we can perform code restructuring, data restructuring, or both.
Restructuring does not affect the functionality of the existing software. It improves the reliability and maintainability of the software.
Program parts in which errors occur frequently can be changed or updated during restructuring.
Software Re-Engineering
3. Forward Engineering
Forward engineering is the procedure of producing the desired software product from the specification in hand, which is the output of reverse engineering.
Forward engineering is similar to the ordinary software engineering procedure, with the difference that it is always performed after reverse engineering.

4. Component Reusability
A component is an element of source code which performs an independent task.
Software component reuse is the software engineering practice of creating new software
applications from existing components, rather than designing and building them from scratch.
Reverse Engineering
It is the procedure to get system specification by analyzing and understanding the existing
system.
Steps of Software Reverse Engineering:
Collecting information:
This step focuses on collecting all possible information (e.g., source code, design documents) about the software.
Examining the information:
The information collected in step 1 is studied so as to become familiar with the system.
Extracting the structure:
This step concerns identifying the program structure in the form of a structure chart, in which each node corresponds to some routine.
Recording the functionality:
During this step, the processing details of each module of the structure chart are recorded using a structured language, such as decision tables.
Reverse Engineering
Steps of Software Reverse Engineering: (cont.)
Recording data flow:
From the information extracted in step-3 and step-4, a set of data flow diagrams is derived to
show the flow of data among the processes.
Recording control flow:
The high-level control structure of the software is recorded.
Review extracted design:
The design document extracted is reviewed several times to ensure consistency and
correctness. It also ensures that the design represents the program.
Generate documentation:
Finally, in this step, the complete documentation including SRS, design document, history,
overview, etc. is recorded for future use.

Software Configuration Management
Configuration Management is a set of activities designed to manage change by

• identifying the work products that are likely to change,


• establishing relationships among them,
• defining mechanisms for managing different versions of these work products,
• controlling the changes imposed, and
• auditing and reporting on the changes made.
Software configuration management is a set of activities that have been developed to
manage change throughout the life cycle of computer software.
There are four fundamental sources of change:
• New business or market conditions.
• New stakeholder needs demand modification
• Reorganization or business growth/downsizing
• Budgetary or scheduling constraints cause a redefinition of the system or product.

Why do we need Configuration management?


The primary reasons for Implementing Technical Software Configuration Management
System are:

• Multiple people work on software that is continually being updated.

• There may be multiple versions, branches, and authors involved in a software configuration project, with a team that is geographically distributed and works concurrently.
• Changes in user requirements, policy, budget, and schedule need to be accommodated.
• The software should be able to run on various machines and operating systems.
• It helps to develop coordination among stakeholders.
• The SCM process is also beneficial for controlling the costs involved in making changes to a system.

Elements of a Configuration Management System


Four important elements should exist when a configuration management system is developed:
• Component elements—A set of tools coupled within a file management system (e.g., a database)
that enables access and management of each software configuration item.
• Process elements—A collection of procedures and tasks that define an effective approach to
change management
• Construction elements—A set of tools that automate the construction of software by ensuring
that the proper set of validated components (i.e., the correct version) have been assembled.
• Human elements—A set of tools and process features used by the software team to implement
effective SCM.



Baselines
A specification or product that has been formally reviewed and agreed upon, that thereafter serves
as the basis for further development, and that can be changed only through formal change control
procedures.
Before a software configuration item becomes a baseline, changes may be made quickly and informally. However, once a baseline is established, changes can still be made, but a specific, formal procedure must be applied to evaluate and verify each change.
A baseline is a milestone in the development of software. A baseline is marked by the delivery of one
or more software configuration items that have been approved as a consequence of a technical
review.
For example, the elements of a design model have been documented and reviewed. Errors are found
and corrected. Once all parts of the model have been reviewed, corrected, and then approved, the
design model becomes a baseline.

Software Configuration Items


SCI is the integral part of software engineering development process. SCI could be considered to be
a single section of a large specification or one test case in a large suite of tests. More realistically, an
SCI is all or part of a work product (e.g., a document, an entire suite of test cases, or a named
program component).
SCIs are organized to form configuration objects that may be catalogued in the project database with
a single name. A configuration object has a name, attributes, and is “connected” to other objects by
relationships

SCM Repository
The SCM repository is the set of mechanisms and data structures that allow a software team to
manage change in an effective manner.
It provides functions of a database management system by ensuring data integrity, sharing, and
integration.
Also, the SCM repository provides a hub for the integration of software tools
To achieve these capabilities, the repository is defined in terms of a meta-model.



The meta-model determines
• how information is stored in the repository,
• how data can be accessed by tools and viewed by software engineers,
• how well data security and integrity can be maintained, and
• how easily the existing model can be extended to accommodate new needs.

General Features and Content

Robust repository provides two different classes of services:

(1) the same types of services that might be expected from a sophisticated DBMS

(2) services that are specific to the software engineering environment.

To support SCM, the repository must have a tool set that provides support for the following
features:

Versioning. As a project progresses, many versions of individual work products will be created. The
repository must be able to save all of these versions to enable effective management of product
releases and to permit developers to go back to previous versions during testing and debugging.



The repository must be able to control a wide variety of object types, including text, graphics, bit maps, and complex documents.

Dependency tracking and change management. The repository manages a wide variety of
relationships among the data elements stored in it.

Some of these relationships are merely associations, and some are dependencies or mandatory
relationships. The ability to keep track of all of these relationships is crucial to the integrity of the
information stored in the repository.

Requirements tracing. This special function depends on link management and provides the ability to
track all the design and construction components from a specific requirements specification
(forward tracing). In addition, it provides the ability to identify which requirement generated any
given work product (backward tracing).

Configuration management. A configuration management facility keeps track of a series of


configurations representing specific project milestones or production releases.

Audit trails. An audit trail establishes additional information about when, why, and by whom
changes are made. Information about the source of changes can be entered as attributes of specific
objects in the repository.

THE SCM PROCESS


The software configuration management process defines a series of tasks that have four primary
objectives:
(1) to identify all items that collectively define the software configuration,
(2) to manage changes to one or more of these items,
(3) to facilitate the construction of different versions of an application, and
(4) to ensure that software quality is maintained as the configuration evolves over time.

There are five SCM tasks—

• identification,
• version control,
• change control,
• configuration auditing, and
• reporting



SCM tasks can be viewed as concentric layers.

Software Configuration Item(SCI) is the information that is created as part of the software
engineering process. Typical SCIs include requirement specifications, design specification, source
code, test cases and recorded results, user guides and installation manuals, executable programs,
and standards and procedures.

As an SCI moves through a layer, the actions implied by each SCM task may or may not be
applicable. For example, when a new SCI is created, it must be identified. However, if no changes are
requested for the SCI, the change control layer does not apply. The SCI is assigned to a specific
version of the software (version control mechanisms come into play). A record of the SCI (its name,
creation date, version designation, etc.) is maintained for configuration auditing purposes and reported to those who need to know it.

Identification of Objects
To control and manage SCIs, each should be separately named and then organized using an object-
oriented approach. Two types of objects can be identified: basic objects and aggregate objects.

A basic object is a unit of information that you create during analysis, design, code, or test. For
example, a basic object might be a section of a requirements specification, part of a design model,
source code for a component, or a suite of test.

An aggregate object is a collection of basic objects and other aggregate objects. For example, a
Design Specification is an aggregate object.
Activities during this process:

• Identification of configuration Items like source code modules, test case, and requirements specification.
• Identification of each SCI in the SCM repository, by using an object-oriented approach
• The process starts with basic objects, which are grouped into aggregate objects. Details of what changes are made, why, when, and by whom are recorded.
• Every object has its own features that identify its name that is explicit to all other objects
• List of resources required such as the document, the file, tools, etc.

Example:

Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2 stands for the version number of the file.



Version Control
Version control means creating versions/specifications of the existing product so that new products can be built with the help of the SCM system. A description of versioning is given below:
A version control system is directly integrated with four major capabilities:
(1) a project database (repository) that stores all relevant configuration objects,
(2) a version management capability that stores all versions of a configuration object
(3) a make facility that enables collection of all relevant configuration objects and construct a
specific version of the software
(4) an issue-tracking (also called bug-tracking) capability that enables the team to record and track the status of all outstanding issues associated with each configuration object.
A number of version control systems establish a change set—a collection of all changes that are
required to create a specific version of the software. A number of named change sets can be
identified for an application. This enables you to construct a version of the software by specifying
the change sets (by name) that must be applied to the baseline configuration.
To accomplish this, a system modelling approach is applied. The system model contains:
(1) a template that includes a component hierarchy and a “build order” for the components that
describes how the system must be constructed,
(2) construction rules, and
(3) verification rules

Change Control
Change control is a procedural method which ensures quality and consistency when changes are
made in the configuration object.
A change request is submitted and evaluated to assess technical merit, potential side effects, overall
impact on other configuration objects, and the projected cost of the change.
The results of the evaluation are presented as a change report, which is used by a change control
authority (CCA)—a person or group that makes a final decision on the status and priority of the
change.
An engineering change order (ECO) is generated for each approved change. The ECO describes the
change to be made, the constraints that must be respected, and the criteria for review and audit.
The object to be changed can be placed in a directory that is controlled by the software engineer making the change.
The version control mechanisms, integrated within the change control process, implement two
important elements of change management.
Access Control and
Synchronization control
Before an SCI becomes a baseline, the requested changes are applied. The developer checks whether the changes are justified by the project, and the technical requirements are checked properly. After approval from the CCA, a baseline may be created and change control is implemented.
Once the final product is released, any further change must follow the formal change procedure outlined above.


Configuration Audit
To ensure that the change has been properly implemented, there must be:

(1) technical reviews and

(2) the software configuration audit



The technical review focuses on the technical correctness of the configuration object that has been
modified.

A software configuration audit complements the technical review by assessing a configuration object
for characteristics that are generally not considered during review.

The audit asks and answers the following questions:

1. Has the change specified in the ECO been made? Have any additional modifications been
incorporated?

2. Has a technical review been conducted to assess technical correctness?

3. Has the software process been followed and have software engineering standards been properly
applied?

4. Has the change been “highlighted” in the SCI? Have the change date and change author been
specified? Do the attributes of the configuration object reflect the change?

5. Have SCM procedures for noting the change, recording it, and reporting it been followed?

6. Have all related SCIs been properly updated?

Status Reporting
Configuration status reporting (sometimes called status accounting) is an SCM task that answers the
following questions:

(1) What happened? (2) Who did it? (3) When did it happen? (4) What else will be affected?

Each time an SCI is assigned new or updated identification, a CSR (configuration status reporting)
entry is made.
Each time a change is approved by the CCA (i.e., an ECO is issued), a CSR entry is made.
Each time a configuration audit is conducted, the results are reported as part of the CSR task.
Output from CSR may be placed in an online database or website, so that software developers or
support staff can access change information by keyword category.



Agile Process Model



What is “Agility”?
• Effective (rapid and adaptive) response to change (team members, new
technology, requirements)
• Effective communication among all team members, including technological and business people, software engineers, and managers.
• Drawing the customer into the team, eliminating the "us and them" attitude. Planning in an uncertain world has its limits, and plans must be flexible.
• Organizing a team so that it is in control of the work performed
• Emphasize an incremental delivery strategy as opposed to intermediate
products that gets working software to the customer as rapidly as feasible.
• Rapid, incremental delivery of software
• The development guidelines stress delivery over analysis and design (although these activities are not discouraged), and active, continuous communication between developers and customers.

What is “Agility”?
• Why? The modern business environment is fast-paced and ever-
changing. It represents a reasonable alternative to conventional
software engineering for certain classes of software projects. It has
been demonstrated to deliver successful systems quickly.
• What? It may be termed "software engineering lite." The basic activities (communication, planning, modeling, construction, and deployment) remain, but they morph into a minimal task set that pushes the team toward construction and delivery sooner.
• The only really important work product is an operational "software increment" that is delivered.

Agility and the Cost of Change
• Conventional wisdom is that the cost of change increases nonlinearly as a project progresses. It is relatively easy to accommodate a change when a team is gathering requirements early in a project; if changes occur then, the cost of the extra work is minimal. But if, in the middle of validation testing, a stakeholder requests a major functional change, then the change requires a modification of the architectural design, construction of new components, changes to other existing components, new testing, and so on. Costs escalate quickly.

• A well-designed agile process may "flatten" the cost-of-change curve by coupling incremental delivery with agile practices such as continuous unit testing and pair programming. Thus the team can accommodate changes late in the software project without dramatic cost and time impact.

(Figure: cost of change as a function of development schedule progress, for conventional and agile processes.)
Agility Principles - I

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
2. Welcome changing requirements, even late in development. Agile processes harness change for
the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and
trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team
is face–to–face conversation.

Agility Principles - II
7.Working software is the primary measure of progress.
8.Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.
9.Continuous attention to technical excellence and good design enhances agility.
10. There must be Simplicity in development.
11. The best architectures, requirements, and designs emerge from self–organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and
adjusts its behavior accordingly.

Extreme Programming (XP)
• The most widely used agile process, originally proposed by Kent Beck in 2004. It uses an object-
oriented approach as its preferred development paradigm.
• It encompasses a set of rules and practices that occur within the context of four framework
activities: planning, design, coding, and testing
• XP Planning
• Begins with the listening, leads to creation of “user stories” that describes required output, features, and
functionality. Each story is written by the customer and is placed on an index card. Customer assigns a
value(i.e., a priority) to each story based on the overall business value of the feature or function
• The agile team assesses each story and assigns a cost in development weeks (if more than 3 weeks, the customer is asked to split the story into smaller ones).
• Working together, stories are grouped for the next deliverable increment (release).
• A commitment (stories to be included, delivery date and other project matters) is made. Three ways: 1.
Either all stories will be implemented in a few weeks, 2. high priority stories first, or 3. the riskiest stories will be
implemented first.
• After the first increment “project velocity”, namely number of stories implemented during the first release
is used to help define subsequent delivery dates for other increments. Customers can add stories, delete
existing stories, change values of an existing story, split stories as development work proceeds.
Extreme Programming (XP)
• XP Design ( occurs both before and after coding as refactoring is encouraged)
• Follows the KIS principle (keep it simple) Nothing more nothing less than the story.
• Encourage the use of CRC (class-responsibility-collaborator) cards in an object-oriented context. CRC cards identify
and organize the object oriented classes that are relevant to the current software increment.
• For difficult design problems, suggests the creation of “spike solutions”—a design prototype for that portion is
implemented and evaluated. The intent is to lower risk when true implementation starts.
• Encourages “refactoring”—an iterative refinement of the internal program design that does not alter the external behavior yet improves the internal structure; it minimizes the chance of bugs and makes the code more efficient and easier to read.
• XP Coding
• Recommends the construction of a unit test for a story before coding commences, so the implementer can focus on exactly what must be implemented to pass the test (see the sketch after this list).
• Encourages "pair programming": two people work together at one workstation, providing real-time problem solving and real-time review for quality assurance. Each person takes a slightly different role (e.g., one works on coding details while the other checks coding standards).
• XP Testing
• All unit tests are executed daily and ideally should be automated. Regression tests are conducted to test current and previous components.
• "Acceptance tests" are defined by the customer and executed to assess customer-visible functionality.
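To illustrate the unit-test-first practice, here is a minimal sketch assuming a hypothetical story ("compute an order total with a percentage discount"); the function name and figures are invented for the example. The test is written first, and only enough production code is then written to make it pass.

import unittest

def order_total(prices, discount=0.0):
    # Production code, written after the test below was in place.
    return sum(prices) * (1.0 - discount)

class OrderTotalTest(unittest.TestCase):
    def test_discount_applied(self):
        # Encodes the story's acceptance criterion before coding begins.
        self.assertAlmostEqual(order_total([100.0, 50.0], discount=0.1), 135.0)

if __name__ == "__main__":
    unittest.main()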
Extreme Programming (XP)
[Figure: the XP process. Planning draws on user stories, values, acceptance test criteria, and the iteration plan; design uses simple design, CRC cards, spike solutions, and prototypes; coding relies on refactoring and pair programming; each release produces a software increment verified by unit tests, continuous integration, and acceptance testing, from which the project velocity is computed.]
Extreme Programming (XP)
XP Values
• Communication
• Simplicity
• Feedback
• Courage
• Respect
Industrial XP
IXP incorporates six new practices that are designed to help ensure that an XP project works successfully for significant projects within a large organization.
Readiness assessment. The IXP team ascertains whether all members of the project community (e.g., stakeholders,
developers, management) are on board, have the proper environment established, and understand the skill levels involved.
Project community. The IXP team determines whether the right people, with the right skills and training have been
staged for the project. The “community” encompasses technologists and other stakeholders.
Project chartering. The IXP team assesses the project itself to determine whether an appropriate business justification
for the project exists.
Test-driven management. An IXP team establishes a series of measurable “destinations” that assess progress to date
and then defines mechanisms for determining whether or not these destinations have been reached.
Retrospectives. An IXP team conducts a specialized technical review after a software increment is delivered. Called a
retrospective, the review examines “issues, events, and lessons-learned” across a software increment and/or the entire
software release.
Continuous learning. The IXP team is encouraged to learn new methods and techniques that can lead to a higher-quality
product.
Scrum
A software development method that was conceived by Jeff Sutherland and later further developed by Schwaber and Beedle in the early 1990s.
Scrum emphasizes the use of a set of software process patterns that have proven effective for
projects with tight timelines, changing requirements, and business criticality.
It is designed for teams of 3 to 9 members, who break their work into actions that can be completed
within time boxed iterations called “sprints”.
Progress tracking and re-planning are done in 15-minute stand-up meetings called daily scrums.
Backlog—a prioritized list of project requirements or features that provide business value for the
customer. Items can be added to the backlog at any time (this is how changes are introduced). The
product manager assesses the backlog and updates priorities as required.
Sprints—consist of the work units required to achieve a requirement defined in the backlog that must fit into a predefined time-box (typically 30 days). Changes (e.g., backlog work items) are not introduced during the sprint; a small sketch of the backlog/sprint mechanics follows.
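As a rough illustration of pulling prioritized backlog items into a fixed time-box, here is a minimal sketch; the feature names, priorities, and day estimates are assumed, and real Scrum teams plan sprints by discussion, not by algorithm.

import heapq

# Backlog as a priority queue of (priority, feature); lower number = higher priority.
backlog = []
for priority, feature in [(1, "login"), (3, "reports"), (2, "search")]:
    heapq.heappush(backlog, (priority, feature))

def plan_sprint(backlog, capacity_days, estimates):
    # Pull the highest-priority items that still fit the fixed time-box.
    sprint, used = [], 0
    while backlog and used + estimates[backlog[0][1]] <= capacity_days:
        _, feature = heapq.heappop(backlog)
        sprint.append(feature)
        used += estimates[feature]
    return sprint

estimates = {"login": 10, "search": 12, "reports": 15}   # days (assumed)
print(plan_sprint(backlog, capacity_days=30, estimates=estimates))
# -> ['login', 'search']; 'reports' waits, since changes are not made mid-sprint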
Scrum (cont.)
Scrum meetings—are short (typically 15-minute) meetings held daily by the Scrum team. Three key
questions are asked and answered by all team members :
• What did you do since the last team meeting?
• What obstacles are you encountering?
• What do you plan to accomplish by the next team meeting?
Demos deliver the software increment to the customer so that the newly developed functionality can be demonstrated and evaluated. It is important to note that a demo may not contain all planned functionality, but rather those functions that can be delivered within the time-box that was established.
Product Owner: the person responsible for the product backlog.
Scrum Master: the person responsible for the scrum process; conducts the meeting and evaluates the responses from each person.
Scrum Team: consists of the product owner, the development team, and the scrum master.
An important benefit of the Scrum meeting is that it uncovers critical issues as early as possible.
Scrum Process Flow
KANBAN Model
Kanban is a popular framework which is used to implement agile software development.
It relies on real-time communication of capacity and complete transparency of work.
In Agile Kanban, the Kanban board is used to visualize the workflow.
Kanban Board
The Kanban board is normally put up on a wall in the project room. It is a physical or digital (e.g., JIRA) board designed to help teams visualize their work at different stages and processes, representing the stages of work as columns of cards.
The Kanban board has columns and story cards. The columns are workflow states, and the cards represent the actual tasks that team members are performing.
It has columns that represent the status of the work like
• To-do,
• Dev
• Testing
• Done.
KANBAN Model (cont.)
• The Kanban cards are essential pieces on the Kanban board, as they represent the work that the team is working on. These cards will have:
• Priority
• Owner
• Type
• Due date
• A column on the Kanban board represents a work stage, and you can place a WIP (work in progress) limit on the column. The WIP limit is the maximum number of cards that can stay in that column.
• Each of the columns can hold at most WIP-limit cards; the cards represent the actual work.
• Since Kanban project management uses a pull-based system, as and when a developer is free, he or she can pull a card from the to-do column to the dev column, as the sketch below illustrates.
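To make the pull rule and WIP limit concrete, here is a minimal sketch; the column names, limits, and card names are invented for the example.

# Kanban board as lists of cards per column; WIP limits cap each column (values assumed).
board = {"To-do": ["card-1", "card-2", "card-3"], "Dev": [], "Testing": [], "Done": []}
wip_limit = {"To-do": 5, "Dev": 2, "Testing": 2, "Done": 999}

def pull(src, dst):
    # A free team member pulls the next card, but only if dst is under its WIP limit.
    if board[src] and len(board[dst]) < wip_limit[dst]:
        board[dst].append(board[src].pop(0))
        return True
    return False  # blocked: the full column signals a bottleneck

pull("To-do", "Dev")          # a free developer pulls card-1
pull("To-do", "Dev")          # pulls card-2; Dev is now at its WIP limit of 2
print(pull("To-do", "Dev"))   # False - the limit blocks a third pull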
KANBAN Model (cont.)
Kanban vs. Scrum:
• Kanban is an ongoing process; Scrum sprints have start and stop dates.
• Kanban has no formal roles; in Scrum, the role of each team member is clearly defined (product owner, development team, and scrum master). Both kinds of teams are self-organized.
• A Kanban board is used throughout the lifecycle of a project; a Scrum board is cleared and recycled after each sprint.
• The Kanban board is more flexible with regard to tasks and timing; its tasks can be reprioritized, reassigned, or updated as needed. The Scrum board has a fixed number of tasks and a strict deadline to complete them.
Evolutionary Models
• A software system evolves over time, as requirements often change while development proceeds. Thus, a straight line to a complete end product is not possible. However, a limited version must be delivered to meet competitive pressure.
• Usually a set of core product or system requirements is well understood, but the details and extensions have yet to be defined.
• You need a process model that has been explicitly designed to accommodate a product that evolves over time.
• Evolutionary models are iterative, enabling you to develop increasingly complete versions of the software.
• Two types are introduced here, namely the Prototyping and Spiral models.
Evolutionary Models: Prototyping
• When to use: the customer defines a set of general objectives but does not identify detailed requirements for functions and features, or the developer is unsure of the efficiency of an algorithm or of the form that human-computer interaction should take.
• Steps: begins with communication, meeting with stakeholders to define the objectives, identify whatever requirements are known, and outline areas where further definition is mandatory. A quick plan for prototyping and modeling (quick design) follows. The quick design focuses on a representation of those aspects of the software that will be visible to end users (interface and output). The design leads to the construction of a prototype, which is deployed and evaluated; the stakeholders' comments are used to refine the requirements.
• Both stakeholders and software engineers like the prototyping paradigm: users get a feel for the actual system, and developers get to build something immediately. However, engineers may make compromises in order to get a prototype working quickly, and a less-than-ideal choice may be adopted permanently once everyone gets used to it.
Prototyping (cont.)

Advantages:
• Reduced time and costs (though this can become a disadvantage if the developer loses time developing the prototypes).
• Improved and increased user involvement.
• Missing functionality and confusing or difficult functions can be identified easily.
• Provides an environment in which unclear objectives can be resolved.
Disadvantages:
• Insufficient analysis; users may confuse the prototype with the finished system.
• High chance of the developer misunderstanding user objectives.
• Excessive development time of the prototype.
• Expense of implementing prototyping.
Prototyping (cont.)
• Prototyping has several types, such as:
• Throwaway prototyping: prototypes that are eventually discarded rather than becoming part of the finally delivered software.
• Evolutionary prototyping: prototypes that evolve into the final system through an iterative incorporation of user feedback.
• Incremental prototyping: the final product is built as separate prototypes, which are merged into an overall design at the end.
• Extreme prototyping: used mainly for web applications. It breaks web development into three phases, each one based on the preceding one: the first phase is a static prototype consisting mainly of HTML pages; in the second phase, the screens are programmed and made fully functional using a simulated services layer; in the third phase, the services are implemented.
Prototyping
[Figure: the prototyping cycle. Communication leads to a quick plan, then modeling (quick design), then construction of a prototype, and finally deployment, delivery, and feedback, which loops back to communication.]
Evolutionary Models: The Spiral
• It couples the iterative nature of prototyping with the controlled and
systematic aspects of the waterfall model and is a risk-driven process
model generator that is used to guide multi-stakeholder concurrent
engineering of software intensive systems.
• Two main distinguishing features: one is cyclic approach for
incrementally growing a system’s degree of definition and
implementation while decreasing its degree of risk. The other is a set of
anchor point milestones for ensuring stakeholder commitment to
feasible and mutually satisfactory system solutions.
• A series of evolutionary releases are delivered. During the early
iterations, the release might be a model or prototype. During later
iterations, increasingly more complete version of the engineered system
are produced.
Evolutionary Models: The Spiral
• The first circuit in the clockwise direction might result in the
product specification; subsequent passes around the spiral might
be used to develop a prototype and then progressively more
sophisticated versions of the software. Each pass results in
adjustments to the project plan. Cost and schedule are adjusted
based on feedback. Also, the number of iterations will be
adjusted by project manager.
• Good to develop large-scale system as software evolves as the
process progresses and risk should be understood and properly
reacted to. Prototyping is used to reduce risk.
• However, it may be difficult to convince customers that it is
controllable as it demands considerable risk assessment
expertise.
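As a loose illustration of the cyclic, risk-driven idea, here is a minimal sketch; the artifact names and the shrinking-risk function are invented, and real spiral projects assess risk analytically rather than by formula.

# A hedged sketch of spiral circuits: each pass delivers a more complete
# artifact and re-assesses risk before the next circuit (values invented).
artifacts = ["product specification", "prototype", "release v1", "release v2"]

def assess_risk(circuit):
    # Stand-in for real risk analysis; here risk simply shrinks each circuit.
    return 1.0 / (circuit + 2)

for circuit, artifact in enumerate(artifacts):
    risk = assess_risk(circuit)
    print(f"circuit {circuit}: deliver {artifact}; residual risk {risk:.2f}")
    # In a real spiral, cost, schedule, and the number of remaining
    # iterations are re-planned at this point, based on feedback and risk.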
Evolutionary Models: The Spiral
Advantages:
Easy to monitor and more effective.
Reduces the number of risks in the software before they become major problems.
Suitable for high-risk projects.
Cost and time estimates are more realistic.
Changes can be accommodated in the later stages of development.
Disadvantages:
If a major risk is not identified in an early iteration, it may cause serious problems in later stages.
Cost is usually high.
Not suitable for low-risk projects.
Three Concerns on Evolutionary
Processes
• The first concern is that prototyping poses a problem for project planning because of the uncertain number of cycles required to construct the product.
• Second, these models do not establish the maximum speed of the evolution. If the evolution occurs too fast, without a period of relaxation, it is certain that the process will fall into chaos. On the other hand, if the speed is too slow, productivity could be affected.
• Third, software processes should be focused on flexibility and extensibility
rather than on high quality. We should prioritize the speed of the development
over zero defects. Extending the development in order to reach high quality
could result in a late delivery of the product when the opportunity niche has
disappeared.
Concurrent Model
• Also known as concurrent engineering.
• It can be represented as a series of framework activities, tasks, actions, and their associated states.
• The figure shows that the modeling activity may be in any one of the states at any given time. For example, the communication activity has completed its first iteration and is in the awaiting changes state. The modeling activity, which was in the inactive state, now makes a transition into the under development state. If the customer indicates that requirements must change, the modeling activity moves from the under development state into the awaiting changes state.
• Concurrent modeling is applicable to all types of software development and provides an accurate
picture of the current state of a project. Rather than confining software engineering activities, actions
and tasks to a sequence of events, it defines a process network. Each activity, action or task on the
network exists simultaneously with other activities, actions or tasks. Events generated at one point
trigger transitions among the states.
Concurrent Model
Inactive: no activity is performed.
Under development: some activity is being performed.
Awaiting changes: the customer has requested changes.
Under review: the review/testing activity has started.
Under revision: all required changes are being made.
Baselined: the work product conforms to the SRS document.
Done: the project is completed and deployed.
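As an illustration of how events trigger transitions among these states, here is a minimal sketch; the event names and the (partial) transition table are invented for the example.

# One activity's state machine in the concurrent model (partial, illustrative).
TRANSITIONS = {  # (current state, event) -> next state
    ("inactive", "work started"): "under development",
    ("under development", "customer change request"): "awaiting changes",
    ("awaiting changes", "changes scheduled"): "under revision",
    ("under revision", "revision complete"): "under review",
    ("under review", "review passed"): "baselined",
    ("baselined", "deployed"): "done",
}

def fire(state, event):
    # Events that do not apply in the current state leave it unchanged.
    return TRANSITIONS.get((state, event), state)

state = "inactive"
state = fire(state, "work started")             # -> "under development"
state = fire(state, "customer change request")  # -> "awaiting changes"
print(state)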
Concurrent Model
Advantages:
This model is applicable to all types of software development
processes.
Gives a clear picture of current state of the project.
Easy to use and understand.
New functionalities can be added late in projects.
Disadvantages:
Since all stages in this model work concurrently, any change in the requirements from the client may halt the process.
It requires excellent communication among team members.
SRS must be updated at regular intervals to reflect the changes.
Risk Management
Risk analysis and management are a series of steps that help a software team
understand and manage uncertainty.
Different categories of risks
Project risks threaten the project plan.
If project risks become real, the project schedule is likely to slip and costs will increase.
It identifies potential budgetary, schedule, resource, stakeholder, and
requirements problems and their impact on a software project.
Technical risks threaten the quality and timeliness of the software to be
produced.
If a technical risk becomes a reality, implementation may become difficult or
impossible.
Technical risks identify potential design, implementation, interface, verification, and maintenance problems.
Business risks threaten the viability of the software to be built and often
jeopardize the project or the product.
Some of the causes of Business Risks are
(1) building an excellent product or system that no one really wants (market
risk),
(2) building a product that no longer fits into the overall business strategy for
the company (strategic risk)
(3) building a product that the sales force doesn’t understand how to sell (sales
risk)
(4) losing the support of senior management due to a change in focus or a
change in people (management risk)
(5) losing budgetary or personnel commitment (budget risks).
Known risks are those that can be uncovered after careful evaluation of the
project plan, the business and technical environment in which the project is
being developed, and other reliable information sources



Predictable risks are extrapolated from past projects.
Unpredictable risks can and do occur, but they are extremely difficult to identify in advance.

Risk Assessment
Risk assessment involves the following elements:
a) Risk Identification – Produces lists of the project-specific risk items likely
to compromise project’s success.
b) Risk Analysis – Assesses the loss probability and loss magnitude for each
identified item, and it assesses compound risks in risk-item interactions
c) Risk Prioritization - Produces a ranked ordering of the risk items
identified and analyzed.

Risk Identification
There are two distinct types of risks within each of the risk categories:
• Generic risks
• Product-specific risks
Generic risks are a potential threat to every software project. Product-specific
risks can be identified only by those with a clear understanding of the
technology, the people, and the environment that is specific to the software
that is to be built. To identify product-specific risks, the project plan and the
software statement of scope are examined.
One method for identifying risks is to create a risk item checklist. This helps to identify a subset of predictable risks in the following subcategories.
Product size—Risks associated with the overall size of the software to be built
or modified.
Business impact—Risks associated with constraints imposed by management
or the marketplace.



Stakeholder characteristics—Risks associated with the sophistication of the stakeholders and the developer's ability to communicate with stakeholders in a timely manner.
Process definition—Risks associated with the degree to which the software
process has been defined and is followed by the development organization.
Development environment—Risks associated with the availability and quality
of the tools to be used to build the product.
Technology to be built—Risks associated with the complexity of the system to
be built and the “newness” of the technology that is packaged by the system.
Staff size and experience—Risks associated with the overall technical and
project experience of the software engineers who will do the work.

Risk Analysis
i) Qualitative Risk Analysis
It is the process of prioritizing risks by combining and assessing their
probability of occurrence and impact.
It helps managers to lessen the uncertainty level and concentrate on high
priority risks.
Planning for risk management should take place early in the project.
The inputs for qualitative project risk analysis and management include:
• Risk management plan
• Scope baseline
• Risk register
• Enterprise environmental factors
• Organizational process assets
The output of this stage would be:
• Project documents updates
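As a small sketch of how probability and impact might be combined to rank risks qualitatively, consider the following; the 1-to-5 scales and the example risks are invented for illustration.

# Rank risks by a probability x impact score (scales and entries assumed).
risks = [
    {"name": "key staff turnover", "probability": 4, "impact": 5},
    {"name": "vendor API delays", "probability": 2, "impact": 3},
    {"name": "scope creep", "probability": 5, "impact": 2},
]
for r in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    print(r["name"], r["probability"] * r["impact"])
# -> key staff turnover 20, scope creep 10, vendor API delays 6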

ii) Quantitative Risk Analysis

It is the procedure of numerically analysing the effect of identified risks on overall project objectives.



In order to minimize project uncertainty, this kind of analysis is quite helpful for decision making.
The inputs of this stage are:
• Risk management plan
• Cost management plan
• Schedule management plan
• Risk register
• Enterprise environmental factors
• Organizational process assets
While the output will be:
• Project documents updates

Assessing Risk Impact


Three factors affect the consequences if a risk does occur:
Its Nature: This indicates the problems that are likely if the risk occurs.
Its Scope: This combines the severity of the risk with its overall distribution
(how much was affected)
Its timing: This considers when and for how long the impact will be felt.
The overall risk exposure formula is RE = P x C, where
P = the probability of occurrence for a risk, and
C = the cost to the project should the risk actually occur (the loss due to the risk, i.e., its impact).
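As a worked example with assumed numbers: suppose there is a 60% probability that key technical staff will leave, and the estimated loss (rehiring, retraining, lost work) is $32,000.

# Risk exposure RE = P x C, using the assumed figures above.
p = 0.60           # probability of occurrence
c = 32_000         # cost (loss) to the project if the risk occurs, in dollars
re = p * c
print(re)          # 19200.0 -> roughly $19,200 of exposure to budget for this risk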

RISK MITIGATION , MONITORING, AND MANAGEMENT


An effective strategy to deal with risks must consider three issues:
• risk avoidance
• risk monitoring
• risk management and contingency planning.
If a software team adopts a proactive approach to risk, avoidance is always
the best strategy. This is achieved by developing a plan for risk mitigation.



As the project proceeds, risk-monitoring activities commence. The project
manager monitors factors that may provide an indication of whether the risk
is becoming more or less likely.
Risk management and contingency planning assumes that mitigation
efforts have failed and that the risk has become a reality.

THE RMMM PLAN


Risk management steps can be organized into a separate risk mitigation, monitoring and management (RMMM) plan.
Some software teams develop a risk information sheet (RIS) for each risk instead of a formal RMMM document.
The format of the RIS is given below.
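Since the sheet itself is a form, here is a hedged sketch of the fields a typical Pressman-style RIS records; the exact field set and identifier scheme vary by organization.

# Typical fields of a risk information sheet (RIS); names are illustrative.
from dataclasses import dataclass

@dataclass
class RiskInformationSheet:
    risk_id: str                # e.g. "P02-4-32" (identifier scheme is assumed)
    date: str                   # when the risk was recorded
    probability: float          # estimated likelihood of occurrence, 0..1
    impact: str                 # e.g. "catastrophic", "critical", "marginal"
    description: str            # statement of the risk
    refinement_context: str     # conditions under which the risk may occur
    mitigation_monitoring: str  # planned mitigation and monitoring steps
    current_status: str         # e.g. "mitigation steps initiated"
    originator: str             # who raised the risk
    assigned_to: str            # who is responsible for managing it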



Once the RMMM has been documented and the project has begun, risk mitigation and monitoring steps commence.
Risk mitigation is a problem avoidance activity.
Risk monitoring is a project tracking activity with three primary objectives:
(1) to assess whether predicted risks do, in fact, occur;
(2) to ensure that risk aversion steps defined for the risk are being properly
applied; and
(3) to collect information that can be used for future risk analysis. In many cases, the problems that occur during a project can be traced to more than one risk. Another job of risk monitoring is to attempt to allocate origin, that is, to determine which risk(s) caused which problems throughout the project.



Software Quality Assurance

Software quality assurance (SQA) is the process that ensures the software product meets
the organization’s quality specifications.

SQA is a set of activities that verifies that everyone involved with the project has correctly
implemented all procedures and processes.

Instead of making quality checks after completion, software quality assurance checks for
quality issues in each development phase.

These are the characteristics common to all software quality assurance processes:

• A defined quality management approach
• Holding formal technical reviews
• Implementing a multi-testing strategy
• Using effective software engineering technology
• A measurement and reporting mechanism
Additionally, all software quality assurance programs contain the following ten vital
elements:

1. Software engineering standards
2. Technical reviews and audits
3. Software testing for quality control
4. Error collection and analysis
5. Change management
6. Educational programs
7. Vendor management
8. Security management
9. Safety
10. Risk management

SQA Techniques
Here are some examples of how quality assurance professionals implement SQA.
• Auditing.
This technique involves QA professionals inspecting the work to see if all standards
are followed.
• Reviewing.
In-house and outside stakeholders meet to examine the product, make comments on
what they find, and get approval.
• Code Inspection.
This technique is a formal code review using static testing to find bugs and defects.
This inspection requires a trained peer or mediator, not the original code author. The
inspection is based on established rules, checklists, and entry and exit criteria.
• Design Inspection.
Design inspection employs a checklist that covers the following design areas:
o General requirements and design
o Functional and Interface specifications
o Conventions
o Requirement traceability
o Structures and interfaces
o Logic
o Performance
o Error handling and recovery
o Testability, extensibility
o Coupling and cohesion
• Simulation.
Simulation models real-life conditions to virtually examine system behavior.
• Functional Testing.
This technique is a form of black-box testing where the QA person verifies what the
system does without caring about how it got there.
• Walkthroughs.
Walkthroughs are peer reviews where the developer guides development team
members through the product. Members then raise queries, suggest alternatives,
and make comments about possible errors, standard violations, or any possible
issues.
• Stress Testing.
Nothing shows how good a program is better than running it under high-demand conditions.
• Six Sigma.
This is a well-respected quality assurance philosophy that strives for nearly perfect products or services. Six Sigma's main objective is a 99.99966% defect-free product, i.e., at most 3.4 defects per million opportunities.
What is Six Sigma?

Six Sigma is a set of methodologies and tools used to improve business processes
by reducing defects and errors, minimizing variation, and increasing quality and
efficiency. The goal of Six Sigma is to achieve a level of quality that is nearly perfect,
with only 3.4 defects per million opportunities. This is achieved by using a structured
approach called DMAIC (Define, Measure, Analyze, Improve, Control) to identify and
eliminate causes of variation and improve processes.
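To make the 3.4-defects-per-million target concrete, here is a small worked computation; the defect and unit counts are assumed for the example.

# Defects per million opportunities (DPMO), with assumed counts.
defects = 25                  # defects found
units = 10_000                # units produced
opportunities = 4             # defect opportunities per unit (assumed)
dpmo = defects / (units * opportunities) * 1_000_000
print(dpmo)                   # 625.0 -> far above the Six Sigma target of 3.4 DPMO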

Six Sigma is a disciplined and data-driven approach widely used in project management to achieve process improvement and minimize defects. It provides a systematic framework to identify and eliminate variations that can impact project performance.

Formal Technical Review


• Formal technical review (FTR) is a software quality assurance activity performed by software engineers.
Objectives of FTR

1. FTR is useful to uncover errors in logic, function, and implementation for any representation of the software.
2. The purpose of FTR is to ensure that the software meets the specified requirements.
3. It also ensures that the software is represented according to predefined standards.
4. It helps achieve uniformity in the software development process.
5. It makes the project more manageable.

Steps in FTR
1. The review meeting

• Every review meeting should be conducted under the following constraints:

➢ Involvement of people: between three and five people should be involved in the review.
➢ Advance preparation: advance preparation should occur, but it should be brief, requiring at most two hours of work per person.
➢ Short duration: the review meeting should last less than two hours.
• Rather than attempting to review the entire design, walkthroughs are conducted for individual modules or for small groups of modules.
• The focus of the FTR is on a work product.
• The review leader is responsible for evaluating the product for readiness. Copies of the product material are then distributed to the reviewers. The producer then "walks through" the product, explaining the material, while the reviewers raise issues based on their advance preparation.
• One of the reviewers becomes the recorder, who notes all the important issues raised during the review. When errors are discovered, the recorder notes each one.
• At the end of the review, the attendees decide whether or not to accept the product, with or without modification.

2. Review reporting and record keeping

• During the FTR, the recorder actively records all the issues that have been raised.
• At the end of the meeting, all of the raised issues are consolidated and a review issue list is prepared.
• Finally, a formal technical review summary report is produced.

3. Review guidelines

• Guidelines for conducting a formal technical review must be established in advance. These guidelines must be distributed to all reviewers, agreed upon, and then followed.
Walkthrough:
A walkthrough is a method of conducting an informal group or individual review. In a walkthrough, the author describes and explains the work product in an informal meeting to peers or a supervisor to get feedback. Here, the validity of the proposed solution for the work product is checked.
It is cheaper to make changes while the design is on paper rather than at conversion time. A walkthrough is a static method of quality assurance; walkthroughs are informal meetings, but with a purpose.

The following are the objectives of a walkthrough:
• To understand and learn about the development of the software product to date.
• To detect defects in the developed software product.
• To explain the information present in the document.
• To verify and discuss the validity of the proposed system.
• To report the suggestions given by other employees.

Inspection:
An inspection is defined as a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. Inspections improve the reliability, availability, and maintainability of a software product.
Anything readable that is produced during software development can be inspected. Inspections can be combined with structured, systematic testing to provide a powerful tool for creating defect-free programs.
Inspection activity follows a specified process, and participants play well-defined roles. An inspection team consists of three to eight members who play the roles of moderator, author, reader, recorder, and inspector.

For example, a designer can act as an inspector during code inspections, while a quality assurance representative can act as a standards enforcer.
