Software Development Ac and Lal Bahadur

The document provides an overview of software and software engineering, defining software as a set of programs and documentation designed for specific functions. It discusses the characteristics of good software, types of software applications, and the importance of software engineering in improving quality, reliability, and productivity. Additionally, it covers software processes, metrics, and the distinction between software products and processes.


Unit 1

Introduction

Software:
• Software is a set of computer programs, together with the documentation & configuration data needed to make those programs operate correctly.
• Software is a set of programs, which is designed to perform a well-defined function.
• Software is computer programs and associated documentation.
• A software system consists of a number of programs, configuration files (used to set up programs),
system documentation (describes the structure of the system) and user documentation (explains
how to use system).
• Software products may be developed for a particular customer or may be developed for a general
market.

Software products may be:


 Generic:
• These are stand-alone systems that are produced by a development organization and sold on the
open market to any customer who is able to buy them. (i.e. developed to be sold to a range of
different customers). e.g. Databases, Office packages, Drawing Packages etc.

 Bespoke (custom):
• These are the systems which are commissioned by a particular customer. A software contractor
develops the software especially for that customer. (i.e. developed for a single customer
according to their specification). e.g.: Control system for electronic device, software to support
particular business process.

What is Software Engineering?

Software engineering is defined as the process of analyzing user requirements and then designing, building, and testing a software application that will satisfy those requirements.

Let's look at the various definitions of software engineering:

• IEEE, in its standard 610.12-1990, defines software engineering as the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.
• Fritz Bauer defined it as 'the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines'.
• Boehm defines software engineering as 'the practical application of scientific knowledge to the creative design and building of computer programs. It also includes associated documentation needed for developing, operating, and maintaining them.'

Characteristics of good software:


 A software product can be judged by what it offers and how well it can be used. The software must satisfy the following grounds:
a) Operational
b) Transitional
c) Maintenance
Well-engineered and crafted software is expected to have the following characteristics:

a) Operational:
 This tells us how well software works in operations. It can be measured on:
• Budget
• Usability
• Efficiency
• Correctness
• Functionality
• Dependability
• Security
• Safety

b) Transitional:
 This aspect is important when the software is moved from one platform to another:
• Portability
• Interoperability
• Reusability
• Adaptability

c) Maintenance:
 This aspect briefs about how well a software has the capabilities to maintain itself in the ever-
changing environment:
• Modularity
• Maintainability
• Flexibility
• Scalability

Functions provided by Software:


The functions provided by software are as follows:
1. It delivers the most important product, i.e. information or data.
2. It provides a gateway to worldwide information.
3. It provides the means of exchanging information.
4. It manages business information.

Software Applications (Categories of Computer Software):


 Software applications are used in different areas, as follows:
1. System Software:
• System software is a collection of programs written to service other programs.
• E.g. Compilers, Editors, File Management Utilities.

2. Real-time software:
• Software that monitors/analyzes/controls real-world events as they occur is called real-time software.
• E.g. Real time manufacturing process control.

3. Business Software:
• Business information processing is the largest single software application area. They include
software that accesses one or more large databases containing business information.
• E.g. Payroll, Inventory, Accounts.

4. Engineering and scientific software :


• Engineering & scientific software is the software required for engineering & scientific development purposes. It includes all CASE tools & system simulations.
• E.g. LEX, YACC, CAD, CAM.

5. Embedded Software:
• Embedded software resides in read-only memory and is used to control products and systems for
the consumer and industrial markets.
• E.g. digital functions in an automobile such as fuel control, dashboard displays, and braking
systems.

6. Personal Computer Software:


• This software is used to enhance personal computers & to give the user more control.
• E.g. Computer Graphics, Multimedia, Text Editors.

7. Web Based Software:


• The Web pages retrieved by a browser are software that incorporates executable instructions
(e.g., CGI, HTML, Perl, or Java), and data.

8. Artificial Intelligence Software:


• This software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
• E.g. Expert Systems, Pattern Recognition.

Advantages of Using Software Engineering:


 The advantages of using software engineering for developing software are:
• Improved quality
• Improved requirement specification
• Improved cost and schedule estimates
• Better use of automated tools and techniques.
• Better maintenance of delivered software
• Well defined process
• Improved reliability
• Improved productivity
• Fewer defects in the final product

Why Software Engineering?


• The economies of ALL developed nations are dependent on software.
• More and more systems are software controlled.
• Many software projects are late and over budget.
• The complexity of software projects has increased.
• There is demand for new software on the market.
• Software engineering is concerned with theories, methods and tools for professional software development.
• Software engineering expenditure represents a significant fraction of GNP in all developed countries.
Need of software engineering:
 The need for software engineering arises because of the higher rate of change in user requirements and in the environment in which the software works.
i. Large software: It is easier to build a wall than a house or building; likewise, as the size of software becomes large, engineering has to step in to give it a scientific process.
ii. Scalability: If the software process were not based on scientific and engineering concepts, it would be easier to re-create new software than to scale an existing one.
iii. Cost: The hardware industry has shown its skills, and huge manufacturing has lowered the price of computer and electronic hardware. But the cost of software remains high if a proper process is not adopted.
iv. Dynamic Nature: The always growing and adapting nature of software hugely depends upon the environment in which the user works. If the nature of the software is always changing, new enhancements need to be made to the existing one. This is where software engineering plays a good role.
v. Quality Management: A better process of software development provides a better, quality software product.

Program versus software:


Description:
• Program: A program is a set of instructions written in a programming language, used to execute a specific task or particular function.
• Software: Software is a set of instructions written in a programming language, used to execute a specific task or particular function.
Categories:
• Program: A program does not have further categorization.
• Software: Software can be categorized into two categories: application software and system software.
Flexibility:
• Program: A program cannot be software.
• Software: Software can be a program.
Consists of:
• Program: A program consists of a set of instructions coded in a programming language like C, C++, PHP, Java etc.
• Software: Software consists of bundles of programs and data files. Programs in specific software use these data files to perform a dedicated type of task.
User interface:
• Program: Programs do not have a user interface.
• Software: Every software has a dedicated user interface.
Development:
• Program: A program is developed and used by either a single programmer or a group of programmers.
• Software: Software is developed by either a single programmer or a group of programmers.
Compilation:
• Program: A program is compiled every time we need to generate some output from it.
• Software: The whole software is compiled, tested and debugged during the development process.
Functionality & features:
• Program: A program has limited functionality and fewer features.
• Software: Software has lots of functionality and features such as a GUI, input/output data, processes etc.
Dependability:
• Program: Program functionality is dependent on the compiler.
• Software: Software functionality is dependent on the operating system.
Creation time:
• Program: A program takes less time to build.
• Software: Software takes relatively more time to build when compared to a program.
Development approach:
• Program: The program development approach is un-procedural, unorganized and unplanned.
• Software: The software development approach is systematic, organized and very well planned.
Size:
• Program: The size of a program ranges from kilobytes (KB) to megabytes (MB).
• Software: The size of software ranges from megabytes (MB) to gigabytes (GB).
Examples:
• Program: Operating system, office suite, video games, malware, a web browser like Mozilla Firefox and Apple Safari.
• Software: Microsoft Word, Microsoft Excel, VLC media player, Firefox, Adobe Reader, Windows, Linux, Unix, Mac etc.

What is a software process?


 It is a set of activities and associated results that produce a software product.
 Generic activities in all software processes are:
a) Specification: Where customers and engineers define the software to be produced and the constraints on its operation (i.e. what the system should do and its development constraints).
b) Development: Where the software is designed and programmed (i.e. production of the software system).
c) Validation: Where the software is checked to ensure that it is what the customer requires (i.e. checking that the software is what the customer wants).
d) Evolution: Where the software is modified to adapt it to changing customer and market requirements (i.e. changing the software in response to changing demands).

Software Characteristics:
 Software characteristics are classified into six major components:
i. Functionality: It refers to the degree of performance of the software against its intended purpose.

ii. Reliability: A set of attributes that bear on the capability of software to maintain its level of performance under the given conditions for a stated period of time.

iii. Efficiency: It refers to the ability of the software to use system resources in the most effective and efficient manner. The software should make effective use of storage space and execute commands as per the desired timing requirements.

iv. Usability: It refers to the extent to which the software can be used with ease, and the amount of effort or time required to learn how to use the software.

v. Maintainability: It refers to the ease with which modifications can be made in a software system to extend its functionality, improve its performance, or correct errors.

vi. Portability: A set of attributes that bear on the ability of software to be transferred from one environment to another with no or minimal changes.
Software Applications:
 The most significant factor in determining which software engineering methods and techniques
are most important is the type of application that is being developed.

Different Types of Software Application


i. System Software: A collection of programs written to service other programs, e.g. compilers, device drivers, editors, file management utilities.

ii. Application Software (stand-alone programs): Software that solves a specific business need and converts business functions into real-time operations. Examples: point of sale, transaction processing, real-time manufacturing control.

iii. Scientific/Engineering Software: Applications based on astronomy, automotive stress analysis, molecular biology, volcanology, space shuttle orbital dynamics, automated manufacturing.

iv. Embedded Software: Software control systems that control and manage hardware devices. Examples: software in a mobile phone, software for anti-lock braking in a car, software in a microwave oven that controls the cooking process.

v. Product Line Software: Software designed to provide a specific capability for use by many different customers. It can focus on a limited or esoteric marketplace (e.g. inventory control products) or address a mass marketplace (e.g. spreadsheets, computer graphics, multimedia, entertainment, database management, and personal/business financial applications).

vi. Web Applications: Also called "web apps", these are evolving into sophisticated computing environments that not only provide stand-alone features, computing functions, and content to the end user but are also integrated with corporate databases and business applications.

vii. Artificial Intelligence Software: This includes robotics, expert systems, pattern recognition, image and voice recognition, artificial neural networks, game playing, theorem proving, etc. It solves complex problems.

Deliverables and milestones:


Deliverable:
• A deliverable is a result or software product, design document, or asset of the project plan that can be submitted to customers, clients, or end-users. A deliverable should be complete in all aspects. It is an element of output within the scope of the project or of processes in the project. Deliverables have a due date and are real, tangible, and measurable. A deliverable is given to the client or customer and satisfies a milestone or due date that is created during project planning. Deliverables are generally milestones, but it is not necessary that every milestone is a deliverable.
Milestone:
• When a project begins, it is expected that project-related activities will be initiated. In project planning, a series of milestones must be established. A milestone can be defined as a recognizable endpoint of a software project activity. At each milestone, a report must be generated. A milestone is a distinct and logical stage of the project. It is used as a signal post for the project start and end dates, the need for external review or input, checking the budget, submission of a deliverable, etc. It represents a clear sequence of events that are incrementally built up until the project is successfully completed. A milestone is generally referred to as a task of zero-time duration because it is used to symbolize an achievement or point of time in the project. It helps in signifying a change or stage in development.
Product and Process:
Product:
• The Product is what we're actually building. What's our solution to the problem at hand? Half of
engineering is making sure you're building the right product and have the ability to actually build it.
For software engineers, that means coming up with a software solution and being able to code it up
properly.

Process:
• The hidden side of engineering is the Process, which means how we're actually building our product. A software process specifies the abstract set of activities that should be performed to go from user needs to final product. We need to make sure we're following a process that lets us create that product in the most efficient and effective way possible. That means coming up with a process that is robust enough to get work done in an imperfect world but is also highly responsive to change (which is inevitable).

Measures, metrics and measurement:


Measures:
• A measure is established when a single data point is collected, e.g. the number of errors detected in a software component. A measure can be understood as a quantification and symbolization of various attributes and aspects of software. Software measures are a fundamental requirement of software engineering. Measurement is the process of collecting one or more data points.

Metrics:
• A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. There are four functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of software Metrics:


 The Characteristics of software Metrics are as follows:
1. Quantitative:
• Metrics must be quantitative in nature, i.e. they can be expressed in values.

2. Understandable:
• Metric computation should be easily understood, and the method of computing the metric should be clearly defined.

3. Applicability:
• Metrics should be applicable in the initial phases of development of the software.

4. Repeatable:
• The metric values should be same when measured repeatedly and consistent in nature.

5. Economical:
• Computation of metric should be economical.

6. Language Independent:
• Metrics should not depend on any programming language.
Classification of Software Metrics:
 There are three types of software metrics:
1. Product Metrics:
• Product metrics are used to evaluate the state of the product, tracing risks and uncovering prospective problem areas. The ability of the team to control quality is evaluated.

2. Process Metrics:
• Process metrics pay particular attention to enhancing the long-term process of the team or organisation.

3. Project Metrics:
• Project metrics describe the project characteristics and execution process, for example:
o Number of software developers
o Staffing pattern over the life cycle of software
o Cost and schedule
o Productivity
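The project-level bullets above can be turned into simple computed values. The sketch below is illustrative only: the figures and the KLOC-per-person-month formula are common textbook choices, not data from any real project.

```python
# Illustrative project metrics -- all figures below are hypothetical.
size_kloc = 24.0     # delivered size, thousands of lines of code
effort_pm = 8.0      # total effort, person-months
cost = 40_000.0      # total cost, arbitrary currency units

# Productivity: size delivered per unit of effort.
productivity = size_kloc / effort_pm          # KLOC per person-month

# Unit cost: money spent per unit of size.
unit_cost = cost / size_kloc                  # cost per KLOC

print(f"Productivity: {productivity} KLOC/person-month")  # 3.0
print(f"Unit cost: {unit_cost:.2f} per KLOC")
```

Tracking such ratios over several projects is what lets cost and schedule estimates improve, as noted earlier under the advantages of software engineering.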

Measurement:
• A measurement is a manifestation of the size, quantity, amount or dimension of a particular attribute of a product or process. Software measurement quantifies an attribute of a characteristic of a software product or the software process. The software measurement process is defined and governed by ISO standards.

Need of Software Measurement:


 Software is measured to:
1. Assess the quality of the current product or process.
2. Anticipate future qualities of the product or process.
3. Enhance the quality of the product or process.
4. Regulate the state of the project in relation to budget and schedule.

Classification of Software Measurement:


There are 2 types of software measurement:
1. Direct Measurement:
• In direct measurement the product, process or thing is measured directly using a standard scale.

2. Indirect Measurement:
• In indirect measurement the quantity or quality to be measured is measured indirectly using a related parameter, i.e. by use of a reference.
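To make the distinction concrete, here is a minimal sketch (the numbers are invented): lines of code and defect counts are direct measures, while defect density is an indirect measure derived from them by reference.

```python
# Direct measures: counted directly against a standard scale.
lines_of_code = 12_000   # counted from the source tree (hypothetical figure)
defects_found = 30       # counted from the bug tracker (hypothetical figure)

# Indirect measure: derived from related direct measures.
defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC

print(f"Defect density: {defect_density} defects/KLOC")  # 2.5
```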

Generic Software Development:


Generic software development is a process executed by developers to build a software product. Usually, this product is made for general business needs that have a positive demand in the market over a duration of time. Software development companies develop generic software on their own and hand it over to a group of customers having a similar need.

Custom Software Development:


Custom software development is a mechanism by which a company develops a product for an individual client. The individual client may be a company or a group of persons. This product mostly has a distinct need in the market, only for a limited time, and is for specialized business needs. Software development companies develop custom software at the cost of particular customers.

Roles of management in software development:


 Management is very important whenever we work on anything, especially when we are working in a team and the number of co-workers is large. If we talk specifically about the software development process, the main aim of software engineering is to define a procedure that is applicable to all the software that needs to be developed, through which we can successfully finish our project up to its deployment stage and ensure that the final product we get is an efficient one. In short, software engineering directly or indirectly deals with the management part of software development too.
Factors upon which the Role of Management in Software Development depends
1) People:
• Of course, the management has to deal with people in every stage of the software developing
process. From the ideation phase to the final deployment phase, including the development and
testing phases in between, there are people involved in everything, whether they be the customers
or the developers, the designers or the salesmen. Hence, how they contact and communicate with
each other must be managed so that all the required information is successfully delivered to the
relevant person and hence there is no communication gap between the customers and the service
providers.

2) Project:
• From the ideation phase to the deployment phase, we term the process as a project. Many people
work together on a project to build a final product that can be delivered to the customer as per their
needs or demands. So, the entire process that goes on while working on the project must be
managed properly so that we can get a worthy result after completing the project and also so that
the project can be completed on time without any delay.

3) Process:
• Every process that takes place while developing the software, or we can say while working on the
project must be managed properly and separately. For example, there are various phases in a
software development process and every phase has its process like the designing process is
different from the coding process, and similarly, the coding process is different from the testing.
Hence, each process is managed according to its needs and each needs to be taken special care of.

4) Product:
• Even after the development process is completed and we reach our final product, still, it needs to be
delivered to its customers. Hence the entire process needs a separate management team like the
sales department.
Unit: 2
Data Flow Diagrams
A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict the right
amount of the system requirement graphically. It can be manual, automated, or a
combination of both.

It shows how data enters and leaves the system, what changes the information, and
where data is stored.

The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a communication tool between a systems analyst and any person who plays a part in the system, and it acts as a starting point for redesigning a system. The DFD is also called a data flow graph or bubble chart.


Characteristics of DFD
 DFDs are commonly used during problem analysis.
 DFDs are quite general and are not limited to problem analysis for software
requirements specification.
 DFDs are very useful in understanding a system and can be effectively used
during analysis.
 It views a system as a function that transforms the inputs into desired outputs.
 The DFD aims to capture the transformations that take place within a system
to the input data so that eventually the output data is produced.
 The processes are shown by named circles and data flows are represented by
named arrows entering or leaving the bubbles.
 A rectangle represents a source or sink, which is a net originator or consumer of data. A source or sink is typically outside the main system of study.

The following observations about DFDs are essential:


1. All names should be unique. This makes it easier to refer to elements in the
DFD.
2. Remember that a DFD is not a flowchart. Arrows in a flowchart represent the order of events; arrows in a DFD represent flowing data. A DFD does not involve any order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress that urge! A diamond-shaped box is used in flowcharts to represent decision points with multiple exit paths, of which only one is taken. This implies an ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error
handling until the end of the analysis.
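The observations above can be checked mechanically if a DFD is stored as data. The sketch below uses a made-up mini-DFD (not any standard library): processes, data stores and sources/sinks are nodes, data flows are labelled edges, and the code enforces observation 1 (unique names) while deliberately checking no event ordering, per observation 2.

```python
from collections import Counter

# Hypothetical mini-DFD for an order system.
nodes = {
    "Customer": "source/sink",       # drawn as a rectangle
    "Process Order": "process",      # drawn as a circle/bubble
    "Orders": "data store",          # drawn as parallel lines
}
flows = [                            # named arrows = flowing data
    ("Customer", "Process Order", "order details"),
    ("Process Order", "Orders", "order record"),
]

# Observation 1: all names (nodes and flow labels) should be unique.
names = list(nodes) + [label for _, _, label in flows]
duplicates = [n for n, c in Counter(names).items() if c > 1]
assert not duplicates, f"non-unique DFD names: {duplicates}"

# A DFD is not a flowchart: we validate only connectivity, never ordering.
for src, dst, _ in flows:
    assert src in nodes and dst in nodes, "flow references an undeclared node"

print("DFD is well-formed")
```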
Standard symbols for DFDs are derived from electric circuit diagram analysis, as follows:

Circle: A circle (bubble) shows a process that transforms data inputs into data
outputs.

Data Flow: A curved line shows the flow of data into or out of a process or data
store.

Data Store: A set of parallel lines shows a place for the collection of data items. A
data store indicates that the data is stored which can be used at a later stage or by
the other processes in a different order. The data store can have an element or
group of elements.

Source or Sink: Source or Sink is an external entity and acts as a source of system
inputs or sink of system outputs.
Physical and Logical DFD with example
Logical DFD
 Logical DFD depicts how the business operates.
 The processes represent the business activities.
 The data stores represent the collection of data regardless of how the data are
stored.
 It shows business controls.

Physical DFD
 Physical DFD depicts how the system will be implemented (or how the
current system operates).
 The processes represent the programs, program modules, and manual
procedures.
 The data stores represent the physical files and databases, manual files.
 It shows controls for validating input data, for obtaining a record, for ensuring the successful completion of a process, and for system security.
Model:
• Logical DFD: How the business operates.
• Physical DFD: How the system will be implemented.
Process:
• Logical DFD: Essential sequence.
• Physical DFD: Actual sequence.
Data store:
• Logical DFD: Collections of data.
• Physical DFD: Physical files and databases, manual files.
Type of data store:
• Logical DFD: Permanent data collections.
• Physical DFD: Master files, transaction files.
System controls:
• Logical DFD: Business controls.
• Physical DFD: Controls for data validation, record status, system security.

DFD levels and context diagrams


The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into levels that represent increasing information flow and functional detail. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the data flow diagram: 0-level DFD, 1-level DFD, and 2-level DFD.

 DFD Level 0 is also called a Context Diagram. It’s a basic


overview of the whole system or process being analyzed or
modeled. It’s designed to be an at-a-glance view, showing the
system as a single high-level process, with its relationship to
external entities. It should be easily understood by a wide
audience, including stakeholders, business analysts, data analysts
and developers.
 DFD Level 1 provides a more detailed breakout of pieces of the
Context Level Diagram. You will highlight the main functions
carried out by the system, as you break down the high-level
process of the Context Diagram into its subprocesses.
 Level 2 DFD: The 2-level DFD further decomposes Level 1 processes into sub-processes, enhancing granularity and revealing intricate data exchanges and transformations within the system. This level provides a deeper understanding of system functionality.
Introduction of ER Model
The Entity Relationship Model is a model for identifying entities to be represented in the database and representing how those entities are related. The ER data model specifies an enterprise schema that represents
the overall logical structure of a database graphically.
The Entity Relationship Diagram explains the relationship among the entities
present in the database. ER models are used to model real-world objects like
a person, a car, or a company and the relation between these real-world
objects. In short, the ER Diagram is the structural format of the database.
Why Use ER Diagrams In DBMS?
 ER diagrams are used to represent the E-R model in a database,
which makes them easy to convert into relations (tables).
 ER diagrams serve the purpose of real-world modeling of objects, which makes them very useful.
 ER diagrams require no technical knowledge and no hardware
support.
 These diagrams are very easy to understand and easy to create
even for a naive user.
 It gives a standard solution for visualizing the data logically.
Symbols Used in ER Model
ER Model is used to model the logical view of the system from a data
perspective which consists of these symbols:
 Rectangles: Rectangles represent Entities in the ER Model.
 Ellipses: Ellipses represent Attributes in the ER Model.
 Diamond: Diamonds represent Relationships among Entities.
 Lines: Lines link attributes to their entities and link entity sets to their relationship types.
 Double Ellipse: Double Ellipses represent Multi-Valued Attributes.
 Double Rectangle: Double Rectangle represents a Weak Entity.

For example, Suppose we design a school database. In this database, the student will be an entity with attributes like
address, name, id, age, etc. The address can be another entity with attributes like city, street name, pin code, etc and
there will be a relationship between them.
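One common use of such a diagram is as a blueprint for relational tables. The sketch below is a hypothetical mapping of the school example (table and column names are invented), using SQLite purely for illustration: each entity becomes a table, the key attribute becomes the primary key, and the student-address relationship becomes a foreign key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Entity: Address, with attributes city, street name, pin code.
cur.execute("""CREATE TABLE address (
    address_id  INTEGER PRIMARY KEY,   -- key attribute
    city        TEXT,
    street_name TEXT,
    pin_code    TEXT)""")

# Entity: Student, with attributes id, name, age, related to Address.
cur.execute("""CREATE TABLE student (
    id         INTEGER PRIMARY KEY,    -- key attribute
    name       TEXT,
    age        INTEGER,
    address_id INTEGER REFERENCES address(address_id))""")

cur.execute("INSERT INTO address VALUES (1, 'Pune', 'MG Road', '411001')")
cur.execute("INSERT INTO student VALUES (10, 'Asha', 20, 1)")

# The relationship lets us navigate from a student to their address.
row = cur.execute("""SELECT s.name, a.city
                     FROM student s JOIN address a
                       ON s.address_id = a.address_id""").fetchone()
print(row)  # ('Asha', 'Pune')
```

This is why ER diagrams are said to be easy to convert into relations: each symbol in the diagram has a direct counterpart in the schema.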
Component of ER Diagram

1. Entity:

An entity may be any object, class, person or place. In an ER diagram, an entity is represented as a rectangle.

Consider an organization as an example- manager, product, employee, department etc. can be taken as an entity.

a. Weak Entity
An entity that depends on another entity is called a weak entity. The weak entity doesn't contain any key attribute of its own. The weak entity is represented by a double rectangle.

2. Attribute

The attribute is used to describe a property of an entity. An ellipse is used to represent an attribute.

For example, id, age, contact number, name, etc. can be attributes of a student.

a. Key Attribute

The key attribute is used to represent the main characteristics of an entity. It represents a primary key. The key
attribute is represented by an ellipse with the text underlined.
b. Composite Attribute

An attribute that is composed of many other attributes is known as a composite attribute. The composite attribute is represented by an ellipse, and its component attributes' ellipses are connected to it.

c. Multivalued Attribute

An attribute that can have more than one value is known as a multivalued attribute. A double oval is used to represent a multivalued attribute.

For example, a student can have more than one phone number.
d. Derived Attribute

An attribute that can be derived from another attribute is known as a derived attribute. It is represented by a
dashed ellipse.

For example, a person's age changes over time and can be derived from another attribute such as date of birth.
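The derived-attribute example above (age computed from date of birth) can be sketched in Python; the function name is our own, for illustration only:

```python
from datetime import date

def age_from_dob(dob, today=None):
    """Derive the 'age' attribute from a stored 'date_of_birth' attribute."""
    today = today or date.today()
    # Subtract one year if this year's birthday has not happened yet.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

# Born 2000-06-15, evaluated the day before the birthday: still 23.
print(age_from_dob(date(2000, 6, 15), date(2024, 6, 14)))  # → 23
```

Because age can always be recomputed this way, it is usually derived on demand rather than stored.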

3. Relationship

A relationship is used to describe the relation between entities. Diamond or rhombus is used to represent the
relationship.
Types of relationship are as follows:

a. One-to-One Relationship

When a single instance of one entity is associated with a single instance of another entity, the relationship is known as a one-to-one relationship.

For example, a female can marry one male, and a male can marry one female.

b. One-to-many relationship

When a single instance of the entity on the left is associated with more than one instance of the entity on
the right, the relationship is known as a one-to-many relationship.

For example, a scientist can make many inventions, but each invention is made by one specific scientist.

c. Many-to-many relationship

When more than one instance of the entity on the left is associated with more than one instance
of the entity on the right, the relationship is known as a many-to-many relationship.

For example, an employee can be assigned to many projects, and a project can have many employees.
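When an ER model is converted to the relational model, a many-to-many relationship such as the employee-project assignment above becomes a junction table whose composite primary key references both entities. A minimal sketch using Python's built-in sqlite3 module (table and column names are our own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE project  (proj_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE assignment (               -- the M:N relationship
    emp_id  INTEGER REFERENCES employee(emp_id),
    proj_id INTEGER REFERENCES project(proj_id),
    PRIMARY KEY (emp_id, proj_id)
);
""")
con.execute("INSERT INTO employee VALUES (1, 'Asha'), (2, 'Bibek')")
con.execute("INSERT INTO project VALUES (10, 'Payroll'), (20, 'Inventory')")
# One employee on many projects, and one project with many employees:
con.executemany("INSERT INTO assignment VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 10)])
rows = con.execute("""
    SELECT e.name, p.title FROM assignment a
    JOIN employee e ON e.emp_id = a.emp_id
    JOIN project  p ON p.proj_id = a.proj_id
    ORDER BY e.name, p.title
""").fetchall()
print(rows)  # [('Asha', 'Inventory'), ('Asha', 'Payroll'), ('Bibek', 'Payroll')]
```

A 1:N relationship, by contrast, needs no junction table: a foreign key on the "many" side is enough.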

Advantages of ER diagram
SIMPLE: It is simple to draw an ER diagram when we know the entities and
relationships.
EFFECTIVE: It is an effective communication tool.
EASY TO UNDERSTAND: The design of an ER diagram is very logical, and hence it is
easy to design and understand. It shows database capabilities such as how tables, keys,
and columns are used to answer a given question.
INTEGRATED: The ER model can be easily integrated with the relational model.
USEFUL IN DECISION MAKING: Since the entities in the database are
analyzed in an ER diagram, drawing one reveals what kinds of
attributes and relationships exist between them.
EASY CONVERSION: It can be easily converted to other types of models.
Disadvantages of ER Diagram
LOSS OF INFORMATION: While drawing an ER model, some information
can be hidden or lost.
LIMITED RELATIONSHIPS: The ER model can represent fewer kinds of relationships
than some other models, and it is not possible to indicate primary keys and foreign
keys where they are expected.
NO REPRESENTATION FOR DATA MANIPULATION: It is not possible to
represent data manipulation (commands like insert, delete, alter, and update) in the
ER model.
NO INDUSTRY STANDARD: There is no industry standard for notations of an ER
diagram.
DIFFICULT TO MODIFY: ER models can be difficult to modify once they are
created. Any changes made to the model may require extensive rework, which can be
time-consuming and expensive.
LIMITED ATTRIBUTE REPRESENTATION: ER models may not be able to
represent all the attributes required for a particular problem domain. This can lead to
either the loss of important data or the creation of a complex and unwieldy model.
LIMITED SUPPORT FOR ABSTRACTION: ER models are not designed to
support abstraction, which can make it difficult to represent complex relationships or
data structures in a simple and intuitive way.

2.6 Describing a system with ER Diagram


ER diagram of Library Management System
An ER (Entity-Relationship) Diagram is used to analyze
the structure of a database. It shows the relationships between entities and
their attributes, and it provides a means of communication.
The Library Management System database keeps track of readers with the
following considerations –
 The system keeps track of the staff with a single point authentication
system comprising login Id and password.
 Staff maintains the book catalog with its ISBN, Book title, price(in
INR), category(novel, general, story), edition, author Number and
details.
 A publisher has publisher Id, Year when the book was published, and
name of the book.
 Readers are registered with their user_id, email, name (first name,
last name), Phone no (multiple entries allowed), communication
address. The staff keeps track of readers.
 Readers can reserve/return books; each transaction is stamped with an
issue date and a return date. If a book is not returned within the
prescribed time period, it may also have a due date.
 Staff also generate reports that contain the reader's id, the report's
registration no, the book no, and return/issue information.

Below is the ER Diagram for Library Management System:


This Library ER diagram illustrates key information about the Library, including
entities such as staff, readers, books, publishers, reports, and authentication
system. It allows for understanding the relationships between entities.
Entities and their Attributes –
 Book Entity : It has authno, isbn number, title, edition, category,
price. ISBN is the Primary Key for Book Entity.
 Reader Entity : It has UserId, Email, address, phone no, name.
Name is composite attribute of firstname and lastname. Phone no is
multi valued attribute. UserId is the Primary Key for Readers entity.
 Publisher Entity : It has PublisherId, Year of publication, name.
PublisherID is the Primary Key.
 Authentication System Entity : It has LoginId and password with
LoginID as Primary Key.
 Reports Entity : It has UserId, Reg_no, Book_no, Issue/Return
date. Reg_no is the Primary Key of reports entity.
 Staff Entity : It has name and staff_id with staff_id as Primary Key.
 Reserve/Return Relationship Set : It has three attributes: Reserve
date, Due date, Return date.

Relationships between Entities –


 A reader can reserve N books, but one book can be reserved by only
one reader. The relationship is 1:N.
 A publisher can publish many books, but a book is published by only
one publisher. The relationship is 1:N.
 Staff keep track of readers. The relationship is M:N.
 Staff maintain multiple reports. The relationship is 1:N.
 Staff maintain multiple books. The relationship is 1:N.
 The authentication system provides login to multiple staff members. The
relationship is 1:N.
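As a sketch of how parts of this design map to relations (table and column names are our own, and only a fragment of the diagram is shown): the multivalued Phone no attribute becomes its own table, and the Reserve/Return relationship set becomes a table carrying its own attributes, with the "one book, one reader" constraint encoded by keying on the book:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reader (
    user_id INTEGER PRIMARY KEY,
    email TEXT, first_name TEXT, last_name TEXT, address TEXT
);
CREATE TABLE reader_phone (            -- multivalued attribute -> own table
    user_id INTEGER REFERENCES reader(user_id),
    phone_no TEXT,
    PRIMARY KEY (user_id, phone_no)
);
CREATE TABLE book (isbn TEXT PRIMARY KEY, title TEXT, price REAL);
CREATE TABLE reserve (                 -- relationship set with attributes
    isbn TEXT PRIMARY KEY REFERENCES book(isbn),  -- a book has one reader
    user_id INTEGER REFERENCES reader(user_id),
    reserve_date TEXT, due_date TEXT, return_date TEXT
);
""")
con.execute("INSERT INTO reader(user_id, email) VALUES (1, 'a@x.com')")
con.executemany("INSERT INTO reader_phone VALUES (?, ?)",
                [(1, '98510000'), (1, '98520000')])  # multiple phone numbers
phones = con.execute(
    "SELECT COUNT(*) FROM reader_phone WHERE user_id = 1").fetchone()[0]
print(phones)  # → 2
```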

ER diagram of Bank Management System


An ER (Entity-Relationship) diagram is used to analyze the
structure of a database. It shows the relationships between entities and their
attributes, and it provides a means of communication.
ER diagram of Bank has the following description :

 Banks have Customers.


 Banks are identified by a name, code, address of main office.
 Banks have branches.
 Branches are identified by a branch_no., branch_name, address.
 Customers are identified by name, cust-id, phone number, address.
 Customer can have one or more accounts.
 Accounts are identified by account_no., acc_type, balance.
 Customer can avail loans.
 Loans are identified by loan_id, loan_type and amount.
 Accounts and loans are related to the bank’s branches.

This bank ER diagram illustrates key information about banks, including


entities such as branches, customers, accounts, and loans. It allows us to
understand the relationships between entities.
Entities and their Attributes are :

 Bank Entity : Attributes of Bank Entity are Bank Name, Code and
Address.
Code is Primary Key for Bank Entity.
 Customer Entity : Attributes of Customer Entity are Customer_id,
Name, Phone Number and Address.
Customer_id is Primary Key for Customer Entity.
 Branch Entity : Attributes of Branch Entity are Branch_id, Name and
Address.
Branch_id is Primary Key for Branch Entity.
 Account Entity : Attributes of Account Entity are Account_number,
Account_Type and Balance.
Account_number is Primary Key for Account Entity.
 Loan Entity : Attributes of Loan Entity are Loan_id, Loan_Type and
Amount.
Loan_id is Primary Key for Loan Entity.

Relationships are :

 Bank has Branches => 1 : N


One Bank can have many Branches but one Branch can not belong
to many Banks, so the relationship between Bank and Branch is one
to many relationship.

 Branch maintain Accounts => 1 : N


One Branch can have many Accounts but one Account can not
belong to many Branches, so the relationship between Branch and
Account is one to many relationship.

 Branch offer Loans => 1 : N


One Branch can have many Loans but one Loan can not belong to
many Branches, so the relationship between Branch and Loan is one
to many relationship.
 Account held by Customers => M : N
One Customer can have more than one Account, and one
Account can be held by one or more Customers, so the relationship
between Account and Customer is a many-to-many relationship.

 Loan availed by Customer => M : N


(Assume a loan can be jointly held by many Customers.)
One Customer can have more than one Loan, and one Loan can
be availed by one or more Customers, so the relationship between Loan
and Customer is a many-to-many relationship.

Assignment:-2
i. ER diagram of School Management System
ii. ER diagram of Hotel Management System
iii. ER diagram of Laundry Management System
Unit 3. Feasibility Analysis
 A feasibility study is an assessment of the practicality of a proposed project or system. It evaluates
the project’s potential for success based on factors such as cost, value, resources, opportunities,
and threats.
 A feasibility study is simply an assessment of the practicality of a proposed project plan or method.
It involves analyzing technical, economic, legal, operational, and time feasibility factors.

1. Purpose:
 A feasibility study evaluates whether a project or venture is likely to succeed.
 It considers all critical aspects to determine if the project aligns with the organization’s goals
and if it’s worth pursuing.
2. Types of Feasibility Analysis:
 Technical Feasibility:
 Assesses the practicality of implementing a specific technical solution.
 Considers factors like available technology, infrastructure, and expertise.
 Operational Feasibility:
 Examines how well the proposed solution fits within the existing
organizational processes.
 Considers user acceptance, ease of implementation, and potential
disruptions.
 Schedule Feasibility:
 Evaluates the reasonableness of the project timeline.
 Determines whether the project can be completed within the desired
timeframe.
 Economic Feasibility:
 Analyzes the cost-effectiveness of the project.
 Includes assessing initial investment, ongoing operational costs, and
potential returns.
3. Components of a Feasibility Study:
 A comprehensive feasibility study includes the following elements:
 Historical Background:
 Provides context about the business or project.
 Product or Service Description:
 Clearly defines what the project aims to deliver.
 Accounting Statements:
 Financial projections, including income statements, balance sheets,
and cash flow forecasts.
 Operations and Management Details:
 Describes how the project will be executed and managed.
 Market Research and Policies:
 Investigates market demand, competition, and regulatory
requirements.
 Financial Data:
 Quantifies costs, revenues, and potential profits.
 Legal Requirements and Tax Obligations:
 Ensures compliance with legal and tax regulations.
Advantages of Feasibility Analysis
1. Identifies potential risks and challenges early on.

2. Guides efficient allocation of resources.

3. Helps in making informed decisions about project viability.

4. Enhances stakeholder confidence.

5. Saves money by avoiding costly failures.

6. Provides a structured approach for evaluating project feasibility.

Disadvantages of Feasibility Analysis:


1. Can be time-consuming and resource-intensive.

2. Subjectivity in analysis may lead to biased results.

3. Uncertainty in predicting future outcomes.

4. Costs associated with conducting the analysis.

5. May create a false sense of security if not conducted thoroughly.

6. Findings may not always accurately reflect real-world conditions.

Cost-benefit analysis technique


Cost-benefit analysis (CBA) is a technique commonly used in software development to
evaluate the potential benefits of a project against its costs. It helps decision-makers
determine whether a proposed project or investment is worthwhile by comparing the
expected benefits with the anticipated costs. Here's how CBA is typically applied in software
development:

1. Identify Costs: Begin by identifying all the costs associated with the project. These
may include development costs (such as labor, software tools, and equipment), operational costs
(such as maintenance, support, and training), and any other relevant expenses.

2. Identify Benefits: Determine the potential benefits that the software project is expected
to deliver. These could be tangible benefits such as increased revenue, cost savings, and
improved efficiency, or intangible benefits like enhanced customer satisfaction, brand
reputation, or competitive advantage.

3. Quantify Costs and Benefits: Assign monetary values to both costs and benefits
whenever possible. This may involve estimating future revenue or cost savings, considering
the time value of money, and accounting for any risks or uncertainties.

4. Calculate Net Benefits: Calculate the net benefits by subtracting the total costs from
the total benefits. This provides a quantitative measure of the project's potential profitability
or value to the organization.

5. Assess Risks and Uncertainties: Consider the risks and uncertainties associated with
the project, such as changes in technology, market conditions, or project scope. This may
involve conducting sensitivity analysis or scenario planning to understand how variations in
assumptions could affect the outcome.

6. Compare Alternatives: If there are multiple options or approaches to achieving the
project goals, compare the costs and benefits of each alternative to identify the most
cost-effective solution.

7. Make Informed Decisions: Use the results of the cost-benefit analysis to make
informed decisions about whether to proceed with the project, modify its scope, or abandon
it altogether. Consider factors such as strategic alignment, resource constraints, and
organizational priorities.

8. Monitor and Review: Continuously monitor the project's progress and periodically
review the cost-benefit analysis to ensure that assumptions remain valid and that the
expected benefits are being realized. Adjustments may be necessary as circumstances
change over time.
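The quantification and net-benefit steps reduce to simple arithmetic once the figures are estimated. A minimal sketch with hypothetical numbers (the function names are our own):

```python
def net_benefit(benefits, costs):
    """Net benefit = total benefits - total costs."""
    return sum(benefits) - sum(costs)

def benefit_cost_ratio(benefits, costs):
    """A ratio above 1 suggests the project is worthwhile on these estimates."""
    return sum(benefits) / sum(costs)

costs = [50_000, 12_000, 8_000]   # development, operations, training
benefits = [60_000, 25_000]       # added revenue, cost savings
print(net_benefit(benefits, costs))                   # → 15000
print(round(benefit_cost_ratio(benefits, costs), 2))  # → 1.21
```

In practice the inputs would be discounted to present value before being summed, which this sketch omits.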

Cost-Benefit Analysis
Pros(Advantage)
 Requires data-driven analysis
 Limits analysis to only the purpose determined in the initial step of
the process
 Results in deeper, potentially more reliable findings
 Delivers insights to financial and non-financial outcomes

Cons(Disadvantage)
 May be unnecessary for smaller projects
 Requires capital and resources to gather data and make analysis
 Relies heavily on forecasted figures; if any single critical forecast is
off, estimated findings will likely be wrong.
Return on Investment (ROI)
Return on Investment (ROI) is a financial metric used to evaluate the profitability of an
investment or compare the efficiency of different investments. It measures the ratio of
the net profit generated by an investment relative to the initial cost of the investment.
ROI is typically expressed as a percentage.

The formula for calculating ROI is:

ROI = (Net Profit / Initial Investment) × 100

Where:
 Net Profit = Total returns (revenue generated or benefits realized) from the
investment minus the total costs or expenses associated with the investment.
 Initial Investment = The total cost or initial outlay required to make the
investment.

Key points about ROI:

 Expressed as a Percentage: ROI is expressed as a percentage, making it easier


to compare investments of different sizes and types.
 Positive or Negative ROI: A positive ROI indicates that the investment
generates more returns than the initial cost, resulting in a profit. Conversely, a
negative ROI suggests that the investment results in a loss.
 Relative Measure: ROI provides a relative measure of investment profitability.
It helps investors and decision-makers assess the efficiency and attractiveness of
various investment opportunities.
 Consideration of Costs and Benefits: ROI takes into account both the costs and
benefits of an investment. It provides a comprehensive view of the financial
performance of the investment by considering the net impact of both factors.
 Time Frame: The time frame over which ROI is calculated can vary depending
on the nature of the investment and the desired analysis period. It's essential to
specify the time frame clearly to ensure accurate comparison and interpretation
of results.

Example:- Suppose you invest $1,000 in a mutual fund, and after one year it has
grown to $1,200. To calculate the return on investment (ROI):

ROI = (Final Value − Initial Investment) / Initial Investment × 100

So,

ROI = (1200 − 1000) / 1000 × 100
    = 200 / 1000 × 100
    = 20%

The ROI on this investment is 20%.
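The same calculation as a small Python helper (the function name is our own):

```python
def roi(final_value, initial_investment):
    """ROI (%) = (Final Value - Initial Investment) / Initial Investment * 100."""
    return (final_value - initial_investment) / initial_investment * 100

# The mutual-fund example: $1,000 grows to $1,200 in one year.
print(roi(1200, 1000))  # → 20.0
```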

Payback Period
In software development, the payback period serves as a financial metric used to
evaluate the time it takes for the investment in developing a software product to be
recovered through the generated revenues or cost savings. Here's how the payback
period is applied in the context of software development:

 Calculation:
o Determine the initial investment required for software development, including costs
such as development tools, salaries, and overhead.
o Estimate the annual cash inflow generated by the software, which could include
revenue from sales, subscriptions, or cost savings.
 Payback Period Calculation:
o Divide the initial investment by the annual cash inflow to calculate the payback
period.
o The result represents the number of years it will take for the project to recoup its
initial investment.
 Interpretation:
o A shorter payback period indicates a quicker return on investment, which is generally
preferable as it reduces financial risk.
o Longer payback periods may indicate higher risk or lower potential returns.
 Decision Making:
o The payback period is used as a criterion for decision-making in software
development projects.
o Projects with shorter payback periods are often prioritized as they provide quicker
returns and may be seen as less risky.
 Factors Considered:
o Development Costs: Include expenses related to software development, such as
salaries, tools, and infrastructure.
o Revenue Generation: Consider potential revenue sources, including sales, licensing
fees, or cost savings from improved efficiency.
o Market Factors: Assess market demand, competition, and potential disruptions that
may impact revenue projections.
 Long-term Sustainability:
o While a shorter payback period is preferred, consider the long-term sustainability
and profitability of the software beyond the payback period.
o Evaluate potential for ongoing revenue generation, customer retention, and
adaptability to changing market conditions.
 Comparative Analysis:
o Use the payback period for comparative analysis between different software
development projects or investment options.
o Helps stakeholders prioritize projects based on their expected payback periods,
alongside other financial metrics and qualitative factors.
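With a constant annual cash inflow, the payback calculation described above is a single division. A minimal sketch with hypothetical figures:

```python
def payback_period(initial_investment, annual_cash_inflow):
    """Years for cumulative inflows to recoup the initial investment,
    assuming a constant annual inflow."""
    return initial_investment / annual_cash_inflow

# Hypothetical project: $120,000 to build, $40,000/year in revenue and savings.
print(payback_period(120_000, 40_000))  # → 3.0 (years)
```

Note that this simple form ignores the time value of money; a discounted payback period would be somewhat longer.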

Feasibility Report:-
In software development, a feasibility report is a detailed analysis that
evaluates the practicality and viability of a proposed software project. This
report is typically prepared during the initial stages of project planning and
serves as a foundation for decision-making. Here's an overview of what a
feasibility report in software development typically includes:

System Proposal
A system proposal is a comprehensive document that outlines a proposed solution to
address a specific need or problem within an organization. It serves as a formal
request for approval and funding to implement the proposed system.
The components of a system proposal typically include:
1. Introduction:
 Provides an overview of the proposal.
 Introduces the background, context, and objectives of the proposed system.
2. Problem Statement:
 Clearly identifies the existing problem or need that the proposed system aims to
address.
 Describes the deficiencies or inefficiencies in the current system or processes.
3. Objectives:
 States the specific goals and outcomes that the proposed system intends to
achieve.
 Objectives should be measurable, achievable, relevant, and time-bound (SMART).
4. Scope:
 Defines the boundaries and limitations of the proposed system.
 Outlines the features, functionalities, and deliverables of the system.
 Specifies any constraints, assumptions, or exclusions.
5. Methodology:
 Explains the approach or methodology that will be used to develop and
implement the proposed system.
 Describes the development process, technology stack, and project management
approach.
6. System Requirements:
 Details the hardware, software, and infrastructure requirements for the
proposed system.
 Includes performance criteria, security considerations, and integration needs.
7. System Design:
 Provides a high-level overview of the proposed system architecture.
 Describes system components, data flow, user interface design, and database
schema.
8. Implementation Plan:
 Outlines the timeline, milestones, and activities for implementing the proposed
system.
 Allocates resources, assigns responsibilities, and defines roles within the project
team.
9. Cost Estimation:
 Estimates the costs associated with developing and implementing the proposed
system.
 Includes expenses for hardware, software, labor, training, maintenance, and
other relevant items.
10. Benefits Analysis:
 Evaluates the potential benefits and returns on investment (ROI) of the proposed
system.
 Identifies both tangible and intangible benefits, such as cost savings, increased
efficiency, and improved decision-making.
11. Risks and Mitigation Strategies:
 Identifies potential risks and challenges that may arise during system
implementation.
 Proposes strategies and contingency plans to mitigate risks and minimize their
impact.
12. Conclusion:
 Summarizes the key points presented in the proposal.
 Reiterates the need for the proposed system and its expected outcomes.
 Provides a recommendation for whether the proposal should be approved.
Unit 4
Software Project Planning

Project:
• A project is a temporary endeavor (attempt) undertaken to provide a unique product or service
• A project is a (temporary) sequence of unique complex and connected activities that have one
goal or purpose and that must be completed by a specific time, within budget and according to
specification.
• A project is a sequence of activities that must be completed on time, within budget and
according to specification.

Project planning:
• It involves making detailed plan to achieve the objectives.
• Probably the most time-consuming project management activity.
• Continuous activity from initial concept through to system delivery. Plans must be regularly
revised as new information becomes available.
• Various different types of plan may be developed to support the main software project plan that
is concerned with schedule and budget.

Steps for Project Planning:


• Followings are the steps of project planning:
1) Project goals.
2) Project deliverable.
3) Project schedule.
4) Supporting plans

Software Project Planning:


• A Software Project is the complete methodology of programming advancement from
requirement gathering to testing and support, completed by the execution procedures, in a
specified period to achieve intended software product.
• Software project planning is a task performed before the production of software actually
starts. It supports software production but involves no concrete activity with any
direct connection to software production; rather, it is a set of processes that
facilitate software production.
• Software project planning starts before technical work starts. The various steps of the planning
activities are:
Software Project Manager:
• A software project manager is a person who undertakes the responsibility of executing the
software project. The software project manager is thoroughly aware of all the phases of the SDLC that
the software will go through. The project manager may never be directly involved in producing the
end product, but he controls and manages the activities involved in production.
• A project manager closely monitors the development process, prepares and executes various
plans, arranges necessary and adequate resources, maintains communication among all team
members in order to address issues of cost, budget, resources, time, quality and customer
satisfaction.
• Let us see few responsibilities that a project manager shoulders:
Managing People
o Act as project leader
o Liaison with stakeholders
o Managing human resources
o Setting up reporting hierarchy etc.

Managing Project
o Defining and setting up project scope
o Managing project management activities
o Monitoring progress and performance
o Risk analysis at every phase
o Take necessary step to avoid or come out of problems
o Act as project spokesperson

Size Estimation:
• Estimation of the size of software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time which will be needed to build the
project.
• Various measures are used in project size estimation. Some of these are:
▪ Lines of Code
▪ Number of entities in ER diagram
▪ Total number of processes in detailed data flow diagram
▪ Function points

1. Lines of Code (LOC):


• As the name suggests, LOC counts the total number of lines of source code in a project. The units
of LOC are:
o KLOC – thousand lines of code
o NLOC – non-comment lines of code
o KDSI – thousands of delivered source instructions
• The size is estimated by comparing it with existing systems of the same kind. Experts use it
to predict the required size of the various components of the software and then add them to get the
total size.

Advantages:
• Universally accepted and is used in many models like COCOMO.
• Estimation is closer to developer’s perspective.
• Simple to use.

Disadvantages:
• The same functionality takes a different number of lines in different programming languages.
• No proper industry standard exists for this technique.
• It is difficult to estimate the size using this technique in the early stages of a project.
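To make the LOC/NLOC distinction concrete, here is a small counting sketch. The comment rules (full-line `#` comments, Python-style) are an assumption made for the example; there is no single industry standard for what counts as a line.

```python
# Illustrative sketch: counting total lines, blank lines, comment lines and
# NLOC (non-comment lines of code) for Python-style source text.
# The "#" full-line comment rule is an assumption for this example.

def count_loc(source: str) -> dict:
    """Return total lines, blank lines, comment lines and NLOC."""
    total = blank = comment = 0
    for line in source.splitlines():
        total += 1
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):   # count full-line comments only
            comment += 1
    return {"total": total, "blank": blank,
            "comment": comment, "nloc": total - blank - comment}

sample = """# demo module
x = 1

# add one
y = x + 1
"""
print(count_loc(sample))  # → {'total': 5, 'blank': 1, 'comment': 2, 'nloc': 2}
```

Note that NLOC here still counts blank-stripped code lines only; tools differ on whether blank lines and inline comments are included, which is exactly why LOC comparisons across projects need a stated convention.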

2. Number of entities in ER diagram:


• The ER model provides a static view of the project. It describes the entities and their
relationships. The number of entities in the ER model can be used to estimate the size of the
project. The number of entities grows with the size of the project, because more entities need
more classes/structures, leading to more code.

Advantages:
• Size estimation can be done during initial stages of planning.
• Number of entities is independent of programming technologies used.

Disadvantages:
• No fixed standards exist; some entities contribute more to project size than others.
• Like FPA, it is rarely used directly in cost estimation models; hence it must be converted to LOC.

3. Total number of processes in detailed data flow diagram:


• The Data Flow Diagram (DFD) represents the functional view of a software system. The model
depicts the main processes/functions involved in the software and the flow of data between them.
The number of processes in the DFD can be used to predict software size: already existing
processes of a similar type are studied and used to estimate the size of each process, and the
sum of the estimated sizes of the processes gives the final estimated size.

Advantages:
• It is independent of programming language.
• Each major process can be decomposed into smaller processes, which increases the
accuracy of the estimation.

Disadvantages:
• Studying similar kinds of processes to estimate size takes additional time and effort.
• Not all software projects require the construction of a DFD.

4. Function Point Analysis:


• In this method, the number and types of functions supported by the software are used to find
the FPC (function point count).
• The steps in function point analysis are:
a) Count the number of functions of each proposed type.
b) Compute the Unadjusted Function Points (UFP).
c) Find the Total Degree of Influence (TDI).
d) Compute the Value Adjustment Factor (VAF).
e) Find the Function Point Count (FPC).

a) Count the number of functions of each proposed type: We have to find the number of
functions belonging to the following types:
• External Inputs: Functions related to data entering the system.
• External Outputs: Functions related to data exiting the system.
• External Inquiries: Functions that lead to data retrieval from the system but do not change
the system.
• Internal Files: Logical files maintained within the system. Log files are not included here.
• External Interface Files: Logical files for other applications which are used by
our system.
b) Compute the Unadjusted Function Points (UFP): Categorize each of the five function types
as simple, average or complex based on its complexity, multiply the count of each function
type by its weighting factor, and find the weighted sum. The weighting factors for each type,
based on complexity, are as follows:

FUNCTION TYPE              SIMPLE   AVERAGE   COMPLEX
External Inputs               3        4         6
External Outputs              4        5         7
External Inquiries            3        4         6
Internal Logical Files        7       10        15
External Interface Files      5        7        10

c) Find Total Degree of Influence:


• Use the '14 general characteristics' of a system to find the degree of influence of each of them.
The sum of all 14 degrees of influence gives the TDI.
• The range of the TDI is 0 to 70.
• The 14 general characteristics are: Data Communications, Distributed Data Processing,
Performance, Heavily Used Configuration, Transaction Rate, On-Line Data Entry, End-User
Efficiency, On-Line Update, Complex Processing, Reusability, Installation Ease, Operational Ease,
Multiple Sites and Facilitate Change.
• Each of the above characteristics is evaluated on a scale of 0-5.

d) Compute Value Adjustment Factor(VAF): Use the following formula to calculate VAF
VAF = (TDI * 0.01) + 0.65

e) Find the Function Point Count: Use the following formula to calculate FPC
FPC = UFP * VAF
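The steps above can be sketched as code. The weighting factors are the simple/average/complex values from the table in step b; the example function counts and degrees of influence are made-up inputs for illustration.

```python
# Sketch of the FPA steps a)-e). The weights are the standard values from the
# table above; the example counts and degrees of influence are invented inputs.

WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},   # External Inputs
    "EO":  {"simple": 4, "average": 5,  "complex": 7},   # External Outputs
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},   # External Inquiries
    "ILF": {"simple": 7, "average": 10, "complex": 15},  # Internal Logical Files
    "EIF": {"simple": 5, "average": 7,  "complex": 10},  # External Interface Files
}

def function_point_count(counts, degrees_of_influence):
    # b) Unadjusted Function Points: weighted sum over the five function types
    ufp = sum(WEIGHTS[ftype][cplx] * n
              for ftype, per_cplx in counts.items()
              for cplx, n in per_cplx.items())
    # c) Total Degree of Influence: sum of the 14 characteristics (each 0-5)
    tdi = sum(degrees_of_influence)      # range 0..70
    # d) Value Adjustment Factor
    vaf = (tdi * 0.01) + 0.65            # range 0.65..1.35
    # e) Function Point Count
    return ufp, vaf, ufp * vaf

counts = {"EI": {"average": 5}, "EO": {"simple": 4},
          "EQ": {"complex": 2}, "ILF": {"average": 3}, "EIF": {"simple": 1}}
ufp, vaf, fpc = function_point_count(counts, [3] * 14)
print(ufp, round(vaf, 2), round(fpc, 2))  # → 83 1.07 88.81
```

With all 14 characteristics rated 3, TDI = 42 and VAF = 1.07, so the adjusted count is 7% above the UFP.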

Advantages:
• It can be easily used in the early stages of project planning.
• It is independent of the programming language.
• It can be used to compare different projects even if they use different technologies (database,
language, etc.).

Disadvantages:
• It is not good for real-time systems and embedded systems.
• Many cost estimation models like COCOMO use LOC, and hence the FPC must be converted to
LOC.

Cost Estimation:
• Cost estimation in software engineering is typically concerned with the financial spend on the
effort to develop and test the software, this can also include requirements review, maintenance,
training, managing and buying extra equipment, servers and software. Many methods have been
developed for estimating software costs for a given project.
• Several estimation procedures have been developed, and they have the following attributes in
common:
1. Project scope must be established in advance.
2. Software metrics are used as a basis from which estimates are made.
3. The project is broken into small pieces, which are estimated individually.
To achieve reliable cost and schedule estimates, several options arise:
1. Delay estimation until late in the project.
2. Use decomposition techniques to generate project cost and schedule estimates.
3. Acquire one or more automated estimation tools.

Uses of Cost Estimation


1. During the planning stage, one needs to choose how many engineers are required for the
project and to develop a schedule.
2. In monitoring the project's progress, one needs to assess whether the project is progressing
according to plan and take corrective action, if necessary.

Cost Estimation Models:


• A model may be static or dynamic. In a static model, a single variable is taken as the key element
for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no
basic variable.

Static, Single Variable Models:


• A model that uses a single variable to calculate desired values such as cost, time,
effort, etc. is said to be a single-variable model. The most common equation is:
C = a * L^b
where C = cost,
L = size,
a and b are constants.

The Software Engineering Laboratory established a model, called the SEL model, for estimating its
software production. This model is an example of a static, single-variable model:
E = 1.4 * L^0.93
DOC = 30.4 * L^0.90
D = 4.6 * L^0.26
where E = effort (in person-months),
DOC = documentation (number of pages),
D = duration (in months),
L = size (in KLOC).
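The SEL equations above can be turned into a small calculator. The 20 KLOC input below is an arbitrary illustration, not a value from the text.

```python
# The SEL single-variable equations as code. L (kloc) is the size in KLOC;
# 20 KLOC is an arbitrary example input.

def sel_estimates(kloc: float) -> dict:
    effort = 1.4 * kloc ** 0.93       # person-months
    doc = 30.4 * kloc ** 0.90         # pages of documentation
    duration = 4.6 * kloc ** 0.26     # months
    return {"effort_pm": effort, "doc_pages": doc, "duration_months": duration}

est = sel_estimates(20.0)
print({k: round(v, 1) for k, v in est.items()})
# ≈ {'effort_pm': 22.7, 'doc_pages': 450.6, 'duration_months': 10.0}
```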

Static, Multivariable Models:


• These models depend on several variables describing various aspects of the software
development environment. Several variables are needed to describe the software development
process, and a chosen equation combines these variables to give an estimate of time and cost.
Such models are called multivariable models.
• Walston and Felix developed a model at IBM in which the following equation gives the
relationship between lines of source code and effort:
E = 5.2 * L^0.91
In the same manner, the duration of development is given by:
D = 4.1 * L^0.36
The model also defines a productivity index, which uses 29 variables found to be highly
correlated with productivity:
I = W1*X1 + W2*X2 + ... + W29*X29
where Wi is the weight factor for the i-th variable and Xi ∈ {-1, 0, +1}; the estimator gives Xi one
of the values -1, 0 or +1 depending on whether the variable decreases, has no effect on, or
increases productivity.

Example: Compare the Walston-Felix model with the SEL model on a software development
project expected to involve 8 person-years of effort.
a. Calculate the number of lines of source code that can be produced.
b. Calculate the duration of the development.
c. Calculate the productivity in LOC/PY.
d. Calculate the average manning.
Solution:
The amount of manpower involved = 8 PY = 96 person-months.
(a) The number of lines of source code can be obtained by inverting the effort equation
E = a * L^b to give L = (E/a)^(1/b). Then:
L (SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC = 94,264 LOC
L (W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC = 24,632 LOC
(b) The duration in months can be calculated by means of the duration equations:
D (SEL) = 4.6 * (94.264)^0.26 = 15 months
D (W-F) = 4.1 * (24.632)^0.36 = 13 months
(c) Productivity is the number of lines of code produced per person-year:
P (SEL) = 94,264 / 8 ≈ 11,783 LOC/PY
P (W-F) = 24,632 / 8 ≈ 3,079 LOC/PY
(d) Average manning is the average number of persons required per month in the project:
M (SEL) = 96 / 15 ≈ 6.4 persons
M (W-F) = 96 / 13 ≈ 7.4 persons
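The arithmetic in this worked example can be checked with a short script (the variable names are ours):

```python
# Reproduces the worked comparison above: invert E = a * L^b to get
# L = (E/a)^(1/b), then compute duration, productivity and average manning.
# Effort is 8 person-years = 96 person-months; L is in KLOC.

E_PM = 96.0   # effort in person-months
E_PY = 8.0    # effort in person-years

def size_kloc(effort_pm, a, b):
    return (effort_pm / a) ** (1 / b)

L_sel = size_kloc(E_PM, 1.4, 0.93)   # ≈ 94.26 KLOC
L_wf = size_kloc(E_PM, 5.2, 0.91)    # ≈ 24.63 KLOC

D_sel = 4.6 * L_sel ** 0.26          # ≈ 15 months
D_wf = 4.1 * L_wf ** 0.36            # ≈ 13 months

prod_sel = 1000 * L_sel / E_PY       # ≈ 11,783 LOC/PY
prod_wf = 1000 * L_wf / E_PY         # ≈ 3,079 LOC/PY

manning_sel = E_PM / D_sel           # ≈ 6.4 persons
manning_wf = E_PM / D_wf             # ≈ 7.4 persons
```

Note that for the same effort, the SEL model predicts roughly four times the output of the Walston-Felix model, which is why cross-model comparisons should be read as rough envelopes rather than point estimates.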

The constructive cost model (COCOMO):


• COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of
lines of code.
• It is a procedural cost estimation model for software projects, often used for reliably
predicting the various parameters associated with a project, such as size, effort, cost,
time and quality.
• It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it
one of the best-documented models.
• The necessary steps in this model are:
1. Get an initial estimate of the development effort from an estimate of the thousands of delivered
lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying
factors, i.e., multiply the values from step 1 and step 2.
The initial estimate (also called the nominal estimate) is determined by an equation of the form used
in the static single-variable models, using KDLOC as the measure of size. To determine the initial
effort Ei in person-months, an equation of the following type is used:
Ei = a * (KDLOC)^b
where the values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic
2. Semidetached
3. Embedded

1. Organic:
• A development project can be treated as of the organic type if the project deals with developing a
well-understood application program, the size of the development team is reasonably small, and
the team members are experienced in developing similar kinds of projects.
• Examples of this type of projects are simple business systems, simple inventory management
systems, and data processing systems.

2. Semidetached:
• A development project can be treated as of the semidetached type if the development team
consists of a mixture of experienced and inexperienced staff. Team members may have finite
experience in related systems but may be unfamiliar with some aspects of the system being
developed.
• Example of Semidetached system includes developing a new operating system (OS), a Database
Management System (DBMS), and complex inventory management system.

3. Embedded:
• A development project is treated as of the embedded type if the software being developed is
strongly coupled to complex hardware, or if stringent regulations on the operational method
exist.
• For Example: ATM, Air Traffic control.

According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model

1. Basic COCOMO Model:


• The basic COCOMO model gives an approximate estimate of the project parameters. The
following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
where KLOC is the estimated size of the software product expressed in kilo lines of code,
a1, a2, b1, b2 are constants for each group of software products,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed
in person-months (PM).

Estimation of development effort:


• For the three classes of software products, the formulas for estimating the effort based on the
code size are shown below:
Organic: Effort = 2.4 * (KLOC)^1.05 PM
Semidetached: Effort = 3.0 * (KLOC)^1.12 PM
Embedded: Effort = 3.6 * (KLOC)^1.20 PM

Estimation of development time:


• For the three classes of software products, the formulas for estimating the development time
based on the effort are given below:
Organic: Tdev = 2.5 * (Effort)^0.38 months
Semidetached: Tdev = 2.5 * (Effort)^0.35 months
Embedded: Tdev = 2.5 * (Effort)^0.32 months
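A minimal sketch of a basic COCOMO calculator using the coefficient pairs listed above (the 400 KLOC input is an arbitrary example size):

```python
# Basic COCOMO estimator. For each mode, (a1, a2) drive the effort in
# person-months and (b1, b2) drive the development time in months.

COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * kloc ** a2   # person-months
    tdev = b1 * effort ** b2   # months
    return effort, tdev

# Effort and development time for a 400 KLOC product in each mode
for mode in COEFFS:
    e, d = basic_cocomo(400, mode)
    print(f"{mode}: effort = {e:.2f} PM, Tdev = {d:.2f} months")
```

Running this shows the sublinear-duration effect discussed below: effort more than triples from organic to embedded, while the development time stays close to 38 months in all three modes.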

Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes. A plot of estimated effort versus product size shows
that the effort is somewhat superlinear in the size of the software product; thus, the effort
required to develop a product increases very rapidly with project size.
Plotting development time versus product size in KLOC shows that the development time is a
sublinear function of the size of the product, i.e., when the size of the product doubles, the time
to develop the product does not double but rises only moderately. This can be explained by the
fact that for larger products, a larger number of activities that can be carried out concurrently
can be identified. These parallel activities can be carried out simultaneously by the engineers,
which reduces the time to complete the project. It can also be observed that the development
time is roughly the same for all three categories of products. For example, a 60 KLOC program
can be developed in approximately 18 months, regardless of whether it is of the organic,
semidetached, or embedded type.

From the effort estimation, the project cost can be obtained by multiplying the required effort by the
manpower cost per month. But, implicit in this project cost computation is the assumption that the
entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost,
a project would incur costs due to hardware and software required for the project and the company
overheads for administration, office space, etc.
It is important to note that the effort and duration estimates obtained using the COCOMO
model are called the nominal effort estimate and nominal duration estimate. The term nominal
implies that if anyone tries to complete the project in a shorter time than the estimated duration,
then the cost will increase drastically; but if anyone completes the project over a longer period
than estimated, there is almost no decrease in the estimated cost.

Example1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and development
time for each of the three model i.e., organic, semi-detached & embedded.

Solution: The basic COCOMO equations take the form:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Estimated size of project = 400 KLOC
(i) Organic mode:
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 months
(ii) Semidetached mode:
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 months
(iii) Embedded mode:
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.8)^0.32 = 38 months

Example2: A project size of 200 KLOC is to be developed. Software development team has
average experience on similar type of projects. The project schedule is not very tight. Calculate the
Effort, development time, average staff size, and productivity of the project.

Solution: The semidetached mode is the most appropriate mode, keeping in view the size, schedule
and experience of the development team.
Hence E = 3.0 * (200)^1.12 = 1133.12 PM
D = 2.5 * (1133.12)^0.35 = 29.3 months
Average staff size = E/D = 1133.12/29.3 ≈ 38.7 persons
Productivity P = 200,000 LOC / 1133.12 PM ≈ 176 LOC/PM

2. Intermediate Model:
• The basic COCOMO model assumes that the effort is only a function of the number of lines of
code and some constants evaluated according to the various software systems. However, in
reality, several other project parameters besides size also affect the effort and time required.
• The intermediate COCOMO model recognizes this fact and refines the initial estimate
obtained through the basic COCOMO model by using a set of 15 cost drivers based on various
attributes of software engineering.
Classification of Cost Drivers and their attributes:
(i) Product attributes:
o Required software reliability extent
o Size of the application database
o The complexity of the product
(ii) Hardware attributes:
o Run-time performance constraints
o Memory constraints
o The volatility of the virtual machine environment
o Required turnabout time
(iii) Personnel attributes:
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
(iv) Project attributes:
o Use of software tools
o Application of software engineering methods
o Required development schedule
The intermediate COCOMO equations are:
E = ai * (KLOC)^bi * EAF
D = ci * (E)^di
where EAF (Effort Adjustment Factor) is the product of the selected cost-driver multipliers.

Coefficients for intermediate COCOMO


Project        ai    bi    ci    di
Organic        2.4   1.05  2.5   0.38
Semidetached   3.0   1.12  2.5   0.35
Embedded       3.6   1.20  2.5   0.32
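A sketch of the intermediate computation, assuming the EAF is the product of the chosen cost-driver multipliers. The multiplier values below (1.15, 1.10) are invented for illustration; they are not Boehm's published multiplier tables.

```python
# Intermediate COCOMO sketch: effort = ai * KLOC^bi * EAF, where EAF is the
# product of the cost-driver multipliers. The example multipliers are invented.

COEFFS = {"organic":      (2.4, 1.05, 2.5, 0.38),
          "semidetached": (3.0, 1.12, 2.5, 0.35),
          "embedded":     (3.6, 1.20, 2.5, 0.32)}

def intermediate_cocomo(kloc, mode, driver_multipliers):
    ai, bi, ci, di = COEFFS[mode]
    eaf = 1.0
    for m in driver_multipliers:      # EAF = product of all driver multipliers
        eaf *= m
    effort = ai * kloc ** bi * eaf    # person-months
    tdev = ci * effort ** di          # months
    return eaf, effort, tdev

# e.g. two drivers rated above nominal (1.15 and 1.10), all others nominal (1.0)
eaf, effort, tdev = intermediate_cocomo(50, "organic", [1.15, 1.10])
```

Drivers rated nominal contribute a multiplier of 1.0 and so drop out of the product; here the two above-nominal drivers inflate the nominal estimate by a factor of about 1.27.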

3. Detailed COCOMO Model:


• Detailed COCOMO incorporates all characteristics of the intermediate version with an
assessment of the cost drivers' effects on each step of the software engineering process. The
detailed model uses different effort multipliers for each cost-driver attribute.
• In detailed COCOMO, the whole software is divided into multiple modules, and then COCOMO is
applied to the various modules to estimate effort; the efforts are then summed.
• The six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost constructive model
The effort is determined as a function of program size, and a set of cost drivers is applied in
every phase of the software lifecycle.

COCOMO II:
• COCOMO-II is the revised version of the original Cocomo (Constructive Cost Model)and is
developed at University of Southern California. It is the model that allows one to estimate the
cost, effort and schedule when planning a new software development activity.
• It consists of three sub-models:
1. End-User Programming:
• Application generators are used in this sub-model. End users write the code by using these
application generators.
• Example – spreadsheets, report generators, etc.

2. Intermediate Sector:

(a) Application Generators and Composition Aids:

• This category will create largely prepackaged capabilities for user programming. Their product
will have many reusable components. Typical firms operating in this sector are Microsoft, Lotus,
Oracle, IBM, Borland, Novell.

(b) Application Composition Sector:


• This category is too diversified to be handled by prepackaged solutions. It includes GUIs,
databases, and domain-specific components such as financial, medical or industrial process control
packages.

(c) System Integration:


• This category deals with large scale and highly embedded systems.

3.Infrastructure Sector:
• This category provides infrastructure for the software development like Operating System,
Database Management System, User Interface Management System, Networking System, etc.

Stages of COCOMO II:


1. Stage-I:
• It supports the estimation of prototyping efforts. For this it uses the Application Composition
Estimation Model. This model is used for the prototyping stage of application generators and
system integration.

2. Stage-II:
• It supports estimation in the early design stage of the project, when less is known about it. For
this it uses the Early Design Estimation Model. This model is used in the early design stage of
application generators, infrastructure and system integration.

3. Stage-III:
• It supports estimation in the post-architecture stage of a project. For this it uses the Post-
Architecture Estimation Model. This model is used after the completion of the detailed
architecture of application generators, infrastructure and system integration.

Difference between COCOMO 1 and COCOMO 2


COCOMO 1 Model:
• The Constructive Cost Model was first developed by Barry W. Boehm. The model is used for
estimating the effort, cost, and schedule of software projects. It is also called Basic COCOMO.
This model is used to give an approximate estimate of the various parameters of a project.
Examples of projects based on this model are business systems, payroll management systems and
inventory management systems.

COCOMO 2 Model:
• COCOMO-II is the revised version of the original COCOMO (Constructive Cost Model) and
was developed at the University of Southern California. This model calculates the development
time and effort as the total of the estimates of all the individual subsystems. In this model,
the whole software is divided into different modules. Examples of projects based on this model are
spreadsheets and report generators.

Difference between COCOMO 1 and COCOMO 2:


1. COCOMO I is useful in the waterfall model of the software development cycle; COCOMO II is
useful in non-sequential, rapid development and reuse models of software.
2. COCOMO I provides estimates of effort and schedule; COCOMO II provides estimates that
represent one standard deviation around the most likely estimate.
3. COCOMO I is based upon a linear reuse formula; COCOMO II is based upon a non-linear reuse
formula.
4. COCOMO I is based upon the assumption of reasonably stable requirements; COCOMO II is
based upon a reuse model which looks at the effort needed to understand and assimilate.
5. In COCOMO I, the effort equation's exponent is determined by 3 development modes; in
COCOMO II it is determined by 5 scale factors.
6. In COCOMO I, development begins with the requirements assigned to the software;
COCOMO II follows a spiral type of development.
7. COCOMO I has 3 sub-models and 15 cost drivers; COCOMO II has 4 sub-models and 17 cost
drivers.
8. In COCOMO I, the size of software is stated in terms of lines of code; in COCOMO II it is
stated in terms of object points, function points and lines of code.
Software Risk Management:
• Risk management is the process of identifying, addressing and eliminating potential problems
before they can damage the project.
• A software project can be affected by a large variety of risks. In order to be able to
systematically identify the significant risks which might affect a software project, it is essential
to classify risks into different classes. The project manager can then check which risks from each
class are relevant to the project.
• There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks

1. Project risks:
• Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-
related problems. A vital project risk is schedule slippage. Since software is intangible, it is
very difficult to monitor and control a software project; it is very difficult to control something
which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the
project manager can see the product taking shape.

2. Technical risks:
• Technical risks concern potential design, implementation, interfacing, testing, and maintenance
issues. They also include ambiguous specifications, incomplete specifications, changing
specifications, technical uncertainty, and technical obsolescence. Most technical risks appear due
to the development team's insufficient knowledge of the project.

3. Business risks:
• This type of risk includes the risk of building an excellent product that no one needs, losing
budgetary or personnel commitments, etc.

Other risk categories:


1. Known risks: Those risks that can be uncovered after careful assessment of the project
plan, the business and technical environment in which the project is being developed, and
other reliable information sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g.,
past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to identify
in advance.

Principle of Risk Management


1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and
create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the client
and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of project
management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.
Unit 5
Input/Output Design
The documents, reports and screens of a system are the user interface.
The user interface is the only part of the system that the user sees; the
rest is invisible. The user interface, therefore, is the most important part
of the system to the user.
Advantages
 Simplicity: It is easy to understand because every input has a clear and direct
effect on the output.
 Predictability: The system's behavior is consistent, making it easier to
anticipate outcomes.
 Ease of Testing: Each input-output pair can be tested independently,
simplifying the process.
 Modularity: Functions are isolated, allowing for easier updates and debugging.
 User-Friendliness: Users can quickly learn how their actions lead to specific
results.
 Transparency: The clear relationship between inputs and outputs builds trust
in the system.
Disadvantages
 Limited Flexibility: It is hard to handle complex relationships between inputs.
 Scalability Issues: Managing many input-output pairs can become difficult as
the system grows.
 Redundancy: Similar outputs for different inputs can lead to repeated logic.
 Harder Maintenance: Changes may require updates to many pairs, increasing
effort.
 Not Ideal for Complex Systems: It struggles with dynamic or interconnected
inputs.
Input Form Design
The quality of system input determines the quality of system output. It is vital that
input forms and screens, and output reports be designed with this critical
relationship in mind.
Objectives for Input Form Design
The objectives of input design are −

 To design data entry and input procedures


 To reduce input volume
 To design source documents for data capture or devise other data
capture methods
 To design input data records, data entry screens, user interface screens,
etc.
 To use validation checks and develop effective input controls.

Well-designed input forms and visual display screens should meet the objectives of
effectiveness, accuracy, ease of use, consistency, simplicity, and attractiveness. All of
these objectives are attainable through the use of basic design principles, knowledge
of what is needed as input for the system, and an understanding of how users respond
to different elements of forms and screens.
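The accuracy objective above is usually enforced with validation checks on the entered data. A minimal sketch in that spirit follows; the field names and validation rules are invented for the example, not taken from any particular system.

```python
# Illustrative input-validation checks for a data-entry form. The field names
# (customer_name, zip_code, quantity) and the rules are example assumptions.

import re

def validate_order_form(form: dict) -> list:
    """Return a list of error messages; an empty list means the input is valid."""
    errors = []
    # Presence check: a required text field must not be blank
    if not form.get("customer_name", "").strip():
        errors.append("Customer name is required.")
    # Format check: assume a 5-digit zip code for this example
    if not re.fullmatch(r"\d{5}", form.get("zip_code", "")):
        errors.append("Zip code must be exactly 5 digits.")
    # Range/type check: quantity must parse as a positive integer
    try:
        if int(form.get("quantity", "")) <= 0:
            errors.append("Quantity must be a positive whole number.")
    except ValueError:
        errors.append("Quantity must be a positive whole number.")
    return errors

print(validate_order_form({"customer_name": "Ada", "zip_code": "44600",
                           "quantity": "3"}))   # → []
```

Returning a list of messages, rather than failing on the first error, supports the feedback guideline discussed later: the user can be shown every problem with the form at once.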

Well designed forms:


The heading section usually includes the name and address of the business originating the form. The
identification and access section includes codes that may be used to identify the form and the user
accessing it.
The middle of the form is its body, which requires the most detailed and explicit information about
the process.
The bottom quarter of the form is composed of three sections: signature and verification, totals, and
comments. By requiring a signature in this part of the form, the designer is echoing the design of
other familiar documents, such as letters. Requiring ending totals and a summary of comments is a
logical way to provide closure for the person filling out the form.
Meeting the intended purpose:
Form are created to serve one or more purpose in the recording, processing, storing, and retrieving of
information for businesses.
Keeping forms attractive:
Forms should look uncluttered. They should appear organized and logical after they are filled in. To
be attractive, forms should elicit information in the expected order: convention dictates asking for
name, street address, city, state, zip or postal code, and country if necessary. Proper layout and flow
contribute to a form's attractiveness.

Input Screen Design:


A data entry screen is used to enter data into computer files from source documents (input
forms) or reports, as well as to update data. The four guidelines for screen design are
important but not exhaustive.
They include:
 Keep the screen simple
 Keep the screen presentation consistent
 Facilitate user movement among screens
 Create an attractive screen
This will lead to two issues important in designing data entry screens:
1. Speed of data entry
2. Accuracy of data entry
How good a user interface is depends very much on the user of the interface.

Example
A data entry user who has to enter 100 or more purchase orders every day will care most
about the speed of data entry. A usable interface in such a case would require a minimum
number of keystrokes to enter an order.
Likewise, a company director, who needs to check on the status of a project once or twice a
week, will care most about the ease with which the system can be used, and the way
information is presented to him.
Keeping the Screen Consistent:
The second guideline for good screen design is to keep the screen display consistent.
Screens can be kept consistent by locating information in the same area each time a new
screen is accessed. Also, information that logically belongs together should be consistently
grouped together. For example, name and address go together, not name and zip code.
Facilitating Movement between screens:
The third guideline for good screen design is to make it easy to move from one screen to
another. One common method for movement is to have users feel as if they are physically
moving to a new screen.
Screens should be designed so that the user is always aware of the status of an action. This can
be done in the following ways:
a. Use error messages to provide feedback on mistakes.
b. Use confirmation messages to provide feedback on update actions.
c. Use status messages when some backend process is taking place.
Menu Design
The human -computer dialogue defines how the interaction between the user and the
computer takes place. There are two common types of dialogue.
i. Menu
ii. Questions and answer
With a menu dialogue, a menu displays a list of alternate selections. The user makes a
selection by choosing the number or letter of the desired alternative. Menu dialogues are of
the most common type because they are appropriate for both frequent and infrequent users
of a system.
In menu selection, the user reads a list of items and selects the one most appropriate to
the task, usually by highlighting the selection and pressing the return key, or by keying
the menu item number.

A menu selection system requires a very complete and accurate analysis of user tasks to
ensure that all the necessary functions are supported with clear and consistent terminology.
Question and Answer Dialogue
With a question-and-answer dialogue, questions and alternative answers are presented.
The user selects the alternative that best answers the question. Question-and-answer
dialogues are most appropriate for inexperienced users of a system.
Example.
After an order is entered on the Order Entry screen, the system might ask whether the user
would like to create an invoice for the entered order. To this the user can offer one of two
answers: Yes or No.
Do you want to enter Invoice (Y/N)
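The Y/N prompt above can be sketched as a small reply-parsing function; the function name and accepted spellings are illustrative assumptions, not part of the text.

```python
def confirm(answer):
    """Interpret a Y/N reply: True for yes, False for no, None if unrecognized.
    A None result means the dialogue should re-display the question."""
    reply = answer.strip().lower()
    if reply in ("y", "yes"):
        return True
    if reply in ("n", "no"):
        return False
    return None
```

Accepting both "Y" and "yes" (in any case) keeps the dialogue forgiving for infrequent users while still yielding a single unambiguous answer.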
Output Design
Output design refers to the process of defining how information generated by a
system will be presented to users, including the format, layout, and delivery
method, ensuring the data is clear, relevant, and easily understandable to support
decision-making and meet user needs. Essentially, it is about designing the "output"
of the software, such as reports, screens, or notifications, in a way that is user-friendly
and aligned with the system's objectives.

Objectives of Output Design
 To develop output designs that serve the intended purpose and eliminate the production of unwanted output.
 To develop output designs that meet the end users' requirements.
 To deliver the appropriate quantity of output.
 To form the output in an appropriate format and direct it to the right person.
 To make the output available on time for making good decisions.

External Outputs
External outputs are designed for recipients outside the organization and are often produced
on printers. External outputs leave the system to trigger actions on the part of their recipients
or confirm actions to their recipients.
Some external outputs are designed as turnaround outputs, which are implemented as a form
and re-enter the system as an input.

Internal outputs
Internal outputs stay inside the system and are used by end-users and
managers. They support the management in decision making and reporting.

Detailed Reports − They contain present information which has almost no filtering or
restriction, generated to assist management planning and control.
Summary Reports − They contain trends and potential problems which are categorized
and summarized, generated for managers who do not want details.
Unit 6
Software Reliability
Reliability:
 Reliability is usually defined in terms of a statistical measure for the operation of a software system
without a failure occurring.

Software Reliability:
 Software Reliability means operational reliability. It is described as the ability of a system or
component to perform its required functions under stated conditions for a specified period of time.
 Software reliability is also defined as the probability that a software system fulfills its assigned task
in a given environment for a predefined number of input cases, assuming that the hardware and the
input are free of error.
 Software Reliability is an essential aspect of software quality, together with functionality,
usability, performance, serviceability, capability, installability, maintainability, and documentation.
 Software Reliability is hard to achieve because the complexity of software tends to be high. While
any system with a high degree of complexity, including software, will be hard to bring to a certain
level of reliability, system developers tend to push complexity into the software layer, with the
speedy growth of system size and the ease of doing so by upgrading the software.
 For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.

Two terms related to software reliability:
i. Fault: a defect in the software, e.g. a bug in the code, which may cause a failure.
ii. Failure: a deviation of the program's observed behavior from the required behavior.

Failures and Faults:
 A failure corresponds to unexpected run-time behaviour observed by a user of the software.
 A fault is a static software characteristic which causes a failure to occur. Faults need not
necessarily cause failures; they only do so if the faulty part of the software is used.

 Classification of Software Failures:
A possible classification of failures of software products into five different types is as follows:
i. Transient: Transient failures occur only for certain input values while invoking a function of the
system.
ii. Permanent: Permanent failures occur for all input values while invoking a function of the
system.
iii. Recoverable: When recoverable failures occur, the system recovers with or without operator
intervention.
iv. Unrecoverable: In unrecoverable failures, the system may need to be restarted.
v. Cosmetic: These classes of failures cause only minor irritations, and do not lead to incorrect
results. An example of a cosmetic failure is the case where the mouse button has to be clicked
twice instead of once to invoke a given function through the graphical user interface.

Reasons for software reliability being difficult to measure:
 The reasons why software reliability is difficult to measure can be summarized as follows:
 The reliability improvement due to fixing a single bug depends on where the bug is located in
the code.
 The perceived reliability of a software product is highly observer-dependent.
 The reliability of a product keeps changing as errors are detected and fixed.
 Hardware reliability and software reliability differ in nature, so hardware reliability
techniques cannot be applied directly.

Software Reliability Measurement Techniques:
 Reliability metrics are used to quantitatively express the reliability of the software product. The
choice of which metric is to be used depends upon the type of system to which it applies & the
requirements of the application domain.
 Measuring software reliability is a severe problem because we don't have a good understanding of
the nature of software. It is difficult to find a suitable method to measure software reliability and
most of the aspects connected to software reliability. Even software estimates have no uniform
definition. If we cannot measure reliability directly, something can be measured that reflects the
features related to reliability.
 The current methods of software reliability measurement can be divided into four categories:
1. Product Metrics:
 Product metrics describe the artifacts produced, i.e., requirement specification
documents, system design documents, etc. These metrics help assess whether the product is
sufficiently good through records of attributes like usability, reliability, maintainability & portability.
These measurements are taken from the actual body of the source code.
i. Software size is thought to be reflective of complexity, development effort, and reliability.
Lines of Code (LOC), or LOC in thousands (KLOC), is an initial intuitive approach to
measuring software size. The basis of LOC is that program length can be used as a predictor of
program characteristics such as effort & ease of maintenance.
ii. Function point metric is a technique to measure the functionality of proposed software
development based on a count of inputs, outputs, master files, inquiries, and interfaces. It
measures the functional complexity of the program and is independent of the programming
language.
iii. Test coverage metrics estimate fault content and reliability by performing tests on software
products, assuming that software reliability is a function of the portion of software that is
successfully verified or tested.
iv. Complexity is directly linked to software reliability, so representing complexity is essential.
Complexity-oriented metrics is a way of determining the complexity of a program's control
structure by simplifying the code into a graphical representation. The representative metric is
McCabe's Complexity Metric.
v. Quality metrics measure the quality at various steps of software product development. A vital
quality metric is Defect Removal Efficiency (DRE). DRE provides a measure of quality
because of the different quality assurance and control activities applied throughout the
development process.
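Defect Removal Efficiency is commonly computed as the defects removed before delivery divided by the total defects found before and after delivery. The sketch below assumes that common formula; the function name is illustrative.

```python
def defect_removal_efficiency(pre_release_defects, post_release_defects):
    """DRE = E / (E + D), where E = defects removed before delivery and
    D = defects reported by users after delivery. Result lies in [0, 1]."""
    total = pre_release_defects + post_release_defects
    if total == 0:
        return 1.0  # no defects found anywhere, so none escaped
    return pre_release_defects / total
```

For example, a project that removes 90 defects during development and sees 10 more reported after release has a DRE of 0.9, i.e., 90% of known defects were caught before delivery.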

2. Project Management Metrics:
 Project metrics define project characteristics and execution. If there is proper management of the
project by the programmer, then this helps us to achieve better products. A relationship exists
between the development process and the ability to complete projects on time and within the
desired quality objectives. Costs increase when developers use inadequate processes. Higher
reliability can be achieved by using a better development process, risk management process, and
configuration management process.
 These metrics are:
o Number of software developers
o Staffing pattern over the life-cycle of the software
o Cost and schedule
o Productivity

3. Process Metrics:
 Process metrics quantify useful attributes of the software development process & its environment.
They tell if the process is functioning optimally as they report on characteristics like cycle time &
rework time. The goal of process metrics is to do the right job right the first time through the process.
The quality of the product is a direct function of the process, so process metrics can be used to
estimate, monitor, and improve the reliability and quality of software. Process metrics describe the
effectiveness and quality of the processes that produce the software product.
 Examples are:
o The effort required in the process
o Time to produce the product
o Effectiveness of defect removal during development
o Number of defects found during testing
o Maturity of the process
4. Fault and Failure Metrics:
 A fault is a defect in a program which appears when the programmer makes an error, and causes a
failure when executed under particular conditions. These metrics are used to determine the
failure-free execution of the software.
 To achieve this objective, a number of faults found during testing and the failures or other problems
which are reported by the user after delivery are collected, summarized, and analyzed. Failure
metrics are based upon customer information regarding faults found after release of the software.
The failure data collected is therefore used to calculate failure density, Mean Time between
Failures (MTBF), or other parameters to measure or predict software reliability.
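The two measures named above can be sketched directly: MTBF as the average gap between recorded failure timestamps, and failure density as defects per KLOC. The function names and sample numbers are illustrative assumptions.

```python
def mean_time_between_failures(failure_times):
    """Estimate MTBF as the average gap between successive failure
    timestamps (e.g. hours); failure_times must be sorted ascending."""
    gaps = [later - earlier
            for earlier, later in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

def failure_density(defects_reported, kloc):
    """Defects reported after release per thousand lines of code."""
    return defects_reported / kloc
```

For instance, failures logged at hours 0, 10, 30 and 60 give gaps of 10, 20 and 30 hours, hence an MTBF of 20 hours; 30 defects in a 15 KLOC product give a failure density of 2 defects per KLOC.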

Software Quality:
 Software product quality is defined in terms of its fitness of purpose. That is, a quality product does
precisely what the users want it to do. For software products, fitness of purpose is generally
explained in terms of satisfaction of the requirements laid down in the SRS document. Although
"fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table
fan, a grinding machine, etc., for software products, "fitness of purpose" is not a wholly satisfactory
definition of quality.
 Example: Consider a functionally correct software product, i.e., it performs all tasks as
specified in the SRS document, but has an almost unusable user interface. Even though it may be
functionally correct, we cannot consider it to be a quality product.
 The modern view of quality associates with a software product several quality factors such as
the following:
i. Portability: A software device is said to be portable, if it can be freely made to work in various
operating system environments, in multiple machines, with other software products, etc.

ii. Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.

iii. Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.

iv. Correctness: A software product is correct if various requirements as specified in the SRS
document have been correctly implemented.

v. Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the product
can be easily modified, etc.

Software Reliability Models:
 A software reliability model indicates the form of a random process that defines the behavior of
software failures with respect to time.
 Software reliability models have appeared as people try to understand the features of how and why
software fails, and attempt to quantify software reliability.
 Over 200 models have been established since the early 1970s, but how to quantify software reliability
remains mostly unsolved.
 There is no single model that can be used in all situations. No model is complete or even
representative.
 Most software reliability models contain the following parts:
o Assumptions
o Factors
o A mathematical function that relates the reliability with the factors
Software Reliability Modeling Techniques:

 Software reliability modeling falls into two categories, prediction modeling and estimation
modeling. Both kinds of modeling methods are based on observing and accumulating failure data
and analyzing it with statistical inference.

Differentiate between software reliability prediction models and software reliability
estimation models:
Data Reference: Prediction models use historical data; estimation models use data from the
current software development effort.
When used in the development cycle: Prediction models are usually made before the development
or test phases and can be used as early as the concept phase; estimation models are usually made
later in the life cycle (after some data have been collected) and are not typically used in the
concept or development phases.
Time Frame: Prediction models predict reliability at some future time; estimation models estimate
reliability at either the present or some future time.

Reliability Models:
 A reliability growth model is a mathematical model of software reliability which predicts how
software reliability should improve over time as errors are discovered and repaired. These models
help the manager decide how much effort should be devoted to testing. The objective of the
project manager is to test and debug the system until the required level of reliability is reached.
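One widely cited form of reliability growth model (offered here only as an illustration, not necessarily the model this text has in mind) is the exponential NHPP model, whose mean value function is mu(t) = a(1 - e^(-bt)). The parameter values below are made up for demonstration.

```python
import math

# Illustrative parameters, not taken from the text:
# a = total expected failures, b = per-hour failure detection rate.
A, B = 100.0, 0.05

def expected_failures(t, a=A, b=B):
    """Mean value function of the exponential NHPP model: mu(t) = a(1 - e^(-bt))."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=A, b=B):
    """Probability of no failure in the next x hours after testing up to time t:
    R(x|t) = exp(-(mu(t+x) - mu(t)))."""
    return math.exp(-(expected_failures(t + x, a, b) - expected_failures(t, a, b)))
```

As testing time t grows, fewer undiscovered failures remain, so `reliability(10, 200)` exceeds `reliability(10, 50)`: this increase over test time is exactly the "growth" such models let a manager track against a target level.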
 The following are the software reliability models:
Capability Maturity Model (CMM):
 Capability Maturity Model is a common-sense application of software or Business Process
Management and quality improvement concepts to software development and maintenance.
 The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process.
 CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
 It is not a software process model. It is a framework which is used to analyse the approach and
techniques followed by any organization to develop a software product.
 It also provides guidelines to further enhance the maturity of those software products.
 It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
 This model describes a strategy that should be followed by moving through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level 1 are further
described by Key Process Areas (KPAs).

Methods of SEICMM:
There are two methods of SEICMM:
1. Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of capability evaluation indicate the likely contractor
performance if the contractor is awarded the work. Therefore, the results of the software process
capability assessment can be used to select a contractor.

2. Software Process Assessment: Software process assessment is used by an organization to improve
its process capability. Thus, this type of evaluation is for purely internal use.

Levels of CMM:
 CMM classifies software development industries into the following five maturity levels:

i. Level 1: Initial
 Ad hoc activities characterize a software development organization at this level. Very few or no
processes are defined and followed. Since software production processes are not defined,
different engineers follow their own processes and, as a result, development efforts become
chaotic. Therefore, it is also called the chaotic level.

ii. Level 2: Repeatable
 At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.

iii. Level 3: Defined
 At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of operations, roles, and
responsibilities. Though the processes are defined, the process and product qualities are not
measured. ISO 9000 aims at achieving this level.

iv. Level 4: Managed
 At this level, the focus is on software metrics. Two kinds of metrics are collected.
a. Product metrics measure the features of the product being developed, such as its size, reliability,
time complexity, understandability, etc.

b. Process metrics follow the effectiveness of the process being used, such as average defect
correction time, productivity, the average number of defects found per hour inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to analyze if a project performed satisfactorily.
Thus, the outcome of process measurements is used to calculate project performance rather than
improve the process.

v. Level 5: Optimizing
 At this phase, process and product metrics are collected. Process and product measurement data are
evaluated for continuous process improvement.

Why Use CMM?
 Today CMM acts as a "seal of approval" in the software industry. It helps in various ways to
improve software quality.
 It guides towards a repeatable standard process and hence reduces the learning time on how to get
things done.
 Practicing CMM means practicing a standard protocol for development, which means it not only
helps the team to save time but also gives a clear view of what to do and what to expect.
 The quality activities gel well with the project rather than being thought of as a separate event.
 It acts as a communicator between the project and the team.
 CMM efforts are always directed towards the improvement of the process.

Limitations of CMM Models:
The Limitations of CMM Models are as follows:
 CMM determines what a process should address instead of how it should be implemented.
 It does not explain every possibility of software process improvement.
 It concentrates on software issues but does not consider strategic business planning, adopting
technologies, establishing a product line, and managing human resources.
 It does not tell what kind of business an organization should be in.
 CMM will not be useful in a project that is in a crisis right now.
Unit 8
Software Testing
Testing:
• Testing is the process of evaluating a system or its component(s) with the intent to find whether it
satisfies the specified requirements or not.
• Testing is executing a system in order to identify any gaps, errors, or missing requirements
contrary to the actual requirements.

According to IEEE: “Testing means the process of analyzing a software item to detect the differences
between existing and required conditions (i.e. bugs) and to evaluate the features of the software item”.

According to Myers: “Testing is the process of analyzing a program with the intent of finding an
error”.

Software Testing:
• Software testing is a process of identifying the correctness of software by considering its all
attributes (Reliability, Scalability, Portability, Re-usability, Usability) and evaluating the execution
of software components to find the software bugs or errors or defects.
• Software testing provides an independent view and objective of the software and gives surety of
fitness of the software. It involves testing all components under the required services to confirm
whether they satisfy the specified requirements or not. The process also provides the client with
information about the quality of the software.

Principles of Testing:
i. All tests should meet the customer's requirements.
ii. To make our software trustworthy, testing should be performed by a third party.
iii. Exhaustive testing is not possible; we need an optimal amount of testing based on the risk
assessment of the application.
iv. All tests to be conducted should be planned before being implemented.
v. It follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of program
components.
vi. Start testing with small parts and extend it to large parts.

Why to Learn Software Testing?
 In the IT industry, large companies have a team with responsibilities to evaluate the developed
software in context of the given requirements. Moreover, developers also conduct testing which is
called Unit Testing.
 In most cases, the following professionals are involved in testing a system within their respective
capacities −
• Software Tester
• Software Developer
• Project Lead/Manager
• End User

What are the benefits of Software Testing?
 Here are the benefits of using software testing:
• Cost-Effective: It is one of the important advantages of software testing. Testing any IT project
on time helps you save money in the long term. If bugs are caught in the earlier stages of
software testing, they cost less to fix.
• Security: It is the most vulnerable and sensitive benefit of software testing. People are looking
for trusted products. It helps in removing risks and problems earlier.
• Product quality: It is an essential requirement of any software product. Testing ensures a
quality product is delivered to customers.
• Customer Satisfaction: The main aim of any product is to give satisfaction to their customers.
UI/UX Testing ensures the best user experience.

Applications of Software Testing:
• Cost Effective Development: Early testing saves both time and cost in many aspects, however
reducing the cost without testing may result in improper design of a software application
rendering the product useless.
• Product Improvement: During the SDLC phases, testing is never a time-consuming process.
However diagnosing and fixing the errors identified during proper testing is a time-consuming
but productive activity.
• Test Automation: Test Automation reduces the testing time, but it is not possible to start test
automation at any time during software development. Test automaton should be started when
the software has been manually tested and is stable to some extent. Moreover, test automation
can never be used if requirements keep changing.
• Quality Check: Software testing helps in determining following set of properties of any
software such as
o Functionality
o Reliability
o Usability
o Efficiency
o Maintainability
o Portability

Verification:
• Verification is the process of checking that software achieves its goal without any bugs. It is the
process of ensuring that the product that is being developed is built right.
• It verifies whether the developed product fulfills the requirements that we have.
• Verification is Static Testing.
• Activities involved in verification:
a) Inspections
b) Reviews
c) Walkthroughs
d) Desk-checking

Validation:
• Validation is the process of checking whether the software product is up to the mark, in other
words, whether the product meets the high-level requirements.
• It is the process of checking the validity of the product, i.e., it checks that what we are developing
is the right product. It is validation of the actual product against the expected product.
• Validation is the Dynamic Testing.
• Activities involved in validation:
a) Black box testing
b) White box testing
c) Unit testing
d) Integration testing
Difference between Verification and Validation:
i. Verification includes checking documents, design, code and programs; validation includes testing
and validating the actual product.
ii. Verification is static testing; validation is dynamic testing.
iii. Verification does not include the execution of the code; validation includes the execution of the
code.
iv. Methods used in verification are reviews, walkthroughs, inspections and desk-checking; methods
used in validation are black box testing, white box testing and non-functional testing.
v. Verification checks whether the software conforms to specifications or not; validation checks
whether the software meets the requirements and expectations of a customer or not.
vi. Verification can find bugs in the early stages of development; validation can only find the bugs
that could not be found by the verification process.
vii. The goal of verification is the application and software architecture and specification; the goal
of validation is the actual product.
viii. The quality assurance team does verification; validation is executed on software code with the
help of the testing team.
ix. Verification comes before validation; validation comes after verification.

The testing process:
Component testing
• Testing of individual program components.
• Usually the responsibility of the component developer (except sometimes for critical systems).
• Tests are derived from the developer's experience.
System testing
• Testing of groups of components integrated to create a system or sub-system.
• The responsibility of an independent testing team.
• Tests are based on a system specification.
What are different types of software testing?
 Software Testing can be broadly classified into two types:
1. Manual Testing:
• Manual testing means testing software manually, i.e., without using any automated tool or any
script. In this type, the tester takes on the role of an end-user and tests the software to identify any
unexpected behavior or bug.
• There are different stages for manual testing such as unit testing, integration testing, system testing,
and user acceptance testing.
• Testers use test plans, test cases, or test scenarios to test a software to ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.

2. Automation Testing:
• Automation testing, which is also known as Test Automation, is when the tester writes scripts and
uses other software to test the product. This process involves the automation of a manual process.
• Automation Testing is used to re-run the test scenarios that were performed manually, quickly, and
repeatedly.
• Apart from regression testing, automation testing is also used to test the application from load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and saves
time and money in comparison to manual testing.
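As a minimal sketch of test automation using Python's built-in `unittest` framework (the function under test and its expected values are hypothetical), a scripted test suite like this can be re-run unchanged after every fix, which is exactly the regression-testing use case described above.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: discounted price, floored at zero."""
    return max(0.0, price * (1 - percent / 100))

class TestDiscount(unittest.TestCase):
    """Scripted checks that can be re-run quickly and repeatedly."""

    def test_normal_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_discount_never_goes_below_zero(self):
        self.assertEqual(apply_discount(50.0, 150), 0.0)

if __name__ == "__main__":
    # Run the suite explicitly so the script works regardless of argv.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
    unittest.TextTestRunner().run(suite)
```

Running the script executes both checks every time, so a code change that breaks an old behavior is caught immediately without any manual re-testing.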

Black Box Testing:
• Black box testing is a technique of software testing which examines the functionality of software
without peering into its internal structure or coding.
• The primary source of black box testing is a specification of requirements that is stated by the
customer.
• In this method, tester selects a function and gives input value to examine its functionality, and
checks whether the function is giving expected output or not. If the function produces correct
output, then it is passed in testing, otherwise failed. The test team reports the result to the
development team and then tests the next function. After completing testing of all functions if there
are severe problems, then it is given back to the development team for correction.

Generic steps of black box testing:
o The black box test is based on the specification of requirements, so it is examined in the
beginning.
o In the second step, the tester creates a positive test scenario and an adverse test scenario by
selecting valid and invalid input values to check that the software is processing them correctly
or incorrectly.
o In the third step, the tester develops various test cases using techniques such as decision
tables, all-pairs testing, equivalence partitioning, error guessing, cause-effect graphing, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if there is any flaw in the software, it is fixed and tested again.
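The steps above can be sketched as a table-driven check: the tester derives (input, expected output) pairs from the specification alone and compares expected against actual output, never looking at the code. The graded-score function and its cases below are hypothetical.

```python
def grade(score):
    """Hypothetical function under test. The tester only knows its spec:
    0-39 -> 'fail', 40-79 -> 'pass', 80-100 -> 'distinction'."""
    if score < 40:
        return "fail"
    if score < 80:
        return "pass"
    return "distinction"

# Positive test scenarios derived from the specification, not from the code.
CASES = [
    (0, "fail"), (39, "fail"),
    (40, "pass"), (79, "pass"),
    (80, "distinction"), (100, "distinction"),
]

def run_black_box(function, cases):
    """Compare expected output against actual output; return the failing cases."""
    return [(given, wanted, function(given))
            for given, wanted in cases
            if function(given) != wanted]
```

An empty result from `run_black_box` means every case passed; any entries it returns go back to the development team for correction, mirroring the report-and-retest cycle described above.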

Types of Black Box Testing:
There are many types of Black Box Testing but the following are the prominent ones -
• Functional testing - This black box testing type is related to the functional requirements of a
system; it is done by software testers.
• Non-functional testing - This type of black box testing is not related to testing of specific
functionality, but non-functional requirements such as performance, scalability, usability.
• Regression testing - Regression Testing is done after code fixes, upgrades or any other system
maintenance to check that the new code has not affected the existing code.

Advantages:
• Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
• Tester need not know programming languages or how the software has been implemented.
• Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
• Test cases can be designed as soon as the specifications are complete.

Disadvantages:
• Only a small number of possible inputs can be tested and many program paths will be left untested.
• Without clear specifications, which is the situation in many projects, test cases will be difficult to
design.
• Tests can be redundant if the software designer/developer has already run a test case.
• Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case in
Black Box Testing.

Techniques Used in Black Box Testing:
Boundary Value Analysis:
• Boundary value analysis is one of the widely used test case design techniques for black box
testing. It is used to test boundary values because input values near the boundary have higher
chances of error.
• Whenever we do the testing by boundary value analysis, the tester focuses on, while entering
boundary value whether the software is producing correct output or not.
• Boundary values are those that contain the upper and lower limit of a variable. Assume that, age is
a variable of any function, and its minimum value is 18 and the maximum value is 30, both 18 and
30 will be considered as boundary values.
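The age example above can be sketched as code: for a closed range [18, 30] the classic boundary set is each limit plus the values just inside and just outside it. The validator function is a hypothetical stand-in for the function under test.

```python
def boundary_values(minimum, maximum):
    """Classic boundary value set for a closed range [minimum, maximum]:
    each limit plus the values just inside and just outside it."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

def is_valid_age(age):
    """Hypothetical validator for the age example in the text (18..30)."""
    return 18 <= age <= 30

# Expected verdict at each boundary value for age in [18, 30].
EXPECTED = {17: False, 18: True, 19: True, 29: True, 30: True, 31: False}
```

Six values (17, 18, 19, 29, 30, 31) exercise both limits from each side, which is why boundary value analysis needs far fewer test data than testing every age, as noted in the advantages below.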

Advantages of boundary value analysis
• It is easier and faster to find defects as the density of defects at boundaries is more.
• The overall test execution time reduces as the number of test data greatly reduces.

Disadvantages of boundary value analysis
• The success of the testing using boundary value analysis depends on the equivalence classes
identified, which further depends on the expertise of the tester and his knowledge of application.
Hence, incorrect identification of equivalence classes leads to incorrect boundary value testing.
• Applications with open boundaries, or applications not having one-dimensional boundaries, are
not suitable for boundary value analysis. In those cases, other black-box techniques like “Domain
Analysis” are used.
What do you verify in White Box Testing?
 White box testing involves the testing of the software code for the following:
• Internal security holes
• Broken or poorly structured paths in the coding processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object, and function on an individual basis
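The loop and statement checks listed above can be illustrated with a small sketch. The function `sum_positive` is hypothetical; the white-box idea shown is exercising a conditional loop with zero, one, and many iterations so that every statement and both branches of the condition execute.

```python
# Hypothetical module under test: a loop containing a condition.
def sum_positive(numbers):
    total = 0
    for n in numbers:
        if n > 0:          # conditional inside the loop
            total += n
    return total

# White-box loop testing: zero, one, and many iterations,
# with inputs that drive the condition down both branches.
assert sum_positive([]) == 0          # zero iterations
assert sum_positive([5]) == 5         # one iteration, true branch
assert sum_positive([1, -2, 3]) == 4  # many iterations, both branches
```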

Advantages:
• White box testing is very thorough as the entire code and structures are tested.
• It results in the optimization of code removing error and helps in removing extra lines of code.
• It can start at an earlier stage, as it doesn’t require a working interface, unlike black box testing.
• Easy to automate.

Disadvantages:
• The main disadvantage is that it is very expensive.
• Redesigned or rewritten code requires test cases to be written again.
• Testers are required to have in-depth knowledge of the code and programming language, as opposed to black box testing.
• Missing functionalities cannot be detected, as only the code that exists is tested.
• Very complex and at times not realistic.

Differences between Black Box Testing vs White Box Testing:


• Black Box: a way of software testing in which the internal structure, program, or code is hidden and nothing is known about it. White Box: a way of testing in which the tester has knowledge of the internal structure, code, or program of the software.
• Black Box: mostly done by software testers. White Box: mostly done by software developers.
• Black Box: no knowledge of implementation is needed. White Box: knowledge of implementation is required.
• Black Box: can be referred to as outer or external software testing. White Box: the inner or internal software testing.
• Black Box: a functional test of the software. White Box: a structural test of the software.
• Black Box: can be initiated on the basis of the requirement specification document. White Box: started after the detailed design document.
• Black Box: no knowledge of programming is required. White Box: knowledge of programming is mandatory.
• Black Box: behaviour testing of the software. White Box: logic testing of the software.
• Black Box: applicable to the higher levels of software testing. White Box: generally applicable to the lower levels of software testing.
• Black Box: also called closed testing. White Box: also called clear box testing.
• Black Box: least time-consuming. White Box: most time-consuming.
• Black Box: not suitable or preferred for algorithm testing. White Box: suitable for algorithm testing.
• Black Box: can be done by trial-and-error methods. White Box: data domains along with inner or internal boundaries can be better tested.
• Black Box example: searching something on Google by using keywords. White Box example: supplying input to check and verify loops.
• Types of Black Box Testing: Functional Testing, Non-functional Testing, Regression Testing. Types of White Box Testing: Path Testing, Loop Testing, Condition Testing.

Levels of Testing:
 A level of software testing is a process in which every unit or component of a software/system is tested. The main goal is to evaluate the system's compliance with the specified needs.
 There are mainly four testing levels:
1. Unit Testing:
A unit is the smallest testable portion of a system or application which can be compiled, linked, loaded, and executed. This kind of testing helps to test each module separately.
The aim is to test each part of the software in isolation. It checks whether each component fulfils its functionality. This kind of testing is performed by developers.
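A unit test can be sketched with Python's built-in unittest module. The module under test (a discount calculation) is hypothetical; the point is that a single function is tested in isolation, including its error handling.

```python
import unittest

# Hypothetical unit under test: apply a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # 25% off 200 should give 150
        self.assertEqual(apply_discount(200, 25), 150)

    def test_invalid_percent_rejected(self):
        # out-of-range percentages must raise an error
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```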

2. Integration Testing:
Integration means combining. In this testing phase, different software modules are combined and tested as a group to make sure that the integrated system is ready for system testing.
Integration testing checks the data flow from one module to other modules. This kind of testing is performed by testers.
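The data-flow check described above can be sketched as follows. Both modules (a parser and a formatter) are hypothetical; the integration test verifies that the output of one module flows correctly into the other.

```python
# Hypothetical module A: parse a raw record line.
def parse_record(line):
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

# Hypothetical module B: consumes module A's output.
def format_record(record):
    return f"{record['name']}: {record['score']} points"

# Integration test: exercise both modules together and check
# the data flow from parser to formatter.
assert format_record(parse_record("Alice, 42")) == "Alice: 42 points"
```

Each module may pass its own unit tests, yet the combination can still fail (for example, if module B expected a list rather than a dictionary); that mismatch is exactly what integration testing catches.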

3. System Testing:
System testing is performed on a complete, integrated system. It allows checking the system's compliance with the requirements. It tests the overall interaction of components, and involves load, performance, reliability and security testing.
System testing is most often the final test to verify that the system meets the specification. It evaluates both functional and non-functional needs.

4. Acceptance Testing:
Acceptance testing is conducted to determine whether the requirements of a specification or contract are met as per delivery. Acceptance testing is basically done by the user or customer. However, other stakeholders can be involved in this process.

Software Testing Tools:


 Software testing tools are the tools which are used for the testing of software. They are often used to assure firmness, thoroughness and performance in testing software products. Unit testing and subsequent integration testing can be performed with software testing tools. These tools are used to fulfill all the requirements of planned testing activities, and many are available as commercial software testing tools. The quality of the software is evaluated by software testers with the help of various testing tools.

Types of Testing Tools:


1. Static Test Tools:
 Static test tools are used to carry out static testing. These tools do not test the real execution of the software, so no test inputs or outputs are required.
 Static test tools consist of the following:
• Flow analyzers: check the flow of data from input to output.
• Path tests: find unused code and code with inconsistencies in the software.
• Coverage analyzers: ensure that all logic paths in the software are covered.
• Interface analyzers: check the consequences of passing variables and data between modules.

2. Dynamic Test Tools:


 The dynamic testing process is performed by dynamic test tools. These tools test the software with existing or current data.
 Dynamic test tools comprise the following:
• Test driver: provides the input data to a module-under-test (MUT).
• Test beds: display the source code along with the program under execution at the same time.
• Emulators: provide response facilities used to imitate parts of the system not yet developed.
• Mutation analyzers: used for testing the fault tolerance of the system by deliberately seeding errors into the code of the software.
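A test driver, as described above, can be sketched in a few lines. The module-under-test `mut_square` and the driver structure are hypothetical; the driver's only job is to feed input data to the MUT and record pass/fail for each case.

```python
# Hypothetical module-under-test (MUT).
def mut_square(x):
    return x * x

# Minimal test driver: feeds each input to the MUT and records
# whether the actual output matched the expected output.
def run_driver(cases):
    results = []
    for value, expected in cases:
        actual = mut_square(value)
        results.append((value, actual == expected))
    return results

cases = [(2, 4), (3, 9), (-1, 1)]
print(run_driver(cases))  # each tuple pairs the input with a pass/fail flag
```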
Unit 8
Software Maintenance

Software Maintenance:
• Software Maintenance is the process of modifying a software product after it has been delivered to the customer.
• It is an inclusive activity that includes error correction, enhancement of capabilities, deletion of obsolete capabilities, and optimization.
• The main purpose of software maintenance is to modify and update a software application after delivery to correct faults and improve performance.

Need for Maintenance:


 Software Maintenance is needed for:
o Correct errors
o Change in user requirement with time
o Changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.
o Migrate legacy software.
o Retire software.

Types of Software Maintenance:


The types of software maintenance are as follows:
a) Corrective Maintenance: This includes modifications and updates done in order to correct
or fix problems, which are either discovered by users or concluded from user error reports.

b) Adaptive Maintenance: This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and business
environment.

c) Perfective Maintenance: This includes modifications and updates done in order to keep the
software usable over a long period of time. It includes new features and new user requirements
for refining the software and improving its reliability and performance.

d) Preventive Maintenance: This includes modifications and updates to prevent future
problems of the software. It aims to attend to problems which are not significant at this moment
but may cause serious issues in future.

Challenges in Software Maintenance:


 Though maintaining software is considered essential these days, it is not a simple procedure and
entails considerable effort. The process requires knowledgeable experts who are well versed in the
latest software engineering trends and can perform suitable programming and testing. Furthermore,
programmers can face several challenges while executing software maintenance, which can make
the process time-consuming and costly. Some of the challenges encountered while performing
software maintenance are:
• Finding the person or developer who constructed the program can be difficult and time-consuming.
• Changes are made by an individual who is unable to understand the program clearly.
• The systems are not maintained by the original authors, which can result in confusion and
misinterpretation of changes executed in the program.
• Information gap between user and the developer can also become a huge challenge in software
maintenance.
• The biggest challenge in software maintenance is when systems are not designed for changes.

Process of Software Maintenance:


 Software Maintenance is an important phase of the Software Development Life Cycle (SDLC), and it
is implemented through a proper software maintenance process, known as the Software
Maintenance Life Cycle (SMLC). This life cycle consists of seven different phases, each of which
can be used in an iterative manner and can be extended so that customized items and processes can be
included. The seven phases of the software maintenance process are:
1. Identification Phase:
• In this phase, the requests for modifications in the software are identified and analyzed. Each
requested modification is then assessed to determine and classify the type of maintenance
activity it requires. The request is generated either by the system itself, via logs or error messages,
or by the user.

2. Analysis Phase:
• The feasibility and scope of each validated modification request are determined and a plan is
prepared to incorporate the changes in the software. The input attribute comprises validated
modification request, initial estimate of resources, project documentation, and repository
information. The cost of modification and maintenance is also estimated.

3. Design Phase:
• The new modules that need to be replaced or modified are designed as per the requirements
specified in the earlier stages. Test cases are developed for the new design including the safety
and security issues. These test cases are created for the validation and verification of the system.

4. Implementation Phase:
• In the implementation phase, the actual modifications in the software code are made, new
features that support the specifications of the present software are added, and the modified
software is installed. The new modules are coded with the assistance of the structured design
created in the design phase.

5. System Testing Phase:


• Regression testing is performed on the modified system to ensure that no defect, error or bug is
left undetected. Furthermore, it validates that no new faults are introduced in the software as a
result of maintenance activity. Integration testing is also carried out between new modules and
the system.
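The regression check described above can be sketched in a few lines. The `tax` function and its maintenance change are hypothetical; the idea shown is that the pre-existing test cases are re-run alongside the new ones to confirm old behaviour is preserved.

```python
# Hypothetical function modified during maintenance:
# a new 0% band below 100 was added to an existing 10% flat rate.
def tax(amount):
    return 0 if amount < 100 else amount * 0.1

# Regression suite: cases that passed before the change must still pass.
regression_suite = [(100, 10.0), (250, 25.0)]
# New cases covering the added feature.
new_cases = [(50, 0)]

for amount, expected in regression_suite + new_cases:
    assert tax(amount) == expected, f"regression at amount {amount}"
```

If the maintenance change had accidentally used `<=` instead of `<`, the pre-existing case for 100 would fail, which is precisely the defect class regression testing exists to catch.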

6. Acceptance Testing Phase:


• Acceptance testing is performed on the fully integrated system by the user or by the third party
specified by the end user. The main objective of this testing is to verify that all the features of
the software are according to the requirements stated in the modification request.
7. Delivery Phase:
• Once the acceptance testing is successfully accomplished, the modified system is delivered to
the users. In addition, the user is provided with proper documentation consisting of manuals and
help files that describe the operation of the software along with its hardware specifications. The
final testing of the system is done by the client after the system is delivered.

Software Maintenance Models:


 To overcome internal as well as external problems of the software, software maintenance models
have been proposed. These models use different approaches and techniques to simplify the process of
maintenance as well as to make it cost-effective. The software maintenance models of most
importance are:

1. Quick-Fix Model:
• This is an ad hoc approach used for maintaining the software system. The objective of this model is
to identify the problem and then fix it as quickly as possible. The advantage is that it performs its
work quickly and at a low cost. This model is an approach to modify the software code with little
consideration for its impact on the overall structure of the software system.

2. Iterative Enhancement Model:


• The iterative enhancement model considers that the changes made to the system are iterative in nature.
This model incorporates changes in the software based on the analysis of the existing system. It assumes
complete documentation of the software is available at the beginning. Moreover, it attempts to
control complexity and tries to maintain good design.

• Iterative Enhancement Model is divided into three stages:


i. Analysis of software system.
ii. Classification of requested modifications.
iii. Implementation of requested modifications.

3. The Re-use Oriented Model:


• In the reuse-oriented model, the parts of the old/existing system that are appropriate for reuse are
identified and understood. These parts then go through modification and enhancement, based on
the specified new requirements. The final step of this model is the integration of the modified parts
into the new system.

4. Boehm's Model:
• Boehm’s Model performs maintenance process based on the economic models and principles. It
represents the maintenance process in a closed loop cycle, wherein changes are suggested and
approved first and then are executed.

5. Taute Maintenance Model:


• Named after the person who proposed the model, Taute’s model is a typical maintenance model
that consists of eight phases in a cyclic fashion. The process of maintenance begins with a change
request and ends with its operation. The phases of Taute’s Maintenance Model are:
i. Change request Phase.
ii. Estimate Phase.
iii. Schedule Phase.
iv. Programming Phase.
v. Test Phase.
vi. Documentation Phase.
vii. Release Phase.
viii. Operation Phase.

Software Maintenance Cost Factors:


 There are two types of cost factors involved in software maintenance.
 These are
1. Non-Technical Factors
2. Technical Factors

1. Non-Technical Factors:
a. Application Domain:
o If the application of the program is well defined and well understood, the system requirements may
be definitive and maintenance due to changing needs minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified frequently
as users gain experience with the system.

b. Staff Stability:
o It is simpler for the original writer of a program to understand and change an application than
for some other person who must understand the program by studying its reports and code
listings.
o If the implementer of a system also maintains that system, maintenance costs will be reduced.
o In practice, the nature of the programming profession is such that people change jobs
regularly. It is unusual for one person to develop and maintain an application throughout its useful
life.

c. Program Lifetime:
o Programs become obsolete when the application becomes obsolete, or when their original hardware is
replaced and conversion costs exceed rewriting costs.

d. Dependence on External Environment:


o If an application is dependent on its external environment, it must be modified as that environment
changes.
o For example:
o Changes in a taxation system might require payroll, accounting, and stock control programs to be
modified.
o Taxation changes are fairly frequent, and maintenance costs for these programs are associated
with the frequency of these changes.
o A program used in mathematical applications does not typically depend on humans changing
the assumptions on which the program is based.

e. Hardware Stability:
o If an application is designed to operate on a specific hardware configuration and that
configuration does not change during the program's lifetime, no maintenance costs due to
hardware changes will be incurred.
o However, hardware developments are so rapid that this situation is rare.
o The application must be changed to use new hardware that replaces obsolete equipment.

2. Technical Factors:
 Technical Factors include the following:

Module Independence:
• It should be possible to change one program unit of a system without affecting any other unit.

Programming Language:
• Programs written in a high-level programming language are generally easier to understand than
programs written in a low-level language.

Programming Style:
• The method in which a program is written contributes to its understandability and hence, the ease
with which it can be modified.

Program Validation and Testing:


• Generally, the more time and effort spent on design validation and program testing, the fewer
bugs in the program, and consequently maintenance costs resulting from bug correction are lower.
• Maintenance costs due to bug correction are governed by the type of fault to be repaired.
• Coding errors are generally relatively cheap to correct; design errors are more expensive as they
may involve the rewriting of one or more program units.
• Bugs in the software requirements are usually the most expensive to correct because of the drastic
redesign generally involved.

Documentation:
• If a program is supported by clear, complete yet concise documentation, the task of
understanding the application can be comparatively straightforward.
• Program maintenance costs tend to be lower for well-documented systems than for systems supplied
with inadequate or incomplete documentation.

Configuration Management Techniques:


• One of the essential costs of maintenance is keeping track of all system documents and ensuring
that these are kept consistent.
• Effective configuration management can help control these costs.

Role of Documentation in Software Maintenance:


• It is essential in the maintenance phase to record all the changes made and the reason for each
change. For corrective maintenance, this documentation can start as a fault report filed by
a user. These fault reports are compiled into a bug-tracking system.

Types of Documentation:
 All software documentation can be divided into two main categories: Product Documentation and
Process Documentation.
1. Product documentation:
• Product documentation describes the product that is being developed and provides instructions on
how to perform various tasks with it.
• Product documentation can be broken down into:
a) System documentation: System documentation represents documents that describe the system
itself and its parts. It includes requirements documents, design decisions, architecture descriptions,
program source code, and help guides.

b) User documentation: User documentation covers manuals that are mainly prepared for end-users
of the product and system administrators. User documentation includes tutorials, user guides,
troubleshooting manuals, installation, and reference manuals.

2. Process documentation:
• Process documentation represents all documents produced during development and maintenance
that describe the process itself.
• The common examples of process documentation are project plans, test schedules, reports,
standards, meeting notes, or even business correspondence.
