
Software Engineering Introduction

Software Engineering is a systematic, disciplined, quantifiable study and approach to the design,
development, operation, and maintenance of a software system. These articles help you understand
the basics of software engineering. This introduction covers topics such as the basics of software
and software engineering, the need for software engineering, and more.
1. Introduction to Software Engineering
2. Introduction to Software Development
3. Classification of Software
4. Software Evolution
5. What is the Need of Software Engineering?
6. What does a Software Engineer Do?

Software Development Models & Architecture


Software development models are frameworks that guide the process of creating software
applications. They provide a structured approach to planning, designing, implementing, testing, and
deploying software. Here are some common software development models.
1. Classical Waterfall Model
2. Iterative Waterfall Model
3. Spiral Model
4. Incremental process model
5. Rapid Application Development Model (RAD)
6. RAD Model vs Traditional SDLC
7. Agile Development Models
8. Agile Software Development
9. Extreme Programming (XP)
10. SDLC V-Model
11. Comparison of different life cycle models

Software architecture refers to the high-level structure of a software system. It defines the
components, their interactions, and the principles guiding their design. Here are some related
topics:
1. User Interface Design
2. Coupling and Cohesion
3. Information System Life Cycle
4. Database application system life cycle
5. Pham-Nordmann-Zhang Model (PNZ model)
6. Schick-Wolverton software reliability model

Software Project Management(SPM)


Software Project Management (SPM) involves planning, organizing, and controlling software
development projects to ensure they are completed on time, within budget, and according to
specified quality standards. Here are some articles that give you a deep understanding of Software
Project Management (SPM):
1. Project Management Process
2. Project size estimation techniques
3. System configuration management
4. COCOMO Model
5. Capability maturity model (CMM)
6. Integrating Risk Management in SDLC | Set 1
7. Integrating Risk Management in SDLC | Set 2
8. Integrating Risk Management in SDLC | Set 3
9. Role and Responsibilities of a software Project Manager
10. Software Project Management Complexities
11. Quasi renewal processes
12. Reliability Growth Models

13. Jelinski Moranda software reliability model
14. Schick-Wolverton software reliability model
15. Goel-Okumoto Model
16. Mills’ Error Seeding Model
17. Basic fault tolerant software techniques
18. Software Maintenance

Software Metrics
Software metrics are quantitative measures used to assess various aspects of software development
processes, products, and projects. These metrics provide valuable insights into the quality,
performance, and efficiency of software development efforts. Here are some common software
metrics:
1. Software Measurement and Metrics
2. People Metrics and Process Metrics in Software Engineering
3. Halstead’s Software Metrics
4. Cyclomatic Complexity
5. Functional Point (FP) Analysis – Software Engineering
6. Lines of Code (LOC) in Software Engineering
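To make these metrics concrete, here is a minimal, hand-worked sketch in Python. The grade() function is a hypothetical example (not taken from the articles listed above), and the counts in the comments follow the common rule that cyclomatic complexity V(G) equals the number of decision points plus one.

# Hypothetical function used only to illustrate two common metrics:
# physical Lines of Code (LOC) and McCabe's cyclomatic complexity.
def grade(score):
    """Return a letter grade for a numeric score."""
    if score >= 90:        # decision point 1
        return "A"
    elif score >= 75:      # decision point 2
        return "B"
    elif score >= 60:      # decision point 3
        return "C"
    return "F"

# Cyclomatic complexity: V(G) = decision points + 1 = 3 + 1 = 4, so at least
# four independent paths (and hence four test cases) are needed to cover it.
# Physical LOC of the function (ignoring comments and the docstring) is 8.
print(grade(82))   # prints "B"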

Software Requirements
Software requirements are descriptions of the features, functions, capabilities, and constraints that a
software system must possess to meet the needs of its users and stakeholders. They serve as the
foundation for software development, guiding the design, implementation, and testing phases of the
project. These articles break down software requirements into easy-to-understand concepts:
1. Requirements Engineering Process
2. Classification of Software Requirements
3. How to write a good SRS for your Project
4. Quality Characteristics of a good SRS
5. Requirements Elicitation
6. Challenges in eliciting requirements

Software Configuration
Software configuration refers to the process of managing and controlling changes to software
systems, components, and related artifacts throughout the software development lifecycle. Here are
some articles that help you explore Software Configuration in depth:
1. Software Configuration Management
2. Objectives of Software Configuration Management
3. Software Quality Assurance
4. Project Monitoring & Control

Software Quality
Software quality refers to the degree to which a software product meets specified requirements and
satisfies customer expectations, ensuring it is reliable, efficient, maintainable, and user-friendly.
These articles provide an in-depth explanation of Software Quality:
1. Software Quality
2. ISO 9000 Certification
3. SEICMM
4. Six Sigma

Software Design
Software design involves creating a blueprint or plan for how a software system will be structured
and organized to meet its requirements effectively and efficiently. These articles give you a clear
explanation of Software Design.
1. Software Design Process
2. Software Design process – Set 2

3. Software Design Principles
4. Coupling and Cohesion
5. Function Oriented Design
6. Object Oriented Design
7. User Interface Design

Software Reliability
Software reliability refers to the ability of a software system to consistently perform its intended
functions under specified conditions for a defined period of time, without failures or errors that may
disrupt its operation. Here are some articles that help to understand various concepts regarding
software reliability.
1. Software Reliability
2. Software Fault Tolerance

Software Testing and Debugging


Software testing and debugging are integral parts of the software development lifecycle, aimed at
ensuring the quality and reliability of software products. Here are some articles that help to
understand various concepts regarding software testing and debugging.
1. Software Testing Tutorial
2. Seven Principles of software testing
3. Testing Guidelines
4. Black box testing
5. White box Testing
6. Debugging
7. Selenium: An Automation tool
8. Integration Testing

Software Maintenance
Software maintenance refers to the process of updating, modifying, and enhancing software to
ensure its continued effectiveness, efficiency, and relevance over time. Here are some articles that
help to understand various concepts regarding software maintenance.
1. Software Maintenance
2. Cost and efforts of software maintenance

Difference Between
Understanding the differences between software engineering concepts provides clarity on their
unique strengths and weaknesses, empowering individuals to make informed decisions about which
concept is best suited for specific purposes or projects. This knowledge enables effective selection,
implementation, and optimization of software engineering practices to achieve desired outcomes
efficiently.
1. Waterfall model vs Incremental model
2. v-model vs waterfall model
3. Manual testing vs Automation testing
4. Sanity Testing vs Smoke Testing
5. Cohesion vs Coupling
6. Alpha Testing vs Beta Testing
7. Testing and Debugging
8. Functional vs Non-functional Testing
9. Waterfall Model vs Spiral Model
10. RAD vs Waterfall
11. Unit Testing vs System Testing
12. Load Testing vs Stress Testing
13. Frontend Testing vs Backend Testing
14. Agile Model vs V-Model

Introduction to Software Engineering – Software Engineering
Software is a program or set of programs containing instructions that provide the desired
functionality. Engineering is the process of designing and building something that serves a particular
purpose and finds a cost-effective solution to problems.

What is Software Engineering?


Software Engineering is the process of designing, developing, testing, and maintaining software. It
is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.
1. Software engineering includes a variety of techniques, tools, and methodologies, including
requirements analysis, design, testing, and maintenance.
2. It is a rapidly evolving field, and new tools and technologies are constantly being developed to
improve the software development process.
3. By following the principles of software engineering and using the appropriate tools and
methodologies, software developers can create high-quality, reliable, and maintainable software
that meets the needs of its users.
4. Software Engineering is mainly used for large projects based on software systems rather than
single programs or applications.
5. The main goal of Software Engineering is to develop software applications with improved
quality while staying within time and budget constraints.
6. Software Engineering ensures that the software to be built is consistent and correct, delivered
on time and within budget, and meets the stated requirements.

Key Principles of Software Engineering


1. Modularity: Breaking the software into smaller, reusable components that can be developed
and tested independently.
2. Abstraction: Hiding the implementation details of a component and exposing only the
necessary functionality to other parts of the software.
3. Encapsulation: Wrapping up the data and functions of an object into a single unit, and
protecting the internal state of an object from external modifications (a minimal sketch of
modularity, abstraction, and encapsulation appears after this list).
4. Reusability: Creating components that can be used in multiple projects, which can save time
and resources.
5. Maintenance: Regularly updating and improving the software to fix bugs, add new features,
and address security vulnerabilities.
6. Testing: Verifying that the software meets its requirements and is free of bugs.
7. Design Patterns: Solving recurring problems in software design by providing templates for
solving them.
8. Agile methodologies: Using iterative and incremental development processes that focus on
customer satisfaction, rapid delivery, and flexibility.
9. Continuous Integration & Deployment: Continuously integrating the code changes and
deploying them into the production environment.
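To illustrate the first three principles, here is a minimal Python sketch. The BankAccount class is a hypothetical example, not part of any article listed here: the class is a self-contained, reusable module, its public methods form the abstraction that callers rely on, and the underscore-prefixed attribute keeps the internal state encapsulated.

# A hypothetical BankAccount component illustrating modularity, abstraction,
# and encapsulation.
class BankAccount:
    """A small, reusable module with a narrow public interface."""

    def __init__(self, owner, opening_balance=0.0):
        self._owner = owner
        self._balance = opening_balance   # encapsulated: never modified directly

    def deposit(self, amount):
        """Abstraction: callers use deposit(); validation details stay hidden."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

account = BankAccount("Alice", 100.0)
account.deposit(50.0)
print(account.balance())   # 150.0 -- state changed only through the public interface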

Main Attributes of Software Engineering


Software Engineering is a systematic, disciplined, quantifiable study and approach to the design,
development, operation, and maintenance of a software system. There are four main Attributes of
Software Engineering.
1. Efficiency: It is a measure of how economically the software product uses resources such as
memory and processor time.
2. Reliability: It assures that the product will deliver the same results when used in a similar
working environment.
3. Reusability: This attribute makes sure that the module can be used in multiple applications.
4. Maintainability: It is the ability of the software to be modified, repaired, or enhanced easily
with changing requirements.

Dual Role of Software
There is a dual role of software in the industry. The first one is as a product and the other one is as a
vehicle for delivering the product. We will discuss both of them.
1. As a Product
 It delivers computing potential across networks of Hardware.
 It enables the Hardware to deliver the expected functionality.
 It acts as an information transformer because it produces, manages, acquires, modifies,
displays, or transmits information.
2. As a Vehicle for Delivering a Product
 It provides system functionality (e.g., payroll system).
 It controls other software (e.g., an operating system).
 It helps build other software (e.g., software tools).

Objectives of Software Engineering


1. Maintainability: It should be feasible for the software to evolve to meet changing
requirements.
2. Efficiency: The software should not make wasteful use of computing devices such as memory,
processor cycles, etc.
3. Correctness: A software product is correct if the different requirements specified in the SRS
Document have been correctly implemented.
4. Reusability: A software product has good reusability if the different modules of the product
can easily be reused to develop new products.
5. Testability: Here software facilitates both the establishment of test criteria and the evaluation
of the software concerning those criteria.
6. Reliability: An attribute of software quality; the extent to which a program can be expected
to perform its desired function over a given period of time.
7. Portability: In this case, the software can be transferred from one computer system or
environment to another.
8. Adaptability: The software can be changed to satisfy differing system constraints and user
needs.
9. Interoperability: Capability of 2 or more functional units to process data cooperatively.
Program vs Software Product

Definition
 Program: A program is a set of instructions that are given to a computer in order to achieve a specific task.
 Software Product: Software is when a program is made available for commercial business and is properly documented along with its licensing. Software Product = Program + Documentation + Licensing.

Stages Involved
 Program: A program is one of the stages involved in the development of the software.
 Software Product: Software development usually follows a life cycle, which involves the feasibility study of the project, requirement gathering, development of a prototype, system design, coding, and testing.

Advantages of Software Engineering


There are several advantages to using a systematic and disciplined approach to software
development, such as:
1. Improved Quality: By following established software engineering principles and techniques,
the software can be developed with fewer bugs and higher reliability.
2. Increased Productivity: Using modern tools and methodologies can streamline the
development process, allowing developers to be more productive and complete projects faster.

3. Better Maintainability: Software that is designed and developed using sound software
engineering practices is easier to maintain and update over time.
4. Reduced Costs: By identifying and addressing potential problems early in the development
process, software engineering can help to reduce the cost of fixing bugs and adding new
features later on.
5. Increased Customer Satisfaction: By involving customers in the development process and
developing software that meets their needs, software engineering can help to increase customer
satisfaction.
6. Better Team Collaboration: By using Agile methodologies and continuous integration,
software engineering allows for better collaboration among development teams.
7. Better Scalability: By designing software with scalability in mind, software engineering can
help to ensure that software can handle an increasing number of users and transactions.
8. Better Security: By following the Software Development Life Cycle (SDLC) and performing
security testing, software engineering can help to prevent security breaches and protect
sensitive data.

In summary, software engineering offers a structured and efficient approach to software


development, which can lead to higher-quality software that is easier to maintain and adapt to
changing requirements. This can help to improve customer satisfaction and reduce costs, while also
promoting better collaboration among development teams.

Disadvantages of Software Engineering


While Software Engineering offers many advantages, there are also some potential disadvantages to
consider:
1. High upfront costs: Implementing a systematic and disciplined approach to software
development can be resource-intensive and require a significant investment in tools and
training.
2. Limited flexibility: Following established software engineering principles and methodologies
can be rigid and may limit the ability to quickly adapt to changing requirements.
3. Bureaucratic: Software Engineering can create an environment that is bureaucratic, with a lot
of processes and paperwork, which may slow down the development process.
4. Complexity: With the increase in the number of tools and methodologies, software engineering
can be complex and difficult to navigate.
5. Limited creativity: The focus on structure and process can stifle creativity and innovation
among developers.
6. High learning curve: The development process can be complex, and it requires a lot of
learning and training, which can be challenging for new developers.
7. High dependence on tools: Software engineering heavily depends on the tools, and if the tools
are not properly configured or are not compatible with the software, it can cause issues.
8. High maintenance: The software engineering process requires regular maintenance to ensure
that the software is running efficiently, which can be costly and time-consuming.

In summary, software engineering can be expensive and time-consuming, and it may limit
flexibility and creativity. However, the benefits of improved quality, increased productivity, and
better maintainability can outweigh the costs and complexity. It’s important to weigh the pros and
cons of using software engineering and determine if it is the right approach for a particular software
project.

Questions For Practice


1. A software configuration management tool helps in [GATE CS 2004]
(A) keeping track of the schedule based on the milestone reached
(B) maintaining different versions of the configurable items
(C) managing manpower distribution by changing the project structure
(D) all of the above
Solution: Correct Answer is (B).

2. Which of the following statements is/are true? [UGC NET CSE 2018]
P: Software Reengineering is preferable for software products having high failure rates, poor
design, and/or poor code structure.
Q: Software Reverse Engineering is the process of analyzing software with the objective of
recovering its design and requirement specification.
(A) P only
(B) Neither P nor Q
(C) Q only
(D) Both P and Q
Solution: Correct Answer is (D).

3. The diagram that helps in understanding and representing user requirements for a
software project using UML (Unified Modeling Language) is: [GATE CS 2004]
(A) Entity Relationship Diagram
(B) Deployment Diagram
(C) Data Flow Diagram
(D) Use Case Diagram
Solution: Correct Answer is (D).
Conclusion
Software engineering is a key field that involves creating and maintaining software. It combines
technical skills, creativity, and problem-solving. As technology advances, the need for software
engineers increases, making it a great career choice. Whether you’re new to the field or want to
learn more, understanding software engineering is crucial. Keep exploring, learning, and enjoying
the challenges and opportunities this field offers.

FAQs on Software Engineering


1. What is Software Re-Engineering?
Ans: Software Re-Engineering is a process of software development in which existing software is
restructured and updated in order to improve its design and maintain the quality of the system.
2. State some Software Development Life Cycle Models?
Ans: Some of the Software Development Life Cycle Models are mentioned below.
 Waterfall Model
 Big-Bang Model
 Spiral Model
 Iterative Model
 V-Model

3. What is Verification and Validation in Software Engineering?


Ans: Verification refers to the set of activities that check whether the software conforms to its
specification, i.e., whether we are building the product right.
Validation refers to the set of activities that ensure the software actually meets the needs and
requirements of the client, i.e., whether we are building the right product.

Software Development | Introduction, SDLC, Roadmap, Courses


Software development is defined as the process of designing, creating, testing, and maintaining
computer programs and applications. This software development roadmap is best suited for students
as well as software development enthusiasts. It covers all the terminologies and details that will guide
a software development enthusiast, starting with an introduction to software development, the
software development life cycle, software development methodologies, the Agile Framework,
software development processes, and interview experience and questions.

What is Software Development?
Software development is defined as the process of designing, creating, testing, and maintaining
computer programs and applications. Software development plays an important role in our daily
lives. It powers smartphone apps and supports businesses worldwide. Software developers
build software, which itself is a set of instructions written to perform specific tasks.
Software developers are responsible for the activities related to software, which include designing,
programming, creating, implementing, testing, deploying, and maintaining software. Software
developers develop system software, programming software, and application software.

Software Development Processes Steps


Software development is a complex and multifaceted process that transforms a concept or idea into
a functional, reliable software application. To ensure the successful creation of software, developers
follow a structured process that consists of several key steps. Each step in the software development
process contributes to the overall success of the project. It’s a collaborative effort that involves
different roles, and the effectiveness of each step impacts the final quality, functionality, and user
satisfaction of the software. By understanding these steps, their uses, significance, advantages, and
disadvantages, you can appreciate the intricacies and challenges of software development.

5 Steps of Software Development Process


Stage 1: Planning and Requirement Gathering
The first step in software development is gathering and understanding the requirements. This stage
involves identifying the needs, objectives, and constraints of the project. The goal is to define what
the software should do and what problems it will solve.

Stage 2: Design
In the design phase, the software’s architecture and user interface are developed. This step defines
how the software will work and how users will interact with it. Design includes creating
wireframes, prototypes, and system architecture diagrams.
The design phase is a crucial phase in the software development life cycle; it comes after the
requirement gathering phase. It turns the requirements decided during requirement gathering into a
concrete design. The output of the design phase is implemented in the implementation phase.

How to Conduct the Design Phase in Software Development


The design phase of software development can include multiple tasks, such as creating teams and
assigning roles, and initiating design and architecture activities.

Stage 3: Implementation
The implementation phase is the most important phase of the Software Development Life Cycle
(SDLC); it comes after the design phase. The output of the design phase is implemented in this
phase. Here comes a question:
Why Is Implementation So Important in the Software Development Process?
As mentioned above, this is the most important phase of the software development process because
all the planning done in the planning phase and the designing done in the design phase are realized
here. In this phase, the actual source code is written and deployed in the real world.
The following work is carried out in this phase:
 Development
 Version Management
o What is version Control
o Git and GitHub
o Git Branching
o Best Git Branching Strategy
o Git Terminology

o Git in Action
 Risk assessment
o Identification of Software Risk
o Analysis of Software Risk
o Planning of Software Risk
o Monitoring of Software Risk
 Change Management
o What is Change Management in Software Development
o Steps in Change Management Software
o Agile Change Management
 Deployment Processes
o What is deployment in Software Development
o The Software Deployment Processes.
o Best Strategies for Agile Software Deployment
o Regression Testing

Stage 4: Quality Assurance


The quality assurance phase ensures a high-quality product. Quality assurance involves a set of
processes and practices whose purpose is to ensure high-quality output.
The following work is carried out during quality assurance:
 Verification
o Verification Phase(V-MODEL)
o Software Quality
o Software Testing Life Cycle
o Agile Software Testing
o Measuring Software Quality using Quality Metrics
o What is Test Scenario
o What is test cases?
o Integration Tests
o Performance Test
 Validation
o What is Software Validation
o What is User Acceptance Test (UAT)
 Incident Management, debugging, bug fixing
o What is Incident Management? Definition, System, Process and Report

Stage 5: Go Live
After all the above phases, Go Live is the last phase of the software development process. In this
phase, the product is ready to be launched in the market.

Software Maintenance
Software Maintenance refers to the process of modifying and updating a software system after it
has been delivered to the customer. This can include fixing bugs, adding new features, improving
performance, or updating the software to work with new hardware or software systems. The goal of
software maintenance is to keep the software system working correctly, efficiently, and securely,
and to ensure that it continues to meet the needs of the users.

Software Development – FAQs


What are the different phases of software development?
The different phases of software development include planning, analysis, design, implementation,
testing, deployment, and maintenance. Each phase is crucial for the successful development and
deployment of software applications.

What programming languages are commonly used in software development?
Commonly used programming languages in software development include Java, Python, C++,
JavaScript, Ruby, Swift, and PHP, among others. The choice of programming language depends on
the specific requirements of the project.

How important is testing in software development?


Testing is a critical aspect of software development as it ensures that the software functions as
intended, is free from bugs and errors, and meets the specified requirements. Testing helps maintain
the quality and reliability of the software.

What is the role of a software developer?


A software developer is responsible for the creation, design, and development of software
applications. They analyze user needs, design software solutions, write code, test functionality, and
collaborate with other team members to ensure the successful deployment of software.

How does Agile methodology influence software development?


Agile methodology influences software development by promoting iterative development,
collaboration, flexibility, and customer feedback. It emphasizes adaptive planning, evolutionary
development, and continuous improvement, thereby enhancing the efficiency and effectiveness of
the development process.

Classification of Software – Software Engineering


Software Engineering is the process of developing a software product using a well-defined,
systematic approach. It involves analyzing user needs and then designing, constructing, and testing
end-user applications that will satisfy those needs through the use of software programming
languages.

Parameters Defining Software Project


The software should be produced at a reasonable cost, in a reasonable time, and should be of good
quality. These three parameters often drive and define a software project.

1. Cost: As the main cost of producing software is the manpower employed, the cost of
developing software is generally measured in terms of person-months of effort spent in
development. Productivity in the software industry for writing fresh code mostly ranges
from a few hundred to about 1000+ LOC per person per month (a rough back-of-the-envelope
effort calculation is sketched after this list).
2. Schedule: The schedule is another important factor in many projects. Business trends are
dictating that the time to market a product should be reduced; that is, the cycle time from
concept to delivery should be small. This means that software needs to be developed faster and
within the specified time.
3. Quality: Quality is one of the main mantras, and business strategies are designed around it.
Developing high-quality software is another fundamental goal of software engineering.
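As a rough illustration of how the cost parameter is reasoned about, the sketch below turns the productivity range quoted above into a back-of-the-envelope effort estimate in Python. The project size, productivity, and team size are assumed numbers chosen only for the example, and the linear schedule split ignores the overheads that estimation models such as COCOMO account for.

# Back-of-the-envelope effort estimate (illustrative, assumed numbers only).
estimated_size_loc = 30_000      # assumed size of the product in lines of code
productivity_loc_pm = 300        # assumed fresh-code productivity (LOC per person-month)
team_size = 5                    # assumed number of developers

effort_person_months = estimated_size_loc / productivity_loc_pm   # 100 person-months
schedule_months = effort_person_months / team_size                # naive split: 20 months

print(f"Effort: {effort_person_months:.0f} person-months; "
      f"naive schedule with {team_size} developers: {schedule_months:.0f} months")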

Attributes of Software
The international standard on software product quality suggests that software quality comprises six
main attributes:
1. Reliability: The capability to provide failure-free service.
2. Functionality: The capability to provide functions that meet stated and implied needs when the
software is used.
3. Usability: The capability to be understood, learned, and used.
4. Efficiency: The capability to provide appropriate performance relative to the amount of
resources used.
5. Maintainability: The capability to be modified for purposes of making corrections,
improvements, or adaptations.

6. Portability: The capability to be adapted for different specified environments without applying
actions or means other than those provided for this purpose in the product.

Classification of Software
The software can be classified based on various criteria, including:
1. Purpose: Software can be classified as system software (e.g., operating systems, device
drivers) or application software (e.g., word processors, games).
2. Platform: Software can be classified as native software (designed for a specific operating
system) or cross-platform software (designed to run on multiple operating systems).
3. Deployment: Software can be classified as installed software (installed on the user’s device) or
cloud-based software (hosted on remote servers and accessed via the internet).
4. License: Software can be classified as proprietary software (owned by a single entity) or open-
source software (available for free with the source code accessible to the public).
5. Development Model: Software can be classified as traditional software (developed using a
waterfall model) or agile software (developed using an iterative and adaptive approach).
6. Size: Software can be classified as small-scale software (designed for a single user or small
group) or enterprise software (designed for large organizations).
7. User Interface: Software can be classified as Graphical User Interface (GUI) software
or Command-Line Interface (CLI) software.

These classifications are important for understanding the characteristics and limitations of different
types of software, and for selecting the best software for a particular need.

Types of Software
The software is used extensively in several domains including hospitals, banks, schools, defense,
finance, stock markets, and so on.

It can be categorized into different types:

1. Based on Application
2. Based on Copyright

1. Based on Application
Software can be classified on the basis of the application it serves. The main categories are described below.
1. System Software:
System Software is necessary to manage computer resources and support the execution of
application programs. Software like operating systems, compilers, editors and drivers, etc., come
under this category. A computer cannot function without the presence of these. Operating
systems are needed to link the machine-dependent needs of a program with the capabilities of the
machine on which it runs. Compilers translate programs from high-level language to machine
language.

2. Application Software:
Application software is designed to fulfill the user’s requirement by interacting with the user
directly. It can be classified into two major categories: generic or customized. Generic Software
is software that is open to all and behaves the same for all of its users. Its function is limited and not
customized as per the user’s changing requirements. However, on the other hand, customized
software is the software products designed per the client’s requirement, and are not available for all.
3. Networking and Web Applications Software:
Networking Software provides the required support necessary for computers to interact with each
other and with data storage facilities. Networking software is also used when software is running on
a network of computers (such as the World Wide Web). It includes all network management
software, server software, security and encryption software, and software to develop web-based
applications like HTML, PHP, XML, etc.

4. Embedded Software:
This type of software is embedded into the hardware normally in the Read-Only Memory (ROM) as
a part of a large system and is used to support certain functionality under the control conditions.
Examples are software used in instrumentation and control applications like washing machines,
satellites, microwaves, etc.

5. Reservation Software:
A Reservation system is primarily used to store and retrieve information and perform transactions
related to air travel, car rental, hotels, or other activities. They also provide access to bus and
railway reservations, although these are not always integrated with the main system. These are also
used to relay computerized information for users in the hotel industry, making a reservation and
ensuring that the hotel is not overbooked.

6. Business Software:
This category of software is used to support business applications and is the most widely used
category of software. Examples are software for inventory management, accounts, banking,
hospitals, schools, stock markets, etc.

7. Entertainment Software:
Education and Entertainment software provides a powerful tool for educational agencies, especially
those that deal with educating young children. There is a wide range of entertainment software such
as computer games, educational games, translation software, mapping software, etc.

8. Artificial Intelligence Software:


Software like expert systems, decision support systems, pattern recognition software, artificial
neural networks, etc., comes under this category. Such software addresses complex problems that
are not amenable to straightforward computation and instead uses non-numerical algorithms.

9. Scientific Software:
Scientific and engineering software satisfies the needs of a scientific or engineering user to perform
enterprise-specific tasks. Such software is written for specific applications using principles,
techniques, and formulae particular to that field. Examples are software like MATLAB,
AUTOCAD, PSPICE, ORCAD, etc.

10. Utility Software:


The programs coming under this category perform specific tasks and are different from other
software in terms of size, cost, and complexity. Examples are antivirus software, voice recognition
software, compression programs, etc.
11. Document Management Software:
Document Management Software is used to track, manage, and store documents to reduce the
paperwork. Such systems are capable of keeping a record of the various versions created and
modified by different users (history tracking). They commonly provide storage, versioning,
metadata, security, as well as indexing and retrieval capabilities.

2. Based on Copyright
Classification of Software can be done based on copyright. These are stated as follows:
1. Commercial Software:
It represents the majority of software that we purchase from software companies, commercial
computer stores, etc. In this case, when a user buys software, they acquire a license key to use it.
Users are not allowed to make copies of the software. The company owns the copyright of the
program.

2. Shareware Software:
Shareware software is also covered under copyright, but the purchasers are allowed to make and
distribute copies with the condition that after testing the software, if the purchaser adopts it for use,
then they must pay for it. In both of the above types of software, changes to the software are not
allowed.

3. Freeware Software:
In general, according to freeware software licenses, copies of the software can be made both for
archival and distribution purposes, but here, distribution cannot be for making a profit. Derivative
works and modifications to the software are allowed and encouraged. Decompiling of the program
code is also allowed without the explicit permission of the copyright holder.

4. Public Domain Software:


In the case of public domain software, the original copyright holder explicitly relinquishes all rights
to the software. Hence, software copies can be made both for archival and distribution purposes
with no restrictions on distribution. Modifications to the software and reverse engineering are also
allowed.

FAQs
1. How is System Software classified?
System Software is classified on the basis of how the tasks are to be performed and how the
software system interacts.

2. What are the five functions of Software?


Software is the program that carries out the five basic functions of input, processing, output,
storage, and control.

Software Evolution – Software Engineering
Software Evolution is a term that refers to the process of developing software initially and then
updating it over time for various reasons, e.g., to add new features or to remove obsolete
functionalities. This article focuses on discussing Software Evolution in detail.

What is Software Evolution?


The software evolution process includes fundamental activities of change analysis, release
planning, system implementation, and releasing a system to customers.
1. The cost and impact of these changes are assessed to see how much the system is affected by
the change and how much it might cost to implement the change.
2. If the proposed changes are accepted, a new release of the software system is planned.
3. During release planning, all the proposed changes (fault repair, adaptation, and new
functionality) are considered.
4. A decision is then made on which changes to implement in the next version of the system.
5. The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.

Necessity of Software Evolution


Software evolution is necessary for the following reasons:
1. Change in requirement with time: With time, an organization's needs and ways of working can
change substantially, so the tools (software) it uses need to change as well in order to keep
performance high.
2. Environment change: As the working environment changes, the tools that enable us to work in
that environment must change with it. The same happens in the software world: when the
working environment changes, organizations need to reintroduce old software with updated
features and functionality to adapt to the new environment.
3. Errors and bugs: As deployed software ages within an organization, its precision decreases
and its ability to handle an increasingly complex workload degrades. In that case, it becomes
necessary to avoid using obsolete and aged software; such software needs to undergo the
evolution process in order to become robust enough for the workload and complexity of the
current environment.
4. Security risks: Using outdated software may put an organization at risk of various software-
based cyberattacks and could expose confidential data associated with the software in use. It
therefore becomes necessary to avoid such security breaches through regular assessment of the
security patches and modules used within the software. If the software is not robust enough to
withstand current cyberattacks, it must be updated.
5. For new functionality and features: In order to improve performance, speed up data
processing, and add other functionality, an organization needs to continuously evolve the
software throughout its life cycle so that stakeholders and clients of the product can work
efficiently.

Laws used for Software Evolution


1. Law of Continuing Change
This law states that any software system that represents some real-world reality undergoes
continuous change or becomes progressively less useful in that environment.
2. Law of Increasing Complexity
As an evolving program changes, its structure becomes more complex unless effective efforts are
made to avoid this phenomenon.
3. Law of Conservation of Organization Stability
Over the lifetime of a program, the rate of development of that program is approximately constant
and independent of the resources devoted to system development.
4. Law of Conservation of Familiarity
This law states that during the active lifetime of the program, changes made in the successive
release are almost constant.

What is the Need of Software Engineering?
Software engineering is a technique through which we can develop or create software for computer
systems or any other electronic devices. It is a systematic, scientific and disciplined approach to the
development, functioning, and maintenance of software.
Basically, software engineering was introduced to address the issues of low-quality software
projects. Here, software is developed using well-defined scientific principles, methods, and
procedures.

In other words, software engineering is a process in which the needs of users are analyzed and the
software is then designed as per the users' requirements. This software and these applications are
built using design techniques and programming languages.
In order to create complex software, we need to use software engineering techniques, and to reduce
complexity we should use abstraction and decomposition. Abstraction describes only the important
parts of the software and defers the irrelevant details to a later stage of development, so the
requirements of the software become simpler. Decomposition breaks the software down into a
number of modules, where each module performs a well-defined, independent task.
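A minimal Python sketch of these two ideas is given below, using a hypothetical payroll requirement (none of the function names come from the text): decomposition breaks the problem into small, independent functions, while compute_net_pay() is the abstraction that callers use, hiding the tax details.

# Decomposition: the payroll problem is split into small, independent functions.
def gross_pay(hours_worked, hourly_rate):
    return hours_worked * hourly_rate

def income_tax(gross, tax_rate=0.10):
    return gross * tax_rate

# Abstraction: callers only need this function; the breakdown stays hidden.
def compute_net_pay(hours_worked, hourly_rate):
    gross = gross_pay(hours_worked, hourly_rate)
    return gross - income_tax(gross)

print(compute_net_pay(160, 20.0))   # 2880.0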

Need of Software Engineering:


 Handling Big Projects: A corporation must use a software engineering methodology in order
to handle large projects without any issues.
 To manage the cost: Software engineering teams plan everything in advance and cut out
anything that is not required.
 To decrease time: It will save a lot of time if you are developing software using a software
engineering technique.
 Reliable software: It is the company’s responsibility to deliver software products on schedule
and to address any defects that may exist.
 Effectiveness: Effectiveness results from things being created in accordance with the standards.
 Reduces complexity: Large challenges are broken down into smaller ones and solved one at a
time in software engineering. Individual solutions are found for each of these issues.
 Productivity: Because it contains testing systems at every level, proper care is taken to
maintain software productivity.


Waterfall Model – Software Engineering


The classical waterfall model is the basic software development life cycle model. It is very simple but
idealistic. Earlier this model was very popular but nowadays it is not used. However, it is very
important because all the other software development life cycle models are based on the classical
waterfall model.

What is the SDLC Waterfall Model?


The waterfall model is a software development model used in the context of large, complex
projects, typically in the field of information technology. It is characterized by a structured,
sequential approach to project management and software development.
The waterfall model is useful in situations where the project requirements are well-defined and the
project goals are clear. It is often used for large-scale projects with long timelines, where there is
little room for error and the project stakeholders need to have a high level of confidence in the
outcome.

Features of the SDLC Waterfall Model


1. Sequential Approach: The waterfall model involves a sequential approach to software
development, where each phase of the project is completed before moving on to the next one.
2. Document-Driven: The waterfall model relies heavily on documentation to ensure that the
project is well-defined and the project team is working towards a clear set of goals.
3. Quality Control: The waterfall model places a high emphasis on quality control and testing at
each phase of the project, to ensure that the final product meets the requirements and
expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a rigorous planning process, where the
project scope, timelines, and deliverables are carefully defined and monitored throughout the
project lifecycle.

Overall, the waterfall model is used in situations where there is a need for a highly structured and
systematic approach to software development. It can be effective in ensuring that large, complex
projects are completed on time and within budget, with a high level of quality and customer
satisfaction.

Importance of SDLC Waterfall Model


1. Clarity and Simplicity: The linear form of the Waterfall Model offers a simple and
unambiguous foundation for project development.
2. Clearly Defined Phases: The Waterfall Model’s phases each have unique inputs and outputs,
guaranteeing a planned development with obvious checkpoints.
3. Documentation: A focus on thorough documentation helps with software comprehension,
upkeep, and future growth.

4. Stability in Requirements: Suitable for projects where the requirements are clear and steady,
reducing modifications as the project progresses.
5. Resource Optimization: It encourages effective task-focused work without continuously
changing contexts by allocating resources according to project phases.
6. Relevance for Small Projects: Economical for modest projects with simple specifications and
minimal complexity.

Phases of SDLC Waterfall Model – Design


The Waterfall Model is a classical software development methodology that was first introduced by
Winston W. Royce in 1970. It is a linear and sequential approach to software development that
consists of several phases that must be completed in a specific order.

The Waterfall Model has six phases which are:


1. Requirements: The first phase involves gathering requirements from stakeholders and analyzing
them to understand the scope and objectives of the project.

2. Design: Once the requirements are understood, the design phase begins. This involves creating a
detailed design document that outlines the software architecture, user interface, and system
components.

3. Development: The development phase involves coding the software based on the design
specifications. This phase also includes unit testing to ensure that each component of the
software is working as expected.

4. Testing: In the testing phase, the software is tested as a whole to ensure that it meets the
requirements and is free from defects.

5. Deployment: Once the software has been tested and approved, it is deployed to the production
environment.

6. Maintenance: The final phase of the Waterfall Model is maintenance, which involves fixing any
issues that arise after the software has been deployed and ensuring that it continues to meet the
requirements over time.
The classical waterfall model divides the life cycle into a set of phases. This model considers that
one phase can be started after the completion of the previous phase. That is the output of one phase
will be the input to the next phase. Thus the development process can be considered as a sequential
flow in the waterfall. Here the phases do not overlap with each other. The different sequential
phases of the classical waterfall model are described below.

Let us now learn about each of these phases in detail.

1. Feasibility Study:
The main goal of this phase is to determine whether it would be financially and technically feasible
to develop the software.

The feasibility study involves understanding the problem and then determining the various possible
strategies to solve the problem. These different identified solutions are analyzed based on their
benefits and drawbacks. The best solution is chosen, and all the other phases are carried out as per
this solution strategy.

2. Requirements Analysis and Specification:


The requirement analysis and specification phase aims to understand the exact requirements of the
customer and document them properly. This phase consists of two different activities.
 Requirement gathering and analysis: Firstly all the requirements regarding the software are
gathered from the customer and then the gathered requirements are analyzed. The goal of the
analysis part is to remove incompleteness (an incomplete requirement is one in which some
parts of the actual requirements have been omitted) and inconsistencies (an inconsistent
requirement is one in which some part of the requirement contradicts some other part).
 Requirement specification: These analyzed requirements are documented in a software
requirement specification (SRS) document. SRS document serves as a contract between the
development team and customers. Any future dispute between the customers and the developers
can be settled by examining the SRS document.

3. Design:
The goal of this phase is to convert the requirements acquired in the SRS into a format that can be
coded in a programming language. It includes high-level and detailed design as well as the overall
software architecture. A Software Design Document (SDD) is used to document all of this effort.

4. Coding and Unit Testing:


In the coding phase software design is translated into source code using any suitable programming
language. Thus each designed module is coded. The unit testing phase aims to check whether each
module is working properly or not.
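A minimal sketch of this activity is shown below, using Python's built-in unittest module. The is_leap_year() function stands in for one designed module (a hypothetical example, not from the text); the test case checks that this single module works properly in isolation before integration begins.

import unittest

def is_leap_year(year):
    """Module under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearUnitTest(unittest.TestCase):
    def test_typical_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

    def test_century_rules(self):
        self.assertFalse(is_leap_year(1900))   # divisible by 100 but not by 400
        self.assertTrue(is_leap_year(2000))    # divisible by 400

if __name__ == "__main__":
    unittest.main()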

5. Integration and System testing:


Integration of different modules is undertaken soon after they have been coded and unit tested.
Integration of various modules is carried out incrementally over several steps. During each
integration step, previously planned modules are added to the partially integrated system and the
resultant system is tested. Finally, after all the modules have been successfully integrated and
tested, the full working system is obtained and system testing is carried out on this.
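The sketch below illustrates one such incremental integration step, again in Python with unittest. The parse_order() and price_order() functions are hypothetical modules assumed to have already passed their own unit tests; the integration test exercises them together, as the partially integrated system would.

import unittest

def parse_order(raw):
    """Module A: turn an 'item,quantity' string into a structured order."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price):
    """Module B: compute the total price of a parsed order."""
    return order["qty"] * unit_price

class OrderIntegrationTest(unittest.TestCase):
    def test_parse_then_price(self):
        # Exercise the two modules together rather than in isolation.
        total = price_order(parse_order("pencil, 3"), unit_price=10.0)
        self.assertEqual(total, 30.0)

if __name__ == "__main__":
    unittest.main()
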
System testing consists of three different kinds of testing activities as described below.
 Alpha testing: Alpha testing is the system testing performed by the development team.
 Beta testing: Beta testing is the system testing performed by a friendly set of customers.
 Acceptance testing: After the software has been delivered, the customer performs acceptance
testing to determine whether to accept the delivered software or reject it.

6. Maintenance:
Maintenance is the most important phase of a software life cycle. The effort spent on maintenance
is typically around 60% of the total effort spent to develop the full software. There are three types of maintenance.
 Corrective Maintenance: This type of maintenance is carried out to correct errors that were
not discovered during the product development phase.
 Perfective Maintenance: This type of maintenance is carried out to enhance the functionalities
of the system based on the customer’s request.

 Adaptive Maintenance: Adaptive maintenance is usually required for porting the software to
work in a new environment such as working on a new computer platform or with a new
operating system.

Advantages of the SDLC Waterfall Model


The classical waterfall model is an idealistic model for software development. It is very simple, so
it can be considered the basis for other software development life cycle models. Below are some of
the major advantages of this SDLC model.
 Easy to Understand: The Classical Waterfall Model is very simple and easy to understand.
 Individual Processing: Phases in the Classical Waterfall model are processed one at a time.
 Properly Defined: In the classical waterfall model, each stage in the model is clearly defined.
 Clear Milestones: The classical Waterfall model has very clear and well-understood
milestones.
 Properly Documented: Processes, actions, and results are very well documented.
 Reinforces Good Habits: The Classical Waterfall Model reinforces good habits like define-
before-design and design-before-code.
 Working: Classical Waterfall Model works well for smaller projects and projects where
requirements are well understood.

Disadvantages of the SDLC Waterfall Model


The Classical Waterfall Model suffers from various shortcomings, so we can't use it as-is in real
projects; instead, we use other software development life cycle models that are based on the
classical waterfall model. Below are some major drawbacks of this model.
 No Feedback Path: In the classical waterfall model evolution of software from one phase to
another phase is like a waterfall. It assumes that no error is ever committed by developers
during any phase. Therefore, it does not incorporate any mechanism for error correction.
 Difficult to accommodate Change Requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project, but the
customer’s requirements keep on changing with time. It is difficult to accommodate any change
requests after the requirements specification phase is complete.
 No Overlapping of Phases: This model recommends that a new phase can start only after the
completion of the previous phase. But in real projects, this can’t be maintained. To increase
efficiency and reduce cost, phases may overlap.
 Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or uncertain
requirements. Once a phase has been completed, it is difficult to make changes or go back to a
previous phase.
 Limited Stakeholder Involvement: The Waterfall Model is a structured and sequential
approach, which means that stakeholders are typically involved in the early phases of the
project (requirements gathering and analysis) but may not be involved in the later
phases (implementation, testing, and deployment).
 Late Defect Detection: In the Waterfall Model, testing is typically done toward the end of the
development process. This means that defects may not be discovered until late in the
development process, which can be expensive and time-consuming to fix.
 Lengthy Development Cycle: The Waterfall Model can result in a lengthy development cycle,
as each phase must be completed before moving on to the next. This can result in delays and
increased costs if requirements change or new issues arise.

When to Use the SDLC Waterfall Model?


Here are some cases where the use of the Waterfall Model is best suited:
 Well-understood Requirements: Before beginning development, there are precise, reliable,
and thoroughly documented requirements available.
 Very Little Changes Expected: During development, very little adjustments or expansions to
the project’s scope are anticipated.

 Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
 Predictable: Projects that are predictable, low-risk, and able to be addressed early in the
development life cycle are those that have known, controllable risks.
 Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
 Client Prefers a Linear and Sequential Approach: This situation describes the client’s
preference for a linear and sequential approach to project development.
 Limited Resources: Projects with limited resources can benefit from a set-up strategy, which
enables targeted resource allocation.

The Waterfall approach involves little client engagement in the product development process. The
product can only be shown to end consumers when it is ready.

Applications of SDLC Waterfall Model


 Large-scale Software Development Projects: The Waterfall Model is often used for large-
scale software development projects, where a structured and sequential approach is necessary to
ensure that the project is completed on time and within budget.
 Safety-Critical Systems: The Waterfall Model is often used in the development of safety-
critical systems, such as aerospace or medical systems, where the consequences of errors or
defects can be severe.
 Government and Defense Projects: The Waterfall Model is also commonly used in
government and defense projects, where a rigorous and structured approach is necessary to
ensure that the project meets all requirements and is delivered on time.
 Projects with well-defined Requirements: The Waterfall Model is best suited for projects
with well-defined requirements, as the sequential nature of the model requires a clear
understanding of the project objectives and scope.
 Projects with Stable Requirements: The Waterfall Model is also well-suited for projects with
stable requirements, as the linear nature of the model does not allow for changes to be made
once a phase has been completed.

Conclusion
The Waterfall Model has greatly influenced conventional software development processes. This
methodical, sequential technique provides an easily understood and applied structured framework.
Project teams have a clear roadmap due to the model’s methodical evolution through the phases of
requirements, design, implementation, testing, deployment, and maintenance.

Frequently Asked Questions on Waterfall Model (SDLC) – FAQs


1. What is the difference between the Waterfall Model and Agile Model?
Ans: The main difference between the Waterfall Model and the Agile Model is that the Waterfall
Model relies on thorough upfront planning, whereas the Agile Model is more flexible because it
carries out these processes in repeating cycles.

2. What is the Waterfall Process?


Ans: The Waterfall process is a step-by-step development and project management process. As the
name suggests, this model follows a straight path where each step (like planning, designing,
building, testing, and launching) needs to be finished before moving to the next. This approach
works well for projects where all the steps are clear from the beginning.

3. What are the benefits of the Waterfall Model?


Ans: The Waterfall Model has several benefits: it keeps the project well-defined and predictable,
and it helps deliver the project within budget.

4. Is Waterfall better than Agile?
Ans: Waterfall works best for well-defined, unchanging projects, while Agile is for dynamic,
evolving projects.

Iterative Waterfall Model – Software Engineering


In a practical software development project, the classical waterfall model is hard to use. So, the
iterative waterfall model can be thought of as incorporating the necessary changes to the classical
waterfall model to make it usable in practical software development projects. It is almost the same as
the classical waterfall model, except some changes are made to increase the efficiency of the software
development.

Table of Content
 What is the Iterative Waterfall Model?
 Process of Iterative Waterfall Model
 When to use Iterative Waterfall Model?
 Application of Iterative Waterfall Model
 Why is iterative waterfall model used?
 Advantages of Iterative Waterfall Model
 Drawbacks of Iterative Waterfall Model

What is the Iterative Waterfall Model?


The Iterative Waterfall Model is a software development approach that combines the sequential steps
of the traditional Waterfall Model with the flexibility of iterative design. It allows for improvements
and changes to be made at each stage of the development process, instead of waiting until the end of
the project. The iterative waterfall model provides feedback paths from every phase to its preceding
phases, which is the main difference from the classical waterfall model.

1. When errors are detected at a later phase, these feedback paths allow the errors committed in
an earlier phase to be corrected.
2. The feedback paths allow the phase in which the errors were committed to be reworked, and
these changes are then reflected in the later phases.
3. However, there is no feedback path to the feasibility study stage, because once a project has
been undertaken, it is not given up easily.
4. It is good to detect errors in the same phase in which they are committed.
5. This reduces the effort and time required to correct the errors.
6. A real-life example could be building a new website for a small business.

Process of Iterative Waterfall Model

Following are the phases of Iterative Waterfall Model:
1. Requirements Gathering: This is the first stage where the business owners and developers meet
to discuss the goals and requirements of the website.
2. Design: In this stage, the developers create a preliminary design of the website based on the
requirements gathered in stage 1.
3. Implementation: In this stage, the developers begin to build the website based on the design
created in stage 2.
4. Testing: Once the website has been built, it is tested to ensure that it meets the requirements and
functions properly.
5. Deployment: The website is then deployed and made live to the public.
6. Review and Improvement: After the website has been live for a while, the business owners and
developers review its performance and make any necessary improvements.

This process is repeated until the website meets the needs and goals of the business. Each iteration
builds upon the previous one, allowing for continuous improvement and iteration until the final
product is complete.
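To make the feedback idea concrete, the following is a minimal, hypothetical Python sketch (the phase list and helper functions are invented for illustration and are not part of any real tool). A defect detected in a later phase sends the work back to the phase where it was introduced, and every later phase is then redone.

PHASES = ["requirements", "design", "implementation", "testing", "deployment"]

def run_phase(phase, artifacts):
    # Stand-in for the real work of a phase; records the phase output.
    artifacts[phase] = phase + " output"

def defect_origin(phase, artifacts):
    # Stand-in check: return the earlier phase a defect traces back to, or None.
    return None  # assume no defect is found in this stub

def iterative_waterfall():
    artifacts = {}
    i = 0
    while i < len(PHASES):
        run_phase(PHASES[i], artifacts)
        origin = defect_origin(PHASES[i], artifacts)
        if origin is not None:
            # Feedback path: rework the phase where the error was committed,
            # then repeat all later phases so the fix is reflected downstream.
            i = PHASES.index(origin)
        else:
            i += 1
    return artifacts

print(iterative_waterfall())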

When to use Iterative Waterfall Model?


1. When the requirements are well-defined and clearly understood before development begins.
2. When the development team is still gaining knowledge of the new technologies being used.
3. When certain features or objectives carry a significant chance of failure in the future.

Application of Iterative Waterfall Model


Below are some applications of the Iterative Waterfall Model:
1. The essential needs are established up front, but the finer points may become relevant as time passes.
2. Programmers have a learning curve to climb when they utilize new technology.
3. The resources needed to complete a large project are constrained, so the work is taken up in
smaller, staged portions.
4. Projects with very high risk, where the project’s objectives may occasionally change.

Why is iterative waterfall model used?


The main reason behind using iterative waterfall model is feedback path. In the classical waterfall
model, there are no feedback paths, so there is no mechanism for error correction. But in the iterative
waterfall model feedback path from one phase to its preceding phase allows correcting the errors that
are committed and these changes are reflected in the later phases.

Advantages of Iterative Waterfall Model


Following are the advantage of Iterative Waterfall Model:
1. Phase Containment of Errors: The principle of detecting errors as close to their points of
commitment as possible is known as Phase containment of errors.
2. Collaboration: Throughout each stage of the process, there is collaboration between the business
owners and developers. This ensures that the website meets the needs of the business and that any
issues or concerns are addressed in a timely manner.
3. Flexibility: The iterative waterfall model allows for flexibility in the development process. If
changes or new requirements arise, they can be incorporated into the next iteration of the website.
4. Testing and feedback: The testing stage of the process is important for identifying any issues or
bugs that need to be addressed before the website is deployed. Additionally, feedback from users
or customers can be gathered and used to improve the website in subsequent iterations.
5. Scalability: The iterative waterfall model is scalable, meaning it can be used for projects of
various sizes and complexities. For example, a larger business may require more iterations or
more complex requirements, but the same process can still be followed.
6. Maintenance: Once the website is live, ongoing maintenance is necessary to ensure it continues
to meet the needs of the business and its users. The iterative waterfall model can be used for
maintenance and improvement cycles, allowing the website to evolve and stay up-to-date.

7. Easy to Manage: The iterative waterfall model is easy to manage as each phase is well-defined
and has a clear set of deliverables. This makes it easier to track progress, identify issues, and
manage resources.
8. Faster Time to Market: The iterative approach allows for faster time to market as small and
incremental improvements are made over time, rather than waiting for a complete product to be
developed.
9. Predictable Outcomes: The phased approach of the iterative waterfall model allows for more
predictable outcomes and greater control over the development process, ensuring that the project
stays on track and within budget.
10. Improved Customer Satisfaction: The iterative approach allows for customer involvement and
feedback throughout the development process, resulting in a final product that better meets the
needs and expectations of the customer.
11. Quality Assurance: The iterative approach promotes quality assurance by providing
opportunities for testing and feedback throughout the development process. This results in a
higher-quality end product.
12. Risk Reduction: The iterative approach allows for early identification and mitigation of risks,
reducing the likelihood of costly errors later in the development process.
13. Well-organized: In this model, less time is consumed on documenting and the team can spend
more time on development and designing.
14. Cost-Effective: It is highly cost-effective to change the plan or requirements in the model.
Moreover, it is best suited for agile organizations.
15. Simple: Iterative waterfall model is very simple to understand and use. That’s why it is one of the
most widely used software development models.
16. Feedback Path: In the classical waterfall model, there are no feedback paths, so there is no
mechanism for error correction. But in the iterative waterfall model feedback path from one phase
to its preceding phase allows correcting the errors that are committed and these changes are
reflected in the later phases.

Drawbacks of Iterative Waterfall Model


Following are the disadvantage of Iterative Waterfall Model:
1. Difficult to incorporate change requests: The major drawback of the iterative waterfall model is
that all the requirements must be clearly stated before starting the development phase. Customers
may change requirements after some time but the iterative waterfall model does not leave any
scope to incorporate change requests that are made after the development phase starts.
2. Incremental delivery not supported: In the iterative waterfall model, the full software is
completely developed and tested before delivery to the customer. There is no scope for any
intermediate delivery. So, customers have to wait a long for getting the software.
3. Overlapping of phases not supported: Iterative waterfall model assumes that one phase can
start after completion of the previous phase, But in real projects, phases may overlap to reduce the
effort and time needed to complete the project.
4. Risk handling not supported: Projects may suffer from various types of risks. But, the Iterative
waterfall model has no mechanism for risk handling.
5. Limited customer interactions: Customer interaction occurs at the start of the project at the time
of requirement gathering and at project completion at the time of software delivery. These fewer
interactions with the customers may lead to many problems as the finally developed software may
differ from the customers’ actual requirements.

Conclusion
The iterative waterfall model is an improved version of the traditional waterfall model. Instead of doing
each phase (like planning, designing, building, and testing) just once, you go through these phases in
small, repeated cycles. This helps catch and fix problems early and allows for adjustments based on
feedback, leading to a more refined and reliable final product.

Frequently Asked Questions related to Iterative Waterfall Model
What is the difference between agile and iterative waterfall?
Agile enables the rapid delivery of projects with shorter lifecycles because each iteration produces a
working result while the Iterative Waterfall Model is a software development approach that combines
the sequential steps of the traditional Waterfall Model with the flexibility of iterative design.

Why is the iterative model used?


Early in the project, a working model is created using the iterative waterfall model. It is feasible to
identify and isolate function or design flaws as it is being evaluated and discussed. Early detection of
these problems may make it easier to solve them immediately and affordably.

What is the difference between waterfall and iterative waterfall?


The waterfall model is a one-time, linear process where each phase is completed before moving to the
next. The iterative waterfall model repeats the phases in cycles, allowing for refinements based on
feedback after each cycle.

What is Spiral Model in Software Engineering?


The Spiral Model is one of the most important Software Development Life Cycle models. The Spiral
Model is a combination of the waterfall model and the iterative model. It provides support for Risk
Handling. The Spiral Model was first proposed by Barry Boehm. This article focuses on discussing
the Spiral Model in detail.

What is the Spiral Model?


The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a systematic
and iterative approach to software development. In its diagrammatic representation, it looks like a
spiral with many loops. The exact number of loops of the spiral is unknown and can vary from
project to project. Each loop of the spiral is called a phase of the software development process.
Some Key Points regarding the phase of a Spiral Model:

1. The exact number of phases needed to develop the product can be varied by the project manager
depending upon the project risks.
2. As the project manager dynamically determines the number of phases, the project manager has
an important role in developing a product using the spiral model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a complete
software development cycle, from requirements gathering and analysis to design,
implementation, testing, and maintenance.

What Are the Phases of the Spiral Model?


The Spiral Model is a risk-driven model, meaning that the focus is on managing risk through
multiple iterations of the software development process. It consists of the following phases:
1. Objectives Defined: In first phase of the spiral model we clarify what the project aims to
achieve, including functional and non-functional requirements.
2. Risk Analysis: In the risk analysis phase, the risks associated with the project are identified and
evaluated.
3. Engineering: In the engineering phase, the software is developed based on the requirements
gathered in the previous iteration.
4. Evaluation: In the evaluation phase, the software is evaluated to determine if it meets the
customer’s requirements and if it is of high quality.
5. Planning: The next iteration of the spiral begins with a new planning phase, based on the
results of the evaluation.

The Spiral Model is often used for complex and large software development projects, as it allows
for a more flexible and adaptable approach to software development. It is also well-suited to
projects with significant uncertainty or high levels of risk.

The Radius of the spiral at any point represents the expenses (cost) of the project so far, and the
angular dimension represents the progress made so far in the current phase.

Each phase of the Spiral Model is divided into four quadrants.
The functions of these four quadrants are discussed below:
1. Objectives determination and identify alternative solutions: Requirements are gathered
from the customers and the objectives are identified, elaborated, and analyzed at the start of
every phase. Then alternative solutions possible for the phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution are
identified and the risks are resolved using the best possible strategy. At the end of this quadrant,
the Prototype is built for the best possible solution.
3. Develop the next version of the Product: During the third quadrant, the identified features are
developed and verified through testing. At the end of the third quadrant, the next version of the
software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the so-far
developed version of the software. In the end, planning for the next phase is started.
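A rough way to picture the four quadrants is as the body of a loop that repeats until the customer accepts the product. The Python sketch below is purely illustrative; all function names are invented stubs and the acceptance condition (accepting version 3) is made up.

def determine_objectives(loop):          # quadrant 1: objectives and alternative solutions
    return {"loop": loop}

def identify_and_resolve_risks(plan):    # quadrant 2: evaluate alternatives, build a prototype
    plan["prototype_built"] = True
    return plan

def develop_and_verify(plan):            # quadrant 3: develop and test the next version
    return {"version": plan["loop"] + 1}

def review_and_plan(release):            # quadrant 4: customer review, plan the next phase
    return release["version"] >= 3       # pretend the customer accepts version 3

def spiral(max_loops=10):
    for loop in range(max_loops):        # the number of loops is decided by the project manager
        plan = identify_and_resolve_risks(determine_objectives(loop))
        release = develop_and_verify(plan)
        if review_and_plan(release):
            return release
    return None

print(spiral())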

Risk Handling in Spiral Model


A risk is any adverse situation that might affect the successful completion of a software project. The
most important feature of the spiral model is handling these unknown risks after the project has
started. Such risk resolutions are easier done by developing a prototype.
1. The spiral model supports coping with risks by providing the scope to build a prototype at every
phase of software development.
2. The Prototyping Model also supports risk handling, but the risks must be identified completely
before the start of the development work of the project.
3. But in real life, project risk may occur after the development work starts, in that case, we cannot
use the Prototyping Model.
4. In each phase of the Spiral Model, the features of the product are elaborated and analyzed, and the risks
at that point in time are identified and resolved through prototyping.
5. Thus, this model is much more flexible compared to other SDLC models.

Why Spiral Model is called Meta Model?


The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model.
1. The spiral model incorporates the stepwise approach of the Classical Waterfall Model.

2. The spiral model uses the approach of the Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique.
3. Also, the spiral model can be considered as supporting the Evolutionary model – the iterations
along the spiral can be considered as evolutionary levels through which the complete system is
built.

Advantages of the Spiral Model


Below are some advantages of the Spiral Model.
1. Risk Handling: For projects with many unknown risks that surface as development proceeds,
the Spiral Model is the best development model to follow because of the risk analysis and risk
handling done at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
3. Flexibility in Requirements: Change requests in the Requirements at a later phase can be
incorporated accurately by using this model.
4. Customer Satisfaction: Customers can see the development of the product in the early phases
of software development, and thus they become accustomed to the system by using it before the
complete product is finished.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and incremental
approach to software development, allowing for flexibility and adaptability in response to
changing requirements or unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk
management, which helps to minimize the impact of uncertainty and risk on the software
development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and reviews,
which can improve communication between the customer and the development team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software
development process, which can result in improved software quality and reliability.

Disadvantages of the Spiral Model


Below are some main disadvantages of the spiral model.
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project depends
heavily on risk analysis. Without highly experienced experts, developing a project using this
model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the
project, time estimation is very difficult.
5. Complexity: The Spiral Model can be complex, as it involves multiple iterations of the
software development process.
6. Time-Consuming: The Spiral Model can be time-consuming, as it requires multiple
evaluations and reviews.
7. Resource Intensive: The Spiral Model can be resource-intensive, as it requires a significant
investment in planning, risk analysis, and evaluations.

The most serious issue faced in the waterfall model is that it takes a long time to finish the product,
and the product may become obsolete in the meantime. To tackle this issue, another methodology is
used, known as the spiral model (also called the cyclic model).
When To Use the Spiral Model?
1. When a project is vast in software engineering, a spiral model is utilized.
2. A spiral approach is utilized when frequent releases are necessary.
3. When it is appropriate to create a prototype
4. When evaluating risks and costs is crucial
5. The spiral approach is beneficial for projects with moderate to high risk.

6. The SDLC’s spiral model is helpful when requirements are complicated and ambiguous.
7. If modifications are possible at any moment
8. When committing to a long-term project is impractical owing to shifting economic priorities.

Conclusion
The Spiral Model is a valuable choice for software development projects where risk management is a
high priority. The Spiral Model delivers high-quality software by promoting risk identification, iterative
development, and continuous client feedback. It is typically used when a project is vast in software
engineering terms.

Questions For Practice


1. Match each software lifecycle model in List – I to its description in List – II: [UGC NET
CSE 2016]
List-I                              List-II

I.   Code and Fix                   a. Assess risks at each step; do the most critical action first
II.  Evolutionary Prototyping       b. Build initial small requirement specifications, code them, then ‘evolve’ the specifications and code as needed
III. Spiral                         c. Build initial requirement specification for several releases, then design and code in sequence
IV.  Staged Delivery                d. Standard phases (requirement, design, code, test) in order
V.   Waterfall                      e. Write some code, debug it, and repeat (i.e. ad-hoc)

Choose the Correct Option:

        I    II   III  IV   V
   (A)  e    b    a    c    d
   (B)  e    c    a    b    d
   (C)  d    a    b    c    e
   (D)  c    e    a    b    d
Solution: Correct Answer is (A).

2. In the Spiral model of software development, the primary determinant in selecting activities
in each iteration is [ISRO 2016]
(A) Iteration Size
(B) Cost
(C) Adopted process such as Rational Unified Process or Extreme Programming
(D) Risk
Solution: Correct Answer is (D).

Frequently Asked Questions related to Spiral Model – Software Engineering

How does Spiral Model differ from Waterfall Model?
Spiral Model is different from Waterfall Model as Waterfall Model follows a linear and sequential
approach whereas Spiral Model has repeated cycles of development.

What are the places where the Spiral Model is commonly used?
Spiral Model is commonly used in industries where risk management is critical, such as software
development, medical device manufacturing, etc.

Why is the spiral model expensive?


Spiral Model is Expensive because risk handling requires extra resources.

Incremental Process Model – Software Engineering


The Incremental Process Model is also known as the Successive version model. This article
focuses on discussing the Incremental Process Model in detail.

Table of Content
 What is the Incremental Process Model?
 Phases of incremental model
 Requirement Process Model
 Types of Incremental Model
 When to use Incremental Process Model
 Characteristics of Incremental Process Model
 Advantages of Incremental Process Model
 Disadvantages of Incremental Process Model

What is the Incremental Process Model?


First, a simple working system implementing only a few basic features is built and delivered to the
customer. Thereafter, many successive iterations/versions are implemented and delivered to the
customer until the desired system is released.

A, B, and C are modules of Software Products that are incrementally developed and delivered.

Phases of incremental model


Requirements of Software are first broken down into several modules that can be incrementally
constructed and delivered.
1. Requirement analysis: In the requirement analysis phase, the requirements of the software are
gathered and analyzed, and the overall functionality is broken down into modules/increments that
can be constructed and delivered one by one.
2. Design & Development: At any time, the plan is made just for the next increment and not for
any kind of long-term plan. Therefore, it is easier to modify the version as per the needs of the
customer. The Development Team first undertakes to develop core features (these do not need
services from other features) of the system. Once the core features are fully developed, then
these are refined to increase levels of capabilities by adding new functions in Successive
versions. Each incremental version is usually developed using an iterative waterfall model of
development.

3. Deployment and Testing: After requirements gathering and specification, the requirements are
split into several versions. Starting with version 1, each successive increment is constructed,
tested to check that it works as intended, and then deployed at the customer site.
4. Implementation: After the last version (version n) is built and tested, it is deployed at the
client site.
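As a rough illustration of how successive versions accumulate functionality, the hypothetical Python sketch below delivers a core module first and then adds one module per increment; the module names follow the A, B, C example above and the helper is just a stub.

MODULES = ["A (core)", "B", "C"]     # modules identified during requirement analysis

def build_and_test(module):
    return module                    # stand-in for design, coding and testing of the module

def incremental_delivery():
    delivered = []
    for version, module in enumerate(MODULES, start=1):
        delivered.append(build_and_test(module))
        # each increment is deployed at the customer site as a usable version
        print("Version", version, "delivered with modules:", delivered)

incremental_delivery()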

Requirement Process Model

Types of Incremental Model


1. Staged Delivery Model
2. Parallel Development Model

1. Staged Delivery Model


Construction of only one part of the project at a time.

2. Parallel Development Model


Different subsystems are developed at the same time. It can decrease the calendar time needed for
the development, i.e. TTM (Time to Market) if enough resources are available.
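A back-of-the-envelope illustration of this time-to-market effect: with enough teams, the calendar time is roughly the duration of the longest subsystem rather than the sum of all of them. The subsystem names and durations below are invented numbers, used only to show the arithmetic.

# Illustrative arithmetic only; durations are made-up numbers of months.
SUBSYSTEM_MONTHS = {"billing": 3, "reporting": 2, "user_portal": 4}

sequential_ttm = sum(SUBSYSTEM_MONTHS.values())   # one team builds subsystems one after another
parallel_ttm = max(SUBSYSTEM_MONTHS.values())     # one team per subsystem, all working at once

print("Sequential:", sequential_ttm, "months; Parallel:", parallel_ttm, "months")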

When to use the Incremental Process Model
1. Funding Schedule, Risk, Program Complexity, or need for early realization of benefits.
2. When Requirements are known up-front.
3. When Projects have lengthy development schedules.
4. Projects with new Technology.
 Error Reduction (core modules are used by the customer from the beginning of the phase
and then these are tested thoroughly).
 Uses divide and conquer for a breakdown of tasks.
 Lowers initial delivery cost.
 Incremental Resource Deployment.
5. Requires good planning and design.
6. The total cost is not lower.
7. Well-defined module interfaces are required.

Characteristics of Incremental Process Model


1. System development is divided into several smaller projects.
2. To create a final complete system, partial systems are constructed one after the other.
3. Priority requirements are addressed first.
4. The requirements for that increment are frozen once they are created.

Advantages of the Incremental Process Model


1. Prepares the software fast.
2. Clients have a clear idea of the project.
3. Changes are easy to implement.
4. Provides risk handling support, because of its iterations.
5. Adjusting the criteria and scope is flexible and less costly.
6. Comparing this model to others, it is less expensive.
7. The identification of errors is simple.

Disadvantages of the Incremental Process Model


1. A good team and proper planned execution are required.
2. Because of its continuous iterations the cost increases.

3. Issues may arise from the system design if all needs are not gathered upfront throughout the
program lifecycle.
4. Every iteration step is distinct and does not flow into the next.
5. It takes a lot of time and effort to fix an issue in one unit if it needs to be corrected in all the
units.

Rapid application development model (RAD) – Software Engineering


The Rapid Application Development Model was first proposed by IBM in the 1980s. The RAD model
is a type of incremental process model in which there is an extremely short development cycle. When
the requirements are fully understood and the component-based construction approach is adopted then
the RAD model is used. Various phases in RAD are Requirements Gathering, Analysis and Planning,
Design, Build or Construction, and finally Deployment.

Table of Content
 When to use the RAD Model?
 Objectives of Rapid Application Development Model (RAD)
 Advantages of Rapid Application Development Model (RAD)
 Disadvantages of Rapid application development model (RAD)
 Applications of Rapid Application Development Model (RAD)
 Drawbacks of Rapid Application Development

The critical feature of this model is the use of powerful development tools and techniques. A software
project can be implemented using this model if the project can be broken down into small modules
wherein each module can be assigned independently to separate teams. These modules can finally be
combined to form the final product. Development of each module involves the various basic steps as
in the waterfall model i.e. analyzing, designing, coding, and then testing, etc. as shown in the figure.
Another striking feature of this model is its short time period, i.e., the time frame for delivery (time-box) is
generally 60-90 days.

Multiple teams work on developing the software system using the RAD model parallelly.

The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is also an integral
part of the projects. This model consists of 4 basic phases:
1. Requirements Planning – This involves the use of various techniques used in requirements
elicitation like brainstorming, task analysis, form analysis, user scenarios, FAST (Facilitated
Application Development Technique), etc. It also consists of the entire structured plan describing
the critical data, methods to obtain it, and then processing it to form a final refined model.

2. User Description – This phase consists of taking user feedback and building the prototype using
developer tools. In other words, it includes re-examination and validation of the data collected in
the first phase. The dataset attributes are also identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes place. It includes the
actual use of powerful automated tools to transform processes and data models into the final
working product. All the required modifications and enhancements are to be done in this phase.
4. Cutover – All the interfaces between the independent modules developed by separate teams have
to be tested properly. The use of powerfully automated tools and subparts makes testing easier.
This is followed by acceptance testing by the user.

The process involves building a rapid prototype, delivering it to the customer, and taking feedback.
After validation by the customer, the SRS document is developed and the design is finalized.
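The prototype-and-feedback cycle described above can be sketched as a small loop: build a prototype, show it to the customer, apply the requested changes, and stop once no further changes are requested. The Python sketch below is only an illustration; every helper name and the sample feedback are invented.

def build_prototype(requirements):
    return {"features": list(requirements)}

def customer_feedback(prototype, round_no):
    # Stand-in: pretend the customer asks for one change in round 1 and none afterwards.
    return ["add search box"] if round_no == 1 else []

def rad_cycle(requirements, max_rounds=5):
    prototype = build_prototype(requirements)
    for round_no in range(1, max_rounds + 1):
        changes = customer_feedback(prototype, round_no)
        if not changes:                        # customer has validated the prototype
            return prototype                   # SRS and final design follow from here
        prototype["features"].extend(changes)  # incorporate requested changes quickly
    return prototype

print(rad_cycle(["login", "dashboard"]))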

When to use the RAD Model?


1. Well-understood Requirements: When project requirements are stable and transparent, RAD is
appropriate.
2. Time-sensitive Projects: Suitable for projects that need to be developed and delivered quickly
due to tight deadlines.
3. Small to Medium-Sized Projects: Better suited for smaller initiatives requiring a controllable
number of team members.
4. High User Involvement: Fits where ongoing input and interaction from users are essential.
5. Innovation and Creativity: Helpful for tasks requiring creative inquiry and innovation.
6. Prototyping: It is necessary when developing and improving prototypes is a key component of
the development process.
7. Low technological Complexity: Suitable for tasks using comparatively straightforward
technological specifications.

Objectives of Rapid Application Development Model (RAD)


1. Speedy Development
Accelerating the software development process is RAD’s main goal. RAD prioritizes rapid
prototyping and iterations to produce a working system as soon as possible. This is especially helpful
for projects when deadlines must be met.

2. Adaptability and Flexibility


RAD places a strong emphasis on adapting quickly to changing needs. Due to the model’s flexibility,
stakeholders can modify and improve the system in response to changing requirements and user input.

3. Stakeholder Participation
Throughout the development cycle, RAD promotes end users and stakeholders’ active participation.
Collaboration and frequent feedback make it possible to make sure that the changing system satisfies
both user and corporate needs.

4. Improved Interaction
Development teams and stakeholders may collaborate and communicate more effectively thanks to
RAD. Frequent communication and feedback loops guarantee that all project participants are in
agreement, which lowers the possibility of misunderstandings.
5. Improved Quality via Prototyping
Prototypes enable early system component testing and visualization in Rapid Application
Development (RAD). This aids in spotting any problems, confirming design choices, and
guaranteeing that the finished product lives up to consumer expectations.

6. Customer Satisfaction
Delivering a system that closely satisfies user expectations and needs is the goal of RAD. Through
rapid delivery of functioning prototypes and user involvement throughout the development process,
Rapid Application Development (RAD) enhances the probability of customer satisfaction with the
final product.

Advantages of Rapid Application Development Model (RAD)


 The use of reusable components helps to reduce the cycle time of the project.
 Feedback from the customer is available at the initial stages.
 Reduced costs as fewer developers are required.
 The use of powerful development tools results in better quality products in comparatively shorter
periods.
 The progress and development of the project can be measured through the various stages.
 It is easier to accommodate changing requirements due to the short iteration time spans.
 Productivity may be quickly boosted with a lower number of employees.

Disadvantages of Rapid application development model (RAD)


 The use of powerful and efficient tools requires highly skilled professionals.
 The absence of reusable components can lead to the failure of the project.
 The team leader must work closely with the developers and customers to close the project on
time.
 The systems which cannot be modularized suitably cannot use this model.
 Customer involvement is required throughout the life cycle.
 It is not meant for small-scale projects as in such cases, the cost of using automated tools and
techniques may exceed the entire budget of the project.
 Not every application can be used with RAD.

Applications of Rapid Application Development Model (RAD)


1. This model should be used for a system with known requirements and requiring a short
development time.
2. It is also suitable for projects where requirements can be modularized and reusable components
are also available for development.
3. The model can also be used when already existing system components can be used in developing
a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant
knowledge and the ability to use powerful techniques are a necessity.
5. The model should be chosen when the budget permits the use of automated tools and techniques
required.

Drawbacks of Rapid Application Development


 It requires multiple teams or a large number of people to work on scalable projects.
 This model requires heavily committed developers and customers. If commitment is lacking then
RAD projects will fail.
 The projects using the RAD model require heavy resources.
 If there is no appropriate modularization then RAD projects fail. Performance can be a problem
for such projects.
 The projects using the RAD model find it difficult to adopt new technologies. This is because
RAD focuses on quickly building and refining prototypes using existing tools. Changing to new
technologies can disrupt this process, making it harder to keep up with the fast pace of
development. Even with skilled developers and advanced tools, the rapid nature of RAD leaves
little time to learn and integrate new technologies smoothly.

RAD Model vs Traditional SDLC – Software Engineering
Software Development is the development of software for distinct purposes. There are several types
of Software Development Models. In this article, we will see the difference between the RAD
Model and the Traditional Software Development Life Cycle (SDLC).

What is Traditional SDLC?


In the traditional SDLC model, work proceeds sequentially: only after one phase is complete does the
project move to the next phase, and deployment happens at the end.
1. The output of one step is used as input for the following one.
2. The most popular traditional SDLC models are Waterfall, Iterative, Spiral, and V-shaped
Models.
3. Models like RAD are reshaping the design of modern businesses to incorporate agile
procedures.

Various Phases of Traditional SDLC


1. Planning: The Planning phase is the first and foremost stage which requires all the planning as
well as gathering the requirements of the project.
2. Designing: The design phase is the next stage, in this, the requirements which are listed
previously are converted into the architectural or system design.
3. Implementation: The third stage is the main development stage, which involves the real
implementation; here the developers write the code, followed by the testers who test it.
4. Maintenance: The last stage is the deployment and maintenance stage. The application gets
deployed, and any bug or defect found afterwards is fixed and maintained in this stage.

Where Traditional Model is used?


Traditional Models are used in the following scenarios:
1. When the technology being used in the software is not going to change in the near future.
2. When the project is not going to run for long.
3. When the delivery speed of the project is not a priority.
4. When the requirements are perfectly defined and documented.

What is RAD Model?


Unlike the traditional SDLC model in which the end product is available in the end, in the RAD
model (Rapid Application Development) after each iteration the model is shown to the client and
based on the feedback of the client, necessary changes will be done, hence in this, there is the total
involvement of the client in every phase of the model.
1. It represents a Radical shift in software development.
2. In this model, the product is continually demonstrated to the user to provide the required input
to help enhance it.
3. It is suited for developing software that is driven by user interface requirements.
4. It emphasizes incremental and iterative delivery of functioning models to the
client.

Various Phases of RAD Model


1. Planning: Initial phase is planning, which involves requirement gathering, discussing the
timeline of the project.
2. Prototype: In this phase, a prototype is constructed so that it can be shown to the client and
necessary changes can be made quickly, unlike the traditional SDLC model where the complete
product is constructed first.
3. Feedback: Once the prototype is available, it is shown to the client, feedback is collected, and
further actions are taken depending on their requirements; if the clients require any changes,
those changes are made until no further modification comes from the client side.

4. Deployment: Once the above three phases are completed, the application is deployed to the
client.
Benefits of RAD Model
1. Better quality software: It produces better-quality software that is more usable and more
focused on the business.
2. Better reusability: The RAD Model offers better reusability of components.
3. Flexible: The RAD Model is more flexible as it allows easy adjustments.
4. Minimum failures: It helps in completing projects on time and within budget, so failures are
minimal in the RAD Model.

Differences Between Traditional SDLC and RAD Model


Parameter: Stages
RAD Model: Stages are not well-defined.
Traditional SDLC: Follows a structured methodology with well-defined stages.

Parameter: Application Development Approach
RAD Model: Different stages of application development can be reviewed and repeated, as the approach is iterative.
Traditional SDLC: Follows a predictive, inflexible, and rigid approach to application development.

Parameter: Prototypes
RAD Model: Prototypes are built, shown to the client, and refined iteratively.
Traditional SDLC: The working product is typically available only at the end of the life cycle.

Parameter: Requirements
RAD Model: It is not necessary to know all the requirements beforehand.
Traditional SDLC: All the requirements should be known before starting the project due to the rigidity of the models.

Parameter: Changes
RAD Model: Easier to accommodate changes.
Traditional SDLC: Difficult to accommodate changes due to the sequential nature of the models.

Parameter: Customer Feedback
RAD Model: Extensive customer feedback, leading to more customer satisfaction and better quality of the final software.
Traditional SDLC: Limited customer feedback.

Parameter: Documentation
RAD Model: Involves minimal documentation.
Traditional SDLC: Stringent and extensive documentation of the entire project process is necessary.

Parameter: Team Size
RAD Model: Separate small teams can be assigned to individual modules.
Traditional SDLC: As there is no modularization, a larger team is required for different stages with strictly defined roles.

Parameter: Preferred Projects
RAD Model: Generally preferred for projects with shorter time durations and budgets large enough to afford the use of automated tools and techniques.
Traditional SDLC: Used for projects with longer development schedules and where budgets do not allow the use of expensive and powerful tools.

Parameter: Components Used
RAD Model: The use of reusable components helps to reduce the cycle time of the project.
Traditional SDLC: The use of powerful and efficient tools requires highly skilled professionals.

Parameter: Reusability of Elements
RAD Model: Usage of identified and ready-to-use themes, templates, layouts, and micro apps that have been predefined.
Traditional SDLC: Elements are not reusable since they must be created from scratch in accordance with project requirements.

Previously Asked Questions


1. Software Engineering is an engineering discipline that is concerned with: [UGC NET CS
2017 Jan – II]
(A) how computer systems work
(B) theories and methods that underlie computers and software systems.
(C) all aspects of software production
(D) all aspects of computer-based systems development, including hardware, software and process
engineering.
Solution: Correct Answer is (C).

2. __________ are applied throughout the software process. [UGC NET CS 2014 Dec – II]
(A) Framework activities
(B) Umbrella activities
(C) Planning activities
(D) Construction activities
Solution: Correct Answer is (B).

3. Software engineering primarily aims at: [UGC NET CS June Paper – II]
(A) reliable software
(B) cost-effective software
(C) reliable and cost-effective software
(D) question does not provide sufficient data
Solution: Correct Answer is (C).

FAQs
1. Which is most commonly used SDLC model?
The most commonly used model is the Agile model; it is also the model most preferred in industry.

2. What phase of SDLC is the most critical?


The initial phase, in which all the requirements are gathered, is the most critical, since the rest of the steps depend on it.

3. What type of model is RAD ?


RAD is an incremental model in which each phase is carried out incrementally until the product is finished.

Agile Development Models – Software Engineering


In earlier days, the Iterative Waterfall Model was very popular for completing a project. But
nowadays, developers face various problems while using it to develop software. The main difficulties
included handling customer change requests during project development and the high cost and time
required to incorporate these changes. To overcome these drawbacks of the Waterfall Model, in the
mid-1990s the Agile Software Development model was proposed.

Table of Content
 What is Agile Model?
 Agile SDLC Models/Methods
 Steps in the Agile Model
 Principles of the Agile Model
 Characteristics of the Agile Process
 When To Use the Agile Model?
 Advantages of the Agile Model
 Disadvantages of the Agile Model
 Questions For Practice
 Conclusion
 Frequently Asked Questions on Agile Model – FAQs

What is Agile Model?


The Agile Model was primarily designed to help a project adapt quickly to change requests. So, the
main aim of the Agile model is to facilitate quick project completion. To accomplish this task, agility
is required. Agility is achieved by fitting the process to the project and removing activities that may
not be essential for a specific project. Also, anything that is a waste of time and effort is avoided. The
Agile Model refers to a group of development processes. These processes share some basic
characteristics but do have certain subtle differences among themselves.

Agile SDLC Models/Methods


Given below are some Agile SDLC Models:
 Crystal Agile methodology: The Crystal Agile Software Development Methodology places a
strong emphasis on fostering effective communication and collaboration among team members, as
well as taking into account the human elements that are crucial for a successful development
process. This methodology is particularly beneficial for projects with a high degree of uncertainty,
where requirements tend to change frequently.
 Dynamic Systems Development Method (DSDM): The DSDM methodology is tailored for
projects with moderate to high uncertainty where requirements are prone to change frequently. Its
clear-cut roles and responsibilities focus on delivering working software in short time frames. Its
governance practices set it apart and make it an effective approach for teams and projects.
 Feature-driven development (FDD): FDD approach is implemented by utilizing a series of
techniques, like creating feature lists, conducting model evaluations, and implementing a design-
by-feature method, to meet its goal. This methodology is particularly effective in ensuring that the
end product is delivered on time and that it aligns with the requirements of the customer.
 Scrum: Scrum methodology serves as a framework for tackling complex projects and ensuring
their successful completion. It is led by a Scrum Master, who oversees the process, and a Product
Owner, who establishes the priorities. The Development Team, accountable for delivering the
software, is another key player.
 Extreme Programming (XP): Extreme Programming uses specific practices like pair
programming, continuous integration, and test-driven development to achieve these goals.
Extreme programming is ideal for projects that have high levels of uncertainty and require
frequent changes, as it allows for quick adaptation to new requirements and feedback.
 Lean Development: Lean Development is rooted in the principles of lean manufacturing and
aims to streamline the process by identifying and removing unnecessary steps and activities. This
is achieved through practices such as continuous improvement, visual management, and value
stream mapping, which helps in identifying areas of improvement and implementing changes
accordingly.
 Unified Process: Unified Process is a methodology that can be tailored to the specific needs of
any given project. It combines elements of both waterfall and Agile methodologies, allowing for
an iterative and incremental approach to development. This means that the UP is characterized by
a series of iterations, each of which results in a working product increment, allowing for
continuous improvement and the delivery of value to the customer.

All the Agile Software Development methodologies discussed above share the same core values and
principles, but they may differ in their implementation and specific practices. Agile development
requires a high degree of collaboration and communication among team members, as well as a
willingness to adapt to changing requirements and feedback from customers.

In the Agile model, the requirements are decomposed into many small parts that can be incrementally
developed. The Agile model adopts Iterative development. Each incremental part is developed over an
iteration. Each iteration is intended to be small and easily manageable and can be completed within a
couple of weeks only. At a time one iteration is planned, developed, and deployed to the customers.
Long-term plans are not made.

Steps in the Agile Model


The agile model is a combination of the iterative and incremental process models. The steps involved in
agile SDLC models are:
 Requirement gathering
 Design the Requirements
 Construction / Iteration
 Testing / Quality Assurance
 Deployment
 Feedback

Steps in Agile Model

1. Requirement Gathering:- In this step, the development team must gather the requirements by
interacting with the customer. The development team should plan the time and effort needed to build
the project. Based on this information, the technical and economic feasibility can be evaluated.
2. Design the Requirements:- In this step, the development team uses user-flow diagrams or high-
level UML diagrams to show the working of the new features and how they will apply to the
existing software. Wireframing and designing of user interfaces are done in this phase.
3. Construction / Iteration:- In this step, development team members start working on their project,
which aims to deploy a working product.

4. Testing / Quality Assurance:- Testing involves Unit Testing, Integration Testing, and System
Testing. A brief introduction of these three tests is as follows:
 Unit Testing:- Unit testing is the process of checking small pieces of code to ensure that the
individual parts of a program work properly on their own. Unit testing is used to test individual
blocks (units) of code; a minimal example is sketched just after this list.
 Integration Testing:- Integration testing is used to identify and resolve any issues that may
arise when different units of the software are combined.
 System Testing:- The goal is to ensure that the software meets the requirements of the users and
that it works correctly in all possible scenarios.
5. Deployment:- In this step, the development team will deploy the working project to end users.

6. Feedback:- This is the last step of the Agile Model. In this, the team receives feedback about the
product and works on correcting bugs based on feedback provided by the customer.
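As a minimal illustration of the unit-testing step mentioned above, the snippet below checks one small function in isolation using Python's built-in unittest module; the function under test is just an invented example.

import unittest

def add(a, b):          # example "unit" under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()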

The time required to complete an iteration is known as a Time Box. Time-box refers to the maximum
amount of time needed to deliver an iteration to customers. So, the end date for an iteration does not
change. However, the development team can decide to reduce the delivered functionality during a
Time-box if necessary to deliver it on time. The Agile model’s central principle is delivering an
increment to the customer after each Time-box.
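One way to picture a Time-box is that the end date stays fixed and scope gives way instead: when the planned work does not fit, the lower-priority items are deferred to a later iteration. The sketch below is a hypothetical illustration; the feature names, effort figures, and capacity are invented.

# Backlog items are (feature, effort) pairs, assumed to be sorted by priority.
backlog = [("login", 5), ("dashboard", 8), ("export_to_pdf", 6), ("dark_mode", 3)]
capacity = 16   # effort units the team can deliver within one Time-box

planned, used = [], 0
for feature, effort in backlog:
    if used + effort <= capacity:
        planned.append(feature)
        used += effort
    # items that do not fit are deferred; the end date of the iteration never moves

print("Delivered this Time-box:", planned, "using", used, "of", capacity, "units")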

Principles of the Agile Model


 To establish close contact with the customer during development and to gain a clear
understanding of various requirements, each Agile project usually includes a customer
representative on the team. At the end of each iteration, stakeholders and the customer
representative review the progress made and re-evaluate the requirements.
 The agile model relies on working software deployment rather than comprehensive
documentation.
 Frequent delivery of incremental versions of the software to the customer representative in
intervals of a few weeks.
 Requirement change requests from the customer are encouraged and efficiently incorporated.
 It emphasizes having efficient team members and enhancing communications among them is
given more importance. It is realized that improved communication among the development team
members can be achieved through face-to-face communication rather than through the exchange
of formal documents.
 It is recommended that the development team size should be kept small (5 to 9 people) to help the
team members meaningfully engage in face-to-face communication and have a collaborative work
environment.
 The agile development process usually deploys Pair Programming. In Pair programming, two
programmers work together at one workstation. One does coding while the other reviews the code
as it is typed in. The two programmers switch their roles every hour or so.

Characteristics of the Agile Process


 Agile processes must be adaptable to technical and environmental changes. That means if any
technological changes occur, then the agile process must accommodate them.
 The development of agile processes must be incremental. That means, in each development, the
increment should contain some functionality that can be tested and verified by the customer.
 The customer feedback must be used to create the next increment of the process.
 The software increment must be delivered in a short span of time.
 It must be iterative so that each increment can be evaluated regularly.

When To Use the Agile Model?


 When frequent modifications need to be made, this method is implemented.
 When a highly qualified and experienced team is available.
 When a customer is ready to have a meeting with the team all the time.
 When the project needs to be delivered quickly.

 Projects with few regulatory requirements or uncertain requirements.
 Projects utilizing a less-than-strict current methodology.
 Projects where the product owner is easily reachable.
 Flexible project schedules and budgets.

Advantages of the Agile Model


 Working through Pair programming produces well-written compact programs which have fewer
errors as compared to programmers working alone.
 It reduces the total development time of the whole project.
 Agile development emphasizes face-to-face communication among team members, leading to
better collaboration and understanding of project goals.
 Customer representatives get an idea of the updated software product after each iteration. So, it is
easy for them to change any requirement if needed.
 Agile development puts the customer at the center of the development process, ensuring that the
end product meets their needs.

Disadvantages of the Agile Model


 The lack of formal documents creates confusion and important decisions taken during different
phases can be misinterpreted at any time by different team members.
 It is not suitable for handling complex dependencies.
 The agile model depends highly on customer interactions so if the customer is not clear, then the
development team can be driven in the wrong direction.
 Agile development models often involve working in short sprints, which can make it difficult to
plan and forecast project timelines and deliverables. This can lead to delays in the project and can
make it difficult to accurately estimate the costs and resources needed for the project.
 Agile development models require a high degree of expertise from team members, as they need to
be able to adapt to changing requirements and work in an iterative environment. This can be
challenging for teams that are not experienced in agile development practices and can lead to
delays and difficulties in the project.
 Due to the absence of proper documentation, when the project completes and the developers are
assigned to another project, maintenance of the developed project can become a problem.

Questions For Practice


1. Which of the following is not a key issue stressed by an agile philosophy of software
engineering? [UGC NET CS 2017]
(A) The importance of self-organizing teams as well as communication and collaboration between
team members and customers
(B) Recognition that change represents an opportunity
(C) Emphasis on rapid delivery of software that satisfies the customer
(D) Having a separate testing phase after a build phase
Solution: Correct Answer is (D).

2. Which of the following is not one of the principles of the agile software development method?
[UGC NET CS 2018]
(A) Following the plan
(B) Embrace change
(C) Customer involvement
(D) Incremental delivery
Solution: Correct Answer is (A).

Conclusion
Agile development models prioritize flexibility, collaboration, and customer satisfaction. They focus on delivering working software in short iterations, allowing for quick adaptation to changing requirements. While Agile offers advantages like faster delivery and customer involvement, it may face challenges with complex dependencies and lack of formal documentation. Overall, Agile is best suited for projects requiring rapid development, continuous feedback, and a highly skilled team.

Frequently Asked Questions on Agile Model – FAQs


1. What is Product Backlog in Agile?
Product Backlog simply refers to the prioritized list of features and tasks that are to be developed in the software product. It is continuously monitored and managed by the Product Owner.
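For concreteness, a product backlog can be pictured as a prioritized list of user stories. The short Python sketch below is purely illustrative (the stories, priorities, story-point estimates, and the capacity figure are all invented); it shows how the Product Owner's ordering might drive which items are pulled into the next sprint.

# Hypothetical product backlog: a prioritized list of user stories.
product_backlog = [
    {"story": "As a shopper, I can search the catalogue", "priority": 1, "estimate": 5},
    {"story": "As a shopper, I can pay by card", "priority": 2, "estimate": 8},
    {"story": "As an admin, I can export sales reports", "priority": 3, "estimate": 3},
]

# Pull the highest-priority stories that still fit the team's assumed capacity.
capacity = 10
sprint_backlog = []
for item in sorted(product_backlog, key=lambda entry: entry["priority"]):
    committed = sum(entry["estimate"] for entry in sprint_backlog)
    if committed + item["estimate"] <= capacity:
        sprint_backlog.append(item)

print([item["story"] for item in sprint_backlog])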

2. Is it possible to use Agile Model for large and complex Projects?


Yes, it is possible to use Agile for large and complex projects, but some adaptation is needed: scaling frameworks such as SAFe (Scaled Agile Framework) and LeSS (Large-Scale Scrum) are typically added to apply Agile to large and complex projects.

3. What is Sprint Review in Agile Model?


Sprint Review is simply a type of meeting that is held at the end of each sprint in the Agile Model. In this meeting, the development team presents the work completed to the stakeholders and the Product Owner.

AGILE SOFTWARE DEVELOPMENT – SOFTWARE ENGINEERING


Agile Software Development is a software development methodology that values flexibility,
collaboration, and customer satisfaction. It is based on the Agile Manifesto, a set of principles for
software development that prioritize individuals and interactions, working software, customer
collaboration, and responding to change.

Agile Software Development is an iterative and incremental approach to software development that
emphasizes the importance of delivering a working product quickly and frequently. It involves close
collaboration between the development team and the customer to ensure that the product meets their
needs and expectations.
Table of Content
 Why Agile is Used?
 4 Core Values of Agile Software Development
 12 Principles of Agile Software Development Methodology
 The Agile Software Development Process
 Agile Software Development Cycle
 Design Process of Agile Software Development
 Example of Agile Software Development
 Advantages of Agile Software Development
 Disadvantages of Agile Software Development
 Practices of Agile Software Development
 Advantages of Agile Software Development over traditional software development approaches

Why Agile is Used?


1. Creating Tangible Value: Agile places a high priority on creating tangible value as early as possible in a project. Customers benefit from the early delivery of promised features and from the opportunity for prompt feedback and modifications.
2. Concentrate on Value-Added Work: Agile methodology encourages teams to concentrate on producing functional, value-adding product increments, reducing the amount of time and energy spent on non-essential tasks.
3. Agile as a Mindset: Agile represents a shift in culture that values adaptability, collaboration, and
client happiness. It gives team members more authority and promotes a cooperative and upbeat
work atmosphere.
4. Quick Response to Change: Agile fosters a culture that allows teams to respond swiftly to
constantly shifting priorities and requirements. This adaptability is particularly useful in sectors of
the economy or technology that experience fast changes.

5. Regular Demonstrations: Agile techniques place a strong emphasis on regular demonstrations of
project progress. Stakeholders may clearly see the project’s status, upcoming problems, and
upcoming new features due to this transparency.
 Cross-Functional Teams: Agile fosters self-organizing, cross-functional teams that share information openly, communicate effectively, and work as a cohesive unit.

4 Core Values of Agile Software Development


The Agile Manifesto describes four core values of Agile software development.
1. Individuals and Interactions over Processes and Tools
2. Working Software over Comprehensive Documentation
3. Customer Collaboration over Contract Negotiation
4. Responding to Change over Following a Plan

12 Principles of Agile Software Development


The Agile Manifesto is based on four values and twelve principles that form the basis for Agile methodologies.

These principles include:


1. Ensuring customer satisfaction through the early and continuous delivery of valuable software.
2. Welcoming changing requirements, even in the late stages of development.
3. Frequently delivering working software, with a preference for shorter timescales.
4. Promoting daily collaboration between business stakeholders and developers.
5. Building projects around motivated individuals and providing them with the environment and support they need.
6. Prioritizing face-to-face communication as the most effective way of conveying information.
7. Considering working software as the primary measure of progress.
8. Fostering sustainable development, so that teams can maintain a constant pace indefinitely.
9. Placing continuous attention on technical excellence and good design.
10. Recognizing simplicity, the art of maximizing the amount of work not done, as essential.
11. Encouraging self-organizing teams, from which the best architectures, requirements, and designs emerge.
12. Regularly reflecting on how to become more effective and adjusting behaviour accordingly.

The Agile Software Development Process

1. Requirements Gathering: The customer’s requirements for the software are gathered and
prioritized.
2. Planning: The development team creates a plan for delivering the software, including the features
that will be delivered in each iteration.
3. Development: The development team works to build the software, using frequent and rapid
iterations.
4. Testing: The software is thoroughly tested to ensure that it meets the customer’s requirements
and is of high quality.
5. Deployment: The software is deployed and put into use.
6. Maintenance: The software is maintained to ensure that it continues to meet the customer’s needs
and expectations.

Agile Software Development is widely used by software development teams and is considered to be
a flexible and adaptable approach to software development that is well-suited to changing
requirements and the fast pace of software development.

Agile is a time-bound, iterative approach to software delivery that builds software incrementally from
the start of the project, instead of trying to deliver all at once.

Agile Software Development Cycle


Let’s see a brief overview of how development occurs in Agile philosophy.
1. concept
2. inception
3. iteration/construction
4. release
5. production
6. retirement

 Step 1: In the first step, concept, business opportunities in each potential project are identified and the amount of time and work needed to complete the project is estimated. Based on their technical and financial viability, projects can then be prioritized and it can be determined which ones are worth pursuing.
 Step 2: In the second phase, known as inception, the customer is consulted regarding the initial
requirements, team members are selected, and funding is secured. Additionally, a schedule
outlining each team’s responsibilities and the precise time at which each sprint’s work is expected
to be finished should be developed.

 Step 3: Teams begin building functional software in the third step, iteration/construction, based
on requirements and ongoing feedback. Iterations, also known as single development cycles, are
the foundation of the Agile software development cycle.

Design Process of Agile Software Development


 In Agile development, Design and Implementation are considered to be the central activities in
the software process.
 The design and Implementation phase also incorporates other activities such as requirements
elicitation and testing.
 In an agile approach, iteration occurs across activities. Therefore, the requirements and the design
are developed together, rather than separately.
 The allocation of requirements and the planning, design, and development are executed in a series of increments. In contrast with the conventional model, where requirements gathering must be completed before moving to the design and development phase, this gives Agile development an extra level of flexibility.
 An agile process focuses more on code development rather than documentation.

Example of Agile Software Development


Let’s go through an example to understand clearly how agile works. A Software company
named ABC wants to make a new web browser for the latest release of its operating system. The
deadline for the task is 10 months. The company’s head assigned two teams named Team
A and Team B for this task. To motivate the teams, the company head says that the first team to
develop the browser would be given a salary hike and a one-week full-sponsored travel plan. With the
dreams of their wild travel fantasies, the two teams set out on the journey of the web browser. Team A
decided to play by the book and decided to choose the Waterfall model for the development. Team B
after a heavy discussion decided to take a leap of faith and choose Agile as their development model.
The Development Plan of the Team A is as follows:
 Requirement analysis and Gathering – 1.5 Months
 Design of System – 2 Months
 Coding phase – 4 Months
 System Integration and Testing – 2 Months
 User Acceptance Testing – 5 Weeks

The Development Plan for the Team B is as follows:


 Since this was an Agile project, it was broken up into several iterations.
 The iterations are all of the same time duration.
 At the end of each iteration, a working product with a new feature has to be delivered.
 Instead of Spending 1.5 months on requirements gathering, they will decide the core features that
are required in the product and decide which of these features can be developed in the first
iteration.
 Any remaining features that cannot be delivered in the first iteration will be delivered in the next
subsequent iteration, based on the priority.
 At the end of the first iteration, the team will deliver working software with the core basic features.

The teams put their best efforts into getting the product to a complete stage. But then, out of the blue, due to the rapidly changing environment, the company's head came up with an entirely new set of features that he wanted implemented as quickly as possible, with a working model pushed out in 2 days. Team A was now in a fix: they were still in their design phase, had not yet started coding, and had no working model to display. Moreover, it was practically impossible for them to implement the new features, since in the waterfall model there is no way to revert to an earlier phase once you proceed to the next stage, which means they would have to start from square one again. That would
incur heavy costs and a lot of overtime. Team B was ahead of Team A in a lot of aspects, all thanks to Agile Development. They also had a working product with most of the core requirements since the first increment. And it was a piece of cake for them to add the new requirements. All they had to do was schedule these requirements for the next increment and then implement them.

Advantages of Agile Software Development


 Deployment of software is quicker and thus helps in increasing the trust of the customer.
 Can better adapt to rapidly changing requirements and respond faster.
 Helps in getting immediate feedback which can be used to improve the software in the next
increment.
 People – Not Process. People and interactions are given a higher priority than processes and tools.
 Continuous attention to technical excellence and good design.
 Increased collaboration and communication: Agile Software Development
Methodology emphasize collaboration and communication among team members, stakeholders,
and customers. This leads to improved understanding, better alignment, and increased buy-in
from everyone involved.
 Flexibility and adaptability: Agile methodologies are designed to be flexible and adaptable,
making it easier to respond to changes in requirements, priorities, or market conditions. This
allows teams to quickly adjust their approach and stay focused on delivering value.
 Improved quality and reliability: Agile methodologies place a strong emphasis on testing,
quality assurance, and continuous improvement. This helps to ensure that software is delivered
with high quality and reliability, reducing the risk of defects or issues that can impact the user
experience.
 Enhanced customer satisfaction: Agile methodologies prioritize customer satisfaction and focus
on delivering value to the customer. By involving customers throughout the development process,
teams can ensure that the software meets their needs and expectations.
 Increased team morale and motivation: Agile methodologies promote a collaborative,
supportive, and positive work environment. This can lead to increased team morale, motivation,
and engagement, which can in turn lead to better productivity, higher quality work, and improved
outcomes.

Disadvantages of Agile Software Development


 In the case of large software projects, it is difficult to assess the effort required at the initial stages
of the software development life cycle.
 Agile Development is more code-focused and produces less documentation.
 Agile development is heavily dependent on the inputs of the customer. If the customer has ambiguity in their vision of the outcome, it is highly likely that the project will get off track.
 Face-to-face communication is harder in large-scale organizations.
 Only senior programmers are capable of making the kind of decisions required during the
development process. Hence, it’s a difficult situation for new programmers to adapt to the
environment.
 Lack of predictability: Agile Development relies heavily on customer feedback and continuous
iteration, which can make it difficult to predict project outcomes, timelines, and budgets.
 Limited scope control: Agile Development is designed to be flexible and adaptable, which means
that scope changes can be easily accommodated. However, this can also lead to scope creep and a
lack of control over the project scope.
 Lack of emphasis on testing: Agile Development places a greater emphasis on delivering
working code quickly, which can lead to a lack of focus on testing and quality assurance. This can
result in bugs and other issues that may go undetected until later stages of the project.
 Risk of team burnout: Agile Development can be intense and fast-paced, with frequent sprints
and deadlines. This can put a lot of pressure on team members and lead to burnout, especially if
the team is not given adequate time for rest and recovery.
 Lack of structure and governance: Agile Development is often less formal and structured than
other development methodologies, which can lead to a lack of governance and oversight. This can
result in inconsistent processes and practices, which can impact project quality and outcomes.

Agile is a framework that defines how software development needs to be carried on. Agile is not a
single method, it represents the various collection of methods and practices that follow the value
statements provided in the manifesto. Agile methods and practices do not promise to solve every
problem present in the software industry (No Software model ever can). But they sure help to
establish a culture and environment where solutions emerge.

Agile software development is an iterative and incremental approach to software development. It emphasizes collaboration between the development team and the customer, flexibility and adaptability in the face of changing requirements, and the delivery of working software in short iterations.

The Agile Manifesto, which outlines the principles of agile development, values individuals and
interactions, working software, customer collaboration, and response to change.
Practices of Agile Software Development
 Scrum: Scrum is a framework for agile software development that involves iterative cycles called
sprints, daily stand-up meetings, and a product backlog that is prioritized by the customer.
 Kanban: Kanban is a visual system that helps teams manage their work and improve their
processes. It involves using a board with columns to represent different stages of the development
process, and cards or sticky notes to represent work items.
 Continuous Integration: Continuous Integration is the practice of frequently merging code
changes into a shared repository, which helps to identify and resolve conflicts early in the
development process.
 Test-Driven Development: Test-Driven Development (TDD) is a development practice that involves writing automated tests before writing the code. This helps to ensure that the code meets the requirements and reduces the likelihood of defects (a minimal sketch follows this list).
 Pair Programming: Pair programming involves two developers working together on the same
code. This helps to improve code quality, share knowledge, and reduce the likelihood of defects.
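To make the test-driven development practice above concrete, here is a minimal sketch in Python. The shopping-cart function and its test values are invented purely for illustration: in TDD the unittest cases are written first and fail until cart_total is implemented to satisfy them.

import unittest

def cart_total(prices, discount=0.0):
    # Return the cart total after applying a fractional discount.
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

class CartTotalTest(unittest.TestCase):
    # In TDD these tests exist (and fail) before cart_total is written.
    def test_total_without_discount(self):
        self.assertEqual(cart_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(cart_total([100.0], discount=0.1), 90.0)

if __name__ == "__main__":
    unittest.main()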

Advantages of Agile Software Development over traditional software development approaches


1. Increased customer satisfaction: Agile development involves close collaboration with the
customer, which helps to ensure that the software meets their needs and expectations.
2. Faster time-to-market: Agile development emphasizes the delivery of working software in short
iterations, which helps to get the software to market faster.
3. Reduced risk: Agile development involves frequent testing and feedback, which helps to identify
and resolve issues early in the development process.
4. Improved team collaboration: Agile development emphasizes collaboration and communication
between team members, which helps to improve productivity and morale.
5. Adaptability to change: Agile Development is designed to be flexible and adaptable, which
means that changes to the project scope, requirements, and timeline can be accommodated easily.
This can help the team to respond quickly to changing business needs and market demands.
6. Better quality software: Agile Development emphasizes continuous testing and feedback, which
helps to identify and resolve issues early in the development process. This can lead to higher-
quality software that is more reliable and less prone to errors.
7. Increased transparency: Agile Development involves frequent communication and
collaboration between the team and the customer, which helps to improve transparency and
visibility into the project status and progress. This can help to build trust and confidence with the
customer and other stakeholders.
8. Higher productivity: Agile Development emphasizes teamwork and collaboration, which helps
to improve productivity and reduce waste. This can lead to faster delivery of working software
with fewer defects and rework.
9. Improved project control: Agile Development emphasizes continuous monitoring and
measurement of project metrics, which helps to improve project control and decision-making.
This can help the team to stay on track and make data-driven decisions throughout the
development process.

In summary, Agile software development is a popular approach to software development that
emphasizes collaboration, flexibility, and the delivery of working software in short iterations. It has
several advantages over traditional software development approaches, including increased customer
satisfaction, faster time-to-market, and reduced risk.

What is Extreme Programming (XP)?


Extreme programming (XP) is one of the most important software development frameworks of
Agile models. It is used to improve software quality and responsiveness to customer requirements.

Table of Content
 What is Extreme Programming (XP)?
 Good Practices in Extreme Programming
 Basic principles of Extreme programming
 Applications of Extreme Programming (XP)
 Life Cycle of Extreme Programming (XP)
 Values of Extreme Programming (XP)
 Advantages of Extreme Programming (XP)
 Conclusion
 Frequently Asked Questions related to Extreme Programming

The extreme programming model recommends taking the best practices that have worked well in
the past in program development projects to extreme levels.

What is Extreme Programming (XP)?


Extreme Programming (XP) is an Agile software development methodology that focuses on
delivering high-quality software through frequent and continuous feedback, collaboration, and
adaptation. XP emphasizes a close working relationship between the development team, the
customer, and stakeholders, with an emphasis on rapid, iterative development and deployment.

Agile development approaches evolved in the 1990s as a reaction to documentation and bureaucracy-based processes, particularly the waterfall approach. Agile approaches are based on
some common principles, some of which are:
1. Working software is the key measure of progress in a project.
2. Therefore, for progress in a project, software should be developed and delivered rapidly in small increments.

3. Even late changes in the requirements should be entertained.
4. Face-to-face communication is preferred over documentation.
5. Continuous feedback and involvement of customers are necessary for developing good-quality
software.
6. A simple design that evolves and improves with time is a better approach than doing an elaborate design up front to handle all possible scenarios.
7. The delivery dates are decided by empowered teams of talented individuals.

Extreme programming is one of the most popular and well-known approaches in the family of agile
methods. An XP project starts with user stories, which are short descriptions of the scenarios that the
customers and users would like the system to support. Each story is written on a separate card, so
they can be flexibly grouped.

Good Practices in Extreme Programming


Some of the good practices that have been recognized in the extreme programming model and
suggested to maximize their use are given below:
 Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of the written code are carried out by a pair of programmers who switch roles every hour or so.
 Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-
driven development (TDD) to continually write and execute test cases. In the TDD approach,
test cases are written even before any code is written.
 Incremental development: Incremental development is very good because customer feedback
is gained and based on this development team comes up with new increments every few days
after each iteration.
 Simplicity: Simplicity makes it easier to develop good-quality code as well as to test and debug
it.
 Design: Good quality design is important to develop good quality software. So, everybody
should design daily.
 Integration testing: Integration testing helps to identify bugs at the interfaces of different functionalities. Extreme programming suggests that the developers should achieve continuous integration by building and performing integration testing several times a day (a rough sketch of one such integration run follows this list).
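As a rough illustration of the continuous integration practice mentioned in the last item above, the following Python sketch strings together the basic steps of one integration run. It assumes a local Git repository and a pytest test suite (both assumptions made purely for illustration); a real team would normally use a dedicated CI server rather than a hand-rolled script.

import subprocess
import sys

def run(step, command):
    # Run one pipeline step and stop the build if it fails.
    print(f"== {step} ==")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"{step} failed; build rejected.")

if __name__ == "__main__":
    run("Fetch latest changes", ["git", "pull", "--ff-only"])
    run("Run the test suite", ["python", "-m", "pytest", "-q"])
    print("All checks passed; the changes can be integrated.")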

Basic Principles of Extreme programming


XP is based on the frequent iteration through which the developers implement User Stories. User
stories are simple and informal statements of the customer about the functionalities needed. A User
Story is a conventional description by the user of a feature of the required system. It does not
mention finer details such as the different scenarios that can occur. Based on User stories, the
project team proposes Metaphors. Metaphors are a common vision of how the system would work.
The development team may decide to build a Spike for some features. A Spike is a very simple
program that is constructed to explore the suitability of a solution being proposed. It can be
considered similar to a prototype. Some of the basic activities that are followed during software
development by using the XP model are given below:
 Coding: The concept of coding which is used in the XP model is slightly different from
traditional coding. Here, the coding activity includes drawing diagrams (modeling) that will be
transformed into code, scripting a web-based system, and choosing among several alternative
solutions.
 Testing: The XP model gives high importance to testing and considers it to be the primary
factor in developing fault-free software.
 Listening: The developers need to listen carefully to the customers if they have to develop good-quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed. So, the programmers should properly understand the functionality of the system, and for that they have to listen to the customers.
 Designing: Without a proper design, a system implementation becomes too complex, and the solution becomes very difficult to understand, making maintenance expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
 Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
 Simplicity: The main principle of the XP model is to develop a simple system that will work
efficiently in the present time, rather than trying to build something that would take time and
may never be used. It focuses on some specific features that are immediately needed, rather
than engaging time and effort on speculations of future requirements.
 Pair Programming: XP encourages pair programming where two developers work together at
the same workstation. This approach helps in knowledge sharing, reduces errors, and improves
code quality.
 Continuous Integration: In XP, developers integrate their code into a shared repository several
times a day. This helps to detect and resolve integration issues early on in the development
process.
 Refactoring: XP encourages refactoring, which is the process of restructuring existing code to make it more efficient and maintainable without changing its behavior. Refactoring helps to keep the codebase clean, organized, and easy to understand (a small before-and-after sketch follows this list).
 Collective Code Ownership: In XP, there is no individual ownership of code. Instead, the
entire team is responsible for the codebase. This approach ensures that all team members have a
sense of ownership and responsibility towards the code.
 Planning Game: XP follows a planning game, where the customer and the development team
collaborate to prioritize and plan development tasks. This approach helps to ensure that the
team is working on the most important features and delivers value to the customer.
 On-site Customer: XP requires an on-site customer who works closely with the development
team throughout the project. This approach helps to ensure that the customer’s needs are
understood and met, and also facilitates communication and feedback.
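To illustrate the refactoring activity listed above, here is a small before-and-after sketch in Python. The billing functions and the 25% tax rate are invented for illustration: the observable behaviour stays the same, but the duplicated calculation is extracted into one well-named helper, which is exactly what refactoring aims for.

# Before refactoring: the (made-up) tax rule is repeated in two places.
def invoice_total_before(items):
    return sum(price + price * 0.25 for price in items)

def receipt_line_before(price):
    return f"Amount due: {price + price * 0.25:.2f}"

# After refactoring: the shared rule lives in exactly one place.
TAX_RATE = 0.25

def with_tax(price):
    return price * (1 + TAX_RATE)

def invoice_total(items):
    return sum(with_tax(price) for price in items)

def receipt_line(price):
    return f"Amount due: {with_tax(price):.2f}"

if __name__ == "__main__":
    # The behaviour is unchanged by the refactoring.
    assert invoice_total_before([100, 50]) == invoice_total([100, 50])
    assert receipt_line_before(100) == receipt_line(100)
    print("Refactored code behaves identically.")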

Applications of Extreme Programming (XP)


Some of the projects that are suitable to develop using the XP model are given below:
 Small projects: The XP model is very useful in small projects consisting of small teams as
face-to-face meeting is easier to achieve.
 Projects involving new technology or Research projects: This type of project faces changing
requirements rapidly and technical problems. So XP model is used to complete this type of
project.
 Web development projects: The XP model is well-suited for web development projects as the
development process is iterative and requires frequent testing to ensure the system meets the
requirements.
 Collaborative projects: The XP model is useful for collaborative projects that require close
collaboration between the development team and the customer.
 Projects with tight deadlines: The XP model can be used in projects that have a tight deadline,
as it emphasizes simplicity and iterative development.
 Projects with rapidly changing requirements: The XP model is designed to handle rapidly
changing requirements, making it suitable for projects where requirements may change
frequently.
 Projects where quality is a high priority: The XP model places a strong emphasis on testing
and quality assurance, making it a suitable approach for projects where quality is a high
priority.
XP, and other agile methods, are suitable for situations where the volume and space of requirements
change are high and where requirement risks are considerable.

Life Cycle of Extreme Programming (XP)


The Extreme Programming Life Cycle consists of five phases:

1. Planning: The first stage of Extreme Programming is planning. During this phase, clients
define their needs in concise descriptions known as user stories. The team calculates the effort
required for each story and schedules releases according to priority and effort.
2. Design: The team creates only the essential design needed for current user stories, using a
common analogy or story to help everyone understand the overall system architecture and keep
the design straightforward and clear.
3. Coding: Extreme Programming (XP) promotes pair programming, i.e. two developers work together at one workstation, enhancing code quality and knowledge sharing. They write tests before coding to ensure functionality from the start (TDD), and frequently integrate their code into a shared repository with automated tests to catch issues early.
4. Testing: Extreme Programming (XP) gives great importance to testing, which consists of both unit tests and acceptance tests. Unit tests, which are automated, check whether specific features work correctly. Acceptance tests, conducted by customers, ensure that the overall system meets the initial requirements. This continuous testing ensures the software's quality and alignment with customer needs.
5. Listening: In the listening phase, the team gathers regular feedback from customers to ensure the product meets their needs and to adapt to any changes.

Values of Extreme Programming (XP)


There are five core values of Extreme Programming (XP)
1. Communication: The essence of communication is for information and ideas to be exchanged
amongst development team members so that everyone has an understanding of the system
requirements and goals. Extreme Programming (XP) supports this by allowing open and
frequent communication between members of a team.
2. Simplicity: Keeping things as simple as possible helps reduce complexity and makes it easier
to understand and maintain the code.
3. Feedback: Constant feedback loops, through testing as well as customer involvement, help in detecting problems early during development.
4. Courage: Team members are encouraged to take risks, speak up about problems, and adapt to
change without fear of repercussions.
5. Respect: Every member's input and opinion is valued, which promotes a collaborative and supportive way of working within the team.

Advantages of Extreme Programming (XP)


Extreme Programming helps to address several common project risks:
 Slipped schedules: Short, achievable development cycles ensure timely delivery.
 Misunderstanding the business and/or domain: Including the client on the team ensures constant contact and clarification.
 Canceled projects: A focus on ongoing customer engagement guarantees open communication with the customer and prompt problem-solving.
 Staff turnover: Cooperative, team-focused work provides enthusiasm and goodwill, and multidisciplinary cohesion fosters team spirit.
 Costs incurred in changes: Extensive and continuing testing ensures that modifications do not impair the functioning of the system; a working system always guarantees that there is enough time to accommodate changes without impairing ongoing operations.
 Business changes: Changes are accepted at any moment, since they are seen as inevitable.
 Production and post-delivery defects: The emphasis on unit tests helps find and repair bugs as soon as possible.

Conclusion
Extreme Programming (XP) is a Software Development Methodology, known for its flexibility,
collaboration and rapid feedback using techniques like continuous testing, frequent releases, and
pair programming, in which two programmers collaborate on the same code. XP supports user
involvement throughout the development process while prioritizing simplicity and communication.

Overall, XP aims to deliver high-quality software quickly and adapt to changing requirements
effectively.
Frequently Asked Questions on Extreme Programming – FAQ’s
1. What are the 5 phases of extreme programming?
Five Phases of Extreme Programming are:
 Planning.
 Design.
 Coding.
 Testing.
 Listening

2. Why use extreme programming?


Extreme Programming (XP) enables developers to respond to user stories, adapt, and modify in real
time.

3. Who created extreme programming?


In 1996, software developer Kent Beck created XP as a lightweight agile framework. He structured the methodology around 12 core practices. XP predates the Agile Manifesto (2001) and influenced many of the principles it describes.

SDLC V-MODEL – SOFTWARE ENGINEERING


The V-model is a type of SDLC model where the process executes sequentially in a V-shape. It is also
known as the Verification and Validation model. It is based on the association of a testing phase for
each corresponding development stage. The development of each step is directly associated with the
testing phase. The next phase starts only after completion of the previous phase i.e., for each
development activity, there is a testing activity corresponding to it.
Table of Content
 V-Model Design
 Importance of V-Model
 Principles of V-Model
 When to Use of V-Model?
 Advantages of V-Model
 Disadvantages of V-Model
 Conclusion

The V-Model is a software development life cycle (SDLC) model that provides a systematic and
visual representation of the software development process. It is based on the idea of a “V” shape, with
the two legs of the “V” representing the progression of the software development
process from requirements gathering and analysis to design, implementation, testing, and
maintenance.

V-Model Design
1. Requirements Gathering and Analysis: The first phase of the V-Model is the requirements
gathering and analysis phase, where the customer’s requirements for the software are gathered
and analyzed to determine the scope of the project.
2. Design: In the design phase, the software architecture and design are developed, including the
high-level design and detailed design.
3. Implementation: In the implementation phase, the software is built based on the design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the customer’s
requirements and is of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure that it continues to
meet the customer’s needs and expectations.

The V-Model is often used in safety-critical systems, such as aerospace and defence systems, because of its emphasis on thorough testing and its ability to clearly define the steps involved in the software development process.

The following illustration depicts the different phases in a V-Model of the SDLC.
Verification Phases:
It involves a static analysis technique (review) done without executing code. It is the process of
evaluation of the product development phase to find whether specified requirements are met.
There are several Verification phases in the V-Model:

Business Requirement Analysis:


This is the first step of the development cycle, where the product requirements are understood from the customer's perspective. This phase involves detailed communication with the customer to understand their requirements and expectations. This is a very important activity that needs to be handled properly, because most of the time customers do not know exactly what they want and are not sure about it. Acceptance test design planning is also done at this stage, as the business requirements can be used as an input for acceptance testing.

System Design:
Design of the system starts once we are clear about the overall product requirements, and the complete system is then designed. This understanding is established at the beginning of the product development process, and it is beneficial for the future design and execution of system test cases.

Architectural Design:
In this stage, architectural specifications are comprehended and designed. Usually, several technical
approaches are put out, and the ultimate choice is made after considering both the technical and
financial viability. The system architecture is further divided into modules that each handle a distinct
function. Another name for this is High-Level Design (HLD).
At this point, the exchange of data and communication between the internal modules and external
systems are well understood and defined. During this phase, integration tests can be created and
documented using the information provided.

Module Design:
This phase, known as Low-Level Design (LLD), specifies the comprehensive internal design for
every system module. Compatibility between the design and other external systems as well as other
modules in the system architecture is crucial. Unit tests are a crucial component of any development process since they assist in identifying and eradicating the majority of mistakes and flaws at an early stage. Based on the internal module designs, these unit tests may now be created.

Coding Phase:
The Coding step involves writing the code for the system modules that were created during the
Design phase. The system and architectural requirements are used to determine which programming
language is most appropriate.
The coding standards and principles are followed when performing the coding. Before the final build
is checked into the repository, the code undergoes many code reviews and is optimized for the best performance.

Validation Phases:
It involves dynamic analysis techniques (functional, and non-functional), and testing done by
executing code. Validation is the process of evaluating the software after the completion of the
development phase to determine whether the software meets the customer’s expectations and
requirements.
So, the V-Model contains Verification phases on one side and Validation phases on the other side. The Verification and Validation phases are joined by the coding phase in a V-shape. Thus, it is called the V-Model.

There are several Validation phases in the V-Model:


Unit Testing:
Unit Test Plans are developed during the module design phase. These Unit Test Plans are executed to
eliminate bugs in code or unit level.
Integration testing:
After completion of unit testing Integration testing is performed. In integration testing, the modules
are integrated and the system is tested. Integration testing is performed in the Architecture design
phase. This test verifies the communication of modules among themselves.
System Testing:
System testing tests the complete application with its functionality, inter-dependency, and
communication. It tests the functional and non-functional requirements of the developed application.
User Acceptance Testing (UAT):
UAT is performed in a user environment that resembles the production environment. UAT verifies
that the delivered system meets the user’s requirement and the system is ready for use in the real
world.
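As a small illustration of how the unit and integration levels above differ, the hypothetical Python sketch below (the discount and order functions are invented examples) first tests one module in isolation and then tests the modules working together.

import unittest

def discount(price, percent):
    # Unit under test: apply a percentage discount to a single price.
    return price - price * percent / 100

def order_total(prices, percent):
    # Integrates the discount module with order-level summation.
    return sum(discount(p, percent) for p in prices)

class UnitLevelTest(unittest.TestCase):
    # Corresponds to a Unit Test Plan prepared during module design.
    def test_discount_on_one_price(self):
        self.assertEqual(discount(200, 10), 180)

class IntegrationLevelTest(unittest.TestCase):
    # Verifies that the modules communicate correctly when combined.
    def test_total_over_several_prices(self):
        self.assertEqual(order_total([200, 100], 10), 270)

if __name__ == "__main__":
    unittest.main()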

Design Phase:
 Requirement Analysis: This phase contains detailed communication with the customer to
understand their requirements and expectations. This stage is known as Requirement Gathering.
 System Design: This phase contains the system design and the complete hardware and
communication setup for developing the product.
 Architectural Design: System design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with the
outside world (other systems) is clearly understood.
 Module Design: In this phase, the system breaks down into small modules. The detailed design
of modules is specified, also known as Low-Level Design (LLD).

Testing Phases:
 Unit Testing: Unit Test Plans are developed during the module design phase. These Unit Test
Plans are executed to eliminate bugs at the code or unit level.
 Integration testing: After completion of unit testing Integration testing is performed. In
integration testing, the modules are integrated, and the system is tested. Integration testing is
performed in the Architecture design phase. This test verifies the communication of modules
among themselves.

 System Testing: System testing tests the complete application with its functionality,
interdependency, and communication. It tests the functional and non-functional requirements of
the developed application.
 User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the
production environment. UAT verifies that the delivered system meets the user’s requirement and
the system is ready for use in the real world.
Industrial Challenge:
As the industry has evolved, the technologies have become more complex, increasingly faster, and
forever changing, however, there remains a set of basic principles and concepts that are as applicable
today as when IT was in its infancy.
 Accurately define and refine user requirements.
 Design and build an application according to the authorized user requirements.
 Validate that the application they had built adhered to the authorized business requirements.

Importance of V-Model
1. Early Defect Identification
By incorporating verification and validation tasks into every stage of the development process, the V-
Model encourages early testing. This lowers the cost and effort needed to remedy problems later in
the development lifecycle by assisting in the early detection and resolution of faults.
2. Mapping Development Phases to Testing Phases
The V-Model contains a testing phase that corresponds to each stage of the development process. This clear mapping between development and testing activities promotes a methodical and orderly approach to software engineering.
3. Prevents “Big Bang” Testing
Testing is frequently done at the very end of the development lifecycle in traditional development
models, which results in a “Big Bang” approach where all testing operations are focused at once. By
integrating testing activities into the development process and encouraging a more progressive and
regulated testing approach, the V-Model prevents this.
4. Improves Cooperation
At every level, the V-Model promotes cooperation between the testing and development teams.
Through this collaboration, project requirements, design choices, and testing methodologies are better
understood, which improves the effectiveness and efficiency of the development process.
5. Improved Quality Assurance
Overall quality assurance is enhanced by the V-Model, which incorporates testing operations at every
level. Before the program reaches the final deployment stage, it makes sure that it satisfies the
requirements and goes through a strict validation and verification process.

Principles of V-Model
 Large to Small: In the V-Model, testing is done from a hierarchical perspective; for example, requirements are identified by the project team, followed by the High-Level Design and Detailed Design phases of the project. As each of these phases is completed, the requirements they define become more and more refined and detailed.
 Data/Process Integrity: This principle states that the successful design of any project requires the incorporation and cohesion of both data and processes. Process elements must be identified for every requirement.
 Scalability: This principle states that the V-Model concept has the flexibility to accommodate
any IT project irrespective of its size, complexity, or duration.
 Cross Referencing: A direct correlation between requirements and corresponding testing activity
is known as cross-referencing.

Tangible Documentation:
This principle states that every project needs to create a document. This documentation is required
and applied by both the project development team and the support team. Documentation is used to
maintain the application once it is available in a production environment.

Why preferred?
 It is easy to manage due to the rigidity of the model. Each phase of V-Model has specific
deliverables and a review process.
 Proactive defect tracking – that is defects are found at an early stage.

When to Use of V-Model?


 Traceability of Requirements: The V-Model proves beneficial in situations when it’s imperative
to create precise traceability between the requirements and their related test cases.
 Complex Projects: The V-Model offers a methodical way to manage testing activities and reduce
risks related to integration and interface problems for projects with a high level of complexity and
interdependencies among system components.
 Waterfall-Like Projects: Since the V-Model offers an approachable structure for organizing,
carrying out, and monitoring testing activities at every level of development, it is appropriate for
projects that use a sequential approach to development, much like the waterfall model.
 Safety-Critical Systems: These systems are used in the aerospace, automotive, and healthcare
industries. They place a strong emphasis on rigid verification and validation procedures, which
help to guarantee that essential system requirements are fulfilled and that possible risks are found
and eliminated early in the development process.

Advantages of V-Model
 This is a highly disciplined model and Phases are completed one at a time.
 V-Model is used for small projects where project requirements are clear.
 Simple and easy to understand and use.
 This model focuses on verification and validation activities early in the life cycle thereby
enhancing the probability of building an error-free and good quality product.
 It enables project management to track progress accurately.
 Clear and Structured Process: The V-Model provides a clear and structured process for software
development, making it easier to understand and follow.
 Emphasis on Testing: The V-Model places a strong emphasis on testing, which helps to ensure the
quality and reliability of the software.
 Improved Traceability: The V-Model provides a clear link between the requirements and the final
product, making it easier to trace and manage changes to the software.
 Better Communication: The clear structure of the V-Model helps to improve communication
between the customer and the development team.

Disadvantages of V-Model
 High risk and uncertainty.
 It is not good for complex and object-oriented projects.
 It is not suitable for projects where requirements are not clear and contain a high risk of changing.
 This model does not support iteration of phases.
 It does not easily handle concurrent events.
 Inflexibility: The V-Model is a linear and sequential model, which can make it difficult to adapt
to changing requirements or unexpected events.
 Time-Consuming: The V-Model can be time-consuming, as it requires a lot of documentation and
testing.
 Overreliance on Documentation: The V-Model places a strong emphasis on documentation, which
can lead to an overreliance on documentation at the expense of actual development work.

Conclusion
A scientific and organized approach to the Software Development Life Cycle (SDLC) is provided by
the Software Engineering V-Model. The team’s expertise with the selected methodology, the unique
features of the project, and the nature of the requirements should all be taken into consideration when
selecting any SDLC models, including the V-Model.

Coupling and Cohesion – Software Engineering
The purpose of the Design phase in the Software Development Life Cycle is to produce a solution to a
problem given in the SRS(Software Requirement Specification) document. The output of the design
phase is a Software Design Document (SDD).

Coupling and Cohesion are two key concepts in software engineering that are used to measure the
quality of a software system’s design.
Table of Content
 What is Coupling and Cohesion?
 Types of Coupling
 Types of Cohesion
 Advantages of low coupling
 Advantages of high cohesion
 Disadvantages of high coupling
 Disadvantages of low cohesion
 Conclusion
What is Coupling and Cohesion?
Coupling refers to the degree of interdependence between software modules. High coupling means
that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent, and changes in one module have little impact on other
modules.

Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a single
purpose, while low cohesion means that elements are loosely related and serve multiple purposes.

Both coupling and cohesion are important factors in determining the maintainability, scalability, and
reliability of a software system. High coupling and low cohesion can make a system difficult to
change and test, while low coupling and high cohesion make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the
customer what the system will do. Second is Technical Design which allows the system builders to
understand the actual hardware and software needed to solve a customer’s problem.

Conceptual design of the system:


 Written in simple language i.e. customer understandable language.
 Detailed explanation about system characteristics.
 Describes the functionality of the system.
 It is independent of implementation.
 Linked with requirement document.

Technical Design of the System:


 Hardware component and design.
 Functionality and hierarchy of software components.
 Software architecture
 Network architecture
 Data structure and flow of data.
 I/O component of the system.
 Shows interface.

Modularization is the process of dividing a software system into multiple independent modules where
each module works independently. There are many advantages of Modularization in software
engineering. Some of these are given below:
 Easy to understand the system.
 System maintenance is easy.
 A module can be reused many times as per requirements; there is no need to write the same code again and again.

Types of Coupling
Coupling is the measure of the degree of interdependence between the modules. A good software will
have low coupling.

Following are the types of Coupling:
 Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data coupling,
the components are independent of each other and communicate through data. Module
communications don’t contain tramp data. Example-customer billing system.
 Stamp Coupling In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors-
this choice was made by the insightful designer, not a lazy programmer.
 Control Coupling: If the modules communicate by passing control information, then they are
said to be control coupled. It can be bad if parameters indicate completely different behavior and
good if parameters allow factoring and reuse of functionality. Example- sort function that takes
comparison function as an argument.
 External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
 Common Coupling: The modules have shared data such as global data structures. The changes in
global data mean tracing back to all modules which access that data to evaluate the effect of the
change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control
data accesses, and reduced maintainability.
 Content Coupling: In a content coupling, one module can modify the data of another module, or
control flow is passed from one module to the other module. This is the worst form of coupling
and should be avoided.
 Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or
order of events, such as one module needing to execute before another. This type of coupling can
result in design issues and difficulties in testing and maintenance.
 Sequential Coupling: Sequential coupling occurs when the output of one module is used as the
input of another module, creating a chain or sequence of dependencies. This type of coupling can
be difficult to maintain and modify.
 Communicational Coupling: Communicational coupling occurs when two or more modules
share a common communication mechanism, such as a shared message queue or database. This
type of coupling can lead to performance issues and difficulty in debugging.
 Functional Coupling: Functional coupling occurs when two modules depend on each other’s
functionality, such as one module calling a function from another module. This type of coupling
can result in tightly-coupled code that is difficult to modify and maintain.
 Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a
common data structure, such as a database table or data file. This type of coupling can lead to
difficulty in maintaining the integrity of the data structure and can result in performance issues.

 Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking
methods of other classes. Like with functions, the worst form of coupling here is if methods
directly access internal parts of other methods. Coupling is lowest if methods communicate
directly through parameters.
 Component Coupling: Component coupling refers to the interaction between two classes where
a class has variables of the other class. Three clear situations exist as to how this can happen. A
class C can be component coupled with another class C1, if C has an instance variable of type C1,
or C has a method whose parameter is of type C1, or if C has a method which has a local variable
of type C1. It should be clear that whenever there is component coupling, there is likely to be
interaction coupling.
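The contrast between the better and worse forms of coupling listed above can be seen in a short, hypothetical Python sketch (the billing functions and the shared customer record are invented for illustration): the data-coupled version exchanges only the values it needs, while the common-coupled version communicates through shared global state.

# Data coupling: the billing function receives plain values and returns a value.
def compute_bill(units_consumed, rate_per_unit):
    return units_consumed * rate_per_unit

# Common coupling: both functions read and write the same global record,
# so a change in one module silently affects the other.
current_customer = {"units": 0, "rate": 0.0, "bill": 0.0}

def load_customer(units, rate):
    current_customer["units"] = units
    current_customer["rate"] = rate

def compute_bill_shared():
    current_customer["bill"] = current_customer["units"] * current_customer["rate"]
    return current_customer["bill"]

if __name__ == "__main__":
    print(compute_bill(120, 6.5))   # data-coupled call: explicit inputs and output
    load_customer(120, 6.5)
    print(compute_bill_shared())    # common-coupled call: hidden shared state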

Types of Cohesion
Cohesion is a measure of the degree to which the elements of the module are functionally related. It is
the degree to which all elements directed towards performing a single task are contained in the
component. Basically, cohesion is the internal glue that keeps the module together. A good software
design will have high cohesion.


Following are the types of Cohesion:

 Functional Cohesion: Every essential element for a single computation is contained in the
component. A functional cohesion performs the task and functions. It is an ideal situation.
 Sequential Cohesion: An element outputs some data that becomes the input for another element, i.e., data flows between the parts. It occurs naturally in functional programming languages.
 Communicational Cohesion: Two elements operate on the same input data or contribute towards
the same output data. Example- update record in the database and send it to the printer.
 Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions
are still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student
record, calculate cumulative GPA, print cumulative GPA.
 Temporal Cohesion: The elements are related by the timing involved. In a module with temporal cohesion, all the tasks must be executed in the same time span. Such a module often contains the code for initializing all the parts of the system, where many different activities occur, all within the same span of time.
 Logical Cohesion: The elements are logically related and not functionally. Ex- A component
reads inputs from tape, disk, and network. All the code for these functions is in the same
component. Operations are related, but the functions are significantly different.
 Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion. Ex- print the next line and reverse the characters of a string in a single component.
 Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together in a module based on their sequence of execution, such as a module that performs a set of related procedures in a specific order. Procedural cohesion can be found in structured programming languages.
 Communicational Cohesion: Communicational cohesion occurs when elements or tasks are
grouped together in a module based on their interactions with each other, such as a module that
handles all interactions with a specific external system or module. This type of cohesion can be
found in object-oriented programming languages.
 Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a
module based on their timing or frequency of execution, such as a module that handles all
periodic or scheduled tasks in a system. Temporal cohesion is commonly used in real-time and
embedded systems.
 Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped
together in a module based on their relationship to a specific data structure or object, such as a
module that operates on a specific data type or object. Informational cohesion is commonly used
in object-oriented programming.
 Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module
contribute to a single well-defined function or purpose, and there is little or no coupling between
the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to
more maintainable and reusable code.
 Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together
based on their level of abstraction or responsibility, such as a module that handles only low-level
hardware interactions or a module that handles only high-level business logic. Layer cohesion is
commonly used in large-scale software systems to organize code into manageable layers.
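As a rough illustration (hypothetical Python functions, reusing the GPA and print/reverse examples mentioned in the list above), the first function below is functionally cohesive, while the second lumps unrelated tasks together and is only coincidentally cohesive.

# Functional cohesion: every statement contributes to one computation (the GPA).
def compute_gpa(grade_points, credits):
    total_points = sum(g * c for g, c in zip(grade_points, credits))
    return total_points / sum(credits)

# Coincidental cohesion: unrelated tasks bundled into one function, related
# only by their location in the source code.
def misc_utilities(line, text):
    printed = "next line: " + line    # print-the-next-line style task
    reversed_text = text[::-1]        # reverse the characters of a string
    return printed, reversed_text

if __name__ == "__main__":
    print(compute_gpa([4.0, 3.0], [3, 4]))   # ~3.43
    print(misc_utilities("hello", "abc"))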

Advantages of low coupling


 Improved maintainability: Low coupling reduces the impact of changes in one module on other
modules, making it easier to modify or replace individual components without affecting the entire
system.
 Enhanced modularity: Low coupling allows modules to be developed and tested in isolation,
improving the modularity and reusability of code.
 Better scalability: Low coupling facilitates the addition of new modules and the removal of
existing ones, making it easier to scale the system as needed.

Advantages of high cohesion


 Improved readability and understandability: High cohesion results in clear, focused modules with
a single, well-defined purpose, making it easier for developers to understand the code and make
changes.
 Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to locate and isolate errors.
 Improved reliability: High cohesion leads to modules that are less prone to errors and that function more consistently, leading to an overall improvement in the reliability of the system.

Disadvantages of high coupling


 Increased complexity: High coupling increases the interdependence between modules, making the
system more complex and difficult to understand.
 Reduced flexibility: High coupling makes it more difficult to modify or replace individual
components without affecting the entire system.
 Decreased modularity: High coupling makes it more difficult to develop and test modules in
isolation, reducing the modularity and reusability of code.

Disadvantages of low cohesion


 Increased code duplication: Low cohesion can lead to the duplication of code, as elements that
belong together are split into separate modules.
 Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain
elements that don’t belong together, reducing their functionality and making them harder to
maintain.
 Difficulty in understanding the module: Low cohesion can make it harder for developers to
understand the purpose and behavior of a module, leading to errors and a lack of clarity.

Conclusion
In conclusion, it’s good for software to have low coupling and high cohesion. Low coupling means
the different parts of the software don’t rely too much on each other, which makes it safer to make
changes without causing unexpected problems. High cohesion means each part of the software has a
clear purpose and sticks to it, making the code easier to work with and reuse. Following these
principles helps make software stronger, more adaptable, and easier to grow.

Information System Life Cycle – Software Engineering


Information System Life Cycle (ISLC) is a framework used to manage the development, maintenance,
and retirement of an organization’s information systems. This article focuses on discussing the
Information System Life Cycle in detail.

What is the Information System Life Cycle (ISLC)?


The ISLC is a useful framework for managing the development, maintenance, and retirement of an
organization’s information systems.
1. In a large organization, the database system is typically part of the information system which
includes all the resources that are involved in the collection, management, use, and dissemination
of the information resources of the organization.
2. Thus, the database system is a part of a much larger organizational information system.
3. ISLC helps to ensure that information systems meet the needs of the organization and are
developed in a structured and controlled manner.
4. However, it can be difficult to maintain control over the entire process, especially as the
organization’s needs change over time.

Phases of ISLC
The information system life cycle is also known as the macro life cycle. It typically includes the following phases:

1. Feasibility Analysis
This phase is basically concerned with the following points:
1. Analyzing potential application areas.
2. Identifying the economics of information gathering.
3. Performing preliminary cost benefit studies.
4. Determining the complexity of data and processes.
5. Setting up priorities among applications.

2. Requirements Collection and Analysis


In this phase, we basically do the following:
1. Detailed requirements are collected by interacting with potential users and groups to identify their
particular problems and needs.
2. Inter application dependencies are identified.
3. Communication and reporting procedures are identified.

3. Design
This phase has the following two aspects:
1. Design of the database.
2. Design of the application system that uses and processes the database.
4. Implementation
In this phase following steps are implemented:
1. The information system is implemented.
2. The database is loaded.
3. The database transactions are implemented and tested.

5. Validation and Acceptance Testing


The acceptability of the system in meeting the users' requirements and performance criteria is validated. The system is tested against the performance criteria and behavior specifications.

6. Deployment Operation and Maintenance


This may be preceded by conversion of users from an older system as well as by user training. The operational phase starts when all system functions are operational and have been validated. As new requirements or applications crop up, they pass through all the previous phases until they are validated and incorporated into the system. Monitoring and system maintenance are important activities during the operational phase.

7. Training and Support


The deployment phase includes training and support for end-users and administrators. It is essential to
provide adequate training to ensure that users can effectively use the new system and take advantage
of its features. Ongoing support is also necessary to address any issues that may arise.

8. Continuous Improvement
The information system life cycle is a continuous process of improvement. The system should be
regularly evaluated to identify areas for improvement, such as performance, functionality, and
usability. This may involve revisiting previous phases of the cycle to make changes or improvements.

9. Risk Management
Throughout the entire ISLC, risk management should be an integral part of the process. This includes
identifying potential risks and developing strategies to mitigate them. Risk management should be an
ongoing process throughout the life cycle, from the feasibility analysis to deployment and
maintenance.

10. Integration
Integration with other systems is often necessary, and should be considered early in the life cycle.
This includes integration with existing systems, as well as with new systems that may be developed in
the future.

11. Scalability
As the organization grows and changes, the information system must be able to scale up to meet new
demands. This should be considered during the design phase to ensure that the system can
accommodate future growth and changes in the organization.

12. Sustainability
Sustainable design and development practices should be considered throughout the ISLC to reduce the
environmental impact of the information system. This includes reducing energy consumption,
minimizing waste, and using sustainable materials where possible.
Benefits of Using the ISLC Framework
1. Improved alignment with business goals: By following the ISLC, organizations can ensure that
their information systems align with their business goals and support the organization’s overall
mission.
2. Better project management: The ISLC provides a structured and controlled approach to
managing information system projects, which can help to improve project management and
reduce risks.
3. Increased efficiency: The ISLC can help organizations to use their resources more efficiently, by
ensuring that the development, maintenance, and retirement of information systems is planned
and managed in a consistent and controlled manner.
4. Improved user satisfaction: By involving users in the ISLC process, organizations can ensure
that their information systems meet the needs of the users, which can lead to improved user
satisfaction.
5. Better data management: By following the ISLC, organizations can ensure that their data is
properly managed throughout the entire system’s life cycle, which can help to improve data
quality and reduce risks associated with data loss or corruption.
6. Enhanced security: The ISLC can help organizations to ensure that their information systems are
designed, developed, and maintained with security in mind. This can help to reduce the risk of
data breaches and other security incidents.
7. Improved collaboration: The ISLC can help to promote collaboration between different teams
and departments involved in the development, maintenance, and retirement of information
systems. This can lead to better communication, more efficient use of resources, and improved
outcomes.
8. Better compliance: The ISLC can help organizations to ensure that their information systems
comply with relevant laws, regulations, and industry standards. This can help to reduce the risk of
legal and financial penalties, as well as damage to the organization’s reputation.
9. Increased agility: The ISLC can help organizations to be more agile and responsive to changing
business needs and technological trends. By using a structured and flexible approach to
information system development and management, organizations can more easily adapt to
changing requirements and opportunities.
10. Enhanced innovation: The ISLC can help to promote innovation and creativity in information
system development and management. By encouraging experimentation, iteration, and continuous
improvement, organizations can discover new ways to use technology to support their business
goals and mission.
11. Better cost management: By following the ISLC, organizations can ensure that they are only
investing in information systems that will provide value to the organization, and that the systems
are retired before they become too costly to maintain.

What is the Database Application Development?


Database application development is the process of obtaining the following things:
1. Real-world requirements.
2. Analyzing the real-world requirements.
3. To design the data and functions of the system.
4. Implementing the operations in the system.

Database Application Life Cycle


Activities related to the database application system (micro) life cycle include the following:
1. System Definition
The scope of the database system, its users, and its application are defined. The interfaces for
various categories of users, the response time constraints, and storage and processing needs are
identified.
2. Database Design
At the end of this phase, a complete logical and physical design of the database system on the
chosen DBMS is ready.
3. Database Implementation
This comprises the process of specifying the conceptual, external, and internal database definitions, creating empty database files, and implementing the software application.
4. Loading or Data Conversion
The database is populated either by loading the data directly or by converting existing files into
database system format.
5. Application Conversion
Any software application from a previous system is converted to the new system.
6. Testing and Validation
The new system is tested and validated. Testing and validation of application programs can be a very involved process, and the techniques employed are usually covered in software engineering courses. Automated tools may assist in this process.
7. Operation
The database system and its applications are put into operation. Usually, the old and new systems are operated in parallel for some time.
8. Monitoring and Maintenance
During the operational phase, the system is continuously monitored and maintained. The database may need to be tuned, reorganized, or extended as new requirements and usage patterns emerge.

Phases of Project Management Process


Project management involves several key phases that guide the project from initiation to completion,
ensuring that objectives are met efficiently and effectively. It’s like having a step-by-step guide to
follow, ensuring you stay on track and reach your goals smoothly. These phases typically include
initiation, planning, execution, monitoring and control, and closure. Each phase is crucial for
managing tasks, resources, timelines, and deliverables, ultimately leading to the successful completion
of projects.
This article explores each phase of project management in detail, highlighting their importance and
how they contribute to project success.

What is the Project Management Process?


Project Management is the discipline of planning, monitoring, and controlling software projects: identifying the scope, estimating the work involved, and creating a project schedule. It is also responsible for keeping the team up to date on the project's progress, handling problems, and discussing solutions.
Phases of Project Management Process
1. Project Initiation/Feasibility Study:
A feasibility study explores system requirements to determine project feasibility. There are several
fields of feasibility study including economic feasibility, operational feasibility, and technical
feasibility. The goal is to determine whether the system can be implemented or not. The process of
feasibility study takes as input the required details as specified by the user and other domain-
specific details. The output of this process simply tells whether the project should be undertaken or
not and if yes, what would the constraints be. Additionally, all the risks and their potential effects
on the projects are also evaluated before a decision to start the project is taken.
This phase of Project Management involves defining the project, identifying the stakeholders, and
establishing the project’s goals and objectives.

2. Project Planning:
In this phase of Project Management, the project manager defines the scope of the project, develops
a detailed project plan, and identifies the resources required to complete the project. A detailed plan
stating a stepwise strategy to achieve the listed objectives is an integral part of any project. Planning
consists of the following activities:
 Set objectives or goals
 Develop strategies
 Develop project policies
 Determine courses of action
 Making planning decisions
 Set procedures and rules for the project
 Develop a software project plan
 Prepare budget
 Conduct risk management
 Document software project plans

This step also involves the construction of a work breakdown structure (WBS). It also includes size,
effort, schedule, and cost estimation using various techniques.
3. Project Execution:
The Project Execution phase of the Project Management process involves the actual implementation
of the project, including the allocation of resources, the execution of tasks, and the monitoring and
control of project progress. A project is executed by choosing an appropriate software development
lifecycle model (SDLC). It includes several steps, including requirements analysis, design, coding, testing, implementation, delivery, and maintenance. Many factors need to be considered
while doing so including the size of the system, the nature of the project, time and budget
constraints, domain requirements, etc. An inappropriate SDLC can lead to the failure of the project.
4. Project Monitoring and Controlling:
This phase of Project Management involves tracking the project’s progress, comparing actual
results to the project plan, and making changes to the project as necessary. In the project management process, the third and fourth phases are not sequential in nature: monitoring and controlling runs continuously alongside project execution to ensure that the project deliverables meet expectations.
During this phase, the manager is responsible for properly tracking the cost and effort spent on the project. This tracking data is also valuable when budgeting and estimating future projects.
5. Project Closing:
There can be many reasons for the termination of a project. Though expecting a project to terminate
after successful completion is conventional, at times, a project may also terminate without
completion. Projects have to be closed down when the requirements are not fulfilled according to
given time and cost constraints. This phase of Project Management involves completing the project,
documenting the results, and closing out any open issues.
Some reasons for failure include:
 Fast-changing technology
 The project running out of time
 Organizational politics
 Too much change in customer requirements
 Project exceeding budget or funds

Once the project is terminated, a post-performance analysis is done. Also, a final report is published
describing the experiences, lessons learned, and recommendations for handling future projects.
Project management is a systematic approach to planning, organizing, and controlling the resources
required to achieve specific project goals and objectives. The project management process involves
a set of activities that are performed to plan, execute, and close a project. The project management
process can be divided into several phases, each of which has a specific purpose and set of tasks.

Principles of Project Management


For a project to be managed effectively, it should begin with well-defined tasks. Effective project planning helps to minimize the additional costs incurred on the project while it is in progress.
1. Planning is necessary: Planning should be done before a project begins.
2. Risk analysis: Before starting the project, senior management and the project management team
should consider the risks that may affect the project.
3. Tracking of project plan: Once the project plan is prepared, it should be tracked and modified
accordingly.
4. Meet quality standards and produce quality deliverables: The project plan should identify the processes by which the project management team can ensure quality in the software.
5. Flexibility to accommodate changes: The result of project planning is recorded in the form of a project plan, which should allow new changes to be accommodated while the project is in progress.

Advantages of the Project Management process:


 Provides a structured approach to managing projects.
 Helps to define project objectives and requirements.
 Facilitates effective communication and collaboration among team members.
 Helps to manage project risks and issues.
 Ensures that the project is delivered on time and within budget.

Disadvantages of the Project Management Process:


 Can be time-consuming and bureaucratic
 May be inflexible and less adaptable to changes
 Requires a skilled project manager to implement effectively
 May not be suitable for small or simple projects.

What is a Project life Cycle in Project Management?


The Project Life Cycle is a framework that outlines the phases a project goes through from
initiation to closure. It typically includes five main phases. Each phase has specific objectives,
activities, and deliverables, and the Project Life Cycle provides a structured approach for managing
and executing projects efficiently. The Project Life Cycle helps project managers and teams
understand the project’s progression, allocate resources effectively, manage risks, and ensure
successful project outcomes.

Responsibilities of Software Project Manager:


 Proper project management is essential for the successful completion of a software project and
the person who is responsible for it is called the project manager.
 To do his job effectively, the project manager must have a certain set of skills. This section
discusses both the job responsibilities of a project manager and the skills required by him.

Job Responsibilities of Software Project Manager:


 Involves the senior managers in the process of appointing team members.
 Builds the project team and assigns tasks to various team members.
 Responsible for effective project planning and scheduling, project monitoring, and control
activities to achieve the project objectives.
 Acts as a communicator between the senior management and the other persons involved in the
project like the development team and internal and external stakeholders.
 Effectively resolves issues that arise between the team members by changing their roles and
responsibilities.
 Modifies the project plan (if required) to deal with the situation.

Conclusion
Project management is a procedure that requires responsibility. The project management process
brings all the other project tasks together and ensures that the project runs smoothly. Understanding
the phases of project management—initiation, planning, execution, monitoring and control, and
closure—is crucial for successfully managing any project. Each phase plays a vital role in ensuring
that projects are completed on time, within budget, and to the satisfaction of stakeholders. By
meticulously following these phases, project managers can effectively coordinate tasks, resources,
and teams, address challenges proactively, and deliver high-quality outcomes.

What is Project Size Estimation?


Project size estimation is the process of determining the scope and resources required for the project.
1. It involves assessing the various aspects of the project to estimate the effort, time, cost, and
resources needed to complete the project.
2. Accurate project size estimation is important for effective and efficient project planning,
management, and execution.

Importance of Project Size Estimation


Here are some of the reasons why project size estimation is critical in project management:
1. Financial Planning: Project size estimation helps in planning the financial aspects of the
project, thus helping to avoid financial shortfalls.
2. Resource Planning: It ensures the necessary resources are identified and allocated accordingly.
3. Timeline Creation: It facilitates the development of realistic timelines and milestones for the
project.
4. Identifying Risks: It helps to identify potential risks associated with overall project execution.
5. Detailed Planning: It helps to create a detailed plan for the project execution, ensuring all the
aspects of the project are considered.
6. Planning Quality Assurance: It helps in planning quality assurance activities and ensuring that
the project outcomes meet the required standards.
Who Estimates Projects Size?
Here are the key roles involved in estimating the project size:
1. Project Manager: Project manager is responsible for overseeing the estimation process.
2. Subject Matter Experts (SMEs): SMEs provide detailed knowledge related to the specific
areas of the project.
3. Business Analysts: Business Analysts help in understanding and documenting the project
requirements.
4. Technical Leads: They estimate the technical aspects of the project such as system design,
development, integration, and testing.
5. Developers: They will provide detailed estimates for the tasks they will handle.
6. Financial Analysts: They provide estimates related to the financial aspects of the project
including labor costs, material costs, and other expenses.
7. Risk Managers: They assess the potential risks that could impact the projects’ size and effort.
8. Clients: They provide input on project requirements, constraints, and expectations.

Different Methods of Project Estimation


1. Expert Judgment: In this technique, a group of experts in the relevant field estimates the
project size based on their experience and expertise. This technique is often used when there is
limited information available about the project.
2. Analogous Estimation: This technique involves estimating the project size based on the
similarities between the current project and previously completed projects. This technique is
useful when historical data is available for similar projects.
3. Bottom-up Estimation: In this technique, the project is divided into smaller modules or tasks,
and each task is estimated separately. The estimates are then aggregated to arrive at the overall
project estimate.
4. Three-point Estimation: This technique involves estimating the project size using three
values: optimistic, pessimistic, and most likely. These values are then used to calculate the
expected project size using a formula such as the PERT formula.
5. Function Points: This technique involves estimating the project size based on the functionality
provided by the software. Function points consider factors such as inputs, outputs, inquiries,
and files to arrive at the project size estimate.
6. Use Case Points: This technique involves estimating the project size based on the number of
use cases that the software must support. Use case points consider factors such as the
complexity of each use case, the number of actors involved, and the number of use cases.
7. Parametric Estimation: For precise size estimation, mathematical models founded on project
parameters and historical data are used.
8. COCOMO (Constructive Cost Model): It is an algorithmic model that estimates effort, time,
and cost in software development projects by taking into account several different elements.
9. Wideband Delphi: A consensus-based estimating method that combines anonymous expert estimates with cooperative discussion to produce balanced size estimates.
10. Monte Carlo Simulation: This technique, which works especially well for complicated and
unpredictable projects, estimates project size and analyses hazards using statistical methods and
random sampling.

Each of these techniques has its strengths and weaknesses, and the choice of technique depends on
various factors such as the project’s complexity, available data, and the expertise of the team.
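As an illustration of the three-point technique listed above, here is a minimal Python sketch assuming the commonly quoted PERT weighting E = (O + 4M + P) / 6; the sample figures are made up.

# Three-point (PERT) estimation sketch; sizes/effort are in arbitrary units.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected value using the usual PERT weights."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Rough spread of the estimate, often taken as (P - O) / 6."""
    return (pessimistic - optimistic) / 6

if __name__ == "__main__":
    # e.g. a module estimated at 8 / 12 / 22 person-days
    print(pert_estimate(8, 12, 22))   # 13.0
    print(pert_std_dev(8, 22))        # ~2.33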
Estimating the Size of the Software
Estimation of the size of the software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time that will be needed to build the project.
Here are some of the measures that are used in project size estimation:
1. Lines of Code (LOC)
As the name suggests, LOC counts the total number of lines of source code in a project. The units
of LOC are:

1. KLOC: Thousand lines of code


2. NLOC: Non-comment lines of code
3. KDSI: Thousands of delivered source instruction
 The size is estimated by comparing it with the existing systems of the same kind. The experts
use it to predict the required size of various components of software and then add them to get
the total size.
 It’s tough to estimate LOC by analyzing the problem definition. Only after the whole code has
been developed can accurate LOC be estimated. This statistic is of little utility to project
managers because project planning must be completed before development activity can begin.
 Two separate source files having a similar number of lines may not require the same effort. A
file with complicated logic would take longer to create than one with simple logic. Proper
estimation may not be attainable based on LOC.
 The number of lines needed to solve a problem differs greatly from one programmer to the next: a seasoned programmer can write the same logic in fewer lines than a novice coder, so LOC on its own is a weak measure of effort.
Advantages:
1. Universally accepted and used in many models like COCOMO.
2. Estimation is closer to the developer's perspective.
3. It is used and accepted by people throughout the world.
4. At project completion, LOC is easily quantified.
5. It has a direct connection to the delivered product.
6. Simple to use.

Disadvantages:
1. Different programming languages contain a different number of lines.
2. No proper industry standard exists for this technique.
3. It is difficult to estimate the size using this technique in the early stages of the project.
4. When platforms and languages are different, LOC cannot be used to normalize.
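The following is a rough Python sketch of how LOC and non-comment LOC (NLOC) might be counted for a single source file. Real LOC tools are far more careful (multi-line strings, block comments, generated code), so treat this only as an illustration.

# Count total and non-comment lines of a Python source file (illustrative only).

def count_loc(path):
    total, non_comment = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue                      # skip blank lines
            total += 1
            if not stripped.startswith("#"):
                non_comment += 1              # NLOC ignores comment-only lines
    return total, non_comment

if __name__ == "__main__":
    loc, nloc = count_loc(__file__)
    print("LOC =", loc, "NLOC =", nloc)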

2. Number of Entities in ER Diagram


ER model provides a static view of the project. It describes the entities and their relationships. The
number of entities in the ER model can be used to measure the estimation of the size of the project.
The number of entities reflects the size of the project, because more entities need more classes/structures, leading to more coding.
Advantages:
1. Size estimation can be done during the initial stages of planning.
2. The number of entities is independent of the programming technologies used.

Disadvantages:
1. No fixed standards exist. Some entities contribute more to project size than others.
2. Just like FPA, it is less used in the cost estimation model. Hence, it must be converted to LOC.

3. Total Number of Processes in DFD


A Data Flow Diagram (DFD) represents the functional view of software. The model depicts the main processes/functions involved in the software and the flow of data between them. The number of processes in the DFD can be utilized to predict software size: already existing processes of a similar type are studied and used to estimate the size of each process, and the sum of the estimated sizes of all processes gives the final estimated size.
Advantages:
1. It is independent of the programming language.
2. Each major process can be decomposed into smaller processes. This will increase the accuracy
of the estimation.
Disadvantages:
1. Studying similar kinds of processes to estimate size takes additional time and effort.
2. Not all software projects require the construction of a DFD.

4. Function Point Analysis


In this method, the number and type of functions supported by the software are utilized to find
FPC (function point count). The steps in function point analysis are:
1. Count the number of functions of each proposed type.
2. Compute the Unadjusted Function Points (UFP).
3. Find the Total Degree of Influence (TDI).
4. Compute the Value Adjustment Factor (VAF).
5. Find the Function Point Count (FPC).

The explanation of the above points is given below:


1. Count the number of functions of each proposed type:
Find the number of functions belonging to the following types:
 External Inputs: Functions related to data entering the system.
 External outputs: Functions related to data exiting the system.
 External Inquiries: They lead to data retrieval from the system but don’t change the system.
 Internal Files: Logical files maintained within the system. Log files are not included here.
 External interface Files: These are logical files for other applications which are used by our
system.
2. Compute the Unadjusted Function Points(UFP):
Categorize each of the five function types as simple, average, or complex based on their
complexity. Multiply the count of each function type with its weighting factor and find the
weighted sum. The weighting factors for each type based on their complexity are as follows:
Function Type                Simple   Average   Complex
External Inputs                 3        4         6
External Outputs                4        5         7
External Inquiries              3        4         6
Internal Logical Files          7       10        15
External Interface Files        5        7        10
3. Find the Total Degree of Influence:
Use the 14 general characteristics of a system to find the degree of influence of each of them. The sum of all 14 degrees of influence gives the TDI. The range of TDI is 0 to 70. The 14 general characteristics are: Data Communications, Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-Line Data Entry, End-user Efficiency, Online Update, Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites, and Facilitate Change.
Each of the above characteristics is evaluated on a scale of 0-5.

4. Compute Value Adjustment Factor(VAF):


Use the following formula to calculate VAF:
VAF = (TDI * 0.01) + 0.65
5. Find the Function Point Count:
Use the following formula to calculate FPC:
FPC = UFP * VAF
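A small Python sketch of this computation is given below. It uses the weights from the table above; the counts and degrees of influence in the example are invented purely for illustration.

# Function point count: FPC = UFP * VAF, with VAF = (TDI * 0.01) + 0.65.

WEIGHTS = {
    "external_inputs":          {"simple": 3, "average": 4,  "complex": 6},
    "external_outputs":         {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiries":       {"simple": 3, "average": 4,  "complex": 6},
    "internal_logical_files":   {"simple": 7, "average": 10, "complex": 15},
    "external_interface_files": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    # counts maps a function type to {complexity: number of functions}
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, per_cplx in counts.items()
               for cplx, n in per_cplx.items())

def function_point_count(counts, degrees_of_influence):
    tdi = sum(degrees_of_influence)       # 14 characteristics, each rated 0-5
    vaf = (tdi * 0.01) + 0.65             # Value Adjustment Factor
    return unadjusted_fp(counts) * vaf    # FPC = UFP * VAF

if __name__ == "__main__":
    counts = {"external_inputs": {"average": 10},
              "external_outputs": {"simple": 5},
              "internal_logical_files": {"complex": 2}}
    print(function_point_count(counts, [3] * 14))   # UFP=90, TDI=42, VAF=1.07, FPC~96.3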
Advantages:
1. It can be easily used in the early stages of project planning.
2. It is independent of the programming language.
3. It can be used to compare different projects even if they use different technologies(database,
language, etc).

Disadvantages:
1. It is not good for real-time systems and embedded systems.
2. Many cost estimation models like COCOMO use LOC and hence FPC must be converted to
LOC.

When Should Estimates Take Place?


Project size estimates must take place at multiple key points throughout the project lifecycle. It
should take place during the following stages to ensure accuracy and relevance:
1. Project Initiation: Project is assessed to determine its feasibility and scope.
2. Project Planning: Precise estimates are done to create a realistic budget and timeline.
3. Project Execution: Re-estimation when there are significant changes in scope.
4. Project Monitoring and Control: Regular reviews to make sure that the project is on track.
5. Project Closeout: Comparing original estimates with actual outcomes and documenting
estimation accuracy.

Challenges in Project Size Estimation


Project size estimation can be challenging due to multiple factors. Here are some factors that can
affect the accuracy and reliability of estimates:
1. Unclear Requirements: Initial project requirements can be vague or subject to change, thus
making it difficult to estimate accurately.
2. Lack of Historical Data: Without access to the data of similar past projects, it becomes
difficult to make informed estimates, thus estimates becoming overly optimistic or pessimistic
and leading to inaccurate planning.
3. Interdependencies: Projects with numerous interdependent tasks are harder to estimate due to the complicated interactions between components.
4. Productivity Variability: Estimating the productivity of resources and their availability can be
challenging due to fluctuations and uncertainties.
5. Risks: Identifying and quantifying risks and uncertainties is very difficult. Underestimating the
potential risks can lead to inadequate contingency planning, thus causing the project to go off
track.

Improving Accuracy in Project Size Estimation


Improving the accuracy of project size estimation involves a combination of techniques and best
practices. Here are some key strategies to enhance estimation accuracy:
1. Define Clear Requirements: Ensure all project requirements are thoroughly documented and
engage all stakeholders early and frequently to clarify and validate the requirements.
2. Use Historical Data: Use data from similar past projects to make informed estimates.
3. Use Estimation Techniques: Use various estimation techniques like Analogue Estimation,
Parametric Estimation, Bottom-Up Estimation, and Three-Point Estimation.
4. Break Down the Project: Use a Work Breakdown Structure (WBS) and detailed task analysis to make sure that each task is specific and measurable.
5. Incorporate Expert Judgement: Engage subject matter experts and experienced team
members to provide input on estimates.

Future of Project Size Estimation


The future of project size estimation will be shaped by the advancements in technology and
methodologies. Here are some key developments that can define the future of project size
estimation:
1. Smarter Technology: Artificial intelligence (AI) could analyze past projects and code to give
more accurate forecasts, considering how complex the project features are.
2. Data-Driven Insights: Instead of just lines of code, estimates could consider factors like the
number of users, the type of software (mobile app vs. web app), and how much data it handles.
3. Human-AI Collaboration: Combining human expertise with AI can enhance the decision-
making process in project size estimation.
4. Collaborative Platforms: Tools that facilitate collaboration among geographically dispersed
teams can help to enhance the project size estimation process.
5. Agile Methodologies: The adoption of agile methodologies can promote continuous estimation
and iterative refinement.

Conclusion
In conclusion, accurate project size estimation is crucial for software project success. Traditional
techniques like lines of code have limitations. The future of estimation lies in AI and data-
driven insights for better resource allocation, risk management, and project planning.

Frequently Asked Questions on Project Size Estimation Techniques


1. Which technique is used for project estimation?
There are many project estimation techniques, including expert judgment, analogous estimation,
and bottom-up estimation.
2. What is a project estimation tool?
Project estimation tools are software applications that help project managers estimate the time,
resources, and cost required to complete a project.
3. What are the methods of estimation?
Project estimation methods can be top-down (breaking down large projects) or bottom-up (building
up from small tasks).

SYSTEM CONFIGURATION MANAGEMENT


Whenever software is built, there is always scope for improvement, and those improvements bring changes with them. Changes may be required to modify or update any existing solution or to create a
new solution for a problem. Requirements keep on changing daily so we need to keep on upgrading
our systems based on the current requirements and needs to meet desired outputs. Changes should
be analyzed before they are made to the existing system, recorded before they are implemented,
reported to have details of before and after, and controlled in a manner that will improve quality and
reduce error. This is where the need for System Configuration Management comes in. System Configuration Management (SCM) is a set of activities that controls change by identifying the items to be changed, establishing relationships between those items, defining mechanisms for managing different versions, controlling the changes being implemented in the current system, and auditing and reporting on the changes made. It is essential to control changes because, if they are not checked properly, they may end up undermining well-running software. In this way, SCM is a fundamental part of all project management activities.
Processes involved in SCM – Configuration management provides a disciplined environment for
smooth control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from products that
compose baselines at given points in time (a baseline is a set of mutually consistent
Configuration Items, which has been formally reviewed and agreed upon, and serves as the
basis of further development). Establishing relationships among items, creating a mechanism to
manage multiple levels of control and procedure for the change management system.
2. Version control – Creating versions/specifications of the existing product to build new
products with the help of the SCM system. A description of the version is given below:
Suppose that after some changes, the version of the configuration object changes from 1.0 to 1.1. Minor
corrections and changes result in versions 1.1.1 and 1.1.2, which is followed by a major update
that is object 1.2. The development of object 1.0 continues through 1.3 and 1.4, but finally, a
noteworthy change to the object results in a new evolutionary path, version 2.0. Both versions
are currently supported.
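One possible way to picture this evolution (a reading of the example above, not how an SCM tool actually stores things) is as a small parent-to-child version tree:

# Toy version tree for the evolution described above (illustrative only).

version_tree = {
    "1.0": ["1.1", "1.2"],       # 1.1: minor changes; 1.2: major update
    "1.1": ["1.1.1", "1.1.2"],   # minor corrections branch off 1.1
    "1.2": ["1.3"],
    "1.3": ["1.4"],
    "1.4": ["2.0"],              # a noteworthy change starts a new evolutionary path
    "2.0": [],
}

def ancestry(version):
    """Walk back to the root to see how a version evolved."""
    parents = {child: parent
               for parent, children in version_tree.items()
               for child in children}
    path = [version]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return list(reversed(path))

if __name__ == "__main__":
    print(ancestry("2.0"))   # ['1.0', '1.2', '1.3', '1.4', '2.0']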
3. Change control – Controlling changes to Configuration Items (CI). The change control process works as follows: a change request (CR) is submitted and evaluated to assess technical merit, potential side effects, the
overall impact on other configuration objects and system functions, and the projected cost of
the change. The results of the evaluation are presented as a change report, which is used by a
change control board (CCB) —a person or group who makes a final decision on the status and
priority of the change. An engineering change Request (ECR) is generated for each approved
change. Also, CCB notifies the developer in case the change is rejected with proper reason. The
ECR describes the change to be made, the constraints that must be respected, and the criteria for
review and audit. The object to be changed is “checked out” of the project database, the change
is made, and then the object is tested again. The object is then “checked in” to the database and
appropriate version control mechanisms are used to create the next version of the software.
4. Configuration auditing – A software configuration audit complements the formal technical
review of the process and product. It focuses on the technical correctness of the configuration
object that has been modified. The audit confirms the completeness, correctness, and
consistency of items in the SCM system and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to developers, testers, end
users, customers, and stakeholders through admin guides, user guides, FAQs, Release notes,
Memos, Installation Guide, Configuration guides, etc.
System Configuration Management (SCM) is a software engineering practice that focuses on
managing the configuration of software systems and ensuring that software components are
properly controlled, tracked, and stored. It is a critical aspect of software development, as it helps to
ensure that changes made to a software system are properly coordinated and that the system is
always in a known and stable state.
SCM involves a set of processes and tools that help to manage the different components of a
software system, including source code, documentation, and other assets. It enables teams to track
changes made to the software system, identify when and why changes were made, and manage the
integration of these changes into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to reported issues makes bug tracking more effective.
2. Continuous Deployment and Integration: SCM combines with continuous processes to automate
deployment and testing, resulting in more dependable and timely software delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws by assisting in the early
detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code modifications for big projects, fostering a well-organized development process.
5. Reproducibility: By recording precise versions of code, libraries, and dependencies, SCM makes builds repeatable.
6. Parallel Development: SCM facilitates parallel development by enabling several developers to
collaborate on various branches at once.

Why need for System configuration management?


1. Replicability: SCM ensures that a software system can be replicated at any stage of its development. This is necessary for testing, debugging, and maintaining consistent environments in production, testing, and development.
2. Identification of Configuration: Source code, documentation, and executable files are examples
of configuration elements that SCM helps in locating and labeling. The management of a
system's constituent parts and their interactions depends on this identification.
3. Effective Development Process: By automating repetitive tasks like managing dependencies, merging changes, and resolving conflicts, SCM simplifies the development process. This automation decreases the risk of errors and increases efficiency.

Key objectives of SCM


1. Control the evolution of software systems: SCM helps to ensure that changes to a software
system are properly planned, tested, and integrated into the final product.
2. Enable collaboration and coordination: SCM helps teams to collaborate and coordinate their
work, ensuring that changes are properly integrated and that everyone is working from the same
version of the software system.
3. Provide version control: SCM provides version control for software systems, enabling teams
to manage and track different versions of the system and to revert to earlier versions if
necessary.
4. Facilitate replication and distribution: SCM helps to ensure that software systems can be
easily replicated and distributed to other environments, such as test, production, and customer
sites.
5. SCM is a critical component of software development, and effective SCM practices can help to
improve the quality and reliability of software systems, as well as increase efficiency and
reduce the risk of errors.

The main advantages of SCM


1. Improved productivity and efficiency by reducing the time and effort required to manage
software changes.
2. Reduced risk of errors and defects by ensuring that all changes are properly tested and validated.
3. Increased collaboration and communication among team members by providing a central
repository for software artifacts.
4. Improved quality and stability of software systems by ensuring that all changes are properly
controlled and managed.

The main disadvantages of SCM


1. Increased complexity and overhead, particularly in large software systems.
2. Difficulty in managing dependencies and ensuring that all changes are properly integrated.
3. Potential for conflicts and delays, particularly in large development teams with multiple
contributors.
WHAT IS THE COCOMO MODEL?
The COCOMO Model is a procedural cost estimate model for software projects and is often used as
a process of reliably predicting the various parameters associated with making a project such as
size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on the
study of 63 projects, which makes it one of the best-documented models.
The key parameters that define the quality of any software product, which are also an outcome of
COCOMO, are primarily effort and schedule:
1. Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
2. Schedule: This simply means the amount of time required for the completion of the job, which
is, of course, proportional to the effort put in. It is measured in the units of time such as weeks,
and months.

Types of Projects in the COCOMO Model


In the COCOMO model, software projects are categorized into three types based on their
complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size required is adequately
small, the problem is well understood and has been solved in the past and also the team
members have a nominal experience regarding the problem.
2. Semi-detached: A software project is said to be a Semi-detached type if the vital characteristics
such as team size, experience, and knowledge of the various programming environments lie in
between organic and embedded. The projects classified as Semi-Detached are comparatively
less familiar and difficult to develop compared to the organic ones and require more experience
better guidance and creativity. Eg: Compilers or different Embedded Systems can be considered
Semi-Detached types.
3. Embedded: A software project requiring the highest level of complexity, creativity, and
experience requirement falls under this category. Such software requires a larger team size than
the other two models and also the developers need to be sufficiently experienced and creative to
develop such complex models.

Comparison of these three types of Projects in COCOMO Model


Aspect             Organic                       Semi-detached                     Embedded
Project Size       2 to 50 KLOC                  50 to 300 KLOC                    300 KLOC and above
Complexity         Low                           Medium                            High
Team Experience    Some experienced as well as   Mixed experience, includes        Highly experienced
                   inexperienced staff           experts
Environment        Flexible, fewer constraints   Somewhat flexible, moderate       Highly rigorous, strict
                                                 constraints                       requirements
Effort Equation    E = 2.4(400)^1.05             E = 3.0(400)^1.12                 E = 3.6(400)^1.20
Example            Simple payroll system         New system interfacing with       Flight control software
                                                 existing systems

Detailed Structure of COCOMO Model


Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment
of the cost driver’s impact on each step of the software engineering process. The detailed model
uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole
software is divided into different modules and then we apply COCOMO in different modules to
estimate effort and then sum the effort.
The Six phases of detailed COCOMO are:

Phases of COCOMO Model

1. Planning and requirements: This initial phase involves defining the scope, objectives, and
constraints of the project. It includes developing a project plan that outlines the schedule,
resources, and milestones
2. System design: In this phase, the high-level architecture of the software system is created.
This includes defining the system’s overall structure, including major components, their
interactions, and the data flow between them.
3. Detailed design: This phase involves creating detailed specifications for each component of the
system. It breaks down the system design into detailed descriptions of each module, including
data structures, algorithms, and interfaces.
4. Module code and test: This involves writing the actual source code for each module or
component as defined in the detailed design. It includes coding the functionalities,
implementing algorithms, and developing interfaces.
5. Integration and test: This phase involves combining individual modules into a complete
system and ensuring that they work together as intended.
6. Cost Constructive model: The Constructive Cost Model (COCOMO) is a widely used
method for estimating the cost and effort required for software development projects.

Different models of COCOMO have been proposed to predict the cost estimation at different
levels, based on the amount of accuracy and correctness required. All of these models can be
applied to a variety of projects, whose characteristics determine the value of the constant to be used
in subsequent calculations. The characteristics of the different system types, following Boehm's definition of organic, semi-detached, and embedded systems, were described above.

Importance of the COCOMO Model


1. Cost Estimation: To help with resource planning and project budgeting, COCOMO offers a
methodical approach to software development cost estimation.
2. Resource Management: By taking team experience, project size, and complexity into account,
the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that include
attainable objectives, due dates, and benchmarks.
4. Risk management: Early in the development process, COCOMO assists in identifying and
mitigating potential hazards by including risk elements.
5. Support for Decisions: During project planning, the model provides a quantitative foundation
for choices about scope, priorities, and resource allocation.
6. Benchmarking: To compare and assess various software development projects to industry
standards, COCOMO offers a benchmark.
7. Resource Optimization: The model helps to maximize the use of resources, which raises
productivity and lowers costs.

Types of COCOMO Model


There are three types of COCOMO Model:
 Basic COCOMO Model
 Intermediate COCOMO Model
 Detailed COCOMO Model
1. Basic COCOMO Model
The Basic COCOMO model is a straightforward way to estimate the effort needed for a software
development project. It uses a simple mathematical formula to predict how many person-months of
work are required based on the size of the project, measured in thousands of lines of code (KLOC).
It estimates the effort and time required for development using the following expressions:
E = a*(KLOC)^b PM
Tdev = c*(E)^d
Persons required = Effort / Time
Where,
E is effort applied in Person-Months
KLOC is the estimated size of the software product, indicated in Kilo Lines of Code
Tdev is the development time in months
a, b, c, and d are constants determined by the category of software project, given in the table below.
The above formula is used for the cost estimation of the basic COCOMO model and also is used in
the subsequent models. The constant values a, b, c, and d for the Basic Model for the different
categories of the software projects are:
Software Project     a      b      c      d
Organic              2.4    1.05   2.5    0.38
Semi-Detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32


1. The effort is measured in Person-Months and as evident from the formula is dependent on
Kilo-Lines of code. The development time is measured in months.
2. These formulas are used as such in the Basic Model calculations; since factors such as reliability and expertise are not taken into account, the estimate is rough.

Example of Basic COCOMO Model


Suppose that a Basic project was estimated to be 400 KLOC (kilo lines of code). Calculate the effort and development time for each of the three modes of development, using the constant values given in the table above.
Solution
From the above table, we take the values of the constants a, b, c, and d.
1. For organic mode,
 effort = 2.4 × (400)^1.05 ≈ 1295 person-months.
 dev. time = 2.5 × (1295)^0.38 ≈ 38 months.
2. For semi-detached mode,
 effort = 3.0 × (400)^1.12 ≈ 2462 person-months.
 dev. time = 2.5 × (2462)^0.35 ≈ 38 months.
3. For embedded mode,
 effort = 3.6 × (400)^1.20 ≈ 4772 person-months.
 dev. time = 2.5 × (4772)^0.32 ≈ 38 months.
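The same calculation can be written as a short Python sketch; it simply evaluates the Basic COCOMO formulas with the constants from the table above and reproduces the figures of the worked example.

# Basic COCOMO: E = a*(KLOC)^b (person-months), Tdev = c*(E)^d (months).

BASIC_CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = BASIC_CONSTANTS[mode]
    effort = a * kloc ** b      # effort in person-months
    tdev = c * effort ** d      # development time in months
    return effort, tdev

if __name__ == "__main__":
    for mode in BASIC_CONSTANTS:
        effort, tdev = basic_cocomo(400, mode)
        print(f"{mode:13s}: effort ~ {effort:6.0f} PM, time ~ {tdev:4.1f} months")
    # organic ~1295 PM, semi-detached ~2462 PM, embedded ~4772 PM; each ~38 months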
2. Intermediate COCOMO Model
The basic COCOMO model assumes that the effort is only a function of the number of lines of code
and some constants evaluated according to the different software systems. However, in reality, no
system's effort and schedule can be calculated solely based on Lines of Code. Various other factors such as reliability, experience, and capability must also be considered. These factors are known as Cost Drivers (multipliers), and the Intermediate Model utilizes 15 such drivers for cost estimation.

Classification of Cost Drivers and their Attributes:


The cost drivers are divided into four categories
Product attributes:
 Required software reliability extent
 Size of the application database
 The complexity of the product

Hardware attributes
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnabout time

Personal attributes
 Analyst capability
 Software engineering capability
 Application experience
 Virtual machine experience
 Programming language experience

Project attributes
 Use of software tools
 Application of software engineering methods
 Required development schedule

Each of the 15 attributes is rated on a six-point scale ranging from “very low” to “extra high”
according to its relative importance, and each rating has a fixed effort multiplier associated with it.
The Effort Adjustment Factor (EAF) is determined by multiplying the effort multipliers
associated with each of the 15 attributes.
The Effort Adjustment Factor (EAF) is employed to enhance the estimates generated by the basic
COCOMO model in the following expression:

Intermediate COCOMO Model equation:


E = a × (KLOC)^b × EAF PM
Tdev = c × (E)^d months
Where,
 E is effort applied in Person-Months
 KLOC is the estimated size of the software product expressed in Kilo Lines of Code
 EAF is the Effort Adjustment Factor, a multiplier used to refine the effort estimate
obtained from the basic COCOMO model.
 Tdev is the development time in months
 a, b, c, and d are constants determined by the category of software project, given in the table below.

The constant values a, b, c, and d for the Intermediate Model for the different categories of the software
projects are:
Software Projects     a      b      c      d

Organic               3.2    1.05   2.5    0.38

Semi-Detached         3.0    1.12   2.5    0.35

Embedded              2.8    1.20   2.5    0.32
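
To illustrate how the EAF enters the calculation, here is a small Python sketch (again not from the original
text). It uses the Intermediate constants from the table above and computes the EAF as the product of
whatever cost-driver multipliers are supplied; the sample multiplier values below are purely hypothetical
and stand in for three of the 15 cost drivers.

import math

# Intermediate COCOMO: E = a * (KLOC)^b * EAF person-months, Tdev = c * (E)^d months.
INTERMEDIATE_CONSTANTS = {
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, cost_driver_multipliers):
    a, b, c, d = INTERMEDIATE_CONSTANTS[mode]
    eaf = math.prod(cost_driver_multipliers)    # Effort Adjustment Factor
    effort = a * (kloc ** b) * eaf              # person-months
    tdev = c * (effort ** d)                    # months
    return effort, tdev, eaf

# Hypothetical multipliers for three of the 15 cost drivers (values are illustrative only).
sample_drivers = [1.15,   # e.g. high required software reliability
                  0.91,   # e.g. high analyst capability
                  1.08]   # e.g. tight required development schedule
effort, tdev, eaf = intermediate_cocomo(50, "semi-detached", sample_drivers)
print(f"EAF = {eaf:.3f}, effort = {effort:.0f} PM, Tdev = {tdev:.1f} months")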

3. Detailed COCOMO Model


Detailed COCOMO goes beyond Basic and Intermediate COCOMO by diving deeper into project-
specific factors. It considers a wider range of parameters, like team experience, development
practices, and software complexity. By analyzing these factors in more detail, Detailed COCOMO
provides a highly accurate estimation of effort, time, and cost for software projects. It’s like
zooming in on a project’s unique characteristics to get a clearer picture of what it will take to
complete it successfully.

CASE Studies and Examples


1. NASA Space Shuttle Software Development: NASA estimated the time and money needed to
build the software for the Space Shuttle program using the COCOMO model. NASA was able
to make well-informed decisions on resource allocation and project scheduling by taking into
account variables including project size, complexity, and team experience.
2. Big Business Software Development: The COCOMO model has been widely used by big
businesses to project the time and money needed to construct intricate business software
systems. These organizations were able to better plan and allocate resources for their software
projects by using COCOMO’s estimation methodology.
3. Commercial Software goods: The COCOMO methodology has proven advantageous for
software firms that create commercial goods as well. These businesses were able to decide on
pricing, time-to-market, and resource allocation by precisely calculating the time and expense
of building new software products or features.
4. Academic Research Initiatives: To estimate the time and expense required to create software
prototypes or carry out experimental studies, academic research initiatives have employed
COCOMO. Researchers were able to better plan their projects and allocate resources by using
COCOMO’s estimate approaches.

Advantages of the COCOMO Model


1. Systematic cost estimation: Provides a systematic way to estimate the cost and effort of a
software project.
2. Helps to estimate cost and effort: This can be used to estimate the cost and effort of a
software project at different stages of the development process.
3. Helps in high-impact factors: Helps in identifying the factors that have the greatest impact on
the cost and effort of a software project.
4. Helps to evaluate the feasibility of a project: This can be used to evaluate the feasibility of a
software project by estimating the cost and effort required to complete it.

Disadvantages of the COCOMO Model


1. Assumes project size as the main factor: Assumes that the size of the software is the main
factor that determines the cost and effort of a software project, which may not always be the
case.
2. Does not account for team-specific characteristics: It does not take into account the
specific characteristics of the development team, which can have a significant impact on the
cost and effort of a software project.
3. Imprecise cost and effort estimates: It does not provide a precise estimate of the
cost and effort of a software project, as it is based on assumptions and averages.

Best Practices for Using COCOMO
1. Recognize the Assumptions Underpinning the Model: Become acquainted with the
COCOMO model’s underlying assumptions, which include its emphasis on team experience,
size, and complexity. Understand that although COCOMO offers useful approximations, project
results cannot be predicted with accuracy.
2. Customize the Model: Adapt COCOMO’s inputs and parameters to your project’s unique
requirements, including organizational capacity, development processes, and industry
standards. By doing this, you can be confident that the estimations produced by COCOMO are
more precise and appropriate for your situation.
3. Utilize Historical Data: To verify COCOMO inputs and improve estimating parameters,
collect and examine historical data from previous projects. Because real-world data takes
project-specific aspects and lessons learned into account, COCOMO projections become more
accurate and reliable.
4. Verify and validate: Compare COCOMO estimates with actual project results, and make
necessary adjustments to estimation procedures in light of feedback and lessons discovered.
Review completed projects to find errors and enhance future project estimation accuracy.
5. Combine with Other Techniques: To reduce the biases or inaccuracies of any one method and to
triangulate results, combine COCOMO estimates with other estimation techniques such as expert
judgment, analogous estimation, and bottom-up estimation.

Important Questions on COCOMO Model


1. A company needs to develop digital signal processing software for one of its newest inventions.
The software is expected to have 20000 lines of code. The company needs to determine the effort in
person-months needed to develop this software using the basic COCOMO model. The
multiplicative factor for this model is given as 2.2 for the software development on embedded
systems, while the exponentiation factor is given as 1.50. What is the estimated effort in person-
months? [ ISRO CS 2016 ]
(A) 196.77
(B) 206.56
(C) 199.56
(D) 210.68
Solution: The correct Answer is (A).
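Working (as a check): 20,000 LOC = 20 KLOC, so E = 2.2 × (20)^1.5 = 2.2 × 89.44 ≈ 196.77
person-months, which corresponds to option (A).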

2. Estimation of software development effort for organic software in basic COCOMO is


[ ISRO CS 2017 – May ]
(A) E = 2.0 (KLOC)^1.05 PM
(B) E = 3.4 (KLOC)^1.06 PM
(C) E = 2.4 (KLOC)^1.05 PM
(D) E = 2.4 (KLOC)^1.07 PM
Solution: The correct Answer is (C).
Conclusion
The COCOMO model provides a structured way to estimate the time, effort, and cost needed for
software development based on project size and various influencing factors. It helps project
managers and developers plan resources effectively and set realistic timelines and budgets,
improving overall project management and success.

Frequently Asked Questions on COCOMO Model -FAQs


1. How many COCOMO models are there?
There are three types of COCOMO model:
 Basic COCOMO Model
 Intermediate COCOMO Model
 Detailed COCOMO Model

2. What does COCOMO Model stand for?
COCOMO Model stands for Constructive Cost Model.

3. Why is COCOMO II used?


COCOMO II is useful for modern software development that is non-sequential, rapid, and reuse-driven. It
gives estimates of effort and schedule, and it delivers estimates within one standard deviation of the most
likely estimate.

What is the Capability Maturity Model (CMM)


Capability Maturity Model (CMM) was developed by the Software Engineering Institute (SEI) at
Carnegie Mellon University in 1987. It is not a software process model. It is a framework that is
used to analyze the approach and techniques followed by any organization to develop software
products. It also provides guidelines to enhance further the maturity of the process used to develop
those software products.
It is based on feedback from, and development practices adopted by, the most successful
organizations worldwide. This model describes a strategy for software process improvement that
should be followed by moving through 5 different levels. Each level of maturity shows a process
capability level. All the levels except level 1 are further described by Key Process Areas (KPA).

Importance of Capability Maturity Model


 Optimization of Resources: CMM helps businesses make the best use of all of their resources,
including money, labor, and time. Organizations can improve the effectiveness of resource
allocation by recognizing and getting rid of unproductive practices.
 Comparing and Evaluating: A formal framework for benchmarking and self-evaluation is
offered by CMM. Businesses can assess their maturity levels, pinpoint their advantages and
disadvantages, and compare their performance to industry best practices.
 Management of Quality: CMM emphasizes quality management heavily. The framework
helps businesses apply best practices for quality assurance and control, which raises the quality
of their goods and services.
 Enhancement of Process: CMM gives businesses a methodical approach to evaluate and
enhance their operations. It provides a road map for gradually improving processes, which
raises productivity and usefulness.
 Increased Output: CMM seeks to boost productivity by simplifying and optimizing processes.
Organizations can increase output and efficiency without compromising quality as they go
through the CMM levels.

Principles of Capability Maturity Model (CMM)


 People’s capability is a competitive issue. Competition arises when different organizations are
performing the same task (such as software development). In such a case, the people of an
organization are sources of strategy and skills, which in turn results in better performance of the
organization.
 The people’s capability should be defined by the business objectives of the organization.
 An organization should invest in improving the capabilities and skills of the people as they are
important for its success.
 The management should be responsible for enhancing the capability of the people in the
organization.
 The improvement in the capability of people should be done as a process. This process should
incorporate appropriate practices and procedures.
 The organization should be responsible for providing improvement opportunities so that people
can take advantage of them.
 Since new technologies and organizational practices emerge rapidly, organizations should
continually improve their practices and develop the abilities of people.

Shortcomings of the Capability Maturity Model (CMM)
 It encourages the achievement of a higher maturity level in some cases by displacing the true
mission, which is improving the process and overall software quality.
 It only helps if it is put into place early in the software development process.
 It has no formal theoretical basis and in fact, is based on the experience of very knowledgeable
people.
 It does not have good empirical support and this same empirical support could also be
constructed to support other models.
 Difficulty in measuring process improvement: The SEI/CMM model may not provide an
accurate measure of process improvement, as it relies on self-assessment by the organization
and may not capture all aspects of the development process.
 Focus on documentation rather than outcomes: The SEI/CMM model may focus too much on
documentation and adherence to procedures, rather than on actual outcomes such as software
quality and customer satisfaction.
 May not be suitable for all types of organizations: The SEI/CMM model may not be suitable for
all kinds of organizations, particularly those with smaller development teams or those with less
structured development processes.
 May not keep up with rapidly evolving technologies: The SEI/CMM model may not be able to
keep up with rapidly evolving technologies and development methodologies, which could limit
its usefulness in certain contexts.
 Lack of agility: The SEI/CMM model may not be agile enough to respond quickly to changing
business needs or customer requirements, which could limit its usefulness in dynamic and
rapidly changing environments.

Key Process Areas (KPA)


Each of these KPA (Key Process Areas) defines the basic requirements that should be met by a
software process to satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents,
data, reports, etc. are produced, milestones are established, quality is ensured and change is
properly managed.

Levels of Capability Maturity Model (CMM)


There are 5 levels of Capability Maturity Models. We will discuss each one of them in detail.

Level-1: Initial
 No KPAs (Key Process Areas) defined.
 Processes followed are Adhoc and immature and are not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.
 Limited project management capabilities, such as no systematic tracking of schedules, budgets,
or progress.
 Limited communication and coordination among team members and stakeholders.
 No formal training or orientation for new team members.
 Little or no use of software development tools or automation.
 Highly dependent on individual skills and knowledge rather than standardized processes.
 High risk of project failure or delays due to a lack of process control and stability.

Level-2: Repeatable
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar-natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for the
project. It presents a detailed plan to be followed systematically for the successful completion
of good-quality software.
 Configuration Management- The focus is on maintaining the performance of the software
product, including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and feedback
which result in some changes in the requirement set. It also consists of accommodation of those
modified requirements.
 Subcontract Management- It focuses on the effective management of qualified software
contractors i.e. it manages the parts of the software developed by third parties.

 Software Quality Assurance- It guarantees a good quality software product by following
certain rules and quality standard guidelines while developing.

Level-3: Defined
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and management
processes.
 Peer Reviews: In this method, defects are removed by using several review methods like
walkthroughs, inspections, buddy checks, etc.
 Intergroup Coordination: It consists of planned interactions between different development
teams to ensure efficient and proper fulfillment of customer needs.
 Organization Process Definition: Its key focus is on the development and maintenance of
standard development processes.
 Organization Process Focus: It includes activities and practices that should be followed to
improve the process capabilities of an organization.
 Training Programs: It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.

Level-4: Managed
 At this stage, quantitative quality goals are set for the organization for software products as well
as software processes.
 The measurements made help the organization to predict the product and process quality within
some limits defined quantitatively.
 Software Quality Management: It includes the establishment of plans and strategies to
develop quantitative analysis and understanding of the product’s quality.
 Quantitative Management: It focuses on controlling the project performance quantitatively.

Level-5: Optimizing
 This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
 The use of new tools, techniques, and evaluation of software processes is done to prevent the
recurrence of known defects.
 Process Change Management: Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for the
software product.
 Technology Change Management: It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
 Defect Prevention: It focuses on the identification of causes of defects and prevents them from
recurring in future projects by improving project-defined processes.

Case-Studies Capability Maturity Model (CMM):


1. Tata Consultancy Services (TCS)
CMMI has long been used by TCS, a well-known Indian provider of IT services and consulting, to
enhance its software development and delivery procedures. TCS has been able to provide high-
quality solutions and meet client expectations owing in part to this deployment.
2. Infosys
CMMI has been used by India-based Infosys, a global provider of IT services and consulting, to
improve its software development and delivery skills. To increase process efficiency and provide its
clients with high-quality solutions, the organization has adopted CMMI methods.
3. Lockheed Martin
Global aerospace and defense giant Lockheed Martin has a long history of being acknowledged for
reaching high CMM maturity levels. The company’s software development and project
management procedures have improved as a result of its successful CMM implementation.

CMM (Capability Maturity Model) vs CMMI (Capability Maturity Model Integration)
Aspect: Scope
  CMM: Primarily focused on software engineering processes.
  CMMI: Expands to various disciplines like systems engineering, hardware development, etc.

Aspect: Maturity Levels
  CMM: Had a five-level maturity model (Level 1 to Level 5).
  CMMI: Initially had a staged representation; it introduced a continuous representation later.

Aspect: Flexibility
  CMM: More rigid structure with predefined practices.
  CMMI: Offers flexibility to tailor process areas to organizational needs.

Aspect: Adoption and Popularity
  CMM: Gained popularity in the software development industry.
  CMMI: Gained wider adoption across industries due to broader applicability.

Levels of CMMI
CMMI, like CMM, is organized into five stages of process maturity. However, they differ from the
levels in CMM.
There are 5 performance levels of the CMMI Model.
Level 1: Initial: Processes are often ad hoc and unpredictable. There is little or no formal process
in place.
Level 2: Managed: Basic project management processes are established. Projects are planned,
monitored, and controlled.
Level 3: Defined: Organizational processes are well-defined and documented. Standardized
processes are used across the organization.
Level 4: Quantitatively Managed: Processes are measured and controlled using statistical and
quantitative techniques. Process performance is quantitatively understood and managed.
Level 5: Optimizing: Continuous process improvement is a key focus. Processes are continuously
improved based on quantitative feedback.

Questions For Practice


1. Capability Maturity Model (CMM) is the methodology to [ISRO 2017]
(A) Develop and refine an organization’s software development process
(B) Develop the software
(C) Test the software
(D) All of the above
Solution: The correct answer is (A).

2. Match the 5 CMM Maturity levels/CMMI staged representations in List- I with their
characterizations in List-II codes: [UGC NET CS 2018]
List – I (Maturity Levels):
(a) Initial
(b) Repeatable
(c) Defined
(d) Managed
(e) Optimizing

List – II (Characterizations):
(i) Processes are improved quantitatively and continually.
(ii) The plan for a project comes from a template for plans.
(iii) The plan uses processes that can be measured quantitatively.
(iv) There may not exist a plan or it may be abandoned.
(v) There is a plan and people stick to it.
Choose the Correct Option:
(a) (b) (c) (d) (e)

(A) iv v i iii ii

(B) i ii iv v iii

(C) v iv ii iii i

(D) iv v ii iii i
Solution: The correct answer is (D).

3. Which one of the following is not a key process area in CMM level 5? [UGC NET CSE
2014]
(A) Defect prevention
(B) Process change management
(C) Software product engineering
(D) Technology change management
Solution: The correct answer is (C).

Conclusion
The Capability Maturity Model (CMM) is a framework designed to help organizations improve
their software development processes. It outlines five levels of maturity, each representing a step
towards more organized and efficient practices. In simple words, CMM helps companies identify
their current process capabilities, find weaknesses, and provide a structured path for improvement,
ensuring better project management and higher quality outcomes over time.

FAQs on the Capability Maturity Model


How does the Capability Maturity Model give benefit to the Organization?
Answer:
The Capability Maturity Model helps an organization in these ways:
 It helps in improving the quality of the product and the reliability of the product.
 It helps in reducing development cycle time and cost.
 It helps in better project management.

List some of the alternatives of the Capability Maturity Model for the improvement of
Processes.
Answer:
Some of the alternatives of the Capability Maturity Model are listed below.
 Six Sigma
 ISO 9000

 Agile methodologies
 Lean Software Development

Integrating Risk Management in SDLC | Set 1


The Software Development Life Cycle (SDLC) is a conceptual model for defining the tasks
performed at each step of the software development process. This model gives a brief overview of the
life cycle of software in the development phase. In this particular article, we are going to discuss risk
management in each and every step of the SDLC Model.

Steps in SDLC Model


Though there are various models for the SDLC, in general the SDLC (Software Development Life Cycle)
comprises the following steps:
 Preliminary Analysis
 System Analysis and Requirement Definition
 System Design
 Development
 Integration and System Testing
 Installation, Operation, and Acceptance Testing
 Maintenance
 Disposal

We will be discussing these steps in brief and how risk assessment and management are incorporated
into these steps to ensure less risk in the software being developed.

1. Preliminary Analysis
In this step, you need to find out the organization’s objective
 Nature and scope of problem under study
 Propose alternative solutions and proposals after having a deep understanding of the problem and
what competitors are doing
 Describe costs and benefits.
Support from Risk Management Activities: Below is the support provided by risk management
activities at this stage.
 Establish a process and responsibilities for risk management
 Document Initial known risks
 The Project Manager should prioritize the risks

2. System Analysis and Requirement Definition


This step is very important for a clear understanding of customer expectations and requirements. Thus
it is very important to conduct this phase with utmost care and to give it due time, as any error
may cause the failure of the entire process. Following is the series of steps that are conducted during
this stage.
 End-user requirements are obtained through documentation, client interviews, observation, and
questionnaires
 Pros and cons of the current system are identified so as to avoid the cons and carry forward the
pros in the new system.
 Any specific user proposals are used to prepare the specifications, and solutions for the
shortcomings discovered in step two are found.
 Identify assets that need to be protected and assign their criticality in terms of confidentiality,
integrity, and availability.
 Identify threats and resulting risks to those assets.
 Determine existing security controls to reduce those risks.

Feasibility Study: This is the first and most important phase. Often this phase is conducted as a
standalone phase in big projects, not as a sub-phase under the requirement definition phase. This phase
allows the team to get an estimate of the major risk factors and of the cost and time for a given project. You might
be wondering why this is so important. A feasibility study helps us to get an idea of whether it is
worth constructing the system or not. It helps to identify the main risk factors.

Risk Factors: Following is the list of risk factors for the feasibility study phase.
 Project managers often make a mistake in estimating the cost, time, resources, and scope of the
project. Unrealistic budget, time, inadequate resources, and unclear scope often lead to project
failure.
 Unrealistic Budget: As discussed above inaccurate estimation of the budget may lead to the
project running out of funds early in the SDLC. An accurate budget estimate is directly related
to correct knowledge of time, effort, and resources.
 Unrealistic Schedule: Incorrect time estimation leads to pressure on developers from project
managers to deliver the project on time, compromising the overall quality of the project and
thus making the system less secure and more vulnerable.
 Insufficient resources: In some cases, the technology, and tools available are not up-to-date to
meet project requirements, or resources(people, tools, technology) available are not enough to
complete the project. In either case, the project will get delayed, or in the worst case it may lead
to project failure.
 Unclear project scope: Clear understanding of what the project is supposed to do, which
functionalities are important, which functionalities are mandatory, and which functionalities can
be considered as extra is very important for project managers. Insufficient knowledge of the
system may lead to project failure.

Requirement Elicitation: It starts with an analysis of the application domain. This phase requires the
participation of different stakeholders to ensure efficient, correct, and complete gathering of system
services, their performance, and constraints. This data set is then reviewed and articulated to make it
ready for the next phase.

Risk Factors: Following is the list of risk factors for the Requirement Elicitation phase.
 Incomplete requirements: In 60% of the cases users are unable to state all requirements in the
beginning. Therefore requirements have the most dynamic nature in the complete SDLC
(Software Development Life Cycle) Process. If any of the user needs, constraints, or other
functional/non-functional requirements are not covered then the requirement set is said to be
incomplete.
 Inaccurate requirements: If the requirement set does not reflect real user needs then in that case
requirements are said to be inaccurate.
 Unclear requirements: Often in the process of SDLC there exists a communication gap between
users and developers. This ultimately affects the requirement set. If the requirements stated by
users are not understandable by analysts and developers then these requirements are said to be
unclear.
 Ignoring nonfunctional requirements: Sometimes developers and analysts ignore the fact that
nonfunctional requirements hold equal importance as functional requirements. In this confusion,
they focus on delivering what the system should do rather than on how the system should be, in terms
of qualities like scalability, maintainability, testability, etc.
 Conflicting user requirements: Multiple users in a system may have different requirements. If
not listed and analyzed carefully, this may lead to inconsistency in the requirements.
 Gold plating: It is very important to list out all requirements in the beginning. Adding
requirements later during development may lead to threats in the system. Gold plating is nothing
but adding extra functionality to the system that was not considered earlier. Thus inviting threats
and making the system vulnerable.
 Unclear description of real operating environment: Insufficient knowledge of real operating
environment leads to certain missed vulnerabilities thus threats remain undetected until a later
stage of the software development life cycle.

Requirement Analysis Activity: In this step, the requirements gathered through user interviews,
brainstorming, or other means are first analyzed, then classified and organized (for example into
functional and non-functional groups), and then prioritized so that it is clear which requirements are of
high priority and must definitely be present in the system. After all these steps, the requirements are
negotiated.

Risk Factors: Following is the list of risk factors for the Requirement Analysis Activity.
 Nonverifiable requirements: If a finite cost-effective process like testing, inspection, etc is not
available to check whether the software meets the requirement or not then that requirement is said
to be nonverifiable.
 Infeasible requirement: If sufficient resources are not available to successfully implement the
requirement, then it is said to be an infeasible requirement.
 Inconsistent requirement: If a requirement contradicts any other requirement then the
requirement is said to be inconsistent.
 Nontraceable requirement: It is very important for every requirement to have an origin source.
During documentation, it is necessary to write the origin source of each requirement so that it can
be traced back in the future when required.
 Unrealistic requirement: A requirement is realistic enough to be documented and implemented
only if it meets all the above criteria, i.e. it is complete, accurate, consistent, traceable, verifiable,
etc.; otherwise it is an unrealistic requirement.

Requirement Validation Activity: This involves validating the requirements that are gathered and
analyzed till now to check whether they actually define what users want from the system.
Risk Factors: Following is the list of risk factors for the Requirement Validation Activity phase.
 Misunderstood domain-specific terminology: Developers and application specialists often use
domain-specific terminology, i.e. technical terms, that is not understandable to the majority of end
users, creating misunderstandings between end users and developers.
 Using natural language to express requirements: Natural language is not always the best way
to express requirements as different users may have different signs and conventions. Thus it is
advisable to use formal language for expressing and documenting.

Requirement Documentation Activity: This step involves creating a Requirement Document (RD)
by writing down all the agreed-upon requirements using formal language. RD serves as a means of
communication between different stakeholders.

Risk Factors: Following is the list of risk factors for the Requirement Documentation Activity phase.
 Inconsistent requirements data and RD: Sometimes, due to glitches in the gathering and
documentation process, the actual requirements may differ from the documented ones.
 Nonmodifiable RD: If during RD preparation, structuring of RD with maintainability is not
considered then it will become difficult to edit the document in the course of change without
rewriting it.

Questions For Practice


1. Requirement Development, Organizational Process Focus, Organizational Training, Risk
Management, and Integrated Supplier Management are process areas required to achieve the __________
maturity level. [UGC NET CSE 2014]
(A) Performed
(B) Managed
(C) Defined
(D) Optimized
Solution: Correct Answer is (C).
For a detailed Solution, refer to UGC-NET | UGC NET CS 2014 Dec – II | Question 42.
Frequently Asked Questions

1. Which SDLC Model is Best for Risk Management?
Answer:
The Spiral Model is a systems development lifecycle (SDLC) that is the best method for risk
management.
2. What is Risk Analysis in SDLC?
Answer:
Risk Analysis is simply identifying risks in applications and prioritizing them for testing purposes.
3. How Risk is Managed in the Waterfall Model?
Answer:
Risks in the Waterfall Model are managed with the help of risk charts. After risks are detected, a risk
chart is prepared to record and track them.

Integrating Risk Management in SDLC | Set 2


Prerequisite: Integrating Risk Management in SDLC | Set 1
In Set 1 we discussed risk management techniques for the Preliminary Analysis, System Analysis,
and Requirement Definition phases. In this article, we will discuss the System Design and
Development phases of the Software Development Life Cycle (SDLC) and how risk is managed in
these two phases. Let’s proceed with the System Design part.

3. System Design
In this phase of the SDLC, the system architecture must be established and all
documented requirements need to be addressed. The system (operations and
features) is described in detail using screen layouts, pseudocode, business rules, process diagrams,
etc.

Support from Risk Management Activities


 Accurate classification of asset criticality
 Planned controls accurately identified
System design involves seven activities, which are listed as follows.
 Examine the Requirement Document: It is quite important for the developers to be a part of
examining the requirement document to ensure the understandability of the requirements listed
in the requirement document.
o Risk Factors – RD is not clear for developers: It is necessary for the developers to be
involved in the requirements definition and analysis phase; otherwise they will not have a
good understanding of the system to be developed and will be unable to base the design
on a solid understanding of the system’s requirements. Hence they will end up creating a
design for a system other than the intended one.
 Choosing the Architectural Design Method Activity: It is the method to decompose the
system into components. Thus it is a way to define software system components. There exist
many methods for architectural design, including structured, object-oriented, Jackson System
Development, and formal methods. But there is no standard architectural design method.
o Risk Factors – Improper Architectural Design Method: As discussed above there is no
standard architectural design method, one can choose the most suitable method depending
on the project’s need. But it is important to choose the method with utmost care. If chosen
incorrectly it may result in problems in system implementation and integration. Even if
implementation and integration are successful it may be possible that the architectural
design may not work successfully on the current machine. The choice of programming
language depends upon the architectural model chosen.
 Choosing the Programming Language Activity: Choosing a programming language should be
done side by side with the architectural method. As programming language should be compatible
with the architectural method chosen.

o Risk Factors – Improper choice of programming language: Incorrect choice of programming
language may not support chosen architectural method. This may reduce the maintainability
and portability of the system.
 Constructing Physical Model Activity: The physical model consisting of symbols is a
simplified description of a hierarchically organized system.
o Risk Factors:
o Complex System: If the system to be developed is very large and complex, it will
create problems for developers, as they will get confused and will not be able to
make out where to start and how to decompose such a large and complex system into
components.
o Complicated Design: For a large complex system it may be possible due to confusion
and lack of enough skills, developers may create a complicated design, which will be
difficult to implement.
o Large-Size Components: Large components that are further decomposable into
sub-components may be difficult to implement and also pose difficulty in assigning
functions to these components.
o Unavailability of Expertise for Reusability: Lack of proper expertise to determine the
components that can be reused poses a serious risk to the project. Developing
components from scratch takes a lot of time in comparison to reusing components, thus
delaying project completion.
o Less Reusable Components: Incorrect estimation of reusable components during the
analysis phase leads to two serious risks for the project: first, delay in project completion,
and second, budget overrun. Developers may be surprised to find that code that was
considered ready for reuse needs to be rewritten from scratch, which will eventually make
the project overrun its budget.

 Verifying Design Activity: Verifying design means ensuring that the design is the correct
solution for the system under construction and it meets all user requirements.
o Risk Factors:
o Difficulties in Verifying Design to Requirements: Sometimes it is quite difficult for
the developer to check whether the proposed design meets all user requirements or not.
In order to make sure that the design is the correct solution for the system it is necessary
that the design meets all requirements.
o Many Feasible Solutions: When verifying the design, the developer may come across
many alternative solutions for the same problem. Thus, choosing the best possible design
that meets all requirements is difficult. The choice depends upon the system and its
nature.
o Incorrect Design: While verifying the design it might be possible that the proposed
design either matches few requirements or no requirements at all. It may be possible that
it is a completely different design.
 Specifying Design Activity: This activity identifies the components, defines the data flow between
them, and, for each identified component, states its function, data input, data output, and utilization
of resources.
o Risk Factors:
o Difficulty in allocating functions to components: Developers may face difficulty in
allocating functions to components in two cases: first, when the system is not
decomposed correctly, and second, when the requirements documentation is not done
properly; in that case developers will find it difficult to identify functions for the
components, as functional requirements constitute the functions of the components.
o Extensive specification: Extensive specification of module processing should be
avoided to keep the design document as small as possible.
o Omitting Data Processing Functions: Data processing functions like create and
read are the operations that components perform on data. Accidental omission of
these functions should be avoided.

 Documenting Design Activity: In this phase the design document (DD) is prepared. This will help to
control and coordinate the project during implementation and other phases.
o Risk Factors:
o Incomplete DD: The design document should be detailed enough to explain each
component, sub-components, and sub-sub-components in full detail so that developers
may work independently on different modules. If DD lacks these features then
programmers cannot work independently.
o Inconsistent DD: If the same function is carried out by more than one component, it will
result in redundancy in the design document and will eventually result in an
inconsistent document.
o Unclear DD: If the design document does not clearly define components and is written
in uncommon natural language, then in that case it might be difficult for the developers
to understand the proposed design.
o Large DD: The design document should be detailed enough to list all components with
full details about functions, input, output, resources required, etc. It should not contain
unnecessary information. A large design document will be difficult for programmers to
understand.
4. Development
This stage involves the actual coding of the software as per the agreed-upon requirements between
the developer and the client.

Support from Risk Management Activities


All designed controls are implemented during this stage. This phase involves two steps: Coding
and Unit Testing.
 Coding Activity: This step involves writing the source code for the system to be developed.
User interfaces are developed in this step. Each developed module is then tested in the unit
testing step.
o Risk Factors:
o Unclear design document: If the design document is large and unclear it will be
difficult for the programmer to understand the document and to find out where to start
coding.
o Lack of independent working environment: Due to unclear and incomplete design
documents it is difficult for the team of developers to assign independent modules to
work on.
o Wrong user interface and user functions developed: Incomplete, inconsistent, and
unclear design documents lead to wrongly implemented user interfaces and functions.
Poor user interface reduces the acceptability of the system in the real environment
among the customers.
o Programming language incompatible with architectural design: The choice of
architectural method drives the choice of programming language. They must be
chosen in sequence; otherwise, if incompatible, the programming language may not work
with the chosen method.
o Repetitive code: In large projects, there arises a need to rewrite the same piece of code
again and again. This consumes a lot of time and also increases lines of code.
o Modules developed by different programmers: In large projects, modules are divided
among the programmers. But different programmers have different styles and ways
of thinking, which leads to inconsistent, complex, and ambiguous code.
 Unit Testing Activity: Each module is tested individually to check whether it meets the
specified requirements or not and performs the functions it is intended to do.
o Risk Factors:
o Lack of fully automated testing tools: Even today unit testing is not fully
automated. This makes the testing process boring and monotonous. Testers
don’t bother to generate all possible test cases.

o Code not understandable by reviewers: During unit testing, developers need
to review and make changes to the code. If the code is not understandable it will
be very difficult to update the code.
o Coding drivers and stubs: During unit testing, modules need data from other
modules or need to pass data to other modules, as no module is completely
independent in itself. A stub is a piece of code that replaces a called module that
accepts data from the module being tested. A driver is a piece of code that
replaces a calling module and passes data to the module being tested (a small
illustrative sketch is given after this list). Coding drivers and stubs consumes a lot
of time and effort; since these will not be delivered with the final system, they are
considered extras.
o Poor documentation of test cases: Test cases need to be documented properly
so that these can be used in the future.
o The testing team is not experienced: The testing team is not experienced
enough to handle the automated tools and to write short concise code for drivers
and stubs.
o Poor regression testing: Regression testing means rerunning previously successful test
cases whenever a change is made, to make sure the change has not broken anything. It
saves time and effort compared with retesting from scratch, but it becomes time-
consuming if all test cases are selected for rerun rather than only those affected by the change.
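
To make the idea of drivers and stubs concrete, the following is a minimal Python sketch (the module
names, function names, and values are invented for illustration). The unit under test applies a discount to
a price; a stub stands in for the not-yet-integrated pricing module it calls, and a driver is the throwaway
test code that calls the unit under test and checks the result.

def fetch_base_price(item_id):
    # Real pricing module: not yet integrated, so it cannot be called during unit testing.
    raise NotImplementedError

def fetch_base_price_stub(item_id):
    # STUB: replaces the called module and returns canned data to the module being tested.
    return 100.0

def price_with_discount(item_id, discount_pct, price_source=fetch_base_price_stub):
    """Module under test: applies a percentage discount to the base price."""
    base = price_source(item_id)
    return base * (1 - discount_pct / 100.0)

def driver():
    # DRIVER: throwaway code that passes test data to the module being tested and
    # checks the output; it will not be delivered with the final system.
    result = price_with_discount("item-42", 10)
    assert abs(result - 90.0) < 1e-6, f"unexpected price: {result}"
    print("unit test passed:", result)

if __name__ == "__main__":
    driver()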

Frequently Asked Questions On Risk Management


1. How do stakeholders help in Risk Management during Software Development Life Cycle?
Answer:
Stakeholders have a crucial role in risk management during the SDLC. They help in risk identification by
providing insights into potential risks from different perspectives.
2. How does Risk Monitoring and Control happen in the Software Development life Cycle
(SDLC)?
Answer:
Risk Monitoring and Control happens during SDLC by continuously tracking and reviewing the
project during its development phase.

Integrating Risk Management in SDLC | Set 3
We have already discussed the first four steps of the Software Development Life Cycle. In this
article, we will be discussing the remaining four steps: Integration and System Testing,
Installation, Operation and Acceptance Testing, Maintenance, and Disposal. We will discuss
risk management in these four steps in detail.

5. Integration and System Testing


In this phase, all modules are first independently checked for errors and bugs. Then each module is
linked with the modules it depends on and the dependencies are checked for errors. Finally, all modules
are integrated into one complete software system and checked as a whole for bugs.

Support from Risk Management Activities


In this phase, designed controls are tested to see whether they work accurately in an integrated
environment. This phase includes three activities: Integration Activity, Integration Testing
Activity, and System Testing Activity. We will be discussing these activities in a bit more detail
along with the risk factors in each activity.

 Integration Activity: In this phase, individual units are combined into one working system.
o Risk Factors:
o Difficulty in combining components: Integration should be done incrementally else it
will be very difficult to locate errors and bugs. The wrong sequence of integration will
eventually hamper the functionality for which the system was designed.
o Integrate wrong versions of components: Developing a system involves writing
multiple versions of the same component. If the incorrect version of the component is
selected for integration it may not produce the desired functionality.

o Omissions: Integration of components should be done carefully. A single missed
component may result in errors and bugs that will be difficult to locate.
 Integration Testing Activity: After integrating the components next step is to test whether the
components interface correctly and to evaluate their integration. This process is known as
integration testing.
o Risk Factors:
o Bugs during integration: If wrong versions of components are integrated or components
are accidentally omitted, then it will result in bugs and errors in the resultant system.
o Data loss through the interface: Wrong integration leads to a data loss between the
components where the number of parameters in the calling component does not match the
number of parameters in the called component.
o Desired functionality not achieved: Errors and bugs introduced during integration result
in a system that fails to generate the desired functionality.
o Difficulty in locating and repairing errors: If integration is not done incrementally, it
results in errors and bugs that are hard to locate. Even if the bugs are located, they need to
be fixed. Fixing errors in one component may introduce errors in other components. Thus
it becomes quite cumbersome to locate and repair errors.
 System Testing Activity: In this step integrated system is tested to ensure that it meets all the
system requirements gathered from the users.
o Risk Factors:
o Unqualified testing team: The lack of a good testing team is a major setback for good
software as testers may misuse the available resources and testing tools.
o Limited testing resources: Time, budget, and tools if not used properly or unavailable
may delay project delivery.
o Not possible to test in a real environment: Sometimes it is not possible to test the system in a
real environment due to lack of budget, time constraints, etc.
o Testing cannot cope with requirements change: User requirements often change during
the entire software development life cycle, so test cases should be designed to handle such
changes. If not designed properly they will not be able to cope with change.
o The system being tested is not testable enough: If the requirements are not verifiable,
it becomes quite difficult to test such a system.

6. Installation, Operation, and Acceptance Testing


This is the last and longest phase in the SDLC. The system is delivered, installed, deployed, and tested
for user acceptance.

Support from Risk Management Activities


The system owner will want to ensure that the prescribed controls, including any physical or
procedural controls, are in place prior to the system going live. Decisions regarding risks identified
must be made prior to system operation. This phase involves three activities: Installation, Operation,
and Acceptance Testing.
 Installation Activity: The software system is delivered and installed at the customer site.
o Risk Factors:
o Problems in installation: If deployers are not experienced enough or if the system is
complex and distributed, then in that case it becomes difficult to install the software
system.
o Change in the environment: Sometimes the installed software system doesn’t work
correctly in the real environment, in some cases due to hardware advancement.
 Operation Activity: Here end users are given training on how to use software systems and their
services.
o Risk Factors:
o New requirements emerge: While using the system, sometimes users feel the need to
add new requirements.

o Difficulty in using the system: It is human nature to find it difficult, in the beginning, to
accept a change, in this case a new system. But this should not go on for long, otherwise it
becomes a serious threat to the acceptability of the system.
 Acceptance Testing Activity: The delivered system is put into acceptance testing to check
whether it meets all user requirements or not.
o Risk Factors:
o User resistance to change: It is human behavior to resist any new change in the
surroundings. But for the success of a newly delivered system, it is very important that
the end users accept the system and start using it.
o Too many software faults: Software faults should be discovered earlier before the
system operation phase, as discovery in the later phases leads to high costs in handling
these faults.
o Insufficient data handling: New system should be developed keeping in mind the load
of user data it will have to handle in a real environment.
o Missing requirements: While using the system it might be possible that the end users
discover some of the requirements and capabilities are missing.

7. Maintenance
In this stage, the system is assessed to ensure it does not become obsolete. This phase also involves
continuous evaluation of the system in terms of performance, and changes are made from time to time
to the initial software to keep it up to date. Errors and faults discovered during acceptance testing are
fixed in this phase. This step involves making improvements to the system, fixing errors, enhancing
services, and upgrading software.

Support from Risk Management Activities


Any change to a system has the potential to reduce the effectiveness of existing controls or to
otherwise have some impact on the confidentiality, availability, or integrity of the system. The
solution is to ensure that a risk assessment step is included in evaluating system changes.
 Risk Factors:
o Budget overrun: Finding errors and fixing them involves repeating a few steps of the SDLC
again, thus exceeding the budget.
o Problems in upgrading: Constraints from the end users or an inflexible architecture may
make the system hard to maintain and upgrade.

8. Disposal
In this phase, plans are developed for discarding system information, hardware, and software to make
the transition to a new system. The purpose is to prevent any possibility of unauthorized disclosure of
sensitive data due to improper disposal of information. All of this should be done in accordance with
the organization’s security requirements.

Support from Risk Management Activities


The Risk Management plan developed must also include threats to the confidentiality of residual data,
proper procedures, and controls to reduce the risk of data theft due to improper disposal. However, by
identifying the risk early in the project, the controls could be documented in advance ensuring proper
disposition.
 Risk Factors:
o Lack of knowledge for proper disposal: Proper disposal of information requires an
experienced team, having a plan on how to handle the residual data.
o Lack of proper procedures: Sometimes in a hurry to launch a new system, the organization
sidelines the task of disposal. Procedures used to handle residual data should be properly
documented, so that they can be used in the future.

How To Integrate Risk Management in SDLC?

Integrating risk management into the Software Development Life Cycle (SDLC) is crucial for
ensuring the development of secure and reliable software. Here are the ways to integrate Risk
Management in SDLC.
 Define and document the risk management process: The first step is to define the risk
management process and document it in a formal policy or procedure. This process should
include the identification, analysis, evaluation, treatment, and monitoring of risks throughout the
SDLC.
 Identify and assess risks: The next step is to identify and assess risks at every stage of
the Software Development Life Cycle (SDLC). This can be done through various techniques such
as brainstorming sessions, risk assessments, threat modeling, and vulnerability assessments.
 Prioritize risks: Once risks have been identified and assessed, they need to be prioritized based
on their potential impact on the system and their likelihood of occurrence. This helps in
determining which risks need to be addressed first (a small illustrative sketch of one common
prioritization approach follows this list).
 Develop risk mitigation strategies: Once risks have been prioritized, risk mitigation strategies
need to be developed. These strategies can include designing security controls, implementing
secure coding practices, and conducting security testing.
 Incorporate risk management into the SDLC: Risk management should be incorporated into
every phase of the SDLC. This can be done by including risk assessments in the requirements
gathering phase, conducting security testing during the development phase, and conducting
vulnerability assessments during the testing phase.
 Monitor and update the risk management plan: Risk management is an ongoing process, and
risks need to be monitored and updated regularly. This can be done through regular risk
assessments, vulnerability assessments, and threat modeling.
By integrating risk management into the SDLC, organizations can develop more secure and
reliable software. This can help reduce the risk of data breaches, system failures, and other
security incidents that can impact an organization’s reputation, financial stability, and customer
trust.
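
One common way to carry out the prioritization step above, borrowed from general risk management
practice rather than stated in this text, is to score each risk’s likelihood and impact and rank risks by
their product (often called risk exposure). The Python sketch below is a hypothetical illustration; the
risk names and scores are invented.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float   # estimated probability of occurrence, 0.0 to 1.0
    impact: int         # relative impact if it occurs, e.g. 1 (low) to 10 (high)

    @property
    def exposure(self) -> float:
        # Risk exposure = likelihood x impact, a common prioritization heuristic.
        return self.likelihood * self.impact

# Hypothetical risks identified during requirements and design (illustrative only).
risks = [
    Risk("Unclear project scope", 0.6, 8),
    Risk("Insufficient resources", 0.3, 7),
    Risk("Gold plating", 0.4, 4),
]

# Address the highest-exposure risks first.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.name:25s} exposure = {r.exposure:.2f}")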
Frequently Asked Questions
1. List some typical risk response strategies used in SDLC?
Answer:
In SDLC, there are four main risk response strategies:
 Avoidance
 Mitigation
 Transfer
 Acceptance

2. What differentiates Integrated Risk Management from Traditional Risk Management?


Answer:
Traditional Risk Management focuses on individual risks, while Integrated Risk Management also
focuses on interactions between different risks.

3. List some common challenges that are faced while implementing Integrated Risk
Management in SDLC?
Answer:
Some of the common challenges include:
 resistance to change
 difficulty in obtaining full support from all stakeholders
 complex risk interdependencies,
 data integration issues, etc.

ROLE AND RESPONSIBILITIES OF A SOFTWARE PROJECT MANAGER


A software project manager is the most important person inside a team who takes the overall
responsibilities to manage the software projects and plays an important role in the successful

completion of the projects. This article focuses on discussing the role and responsibilities of
a software project manager.

Who is a Project Manager?


A project manager has to face many difficult situations to accomplish this work. The job
responsibilities of a project manager range from invisible activities like building up team morale to
highly visible customer presentations. Most managers take responsibility for writing the
project proposal, project cost estimation, scheduling, project staffing, software process
tailoring, project monitoring and control, software configuration management, risk management,
managerial report writing and presentation, and interfacing with clients.
The tasks of a project manager are classified into two major types:
1. Project planning
2. Project monitoring and control

Project Planning
Project planning is undertaken immediately after the feasibility study phase and before the start of the requirement analysis and specification phase. Once a project is found to be feasible, software project managers start project planning. Project planning is completed before any development phase starts.
1. Project planning involves estimating several characteristics of a project and then planning the project activities based on these estimations.
2. Project planning should be done with the utmost care and attention.
3. A wrong estimation can result in schedule slippage.
4. Schedule delay can cause customer dissatisfaction, which may lead to a project failure.
5. Before starting a software project, it is essential to determine the tasks to be performed and to properly manage the allocation of tasks among the individuals involved in the software development.
6. Hence, planning is important as it results in effective software development.
7. Project planning is an organized and integrated management process, which focuses on
activities required for successful completion of the project.
8. It helps to prevent obstacles that arise in the project, such as changes in project or organizational objectives, non-availability of resources, and so on.
9. Project planning also helps in better utilization of resources and optimal usage of the allotted
time for a project.
10. For effective project planning, in addition to a very good knowledge of various estimation
techniques, experience is also very important.

Objectives of Project Planning


1. It defines the roles and responsibilities of the project management team members.
2. It ensures that the project management team works according to the business objectives.
3. It checks the feasibility of the schedule and user requirements.
4. It determines the project constraints. Several individuals help in planning the project.

Activities Performed by Project Manager


1. Project Estimation
Project Size Estimation is the most significant parameter based on which all other estimations like
cost, duration and effort are made.
 Cost Estimation: The total expense of developing the software product is estimated.
 Time Estimation: The total time required to complete the project is estimated.
 Effort Estimation: The effort needed to complete the project is estimated.

2. Scheduling
After the completion of the estimation of all the project parameters, scheduling for manpower and
other resources is done.
3. Staffing
Team structure and staffing plans are made.
4. Risk Management
The project manager should identify the unanticipated risks that may occur during project development, analyze the damage these risks might cause, and prepare a risk reduction plan to cope with them.

5. Miscellaneous Plans
This includes making several other plans such as quality assurance plans, configuration
management plans, etc.
 Lead the team: The project manager must be a good leader who can build a team out of members with diverse skills and ensure that they complete their individual tasks.
 Motivate the team members: One of the key roles of a software project manager is to
encourage team members to work properly for the successful completion of the project.
 Tracking the progress: The project manager should keep an eye on the progress of the project. A project manager must track whether the project is going as per plan or not, and if any problem arises, take the necessary action to solve it. Moreover, the project manager should check whether the product is being developed according to the correct coding standards.
 Liaison: The project manager is the link between the development team and the customer.
The project manager analyzes the customer requirements, conveys them to the development team, and keeps the customer informed about the progress of the project. Moreover, the project manager checks whether the project is fulfilling the customer’s requirements or not.
 Monitoring and reviewing: Project monitoring is a continuous process that lasts the whole
time a product is being developed, during which the project manager regularly compares actual progress and costs against the anticipated reports. While most firms have a formal system in place to track progress, qualified project managers can still gain a good understanding of the project’s development simply by talking with participants.
 Documenting project report: The project manager prepares the documentation of the project
for future purposes. The reports contain detailed features of the product and various techniques.
These reports help to maintain and enhance the quality of the project in the future.
 Reporting: Reporting project status to the customer and his or her organization is the
responsibility of the project manager. Additionally, they could be required to prepare brief,
well-organized pieces that summarize key details from in-depth studies.

Features of a Good Project Manager


1. Knowledge of project estimation techniques.
2. Good decision-making abilities at the right time.
3. Previous experience in managing similar types of projects.
4. Good communication skills to ensure customer satisfaction.
5. A project manager must encourage all the team members to successfully develop the product.
6. Knowledge of the various types of risks that may occur and the solutions to these problems.

Software Project Management Complexities | Software Engineering


Software project management complexities refer to the various challenges and difficulties involved in
managing software development projects. The primary goal of software project management is to
guide a team of developers to complete a project successfully within a given timeframe. However, this
task is quite challenging due to several factors. Many projects have failed in the past due to poor
project management practices. Software projects are often more complex to manage than other types
of projects. This article explores the different types of complexities and the factors that contribute to
the difficulty of managing software projects.

What are Software Project Management Complexities?


Software project management complexities refer to the various difficulties involved in managing a software project, and they manifest in many different ways. The main goal of software project management is to enable a group of developers to work effectively toward the successful completion of a project in a given time. But software project management is a very difficult task.
In the past, many projects have failed due to faulty project management practices. Management of software projects is much more complex than the management of many other types of projects. In this article, we will discuss the types of complexity as well as the factors that make project management complex.

Types of Complexity
The following are the types of complexity in software project management:
 Time Management Complexity: The complexity of estimating the duration of the project. It also includes the complexity of making the schedule for the different activities and ensuring timely completion of the project.
 Cost Management Complexity: Estimating the total cost of the project is a very difficult task, and it is equally hard to ensure that the project does not overrun the budget.
 Quality Management Complexity: The quality of the project must satisfy the customer’s requirements, so it must be assured that those requirements are fulfilled.
 Risk Management Complexity: Risks are the unanticipated things that may occur during any phase of the project. Identifying these risks and preparing contingency plans to reduce their effects can be difficult.
 Human Resources Management Complexity: It includes all the difficulties regarding
organizing, managing, and leading the project team.
 Communication Management Complexity: All the members must interact with all the other
members and there must be good communication with the customer.
 Infrastructure Complexity: Computing infrastructure refers to all of the operations performed on the devices that execute our code: networking, load balancers, queues, firewalls, security, monitoring, databases, sharding, and so on. As software engineers committed to providing value in a continuous stream, we are primarily interested in dealing with data, processing business policy rules, and serving clients. The infrastructure concerns listed above are irksome minutiae that offer no direct benefit to the clients; since they are a necessary evil, we view infrastructure as accidental complexity. Our policies for scaling, monitoring, and similar issues are of little interest to our paying clients.
 Deployment complexity: A release candidate, or finalized code, has to be synchronized from
one system to another. Conceptually, such an operation ought to be simple. To perform this
synchronization swiftly and securely in practice proves to be difficult.
 API complexity: An API should ideally not be any more difficult to use than calling a function.
However, that hardly ever occurs. These calls are inadvertently complicated by authentication, rate limits, retries, errors, and other factors.
 Procurement Management Complexity: Projects need many services from third parties to complete their tasks, and acquiring these services may increase the complexity of the project.
 Integration Management Complexity: The difficulties of coordinating processes and developing a proper project plan. Many changes may occur during project development, and they may hamper project completion, which increases the complexity.
o Invisibility: Until the development of a software project is complete, the software remains invisible. Anything that is invisible is difficult to manage and control. Software project
managers cannot view the progress of the project due to the invisibility of the software until
it is completely developed. The project manager can monitor the modules of the software
that have been completed by the development team and the documents that have been
prepared, which are rough indicators of the progress achieved. Thus invisibility causes a
major problem in the complexity of managing a software project.
o Changeability: The requirements of a software product undergo various changes. Most of these change requests come from the customer during software development. Sometimes these change requests result in redoing some work, which may introduce various risks and increase expenses. Thus, frequent changes to the requirements play a major role in making software project management complex.
o Interaction: Even moderate-sized software has millions of parts (functions) that interact
with each other in many ways such as data coupling, serial and concurrent runs, state
transitions, control dependency, file sharing, etc. Due to the inherent complexity of the
functioning of a software product in terms of the basic parts making up the software, many
types of risks are associated with its development. This makes managing software projects
much more difficult compared to many other kinds of projects.
o Uniqueness: Every software project is usually associated with many unique features or
situations. This makes every software product much different from the other software
projects. This is unlike the projects in other domains such as building construction, bridge
construction, etc. where the projects are more predictable. Due to this uniqueness of the
software projects, during the software development, a project manager faces many
unknown problems that are quite dissimilar to other software projects that he had
encountered in the past. As a result, a software project manager has to confront many
unanticipated issues in almost every project that he manages.
o The exactness of the Solution: A small error can create a huge problem in a software
project. The solution must be exact according to its design. The parameters of a function
call in a program must conform exactly to the function definition. This requirement of exact conformity introduces additional risks and increases the complexity of managing software projects.
o Team-oriented and Intellect-intensive work: Software development projects are team-
oriented and intellect-intensive work. The software cannot be developed without interaction
between developers. In a software development project, the life cycle activities are not only
intellect-intensive, but each member has to typically interact, review the work done by other
members, and interface with several other team members creating various complexity to
manage software projects.
o The huge task regarding Estimation: One of the most important aspects of
software project management is Estimation. During project planning, a project manager has
to estimate the cost of the project, the probable duration to complete the project, and how
much effort is needed to complete the project based on size estimation. This estimation is a
very complex task, which increases the complexity of software project management.

Factors that Make Project Management Complex


Given below are the factors that make project management complex:
 Changing Requirements: Software projects often involve complex requirements that can
change throughout the development process. Managing these changes can be a significant
challenge for project managers, who must ensure that the project remains on track despite the
changes.
 Resource Constraints: Software projects often require a large amount of resources, including
software developers, designers, and testers. Managing these resources effectively can be a
major challenge, especially when there are constraints on the availability of skilled personnel or
budgets.
 Technical Challenges: Software projects can be complex and difficult due to the technical
challenges involved. This can include complex algorithms, database design, and system
integration, which can be difficult to manage and test effectively.
 Schedule Constraints: Software projects are often subject to tight schedules and deadlines,
which can make it difficult to manage the project effectively and ensure that all tasks are
completed on time.
 Quality Assurance: Ensuring that software meets the required quality standards is a critical
aspect of software project management. This can be a complex and time-consuming process,
especially when dealing with large, complex systems.
 Stakeholder Management: Software projects often involve multiple stakeholders, including
customers, users, and executives. Managing these stakeholders effectively can be a major
challenge, especially when there are conflicting requirements or expectations.
 Risk Management: Software projects are subject to a variety of risks, including technical,
schedule, and resource risks. Managing these risks effectively can be a complex and time-
consuming process, and requires a structured approach to risk management.
Software project management is a complex and challenging process that requires a skilled and
experienced project manager to manage effectively. It involves balancing the conflicting demands
of schedule, budget, quality, and stakeholder expectations while ensuring that the project remains
on track and delivers the required results.
Software engineering and software project management can be complex due to various factors, such
as the dynamic nature of software development, changing requirements, technical challenges, team
management, budget constraints, and timeline pressures. Here are some advantages and
disadvantages of managing software projects in such an environment.

Advantages of Software Project Management Complexity


 Improved software quality: Software engineering practices can help ensure the development
of high-quality software that meets user requirements and is reliable, secure, and scalable.
 Better risk management: Project management practices such as risk management can help
identify and address potential risks, reducing the likelihood of project failure.
 Improved collaboration: Effective communication and collaboration among team members
can lead to better software development outcomes, higher productivity, and better morale.
 Flexibility and adaptability: Software development projects require flexibility to adapt to
changing requirements, and software engineering practices provide a framework for managing
these changes.
 Increased efficiency: Software engineering practices can help streamline the development
process, reducing the time and resources required to complete a project.
 Improved customer satisfaction: By ensuring that software meets user requirements and is
delivered on time and within budget, software engineering practices can help improve customer
satisfaction.
 Better maintenance and support: Software engineering practices can help ensure that
software is designed to be maintainable and supportable, making it easier to fix bugs, add new
features, and provide ongoing support to users.
 Increased scalability: By designing software with scalability in mind, software engineering
practices can help ensure that software can handle growing user bases and increasing demands
over time.
 Higher quality documentation: Software engineering practices typically require thorough
documentation throughout the development process, which can help ensure that software is
well-documented and easier to maintain over time.

Disadvantages of Software Project Management Complexity


 Increased complexity: The dynamic nature of software development and the changing
requirements can make software engineering and project management more complex and
challenging.
 Cost overruns: Software development projects can be expensive, and managing them
effectively requires careful budget planning and monitoring to avoid cost overruns.
 Schedule delays: Technical challenges, scope creep, and other factors can cause schedule
delays, which can impact the project’s success and increase costs.
 Difficulty in accurately estimating time and resources: The complexity of software
development and the changing requirements can make it difficult to accurately estimate the time
and resources required for a project.
 Dependency on technology: Software development projects heavily rely on technology, which
can be a double-edged sword. While technology can enable efficient and effective development,
it can also create dependencies and vulnerabilities that can negatively impact the project.
 Lack of creativity: The structured and formalized approach of software engineering can stifle
creativity and innovation, leading to a lack of new and innovative solutions.
 Overemphasis on the process: While processes and methodologies are important in software
development, overemphasizing them can lead to a lack of focus on the end product and the
user’s needs.
 Resistance to change: Some team members may resist changes to established processes or
methodologies, which can impede progress and hinder innovation.
 A mismatch between expectations and reality: Stakeholders may have unrealistic
expectations for software development projects, leading to disappointment and frustration when
the final product does not meet their expectations.

Overall, the advantages of software engineering and project management outweigh the
disadvantages. Effective management practices can help ensure successful software development
outcomes and deliver high-quality software that meets user requirements. However, managing
software development projects requires careful planning, execution, and monitoring to overcome
the complexities and challenges that may arise.

Questions For Practice


1. Which project management approach is most suitable for highly complex and uncertain
software projects?
(I) Waterfall
(II) Agile
(III) Scrum
(IV) Lean
Solution: Correct Answer is (II).
2. Software Project Manager is responsible for the following tasks: [UGC NET CSE 2022]
(I) Project Planning
(II) Project Status Tracking
(III) Resource Management
(IV) Risk Management
(V) Project Delivery Within Time and Budget.
Choose the correct answer from the options given below.
(A) All the statements are correct.
(B) Only B & C are Correct.
(C) Only A & D are Correct.
(D) Only A & D are Correct.
Solution: Correct Answer is (A).
3. The COCOMO Model is used to estimate project complexity based on which factor?
(A) Project team size
(B) Project timeline
(C) Project cost
(D) Project scope
Solution: Correct Answer is (A).
Conclusion
Managing software projects is a complex and challenging task due to a variety of factors such as
changing requirements, technical challenges, resource constraints, and the need for effective
communication among team members. These complexities can lead to issues like cost overruns,
schedule delays, and difficulties in ensuring high-quality outcomes. Despite these challenges,
effective project management practices, such as thorough planning, risk management, and
stakeholder coordination, can significantly improve the chances of project success.
Frequently Asked Questions On Software Project Management Complexities
1. How can a Project Manager identify and assess project complexities?
Answer:
Project managers can identify and assess complexity through risk assessments, data analysis, and with the help of suitable tools.
2. List some external complexities in Software Projects?
Answer:
Some of the external complexities in Software Project Management are:
 Government Regulations
 Current Market Demands
 Preference of Customers
Software Maintenance
Software Maintenance refers to the process of modifying and updating a software system after it has
been delivered to the customer. This involves fixing bugs, adding new features, and adapting to new
hardware or software environments. Effective maintenance is crucial for extending the software’s
lifespan and aligning it with evolving user needs. It is an essential part of the software development
life cycle (SDLC), involving planned and unplanned activities to keep the system reliable and up-to-
date. This article focuses on discussing Software Maintenance in detail.

What is Software Maintenance?


Software maintenance is a continuous process that occurs throughout the entire life cycle of the
software system.
 The goal of software maintenance is to keep the software system working correctly, efficiently,
and securely, and to ensure that it continues to meet the needs of the users.
 This can include fixing bugs, adding new features, improving performance, or updating the
software to work with new hardware or software systems.
 It is also important to consider the cost and effort required for software maintenance when
planning and developing a software system.
 It is important to have a well-defined maintenance process in place, which includes testing and
validation, version control, and communication with stakeholders.
 It’s important to note that software maintenance can be costly and complex, especially for large
and complex systems. Therefore, the cost and effort of maintenance should be taken into
account during the planning and development phases of a software project.
 It’s also important to have a clear and well-defined maintenance plan that includes regular
maintenance activities, such as testing, backup, and bug fixing.

Several Key Aspects of Software Maintenance


1. Bug Fixing: The process of finding and fixing errors and problems in the software.
2. Enhancements: The process of adding new features or improving existing features to meet the
evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency, and reliability of
the software.
4. Porting and Migration: The process of adapting the software to run on new hardware or
software platforms.
5. Re-Engineering: The process of improving the design and architecture of the software to make
it more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the documentation for the
software, including user manuals, technical specifications, and design documents.

Several Types of Software Maintenance


1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
2. Patching: It is an emergency fix implemented mainly due to pressure from management.
Patching is done for corrective maintenance but it gives rise to unforeseen future errors due to
lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt it to changes in
the environment, such as changes in hardware or software, government policies, and business
rules.
4. Perfective Maintenance: This involves improving functionality, performance, and reliability,
and restructuring the software system to improve changeability.
5. Preventive Maintenance: This involves taking measures to prevent future problems, such as
optimization, updating documentation, reviewing and testing the system, and implementing
preventive measures such as backups.
Maintenance can be categorized into proactive and reactive types. Proactive maintenance involves
taking preventive measures to avoid problems from occurring, while reactive maintenance involves
addressing problems that have already occurred.

Maintenance can be performed by different stakeholders, including the original development team,
an in-house maintenance team, or a third-party maintenance provider. Maintenance activities can be
planned or unplanned. Planned activities include regular maintenance tasks that are scheduled in
advance, such as updates and backups. Unplanned activities are reactive and are triggered by
unexpected events, such as system crashes or security breaches. Software maintenance can involve
modifying the software code, as well as its documentation, user manuals, and training materials.
This ensures that the software is up-to-date and continues to meet the needs of its users.

Software maintenance can also involve upgrading the software to a new version or platform. This
can be necessary to keep up with changes in technology and to ensure that the software remains
compatible with other systems. The success of software maintenance depends on effective
communication with stakeholders, including users, developers, and management. Regular updates
and reports can help to keep stakeholders informed and involved in the maintenance process.

Software maintenance is also an important part of the Software Development Life Cycle
(SDLC). The main focus of software maintenance is to update the software application and make all the modifications needed to improve its performance. Software is a model of the real world, so whenever the real world changes, the software needs to be changed to reflect it wherever possible.

Need for Maintenance


Software Maintenance must be performed in order to:
 Correct faults.
 Improve the design.
 Implement enhancements.
 Interface with other systems.
 Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
 Migrate legacy software.
 Retire software.
 Accommodate changes in user requirements.
 Make the code run faster.

Challenges in Software Maintenance


The various challenges in software maintenance are given below:
 The useful lifetime of a software product is generally considered to be ten to fifteen years, but software maintenance is open-ended and may continue for decades, making it very expensive.
 Older software, which was intended to work on slow machines with less memory and storage capacity, cannot hold its own against newer, enhanced software running on modern hardware.
 Changes are frequently left undocumented, which may cause further conflicts in the future.
 As technology advances, it becomes costly to maintain old software.
 Adjustments that are made can easily harm the original structure of the software, making any subsequent changes difficult.
 There is a lack of Code Comments.
 Lack of documentation: Poorly documented systems can make it difficult to understand how
the system works, making it difficult to identify and fix problems.
 Legacy code: Maintaining older systems with outdated technologies can be difficult, as it may
require specialized knowledge and skills.
 Complexity: Large and complex systems can be difficult to understand and modify, making it
difficult to identify and fix problems.
 Changing requirements: As user requirements change over time, the software system may
need to be modified to meet these new requirements, which can be difficult and time-
consuming.
 Interoperability issues: Systems that need to work with other systems or software can be
difficult to maintain, as changes to one system can affect the other systems.
 Lack of test coverage: Systems that have not been thoroughly tested can be difficult to
maintain as it can be hard to identify and fix problems without knowing how the system
behaves in different scenarios.
 Lack of personnel: A lack of personnel with the necessary skills and knowledge to maintain
the system can make it difficult to keep the system up-to-date and running smoothly.
 High-Cost: The cost of maintenance can be high, especially for large and complex systems,
which can be difficult to budget for and manage.

To overcome these challenges, it is important to have a well-defined maintenance process in place, which includes testing and validation, version control, and communication with stakeholders. It is
also important to have a clear and well-defined maintenance plan that includes regular maintenance
activities, such as testing, backup, and bug fixing. Additionally, it is important to have personnel
with the necessary skills and knowledge to maintain the system.

Categories of Software Maintenance


Maintenance can be divided into the following categories.
 Corrective maintenance: Corrective maintenance of a software product may be essential either
to rectify some bugs observed while the system is in use, or to enhance the performance of the
system.
 Adaptive maintenance: This includes modifications and updations when the customers need
the product to run on new platforms, on new operating systems, or when they need the product
to interface with new hardware and software.
 Perfective maintenance: A software product needs maintenance to support the new features
that the users want or to change different types of functionalities of the system according to the
customer’s demands.
 Preventive maintenance: This type of maintenance includes modifications and updations to
prevent future problems with the software. It aims to attend to problems that are not significant at this moment but may cause serious issues in the future.

Reverse Engineering
Reverse Engineering is the process of extracting knowledge or design information from anything
man-made and reproducing it based on the extracted information. It is also called back
engineering. The main objective of reverse engineering is to check out how the system works.
There are many reasons to perform reverse engineering: it is used to understand how a thing works, and also to recreate the object with some enhancements added.

Software Reverse Engineering


Software Reverse Engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. Reverse engineering is becoming important, since several existing software products lack proper documentation, are highly unstructured, or have a structure that has degraded through a series of maintenance efforts.

Why Reverse Engineering?


 Providing proper system documentation.
 Recovery of lost information.
 Assisting with maintenance.
 The facility of software reuse.
 Discovering unexpected flaws or faults.
 Implementing innovative processes for specific uses.
 Documenting how efficiency and power can be improved.

Uses of Software Reverse Engineering


 Software reverse engineering is used in software design; it enables the developer or programmer to add new features to the existing software with or without knowing the source code.
 Reverse engineering is also useful in software testing; it helps testers study or detect viruses and other malware code.
 Software reverse engineering is the process of analyzing and understanding the internal
structure and design of a software system. It is often used to improve the understanding of a
software system, to recover lost or inaccessible source code, and to analyze the behavior of a
system for security or compliance purposes.
 Malware analysis: Reverse engineering is used to understand how malware works and to
identify the vulnerabilities it exploits, in order to develop countermeasures.
 Legacy systems: Reverse engineering can be used to understand and maintain legacy systems
that are no longer supported by the original developer.
 Intellectual property protection: Reverse engineering can be used to detect and prevent
intellectual property theft by identifying and preventing the unauthorized use of code or other
assets.
 Security: Reverse engineering is used to identify security vulnerabilities in a system, such as
backdoors, weak encryption, and other weaknesses.
 Compliance: Reverse engineering is used to ensure that a system meets compliance standards,
such as those for accessibility, security, and privacy.
 Reverse-engineering of proprietary software: To understand how a software works, to
improve the software, or to create new software with similar features.
 Reverse-engineering of software to create a competing product: To create a product that
functions similarly or to identify the features that are missing in a product and create a new
product that incorporates those features.
 It’s important to note that reverse engineering can be a complex and time-consuming process,
and it is important to have the necessary skills, tools, and knowledge to perform it effectively.
Additionally, it is important to consider the legal and ethical implications of reverse
engineering, as it may be illegal or restricted in some jurisdictions.

Advantages of Software Maintenance


 Improved Software Quality: Regular software maintenance helps to ensure that the software
is functioning correctly and efficiently and that it continues to meet the needs of the users.
 Enhanced Security: Maintenance can include security updates and patches, helping to ensure
that the software is protected against potential threats and attacks.
 Increased User Satisfaction: Regular software maintenance helps to keep the software up-to-
date and relevant, leading to increased user satisfaction and adoption.
 Extended Software Life: Proper software maintenance can extend the life of the software,
allowing it to be used for longer periods of time and reducing the need for costly replacements.
 Cost Savings: Regular software maintenance can help to prevent larger, more expensive
problems from occurring, reducing the overall cost of software ownership.
 Better Alignment with business goals: Regular software maintenance can help to ensure that
the software remains aligned with the changing needs of the business. This can help to improve
overall business efficiency and productivity.
 Competitive Advantage: Regular software maintenance can help to keep the software ahead of
the competition by improving functionality, performance, and user experience.
 Compliance with Regulations: Software maintenance can help to ensure that the software
complies with relevant regulations and standards. This is particularly important in industries
such as healthcare, finance, and government, where compliance is critical.
 Improved Collaboration: Regular software maintenance can help to improve collaboration
between different teams, such as developers, testers, and users. This can lead to better
communication and more effective problem-solving.
 Reduced Downtime: Software maintenance can help to reduce downtime caused by system
failures or errors. This can have a positive impact on business operations and reduce the risk of
lost revenue or customers.
 Improved Scalability: Regular software maintenance can help to ensure that the software is
scalable and can handle increased user demand. This can be particularly important for growing
businesses or for software that is used by a large number of users.

Disadvantages of Software Maintenance


 Cost: Software maintenance can be time-consuming and expensive, and may require significant
resources and expertise.
 Schedule disruptions: Maintenance can cause disruptions to the normal schedule and
operations of the software, leading to potential downtime and inconvenience.
 Complexity: Maintaining and updating complex software systems can be challenging,
requiring specialized knowledge and expertise.
 Risk of introducing new bugs: The process of fixing bugs or adding new features can introduce
new bugs or problems, making it important to thoroughly test the software after maintenance.
 User resistance: Users may resist changes or updates to the software, leading to decreased
satisfaction and adoption.
 Compatibility issues: Maintenance can sometimes cause compatibility issues with other
software or hardware, leading to potential integration problems.
 Lack of documentation: Poor documentation or lack of documentation can make software
maintenance more difficult and time-consuming, leading to potential errors or delays.
 Technical debt: Over time, software maintenance can lead to technical debt, where the cost of
maintaining and updating the software becomes increasingly higher than the cost of developing
a new system.
 Skill gaps: Maintaining software systems may require specialized skills or expertise that may
not be available within the organization, leading to potential outsourcing or increased costs.
 Inadequate testing: Inadequate testing or incomplete testing after maintenance can lead to
errors, bugs, and potential security vulnerabilities.
 End-of-life: Eventually, software systems may reach their end-of-life, making maintenance and
updates no longer feasible or cost-effective. This can lead to the need for a complete system
replacement, which can be costly and time-consuming.

Questions For Practice


1. Match the software maintenance activities in List 1 to their meaning in List 2. [ UGC NET
2016]
List 1 (Maintenance Activity Meaning):
a. Concerned with performing activities to reduce the software complexity, thereby improving program understandability and increasing software maintainability.
b. Concerned with fixing errors that are observed when the software is in use.
c. Concerned with the change in the software that takes place to make the software adaptable to new environments (both hardware and software).
d. Concerned with the changes in the software that take place to make the software adaptable to changing user requirements.
List 2 (Maintenance Type):
i. Corrective
ii. Adaptive
iii. Perfective
iv. Preventive
(A) i-b, ii-d, iii-c, iv-a
(B) i-b, ii-c, iii-d, iv-a
(C) i-c, ii-b, iii-d, iv-a
(D) i-a, ii-d, iii-b, iv-c
Solution: Correct Answer is (B).

Conclusion
In summary, software maintenance is important for ensuring that software continues to meet user
needs and perform optimally over time. It involves a range of activities, from bug fixes to
performance enhancements and adaptation to new technologies. Despite the challenges and costs
associated with maintenance, its benefits, such as improved software quality, enhanced security,
and extended software life, make it indispensable for sustainable software development.

Frequently Asked Questions on Software Maintenance


1. What are the examples of Software Maintenance?
Answer:
Some examples of software maintenance are fixing bugs, updating software, and improving system performance.
2. What is dirty Coding?
Answer:
Dirty coding happens when code has been edited by many people with conflicting styles, which makes the software nearly impossible to maintain.

Software Measurement and Metrics


Software Measurement: A measurement is a manifestation of the size, quantity, amount, or dimension of a particular attribute of a product or process. Software measurement is a quantified attribute of a characteristic of a software product or the software process.

It is an established discipline within software engineering, and the software measurement process is defined and governed by ISO standards.

Software Measurement Principles


The software measurement process can be characterized by five activities-
1. Formulation: The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
2. Collection: The mechanism used to accumulate data required to derive the formulated metrics.
3. Analysis: The computation of metrics and the application of mathematical tools.
4. Interpretation: The evaluation of metrics results in insight into the quality of the
representation.
5. Feedback: Recommendation derived from the interpretation of product metrics transmitted to
the software team.

Need for Software Measurement


Software is measured to:
 Assess the quality of the current product or process.
 Anticipate future qualities of the product or process.
 Enhance the quality of a product or process.
 Regulate the state of the project concerning budget and schedule.
 Enable data-driven decision-making in project planning and control.
 Identify bottlenecks and areas for improvement to drive process improvement activities.
 Ensure that industry standards and regulations are followed.
 Give software products and processes a quantitative basis for evaluation.
 Enable the ongoing improvement of software development practices.

Classification of Software Measurement


There are 2 types of software measurement:
1. Direct Measurement: In direct measurement, the product, process, or thing is measured
directly using a standard scale.
2. Indirect Measurement: In indirect measurement, the quantity or quality to be measured is measured using related parameters, i.e., by use of a reference (a small sketch is given after this list).
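As a hedged illustration of the two classes above, the short Python sketch below treats lines of code and defect counts as direct measurements and derives defect density from them as an indirect measurement; all numbers are assumed example values.

# Direct vs. indirect measurement sketch; the counts below are made-up examples.
lines_of_code = 12_000      # direct measurement: counted straight from the code base
defects_found = 30          # direct measurement: counted from the defect tracker

# Indirect measurement: derived from the two direct measurements above.
defect_density = defects_found / (lines_of_code / 1000)    # defects per KLOC
print(f"Defect density: {defect_density:.2f} defects/KLOC")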

Software Metrics
A metric is a measurement of the degree to which any attribute belongs to a system, product, or process.
Software metrics are a quantifiable or countable assessment of the attributes of a software product.
There are 4 functions related to software metrics:

1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of software Metrics


1. Quantitative: Metrics must possess a quantitative nature. It means metrics can be expressed in
numerical values.
2. Understandable: Metric computation should be easily understood, and the method of
computing metrics should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of the development of the
software.
4. Repeatable: When measured repeatedly, the metric values should be the same and consistent.
5. Economical: The computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any programming language.

Types of Software Metrics


1. Product Metrics: Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective problem areas. The ability of the team to control quality is
evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect
density, and code maintainability index.
2. Process Metrics: Process metrics pay particular attention to enhancing the long-term process of
the team or organization. These metrics are used to optimize the development process and
maintenance activities of software. Examples include effort variance, schedule variance, defect
injection rate, and lead time.
3. Project Metrics: Project metrics describe the characteristics and execution of a project. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity (a small earned-value sketch is given after this list). They usually measure:
 Number of software developers
 Staffing patterns over the life cycle of software
 Cost and schedule
 Productivity
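As a rough illustration of the schedule and cost figures a project metric can track, the sketch below applies the standard earned-value formulas (schedule variance = earned value - planned value, cost variance = earned value - actual cost); the monetary figures are invented for the example.

# Illustrative earned-value project metrics; all figures below are assumed example values.
planned_value = 120_000   # budgeted cost of the work scheduled so far
earned_value  = 100_000   # budgeted cost of the work actually completed
actual_cost   = 130_000   # what the completed work actually cost

schedule_variance = earned_value - planned_value   # negative => behind schedule
cost_variance     = earned_value - actual_cost     # negative => over budget

print("Schedule variance:", schedule_variance)
print("Cost variance:", cost_variance)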
Advantages of Software Metrics
1. Reduction in cost or budget.
2. It helps to identify the particular area for improvising.
3. It helps to increase the product quality.
4. Managing the workloads and teams.
5. Reduction in the overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code with resources.
7. It helps in providing effective planning, controlling and managing of the entire product.

Disadvantages of Software Metrics


1. It is expensive and difficult to implement the metrics in some cases.
2. Performance of the entire team or an individual from the team can’t be determined. Only the
performance of the product is determined.
3. Sometimes the quality of the product is not met with the expectation.
4. It may lead to measuring unwanted data, which is a waste of time.
5. Measuring incorrect data leads to wrong decision-making.

People Metrics and Process Metrics in Software Engineering


People Metrics and Process Metrics both play important roles in software development. People metrics help in quantifying useful attributes of the people involved, whereas process metrics measure the development process that creates the body of the
software. People metrics focus on how well team members work together and their overall
satisfaction, while process metrics measure how smoothly tasks are completed. By paying attention to
these metrics, teams can improve collaboration, efficiency, and the quality of their work, leading to
successful project outcomes.

People Metrics
People metrics play an important role in software project management. These are also called
personnel metrics. Some authors view resource metrics to include personnel metrics, software
metrics, and hardware metrics but most of the authors mainly view resource metrics as consisting of
personnel metrics only. In the present context, we also assume resource metrics to include mainly
personnel metrics. People metrics quantify useful attributes of those generating the products using
the available processes, methods, and tools. These metrics tell you about the attributes like turnover
rates, productivity, and absenteeism.

Why should you track people metrics?


Tracking people metrics is important because it helps ensure that everyone working on a software
project is happy, motivated, and performing at their best. By keeping an eye on metrics like
productivity, teamwork experience, and communication skills, you can make sure that teams are
working well together and that everyone has what they need to succeed. This can lead to better
outcomes for the project, as well as a more positive and supportive work environment. Plus,
tracking these metrics allows you to identify any issues early on and address them before they
become bigger problems.

Top 7 People Metrics to Track


Following are the People Metrics:
1. Productivity: Productivity metrics are simple ways to measure how much work is done in a
certain period. They help see how efficient and effective someone or something is at completing
tasks.
2. Employee Net Promoter Score (eNPS): Employee Net Promoter Score measures how likely employees are to recommend their workplace to others. It should be tracked because eNPS evaluates overall employee satisfaction and suggests areas for improvement. It can be calculated by conducting regular surveys asking employees how likely they are to recommend the company to friends or family (a small calculation sketch is given after this list).
3. Team Collaboration: Team Collaboration measures how well team members work together
and communicate. It’s essential to track because effective teamwork streamlines workflows and
enhances project outcomes. To track, monitor communication frequency, participation in team
activities, and gather feedback from team members regularly.
4. Attrition: Attrition tracks the rate at which employees leave the organization. It’s important to
track because it helps identify trends and reasons for turnover, allowing proactive measures to
retain talent. To track, calculate the percentage of employees leaving within a given period and
analyze reasons through exit interviews or surveys.
5. Absenteeism: Absenteeism measures the frequency at which employees are absent from work.
It’s crucial to track because it highlights patterns of absence, enabling the identification and
resolution of underlying issues. To track, maintain records of employee attendance, including
reasons for absence, and analyze trends over time to minimize disruptions to productivity.
6. Total cost of workforce: The Total Cost of Workforce calculates all expenses associated with
employing staff. It’s important to track because it helps manage budget allocation and optimize
resource utilization. To track, compile data on salaries, benefits, training costs, and other
expenses related to workforce management to understand the total cost of employing staff.
7. Quality of Work: Quality of Work evaluates the standard and effectiveness of tasks completed
by employees. It’s vital to track because it ensures deliverables meet quality standards, satisfy
customer requirements, and uphold organizational reputation. To track, employ quality
assurance processes, gather feedback from stakeholders, and conduct performance evaluations
to measure and improve work quality.
For more, refer to Most Important People Metrics.
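As a rough illustration of two of the metrics above, the sketch below computes an Employee Net Promoter Score from survey responses (respondents scoring 9-10 are counted as promoters and 0-6 as detractors, with eNPS = % promoters - % detractors) together with a simple attrition rate; the survey scores and headcounts are invented for the example.

# Illustrative people-metrics sketch; survey scores and headcounts are made-up values.
survey_scores = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]   # "How likely are you to recommend us?" (0-10)

promoters  = sum(1 for s in survey_scores if s >= 9)
detractors = sum(1 for s in survey_scores if s <= 6)
enps = (promoters - detractors) / len(survey_scores) * 100   # eNPS expressed as a percentage

leavers, average_headcount = 6, 80
attrition_rate = leavers / average_headcount * 100           # % of staff leaving in the period

print(f"eNPS: {enps:.0f}")
print(f"Attrition rate: {attrition_rate:.1f}%")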

Process Metrics
 Process Metrics are the measures of the development process that creates a body of software. A common example of a process metric is the length of time that the software creation process takes.
 Based on the assumption that the quality of the product is a direct function of the process,
process metrics can be used to estimate, monitor, and improve the reliability and quality of
software. ISO 9000 certification, or “Quality Management Standards”, is the generic
reference for a family of standards developed by the International Standard Organization
(ISO).
 Often, Process Metrics are tools of management in their attempt to gain insight into the creation
of a product that is intangible. Since the software is abstract, there is no visible, traceable
artifact from software projects. Objectively tracking progress becomes extremely difficult.
Management is interested in measuring progress and productivity and being able to make
predictions concerning both.
 Process metrics are often collected as part of a model of software development. Models such as Boehm’s COCOMO (Constructive Cost Model) make cost estimations for software projects (a minimal COCOMO sketch is given after this list), and Thebaut’s COPMO model makes predictions about the need for additional effort on large projects.
 Although valuable management tools, process metrics are not directly relevant to program
understanding. They are more useful in measuring and predicting such things as resource usage
and schedule.
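To make the cost-model idea concrete, here is a minimal sketch of the basic COCOMO effort equation, Effort = a * (KLOC)^b person-months, using the commonly quoted coefficients for an organic-mode project (a = 2.4, b = 1.05, with schedule coefficients c = 2.5, d = 0.38); the 30 KLOC size is an assumed example value, not a figure from this document.

# Minimal basic-COCOMO sketch; organic-mode coefficients, assumed 30 KLOC size.
a, b = 2.4, 1.05           # effort coefficients for an organic project
c, d = 2.5, 0.38           # schedule coefficients for an organic project

kloc = 30                  # estimated size in thousands of lines of code
effort = a * (kloc ** b)               # estimated effort in person-months
duration = c * (effort ** d)           # estimated development time in months

print(f"Effort: {effort:.1f} person-months, Duration: {duration:.1f} months")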

Types of Process Metrics


 Static Process Metrics: Static Process Metrics are directly related to the defined process. For
example, the number of types of roles, types of artifacts, etc.
 Dynamic Process Metrics: Dynamic Process Metrics are related to the properties of process performance. For example, how many activities are performed, how many artifacts are created, etc.
 Process Evolution Metrics: Process Evolution Metrics are related to the process of making changes over a period of time. For example, how many iterations there are within the process.
Top 7 Process Metrics
 Lead Time: Lead Time measures the time taken from initiating a process (such as starting work
on a task) to its completion (finishing the task). It indicates how quickly work moves through
the development process.
 Cycle Time: Cycle Time tracks the duration it takes to complete one full cycle of a process,
from beginning to end. It provides insights into the efficiency and effectiveness of the
development workflow.
 Throughput: Throughput quantifies the rate at which tasks or features are completed within a given timeframe. It reflects the productivity and capacity of the development team.
 Work in Progress (WIP): Work in Progress (WIP) indicates the number of tasks or features
currently being worked on but not yet completed. It helps in identifying bottlenecks and
managing workflow to ensure tasks are completed efficiently.
 Defect Density: Defect Density measures the number of defects or bugs found per unit of work or code. It helps in assessing the quality and reliability of the software being developed (see the calculation sketch after this list).
 Process Efficiency: Process Efficiency evaluates the ratio of value-added work (tasks that
directly contribute to delivering value to the customer) to non-value-added work (tasks that do
not directly contribute to value delivery). It identifies opportunities for streamlining processes
and reducing waste.
 Process Compliance: Process Compliance assesses the extent to which development processes
adhere to defined standards, guidelines, or regulations. It ensures consistency and quality in the
software development process.
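As a rough illustration of three of the process metrics above, the sketch below derives average lead time, throughput, and defect density from a handful of completed work items; the dates, counts, code size, and two-week reporting window are all invented for the example.

# Illustrative process-metrics sketch; all task data below is made up.
from datetime import date

tasks = [
    {"started": date(2024, 3, 1), "finished": date(2024, 3, 6)},
    {"started": date(2024, 3, 2), "finished": date(2024, 3, 10)},
    {"started": date(2024, 3, 5), "finished": date(2024, 3, 9)},
]

lead_times = [(t["finished"] - t["started"]).days for t in tasks]
avg_lead_time = sum(lead_times) / len(lead_times)    # days from start to completion

throughput = len(tasks) / 2                          # tasks completed per week (2-week window assumed)

defects_found, kloc = 18, 12
defect_density = defects_found / kloc                # defects per thousand lines of code

print(f"Average lead time: {avg_lead_time:.1f} days")
print(f"Throughput: {throughput:.1f} tasks/week")
print(f"Defect density: {defect_density:.2f} defects/KLOC")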

Questions for Practice


1. Size and Complexity are a part of [UGC-NET 2022]
(A) People Metrics
(B) Project Metrics
(C) Process Metrics
(D) Product Metrics
Solution: Correct Answer is (D).

2. Which one of the following sets of attributes should not be encompassed by effective
software metrics? [UGC-NET 2014]
(A) Simple and computable
(B) Consistent and objective
(C) Consistent in the use of units and dimensions
(D) Programming language dependent
Solution: Correct Answer is (D).

Conclusion
In software engineering, tracking both people and process metrics is crucial for ensuring successful
project outcomes. People metrics, such as employee satisfaction and teamwork effectiveness, help
in maintaining a motivated and productive workforce. Process metrics, like lead time and defect
density, allow teams to monitor and improve the efficiency and quality of their development
processes. By focusing on both aspects, teams can better manage resources, identify areas for
improvement, and ultimately deliver high-quality software products on time and within budget.

Frequently Asked Questions on People Metrics and Process Metrics in Software Engineering
1. What are the three main types of Process Metrics?
Answer: The three main types of Process Metrics are:
 Static Process Metrics
 Dynamic Process Metrics
 Process Evolution Metrics
2. What is an example of Process Metrics?
Answer: A common example of a process metric is the time taken by the process to complete the software creation tasks.

3. Why are People Metrics Important?


Answer: People Metrics help in reviewing and analyzing important data about work.

Functional Point (FP) Analysis – Software Engineering


Functional Point Analysis (FPA) is a software measurement technique used to assess the size and
complexity of a software system based on its functionality. It involves categorizing the functions of
the software, such as input screens, output reports, inquiries, files, and interfaces, and assigning
weights to each based on their complexity. By quantifying these functions and their associated
weights, FPA provides an objective measure of the software’s size and complexity.

What is Functional Point Analysis?


Function Point Analysis was initially developed by Allan J. Albrecht in 1979 at IBM and has been
further modified by the International Function Point User’s Group (IFPUG) in 1984, to clarify
rules, establish standards, and encourage their use and evolution. According to Allan J. Albrecht’s initial definition, Functional Point Analysis gives a dimensionless number, defined in function points, that has been found to be an effective relative measure of the function value delivered to the customer. A systematic approach to measuring the different functionalities of a software application
is offered by function point metrics. Function point metrics evaluate functionality from the
perspective of the user, that is, based on the requests and responses they receive.

Objectives of Functional Point Analysis


1. Encourage Approximation: FPA helps in the estimation of the work, time, and materials
needed to develop a software project. Organizations can plan and manage projects more
accurately when a common measure of functionality is available.
2. To assist with project management: Project managers can monitor and manage software
development projects with the help of FPA. Managers can evaluate productivity, monitor
progress, and make well-informed decisions about resource allocation and project timeframes
by measuring the software’s functional points.
3. Comparative analysis: By enabling benchmarking, it gives businesses the ability to assess how
their software projects measure up to industry standards or best practices in terms of size and
complexity. This can be useful for determining where improvements might be made and for
evaluating how well development procedures are working.
4. Improve Your Cost-Benefit Analysis: It offers a foundation for assessing the value provided
by the program concerning its size and complexity, which helps with cost-benefit analysis.
Making educated judgements about project investments and resource allocations can benefit
from having access to this information.
5. Align with Business Objectives: It assists in coordinating software development activities
with an organization’s business objectives. It guarantees that software development efforts are
directed toward providing value to end users by concentrating on user-oriented functionality.

Types of Functional Point Analysis


There are two types of Functional Point Analysis:
1. Transactional Functional Type
 External Input (EI): EI processes data or control information that comes from outside the
application’s boundary. The EI is an elementary process.
 External Output (EO): EO is an elementary process that generates data or control information
sent outside the application’s boundary.
 External Inquiries (EQ): EQ is an elementary process made up of an input-output
combination that results in data retrieval.

2. Data Functional Type
 Internal Logical File (ILF): A user-identifiable group of logically related data or control
information maintained within the boundary of the application.
 External Interface File (EIF): A user-identifiable group of logically related data that is referenced by the application but maintained within the boundary of another application.

Benefits of Functional Point Analysis


Following are the benefits of Functional Point Analysis:
1. Technological Independence: It calculates a software system’s functional size independent of
the underlying technology or programming language used to implement it. As a result, it is a
technology-neutral metric that makes it easier to compare projects created with various
technologies.
2. Better Accurate Project Estimation: It helps to improve project estimation accuracy by
measuring user interactions and functional needs. Project managers can improve planning and
budgeting by using the results of the FPA to estimate the time, effort and resources required for
development.
3. Improved Interaction: It provides a common language for business analysts, developers, and
project managers to communicate with one another and with other stakeholders. By
communicating the size and complexity of software in a way that both technical and non-technical audiences can easily understand, it helps close the communication gap.
4. Making Well-Informed Decisions: FPA assists in making well-informed decisions at every
stage of the software development life cycle. Based on the functional requirements,
organizations can use the results of the FPA to make decisions about resource allocation,
project prioritization, and technology selection.
5. Early Recognition of Changes in Scope: Early detection of changes in project scope is made
easier with the help of FPA. Better scope change management is made possible by the
measurement of functional requirements, which makes it possible to evaluate additions or
changes for their effect on the project’s overall size.

Disadvantage of Functional Point Analysis


Given below are some disadvantages of Functional Point Analysis:
1. Subjective Judgement: One of the main disadvantages of Functional Point Analysis is its dependency on subjective judgement, i.e. relying on personal opinions and interpretations instead of clear, measurable standards.
2. Low Accuracy: Its evaluation accuracy can be low because of this dependence on subjective judgement.
3. Time Consuming: Functional Point Analysis is a time-consuming process, particularly during the initial stages of implementation.
4. Steep Learning Curve: Learning FPA can be challenging due to its complexity and the length
of time required to gain proficiency.
5. Less Research Data: Compared to LOC-based metrics, there is relatively less research data
available on function points.
6. Costly: The need for thorough analysis and evaluation can result in increased project timelines
and associated costs.

Characteristics of Functional Point Analysis
 We can calculate the functional point with the help of the number of functions and types of
functions used in applications. These are classified into five types:

Types of FP Attributes or Information Domain Characteristics


Measurement Parameters                     Examples
Number of External Inputs (EI)             Input screens and tables
Number of External Outputs (EO)            Output screens and reports
Number of External Inquiries (EQ)          Prompts and interrupts
Number of Internal Files (ILF)             Databases and directories
Number of External Interfaces (EIF)        Shared databases and shared routines
 Functional Point helps in describing system complexity and also shows project timelines.
 It is majorly used for business systems like information systems.
 FP is language and technology independent, meaning it can be applied to software systems
developed using any programming language or technology stack.
 All the factors mentioned above are given weights, and these weights are determined through
practical experiments in the following table.

Weights of 5 Functional Point Attributes


Measurement Parameter                    Low    Average    High
Number of external inputs (EI)            3        4         6
Number of external outputs (EO)           4        5         7
Number of external inquiries (EQ)         3        4         6
Number of internal files (ILF)            7       10        15
Number of external interfaces (EIF)       5        7        10

The functional complexity of each parameter determines its weight, and the weighted counts add up to the Unadjusted Function Point (UFP) count of the subsystem. Consider the complexity as average for all cases. The table below shows how the counts are computed.
Measurement Parameter                    Count    Count x Average Weight    Simple    Average    Complex
Number of external inputs (EI)             32        32 * 4 = 128              3         4          6
Number of external outputs (EO)            60        60 * 5 = 300              4         5          7
Number of external inquiries (EQ)          24        24 * 4 = 96               3         4          6
Number of internal files (ILF)              8        8 * 10 = 80               7        10         15
Number of external interfaces (EIF)         2        2 * 7 = 14                5         7         10
Count-total →                                        618
From the above table, the Function Point (FP) is calculated with the following formula:
FP = Count-total * [0.65 + 0.01 * Σ(fi)] = Count-total * CAF
Here, Count-total is the Unadjusted Function Point (UFP) count taken from the table (618 in this example), and
CAF = [0.65 + 0.01 * Σ(fi)]
1. Σ(fi) is the sum of the ratings of all 14 complexity adjustment questions (value adjustment factors); together they determine the Complexity Adjustment Factor (CAF).
2. CAF varies from 0.65 to 1.35 and Σ(fi) ranges from 0 to 70.
3. When Σ(fi) = 0, CAF = 0.65 and when Σ(fi) = 70, CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35.
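
As a quick illustration, the whole calculation can be scripted. The following C++ sketch is a minimal example that uses the counts and average weights from the table above and assumes a rating of 3 for each of the 14 adjustment factors (the ratings are illustrative values, not part of the worked example):

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Counts and average weights for EI, EO, EQ, ILF, EIF (taken from the table above)
    std::vector<int> counts  = {32, 60, 24, 8, 2};
    std::vector<int> weights = {4, 5, 4, 10, 7};

    // Unadjusted Function Points: sum of count * weight
    int ufp = 0;
    for (std::size_t i = 0; i < counts.size(); ++i)
        ufp += counts[i] * weights[i];                      // 618 for this example

    // 14 complexity adjustment questions, each rated 0..5 (assumed all 3 here)
    std::vector<int> fi(14, 3);
    int sumFi = std::accumulate(fi.begin(), fi.end(), 0);   // 42

    double caf = 0.65 + 0.01 * sumFi;                       // Complexity Adjustment Factor
    double fp  = ufp * caf;                                  // adjusted Function Points

    std::cout << "UFP = " << ufp << ", CAF = " << caf << ", FP = " << fp << '\n';
    return 0;
}

With all ratings at 3, Σ(fi) = 42, so CAF = 0.65 + 0.42 = 1.07 and FP = 618 * 1.07 = 661.26.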

Questions on Functional Point


1. Consider a software project with the following information domain characteristic for the
calculation of function point metric.
Number of external inputs (I) = 30
Number of external output (O) = 60
Number of external inquiries (E) = 23
Number of files (F) = 08
Number of external interfaces (N) = 02

It is given that the complexity weighting factors for I, O, E, F, and N are 4, 5, 4, 10, and 7,
respectively. It is also given that, out of fourteen value adjustment factors that influence the
development effort, four factors are not applicable, each of the other four factors has value 3,
and each of the remaining factors has value 4. The computed value of the function point
metric is _____. [GATE CS 2015]
(A) 612.06
(B) 404.66
(C) 305.09
(D) 806.9
Solution: Correct Answer is (A).
UFP = 30*4 + 60*5 + 23*4 + 8*10 + 2*7 = 120 + 300 + 92 + 80 + 14 = 606.
Σ(fi) = (4 * 0) + (4 * 3) + (6 * 4) = 36, so CAF = 0.65 + 0.01 * 36 = 1.01.
FP = 606 * 1.01 = 612.06.
For more, refer to GATE CS 2015 | Question 65.

2. While estimating the cost of the software, Lines of Code(LOC) and Function Points (FP)
are used to measure which of the following? [UGC-NET CSE 2013]
(A) Length of Code
(B) Size of Software
(C) Functionality of Software
(D) None of the Above
Solution: Correct Answer is (B).

3. In functional point analysis, the number of complexity adjustment factors is [UGC-NET CS
2014]
(A) 10
(B) 12
(C) 14
(D) 20
Solution: Correct Answer is (C).
Conclusion
Functional Point Analysis (FPA) offers a structured approach to measure the size and complexity of
software systems based on their functionality. By categorizing functions and assigning weights,
FPA provides an objective measurement that helps in estimating project timelines, resource
requirements, and overall system complexity. It focuses on user-centric features, making it valuable
for business systems like management information systems (MIS).

Frequently Asked Questions (FAQs) on Functional Point (FP) Analysis


1. What do you mean by Functional Point?
Ans: Functional Point basically determines the size of the application system on the basis of the
functionality of the system.

2. How do you find the Functional Point?


Ans: The Function Point is calculated from the total count (TC) and the complexity adjustment factor, using the formula FP = TC * [0.65 + 0.01 * Σ(fi)].

3. List the five components of the Functional Point?


Ans: The five components of the functional point are listed below:
1. Internal Logical Files (ILF)
2. External Interface Files (EIF)
3. External Inputs (EI)
4. External Outputs (EO)
5. External Enquiries (EQ)

Lines of Code (LOC) in Software Engineering


A line of code (LOC) is any line of program text that is not a comment or a blank line, including header lines, regardless of the number of statements or fragments of statements on the line. LOC includes all lines containing variable declarations as well as executable and non-executable statements.

As Lines of Code (LOC) only counts the volume of code, you can only use it to compare or
estimate projects that use the same language and are coded using the same coding standards.
Features of Lines of Code (LOC)
 Change Tracking: Variations in LOC as time passes can be tracked to analyze the growth or
reduction of a codebase, providing insights into project progress.
 Limited Representation of Complexity: Although LOC provides a general idea of code size, it does not accurately reflect code complexity. Two programs with the same LOC can differ greatly in complexity.
 Ease of Computation: LOC is an easy measure to obtain because it is easy to calculate and
takes little time.
 Easy to Understand: The idea of expressing code size in terms of lines is one that
stakeholders, even those who are not technically inclined, can easily understand.

Advantages of Lines of Code (LOC)


 Effort Estimation: LOC is occasionally used to estimate development efforts and project
deadlines at a high level. Although caution is necessary, project planning can begin with this.

 Comparative Analysis: High-level productivity comparisons between several projects or
development teams can be made using LOC. It might provide an approximate figure of the
volume of code generated over a specific time frame.
 Benchmarking Tool: When comparing various iterations of the same program, LOC can be used
as a benchmarking tool. It can provide information on how modifications affect the codebase’s total size.
Disadvantages of Lines of Code (LOC)
 Challenges in Agile Work Environments: Focusing on initial LOC estimates may not
adequately reflect the iterative and dynamic nature of development in agile development, as
requirements may change.
 Not Taking Into Account External Libraries: Code from other libraries or frameworks, which can greatly enhance a project’s overall usefulness, is not taken into account by LOC.
 Challenges with Maintenance: Higher LOC codebases are larger codebases that typically
demand more maintenance work.

Research has shown a rough correlation between LOC and the overall cost and length of developing
a project/ product in Software Development and between LOC and the number of defects. This
means the lower your LOC measurement is, the better off you probably are in the development of
your product.
Let’s take an example and check how the Line of code works in the simple sorting program given
below:
void selSort(int x[], int n) {
    // Below function sorts an array in ascending order
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++)
            if (x[j] < x[min])
                min = j;
        temp = x[i];
        x[i] = x[min];
        x[min] = temp;
    }
}
So, now If LOC is simply a count of the number of lines then the above function shown contains 13
lines of code (LOC). But when comments and blank lines are ignored, the function shown above
contains 12 lines of code (LOC).

Let’s take another example and check how the Lines of Code count works for the program given below:
int main()
{
    int fN, sN, sum;
    cout << "Enter the 2 integers: ";
    cin >> fN >> sN;
    // sum of the two numbers is stored in variable sum
    sum = fN + sN;
    // Prints sum
    cout << fN << " + " << sN << " = " << sum;
    return 0;
}

Here also, if LOC is simply a count of the number of lines, then the function shown above contains 11 lines of code (LOC). But when comments and blank lines are ignored, it contains 9 lines of code (LOC).
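
Counting LOC in this way is easy to automate. The following C++ sketch is a minimal counter that skips blank lines and full-line // comments (the input file name program.cpp is an assumed example, and a real counter would also need to handle /* ... */ block comments and other conventions):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("program.cpp");   // assumed example input file
    std::string line;
    int total = 0, loc = 0;

    while (std::getline(in, line)) {
        ++total;                                          // every physical line
        std::size_t pos = line.find_first_not_of(" \t");  // first non-blank character
        if (pos == std::string::npos) continue;           // blank line: not counted
        if (line.compare(pos, 2, "//") == 0) continue;    // full-line comment: not counted
        ++loc;                                            // counted as a line of code
    }

    std::cout << "Physical lines: " << total << ", LOC: " << loc << '\n';
    return 0;
}

Run against the second example above, such a counter would report 11 physical lines and 9 LOC.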

Requirements Engineering Process in Software Engineering
Requirements Engineering is the process of identifying, eliciting, analyzing, specifying, validating,
and managing the needs and expectations of stakeholders for a software system.

In this article, we’ll learn about its process, advantages, and disadvantages.

What is Requirements Engineering?


A systematic and strict approach to the definition, creation, and verification of requirements for a
software system is known as requirements engineering. To guarantee the effective creation of a
software product, the requirements engineering process entails several tasks that help in
understanding, recording, and managing the demands of stakeholders.
Requirements Engineering Process
1. Feasibility Study
2. Requirements elicitation
3. Requirements specification
4. Requirements verification and validation
5. Requirements management

1. Feasibility Study
The feasibility study mainly concentrates on the five areas mentioned below. Among these, the Economic Feasibility Study is the most important part of the feasibility analysis, while the Legal Feasibility Study is the least considered.
1. Technical Feasibility: In Technical Feasibility, the current resources, both hardware and software, along with the required technology, are analyzed/assessed to determine whether the project can be developed. This feasibility study reports whether the required resources and technologies are available for project development. It also analyzes the technical skills and capabilities of the technical team, whether existing technology can be used or not, and whether maintenance and up-gradation of the chosen technology are easy or not.
2. Operational Feasibility: In Operational Feasibility, the degree to which the proposed system will satisfy the requirements is analyzed, along with how easy the product will be to operate and maintain after deployment. Other operational aspects include determining the usability of the product and whether the solution suggested by the software development team is acceptable or not.
3. Economic Feasibility: In the Economic Feasibility study, the cost and benefit of the project are analyzed. A detailed analysis is carried out of the cost of developing the project, which includes all required costs for final development, such as hardware and software resources, design and development costs, and operational costs. It is then analyzed whether the project will be financially beneficial for the organization (a small illustrative cost-benefit sketch is given after this list).
4. Legal Feasibility: In legal feasibility, the project is ensured to comply with all relevant laws,
regulations, and standards. It identifies any legal constraints that could impact the project and
reviews existing contracts and agreements to assess their effect on the project’s execution.
Additionally, legal feasibility considers issues related to intellectual property, such as patents
and copyrights, to safeguard the project’s innovation and originality.
5. Schedule Feasibility: In schedule feasibility, the project timeline is evaluated to determine if it
is realistic and achievable. Significant milestones are identified, and deadlines are established to
track progress effectively. Resource availability is assessed to ensure that the necessary
resources are accessible to meet the project schedule. Furthermore, any time constraints that
might affect project delivery are considered to ensure timely completion. This focus on
schedule feasibility is crucial for the successful planning and execution of a project.
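
To give a feel for the arithmetic behind an economic feasibility (cost-benefit) check, the following C++ sketch computes a simple return on investment and payback period; all figures are invented for illustration, and a real study would use far more detailed cost models:

#include <iostream>

int main() {
    // Assumed example figures, all in the same currency unit
    double developmentCost     = 500000;   // hardware, software, design and development
    double annualOperatingCost = 50000;    // yearly operating and maintenance cost
    double annualBenefit       = 300000;   // expected yearly benefit after deployment
    int years = 3;                         // evaluation horizon

    double totalCost    = developmentCost + annualOperatingCost * years;
    double totalBenefit = annualBenefit * years;
    double roi          = (totalBenefit - totalCost) / totalCost;              // return on investment
    double paybackYears = developmentCost / (annualBenefit - annualOperatingCost);

    std::cout << "ROI over " << years << " years: " << roi * 100 << "%\n";
    std::cout << "Payback period: " << paybackYears << " years\n";
    return 0;
}

With these example numbers the project costs 650,000 in total and returns 900,000 in benefits, giving an ROI of about 38% and a payback period of 2 years.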
2. Requirements Elicitation
It is related to the various ways used to gain knowledge about the project domain and requirements.
The various sources of domain knowledge include customers, business manuals, the existing
software of the same type, standards, and other stakeholders of the project. The techniques used for
requirements elicitation include interviews, brainstorming, task analysis, Delphi technique,

prototyping, etc. Some of these are discussed here. Elicitation does not produce formal models of
the requirements understood. Instead, it widens the domain knowledge of the analyst and thus helps
in providing input to the next stage.

Requirements elicitation is the process of gathering information about the needs and expectations of
stakeholders for a software system. This is the first step in the requirements engineering process
and it is critical to the success of the software development project. The goal of this step is to
understand the problem that the software system is intended to solve and the needs and expectations
of the stakeholders who will use the system.

Several techniques can be used to elicit requirements, including:


 Interviews: These are one-on-one conversations with stakeholders to gather information about
their needs and expectations.
 Surveys: These are questionnaires that are distributed to stakeholders to gather information
about their needs and expectations.
 Focus Groups: These are small groups of stakeholders who are brought together to discuss
their needs and expectations for the software system.
 Observation: This technique involves observing the stakeholders in their work environment to
gather information about their needs and expectations.
 Prototyping: This technique involves creating a working model of the software system, which
can be used to gather feedback from stakeholders and to validate requirements.

It’s important to document, organize, and prioritize the requirements obtained from all these
techniques to ensure that they are complete, consistent, and accurate.

3. Requirements Specification
This activity is used to produce formal software requirement models. All the requirements including
the functional as well as the non-functional requirements and the constraints are specified by these
models in totality. During specification, more knowledge about the problem may be required which
can again trigger the elicitation process. The models used at this stage include ER diagrams, data
flow diagrams(DFDs), function decomposition diagrams(FDDs), data dictionaries, etc.

Requirements specification is the process of documenting the requirements identified in the analysis
step in a clear, consistent, and unambiguous manner. This step also involves prioritizing and
grouping the requirements into manageable chunks.

The goal of this step is to create a clear and comprehensive document that describes the
requirements for the software system. This document should be understandable by both the
development team and the stakeholders.

Several types of requirements are commonly specified in this step, including


1. Functional Requirements: These describe what the software system should do. They specify
the functionality that the system must provide, such as input validation, data storage, and user
interface.
2. Non-Functional Requirements : These describe how well the software system should do it.
They specify the quality attributes of the system, such as performance, reliability, usability, and
security.
3. Constraints: These describe any limitations or restrictions that must be considered when
developing the software system.
4. Acceptance Criteria: These describe the conditions that must be met for the software system to
be considered complete and ready for release.

To make the requirements specification clear, the requirements should be written in a natural
language and use simple terms, avoiding technical jargon, and using a consistent format throughout

the document. It is also important to use diagrams, models, and other visual aids to help
communicate the requirements effectively.

Once the requirements are specified, they must be reviewed and validated by the stakeholders and
development team to ensure that they are complete, consistent, and accurate.

4. Requirements Verification and Validation


Verification: It refers to the set of tasks that ensures that the software correctly implements a
specific function.

Validation: It refers to a different set of tasks that ensures that the software that has been built is
traceable to customer requirements. If requirements are not validated, errors in the requirement
definitions would propagate to the successive stages resulting in a lot of modification and rework.

The main steps for this process include:


1. The requirements should be consistent with all the other requirements i.e. no two requirements
should conflict with each other.
2. The requirements should be complete in every sense.
3. The requirements should be practically achievable.

Reviews, buddy checks, making test cases, etc. are some of the methods used for this.
Requirements verification and validation (V&V) is the process of checking that the requirements
for a software system are complete, consistent, and accurate and that they meet the needs and
expectations of the stakeholders. The goal of V&V is to ensure that the software system being
developed meets the requirements and that it is developed on time, within budget, and to the
required quality.
1. Verification is checking that the requirements are complete, consistent, and accurate. It involves
reviewing the requirements to ensure that they are clear, testable, and free of errors and
inconsistencies. This can include reviewing the requirements document, models, and diagrams,
and holding meetings and walkthroughs with stakeholders.
2. Validation is the process of checking that the requirements meet the needs and expectations of
the stakeholders. It involves testing the requirements to ensure that they are valid and that the
software system being developed will meet the needs of the stakeholders. This can include
testing the software system through simulation, testing with prototypes, and testing with the
final version of the software.
3. Verification and Validation is an iterative process that occurs throughout the software
development life cycle. It is important to involve stakeholders and the development team in the
V&V process to ensure that the requirements are thoroughly reviewed and tested.

It’s important to note that V&V is not a one-time process, but it should be integrated and continue
throughout the software development process and even in the maintenance stage.

5. Requirements Management
Requirement management is the process of analyzing, documenting, tracking, prioritizing, and
agreeing on the requirement and controlling the communication with relevant stakeholders. This
stage takes care of the changing nature of requirements. It should be ensured that the SRS is as
modifiable as possible to incorporate changes in requirements specified by the end users at later
stages too. Modifying the software as per requirements in a systematic and controlled manner is an
extremely important part of the requirements engineering process.
Requirements management is the process of managing the requirements throughout the software
development life cycle, including tracking and controlling changes, and ensuring that the
requirements are still valid and relevant. The goal of requirements management is to ensure that the
software system being developed meets the needs and expectations of the stakeholders and that it is
developed on time, within budget, and to the required quality.

Several key activities are involved in requirements management, including:
1. Tracking and controlling changes: This involves monitoring and controlling changes to the
requirements throughout the development process, including identifying the source of the
change, assessing the impact of the change, and approving or rejecting the change.
2. Version control: This involves keeping track of different versions of the requirements
document and other related artifacts.
3. Traceability: This involves linking the requirements to other elements of the development process, such as design, testing, and validation (a small illustrative sketch of a traceability map is given after this list).
4. Communication: This involves ensuring that the requirements are communicated effectively to
all stakeholders and that any changes or issues are addressed promptly.
5. Monitoring and reporting: This involves monitoring the progress of the development process
and reporting on the status of the requirements.
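
As a minimal sketch of what requirement traceability can look like in practice, the following C++ example maps hypothetical requirement IDs to the design elements and test cases that cover them (all identifiers are invented for illustration):

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical requirement IDs mapped to the design and test artifacts that cover them
    std::map<std::string, std::vector<std::string>> traceability = {
        {"REQ-001", {"DES-UserLogin", "TC-101", "TC-102"}},
        {"REQ-002", {"DES-ReportModule", "TC-201"}},
        {"REQ-003", {}}   // no coverage yet: a gap the team should address
    };

    for (const auto& [req, links] : traceability) {
        std::cout << req << " -> ";
        if (links.empty()) std::cout << "NOT TRACED";
        for (const auto& item : links) std::cout << item << ' ';
        std::cout << '\n';
    }
    return 0;
}

An empty entry such as REQ-003 immediately exposes a requirement that is not yet traced to any design or test artifact.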

Requirements management is a critical step in the software development life cycle as it helps to
ensure that the software system being developed meets the needs and expectations of stakeholders
and that it is developed on time, within budget, and to the required quality. It also helps to prevent
scope creep and to ensure that the requirements are aligned with the project goals.

Tools Involved in Requirement Engineering


 Observation report
 Questionnaire ( survey, poll )
 Use cases
 User stories
 Requirement workshop
 Mind mapping
 Roleplaying
 Prototyping

Advantages of Requirements Engineering Process


 Helps ensure that the software being developed meets the needs and expectations of the
stakeholders
 Can help identify potential issues or problems early in the development process, allowing for adjustments to be made before they become significant.
 Helps ensure that the software is developed in a cost-effective and efficient manner
 Can improve communication and collaboration between the development team and stakeholders
 Helps to ensure that the software system meets the needs of all stakeholders.
 Provides an unambiguous description of the requirements, which helps to reduce
misunderstandings and errors.
 Helps to identify potential conflicts and contradictions in the requirements, which can be
resolved before the software development process begins.
 Helps to ensure that the software system is delivered on time, within budget, and to the required
quality standards.
 Provides a solid foundation for the development process, which helps to reduce the risk of
failure.

Disadvantages of Requirements Engineering Process


 Can be time-consuming and costly, particularly if the requirements-gathering process is not
well-managed
 Can be difficult to ensure that all stakeholders’ needs and expectations are taken into account
 It Can be challenging to ensure that the requirements are clear, consistent, and complete
 Changes in requirements can lead to delays and increased costs in the development process.
 As a best practice, Requirements engineering should be flexible, adaptable, and should be
aligned with the overall project goals.
 It can be time-consuming and expensive, especially if the requirements are complex.

 It can be difficult to elicit requirements from stakeholders who have different needs and
priorities.
 Requirements may change over time, which can result in delays and additional costs.
 There may be conflicts between stakeholders, which can be difficult to resolve.
 It may be challenging to ensure that all stakeholders understand and agree on the requirements.

Stages in Software Engineering Process


Requirements engineering is a critical process in software engineering that involves identifying,
analyzing, documenting, and managing the requirements of a software system. The requirements
engineering process consists of the following stages:
 Elicitation: In this stage, the requirements are gathered from various stakeholders such as
customers, users, and domain experts. The aim is to identify the features and functionalities that
the software system should provide.
 Analysis: In this stage, the requirements are analyzed to determine their feasibility,
consistency, and completeness. The aim is to identify any conflicts or contradictions in the
requirements and resolve them.
 Specification: In this stage, the requirements are documented in a clear, concise, and
unambiguous manner. The aim is to provide a detailed description of the requirements that can
be understood by all stakeholders.
 Validation: In this stage, the requirements are reviewed and validated to ensure that they meet
the needs of all stakeholders. The aim is to ensure that the requirements are accurate, complete,
and consistent.
 Management: In this stage, the requirements are managed throughout the software
development lifecycle. The aim is to ensure that any changes or updates to the requirements are
properly documented and communicated to all stakeholders.
Effective requirements engineering is crucial to the success of software development projects. It helps ensure that the software system meets the needs of all stakeholders and is delivered on time, within budget, and to the required quality standards.

Conclusion
As the project develops and new information becomes available, the iterative requirements
engineering process may involve going back and reviewing earlier phases. Throughout the process,
stakeholders in the project must effectively communicate and collaborate to guarantee that the
software system satisfies user needs and is in line with the company’s overall goals.

Requirements Engineering Process – FAQs


1. What is requirements engineering?
Requirements engineering is the process of identifying, analyzing, documenting, and managing the
needs and expectations of stakeholders for a software system.
2. Why is requirements engineering important?
It ensures that the software meets the needs of its users, is delivered on time, within budget, and to
the required quality standards.
3. What are the main steps in the requirements engineering process?
The main steps are feasibility study, requirements elicitation, requirements specification,
requirements verification and validation, and requirements management.
4. How does requirements validation differ from requirements verification?
Verification checks that the requirements are correctly specified and error-free, while validation
ensures that the requirements meet the needs and expectations of the stakeholders.
5. What techniques are used to gather requirements?
Techniques include interviews, surveys, focus groups, observation, and prototyping, which help
collect detailed information about stakeholder needs and expectations.

Classification of Software Requirements – Software Engineering

Classification of Software Requirements is important in the software development process. It organizes our requirements into different categories that make them easier to manage, prioritize, and track. The main types of Software Requirements are functional, non-functional, and domain requirements.

System Configuration Management – Software Engineering


Whenever software is built, there is always scope for improvement, and those improvements bring changes into the picture. Changes may be required to modify or update an existing solution or to create a new solution for a problem. Requirements keep changing, so systems must be upgraded continually based on the current requirements and needs to meet the desired outputs. Changes should be analyzed before they are made to the existing system, recorded before they are implemented, reported so that details of the state before and after the change are available, and controlled in a manner that improves quality and reduces errors. This is where the need for System Configuration Management arises. System Configuration Management (SCM) is a set of activities that controls change by identifying the items to be changed, establishing relationships among those items, defining mechanisms for managing different versions, controlling the changes being implemented in the current system, and auditing and reporting on the changes made. It is essential to control changes, because unchecked changes may end up undermining well-running software. In this way, SCM is a fundamental part of all project management activities.
Processes involved in SCM – Configuration management provides a disciplined environment for smooth
control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from products that compose
baselines at given points in time (a baseline is a set of mutually consistent Configuration Items, which has
been formally reviewed and agreed upon, and serves as the basis of further development). Establishing
relationships among items, creating a mechanism to manage multiple levels of control and procedure for
the change management system.
2. Version control – Creating versions/specifications of the existing product to build new products with the
help of the SCM system. A description of the version is given below:

Suppose after some
changes, the version of the configuration object changes from 1.0 to 1.1. Minor corrections and changes
result in versions 1.1.1 and 1.1.2, which is followed by a major update that is object 1.2. The development
of object 1.0 continues through 1.3 and 1.4, but finally, a noteworthy change to the object results in a new
evolutionary path, version 2.0. Both versions are currently supported.
3. Change control – Controlling changes to Configuration Items (CI). The change control process is explained below:

A change request (CR) is submitted and evaluated to assess technical merit, potential side effects, the
overall impact on other configuration objects and system functions, and the projected cost of the change.
The results of the evaluation are presented as a change report, which is used by a change control board
(CCB) —a person or group who makes a final decision on the status and priority of the change. An
engineering change Request (ECR) is generated for each approved change. Also, CCB notifies the

developer in case the change is rejected with proper reason. The ECR describes the change to be made,
the constraints that must be respected, and the criteria for review and audit. The object to be changed is
“checked out” of the project database, the change is made, and then the object is tested again. The object
is then “checked in” to the database and appropriate version control mechanisms are used to create the
next version of the software. A small illustrative sketch of this workflow is given after this list.
4. Configuration auditing – A software configuration audit complements the formal technical review of the
process and product. It focuses on the technical correctness of the configuration object that has been
modified. The audit confirms the completeness, correctness, and consistency of items in the SCM system
and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to developers, testers, end users,
customers, and stakeholders through admin guides, user guides, FAQs, Release notes, Memos, Installation
Guide, Configuration guides, etc.
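
To make the change-control flow in step 3 concrete, here is a small, hedged C++ sketch that models a change request moving through the states described above (the state names and the approval flag are illustrative assumptions, not part of any standard SCM tool or API):

#include <iostream>
#include <string>

// Illustrative states for a change request in the workflow described in step 3
enum class CRState { Submitted, Evaluated, Rejected, CheckedOut, Tested, CheckedIn };

// Walks one change request through evaluation, CCB decision, and check-out/check-in
CRState processChangeRequest(const std::string& id, bool approvedByCCB) {
    CRState state = CRState::Submitted;   // change request (CR) submitted
    state = CRState::Evaluated;           // change report prepared for the CCB
    if (!approvedByCCB) {
        std::cout << id << ": rejected by CCB; developer notified with the reason.\n";
        return CRState::Rejected;
    }
    // An engineering change request (ECR) is generated for the approved change
    state = CRState::CheckedOut;          // object checked out of the project database
    state = CRState::Tested;              // change made and the object re-tested
    state = CRState::CheckedIn;           // checked in; version control creates the next version
    std::cout << id << ": change checked in as a new version.\n";
    return state;
}

int main() {
    processChangeRequest("CR-101", true);    // example: CCB approves the change
    processChangeRequest("CR-102", false);   // example: CCB rejects the change
    return 0;
}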
System Configuration Management (SCM) is a software engineering practice that focuses on managing the
configuration of software systems and ensuring that software components are properly controlled, tracked, and
stored. It is a critical aspect of software development , as it helps to ensure that changes made to a software
system are properly coordinated and that the system is always in a known and stable state.
SCM involves a set of processes and tools that help to manage the different components of a software
system, including source code, documentation, and other assets. It enables teams to track changes made to
the software system, identify when and why changes were made, and manage the integration of these
changes into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to issues that have been reported, makes bug tracking
more effective.
2. Continuous Deployment and Integration: SCM combines with continuous processes to automate
deployment and testing, resulting in more dependable and timely software delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws by assisting in the early detection
and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code modifications for big projects, fostering a well-organized development process.
5. Reproducibility: By recording the precise versions of code, libraries, and dependencies, SCM makes builds reproducible.
6. Parallel Development: SCM facilitates parallel development by enabling several developers to collaborate on various branches at once.
Why need for System configuration management?
1. Replicability: SCM ensures that a software system can be reproduced at any stage of its development. This is necessary for testing, debugging, and upholding consistent environments in production, testing, and development.
2. Identification of Configuration: Source code, documentation, and executable files are examples of
configuration elements that SCM helps in locating and labeling. The management of a system’s constituent
parts and their interactions depend on this identification.
3. Effective Process of Development: By automating monotonous processes like managing dependencies,
merging changes, and resolving disputes, SCM simplifies the development process. Error risk is decreased
and efficiency is increased because of this automation.
Key objectives of SCM
1. Control the evolution of software systems: SCM helps to ensure that changes to a software system are
properly planned, tested, and integrated into the final product.
2. Enable collaboration and coordination: SCM helps teams to collaborate and coordinate their work,
ensuring that changes are properly integrated and that everyone is working from the same version of the
software system.
3. Provide version control: SCM provides version control for software systems, enabling teams to manage
and track different versions of the system and to revert to earlier versions if necessary.
4. Facilitate replication and distribution: SCM helps to ensure that software systems can be easily
replicated and distributed to other environments, such as test, production, and customer sites.
SCM is a critical component of software development, and effective SCM practices can help to improve the quality and reliability of software systems, as well as increase efficiency and reduce the risk of errors.
The main advantages of SCM
1. Improved productivity and efficiency by reducing the time and effort required to manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes are properly tested and validated.
3. Increased collaboration and communication among team members by providing a central repository for
software artifacts.
4. Improved quality and stability of software systems by ensuring that all changes are properly controlled and
managed.
The main disadvantages of SCM
1. Increased complexity and overhead, particularly in large software systems.
2. Difficulty in managing dependencies and ensuring that all changes are properly integrated.

3. Potential for conflicts and delays, particularly in large development teams with multiple contributors.

Objectives of Software Configuration Management



Software Configuration Management (SCM) is an umbrella activity that is applied throughout the software process. It manages and tracks the emerging product and its versions, and it identifies and controls the configuration of the software, hardware, and tools used throughout the development cycle. SCM ensures that everyone involved in the software process knows what is being designed, developed, built, tested, and delivered.
Objectives of SCM Standards: The major objectives of software configuration standards are described below:

1. Remote System Administration :


 For the remote system administration tools, the configuration standard should include the necessary software and/or privileges.
 The cornerstone on the client side is a remote administration client that is correctly installed
and configured for the remotely administered network.
 These remote tools can be used to check the version of virus protection, check machine
configuration or offer remote help-desk functionality.
2. Reduced User Downtime :
 A great advantage of using a standard configuration is that systems become completely
interchangeable resulting in reduced user downtime.
 On experiencing an unrecoverable error, an identical new system can be dropped into place.
 User data can be transferred if the non-functional machine is still accessible, or the most recent copy can be pulled off the backup tape, with the ultimate goal being that the user experiences little change in the system interface or the software installed.
3. Reliable Data Backups :
 Using a standard directory for user data allows the backup system to selectively back up a small portion of a machine, greatly reducing the network traffic and tape usage for backup systems.
 A divided directory structure, between system and user data, is one of the main goals of the
configuration standards.
4. Easy workstation setup :
 A standardized configuration of any sort will streamline the process of setting up the system and ensures that vital components are available.
 If multiple machines are being set up according to a standard configuration, most of the setup steps can be automated.
5. Multi-user Support :
 It is not uncommon for users to share a workstation, so the system configuration is designed in such a way that multiple users can use the same workstation without interfering with one another.
 Some software packages do not support completely independent settings for all users; however, users can still have independent data areas.
 The directory structure used should not impose limits on the number of independent users a system can have.
6. Remote Software Installation :

 Most modern software packages install into pre-defined directories set at the factory. This type of installation is fine for a single user, but for a collection of machines it leads to non-uniform configurations.
 A good configuration standard will have software installed in specific directory areas to logically divide the software on the disk.
 With the help of universal scripts, it becomes easy to identify the installed components and to automate installation procedures.
 As software is installed into specific directories, maintaining and upgrading the running software becomes less complex.
Software Quality Assurance – Software Engineering

Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set
of activities that ensure processes, procedures as well as standards are suitable for the project and
implemented correctly.
Software Quality Assurance is a process that works parallel to Software Development. It focuses on
improving the process of development of software so that problems can be prevented before they
become major issues. Software Quality Assurance is a kind of Umbrella activity that is applied
throughout the software process.
What is quality?
Quality in a product or service can be defined by several measurable characteristics. Each of these
characteristics plays a crucial role in determining the overall quality.


Software Quality Assurance (SQA) encompasses:
 An SQA process
 Specific quality assurance and quality control tasks (including technical reviews and a multi-tiered testing strategy)
 Effective software engineering practice (methods and tools)
 Control of all software work products and the changes made to them
 A procedure to ensure compliance with software development standards (when applicable)
 Measurement and reporting mechanisms
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have produced a broad array of software
engineering standards and related documents. The job of SQA is to ensure that standards that have been
adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity performed by software engineers for
software engineers. Their intent is to uncover errors. Audits are a type of review performed by SQA
personnel (people employed in an organization) with the intent of ensuring that quality guidelines are being
followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary goal: to find errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted so that it achieves this primary goal.
4. Error/defect collection and analysis : SQA collects and analyzes error and defect data to better
understand how errors are introduced and what software engineering activities are best suited to
eliminating them.
5. Change management: SQA ensures that adequate change management practices have been instituted.
6. Education: Every software organization wants to improve its software engineering practices. A key
contributor to improvement is education of software engineers, their managers, and other stakeholders. The
SQA organization takes the lead in software process improvement and is a key proponent and sponsor of educational programs.
7. Security management: SQA ensures that appropriate process and technology are used to achieve
software security.
8. Safety: SQA may be responsible for assessing the impact of software failure and for initiating those steps
required to reduce risk.
9. Risk management : The SQA organization ensures that risk management activities are properly conducted
and that risk-related contingency plans have been established.
Software Quality Assurance (SQA) focuses
The Software Quality Assurance (SQA) focuses on the following


 Software’s portability: Software’s portability refers to its ability to be easily transferred or adapted to
different environments or platforms without needing significant modifications. This ensures that the software
can run efficiently across various systems, enhancing its accessibility and flexibility.
 software’s usability: Usability of software refers to how easy and intuitive it is for users to interact with
and navigate through the application. A high level of usability ensures that users can effectively accomplish
their tasks with minimal confusion or frustration, leading to a positive user experience.
 software’s reusability: Reusability in software development involves designing components or modules
that can be reused in multiple parts of the software or in different projects. This promotes efficiency and
reduces development time by eliminating the need to reinvent the wheel for similar functionalities,
enhancing productivity and maintainability.

 software’s correctness: Correctness of software refers to its ability to produce the desired results under
specific conditions or inputs. Correct software behaves as expected without errors or unexpected
behaviors, meeting the requirements and specifications defined for its functionality.
 software’s maintainability: Maintainability of software refers to how easily it can be modified, updated, or
extended over time. Well-maintained software is structured and documented in a way that allows
developers to make changes efficiently without introducing errors or compromising its stability.
 software’s error control: Error control in software involves implementing mechanisms to detect, handle,
and recover from errors or unexpected situations gracefully. Effective error control ensures that the
software remains robust and reliable, minimizing disruptions to users and providing a smoother experience
overall.
Software Quality Assurance (SQA) Includes
1. A quality management approach.
2. Formal technical reviews.
3. Multi testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanism.
Major Software Quality Assurance (SQA) Activities
1. SQA Management Plan: Make a plan for how you will carry out the SQA throughout the project. Think
about which set of software engineering activities is best for the project, and check the skill level of the SQA team.
2. Set The Check Points: SQA team should set checkpoints. Evaluate the performance of the project on the
basis of collected data on different check points.
3. Measure Change Impact: A change made to correct an error sometimes re-introduces more errors, so keep a measure of the impact of each change on the project. Re-test the change to check the compatibility of the fix with the whole project.
4. Multi-testing Strategy: Do not depend on a single testing approach. When multiple testing approaches are available, use them.
5. Manage Good Relations: In the working environment, maintaining good relations with the other teams involved in project development is mandatory. A bad relationship between the SQA team and the programming team will directly and badly impact the project. Don’t play politics.
6. Maintaining records and reports: Comprehensively document and share all QA records, including test
cases, defects, changes, and cycles, for stakeholder awareness and future reference.
7. Review software engineering activities: The SQA group identifies and documents the processes. The group also verifies the correctness of the software product.
8. Formalize deviation handling: Track and document software deviations meticulously. Follow established
procedures for handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high quality software.
2. High quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA is beneficial in the condition of no maintenance for a long time.
5. High quality commercial software increase market share of company.
6. Improving the process of creating software.
7. Improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your company can forget about it and
move on to the next big thing. Release a product with chronic issues, and your business bogs down in a
costly, time-consuming, never-ending cycle of repairs.
Disadvantage of Software Quality Assurance (SQA)
There are a number of disadvantages of quality assurance.
 Cost: Quality assurance requires adding more resources, which increases the budget needed for the betterment of the product.
 Time Consuming: Testing and deployment of the project take more time, which can cause delays in the project.
 Overhead : SQA processes can introduce administrative overhead, requiring documentation, reporting, and
tracking of quality metrics. This additional administrative burden can sometimes outweigh the benefits,
especially for smaller projects.
 Resource Intensive : SQA requires skilled personnel with expertise in testing methodologies, tools, and
quality assurance practices. Acquiring and retaining such talent can be challenging and expensive.
 Resistance to Change : Some team members may resist the implementation of SQA processes, viewing
them as bureaucratic or unnecessary. This resistance can hinder the adoption and effectiveness of quality
assurance practices within an organization.
 Not Foolproof : Despite thorough testing and quality assurance efforts, software can still contain defects or
vulnerabilities. SQA cannot guarantee the elimination of all bugs or issues in software products.
 Complexity : SQA processes can be complex, especially in large-scale projects with multiple stakeholders,
dependencies, and integration points. Managing the complexity of quality assurance activities requires
careful planning and coordination.

Conclusion
Software Quality Assurance (SQA) plays a very important role in ensuring the quality, reliability, and efficiency of the product. Implementing these control processes improves the software engineering process. SQA yields a higher-quality product that helps meet user expectations. It also has some drawbacks, such as cost and time consumption, but once the SQA process is in place it improves reliability and reduces future maintenance costs.
Overall, Software Quality Assurance (SQA) is important for successful project development in Software Engineering.
Frequently Asked Questions
What does Software Quality Assurance (SQA) do in software development?
SQA makes sure that the software is built according to the requirements and checks how it is built.
How does Software Quality Assurance (SQA) help software work better?
SQA finds faults in the software before it is used, which helps make the software more trustworthy.
What parts are important in Software Quality Assurance (SQA)?
SQA checks that the software follows standards, manages changes, verifies that the software works well, educates teams, ensures security, and handles risk.
What is Monitoring and Control in Project Management?

Monitoring and control is one of the key processes in project management and has great significance in making sure that business goals are achieved successfully. All the main points and sub-points are covered in detail below.

The ability these processes provide to supervise, make informed decisions, and adjust in response to changes during the project life cycle is critical.
What is Monitoring Phase in Project Management?
Monitoring in project management is the systematic process of observing, measuring, and evaluating
activities, resources, and progress to verify that a given asset has been developed according to the terms set
out. It is intended to deliver instant insights, detect deviations from the plan, and allow quick decision-making.
Purpose
1. Track Progress: Monitor the actual implementation of the project along with indicators such as designs, timelines, budgets, and standards.
2. Identify Risks and Issues: Identify other risks and possible issues in the early stage to create immediate
intervention measures as well as resolutions.
3. Ensure Resource Efficiency: Monitor how resources are being distributed and used to improve efficiency
while avoiding resource shortages.
4. Facilitate Decision-Making: Supply project managers and stakeholders with reliable and timely information for informed decision-making.
5. Enhance Communication: Encourage honest team communication and stakeholder engagement related to project status and challenges.
Key Activities
1. Performance Measurement: Identify and monitor critical performance indicators (KPIs) to compare the
progress of a project against defined targets.
2. Progress Tracking: Update schedules and timelines for the project on a regular basis, and compare actual
work with planned milestones to detect any delays or deviations.
3. Risk Identification and Assessment: Monitor actual risks, including their probability and consequences.
Find new risks and assess the performance of current risk mitigation mechanisms.
4. Issue Identification and Resolution: Point out problems discovered in the process of project
implementation, evaluate their scale and introduce corrective measures immediately.
5. Resource Monitoring: Track how resources are distributed and used, to ensure there is adequate
equipment as well as support by the team members in meeting their objectives.
6. Quality Assurance: Monitor compliance with quality standards and processes, reporting deviations to take
actions necessary for restoring the targeted level of quality.
7. Communication and Reporting: Disseminate project status updates, milestones reached and important
findings to the stakeholders on a regular basis.
8. Change Control: Review and evaluate project scope, schedule or budget changes. Adopt structured
change control processes to define, justify and approve changes.

9. Documentation Management: Make sure that project documentation is accurate, current and readily
available for ready reference. This involves project plans, reports and other documents related to a
particular project.
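To make performance measurement concrete, the following is a minimal sketch, in Python, of standard earned-value indicators (schedule variance, cost variance, SPI, and CPI) computed from planned value, earned value, and actual cost. The figures are hypothetical and purely illustrative; they are not taken from any specific project or tool mentioned in this article.

# Minimal earned-value sketch (illustrative figures only).
def earned_value_report(planned_value, earned_value, actual_cost):
    sv = earned_value - planned_value      # schedule variance
    cv = earned_value - actual_cost        # cost variance
    spi = earned_value / planned_value     # schedule performance index
    cpi = earned_value / actual_cost       # cost performance index
    return {"SV": sv, "CV": cv, "SPI": round(spi, 2), "CPI": round(cpi, 2)}

# Example: 40,000 planned, 35,000 earned, 42,000 spent so far.
print(earned_value_report(40000, 35000, 42000))
# SPI < 1 means behind schedule; CPI < 1 means over budget.

An SPI or CPI below 1 is exactly the kind of deviation that the monitoring activities above are meant to surface early.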
Tools and Technologies for Monitoring
1. Project Management Software: Tools such as Microsoft Project, Jira, and Trello offer features for
scheduling, resource monitoring, and task tracking.
2. Performance Monitoring Tools: Solutions from New Relic, AppDynamics, and Dynatrace cover
monitoring of application performance, infrastructure performance, and user experience.
3. Network Monitoring Tools: Tools such as SolarWinds Network Performance Monitor, Wireshark,
and PRTG Network Monitor help in monitoring and analyzing network performance.
4. Server and Infrastructure Monitoring Tools: Tools such as Nagios, Prometheus, and Zabbix monitor
servers, systems, and IT infrastructure for performance and availability.
5. Log Management Tools: Log analysis and visualization are performed using the ELK Stack (Elasticsearch,
Logstash, Kibana), Splunk, and Graylog.
6. Cloud Monitoring Tools: Amazon CloudWatch, Google Cloud Operations Suite, and Azure Monitor
provide monitoring solutions for cloud-based services and resources.
7. Security Monitoring Tools: Security Information and Event Management (SIEM) tools like Splunk, IBM QRadar, or
ArcSight support the monitoring of security events and incidents.
What is Control Phase in Project Management?
In project management, the control stage refers to taking corrective measures using data collected during
monitoring. It seeks to keep the project on track and in line with its purpose by resolving issues, minimizing
risks, and adopting appropriate modifications into plan documents for projects.
Purpose
1. Implement Corrective Actions: Take corrective actions in response to issues, risks, or deviations from the
project plan and put the project back on course.
2. Adapt to Changes: Accommodate changes in requirements, external parameters, or unforeseen
circumstances by altering project plans, resources, and strategies.
3. Optimize Resource Utilization: Prevent resource overruns or shortages that directly affect
project performance.
4. Ensure Quality and Compliance: Comply with quality standards, regulatory mandates, and project policies
to achieve the best possible results.
5. Facilitate Communication: Communicate changes, updates, and resolutions to stakeholders in order
to preserve transparency and cooperation throughout the project.
Key Activities
1. Issue Resolution: Respond to identified issues in a timely manner by instituting remedial measures. Work
with the project team to address obstacles that threaten progress.
2. Risk Mitigation: Execute risk response plans to limit the negative impact of identified risks.
Take proactive actions that reduce the probability or magnitude of potential problems (see the risk-exposure sketch after this list).
3. Change Management: Evaluate and put into practice the approved amendments to the project scope,
schedule or budget. Make sure that changes are reflected in the project plans.
4. Resource Adjustment: Optimize resource allocation based on project requirements and variability in the
workload. Make sure that team members are provided with adequate support in order to play their
respective roles efficiently.
5. Quality Control: Supervise and ensure that quality standards are followed. Ensure that project deliverables
comply with the stated requirements through quality control measures.
6. Performance Adjustment: Adjust project schedules, budgets and other resources according to monitoring
observations. Ensure alignment with project goals.
7. Communication of Changes: Share changes, updates, and resolutions to stakeholders via periodic
reports or project documents. Keep lines of communication open.
8. Documentation Management: Update project documentation for changes made in control phase. Record
decisions, actions taken and any changes to project plans.
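As a small illustration of the risk mitigation activity above, the following Python sketch scores risks by exposure (probability multiplied by impact) and sorts them so the highest-exposure items are addressed first. The risk entries and figures are hypothetical assumptions, and this is only one common way of prioritizing a risk register, not a method prescribed by this article.

# Hypothetical risk register; probability in [0, 1], impact in cost units.
risks = [
    {"name": "Key developer leaves", "probability": 0.2, "impact": 50000},
    {"name": "Third-party API delay", "probability": 0.5, "impact": 20000},
    {"name": "Scope creep", "probability": 0.4, "impact": 30000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]   # expected loss

# Highest exposure first: these get mitigation plans before the others.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["name"]}: exposure {r["exposure"]:.0f}')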
Tools and Technologies for Control
1. Project Management Software: Project plans, schedules, and tasks can be adjusted in Microsoft Project,
Jira, or Trello based on changes identified in the control phase.
2. Change Control Tools: ChangeScout, Prosci or integrated change management modules within project
management software allow for systematic changes.
3. Collaboration Platforms: Instruments such as Microsoft Teams, Slack or Asana enhance interaction and
cooperation; the platforms allow real-time information sharing between team members.
4. Version Control Systems: To control changes to project documentation and maintain versioning, tools such
as Git or Subversion are used.
5. Quality Management Tools: Quality control activities are facilitated by tools such as TestRail, Jira and
Quality Center to make sure the project deliverables meet predetermined quality standards.
6. Risk Management Software: Tools like RiskWatch, RiskTrak, or ARM (Active Risk Management) help in
monitoring and controlling risks and in implementing the corresponding mitigation strategies.
7. Resource Management Tools: There are tools such as ResourceGuru, LiquidPlanner or Smartsheet that
contribute to optimizing resource allocation and easing adjustments in the control phase.

8. Communication Platforms: Communication tools like Zoom, Microsoft Teams or Slack make it possible to
inform the stakeholders of changes, updates and resolutions in a timely manner.
Integrating Monitoring and Control
Seamless combination of the monitoring and control processes is necessary in project management for
successfully completed projects. While monitoring is concerned with the constant observation and
measurement of project activities, control refers to the corrective actions that arise from these insights. Together,
these two processes form a synergy that shapes an agile environment, promotes efficient decision-making, mitigates
risk, and ensures good project performance.
Here’s an in-depth explanation of how to effectively integrate monitoring and control:
1. Continuous Feedback Loop
Integration starts with a continuous feedback loop between monitoring and control. Measurement provides
real-time information on project progress, risks, and resource utilization, which forms the foundation for control
decision-making.
2. Establishing Key Performance Indicators (KPIs)
First, identify and track KPIs that are relevant to the project goals. These parameters act as performance
measures and deviation thresholds that give the control phase a basis for making corrections (a minimal sketch follows below).
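The following minimal Python sketch ties points 1 and 2 together: monitored KPI values are compared against planned values and a tolerance, and any deviation beyond the tolerance would trigger a corrective action in the control phase. The KPI names, figures, and tolerance are hypothetical assumptions used only to illustrate the feedback loop.

# Hypothetical planned vs. measured KPI values.
planned  = {"tasks_completed": 50, "budget_spent": 100000, "defects_open": 10}
measured = {"tasks_completed": 42, "budget_spent": 115000, "defects_open": 9}
TOLERANCE = 0.10   # allow 10% deviation before acting

for kpi, plan in planned.items():
    actual = measured[kpi]
    deviation = (actual - plan) / plan
    if abs(deviation) > TOLERANCE:
        # Control phase: a real project would raise an issue or change request here.
        print(f"{kpi}: {deviation:+.0%} deviation -> corrective action needed")
    else:
        print(f"{kpi}: within tolerance")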
3. Early Identification of Risks and Issues
With continuous monitoring, problems are identified at an early stage of their emergence. Through this
integration, the organization can be proactive: project teams can implement timely and effective
countermeasures, keeping these risks from becoming major issues.
4. Real-Time Data Analysis
During the monitoring phase, use sophisticated tools to analyze data in real time. Technologies such as
artificial intelligence, machine learning, and data analytics help reveal trends, patterns, and anomalies in
project dynamics for better control.
5. Proactive Change Management
Integration ensures that changes identified during monitoring flow smoothly into the control process. A good change
management process enables the assessment, approval, and implementation of changes without affecting
project stability.
6. Stakeholder Communication and Transparency
To achieve effective integration, communication must be transparent. Keep stakeholders
abreast of the project’s status, the changes made, and how issues were resolved. Proper communication ensures
everyone is aligned with the direction of the project and promotes synergy between monitoring and control activities.
7. Adaptive Project Plans
Create project plans that can be modified based on changes identified during monitoring. Integrating control
means working with schedules, resource allocations, and objectives that can be adjusted as conditions change,
while project plans remain flexible.
8. Agile Methodologies
The use of agile methodologies enhances integration even more. Agile principles prioritize iterative
development, continual feedback, and flexible planning in accordance with monitoring-control integration.
9. Documentation and Lessons Learned
It is vital to record insights from the monitoring and control phases. This documentation enables future
projects to use lessons learned as a resource, fine-tune the monitoring strategy, and optimize control
processes on an ongoing basis.
Benefits of Effective Monitoring and Control
Proper monitoring and control processes play an important role in the success of projects that are guided by
project management. Here are key advantages associated with implementing robust monitoring and control
measures:
1. Timely Issue Identification and Resolution: Prompt resolution of issues is possible if they are detected
early. Effective monitoring and control surface challenges early, preventing them from escalating into serious
problems that affect project timelines or overall objectives.
2. Optimized Resource Utilization: Monitoring and controlling resource allocation and use ensures optimum
efficiency. Teams can detect underutilized or overallocated resources and adjust allocations toward a
balanced workload and efficient use of resources.
3. Risk Mitigation: A continuous monitoring approach aids proactive risk management. Early identification of
potential risks lets project teams establish mitigation plans that reduce the likelihood
and severity of adverse events.
4. Adaptability to Changes: Effective monitoring highlights shifts in project requirements, external influences,
or stakeholder expectations. Control processes enable a smooth adjustment of project plans to
reflect the ongoing change, thus minimizing resistance.
5. Improved Decision-Making: Because the monitoring processes provide accurate and real-time data, decision-
making is improved. Stakeholders and project managers can base their decisions on the most current
information, thereby facilitating more strategic choices that result in better outcomes.

6. Enhanced Communication and Transparency: Frequent communication of status, progress, and
issues supports transparency. Stakeholders are kept updated, which builds trust among team members,
clients, and other interested parties.
7. Quality Assurance: The monitoring and control processes also help in the quality assurance of project
deliverables. Therefore, through continuous tracking and management of quality metrics, teams can find
any deviations from the standards to take timely corrective actions that meet stakeholders’ needs.
8. Cost Control: Cost overruns can be mitigated through continuous monitoring of project budgets
and expenses accompanied by the control processes. Teams can spot variances early and take corrective
actions to ensure that the project stays within budget limits.
9. Efficient Stakeholder Management: Monitoring and control allow timely notice of the
project’s progress and any changes to be given to interested parties. This preemptive approach increases
stakeholder satisfaction while reducing misunderstandings.
10. Continuous Improvement: Improvement continues as lessons learned through monitoring and control
activities are applied. Teams can learn from past projects, understand what needs to improve, and
implement good practices in future initiatives, establishing an atmosphere of constant development.
11. Increased Predictability: Effective monitoring and control make project outcomes more predictable.
Accurate forecasts of timelines, costs, and risks are attained by closely controlling project activities,
giving stakeholders a clear understanding of what to expect from the project.
12. Project Success and Client Satisfaction: Finally, the result of successful monitoring and control is project
success, with satisfied clients and positive outcomes from the project.
Challenges and Solutions
1. Incomplete or Inaccurate Data
 Challenge: Lack of accurate or trustworthy data may impair efficient monitoring and control and lead to wrong
decisions.
 Solution: Develop effective data collection methods, use reliable instruments and invest in training to
increase the accuracy of information captured.
2. Scope Creep
 Challenge: Lack of sufficient control can lead to scope creep that affects overall timelines and costs.
 Solution: Implement rigid change control procedures, review the project scope on a regular basis, and ensure
that all changes are properly evaluated, approved, and documented.
3. Communication Breakdowns
 Challenge: Poor communication often leads to misunderstandings, delays, and unresolved issues.
 Solution: Set up proper communication channels, use collaboration tools and have regular meetings about
the project’s status to ensure productive communication between team members and stakeholders.
4. Resource Constraints
 Challenge: Lack of resources, in terms of budget, personnel or technology hinders timely monitoring and
control.
 Solution: Focus on resource requirements, obtain further help where required and maximize resource
utilization by planning carefully.
5. Lack of Stakeholder Engagement
 Challenge: Lack of engagement among some stakeholders affects the pace and decisions made during
such a project.
 Solution: Develop a culture that supports stakeholder engagement by providing regular updates,
conducting feedback sessions and involving key decision makers at critical junctions.
6. Unforeseen Risks
 Challenge: During the project lifecycle, new risks can surface that had not been previously identified.
 Solution: Apply a risk management approach that is responsive, reassess risks regularly and ensure
contingency plans are in place to cope with the unexpected.
7. Resistance to Change
 Challenge: Enforced changes made within the control stage might be rejected by team members or
stakeholders.
 Solution: Clearly communicate the rationale for changes, engage appropriate stakeholders in decision-
making processes and emphasize the value of flexibility to facilitate a more comfortable change process.
8. Technology Integration Issues
 Challenge: The integration of monitoring and control tools is complicated, which can bring inefficiencies or
data inconsistency.
 Solution: In order to achieve effective integration, invest in interoperable technologies that are easy-to-use
while providing continuous training and keeping the systems up to date.
9. Insufficient Training and Skill Gaps
 Challenge: Lack of proper training and skill deficiencies among team members threaten the
effective use of monitoring and control mechanisms.
 Solution: Offer broad training opportunities, identify and resolve skill gaps, and encourage
continuous learning to increase the effectiveness of the project team.

10. Lack of Standardized Processes
 Challenge: Non-uniform or inconsistent processes may result in confusion and mistakes while
performing monitoring and control activities.
 Solution: Create and document standardized processes, ensure that the entire team understands these
procedures, and review them regularly in light of lessons learned.
Conclusion
In the final analysis, successful project management is based upon the incorporation of efficient monitoring
and control processes. The symbiotic relationship between these two phases creates a dynamic framework
that allows for adaptability, transparency, and informed decision-making throughout the project life cycle.
Frequently Asked Questions on Monitoring and Control in Project
Management – FAQs
How to monitor a project plan?
Monitor a project plan by regularly tracking progress against milestones and deadlines, identifying any
deviations or risks, and adjusting the plan accordingly to ensure timely completion and alignment with project
goals.
What are the 4 types of project monitoring?
The 4 types of project monitoring are: Progress Monitoring, Performance Monitoring, Risk Monitoring, and
Resource Monitoring.
What are the 5 project controls?
Time, cost, scope, quality, and resources are the five project controls.
What is the project control cycle?
The project control cycle is like a loop of steps where we first set goals and plans, then check how things are
going, compare it to the plan, fix any problems we find, and then start the loop again. It helps us keep our
projects on track and make sure they’re successful.

Software Quality – Software Engineering


Last Updated : 03 Jun, 2024



Traditionally, a high-quality product is defined in terms of its fitness of purpose. That is, a high-
quality product does exactly what the users want it to do. For software products, fitness of
purpose is usually interpreted as satisfaction of the requirements laid down in the SRS
document.

Though “fitness of purpose” is a satisfactory definition of quality for some products such as a
car, a table fan, a grinding machine, etc., for software products “fitness of purpose” is not a
completely satisfactory definition of quality.

What is Software Quality?
Software quality shows how good and reliable a product is. To give an example, consider
functionally correct software: it performs all the functions specified in the SRS document, but has an
almost unusable user interface. Even though it is functionally correct, we would not
consider it a high-quality product.
Another example is a product that does everything the users need but has almost
incomprehensible and unmaintainable code. Therefore, the traditional concept of quality as
“fitness of purpose” is not satisfactory for software products.
Factors of Software Quality
The modern view of software quality associates a number of quality factors with software, such as the following:

1. Portability: A software product is said to be portable if it can easily be made to work in different operating
system environments, on different machines, and with other software products, etc.
2. Usability: A software product has good usability if different categories of users (i.e., expert and
novice users) can easily invoke the functions of the product.
3. Reusability: A software product has good reusability if different modules of the product can easily
be reused to develop new products.
4. Correctness: A software product is correct if the different requirements specified in the SRS document have been
correctly implemented.
5. Maintainability: A software product is maintainable if errors can easily be corrected as and when they show up, new
functions can easily be added to the product, and the functionality of the product
can easily be modified, etc.
6. Reliability: Software is more reliable if it has fewer failures. Since software engineers do not deliberately
plan for their software to fail, reliability depends on the number and type of mistakes they make. Designers
can improve reliability by ensuring the software is easy to implement and change, by testing it thoroughly,
and by ensuring that if failures occur, the system can handle them or recover easily (a small illustrative sketch follows this list).
7. Efficiency: The more efficient software is, the less CPU time, memory, disk space, network
bandwidth, and other resources it uses. This is important to customers in order to reduce their costs of running the
software, although with today’s powerful computers, CPU time, memory and disk usage are less of a
concern than in years gone by.
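As a small, purely illustrative complement to the reliability factor above, the following Python sketch uses the common exponential reliability model, R(t) = e^(-λt), where λ is the failure rate (the reciprocal of the mean time between failures). This model is a standard textbook assumption, not something prescribed by this article, and the MTBF figure is hypothetical.

import math

mtbf_hours = 500.0                 # hypothetical mean time between failures
failure_rate = 1.0 / mtbf_hours    # lambda, failures per hour

def reliability(t_hours):
    # Probability the software runs for t_hours without failure
    # under the exponential model R(t) = exp(-lambda * t).
    return math.exp(-failure_rate * t_hours)

for t in (24, 100, 500):
    print(f"R({t} h) = {reliability(t):.3f}")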
Software Quality Management System
A Software Quality Management System contains the methods that are used by an organization to develop
products having the desired quality.
Some of the methods are:
 Managerial Structure: The quality system is responsible for managing the structure as a whole. Every
organization has a managerial structure.
 Individual Responsibilities: Each individual in the organization must have defined responsibilities,
which should be reviewed by top management, and each individual in the system must take them
seriously.
 Quality System Activities: The activities that each quality system must carry out include:
o Project auditing.
o Review of the quality system.
o Development of methods and guidelines.
Evolution of Quality Management System

Quality systems have evolved over the past several years. The evolution of a Quality Management
System is a four-step process.
1. Inception: Product inspection task provided an instrument for quality control (QC).
2. Quality Control: The main task of quality control is to detect defective devices, and it also helps in finding
the cause that leads to the defect. It also helps in the correction of bugs.
3. Quality Assurance: Quality Assurance helps an organization in making good quality products. It also helps
in improving the quality of the product by passing the products through security checks.
4. Total Quality Management (TQM): Total Quality Management (TQM) ensures that all
procedures are continuously and regularly improved through process measurements.

Evolution of Quality Management System

Questions for Practice


1. In software testing, how the error, fault, and failure are related to each other? [UGC-NET 2015]
(A) Error leads to failure, but fault is not related to error and failure.
(B) Fault leads to failure, but error is not related to fault and failure.
(C) Error leads to fault and fault leads to failure.
(D) Fault leads to error and error leads to failure.
Solution: Correct Answer is (C).
2. A Software Requirement Specification (SRS) document should avoid discussing which one of the
following? [GATE CS 2015]
(A) User Interface Issues
(B) Non-Functional Requirements
(C) Design Specification
(D) Interfaces with Third-Party Software
Solution: Correct Answer is (C).
Conclusion
Software quality ensures a product is reliable, maintainable, and user-friendly, going beyond just meeting
requirements. It involves key factors like portability, usability, correctness, and efficiency. A robust quality
management system and continuous improvement processes help achieve these standards. High-quality
software is functional, efficient, and adaptable to user needs.
FAQs related to Software Quality
1. What are the five views of software quality?
Five Views of Software Quality are:
 Transcendent based,
 Product based

 User based
 Development and manufacturer based
 Value-based.
2. What is the purpose of the software quality?
Ans: The main purpose of software quality is to ensure that software products are properly developed
and maintained to meet the requirements.
3. What are the three C’s of Software Quality?
Ans: The three C’s of Software Quality is Consistency, Completeness, and Correctness.

ISO 9000 Certification in Software Engineering


Last Updated : 23 Nov, 2020



The International Organization for Standardization (ISO) is a worldwide federation of national
standards bodies. The ISO 9000 standard serves as a reference for contracts
between independent parties. It specifies guidelines for the development of a quality system.
The quality system of an organization covers the various activities related to its products or services.
The ISO standard addresses both operational and organizational aspects, which
include responsibilities, reporting, etc. An ISO 9000 standard contains a set of guidelines for the
production process without considering the product itself.

ISO 9000 Certification

Why is ISO Certification Required by the Software Industry?


There are several reasons why the software industry needs ISO certification. Some of the reasons are
as follows:
 This certification has become a standard for international bidding.
 It helps in designing high-quality, repeatable software products.
 It emphasizes the need for proper documentation.
 It facilitates the development of optimal processes and total quality measurements.
Features of ISO 9001 Requirements:
 Document control –
All documents concerned with the development of a software product should be properly
managed and controlled.
 Planning –
Proper plans should be prepared and monitored.
 Review –
For effectiveness and correctness, all important documents across all phases should be
independently checked and reviewed.
 Testing –
The product should be tested against its specification.
 Organizational Aspects –
Various organizational aspects should be addressed e.g., management reporting of the quality
team.

Advantages of ISO 9000 Certification :
Some of the advantages of the ISO 9000 certification process are the following:
 ISO 9000 certification forces a corporation to focus on “how it is doing
business”. Each procedure and work instruction must be documented and thus becomes a
springboard for continuous improvement.
 Employee morale is increased as employees are asked to take control of their processes and
document their work processes.
 Better products and services result from the continuous improvement process.
 Increased employee participation, involvement, awareness, and systematic employee training
result in fewer problems.
Shortcomings of ISO 9000 Certification :
Some of the shortcomings of the ISO 9000 certification process are the following:
 ISO 9000 does not give any guidelines for defining an appropriate process and does not
guarantee a high-quality process.
 No international accreditation agency exists for the ISO 9000 certification process.

