Notes
Software Engineering is a systematic, disciplined, quantifiable study of and approach to the design, development, operation, and maintenance of software systems. These articles help you understand the basics of software engineering. This Introduction part covers topics like the Basics of Software and Software Engineering, the Need for Software Engineering, etc.
1. Introduction to Software Engineering
2. Introduction to Software Development
3. Classification of Software
4. Software Evolution
5. What is the Need of Software Engineering?
6. What does a Software Engineer Do?
Software architecture refers to the high-level structure of a software system. It defines the components, their interactions, and the principles guiding their design. Here are some related topics:
1. User Interface Design
2. Coupling and Cohesion
3. Information System Life Cycle
4. Database application system life cycle
5. Pham-Nordmann-Zhang Model (PNZ model)
6. Schick-Wolverton software reliability model
13. Jelinski Moranda software reliability model
14. Schick-Wolverton software reliability model
15. Goel-Okumoto Model
16. Mills’ Error Seeding Model
17. Basic fault tolerant software techniques
18. Software Maintenance
Software Metrics
Software metrics are quantitative measures used to assess various aspects of software development
processes, products, and projects. These metrics provide valuable insights into the quality,
performance, and efficiency of software development efforts. Here are some common software
metrics:
1. Software Measurement and Metrics
2. People Metrics and Process Metrics in Software Engineering
3. Halstead’s Software Metrics
4. Cyclomatic Complexity
5. Functional Point (FP) Analysis – Software Engineering
6. Lines of Code (LOC) in Software Engineering
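As a small illustration of two of the metrics listed above, the sketch below counts physical lines of code and computes McCabe's cyclomatic complexity V(G) = E - N + 2P from a control-flow graph. The function names and the tiny sample program are illustrative only, not a standard library API.

```python
def count_loc(source: str) -> int:
    """A simple LOC measure: count non-blank, non-comment physical lines."""
    return sum(1 for ln in source.splitlines()
               if ln.strip() and not ln.strip().startswith("#"))

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's metric for a control-flow graph: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

sample = """
# add two numbers
def add(a, b):
    return a + b
"""
print(count_loc(sample))               # 2 (the comment and blank line are skipped)
print(cyclomatic_complexity(9, 8, 1))  # V(G) = 9 - 8 + 2 = 3
```

A V(G) of 3 means there are three linearly independent paths through the graph, which is also a rough lower bound on the number of test cases needed for branch coverage.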
Software Requirements
Software requirements are descriptions of the features, functions, capabilities, and constraints that a
software system must possess to meet the needs of its users and stakeholders. They serve as the
foundation for software development, guiding the design, implementation, and testing phases of the
project. These articles break down software requirements into easy-to-understand concepts:
1. Requirements Engineering Process
2. Classification of Software Requirements
3. How to write a good SRS for your Project
4. Quality Characteristics of a good SRS
5. Requirements Elicitation
6. Challenges in eliciting requirements
Software Configuration
Software configuration refers to the process of managing and controlling changes to software
systems, components, and related artifacts throughout the software development lifecycle. Here are some articles that help you explore Software Configuration:
1. Software Configuration Management
2. Objectives of Software Configuration Management
3. Software Quality Assurance
4. Project Monitoring & Control
Software Quality
Software quality refers to the degree to which a software product meets specified requirements and
satisfies customer expectations, ensuring it is reliable, efficient, maintainable, and user-friendly.
These articles provide an in-depth explanation of Software Quality:
1. Software Quality
2. ISO 9000 Certification
3. SEICMM
4. Six Sigma
Software Design
Software design involves creating a blueprint or plan for how a software system will be structured
and organized to meet its requirements effectively and efficiently. These articles give you a clear explanation of Software Design:
1. Software Design Process
2. Software Design process – Set 2
3. Software Design Principles
4. Coupling and Cohesion
5. Function Oriented Design
6. Object Oriented Design
7. User Interface Design
Software Reliability
Software reliability refers to the ability of a software system to consistently perform its intended
functions under specified conditions for a defined period of time, without failures or errors that may
disrupt its operation. Here are some articles that help to understand various concepts regarding
software reliability.
1. Software Reliability
2. Software Fault Tolerance
Software Maintenance
Software maintenance refers to the process of updating, modifying, and enhancing software to
ensure its continued effectiveness, efficiency, and relevance over time. Here are some articles that
help to understand various concepts regarding software maintenance.
1. Software Maintenance
2. Cost and efforts of software maintenance
Difference Between
Understanding the differences between software engineering concepts provides clarity on their
unique strengths and weaknesses, empowering individuals to make informed decisions about which
concept is best suited for specific purposes or projects. This knowledge enables effective selection,
implementation, and optimization of software engineering practices to achieve desired outcomes
efficiently.
1. Waterfall model vs Incremental model
2. V-Model vs Waterfall Model
3. Manual testing vs Automation testing
4. Sanity Testing vs Smoke Testing
5. Cohesion vs Coupling
6. Alpha Testing vs Beta Testing
7. Testing and Debugging
8. Functional vs Non-functional Testing
9. Waterfall Model vs Spiral Model
10. RAD vs Waterfall
11. Unit Testing vs System Testing
12. Load Testing vs Stress Testing
13. Frontend Testing vs Backend Testing
14. Agile Model vs V-Model
Introduction to Software Engineering – Software Engineering
Software is a program or set of programs containing instructions that provide the desired
functionality. Engineering is the process of designing and building something that serves a particular
purpose and finds a cost-effective solution to problems.
Dual Role of Software
There is a dual role of software in the industry. The first one is as a product and the other one is as a
vehicle for delivering the product. We will discuss both of them.
1. As a Product
It delivers computing potential across networks of Hardware.
It enables the Hardware to deliver the expected functionality.
It acts as an information transformer because it produces, manages, acquires, modifies,
displays, or transmits information.
2. As a Vehicle for Delivering a Product
It provides system functionality (e.g., payroll system).
It controls other software (e.g., an operating system).
It helps build other software (e.g., software tools).
3. Better Maintainability: Software that is designed and developed using sound software
engineering practices is easier to maintain and update over time.
4. Reduced Costs: By identifying and addressing potential problems early in the development
process, software engineering can help to reduce the cost of fixing bugs and adding new
features later on.
5. Increased Customer Satisfaction: By involving customers in the development process and
developing software that meets their needs, software engineering can help to increase customer
satisfaction.
6. Better Team Collaboration: By using Agile methodologies and continuous integration,
software engineering allows for better collaboration among development teams.
7. Better Scalability: By designing software with scalability in mind, software engineering can
help to ensure that software can handle an increasing number of users and transactions.
8. Better Security: By following the Software Development Life Cycle (SDLC) and performing
security testing, software engineering can help to prevent security breaches and protect
sensitive data.
In summary, software engineering can be expensive and time-consuming, and it may limit
flexibility and creativity. However, the benefits of improved quality, increased productivity, and
better maintainability can outweigh the costs and complexity. It’s important to weigh the pros and
cons of using software engineering and determine if it is the right approach for a particular software
project.
2. Which of the following statements is/are true? [UGC NET CSE 2018]
P: Software Reengineering is preferable for software products having high failure rates, poor
design, and/or poor code structure.
Q: Software Reverse Engineering is the process of analyzing software with the objective of
recovering its design and requirement specification.
(A) P only
(B) Neither P nor Q
(C) Q only
(D) Both P and Q
Solution: Correct Answer is (D).
3. The diagram that helps in understanding and representing user requirements for a
software project using UML (Unified Modeling Language) is: [GATE CS 2004]
(A) Entity Relationship Diagram
(B) Deployment Diagram
(C) Data Flow Diagram
(D) Use Case Diagram
Solution: Correct Answer is (D).
Conclusion
Software engineering is a key field that involves creating and maintaining software. It combines
technical skills, creativity, and problem-solving. As technology advances, the need for software
engineers increases, making it a great career choice. Whether you’re new to the field or want to
learn more, understanding software engineering is crucial. Keep exploring, learning, and enjoying
the challenges and opportunities this field offers.
What is Software Development?
Software development is defined as the process of designing, creating, testing, and maintaining
computer programs and applications. Software development plays an important role in our daily
lives. It powers smartphone apps and supports businesses worldwide. Software developers develop the software, which is itself a set of instructions for performing a specific task.
Software developers are responsible for the activities related to software, which include designing,
programming, creating, implementing, testing, deploying, and maintaining software. Software
developers develop system software, programming software, and application software.
Stage 2: Design
In the design phase, the software’s architecture and user interface are developed. This step defines
how the software will work and how users will interact with it. Design includes creating
wireframes, prototypes, and system architecture diagrams.
The Design phase is a crucial phase in the software development life cycle; it comes after the Requirement Gathering phase and gives concrete form to the requirements decided there. The output of the design phase is implemented in the implementation phase.
Stage 3: Implementation
The Implementation phase comes after the design phase and is often considered the most important phase of the Software Development Life Cycle (SDLC). The output of the design phase is implemented in this phase. Here comes a question:
Why Is Implementation So Important In The Software Development Process?
As mentioned above, this is the most important phase of the software development process because all the planning done in the planning phase and the designing done in the design phase are implemented here. In this phase, the physical source code is created and deployed in the real world.
The following work is carried out in this phase:
Development
Version Management
o What is Version Control
o Git and GitHub
o Git Branching
o Best Git Branching Strategy
o Git Terminology
o Git in Action
Risk assessment
o Identification of Software Risk
o Analysis of Software Risk
o Planning of Software Risk
o Monitoring of Software Risk
Change Management
o What is Change Management in Software Development
o Steps in Change Management Software
o Agile Change Management
Deployment Processes
o What is deployment in Software Development
o The Software Deployment Processes.
o Best Strategies for Agile Software Deployment
o Regression Testing
Stage 5: Go Live
After all the above phases, Go Live is the last phase of the software development process. In this phase, the product is ready to be launched in the market.
Software Maintenance
Software Maintenance refers to the process of modifying and updating a software system after it
has been delivered to the customer. This can include fixing bugs, adding new features, improving
performance, or updating the software to work with new hardware or software systems. The goal of
software maintenance is to keep the software system working correctly, efficiently, and securely,
and to ensure that it continues to meet the needs of the users.
What programming languages are commonly used in software development?
Commonly used programming languages in software development include Java, Python, C++,
JavaScript, Ruby, Swift, and PHP, among others. The choice of programming language depends on
the specific requirements of the project.
1. Cost: As the main cost of producing software is the manpower employed, the cost of
developing software is generally measured in terms of person-months of effort spent in
development. The productivity in the software industry for writing fresh code mostly ranges
from a few hundred to about 1000+ LOC per person per month.
2. Schedule: The schedule is another important factor in many projects. Business trends are
dictating that the time to market a product should be reduced; that is, the cycle time from
concept to delivery should be small. This means that software needs to be developed faster and
within the specified time.
3. Quality: Quality is one of the main mantras, and business strategies are designed around it.
Developing high-quality software is another fundamental goal of software engineering.
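The cost point above can be turned into a back-of-the-envelope calculation: dividing a project's size in LOC by the team's productivity (LOC per person-month) gives the effort in person-months. The figures below are made-up illustrations under the productivity range quoted in the text, not industry data.

```python
def effort_person_months(size_loc: int, productivity_loc_pm: int) -> float:
    """Effort (person-months) = project size / productivity."""
    return size_loc / productivity_loc_pm

# A hypothetical 50,000-LOC project at 500 LOC per person-month:
effort = effort_person_months(50_000, 500)
print(effort)        # 100.0 person-months of effort

# With a team of 10 developers working in parallel (ignoring overheads),
# that is roughly 10 calendar months:
print(effort / 10)   # 10.0
```

In practice, effort does not divide linearly among developers because of communication overhead, which is why estimation models such as COCOMO use non-linear formulas; the division here is only a first approximation.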
Attributes of Software
The international standard on software product quality suggests that software quality comprises six
main attributes:
1. Reliability: The capability to provide failure-free service.
2. Functionality: The capability to provide functions that meet stated and implied needs when the
software is used.
3. Usability: The capability to be understood, learned, and used.
4. Efficiency: The capability to provide appropriate performance relative to the amount of
resources used.
5. Maintainability: the capability to be modified for purposes of making corrections,
improvements, or adaptations.
6. Portability: The capability to be adapted for different specified environments without applying
actions or means other than those provided for this purpose in the product.
Classification of Software
The software can be classified based on various criteria, including:
1. Purpose: Software can be classified as system software (e.g., operating systems, device
drivers) or application software (e.g., word processors, games).
2. Platform: Software can be classified as native software (designed for a specific operating
system) or cross-platform software (designed to run on multiple operating systems).
3. Deployment: Software can be classified as installed software (installed on the user’s device) or
cloud-based software (hosted on remote servers and accessed via the internet).
4. License: Software can be classified as proprietary software (owned by a single entity) or open-
source software (available for free with the source code accessible to the public).
5. Development Model: Software can be classified as traditional software (developed using a
waterfall model) or agile software (developed using an iterative and adaptive approach).
6. Size: Software can be classified as small-scale software (designed for a single user or small
group) or enterprise software (designed for large organizations).
7. User Interface: Software can be classified as Graphical User Interface (GUI) software
or Command-Line Interface (CLI) software.
These classifications are important for understanding the characteristics and limitations of different
types of software, and for selecting the best software for a particular need.
Types of Software
The software is used extensively in several domains including hospitals, banks, schools, defense,
finance, stock markets, and so on.
1. Based on Application
2. Based on Copyright
1. Based on Application
Software can be classified on the basis of application as follows.
1. System Software:
System Software is necessary to manage computer resources and support the execution of
application programs. Software like operating systems, compilers, editors and drivers, etc., come
under this category. A computer cannot function without the presence of these. Operating
systems are needed to link the machine-dependent needs of a program with the capabilities of the
machine on which it runs. Compilers translate programs from high-level language to machine
language.
2. Application Software:
Application software is designed to fulfill the user’s requirement by interacting with the user
directly. It can be classified into two major categories: generic or customized. Generic software is open to all and behaves the same for all of its users. Its function is limited and not customized as per the user's changing requirements. Customized software, on the other hand, consists of products designed per a client's requirements and is not available to everyone.
3. Networking and Web Applications Software:
Networking Software provides the required support necessary for computers to interact with each
other and with data storage facilities. Networking software is also used when software is running on
a network of computers (such as the World Wide Web). It includes all network management
software, server software, security and encryption software, and software to develop web-based
applications like HTML, PHP, XML, etc.
4. Embedded Software:
This type of software is embedded into the hardware normally in the Read-Only Memory (ROM) as
a part of a large system and is used to support certain functionality under the control conditions.
Examples are software used in instrumentation and control applications like washing machines,
satellites, microwaves, etc.
5. Reservation Software:
A Reservation system is primarily used to store and retrieve information and perform transactions
related to air travel, car rental, hotels, or other activities. They also provide access to bus and
railway reservations, although these are not always integrated with the main system. These are also
used to relay computerized information for users in the hotel industry, making a reservation and
ensuring that the hotel is not overbooked.
6. Business Software:
This category of software is used to support business applications and is the most widely used
category of software. Examples are software for inventory management, accounts, banking,
hospitals, schools, stock markets, etc.
7. Entertainment Software:
Education and Entertainment software provides a powerful tool for educational agencies, especially
those that deal with educating young children. There is a wide range of entertainment software such
as computer games, educational games, translation software, mapping software, etc.
9. Scientific Software:
Scientific and engineering software satisfies the needs of a scientific or engineering user to perform
enterprise-specific tasks. Such software is written for specific applications using principles,
techniques, and formulae particular to that field. Examples are software like MATLAB,
AUTOCAD, PSPICE, ORCAD, etc.
2. Based on Copyright
Classification of Software can be done based on copyright. These are stated as follows:
1. Commercial Software:
It represents the majority of software that we purchase from software companies, commercial
computer stores, etc. In this case, when a user buys software, they acquire a license key to use it.
Users are not allowed to make copies of the software. The company owns the copyright of the
program.
2. Shareware Software:
Shareware software is also covered under copyright, but the purchasers are allowed to make and
distribute copies with the condition that after testing the software, if the purchaser adopts it for use,
then they must pay for it. In both of the above types of software, changes to the software are not
allowed.
3. Freeware Software:
In general, according to freeware software licenses, copies of the software can be made both for
archival and distribution purposes, but here, distribution cannot be for making a profit. Derivative
works and modifications to the software are allowed and encouraged. Decompiling of the program
code is also allowed without the explicit permission of the copyright holder.
FAQs
1. How is System Software classified?
System Software is classified on the basis of how the tasks are to be performed and how the
software system interacts.
Software Evolution – Software Engineering
Software Evolution is a term that refers to the process of developing software initially, and then
timely updating it for various reasons, i.e., to add new features or to remove obsolete
functionalities, etc. This article focuses on discussing Software Evolution in detail.
What is the Need of Software Engineering?
Software engineering is a technique through which we can develop or create software for computer
systems or any other electronic devices. It is a systematic, scientific and disciplined approach to the
development, functioning, and maintenance of software.
Basically, Software engineering was introduced to address the issues of low-quality software
projects. Here, the development of the software uses well-defined scientific principles, methods, and procedures.
In other words, software engineering is a process in which the needs of users are analyzed and the software is then designed as per those requirements. Software engineering builds software and applications by using design methods and programming languages.
To create complex software, we need software engineering techniques, and to reduce complexity, we should use abstraction and decomposition. Abstraction describes only the important parts of the software and defers the irrelevant details to a later stage of development, so the requirements become simpler. Decomposition breaks the software down into a number of modules, where each module performs a well-defined, independent task.
Software development models are frameworks that guide the process of creating software
applications. They provide a structured approach to planning, designing, implementing, testing, and
deploying software. Here are some common software development models.
1. Classical Waterfall Model
2. Iterative Waterfall Model
3. Spiral Model
4. Incremental process model
5. Rapid Application Development Model (RAD)
6. RAD Model vs Traditional SDLC
7. Agile Development Models
8. Agile Software Development
9. Extreme Programming (XP)
10. SDLC V-Model
11. Comparison of different life cycle models
Overall, the waterfall model is used in situations where there is a need for a highly structured and
systematic approach to software development. It can be effective in ensuring that large, complex
projects are completed on time and within budget, with a high level of quality and customer
satisfaction.
4. Stability in Requirements: Suitable for projects when the requirements are clear and steady,
reducing modifications as the project progresses.
5. Resource Optimization: It encourages effective task-focused work without continuously
changing contexts by allocating resources according to project phases.
6. Relevance for Small Projects: Economical for modest projects with simple specifications and
minimal complexity.
2. Design: Once the requirements are understood, the design phase begins. This involves creating a
detailed design document that outlines the software architecture, user interface, and system
components.
3. Development: The Development phase involves coding the software based on the design specifications. This phase also includes unit testing to ensure that each component of the software works as expected.
4. Testing: In the testing phase, the software is tested as a whole to ensure that it meets the
requirements and is free from defects.
5. Deployment: Once the software has been tested and approved, it is deployed to the production
environment.
6. Maintenance: The final phase of the Waterfall Model is maintenance, which involves fixing any
issues that arise after the software has been deployed and ensuring that it continues to meet the
requirements over time.
The classical waterfall model divides the life cycle into a set of phases. This model considers that one phase can be started only after the completion of the previous phase; that is, the output of one phase is the input to the next. Thus the development process can be considered a sequential flow, as in a waterfall, and the phases do not overlap with each other. The different sequential phases of the classical waterfall model are shown in the figure below.
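The "output of one phase is the input to the next" rule can be sketched as a simple pipeline, where each phase is a function whose result feeds the next. The phase functions below are placeholders that just pass artifacts along; they are an illustration of the sequencing, not a real process implementation.

```python
def feasibility(problem):
    # Phase 1: decide on a solution strategy for the stated problem.
    return f"feasible solution for {problem}"

def requirements(solution):
    # Phase 2: produce an SRS from the chosen solution strategy.
    return f"SRS based on {solution}"

def design(srs):
    # Phase 3: produce an SDD from the SRS.
    return f"SDD derived from {srs}"

def run_waterfall(problem):
    """Phases run strictly in sequence, with no overlap and no feedback."""
    artifact = problem
    for phase in (feasibility, requirements, design):  # then coding, testing, maintenance
        artifact = phase(artifact)  # output of one phase is input to the next
    return artifact

print(run_waterfall("payroll system"))
```

Note what the sketch makes obvious: if the SRS is wrong, every downstream artifact inherits the error, since there is no loop back to an earlier phase in the classical model.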
Let us now learn about each of these phases in detail which include further phases.
1. Feasibility Study:
The main goal of this phase is to determine whether it would be financially and technically feasible
to develop the software.
The feasibility study involves understanding the problem and then determining the various possible
strategies to solve the problem. These different identified solutions are analyzed based on their
benefits and drawbacks. The best solution is chosen, and all the other phases are carried out as per this solution strategy.
3. Design:
The goal of this phase is to convert the requirements acquired in the SRS into a format that can be
coded in a programming language. It includes high-level and detailed design as well as the overall
software architecture. A Software Design Document is used to document all of this effort (SDD).
6. Maintenance:
Maintenance is the most important phase of the software life cycle. The effort spent on maintenance is typically about 60% of the total effort spent on developing the full software. There are three types of maintenance.
Corrective Maintenance: This type of maintenance is carried out to correct errors that were
not discovered during the product development phase.
Perfective Maintenance: This type of maintenance is carried out to enhance the functionalities
of the system based on the customer’s request.
Adaptive Maintenance: Adaptive maintenance is usually required for porting the software to
work in a new environment such as working on a new computer platform or with a new
operating system.
Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
Predictable: Projects that are predictable, low-risk, and able to be addressed early in the
development life cycle are those that have known, controllable risks.
Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
Client Prefers a Linear and Sequential Approach: This situation describes the client’s
preference for a linear and sequential approach to project development.
Limited Resources: Projects with limited resources can benefit from a set-up strategy, which
enables targeted resource allocation.
The Waterfall approach involves little client engagement in the product development process. The
product can only be shown to end consumers when it is ready.
Conclusion
The Waterfall Model has greatly influenced conventional software development processes. This
methodical, sequential technique provides an easily understood and applied structured framework.
Project teams have a clear roadmap due to the model’s methodical evolution through the phases of
requirements, design, implementation, testing, deployment, and maintenance.
4. Is Waterfall better than Agile?
Ans: Waterfall works best for well-defined, unchanging projects, while Agile is for dynamic,
evolving projects.
Table of Content
What is the Iterative Waterfall Model?
Process of Iterative Waterfall Model
When to use Iterative Waterfall Model?
Application of Iterative Waterfall Model
Why is iterative waterfall model used?
Advantages of Iterative Waterfall Model
Drawbacks of Iterative Waterfall Model
1. When errors are detected at a later phase, these feedback paths allow the errors committed during an earlier phase to be corrected.
2. The feedback paths allow the phase in which errors were committed to be reworked, and these changes are reflected in the later phases.
3. However, there is no feedback path to the feasibility study stage, because once a project has been taken up, the organization does not give it up easily.
4. It is good to detect errors in the same phase in which they are committed.
5. It reduces the effort and time required to correct the errors.
6. A real-life example could be building a new website for a small business.
Following are the phases of Iterative Waterfall Model:
1. Requirements Gathering: This is the first stage where the business owners and developers meet
to discuss the goals and requirements of the website.
2. Design: In this stage, the developers create a preliminary design of the website based on the
requirements gathered in stage 1.
3. Implementation: In this stage, the developers begin to build the website based on the design
created in stage 2.
4. Testing: Once the website has been built, it is tested to ensure that it meets the requirements and
functions properly.
5. Deployment: The website is then deployed and made live to the public.
6. Review and Improvement: After the website has been live for a while, the business owners and
developers review its performance and make any necessary improvements.
This process is repeated until the website meets the needs and goals of the business. Each iteration
builds upon the previous one, allowing for continuous improvement and iteration until the final
product is complete.
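The repeat-until-satisfied cycle described above can be sketched as a loop: each iteration runs the phases, then a review decides whether another cycle is needed. The scoring function is a stand-in for the real review-and-improvement step, not an actual quality measure.

```python
def build_iteration(score):
    """One pass through requirements -> design -> implementation -> testing.

    Stand-in: each cycle improves the product's review 'score' by one.
    """
    return score + 1

def iterative_waterfall(acceptable_score=3, max_iterations=10):
    """Repeat the phases until the review accepts the product (or we hit a cap)."""
    score, iterations = 0, 0
    while score < acceptable_score and iterations < max_iterations:
        score = build_iteration(score)  # review feedback feeds the next cycle
        iterations += 1
    return iterations

print(iterative_waterfall())  # 3 iterations before the product is accepted
```

The `max_iterations` cap reflects a practical constraint: real projects bound rework by budget and schedule rather than iterating forever.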
7. Easy to Manage: The iterative waterfall model is easy to manage as each phase is well-defined
and has a clear set of deliverables. This makes it easier to track progress, identify issues, and
manage resources.
8. Faster Time to Market: The iterative approach allows for faster time to market as small and
incremental improvements are made over time, rather than waiting for a complete product to be
developed.
9. Predictable Outcomes: The phased approach of the iterative waterfall model allows for more
predictable outcomes and greater control over the development process, ensuring that the project
stays on track and within budget.
10. Improved Customer Satisfaction: The iterative approach allows for customer involvement and
feedback throughout the development process, resulting in a final product that better meets the
needs and expectations of the customer.
11. Quality Assurance: The iterative approach promotes quality assurance by providing
opportunities for testing and feedback throughout the development process. This results in a
higher-quality end product.
12. Risk Reduction: The iterative approach allows for early identification and mitigation of risks,
reducing the likelihood of costly errors later in the development process.
13. Well-organized: In this model, less time is spent on documentation, and the team can spend more time on development and design.
14. Cost-Effective: It is highly cost-effective to change the plan or requirements in the model.
Moreover, it is best suited for agile organizations.
15. Simple: The iterative waterfall model is very simple to understand and use. That's why it is one of the most widely used software development models.
16. Feedback Path: In the classical waterfall model, there are no feedback paths, so there is no
mechanism for error correction. But in the iterative waterfall model feedback path from one phase
to its preceding phase allows correcting the errors that are committed and these changes are
reflected in the later phases.
Conclusion
The iterative waterfall model is an improved version of the traditional waterfall model. Instead of doing
each phase (like planning, designing, building, and testing) just once, you go through these phases in
small, repeated cycles. This helps catch and fix problems early and allows for adjustments based on
feedback, leading to a more refined and reliable final product.
Frequently Asked Questions related to Iterative Waterfall Model
What is the difference between agile and iterative waterfall?
Agile enables the rapid delivery of projects with shorter lifecycles because each iteration produces a working result, while the iterative waterfall model combines the sequential steps of the traditional waterfall model with the flexibility of iterative design.
1. The exact number of phases needed to develop the product can be varied by the project manager
depending upon the project risks.
2. Since the number of phases is determined dynamically, the project manager plays an important role in developing a product using the spiral model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a complete
software development cycle, from requirements gathering and analysis to design,
implementation, testing, and maintenance.
The Spiral Model is often used for complex and large software development projects, as it allows
for a more flexible and adaptable approach to software development. It is also well-suited to
projects with significant uncertainty or high levels of risk.
The radius of the spiral at any point represents the expenses (cost) of the project so far, and the angular dimension represents the progress made so far in the current phase.
Each phase of the Spiral Model is divided into four quadrants as shown in the above figure.
The functions of these four quadrants are discussed below:
1. Objectives determination and identify alternative solutions: Requirements are gathered
from the customers and the objectives are identified, elaborated, and analyzed at the start of
every phase. Then alternative solutions possible for the phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution are
identified and the risks are resolved using the best possible strategy. At the end of this quadrant,
the Prototype is built for the best possible solution.
3. Develop the next version of the Product: During the third quadrant, the identified features are
developed and verified through testing. At the end of the third quadrant, the next version of the
software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the so-far
developed version of the software. In the end, planning for the next phase is started.
2. The spiral model uses the approach of the Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique.
3. Also, the spiral model can be considered as supporting the Evolutionary model – the iterations
along the spiral can be considered as evolutionary levels through which the complete system is
built.
The most serious issue we face in the waterfall model is that it takes a long time to finish the product, and the product may become obsolete by then. To tackle this issue, we have another methodology known as the spiral model, which is also called the cyclic model.
When To Use the Spiral Model?
1. The spiral model is used when a project is large.
2. A spiral approach is utilized when frequent releases are necessary.
3. When it is appropriate to create a prototype.
4. When evaluating risks and costs is crucial.
5. The spiral approach is beneficial for projects with medium to high risk.
6. The SDLC’s spiral model is helpful when requirements are complex and ambiguous.
7. When modifications may be required at any moment.
8. When committing to a long-term project is impractical owing to shifting economic priorities.
Conclusion
The spiral model is a valuable choice for software development projects where risk management is a high priority. It delivers high-quality software by promoting risk identification, iterative development, and continuous client feedback, and it is well suited to large projects.
V. Waterfall e. Write some code, debug it, and repeat (i.e ad-hoc)
A e b a c d
B e c a b d
C d a b c e
D c e a b d
Solution: Correct Answer is (A).
2. In the Spiral model of software development, the primary determinant in selecting activities
in each iteration is [ISRO 2016]
(A) Iteration Size
(B) Cost
(C) Adopted process such as Rational Unified Process or Extreme Programming
(D) Risk
Solution: Correct Answer is (D).
How does Spiral Model differ from Waterfall Model?
Spiral Model is different from Waterfall Model as Waterfall Model follows a linear and sequential
approach whereas Spiral Model has repeated cycles of development.
What are the places where the Spiral Model is commonly used?
The spiral model is commonly used in industries where risk management is critical, like software development, medical device manufacturing, etc.
Table of Content
What is the Incremental Process Model?
Phases of incremental model
Requirement Process Model
Types of Incremental Model
When to use Incremental Process Model
Characteristics of Incremental Process Model
Advantages of Incremental Process Model
Disadvantages of Incremental Process Model
A, B, and C are modules of Software Products that are incrementally developed and delivered.
3. Deployment and Testing: After requirements gathering and specification, the requirements are split into several different versions. Starting with version 1, each successive increment is constructed and then deployed at the customer site. During development and testing, the product is checked and tested against the intended behavior of that increment.
4. Implementation: After the last version (version n) is completed, it is deployed at the client site.
When to use the Incremental Process Model
1. Funding Schedule, Risk, Program Complexity, or need for early realization of benefits.
2. When Requirements are known up-front.
3. When Projects have lengthy development schedules.
4. Projects with new Technology.
Error Reduction (core modules are used by the customer from the beginning of the phase
and then these are tested thoroughly).
Uses divide and conquer for a breakdown of tasks.
Lowers initial delivery cost.
Incremental Resource Deployment.
5. Requires good planning and design.
6. The total cost is not lower.
7. Well-defined module interfaces are required.
3. Issues may arise from the system design if all needs are not gathered upfront throughout the
program lifecycle.
4. Every iteration step is distinct and does not flow into the next.
5. It takes a lot of time and effort to fix an issue in one unit if it needs to be corrected in all the
units.
Table of Content
When to use the RAD Model?
Objectives of Rapid Application Development Model (RAD)
Advantages of Rapid Application Development Model (RAD)
Disadvantages of Rapid application development model (RAD)
Applications of Rapid Application Development Model (RAD)
Drawbacks of Rapid Application Development
The critical feature of this model is the use of powerful development tools and techniques. A software
project can be implemented using this model if the project can be broken down into small modules
wherein each module can be assigned independently to separate teams. These modules can finally be
combined to form the final product. Development of each module involves the various basic steps as
in the waterfall model i.e. analyzing, designing, coding, and then testing, etc. as shown in the figure.
Another striking feature of this model is a short period i.e. the time frame for delivery(time-box) is
generally 60-90 days.
Multiple teams work in parallel on developing the software system using the RAD model.
The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is also an integral
part of the projects. This model consists of 4 basic phases:
1. Requirements Planning – This involves the use of various techniques used in requirements
elicitation like brainstorming, task analysis, form analysis, user scenarios, FAST (Facilitated
Application Development Technique), etc. It also consists of the entire structured plan describing
the critical data, methods to obtain it, and then processing it to form a final refined model.
2. User Description – This phase consists of taking user feedback and building the prototype using
developer tools. In other words, it includes re-examination and validation of the data collected in
the first phase. The dataset attributes are also identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes place. It includes the
actual use of powerful automated tools to transform processes and data models into the final
working product. All the required modifications and enhancements are to be done in this phase.
4. Cutover – All the interfaces between the independent modules developed by separate teams have
to be tested properly. The use of powerfully automated tools and subparts makes testing easier.
This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and taking feedback.
After validation by the customer, the SRS document is developed and the design is finalized.
3. Stakeholder Participation
Throughout the development cycle, RAD promotes end users and stakeholders’ active participation.
Collaboration and frequent feedback make it possible to make sure that the changing system satisfies
both user and corporate needs.
4. Improved Interaction
Development teams and stakeholders may collaborate and communicate more effectively thanks to
RAD. Frequent communication and feedback loops guarantee that all project participants are in
agreement, which lowers the possibility of misunderstandings.
5. Improved Quality via Prototyping
Prototypes enable early system component testing and visualization in Rapid Application
Development (RAD). This aids in spotting any problems, confirming design choices, and
guaranteeing that the finished product lives up to consumer expectations.
6. Customer Satisfaction
Delivering a system that closely satisfies user expectations and needs is the goal of RAD. Through
rapid delivery of functioning prototypes and user involvement throughout the development process,
Rapid Application Development (RAD) enhances the probability of customer satisfaction with the
final product.
RAD Model vs Traditional SDLC – Software Engineering
Software Development is the development of software for distinct purposes. There are several types
of Software Development Models. In this article, we will see the difference between the RAD
Model and the Traditional Software Development Life Cycle (SDLC).
4. Deployment: Once the above three phases are completed, the application is deployed to the client.
Benefits of RAD Model
1. Better-quality software: It produces better-quality software that is more usable and more focused on business needs.
2. Better reusability: The RAD model offers better reusability of components.
3. Flexible: The RAD model is more flexible, as it allows easy adjustments.
4. Minimum failures: It helps complete projects on time and within budget, so failures are minimal in the RAD model.
Parameter: Reusability of Elements
RAD Model: Uses identified and ready-to-use themes, templates, layouts, and predefined micro apps.
Traditional SDLC: Elements are not reusable, since they must be created from scratch in accordance with project requirements.
2. __________ are applied throughout the software process. [UGC NET CS 2014 Dec – II]
(A) Framework activities
(B) Umbrella activities
(C) Planning activities
(D) Construction activities
Solution: Correct Answer is (B).
3. Software engineering primarily aims at: [UGC NET CS June Paper – II]
(A) reliable software
(B) cost-effective software
(C) reliable and cost-effective software
(D) question does not provide sufficient data
Solution: Correct Answer is (C).
FAQs
1. Which is most commonly used SDLC model?
The most commonly used model is the Agile model; it is also the most preferred model in industry.
included handling customer change requests during project development and the high cost and time
required to incorporate these changes. To overcome these drawbacks of the Waterfall Model, in the
mid-1990s the Agile Software Development model was proposed.
Table of Content
What is Agile Model?
Agile SDLC Models/Methods
Steps in the Agile Model
Principles of the Agile Model
Characteristics of the Agile Process
When To Use the Agile Model?
Advantages of the Agile Model
Disadvantages of the Agile Model
Questions For Practice
Conclusion
Frequently Asked Questions on Agile Model – FAQs
an iterative and incremental approach to development. This means that the UP is characterized by
a series of iterations, each of which results in a working product increment, allowing for
continuous improvement and the delivery of value to the customer.
All the Agile software development methodologies discussed above share the same core values and principles, but they may differ in their implementation and specific practices. Agile development requires a high degree of collaboration and communication among team members, as well as a willingness to adapt to changing requirements and feedback from customers.
In the Agile model, the requirements are decomposed into many small parts that can be incrementally
developed. The Agile model adopts Iterative development. Each incremental part is developed over an
iteration. Each iteration is intended to be small and easily manageable and can be completed within a
couple of weeks. One iteration at a time is planned, developed, and deployed to the customers. Long-term plans are not made.
1. Requirement Gathering:- In this step, the development team gathers the requirements through interaction with the customer. The team should also plan the time and effort needed to build the project; based on this information, the technical and economic feasibility can be evaluated.
2. Design the Requirements:- In this step, the development team will use user-flow-diagram or high-
level UML diagrams to show the working of the new features and show how they will apply to the
existing software. Wireframing and designing user interfaces are done in this phase.
3. Construction / Iteration:- In this step, development team members start working on their project,
which aims to deploy a working product.
4. Testing / Quality Assurance:- Testing involves Unit Testing, Integration Testing, and System
Testing. A brief introduction of these three tests is as follows:
Unit Testing:- Unit testing is the process of checking small pieces of code to ensure that the
individual parts of a program work properly on their own. Unit testing is used to test individual
blocks (units) of code.
Integration Testing:- Integration testing is used to identify and resolve any issues that may
arise when different units of the software are combined.
System Testing:- Goal is to ensure that the software meets the requirements of the users and
that it works correctly in all possible scenarios.
5. Deployment:- In this step, the development team will deploy the working project to end users.
6. Feedback:- This is the last step of the Agile Model. In this, the team receives feedback about the
product and works on correcting bugs based on feedback provided by the customer.
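The three levels of testing in step 4 can be sketched in miniature. The functions and values below are hypothetical, invented only to illustrate the difference in scope between a unit test and an integration test; system testing would exercise the whole deployed application in the same spirit.

```python
import unittest

# Hypothetical functions under test (illustrative names, not from the notes).
def parse_price(text):
    """Convert a price string like '$4.50' to a float."""
    return float(text.strip().lstrip("$"))

def total(prices):
    """Sum a list of price strings, relying on parse_price."""
    return sum(parse_price(p) for p in prices)

class UnitTests(unittest.TestCase):
    def test_parse_price(self):
        # Unit test: one block (unit) of code checked in isolation.
        self.assertEqual(parse_price(" $4.50 "), 4.5)

class IntegrationTests(unittest.TestCase):
    def test_total(self):
        # Integration test: parse_price and total combined and checked together.
        self.assertEqual(total(["$1.00", "$2.50"]), 3.5)
```

Running the file with `python -m unittest` would execute both test classes; in a real project the system tests would additionally verify end-to-end behavior against the user requirements.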
The time required to complete an iteration is known as a Time Box. Time-box refers to the maximum
amount of time needed to deliver an iteration to customers. So, the end date for an iteration does not
change. However, the development team can decide to reduce the delivered functionality during a
Time-box if necessary to deliver it on time. The Agile model’s central principle is delivering an
increment to the customer after each Time-box.
Projects with few regulatory requirements or uncertain requirements.
Projects utilizing a less-than-strict current methodology.
Projects where the product owner is easily reachable.
Projects with flexible schedules and budgets.
2. Which of the following is not one of the principles of the agile software development method?
[UGC NET CS 2018]
(A) Following the plan
(B) Embrace change
(C) Customer involvement
(D) Incremental delivery
Solution: Correct Answer is (A).
Conclusion
Agile development models prioritize flexibility, collaboration, and customer satisfaction. They focus
on delivering working software in short iterations, allowing for quick adaptation to changing
requirements. While Agile offers advantages like faster delivery and customer involvement, it may
face challenges with complex dependencies and lack of formal documentation. Overall, Agile is best
suited for projects requiring rapid development, continuous feedback, and a highly skilled team.
Agile Software Development is an iterative and incremental approach to software development that
emphasizes the importance of delivering a working product quickly and frequently. It involves close
collaboration between the development team and the customer to ensure that the product meets their
needs and expectations.
Table of Content
Why Agile is Used?
4 Core Values of Agile Software Development
12 Principles of Agile Software Development Methodology
The Agile Software Development Process:
Agile Software development cycle:
Design Process of Agile software Development:
Example of Agile Software Development:
Advantages Agile Software Development:
Disadvantages Agile Software Development:
Practices of Agile Software Development:
Advantages of Agile Software Development over traditional software development approaches:
5. Regular Demonstrations: Agile techniques place a strong emphasis on regular demonstrations of
project progress. Stakeholders may clearly see the project’s status, upcoming problems, and
upcoming new features due to this transparency.
6. Cross-Functional Teams: Agile fosters self-organizing, cross-functional teams that share
information effectively, communicate more effectively and feel more like a unit.
1. Requirements Gathering: The customer’s requirements for the software are gathered and
prioritized.
2. Planning: The development team creates a plan for delivering the software, including the features
that will be delivered in each iteration.
3. Development: The development team works to build the software, using frequent and rapid
iterations.
4. Testing: The software is thoroughly tested to ensure that it meets the customer’s requirements
and is of high quality.
5. Deployment: The software is deployed and put into use.
6. Maintenance: The software is maintained to ensure that it continues to meet the customer’s needs
and expectations.
Agile Software Development is widely used by software development teams and is considered to be
a flexible and adaptable approach to software development that is well-suited to changing
requirements and the fast pace of software development.
Agile is a time-bound, iterative approach to software delivery that builds software incrementally from
the start of the project, instead of trying to deliver all at once.
Step 1: In the first step, the concept and business opportunities of each possible project are identified, and the amount of time and work needed to complete the project is estimated. Based on their technical and financial viability, projects can then be prioritized and it can be determined which ones are worth pursuing.
Step 2: In the second phase, known as inception, the customer is consulted regarding the initial
requirements, team members are selected, and funding is secured. Additionally, a schedule
outlining each team’s responsibilities and the precise time at which each sprint’s work is expected
to be finished should be developed.
Step 3: Teams begin building functional software in the third step, iteration/construction, based
on requirements and ongoing feedback. Iterations, also known as single development cycles, are
the foundation of the Agile software development cycle.
The team had put their best efforts into getting the product to a complete stage. But then, out of the blue, due to the rapidly changing environment, the company’s head came up with an entirely new set of features that they wanted implemented as quickly as possible, with a working model pushed out in 2 days. Team A was now in a fix: they were still in their design phase and had not yet started coding, so they had no working model to display. Moreover, it was practically impossible for them to implement the new features, since in the waterfall model there is no reverting to an earlier phase once you proceed to the next stage, which means they would have to start from square one again. That would incur heavy costs and a lot of overtime. Team B was ahead of Team A in many aspects, all thanks to Agile Development. They also had a working product with most of the core requirements since the
first increment. And it was a piece of cake for them to add the new requirements. All they had to do
was schedule these requirements for the next increment and then implement them.
Agile is a framework that defines how software development needs to be carried out. Agile is not a single method; it represents a collection of methods and practices that follow the value statements provided in the manifesto. Agile methods and practices do not promise to solve every
problem present in the software industry (No Software model ever can). But they sure help to
establish a culture and environment where solutions emerge.
The Agile Manifesto, which outlines the principles of agile development, values individuals and
interactions, working software, customer collaboration, and response to change.
Practices of Agile Software Development
Scrum: Scrum is a framework for agile software development that involves iterative cycles called
sprints, daily stand-up meetings, and a product backlog that is prioritized by the customer.
Kanban: Kanban is a visual system that helps teams manage their work and improve their
processes. It involves using a board with columns to represent different stages of the development
process, and cards or sticky notes to represent work items.
Continuous Integration: Continuous Integration is the practice of frequently merging code
changes into a shared repository, which helps to identify and resolve conflicts early in the
development process.
Test-Driven Development: Test-Driven Development (TDD) is a development practice that
involves writing automated tests before writing the code. This helps to ensure that the code meets
the requirements and reduces the likelihood of defects.
Pair Programming: Pair programming involves two developers working together on the same
code. This helps to improve code quality, share knowledge, and reduce the likelihood of defects.
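The Test-Driven Development rhythm described above can be sketched with a hypothetical `slugify` helper (the function name and behavior are assumptions for illustration): the test is written first and would fail, then the simplest code is written to make it pass.

```python
# Red step: this test is written before slugify exists, so running it
# at this point would fail with a NameError.
def test_slugify():
    assert slugify("Agile Model") == "agile-model"
    assert slugify("  Extra   Spaces ") == "extra-spaces"

# Green step: the simplest implementation that makes the test pass.
def slugify(title):
    # Lowercase the title and join the words with hyphens.
    return "-".join(title.lower().split())

test_slugify()  # now passes; the next TDD step would be to refactor
```

Writing the test first pins down the requirement before any implementation detail exists, which is exactly the defect-reduction effect the practice aims for.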
In summary, Agile software development is a popular approach to software development that
emphasizes collaboration, flexibility, and the delivery of working software in short iterations. It has
several advantages over traditional software development approaches, including increased customer
satisfaction, faster time-to-market, and reduced risk.
Table of Content
What is Extreme Programming (XP)?
Good Practices in Extreme Programming
Basic principles of Extreme programming
Applications of Extreme Programming (XP)
Life Cycle of Extreme Programming (XP)
Values of Extreme Programming (XP)
Advantages of Extreme Programming (XP)
Conclusion
Frequently Asked Questions related to Extreme Programming
The extreme programming model recommends taking the best practices that have worked well in
the past in program development projects to extreme levels.
3. Even late changes in the requirements should be entertained.
4. Face-to-face communication is preferred over documentation.
5. Continuous feedback and involvement of customers are necessary for developing good-quality
software.
6. A simple design that evolves and improves with time is a better approach than doing an elaborate design up front to handle all possible scenarios.
7. The delivery dates are decided by empowered teams of talented individuals.
Extreme programming is one of the most popular and well-known approaches in the family of agile methods. An XP project starts with user stories, which are short descriptions of the scenarios the customers and users would like the system to support. Each story is written on a separate card, so the stories can be flexibly grouped.
elimination of complex dependencies within a system. So, effective use of suitable design is
emphasized.
Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
Simplicity: The main principle of the XP model is to develop a simple system that will work
efficiently in the present time, rather than trying to build something that would take time and
may never be used. It focuses on some specific features that are immediately needed, rather
than engaging time and effort on speculations of future requirements.
Pair Programming: XP encourages pair programming where two developers work together at
the same workstation. This approach helps in knowledge sharing, reduces errors, and improves
code quality.
Continuous Integration: In XP, developers integrate their code into a shared repository several
times a day. This helps to detect and resolve integration issues early on in the development
process.
Refactoring: XP encourages refactoring, which is the process of restructuring existing code to
make it more efficient and maintainable. Refactoring helps to keep the codebase clean,
organized, and easy to understand.
Collective Code Ownership: In XP, there is no individual ownership of code. Instead, the
entire team is responsible for the codebase. This approach ensures that all team members have a
sense of ownership and responsibility towards the code.
Planning Game: XP follows a planning game, where the customer and the development team
collaborate to prioritize and plan development tasks. This approach helps to ensure that the
team is working on the most important features and delivers value to the customer.
On-site Customer: XP requires an on-site customer who works closely with the development
team throughout the project. This approach helps to ensure that the customer’s needs are
understood and met, and also facilitates communication and feedback.
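The refactoring practice listed above can be illustrated with a hypothetical shipping-cost function (the names and rates are invented for the sketch): the observable behavior stays identical while the structure becomes cleaner and easier to extend.

```python
# Before refactoring: the base fee and the per-service rates are
# duplicated across branches, so adding a service means another branch.
def shipping_cost_before(weight, express):
    if express:
        return 5.0 + weight * 0.8
    return 5.0 + weight * 0.5

# After refactoring: the shared base fee and the varying rates are
# named and separated; behavior is unchanged, structure is clearer.
BASE_FEE = 5.0
RATES = {"express": 0.8, "standard": 0.5}

def shipping_cost(weight, service="standard"):
    return BASE_FEE + weight * RATES[service]

# Under collective code ownership, checks like these give any team
# member confidence that the refactoring preserved behavior:
assert shipping_cost(10, "express") == shipping_cost_before(10, True)
assert shipping_cost(10) == shipping_cost_before(10, False)
```

This is why XP pairs refactoring with continuous testing: the tests define the behavior that must survive each restructuring.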
1. Planning: The first stage of Extreme Programming is planning. During this phase, clients
define their needs in concise descriptions known as user stories. The team calculates the effort
required for each story and schedules releases according to priority and effort.
2. Design: The team creates only the essential design needed for current user stories, using a
common analogy or story to help everyone understand the overall system architecture and keep
the design straightforward and clear.
3. Coding: Extreme Programming (XP) promotes pair programming, i.e., two developers work together at one workstation, enhancing code quality and knowledge sharing. Developers write tests before coding to ensure functionality from the start (TDD), and frequently integrate their code into a shared repository with automated tests to catch issues early.
4. Testing: Extreme Programming (XP) places great importance on testing, which consists of both unit tests and acceptance tests. Unit tests, which are automated, check whether specific features work correctly. Acceptance tests, conducted by customers, ensure that the overall system meets the initial requirements. This continuous testing ensures the software’s quality and alignment with customer needs.
5. Listening: In the listening phase, the team gathers regular feedback from customers to ensure the product meets their needs and to adapt to any changes.
Conclusion
Extreme Programming (XP) is a Software Development Methodology, known for its flexibility,
collaboration and rapid feedback using techniques like continuous testing, frequent releases, and
pair programming, in which two programmers collaborate on the same code. XP supports user
involvement throughout the development process while prioritizing simplicity and communication.
Overall, XP aims to deliver high-quality software quickly and adapt to changing requirements
effectively.
Frequently Asked Questions on Extreme Programming – FAQ’s
1. What are the 5 phases of extreme programming?
Five Phases of Extreme Programming are:
Planning.
Design.
Coding.
Testing.
Listening
The V-Model is a software development life cycle (SDLC) model that provides a systematic and
visual representation of the software development process. It is based on the idea of a “V” shape, with
the two legs of the “V” representing the progression of the software development
process from requirements gathering and analysis to design, implementation, testing, and
maintenance.
V-Model Design
1. Requirements Gathering and Analysis: The first phase of the V-Model is the requirements
gathering and analysis phase, where the customer’s requirements for the software are gathered
and analyzed to determine the scope of the project.
2. Design: In the design phase, the software architecture and design are developed, including the
high-level design and detailed design.
3. Implementation: In the implementation phase, the software is built based on the design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the customer’s
requirements and is of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure that it continues to
meet the customer’s needs and expectations.
The V-Model is often used in safety-critical systems, such as aerospace and defence systems, because of its emphasis on thorough testing and its ability to clearly define the steps involved in the software development process.
The following illustration depicts the different phases in a V-Model of the SDLC.
Verification Phases:
It involves a static analysis technique (review) done without executing code. It is the process of
evaluation of the product development phase to find whether specified requirements are met.
There are several Verification phases in the V-Model:
System Design:
Design of the system starts once the product requirements are clearly understood; the complete system is then designed. This understanding, gained at the beginning of the product development process, is beneficial for the future execution of test cases.
Architectural Design:
In this stage, architectural specifications are comprehended and designed. Usually, several technical
approaches are put out, and the ultimate choice is made after considering both the technical and
financial viability. The system architecture is further divided into modules that each handle a distinct
function. Another name for this is High-Level Design (HLD).
At this point, the exchange of data and communication between the internal modules and external
systems are well understood and defined. During this phase, integration tests can be created and
documented using the information provided.
Module Design:
This phase, known as Low-Level Design (LLD), specifies the comprehensive internal design for
every system module. Compatibility between the design and other external systems as well as other
modules in the system architecture is crucial. Unit tests are a crucial component of any development
process since they assist in identifying and eradicating the majority of mistakes and flaws at an early
stage. Based on the internal module designs, these unit tests may now be created.
Coding Phase:
The Coding step involves writing the code for the system modules that were created during the
Design phase. The system and architectural requirements are used to determine which programming
language is most appropriate.
The coding standards and principles are followed when performing the coding. Before the final build
is checked into the repository, the code undergoes many code reviews and is optimized for optimal
performance.
Validation Phases:
It involves dynamic analysis techniques (functional, and non-functional), and testing done by
executing code. Validation is the process of evaluating the software after the completion of the
development phase to determine whether the software meets the customer’s expectations and
requirements.
So, the V-Model contains Verification phases on one side and Validation phases on the other. The
Verification and Validation phases are joined by the Coding phase at the bottom, forming a V shape;
hence the name V-Model.
Design Phase:
Requirement Analysis: This phase contains detailed communication with the customer to
understand their requirements and expectations. This stage is known as Requirement Gathering.
System Design: This phase contains the system design and the complete hardware and
communication setup for developing the product.
Architectural Design: System design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with the
outside world (other systems) is clearly understood.
Module Design: In this phase, the system breaks down into small modules. The detailed design
of modules is specified, also known as Low-Level Design (LLD).
Testing Phases:
Unit Testing: Unit Test Plans are developed during the module design phase. These Unit Test
Plans are executed to eliminate bugs at the code or unit level.
Integration testing: After completion of unit testing Integration testing is performed. In
integration testing, the modules are integrated, and the system is tested. Integration testing is
performed in the Architecture design phase. This test verifies the communication of modules
among themselves.
System Testing: System testing tests the complete application with its functionality,
interdependency, and communication. It tests the functional and non-functional requirements of
the developed application.
User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the
production environment. UAT verifies that the delivered system meets the user’s requirement and
the system is ready for use in the real world.
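The unit level of this hierarchy is easy to make concrete. The function and figures below are hypothetical; the point is that Unit Test Plans written during module design become executable checks at the code level:

```python
import unittest

# Hypothetical unit under test, written during the Coding phase.
def compute_invoice_total(prices, tax_rate):
    """Sum item prices and apply a tax rate, rounded to 2 decimals."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

# Unit Test Plan items from the Module Design phase, now executable.
class InvoiceUnitTest(unittest.TestCase):
    def test_total_with_tax(self):
        self.assertEqual(compute_invoice_total([10.0, 5.0], 0.10), 16.5)

    def test_empty_invoice(self):
        self.assertEqual(compute_invoice_total([], 0.10), 0.0)
```

Running the file with `python -m unittest` executes the plan and surfaces defects at the unit level, before integration or system testing begins.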
Industrial Challenge:
As the industry has evolved, the technologies have become more complex, increasingly faster, and
forever changing, however, there remains a set of basic principles and concepts that are as applicable
today as when IT was in its infancy.
Accurately define and refine user requirements.
Design and build an application according to the authorized user requirements.
Validate that the application built adheres to the authorized business requirements.
Importance of V-Model
1. Early Defect Identification
By incorporating verification and validation tasks into every stage of the development process, the V-
Model encourages early testing. This lowers the cost and effort needed to remedy problems later in
the development lifecycle by assisting in the early detection and resolution of faults.
2. Clear Mapping of Development and Testing Phases
The V-Model contains a testing phase that corresponds to each stage of the development process. By
ensuring that testing and development processes are clearly mapped out, this clear mapping promotes
a methodical and orderly approach to software engineering.
3. Prevents “Big Bang” Testing
Testing is frequently done at the very end of the development lifecycle in traditional development
models, which results in a “Big Bang” approach where all testing operations are focused at once. By
integrating testing activities into the development process and encouraging a more progressive and
regulated testing approach, the V-Model prevents this.
4. Improves Cooperation
At every level, the V-Model promotes cooperation between the testing and development teams.
Through this collaboration, project requirements, design choices, and testing methodologies are better
understood, which improves the effectiveness and efficiency of the development process.
5. Improved Quality Assurance
Overall quality assurance is enhanced by the V-Model, which incorporates testing operations at every
level. Before the program reaches the final deployment stage, it makes sure that it satisfies the
requirements and goes through a strict validation and verification process.
Principles of V-Model
Large to Small: In the V-Model, testing is done in a hierarchical perspective: for example,
requirements are identified by the project team, followed by the High-Level Design and Detailed
Design phases of the project. As each of these phases is completed, the requirements they define
become more and more refined and detailed.
Data/Process Integrity: This principle states that the successful design of any project requires
the incorporation and cohesion of both data and processes. Process elements must be identified at
every requirement.
Scalability: This principle states that the V-Model concept has the flexibility to accommodate
any IT project irrespective of its size, complexity, or duration.
Cross Referencing: A direct correlation between requirements and corresponding testing activity
is known as cross-referencing.
Tangible Documentation:
This principle states that every project needs to create a document. This documentation is required
and applied by both the project development team and the support team. Documentation is used to
maintain the application once it is available in a production environment.
Why preferred?
It is easy to manage due to the rigidity of the model. Each phase of V-Model has specific
deliverables and a review process.
Proactive defect tracking – that is, defects are found at an early stage.
Advantages of V-Model
This is a highly disciplined model, and phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle thereby
enhancing the probability of building an error-free and good quality product.
It enables project management to track progress accurately.
Clear and Structured Process: The V-Model provides a clear and structured process for software
development, making it easier to understand and follow.
Emphasis on Testing: The V-Model places a strong emphasis on testing, which helps to ensure the
quality and reliability of the software.
Improved Traceability: The V-Model provides a clear link between the requirements and the final
product, making it easier to trace and manage changes to the software.
Better Communication: The clear structure of the V-Model helps to improve communication
between the customer and the development team.
Disadvantages of V-Model
High risk and uncertainty.
It is not good for complex and object-oriented projects.
It is not suitable for projects where requirements are not clear and contain a high risk of changing.
This model does not support iteration of phases.
It does not easily handle concurrent events.
Inflexibility: The V-Model is a linear and sequential model, which can make it difficult to adapt
to changing requirements or unexpected events.
Time-Consuming: The V-Model can be time-consuming, as it requires a lot of documentation and
testing.
Overreliance on Documentation: The V-Model places a strong emphasis on documentation, which
can lead to an overreliance on documentation at the expense of actual development work.
Conclusion
A scientific and organized approach to the Software Development Life Cycle (SDLC) is provided by
the Software Engineering V-Model. The team’s expertise with the selected methodology, the unique
features of the project, and the nature of the requirements should all be taken into consideration when
selecting any SDLC models, including the V-Model.
Coupling and Cohesion – Software Engineering
The purpose of the Design phase in the Software Development Life Cycle is to produce a solution to a
problem given in the SRS (Software Requirement Specification) document. The output of the design
phase is a Software Design Document (SDD).
Coupling and Cohesion are two key concepts in software engineering that are used to measure the
quality of a software system’s design.
Table of Content
What is Coupling and Cohesion?
Types of Coupling
Types of Cohesion
Advantages of low coupling
Advantages of high cohesion
Disadvantages of high coupling
Disadvantages of low cohesion
Conclusion
What is Coupling and Cohesion?
Coupling refers to the degree of interdependence between software modules. High coupling means
that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent, and changes in one module have little impact on other
modules.
Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a single
purpose, while low cohesion means that elements are loosely related and serve multiple purposes.
Both coupling and cohesion are important factors in determining the maintainability, scalability, and
reliability of a software system. High coupling and low cohesion can make a system difficult to
change and test, while low coupling and high cohesion make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the
customer what the system will do. Second is Technical Design which allows the system builders to
understand the actual hardware and software needed to solve a customer’s problem.
Modularization is the process of dividing a software system into multiple independent modules where
each module works independently. There are many advantages of Modularization in software
engineering. Some of these are given below:
Easy to understand the system.
System maintenance is easy.
A module can be reused many times, as per requirements; there is no need to write the same code
again and again.
Types of Coupling
Coupling is the measure of the degree of interdependence between the modules. Good software will
have low coupling.
Following are the types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data coupling,
the components are independent of each other and communicate through data. Module
communications don’t contain tramp data. Example: a customer billing system.
Stamp Coupling: In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors;
this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they are
said to be control coupled. It can be bad if parameters indicate completely different behavior and
good if parameters allow factoring and reuse of functionality. Example- sort function that takes
comparison function as an argument.
External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
Common Coupling: The modules have shared data such as global data structures. The changes in
global data mean tracing back to all modules which access that data to evaluate the effect of the
change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control
data accesses, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of another module, or
control flow is passed from one module to the other module. This is the worst form of coupling
and should be avoided.
Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or
order of events, such as one module needing to execute before another. This type of coupling can
result in design issues and difficulties in testing and maintenance.
Sequential Coupling: Sequential coupling occurs when the output of one module is used as the
input of another module, creating a chain or sequence of dependencies. This type of coupling can
be difficult to maintain and modify.
Communicational Coupling: Communicational coupling occurs when two or more modules
share a common communication mechanism, such as a shared message queue or database. This
type of coupling can lead to performance issues and difficulty in debugging.
Functional Coupling: Functional coupling occurs when two modules depend on each other’s
functionality, such as one module calling a function from another module. This type of coupling
can result in tightly-coupled code that is difficult to modify and maintain.
Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a
common data structure, such as a database table or data file. This type of coupling can lead to
difficulty in maintaining the integrity of the data structure and can result in performance issues.
Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking
methods of other classes. Like with functions, the worst form of coupling here is if methods
directly access internal parts of other methods. Coupling is lowest if methods communicate
directly through parameters.
Component Coupling: Component coupling refers to the interaction between two classes where
a class has variables of the other class. Three clear situations exist as to how this can happen. A
class C can be component coupled with another class C1, if C has an instance variable of type C1,
or C has a method whose parameter is of type C1,or if C has a method which has a local variable
of type C1. It should be clear that whenever there is component coupling, there is likely to be
interaction coupling.
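A short sketch can make the difference between data coupling and control coupling concrete. All three functions below are invented for illustration:

```python
# Data coupling: the callee receives only the data it needs.
def compute_tax(amount, rate):
    return amount * rate

# Control coupling (the questionable kind): a flag parameter selects
# between completely different behaviours inside the callee.
def format_report(items, as_html):
    if as_html:
        return "<ul>" + "".join(f"<li>{i}</li>" for i in items) + "</ul>"
    return "\n".join(str(i) for i in items)

# Control coupling (the good kind, as in the sort example above): the
# passed-in key function allows factoring and reuse of the sorting logic.
def sort_records(records, key):
    return sorted(records, key=key)

print(compute_tax(200.0, 0.5))                    # 100.0
print(sort_records([3, 1, 2], key=lambda x: -x))  # [3, 2, 1]
```

Replacing behaviour flags like `as_html` with passed-in functions or separate entry points is a common way to lower control coupling.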
Types of Cohesion
Cohesion is a measure of the degree to which the elements of the module are functionally related. It is
the degree to which all elements directed towards performing a single task are contained in the
component. Basically, cohesion is the internal glue that keeps the module together. A good software
design will have high cohesion.
Functional Cohesion: Every essential element for a single computation is contained in the
component. A functional cohesion performs the task and functions. It is an ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for another element,
i.e., data flows between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or contribute towards
the same output data. Example- update record in the database and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions
are still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student
record, calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by timing. In a module with temporal cohesion, all
the tasks must be executed in the same time span. For example, such a module may contain the
code for initializing all the parts of the system: many different activities occur, all at one
time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A component
reads inputs from tape, disk, and network. All the code for these functions is in the same
component. Operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other
than their location in the source code. It is accidental and the worst form of cohesion.
Example: printing the next line and reversing the characters of a string in a single component.
Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together
in a module based on their sequence of execution, such as a module that performs a set of related
procedures in a specific order. Procedural cohesion can be found in structured programming
languages.
Communicational Cohesion: Communicational cohesion occurs when elements or tasks are
grouped together in a module based on their interactions with each other, such as a module that
handles all interactions with a specific external system or module. This type of cohesion can be
found in object-oriented programming languages.
Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a
module based on their timing or frequency of execution, such as a module that handles all
periodic or scheduled tasks in a system. Temporal cohesion is commonly used in real-time and
embedded systems.
Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped
together in a module based on their relationship to a specific data structure or object, such as a
module that operates on a specific data type or object. Informational cohesion is commonly used
in object-oriented programming.
Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module
contribute to a single well-defined function or purpose, and there is little or no coupling between
the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to
more maintainable and reusable code.
Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together
based on their level of abstraction or responsibility, such as a module that handles only low-level
hardware interactions or a module that handles only high-level business logic. Layer cohesion is
commonly used in large-scale software systems to organize code into manageable layers.
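The two extremes of this spectrum are easy to show in code. Both functions below are invented examples:

```python
# Functional cohesion (ideal): every statement serves one computation.
def gpa(grades):
    """Compute a grade point average."""
    return sum(grades) / len(grades)

# Coincidental cohesion (worst): unrelated tasks lumped into one
# component only because they share a location in the source code.
def misc_utils(line, text):
    print(line)            # "print next line"...
    return text[::-1]      # ...and also reverse a string

print(gpa([4, 3, 3, 4]))   # 3.5
```

A reader of `gpa` can state its single purpose in one sentence; `misc_utils` cannot be described without "and also", which is the telltale sign of low cohesion.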
Disadvantages of Low Cohesion
Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain
elements that don’t belong together, reducing their functionality and making them harder to
maintain.
Difficulty in understanding the module: Low cohesion can make it harder for developers to
understand the purpose and behavior of a module, leading to errors and a lack of clarity.
Conclusion
In conclusion, it’s good for software to have low coupling and high cohesion. Low coupling means
the different parts of the software don’t rely too much on each other, which makes it safer to make
changes without causing unexpected problems. High cohesion means each part of the software has a
clear purpose and sticks to it, making the code easier to work with and reuse. Following these
principles helps make software stronger, more adaptable, and easier to grow.
Phases of ISLC
The information system life cycle is also known as the macro life cycle. This cycle typically
includes the following phases:
1. Feasibility Analysis
This phase is basically concerned with the following points:
1. Analyzing potential application areas.
2. Identifying the economics of information gathering.
3. Performing preliminary cost benefit studies.
4. Determining the complexity of data and processes.
5. Setting up priorities among applications.
3. Design
This phase has following two aspects:
1. Design of database.
2. Design of the application system that uses and processes the database.
4. Implementation
In this phase following steps are implemented:
1. The information system is implemented.
2. The database is loaded.
3. The database transactions are implemented and tested.
8. Continuous Improvement
The information system life cycle is a continuous process of improvement. The system should be
regularly evaluated to identify areas for improvement, such as performance, functionality, and
usability. This may involve revisiting previous phases of the cycle to make changes or improvements.
9. Risk Management
Throughout the entire ISLC, risk management should be an integral part of the process. This includes
identifying potential risks and developing strategies to mitigate them. Risk management should be an
ongoing process throughout the life cycle, from the feasibility analysis to deployment and
maintenance.
10. Integration
Integration with other systems is often necessary, and should be considered early in the life cycle.
This includes integration with existing systems, as well as with new systems that may be developed in
the future.
11. Scalability
As the organization grows and changes, the information system must be able to scale up to meet new
demands. This should be considered during the design phase to ensure that the system can
accommodate future growth and changes in the organization.
12. Sustainability
Sustainable design and development practices should be considered throughout the ISLC to reduce the
environmental impact of the information system. This includes reducing energy consumption,
minimizing waste, and using sustainable materials where possible.
Benefits of Using the ISLC Framework
1. Improved alignment with business goals: By following the ISLC, organizations can ensure that
their information systems align with their business goals and support the organization’s overall
mission.
2. Better project management: The ISLC provides a structured and controlled approach to
managing information system projects, which can help to improve project management and
reduce risks.
3. Increased efficiency: The ISLC can help organizations to use their resources more efficiently, by
ensuring that the development, maintenance, and retirement of information systems is planned
and managed in a consistent and controlled manner.
4. Improved user satisfaction: By involving users in the ISLC process, organizations can ensure
that their information systems meet the needs of the users, which can lead to improved user
satisfaction.
5. Better data management: By following the ISLC, organizations can ensure that their data is
properly managed throughout the entire system’s life cycle, which can help to improve data
quality and reduce risks associated with data loss or corruption.
6. Enhanced security: The ISLC can help organizations to ensure that their information systems are
designed, developed, and maintained with security in mind. This can help to reduce the risk of
data breaches and other security incidents.
7. Improved collaboration: The ISLC can help to promote collaboration between different teams
and departments involved in the development, maintenance, and retirement of information
systems. This can lead to better communication, more efficient use of resources, and improved
outcomes.
8. Better compliance: The ISLC can help organizations to ensure that their information systems
comply with relevant laws, regulations, and industry standards. This can help to reduce the risk of
legal and financial penalties, as well as damage to the organization’s reputation.
9. Increased agility: The ISLC can help organizations to be more agile and responsive to changing
business needs and technological trends. By using a structured and flexible approach to
information system development and management, organizations can more easily adapt to
changing requirements and opportunities.
10. Enhanced innovation: The ISLC can help to promote innovation and creativity in information
system development and management. By encouraging experimentation, iteration, and continuous
improvement, organizations can discover new ways to use technology to support their business
goals and mission.
11. Better cost management: By following the ISLC, organizations can ensure that they are only
investing in information systems that will provide value to the organization, and that the systems
are retired before they become too costly to maintain.
2. Database Design
At the end of this phase, a complete logical and physical design of the database system on the
chosen DBMS is ready.
3. Database Implementation
This comprises the process of specifying the conceptual, external, and internal database
definitions, creating empty database files, and implementing the software application.
4. Loading or Data Conversion
The database is populated either by loading the data directly or by converting existing files into
database system format.
5. Application Conversion
Any software application from a previous system is converted to the new system.
6. Testing and Validation
The new system is tested and validated. Testing and validation of application programs can be a
very involved process, and the techniques employed are usually covered in a software engineering
course. Automated tools exist that assist in the process.
7. Operation
The database system and its applications are put into operation. Usually the old and new systems
are operated in parallel for some time.
8. Monitoring and Maintenance
1. Project Initiation/Feasibility Study:
A feasibility study explores system requirements to determine project feasibility. There are several
fields of feasibility study including economic feasibility, operational feasibility, and technical
feasibility. The goal is to determine whether the system can be implemented or not. The process of
feasibility study takes as input the required details as specified by the user and other domain-
specific details. The output of this process simply tells whether the project should be undertaken or
not and if yes, what would the constraints be. Additionally, all the risks and their potential effects
on the projects are also evaluated before a decision to start the project is taken.
This phase of Project Management involves defining the project, identifying the stakeholders, and
establishing the project’s goals and objectives.
2. Project Planning:
In this phase of Project Management, the project manager defines the scope of the project, develops
a detailed project plan, and identifies the resources required to complete the project. A detailed plan
stating a stepwise strategy to achieve the listed objectives is an integral part of any project. Planning
consists of the following activities:
Set objectives or goals
Develop strategies
Develop project policies
Determine courses of action
Making planning decisions
Set procedures and rules for the project
Develop a software project plan
Prepare budget
Conduct risk management
Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes size,
effort, schedule, and cost estimation using various techniques.
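The work breakdown structure can be sketched as a nested structure whose leaf efforts roll up into a project total. The tasks and person-day figures below are invented for illustration:

```python
# Hypothetical WBS: nested dicts; leaf values are effort in person-days.
wbs = {
    "1 Requirements": {"1.1 Gather": 5, "1.2 Document": 3},
    "2 Design": {"2.1 High-Level Design": 4, "2.2 Low-Level Design": 6},
    "3 Construction": {"3.1 Coding": 20, "3.2 Unit testing": 8},
}

def total_effort(node):
    """Recursively roll leaf efforts up through the breakdown."""
    if isinstance(node, dict):
        return sum(total_effort(child) for child in node.values())
    return node

print(total_effort(wbs))  # 46 person-days for this sketch
```

Because each task is a named, measurable leaf, estimates can be revised locally and the project total recomputed, which is exactly what planning-phase budgeting relies on.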
3. Project Execution:
The Project Execution phase of the Project Management process involves the actual implementation
of the project, including the allocation of resources, the execution of tasks, and the monitoring and
control of project progress. A project is executed by choosing an appropriate software development
lifecycle model (SDLC). It includes several steps including requirements analysis, design, coding,
testing and implementation, testing, delivery, and maintenance. Many factors need to be considered
while doing so including the size of the system, the nature of the project, time and budget
constraints, domain requirements, etc. An inappropriate SDLC can lead to the failure of the project.
4. Project Monitoring and Controlling:
This phase of Project Management involves tracking the project’s progress, comparing actual
results to the project plan, and making changes to the project as necessary. In the project
management process, the third and fourth phases are not sequential in nature: the monitoring and
controlling phase runs continuously alongside the project execution phase, ensuring that the
project deliverables meet what is required.
During this phase, the manager properly tracks the cost and effort expended during the process.
This tracking not only helps keep the current project within budget but is also important for
estimating future projects.
5. Project Closing:
There can be many reasons for the termination of a project. Though expecting a project to terminate
after successful completion is conventional, at times, a project may also terminate without
completion. Projects have to be closed down when the requirements are not fulfilled according to
given time and cost constraints. This phase of Project Management involves completing the project,
documenting the results, and closing out any open issues.
Some reasons for failure include:
Fast-changing technology
The project running out of time
Organizational politics
Too much change in customer requirements
Project exceeding budget or funds
Once the project is terminated, a post-performance analysis is done. Also, a final report is published
describing the experiences, lessons learned, and recommendations for handling future projects.
Project management is a systematic approach to planning, organizing, and controlling the resources
required to achieve specific project goals and objectives. The project management process involves
a set of activities that are performed to plan, execute, and close a project. The project management
process can be divided into several phases, each of which has a specific purpose and set of tasks.
May not be suitable for small or simple projects.
Conclusion
Project management is a procedure that requires responsibility. The project management process
brings all the other project tasks together and ensures that the project runs smoothly. Understanding
the phases of project management—initiation, planning, execution, monitoring and control, and
closure—is crucial for successfully managing any project. Each phase plays a vital role in ensuring
that projects are completed on time, within budget, and to the satisfaction of stakeholders. By
meticulously following these phases, project managers can effectively coordinate tasks, resources,
and teams, address challenges proactively, and deliver high-quality outcomes.
Who Estimates Projects Size?
Here are the key roles involved in estimating the project size:
1. Project Manager: Project manager is responsible for overseeing the estimation process.
2. Subject Matter Experts (SMEs): SMEs provide detailed knowledge related to the specific
areas of the project.
3. Business Analysts: Business Analysts help in understanding and documenting the project
requirements.
4. Technical Leads: They estimate the technical aspects of the project such as system design,
development, integration, and testing.
5. Developers: They will provide detailed estimates for the tasks they will handle.
6. Financial Analysts: They provide estimates related to the financial aspects of the project
including labor costs, material costs, and other expenses.
7. Risk Managers: They assess the potential risks that could impact the project’s size and effort.
8. Clients: They provide input on project requirements, constraints, and expectations.
Each of these techniques has its strengths and weaknesses, and the choice of technique depends on
various factors such as the project’s complexity, available data, and the expertise of the team.
Estimating the Size of the Software
Estimation of the size of the software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time that will be needed to build the project.
Here are some of the measures that are used in project size estimation:
1. Lines of Code (LOC)
As the name suggests, LOC counts the total number of lines of source code in a project. It is
commonly reported in units of KLOC (thousands of lines of code).
Disadvantages:
1. Different programming languages contain a different number of lines.
2. No proper industry standard exists for this technique.
3. It is difficult to estimate the size using this technique in the early stages of the project.
4. LOC cannot be used to normalize or compare projects that use different platforms and languages.
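To make the counting convention concrete, here is a minimal LOC-counter sketch. The rule of excluding blank lines and comment-only lines is an assumption for illustration; as noted above, no single industry standard exists, and organizations count differently.

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines in Python-style source.

    Assumed convention: blank lines and lines that contain only a
    '#' comment are excluded; lines with trailing comments count.
    """
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            total += 1
    return total

sample = """# compute a sum
x = 1
y = 2

print(x + y)  # inline comments still count as code
"""
print(count_loc(sample))  # prints 3
```

The same source would yield a different count under another convention, which is exactly why LOC comparisons across teams and languages are unreliable.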
2. Number of Entities in the ER Diagram
Disadvantages:
1. No fixed standards exist. Some entities contribute more to project size than others.
2. Just like FPA, it is less used in the cost estimation model. Hence, it must be converted to LOC.
3. Total Number of Processes in the Data Flow Diagram (DFD)
Advantages:
1. It is independent of the programming language.
2. Each major process can be decomposed into smaller processes, which increases the accuracy
of the estimation.
Disadvantages:
1. Studying similar kinds of processes to estimate size takes additional time and effort.
2. Not every software project requires the construction of a DFD.
In Function Point Analysis, each function type is weighted by its complexity as follows:

Function Type               Low   Average   High
External Inputs              3       4        6
External Outputs             4       5        7
External Inquiries           3       4        6
Internal Logical Files       7      10       15
External Interface Files     5       7       10
3. Find the Total Degree of Influence:
Use the 14 general characteristics of a system to find the degree of influence of each of them. The
sum of all 14 degrees of influence gives the TDI; its range is 0 to 70. The 14 general
characteristics are: Data Communications, Distributed Data Processing, Performance, Heavily Used
Configuration, Transaction Rate, On-Line Data Entry, End-User Efficiency, Online Update,
Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites, and Facilitate
Change.
Each of the above characteristics is evaluated on a scale of 0 to 5.
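The computation these steps describe can be sketched as follows, using the standard value adjustment factor formula FP = UFP × (0.65 + 0.01 × TDI). The component counts and characteristic ratings below are made up purely for illustration.

```python
# Average-complexity weights per function type (from the FPA weights table).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Illustrative component counts for a hypothetical system.
counts = {
    "external_inputs": 10,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

# Unadjusted Function Points: sum of count x weight per function type.
ufp = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)

# TDI: sum of the 14 general characteristics, each rated 0-5 (range 0-70).
ratings = [3, 2, 4, 1, 3, 5, 2, 3, 4, 1, 2, 3, 2, 4]   # illustrative ratings
tdi = sum(ratings)

# Value Adjustment Factor and the final adjusted Function Point count.
vaf = 0.65 + 0.01 * tdi
fp = ufp * vaf
print(ufp, tdi, round(fp, 2))  # prints 154 39 160.16
```

Note that if all 14 characteristics were rated 0, the VAF would be 0.65, and if all were rated 5 it would be 1.35, so the adjustment can move the unadjusted count by up to 35% in either direction.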
Disadvantages:
1. It is not good for real-time systems and embedded systems.
2. Many cost estimation models like COCOMO use LOC, so the Function Point Count (FPC) must
first be converted to LOC.
4. Break Down the Project: Use a Work Breakdown Structure (WBS) and detailed task analysis to
ensure that each task is specific and measurable.
5. Incorporate Expert Judgement: Engage subject matter experts and experienced team
members to provide input on estimates.
Conclusion
In conclusion, accurate project size estimation is crucial for software project success. Traditional
techniques like lines of code have limitations. The future of estimation lies in AI and data-
driven insights for better resource allocation, risk management, and project planning.
1. Identification and Establishment – Identifying the configuration items from products that
compose baselines at given points in time (a baseline is a set of mutually consistent
Configuration Items, which has been formally reviewed and agreed upon, and serves as the
basis of further development). Establishing relationships among items, creating a mechanism to
manage multiple levels of control and procedure for the change management system.
2. Version control – Creating versions/specifications of the existing product to build new
products with the help of the SCM system. A description of versioning is given below:
Suppose that after some changes, the version of a configuration object changes from 1.0 to 1.1.
Minor corrections and changes result in versions 1.1.1 and 1.1.2, which are followed by a major
update, object 1.2. The development of object 1.0 continues through 1.3 and 1.4, but finally a
noteworthy change to the object results in a new evolutionary path, version 2.0. Both versions
are currently supported.
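The version numbering described above can be modeled by comparing dotted version strings numerically. This is a simplified sketch: real SCM tools track a full revision graph (including the branch from 1.1 to 2.0), not just ordered numbers.

```python
def version_key(v: str):
    """Turn a dotted version string like '1.1.2' into a tuple (1, 1, 2)
    so that versions compare numerically rather than lexically."""
    return tuple(int(part) for part in v.split("."))

# The versions from the example above, in the order they were created.
history = ["1.0", "1.1", "1.1.1", "1.1.2", "1.2", "1.3", "1.4", "2.0"]

ordered = sorted(history, key=version_key)
print(ordered)

# A minor correction (1.1.2) precedes the major update (1.2), and tuple
# comparison gets this right where plain string comparison would not.
assert version_key("1.1.2") < version_key("1.2")
```

Plain string comparison would incorrectly put "1.1.2" after "1.2" (since "1" < "2" at the third character is not what decides it lexically); converting to integer tuples avoids that trap.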
3. Change control – Controlling changes to Configuration items (CI). The change control process
is explained in Figure below:
A change request (CR) is submitted and evaluated to assess technical merit, potential side effects,
the overall impact on other configuration objects and system functions, and the projected cost of
the change. The results of the evaluation are presented as a change report, which is used by a
change control board (CCB), a person or group that makes the final decision on the status and
priority of the change. An Engineering Change Request (ECR) is generated for each approved
change; if the change is rejected, the CCB notifies the developer with the reason. The ECR
describes the change to be made, the constraints that must be respected, and the criteria for
review and audit. The object to be changed is “checked out” of the project database, the change
is made, and the object is tested again. The object is then “checked in” to the database, and
appropriate version control mechanisms are used to create the next version of the software.
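The change-control flow just described can be sketched as a simple state walk: evaluate the CR, let the CCB approve or reject, then check out, modify, test, and check in. All names below are illustrative and do not correspond to any real SCM tool’s API.

```python
def process_change_request(cr, ccb_approves, run_tests):
    """Walk one change request (CR) through the change-control states."""
    report = {"cr": cr, "evaluated": True}      # change report given to the CCB
    if not ccb_approves(report):
        return "rejected (developer notified with reason)"
    # Approved: an Engineering Change Request (ECR) would be generated here.
    obj = f"checked-out:{cr}"                    # check object out of the database
    obj = obj + ":modified"                      # make the change
    if not run_tests(obj):
        return "change failed testing"
    return "checked in as new version"           # version control creates next version

# Usage: a CR that the CCB approves and that passes testing.
result = process_change_request(
    "CR-1",
    ccb_approves=lambda report: True,
    run_tests=lambda obj: True,
)
print(result)  # prints: checked in as new version
```

The key property the sketch captures is that rejection and test failure are explicit exits: an object never reaches “checked in” without passing through CCB approval and testing.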
4. Configuration auditing – A software configuration audit complements the formal technical
review of the process and product. It focuses on the technical correctness of the configuration
object that has been modified. The audit confirms the completeness, correctness, and
consistency of items in the SCM system and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to developers, testers, end
users, customers, and stakeholders through admin guides, user guides, FAQs, Release notes,
Memos, Installation Guide, Configuration guides, etc.
System Configuration Management (SCM) is a software engineering practice that focuses on
managing the configuration of software systems and ensuring that software components are
properly controlled, tracked, and stored. It is a critical aspect of software development, as it helps to
ensure that changes made to a software system are properly coordinated and that the system is
always in a known and stable state.
SCM involves a set of processes and tools that help to manage the different components of a
software system, including source code, documentation, and other assets. It enables teams to track
changes made to the software system, identify when and why changes were made, and manage the
integration of these changes into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to reported issues makes bug tracking
more effective.
2. Continuous Deployment and Integration: SCM combines with continuous processes to automate
deployment and testing, resulting in more dependable and timely software delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws by assisting in the early
detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code modifications in big
projects, fostering a well-organized development process.
5. Reproducibility: By recording the precise versions of code, libraries, and dependencies, SCM
makes builds repeatable.
6. Parallel Development: SCM facilitates parallel development by enabling several developers to
collaborate on various branches at once.
WHAT IS THE COCOMO MODEL?
The COCOMO Model is a procedural cost-estimation model for software projects. It is often used
to reliably predict the various parameters associated with a project, such as size, effort, cost, time,
and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which
makes it one of the best-documented models.
The key parameters that define the quality of any software product, which are also an outcome of
COCOMO, are primarily effort and schedule:
1. Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
2. Schedule: This simply means the amount of time required for the completion of the job, which
is, of course, proportional to the effort put in. It is measured in the units of time such as weeks,
and months.
Effort equations (for a 400 KLOC project):
Organic: E = 2.4(400)^1.05
Semi-Detached: E = 3.0(400)^1.12
Embedded: E = 3.6(400)^1.20
In detailed COCOMO, the software is divided into different modules; COCOMO is then applied to
each module to estimate its effort, and the module efforts are summed.
The Six phases of detailed COCOMO are:
1. Planning and requirements: This initial phase involves defining the scope, objectives, and
constraints of the project. It includes developing a project plan that outlines the schedule,
resources, and milestones
2. System design: In this phase, the high-level architecture of the software system is created.
This includes defining the system’s overall structure, including major components, their
interactions, and the data flow between them.
3. Detailed design: This phase involves creating detailed specifications for each component of the
system. It breaks down the system design into detailed descriptions of each module, including
data structures, algorithms, and interfaces.
4. Module code and test: This involves writing the actual source code for each module or
component as defined in the detailed design. It includes coding the functionalities,
implementing algorithms, and developing interfaces.
5. Integration and test: This phase involves combining individual modules into a complete
system and ensuring that they work together as intended.
6. Cost Constructive model: The Constructive Cost Model (COCOMO) is a widely used
method for estimating the cost and effort required for software development projects.
Different models of COCOMO have been proposed to predict the cost estimation at different
levels, based on the amount of accuracy and correctness required. All of these models can be
applied to a variety of projects, whose characteristics determine the value of the constant to be used
in subsequent calculations. These characteristics of different system types are mentioned below.
Boehm’s definition of organic, semidetached, and embedded systems:
1. Basic COCOMO Model
The Basic COCOMO model is a straightforward way to estimate the effort needed for a software
development project. It uses a simple mathematical formula to predict how many person-months of
work are required based on the size of the project, measured in thousands of lines of code (KLOC).
It estimates effort and time required for development using the following expression:
E = a*(KLOC)^b PM
Tdev = c*(E)^d
Persons required = Effort / Time
Where,
E is the effort applied in person-months,
KLOC is the estimated size of the software product in kilo lines of code,
Tdev is the development time in months, and
a, b, c, and d are constants determined by the category of software project, given in the table below.
The above formulas are used for the cost estimation of the basic COCOMO model and are also used
in the subsequent models. The constant values a, b, c, and d for the Basic Model for the different
categories of software projects are:
Software Projects     a      b      c      d
Organic              2.4   1.05   2.5   0.38
Semi-Detached        3.0   1.12   2.5   0.35
Embedded             3.6   1.20   2.5   0.32
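The Basic COCOMO computation can be sketched directly from these formulas. The constants below are the standard Basic COCOMO values for each project category (the semi-detached row matches the table above), applied here to the 400 KLOC example used in the effort equations.

```python
# Basic COCOMO constants (a, b, c, d) per project category.
CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months,
    average staffing) for the Basic COCOMO model."""
    a, b, c, d = CONSTANTS[mode]
    effort = a * kloc ** b        # E = a * (KLOC)^b, in person-months
    tdev = c * effort ** d        # Tdev = c * (E)^d, in months
    persons = effort / tdev       # persons required = effort / time
    return effort, tdev, persons

e, t, p = basic_cocomo(400, "organic")
print(f"Effort = {e:.1f} PM, Tdev = {t:.1f} months, staff = {p:.1f}")
```

For an organic 400 KLOC project this gives an effort of roughly 1295 person-months over about 38 months, i.e. an average team of around 34 people, which illustrates how quickly the model’s superlinear exponent inflates effort for large systems.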
2. Intermediate COCOMO Model
The basic model estimates effort from size alone; the Intermediate Model also considers
other factors such as reliability, experience, and capability. These factors are known as Cost
Drivers (multipliers), and the Intermediate Model utilizes 15 such drivers for cost estimation.
Product attributes
Required software reliability
Size of the application database
Complexity of the product
Hardware attributes
Run-time performance constraints
Memory constraints
The volatility of the virtual machine environment
Required turnaround time
Personnel attributes
Analyst capability
Software engineering capability
Application experience
Virtual machine experience
Programming language experience
Project attributes
Use of software tools
Application of software engineering methods
Required development schedule
Each of the 15 attributes is rated on a six-point scale ranging from “very low” to “extra
high” according to its relative order of importance. Each attribute has an effort multiplier fixed
as per its rating. The table given below represents the cost drivers and their respective ratings:
The Effort Adjustment Factor (EAF) is determined by multiplying the effort multipliers
associated with each of the 15 attributes.
The Effort Adjustment Factor (EAF) is employed to refine the estimates generated by the basic
COCOMO model through the following expression:
E = a*(KLOC)^b * EAF PM
The constant values a, b, c, and d for the Basic Model for the different categories of software
projects are:
Software Projects     a      b      c      d
Organic              2.4   1.05   2.5   0.38
Semi-Detached        3.0   1.12   2.5   0.35
Embedded             3.6   1.20   2.5   0.32
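The Intermediate COCOMO estimate can be sketched as the basic effort equation scaled by the EAF, which is the product of the 15 cost-driver multipliers. The multiplier values below are illustrative only, not Boehm’s published table; drivers rated “nominal” contribute a multiplier of 1.00 and so are omitted.

```python
# Semi-detached constants (a, b) from the table above.
A, B = 3.0, 1.12

# Illustrative effort multipliers for a few non-nominal cost drivers.
# Multipliers above 1.0 increase effort; below 1.0 decrease it.
cost_driver_multipliers = {
    "required_reliability": 1.15,   # rated high
    "product_complexity":   1.30,   # rated very high
    "analyst_capability":   0.86,   # strong analysts lower effort
    "language_experience":  0.95,
    # remaining drivers rated nominal contribute 1.00 each
}

# EAF is the product of all the multipliers.
eaf = 1.0
for m in cost_driver_multipliers.values():
    eaf *= m

kloc = 100
effort = A * kloc ** B * eaf    # E = a * (KLOC)^b * EAF, in person-months
print(round(eaf, 3), round(effort, 1))
```

With these illustrative ratings the EAF comes to about 1.22, inflating the unadjusted semi-detached estimate of roughly 521 person-months to about 637: a reminder that a handful of unfavorable cost drivers can outweigh a capable team.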
Best Practices for Using COCOMO
1. Recognize the Assumptions Underpinning the Model: Become acquainted with the
COCOMO model’s underlying assumptions, which include its emphasis on team experience,
size, and complexity. Understand that although COCOMO offers useful approximations, project
results cannot be predicted with accuracy.
2. Customize the Model: Adapt COCOMO’s inputs and parameters to your project’s unique
requirements, including organizational capacity, development processes, and industry
standards. By doing this, you can be confident that the estimations produced by COCOMO are
more precise and appropriate for your situation.
3. Utilize Historical Data: To verify COCOMO inputs and improve estimating parameters,
collect and examine historical data from previous projects. Because real-world data takes
project-specific aspects and lessons learned into account, COCOMO projections become more
accurate and reliable.
4. Verify and validate: Compare COCOMO estimates with actual project results, and make
necessary adjustments to estimation procedures in light of feedback and lessons discovered.
Review completed projects to find errors and enhance future project estimation accuracy.
5. Combine with Other Techniques: To reduce biases or inaccuracies in any one method and to
triangulate results, combine COCOMO estimates with other estimation techniques, including
expert judgment, analogous estimation, and bottom-up estimation.
2. What does the COCOMO Model stand for?
COCOMO Model stands for Constructive Cost Model.
Shortcomings of the Capability Maturity Model (CMM)
It encourages the achievement of a higher maturity level in some cases by displacing the true
mission, which is improving the process and overall software quality.
It only helps if it is put into place early in the software development process.
It has no formal theoretical basis and in fact, is based on the experience of very knowledgeable
people.
It does not have good empirical support and this same empirical support could also be
constructed to support other models.
Difficulty in measuring process improvement: The SEI/CMM model may not provide an
accurate measure of process improvement, as it relies on self-assessment by the organization
and may not capture all aspects of the development process.
Focus on documentation rather than outcomes: The SEI/CMM model may focus too much on
documentation and adherence to procedures, rather than on actual outcomes such as software
quality and customer satisfaction.
May not be suitable for all types of organizations: The SEI/CMM model may not be suitable for
all kinds of organizations, particularly those with smaller development teams or those with less
structured development processes.
May not keep up with rapidly evolving technologies: The SEI/CMM model may not be able to
keep up with rapidly evolving technologies and development methodologies, which could limit
its usefulness in certain contexts.
Lack of agility: The SEI/CMM model may not be agile enough to respond quickly to changing
business needs or customer requirements, which could limit its usefulness in dynamic and
rapidly changing environments.
Level-1: Initial
No KPIs defined.
Processes followed are Adhoc and immature and are not well defined.
Unstable environment for software development.
No basis for predicting product quality, time for completion, etc.
Limited project management capabilities, such as no systematic tracking of schedules, budgets,
or progress.
Limited communication and coordination among team members and stakeholders.
No formal training or orientation for new team members.
Little or no use of software development tools or automation.
Highly dependent on individual skills and knowledge rather than standardized processes.
High risk of project failure or delays due to a lack of process control and stability.
Level-2: Repeatable
Focuses on establishing basic project management policies.
Experience with earlier projects is used for managing new similar-natured projects.
Project Planning- It includes defining resources required, goals, constraints, etc. for the
project. It presents a detailed plan to be followed systematically for the successful completion
of good-quality software.
Configuration Management- The focus is on maintaining the integrity of the software
product, including all its components, for the entire lifecycle.
Requirements Management- It includes the management of customer reviews and feedback
which result in some changes in the requirement set. It also consists of accommodation of those
modified requirements.
Subcontract Management- It focuses on the effective management of qualified software
contractors i.e. it manages the parts of the software developed by third parties.
Software Quality Assurance- It guarantees a good quality software product by following
certain rules and quality standard guidelines while developing.
Level-3: Defined
At this level, documentation of the standard guidelines and procedures takes place.
It is a well-defined integrated set of project-specific software engineering and management
processes.
Peer Reviews: In this method, defects are removed by using several review methods like
walkthroughs, inspections, buddy checks, etc.
Intergroup Coordination: It consists of planned interactions between different development
teams to ensure efficient and proper fulfillment of customer needs.
Organization Process Definition: Its key focus is on the development and maintenance of
standard development processes.
Organization Process Focus: It includes activities and practices that should be followed to
improve the process capabilities of an organization.
Training Programs: It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.
Level-4: Managed
At this stage, quantitative quality goals are set for the organization for software products as well
as software processes.
The measurements made help the organization to predict the product and process quality within
some limits defined quantitatively.
Software Quality Management: It includes the establishment of plans and strategies to
develop quantitative analysis and understanding of the product’s quality.
Quantitative Management: It focuses on controlling the project performance quantitatively.
Level-5: Optimizing
This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
The use of new tools, techniques, and evaluation of software processes is done to prevent the
recurrence of known defects.
Process Change Management: Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for the
software product.
Technology Change Management: It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
Defect Prevention: It focuses on identifying the causes of defects and preventing them from
recurring in future projects by improving project-defined processes.
CMM (Capability Maturity Model) vs CMMI (Capability Maturity Model Integration)
Levels of CMMI
CMMI, like CMM, is organized into five stages of process maturity. However, they differ from the
levels in CMM.
There are 5 maturity levels in the CMMI Model.
Level 1: Initial: Processes are often ad hoc and unpredictable. There is little or no formal process
in place.
Level 2: Managed: Basic project management processes are established. Projects are planned,
monitored, and controlled.
Level 3: Defined: Organizational processes are well-defined and documented. Standardized
processes are used across the organization.
Level 4: Quantitatively Managed: Processes are measured and controlled using statistical and
quantitative techniques. Process performance is quantitatively understood and managed.
Level 5: Optimizing: Continuous process improvement is a key focus. Processes are continuously
improved based on quantitative feedback.
2. Match the 5 CMM Maturity levels/CMMI staged representations in List- I with their
characterizations in List-II codes: [UGC NET CS 2018]
(A) iv v i iii ii
(B) i ii iv v iii
(C) v iv ii iii i
(D) iv v ii iii i
Solution: The correct answer is (D).
3. Which one of the following is not a key process area in CMM level 5? [UGC NET CSE
2014]
(A) Defect prevention
(B) Process change management
(C) Software product engineering
(D) Technology change management
Solution: The correct answer is (C).
Conclusion
The Capability Maturity Model (CMM) is a framework designed to help organizations improve
their software development processes. It outlines five levels of maturity, each representing a step
towards more organized and efficient practices. In simple words, CMM helps companies identify
their current process capabilities, find weaknesses, and provide a structured path for improvement,
ensuring better project management and higher quality outcomes over time.
List some of the alternatives of the Capability Maturity Model for the improvement of
Processes.
Answer:
Some of the alternatives of the Capability Maturity Model are listed below.
Six Sigma
ISO 9000
Agile methodologies
Lean Software Development
We will be discussing these steps in brief and how risk assessment and management are incorporated
into these steps to ensure less risk in the software being developed.
1. Preliminary Analysis
In this step, you need to:
Find out the organization’s objectives
Understand the nature and scope of the problem under study
Propose alternative solutions and proposals after gaining a deep understanding of the problem and
of what competitors are doing
Describe costs and benefits.
Support from Risk Management Activities: Below mentioned is the support from the activities of
Risk Management.
Establish a process and responsibilities for risk management
Document initial known risks
The project manager should prioritize the risks
Feasibility Study: This is the first and most important phase. In big projects it is often conducted as
a standalone phase rather than as a sub-phase of the requirement definition phase. This phase
allows the team to estimate the major risk factors, cost, and time for a given project. A feasibility
study helps us decide whether it is worth building the system at all, and it helps to identify the
main risk factors.
Risk Factors: Following is the list of risk factors for the feasibility study phase.
Project managers often make a mistake in estimating the cost, time, resources, and scope of the
project. Unrealistic budget, time, inadequate resources, and unclear scope often lead to project
failure.
Unrealistic Budget: As discussed above, inaccurate estimation of the budget may cause the
project to run out of funds early in the SDLC. An accurate budget estimate depends on correct
knowledge of the time, effort, and resources required.
Unrealistic Schedule: Incorrect time estimation leads to pressure from project managers on
developers to deliver the project on time, compromising the overall quality of the project and
thus making the system less secure and more vulnerable.
Insufficient resources: In some cases, the technology and tools available are not up to date
enough to meet the project requirements, or the available resources (people, tools, technology)
are not enough to complete the project. In either case, the project will be delayed, or in the worst
case it may fail.
Unclear project scope: Clear understanding of what the project is supposed to do, which
functionalities are important, which functionalities are mandatory, and which functionalities can
be considered as extra is very important for project managers. Insufficient knowledge of the
system may lead to project failure.
Requirement Elicitation: It starts with an analysis of the application domain. This phase requires the
participation of different stakeholders to ensure efficient, correct, and complete gathering of system
services, their performance, and constraints. This data set is then reviewed and articulated to make it
ready for the next phase.
Risk Factors: Following is the list of risk factors for the Requirement Elicitation phase.
Incomplete requirements: In 60% of cases, users are unable to state all requirements at the
beginning; requirements are therefore the most dynamic part of the complete SDLC (Software
Development Life Cycle) process. If any user needs, constraints, or other functional/non-functional
requirements are not covered, the requirement set is said to be incomplete.
Inaccurate requirements: If the requirement set does not reflect real user needs then in that case
requirements are said to be inaccurate.
Unclear requirements: Often in the process of SDLC there exists a communication gap between
users and developers. This ultimately affects the requirement set. If the requirements stated by
users are not understandable by analysts and developers then these requirements are said to be
unclear.
Ignoring nonfunctional requirements: Sometimes developers and analysts ignore the fact that
nonfunctional requirements are just as important as functional requirements. They focus on
delivering what the system should do rather than on qualities such as scalability, maintainability,
and testability.
Conflicting user requirements: Multiple users in a system may have different requirements. If
not listed and analyzed carefully, this may lead to inconsistency in the requirements.
Gold plating: It is very important to list out all requirements in the beginning. Adding
requirements later during development may lead to threats in the system. Gold plating is nothing
but adding extra functionality to the system that was not considered earlier. Thus inviting threats
and making the system vulnerable.
Unclear description of the real operating environment: Insufficient knowledge of the real
operating environment leads to missed vulnerabilities, so threats remain undetected until a later
stage of the software development life cycle.
Requirement Analysis Activity: In this step, the requirements gathered by interviewing users, by
brainstorming, or by other means are first analyzed, then classified and organized (for example,
into functional and nonfunctional groups), and then prioritized so that it is clear which
requirements are of high priority and must definitely be present in the system. After all these
steps, the requirements are negotiated.
Risk Factors: Following is the list of risk factors for the Requirement Analysis Activity.
Nonverifiable requirements: If no finite, cost-effective process (such as testing or inspection) is
available to check whether the software meets a requirement, then that requirement is said to be
nonverifiable.
Infeasible requirement: If sufficient resources are not available to successfully implement a
requirement, then it is said to be an infeasible requirement.
Inconsistent requirement: If a requirement contradicts any other requirement then the
requirement is said to be inconsistent.
Nontraceable requirement: It is very important for every requirement to have an origin source.
During documentation, it is necessary to write the origin source of each requirement so that it can
be traced back in the future when required.
Unrealistic requirement: A requirement is realistic enough to be documented and implemented
only if it meets all the above criteria, i.e. it is complete, accurate, consistent, traceable, and
verifiable; otherwise it is unrealistic.
Requirement Validation Activity: This involves validating the requirements that are gathered and
analyzed till now to check whether they actually define what users want from the system.
Risk Factors: Following is the list of risk factors for the Requirement Validation Activity phase.
Misunderstood domain-specific terminology: Developers and application specialists often use
domain-specific terminology, i.e. technical terms that most end users cannot understand, creating
misunderstandings between end users and developers.
Using natural language to express requirements: Natural language is not always the best way
to express requirements as different users may have different signs and conventions. Thus it is
advisable to use formal language for expressing and documenting.
Requirement Documentation Activity: This step involves creating a Requirement Document (RD)
by writing down all the agreed-upon requirements using formal language. RD serves as a means of
communication between different stakeholders.
Risk Factors: Following is the list of risk factors for the Requirement Documentation Activity phase.
Inconsistent requirements data and RD: Sometimes it might happen, due to glitches in the
gathering and documentation process, actual requirements may differ from the documented ones.
Nonmodifiable RD: If during RD preparation, structuring of RD with maintainability is not
considered then it will become difficult to edit the document in the course of change without
rewriting it.
1. Which SDLC Model is Best for Risk Management?
Answer:
The Spiral Model is the systems development lifecycle (SDLC) model best suited for risk
management.
2. What is Risk Analysis in SDLC?
Answer:
Risk analysis is simply identifying risks in applications and prioritizing them for testing purposes.
3. How Risk is Managed in the Waterfall Model?
Answer:
Risks in the Waterfall Model are managed with the help of risk charts: after risks are detected, a
risk chart is prepared.
3. System Design
This is the second phase of the SDLC, wherein the system architecture is established and all
documented requirements are addressed. In this phase, the system (operations and features) is
described in detail using screen layouts, pseudocode, business rules, process diagrams, etc.
Risk Factors:
Improper choice of programming language: An incorrect choice of programming language
may not support the chosen architectural method. This may reduce the maintainability and
portability of the system.
Constructing Physical Model Activity: The physical model consisting of symbols is a
simplified description of a hierarchically organized system.
Risk Factors:
Complex System: If the system to be developed is very large and complex, developers may
get confused and be unable to decide where to start and how to decompose such a large,
complex system into components.
Complicated Design: For a large, complex system, confusion and a lack of sufficient skills
may lead developers to create a complicated design that is difficult to implement.
Large-Size Components: Large components that are further decomposable into
sub-components may be difficult to implement, and it is also difficult to assign functions to
these components.
Unavailability of Expertise for Reusability: A lack of proper expertise to determine which
components can be reused poses a serious risk to the project. Developing components from
scratch takes much longer than reusing components, delaying project completion.
Less Reusable Components: Incorrect estimation of reusable components during the
analysis phase poses two serious risks to the project: delay in project completion and budget
overrun. Developers may be surprised to find that a percentage of the code that was
considered ready needs to be rewritten from scratch, which will eventually make the project
overrun its budget.
Verifying Design Activity: Verifying design means ensuring that the design is the correct
solution for the system under construction and it meets all user requirements.
o Risk Factors:
o Difficulties in Verifying Design to Requirements: Sometimes it is quite difficult for
the developer to check whether the proposed design meets all user requirements or not.
In order to make sure that the design is the correct solution for the system it is necessary
that the design meets all requirements.
o Many Feasible Solutions: When verifying the design, the developer may come across
many alternative solutions to the same problem. Choosing the best possible design
that meets all requirements is therefore difficult; the choice depends on the system
and its nature.
o Incorrect Design: While verifying the design, it may turn out that the proposed
design matches only a few requirements, or none at all, and is effectively a
completely different design.
Specifying Design Activity: This activity identifies the components, defines the data flow
between them, and for each identified component states its function, data input, data output,
and resource utilization.
o Risk Factors:
o Difficulty in allocating functions to components: Developers may face difficulty in
allocating functions to components in two cases: first, when the system is not
decomposed correctly, and second, when the requirements documentation is poor.
In the latter case, developers find it difficult to identify functions for the
components, since functional requirements constitute the functions of the components.
o Extensive specification: Extensive specification of module processing should be
avoided to keep the design document as small as possible.
o Omitting Data Processing Functions: Data processing functions such as create and
read are the operations that components perform on data. Accidental omission of
these functions should be avoided.
Documenting Design Activity: In this phase, the design document (DD) is prepared. This helps to
control and coordinate the project during implementation and later phases.
o Risk Factors:
o Incomplete DD: The design document should be detailed enough to explain each
component, sub-components, and sub-sub-components in full detail so that developers
may work independently on different modules. If DD lacks these features then
programmers cannot work independently.
o Inconsistent DD: If the same function is carried out by more than one component, this
results in redundancy in the design document and eventually in an inconsistent
document.
o Unclear DD: If the design document does not clearly define components and is written
in uncommon natural language, then in that case it might be difficult for the developers
to understand the proposed design.
o Large DD: The design document should be detailed enough to list all components with
full details about functions, input, output, required resources, etc., but it should not
contain unnecessary information. A very large design document is difficult for
programmers to understand.
4. Development
This stage involves the actual coding of the software as per the agreed-upon requirements between
the developer and the client.
o Code not understandable by reviewers: During unit testing, developers need
to review and make changes to the code. If the code is not understandable it will
be very difficult to update the code.
o Coding drivers and stubs: During unit testing, modules need data from other
modules or need to pass data to other modules, since no module is completely
independent. A stub is a piece of code that replaces a module that accepts data
from the module being tested. A driver is a piece of code that replaces a module
that passes data to the module being tested. Coding drivers and stubs consumes
a lot of time and effort, and since they are not delivered with the final system,
they are considered extras.
o Poor documentation of test cases: Test cases need to be documented properly
so that these can be used in the future.
o The testing team is not experienced: The testing team may not be experienced
enough to handle the automated tools or to write short, concise code for drivers
and stubs.
o Poor regression testing: Regression testing means rerunning previously successful
test cases when a change is made. Selective rerun saves time and effort, but
rerunning every test case for each change can be very time-consuming.
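To make the drivers-and-stubs point above concrete, here is a minimal Python sketch. All names (such as `compute_discount` and `fetch_base_price_stub`) are hypothetical, not from these notes: the stub stands in for a module that supplies data to the unit under test, and the driver stands in for the caller that feeds it test data.

```python
# Unit testing with a driver and a stub (illustrative names).

def fetch_base_price_stub(item_id):
    """Stub: stands in for the real pricing module that would
    supply data to the module under test."""
    return {"A1": 100.0, "B2": 250.0}[item_id]

def compute_discount(item_id, rate, fetch_base_price):
    """Module under test: applies a discount rate to a base price."""
    price = fetch_base_price(item_id)
    return round(price * (1 - rate), 2)

def driver():
    """Driver: stands in for the caller, passing test data to the
    module under test and checking the results."""
    assert compute_discount("A1", 0.10, fetch_base_price_stub) == 90.0
    assert compute_discount("B2", 0.20, fetch_base_price_stub) == 200.0
    return "all unit tests passed"

print(driver())  # -> all unit tests passed
```

Because neither the stub nor the driver ships with the final system, the effort spent writing them is exactly the "extra" cost the notes describe.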
We have already discussed the first four steps of the Software Development Life Cycle. In this
article, we will be discussing the remaining four steps: Integration and System Testing,
Installation, Operation and Acceptance Testing, Maintenance, and Disposal. We will discuss
Risk Management in these four steps in detail.
Integration Activity: In this phase, individual units are combined into one working system.
o Risk Factors:
o Difficulty in combining components: Integration should be done incrementally else it
will be very difficult to locate errors and bugs. The wrong sequence of integration will
eventually hamper the functionality for which the system was designed.
o Integrate wrong versions of components: Developing a system involves writing
multiple versions of the same component. If the incorrect version of the component is
selected for integration it may not produce the desired functionality.
o Omissions: Integration of components should be done carefully. A single missed
component may result in errors and bugs that are difficult to locate.
Integration Testing Activity: After integrating the components next step is to test whether the
components interface correctly and to evaluate their integration. This process is known as
integration testing.
o Risk Factors:
o Bugs during integration: If wrong versions of components are integrated or components
are accidentally omitted, then it will result in bugs and errors in the resultant system.
o Data loss through the interface: Wrong integration leads to data loss between
components when the number of parameters in the calling component does not match the
number of parameters in the called component.
o Desired functionality not achieved: Errors and bugs introduced during integration result
in a system that fails to generate the desired functionality.
o Difficulty in locating and repairing errors: If integration is not done incrementally, it
results in errors and bugs that are hard to locate. Even if the bugs are located, they need to
be fixed. Fixing errors in one component may introduce errors in other components. Thus
it becomes quite cumbersome to locate and repair errors.
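The parameter-mismatch risk noted above can be guarded against mechanically. The following sketch (component names are invented for illustration) checks that a caller passes exactly as many arguments as the called component declares before running a small integration test:

```python
import inspect

def process_record(record_id, payload, timestamp):
    """Called component: packages a record (hypothetical)."""
    return {"id": record_id, "data": payload, "ts": timestamp}

def send_record(record_id, payload, timestamp):
    """Calling component: forwards data across the interface."""
    return process_record(record_id, payload, timestamp)

def interface_matches(caller_args, callee):
    """Check that the caller passes exactly as many arguments as the
    called component declares - the mismatch that can silently lose
    data at a component interface."""
    return len(caller_args) == len(inspect.signature(callee).parameters)

# Integration test: verify the interface, then the combined behavior.
args = ("r1", {"x": 1}, 1700000000)
assert interface_matches(args, process_record)
result = send_record(*args)
assert result == {"id": "r1", "data": {"x": 1}, "ts": 1700000000}
print("interface check and integration test passed")
```

In a statically typed language the compiler catches this mismatch; in dynamically typed or loosely coupled systems, an explicit interface check like this is one inexpensive safeguard.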
System Testing Activity: In this step, the integrated system is tested to ensure that it meets all
the system requirements gathered from the users.
o Risk Factors:
o Unqualified testing team: The lack of a good testing team is a major setback for good
software as testers may misuse the available resources and testing tools.
o Limited testing resources: Time, budget, and tools if not used properly or unavailable
may delay project delivery.
o Not possible to test in a real environment: Sometimes it is not possible to test the system
in a real environment due to budget limits, time constraints, etc.
o Testing cannot cope with requirements change: User requirements often change during
the entire software development life cycle, so test cases should be designed to handle such
changes. If not designed properly they will not be able to cope with change.
o The system being tested is not testable enough: If the requirements are not verifiable,
it becomes quite difficult to test the system.
o Difficulty in using the system: It is natural for users to find a change, or a new system,
difficult to accept at first. But if this persists, it becomes a serious threat to the
acceptability of the system.
Acceptance Testing Activity: The delivered system is put into acceptance testing to check
whether it meets all user requirements or not.
o Risk Factors:
o User resistance to change: It is human behavior to resist any new change in the
surroundings. But for the success of a newly delivered system, it is very important that
the end users accept the system and start using it.
o Too many software faults: Software faults should be discovered before the system
operation phase, as discovery in later phases leads to high costs in handling these
faults.
o Insufficient data handling: The new system should be developed keeping in mind the
load of user data it will have to handle in a real environment.
o Missing requirements: While using the system, end users may discover that some
requirements and capabilities are missing.
7. Maintenance
In this stage, the system is assessed to ensure it does not become obsolete. This phase also involves
continuous evaluation of the system's performance, and changes are made from time to time to the
initial software to keep it up to date. Errors and faults discovered during acceptance testing are
fixed in this phase. This step involves making improvements to the system, fixing errors, enhancing
services, and upgrading software.
8. Disposal
In this phase, plans are developed for discarding system information, hardware, and software to make
the transition to a new system. The purpose is to prevent any possibility of unauthorized disclosure of
sensitive data due to improper disposal of information. All of this should be done in accordance with
the organization’s security requirements.
Integrating risk management into the Software Development Life Cycle (SDLC) is crucial for
ensuring the development of secure and reliable software. Here are the ways to integrate Risk
Management in SDLC.
Define and document the risk management process: The first step is to define the risk
management process and document it in a formal policy or procedure. This process should
include the identification, analysis, evaluation, treatment, and monitoring of risks throughout the
SDLC.
Identify and assess risks: The next step is to identify and assess risks at every stage of
the Software Development Life Cycle (SDLC). This can be done through various techniques such
as brainstorming sessions, risk assessments, threat modeling, and vulnerability assessments.
Prioritize risks: Once risks have been identified and assessed, they need to be prioritized based
on their potential impact on the system and their likelihood of occurrence. This helps in
determining which risks need to be addressed first.
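One common way to prioritize is sketched below, under the usual assumption that risk exposure = likelihood × impact; the sample risks and numbers are illustrative only:

```python
# Risk register: likelihood (probability, 0-1) and impact (1-10 scale).
# Sample entries are invented for illustration.
risks = [
    {"name": "requirements change", "likelihood": 0.7, "impact": 6},
    {"name": "data breach",         "likelihood": 0.2, "impact": 10},
    {"name": "schedule slippage",   "likelihood": 0.5, "impact": 5},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]  # exposure = likelihood x impact

# Address risks in descending order of exposure.
prioritized = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in prioritized:
    print(f'{r["name"]}: exposure {r["exposure"]:.1f}')
```

A high-likelihood, medium-impact risk (requirements change) can outrank a low-likelihood, high-impact one (data breach); teams often override the raw score for catastrophic impacts, which is a judgment call the formula alone does not capture.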
Develop risk mitigation strategies: Once risks have been prioritized, risk mitigation strategies
need to be developed. These strategies can include designing security controls, implementing
secure coding practices, and conducting security testing.
Incorporate risk management into the SDLC: Risk management should be incorporated into
every phase of the SDLC. This can be done by including risk assessments in the requirements
gathering phase, conducting security testing during the development phase, and conducting
vulnerability assessments during the testing phase.
Monitor and update the risk management plan: Risk management is an ongoing process, and
risks need to be monitored and updated regularly. This can be done through regular risk
assessments, vulnerability assessments, and threat modeling.
By integrating risk management into the SDLC, organizations can develop more secure and
reliable software. This helps reduce the risk of data breaches, system failures, and other
security incidents that can damage an organization's reputation, financial stability, and customer
trust.
Frequently Asked Questions
1. What are some typical risk response strategies used in the SDLC?
Answer:
In SDLC, there are four main risk response strategies:
Avoidance
Mitigation
Transfer
Acceptance
3. What are some common challenges faced while implementing Integrated Risk
Management in the SDLC?
Answer:
Some of the common challenges include:
resistance to change
difficulty in obtaining full support from all stakeholders
complex risk interdependencies
data integration issues, etc.
completion of the projects. This article focuses on discussing the role and responsibilities of
a software project manager.
Project Planning
Project planning is undertaken immediately after the feasibility study phase and before the start
of the requirements analysis and specification phase. Once a project is found feasible, software
project managers start project planning. Project planning is completed before any development
phase starts.
1. Project planning involves estimating several characteristics of a project and then planning
the project activities based on these estimates.
2. Project planning must be done with the utmost care and attention.
3. A wrong estimation can result in schedule slippage.
4. Schedule delay can cause customer dissatisfaction, which may lead to a project failure.
5. Before starting a software project, it is essential to determine the tasks to be performed and
to properly manage the allocation of tasks among the individuals involved in the software
development.
6. Hence, planning is important, as it results in effective software development.
7. Project planning is an organized and integrated management process, which focuses on
activities required for successful completion of the project.
8. It prevents obstacles that arise in the project, such as changes in project or organizational
objectives, non-availability of resources, and so on.
9. Project planning also helps in better utilization of resources and optimal usage of the allotted
time for a project.
10. For effective project planning, in addition to a very good knowledge of various estimation
techniques, experience is also very important.
2. Scheduling
After the completion of the estimation of all the project parameters, scheduling for manpower and
other resources is done.
3. Staffing
Team structure and staffing plans are made.
4. Risk Management
The project manager should identify the unanticipated risks that may occur during project
development, analyze the damage these risks might cause, and prepare a risk-reduction plan to
cope with them.
5. Miscellaneous Plans
This includes making several other plans such as quality assurance plans, configuration
management plans, etc.
Lead the team: The project manager must be a good leader who builds a team from members
with various skills and ensures that each completes their individual tasks.
Motivate the team members: One of the key roles of a software project manager is to
encourage team members to work properly for the successful completion of the project.
Tracking the progress: The project manager should keep an eye on the progress of the project
and track whether it is going according to plan. If any problem arises, the project manager
must take the necessary action to solve it, and also check whether the product is being
developed to the correct coding standards.
Liaison: The project manager is the link between the development team and the customer.
The project manager analyzes the customer's requirements, conveys them to the development
team, and keeps the customer informed of the project's progress. Moreover, the project manager
checks whether the project is fulfilling the customer's requirements.
Monitoring and reviewing: Project monitoring is a continuous process that lasts for the whole
time a product is being developed, during which the project manager compares actual progress
and cost reports with the planned ones. While most firms have a formal system in place to track
progress, qualified project managers can still gain a good understanding of the project's
development simply by talking with participants.
Documenting project report: The project manager prepares the documentation of the project
for future purposes. The reports contain detailed features of the product and various techniques.
These reports help to maintain and enhance the quality of the project in the future.
Reporting: Reporting project status to the customer and his or her organization is the
responsibility of the project manager. Additionally, they could be required to prepare brief,
well-organized pieces that summarize key details from in-depth studies.
Earlier many projects have failed due to faulty project management practices. Management of
software projects is much more complex than management of many other types of projects. In this
article, we will discuss the types of Complexity as well as the factors that make Project
Management Complex.
Types of Complexity
The following are the types of complexity in software project management:
Time Management Complexity: The complexity of estimating the duration of the project,
scheduling the different activities, and completing the project on time.
Cost Management Complexity: Estimating the total cost of the project is very difficult, as is
ensuring that the project does not overrun its budget.
Quality Management Complexity: The quality of the project must satisfy the customer's
requirements; it must be assured that those requirements are fulfilled.
Risk Management Complexity: Risks are unanticipated events that may occur during any
phase of the project. It can be difficult to identify these risks and to prepare contingency
plans that reduce their effects.
Human Resources Management Complexity: It includes all the difficulties regarding
organizing, managing, and leading the project team.
Communication Management Complexity: All the members must interact with all the other
members and there must be good communication with the customer.
Infrastructure complexity: Computing infrastructure refers to all of the operations performed
by the machinery that executes our code: networking, load balancers, queues, firewalls, security,
monitoring, databases, sharding, etc. As software engineers committed to delivering value in a
continuous stream, we are primarily interested in dealing with data, processing business policy
rules, and serving clients. The infrastructure concerns above are irksome minutiae that offer no
direct benefit to clients; since it is a necessary evil, we view infrastructure as accidental
complexity. Our policies for scaling, monitoring, and other issues are of little interest to our
paying clients.
Deployment complexity: A release candidate, or finalized code, has to be synchronized from
one system to another. Conceptually, such an operation ought to be simple; in practice,
performing this synchronization swiftly and securely proves difficult.
API complexity: An API should ideally be no more difficult to use than calling a function.
However, that hardly ever happens: the calls are accidentally complicated by authentication,
rate limits, retries, errors, and other factors.
Procurement Management Complexity: Projects need many services from third parties to
complete their tasks. Acquiring these services may increase the complexity of the project.
Integration Management Complexity: The difficulties regarding coordinating processes and
developing a proper project plan. Many changes may occur during the project development and
it may hamper the project completion, which increases the complexity.
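As a small illustration of the accidental API complexity mentioned above, the sketch below wraps what is conceptually a single function call in retry-with-exponential-backoff logic. The flaky endpoint is simulated and all names are hypothetical:

```python
import time

def call_with_retries(api_call, max_attempts=4, base_delay=0.01):
    """Retry a call with exponential backoff on transient failures -
    the accidental complexity (retries, transient errors, rate limits)
    that turns a conceptual function call into something harder."""
    for attempt in range(max_attempts):
        try:
            return api_call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

# Simulated flaky endpoint: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_endpoint():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": 200}

print(call_with_retries(flaky_endpoint))  # -> {'status': 200}
```

None of this retry machinery delivers value to the caller; it exists only because real networks and real APIs fail, which is exactly what makes such calls accidentally complex.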
o Invisibility: Until development is complete, software remains invisible, and anything that
is invisible is difficult to manage and control. Because of this invisibility, the project
manager cannot directly view the progress of the project; he can only monitor the
modules the development team has completed and the documents that have been
prepared, which are rough indicators of the progress achieved. Invisibility is thus a
major source of complexity in managing a software project.
o Changeability: The requirements of a software product undergo various changes, most
of which come from the customer during development. Sometimes these change
requests result in redoing some work, which may introduce risks and increase expenses.
Frequent requirement changes are thus a major factor in making software project
management complex.
o Interaction: Even moderate-sized software has millions of parts (functions) that interact
with each other in many ways such as data coupling, serial and concurrent runs, state
transitions, control dependency, file sharing, etc. Due to the inherent complexity of the
functioning of a software product in terms of the basic parts making up the software, many
types of risks are associated with its development. This makes managing software projects
much more difficult compared to many other kinds of projects.
o Uniqueness: Every software project is usually associated with many unique features or
situations. This makes every software product much different from the other software
projects. This is unlike the projects in other domains such as building construction, bridge
construction, etc. where the projects are more predictable. Due to this uniqueness of the
software projects, during the software development, a project manager faces many
unknown problems that are quite dissimilar to other software projects that he had
encountered in the past. As a result, a software project manager has to confront many
unanticipated issues in almost every project that he manages.
o The exactness of the Solution: A small error can create a huge problem in a software
project; the solution must exactly match its design. For example, the parameters of a
function call in a program must conform exactly to the function's definition. This
requirement of exact conformity introduces additional risks and increases the
complexity of managing software projects.
o Team-oriented and Intellect-intensive work: Software development projects are
team-oriented, intellect-intensive work; software cannot be developed without interaction
between developers. The life cycle activities are not only intellect-intensive, but each
member also has to interact with, review the work of, and interface with several other
team members, which creates various complexities in managing software projects.
o The huge task regarding Estimation: Estimation is one of the most important aspects of
software project management. During project planning, a project manager has to
estimate the cost of the project, its probable duration, and the effort needed to complete
it, based on size estimation. This estimation is itself a very complex task, which adds to
the complexity of software project management.
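One widely taught size-based estimation technique is the basic COCOMO model, in which effort = a × KLOC^b person-months and duration = c × effort^d months. The sketch below uses the standard published coefficients for the three project modes, though any real project would calibrate them against its own data:

```python
def basic_cocomo(kloc, mode="organic"):
    """Basic COCOMO: effort = a * KLOC**b (person-months),
    duration = c * effort**d (months), using the standard
    published coefficients for each project mode."""
    coeffs = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = coeffs[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return round(effort, 1), round(duration, 1)

effort, months = basic_cocomo(32, "organic")
print(f"32 KLOC organic project: ~{effort} person-months over ~{months} months")
```

Even this crude formula shows why estimation is hard: the same 32 KLOC project estimated in "embedded" mode more than doubles the effort figure, so misjudging the project mode alone can wreck the plan.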
Software project management is a complex and challenging process that requires a skilled and
experienced project manager to manage effectively. It involves balancing the conflicting demands
of schedule, budget, quality, and stakeholder expectations while ensuring that the project remains
on track and delivers the required results.
Software engineering and software project management can be complex due to various factors, such
as the dynamic nature of software development, changing requirements, technical challenges, team
management, budget constraints, and timeline pressures. Here are some advantages and
disadvantages of managing software projects in such an environment.
A mismatch between expectations and reality: Stakeholders may have unrealistic
expectations for software development projects, leading to disappointment and frustration when
the final product does not meet their expectations.
Overall, the advantages of software engineering and project management outweigh the
disadvantages. Effective management practices can help ensure successful software development
outcomes and deliver high-quality software that meets user requirements. However, managing
software development projects requires careful planning, execution, and monitoring to overcome
the complexities and challenges that may arise.
Software Maintenance
Software Maintenance refers to the process of modifying and updating a software system after it has
been delivered to the customer. This involves fixing bugs, adding new features, and adapting to new
hardware or software environments. Effective maintenance is crucial for extending the software’s
lifespan and aligning it with evolving user needs. It is an essential part of the software development
life cycle (SDLC), involving planned and unplanned activities to keep the system reliable and up-to-
date. This article focuses on discussing Software Maintenance in detail.
Maintenance can be categorized into proactive and reactive types. Proactive maintenance involves
taking preventive measures to avoid problems from occurring, while reactive maintenance involves
addressing problems that have already occurred.
Maintenance can be performed by different stakeholders, including the original development team,
an in-house maintenance team, or a third-party maintenance provider. Maintenance activities can be
planned or unplanned. Planned activities include regular maintenance tasks that are scheduled in
advance, such as updates and backups. Unplanned activities are reactive and are triggered by
unexpected events, such as system crashes or security breaches. Software maintenance can involve
modifying the software code, as well as its documentation, user manuals, and training materials.
This ensures that the software is up-to-date and continues to meet the needs of its users.
Software maintenance can also involve upgrading the software to a new version or platform. This
can be necessary to keep up with changes in technology and to ensure that the software remains
compatible with other systems. The success of software maintenance depends on effective
communication with stakeholders, including users, developers, and management. Regular updates
and reports can help to keep stakeholders informed and involved in the maintenance process.
Software maintenance is also an important part of the Software Development Life Cycle
(SDLC). Its main focus is to update the software application and make modifications that
improve its performance. Software is a model of the real world, so whenever the real world
changes, the software needs corresponding changes wherever possible.
Complexity: Large and complex systems can be difficult to understand and modify, making it
difficult to identify and fix problems.
Changing requirements: As user requirements change over time, the software system may
need to be modified to meet these new requirements, which can be difficult and time-
consuming.
Interoperability issues: Systems that need to work with other systems or software can be
difficult to maintain, as changes to one system can affect the other systems.
Lack of test coverage: Systems that have not been thoroughly tested can be difficult to
maintain as it can be hard to identify and fix problems without knowing how the system
behaves in different scenarios.
Lack of personnel: A lack of personnel with the necessary skills and knowledge to maintain
the system can make it difficult to keep the system up-to-date and running smoothly.
High Cost: The cost of maintenance can be high, especially for large and complex systems,
and can be difficult to budget for and manage.
Reverse Engineering
Reverse Engineering is the process of extracting knowledge or design information from anything
man-made and reproducing it based on the extracted information. It is also called back
engineering. The main objective of reverse engineering is to find out how the system works.
There are many reasons to perform reverse engineering: to understand how a thing works, or
to recreate the object while adding some enhancements.
It implements innovative processes for a specific use.
It makes it easy to document how efficiency and power can be improved.
Improved Collaboration: Regular software maintenance can help to improve collaboration
between different teams, such as developers, testers, and users. This can lead to better
communication and more effective problem-solving.
Reduced Downtime: Software maintenance can help to reduce downtime caused by system
failures or errors. This can have a positive impact on business operations and reduce the risk of
lost revenue or customers.
Improved Scalability: Regular software maintenance can help to ensure that the software is
scalable and can handle increased user demand. This can be particularly important for growing
businesses or for software that is used by a large number of users.
Conclusion
In summary, software maintenance is important for ensuring that software continues to meet user
needs and perform optimally over time. It involves a range of activities, from bug fixes to
performance enhancements and adaptation to new technologies. Despite the challenges and costs
associated with maintenance, its benefits, such as improved software quality, enhanced security,
and extended software life, make it indispensable for sustainable software development.
Software measurement is an authority within software engineering; the software measurement
process is defined and governed by ISO standards.
Enable data-driven decision-making in project planning and control.
Identify bottlenecks and areas for improvement to drive process improvement activities.
Ensure that industry standards and regulations are followed.
Give software products and processes a quantitative basis for evaluation.
Enable the ongoing improvement of software development practices.
Software Metrics
A metric is a measurement of the degree to which any attribute belongs to a system, product,
or process. Software metrics are a quantifiable or countable assessment of the attributes of a
software product.
There are 4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
5. Reduction in overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code with the available
resources.
7. It helps in providing effective planning, controlling and managing of the entire product.
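As a tiny example of a product metric these notes mention, the sketch below counts Lines of Code using one common convention (non-blank, non-comment physical lines); LOC conventions vary between tools, so treat this as an assumption rather than a standard:

```python
def count_loc(source):
    """Count physical lines of code: non-blank lines that are not
    pure comment lines (one common LOC counting convention)."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """\
# add two numbers
def add(a, b):

    return a + b  # inline comments still count
"""
print(count_loc(sample))  # -> 2
```

Because LOC is so sensitive to counting rules and coding style, it is usually combined with other metrics (complexity, function points) rather than used alone.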
People Metrics
People metrics play an important role in software project management. These are also called
personnel metrics. Some authors view resource metrics as including personnel metrics, software
metrics, and hardware metrics, but most view resource metrics as consisting of
personnel metrics only. In the present context, we also assume resource metrics to include mainly
personnel metrics. People metrics quantify useful attributes of those generating the products using
the available processes, methods, and tools. These metrics tell you about the attributes like turnover
rates, productivity, and absenteeism.
by conducting regular surveys asking employees how likely they are to recommend the
company to friends or family.
3. Team Collaboration: Team Collaboration measures how well team members work together
and communicate. It’s essential to track because effective teamwork streamlines workflows and
enhances project outcomes. To track, monitor communication frequency, participation in team
activities, and gather feedback from team members regularly.
4. Attrition: Attrition tracks the rate at which employees leave the organization. It’s important to
track because it helps identify trends and reasons for turnover, allowing proactive measures to
retain talent. To track, calculate the percentage of employees leaving within a given period and
analyze reasons through exit interviews or surveys.
5. Absenteeism: Absenteeism measures the frequency at which employees are absent from work.
It’s crucial to track because it highlights patterns of absence, enabling the identification and
resolution of underlying issues. To track, maintain records of employee attendance, including
reasons for absence, and analyze trends over time to minimize disruptions to productivity.
6. Total cost of workforce: The Total Cost of Workforce calculates all expenses associated with
employing staff. It’s important to track because it helps manage budget allocation and optimize
resource utilization. To track, compile data on salaries, benefits, training costs, and other
expenses related to workforce management to understand the total cost of employing staff.
7. Quality of Work: Quality of Work evaluates the standard and effectiveness of tasks completed
by employees. It’s vital to track because it ensures deliverables meet quality standards, satisfy
customer requirements, and uphold organizational reputation. To track, employ quality
assurance processes, gather feedback from stakeholders, and conduct performance evaluations
to measure and improve work quality.
Process Metrics
Process metrics are measures of the development process that creates a body of software. A
common example of a process metric is the length of time that the software creation process takes.
Based on the assumption that the quality of the product is a direct function of the process,
process metrics can be used to estimate, monitor, and improve the reliability and quality of
software. ISO 9000 certification, or “Quality Management Standards”, is the generic
reference for a family of standards developed by the International Organization for
Standardization (ISO).
Often, Process Metrics are tools of management in their attempt to gain insight into the creation
of a product that is intangible. Since the software is abstract, there is no visible, traceable
artifact from software projects. Objectively tracking progress becomes extremely difficult.
Management is interested in measuring progress and productivity and being able to make
predictions concerning both.
Process metrics are often collected as part of a model of software development. Models such
as Boehm’s COCOMO (Constructive Cost Model) make cost estimations for software
projects, and Thebaut’s COPMO makes predictions about the need for additional effort on large
projects.
Although valuable management tools, process metrics are not directly relevant to program
understanding. They are more useful in measuring and predicting such things as resource usage
and schedule.
Top 7 Process Metrics
Lead Time: Lead Time measures the time taken from initiating a process (such as starting work
on a task) to its completion (finishing the task). It indicates how quickly work moves through
the development process.
Cycle Time: Cycle Time tracks the duration it takes to complete one full cycle of a process,
from beginning to end. It provides insights into the efficiency and effectiveness of the
development workflow.
Throughput: Throughput quantifies the rate at which tasks or features are completed
within a given timeframe. It reflects the productivity and capacity of the development team.
Work in Progress (WIP): Work in Progress (WIP) indicates the number of tasks or features
currently being worked on but not yet completed. It helps in identifying bottlenecks and
managing workflow to ensure tasks are completed efficiently.
Defect Density: Defect Density measures the number of defects or bugs found per unit of work
or code. It helps in assessing the quality and reliability of the software being developed.
Process Efficiency: Process Efficiency evaluates the ratio of value-added work (tasks that
directly contribute to delivering value to the customer) to non-value-added work (tasks that do
not directly contribute to value delivery). It identifies opportunities for streamlining processes
and reducing waste.
Process Compliance: Process Compliance assesses the extent to which development processes
adhere to defined standards, guidelines, or regulations. It ensures consistency and quality in the
software development process.
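To make these metrics concrete, here is a minimal C++ sketch of how a team might compute a few of them from task records. The Task structure, its field names, and the hour-based timestamps are illustrative assumptions, not a standard API.

```cpp
#include <cassert>
#include <vector>

// Hypothetical task record; timestamps are hours from an arbitrary origin.
// "requested" = task entered the backlog, "started" = work began,
// "finished" = work completed. These field names are illustrative.
struct Task {
    double requested;
    double started;
    double finished;
};

// Lead time: from request to completion.
double leadTime(const Task& t) { return t.finished - t.requested; }

// Cycle time: from the start of work to completion.
double cycleTime(const Task& t) { return t.finished - t.started; }

// Throughput: tasks finished per unit of time over an observation window.
double throughput(const std::vector<Task>& done, double windowHours) {
    return static_cast<double>(done.size()) / windowHours;
}

// Defect density: defects found per thousand lines of code (KLOC).
double defectDensity(int defects, int linesOfCode) {
    return defects / (linesOfCode / 1000.0);
}
```

For example, a task requested at hour 0, started at hour 2, and finished at hour 10 has a lead time of 10 hours and a cycle time of 8 hours.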
2. Which one of the following sets of attributes should not be encompassed by effective
software metrics? [UGC-NET 2014]
(A) Simple and computable
(B) Consistent and objective
(C) Consistent in the use of units and dimensions
(D) Programming language dependent
Solution: Correct Answer is (D).
Conclusion
In software engineering, tracking both people and process metrics is crucial for ensuring successful
project outcomes. People metrics, such as employee satisfaction and teamwork effectiveness, help
in maintaining a motivated and productive workforce. Process metrics, like lead time and defect
density, allow teams to monitor and improve the efficiency and quality of their development
processes. By focusing on both aspects, teams can better manage resources, identify areas for
improvement, and ultimately deliver high-quality software products on time and within budget.
2. What is an example of a process metric?
Answer: A common example of a process metric is the time taken by the software creation
process.
2. Data Function Types
Internal Logical File (ILF): A user-identifiable group of logically related data or control
information maintained within the boundary of the application.
External Interface File (EIF): A user-identifiable group of logically related data referenced by
the application but maintained within the boundary of another application.
Characteristics of Functional Point Analysis
We can calculate the function point count with the help of the number and types of functions used
in the application. These are classified into five types: external inputs (EI), external outputs (EO),
external inquiries (EQ), internal logical files (ILF), and external interface files (EIF).
Functional complexities help us find the corresponding weights, which are used to compute the
Unadjusted Function Point (UFP) count of the subsystem. Consider the complexity as average for
all cases. The table below shows how FP is computed.
| Measurement Parameter | Count | Total Count | Simple | Average | Complex |
| --- | --- | --- | --- | --- | --- |
| Number of external inputs (EI) | 32 | 32 × 4 = 128 | 3 | 4 | 6 |
| Number of external outputs (EO) | … | … | 4 | 5 | 7 |
| Number of external inquiries (EQ) | 24 | 24 × 4 = 96 | 3 | 4 | 6 |
| Number of external interfaces (EIF) | 2 | 2 × 7 = 14 | 5 | 7 | 10 |

The Simple, Average, and Complex columns are the weighing factors.
It is given that the complexity weighting factors for I, O, E, F, and N are 4, 5, 4, 10, and 7,
respectively. It is also given that, out of fourteen value adjustment factors that influence the
development effort, four factors are not applicable, each of the other four factors has value 3,
and each of the remaining factors has value 4. The computed value of the function point
metric is _____. [GATE CS 2015]
(A) 612.06
(B) 404.66
(C) 305.09
(D) 806.9
Solution: Correct Answer is (B).
2. While estimating the cost of the software, Lines of Code(LOC) and Function Points (FP)
are used to measure which of the following? [UGC-NET CSE 2013]
(A) Length of Code
(B) Size of Software
(C) Functionality of Software
(D) None of the Above
Solution: Correct Answer is (B).
3. In functional point analysis, the number of complexity adjustment factors is [UGC-NET CS
2014]
(A) 10
(B) 12
(C) 14
(D) 20
Solution: Correct Answer is (C).
Conclusion
Functional Point Analysis (FPA) offers a structured approach to measure the size and complexity of
software systems based on their functionality. By categorizing functions and assigning weights,
FPA provides an objective measurement that helps in estimating project timelines, resource
requirements, and overall system complexity. It focuses on user-centric features, making it valuable
for business systems like management information systems (MIS).
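As a concrete illustration of the computation described above, here is a minimal C++ sketch: UFP is the weighted sum of the five parameter counts, and FP = UFP × (0.65 + 0.01 × ΣFi), where ΣFi is the sum of the 14 value adjustment factors. The counts used in the example below are hypothetical.

```cpp
#include <cassert>

// Unadjusted Function Point count: the sum of count * weight over the five
// measurement parameters (EI, EO, EQ, ILF, EIF).
double ufp(const int counts[5], const int weights[5]) {
    double total = 0.0;
    for (int i = 0; i < 5; ++i)
        total += counts[i] * weights[i];
    return total;
}

// FP = UFP * VAF, where VAF = 0.65 + 0.01 * (sum of the 14 value
// adjustment factors, each rated 0..5).
double fp(double unadjusted, int sumOfAdjustmentFactors) {
    return unadjusted * (0.65 + 0.01 * sumOfAdjustmentFactors);
}
```

With hypothetical counts of 32, 60, 24, 8, and 2 and the standard average-complexity weights 4, 5, 4, 10, and 7, UFP = 128 + 300 + 96 + 80 + 14 = 618; if the 14 adjustment factors summed to 42, the result would be FP = 618 × 1.07 = 661.26.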
Lines of Code (LOC) – Software Engineering
Since Lines of Code (LOC) only counts the volume of code, it can only be used to compare or
estimate projects that use the same language and are coded to the same coding standards.
Features of Lines of Code (LOC)
Change Tracking: Variations in LOC as time passes can be tracked to analyze the growth or
reduction of a codebase, providing insights into project progress.
Limited Representation of Complexity: Although LOC gives a general idea of code size, it
does not accurately depict code complexity. Two programs with the same LOC can differ
enormously in complexity.
Ease of Computation: LOC is an easy measure to obtain because it is easy to calculate and
takes little time.
Easy to Understand: The idea of expressing code size in terms of lines is one that
stakeholders, even those who are not technically inclined, can easily understand.
Comparative Analysis: High-level productivity comparisons between several projects or
development teams can be made using LOC. It might provide an approximate figure of the
volume of code generated over a specific time frame.
Benchmarking Tool: When comparing different iterations of the same program, LOC can be used
as a benchmarking tool. It can show how modifications affect the codebase’s total size.
Disadvantages of Lines of Code (LOC)
Challenges in Agile Work Environments: Focusing on initial LOC estimates may not
adequately reflect the iterative and dynamic nature of agile development, where requirements
may change.
Not Taking External Libraries Into Account: LOC does not count code from external libraries
or frameworks, which can contribute greatly to a project’s overall functionality.
Challenges with Maintenance: Codebases with a higher LOC are larger and typically demand
more maintenance work.
Research has shown a rough correlation between LOC and the overall cost and duration of
developing a project or product, and between LOC and the number of defects. This means that,
other things being equal, the lower your LOC measurement is, the better off you probably are in
the development of your product.
Let’s take an example and see how lines of code are counted in the simple sorting program
given below:
// Sorts an array in ascending order using selection sort
void selSort(int x[], int n) {
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++)
            if (x[j] < x[min])
                min = j;
        temp = x[i];
        x[i] = x[min];
        x[min] = temp;
    }
}
If LOC is simply a count of the number of lines, then the function shown above contains 13
lines of code (LOC). But when comments and blank lines are ignored, it contains 12 lines of
code (LOC).
Let’s take another example and see how lines of code are counted in the program below:
#include <iostream>
using namespace std;
int main()
{
    int fN, sN, sum;
    cout << "Enter the 2 integers: ";
    cin >> fN >> sN;
    // sum of the two numbers is stored in variable sum
    sum = fN + sN;
    // Prints sum
    cout << fN << " + " << sN << " = " << sum;
    return 0;
}
Here too, if LOC is simply a count of the number of lines, then the program shown above contains
13 lines of code (LOC). But when comments and blank lines are ignored, it contains 11 lines of
code (LOC).
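A counter matching the convention used in the two examples above (skip blank lines and comment-only lines) could be sketched as follows; the function name and the restriction to `//`-style comments are illustrative choices, not a standard tool.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Counts "effective" lines of code in a source string: blank lines and
// lines whose first non-whitespace characters begin a '//' comment are
// excluded from the count.
int countLoc(const std::string& source) {
    std::istringstream in(source);
    std::string line;
    int loc = 0;
    while (std::getline(in, line)) {
        std::string::size_type pos = line.find_first_not_of(" \t");
        if (pos == std::string::npos) continue;          // blank line
        if (line.compare(pos, 2, "//") == 0) continue;   // comment-only line
        ++loc;
    }
    return loc;
}
```

Applied to the two snippets above, such a counter would report 12 and 11 effective lines, respectively.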
Requirements Engineering Process in Software Engineering
Requirements Engineering is the process of identifying, eliciting, analyzing, specifying, validating,
and managing the needs and expectations of stakeholders for a software system.
In this article, we’ll learn about its process, advantages, and disadvantages.
1. Feasibility Study
The feasibility study mainly concentrates on the five areas mentioned below. Among these, the
economic feasibility study is the most important part of the feasibility analysis, and the legal
feasibility study is the least often considered.
1. Technical Feasibility: In technical feasibility, the current resources (both hardware and
software) and the required technology are analyzed and assessed to determine whether the
project can be developed. The technical feasibility study reports whether the required resources
and technologies are available for project development. It also analyzes the technical skills and
capabilities of the technical team, whether the existing technology can be used, and whether
maintenance and upgrades will be easy for the chosen technology.
2. Operational Feasibility: In operational feasibility, the degree to which the system will serve its
requirements is analyzed, along with how easy the product will be to operate and maintain after
deployment. Other operational concerns include determining the usability of the product and
whether the solution suggested by the software development team is acceptable.
3. Economic Feasibility: In the economic feasibility study, the costs and benefits of the project are
analyzed. A detailed analysis is carried out of what the project will cost to develop, including
all required hardware and software resources, design and development costs, and operational
costs. It is then analyzed whether the project will be financially beneficial for the organization.
4. Legal Feasibility: In legal feasibility, the project is ensured to comply with all relevant laws,
regulations, and standards. It identifies any legal constraints that could impact the project and
reviews existing contracts and agreements to assess their effect on the project’s execution.
Additionally, legal feasibility considers issues related to intellectual property, such as patents
and copyrights, to safeguard the project’s innovation and originality.
5. Schedule Feasibility: In schedule feasibility, the project timeline is evaluated to determine if it
is realistic and achievable. Significant milestones are identified, and deadlines are established to
track progress effectively. Resource availability is assessed to ensure that the necessary
resources are accessible to meet the project schedule. Furthermore, any time constraints that
might affect project delivery are considered to ensure timely completion. This focus on
schedule feasibility is crucial for the successful planning and execution of a project.
2. Requirements Elicitation
It is related to the various ways used to gain knowledge about the project domain and requirements.
The various sources of domain knowledge include customers, business manuals, the existing
software of the same type, standards, and other stakeholders of the project. The techniques used for
requirements elicitation include interviews, brainstorming, task analysis, Delphi technique,
prototyping, etc. Some of these are discussed here. Elicitation does not produce formal models of
the requirements understood. Instead, it widens the domain knowledge of the analyst and thus helps
in providing input to the next stage.
Requirements elicitation is the process of gathering information about the needs and expectations of
stakeholders for a software system. This is the first step in the requirements engineering process
and it is critical to the success of the software development project. The goal of this step is to
understand the problem that the software system is intended to solve and the needs and expectations
of the stakeholders who will use the system.
It’s important to document, organize, and prioritize the requirements obtained from all these
techniques to ensure that they are complete, consistent, and accurate.
3. Requirements Specification
This activity is used to produce formal software requirement models. All the requirements including
the functional as well as the non-functional requirements and the constraints are specified by these
models in totality. During specification, more knowledge about the problem may be required which
can again trigger the elicitation process. The models used at this stage include ER diagrams, data
flow diagrams(DFDs), function decomposition diagrams(FDDs), data dictionaries, etc.
Requirements specification is the process of documenting the requirements identified in the analysis
step in a clear, consistent, and unambiguous manner. This step also involves prioritizing and
grouping the requirements into manageable chunks.
The goal of this step is to create a clear and comprehensive document that describes the
requirements for the software system. This document should be understandable by both the
development team and the stakeholders.
To make the requirements specification clear, the requirements should be written in a natural
language and use simple terms, avoiding technical jargon, and using a consistent format throughout
the document. It is also important to use diagrams, models, and other visual aids to help
communicate the requirements effectively.
Once the requirements are specified, they must be reviewed and validated by the stakeholders and
development team to ensure that they are complete, consistent, and accurate.
4. Requirements Verification and Validation
Validation refers to a set of tasks that ensure the software that has been built is traceable to
customer requirements. If requirements are not validated, errors in the requirement definitions
propagate to successive stages, resulting in extensive modification and rework. Reviews, buddy
checks, and writing test cases are some of the methods used for this.
Requirements verification and validation (V&V) is the process of checking that the requirements
for a software system are complete, consistent, and accurate and that they meet the needs and
expectations of the stakeholders. The goal of V&V is to ensure that the software system being
developed meets the requirements and that it is developed on time, within budget, and to the
required quality.
1. Verification is checking that the requirements are complete, consistent, and accurate. It involves
reviewing the requirements to ensure that they are clear, testable, and free of errors and
inconsistencies. This can include reviewing the requirements document, models, and diagrams,
and holding meetings and walkthroughs with stakeholders.
2. Validation is the process of checking that the requirements meet the needs and expectations of
the stakeholders. It involves testing the requirements to ensure that they are valid and that the
software system being developed will meet the needs of the stakeholders. This can include
testing the software system through simulation, testing with prototypes, and testing with the
final version of the software.
3. Verification and Validation is an iterative process that occurs throughout the software
development life cycle. It is important to involve stakeholders and the development team in the
V&V process to ensure that the requirements are thoroughly reviewed and tested.
It’s important to note that V&V is not a one-time process, but it should be integrated and continue
throughout the software development process and even in the maintenance stage.
5. Requirements Management
Requirement management is the process of analyzing, documenting, tracking, prioritizing, and
agreeing on the requirement and controlling the communication with relevant stakeholders. This
stage takes care of the changing nature of requirements. It should be ensured that the SRS is as
modifiable as possible to incorporate changes in requirements specified by the end users at later
stages too. Modifying the software as per requirements in a systematic and controlled manner is an
extremely important part of the requirements engineering process.
Requirements management is the process of managing the requirements throughout the software
development life cycle, including tracking and controlling changes, and ensuring that the
requirements are still valid and relevant. The goal of requirements management is to ensure that the
software system being developed meets the needs and expectations of the stakeholders and that it is
developed on time, within budget, and to the required quality.
Several key activities are involved in requirements management, including:
1. Tracking and controlling changes: This involves monitoring and controlling changes to the
requirements throughout the development process, including identifying the source of the
change, assessing the impact of the change, and approving or rejecting the change.
2. Version control: This involves keeping track of different versions of the requirements
document and other related artifacts.
3. Traceability: This involves linking the requirements to other elements of the development
process, such as design, testing, and validation.
4. Communication: This involves ensuring that the requirements are communicated effectively to
all stakeholders and that any changes or issues are addressed promptly.
5. Monitoring and reporting: This involves monitoring the progress of the development process
and reporting on the status of the requirements.
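The traceability activity above can be sketched minimally in code; the type alias and helper names below are invented for illustration, not part of any requirements-management tool.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Minimal traceability matrix sketch: each requirement ID maps to the
// design, code, and test artifacts that realize it. IDs are hypothetical.
using TraceMatrix = std::map<std::string, std::vector<std::string>>;

// Links an artifact to a requirement.
void link(TraceMatrix& m, const std::string& req, const std::string& artifact) {
    m[req].push_back(artifact);
}

// A requirement with no linked artifacts is untraced (a coverage gap).
bool isTraced(const TraceMatrix& m, const std::string& req) {
    auto it = m.find(req);
    return it != m.end() && !it->second.empty();
}
```

Walking such a matrix at review time surfaces requirements that no design or test artifact covers.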
Requirements management is a critical step in the software development life cycle as it helps to
ensure that the software system being developed meets the needs and expectations of stakeholders
and that it is developed on time, within budget, and to the required quality. It also helps to prevent
scope creep and to ensure that the requirements are aligned with the project goals.
The requirements engineering process also faces challenges:
It can be difficult to elicit requirements from stakeholders who have different needs and
priorities.
Requirements may change over time, which can result in delays and additional costs.
There may be conflicts between stakeholders, which can be difficult to resolve.
It may be challenging to ensure that all stakeholders understand and agree on the requirements.
Conclusion
As the project develops and new information becomes available, the iterative requirements
engineering process may involve going back and reviewing earlier phases. Throughout the process,
stakeholders in the project must effectively communicate and collaborate to guarantee that the
software system satisfies user needs and is in line with the company’s overall goals.
System Configuration Management (SCM) – Software Engineering
Last Updated : 19 Jun, 2024
Whenever software is built, there is always scope for improvement, and improvements bring changes.
Changes may be required to modify or update an existing solution or to create a new solution for a
problem. Requirements change continually, so systems must keep being upgraded to meet the desired
outputs. Changes should be analyzed before they are made to the existing system, recorded before they
are implemented, reported so that before-and-after details are available, and controlled in a manner
that improves quality and reduces error. This is where System Configuration Management comes in.
System Configuration Management (SCM) is a set of activities that controls change by identifying the
items to change, establishing relationships between those items, defining mechanisms for managing
different versions, controlling the changes being implemented in the current system, and auditing and
reporting on the changes made. It is essential to control changes, because unchecked changes may end
up undermining well-running software. SCM is therefore a fundamental part of all project management
activities.
Processes involved in SCM – Configuration management provides a disciplined environment for smooth
control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from products that compose
baselines at given points in time (a baseline is a set of mutually consistent Configuration Items, which has
been formally reviewed and agreed upon, and serves as the basis of further development). Establishing
relationships among items, creating a mechanism to manage multiple levels of control and procedure for
the change management system.
2. Version control – Creating versions/specifications of the existing product to build new products with the
help of the SCM system. A description of the version is given below:
Suppose that after some changes, the version of the configuration object changes from 1.0 to 1.1.
Minor corrections and changes result in versions 1.1.1 and 1.1.2, followed by a major update,
object 1.2. Development of object 1.0 continues through 1.3 and 1.4, but finally a noteworthy
change to the object results in a new evolutionary path, version 2.0. Both versions are currently
supported.
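The version evolution described above can be modeled with a small illustrative structure; the names and increment rules below are assumptions for the sketch, since real SCM tools define their own numbering schemes.

```cpp
#include <cassert>
#include <string>

// Hypothetical version object mirroring the evolution described above
// (1.0 -> 1.1 -> 1.1.1, and a major change starting a new path, 2.0).
struct Version {
    int major, minor, patch;
    std::string str() const {
        return std::to_string(major) + "." + std::to_string(minor) +
               (patch ? "." + std::to_string(patch) : "");
    }
};

Version bumpMajor(Version v) { return {v.major + 1, 0, 0}; }       // new evolutionary path
Version bumpMinor(Version v) { return {v.major, v.minor + 1, 0}; } // significant change
Version bumpPatch(Version v) { return {v.major, v.minor, v.patch + 1}; } // minor correction
```

For example, a minor correction to version 1.1 produces 1.1.1, while a noteworthy change to 1.4 starts the new path 2.0.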
3. Change control – Controlling changes to Configuration items (CI). The change control process is
explained in Figure below:
A change request (CR) is submitted and evaluated to assess technical merit, potential side effects, the
overall impact on other configuration objects and system functions, and the projected cost of the change.
The results of the evaluation are presented as a change report, which is used by a change control
board (CCB), a person or group who makes the final decision on the status and priority of the
change. An engineering change request (ECR) is generated for each approved change; if a change
is rejected, the CCB notifies the developer with the reasons. The ECR describes the change to be made,
the constraints that must be respected, and the criteria for review and audit. The object to be changed is
“checked out” of the project database, the change is made, and then the object is tested again. The object
is then “checked in” to the database and appropriate version control mechanisms are used to create the
next version of the software.
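The change-request lifecycle described above can be summarized as a small state machine; the state names and the single-flag CCB decision are simplifications for illustration.

```cpp
#include <cassert>

// Simplified lifecycle of a change request (CR) as described above.
// The states and the boolean CCB decision are illustrative assumptions.
enum class CrState { Submitted, Evaluated, Approved, Rejected,
                     CheckedOut, Tested, CheckedIn };

// Advances a CR one step; `ccbApproved` models the change control
// board's decision taken after evaluation.
CrState advance(CrState s, bool ccbApproved) {
    switch (s) {
        case CrState::Submitted:  return CrState::Evaluated;
        case CrState::Evaluated:  return ccbApproved ? CrState::Approved
                                                     : CrState::Rejected;
        case CrState::Approved:   return CrState::CheckedOut;  // object checked out
        case CrState::CheckedOut: return CrState::Tested;      // change made and retested
        case CrState::Tested:     return CrState::CheckedIn;   // new version created
        default:                  return s; // Rejected / CheckedIn are terminal
    }
}
```

Encoding the lifecycle this way makes the terminal states (rejection and check-in) explicit and prevents skipping the CCB decision.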
4. Configuration auditing – A software configuration audit complements the formal technical review of the
process and product. It focuses on the technical correctness of the configuration object that has been
modified. The audit confirms the completeness, correctness, and consistency of items in the SCM system
and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to developers, testers, end users,
customers, and stakeholders through admin guides, user guides, FAQs, Release notes, Memos, Installation
Guide, Configuration guides, etc.
System Configuration Management (SCM) is a software engineering practice that focuses on managing the
configuration of software systems and ensuring that software components are properly controlled, tracked, and
stored. It is a critical aspect of software development, as it helps to ensure that changes made to a software
system are properly coordinated and that the system is always in a known and stable state.
SCM involves a set of processes and tools that help to manage the different components of a software
system, including source code, documentation, and other assets. It enables teams to track changes made to
the software system, identify when and why changes were made, and manage the integration of these
changes into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to reported issues makes bug tracking
more effective.
2. Continuous Integration and Deployment: SCM combines with continuous processes to automate
deployment and testing, resulting in more dependable and timely software delivery.
3. Risk Management: SCM lowers the chance of introducing critical flaws by assisting in the early
detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method for handling code modifications in big
projects, fostering a well-organized development process.
5. Reproducibility: By recording the precise versions of code, libraries, and dependencies, SCM
makes builds reproducible.
6. Parallel Development: SCM facilitates parallel development by enabling several developers to
work on different branches at once.
Why is System Configuration Management needed?
1. Replicability: SCM ensures that a software system can be replicated at any stage of its
development. This is necessary for testing, debugging, and upholding consistent environments
across development, testing, and production.
2. Identification of Configuration: SCM helps in locating and labeling configuration items such
as source code, documentation, and executable files. Managing a system’s constituent parts and
their interactions depends on this identification.
3. Efficient Development Process: By automating monotonous tasks such as managing
dependencies, merging changes, and resolving conflicts, SCM simplifies the development
process. This automation decreases the risk of error and increases efficiency.
Key objectives of SCM
1. Control the evolution of software systems: SCM helps to ensure that changes to a software system are
properly planned, tested, and integrated into the final product.
2. Enable collaboration and coordination: SCM helps teams to collaborate and coordinate their work,
ensuring that changes are properly integrated and that everyone is working from the same version of the
software system.
3. Provide version control: SCM provides version control for software systems, enabling teams to manage
and track different versions of the system and to revert to earlier versions if necessary.
4. Facilitate replication and distribution: SCM helps to ensure that software systems can be easily
replicated and distributed to other environments, such as test, production, and customer sites.
SCM is a critical component of software development; effective SCM practices can help to improve
the quality and reliability of software systems, as well as increase efficiency and reduce the risk of
errors.
The main advantages of SCM
1. Improved productivity and efficiency by reducing the time and effort required to manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes are properly tested and validated.
3. Increased collaboration and communication among team members by providing a central repository for
software artifacts.
4. Improved quality and stability of software systems by ensuring that all changes are properly controlled and
managed.
The main disadvantages of SCM
1. Increased complexity and overhead, particularly in large software systems.
2. Difficulty in managing dependencies and ensuring that all changes are properly integrated.
3. Potential for conflicts and delays, particularly in large development teams with multiple contributors.
Software Configuration Management (SCM) is an umbrella activity that is applied throughout the
software process. It manages and tracks the emerging product and its versions, and it identifies
and controls the configuration of the software, hardware, and tools used throughout the
development cycle. SCM ensures that everyone involved in the software process knows what is
being designed, developed, built, tested, and delivered.
Objectives of SCM Standards: The major objectives of software configuration management are
depicted in the following figure:
Most modern software packages install into predefined directories set at the factory. This type of
installation is fine for a single user, but for a collection of machines it leads to non-uniform
configurations.
A good configuration standard will have software installed in specific directory areas to
logically divide the software on the disk.
Universal scripts make it easy to identify the installed components and open up the possibility
of automating installation procedures.
Because software is installed into specific directories, maintaining and upgrading running
software becomes less complex.
Software Quality Assurance – Software Engineering
Last Updated : 02 Aug, 2024
Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set
of activities that ensure processes, procedures as well as standards are suitable for the project and
implemented correctly.
Software Quality Assurance is a process that works parallel to Software Development. It focuses on
improving the process of development of software so that problems can be prevented before they
become major issues. Software Quality Assurance is a kind of Umbrella activity that is applied
throughout the software process.
What is quality?
Quality in a product or service can be defined by several measurable characteristics. Each of these
characteristics plays a crucial role in determining the overall quality.
The SQA process includes:
1. Specific quality assurance and quality control tasks (including technical reviews and a multi-tiered testing strategy)
2. Effective software engineering practice (methods and tools)
3. Control of all software work products and the changes made to them
4. A procedure to ensure compliance with software development standards (when applicable)
5. Measurement and reporting mechanisms
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have produced a broad array of software
engineering standards and related documents. The job of SQA is to ensure that standards that have been
adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity performed by software engineers for software engineers; their intent is to uncover errors. Audits are a type of review performed by SQA personnel with the intent of ensuring that quality guidelines are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary goal: to find errors. The job of SQA is to ensure that testing is properly planned and efficiently conducted.
4. Error/defect collection and analysis : SQA collects and analyzes error and defect data to better
understand how errors are introduced and what software engineering activities are best suited to
eliminating them.
5. Change management: SQA ensures that adequate change management practices have been instituted.
6. Education: Every software organization wants to improve its software engineering practices. A key contributor to improvement is the education of software engineers, their managers, and other stakeholders. The SQA organization takes the lead in software process improvement and is a key proponent and sponsor of educational programs.
7. Security management: SQA ensures that appropriate process and technology are used to achieve
software security.
8. Safety: SQA may be responsible for assessing the impact of software failure and for initiating those steps
required to reduce risk.
9. Risk management : The SQA organization ensures that risk management activities are properly conducted
and that risk-related contingency plans have been established.
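Element 4 (error/defect collection and analysis) often boils down to simple measures such as defect density and a breakdown by the phase that introduced each defect. A minimal sketch, with an invented defect log and an invented code-size figure:

```python
from collections import Counter

# Hypothetical defect log: (defect_id, phase_introduced)
defects = [
    (1, "requirements"), (2, "design"), (3, "coding"),
    (4, "coding"), (5, "coding"), (6, "design"),
]
kloc = 12.0  # thousands of lines of code in the release (example figure)

# Defect density: defects per KLOC, a common SQA measure.
density = len(defects) / kloc
print(round(density, 2))  # 0.5

# Where are errors introduced? This guides which practices to improve.
by_phase = Counter(phase for _, phase in defects)
print(by_phase.most_common(1))  # [('coding', 3)]
```

Here the analysis would point SQA toward the coding phase as the main source of defects in this (fictional) release.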
Software Quality Assurance (SQA) focuses
Software Quality Assurance (SQA) focuses on the following:
Software’s portability: Software’s portability refers to its ability to be easily transferred or adapted to
different environments or platforms without needing significant modifications. This ensures that the software
can run efficiently across various systems, enhancing its accessibility and flexibility.
Software’s usability: Usability of software refers to how easy and intuitive it is for users to interact with and navigate through the application. A high level of usability ensures that users can effectively accomplish their tasks with minimal confusion or frustration, leading to a positive user experience.
Software’s reusability: Reusability in software development involves designing components or modules that can be reused in multiple parts of the software or in different projects. This promotes efficiency and reduces development time by eliminating the need to reinvent the wheel for similar functionalities, enhancing productivity and maintainability.
Software’s correctness: Correctness of software refers to its ability to produce the desired results under specific conditions or inputs. Correct software behaves as expected without errors or unexpected behaviors, meeting the requirements and specifications defined for its functionality.
Software’s maintainability: Maintainability of software refers to how easily it can be modified, updated, or extended over time. Well-maintained software is structured and documented in a way that allows developers to make changes efficiently without introducing errors or compromising its stability.
Software’s error control: Error control in software involves implementing mechanisms to detect, handle, and recover from errors or unexpected situations gracefully. Effective error control ensures that the software remains robust and reliable, minimizing disruptions to users and providing a smoother experience overall.
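The error-control point above is essentially defensive programming: detect, handle, and recover. A minimal sketch using a hypothetical input parser (the function and default value are invented for illustration):

```python
def parse_quantity(text, default=0):
    """Detect bad input, handle it, and recover with a safe default
    instead of crashing -- the essence of graceful error control."""
    try:
        value = int(text)
        if value < 0:
            raise ValueError("quantity cannot be negative")
        return value
    except (TypeError, ValueError):
        # Recover: a real system would also log the error here,
        # then fall back to a known-good value.
        return default

print(parse_quantity("42"))    # 42
print(parse_quantity("oops"))  # 0
print(parse_quantity(None))    # 0
```

The user sees a sensible result in every case, which is what keeps the software "robust and reliable" in the sense used above.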
Software Quality Assurance (SQA) Includes
1. A quality management approach.
2. Formal technical reviews.
3. Multi-testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanism.
Major Software Quality Assurance (SQA) Activities
1. SQA Management Plan: Make a plan for how you will carry out SQA throughout the project. Consider which set of software engineering activities best fits the project, and check the skill level of the SQA team.
2. Set the Checkpoints: The SQA team should set checkpoints and evaluate the performance of the project on the basis of data collected at those checkpoints.
3. Measure Change Impact: A change made to correct an error sometimes reintroduces more errors, so keep a measure of each change's impact on the project, and retest the change to check the compatibility of the fix with the whole project.
4. Multi-testing Strategy: Do not depend on a single testing approach; when multiple testing approaches are available, use them.
5. Manage Good Relations: Maintaining good relations with the other teams involved in project development is mandatory. A bad relationship between the SQA team and the programming team will directly harm the project. Don't play politics.
6. Maintaining records and reports: Comprehensively document and share all QA records, including test
cases, defects, changes, and cycles, for stakeholder awareness and future reference.
7. Reviews software engineering activities: The SQA group identifies and documents the processes. The
group also verifies the correctness of software product.
8. Formalize deviation handling: Track and document software deviations meticulously. Follow established
procedures for handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA helps software remain stable, reducing the need for maintenance over long periods.
5. High-quality commercial software increases the company's market share.
6. Improving the process of creating software.
7. Improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your company can forget about it and
move on to the next big thing. Release a product with chronic issues, and your business bogs down in a
costly, time-consuming, never-ending cycle of repairs.
Disadvantage of Software Quality Assurance (SQA)
There are a number of disadvantages of quality assurance.
Cost: Implementing SQA means adding more resources (people, tools, and infrastructure) for the betterment of the product, which increases the budget.
Time-Consuming: Testing and deployment take more time, which can delay the project.
Overhead : SQA processes can introduce administrative overhead, requiring documentation, reporting, and
tracking of quality metrics. This additional administrative burden can sometimes outweigh the benefits,
especially for smaller projects.
Resource Intensive : SQA requires skilled personnel with expertise in testing methodologies, tools, and
quality assurance practices. Acquiring and retaining such talent can be challenging and expensive.
Resistance to Change : Some team members may resist the implementation of SQA processes, viewing
them as bureaucratic or unnecessary. This resistance can hinder the adoption and effectiveness of quality
assurance practices within an organization.
Not Foolproof : Despite thorough testing and quality assurance efforts, software can still contain defects or
vulnerabilities. SQA cannot guarantee the elimination of all bugs or issues in software products.
Complexity : SQA processes can be complex, especially in large-scale projects with multiple stakeholders,
dependencies, and integration points. Managing the complexity of quality assurance activities requires
careful planning and coordination.
Conclusion
Software Quality Assurance (SQA) plays a most important role in ensuring the quality, reliability, and efficiency of a product. Implementing its control processes improves the software engineering process, and SQA yields a higher-quality product that helps meet user expectations. It has some drawbacks as well, such as cost and the time the process consumes, but once the SQA process is in place it improves reliability and reduces future maintenance costs.
Overall, Software Quality Assurance (SQA) is important for success in project development in Software Engineering.
Frequently Asked Questions
What does Software Quality Assurance (SQA) do in software development?
SQA makes sure that the software is built according to the requirements and checks the quality of what is built.
How does Software Quality Assurance (SQA) help software work better?
SQA finds faults in the software before it is used, which helps make the software more trustworthy.
What parts are important in Software Quality Assurance (SQA)?
SQA checks that the software follows standards, learns from past examples, manages changes, verifies correct operation, educates teams, ensures security, and handles risk.
What is Monitoring and Control in Project Management?
Last Updated : 30 May, 2024
Monitoring and control are key processes in project management, with great significance in making sure that business goals are achieved successfully. These processes enable supervision, informed decision-making, and adjustment in response to changes during the project life cycle.
What is Monitoring Phase in Project Management?
Monitoring in project management is the systematic process of observing, measuring, and evaluating
activities, resources, and progress to verify that a given asset has been developed according to the terms set
out. It is intended to deliver instant insights, detect deviations from the plan, and allow quick decision-making.
Purpose
1. Track Progress: Monitor the actual implementation of the project against indicators such as designs, timelines, budgets, and standards.
2. Identify Risks and Issues: Identify risks and possible issues at an early stage to allow immediate intervention and resolution.
3. Ensure Resource Efficiency: Monitor how resources are being distributed and used to improve efficiency while avoiding resource shortages.
4. Facilitate Decision-Making: Supply project managers and stakeholders with reliable and timely information for informed decisions.
5. Enhance Communication: Encourage honest team communication and stakeholder engagement regarding project status and challenges.
Key Activities
1. Performance Measurement: Identify and monitor critical performance indicators (KPIs) to compare the
progress of a project against defined targets.
2. Progress Tracking: Update schedules and timelines for the project on a regular basis, and compare actual
work with planned milestones to detect any delays or deviations.
3. Risk Identification and Assessment: Monitor actual risks, including their probability and consequences.
Find new risks and assess the performance of current risk mitigation mechanisms.
4. Issue Identification and Resolution: Point out problems discovered in the process of project
implementation, evaluate their scale and introduce corrective measures immediately.
5. Resource Monitoring: Track how resources are distributed and used, to ensure there is adequate
equipment as well as support by the team members in meeting their objectives.
6. Quality Assurance: Monitor compliance with quality standards and processes, reporting deviations to take
actions necessary for restoring the targeted level of quality.
7. Communication and Reporting: Disseminate project status updates, milestones reached and important
findings to the stakeholders on a regular basis.
8. Change Control: Review and evaluate project scope, schedule or budget changes. Adopt structured
change control processes to define, justify and approve changes.
9. Documentation Management: Make sure that project documentation is accurate, current and readily
available for ready reference. This involves project plans, reports and other documents related to a
particular project.
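Performance measurement against plan (activity 1 above) is commonly done with earned-value indicators. A minimal sketch using the standard SPI and CPI formulas; the sample figures are invented:

```python
def earned_value_indices(planned_value, earned_value, actual_cost):
    """Schedule Performance Index (SPI = EV/PV) and
    Cost Performance Index (CPI = EV/AC); values below 1.0 signal
    schedule slippage or cost overrun respectively."""
    return earned_value / planned_value, earned_value / actual_cost

# Example project snapshot (hypothetical numbers): the plan called for
# $100k of work by now, $80k of work is actually complete, $90k was spent.
spi, cpi = earned_value_indices(100_000, 80_000, 90_000)
print(round(spi, 2))  # 0.8  -> behind schedule
print(round(cpi, 2))  # 0.89 -> over budget
```

Both indices below 1.0 would feed directly into the control phase described next, triggering corrective action.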
Tools and Technologies for Monitoring
1. Project Management Software: Tools such as Microsoft Project, Jira, and Trello offer features for scheduling, resource monitoring, and task execution.
2. Performance Monitoring Tools: Solutions such as New Relic, AppDynamics, and Dynatrace monitor application performance, infrastructure performance, and user experience.
3. Network Monitoring Tools: SolarWinds Network Performance Monitor, Wireshark, and PRTG Network Monitor help monitor and analyze network performance.
4. Server and Infrastructure Monitoring Tools: Nagios, Prometheus, and Zabbix monitor servers, systems, and IT infrastructure for performance and availability.
5. Log Management Tools: Log analysis and visualization are performed using ELK Stack (Elasticsearch,
Logstash, Kibana), Splunk, and Graylog.
6. Cloud Monitoring Tools: Amazon CloudWatch, Google Cloud Operations Suite, and Azure Monitor
provide monitoring solutions for cloud-based services and resources.
7. Security Monitoring Tools: Security Information and Event Management tools like Splunk, IBM QRadar or
ArcSight provide support to the process of monitoring security events and incidents.
What is Control Phase in Project Management?
In project management, the control stage refers to taking corrective measures using data collected during
monitoring. It seeks to keep the project on track and in line with its purpose by resolving issues, minimizing
risks, and adopting appropriate modifications into plan documents for projects.
Purpose
1. Implement Corrective Actions: Address identified issues, risks, or deviations from the project plan by implementing corrective actions that put the project back on course.
2. Adapt to Changes: Accommodate changes in requirements, external parameters, or unforeseen circumstances by altering project plans, resources, and strategies.
3. Optimize Resource Utilization: Prevent resource overruns or shortages that directly affect project performance.
4. Ensure Quality and Compliance: Comply with quality standards, regulatory mandates, and project policies to achieve the best results possible.
5. Facilitate Communication: Communicate changes, updates, and resolutions to stakeholders in order to preserve transparency and cooperation throughout the project.
Key Activities
1. Issue Resolution: Respond to identified issues in a timely manner by instituting remedial measures. Work
with the project team to address obstacles that threaten progress in this assignment.
2. Risk Mitigation: Perform risk response plans in order to avoid the negative influence of risks identified.
Take proactive actions that can minimize the possibility or magnitude of potential problems.
3. Change Management: Evaluate and put into practice approved amendments to the project scope, schedule, or budget. Make sure that changes are incorporated into project plans.
4. Resource Adjustment: Optimize resource allocation based on project requirements and variability in the
workload. Make sure that team members are provided with adequate support in order to play their
respective roles efficiently.
5. Quality Control: Supervise and ensure that quality standards are followed. Ensure that project deliverables
comply with the stated requirements through quality control measures.
6. Performance Adjustment: Adjust project schedules, budgets and other resources according to monitoring
observations. Ensure alignment with project goals.
7. Communication of Changes: Share changes, updates, and resolutions to stakeholders via periodic
reports or project documents. Keep lines of communication open.
8. Documentation Management: Update project documentation for changes made in control phase. Record
decisions, actions taken and any changes to project plans.
Tools and Technologies for Control
1. Project Management Software: Project plans, schedules, and tasks can be adjusted using Microsoft Project, Jira, or Trello, depending on changes identified in the control phase.
2. Change Control Tools: ChangeScout, Prosci or integrated change management modules within project
management software allow for systematic changes.
3. Collaboration Platforms: Instruments such as Microsoft Teams, Slack or Asana enhance interaction and
cooperation; the platforms allow real-time information sharing between team members.
4. Version Control Systems: To control changes to project documentation and maintain versioning, Git or
Subversion tools are necessary.
5. Quality Management Tools: Quality control activities are facilitated by tools such as TestRail, Jira and
Quality Center to make sure the project deliverables meet predetermined quality standards.
6. Risk Management Software: Tools like RiskWatch, RiskTrak, or ARM (Active Risk Management) help monitor and control risks and implement mitigation strategies.
7. Resource Management Tools: There are tools such as ResourceGuru, LiquidPlanner or Smartsheet that
contribute to optimizing resource allocation and easing adjustments in the control phase.
8. Communication Platforms: Communication tools like Zoom, Microsoft Teams or Slack make it possible to
inform the stakeholders of changes, updates and resolutions in a timely manner.
Integrating Monitoring and Control
Seamless combination of the monitoring and control processes is necessary in project management for
successfully completed projects. While monitoring is concerned with the constant observation and
measurement of project activities, control refers to controlling actions that arise from these insights. These two
processes form a synergy that shapes an agile environment, promotes efficient decision-making and mitigates
risk as well ensuring good performance of the project.
Here’s an in-depth explanation of how to effectively integrate monitoring and control:
1. Continuous Feedback Loop
The integration starts with continuous feedback loops between the monitoring and control. Measuring allows
real time information on project advancements, risks and resource utilization as a foundation for control
decision making.
2. Establishing Key Performance Indicators (KPIs)
First, identify and track KPIs that are relevant to the project goals. These parameters act as performance measures and deviation standards, giving the control phase a basis for making corrections.
3. Early Identification of Risks and Issues
Through continuous monitoring, problems are identified in the early stages of their emergence. This integration lets the organization be proactive: project teams can implement timely and effective corrective measures, keeping these risks from becoming major issues.
4. Real-Time Data Analysis
During the monitoring phase, use sophisticated instruments to analyze data in real time. Technologies such as artificial intelligence, machine learning, and data analytics help reveal trends, patterns, and anomalies in project dynamics for better control.
5. Proactive Change Management
Integration guarantees that changes identified during monitoring smoothly undergo control. A good change
management process enables the assessment, acceptance and implementation of changes without affecting
project stability.
6. Stakeholder Communication and Transparency
Effective integration depends on transparent communication. Keep stakeholders abreast of the project's status, the changes made, and how issues were resolved. Proper communication ensures everyone is aligned with the direction of the project and promotes synergy between monitoring and control activities.
7. Adaptive Project Plans
Create project plans that can be modified based on changes established during monitoring. Bringing control in
means working with schedules, resource allocations, and objectives that can be changed depending on the
nature of conditions while project plans remain flexible.
8. Agile Methodologies
The use of agile methodologies enhances integration even more. Agile principles prioritize iterative
development, continual feedback, and flexible planning in accordance with monitoring-control integration.
9. Documentation and Lessons Learned
It is vital to record insights from the monitoring and control phases. This documentation lets future projects use lessons learned as a resource, fine-tune the monitoring strategy, and optimize control processes on an ongoing basis.
Benefits of Effective Monitoring and Control
Proper monitoring and control processes play an important role in the success of projects that are guided by
project management. Here are key advantages associated with implementing robust monitoring and control
measures:
1. Timely Issue Identification and Resolution: Prompt resolution of issues is possible when they are detected early. Effective monitoring and control surface challenges early, preventing their escalation into serious problems that could affect project timelines or overall objectives.
2. Optimized Resource Utilization: Monitoring and controlling resource allocation and use ensures optimum efficiency. Teams can detect underutilized or overallocated resources and adjust allocations toward a balanced workload and efficient resource use.
3. Risk Mitigation: A continuous monitoring approach aids proactive risk management. Identifying potential risks at an early stage lets project teams establish mitigation plans that reduce the likelihood and severity of adverse events.
4. Adaptability to Changes: Effective monitoring highlights shifts in project requirements, influences outside
the system or stakeholder expectations. Control processes enable a smooth adjustment of project plans to
reflect the ongoing change, thus minimizing resistance.
5. Improved Decision-Making: As the monitoring processes provide accurate and real-time data, decision
making can be improved. Stakeholders and project managers can base their decisions on the most current
of information, thereby facilitating more strategic choices that result in better outcomes.
6. Enhanced Communication and Transparency: Frequent communication of status, progress, and issues supports transparency. Stakeholders are kept up to date, which builds trust among team members, clients, and other interested parties.
7. Quality Assurance: The monitoring and control processes also help in the quality assurance of project
deliverables. Therefore, through continuous tracking and management of quality metrics, teams can find
any deviations from the standards to take timely corrective actions that meet stakeholders’ needs.
8. Cost Control: Cost overruns, in turn, could be mitigated through continuous monitoring of project budgets
and expenses accompanied by the control processes. Teams can spot variances early and take corrective
actions to ensure that the project stays within budget limit.
9. Efficient Stakeholder Management: Monitoring and control provide timely notice of the project's progress and any changes to interested parties. This preemptive approach increases stakeholder satisfaction while reducing misconceptions.
10. Continuous Improvement: Improvement continues as lessons learned through monitoring and control
activities are applied. Teams can learn from past projects, understand what needs to improve, and
implement good practices in future initiatives establishing an atmosphere of constant development.
11. Increased Predictability: Effective monitoring and control make project outcomes more predictable. Closely controlling project activities yields accurate forecasts of timelines, costs, and risks, giving stakeholders a clear understanding of what to expect from their projects.
12. Project Success and Client Satisfaction: Finally, the result of successful monitoring and control is project success: clients are satisfied and the project delivers positive outcomes.
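The cost-control benefit above relies on spotting variances early. A minimal sketch that flags overruns against a budget baseline; the line items, figures, and tolerance threshold are illustrative, not prescriptive:

```python
def cost_variance_report(budgeted, actual, tolerance=0.10):
    """Flag any line item whose actual spend exceeds its budget by more
    than the tolerance, so corrective action can start early."""
    flagged = {}
    for item, planned in budgeted.items():
        spent = actual.get(item, 0)
        if planned > 0 and (spent - planned) / planned > tolerance:
            flagged[item] = spent - planned  # overrun amount
    return flagged

# Hypothetical budget baseline and actual spend to date.
budget = {"development": 50_000, "testing": 20_000, "training": 5_000}
spend = {"development": 58_000, "testing": 19_000, "training": 5_200}
print(cost_variance_report(budget, spend))  # {'development': 8000}
```

In this fictional snapshot only development has drifted beyond the 10% tolerance, so that is where the control phase would act first.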
Challenges and Solutions
1. Incomplete or Inaccurate Data
Challenge: Lack of proper or trustworthy data may impair efficient monitoring and control, making wrong
decisions.
Solution: Develop effective data collection methods, use reliable instruments and invest in training to
increase the accuracy of information captured.
2. Scope Creep
Challenge: Lack of sufficient control can lead to scope creep that affects overall timelines and costs.
Solution: Implement rigid change control procedures, review project scope on a regular basis, and ensure that all changes are appropriately evaluated, approved, and documented.
3. Communication Breakdowns
Challenge: Poor communication often leads to misunderstandings, delays, and unresolved matters.
Solution: Set up proper communication channels, use collaboration tools and have regular meetings about
the project’s status to ensure productive communication between team members and stakeholders.
4. Resource Constraints
Challenge: Lack of resources, in terms of budget, personnel or technology hinders timely monitoring and
control.
Solution: Focus on resource requirements, obtain further help where required and maximize resource
utilization by planning carefully.
5. Lack of Stakeholder Engagement
Challenge: Lack of engagement among some stakeholders affects the pace and decisions made during
such a project.
Solution: Develop a culture that supports stakeholder engagement by providing regular updates,
conducting feedback sessions and involving key decision makers at critical junctions.
6. Unforeseen Risks
Challenge: During the project lifecycle, new risks can surface that had not been previously identified.
Solution: Apply a risk management approach that is responsive, reassess risks regularly and ensure
contingency plans are in place to cope with the unexpected.
7. Resistance to Change
Challenge: Changes enforced within the control stage might be rejected by team members or stakeholders.
Solution: Clearly communicate the rationale for changes, engage appropriate stakeholders in decision-making processes, and emphasize the value of flexibility to facilitate a more comfortable change process.
8. Technology Integration Issues
Challenge: The integration of monitoring and control tools is complicated, which can bring inefficiencies or
data inconsistency.
Solution: In order to achieve effective integration, invest in interoperable technologies that are easy-to-use
while providing continuous training and keeping the systems up to date.
9. Insufficient Training and Skill Gaps
Challenge: Lack of proper training and skill deficiencies among team members threaten the effective use of monitoring and control mechanisms.
Solution: Offer broad training opportunities, identify and resolve areas of deficiency, and cultivate continuous learning to increase the effectiveness of the project team.
10. Lack of Standardized Processes
Challenge: Non-uniform or irregular processes can cause confusion and mistakes in monitoring and control activities.
Solution: Create and record standardized processes, ensure that the entire team understands the procedures, and review them continually, incorporating lessons learned.
Conclusion
In the final analysis, successful project management is based upon the incorporation of efficient monitoring and control processes. The symbiotic relationship between these two phases creates a dynamic framework that allows adaptability, transparency, and informed decision-making throughout the project life cycle.
Frequently Asked Questions on Monitoring and Control in Project
Management – FAQs
How to monitor a project plan?
Monitor a project plan by regularly tracking progress against milestones and deadlines, identifying any
deviations or risks, and adjusting the plan accordingly to ensure timely completion and alignment with project
goals.
What are the 4 types of project monitoring?
The 4 types of project monitoring are: progress monitoring, performance monitoring, risk monitoring, and resource monitoring.
What are the 5 project controls?
Time, cost, scope, quality, and resources are the five project controls.
What is the project control cycle?
The project control cycle is like a loop of steps where we first set goals and plans, then check how things are
going, compare it to the plan, fix any problems we find, and then start the loop again. It helps us keep our
projects on track and make sure they’re successful.
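The plan-check-compare-fix loop described above can be sketched as a simple iteration. The drift model and correction rule below are invented purely for illustration; real corrective actions are not this mechanical.

```python
def control_cycle(target, actual, max_iterations=10, step=0.5):
    """Toy model of the monitor -> compare -> correct loop:
    each cycle measures the gap to the plan and applies a corrective
    action that closes a fraction of it."""
    history = [actual]
    for _ in range(max_iterations):
        gap = target - actual
        if abs(gap) < 0.01:          # close enough to plan: stop
            break
        actual += step * gap         # corrective action narrows the gap
        history.append(actual)
    return history

# Project at 60% of planned progress, target 100%: each control cycle
# brings the actual state closer to the plan.
path = control_cycle(target=100.0, actual=60.0)
print(round(path[-1], 1))  # 100.0 (converged toward the target)
```

The takeaway matches the FAQ answer: without repeating the loop, the gap between plan and reality never closes.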
Traditionally, a high-quality product is defined in terms of its fitness of purpose. That is, a high-quality product does exactly what the users need it to do. For software products, fitness of purpose is typically interpreted as satisfaction of the requirements laid down in the SRS document.
Although "fitness of purpose" is a satisfactory definition of quality for some products, such as an automobile, a table fan, or a grinding machine, for software products "fitness of purpose" is not a completely satisfactory definition of quality.
What is Software Quality?
Software quality shows how good and reliable a product is. To give an example, consider functionally correct software: it performs all the functions specified in the SRS document, but has an almost unusable user interface. Even though it is functionally correct, we do not consider it to be a high-quality product.
Another example is a product that has everything the users need but whose code is almost incomprehensible and unmaintainable. Therefore, the traditional concept of quality as "fitness of purpose" is not satisfactory for software products.
Factors of Software Quality
The modern view associates software quality with several quality factors, such as the following:
1. Portability: A software product is said to be portable if it can easily be made to work in different
operating system environments, on different machines, and with other software products.
2. Usability: A software product has good usability if different categories of users (i.e. expert and
novice users) can easily invoke its functions.
3. Reusability: A software product has good reusability if its modules can easily be reused to develop
new products.
4. Correctness: A software product is correct if the requirements specified in the SRS document are
correctly implemented.
5. Maintainability: A software product is maintainable if errors can easily be corrected as and when they
show up, new functions can easily be added to the product, and the functionality of the product can
easily be modified.
6. Reliability: Software is more reliable if it has fewer failures. Since software engineers do not deliberately
plan for their software to fail, reliability depends on the number and type of mistakes they make. Designers
can improve reliability by ensuring the software is easy to implement and change, by testing it thoroughly,
and also by ensuring that if failures occur, the system can handle them or can recover easily.
7. Efficiency: The more efficient software is, the less it uses of CPU-time, memory, disk space, network
bandwidth, and other resources. This is important to customers in order to reduce their costs of running the
software, although with today’s powerful computers, CPU time, memory and disk usage are less of a
concern than in years gone by.
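The reliability factor above is often quantified with simple arithmetic over observed failure data, such as Mean Time Between Failures (MTBF) and availability. The sketch below is illustrative only; the operating hours, failure count, and repair time are made-up figures, not measurements from any real system.

```python
# Hedged sketch of basic reliability arithmetic for the "Reliability"
# quality factor. All input figures are hypothetical.

def mtbf(total_uptime_hours, num_failures):
    """Mean Time Between Failures: average uptime between observed failures."""
    return total_uptime_hours / num_failures

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is operational, given the mean repair time."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: 1000 hours of operation, 4 failures, 2 hours mean time to repair
m = mtbf(1000, 4)        # 250.0 hours between failures
a = availability(m, 2)   # 250 / 252, roughly 0.992
print(m, round(a, 3))
```

Fewer failures (a larger MTBF) and faster recovery (a smaller MTTR) both push availability toward 1, which matches the observation that reliability depends on how often failures occur and how easily the system recovers from them.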
Software Quality Management System
A software quality management system comprises the methods an organization uses to develop products
of the desired quality.
Some of these methods are:
Managerial Structure: Every organization has a managerial structure, and the quality system is
responsible for managing this structure as a whole.
Individual Responsibilities: Each individual in the organization must have responsibilities that are
reviewed by top management, and each individual in the system must take them seriously.
Quality System Activities: The activities that each quality system must include are:
o Auditing of projects.
o Review of the quality system.
o Development of standards, procedures, and guidelines.
Evolution of Quality Management System
Quality systems have gradually evolved over the years. The evolution of a quality management
system is a four-step process.
1. Inception: The product inspection task provided an instrument for quality control (QC).
2. Quality Control: The main task of quality control is to detect defective devices; it also helps in finding
the cause that leads to the defect and in correcting bugs.
3. Quality Assurance: Quality assurance helps an organization make good-quality products, and improves
product quality by ensuring that products pass through proper quality checks.
4. Total Quality Management (TQM): Total Quality Management (TQM) checks and assures that all
procedures are continuously improved through regular process measurements.
1. What are the different approaches to defining software quality?
Ans: The common approaches are:
User based
Development and manufacturer based
Value-based.
2. What is the purpose of software quality?
Ans: The main purpose of software quality is to ensure that software products are properly developed
and maintained to meet the requirements.
3. What are the three C's of Software Quality?
Ans: The three C's of Software Quality are Consistency, Completeness, and Correctness.
The International Organization for Standardization (ISO) is a worldwide federation of national standards
bodies. An ISO standard serves as a basis for contracts between independent parties and specifies
guidelines for the development of a quality system.
The quality system of an organization comprises the various activities related to its products or services.
ISO standards address both operational and organizational aspects, including responsibilities, reporting,
etc. An ISO 9000 standard contains a set of guidelines for the production process without considering
the product itself.
Advantages of ISO 9000 Certification:
Some of the advantages of the ISO 9000 certification process are the following:
ISO 9000 certification forces a corporation to focus on "how they do business". Each procedure
and work instruction must be documented and thus becomes a springboard for continuous
improvement.
Employee morale is increased, as employees are asked to take control of their processes and
document their work processes.
Better products and services result from the continuous improvement process.
Increased employee participation, involvement, awareness, and systematic employee training
reduce problems.
Shortcomings of ISO 9000 Certification:
Some of the shortcomings of the ISO 9000 certification process are the following:
ISO 9000 does not give any guidelines for defining an appropriate process and does not
guarantee a high-quality process.
No international accreditation agency exists for the ISO 9000 certification process.