Software Engineering

Introduction to Software Engineering:

● Software engineering is the process of designing, developing, testing, and maintaining software.

● It is a systematic and disciplined approach that uses engineering principles to create software that is reliable, efficient, and easy to maintain.
Characteristics of Software Engineering:
The main characteristics of software engineering are:

1. Complexity: Software systems are complex and have a large number of components that need to be integrated and tested. This complexity can make it difficult to understand and maintain the software.

2. Changeability: Software systems are constantly changing to meet new requirements or fix bugs. This means that the software must be designed to be easily modified and maintained over time.

3. Reusability: Software components should be designed to be reusable in other systems. This allows for more efficient development and lower costs.

4. Maintainability: Software should be designed to be easily maintained and modified. This includes using clear and consistent coding conventions, documenting the code, and using version control.

5. Portability: Software should be designed to be portable across different platforms and environments. This means that the software should be able to run on different operating systems, hardware configurations, and environments.

6. Reliability: Software should be designed to be reliable; it should be able to perform its functions without failure and without error.

7. Usability: Software should be designed to be easy to use for its intended users; it should be user-friendly, intuitive, and easy to navigate.

8. Scalability: Software should be designed to handle an increasing amount of work as the number of users or the amount of data increases.

9. Performance: Software should be designed to perform well; it should be fast, efficient, and consume minimal resources.

10. Security: Software should be designed to be secure; it should protect against unauthorized access, hacking, and data breaches.

Software Metrics & Models:

● Software metrics are quantitative measures of some aspect of software, such as size, complexity, or performance.

● Software models are abstract representations of software, such as UML diagrams or state machines.

● Software metrics and models are used to evaluate the quality of software and to identify potential problems.

Process Metrics:

● Process metrics are used to measure the efficiency and effectiveness of the
software development process.

● Examples of process metrics include:
○ Number of defects per thousand lines of code (KLOC), as computed in the sketch after this list
○ Time to complete a task
○ Number of tasks completed per unit time
○ Code review feedback turnaround time
○ Number of code reviews per developer
○ Number of bugs found during testing
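
The first metric above, defect density, is simple arithmetic. A minimal sketch with hypothetical numbers:

def defect_density(defects_found, total_loc):
    """Return the number of defects per thousand lines of code (KLOC)."""
    return defects_found / (total_loc / 1000.0)

# Hypothetical project: 46 defects found in a 23,000-line system.
print(defect_density(46, 23_000))  # 2.0 defects per KLOC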

Product Metrics:

● Product metrics are used to measure the quality and functionality of the final
software product.

● Examples of product metrics include:
○ Lines of code
○ Number of functions
○ Number of classes
○ Cyclomatic complexity
○ Number of bugs
○ Test coverage
○ User satisfaction
○ Performance (e.g. response time)

It is important to choose the right metrics and models to use, as they can provide valuable information about the software and the development process. The metrics and models should be chosen based on the specific goals and objectives of the software development project. They should also be appropriate for the stage of the development process and the type of software being developed.

It is also important to establish a process for collecting, analyzing, and reporting on the
metrics. This allows for regular monitoring of the software development process and early
identification of any issues that may need to be addressed.

Additionally, it is crucial to understand that metrics alone do not guarantee the quality of the software; they should be used in conjunction with other techniques, such as code reviews, testing, and inspections, to ensure software quality.

In summary, software metrics and models, including process and product metrics, are important tools in software engineering: they provide a way to measure the efficiency and effectiveness of the software development process and the quality of the final software product.

Software Life Cycle Models


● Software life cycle models are frameworks that describe the stages and activities
involved in the software development process.

● Different models have been developed over time to address the specific needs and
constraints of different types of software projects.

Waterfall Model:

● The Waterfall model is a linear, sequential model in which each stage of the
development process must be completed before the next stage can begin.

● The stages of the Waterfall model are: Requirements gathering and analysis, Design,
Implementation, Testing, and Maintenance.

● The main advantage of the Waterfall model is its simplicity and clear, defined stages.

● The main disadvantage of the Waterfall model is that it does not allow for changes
or iterations once a stage has been completed.

Prototype Model:

● The Prototype model is an iterative model in which a working prototype of the software is developed, tested, and refined based on feedback from users.
● The main advantage of the Prototype model is that it allows for changes and iterations based on user feedback.
● The main disadvantage of the Prototype model is that it can be difficult to manage the scope of the project and ensure that the final product meets the requirements.

Spiral Model:

● The Spiral model is a combination of the Waterfall and Prototype models.
● It is an iterative, risk-driven model in which the software is developed in several cycles, with each cycle including risk analysis along with the stages of requirements gathering and analysis, design, implementation, testing, and maintenance.
● The main advantage of the Spiral model is that it allows for changes and
iterations based on user feedback, while also providing a clear, defined process
for managing the project.
● The main disadvantage of the Spiral model is that it can be more complex and
harder to manage than the other models.

Comparison:

● Waterfall model is best suited for projects with well-defined and unchanging
requirements.
● Prototype model is best suited for projects with uncertain or changing
requirements.
● Spiral model is best suited for projects with complex and high-risk requirements.

Each model has its own advantages and disadvantages, and the choice of which model
to use will depend on the specific needs and constraints of the project. It is important to
evaluate the project's requirements and select the model that best suits the project.

Software Project Management:

● Software project management is the process of planning, organizing, and managing the resources needed to develop and maintain software.

The main activities involved in software project management include:

1. Planning: This involves defining the scope, objectives, and schedule for the
project, as well as identifying the resources and budget needed to complete the
project.
2. Organizing: This involves creating the project team, assigning roles and
responsibilities, and establishing a project management plan.
3. Managing: This involves overseeing the day-to-day activities of the project,
monitoring progress, and making adjustments as needed to ensure that the
project stays on schedule and within budget.
4. Controlling: This involves monitoring the project's performance, comparing it to
the project plan, and taking corrective action when necessary.
5. Closing: This involves completing all the project activities, documenting the
project's results, and transferring the software to the customer or end user.

There are several software project management methodologies that have been
developed over time, such as Agile, Scrum, Waterfall, and Kanban. Each methodology
has its own set of practices, procedures, and tools that can be used to manage a
software project. The choice of methodology will depend on the specific needs and
constraints of the project.

In software project management, it is essential to have effective communication, risk management, and change management processes in place. Regular meetings, progress reports, and documentation can help to keep everyone informed and on track.

In summary, software project management is a complex and challenging task that involves planning, organizing, managing, and controlling the resources needed to develop and maintain software. The choice of methodology will depend on the specific needs and constraints of the project. Effective communication, risk management, and change management processes are essential for the success of the project.

Size Estimation

● Size estimation is the process of determining the size of a software project in terms of the amount of work required to complete it.

● Two common metrics for size estimation are Lines of Code (LOC) and Function
Points (FP).

Lines of Code (LOC) Metric:

● The LOC metric is a measure of the size of a software project based on the
number of lines of code in the source code.
● The LOC metric is often used as a simple and quick way to estimate the size of a
project, but it has some limitations. For example, it does not take into account the
complexity of the code or the number of functions and classes.

Function Points (FP) Metric:

● The FP metric is a measure of the size of a software project based on the number of user inputs, outputs, and logical data files in the system.
● The FP metric is considered to be more accurate and comprehensive than the
LOC metric, as it takes into account the complexity and functionality of the
software.
● The FP metric is often used as a way to estimate the size of a project, as well as
to measure the productivity and efficiency of the development team.
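
As a sketch of how an unadjusted function point count works (assuming the average-complexity weights of the classic function point method; a real count classifies each item as simple, average, or complex and then applies an adjustment factor):

# Average-complexity weights for the five function point component types.
FP_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Sum each component count multiplied by its weight."""
    return sum(FP_WEIGHTS[name] * n for name, n in counts.items())

# Hypothetical system: 12 inputs, 8 outputs, 5 inquiries, 4 files, 2 interfaces.
counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
print(unadjusted_function_points(counts))  # 162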

Size estimation is important in software project management as it provides a way to plan and budget for the resources needed to complete the project. However, it is important to note that size estimation is not an exact science and the actual size of a project may differ from the estimated size. Therefore, it is important to use multiple metrics and techniques to estimate the size of a project, and to regularly update the estimates as the project progresses.

Cost Estimation

Cost estimation in software engineering is the process of determining the cost of a
software project. It is an important aspect of software project management as it
provides a way to plan and budget for the resources needed to complete the project.
The cost estimation process involves identifying the resources and activities required
for the project, and estimating the cost of each resource and activity.

There are several cost estimation techniques that can be used in software engineering,
such as:

1. Delphi Method: This is a cost estimation technique that involves obtaining estimates from a panel of experts. The panel members provide their estimates anonymously, and the estimates are then reviewed and discussed until a consensus is reached. The Delphi method is particularly useful for projects with uncertain or changing requirements.
2. COCOMO (COnstructive COst MOdel): This is a cost estimation model that uses a mathematical formula to estimate the cost of a software project based on the size of the project and the development environment. The model takes into account the size of the project, measured in lines of code or function points, and the level of complexity of the project.
3. Three-Point Estimation: This is a cost estimation technique that involves generating three different cost estimates: best-case, most likely, and worst-case. The final estimate is determined by taking the average of these three estimates (see the sketch after this list).
4. Expert Judgment: This involves using the knowledge and experience of experts in the field to produce an estimate directly.
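
A minimal sketch of three-point estimation as described above; the PERT variant, which weights the most likely case four times as heavily, is shown for comparison:

def three_point_average(best, likely, worst):
    """Simple three-point estimate: the mean of the three cases."""
    return (best + likely + worst) / 3

def pert_estimate(best, likely, worst):
    """Common PERT variant: (O + 4M + P) / 6."""
    return (best + 4 * likely + worst) / 6

# Hypothetical cost estimates in person-months.
print(three_point_average(10, 14, 24))  # 16.0
print(pert_estimate(10, 14, 24))        # 15.0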

Delphi Method
The Delphi method is a cost estimation technique that involves obtaining estimates
from a panel of experts. The panel members provide their estimates anonymously, and
the estimates are then reviewed and discussed until a consensus is reached. The Delphi
method is particularly useful for projects with uncertain or changing requirements.

The Delphi method has several steps:

1. Identify the panel of experts: A panel of experts with relevant knowledge and
experience in the field of the software project should be selected.
2. Collect the initial estimates: The panel members provide their estimates for the
project, usually in the form of a questionnaire. These estimates are collected
anonymously to encourage honest and unbiased responses.
3. Review and discuss the estimates: The estimates are reviewed and discussed
among the panel members. Any outliers or discrepancies are identified and
discussed to identify the reasons for the variations.
4. Revise and resubmit the estimates: The panel members revise their estimates
based on the feedback and discussion from the previous round. The estimates
are then resubmitted for another round of review and discussion.
5. Reach a consensus: The process continues until a consensus is reached among
the panel members. The final estimate is usually the average of the estimates
from the final round.

The Delphi method is considered a reliable and accurate method for cost estimation as
it aggregates the knowledge and experience of multiple experts, leading to more
accurate and reliable estimates. Additionally, the anonymity of the initial estimates
encourages honest and unbiased responses, and the iterative nature of the method
allows for revisions and adjustments based on feedback and discussion.

However, the Delphi method also has some limitations. It can be time-consuming and resource-intensive, as it requires multiple rounds of estimates and discussions. The method may also be less effective if the experts are not familiar with the project or the domain, or if the experts are biased. Additionally, the Delphi method does not take into account other external factors such as market conditions, competition, or changes in project requirements. Therefore, it is important to use multiple cost estimation techniques and consider other factors to get the most accurate cost estimate for a software project.

Basic COCOMO
Basic COCOMO (COnstructive COst MOdel) is a cost estimation model that uses a mathematical formula to estimate the cost of a software project based on the size of the project and the development environment. The basic model estimates effort from the size of the project, measured in thousands of lines of code (KLOC), and from the development mode, which reflects the complexity of the project and the familiarity of the team with the problem and the technology.

The basic COCOMO model is divided into three sub-models:

1. Organic mode: This model is used for small projects that are relatively simple
and have a small team with high cohesion. It is also applied when the
requirements are well-understood and the technology is familiar.
2. Semi-detached mode: This model is used for medium-sized projects that are
more complex and have moderate team cohesion. It is also applied when the
requirements are partially understood, and the technology is somewhat familiar.
3. Embedded mode: This model is used for large projects that are highly complex and have low team cohesion. It is also applied when the requirements are poorly understood, and the technology is new and unfamiliar.

Each sub-model uses the same form of mathematical formula with different coefficients. The basic model estimates the effort and schedule required to complete the project from the project size alone; the intermediate COCOMO model extends it with a set of cost drivers covering product, hardware, personnel, and project attributes, whose values are used to adjust the effort estimate.

The basic COCOMO model is considered to be a simple and easy-to-use cost estimation method; however, it has some limitations. It does not take into account the skill level of the development team, the quality of the requirements, or the project management practices that are in place. It also assumes that the project is being developed using a conventional development approach and doesn't take into account agile development methodologies. Therefore, it is important to use multiple cost estimation techniques and consider other factors to get the most accurate cost estimate for a software project.

The basic COCOMO model uses several different formulas to estimate the cost of a
software project based on the size of the project and the development environment.
These formulas are used to calculate the effort and schedule required to complete the
project.

The main formulas used in the basic COCOMO model are:

Organic mode:

Effort (E) = 2.4 * (KLOC)^1.05
Schedule (D) = 2.5 * (E)^0.38

Semi-detached mode:

Effort (E) = 3.0 * (KLOC)^1.12
Schedule (D) = 2.5 * (E)^0.35

Embedded mode:

Effort (E) = 3.6 * (KLOC)^1.20
Schedule (D) = 2.5 * (E)^0.32

Where:

● KLOC is the size of the project measured in thousands of lines of code.
● Effort (E) is the effort required to complete the project measured in
person-months.
● Schedule (D) is the schedule required to complete the project measured in
months.

These formulas estimate the effort and schedule required for the project based on its size and the development mode. In the intermediate COCOMO model, the values of the cost drivers are used to further adjust the estimates for the specific characteristics of the project.
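
These formula pairs translate directly into code. The sketch below applies them to a hypothetical 32-KLOC project:

# Basic COCOMO coefficients per mode: (a, b) for effort, (c, d) for schedule.
COCOMO_MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, schedule in months) for a project."""
    a, b, c, d = COCOMO_MODES[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

# Hypothetical 32-KLOC project estimated in each mode.
for mode in COCOMO_MODES:
    effort, months = basic_cocomo(32, mode)
    print(f"{mode}: {effort:.1f} person-months over {months:.1f} months")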

It is important to note that these formulas are based on historical data and may not be
accurate for all projects, so it is important to use multiple cost estimation techniques
and consider other factors to get the most accurate cost estimate for a software
project.

Introduction to Halstead’s Software Science

● Halstead's Software Science is a set of metrics and methods for measuring the
complexity and difficulty of a software project. It was first proposed by Maurice
Halstead in 1977.

The main metrics used in Halstead's Software Science are:

1. Program Length (N): The total number of operators and operands in the program, N = N1 + N2, where N1 is the total count of operators and N2 is the total count of operands.
2. Vocabulary (n): The total number of unique operators and operands in the program, n = n1 + n2, where n1 is the number of unique operators and n2 is the number of unique operands.
3. Volume (V): A measure of the information content of the program, calculated as: V = N * log2(n)
4. Difficulty (D): A measure of the difficulty of understanding and implementing the program, calculated as: D = (n1 / 2) * (N2 / n2)
5. Effort (E): A measure of the mental effort required to implement or understand the program, calculated as the product of the difficulty and volume metrics: E = D * V (see the sketch after this list)
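
The metrics follow mechanically from the four basic counts. A minimal sketch, using hypothetical counts for a small program:

import math

def halstead(n1, n2, N1, N2):
    """Compute Halstead metrics from unique (n1, n2) and total (N1, N2)
    operator and operand counts."""
    vocabulary = n1 + n2                      # n
    length = N1 + N2                          # N
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    effort = difficulty * volume              # E = D * V
    return vocabulary, length, volume, difficulty, effort

# Hypothetical counts: 10 unique operators, 15 unique operands,
# 50 total operator occurrences, 45 total operand occurrences.
n, N, V, D, E = halstead(10, 15, 50, 45)
print(f"n={n}, N={N}, V={V:.1f}, D={D:.1f}, E={E:.1f}")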

Halstead's Software Science can be used to estimate the effort and time required to
understand and implement a software project. The metrics can also be used to
compare the complexity of different software projects.

Halstead's Software Science is a simple and easy-to-use method for measuring the complexity of software projects, but it has some limitations: it doesn't take into account the maintainability, readability, or usability of the software. Additionally, it only considers the size of the program, not the functional requirements or the quality of the code. Therefore, it is important to use multiple software metrics and consider other factors to get a complete picture of the complexity and quality of a software project.

Staffing Level Estimation:

● Staffing level estimation is the process of determining the number of personnel needed to complete a software project. It is an important aspect of software project management as it helps to plan and budget for the resources needed to complete the project.

Putnam Model

Putnam's Model is a staffing level estimation model, based on the Norden/Rayleigh curve of manpower buildup, that relates the size of the project, the total effort, and the development time. It was first proposed by Lawrence Putnam in 1978.

The model is built around the software equation:

Size = C * K^(1/3) * td^(4/3)

Where:

● Size is the size of the project, usually measured in lines of code.
● C is a technology constant that reflects the development environment and tools.
● K is the total effort required, measured in person-years.
● td is the development time, measured in years.

Solving for K gives the effort, K = (Size / (C * td^(4/3)))^3, from which the staffing level over time can be derived.
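
A minimal sketch of this calculation; the technology constant and project figures below are hypothetical:

def putnam_effort(size_loc, tech_constant, dev_time_years):
    """Solve the software equation Size = C * K^(1/3) * td^(4/3) for K,
    the total effort in person-years."""
    return (size_loc / (tech_constant * dev_time_years ** (4 / 3))) ** 3

# Hypothetical 100,000-LOC project, technology constant 10,000, 2-year schedule.
print(f"{putnam_effort(100_000, 10_000, 2.0):.1f} person-years")  # about 62.5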

Putnam's Model is a simple and easy-to-use method for staffing level estimation, but it has some limitations. It doesn't take into account the skill level of the development team, the quality of the requirements, or the project management practices that are in place. Additionally, it assumes that the project is being developed using a conventional development approach and doesn't take into account agile development methodologies. Therefore, it is important to use multiple staffing level estimation techniques and consider other factors to get the most accurate estimate of the personnel needed for a software project.

It's important to mention that using only one method or model for staffing level estimation is not enough; multiple methods and models should be used to get a more accurate estimate. Additionally, it's important to consider the team's experience, the project's complexity, and the development methodology when using any model.

Software Requirements Specification (SRS)


Software Requirements Specification (SRS) is a document that describes the functional
and non-functional requirements for a software system. It serves as a contract between
the customer and the development team, outlining what the software will do and how it
will be implemented.

The SRS typically includes the following information:

1. Overview: A brief summary of the purpose and scope of the software.
2. Functional requirements: A list of the features and functionality that the software
must provide, including input and output formats, user interface requirements,
and performance requirements.
3. Non-functional requirements: A list of the non-functional requirements that the
software must meet, including security, reliability, maintainability, and scalability
requirements.
4. Constraints: A list of any limitations or constraints that must be considered
during the development of the software, such as hardware or software
dependencies.
5. Assumptions and dependencies: A list of any assumptions or dependencies that
have been made during the development of the SRS, such as the availability of
certain data or the use of specific technologies.
6. Appendices: Additional information that is relevant to the SRS, such as diagrams,
user scenarios, or acceptance criteria.

The SRS serves as the foundation for the development of the software and is used to
ensure that the final product meets the needs and expectations of the customer. It
should be a living document that is continuously updated throughout the software
development process to reflect any changes in the requirements or constraints of the
project.

It is important to note that a complete and accurate SRS is crucial for the success of a
software project, as it ensures that all stakeholders have a clear understanding of the
project's goals, objectives and requirements. Therefore, it's important to involve all
stakeholders in the requirements gathering process and to validate the SRS with the
stakeholders before starting the development process.


Characteristics of SRS Documents:

● Clear and concise: The SRS should clearly and concisely describe the
requirements for the software system in a way that is easy to understand for all
stakeholders.
● Complete: The SRS should include all the requirements for the software system,
including functional and non-functional requirements.
● Unambiguous: The SRS should avoid using ambiguous or vague language and
should be written in a way that is easily understood by all stakeholders.
● Verifiable: The SRS should include acceptance criteria that can be used to verify
that the software system meets the requirements.
● Traceable: The SRS should include a traceability matrix that links the
requirements to the design and testing of the software system.

Software Design
Software design is the process of defining the architecture, components, interfaces, and
other characteristics of a software system. It is the bridge between the requirements
and the implementation of the software system.

The main activities of software design include:

1. Identifying the software architecture: This involves determining the main components and their relationships, as well as the overall structure of the software system.
2. Defining the component interfaces: This involves specifying the inputs, outputs,
and behaviors of each component.
3. Designing the data structures and algorithms: This involves defining the data
structures and algorithms that will be used to implement the software system.
4. Identifying the design patterns: This involves identifying common design patterns
that can be used to simplify the design and improve the maintainability of the
software system.
5. Verifying the design: This involves evaluating the design against the
requirements and constraints, and making any necessary adjustments.

The software design should be a detailed and complete description of the software
system, including all its components, interfaces, and behavior. It should be written in a
way that is easily understood by the development team and other stakeholders.

There are several design methodologies and models that can be used to guide the
design process, such as the Waterfall model, the Agile model, and the Spiral model. It's
important to use a design methodology that matches the project's requirements and
constraints.

It's important to keep in mind that software design is an iterative process that should be
continuously reviewed and updated throughout the development process to ensure that
the final product meets the requirements and constraints of the project.

Classification of software design:

1. Structural design: This type of design focuses on the organization and decomposition of the software system into smaller components, such as classes, functions, and modules. It deals with the static structure of the software system and how the components interact with each other.
2. Behavioral design: This type of design focuses on the dynamic behavior of the
software system and how the components interact with each other. It deals with
the algorithms, data structures, and control flow of the software system.
3. Object-Oriented design: This type of design focuses on the use of objects and
classes to represent the software system. It emphasizes the use of inheritance,
polymorphism, and encapsulation to design the software system.
4. Functional design: This type of design focuses on the use of mathematical
functions to represent the software system. It emphasizes the use of
mathematical models and formal methods to design the software system.
5. Architectural design: This type of design focuses on the overall structure and
organization of the software system. It deals with the high-level organization of
the software system and how the components interact with each other.

Software Design Approaches:

1. Top-Down Design: This approach starts with a high-level view of the software
system and decomposes it into smaller components. It is a good approach for
large and complex systems.
2. Bottom-Up Design: This approach starts with the lowest-level components and
builds them up to create the software system. It is a good approach for small and
simple systems.
3. Iterative Design: This approach involves iteratively designing, implementing, and
testing the software system. It is a good approach for projects with changing
requirements.
4. Agile Design: This approach emphasizes flexibility, adaptability, and customer
involvement. It is a good approach for projects with uncertain requirements.
5. Formal Design: This approach emphasizes the use of formal methods,
mathematical models, and proof techniques to design the software system. It is
a good approach for safety-critical and mission-critical systems.

It's important to keep in mind that the choice of design approach will depend on the
specific constraints and requirements of the project, and that different approaches can
be combined to create a more comprehensive design process.
Software Design Approaches:

1. Top-Down Design:
● This approach starts with a high-level view of the software system and
decomposes it into smaller components.
● The high-level view is broken down into smaller and more manageable parts, and
each part is then designed in more detail.
● It is a good approach for large and complex systems as it allows for a clear
understanding of the overall structure of the system before diving into the details.
● It also allows for easier maintenance and modification of the system as the
high-level structure remains unchanged.
● One of the main disadvantages of this approach is that it can be difficult to change the high-level structure once it has been established.
2. Bottom-Up Design:
● This approach starts with the lowest-level components and builds them up to
create the software system.
● The low-level components are designed and implemented first, and then
integrated to create higher-level components.
● It is a good approach for small and simple systems as it allows for a clear
understanding of the details of the system before diving into the overall structure.
● One of the main disadvantages of this approach is that it can be difficult to understand the overall structure of the system before all the low-level components have been implemented.
3. Iterative Design:
● This approach involves iteratively designing, implementing, and testing the
software system.
● It allows for the design to evolve and adapt as the project progresses.

● It is a good approach for projects with changing requirements as it allows for
changes to be made to the design as the requirements change.
● One of the main disadvantages of this approach is that it can be difficult to control the scope of the project.
4. Agile Design:
● This approach emphasizes flexibility, adaptability, and customer involvement.
● It is based on the Agile software development methodology, which emphasizes
iterative and incremental development, collaboration between the customer and
the development team, and the ability to adapt to changing requirements.
● It is a good approach for projects with uncertain or rapidly changing
requirements, as it allows for the design to evolve and adapt as the project
progresses.
● One of the main disadvantages of this approach is that it can be difficult to ensure that the final product meets all the requirements and constraints of the project.
5. Formal Design:
● This approach emphasizes the use of formal methods, mathematical models,
and proof techniques to design the software system.
● It is a good approach for safety-critical and mission-critical systems, as it allows
for the design to be rigorously verified and validated.
● One of the main disadvantages of this approach is that it can be time-consuming and costly, and may not be appropriate for all types of systems.

It's important to note that different software design approaches can be combined to create a more comprehensive design process that addresses the specific constraints and requirements of the project. The chosen approach should provide the best balance of cost, schedule, and quality for the final product.

Function-Oriented Software Design

Function-oriented software design, also known as Structured Design, is a software
development method that emphasizes the use of functions and procedures to design a
software system. It is based on the idea that a software system can be represented as a
set of functions that interact with each other to provide the required functionality.

The main steps in function-oriented software design are:

1. Identifying the functions: This step involves identifying and documenting the
functions that the software system must perform.
2. Defining the interfaces: This step involves specifying the inputs and outputs of
each function, including the data types and formats.
3. Designing the data structures: This step involves designing the data structures
that will be used by the functions, including the data entities and relationships.
4. Designing the algorithms: This step involves designing the algorithms that will be
used by the functions to perform their tasks.
5. Verifying the design: This step involves evaluating the design against the
requirements and constraints and making any necessary adjustments.

Function-oriented software design is a simple and efficient method for software development, and it is well-suited for small and medium-sized systems. It allows for clear and well-defined interfaces between functions and provides a clear structure for the software system.

However, it can be difficult to handle complexity and change when the system becomes
larger and more complex. That's why Object-Oriented design (OOD) is considered to be
a more suitable method for large and complex systems as it provides a better way of
handling complexity and change.

It's important to note that, like any other method, Function-oriented software design has
its own advantages and disadvantages, and it's important to choose the most
appropriate method according to the project's characteristics and requirements.

Structured Analysis
Structured Analysis is a software development method that uses a structured and
systematic approach to analyze and design a software system. The main goal of
structured analysis is to understand and describe the problem domain, and to identify
and specify the requirements for the software system.

Structured Analysis includes the following steps:

1. Problem identification: This step involves identifying the problem to be solved by the software system, and defining the objectives and goals of the system.
2. Requirements gathering: This step involves gathering and documenting the
requirements for the software system, including functional and non-functional
requirements.
3. Data modeling: This step involves modeling the data requirements of the
software system, including the data entities, attributes, and relationships.
4. Process modeling: This step involves modeling the processes and functions of
the software system, including the input, output, and processing of data.
5. System design: This step involves designing the software system, including the
data structures, algorithms, and interfaces.
6. Implementation: This step involves implementing the software system based on
the design.
7. Testing and validation: This step involves testing the software system to ensure
that it meets the requirements and constraints.

Structured Analysis is a well-established method for software development and it is widely used in many organizations. It is considered to be a reliable and efficient method for software development, but it can be time-consuming and may not be suitable for projects with rapidly changing requirements.

It's important to note that Structured Analysis, like any other method, has its own
advantages and disadvantages, and it's important to choose the most appropriate
method according to the project's characteristics and requirements. Additionally, it's
important to follow a systematic and well-defined process when using Structured
Analysis to ensure that the final product meets the requirements and constraints of the
project.

DFD and Structured Design


Data Flow Diagrams (DFDs) are a graphical representation of the flow of data in a
software system, and they are often used in structured design methodologies.

A DFD consists of a set of symbols that are used to represent the different components
of the software system and their interactions. The main symbols used in a DFD are:

● Process: A process is a symbol that represents a function or a procedure that transforms inputs into outputs.
● Data Store: A data store is a symbol that represents a data repository, such as a database or a file.
● Data Flow: A data flow is a symbol that represents the flow of data between the different components of the system.
● External Entity: An external entity is a symbol that represents a source or destination of data outside the system, such as a user or another system.

DFDs are used to model the data flow in a software system, and they can be used to
represent the system at different levels of abstraction. They are used to identify the
inputs and outputs of the system, the data stores, and the processes that transform the
data.

A DFD can be used to represent the system at different levels of abstraction. A high-level DFD represents the system at a general level and shows the main components and their interactions. A detailed DFD represents the system at a more specific level and shows the inputs, outputs, and processes of each component.

DFDs are also used to identify the sources and destinations of data, the data stores, and
the processes that transform the data. This can help in identifying potential bottlenecks
or inefficiencies in the system and in making design decisions.

It's important to note that a DFD is not a flowchart, and it is not used to show the control
flow or the logic of the system. The purpose of a DFD is to show the data flow and
transformation in the system, and it is not meant to show the logic or the control flow of
the system.

Finally, it's important to note that DFDs can be used in conjunction with other tools and
techniques, such as structured design, to provide a more comprehensive view of the
system and to ensure that the final product meets the requirements and constraints of
the project.

Structured Design is a software development method that emphasizes the use of functions and procedures to design a software system. It is based on the idea that a software system can be represented as a set of functions that interact with each other to provide the required functionality, and DFDs are used in structured design to represent the data flow and transformation in the system.

By using DFDs, designers can identify the inputs, outputs, and processing of each function and how data flows among them. This allows them to understand the system requirements better and to design the system with a clear structure.

It's important to note that DFDs are not the only tool that can be used in structured
design, but they are widely used as they provide a simple and efficient way to represent
the data flow and transformation in the software system.

Object Oriented Design

Object-oriented design (OOD) is a software development method that emphasizes the
use of objects, classes, and their interactions to design a software system. It is based
on the object-oriented programming paradigm, which views a software system as a
collection of interacting objects.

The main concepts of OOD are:

1. Objects: An object is a representation of a real-world entity or concept; it has state, behavior, and identity.
2. Classes: A class is a blueprint that defines the properties and methods of an
object.
3. Inheritance: Inheritance is a mechanism that allows a class to inherit properties
and methods from a parent class.
4. Polymorphism: Polymorphism is a mechanism that allows objects to respond to
the same method call in different ways.
5. Encapsulation: Encapsulation is a mechanism that hides the implementation
details of an object from the outside world.
6. Abstraction: Abstraction is a mechanism that allows a class to be defined at a
high level of generality and to be refined as necessary.
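
A minimal sketch illustrating these concepts (the Shape and Circle classes are hypothetical examples, not from the text):

import math

class Shape:
    """Abstraction: a general concept that subclasses refine."""

    def __init__(self, name):
        self._name = name  # Encapsulation: state is kept behind the interface.

    def area(self):
        raise NotImplementedError  # Subclasses supply their own behavior.

    def describe(self):
        return f"{self._name} with area {self.area():.2f}"

class Circle(Shape):  # Inheritance: Circle reuses Shape's describe() method.
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):  # Polymorphism: the same call gives a class-specific result.
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

# Objects: instances with their own state, behavior, and identity.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())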

OOD provides a number of advantages over other software development methods, such
as better support for code reuse, encapsulation, and abstraction. OOD makes it easier to
manage complexity and change and it is well-suited for large and complex systems.

However, OOD can be more difficult to implement than other methods, and it requires a
deeper understanding of the problem domain. Additionally, OOD can be more difficult to
test and debug, as it can involve a large number of interacting objects.

It's important to note that OOD is not the only method that can be used for software
development and it's important to choose the most appropriate method according to
the project's characteristics and requirements. Additionally, it's important to follow a
systematic and well-defined process when using OOD to ensure that the final product
meets the requirements and constraints of the project.

Coding and Testing of Software


Coding and testing are two important stages in the software development process.

Coding is the process of writing the source code of a software system according to the
design and requirements. It involves writing the instructions that the computer will
execute to perform the required tasks. Good coding practices include writing clean,
readable, and well-organized code, using comments and documentation to explain the
code, and using version control to track changes to the code.

Testing is the process of evaluating a software system to ensure that it meets the
requirements and works as intended. There are several types of testing, such as unit
testing, integration testing, system testing, and acceptance testing. Unit testing is the
process of testing individual units or components of the software. Integration testing is
the process of testing the interactions between the different components of the
software. System testing is the process of testing the software as a whole. Acceptance
testing is the process of testing the software to ensure that it meets the customer's
requirements.

It's important to test the software thoroughly, as it helps to identify and fix bugs and
defects in the software early in the development process. It also helps to ensure that the
software meets the requirements and works as intended.

Automated testing is a widely used technique in software development; it helps to ensure that the software works as intended and that any changes made to the code do not break existing functionality.

It's important to note that coding and testing are iterative processes, and they may be
repeated multiple times during the development process to ensure that the software
meets the requirements and works as intended. Additionally, it's important to follow a
systematic and well-defined process when coding and testing to ensure that the final
product meets the requirements and constraints of the project.

Unit Testing
Unit testing is a method of testing individual units or components of a software system.
It is performed on the individual functions or methods of a class or module. The
purpose of unit testing is to ensure that each unit of the software performs as intended.
Unit tests are typically automated, and they are run every time the code is changed to
ensure that the changes do not break existing functionality.

Unit testing is performed by writing test cases that exercise the individual units of the
software. These test cases typically involve providing inputs to the unit and checking
the outputs to ensure that they match the expected results. Unit tests should be
designed to test the normal and abnormal cases of the unit.
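
For example, a unit test for a hypothetical apply_discount function (the function and its behavior are illustrative assumptions, not from the text) might exercise one normal case and one abnormal case using Python's built-in unittest module:

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_case(self):
        # Provide an input and check the output against the expected result.
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_abnormal_case(self):
        # The unit should reject out-of-range inputs.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()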

Unit tests are typically written by developers, and they are run as part of the
development process. Unit tests can be run automatically, for example, as part of a
continuous integration process, or they can be run manually.

Unit testing has several advantages, including:

● It helps to catch bugs early in the development process, before they become
more difficult and expensive to fix.
● It helps to ensure that changes to the code do not break existing functionality.
● It helps to improve the design of the code by encouraging the use of small,
focused, and testable units of code.
● It provides documentation of the intended behavior of the code.

Black Box Testing

Black box testing is a method of testing a software system from the external
perspective, without knowledge of the internal structure or implementation. The focus is
on the inputs and outputs of the system and how it behaves in response to different
inputs. The purpose of black box testing is to ensure that the software meets the
requirements and works as intended from the user's perspective.

Black box testing is performed by providing inputs to the system and checking the
outputs to ensure that they match the expected results. It can also be performed by
observing the system's behavior and checking if it behaves as expected. Black box tests
are typically written by testers, and they are run as part of the testing process. Black box
testing can be performed manually, or it can be automated.
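
For example, a black box test of a hypothetical grade function checks only inputs against expected outputs, at the boundaries of each input range, without looking at how the function is implemented:

def grade(score):
    """Hypothetical system under test: map a score from 0 to 100 to a grade."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# Boundary-value cases: inputs at the edges of each equivalence class.
cases = [(100, "A"), (90, "A"), (89, "B"), (80, "B"),
         (79, "C"), (70, "C"), (69, "F"), (0, "F")]

for score, expected in cases:
    actual = grade(score)
    assert actual == expected, f"grade({score}) = {actual}, expected {expected}"
print("all black box cases passed")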

There are several types of black box testing methods:

● Functional testing: This method is used to test the functional requirements of the
system. It involves testing the system's inputs, outputs, and user interface.
● Non-functional testing: This method is used to test the non-functional
requirements of the system. It involves testing the system's performance,
security, usability, and other non-functional aspects.
● Acceptance testing: This method is used to test the system to ensure that it
meets the customer's requirements. It is typically performed by the customer or
an end-user.
● Usability testing: This method is used to test the system's usability, or how easy it
is for users to understand and use the system.
● Compatibility testing: This method is used to test the system's compatibility with
different hardware, software, and operating systems.
● Performance testing: This method is used to test the system's performance, such
as response time and throughput.
● Security testing: This method is used to test the system's security, such as its
ability to protect against unauthorized access and data breaches.

Black box testing has several advantages, including:

● It helps to ensure that the software meets the requirements and works as
intended from the user's perspective.
● It helps to identify usability and accessibility issues.
● It helps to identify compatibility and performance issues.
● It is easy to perform and understand, even for non-technical stakeholders.

It's important to note that black box testing can miss issues that are internal to the system, so it should be combined with other testing methods, such as white box testing, to get a more comprehensive view of the system. Additionally, it's important to follow a systematic and well-defined process when performing black box testing to ensure that the final product meets the requirements and constraints of the project.

White Box Testing


White box testing is a method of testing a software system from the internal
perspective, with knowledge of the internal structure and implementation. The focus is
on the individual components of the system, such as classes and methods, and how
they interact with each other. The purpose of white box testing is to ensure that the
individual components of the system work correctly and that they are correctly
integrated.

White box testing is performed by exercising the individual components of the system
and checking the internal state of the system to ensure that it is correct. White box tests
are typically written by developers, and they are run as part of the development process.
White box testing can be performed manually, or it can be automated.

White box testing has several advantages, including:

● It helps to ensure that the individual components of the system work correctly
and that they are correctly integrated.
● It helps to improve the design of the code by identifying and eliminating design
flaws.
● It helps to identify performance and scalability issues.
● It helps to ensure that the code is secure and that it conforms to security
standards.

White box testing techniques include:

● Statement coverage: This technique is used to ensure that all statements in the
code have been executed at least once.
● Branch coverage: This technique is used to ensure that all branches in the code
have been executed at least once.
● Path coverage: This technique is used to ensure that all possible paths through
the code have been executed at least once.
● Logic coverage: This technique is used to ensure that all logical conditions in the
code have been executed with all possible outcomes.
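
For example, consider how many test cases each criterion demands for a small hypothetical function with two independent branch points:

def classify(x, y):
    """Hypothetical unit under test."""
    if x > 0:
        label = "positive"
    else:
        label = "non-positive"
    if y % 2 == 0:
        label += "-even"
    else:
        label += "-odd"
    return label

# classify(1, 2) and classify(-1, 3) together achieve statement and branch
# coverage (every branch outcome is taken once), but path coverage requires
# all four input combinations:
for x, y in [(1, 2), (1, 3), (-1, 2), (-1, 3)]:
    print(x, y, classify(x, y))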

It's important to note that white box testing can be more time-consuming and difficult to
perform than black box testing, as it requires a thorough understanding of the internal
structure and implementation of the system. Additionally, it's important to follow a
systematic and well-defined process when performing white box testing to ensure that
the final product meets the requirements and constraints of the project.

Debugging
Debugging is an important part of the software development process and it involves
identifying the cause of an error, determining the location of the error, and making the
necessary changes to fix the error.

Debugging techniques include:

● Reading error messages and log files: This is the first step in debugging, as they
can provide information about the cause and location of the error.
● Using the debugging tools: Debugging tools, such as debuggers, allow
developers to step through the code line by line, and inspect the values of
variables at different points in the code. This helps to identify the location of the
error and the cause of the error.
● Reproducing the error: This can help to identify the cause of the error and to
determine the conditions under which the error occurs.
● Using print statements or logging: This is a simple but effective method of
debugging, by adding print statements or log entries to the code, developers can
track the execution of the code and identify the location of the error.
● Reviewing the code: This is a good way to identify errors that are caused by a
lack of understanding of the problem domain or by poor coding practices.
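
As a sketch of the print/logging technique, the hypothetical function below is instrumented with log statements so that its execution can be traced and the location of a failure identified:

import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def average(values):
    """Hypothetical function being debugged."""
    log.debug("average() called with %r", values)
    total = sum(values)
    log.debug("total = %s, count = %s", total, len(values))
    return total / len(values)  # Fails with ZeroDivisionError when values is empty.

average([2, 4, 9])  # The log output shows the values flowing through the code.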

Program Analysis tools


Program analysis tools are software tools that are used to analyze the source code of a
software system. They can be used for a variety of purposes, such as finding bugs,
improving code quality, and analyzing performance. Examples of program analysis tools
include:

● Static code analysis tools: These tools analyze the source code without
executing it. They can be used to find potential bugs, security vulnerabilities, and
coding style issues. Examples of static code analysis tools include Lint,
SonarQube, and FindBugs. These tools can detect issues such as uninitialized
variables, null pointer exceptions, and infinite loops.
● Dynamic analysis tools: These tools analyze the software while it is running. They
can be used to find performance bottlenecks, memory leaks, and other issues
that are not easily identified by static analysis. Examples of dynamic analysis
tools include profilers, debuggers, and tracers. Profilers, for example, can be used
to measure the performance of the code and identify which functions or methods
are taking the most time.
● Memory analysis tools: These tools are used to analyze the memory usage of a
software system. They can be used to find memory leaks and other
memory-related issues. Examples of memory analysis tools include Valgrind,
Purify, and Address Sanitizer. These tools can detect issues such as memory
leaks, buffer overflows, and use-after-free errors.

System testing
System testing is the process of testing a software system as a whole. It is performed
after integration testing and its goal is to validate that the system meets the
requirements and works as intended.

System testing can include functional testing, which is testing that the system functions
as specified, and non-functional testing, which is testing that the system meets
non-functional requirements such as performance, security and usability. Acceptance
testing is also typically included in system testing. This is testing performed by the
customer or end-user to ensure that the system meets their requirements and is
suitable for use.

Functional testing includes:

● Unit testing: It has been performed earlier in the development process, but it can
also be performed at system testing level to ensure that individual units of the
software work correctly when integrated together.
● Integration testing: It has been performed earlier in the development process, but
it can also be performed at system testing level to ensure that the system works
correctly when integrated together.
● End-to-end testing: It tests the system from start to finish, simulating real-world
scenarios to ensure that the system functions as intended.
● Regression testing: It is performed after changes have been made to the system,
to ensure that the changes do not break existing functionality.

Non-functional testing includes:

● Performance testing: It is used to test the system's performance, such as response time and throughput, under different loads and conditions.
● Security testing: It is used to test the system's security, such as its ability to
protect against unauthorized access and data breaches.
● Usability testing: It is used to test the system's usability, or how easy it is for
users to understand and use the system.
● Compatibility testing: It is used to test the system's compatibility with different
hardware, software, and operating systems.

Acceptance testing includes:

● User acceptance testing (UAT): it is performed by end-users or customers to ensure that the system meets their requirements and is suitable for use.
● Operational acceptance testing (OAT): it is performed by the operations team to
ensure that the system can be deployed and maintained in the production
environment.
● Performance acceptance testing (PAT): it is performed to ensure that the system
meets the performance requirements.

It's important to note that system testing should be performed throughout the
development process, not only at the end. Additionally, it's important to follow a
systematic and well-defined process when performing system testing, to ensure that the
final product meets the requirements and constraints of the project.

Software reliability and Quality Assurance


Software reliability is the ability of a software system to perform its intended functions
without failure for a specified period of time. It is an important aspect of software
engineering and it is closely related to software quality.

Quality assurance (QA) is the process of ensuring that a software system meets the
specified requirements and works as intended. It involves a set of activities that are
performed throughout the software development process to identify and prevent errors
in the software.

There are several techniques that can be used to improve software reliability and quality
assurance, including:

● Requirements engineering: This is the process of defining and managing the requirements for a software system. It is important to ensure that the requirements are complete, consistent, and testable.
● Design and code reviews: These are techniques used to review the design and
code of a software system to identify errors and improve the quality of the code.
Design reviews focus on the overall architecture of the system, while code
reviews focus on the details of the implementation.
● Testing: This is the process of evaluating the software system to ensure that it
meets the specified requirements and works as intended. It includes unit testing,
integration testing, system testing, and acceptance testing.
● Maintenance: This is the process of modifying the software system to correct
errors and improve its performance. It includes error correction, optimization, and
updating the software to meet changing requirements.
● Configuration management: This is the process of controlling and tracking
changes to the software system. It includes version control, change
management, and release management.
● Quality metrics: This is the process of measuring the quality of a software
system. It includes metrics such as software size, complexity, and testing
coverage.

It's important to note that software reliability and quality assurance are ongoing
processes that should be performed throughout the software development life cycle. It's
also important to have a defined process and procedures in place for software reliability
and quality assurance, in order to ensure that the final product meets the requirements
and constraints of the project.

Reliability metric

Reliability metrics are used to measure the reliability of a software system. These
metrics are used to estimate the probability that a software system will perform its
intended functions without failure for a specified period of time.

Some of the common reliability metrics include:

● Mean time to failure (MTTF): This metric is the average time that the system will
run without failure. It is calculated by dividing the total operation time by the
number of failures.

● Mean time to repair (MTTR): This metric is the average time required to repair a
failed system. It is calculated by dividing the total repair time by the number of
failures.
● Failure rate: This metric is the number of failures per unit of time. It is calculated
by dividing the number of failures by the total operation time.
● Availability: This metric is the proportion of time that the system is operational. It
is calculated by dividing the total operation time by the total time, including both
operational and downtime.
● Reliability: This metric is the probability that the system will perform its intended
functions without failure for a specified period of time.

These metrics can be used to estimate the reliability of a software system during the
development process, and it can also be used to estimate the reliability of a software
system in the field. These metrics can be used to identify areas of the software system
that have a high failure rate, and to estimate the effect of changes to the software
system on the reliability.
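
To make the metric definitions above concrete, here is a minimal Python sketch that computes each of them from a hypothetical failure log; the function names and sample figures are illustrative assumptions, not part of any standard.

```python
# Minimal reliability-metric calculations, assuming we know the total
# operation time, total repair time, and the number of observed failures.

def mean_time_to_failure(total_operation_time, num_failures):
    """MTTF: average time the system runs between failures."""
    return total_operation_time / num_failures

def mean_time_to_repair(total_repair_time, num_failures):
    """MTTR: average time required to repair a failed system."""
    return total_repair_time / num_failures

def failure_rate(num_failures, total_operation_time):
    """Failures per unit of operation time."""
    return num_failures / total_operation_time

def availability(total_operation_time, total_repair_time):
    """Proportion of total time (operation + downtime) the system is up."""
    return total_operation_time / (total_operation_time + total_repair_time)

# Hypothetical example: 950 hours of operation, 5 failures, 50 hours of repair.
print(mean_time_to_failure(950, 5))   # 190.0 hours
print(mean_time_to_repair(50, 5))     # 10.0 hours
print(failure_rate(5, 950))           # ~0.0053 failures/hour
print(availability(950, 50))          # 0.95
```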

It's important to note that the assumptions made by the reliability metrics may not
always hold true, and the results should be interpreted with caution. Additionally, the
metrics do not take into account the impact of the system's environment and usage on
the reliability, for that reason it's recommended to use other models and techniques in
conjunction with the metrics to have a comprehensive view of the software reliability.

Musa’s Basic model


Musa's Basic Model is a reliability model that was proposed by John Musa in his book
"Software Reliability: Measurement, Prediction, Application." The model is based on the
assumption that the failure rate of a software system follows a negative exponential
distribution, which is a common assumption in reliability engineering.

The model consists of three main components:

● The number of faults: This is the number of errors or defects that are present in
the software system.
● The fault removal efficiency (FRE): This is the percentage of faults that are
removed during the development and testing process.
● The mean time to failure (MTTF): This is the average time that the system will run
without failure, given that the remaining faults are not critical.

The model uses these components to calculate the reliability of the software system,
which is defined as the probability that the system will run without failure for a specified
period of time.

The reliability of the software system is calculated using the following formula:

Reliability = MTTF / (MTTF + (Number of Faults x (1 - FRE)))

This formula can be used to estimate the reliability of a software system during the
development process, and it can also be used to estimate the reliability of a software
system in the field. The model can be used to identify areas of the software system that
have a high failure rate, and to estimate the effect of changes to the software system on
the reliability.
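
As a quick illustration, the sketch below implements the formula exactly as stated in these notes, with hypothetical inputs; note that Musa's published model is usually expressed in terms of failure intensity, so treat this as a simplified teaching form rather than the definitive model.

```python
# Reliability estimate per the formula above: the remaining (unremoved)
# faults are num_faults * (1 - fre).

def basic_reliability(mttf, num_faults, fre):
    remaining_faults = num_faults * (1 - fre)
    return mttf / (mttf + remaining_faults)

# Hypothetical numbers: 200-hour MTTF, 100 faults found, 95% removal
# efficiency, so 5 faults remain.
print(basic_reliability(200, 100, 0.95))  # ~0.976
```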

It's important to note that the assumptions made by the model may not always hold
true, and the results should be interpreted with caution. Additionally, the model does not
take into account the impact of the system's environment and usage on the reliability,
for that reason it's recommended to use other models and techniques in conjunction
with Musa's basic model to have a comprehensive view of the software reliability.

Software Quality Assurance


Software Quality Assurance (SQA) is the process of ensuring that a software system meets the specified requirements and works as intended. It involves a set of activities that are performed throughout the software development process to identify and prevent errors in the software. SQA is a critical component of the software development process and it is essential to ensure that the final product meets the requirements and constraints of the project.

Some of the common activities performed during SQA include:

● Requirements engineering: This is the process of defining and managing the requirements for a software system. It is important to ensure that the requirements are complete, consistent, and testable.
● Design and code reviews: These are techniques used to review the design and code of a
software system to identify errors and improve the quality of the code. Design reviews
focus on the overall architecture of the system, while code reviews focus on the details
of the implementation.
● Testing: This is the process of evaluating the software system to ensure that it meets the
specified requirements and works as intended. It includes unit testing, integration
testing, system testing, and acceptance testing.
● Configuration management: This is the process of controlling and tracking changes to
the software system. It includes version control, change management, and release
management.
● Quality metrics: This is the process of measuring the quality of a software system. It
includes metrics such as software size, complexity, and testing coverage.
● Auditing and inspection: This is the process of reviewing the software development
process to ensure that it is being performed in accordance with established standards
and procedures.

It's important to note that SQA is an ongoing process that should be performed throughout the software development life cycle. It's also important to have a defined process and procedures in place for SQA, in order to ensure that the final product meets the requirements and constraints of the project and is of high quality and fit for its intended purpose.

ISO 9000 and SEI CMM: A Comparison

ISO 9000 and SEI CMM (Capability Maturity Model) are two widely recognized standards
for quality management and process improvement in software engineering.

ISO 9000 is a set of international standards for quality management that was first
published in 1987 by the International Organization for Standardization (ISO). The
standard provides a framework for quality management systems and it is designed to
help organizations ensure that their products and services meet customer requirements
and are consistent with international standards. The standards in ISO 9000 include
guidelines for quality management systems, quality assurance, and quality control.

SEI CMM (Capability Maturity Model) is a model for improving the software
development process that was developed by the Software Engineering Institute (SEI) at
Carnegie Mellon University. The model provides a framework for evaluating the maturity
of an organization's software development process and it is designed to help
organizations improve the quality of their software and the efficiency of their
development process. The model consists of five levels, each representing a different
level of maturity in the software development process.

ISO 9000 and SEI CMM are both recognized standards for quality management and
process improvement in software engineering, but they have some differences:

● ISO 9000 is focused on quality management systems and it provides guidelines for quality management, quality assurance, and quality control. SEI CMM, on the other hand, is focused on improving the software development process and it provides a framework for evaluating the maturity of an organization's software development process.
● ISO 9000 is intended for organizations of all types and sizes, while SEI CMM is
primarily intended for software development organizations.

● ISO 9000 is a standard: a set of guidelines that an organization can choose to follow. SEI CMM is a model: a framework that an organization can use to evaluate its software development process and identify areas for improvement.

Both ISO 9000 and SEI CMM are widely recognized standards for quality management
and process improvement in software engineering and they can be used together to
improve the quality of software and the efficiency of the development process.
Organizations can use ISO 9000 to establish a quality management system and SEI
CMM to evaluate and improve their software development process.

Software Maintenance
Software maintenance is the process of modifying a software system to correct errors, improve performance, or adapt to changes in the system's environment or requirements. It is a critical aspect of software engineering and it is an ongoing process that is performed throughout the software development life cycle.

There are several types of software maintenance, including:

● Corrective maintenance: This is the process of correcting errors or defects in the software system. It includes debugging, error correction, and testing.
● Adaptive maintenance: This is the process of modifying the software system to adapt to
changes in the system's environment or requirements. It includes updates, upgrades, and
modifications to the software system.
● Perfective maintenance: This is the process of improving the performance or
functionality of the software system. It includes optimization, adding new features, and
improving the usability of the software system.
● Preventive maintenance: This is the process of performing activities to prevent errors or
defects from occurring in the software system. It includes testing, code reviews, and
maintenance of documentation.

Software maintenance is a complex process that requires a thorough understanding of the software system, the system's environment, and the requirements of the system's users. It also requires a well-defined process and procedures to ensure that the software system is maintained in a controlled and efficient manner.

The maintenance process includes several activities such as:

● Maintenance planning: This is the process of identifying the maintenance activities that
will be performed and the resources that will be required.
● Maintenance execution: This is the process of performing the maintenance activities.
● Maintenance evaluation: This is the process of evaluating the results of the maintenance
activities and determining whether the maintenance objectives have been met.
● Maintenance closure: This is the process of closing out the maintenance activities and
documenting the results.

Software maintenance is an essential component of software engineering and it's important to have a well-defined process and procedures in place for software maintenance in order to ensure that the final product meets the requirements and constraints of the project.

Maintenance Process Models


Maintenance process models are frameworks that describe the activities and tasks that are performed during the software maintenance process. These models provide a structured approach for the maintenance process and they help to ensure that the software system is maintained in a controlled and efficient manner.

Some of the common maintenance process models include:

● Waterfall model: This model is a linear sequential model that describes the maintenance
process as a series of stages. The stages include maintenance planning, analysis, design,
implementation, testing, and deployment.

● Spiral model: This model is a cyclical model that describes the maintenance process as a
series of iterations. Each iteration includes maintenance planning, analysis, design,
implementation, testing, and deployment.
● Agile model: This model is an iterative and incremental model that describes the
maintenance process as a series of short iterations. Each iteration includes maintenance
planning, analysis, design, implementation, testing, and deployment.

The Waterfall model, being linear and sequential, is best suited for small systems with well-defined requirements and a low risk of change. The Spiral model, being cyclical, is best suited for large systems with complex requirements and a high risk of change. The Agile model, being iterative and incremental, is best suited for systems with rapidly changing requirements and a high degree of uncertainty.

Each of the models has its own advantages and disadvantages, and the choice of the model to use depends on the characteristics of the software system and the constraints of the project. It's important to choose a maintenance process model that is appropriate for the software system and the project, in order to ensure that the software system is maintained in a controlled and efficient manner.

Reverse Engineering
Reverse engineering in software engineering is the process of analyzing and understanding a software system by examining its components and structure. It is a method of uncovering the design and architecture of a software system, and it can be used to improve the maintainability, scalability, and security of the system.

There are several reasons why reverse engineering is used in software engineering, including:

● To understand and analyze legacy systems: Reverse engineering is used to analyze and
understand older systems that were developed before the current standards, practices,
and tools were in place.
● To improve the quality of the software: Reverse engineering can be used to identify
errors, defects, and areas of improvement in the software.

● To update, upgrade, or migrate the software: Reverse engineering is used to update,
upgrade, or migrate the software to a newer version or platform.
● To create an equivalent, compatible, or similar system: Reverse engineering can be used
to create an equivalent, compatible, or similar system based on the design and
architecture of an existing system.

Reverse engineering can be performed at different levels of abstraction, including:

● Code level: Reverse engineering at the code level involves analyzing the source code and understanding the algorithms, data structures, and control flow of the software (a small illustration follows this list).
● Design level: Reverse engineering at the design level involves analyzing the architecture
and design of the software to understand the organization and relationships of the
software components.
● Functionality level: Reverse engineering at the functionality level involves analyzing the
functionality and requirements of the software to understand the purpose and behavior
of the software.
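
As a small, hedged illustration of code-level analysis, the sketch below uses Python's standard-library dis module to inspect the bytecode of a hypothetical function. Real reverse engineering of compiled binaries relies on dedicated disassemblers and decompilers, so this only conveys the basic idea.

```python
# Disassembling a function exposes the operations and control flow the
# interpreter actually executes, even if the source were unavailable.
import dis

def discount(price, rate=0.1):  # hypothetical example target
    return price * (1 - rate)

dis.dis(discount)
```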

Reverse engineering is a complex process that requires a thorough understanding of the software system, the system's environment, and the requirements of the system's users. It also requires specialized tools, techniques, and methodologies to ensure that the software system is reverse-engineered in a controlled and efficient manner. It's important to comply with copyright laws when reverse engineering a software system to avoid legal issues.

Estimation of Maintenance Cost


Estimating maintenance costs is an important aspect of software engineering, as it helps organizations to budget and plan for the resources required to maintain a software system. There are several methods and techniques that can be used to estimate maintenance costs, including:

● Expert judgment: This method involves using the knowledge and experience of experts to estimate the maintenance costs. The experts may be experienced developers, managers, or other stakeholders who have a good understanding of the software system and the maintenance process.
● Parametric estimation: This method involves using a mathematical model to estimate the
maintenance costs based on historical data. The most common parametric estimation
method is the COCOMO (COnstructive COst MOdel) model, which uses the size of the
software system and a set of cost drivers to estimate the maintenance costs.
● Analogous estimation: This method involves using the costs of similar software systems to
estimate the maintenance costs of the current software system. The costs of the similar
systems are adjusted based on the differences in size, complexity, and other factors.
● Three-point estimation: This method involves using a set of three estimates to estimate the maintenance costs: a best-case estimate, a most likely estimate, and a worst-case estimate. The three estimates are used to calculate a range of possible costs and a most likely cost (a small sketch follows this list).
● Bottom-up estimation: This method involves breaking down the software system into smaller
components and estimating the costs of each component separately. The costs of the
components are then added up to estimate the total maintenance costs.
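
As an illustration of three-point estimation, here is a minimal sketch using the common PERT-style weighting E = (O + 4M + P) / 6; the weighting choice and the sample figures are assumptions for demonstration.

```python
# PERT-style three-point estimate: weights the most likely value 4x and
# derives a rough spread from the best- and worst-case estimates.

def three_point_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # common rough spread measure
    return expected, spread

# Hypothetical maintenance effort in person-months.
expected, spread = three_point_estimate(10, 14, 24)
print(expected, spread)  # 15.0, ~2.33
```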

It's important to note that maintenance cost estimation is a complex and uncertain task, and it's important to use a combination of methods and techniques in order to have a comprehensive view. Additionally, keep in mind that actual costs may vary significantly from the estimates due to unknown variables such as changes in requirements, technology, or market conditions. To mitigate this, it's recommended to perform regular cost estimation reviews and update the estimates accordingly.

Important Topics for Exam


Transaction and Transform Analysis
Transaction and transform analysis are techniques used in software engineering to understand and improve the performance of software systems.

Transaction analysis is a technique used to measure the performance of software systems by analyzing the number and type of transactions that the system can handle. A transaction is a unit of work that the system performs, such as a request for information, an update to a database, or a calculation. Transaction analysis is used to determine the system's capacity and throughput and to identify bottlenecks and other performance issues.

Transform analysis is a technique used to measure the performance of software systems by analyzing the time required to transform input data into output data. A transform is a process that the system performs on the data, such as a calculation, a search, or a format conversion. Transform analysis is used to determine the system's response time, efficiency, and scalability and to identify performance issues.

Both transaction and transform analysis are used to measure the performance of software systems and to identify performance issues, but they focus on different aspects of the system. Transaction analysis focuses on the number and type of transactions that the system can handle, while transform analysis focuses on the time required to transform input data into output data. These two techniques can be used together to provide a comprehensive view of the system's performance.

To perform transaction and transform analysis, a set of tools, methods, and techniques is used. These include:

● Profiling: This technique involves measuring the performance of the software system by analyzing the execution time of the system's code (a short sketch follows this list).
● Monitoring: This technique involves measuring the performance of the software system by
monitoring the system's resource usage, such as CPU usage, memory usage, and network
usage.
● Tracing: This technique involves measuring the performance of the software system by
tracing the execution of the system's code and identifying performance issues.
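
As a small example of profiling, the sketch below uses Python's standard-library cProfile to show where time is spent in a hypothetical transform; the workload and function names are assumptions for demonstration.

```python
# cProfile reports call counts and cumulative time per function, which
# helps locate the hot spots in a transaction or transform.
import cProfile

def transform(records):
    peak = max(records)                       # hypothetical transform step
    return sorted(r / peak for r in records)  # scale, then sort the batch

cProfile.run("transform(list(range(1, 100000)))")
```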

It's important to note that the results of transaction and transform analysis are used to improve the performance of software systems, but these techniques are not foolproof and results may vary depending on the system's characteristics and environment.

Verification Vs Validation
Verification and validation are two important concepts in software engineering that are used to ensure that a software system meets its requirements and specifications.

Verification is the process of ensuring that the software system is built according to the requirements and specifications. It includes activities such as reviewing requirements and design documents, performing inspections, and conducting static analysis. Verification ensures that the software system is built correctly and that it meets the requirements and specifications.

Validation is the process of ensuring that the software system meets the needs of the customer or user. It includes activities such as testing, user acceptance testing, and beta testing. Validation ensures that the software system is fit for its intended purpose and that it meets the needs of the customer or user.

Verification and validation are related but distinct activities. Verification is focused on ensuring that the software is built correctly and meets the requirements, while validation is focused on ensuring that the software meets the needs of the customer or user.

To perform verification and validation, a set of tools, methods, and techniques is used. These include:

● Review and inspection: This technique involves reviewing requirements, design documents,
and source code, and performing inspections to ensure that the software system meets the
requirements and specifications.
● Testing: This technique involves executing the software system to ensure that it meets the
requirements and specifications and that it is fit for its intended purpose.
● Static analysis: This technique involves analyzing the source code of the software system to identify errors, defects, and potential vulnerabilities (a short sketch follows this list).
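
As a tiny illustration of static analysis, the sketch below uses Python's standard-library ast module to parse source code without executing it and flag functions that lack docstrings. A real static analyzer checks far more properties, and the sample source here is hypothetical.

```python
# Parse source into an abstract syntax tree and inspect it statically.
import ast

SOURCE = '''
def add(a, b):
    return a + b

def sub(a, b):
    "Subtract b from a."
    return a - b
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"line {node.lineno}: function '{node.name}' has no docstring")
```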

It's important to note that the verification and validation processes are ongoing, and that it's critical to have a well-defined process and procedures in place to ensure that the software system is verified and validated throughout the software development life cycle. This helps to identify and correct errors, defects, and other issues early in the development process, which can save time and money in the long run.

Iso 9000 Certification

ISO 9000 certification in software engineering refers to the process of obtaining formal
recognition that a software development organization's quality management system (QMS)
meets the requirements of the ISO 9000 standards.
ISO 9000 is a family of international standards for quality management systems. The ISO 9000 standards provide a framework for organizations to develop and implement a quality management system (QMS) that helps them to meet customer requirements and improve overall performance.

ISO 9000 certification is a formal recognition that an organization has a QMS in place that meets the requirements of the ISO 9000 standards. To achieve ISO 9000 certification, an organization must demonstrate that its QMS meets the requirements of the ISO 9000 standards through an independent, third-party audit.

The ISO 9000 standards are divided into three parts:

● ISO 9000: This standard provides the basic concepts and definitions for quality management
systems.
● ISO 9001: This standard is the most widely used of the ISO 9000 standards and provides the
requirements for a QMS. It specifies the requirements for a QMS that an organization must
meet to be certified.
● ISO 9004: This standard provides guidance for organizations on how to improve their
performance and achieve customer satisfaction.

ISO 9001:2015 is the most recent and most widely used version of the standard. It is based on a process approach and focuses on the customer and leadership. It also includes risk-based thinking as a key element, which helps organizations to identify and manage risks that could affect the quality of the product or service.

ISO 9000 certification is not mandatory, but it can be beneficial for an organization in many ways. Some of the benefits of ISO 9000 certification include:

● Improved customer satisfaction: ISO 9000 certification helps organizations to better understand and meet customer requirements, which can lead to improved customer satisfaction.
● Increased efficiency and effectiveness: ISO 9000 certification helps organizations to improve
their processes and increase efficiency and effectiveness.
● Enhanced reputation: ISO 9000 certification can enhance an organization's reputation and
credibility, which can lead to new business opportunities and increased sales.

It's important to note that ISO 9000 certification is not a one-time event but an ongoing process that requires regular monitoring, measurement, and improvement. Organizations that are ISO 9000 certified must maintain their QMS and demonstrate compliance through regular audits.

Balanced DFD
A balanced Data Flow Diagram (DFD) is a graphical representation of the flow of data through a software system. It is used to model the flow of data between processes, data stores, and external entities in a software system. A balanced DFD is a variation of the traditional DFD that is used to improve the readability and understandability of the diagram.

A balanced DFD is designed to have a similar number of processes, data stores, and external entities on each level of the diagram. This is achieved by breaking down complex processes into smaller sub-processes and by using different levels of abstraction to represent different aspects of the system.

A balanced DFD is composed of four main elements:

● Processes: Represent the actions or transformations that are performed on the data.
● Data stores: Represent the locations where data is stored, such as databases or files.
● External entities: Represent the sources and destinations of data, such as users or external systems.
● Data flows: Represent the movement of data between the processes, data stores, and external entities.

The main advantage of a balanced DFD is that it is easy to read and understand. By balancing the number of elements on each level, the diagram becomes more organized and less cluttered, which makes it easier to identify and understand the flow of data in the system.

To create a balanced DFD, the following steps can be followed:

● Identify the main processes, data stores, and external entities in the system.
● Break down complex processes into smaller sub-processes.
● Use different levels of abstraction to represent different aspects of the system.
● Draw the DFD with a similar number of processes, data stores, and external entities on each
level.

It's important to note that a balanced DFD is a variation of the traditional DFD; it is not mandatory, but it can be useful when a clear and organized representation of the flow of data in the system is needed. Additionally, DFDs can be useful in the early stages of software development to understand the system requirements and to communicate them to the stakeholders.

Structure Chart
A structure chart (also called a modular diagram or program structure chart) is a graphical representation of the static structure of a software system. It is used to model the organization and relationships of the software modules, also known as components, that make up the system. Structure charts are often used in the design phase of the software development process to help understand the organization and relationships of the software components.

A structure chart is composed of a set of boxes and arrows. Each box represents a module and contains information about the module, such as its name, inputs, outputs, and functions. The arrows represent the relationships between the modules, such as calls, data flow, or inheritance.

The main advantages of a structure chart are:

● It helps to understand the organization and relationships of the software modules in a clear
and organized way.
● It makes it easy to identify the dependencies between the modules, which can help to
identify potential issues, such as circular dependencies.
● It makes it easy to identify the modules that need to be tested and the modules that need to
be maintained.

To create a structure chart, the following steps can be followed:

● Identify the main modules and functions of the system.
● Draw a box for each module and indicate the inputs, outputs, and functions of the module.
● Draw arrows between the modules to indicate the relationships between them, such as calls, data flow, or inheritance.
● Use different symbols or colors to indicate different types of relationships or modules.

It's important to note that a structure chart is a useful tool in the design phase of the software development process, but it is not mandatory. The choice of representation tools depends on the complexity of the system and the needs of the development team. Additionally, a structure chart can be used in conjunction with other tools and techniques, such as data flow diagrams, to provide a more complete understanding of the system.

Logical and Physical DFD

In software engineering, Data Flow Diagrams (DFDs) are used to model the flow of data in a system. There are two types of DFDs: logical and physical.

A logical Data Flow Diagram (DFD) is a high-level representation of the flow of data in a system. It is used to model the flow of data between processes, data stores, and external entities at a conceptual level. Logical DFDs do not show the actual physical components of the system, such as hardware or software, but rather focus on the flow of data and how it is transformed by the processes.

A physical Data Flow Diagram (DFD) is a low-level representation of the flow of data in a system. It is used to model the flow of data between physical components of a system, such as hardware and software. Physical DFDs show the actual physical components of the system, such as servers, databases, and software applications, and how they interact to process and store data.

The main difference between logical and physical DFDs is the level of abstraction. Logical DFDs are more abstract and provide a high-level view of the system, while physical DFDs are more detailed and provide a low-level view of the system.

Logical DFDs are useful for understanding the overall flow of data in a system and how it is transformed by the processes. They are also useful for identifying potential issues, such as data bottlenecks or redundant processes, and for communicating the system requirements to stakeholders.

Physical DFDs are useful for understanding the actual physical components of a system and how they interact to process and store data. They are also useful for identifying potential issues, such as hardware or software constraints, and for planning the deployment and maintenance of the system.

It's important to note that logical and physical DFDs are not mutually exclusive; they can be used together to provide a more complete understanding of the system. Logical DFDs can be used to understand the overall flow of data in a system, and physical DFDs can be used to understand the actual physical components of the system and how they interact to process and store data.

DFD Model Creation: Level 0 and Level 1


Creating a Data Flow Diagram (DFD) is a process that involves breaking down a system
into its components and representing the flow of data between these components. The
DFD can be created at different levels of abstraction and detail, and it is common to
create a DFD at multiple levels to provide a more complete understanding of the
system. The two most common levels of DFDs are level 0 and level 1.

Level 0 DFD: This is the highest level of abstraction and is also called the context
diagram. It represents the overall flow of data in the system and the relationships
between external entities and the system. A level 0 DFD typically includes a single
process box that represents the entire system and external entities that represent the
sources and destinations of data. The data flows between the external entities and the
system are also represented.

Level 1 DFD: This level of DFD provides a more detailed view of the system by breaking
down the single process box of the level 0 DFD into smaller subprocesses. Each
subprocess represents a specific function or operation of the system. Data stores,
which represent the locations where data is stored, are also added to the diagram. The
data flows between the subprocesses, data stores, and external entities are represented
by arrows. Level 1 DFDs can be further decomposed into lower levels of DFDs to provide
even more detail.

Creating a DFD involves a series of steps:

● Define the scope of the system you want to model.
● Identify the main processes, data stores, and external entities in the system.
● Create the level 0 DFD, which represents the overall flow of data in the system and the relationships between external entities and the system.
● Create the level 1 DFD, which provides a more detailed view of the system by breaking down the single process box of the level 0 DFD into smaller subprocesses and adding data stores.
● Repeat this process for lower levels of DFDs as needed to provide more detail.

It's important to note that DFD is a powerful tool to understand and represent the flow of
data in a system, but it's not mandatory. The choice of representation tool depends on
the complexity of the system and the needs of the development team. Additionally, it's
important to keep in mind that the DFD model creation process is iterative, and it's
expected to go through several revisions before it can be considered complete and
accurate.

Characteristics of a good SRS


A Software Requirements Specification (SRS) document is a document that describes the requirements for a software system. It is an important document that is used to communicate the requirements of a system to stakeholders, such as developers, customers, and users. A good SRS document should have certain characteristics that make it clear, accurate, and usable.

1. Clear: A good SRS document should be written in clear, simple, and easy-to-understand
language. The requirements should be stated in a way that is unambiguous and easy to
interpret.
2. Complete: A good SRS document should be complete and should include all the
requirements for the system. It should include functional requirements, non-functional
requirements, and constraints.
3. Consistent: A good SRS document should be consistent in its terminology, formatting, and
organization. The document should use the same terms and phrases throughout, and the
format should be easy to follow.
4. Traceable: A good SRS document should be traceable. It should include a traceability matrix
that shows the relationship between the requirements and other documents, such as design
documents and test plans.
5. Verifiable: A good SRS document should be verifiable. The requirements should be stated in
a way that they can be tested and verified. The requirements should be specific, measurable,
and testable.

6. Modifiable: A good SRS document should be modifiable. It should be easy to update and
change as the project progresses and new requirements are identified.
7. Prioritized: A good SRS document should prioritize the requirements, making it clear which
are the most important and which are less important. This will help the development team to
focus on the most important requirements first.
8. User-focused: A good SRS document should be user-focused. It should describe the
requirements from the perspective of the user and how the system will be used by the user.

It's important to note that it is not easy to create an SRS document that meets all of these characteristics, which is why the document should be reviewed and approved by all stakeholders before development begins. Additionally, SRS documents are living documents: they are expected to be updated and changed throughout the software development life cycle as the requirements change and evolve.

Coupling and Cohesion


In software engineering, coupling and cohesion are two important concepts that describe the degree of interdependence and organization within a software system.

Coupling refers to the degree of interdependence between software modules or components. A system with high coupling has modules that are closely connected to each other, while a system with low coupling has modules that are relatively independent of each other. High coupling can make a system more difficult to understand, test, and maintain, while low coupling can make a system more flexible and easy to change.


There are several types of coupling:

1. Data Coupling: This type of coupling occurs when one module passes data directly
to another module. It is considered to be the least restrictive type of coupling.

2. Control Coupling: This type of coupling occurs when one module controls the
execution of another module. It is considered to be a moderate type of coupling.

3. Stamp Coupling: This type of coupling occurs when one module uses the data
structure of another module. It is considered to be a moderate type of coupling.

4. Content Coupling: This type of coupling occurs when one module modifies the data
of another module. It is considered to be a restrictive type of coupling.

5. Common Coupling: This type of coupling occurs when two or more modules share a
global variable or data. It is considered to be a restrictive type of coupling.

Because high coupling makes a system harder to understand, test, and maintain, it's important to strive for low coupling in order to keep the system flexible and easy to change. The sketch below contrasts the two ends of the spectrum.
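
Here is a hypothetical Python sketch contrasting data coupling (the caller passes exactly the data the callee needs) with common coupling (modules communicate through shared global state); the names and figures are illustrative assumptions.

```python
# Data coupling: behavior is fully determined by the arguments passed in.
def compute_tax(amount, rate):
    return amount * rate

# Common coupling: both functions depend on a shared global, so a change
# made in one place silently alters behavior elsewhere.
TAX_RATE = 0.2

def compute_tax_global(amount):
    return amount * TAX_RATE

def apply_discount_global():
    global TAX_RATE
    TAX_RATE = 0.15  # side effect felt by every user of TAX_RATE

print(compute_tax(100, 0.2))    # 20.0
apply_discount_global()
print(compute_tax_global(100))  # 15.0 -- behavior changed from a distance
```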

Cohesion refers to the degree of organization within a module or component. A system with high cohesion has modules that are well-organized and focused on a single responsibility, while a system with low cohesion has modules that are poorly organized and do not have a clear purpose. High cohesion can make a system more maintainable and easy to understand, while low cohesion can make a system more difficult to understand and change.


High cohesion is generally considered to be a desirable characteristic of a software system because it makes the system more maintainable and easy to understand. Modules with high cohesion are easy to test and change, and they are less likely to introduce bugs or errors into the system. Additionally, highly cohesive modules are more reusable and less prone to side effects when changes are made.

There are several types of cohesion in software engineering:

● Functional Cohesion: This refers to the degree to which all the statements in a module or component are related to the same single function or responsibility (a short example follows this list).

● Sequential Cohesion: This refers to the degree to which all the statements in a
module or component are executed in a specific order or sequence.

● Communicational Cohesion: This refers to the degree to which all the statements in
a module or component are related to the same data or communication.

● Procedural Cohesion: This refers to the degree to which all the statements in a
module or component are related to a common control flow.

● Temporal Cohesion: This refers to the degree to which all the statements in a
module or component are related to a common time or event.

● Logical Cohesion: This refers to the degree to which all the statements in a module
or component are related to a common goal or purpose.
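
As a hypothetical illustration of functional cohesion, the first function below does exactly one job, while the second mixes unrelated duties (computation, I/O, and formatting) and therefore has low cohesion; the names and file path are assumptions.

```python
def order_total(prices, tax_rate):
    """Functionally cohesive: computes one thing and nothing else."""
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

def do_everything(prices, tax_rate, logfile="orders.log"):
    """Low cohesion: totals an order, logs it, and formats a report."""
    total = sum(prices) * (1 + tax_rate)
    with open(logfile, "a") as f:   # unrelated I/O responsibility
        f.write(f"total={total}\n")
    return f"REPORT: {total:.2f}"   # unrelated presentation responsibility
```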

Coupling and cohesion are closely related concepts, and it's important to find the right balance between them. A system with low coupling and high cohesion is considered well-structured, while a system with high coupling and low cohesion is considered poorly structured.

There are several techniques that can be used to measure coupling and cohesion, such as:

● McCabe's Cyclomatic Complexity: This is a measure of the complexity of a module, which is related to the number of decision points in the module.
● Fan-In and Fan-Out: These are measures of the number of modules that call a module and the number of modules that a module calls, respectively.

● LCOM (Lack of Cohesion of Methods): This is a measure of the degree of cohesion within a
module, based on the number of methods that share class variables.

It's important to note that coupling and cohesion are not the only factors that determine the quality of a software system, but they are important indicators that can help to identify potential issues and improve the maintainability and understandability of the system. Additionally, keep in mind that the best approach is to strike a balance between the two, since achieving low coupling and high cohesion everywhere is not always possible or desirable.

Software Models
In software engineering, a software model is a representation of a software system that
is used to communicate and design the system. There are several types of software
models that are used in software engineering, each with its own strengths and
weaknesses.

1. Waterfall Model: The Waterfall model is a linear and sequential model that is
used for software development. It consists of distinct phases such as
requirements gathering, design, implementation, testing, and maintenance. Each
phase is completed before the next one begins and there is no overlapping or
iterative development.
2. Agile Model: The Agile model is an iterative and incremental model that is used
for software development. Agile methodologies such as Scrum, Kanban, and XP,
emphasize on customer collaboration, flexibility, and rapid delivery of working
software.
3. Spiral Model: The Spiral model is a combination of the Waterfall model and the
Iterative model. This model is used for high-risk projects and it's based on the
idea of "risk management" through the iteration. The spiral model has four
phases: Planning, Risk Analysis, Engineering, and Evaluation.
4. V-Model: The V-Model is a graphical representation of a software development
process. It's based on the idea of "verification and validation" where each stage
of the development process has a corresponding testing stage.

5. Prototype Model: The Prototype model is an iterative model that creates a
working model of the software system as early as possible. The prototype is
used to gather feedback from stakeholders and users, and it's refined until it
meets the requirements.
6. Incremental Model: The Incremental model is an iterative model that delivers a
working version of the software system in incremental stages. Each increment
builds upon the previous one and adds new functionality until the final version is
reached.
7. RAD Model: The Rapid Application Development (RAD) model is an iterative model that uses prototyping and rapid development techniques to deliver a working version of the software system as quickly as possible.

