SE Unit 1
Software Engineering
The term software engineering is the product of two words, software, and engineering.
The software is a collection of integrated programs.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain,
and improve frameworks, processes, etc.
Software Engineering is an engineering branch concerned with the development of software products using
well-defined scientific principles, techniques, and procedures. The result of software engineering is an
effective and reliable software product.
Definitions
Def-1: IEEE defines software engineering as:
a. The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to
software.
b. The study of approaches as in the above statement.
Def-2: Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to obtain,
economically, software that is reliable and works efficiently on real machines.
Why is Software Engineering required? (Need/importance of Software Engineering)
Without using software engineering principles it would be difficult to develop large programs.
In industry it is usually needed to develop large programs to accommodate multiple functions.
A problem with developing such large commercial programs is that the complexity and
difficulty levels of the programs increase exponentially with their sizes.
For example, a program of size 1,000 lines of code has some complexity. But a program with
10,000 LOC is not just 10 times more difficult to develop, but may as well turn out to be 100
times more difficult unless software engineering principles are used.
In such situations software engineering techniques come to the rescue.
Dynamic Nature: Because software keeps changing with the environment in which it is used, new
upgrades need to be made to the existing product.
Quality Management: Better procedure of software development provides a better and
quality software product.
• The problem has to be decomposed such that each component of the decomposed problem can be
solved independently and then the solution of the different components can be combined to get the
full solution.
• A good decomposition of a problem as shown in fig. 33.5 should minimize interactions among
various components.
• If the different subcomponents are interrelated, then the different components cannot be solved
separately and the desired reduction in complexity will not be realized.
Program vs. Software Product
Programs are developed by individuals for their personal use. They are, therefore, small in size
and have limited functionality, but software products are extremely large.
In case of a program, the programmer himself is the sole user but on the other hand, in case of
a software product, most users are not involved with the development.
In case of a program, a single developer is involved but in case of a software product, a large
number of developers are involved.
For a program, the user interface may not be very important, because the programmer is the
sole user.
On the other hand, for a software product, user interface must be carefully designed and
implemented because developers of that product and users of that product are totally
different.
In case of a program, very little documentation is expected, but a software product must be
well documented.
A program can be developed according to the programmer’s individual style of development,
but a software product must be developed using the accepted software engineering
principles.
a. During the 1950s, most programs were being written in assembly language. These programs
were limited to a few hundred lines of assembly code. Every programmer developed
programs in his own individual style - based on his intuition. This type of programming was
called Exploratory Programming.
b. The next significant development, which occurred during the early 1960s in the area of computer
programming, was high-level language programming. The use of high-level language
programming reduced development effort and development time significantly. Languages
like FORTRAN, ALGOL, and COBOL were introduced at that time.
c. Structured Programming: As the size and complexity of programs kept on increasing, the
exploratory programming style proved to be insufficient. To cope with this problem,
experienced programmers advised other programmers to pay particular attention to the
design of the program’s control flow structure.
A structured program uses three types of program constructs i.e. selection, sequence and
iteration.
Structured programs avoid unstructured control flows by restricting the use of GOTO
statements.
Structured programming uses single entry, single-exit program constructs such as if-then-
else, do-while, etc.
Structured programs are easier to maintain. They require less effort and time for
development. They are amenable to easier debugging and usually fewer errors are made
in the course of writing such programs.
d. Data Structure-Oriented Design: These techniques pay more attention to the design of the data
structures of the program than to the design of its control structure.
e. Data Flow-Oriented Design: Next significant development in the late 1970s was the
development of data flow-oriented design technique. Every program reads data and then
processes that data to produce some output.
f. Object-Oriented Design: Object-oriented design (1980s) is the latest and very widely used
technique. It has an intuitively appealing design approach in which natural objects (such as
employees, pay-roll register, etc.) occurring in a problem are first identified. Relationships
among objects are determined.
g. Modern practice: The modern practice of software development is to develop the software
through several well-defined stages such as requirements specification, design, coding,
testing, etc., and attempts are made to detect and fix as many errors as possible in the same
phase in which they occur.
Now, projects are first thoroughly planned. Project planning normally includes preparation of
various types of estimates, resource scheduling, and development of project tracking plans.
Several techniques and tools for tasks such as configuration management, cost estimation,
scheduling, etc. are used for effective software project management.
The development team must identify a suitable life cycle model for the particular project and
then adhere to it. Without a particular life cycle model, the development of a software
product would not proceed in a systematic and disciplined manner.
When a software product is being developed by a team there must be a clear understanding
among team members about when and what to do. Otherwise it would lead to chaos and
project failure.
Example:
Suppose a software development problem is divided into several parts and the parts are
assigned to the team members. From then on, suppose the team members are allowed the
freedom to develop the parts assigned to them in whatever way they like. It is possible that
one member might start writing the code for his part, another might decide to prepare the
test documents first, and some other engineer might begin with the design phase of the parts
assigned to him. This would be one of the perfect recipes for project failure.
A software life cycle model defines entry and exit criteria for every phase. So without a
software life cycle model, the entry and exit criteria for a phase cannot be recognized.
A few important and commonly used life cycle models are as follows:
Waterfall Model
The classical waterfall model is intuitively the most obvious way to develop software. Though the
classical waterfall model is elegant and intuitively obvious, we will see that it is not a practical model in
the sense that it cannot be used in actual software development projects.
Thus, we can consider this model to be a theoretical way of developing software. But all other life
cycle models are essentially derived from the classical waterfall model. So, in order to be able to
appreciate other life cycle models, we must first learn
the classical waterfall model.
1. Feasibility Study
The main aim of the feasibility study is to determine whether it would be financially, technically,
and operationally feasible to develop the product.
• At first, project managers or team leaders try to have a rough understanding of what is required to
be done by visiting the client site. They study the different input data to the system and the output
data, and they look at the various constraints on the behaviour of the system.
• After they have an overall understanding of the problem, they investigate the different solutions
that are possible.
• They pick the best solution and determine whether the solution is feasible financially and
technically. They check whether the customer budget would meet the cost of the product and
whether they have sufficient technical expertise in the area of development.
2. Requirements Analysis and Specification
The aim of the requirements analysis and specification phase is to understand the exact
requirements of the customer and to document them properly. This phase consists of two distinct
activities, namely requirements gathering and analysis, and requirements specification.
Integration is normally carried out incrementally over a number of steps. Finally, when all the modules
have been successfully integrated and tested, system testing is carried out.
The goal of system testing is to ensure that the developed system conforms to the requirements laid
out in the SRS document.
System testing usually consists of three different kinds of testing activities: α-testing, β-testing,
and acceptance testing.
Maintenance of a software product requires one or more of the following kinds of activities:
Correcting errors that were not discovered during the product development phase. This is
called corrective maintenance.
Improving the implementation of the system, and enhancing the functionalities of the system
according to the customer’s requirements. This is called perfective maintenance.
Porting the software to work in a new environment. For example, porting may be required to
get the software to work on a new computer platform or with a new operating system. This is
called adaptive maintenance.
Iterative Waterfall Model
The classical waterfall model cannot be used as-is in practical development projects. The iterative
waterfall model can be thought of as incorporating the necessary changes to the classical waterfall
model to make it usable in practical software development.
It is almost same as the classical waterfall model except some changes are made to increase the
efficiency of the software development.
The iterative waterfall model provides feedback paths from every phase to its preceding phases,
which is the main difference from the classical waterfall model.
Feedback paths introduced by the iterative waterfall model are shown in the figure below.
When errors are detected at some later phase, these feedback paths allow correcting errors
committed by programmers during some phase.
Incremental Model
• The Incremental Model is a process of software development in which the requirements are divided
into multiple standalone modules of the software development cycle.
• In this model, each module goes through the requirements, design, implementation and testing
phases. Every subsequent release of a module adds functionality to the previous release.
• The process continues until the complete system is achieved.
Evolutionary model
This model is a combination of the incremental and iterative models. In the evolutionary model, all the
work is divided into smaller chunks, and these chunks are presented to the customer one by one, which
increases the customer's confidence. This model also allows for changing requirements, since
development is done in separate pieces and each piece of work is maintained as a chunk.
Where the evolutionary model is useful
• It is very useful in a large project where it is easy to find modules for step-by-step implementation.
• The evolutionary model is used when the users need to start using the core features early instead
of waiting for the complete software.
• The evolutionary model is also very useful in object-oriented software development because the
development is naturally divided into different units.
Disadvantages of Evolutionary Model
• It is difficult to divide the problem into several parts that would be acceptable to the customer and
that can be incrementally implemented and delivered.
The following are the evolutionary models.
1. The prototyping model
2. the spiral model
3. the concurrent development model
Prototype Model
The prototype model requires that before carrying out the development of actual software, a working
prototype of the system should be built. A prototype is a toy implementation of the system.
Spiral Model
• The spiral model, initially proposed by Boehm, is an evolutionary software process model that
couples the iterative feature of prototyping with the controlled and systematic aspects of the
linear sequential model.
• It implements the potential for rapid development of new versions of the software.
• Using the spiral model, the software is developed in a series of incremental releases.
• During the early iterations, the additional release may be a paper model or prototype.
• During later iterations, more and more complete versions of the engineered system are produced.
RAD (Rapid Application Development) is based on the idea that products can be developed faster and
with higher quality by using prototyping, reusing software components, and keeping development
cycles short and iterative.
Agile Model
• The word agile means swift or versatile.
• Agile method proposes incremental and iterative approach to software design.
• AGILE methodology is a practice that promotes continuous iteration of development and testing
throughout the software development lifecycle of the project. In the Agile model, both
development and testing activities are concurrent.
• Agile software development refers to a group of software development methodologies based on
iterative development, where requirements and solutions evolve through collaboration between
self-organizing cross-functional teams.
• Agility is achieved by fitting the process to the project and removing activities that may not be
essential for a specific project. Anything that wastes time and effort is avoided.
Agile principles
1. The highest priority of this process is to satisfy the customer.
2. Accept changing requirements, even late in development.
3. Frequently deliver working software in small time spans.
4. Throughout the project, business people and developers work together on a daily basis.
5. The primary measure of progress is working software.
Phases of Agile Model:
The phases in the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
Scrum
SCRUM is an agile development process focused primarily on how to manage tasks in a team-based
development environment.
Basically, Scrum is derived from an activity that occurs during a rugby match.
Scrum believes in empowering the development team and advocates working in small teams
(say, 7 to 9 members).
It consists of three roles, and their responsibilities are explained as follows:
o Product owner: The product owner creates the product backlog, prioritizes the backlog, and is
responsible for the delivery of functionality at each iteration.
o Scrum Master: The scrum master sets up the team, arranges the meetings, and removes
obstacles for the process.
o Scrum Team: The team manages and organizes its own work to complete the sprint or
cycle.
Process flow of Scrum Methodologies:
Process flow of scrum testing is as follows:
Each iteration of a scrum is known as Sprint
Product backlog is a list where all details are entered to get the end-product
During each Sprint, top user stories of Product backlog are selected and turned into Sprint
backlog
The team works on the defined sprint backlog
The team reviews progress daily (in the daily scrum/stand-up meeting)
eXtreme Programming(XP)
The Extreme Programming technique is very helpful when there are constantly changing demands or
requirements from the customers, or when they are not sure about the functionality of the system. It
advocates frequent "releases" of the product in short development cycles, which inherently improves
the productivity of the system and also introduces checkpoints where any customer requirement can
easily be incorporated.
Business requirements are gathered in terms of stories. All those stories are stored in a place called
the parking lot.
In this type of methodology, releases are based on shorter cycles called iterations, each with a span of
14 days. Each iteration includes phases like coding, unit testing and system testing, and at each phase
some minor or major functionality is built into the application.
SCRUM vs. XP
There are, however, some differences, some of them very subtle, particularly in aspects such as the
following:
1. Iteration length
Scrum: typically from two weeks to one month long.
XP: typically one or two weeks long.
Software project management broadly involves the following two types of activities:
1. Project planning:
Project planning is undertaken immediately after the feasibility study phase and before the
start of the requirements analysis and specification phase.
Project planning involves estimating several characteristics of the project and then planning the
project activities based on these estimates.
2. Project monitoring and control:
Project monitoring and control activities are undertaken once the development activities start.
The focus of project monitoring and control activities is to ensure that the software
development proceeds as per plan.
Size is the crucial parameter for the estimation of other activities. Effort and development time are
estimated from size, and resource requirements are derived from the effort and development time
estimates. The project schedule may prove to be very useful for controlling and monitoring the
progress of the project.
1. Sliding Window Planning: In the sliding window planning technique, starting with an initial plan,
the project is planned more accurately over a number of stages.
2. The SPMP Document of Project Planning: Once project planning is complete, project managers
document their plans in a software project management plan (SPMP) document.
Organization of the software project management plan (SPMP) document
1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints
2. Project estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule
(a) Work Breakdown Structure
(b) Gantt Chart Representation
(c) PERT Chart Representation
4. Project resources
(a) People
(b) Hardware and Software
(c) Special Resources
5. Staff organization
(a) Team Structure
(b) Management Reporting
6. Risk management plan
(a) Risk Analysis
(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement (reduction) Procedures
7. Project tracking and control plan
(a) Metrics to be tracked
(b) Tracking plan
(c) Control plan
8. Miscellaneous plans
(a) Process Tailoring
(b) Quality Assurance Plan
(c) Configuration Management Plan
(d) Validation and Verification
(e) System Testing Plan
(f ) Delivery, Installation, and Maintenance Plan
Estimating LoC
Accurate estimation of the LOC count at the beginning of a project is a very difficult task.
One can possibly estimate the LOC count at the start of a project only by using some form
of systematic guess, which typically involves the following:
The project manager divides the problem into modules, and each module into sub-modules
and so on, until the LOC of the leaf-level modules is small enough to be predicted.
To be able to predict the LOC count for the various leaf-level modules sufficiently
accurately, past experience in developing similar modules is helpful.
Function point (FP): This metric measures the size of a project based on the idea that the size of a
software product is directly dependent on the number of different high-level functions or features it
supports. The function point count is computed from the numbers of inputs, outputs, inquiries, files,
and interfaces:
(Q) Number of inquiries: An inquiry is a user command (without any data input). Examples of inquiries
are: print account balance, print all student grades, display rank holders' names, etc.
(F) Number of files: The files referred to here are logical files.
(N) Number of interfaces: The different mechanisms that are used to exchange information with other
systems, such as data files on tapes, disks, and communication links.
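As a small illustration (not from the text), the unadjusted function point count is usually computed as a weighted sum of the five component counts; the weights used below are the commonly quoted average weights, so treat them as an assumption rather than a fixed rule.

def unadjusted_function_points(inputs, outputs, inquiries, files, interfaces):
    # Weighted sum of the five FP components; weights are assumed averages.
    return (4 * inputs          # number of inputs (I)
            + 5 * outputs       # number of outputs (O)
            + 4 * inquiries     # number of inquiries (Q)
            + 10 * files        # number of logical files (F)
            + 10 * interfaces)  # number of interfaces (N)

# Hypothetical counts:
print(unadjusted_function_points(inputs=30, outputs=60, inquiries=20, files=5, interfaces=2))
# 4*30 + 5*60 + 4*20 + 10*5 + 10*2 = 570 function points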
Project Estimation
A large number of estimation techniques have been proposed by researchers. These can broadly be
classified into three main categories:
1. Empirical estimation techniques
2. Heuristic techniques
3. Analytical estimation techniques
1. Empirical Estimation Techniques
While using this technique, prior experience with the development of similar products is helpful.
Although empirical estimation techniques are based on common sense and subjective judgement, over
the years the different activities involved in estimation have been formalised to a certain extent.
2. Heuristic Techniques
Heuristic techniques assume that the relationships that exist among the different project parameters
can be satisfactorily modeled using suitable mathematical expressions.
Different heuristic estimation models can be divided into the following two broad categories:
[1]. single variable and
[2]. multivariable models.
[1]. Single variable estimation models assume that various project characteristics can be predicted
based on a single previously estimated characteristic of the software, such as its size.
COCOMO Model
Boehm proposed COCOMO (COnstructive COst MOdel) in 1981. COCOMO is one of the most
widely used software estimation models in the world. COCOMO predicts the effort and schedule
of a software product based on the size of the software.
The necessary steps in this model are:
1. Get an initial estimate of the development effort from the estimate of thousands of delivered
lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate with all the multiplying factors,
i.e., multiply the values obtained in step 1 and step 2.
To determine the initial effort Ei in person-months, the equation used is of the type:
Ei = a * (KDLOC)^b
The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic
2. Semidetached
3. Embedded
1. Organic: A project belongs to the organic type if it deals with developing a well-understood
application program, the size of the development team is reasonably small, and the team members are
experienced in developing similar types of projects.
Examples: Simple business systems, simple inventory management systems, and data processing
systems.
2. Semidetached: A development project can be treated as semidetached type if the development team
consists of a mixture of experienced and inexperienced staff. Team members may have limited
experience with related systems and may be unfamiliar with some aspects of the system being developed.
Example: a new operating system (OS), a Database Management System (DBMS).
3. Embedded: A project is of the embedded type if the software being developed is strongly coupled to
complex hardware, or if stringent constraints on the operational procedures exist.
Example: ATM, Air Traffic control.
According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model
1. Basic COCOMO Model: The basic COCOMO model assumes that effort is only a function of
the number of lines of code and some constants determined for the various classes of software
systems.
The following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Where
KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
a1, a2, b1, b2 are constants for each category of software product,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person-months (PM).
What is a person-month?
o Person-month (PM) is a popular unit for effort measurement.
o Person-month (PM) is considered to be an appropriate unit for measuring effort,
because developers are typically assigned to a project for a certain number of months.
Example 1: Suppose a project is estimated to be 400 KLOC. Calculate the effort and development time for each of the
three modes, i.e., organic, semidetached and embedded.
Solution: The basic COCOMO equations take the form:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Estimated size of project = 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 months
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 = 38 months
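The worked example above can be checked with a short script. This is a minimal sketch in Python, assuming only the basic COCOMO coefficients already quoted in the solution.

# Basic COCOMO coefficients used above: mode -> (a1, a2, b1, b2)
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * kloc ** a2      # person-months
    tdev = b1 * effort ** b2      # months
    return effort, tdev

for mode in COEFFS:
    e, d = basic_cocomo(400, mode)
    print(f"{mode:12s} effort = {e:8.2f} PM  tdev = {d:5.2f} months")
# Organic mode prints effort ≈ 1295.31 PM and tdev ≈ 38.07 months, matching the figures above.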
2. Intermediate Model: The intermediate COCOMO model refines the initial estimates obtained
through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of
software engineering.
Classification of Cost Drivers and their attributes:
Product attributes -
1. Required software reliability extent
2. Size of the application database
3. The complexity of the product
Hardware attributes -
4. Run-time performance constraints
5. Memory constraints
6. The volatility of the virtual machine environment
7. Required turnaround time
Personnel attributes -
8. Analyst capability
9. Software engineering capability
10. Applications experience
11. Virtual machine experience
12. Programming language experience
Project attributes -
13. Use of software tools
14. Application of software engineering methods
15. Required development schedule
Intermediate COCOMO equations:
E = ai * (KLOC)^bi * EAF
D = ci * (E)^di
Coefficients for intermediate COCOMO:
Project        ai     bi     ci     di
Organic        2.4    1.05   2.5    0.38
Semidetached   3.0    1.12   2.5    0.35
Embedded       3.6    1.20   2.5    0.32
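As a rough sketch of how the intermediate model uses these coefficients: the nominal effort ai * (KLOC)^bi is multiplied by the effort adjustment factor (EAF), which is the product of the selected cost driver multipliers. The driver ratings below are hypothetical values chosen only for illustration.

import math

def intermediate_cocomo_effort(kloc, a_i, b_i, driver_multipliers):
    eaf = math.prod(driver_multipliers)   # effort adjustment factor (product of cost driver ratings)
    return a_i * kloc ** b_i * eaf        # effort in person-months

# Hypothetical multipliers for three of the 15 cost drivers (1.0 = nominal):
drivers = [1.15, 0.91, 1.08]
print(intermediate_cocomo_effort(32, a_i=3.0, b_i=1.12, driver_multipliers=drivers))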
3. Detailed COCOMO Model: Detailed COCOMO incorporates all the characteristics of the intermediate
version with an assessment of the cost drivers' effect on each step (phase) of the software engineering
process.
Token Count
In these metrics, a computer program is considered to be a collection of tokens, which may be
classified as either operators or operands. All software science metrics can be defined in terms of
these basic symbols, which are called tokens.
The basic measures are
n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrence of operands.
V* = the minimum volume, i.e., the volume of the most succinct program in which the problem can be
coded.
Halstead metrics are:
1. Vocabulary (n): the total number of unique tokens, n = n1 + n2.
2. Program Length (N): the total number of token occurrences, N = N1 + N2. The estimated program
length is computed from the unique operator and operand counts as N^ = n1 log2 n1 + n2 log2 n2.
3. Program Volume (V): the actual size of a program in "bits", V = N log2 n.
4. Program Level (L): L = V*/V; L = 1 represents a program written at the highest possible level.
5. Program Difficulty (D): D = 1/L; difficulty is proportional to the number of unique operators in the
program.
6. Programming Effort (E): E = V/L = D × V, the amount of mental activity needed to translate the
existing algorithm into an implementation in the specified programming language.
7. Faults (B): the number of faults in a program is a function of its volume.
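A brief computational sketch of the measures above, using the standard software-science formulas (assumed here, since the text only names the metrics and V* is usually not known directly).

import math

def halstead(n1, n2, N1, N2):
    n = n1 + n2                                        # vocabulary
    N = N1 + N2                                        # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
    V = N * math.log2(n)                               # volume in bits
    D = (n1 / 2) * (N2 / n2)                           # difficulty
    L = 1 / D                                          # program level (estimate)
    E = D * V                                          # programming effort
    B = V / 3000                                       # faults (one common approximation)
    return dict(vocabulary=n, length=N, est_length=N_hat,
                volume=V, difficulty=D, level=L, effort=E, faults=B)

# Hypothetical token counts:
print(halstead(n1=10, n2=15, N1=40, N2=60))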
Project Scheduling
A project schedule is a timetable consisting of the sequenced activities and milestones that need
to be delivered within a given period of time.
The most common and important forms of project schedule representation are the PERT chart, CPM,
and the Gantt chart.
Scheduling Process:
1. Identify all the major activities that need to be carried out to complete the project.
2. Break down each activity into tasks.
3. Determine the dependency among different tasks.
4. Establish the estimates for the time durations necessary to complete the tasks.
5. Represent the information in the form of an activity network.
6. Determine task starting and ending dates from the information represented in the activity
network.
7. Determine the critical path. A critical path is a chain of tasks that determines the duration of
the project.
8. Allocate resources to tasks.
Work Breakdown Structure
1. Work breakdown structure (WBS) is used to recursively decompose a given set of activities into
smaller activities.
2. WBS provides a notation for representing the activities, sub-activities, and tasks needed to be
carried out in order to solve a problem. Each of these is represented using a rectangle (see Figure).
3. The root of the tree is labeled by the project name. Each node of the tree is broken down into
smaller activities that are made the children of the node.
4. Figure 3.7 represents the WBS of management information system (MIS) software.
Activity Networks:
An activity network shows the different activities making up a project, their estimated durations, and
their interdependencies. Two equivalent representations for activity networks are possible and are in
use:
Activity on Node (AoN): In this representation, each activity is represented by a rectangular (some use
circular) node and the duration of the activity is shown alongside each task in the node. The inter-task
dependencies are shown using directional edges (see Figure ).
Activity on Edge (AoE): In this representation tasks are associated with the edges. The edges are also
annotated with the task duration. The nodes in the graph represent project milestones.
The activity network with computed LS and LF values has been shown in Figure
The CPM can be used to determine the duration of a project, but does not provide any
indication of the probability of meeting that schedule.
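A minimal sketch of the ES/EF/LS/LF computation for a small, hypothetical activity-on-node network (task names, durations and dependencies are made up for illustration; tasks must be listed after their predecessors).

# tasks: name -> (duration, list of predecessor task names)
tasks = {
    "specification": (15, []),
    "design":        (30, ["specification"]),
    "code":          (60, ["design"]),
    "write manual":  (60, ["specification"]),
    "test":          (45, ["code"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for name, (dur, preds) in tasks.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LS, LF = {}, {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [s for s, (_, p) in tasks.items() if name in p]
    LF[name] = min((LS[s] for s in successors), default=project_duration)
    LS[name] = LF[name] - dur

critical_path = [n for n in tasks if LS[n] == ES[n]]   # tasks with zero slack
print("project duration:", project_duration, "critical path:", critical_path)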
PERT Charts
Project evaluation and review technique (PERT) charts are a more sophisticated form of activity chart.
Each task is annotated with three estimates:
Optimistic (O): The best possible case task completion time.
Most likely estimate (M): Most likely task completion time.
Worst case (W): The worst possible case task completion time.
The PERT chart representation of the MIS problem of Figure 3
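The three estimates are normally combined into a single expected task duration with the standard PERT weighting; the formula below is the usual beta-distribution approximation and is an assumption, since the excerpt does not state it.

def pert_estimate(optimistic, most_likely, worst):
    expected = (optimistic + 4 * most_likely + worst) / 6   # expected duration
    std_dev = (worst - optimistic) / 6                      # spread of the estimate
    return expected, std_dev

print(pert_estimate(optimistic=30, most_likely=45, worst=90))   # -> (50.0, 10.0)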
Gantt Charts
Gantt chart has been named after its developer Henry Gantt. Gantt chart is a special type of bar chart
where each bar represents an activity. The bars are drawn along a time line. The length of each bar is
proportional to the duration of time planned for the corresponding activity.
A Gantt chart representation for the MIS problem of Figure 3
We can summarize the differences between the two as listed below:
• A Gantt chart is defined as a bar chart; a PERT chart is similar to a network diagram.
• A Gantt chart is often used for small projects; a PERT chart can be used for large and complex projects.
• A Gantt chart focuses on the time required to complete a task; a PERT chart focuses on dependency
relationships.
• A Gantt chart is simpler and more straightforward; a PERT chart can sometimes be confusing and
complex, but can be used for visualizing the critical path.
Personnel Planning(Staffing)
Personnel planning deals with staffing. Staffing deals with appointing personnel to the positions that
are identified by the organizational structure. It involves:
Defining requirements for personnel
Recruiting (identifying, interviewing, and selecting candidates)
Compensating
Developing and promoting personnel
For personnel planning and scheduling, it is helpful to have estimates of effort and schedule for the
subsystems and basic components in the system.
Typically, the staff required for the project is small during requirements and design, maximum
during implementation and testing, and drops again during the final stage of integration and
testing.
Using the COCOMO model, the average staff requirement for the various phases can be calculated
once the effort and schedule for each phase are known.
When the schedule and average staff level for every activity are known, the overall personnel
allocation for the project can be planned.
This plan will indicate how many people will be required for different activities at different times
for the duration of the project.
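Once effort and development time are known (for example from COCOMO), the average staffing level follows directly as effort divided by duration. A tiny sketch, reusing the organic-mode figures from the earlier worked example:

def average_staff(effort_pm, duration_months):
    # Average number of full-time people needed over the whole project.
    return effort_pm / duration_months

print(round(average_staff(1295.31, 38.07), 1))   # ≈ 34 people on average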
Types of Risks
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and
customer-related problems.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and
maintenance issues. They also include ambiguous specifications, incomplete specifications, changing
specifications, technical uncertainty, and technical obsolescence.
3. Business risks: This type of risk includes the risk of building an excellent product that no one needs,
losing budgetary or personnel commitments, etc.
1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and create
future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the client
and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of project
management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.
(1). Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing potential.
For risk assessment, first, every risk should be rated in two ways: (a) the likelihood of the risk
becoming true, and (b) the consequence of the problems associated with that risk.
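One common way (assumed here, not stated in the text) to combine the two ratings is to compute a priority for each risk as likelihood × consequence and handle the highest-priority risks first. The risks and ratings below are hypothetical.

# Each entry: (description, likelihood of occurring, consequence on a 1-10 scale)
risks = [
    ("key developer leaves mid-project", 0.3, 8),
    ("requirements change late",         0.6, 5),
    ("hardware delivery delayed",        0.2, 9),
]

for desc, likelihood, consequence in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"priority {likelihood * consequence:4.1f}  {desc}")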
(2). Risk Control (Mitigation)
Software Configuration Management (SCM) Tasks
1. Identification: Identification of configuration items, i.e., the units of text created by a software
engineer during analysis, design, coding, or testing.
2. Version Control: Version control combines procedures and tools to handle the different versions of
configuration objects that are generated during the software process.
3. Change Control: The "check-in" and "check-out" process implements two necessary elements
of change control: access control and synchronization control.
4. Configuration Audit: SCM audits verify that the software product satisfies the baseline
requirements and ensure that what is built is what is delivered.
5. Status Reporting: Configuration status reporting provides accurate status and current
configuration data to developers, testers, end users, customers and stakeholders.