
Software Engineering
Unit-II

Mrs. Amandeep Kaur
Assistant Professor
Gujarat University
Software Measurement and Metrics

Software Measurement:
• A measurement is a manifestation of the size, quantity, amount, or dimension of a particular attribute of a product or process.
• Software measurement is a quantified attribute of a characteristic of a software product or the software process.
• It is a discipline within software engineering. The software measurement process is defined and governed by ISO standards.
Software Measurement Principles

The software measurement process can be characterized by five activities:
1.Formulation: The derivation of software measures and metrics
appropriate for the representation of the software that is being
considered.
2.Collection: The mechanism used to accumulate data required to
derive the formulated metrics.
3.Analysis: The computation of metrics and the application of
mathematical tools.
4.Interpretation: The evaluation of metrics results in insight into
the quality of the representation.
5.Feedback: Recommendation derived from the interpretation of
product metrics transmitted to the software team.
Need for Software Measurement

• Assess the quality of the current product or process.
• Anticipate future qualities of the product or process.
• Enhance the quality of a product or process.
• Regulate the state of the project concerning budget and schedule.
• Enable data-driven decision-making in project planning and control.
• Identify bottlenecks and areas for improvement to drive process
improvement activities.
• Ensure that industry standards and regulations are followed.
• Give software products and processes a quantitative basis for
evaluation.
• Enable the ongoing improvement of software development practices.
Classification of Software
Measurement:
There are 2 types of software measurement:
1.Direct Measurement: In direct measurement, the product,
process, or thing is measured directly using a standard scale.
2.Indirect Measurement: In indirect measurement, the quantity or quality to be measured is derived from other, directly measurable parameters, i.e., by the use of a reference.
Software Metrics

• A metric is a measurement of the degree to which any attribute belongs to a system, product, or process.
• Software metrics are quantifiable or countable assessments of the attributes of a software product. There are 4 functions related to software metrics:
1.Planning
2.Organizing
3.Controlling
4.Improving
Characteristics of software Metrics

1.Quantitative: Metrics must be quantitative in nature, meaning they can be expressed in numerical values.
2.Understandable: Metric computation should be easily
understood, and the method of computing metrics should be clearly
defined.
3.Applicability: Metrics should be applicable in the initial phases of
the development of the software.
4.Repeatable: When measured repeatedly, the metric values should
be the same and consistent in nature.
5.Economical: The computation of metrics should be economical.
6.Language Independent: Metrics should not depend on any
programming language.
Classification of Software Metrics

1.Product Metrics: Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect density, and code maintainability index.
2.Process Metrics: Process metrics pay particular attention to enhancing the long-term process of the team or organization. These metrics are used to optimize the development process and maintenance activities of software. Examples include effort variance, schedule variance, defect injection rate, and lead time.
3.Project Metrics: Project metrics describe the characteristics and execution of a project. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity. Usual measures are:
• Number of software developers
• Staffing patterns over the life cycle of the software
• Cost and schedule
• Productivity
Advantages of Software Metrics

1.Reduction in cost or budget.
2.It helps to identify particular areas for improvement.
3.It helps to increase product quality.
4.Managing the workloads and teams.
5.Reduction in the overall time to produce the product.
6.It helps to determine the complexity of the code and the resources needed to test it.
7.It helps in effective planning, controlling, and managing of the entire product.
Disadvantages of Software Metrics

1.It is expensive and difficult to implement metrics in some cases.
2.The performance of the entire team or of an individual team member cannot be determined; only the performance of the product is determined.
3.Sometimes the quality of the product does not meet expectations.
4.It can lead to measuring unwanted data, which is a waste of time.
5.Measuring incorrect data leads to wrong decision-making.
Lines of Code (LOC)

• A line of code (LOC) is any line of text in a program that is not a comment or a blank line (header lines included), regardless of the number of statements or fragments of statements on the line.
• LOC includes all lines containing variable declarations and executable and non-executable statements.
• Because Lines of Code (LOC) only counts the volume of code, it can only be used to compare or estimate projects that use the same language and are coded using the same coding standards.
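
As a rough illustration of the counting rule above, here is a minimal Python sketch of a physical-LOC counter. It assumes the "//" line-comment convention of a C-like language; block comments and per-language header rules are deliberately left out:

def count_loc(source_text):
    """Count lines that are neither blank nor pure line comments."""
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            loc += 1
    return loc

if __name__ == "__main__":
    sample = """
    int add(int a, int b) {
        // return the sum of a and b
        return a + b;
    }
    """
    print(count_loc(sample))  # -> 3 (the comment and blank lines are excluded)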
Features of Lines of Code

• Change Tracking: Variations in LOC over time can be tracked to analyze the growth or reduction of a codebase, providing insights into project progress.
• Limited Representation of Complexity: Although LOC gives a general idea of code size, it does not accurately depict code complexity. Two programs with the same LOC can differ enormously in complexity.
• Ease of Computation: LOC is an easy measure to obtain because
it is easy to calculate and takes little time.
• Easy to Understand: The idea of expressing code size in terms of
lines is one that stakeholders, even those who are not technically
inclined, can easily understand.
Advantages of Lines of Code

• Effort Estimation: LOC is occasionally used to estimate development effort and project deadlines at a high level. Although caution is necessary, project planning can begin with this.
• Comparative Analysis: High-level productivity comparisons
between several projects or development teams can be made using
LOC. It might provide an approximate figure of the volume of code
generated over a specific time frame.
• Benchmarking Tool: When comparing various iterations of the
same program, LOC can be used as a benchmarking tool. It may
bring information on how modifications affect the codebase’s total
size.
Disadvantages of Lines of Code

• Challenges in Agile Work Environments: Focusing on initial LOC estimates may not adequately reflect the iterative and dynamic nature of agile development, as requirements may change.
• Does Not Take External Libraries Into Account: Code from external libraries or frameworks, which can greatly enhance a project's overall usefulness, is not counted by LOC.
• Challenges with Maintenance: Codebases with higher LOC are larger and typically demand more maintenance work.
Halstead's Software Metrics
• According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
Token Count
• In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols. These symbols are called tokens.
The basic measures are:
• n1 = count of unique operators
• n2 = count of unique operands
• N1 = count of total occurrences of operators
• N2 = count of total occurrences of operands
• In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.
Halstead metrics
Program Volume (V)
• The unit of measurement of volume is the standard unit for size, "bits". It is the actual size of a program if a uniform binary encoding for the vocabulary is used:
V = N * log2(n)
Program Level (L)
• The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size).
L = V* / V
Program Difficulty (D)
• The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)
Programming Effort (E)
• The unit of measurement of E is elementary mental discriminations.
E = V / L = D * V
Estimated Program Length
• According to Halstead, the first hypothesis of software science is that the length of a well-structured program is a function only of the number of unique operators and operands.
N = N1 + N2
• The estimated program length is denoted by N^:
N^ = n1 * log2(n1) + n2 * log2(n2)
• The following alternate expressions have been published to estimate program length:
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2(n2) + n2 * log2(n1)
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2(n)) / 2
• Potential Minimum Volume (V*)
• The potential minimum volume V* is defined as the volume of the shortest program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
• Here, n2* is the count of unique input and output parameters.
• Size of Vocabulary (n)
• The size of the vocabulary of a program, which consists of the number of unique tokens used to build the program, is defined as:
n = n1 + n2
where:
n = vocabulary of a program
n1 = number of unique operators
n2 = number of unique operands
• Language Level - Shows the level of the programming language used to implement the algorithm. The same algorithm demands additional effort if it is written in a low-level programming language. For example, it is easier to program in Pascal than in Assembler.
• L' = V / D / D
• lambda = L * V* = L^2 * V
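
To tie the formulas above together, here is a minimal Python sketch that computes the main Halstead measures from pre-counted token totals. The token counts in the demo call are invented for illustration, and L is approximated as 1/D (rather than V*/V) since V* is usually unknown:

import math

def halstead(n1, n2, N1, N2):
    # n1/n2: unique operators/operands; N1/N2: total occurrences of each.
    n = n1 + n2                                       # vocabulary
    N = N1 + N2                                       # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length N^
    V = N * math.log2(n)                              # volume, in bits
    D = (n1 / 2) * (N2 / n2)                          # difficulty
    L = 1 / D                                         # program level (approximation of V*/V)
    E = D * V                                         # effort, in elementary mental discriminations
    return {"n": n, "N": N, "N^": N_hat, "V": V, "D": D, "L": L, "E": E}

# Invented counts, for illustration only:
print(halstead(n1=10, n2=7, N1=25, N2=18))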
Functional Point (FP) Analysis

• Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has been further modified by the International Function Point Users Group (IFPUG).
• FPA is used to make an estimate of the software project, including its testing, in terms of the functionality or functional size of the software product.
• Functional point analysis may also be used for the test estimation of the product.
• The functional size of the product is measured in terms of function points, which are a standard unit of measurement for software applications.
Objectives of FPA

• The basic and primary purpose of functional point analysis is to measure and provide the functional size of the software application to the client, customer, and stakeholders on request.
• Further, it is used to measure software project development and maintenance consistently throughout the project, irrespective of the tools and technologies used.
The points regarding FPs
• 1. FPs of an application are found by counting the number and types of functions used in the application. The various functions used in an application can be put under five types, known as the FPA functional units:
The FPA functional units
 External Inputs (EI)
 External Outputs (EO)
 External Inquiries (EQ)
 Internal Logical Files (ILF)
 External Interface Files (EIF)
 FP characterizes the complexity of the software system and hence can be used to estimate the project time and the manpower requirement.
 The effort required to develop the project depends on what the software does.
 FP is programming-language independent.
 The FP method is used for data processing systems and business systems like information systems.
 The five parameters mentioned above are also known as information domain characteristics.
• All the parameters mentioned above are assigned weights that have been experimentally determined, as shown in the table below:

Measurement Parameter            Simple   Average   Complex
External Inputs (EI)                3        4         6
External Outputs (EO)               4        5         7
External Inquiries (EQ)             3        4         6
Internal Logical Files (ILF)        7       10        15
External Interface Files (EIF)      5        7        10

• The functional complexities are multiplied by the corresponding weights against each function, and the values are added up to determine the UFP (Unadjusted Function Point) count of the subsystem.
• The weighing factor is simple, average, or complex for each measurement parameter type.
• The Function Point (FP) count is then calculated with the following formula:
FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = Count-total * CAF
• where Count-total is the UFP obtained from the table above,
• CAF = [0.65 + 0.01 * ∑(fi)]
• and ∑(fi) is the sum of all 14 questionnaire values and gives the complexity adjustment value/factor (CAF), where i ranges from 1 to 14. Usually, a student is provided with the value of ∑(fi).
• Also note that ∑(fi) ranges from 0 to 70, i.e.,
0 <= ∑(fi) <= 70
• and CAF ranges from 0.65 to 1.35, because
1. When ∑(fi) = 0, then CAF = 0.65
2. When ∑(fi) = 70, then CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35
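
The UFP/CAF arithmetic above can be sketched in a few lines of Python. The weights are the standard simple/average/complex values for the five functional units; the counts and ∑(fi) in the demo are invented for illustration:

WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def function_points(counts, complexity, sum_fi):
    # counts: {unit: how many}, complexity: {unit: "simple"/"average"/"complex"},
    # sum_fi: sum of the 14 questionnaire values, 0 <= sum_fi <= 70.
    ufp = sum(counts[u] * WEIGHTS[u][complexity[u]] for u in counts)
    caf = 0.65 + 0.01 * sum_fi        # 0.65 <= CAF <= 1.35
    return ufp * caf

# Invented example: all functional units of average complexity, sum(fi) = 42.
counts = {"EI": 20, "EO": 15, "EQ": 10, "ILF": 4, "EIF": 2}
complexity = {u: "average" for u in counts}
print(function_points(counts, complexity, sum_fi=42))  # UFP = 249, FP = 249 * 1.07 = 266.43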
Differentiate between FP and LOC
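Commonly cited points of difference between the two size measures are:
• FP is programming-language independent; LOC depends heavily on the language used.
• FP can be estimated from the requirements specification; LOC can only be counted accurately after coding.
• FP measures the functionality delivered to the user; LOC measures only the physical volume of code.
• FP computation needs trained estimators and is relatively slow; LOC is trivial to count automatically.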
Metrics for the Design Model of the Product

Metrics are quantitative assessments that focus on countable values, most commonly used for comparing and tracking the performance of a system.
Metrics are used in different scenarios, such as the analysis model, design model, source code, testing, and maintenance.
 Metrics for design modeling allow developers or software engineers to evaluate or estimate the quality of a design, and include various architecture and component-level design measures.
Metrics by Glass and Card
• In designing a product, it is very important to manage complexity efficiently. Complexity means, in itself, being very difficult to understand. Systems are generally complex, as they have many interconnected components that make them difficult to understand. Glass and Card are two scientists who suggested three design complexity measures.
Structural Complexity & Data
Complexity
Structural complexity depends upon the fan-out of a module. It can be defined as:
S(k) = [fout(k)]^2
where fout represents the fan-out of module k (fan-out means the number of modules subordinate to module k).

Data complexity is the complexity within the interface of an internal module. It reflects the size and intricacy of the data. For some module k, it can be defined as:
D(k) = tot_var(k) / [fout(k) + 1]
where tot_var is the total number of input and output variables going to and coming out of the module.
System Complexity
• System complexity is the combination of structural and data complexity. It can be denoted as:
Sy(k) = S(k) + D(k)
When structural, data, and system complexity increase, overall architectural complexity also increases.
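
A short Python sketch of the three Glass-and-Card measures above; the module's fan-out and variable counts in the demo are invented for illustration:

def structural_complexity(fan_out):
    return fan_out ** 2                        # S(k) = fout(k)^2

def data_complexity(tot_var, fan_out):
    return tot_var / (fan_out + 1)             # D(k) = tot_var(k) / [fout(k) + 1]

def system_complexity(tot_var, fan_out):
    # Sy(k) = S(k) + D(k)
    return structural_complexity(fan_out) + data_complexity(tot_var, fan_out)

# Invented module k: fans out to 3 subordinate modules, 8 input/output variables.
print(structural_complexity(3))   # 9
print(data_complexity(8, 3))      # 2.0
print(system_complexity(8, 3))    # 11.0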
Complexity metrics
Complexity metrics are used to measure the complexity of the overall software. The computation of complexity metrics can be done with the help of a flow graph; the resulting measure is called cyclomatic complexity. Cyclomatic complexity is a useful metric for indicating the complexity of a software system. Without the use of complexity metrics, it is very difficult and time-consuming to determine the complexity of a product being designed, and hidden risks and costs emerge; it also becomes difficult for the project team and management to solve problems. Measuring software complexity leads to improved code quality, increased productivity, adherence to architectural standards, reduced overall cost, increased robustness, etc. To calculate cyclomatic complexity, the following equation is used:

Cyclomatic complexity = E - N + 2
where E is the total number of edges and N is the total number of nodes in the flow graph.
• For example, for a flow graph with E = 10 edges and N = 8 nodes, the cyclomatic complexity can be calculated as:
Cyclomatic complexity = E - N + 2 = 10 - 8 + 2 = 4
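
The same arithmetic as a Python sketch, with the flow graph given as an edge list. The ten edges below are invented so that E and N match the worked example:

def cyclomatic_complexity(edges):
    # V(G) = E - N + 2 for a single connected flow graph.
    nodes = {node for edge in edges for node in edge}
    return len(edges) - len(nodes) + 2

# Invented flow graph with N = 8 nodes and E = 10 edges:
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5),
         (5, 6), (6, 2), (6, 7), (7, 8), (5, 8)]
print(cyclomatic_complexity(edges))  # 10 - 8 + 2 = 4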
Data Structure Metrics

Essentially, the purpose of software development and related activities is to process data. Some data is input to a system, program, or module; some data may be used internally; and some data is the output from a system, program, or module. That is why an important set of metrics captures the amount of data input to, processed in, and output from the software. A count based on these data structures is called a Data Structure Metric. These metrics concentrate on variables (and constants) within each module and ignore input-output dependencies.
• There are several Data Structure metrics used to compute the effort and time required to complete a project. These metrics are:
1.The Amount of Data.
2.The Usage of Data within a Module.
3.Program Weakness.
4.The Sharing of Data among Modules.
The Amount of Data
To measure the amount of data, there are several further metrics:
Number of variables (VARS): In this metric, the number of variables used in the program is counted.
Number of operands (η2): In this metric, the number of operands used in the program is counted:
η2 = VARS + Constants + Labels
Total number of occurrences of variables (N2): In this metric, the total number of occurrences of variables is computed.
The Usage of data within a Module
• To measure this metric, the average number of live variables is computed. A variable is live from its first to its last reference within the procedure.
• For example, to characterize the average number of live variables for a program having m modules, we can use this equation (reconstructed here as the mean of the per-module values):
(LV)avg = [Σ (LV)i] / m
• where (LV)i is the average live variable metric computed for the ith module. An analogous equation can compute the average span size (SP) for a program of n spans.
Program weakness
• Program weakness depends on the weakness of its modules. If modules are weak (less cohesive), the effort and time required to complete the project increase.
• Module Weakness (WM) = LV * γ
• A program is normally a combination of various modules; hence program weakness is a useful measure, defined (reconstructed here as the average module weakness) as:
WP = [Σ WMi] / m
where:
• WMi: weakness of the ith module
• WP: weakness of the program
• m: number of modules in the program
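
A minimal Python sketch of the weakness arithmetic above; the per-module live-variable averages and the γ factor are invented for illustration:

def module_weakness(avg_live_vars, gamma):
    return avg_live_vars * gamma               # WM = LV * gamma

def program_weakness(module_weaknesses):
    # WP = sum(WMi) / m, averaged over the program's m modules.
    return sum(module_weaknesses) / len(module_weaknesses)

# Invented data: three modules with average live-variable counts 4, 6 and 5, gamma = 1.5.
wms = [module_weakness(lv, gamma=1.5) for lv in (4, 6, 5)]
print(wms)                    # [6.0, 9.0, 7.5]
print(program_weakness(wms))  # 7.5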
The Sharing of Data among Modules
• As data sharing between modules increases (higher coupling), the parameter passing between modules also increases; as a result, more effort and time are required to complete the project. So the sharing of data among modules is an important metric for calculating effort and time.
Information Flow Metrics

• The other set of metrics we would like to consider is known as Information Flow Metrics. Information flow metrics are based on the following concept: a system consists of components, and it is the work that these components do and how they are fitted together that determine the complexity of the system. The following working definitions are used in information flow:
• Component: Any element identified by decomposing a (software) system into its constituent parts.
• Cohesion: The degree to which a component performs a single function.
• Coupling: The term used to describe the degree of linkage between one component and others in the same system.
• Information flow metrics deal with this type of complexity by observing the flow of information among system components or modules. This metric was proposed by Henry and Kafura, so it is also known as Henry and Kafura's metric.
• The metric is based on the measurement of the information flow among system modules. It is sensitive to the complexity due to interconnections among system components. In this measure, the complexity of a software module is defined to be the sum of the complexities of the procedures included in the module. A procedure contributes complexity due to the following two factors:
1.The complexity of the procedure code itself.
2.The complexity due to the procedure's connections to its environment. The effect of the first factor is included through the LOC (Lines of Code) measure. For the quantification of the second factor, Henry and Kafura defined two terms, namely FAN-IN and FAN-OUT.
• FAN-IN: The FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which the procedure retrieves information.
• FAN-OUT: The FAN-OUT of a procedure is the number of local flows from that procedure plus the number of data structures which the procedure updates.
• Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
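
As a quick illustration, a Python sketch of Henry and Kafura's formula; the procedure length (in LOC) and the fan counts below are invented:

def henry_kafura(length, fan_in, fan_out):
    # Procedure complexity = length * (fan_in * fan_out)^2
    return length * (fan_in * fan_out) ** 2

# Invented procedure: 40 LOC, 3 incoming flows, 2 outgoing flows.
print(henry_kafura(length=40, fan_in=3, fan_out=2))  # 40 * (3*2)^2 = 1440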
Software Project Planning
A software project is the complete procedure of software development, from requirement gathering to testing and maintenance, carried out according to the execution methodologies in a specified period of time to achieve the intended software product.
Need of Software Project Management
Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the customer's requirements. Most significantly, the underlying technology changes and advances so frequently and rapidly that the experience of one product may not be applicable to another. All such business and environmental constraints bring risk to software development; hence, it is essential to manage software projects efficiently.
Software Project Manager
The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard, and they monitor progress to check that development is on time and within budget. The project planning must incorporate the major issues like size & cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand:
• The scope of the work to be completed
• The risk analysis
• The resources required
• The schedule within which the project is to be accomplished
• The process and records to be followed
Software Project planning steps
Software Cost Estimation
For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take. These estimates are needed before development is initiated, but how is this done? Several estimation procedures have been developed, and they have the following attributes in common:
Project scope must be established in advance.
Software metrics are used as a basis from which estimates are made.
The project is broken into small pieces, which are estimated individually.
To achieve a true cost & schedule estimate, several options arise:
Delay estimation until later in the project.
Use simple decomposition techniques to generate project cost and schedule estimates.
Acquire one or more automated estimation tools.
Uses of Cost Estimation
1.During the planning stage, one needs to choose how many engineers are required for the project and to develop a schedule.
2.In monitoring the project's progress, one needs to assess whether the project is progressing according to plan and take corrective action, if necessary.
Cost Estimation Models
• A model may be static or dynamic. In a static model, a single variable is taken as the key element for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no basic variable.
Static, Single Variable Models
A model that makes use of a single variable to calculate desired values such as cost, time, effort, etc. is said to be a single-variable model. The most common equation is:
C = a * L^b
where C = cost, L = size, and a and b are constants.
The Software Engineering Laboratory established a model called the SEL model for estimating its software production. This model is an example of the static, single-variable model:
E = 1.4 L^0.93
DOC = 30.4 L^0.90
D = 4.6 L^0.26
where E = effort (in person-months), DOC = documentation (number of pages), D = duration (in months), and L = number of lines of code (in KLOC).
Static, Multivariable Models
These models depend on several variables describing various aspects of the software development environment. In such models, several variables are needed to describe the software development process, and the selected equation combines these variables to give an estimate of time & cost. These models are called multivariable models.
WALSTON and FELIX developed a model at IBM in which the following equation gives a relationship between lines of source code and effort:
E = 5.2 L^0.91
In the same manner, the duration of development is given by:
D = 4.1 L^0.36
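
Since every equation above has the static single-variable form C = a * L^b, one short Python sketch covers both the SEL and the Walston-Felix models (the 30-KLOC input is illustrative):

def sel_model(L):
    # SEL estimates from size L in KLOC.
    effort = 1.4 * L ** 0.93      # person-months
    doc = 30.4 * L ** 0.90        # pages of documentation
    duration = 4.6 * L ** 0.26    # months
    return effort, doc, duration

def walston_felix(L):
    # Walston-Felix estimates from size L in KLOC.
    effort = 5.2 * L ** 0.91      # person-months
    duration = 4.1 * L ** 0.36    # months
    return effort, duration

L = 30  # an illustrative 30-KLOC product
print(sel_model(L))
print(walston_felix(L))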
Static, Multivariable Models
• The productivity index uses 29 variables which are found to be highly correlated with productivity, as follows (reconstructed here as a weighted sum):
I = Σ Wi * Xi   (summed over i = 1 to 29)
• where Wi is the weight factor for the ith variable and Xi = {-1, 0, +1}: the estimator gives Xi the value -1, 0, or +1 depending on whether the variable decreases, has no effect on, or increases productivity.
COCOMO Model
• Boehm proposed COCOMO (Constructive Cost Model) in 1981. COCOMO is one of the most widely used software estimation models in the world. COCOMO predicts the effort and schedule of a software product based on the size of the software.
The necessary steps in this model are:
1.Get an initial estimate of the development effort from an evaluation of the estimated thousands of delivered lines of source code (KDLOC).
2.Determine a set of 15 multiplying factors from various attributes of the project.
3.Calculate the effort estimate by multiplying the initial estimate by all the multiplying factors, i.e., multiply the values from step 1 and step 2.
COCOMO Model
• The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type shown below:
Ei = a * (KDLOC)^b
• The values of the constants a and b depend on the project type.
Projects Category
• Projects are categorized into three types:
1. Organic: A development project can be treated as organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.
Projects Category
2. Semidetached: A development project can be treated as semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems but may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include developing a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.
Projects Category
• Embedded: A development project is treated as embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational method exist. Examples: ATM, air traffic control.
• For the three product categories, Boehm provides different sets of expressions to predict effort (in units of person-months) and development time from the size estimate in KLOC (Kilo Lines of Code). The effort estimation takes into account the productivity loss due to holidays, weekly days off, coffee breaks, etc.
Software Cost Estimation Stages
• According to Boehm, software cost estimation should be done
through three stages:
1.Basic Model
2.Intermediate Model
3.Detailed Model
Basic Model
• The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
where
KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
a1, a2, b1, b2 are constants for each group of software products,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person-months (PMs).
Basic Model
• Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on the code size are shown below:
Organic: Effort = 2.4 * (KLOC)^1.05 PM
Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM
Embedded: Effort = 3.6 * (KLOC)^1.20 PM
• Estimation of development time
For the three classes of software products, the formulas for estimating the development time based on the effort are given below:
Organic: Tdev = 2.5 * (Effort)^0.38 months
Semi-detached: Tdev = 2.5 * (Effort)^0.35 months
Embedded: Tdev = 2.5 * (Effort)^0.32 months
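
The basic-model formulas above reduce to a small lookup table plus two power laws, sketched here in Python (the 32-KLOC organic example is illustrative):

# (a1, a2, b1, b2) per project category, from the basic COCOMO tables above.
COCOMO_CONSTANTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * kloc ** a2      # person-months
    tdev = b1 * effort ** b2      # months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")  # ~91.3 PM, ~13.9 months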
Intermediate Model
• The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants evaluated according to the various software systems. The intermediate COCOMO model recognizes this limitation and refines the initial estimates obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.
• Classification of Cost Drivers and their attributes:
Product attributes -
• Required software reliability extent
• Size of the application database
• The complexity of the product
Intermediate Model
Hardware attributes -
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time
Personnel attributes -
• Analyst capability
• Software engineering capability
• Applications experience
• Virtual machine experience
• Programming language experience
Intermediate Model
Project attributes -
• Use of software tools
• Application of software engineering methods
• Required development schedule
Detailed COCOMO Model
• Detailed COCOMO incorporates all qualities of the standard version with an assessment of the cost driver's effect on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules; COCOMO is then applied to the individual modules to estimate effort, and the module efforts are summed.
The Six phases of detailed COCOMO are:
1.Planning and requirements
2.System structure
3.Complete structure
4.Module code and test
5.Integration and test
6.Cost Constructive model
The effort is calculated as a function of program size, and a set of cost drivers is applied for every phase of the software lifecycle.
Putnam Resource Allocation Model

The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule & defect rate, as shown in the figure.
Putnam Resource Allocation Model
• Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observation about productivity levels to derive the software equation:
L = Ck * K^(1/3) * td^(4/3)
The various terms of this expression are as follows:
• K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.
• td corresponds to the time of system and integration testing; therefore, td can be approximately considered as the time required to develop the product.
• Ck is the state-of-technology constant and reflects the constraints that impede the progress of the programmer.
Putnam Resource Allocation Model
• Typical values are Ck = 2 for a poor development environment, Ck = 8 for a good software development environment, and Ck = 11 for an excellent environment (where, in addition to following software engineering principles, automated tools and techniques are used).
• The exact value of Ck for a specific task can be computed from the historical data of the organization developing it.
• Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work is required, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.
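
Rearranging the software equation L = Ck * K^(1/3) * td^(4/3) gives the total effort needed to reach a given size in a given time. A hedged Python sketch follows; the inputs are illustrative, and the units of K and td follow whatever convention Ck was calibrated against:

def putnam_effort(L, Ck, td):
    # Solve L = Ck * K**(1/3) * td**(4/3) for the total effort K.
    return (L / (Ck * td ** (4 / 3))) ** 3

# Illustrative values: a 100-KLOC product in a good environment (Ck = 8),
# with a development time td of 1.5 time units.
print(putnam_effort(L=100, Ck=8, td=1.5))

Note that K varies as 1/td^4, so even a modest compression of the schedule inflates the required effort steeply.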
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk"
is a problem that could cause some loss or threaten the progress of the
project, but which has not happened yet.
These potential issues might harm cost, schedule or technical success of
the project and the quality of our software device, or project team morale.
Risk Management is the system of identifying addressing and eliminating
these problems before they can damage the project.
We need to differentiate risks, as potential issues, from the current
problems of the project.
Different methods are required to address these two kinds of issues.
For example, staff storage, because we have not been able to select people
with the right technical skills is a current problem, but the threat of our
technical persons being hired away by the competition is a risk
Risk Management
• A software project can be affected by a large variety of risks. In order to be able to systematically identify the significant risks which might affect a software project, it is essential to classify risks into different classes. The project manager can then check which risks from each class are relevant to the project.
• There are three main classifications of risks which can affect a
software project:
1.Project risks
2.Technical risks
3.Business risks
Risk Management
1. Project risks: Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very tough to monitor and control a software project; it is very tough to control something that cannot be seen. For any manufacturing program, such as the manufacturing of cars, the project executive can see the product taking shape.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance issues. They also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks appear due to the development team's insufficient knowledge of the project.
3. Business risks: This type of risk includes the risks of building an excellent product that no one needs, losing budgetary or personnel commitments, etc.
Risk Management
Other risk categories
Known risks: Those risks that can be uncovered after careful assessment of the project plan, the business and technical environment in which the project is being developed, and other reliable data sources (e.g., an unrealistic delivery date).
Predictable risks: Those risks that are hypothesized from previous project experience (e.g., staff turnover).
Unpredictable risks: Those risks that can and do occur but are extremely tough to identify in advance.
Principle of Risk Management
Global Perspective: In this, we review the bigger system description, design, and implementation, and we look at the chance of the risk occurring and the impact it is going to have.
Take a Forward-Looking View: Consider the threats which may appear in the future, and create plans for directing future events.
Open Communication: Allow the free flow of communication between the client and the team members so that everyone has certainty about the risks.
Integrated Management: In this approach, risk management is made an integral part of project management.
Continuous Process: The risks are tracked continuously throughout the risk management paradigm.
