Cost Estimates in SPM
Effort estimation in software project management is crucial for planning, budgeting, and scheduling.
Various methods have been developed to tackle this challenge, each with its strengths and limitations.
Here are some of the primary methods used for effort estimation:
1. Expert Judgment
This method relies on the experience and intuition of seasoned project managers or team members.
Experts use their knowledge of similar past projects to estimate the effort required for new projects.
While subjective, it can be surprisingly accurate, especially when the experts have a deep understanding
of the domain and the specific technologies involved.
2. Analogous Estimation
Analogous estimation involves comparing the current project with past projects that are similar in size,
complexity, and functionality. This method assumes that the effort required for the current project will
be similar to that of the projects it's being compared to. It's quick and often used in the early stages of
project planning when detailed information is not yet available.
3. Parametric Estimation
Parametric estimation uses statistical models to estimate project effort. It involves identifying the
relationship between various project characteristics (parameters) and the effort required. These models
can be based on industry data or data from the organization's past projects. Examples include regression
models where effort is a function of lines of code, function points, or other quantifiable measures of
software size.
4. Bottom-Up Estimation
In bottom-up estimation, the project is broken down into smaller components or tasks, and the effort
for each is estimated individually. These individual estimates are then summed up to get the total project
effort. This method can be very accurate since it accounts for the details of the project, but it's also time-
consuming and requires a clear understanding of all the project's aspects.
5. Three-Point Estimation
The Three-Point Estimation technique, particularly when applied using the PERT (Program Evaluation
and Review Technique) formula, is a powerful method to estimate task durations or effort in project
management. It is especially useful in addressing the uncertainty and variability inherent in estimating
complex tasks. The technique requires three types of estimates for each task:
O (Optimistic): The best-case scenario, assuming everything proceeds with minimal obstacles.
M (Most Likely): The most probable outcome, assuming a normal level of obstacles and opportunities.
P (Pessimistic): The worst-case scenario, considering potential challenges that could arise.
The formula for calculating the expected duration or effort (E) is:
E = (O + 4M + P) / 6
This formula not only gives a weighted average (with the most likely estimate receiving the highest
weight) but also balances the optimistic and pessimistic views, leading to a more balanced and realistic
estimation.
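As a quick sketch, the PERT formula can be wrapped in a small helper; the task values in the example are made up for illustration:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted three-point (PERT) average: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical task: best case 4 days, most likely 6, worst case 14
print(pert_estimate(4, 6, 14))  # (4 + 24 + 14) / 6 = 7.0 days
```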
6. Delphi Technique
The Delphi technique is a structured form of expert judgment: a panel of experts produces estimates independently and anonymously over several rounds, a facilitator shares summarized feedback between rounds, and the process repeats until the estimates converge.
7. Algorithmic Models (COCOMO)
COCOMO and its successors, like COCOMO II, are algorithmic models that use mathematical formulas
to estimate project effort based on project size (measured in lines of code or function points) and a set
of cost drivers that adjust the base effort estimation. These models provide a more systematic and
repeatable estimation process than some of the more qualitative methods.
8. Function Point Analysis
Function Point Analysis is a method for measuring the functionality delivered by the project,
considering the user's external view of the system. The total count of function points is then used in
conjunction with historical productivity data (function points per person-month) to estimate the effort
required for the project.
9. Story Points (Agile Estimation)
In agile methodologies, such as Scrum, effort estimation is often done using story points. This approach
estimates the effort for user stories based on their complexity, risk, and the amount of work required.
Story points allow teams to estimate the effort for tasks relative to each other without tying the estimates
to specific time durations.
Each of these methods has situations where it is most applicable, and many projects benefit from using
a combination of methods to cross-validate estimates and address different aspects of project planning
and uncertainty.
Effort, in the context of software engineering and project management (particularly within frameworks like COCOMO, the Constructive Cost Model), refers to the amount of human labor and resources required to complete a software project, measured in person-months or person-hours. This
metric is crucial for planning, budgeting, and scheduling software development projects. It helps project
managers estimate how much work is involved in a project and how many people and how much time
will be needed to complete it, which in turn influences the overall cost of the project.
Versions of COCOMO
COCOMO has evolved through different versions, each adding more sophistication and accuracy to the
estimation process:
1. Basic COCOMO: The original version provides a quick and rough estimate of effort based on
the size of the software and a set of three project classes (Organic, Semi-Detached, and
Embedded), each representing different levels of project difficulty.
2. Intermediate COCOMO: This version refines the basic model by introducing cost drivers that
account for various factors affecting productivity and project quality, such as developer
experience, software reliability requirements, and the use of modern tools and techniques.
3. COCOMO II: The most recent version of the model, COCOMO II, updates the model to
accommodate modern software development practices, including object-oriented approaches,
software reuse, and rapid development techniques. COCOMO II is more adaptable to a wider
range of projects and includes models for early prototyping, early design, and post-architecture
stages of project development.
Applying COCOMO involves four broad steps:
• Size Estimation: Estimate the size of the software project in KLOC or function points.
• Model Selection: Choose the appropriate COCOMO model (Basic, Intermediate, or
COCOMO II) based on the project's needs and available information.
• Parameter Specification: For Intermediate and COCOMO II models, specify the values for
the cost drivers relevant to the project.
• Effort and Schedule Estimation: Apply the size and parameter values to the COCOMO
equations to estimate effort (in person-months) and project duration.
COCOMO's parametric nature allows for relatively quick and adaptable project estimates, making it a
valuable tool for project managers. However, like any estimation model, the accuracy of COCOMO's
predictions depends heavily on the quality of the input data and the appropriateness of the chosen model
and parameters for the specific project context.
COCOMO (Constructive Cost Model) is a parametric model used for estimating software project
effort, cost, and schedule. Developed by Dr. Barry Boehm in 1981, it is one of the most widely
recognized and historically significant models in the field of software engineering.
COCOMO is based on a mathematical formula that uses the size of the software project (usually
measured in thousands of lines of code, KLOC) as the primary input parameter. It then adjusts this base
estimate according to a set of cost drivers that reflect the project's characteristics, such as complexity,
required reliability, team experience, and use of modern tools and techniques.
Basic COCOMO
The Basic COCOMO model is the simplest form of the COCOMO series and estimates the software
development effort (and subsequently cost and duration) using a single equation for each type of project.
The projects are categorized into three classes to account for the varying levels of complexity and
difficulty:
1. Organic: Projects with a "small" team and relatively "simple" software solutions, typically
found in business or non-complex scientific applications where the team has a good
understanding of the application domain.
2. Semi-Detached: Projects that fall between organic and embedded, often with mixed
characteristics such as medium-sized teams or projects with varied experience levels and
moderately complex applications.
3. Embedded: Projects developed within tight constraints and high levels of hardware,
software, operational, and interface complexity. This class is characteristic of
software that controls hardware devices, consumer electronics, or highly specialized
applications in aerospace and military systems.
The Basic COCOMO formula for estimating the software development effort (E) in person-months is
given by:
E = a_b × (KLOC)^(b_b)
Where a_b and b_b are constants that depend on the project class (see the table below).
After estimating the effort, the model can also estimate the project duration and staffing.
The strength of Basic COCOMO lies in its simplicity and the ease with which it can provide rough
estimates of project effort. However, its simplicity is also a limitation since it does not account for the
nuances and variations between individual projects beyond the basic classification. It assumes a fixed
relationship between project size and effort, which may not hold true for projects with significant
differences in technology, tools, and team dynamics.
For more detailed and nuanced estimation, Boehm developed extensions to the Basic COCOMO model,
namely Intermediate COCOMO and Detailed COCOMO, which consider additional factors and cost
drivers that can influence project effort and duration.
COCOMO and its variants remain a valuable tool in the arsenal of software project managers for initial
project estimation, especially when calibrated with contemporary data and adjusted for the specific
context of the projects they are managing.
Project Class    a_b    b_b
Organic          2.4    1.05
Semi-Detached    3.0    1.12
Embedded         3.6    1.20
a_b and b_b are constants determined from historical data and represent the productivity and scale
factors, respectively.
The values of a_b and b_b can vary depending on factors such as the organization's development
environment, the type of projects being analyzed, and the characteristics of the development team.
However, they are typically derived from regression analysis of historical project data to provide a
baseline for effort estimation.
While the values of a_b and b_b may not change frequently, they are not fixed and may be adjusted over
time as new data becomes available or as the development environment evolves. Additionally, different
organizations or industries may use slightly different values for a_b and b_b based on their specific
context and experience.
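As a sketch, the Basic COCOMO equations can be coded directly from the table above. The effort constants are those in the table; the duration constants (2.5 and a class-dependent exponent) are the commonly published Basic COCOMO values, and the 50 KLOC input is just an illustration:

```python
# (a_b, b_b) effort constants from the table above, plus the commonly
# published Basic COCOMO duration constants (c_b, d_b).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a, b, c, d = COEFFS[project_class]
    effort = a * kloc ** b        # person-months: E = a_b * KLOC^b_b
    duration = c * effort ** d    # months: D = c_b * E^d_b
    staffing = effort / duration  # average headcount
    return effort, duration, staffing

effort, months, people = basic_cocomo(50, "organic")
print(f"{effort:.1f} PM over {months:.1f} months, ~{people:.1f} people")
```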
1. Product Attributes
• Required Software Reliability (RELY): Reflects the degree of importance placed on the
reliability of the software. It's measured on a scale from very low to very high.
• Size of the Application Database (DATA): Measures the size and complexity of the database
used by the application. Larger databases typically require more effort to develop and maintain.
• Product Complexity (CPLX): Evaluates the complexity of the product in terms of its
architecture, interfaces, and integration requirements. It's rated on a scale from very low to extra
high.
2. Hardware Attributes:
• Execution Time Constraint (TIME): Reflects how much of the available processor time the
software must consume; tighter constraints increase effort.
• Main Storage Constraint (STOR): Reflects how tightly the software is constrained by the
available memory.
3. Personnel Attributes:
• Analyst Capability (ACAP): Reflects the skill and experience level of the analysts involved
in the project.
• Programmer Capability (PCAP): Measures the skill and experience level of the
programmers.
• Personnel Continuity (PCON): Accounts for the personnel turnover within the project team.
4. Project Attributes:
• Use of Modern Programming Practices (MODP): Reflects the extent to which modern
programming practices, tools, and techniques are employed in the project.
• Use of Software Tools (TOOL): Evaluates the sophistication and effectiveness of the software
development tools used.
Benefits:
• Increased Precision: By considering a broader range of project attributes and cost drivers,
Intermediate COCOMO provides more accurate estimates compared to Basic COCOMO.
• Customization: The model allows project managers to tailor estimates based on their specific
project characteristics, team capabilities, and development environment.
• Reflects Industry Trends: By incorporating factors such as the use of modern tools and
techniques, Intermediate COCOMO stays relevant in the rapidly evolving software
development landscape.
Limitations:
• The cost driver ratings are assigned subjectively, so different estimators can produce different
estimates for the same project.
• The model still depends on an accurate size (KLOC) estimate, which is difficult to produce early
in a project.
Overall, Intermediate COCOMO offers a valuable refinement to the original COCOMO model by
incorporating a more comprehensive set of factors that influence software project effort and duration.
By carefully considering these factors, project managers can generate more precise estimates, leading
to better project planning and management.
In Intermediate COCOMO, the mathematical formula for estimating software project effort is similar
to the one used in Basic COCOMO, but it incorporates adjustment factors (also known as cost drivers)
to account for various project attributes. The formula for estimating effort (E) in person-months is given
as:
E = a_b × (KLOC)^(b_b) × EAF
Where a_b and b_b are the class-dependent constants used in Basic COCOMO, and EAF is the Effort
Adjustment Factor described below.
The Effort Adjustment Factor (EAF) is calculated based on the product's characteristics and the ratings
assigned to the various cost drivers. The formula for calculating EAF is:
EAF = EM_1 × EM_2 × … × EM_n
Where EM_i is the effort multiplier assigned to the i-th of the n cost drivers.
Each cost driver rating (EM) is determined based on the project's attributes, such as required software
reliability, product complexity, personnel capabilities, and use of modern tools and techniques. These
ratings are then converted into adjustment factors using predefined tables provided by COCOMO.
Once all the adjustment factors are determined, they are multiplied together to calculate the overall
Effort Adjustment Factor (EAF). This EAF value is then applied to the Basic COCOMO effort
estimation formula to account for the project's specific attributes and characteristics.
In summary, the steps for mathematically calculating effort estimation in Intermediate COCOMO are
as follows:
1. Estimate the project size in KLOC.
2. Rate each cost driver and look up the corresponding effort multiplier (EM).
3. Multiply the effort multipliers together to obtain the EAF.
4. Apply E = a_b × (KLOC)^(b_b) × EAF using the constants for the project's class.
This mathematical approach allows project managers to generate more accurate effort estimates by
considering a broader range of project attributes and their impact on software development effort.
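As a minimal sketch, the procedure can be expressed in a few lines of Python (using the same class constants as the Basic model, since the text applies the EAF to the Basic formula):

```python
from math import prod

# (a_b, b_b) per project class, as in the Basic COCOMO table
CONSTANTS = {"organic": (2.4, 1.05),
             "semi-detached": (3.0, 1.12),
             "embedded": (3.6, 1.20)}

def intermediate_cocomo(kloc, project_class, multipliers):
    """E = a_b * KLOC^b_b * EAF, with EAF the product of the EM_i."""
    a, b = CONSTANTS[project_class]
    eaf = prod(multipliers)
    return a * kloc ** b * eaf

# With all drivers rated nominal (EM = 1.00), the result reduces to
# the Basic COCOMO estimate.
print(intermediate_cocomo(100, "semi-detached", [1.0] * 8))
```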
Let's walk through a detailed case study to illustrate how Intermediate COCOMO can be applied to
estimate software project effort.
A software development company, TechSolutions Inc., has been tasked with developing a web
application for a client in the e-commerce sector. The application will allow users to browse and
purchase products online. The project involves a team of developers, designers, and testers.
Project Attributes:
1. Size of the Application (KLOC): The estimated size of the application is 100,000 lines of code.
2. Product Complexity (CPLX): The application is moderately complex due to its requirement for
user authentication, product search functionality, shopping cart management, and payment
processing.
3. Required Software Reliability (RELY): Reliability is crucial for the application as it deals with
financial transactions. High reliability is required.
4. Analyst Capability (ACAP): The analysts in the team are experienced and skilled in gathering and
analyzing requirements.
5. Programmer Capability (PCAP): The programmers have average experience levels.
6. Personnel Continuity (PCON): The project team has a low turnover rate.
7. Use of Modern Programming Practices (MODP): Modern programming practices and tools are
moderately employed.
8. Use of Software Tools (TOOL): The team utilizes sophisticated software development tools and
integrated development environments (IDEs).
Based on the project attributes, the team assigns the following effort multipliers: RELY = 1.15,
DATA = 0.90, CPLX = 1.00, ACAP = 0.85, PCAP = 1.00, PCON = 1.10, MODP = 1.00, TOOL = 0.90.
Calculation of EAF:
EAF = RELY × DATA × CPLX × ACAP × PCAP × PCON × MODP × TOOL
EAF = 1.15 × 0.90 × 1.00 × 0.85 × 1.00 × 1.10 × 1.00 × 0.90
EAF ≈ 0.871
Calculation of Effort (E):
E = a_b × (KLOC)^(b_b) × EAF
Given that this is a semi-detached project:
a_b = 3.0
b_b = 1.12
KLOC = 100
E = 3.0 × (100)^1.12 × 0.871
E = 3.0 × 173.8 × 0.871
E ≈ 454 person-months
Conclusion
Based on the Intermediate COCOMO estimation, the effort required for the development of the web
application is approximately 454 person-months. This estimate can be used by TechSolutions Inc.
to plan resources, schedule activities, and budget for the project accordingly.
This case study demonstrates how Intermediate COCOMO can be applied to estimate software project
effort by considering various project attributes and their influence on development effort.
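The case-study arithmetic can be reproduced in a few lines of Python, multiplying the listed effort multipliers into the EAF and applying the semi-detached constants:

```python
from math import prod

multipliers = {"RELY": 1.15, "DATA": 0.90, "CPLX": 1.00, "ACAP": 0.85,
               "PCAP": 1.00, "PCON": 1.10, "MODP": 1.00, "TOOL": 0.90}

eaf = prod(multipliers.values())
effort = 3.0 * 100 ** 1.12 * eaf  # semi-detached constants, 100 KLOC
print(f"EAF = {eaf:.3f}, Effort = {effort:.0f} person-months")
```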
Cost Drivers                          Very Low  Low   Nominal  High  Very High  Extra High
Required Software Reliability         0.90      0.95  1.00     1.10  1.20       1.30
Size of Application Database          0.95      0.98  1.00     1.05  1.10       1.15
Product Complexity                    0.90      0.95  1.00     1.10  1.20       1.30
Analyst Capability                    0.85      0.90  1.00     1.10  1.20       1.25
Programmer Capability                 0.85      0.90  1.00     1.10  1.20       1.25
Personnel Continuity                  0.95      1.00  1.00     1.05  1.10       1.15
Use of Modern Programming Practices   0.95      0.98  1.00     1.05  1.10       1.15
Use of Software Tools                 0.95      0.98  1.00     1.05  1.10       1.15
COCOMO II
The COCOMO II (Constructive Cost Model II) is an advanced software cost estimation model
designed to provide an accurate projection of the efforts, time, and resources necessary for developing
a software project. It's an evolution of the original COCOMO model developed by Dr. Barry Boehm
in the 1980s, updated to accommodate the changes and advancements in software development
practices. COCOMO II offers a more flexible and detailed approach for estimating the cost of
software development projects, especially useful for modern software development that includes
concepts like rapid development, reuse, and non-sequential life cycles.
COCOMO II comprises three distinct models, applied sequentially throughout the software
development process, each tailored to the information available at its phase in the development cycle:
1. Application Composition Model: Used in the earliest stages, such as prototyping, when the system
is assembled from screens, reports, and reusable components; size is measured in object points.
2. Early Design Model: Used once requirements are stable but before the architecture is fixed; size
is typically measured in function points.
3. Post-Architecture Model: Used during and after detailed design, once the architecture is defined;
it is the most detailed model, using source lines of code or function points together with a full set
of cost drivers.
Each model within COCOMO II serves a unique purpose, tailored to the specific stage of
software development. They collectively ensure that project managers and development
teams can make informed decisions throughout the project lifecycle, from early planning and
feasibility studies to detailed design and development.
Software Reuse
• What It Means: COCOMO II understands that not every part of a new software project needs
to be built from scratch. Teams often use existing software components, either by directly
incorporating them into the project or by modifying them to fit new requirements. This reuse
can significantly reduce the effort and time needed for development.
• How It Works: The model adjusts the overall effort estimate based on the amount (how much
code) and type (completely new, slightly modified, or directly reused) of software being reused.
This means if your project can utilize a lot of existing components, the estimated effort and cost
will be lower.
Scale Factors
• What They Are: Scale factors are elements that reflect how various attributes of a project
impact its overall difficulty and, therefore, the effort required to complete it.
• Examples Include:
a. Project Size: Larger projects require more effort.
b. Complexity: More complex projects are harder to manage and develop.
c. Volatility: Projects with frequent changes (in requirements, design, etc.) are more
challenging to estimate and complete.
• Impact: These factors are used to adjust the base effort estimate to account for the size,
complexity, and other scaling impacts of the project.
Cost Drivers
• What They Are: Cost drivers are project-specific factors that influence the cost and effort
needed beyond the basic size of the software.
• Examples Include:
a. Team Experience: More experienced teams can work more efficiently.
b. Software Tools: The use of advanced development tools can speed up the development
process.
c. Documentation Quality: High-quality documentation can make the development
process smoother and faster.
• How They Work: Each cost driver adjusts the effort estimate up or down based on its presence
or quality in the project. For example, a highly experienced team might reduce the estimated
effort, while a project that requires extensive documentation might increase it.
Maintenance Model
• Purpose: Recognizes that a significant portion of software costs occur after the initial
development, during the maintenance phase.
• What It Does: Provides a way to estimate the effort required for maintaining software after its
initial release. This includes fixing bugs, updating features, and ensuring the software continues
to operate with new hardware or software environments.
• Importance: By including maintenance in the estimation, COCOMO II gives a more
comprehensive view of a software project's lifecycle costs.
COCOMO II provides a comprehensive framework that can be tailored to a wide range of software
projects, from small, simple applications to large, complex systems. It is better suited for today's
software development environment, where rapid development cycles, component-based assembly, and
software reuse are common.
COCOMO II's methodology for estimating the cost and effort required for software
development projects is both comprehensive and adaptable, accommodating various types of
software projects from early development stages through maintenance. Here's a breakdown of
its key components, making it easier to understand for those not familiar with software
project estimation:
Size Estimation
• What It Is: The foundation of any cost estimation in COCOMO II. It quantifies the amount of work
involved in a software project.
• How It's Measured:
• KSLOC (Kilo Source Lines of Code): Estimates based on the lines of code expected in the
final product.
• Function Points (FP): A measure based on the functionality provided by the system, such as
inputs, outputs, user interactions, and the complexity of the system's internal operations.
• Object Points (OP): Used mainly in the Application Composition model; counts the number of
screens, reports, and third-party modules, each weighted by complexity.
• Why It Matters: The size directly influences the effort and time estimates. A larger size typically
means more effort and longer duration.
Cost Drivers
• What They Are: Elements that can increase or decrease the effort needed beyond the basic workload
indicated by size.
• Types of Impact:
• Software Complexity: More complex algorithms or architectures can increase effort.
• Developer Capability: Highly skilled developers can reduce effort and time.
• Documentation Quality: Comprehensive and clear documentation can streamline
development.
• Software Tools: Advanced tools can accelerate development but might require additional
training.
• Impact Mechanism: Each driver adjusts the effort estimate by a factor rated from "very low" to
"extra high."
Scale Factors
• Difference From Cost Drivers: While cost drivers adjust effort linearly, scale factors impact the
effort non-linearly, meaning their effect increases more dramatically as the project scales up.
• What They Reflect: Attributes like project size, team cohesion, or the duration of the project that
globally impact the project's effort and duration.
• Use Case: Particularly important in the Post-Architecture phase of development when the project's
structure and team dynamics become clearer.
Effort Multipliers
• Function: These multipliers adjust the base effort estimate to reflect the influence of cost drivers.
• Application: Specific to various development phases and project characteristics, they are applied
directly to the size estimation to refine the effort required.
Equations
• Role: Empirical equations calculate the core metrics of a project: effort, duration, and staffing.
• Basis: These calculations consider size estimates, cost drivers, scale factors, and effort multipliers to
provide a comprehensive estimation.
• Outcome: The result is a detailed projection of how much effort (in person-months), how long
(duration), and how many people (staffing requirements) are needed to complete the project.
In essence, COCOMO II offers a structured approach to estimating the resources required for
software development, accommodating a wide range of project types and complexities. Its
adaptability lies in its ability to consider a variety of factors from the project's size and
complexity to the team's capabilities and the tools at their disposal, making it a valuable tool
for project managers and developers alike.
Duration = 3.67 × (Effort)^F
Where F = 0.28 + 0.2 × (E − B)
• E is the exponent derived from the scale factors, the same exponent used in the effort estimation.
• B is a constant baseline value. In traditional COCOMO II, B is often taken as 0.91
for small to medium-sized projects but can vary based on the project's size and
complexity.
• Effort is the effort calculated in the previous step.
The formula takes into account the non-linear relationship between effort and time. The
number of people required (staffing) can be approximated by dividing the effort by the
duration.
Example:
Assuming you have an adjusted size of 32 KSLOC (after applying cost drivers and
considering the software's complexity), and the project's scale factors suggest an E value of
1.15, the effort calculation would look something like this:
Effort = 2.94 × (32)^1.15
Effort ≈ 2.94 × 53.8 ≈ 158.2 person-months
Duration = 3.67 × (158.2)^(0.28 + 0.2 × (1.15 − 0.91)) = 3.67 × (158.2)^0.328
Duration ≈ 19.3 months
And if the average labor rate is $7,500 per person-month, the cost would be:
Cost = 158.2 × 7,500 ≈ $1,186,500
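The worked numbers can be checked with a short script, using the labor rate and scale-factor values given in the example:

```python
size_ksloc = 32
E, B = 1.15, 0.91        # scale-factor exponent and baseline
labor_rate = 7_500       # dollars per person-month

effort = 2.94 * size_ksloc ** E                     # person-months
duration = 3.67 * effort ** (0.28 + 0.2 * (E - B))  # months
cost = effort * labor_rate
print(f"{effort:.1f} PM, {duration:.1f} months, ${cost:,.0f}")
```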
These steps offer a structured approach to estimating the effort, duration, and cost of software
projects using COCOMO II, providing vital insights for project planning and management.
Example
Imagine a project estimated to be 10 KSLOC with average complexity and environment
factors. Suppose after adjustments for cost drivers and scale factors, the effort calculation
equation looks like this:
Effort = 2.94 × (10)^1.05 ≈ 2.94 × 11.2 ≈ 33.0 person-months
If the average cost per person-month is $7,000, the estimated cost would be approximately:
Cost = 33.0 × 7,000 ≈ $231,000
This calculation provides a baseline for budgeting, but it's essential to periodically revisit and
adjust the estimate based on actual project performance and newly discovered information.
Case Study
Background
Company: XTech Innovations, a mid-sized software development firm
specializing in enterprise solutions.
• Contact Management
• Sales Pipeline Tracking
• Customer Support & Service
• Marketing Campaign Management
• Reporting and Dashboards
• Integration with third-party services (email, social media, etc.)
Size Estimation: Given the lack of detailed requirements at the very beginning, the
project management team decides to use analogy-based estimation and expert
judgment, comparing the new CRM project to a past project of similar scope.
Based on their experience and initial requirements, they estimate the project to be
around 85,000 Source Lines of Code (SLOC).
• SLOC: 85,000
Calculation:
• Assume average values for cost drivers and scale factors not specified.
• Using the COCOMO II formula and the given parameters, the project management
team calculates an initial effort estimate.
With this refined estimate, they recalculate the effort and duration, considering
more accurate cost drivers and scale factors. This results in a slight increase in the
estimated effort to 350 person-months.
Cost Estimation
Given the effort estimate, the next step is to calculate the cost. XTech uses an
average labor rate of $7,500 per person-month, which includes salaries, overheads,
and other expenses, giving an estimated cost of 350 × $7,500 = $2,625,000.
Conclusion
The CRM project is completed in 370 person-months, slightly over the initial
estimate but within a revised budget that accounted for discovered complexities.
The project's final cost is approximately $2,775,000.
This case study demonstrates the iterative nature of project estimation and the
importance of flexibility, continuous monitoring, and communication with
stakeholders. COCOMO II and similar models provide a structured approach to
estimation, but the art of project management lies in adapting to changes and
learning from experience.
Object Points
Object Points are a metric used in software development to estimate the size and
complexity of a software application. They are part of the COCOMO II model,
particularly relevant in the Application Composition model, which is used during
the early stages of project development. Object Points offer a way to measure
the functional size of a project based on the software's user interface and
underlying operations, making them particularly useful for projects where rapid
application development (RAD) techniques are employed, or for projects that use
iterative development.
1. Screens (or User Interfaces): The total number of distinct screens or user
interfaces that the application will have. Each screen is categorized by complexity
(simple, medium, or complex) based on criteria such as the number of fields or the
amount of data processing required.
2. Reports: Similar to screens, this counts the total number of reports the software
will generate, with each report also classified as simple, medium, or complex based
on factors like layout complexity and data aggregation requirements.
3. Interfaces: The number of interfaces to other systems or applications. Interfaces
are also assessed for complexity, which might consider the amount of data
exchanged, the need for data transformation, and the communication protocols
used.
Calculating Object Points
After identifying and classifying the screens, reports, and interfaces, each
component is assigned a weight based on its complexity. For example:
Screen: simple = 1, medium = 2, complex = 3
Report: simple = 2, medium = 5, complex = 8
Interface: simple = 5, medium = 7, complex = 10
(These values are illustrative; actual weights might vary based on specific
methodologies or organizational standards.)
The total Object Points for a project are calculated by summing the weighted
counts of screens, reports, and interfaces. This total gives a quantifiable measure of
the application's size and complexity.
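A small sketch of the tally, using illustrative weights (the interface weights in particular are assumptions rather than a standard table):

```python
# Illustrative complexity weights; a real project would substitute the
# weights from its own methodology or the COCOMO II tables.
WEIGHTS = {
    "screen":    {"simple": 1, "medium": 2, "complex": 3},
    "report":    {"simple": 2, "medium": 5, "complex": 8},
    "interface": {"simple": 5, "medium": 7, "complex": 10},
}

def object_points(components):
    """components: iterable of (kind, complexity) pairs."""
    return sum(WEIGHTS[kind][level] for kind, level in components)

# Hypothetical app: 6 simple screens, 2 complex screens,
# 3 medium reports, 1 medium interface
app = ([("screen", "simple")] * 6 + [("screen", "complex")] * 2 +
       [("report", "medium")] * 3 + [("interface", "medium")])
print(object_points(app))  # 6*1 + 2*3 + 3*5 + 1*7 = 34
```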
Function Points
Function Points are a standardized, technology-independent measure used to
estimate the size and complexity of software applications. Developed in the 1970s
by Allan Albrecht at IBM, the method has since been refined and adopted globally.
Function Points quantify the functionality delivered to the user, based
primarily on the logical design and user requirements, rather than on
technical complexity or the programming effort required. This makes Function
Points particularly useful for estimating projects early in the development cycle,
comparing productivity across projects and technologies, and benchmarking.
1. External Inputs (EI): These are the operations where data enters the system from
external sources. An example is data entered by a user through a form on a
website.
2. External Outputs (EO): These refer to the operations where processed data is
sent back to the user or to another system. Reports, search results, and automated
email notifications are typical examples.
3. External Inquiries (EQ): These operations involve both an input and an output,
essentially a request for information where the output is directly related to the input
query without significant processing. A search feature could be classified as an
inquiry, provided it merely retrieves and displays data without substantial
manipulation.
4. Internal Logical Files (ILF): These represent the user-identifiable groups of
logically related data or information maintained within the system. An example
could be a user database.
5. External Interface Files (EIF): These are similar to ILFs but are used to refer to
logically related data that is used for reference purposes only and is maintained by
another system. These files are typically accessed or utilized by the software being
developed but are not directly maintained or controlled by it. An example is accessing a
third-party ZIP code database.
Calculating Function Points
The calculation process involves several steps:
1. Counting: Identify and count instances of the five components within the
application.
2. Weighting: Each of these components is then weighted according to its
complexity (simple, average, or complex). Complexity is determined by factors
like the number of data elements involved and the degree of interaction with other
components.
3. Summing: The weighted counts are summed to produce a total Function Point
count.
4. Adjusting: The total is optionally adjusted to account for various factors affecting
the project, such as the data communication needs, performance requirements, and
operational constraints. This adjustment is made using a Value Adjustment Factor
(VAF), which can increase or decrease the total Function Points.
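The counting, weighting, summing, and adjusting steps above can be sketched in Python. The component weights below are the commonly published IFPUG values, while the counts and characteristic ratings are purely illustrative; a real count would follow the full IFPUG counting rules.

```python
# IFPUG-style weights for the five component types, by assessed complexity.
# These are the commonly published values; confirm against the current
# IFPUG Counting Practices Manual before relying on them.
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """Sum the weighted counts. `counts` maps (type, complexity) -> instances."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

def value_adjustment_factor(gsc_ratings):
    """VAF = 0.65 + 0.01 * sum of the 14 General System Characteristic
    ratings, each scored 0-5."""
    return 0.65 + 0.01 * sum(gsc_ratings)

def adjusted_fp(counts, gsc_ratings):
    """Adjusted Function Points = unadjusted count scaled by the VAF."""
    return unadjusted_fp(counts) * value_adjustment_factor(gsc_ratings)

# Hypothetical project; the counts below are illustrative only.
counts = {
    ("EI", "simple"): 3,    # three simple input forms
    ("EO", "average"): 2,   # two average-complexity reports
    ("EQ", "simple"): 1,    # one simple lookup
    ("ILF", "average"): 1,  # one internally maintained data store
    ("EIF", "simple"): 1,   # one externally maintained reference file
}
gsc = [2] * 14              # a middling rating for each characteristic

ufp = unadjusted_fp(counts)     # 3*3 + 2*5 + 3 + 10 + 5 = 37
afp = adjusted_fp(counts, gsc)  # 37 * (0.65 + 0.28) ≈ 34.41
```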
Using Function Points
The total Function Points provide a measure of the functional size of the software.
This measure can be used to estimate project effort, cost, and duration by
comparing it against historical data of similar projects. The beauty of Function
Points lies in their independence from programming language, development
methodology, or technology, which allows for consistent and objective
comparisons across different projects or teams.
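Turning a Function Point count into an effort or cost figure is then a matter of dividing by a historical productivity rate. The sketch below assumes a rate of 10 FP per person-month and a hypothetical monthly cost; in practice both numbers come from the organization's own project history.

```python
# Rough effort and cost estimates from a Function Point count.
# The productivity rate and monthly cost are illustrative assumptions;
# real values are calibrated from historical project data.

def estimate_effort(function_points, fp_per_person_month):
    """Estimated effort in person-months."""
    return function_points / fp_per_person_month

def estimate_cost(effort_person_months, cost_per_person_month):
    """Estimated cost given a fully loaded monthly rate."""
    return effort_person_months * cost_per_person_month

effort = estimate_effort(350, 10)    # 350 FP at 10 FP/person-month -> 35.0
cost = estimate_cost(effort, 8000)   # 35 person-months at 8000 each -> 280000
```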
Notes
• Object Points are particularly useful in environments emphasizing screen and
report generation, and when rapid application development (RAD) methodologies
are used. They provide a quick, early estimate of effort based on the number and
complexity of screens, reports, and interfaces.
• Function Points focus more on the functionalities that the software provides from
a user's perspective, including data processing, data storage, and data inquiry
functionalities, regardless of how these functions are implemented technically.
Both these metrics serve as tools to estimate the size of a software development
project, which can then be used to estimate effort, cost, and duration. The choice
between Object Points and Function Points often depends on the project's nature,
the development stage, and the available information for estimation.
5. Three-Point Estimation
The Three-Point Estimation technique, particularly when applied using the PERT (Program
Evaluation and Review Technique) formula, is a powerful method to estimate task durations
or effort in project management. It is especially useful in addressing the uncertainty and
variability inherent in estimating complex tasks. The technique requires three estimates for
each task: an optimistic estimate (O), a most likely estimate (M), and a pessimistic
estimate (P).
The formula for calculating the expected duration or effort (E) is:
E = (O + 4M + P) / 6
This formula not only gives a weighted average (with the most likely estimate receiving the
highest weight) but also balances the optimistic and pessimistic views, leading to a more
balanced and realistic estimation.
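The PERT formula is simple enough to compute directly. The sketch below also includes the standard deviation (P − O) / 6 that commonly accompanies PERT estimates as a measure of uncertainty; the task figures are illustrative.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT expected effort: E = (O + 4M + P) / 6.
    The most likely estimate carries four times the weight of the extremes."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Common companion measure of uncertainty: sigma = (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Illustrative task: 4, 6, and 14 days for O, M, and P.
e = pert_estimate(4, 6, 14)  # (4 + 24 + 14) / 6 = 7.0 days
s = pert_std_dev(4, 14)      # 10 / 6 ≈ 1.67 days
```

Note how the pessimistic outlier pulls the expected value to 7.0 days, above the most likely estimate of 6, which is exactly the balancing effect described above.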
6. Delphi Technique
The Delphi Technique is a structured communication technique, originally developed as a
systematic, interactive forecasting method. It relies on a panel of experts who anonymously
provide their estimates. After each round, the range of the estimates is shared, and the experts
are allowed to adjust their estimates based on the information provided by the other
participants. This process is repeated until a consensus is reached.
Each of these methods has situations where it is most applicable, and many projects benefit
from using a combination of methods to cross-validate estimates and address different aspects
of project planning and uncertainty.