BCS501 Software Engineering & Project Management
Regulation – 2022 (CBCS Scheme)
Module 5
Concept of Quality:
Objective Assessment:
o Assess the likely quality of the final system.
Development Perspective:
o Ensure that the development methods used will lead to the required quality in the
final system.
Quality Concerns:
• Key points in the Step Wise framework where quality is particularly emphasized:
o Review the overall quality aspects of the project plan at this stage.
General Expectation:
o Final customers and users are increasingly concerned about software quality,
particularly reliability.
System Requirements:
Measuring Quality:
• Good Measure: Relates the number of units to the maximum possible (e.g., faults
per thousand lines of code).
• Direct Measurement: Measures the quality itself (e.g., faults per thousand lines of
code).
• Indirect Measurement: Measures an indicator of the quality (e.g., number of user
inquiries at a help desk as an indicator of usability).
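For instance, a direct reliability measure such as faults per thousand lines of code (KLOC) can be computed as in the following small illustrative sketch (the function name is hypothetical, not a prescribed tool):

```python
def faults_per_kloc(fault_count: int, lines_of_code: int) -> float:
    """Direct quality measure: faults per thousand lines of code."""
    return fault_count / (lines_of_code / 1000)

# Example: 18 faults found in a 12,000-line system -> 1.5 faults/KLOC
print(faults_per_kloc(18, 12_000))
```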
Setting Targets:
• Impact on Project Team: Quality measurements set targets for team members.
• Meaningful Improvement: Ensure that improvements in measured quality are
meaningful.
o Example: Counting errors found in program inspections may not be
meaningful if errors are allowed to pass to the inspection stage rather than
being eradicated earlier.
1. Definition/Description
o Definition: Clear definition of the quality characteristic.
o Description: Detailed description of what the quality characteristic entails.
2. Scale
o Unit of Measurement: The unit used to measure the quality characteristic
(e.g., faults per thousand lines of code).
1. Availability:
o Definition: The percentage of a given time interval during which the system is usable.
These measurements help quantify and assess the reliability and maintainability of
software systems, ensuring they meet desired quality standards.
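As a minimal illustration (not from the text), availability over an interval can be computed from logged downtime; the function name and inputs here are hypothetical:

```python
def availability(total_hours: float, downtime_hours: float) -> float:
    """Availability as the percentage of an interval the system was usable."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return 100.0 * (total_hours - downtime_hours) / total_hours

# Example: a 720-hour month with 3.6 hours of downtime -> 99.5% availability
print(availability(720, 3.6))
```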
ISO 9126 is a significant standard for defining software quality attributes and providing a
framework for assessing them. It identifies six top-level quality characteristics
(functionality, reliability, usability, efficiency, maintainability, and portability), each
broken down into sub-characteristics. Here are the key aspects and characteristics defined
by the standard:
5. Maintainability:
o Definition: The effort needed to make changes to the software.
o Sub-characteristics: Analyzability, modifiability, testability.
6. Portability:
o Definition: The ability of the software to be transferred from one
environment to another.
o Sub-characteristics: Adaptability, installability, co-existence.
Quality in Use
• Definition: Focuses on how well the software supports specific user goals in a
specific context of use.
• Elements: Effectiveness, productivity, safety, satisfaction.
Compliance
• Definition: The extent to which the software adheres to relevant standards,
conventions, and regulations.
Interoperability
• Definition: Refers to the ability of the software to interact with other systems
effectively.
• Clarification: ISO 9126 uses "interoperability" instead of "compatibility" to avoid
confusion with another characteristic called "replaceability".
• Importance: Ensures seamless integration and communication between different
systems.
Maturity
• Definition: Refers to the frequency of failure due to faults in the software; the more
mature the software, the lower its failure rate.
Recoverability
• Definition: Refers to the capability of the software to restore the system to its
normal operation after a failure or disruption.
• Security: Focuses on access control and protecting the system from unauthorized
access, ensuring confidentiality, integrity, and availability.
• Recoverability: Focuses on system resilience and the ability to recover from
failures, ensuring continuity of operations.
Learnability
• Definition: Refers to the ease with which users can learn to operate the software.
• Focus: Primarily on the initial phase of user interaction with the software.
• Measurement: Assessed by the time it takes for new users to become
proficient with the software, often measured in training hours or tasks
completed.
Operability
• Definition: Refers to the ease with which users can operate and navigate the
software efficiently.
• Focus: Covers the overall usability of the software during regular use and
over extended periods.
Importance of Distinction
• Learnability: Critical for software that requires quick adoption and minimal
training, ensuring users can start using the software effectively from the
outset.
• Operability: Crucial for software used intensively or for extended periods,
focusing on efficiency, ease of navigation, and user comfort.
Analysability
• Definition: Refers to the ease with which the cause of a failure in the software can
be determined.
• Focus: Helps in diagnosing and understanding software failures or issues quickly
and accurately.
Changeability
• Definition: Also known as flexibility, changeability refers to the ease with which
software can be modified or adapted to changes in requirements or environment.
Clarification of Terms
Portability Compliance
• Definition: Refers to the adherence of the software to standards that facilitate its
transferability and usability across different platforms or environments.
• Focus: Ensures that the software can run efficiently and effectively on various
hardware and software configurations without needing extensive modifications.
• Importance: Facilitates broader deployment and reduces dependency on specific
hardware or software configurations.
Replaceability
• Definition: Refers to the capability of the software to be used in place of other
specified software for the same purpose in the same environment.
Coexistence
• Definition: Refers to the ability of the software to peacefully share resources and
operate alongside other software components within the same environment.
• Focus: Does not necessarily involve direct data exchange but ensures
compatibility and non-interference with other software components.
• Importance: Enables integration of the software into complex IT ecosystems
without conflicts or performance degradation.
ISO 9126 provides structured guidelines for assessing and managing software quality
characteristics based on the specific needs and requirements of the software product. It
emphasizes the variation in importance of these characteristics depending on the type and
context of the software product being developed.
Once the software product requirements are established, ISO 9126 suggests the following
steps:
1. Identify Quality Objectives: Judge the relative importance of each quality
characteristic for the product, based on its type and intended context of use.
2. Define Metrics and Measurements: Establish measurable criteria and metrics for
evaluating each quality characteristic, ensuring they align with the defined
objectives and user expectations.
3. Plan Quality Assurance Activities: Develop a comprehensive plan for quality
assurance activities, including testing, verification, and validation processes to
ensure adherence to quality standards.
4. Monitor and Improve Quality: Continuously monitor software quality
throughout the development lifecycle, identifying areas for improvement and
implementing corrective actions.
The relative importance of individual quality characteristics depends on the type of system:
• Reliability: Critical for safety-critical systems where failure can have severe
consequences. Measures like mean time between failures (MTBF) are essential.
• Efficiency: Important for real-time systems where timely responses are crucial.
Measures such as response time are key indicators.
• Internal measurements like code execution times can help predict external qualities
like response time during software design and development.
• Predicting external qualities from internal measurements is challenging and often
requires validation in the specific environment where the software will operate.
• ISO 9126 acknowledges that correlating internal code metrics to external quality
characteristics like reliability can be difficult.
• This challenge is addressed in a technical report rather than a full standard,
indicating ongoing research and development in this area.
To compare candidate products, measured values for each quality can first be mapped onto
a simple quality score: for example, a transaction response time under 2 seconds might
score 5, while a response time of 2-3 seconds scores 4.
Each quality characteristic is then given an importance weighting, each product is rated on
each characteristic, and the rating is multiplied by the weighting to give a weighted score.
Summing the weighted scores gives an overall figure for each product: in the worked
example, usability was weighted 3, efficiency 4, and maintainability 3, and product A
scored 17 overall against 19 for product B, suggesting that product B better satisfies the
quality requirements overall.
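A minimal sketch of this weighted-scoring comparison (the weights and ratings below are illustrative values, not the textbook's exact table):

```python
# Illustrative weighted quality scoring of two candidate products.
weights = {"usability": 3, "efficiency": 4, "maintainability": 3}

# Ratings on a 0-5 quality scale for each product (example values only).
ratings_a = {"usability": 1, "efficiency": 2, "maintainability": 2}
ratings_b = {"usability": 3, "efficiency": 2, "maintainability": 1}

def overall_score(weights: dict, ratings: dict) -> int:
    """Sum of weight * rating over all quality characteristics."""
    return sum(weights[q] * ratings[q] for q in weights)

print(overall_score(weights, ratings_a))  # overall score for product A
print(overall_score(weights, ratings_b))  # overall score for product B
```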
Understanding the differences between product metrics and process metrics is crucial in
software development:
1. Product Metrics:
o Purpose: Measure the characteristics of the software product being
developed.
o Examples:
■ Size Metrics: Such as Lines of Code (LOC) and Function Points,
which quantify the size or complexity of the software.
■ Effort Metrics: Like Person-Months (PM), which measure the
effort required to develop the software.
2. Process Metrics:
o Purpose: Measure how well the development process itself is
performing.
o Examples:
■ Review effectiveness, defect removal efficiency, and
productivity, which indicate the effectiveness, efficiency, and
quality of the process.
Differences:
• Focus: Product metrics focus on the characteristics of the software being built
(size, effort, time), while process metrics focus on how well the development
process is performing (effectiveness, efficiency, quality).
• Use: Product metrics are used to gauge the attributes of the final software product,
aiding in planning, estimation, and evaluation. Process metrics help in assessing
and improving the effectiveness and efficiency of the development process.
By employing both types of metrics effectively, software development teams can better
manage projects, optimize processes, and deliver high-quality software products that meet
user expectations.
Product quality management focuses on evaluating and ensuring the quality of the
software product itself. This approach is typically more straightforward to implement and
measure after the software has been developed.
Aspects:
o Metrics may not always capture the full complexity or performance of the
final integrated product.
Process quality management focuses on assessing and improving the quality of the
development processes used to create the software. This approach aims to reduce errors
and improve efficiency throughout the development lifecycle.
Aspects:
• While product and process quality management approaches have distinct focuses,
they are complementary.
• Effective software development teams often integrate both approaches to achieve
optimal results.
• By improving process quality, teams can enhance product quality metrics, leading
to more reliable, efficient, and user-friendly software products.
ISO 9001:2000, now superseded by newer versions but still relevant in principle, outlines
standards for Quality Management Systems (QMS). Here’s a detailed look at its key
aspects and how it applies to software development:
ISO 9001:2000 is part of the ISO 9000 series, which sets forth guidelines and
requirements for implementing a Quality Management System (QMS).
The focus of ISO 9001:2000 is on ensuring that organizations have effective processes in
place to consistently deliver products and services that meet customer and regulatory
requirements.
Key Elements:
1. Fundamental Features:
o Describes the basic principles of a QMS, including customer focus,
leadership, involvement of people, process approach, and continuous
improvement.
o Emphasizes the importance of a systematic approach to managing processes
and resources.
• Ensure that subcontractors and external vendors also adhere to quality standards
through effective quality assurance practices.
• Perceived Value: Critics argue that ISO 9001 certification does not guarantee the
quality of the end product but rather focuses on the process.
• Cost and Complexity: Obtaining and maintaining certification can be costly and
time-consuming, which may pose challenges for smaller organizations.
• Focus on Compliance: Some organizations may become overly focused on
meeting certification requirements rather than improving overall product quality.
Despite these criticisms, ISO 9001:2000 provides a structured framework that, when
implemented effectively, can help organizations improve their software development
processes and overall quality management practices.
It emphasizes continuous improvement and customer satisfaction, which are crucial aspects
in the competitive software industry.
1. Customer Focus:
o Understanding and meeting customer requirements to enhance satisfaction.
2. Leadership:
o Providing unity of purpose and direction for achieving quality objectives.
3. Involvement of People:
o Engaging employees at all levels to contribute effectively to the QMS.
4. Process Approach:
o Managing activities and resources as interrelated processes to achieve
desired results more efficiently.
Detailed Requirements
1. Documentation:
o Maintaining documented objectives, procedures (in a quality manual),
plans, and records that demonstrate adherence to the QMS.
Historical Perspective
1. Definition:
o TQM focuses on continuous improvement of processes through
measurement and redesign.
o It advocates that organizations continuously enhance their processes to
better meet customer expectations.
SEI Capability Maturity Model (CMM)
The SEI Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) to assess and improve the maturity of software development
processes within organizations.
It categorizes organizations into five maturity levels based on their process capabilities and
practices:
1. Level 1: Initial
o Characteristics:
■ Chaotic and ad hoc development processes.
■ Lack of defined processes or management practices.
■ Relies heavily on individual heroics to complete projects.
o Outcome:
■ Project success depends largely on the capabilities of individual team
members.
■ High risk of project failure or delays.
2. Level 2: Repeatable
o Characteristics:
■ Basic project management practices like planning and tracking
are established.
o Outcome:
■ Earlier successes can be repeated on projects with similar
applications.
3. Level 3: Defined
o Characteristics:
■ Development and management processes are documented and
standardized across the organization.
o Outcome:
■ Consistent and standardized processes across the organization.
■ Better management of project risks and quality.
4. Level 4: Managed
o Characteristics:
■ Processes and products are measured quantitatively, and these
metrics are used to control them.
o Outcome:
■ Focus on managing and optimizing processes to meet quality and
performance goals.
■ Continuous monitoring and improvement of project execution.
5. Level 5: Optimizing
o Characteristics:
■ Processes are improved continuously on the basis of quantitative
feedback and piloting of new ideas and technologies.
o Outcome:
■ Continuous innovation and improvement in processes.
■ High adaptability to change and efficiency in handling new
challenges.
■ Leading edge in technology adoption and process optimization.
SEI CMM has been instrumental not only in enhancing the software development
practices within organizations but also in establishing benchmarks for industry standards.
It encourages organizations to move from chaotic and unpredictable processes (Level 1)
to optimized and continuously improving processes (Level 5), thereby fostering better
predictability, quality, and efficiency in software development.
Benefits of CMMI
ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability
dEtermination), is a standard for assessing and improving software development
processes. Here are the key aspects of ISO 15504 process assessment:
Process Attributes
• Nine Process Attributes: ISO 15504 assesses processes against nine attributes:
o 1.1 Process performance
o 2.1 Performance management
o 2.2 Work product management
o 3.1 Process definition
o 3.2 Process deployment
o 4.1 Process measurement
o 4.2 Process control
o 5.1 Process innovation
o 5.2 Process optimization
• Alignment with CMMI: ISO 15504 and CMMI share similar goals of assessing
and improving software development processes. While CMMI is more
comprehensive and applicable to a broader range of domains, ISO 15504 provides
a structured approach to process assessment specifically tailored to software
development.
When assessors are judging the degree to which a process attribute is being fulfilled they
allocate one of the following scores:
o N (Not achieved): 0-15% achievement of the attribute.
o P (Partially achieved): over 15% up to 50%.
o L (Largely achieved): over 50% up to 85%.
o F (Fully achieved): over 85% up to 100%.
Here’s how evidence might be identified and evaluated for assessing the process
attributes, taking the example of requirement analysis processes:
For example, documented procedures may exist for each step of the requirements
analysis process, indicating that the defined process is being implemented and deployed
effectively (attribute 3.2 in Table 13.5).
Using ISO/IEC 15504 Attributes
Importance of Evidence
Here’s a structured approach, drawing from CMMI principles, to address these issues and
improve process maturity:
1. Resource Overcommitment:
o Issue: Lack of proper liaison between the Head of Software Engineering and
Project Engineers leads to resource overcommitment across new systems
and maintenance tasks simultaneously.
o Impact: Delays in software deliveries due to stretched resources.
2. Requirements Volatility:
o Issue: Initial testing of prototypes often reveals major new requirements.
o Impact: Scope creep and changes lead to rework and delays.
3. Change Control Challenges:
o Issue: Lack of proper change control results in increased demands for
software development beyond original plans.
o Impact: Increased workload and project delays.
4. Delayed System Testing:
o Issue: Completion of system testing is delayed due to a high volume of bug
fixes.
o Impact: Delays in product release and customer shipment.
• Actions:
• Expected Outcomes:
Six Sigma
Here’s how UVW can adopt and benefit from Six Sigma:
1. Define:
o Objective: Clearly define the problem areas and goals for improvement,
and identify the processes affecting performance.
• Focus Areas:
o Addressing late deliveries due to resource overcommitment.
o Managing requirements volatility and change control effectively.
o Enhancing testing processes to reduce defects and delays in system testing
phases.
• Tools and Techniques:
o Use of DMAIC (Define, Measure, Analyse, Improve, Control) for
existing process improvements.
o Application of DMADV (Define, Measure, Analyse, Design, Verify) for new
process or product development to ensure high-quality outputs from the
outset.
• Cost Savings: Reduced rework and operational costs associated with defects.
The discussion highlights several key themes in software quality improvement over time,
emphasizing shifts in practices and methodologies:
1. Increasing Visibility:
o Early practices like Gerald Weinberg's 'egoless programming' promoted code
review among programmers, enhancing visibility into each other's work.
o Modern practices extend this visibility to include walkthroughs, inspections,
and formal reviews at various stages of development, ensuring early
detection and correction of defects.
2. Procedural Structure:
o Initially, software development lacked structured methodologies, but over
time, methodologies with defined processes for every stage (like Agile,
Waterfall, etc.) have become prevalent.
o Structured programming techniques and 'clean-room' development further
enforce procedural rigor to enhance software quality.
3. Checking Intermediate Stages:
o Traditional approaches involved waiting until a complete, albeit imperfect,
version of software was ready for debugging.
o Contemporary methods emphasize checking and validating software
components early in development, reducing reliance on predicting external
quality from early design documents.
4. Inspections:
o Inspections are critical in ensuring quality at various development stages, not
just in coding but also in documentation and test case creation.
The late 1960s marked a pivotal period in software engineering where the complexity of
software systems began to outstrip the capacity of human understanding and testing
capabilities. Here are the key developments and concepts that emerged during this time:
1. Testing Limitations:
o Edsger Dijkstra and others argued that testing could only demonstrate the
presence of errors, not their absence, leading to uncertainty about software
correctness.
2. Structured Programming:
o To manage complexity, structured programming advocated breaking down
software into manageable components.
o Each component was designed to be self-contained with clear entry and exit
points, facilitating easier understanding and validation by human
programmers.
3. Clean-Room Software Development:
o Developed by Harlan Mills and others at IBM, clean-room software
development introduced a rigorous methodology to ensure software
reliability.
o It involved three separate teams:
■ Specification Team: Gathers user requirements and usage profiles.
■ Development Team: Implements the code without conducting
machine testing; focuses on formal verification using mathematical
techniques.
■ Certification Team: Carries out statistical testing to certify the
reliability of the software before release.
Overall, these methodologies aimed to address the challenges posed by complex software
systems by promoting structured, systematic development processes that prioritize
correctness from the outset rather than relying on post hoc testing and debugging. Clean-
room software development, in particular, contributed to the evolution of quality
assurance practices in software engineering, emphasizing formal methods and rigorous
validation techniques.
Alongside formal methods, software quality circles (SWQCs) offer a people-centred
approach to improving software quality:
• Purpose: SWQCs are adapted from Japanese quality practices to improve software
development processes by reducing errors.
• Structure: Consist of 4 to 10 volunteers in the same department who meet
regularly to identify, analyze, and solve work-related problems.
• Process: The group selects a problem, identifies its causes, and proposes solutions.
Management approval may be required for implementing improvements.
• Benefits: Enhances team collaboration, spreads best practices, and focuses on
continuous process improvement.
One task a quality circle may undertake is the compilation of most probable error lists: a
proactive approach in which recurring defects from past projects are catalogued so that
future work can guard against them.
This approach aligns well with quality circles and other continuous improvement
methodologies by fostering a culture of proactive problem-solving and learning from past
experiences.
The concept of Lessons Learned reports and Post Implementation Reviews (PIRs) are
crucial for organizational learning and continuous improvement in project management.
Here’s a breakdown of these two types of reports:
• Purpose: A PIR takes place after a significant period of operation of the new
system (typically after it has been in use for some time). Its focus is on evaluating
the effectiveness of the implemented system rather than the project process itself.
• Conducted By: Someone who was not directly involved in the project, to
ensure neutrality and objectivity.
• Content: A PIR includes:
o System Performance: How well the system meets its intended objectives
and user needs.
o User Feedback: Feedback from users on system usability and
functionality.
o Improvement Recommendations: Changes or enhancements suggested to
improve system effectiveness.
• Audience: The audience typically includes stakeholders who will benefit from
insights into the system’s operational performance and areas for improvement.
• Outcome: Recommendations from a PIR often lead to changes aimed at enhancing
the effectiveness and efficiency of the system.
• Continuous Improvement: They provide a basis for making informed decisions and
improvements in future projects and system implementations.
Testing
The text discusses the planning and management of testing in software development,
highlighting the challenges of estimating the amount of testing required due to unknowns,
such as the number of bugs left in the code.
1. Quality Judgement:
o The final judgement of software quality is based on its correct execution.
2. Testing Challenges:
o Estimating the remaining testing work is difficult due to unknown bugs in the
code.
3. V-Process Model:
o Introduced as an extension of the waterfall model.
o Diagrammatic representation provided in Figure 13.5.
o Stresses the necessity for validation activities matching the project creation
activities.
4. Validation Activities:
o Each development step has a matching validation process.
o Defects found can cause a loop back to the corresponding development stage
for rework.
5. Discrepancy Handling:
o Feedback should occur only when there is a discrepancy between specified
requirements and implementation.
o Example: System designer specifies a calculation method; if a developer
misinterprets it, the discrepancy is caught during system testing.
6. System Testing:
o Original designers are responsible for checking that software meets the
specified requirements, discovering any misunderstandings by developers.
• The V-process model provides a structure for making early planning decisions
about testing.
• Decisions can be made about the types and amounts of testing required from the
beginning of the project.
Off-the-Shelf Software:
• If software is acquired off-the-shelf, certain stages like program design and coding
are not relevant.
• Consequently, program testing would not be necessary in this scenario.
1. Objectives:
o Both techniques aim to remove errors from software.
2. Definitions:
o Verification: Ensures outputs of one development phase conform to the previous
phase's outputs.
o Validation: Ensures fully developed software meets its requirements
specification.
3. Objectives Clarified:
o Verification Objective: Check if artifacts produced after a phase conform to
those from the previous phase (e.g., design documents conform to requirements
specifications).
o Validation Objective: Check if the fully developed and integrated software
satisfies customer requirements.
4. Techniques:
o Verification Techniques: Review, simulation, and formal verification.
o Validation Techniques: Primarily based on product testing.
5. Process Stages:
Testing activities
The text provides an overview of test case design approaches, levels of testing, and main
testing activities in software development.
It emphasizes the differences between black-box and white-box testing, the stages of
testing (unit, integration, system), and the activities involved in the testing process.
Test Case Design Approaches
1. Black-Box Testing:
o Test cases are designed using only the functional specification.
o Based on input/output behavior without knowledge of internal structure.
o Also known as functional testing or requirements-driven testing.
2. White-Box Testing:
o Test cases are designed based on the analysis of the source code.
o Requires knowledge of the internal structure.
o Also known as structural testing or structure-driven testing.
Levels of Testing
1. Unit Testing:
o Tests individual components or units of a program.
o Conducted as soon as the coding for each module is complete.
o Allows for parallel activities since modules are tested separately.
o Referred to as testing in the small (see the sketch after this list).
2. Integration Testing:
o Checks for errors in interfacing between modules.
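As a small illustration of unit testing in practice (not from the text), a module can be tested in isolation with Python's built-in unittest framework, here reusing the hypothetical faults-per-KLOC function from earlier as the unit under test:

```python
import unittest

def faults_per_kloc(fault_count: int, lines_of_code: int) -> float:
    """Unit under test: faults per thousand lines of code."""
    return fault_count / (lines_of_code / 1000)

class TestFaultsPerKloc(unittest.TestCase):
    def test_typical_value(self):
        # 18 faults in 12,000 lines should give 1.5 faults/KLOC.
        self.assertAlmostEqual(faults_per_kloc(18, 12_000), 1.5)

    def test_zero_faults(self):
        self.assertEqual(faults_per_kloc(0, 5_000), 0.0)

if __name__ == "__main__":
    unittest.main()
```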
Testing Activities
1. Test Planning:
o Involves determining relevant test strategies and planning for any required
test bed.
o Test bed setup is crucial, especially for embedded applications.
2. Test Suite Design:
o Planned testing strategies are used to design the set of test cases (test suite).
3. Test Case Execution and Result Checking:
o Each test case is executed, and results are compared with expected outcomes.
o Failures are noted for test reporting when there is a mismatch between actual
and expected results.
The text describes the detailed process and activities involved in software test reporting,
debugging, error correction, defect retesting, regression testing, and test closure.
It highlights the importance of formal issue recording, the adjudication of issues, and
various testing strategies to ensure software quality.
Test Reporting
1. Issue Raising:
o Report discrepancies between expected and actual results.
2. Issue Recording:
o Formal recording of issues and their history.
1. Debugging:
o Locate the cause of each reported failure, typically by tracing program
execution.
2. Error Correction:
o Correct the code after locating the error through debugging.
3. Defect Retesting:
o Retesting corrected code to check if the defect has been successfully
addressed (resolution testing).
4. Regression Testing:
o Ensures unmodified functionalities still work correctly after bug fixes.
Test Closure
1. Test Completion:
o Archiving documents related to lessons learned, test results, and logs for
future reference.
The text describes who performs testing in organizations, the importance and benefits of
test automation, and various types of automated testing tools.
It emphasizes that while test automation can significantly reduce human effort, improve
thoroughness, and lower costs, different tools have distinct advantages and challenges.
Benefits of Test Automation:
o Reduces monotony, boredom, and errors in running the same test cases
repeatedly.
o Substantial cost and time reduction in testing and maintenance phases.
1. Historical Data:
o Use historical data to estimate errors per 1000 lines of code from past
projects.
o Apply this ratio to new system development to estimate potential errors
based on the code size.
Independent Reviews
Using these methods helps in obtaining a better estimation of latent errors, providing a
clearer understanding of the remaining testing effort needed to ensure software quality.
For example, if reviewer A finds 30 errors and reviewer B finds 20 errors, of which 15 are
common to both, the estimated total number of errors would be:
(30 × 20) / 15 = 40
In general, if A finds nA errors, B finds nB, and nAB are found by both, the estimated total
is (nA × nB) / nAB. Since 35 distinct errors have already been found (30 + 20 - 15), an
estimated 5 latent errors remain.
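A minimal sketch of both estimation methods, assuming the capture-recapture formula above and a historical errors-per-KLOC ratio (the figures are illustrative):

```python
def capture_recapture_estimate(found_a: int, found_b: int, common: int) -> float:
    """Estimated total errors from two independent reviews."""
    return (found_a * found_b) / common

def historical_estimate(kloc: float, errors_per_kloc: float) -> float:
    """Estimated errors in a new system from past error density."""
    return kloc * errors_per_kloc

total = capture_recapture_estimate(30, 20, 15)   # -> 40.0
distinct_found = 30 + 20 - 15                    # 35 distinct errors found so far
print(f"Estimated latent errors remaining: {total - distinct_found}")  # 5.0

# Historical method: e.g., 2.5 errors/KLOC on past projects, new system of 12 KLOC.
print(historical_estimate(12, 2.5))              # -> 30.0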
Software reliability
o A bug may affect different users differently based on how they use the
software.
5. Reliability Improvement Over Time:
o Reliability usually improves during testing and operational phases as defects
are identified and fixed.
Quality plans
• Quality plans detail how standard quality procedures and standards from an
organization's quality manual will be applied to a specific project.
• They ensure all quality-related activities and requirements are addressed.
Client Requirements:
• For software developed for external clients, the client's quality assurance staff may
require a quality plan to ensure the quality of the delivered products.
• This requirement ensures that the client’s quality standards are met.
• A quality plan acts as a checklist to confirm that all quality issues have been
addressed during the planning process.
• Most of the content in a quality plan references other documents that detail specific
quality procedures and standards.
Decomposition Techniques:
Software project estimation is a form of problem solving, and in most cases, the problem to be solved
(i.e., developing a cost and effort estimate for a software project) is too complex to be considered in
one piece. For this reason, you should decompose the problem, recharacterizing it as a set of smaller
(and hopefully, more manageable) problems.
The decomposition approach can be discussed from two different points of view: decomposition of
the problem and decomposition of the process. Estimation uses one or both forms of partitioning. But
before an estimate can be made, you must understand the scope of the software to be built and
generate an estimate of its “size.”
Software sizing:
The accuracy of a software project estimate is predicated on a number of things:
(1) the degree to which you have properly estimated the size of the product to be built;
(2) the ability to translate the size estimate into human effort, calendar time, and dollars (a function of
the availability of reliable software metrics from past projects);
(3) the degree to which the project plan reflects the abilities of the software team; and
(4) the stability of product requirements and the environment that supports the software engineering
effort.
Because a project estimate is only as good as the estimate of the size of the work to be accomplished,
sizing represents your first major challenge as a planner. In the context of project planning, size refers
to a quantifiable outcome of the software project. If a direct approach is taken, size can be measured
in lines of code (LOC). If an indirect approach is chosen, size is represented as function points (FP).
Putnam and Myers [Put92] suggest four different approaches to the sizing problem:
• “Fuzzy logic” sizing. This approach uses the approximate reasoning techniques that are the
cornerstone of fuzzy logic. To apply this approach, the planner must identify the type of application,
establish its magnitude on a qualitative scale, and then refine the magnitude within the original
range.
• Function point sizing. The planner develops estimates of the information domain characteristics.
• Standard component sizing. Software is composed of a number of different “standard components”
that are generic to a particular application area. For example, the standard components for an
information system are subsystems, modules, screens, reports, interactive programs, batch programs,
files, LOC, and object-level instructions. The project planner estimates the number of occurrences of
each standard component and then uses historical project data to estimate the delivered size per
standard component.
• Change sizing. This approach is used when a project encompasses the use of existing software that
must be modified in some way as part of a project. The planner estimates the number and type (e.g.,
reuse, adding code, changing code, deleting code) of modifications that must be accomplished.
Problem-Based Estimation
Lines of code and function points were described as measures from which productivity metrics can be
computed. LOC and FP data are used in two ways during software project estimation: (1) as
estimation variables to “size” each element of the software and (2) as baseline metrics collected from
past projects and used in conjunction with estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques. Yet both have a number of characteristics
in common. You begin with a bounded statement of software scope and from this statement attempt
to decompose the statement of scope into problem functions that can each be estimated individually.
LOC or FP (the estimation variable) is then estimated for each function. Alternatively, you may
choose another component for sizing, such as classes or objects, changes, or business processes
affected.
Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the appropriate
estimation variable, and cost or effort for the function is derived. Function estimates are combined to
produce an overall estimate for the entire project.It is important to note, however, that there is often
substantial scatter in productivity metrics for an organization, making the use of a single-baseline
productivity metric suspect. In general, LOC/pm or FP/pm averages should be computed by project
domain. That is, projects should be grouped by team size, application area, complexity, and other
relevant parameters. Local domain averages should then be computed. When a new project is
estimated, it should first be allocated to a domain, and then the appropriate domain average for past
productivity should be used in generating the estimate.
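A minimal sketch of this domain-based approach, assuming a simple mapping of past projects to domains (all names and figures here are illustrative assumptions):

```python
# Past projects grouped by domain: (size in FP, effort in person-months).
history = {
    "information_systems": [(320, 40), (450, 50), (280, 35)],
    "embedded": [(120, 30), (200, 48)],
}

def domain_productivity(domain: str) -> float:
    """Average productivity (FP per person-month) for a project domain."""
    projects = history[domain]
    return sum(fp for fp, _ in projects) / sum(pm for _, pm in projects)

def estimate_effort(domain: str, estimated_fp: float) -> float:
    """Effort estimate (person-months) using the domain's average productivity."""
    return estimated_fp / domain_productivity(domain)

# New information-systems project estimated at 400 FP:
print(estimate_effort("information_systems", 400))  # about 47.6 person-months
```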
The LOC and FP estimation techniques differ in the level of detail required for decomposition and the
target of the partitioning. When LOC is used as the estimation variable, decomposition is absolutely
essential and is often taken to considerable levels of detail. The greater the degree of partitioning, the
more likely reasonably accurate estimates of LOC can be developed.
The resultant estimates can then be used to derive an FP value that can be tied to past data and used
to generate an estimate. Regardless of the estimation variable that is used, you should begin by
estimating a range of values for each function or information domain value. Using historical data or
(when all else fails) intuition, estimate an optimistic, most likely, and pessimistic size value for each
function or count for each information domain value. An implicit indication of the degree of
uncertainty is provided when a range of values is specified.
A three-point or expected value can then be computed. The expected value for the estimation variable
(size) S can be computed as a weighted average of the optimistic (sopt), most likely (sm), and
pessimistic (spess) estimates:
S = (sopt + 4sm + spess) / 6
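For instance, with illustrative values of sopt = 4,600, sm = 6,900, and spess = 8,600 LOC for a
function, the expected size would be:
S = (4600 + 4 × 6900 + 8600) / 6 = 40800 / 6 = 6800 LOC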
Process-Based Estimation
The most common technique for estimating a project is to base the estimate on the process that will
be used. That is, the process is decomposed into a relatively small set of tasks and the effort required
to accomplish each task is estimated.
Like the problem-based techniques, process-based estimation begins with a delineation of software
functions obtained from the project scope. A series of framework activities must be performed for
each function.
If process-based estimation is performed independently of LOC or FP estimation, we now have two
or three estimates for cost and effort that may be compared and reconciled. If both sets of estimates
show reasonable agreement, there is good reason to believe that the estimates are reliable. If, on the
other hand, the results of these decomposition techniques show little agreement, further investigation
and analysis must be conducted.
Estimation with Use Cases
Use cases provide a software team with insight into software scope and requirements. However,
developing an estimation approach with use cases is problematic for the following reasons ,
• Use cases are described using many different formats and styles—there is no standard form.
• Use cases represent an external view (the user’s view) of the software and can therefore be written
at many different levels of abstraction.
• Use cases do not address the complexity of the functions and features that are described.
• Use cases can describe complex behavior (e.g., interactions) that involve many functions and
features.
Unlike an LOC or a function point, one person’s “use case” may require months of effort while
another person’s use case may be implemented in a day or two. Although a number of investigators
have considered use cases as an estimation input, no proven estimation method has emerged to
date. Smith [Smi99] suggests that use cases can be used for estimation, but only if they are considered
within the context of the “structural hierarchy” that they are used to describe.
Empirical Estimation Models
An empirical estimation model is a method used to predict software project attributes (such as cost,
effort, time, or size) based on historical data and observations from previous projects. These models
use real-world data to develop predictions about how long a project will take, how much effort will
be required, or how many resources will be needed to complete specific tasks.
1. Empirical Data: These models rely on data from past software projects, often collected from
different organizations or project teams. The data includes attributes such as lines of code,
function points, complexity, team size, time to complete tasks, and effort spent.
2. Statistical Methods: Empirical estimation models typically involve statistical techniques to
derive relationships between project attributes (such as size or complexity) and performance
metrics (such as effort or duration). Common techniques include regression analysis, machine
learning, and other predictive modeling techniques.
3. Predictive Focus: The main goal of empirical estimation is to predict outcomes (like cost or
effort) based on input parameters (such as software size or complexity). These models do not
aim to understand the underlying process but instead focus on identifying patterns and making
accurate predictions.
Several types of empirical estimation models are commonly used in software engineering. For
example, COCOMO II defines a hierarchy of models applied at different stages:
• Application composition model. Used during the early stages of software engineering,
when prototyping of user interfaces, consideration of software and system interaction,
assessment of performance, and evaluation of technology maturity are paramount.
• Early design stage model. Used once requirements have been stabilized and
basic software architecture has been established.
• Post-architecture stage model. Used during the construction of the software.
1. Data Collection:
o Gather historical data from previous software projects. This data should include
information on project attributes like size, complexity, resources, and outcomes
(effort, cost, time, defects, etc.).
2. Model Building:
o Use statistical or machine learning techniques to build a model. The input attributes
(like size or complexity) are used to predict the output (such as effort or cost).
3. Model Validation:
o Once the model is built, it should be validated using a different set of data to ensure
that it provides reliable and accurate predictions. Cross-validation techniques or out-
of-sample testing are common methods for this.
4. Prediction:
o The model is then used to make predictions for new software projects based on the
input data.
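A minimal sketch of these four steps using simple linear regression (size in KLOC predicting effort in person-months); the data, holdout project, and fitted model are illustrative assumptions, not a validated estimator:

```python
import statistics  # statistics.covariance requires Python 3.10+

# 1. Data collection: historical (size_kloc, effort_pm) pairs from past projects.
data = [(10, 24), (18, 43), (25, 60), (32, 75), (40, 95), (55, 130)]

# 2. Model building: least-squares fit of effort = a + b * size.
sizes = [s for s, _ in data]
efforts = [e for _, e in data]
b = statistics.covariance(sizes, efforts) / statistics.variance(sizes)
a = statistics.mean(efforts) - b * statistics.mean(sizes)

# 3. Model validation: compare prediction against a held-out project.
holdout_size, holdout_effort = 28, 66
predicted = a + b * holdout_size
print(f"holdout actual={holdout_effort}, predicted={predicted:.1f}")

# 4. Prediction: estimate effort for a new 35 KLOC project.
print(f"estimated effort: {a + b * 35:.1f} person-months")
```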