SMM Unit - 1
Industry standards such as ISO 9000 and industry models such as the Software Engineering
Institute's (SEI) Capability Maturity Model Integration (CMMI®) help organisations utilise
metrics to better understand, monitor, manage, and predict software projects, processes, and
products.
Software metrics can provide engineers and management with the information required to
make technical decisions.
Everyone involved in selecting, designing, implementing, collecting, and utilizing a metric
must comprehend its definition and purpose if it is to provide helpful information.
Software metrics programs must be designed to provide the precise data required to manage
software projects and enhance software engineering processes and services. Organisational,
project, and task objectives are determined beforehand, and metrics are chosen based on these
objectives.
We use software metrics to determine how effectively we are meeting these objectives.
During project management, teams can use software metrics to measure performance, plan
upcoming work tasks, track productivity, and better control the production process by
observing various figures and trends during production.
In conjunction with management functions, teams can also use software metrics to streamline
their projects by devising more efficient procedures, creating software maintenance plans, and
keeping production teams informed of issues that need to be resolved.
Throughout the software development process, the various metrics are intertwined. Software
metrics correspond to the four management functions: planning, organization, control, and
improvement.
Software Quality
Identify and apply various software metrics
Software metrics are quantitative measures that provide insights into various aspects of the software
development process and the resulting software product. These metrics help assess the quality,
efficiency, and effectiveness of the software development process and can be used to make informed
decisions for improvement. Identifying and applying various software metrics is crucial for managing
and enhancing software development projects. Here are some key aspects of software metrics:
1. Types of Software Metrics:
Product Metrics: These metrics focus on the characteristics of the software product
itself. Examples include lines of code, cyclomatic complexity, and code churn.
Process Metrics: These metrics assess the efficiency and effectiveness of the
software development process. Examples include development speed, defect density,
and rework effort.
Project Metrics: These metrics provide insights into the overall progress and health
of the software development project. Examples include project schedule adherence,
budget variance, and resource utilization.
2. Common Software Metrics:
Lines of Code (LOC): Measures the size of the software by counting the number of
lines of code written.
Cyclomatic Complexity: Quantifies the complexity of the software by counting the
number of independent paths through the source code.
Defect Density: Calculates the number of defects per unit of code (typically per KLOC),
helping assess the software's quality; a short computational sketch of this and related
metrics appears after this list.
Effort Estimation Accuracy: Measures the accuracy of initial effort estimates
compared to actual effort expended.
Code Churn: Indicates the frequency and extent of code changes during
development.
Test Coverage: Evaluates the proportion of the code exercised by testing, helping
assess the thoroughness of testing efforts.
3. Benefits of Software Metrics:
Performance Monitoring: Metrics help monitor the performance of the development
team, enabling early identification of potential issues.
Quality Assurance: Metrics can be used to assess and improve the quality of the
software, leading to better customer satisfaction.
Process Improvement: Identifying areas with high defect rates or inefficient
processes allows for targeted process improvement efforts.
Decision Support: Metrics provide data for informed decision-making during the
development life cycle.
4. Challenges and Considerations:
Selection of Relevant Metrics: It's crucial to choose metrics that align with project
goals and objectives.
Avoiding Overemphasis: Metrics should be used as a tool for improvement, not as a
sole measure of success or failure.
Context Sensitivity: Metrics should be interpreted in the context of the specific
project and its unique characteristics.
5. Tools for Software Metrics:
Various tools and platforms are available to automate the collection and analysis of
software metrics. These tools help in efficiently managing and interpreting the data.
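As a concrete illustration, here is a minimal Python sketch showing how a few of the common metrics listed above might be computed from raw project counts. All function names and figures are hypothetical, chosen purely for illustration; they are not a standard API.

```python
# Minimal sketch (hypothetical names and numbers) of a few common metrics.

def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects / kloc

def test_coverage(covered_lines: int, total_lines: int) -> float:
    """Fraction of executable lines exercised by the test suite."""
    return covered_lines / total_lines

def estimation_accuracy(estimated_effort: float, actual_effort: float) -> float:
    """Relative error of the initial effort estimate."""
    return abs(actual_effort - estimated_effort) / actual_effort

# Hypothetical project data
print(defect_density(defects=96, kloc=32.0))                       # 3.0 defects/KLOC
print(test_coverage(covered_lines=7200, total_lines=9000))         # 0.8 (80%)
print(estimation_accuracy(estimated_effort=10, actual_effort=12))  # ~0.167
```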
The quality level of software is determined by evaluating both internal and external attributes of the
software product. Internal and external attributes represent two perspectives through which software
quality can be assessed.
1. Internal Quality Attributes: Internal quality attributes are characteristics that are not directly
visible to end-users but are crucial for the maintainability, reliability, and overall efficiency of
the software. Evaluating internal attributes helps in understanding how well the software is
designed and implemented. Some key internal quality attributes include:
Maintainability: The ease with which the software can be modified, updated, or
extended without introducing defects or breaking existing functionality.
Readability: The clarity and comprehensibility of the source code, making it easier
for developers to understand and maintain.
Modularity: The degree to which the software is organized into manageable and
independent modules or components.
Scalability: The ability of the software to handle increasing workloads or
accommodate growth without a significant decrease in performance.
2. External Quality Attributes: External quality attributes are characteristics that are directly
observable by end-users and reflect how well the software meets their needs and expectations.
These attributes are crucial for assessing the software's functionality and performance from a
user's perspective. Some key external quality attributes include:
Reliability: The ability of the software to perform consistently and reliably under
various conditions without unexpected failures.
Usability: The ease with which users can interact with the software, including factors
like user interface design and overall user experience.
Performance: The speed, responsiveness, and efficiency of the software in delivering
its intended functionality.
Security: The degree to which the software protects against unauthorized access, data
breaches, and other security threats.
3. Quality Metrics for Evaluation: To evaluate the quality level of internal and external
attributes, various metrics and measures can be employed. For example:
Code complexity metrics (internal) such as cyclomatic complexity (a small
computational sketch appears after this list).
Code readability metrics (internal) based on coding standards and documentation.
Defect density (internal) to measure the number of defects per unit of code.
User satisfaction surveys (external) to gather feedback on usability and overall user
experience.
Response time (external) to assess performance.
4. Quality Assurance Practices:
Code Reviews: Regular code reviews help ensure adherence to coding standards,
identify potential issues, and improve code quality.
Testing: Rigorous testing, including unit testing, integration testing, and user
acceptance testing, is essential to identify and address defects.
Continuous Integration and Continuous Deployment (CI/CD): Implementing
CI/CD practices helps maintain a consistent and reliable software development and
deployment pipeline.
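Several of the internal metrics above can be collected automatically. As one illustration, the following Python sketch gives a crude approximation of cyclomatic complexity by counting decision points with the standard-library ast module (McCabe's "decisions plus one" rule for single-entry, single-exit code); a real analyzer would handle more node types and count each boolean operator separately.

```python
# Rough approximation of cyclomatic complexity for Python source:
# 1 + the number of decision points found in the syntax tree.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1  # one path, plus one per decision point

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
'''
print(cyclomatic_complexity(sample))  # 3: two decisions + 1
```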
By evaluating both internal and external attributes and using appropriate quality metrics,
organizations can gain a comprehensive understanding of the software's quality level. This
understanding allows for targeted improvements and ensures that the software not only meets user
expectations but is also maintainable and adaptable for future needs.
There are several reliability models used to evaluate the reliability of software products. The choice of
the right model depends on various factors, including the nature of the software, available data, and
the desired level of accuracy. Here are some commonly used reliability models:
1. Exponential Model:
Description: The exponential reliability model is based on the assumption that the
failure rate of a software system remains constant over time. It is a simple and widely
used model, especially when failure events are assumed to be independent and occur
randomly.
Equation: R(t) = e^(-λt), where R(t) is the reliability at time t, λ is the failure rate,
and e is the base of the natural logarithm. (This function, together with the Weibull
function in the next model, is implemented in a short sketch after this list.)
Pros:
Simplicity and ease of use.
Suitable for systems with constant failure rates.
Cons:
Assumes constant failure rate, which may not be realistic for all systems.
May not accurately represent software systems with varying failure rates.
2. Weibull Model:
Description: The Weibull reliability model is more flexible than the exponential
model, allowing for different shapes of the hazard (failure) function. It can model
systems with increasing, decreasing, or constant failure rates over time.
Equation: R(t) = e^(-(t/β)^α), where R(t) is the reliability at time t, β is the scale
parameter, α is the shape parameter, and e is the base of the natural logarithm.
Pros:
Greater flexibility to model a variety of failure rate patterns.
Useful for systems where failure rates change over time.
Cons:
Requires more parameters, and estimating them accurately may be
challenging.
Interpretation of parameters might be complex.
3. Software Reliability Growth Models (SRGM):
Description: These models are specific to software development and are based on the
idea that software reliability improves over time as defects are identified and fixed.
A well-known SRGM is the Jelinski-Moranda model, in which the failure rate
decreases by a fixed amount each time a fault is removed.
Equation: The exact form varies by model; a common growth curve gives the expected
fraction of faults detected by time t as F(t) = 1 - e^(-βt^n), where β is a scaling
parameter and n is the shape parameter (note that this expression rises from 0 toward 1,
so it describes cumulative fault detection rather than reliability directly).
Pros:
Tailored for software reliability assessment.
Takes into account the software debugging process.
Cons:
Assumption of the effectiveness of debugging processes may not always
hold.
Limited to modelling the reliability growth phase.
4. Non-homogeneous Poisson Process (NHPP):
Description: The NHPP model is widely used in software reliability engineering. It
assumes that the failure events follow a non-constant Poisson process, allowing for
changes in the failure rate over time.
Equation: Depends on the specific form of the intensity function for the Poisson
process.
Pros:
Allows for varying failure rates over time.
Widely used in software reliability studies.
Cons:
Estimation of parameters may be challenging.
Requires sufficient data for accurate modeling.
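To make the first two models concrete, here is a minimal Python sketch of the exponential and Weibull reliability functions given above. The parameter values are hypothetical, chosen only to illustrate how reliability decays over time.

```python
# Exponential and Weibull reliability functions (parameters are hypothetical).
import math

def reliability_exponential(t: float, lam: float) -> float:
    """R(t) = e^(-lambda*t): constant failure rate lambda."""
    return math.exp(-lam * t)

def reliability_weibull(t: float, alpha: float, beta: float) -> float:
    """R(t) = e^(-(t/beta)^alpha): alpha = shape, beta = scale."""
    return math.exp(-((t / beta) ** alpha))

for t in (10.0, 100.0, 1000.0):
    print(t,
          round(reliability_exponential(t, lam=0.001), 4),
          round(reliability_weibull(t, alpha=1.5, beta=1000.0), 4))
```

With α > 1 the Weibull hazard increases over time (wear-out), with α < 1 it decreases, and α = 1 reduces to the exponential model.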
Choosing the Right Model:
Consider the nature of the software system (e.g., constant vs. variable failure rates).
Assess the availability and quality of failure data.
Choose a model that aligns with the assumptions and characteristics of the software under
evaluation.
It's important to note that no single model fits all scenarios perfectly, and the choice often involves
trade-offs. The accuracy of reliability modeling depends on the assumptions made and the quality of
the data used for estimation. The selection of the right reliability model is a crucial aspect of
accurately evaluating the reliability of any given software product.
Designing new metrics and reliability models for evaluating software quality involves considering
specific requirements and objectives. While creating custom metrics and models, it's important to
align them with the characteristics of the software, project goals, and the information you want to
capture. Below, I'll outline a general process and provide examples:
Designing New Metrics:
1. Define Objectives:
Clearly articulate the goals of your software quality evaluation. Are you prioritizing
maintainability, performance, security, or a combination of factors?
2. Identify Key Attributes:
Determine the key attributes that are critical for your software. For example, if user
satisfaction is crucial, you might consider metrics related to usability, responsiveness,
and overall user experience.
3. Quantify Characteristics:
Develop quantitative measures for each identified attribute. Consider how to measure
and express factors such as code readability, modularity, user interface intuitiveness,
etc.
4. Create Composite Metrics:
Combine individual metrics into composite metrics if necessary. This can provide a
holistic view of certain aspects of software quality (see the sketch after this list).
5. Normalization:
Normalize metrics if needed to ensure that values are comparable across different
projects or software components.
6. Example Metrics:
Modularity Index: Quantifies the degree of modularity in the software, based on the
organization of code into cohesive and loosely coupled modules.
Usability Score: A composite metric assessing various aspects of usability, such as
learnability, efficiency, and error recovery.
Adaptability Quotient: Measures how easily the software can adapt to changing
requirements or environments.
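As a sketch of steps 3-5 above, the following Python example builds a hypothetical composite "usability score": each raw sub-metric is normalized to [0, 1] against assumed best/worst bounds and combined with illustrative weights. The sub-metrics, bounds, and weights are all assumptions that would need to be calibrated for a real project.

```python
# Hypothetical composite usability score from normalized sub-metrics.

def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw value onto [0, 1], where 1 is best (bounds may be inverted)."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def usability_score(task_success_rate: float,  # fraction, higher is better
                    avg_task_seconds: float,   # lower is better
                    errors_per_task: float) -> float:  # lower is better
    sub_scores = {
        "success":  normalize(task_success_rate, worst=0.0, best=1.0),
        "speed":    normalize(avg_task_seconds, worst=300.0, best=30.0),
        "accuracy": normalize(errors_per_task, worst=5.0, best=0.0),
    }
    weights = {"success": 0.5, "speed": 0.25, "accuracy": 0.25}  # assumed
    return sum(weights[k] * sub_scores[k] for k in weights)

print(round(usability_score(0.9, 120.0, 1.0), 3))  # 0.817 with these inputs
```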
Designing New Reliability Models:
1. Define Reliability Goals:
Clearly state your objectives for assessing software reliability. Do you want to model
the failure rate, predict the time to failure, or evaluate reliability growth over time?
2. Identify Failure Patterns:
Analyze historical data or anticipate failure patterns based on the software's
characteristics. This will guide the design of your reliability model.
3. Select Modeling Approach:
Choose a modeling approach that suits the nature of your software. Consider whether
an exponential model, Weibull model, or a custom model is more appropriate.
4. Incorporate Software Development Lifecycle:
If applicable, integrate software development lifecycle events into the reliability
model. For instance, include phases like design, coding, testing, and deployment to
capture how reliability changes over time.
5. Parameter Estimation:
Determine how to estimate model parameters. This might involve statistical methods,
historical data analysis, or expert judgment (a minimal example follows this list).
6. Example Reliability Models:
Dynamic Reliability Growth Model: Incorporates software testing and debugging
activities into a dynamic model that evolves over time.
Event-Driven Reliability Model: Captures reliability changes based on specific
events, such as software updates or changes in the operating environment.
User-Interaction Reliability Model: Considers user interactions and feedback to
model how software reliability evolves based on real-world usage.
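As a minimal illustration of the parameter-estimation step, the sketch below computes the maximum-likelihood estimate of a constant failure rate (the exponential model's λ) from observed inter-failure times, using the standard result λ̂ = n / Σt. The data are hypothetical.

```python
# MLE of a constant failure rate from hypothetical inter-failure times.

def estimate_failure_rate(interfailure_times: list) -> float:
    """Exponential-model MLE: lambda-hat = n / total observed time."""
    return len(interfailure_times) / sum(interfailure_times)

# Hypothetical inter-failure times (hours) recorded during system test
times = [12.0, 30.0, 45.0, 18.0, 60.0, 35.0]
print(round(estimate_failure_rate(times), 4), "failures/hour")  # 6/200 = 0.03
```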
Considerations:
1. Data Availability:
Ensure that the necessary data for both metrics and models are available and reliable.
2. Adaptability:
Design metrics and models to be adaptable to changes in software development
methodologies, technologies, and project requirements.
3. Feedback Loops:
Establish feedback loops to continuously refine and improve your metrics and models
based on new information and insights.
4. Documentation:
Clearly document the rationale, assumptions, and limitations of your custom metrics
and models.
Custom metrics and reliability models should be tailored to the unique aspects of the software and the
goals of the evaluation. Regular refinement and validation based on real-world data and experiences
will contribute to their effectiveness in assessing software quality and reliability.
Software quality shows how good and reliable a product is. To give an example, consider
functionally correct software: it performs all the functions specified in the SRS document, but the
program is almost unusable. Even though it is functionally correct, we do not consider it a
high-quality product.
Another example is a product that has everything the users need but whose code is almost
incomprehensible and unmaintainable. Therefore, the traditional concept of quality as "fitness of
purpose" is not satisfactory for software products.
Software Metrics
A software metric is a measure of quantifiable or countable characteristics of software.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, and measuring productivity, among many other uses.
Within the software development process there are many metrics, and they are all connected.
Software metrics relate to the four functions of management: planning, organization, control,
and improvement.
Process Metrics: These are measures of various characteristics of the software
development process, for example, the efficiency of fault detection. They are used to
measure the characteristics of the methods, techniques, and tools employed in developing
software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are
viewed to be of greater importance to a software developer. For example, Lines of Code
(LOC) measure.
External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are used by the project manager to check a project's
progress. Metrics such as time and cost collected from past projects are used as a basis
for estimating new software projects. As the project proceeds, the project manager checks
its progress from time to time and compares the actual effort, cost, and time with the
original estimates. These metrics are used to reduce development cost, effort, and risk,
and to improve project quality; as quality improves, the number of errors, and with it the
time and cost required, is also reduced.
Advantages of Software Metrics
For the analysis, comparison, and critical study of different programming languages with
respect to their characteristics.
In comparing and evaluating the capabilities and productivity of the people involved in
software development.
In making inferences about the effort to be put into the design and development of software
systems.
In comparing and making design trade-offs between software development and maintenance
costs.
In providing feedback to software managers about progress and quality during the various
phases of the software development life cycle.
Disadvantages of Software Metrics
The application of software metrics is not always easy, and in some cases it is difficult and
costly.
The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.
They are useful for managing software products but not for evaluating the performance of
technical staff.
The definition and derivation of software metrics are usually based on assumptions that are
not standardized and may depend upon the tools available and the working environment.
Most predictive models rely on estimates of certain variables that are often not known
precisely.
Size Oriented Metrics
LOC Metrics
The LOC metric is one of the earliest and simplest metrics for calculating the size of a
computer program. It is generally used in calculating and comparing the productivity of
programmers. Quality and productivity measures are derived by normalizing them against the
size of the product.
Based on the LOC/KLOC count of software, many other metrics can be computed (a worked
example follows the list):
a. Errors/KLOC
b. $/KLOC
c. Defects/KLOC
d. Pages of documentation/KLOC
e. Errors/PM
f. Productivity = KLOC/PM (effort is measured in person-months)
g. $/Page of documentation
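For instance, a hypothetical project of 20 KLOC completed in 5 person-months with 60 recorded
defects would give Productivity = 20 / 5 = 4 KLOC/PM and Defects/KLOC = 60 / 20 = 3.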
Advantages of LOC
1. Simple to measure
Disadvantages of LOC
1. It is defined on the code; for example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length; it takes no account of
functionality or complexity.
3. Bad software design may lead to an excessive number of lines of code.
4. It is language dependent.
5. Users cannot easily understand it.