Chapter 3: Software Quality and Reliability
By Dr. DEEPAK M D
Internal and external qualities
• Internal qualities refer to attributes of the software that are
concerned with its internal structure and design.
• These qualities are primarily of interest to developers and engineers,
as they affect the maintainability, scalability, and flexibility of the
software.
• Internal qualities are often invisible to end-users but are crucial for
the long-term sustainability of the software.
•Maintainability: (Internal Qualities)
The ease with which the software can be modified to correct faults, improve performance, or adapt
to a changed environment.
Factors: Code readability, modularity, documentation quality, and adherence to coding standards.
•Modularity:
The degree to which the software is composed of discrete components or modules that can be developed,
tested, and maintained independently.
Benefits: Easier debugging, testing, and enhancement of individual components without affecting others.
•Reusability:
The extent to which parts of the software can be reused in other projects or in different contexts
within the same project.
Factors: Generalization of code, use of libraries, and adherence to design patterns.
•Testability:
The degree to which the software facilitates testing of its components and overall functionality.
Benefits: Easier identification and fixing of bugs, better coverage of test cases.
•Readability:
How easily the code can be read and understood by other developers, which affects the ease of future
maintenance and enhancements.
Factors: Use of clear variable names, proper indentation, and comments.
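As a small illustration of these internal qualities, the hypothetical Python function below is written for readability (clear names and comments), modularity (one self-contained responsibility), and testability (a pure function that is trivial to assert against):

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    # A pure function with no hidden state: easy to read, reuse, and test.
    return celsius * 9 / 5 + 32

# Testability in practice: simple, repeatable checks.
assert celsius_to_fahrenheit(0) == 32.0
assert celsius_to_fahrenheit(100) == 212.0
```

Because the function depends only on its input, it can be tested, reused, and maintained independently of the rest of the program.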
•Usability: (External Qualities)
How easy and intuitive the software is for users to learn and operate.
Factors: User interface design, accessibility, and consistency in design.
•Reliability:
The ability of the software to perform its required functions under stated conditions for a specified period.
Factors: Error handling, fault tolerance, and the robustness of the system.
•Performance:
How well the software responds to user interactions, processes data, and performs tasks in terms of speed and
resource usage.
Metrics: Response time, throughput, and resource efficiency.
•Compatibility:
The ability of the software to operate across different environments, platforms, and with other systems.
Factors: Cross-platform support, integration capabilities, and adherence to standards.
•Functionality:
The extent to which the software meets the specified requirements and provides the necessary features and functions.
Factors: Coverage of functional requirements, correctness, and completeness.
•Accessibility:
The degree to which the software is usable by people with disabilities or those using assistive technologies.
Factors: Support for screen readers, keyboard navigation, and compliance with accessibility standards
•Scalability:
The ability of the software to handle growth, whether in terms of data volume, user load, or complexity.
Factors: Efficient use of resources, modular architecture, and the ability to distribute tasks across multiple systems.
•Complexity:
The degree of difficulty in understanding, using, and maintaining the software.
Types: Cyclomatic complexity (code complexity), data complexity, and system complexity.
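To make cyclomatic complexity concrete, the sketch below (a simplified approximation, not a full McCabe implementation) counts branch points in Python source using the standard ast module; each decision construct adds one to a base complexity of 1:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity of Python source code."""
    tree = ast.parse(source)
    complexity = 1  # a straight-line program has exactly one path
    for node in ast.walk(tree):
        # each decision point adds one independent path through the code
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.BoolOp)):
            complexity += 1
    return complexity

sample = """
def classify(x):
    if x > 0:
        return "positive"
    for _ in range(3):
        pass
    return "other"
"""
print(cyclomatic_complexity(sample))  # 3: base path + if + for
```

Higher numbers indicate more logical paths, and therefore more test cases needed to exercise the code fully.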
•Security:
The measures taken within the code and architecture to protect the software from unauthorized access, use, or modification.
Factors: Secure coding practices, use of encryption, and validation mechanisms.
Process and product quality
• Process quality refers to the quality of the procedures and practices
used to develop the software or produce the product. It focuses on
the effectiveness, efficiency, and consistency of the processes
involved in the creation of a product.
McCall's Quality Model
•Product Transition: Deals with the attributes that affect the software's adaptability to new environments.
•Portability: The ease with which the software can be transferred from one environment to another.
•Reusability: The extent to which software components can be used in other applications.
•Interoperability: The ability of the software to work with other systems.
Boehm's Quality Model
Barry Boehm introduced his software quality model in the late 1970s as well. Boehm's model builds on
McCall’s work but introduces a hierarchical structure that organizes quality attributes into three primary
categories.
Key Components:
1.As-Is Utility: Focuses on the software's operational attributes.
 1. Reliability: The probability of the software performing its required functions under stated
 conditions.
 2. Efficiency: The software’s ability to perform its functions with optimal resource usage.
 3. Human Engineering: The ease with which users can interact with the software.
2.Maintainability: Focuses on the ease with which the software can be modified.
1. Testability: The ease of testing the software to ensure it meets its requirements.
2. Understandability: The clarity with which the software's logic and structure can be understood.
3. Modifiability: The ease with which the software can accommodate changes.
3.Portability: Deals with the software's adaptability to new environments.
1. Self-Containedness: The software’s independence from external components.
2. Communicativeness: The software’s ability to interact with other systems.
3. Modularity: The degree to which the software’s components can be separated and recombined.
FURPS / FURPS+
The original FURPS model categorizes software quality attributes into five main categories:
1.Functionality:
1. Features: The capabilities that the software provides to meet user needs.
2. Capability: The range of tasks the software can perform.
3. Security: Protection of data and operations against unauthorized access.
4. Interoperability: The ability of the software to work with other systems.
5. Compliance: Adherence to relevant laws, standards, and regulations.
2.Usability:
1. Human Factors: The design of the user interface and how it supports user tasks.
2. Aesthetics: The look and feel of the software.
3. Consistency: The uniformity of user interface elements.
4. Documentation: The quality and availability of help, manuals, and online support.
5. Accessibility: The software's ability to be used by people with disabilities.
3. Reliability:
•Availability: The degree to which the software is operational and accessible when needed.
•Fault Tolerance: The ability of the software to continue functioning in the presence of errors.
•Recoverability: The ability to restore data and resume operations after a failure.
•Accuracy: The precision of calculations and data processing.
4. Performance:
•Response Time: The time the software takes to respond to user inputs.
•Throughput: The amount of work the software can handle in a given time period.
•Efficiency: The use of system resources like CPU, memory, and bandwidth.
•Capacity: The software’s ability to handle a certain volume of transactions or data.
5. Supportability:
•Testability: The ease with which the software can be tested for defects.
•Maintainability: The ease with which the software can be corrected, enhanced, or adapted.
•Extensibility: The ease with which the software can be extended with new features.
•Adaptability: The ability of the software to adapt to changes in the environment or requirements.
•Compatibility: The ability of the software to run in different environments.
FURPS+ Model
FURPS+ extends the original FURPS model by adding more categories to cover aspects related to
implementation and design. The "+" in FURPS+ represents the following additional categories:
•Design Constraints:
•Architectural Constraints: Constraints related to the software architecture, such as the need to use specific
frameworks, languages, or design patterns.
•Hardware Constraints: Constraints related to the hardware environment in which the software must operate.
•Legal Constraints: Any legal obligations that affect the design, such as data privacy laws.
•Implementation Requirements:
•Programming Languages: Specific languages or technologies required for implementation.
•Development Tools: Tools that must be used for software development.
•Coding Standards: Guidelines for writing code, such as naming conventions and code structure.
•Interface Requirements:
•User Interfaces: Requirements related to the design and layout of user interfaces.
•APIs: Specifications for application programming interfaces that the software must provide or use.
•Communication Interfaces: Requirements for how the software will interact with other systems or components.
•Physical Requirements:
•Operating Environment: Specifications for the physical environment in which the software will run, including different
processors such as Intel and Arm.
•Physical Data Requirements: Requirements related to physical data storage, such as the need for
specific types of databases or storage devices.
Significance of FURPS and FURPS+
•Flexibility and Extensibility: FURPS+ expands the original model by incorporating additional
factors related to design and implementation, making it a more flexible tool that can be tailored
to the specific needs of a project.
Dromey’s model
Dromey's Quality Model, developed by R.G. Dromey in the mid-1990s, offers a different approach to
software quality compared to earlier models like McCall’s and Boehm’s. Dromey’s model focuses on the
relationship between software quality attributes and the components of a software system. It emphasizes
how the properties of individual components contribute to the overall quality of the software.
Key Concepts of Dromey's Model
1.Quality-Carrying Properties:
1. Dromey's model introduces the concept of quality-carrying properties. These are properties of
components that directly contribute to the overall quality of the software. For example, a well-
defined interface for a component contributes to the software’s interoperability and
maintainability.
2.Component-Based Approach:
1. The model focuses on how individual components (such as modules, functions, or objects)
possess properties that influence various quality attributes. Instead of evaluating software
quality in broad terms, Dromey’s model evaluates how the quality of individual components
contributes to the system’s overall quality.
3.Mapping Quality Attributes to Properties:
1. Dromey’s model involves mapping general quality attributes to specific properties of the
software components. For instance, the quality attribute of reliability might be mapped to
properties such as fault tolerance and error handling within specific components.
4. Classification of Quality Attributes:
•Dromey’s model classifies quality attributes into four primary categories:
• Correctness: The degree to which the software meets its specifications and
requirements.
• Internal Quality: Attributes related to the internal structure and operation of the
software, such as maintainability and flexibility.
• Contextual Quality: How well the software fits into its operational environment,
including aspects like portability and reusability.
• Descriptive Quality: Attributes related to user experience, including usability and
efficiency.
ISO 9126 is an international standard for software quality that was developed by the
International Organization for Standardization (ISO). It provides a framework for evaluating
the quality of software products and is focused on defining, measuring, and assessing
software quality.
ISO 9126 is divided into four major parts:
1.Quality Model
2.External Metrics
3.Internal Metrics
4.Quality in Use Metrics
1. Quality Model
The ISO 9126 quality model defines six primary characteristics
that describe software quality. Each characteristic is further
broken down into sub-characteristics:
A)Functionality: The capability of the software to provide
functions that meet stated and implied needs when used under
specified conditions.
• Sub-characteristics:
• Suitability
• Accuracy
• Interoperability
• Security
• Functionality Compliance
B) Reliability: The ability of the software to maintain a specified level
of performance when used under specified conditions.
•Sub-characteristics:
•Maturity
•Fault Tolerance
•Recoverability
•Reliability Compliance
C)Usability: The effort needed for use and the individual assessment of such use by a stated or implied set of users.
•Sub-characteristics:
• Understandability
• Learnability
• Operability
• Attractiveness
• Usability Compliance
D)Efficiency: The relationship between the level of performance of the software and the amount of resources used,
under stated conditions.
•Sub-characteristics:
•Time Behavior
•Resource Utilization
•Efficiency Compliance
E)Maintainability: The ease with which a software product can be modified to correct faults, improve performance,
or adapt to a changed environment.
•Sub-characteristics:
•Analyzability
•Changeability
•Stability
•Testability
•Maintainability Compliance
F)Portability: The capability of the software to be transferred from one environment to another.
•Sub-characteristics:
•Adaptability
•Installability
•Co-existence
•Replaceability
•Portability Compliance
Musa's Basic Execution Time Model
➢ This model was established by J.D. Musa in 1979, and it is based on execution time.
➢ The basic execution model is the most popular and generally used reliability growth model,
mainly because:
• It is practical, simple, and easy to understand.
• Its parameters clearly relate to the physical world.
• It can be used for accurate reliability prediction.
➢ The basic execution model determines failure behaviour initially using execution time.
➢ Execution time may later be converted into calendar time.
➢ The failure behaviour is a Non-homogeneous Poisson Process, which means the associated
probability distribution is a Poisson process whose characteristics vary in time.
➢ The model’s two basic relationships are:
 μ(τ) = v0 (1 − e^(−λ0 τ / v0)) (expected failures experienced by execution time τ)
 λ(τ) = λ0 e^(−λ0 τ / v0) (failure intensity at execution time τ)
Where
λ0: Initial failure intensity at the start of execution.
v0: Total number of failures experienced if a program is executed for an infinite time
period; it corresponds to the expected number of failures to be observed eventually.
μ: Average or expected number of failures experienced at a given point in time.
τ: Execution time.
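Musa's basic model relationships, μ(τ) = v0(1 − e^(−λ0 τ / v0)) and λ(τ) = λ0 e^(−λ0 τ / v0), can be evaluated directly; the short Python sketch below uses illustrative parameter values, not data from any real project:

```python
import math

def expected_failures(tau, lambda0, v0):
    """mu(tau): expected failures experienced by execution time tau."""
    return v0 * (1 - math.exp(-lambda0 * tau / v0))

def failure_intensity(tau, lambda0, v0):
    """lambda(tau): failure intensity at execution time tau."""
    return lambda0 * math.exp(-lambda0 * tau / v0)

# Example: 10 failures/CPU-hour initially, 100 total failures expected.
lambda0, v0 = 10.0, 100.0
print(expected_failures(10, lambda0, v0))   # approaches v0 as tau grows
print(failure_intensity(10, lambda0, v0))   # intensity decays toward 0
```

As execution time grows, μ(τ) approaches v0 and λ(τ) approaches 0, capturing the idea that reliability grows as faults are found and fixed.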
Significance of Failure Rate and Mean Time Between Failures (MTBF) in Assessing Software Reliability
Failure Rate and Mean Time Between Failures (MTBF) are crucial metrics used to assess and quantify the reliability of
software systems. Here's why they are significant:
1. Failure Rate
•Definition: The failure rate is the frequency with which software failures occur over a specified period. It is typically
expressed as the number of failures per unit of time (e.g., failures per hour).
•Significance:
• Indicator of Reliability: A lower failure rate indicates that the software is more reliable because it fails less
frequently during operation.
• Predictive Analysis: The failure rate can be used to predict future failures, helping in planning maintenance
schedules and improving the design in subsequent iterations.
• Benchmarking: It provides a benchmark for comparing the reliability of different software versions or different
software systems, enabling organizations to make informed decisions about which systems to deploy.
2. Mean Time Between Failures (MTBF)
•Definition: MTBF is the average time between two consecutive failures of a software system during its operation. It is
a key indicator of the expected operational uptime of the software before a failure occurs.
•Significance:
• Reliability Measure: A higher MTBF indicates better reliability because it means the software can operate for
longer periods without failure.
• Maintenance Planning: MTBF helps in planning maintenance activities. Knowing the average time between
failures allows teams to schedule preventive maintenance to avoid unexpected downtimes.
• Customer Assurance: High MTBF values provide confidence to users and stakeholders that the software is
dependable and that disruptions will be infrequent, thus enhancing user satisfaction.
 • Cost Estimation: By knowing the MTBF, organizations can estimate the costs associated with downtime and
 plan maintenance budgets accordingly.
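Both metrics are straightforward to compute from observed failure timestamps; the sketch below assumes a hypothetical failure log recorded in hours of operation:

```python
def mtbf(failure_times):
    """Mean Time Between Failures from a sorted list of failure timestamps."""
    intervals = [later - earlier
                 for earlier, later in zip(failure_times, failure_times[1:])]
    return sum(intervals) / len(intervals)

def failure_rate(failure_times):
    """Average failures per unit time (the reciprocal of MTBF)."""
    return 1 / mtbf(failure_times)

# Hypothetical failure log: hours of operation at each observed failure.
times = [0, 10, 30, 60, 100]
print(mtbf(times))          # 25.0 hours between failures on average
print(failure_rate(times))  # 0.04 failures per hour
```

Note that the widening intervals in the log (10, 20, 30, 40 hours) are themselves a sign of reliability growth, even though the single MTBF figure averages them out.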
Elaborate the various methods in a software quality
management system used to achieve product quality?
A Software Quality Management System (SQMS) is a set of processes, tools, and methodologies
designed to ensure that a software product is of high quality. Various methods within an SQMS can
be employed to achieve and maintain product quality. These methods typically fall into four broad
categories: process-oriented methods, people-oriented methods, technology-oriented methods, and hybrid
methods. Below is an elaboration of these methods:
1. Process-Oriented Methods
2. People-Oriented Methods
3. Technology-Oriented Methods
4. Hybrid Methods
1. Process-Oriented Methods
a. Quality Assurance (QA)
•Definition: QA involves systematically monitoring and evaluating various aspects of a project to ensure that quality
standards are being met.
•Activities:
• Process Audits: Regular reviews of the development process to ensure adherence to quality standards and
practices.
• Process Improvement Initiatives: Continuous improvement of processes based on feedback and analysis, often
following frameworks like Plan-Do-Check-Act (PDCA).
b. Quality Planning
•Definition: Quality planning involves defining the quality standards and metrics that the product must meet.
•Activities:
• Defining Quality Standards: Setting specific standards and guidelines that the product must adhere to, such as
ISO 9001 or industry-specific standards like ISO/IEC 25010 for software product quality.
• Quality Metrics: Establishing measurable criteria (e.g., defect density, mean time to failure) that will be used to
assess the product’s quality throughout the development lifecycle.
c. Quality Control (QC)
Definition: QC involves the operational techniques and activities used to fulfill quality requirements.
Activities:
Testing: Systematic execution of test cases to detect defects in the software. This includes unit testing, integration
testing, system testing, and acceptance testing.
Inspections and Reviews: Formal reviews of documents, code, and designs to detect defects early in the
development lifecycle.
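As a minimal illustration of QC testing, the hypothetical example below unit-tests a small pricing function with Python's built-in unittest module, covering a typical case, a boundary case, and invalid input:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject out-of-range discounts."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically and keep the result object.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same function would then be exercised again at integration, system, and acceptance levels, each catching a different class of defect.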
2. People-Oriented Methods
These methods focus on the people involved in the software development process, as human
factors significantly impact product quality.
a. Training and Certification
•Definition: Ensuring that the team has the necessary skills and knowledge to produce high-
quality software.
•Activities:
• Training Programs: Regular training sessions on the latest development methodologies,
tools, and quality standards.
• Certifications: Encouraging or requiring team members to obtain certifications like
Certified Software Quality Analyst (CSQA) or ISTQB (International Software Testing
Qualifications Board) to ensure a high level of competency.
b. Peer Reviews and Pair Programming
•Definition: Collaborative methods where team members review each other’s work to catch
defects early and improve overall quality.
•Activities:
• Code Reviews: Developers review each other’s code to identify bugs, ensure
adherence to coding standards, and share knowledge.
• Pair Programming: Two developers work together on the same code, with one writing
code and the other reviewing it in real-time, leading to higher code quality
c. Effective Communication and Collaboration
•Definition: Ensuring clear and consistent communication within the team and with
stakeholders to prevent misunderstandings that could lead to quality issues.
•Activities:
• Daily Stand-ups: Regular meetings to discuss progress, issues, and solutions.
• Collaborative Tools: Use of tools like JIRA, Confluence, or Slack to facilitate
communication and collaboration across the team.
3. Technology-Oriented Methods
These methods involve using specific technologies and tools to automate and streamline quality management
processes.
a. Automated Testing
•Definition: Use of automated tools to run tests repeatedly and quickly, ensuring that new changes do not
introduce new defects.
•Activities:
• Unit Testing: Automated testing of individual components or modules of the software.
• Regression Testing: Automated re-testing of the software after changes to ensure that existing
functionality has not been broken.
• Continuous Integration (CI): Automated integration and testing of code changes in real-time as they are
committed to the repository.
b. Static Code Analysis
•Definition: Analyzing code without executing it to find potential errors and code smells, and to check adherence
to coding standards.
•Activities:
• Code Quality Tools: Use of tools like SonarQube, Checkstyle, or PMD to automatically analyze code for
quality issues.
• Linting: Use of linters in the development environment to enforce coding standards and catch issues early.
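A toy version of such a static check can be written with Python's standard ast module; the sketch below flags functions and classes that lack docstrings, the kind of rule tools like SonarQube or a linter enforce automatically:

```python
import ast

def missing_docstrings(source: str) -> list:
    """Return names of functions/classes defined without a docstring."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef))
            and ast.get_docstring(node) is None]

sample = """
def documented():
    \"\"\"Has a docstring.\"\"\"
    return 1

def undocumented():
    return 2
"""
print(missing_docstrings(sample))  # ['undocumented']
```

Because the code is never executed, checks like this can run on every commit without needing a working build or test environment.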
c. Configuration Management
•Definition: Managing the software's configuration to ensure consistency and traceability across all environments
(development, testing, production).
•Activities:
• Version Control: Using tools like Git to track changes to the codebase, allowing for rollback and traceability.
• Build Automation: Automating the process of building software from the source code to ensure consistent
and repeatable builds.
4. Hybrid Methods
These methods combine aspects of process, people, and technology to achieve holistic quality
management.
a. Agile Methodologies
•Definition: Agile emphasizes iterative development, collaboration, and flexibility, allowing for
continuous improvement and quick response to changes.
•Activities:
• Sprint Retrospectives: Regular meetings to reflect on the sprint and identify
opportunities for process and product improvements.
• Scrum and Kanban: Frameworks that ensure the team follows an organized process,
focusing on delivering high-quality increments of the product.
b. DevOps Practices
•Definition: DevOps integrates development and operations to ensure faster delivery of high-
quality software.
•Activities:
• CI/CD Pipelines: Continuous integration and continuous deployment practices that
automate the build, test, and deployment processes, ensuring quality at each stage.
• Infrastructure as Code (IaC): Managing and provisioning infrastructure through code,
ensuring consistency and reliability in production environments.
Reliability models and estimation
Reliability models and estimation techniques are essential tools for predicting and quantifying the
reliability of software systems. These models help in understanding how likely a software system is
to fail during operation and how to improve its reliability over time. Here’s an introduction to the key
reliability models and estimation techniques.
Reliability models are used to represent and predict the behaviour of software systems over time.
The primary purpose of these models is to estimate the reliability of the software based on various
factors such as the number of defects, time between failures, and the operational environment.
1. Deterministic Models
•Basic Idea: These models assume that failures occur due to identifiable causes, and that the
system's behavior can be predicted based on these causes.
•Applications: Mostly used in the early stages of development where the cause-and-effect
relationship is more straightforward.
•Example:
• Static Models: These rely on the structure and design of the software to predict
reliability. They do not consider the operational profile or the time factor. For
example, code complexity metrics like Cyclomatic Complexity can be used to estimate
reliability based on the number of logical paths in the code.
2. Probabilistic Models (Time-Based Reliability Models)
•Basic Idea: These models view software failures as random events and use statistical
methods to estimate reliability. They take into account the operational profile and the time
between failures.
•Examples:
• Jelinski-Moranda Model: Assumes that the number of faults in the system
decreases with each failure and subsequent fix. The model predicts the failure rate
based on the number of remaining faults.
• Goel-Okumoto Model: A Non-Homogeneous Poisson Process (NHPP) model that
assumes that the failure rate decreases over time as faults are fixed. It is one of the
most commonly used reliability growth models.
• Weibull Distribution: A flexible model that can represent different types of failure
behaviors (e.g., increasing, constant, or decreasing failure rates) by adjusting its
shape parameter. It’s widely used in reliability analysis across various industries.
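The Goel-Okumoto mean value function is simple to evaluate; in the sketch below, a is the expected total number of faults and b the per-fault detection rate (both illustrative values, which in practice would be fitted from observed failure data):

```python
import math

def go_expected_failures(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t))."""
    return a * (1 - math.exp(-b * t))

def go_failure_intensity(t, a, b):
    """Failure intensity lambda(t) = a*b*exp(-b*t), decreasing over time."""
    return a * b * math.exp(-b * t)

a, b = 120.0, 0.05  # hypothetical fitted parameters
for t in (0, 10, 50, 100):
    print(t, round(go_expected_failures(t, a, b), 1))
```

The decreasing failure intensity reflects the NHPP assumption above: as faults are fixed, failures become progressively rarer.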
3. Fault-Count Reliability Models
These models focus on the number of faults detected during the testing phase and use this data to
predict the reliability of the software.
•Schneidewind Model: Utilizes the number of detected and corrected faults to estimate future
failure rates and predict software reliability.
•Littlewood-Verrall Model: A Bayesian model that updates reliability predictions as new failure
data becomes available, considering the uncertainty in the initial fault estimates.
4. Fault-Seeding Reliability Models
These models involve the deliberate introduction of known faults (seeded faults) to estimate the
number of remaining undetected faults.
•Mills Model: Estimates the total number of natural faults by analyzing the ratio of detected seeded
faults to detected natural faults.
•Error-Seeding Models: Use statistical methods to estimate the total number of faults based on the
detection rates of seeded and natural faults.
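The Mills estimate follows directly from that ratio: assuming the fraction of seeded faults found equals the fraction of natural faults found, the total natural fault count can be computed as in this sketch (the numbers are hypothetical):

```python
def mills_total_faults(seeded_total, seeded_found, natural_found):
    """Mills estimate: N = natural_found * seeded_total / seeded_found."""
    if seeded_found == 0:
        raise ValueError("no seeded faults detected; estimate undefined")
    # If testing found 80% of the seeded faults, assume it also found
    # 80% of the natural faults, and scale up accordingly.
    return natural_found * seeded_total / seeded_found

# Hypothetical campaign: 25 faults seeded, 20 recovered by testing,
# and 16 natural faults found alongside them.
estimate = mills_total_faults(seeded_total=25, seeded_found=20, natural_found=16)
print(estimate)       # 20.0 natural faults estimated in total
print(estimate - 16)  # 4.0 estimated still undetected
```

The gap between the estimate and the faults actually found gives a rough count of defects still lurking in the software.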
Estimation in Software Engineering refers to the process of predicting the time, effort, cost,
and resources required to complete a software development project. Accurate estimation is
crucial for project planning, budgeting, and resource allocation. Various methods and
techniques are used in software estimation, depending on the project size, complexity, and
available data.
Key Aspects of Software Estimation
1.Effort Estimation:
1. Effort estimation involves predicting the amount of work required to complete a
project. It is usually measured in person-hours or person-days. Accurate effort
estimation helps in resource planning and scheduling.
2.Time Estimation:
1. Time estimation is about predicting the duration needed to complete the project,
including all phases such as requirements gathering, design, coding, testing, and
deployment. Time estimation helps in setting realistic deadlines and timelines.
3.Cost Estimation:
1. Cost estimation involves predicting the financial resources required to complete a
project. This includes direct costs such as salaries and equipment, as well as indirect
costs like overhead and contingency.
4. Size Estimation:
•Size estimation predicts the amount of work involved in a project, often measured in terms of lines of code (LOC),
function points (FP), or user stories. Size estimation is foundational for effort and cost estimation.
5. Resource Estimation:
•Resource estimation identifies the human resources, tools, and technologies needed to complete the project.
•It also considers the availability and expertise of the team members.
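As one well-known illustration of how a size estimate feeds an effort estimate (shown here only as an example; the model itself is not covered above), the basic COCOMO formula converts thousands of lines of code into person-months via E = a * KLOC^b, with a = 2.4 and b = 1.05 for "organic" (small, familiar) projects:

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months for a project of `kloc` KLOC."""
    # The exponent b > 1 captures the diseconomy of scale:
    # doubling the size more than doubles the effort.
    return a * kloc ** b

# A hypothetical 32-KLOC organic project:
print(round(cocomo_basic_effort(32), 1))  # person-months of effort
```

Effort estimates like this then drive the time, cost, and resource estimates described above.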