STQA-Chapter 5
Chapter 5
Software Testing and Quality Assurance
Software Quality Metrics
Maintaining software quality is an important part of the software development lifecycle. Low
software quality can result in high maintenance costs, unhappy users, and even system failures,
all of which erode trust.
This is where software quality metrics become essential: they provide a structured way to
assess different aspects of software, from performance to maintainability. By tracking the right
metrics, teams can make data-driven improvements that enhance both the development process
and the final product.
In this article, we’ll explore essential software quality metrics, their importance, and how to
track them effectively. Whether you’re wondering how to measure software quality or looking
for practical ways to put quality metrics into practice, understanding and applying the right
metrics will give you a solid foundation for your software quality strategy.
What Are Software Quality Metrics?
Software quality metrics are a specific type of software metric that emphasizes the quality of the
product, process, and project. They are more closely tied to product and process metrics than to
project metrics, and cover aspects such as performance, security, and maintainability.
Product Metrics assess the quality of the final product by looking at factors like reliability,
security, and ease of use.
Process Metrics focus on how effective the development process is, tracking things like how
often code changes, defect rates, and team productivity.
Keeping an eye on these metrics is essential for maintaining high software quality and ensuring
that you meet both user expectations and regulatory standards.
Why Track Software Quality Metrics?
Monitoring software quality metrics offers several key advantages.
Better user experience: Tracking usability and performance metrics helps ensure
that the software aligns with what users expect, which in turn enhances the user
experience.
Top Software Quality Metrics to Track
Here are some of the most important metrics in software engineering that every development
team should prioritize (usability, customer satisfaction, security, maintainability, etc.):
Usability
Software usability metrics are a way to measure how effective, efficient, and satisfying a product
is to use. They can help designers quantify usability objectively rather than making assumptions.
For consumer apps, a great user experience can determine the difference between users
adopting or abandoning the product.
•User satisfaction score: This score is typically gathered through surveys or feedback forms,
helping to measure how happy users are with the software.
•Error rate: By tracking the errors users make, this metric helps pinpoint confusing or poorly
designed features, providing valuable insights into where usability can be improved. It is
commonly calculated as the number of user errors divided by the total number of user actions,
expressed as a percentage.
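The error-rate formula can be sketched as follows; the 12-errors-across-400-actions figures are illustrative, not from the text:

```python
def error_rate(user_errors: int, total_actions: int) -> float:
    """Error rate as a percentage of all tracked user actions."""
    if total_actions <= 0:
        raise ValueError("total_actions must be positive")
    return user_errors / total_actions * 100

# e.g. 12 user errors observed across 400 tracked actions
print(error_rate(12, 400))  # → 3.0
```

A falling error rate after a UI change is direct, quantitative evidence that the redesign reduced user confusion.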
Customer Satisfaction
Customer satisfaction metrics show how users feel about the software and its quality, measuring how
happy and satisfied customers are with a product or service. Some examples of these metrics include:
Net promoter score (NPS): This metric shows how likely users are to recommend the software to others. It
directly reflects user satisfaction: the higher the NPS, the more satisfied the users are and the more likely
they are to promote the product.
In NPS, customers are categorized into three groups based on the answer given to the question, “How likely
are you to recommend our product or service to a friend?”
Respondents rate their answers on a scale from 0 to 10, and their score determines their category.
•0-6: Detractors.
•7-8: Passives.
•9-10: Promoters.
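The NPS calculation itself is simple: the percentage of promoters minus the percentage of detractors. A small sketch, with an illustrative set of survey responses:

```python
def nps(scores):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# 10 survey responses on the 0-10 scale
responses = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]
print(nps(responses))  # 5 promoters, 2 detractors → 30.0
```

Note that passives (7-8) lower the score only by diluting the promoter percentage; they are not subtracted directly.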
Customer support tickets: Tracking support tickets can help identify common issues, providing insights into
areas needing improvement.
Best Practices for Tracking and Using Software Quality Metrics
Effectively tracking software quality metrics requires a strategic approach. Consider these best
practices.
•Use real-time monitoring: Helps teams manage operations proactively, improving decision-making
and offering quick insight into important events. Watching software performance in real time
helps teams spot and fix problems quickly, leading to more stable software and a smoother
experience for users. Grafana dashboards are one example of the monitoring tools available.
•Regularly review metrics: Continuously tracking metrics over time helps teams spot patterns or
trends that may not show up in short-term data. This lets them make better decisions and
changes to improve the software.
•Combine metrics with feedback: While metrics provide valuable quantitative insights,
combining them with user feedback offers a fuller understanding of software quality. User input
helps interpret the data more effectively and ensures the software aligns with user expectations.
Choosing the Right Metrics for Your Project
Not all metrics work for every project. It’s important to pick metrics that fit the type of software
you’re creating and the specific goals of your project.
Enterprise applications: For complex software used by large organizations, focusing on reliability,
maintainability, and compliance is key. (Examples: SAP, Oracle ERP, Power BI)
Agile projects: Agile projects benefit from metrics that support iterative improvements, such as
code quality and defect density. (Examples: Jira, Selenium, Zoom)
Consider your software’s purpose, industry, and audience when selecting metrics to ensure they
provide relevant insights.
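Defect density, mentioned above as a metric for Agile projects, is typically computed as defects per KLOC (thousand lines of code). A minimal sketch, with illustrative numbers:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# 18 confirmed defects in a 45,000-line codebase
print(defect_density(18, 45_000))  # → 0.4
```

Normalizing by code size makes the metric comparable across components of different sizes, which a raw defect count is not.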
Software Quality Attributes
Software quality is a term used to measure the degree of excellence of software. Software
quality attributes are extremely important when designing a software application.
There is a misconception that if a software application is bug-free, then its quality is high.
However, being bug-free is just one of the software quality attributes. The quality of the
software also depends on user requirements, satisfaction, clarity of design, usability, and so on.
List of Software Quality Attributes
Software quality attributes help measure the software’s quality from different angles. They can
be broadly classified into 5 types: Design, Runtime, System, User, and Non-runtime qualities.
However, there are at least 14 attributes when we go into detail. Let’s discuss them one by one.
1. Reliability
Reliability is the ability of software applications to behave as expected and function under the maximum
possible load.
Reliability comprises several sub-attributes:
•Availability: The percentage of time the application is available for use. 100% availability refers to the
system being always available and never going down under any circumstances.
•Recoverability: Recoverability refers to the ability to recover the system quickly and efficiently.
•Fault Tolerance: This attribute refers to the extent to which the system can tolerate hardware failures.
Let’s take the example of an e-commerce website that went down when one of its server nodes
crashed. Ideally, if the reliability attribute is implemented, the system should automatically fail
over to another server node.
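Availability, the first sub-attribute above, is usually reported as the percentage of uptime over total time. A minimal sketch; the 30-day month and 43-minute downtime figures are illustrative:

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Percentage of time the system was available for use."""
    total = uptime_hours + downtime_hours
    return uptime_hours / total * 100

# a 30-day month is 720 hours; 43 minutes of downtime overall
downtime = 43 / 60
print(round(availability(720 - downtime, downtime), 3))  # ≈ 99.9 ("three nines")
```

Each extra "nine" of availability shrinks the allowed downtime by a factor of ten, which is why 100% availability is an aspiration rather than a realistic target.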
2. Maintainability
Maintainability refers to how easily software developers can add new features and update existing features
with new technologies. The application architecture plays a critical role in maintainability: well-architected
software makes maintenance easier and more cost-effective.
Consider a scenario where new privacy laws come into force. If your software runs on legacy code,
adhering to the new laws might be challenging. If the software is well architected and the code is well
documented, implementing such changes is a much easier task.
3. Usability
The usability attribute refers to the end user’s ease of use. Usability is tied to application
performance, UX design, and accessibility. To understand usability better, consider an
e-commerce page: the user has purchased an item and wants to return it. Good usability makes
the return option available on the orders page. If the return option instead appears on the
Contact Us page, the user easily gets confused and struggles to find it. Usability design
principles help create a good customer experience; meaningful option texts, tooltips, and info
icons are a great way to encourage self-learning.
4. Portability
The portability quality attribute refers to how easily the system can be ported or migrated to
other environments with different hardware or operating system specifications. Portability
problems are most often encountered in native mobile applications.
An example of a portability issue: you have designed a web application that works perfectly
on Android devices, but when it is ported to iPhone devices (iOS), it fails to render. If your
application’s UI is abstracted from its business logic, fixing such issues becomes easy.
5. Correctness
Correctness refers to the ability to behave or function as per the software requirement
specifications. This may include navigation, calculations, form submissions, etc.
Consider a sign-up example. Per the requirement specification, your application should navigate
to the terms and conditions page after signing up. However, it lands on the home page without
showing the terms and conditions page. Strict code review checklists and unit testing can help
discover such bugs.
6. Efficiency
Efficiency can be defined as the time taken by the system to complete a specific task; in
layman’s terms, the performance of the application. Performance is among the most critical
software quality attributes, as poor performance can drive the user’s system into a hung state.
System resources should be used as efficiently as possible, and memory leaks should be avoided
to achieve high efficiency.
For example, if you open a video editor application on your desktop and your system freezes as
soon as it opens, with all the other open applications starting to behave in unintended ways, this
reflects bad application design and poor efficiency.
7. Security
The security attribute focuses on the ability to safeguard applications, data, and information from
unauthorized entities. This is crucial because data leaks can incur huge losses to an organization’s brand
name and reputation; furthermore, the organization may face lawsuits.
Authentication, authorization, and data encryption are a few key areas that must be safeguarded against
malicious attacks. In addition, if your application is B2C, educating customers also plays an important
role.
An example of a security gap: you have developed an API endpoint that is exposed to the public, yet no
authorization is required to access it. This makes the API, and the underlying system, less secure. To
secure the API, you can make a JWT security token mandatory, one that can be created only by
authorized users.
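To illustrate the token idea, here is a minimal sketch using only Python’s standard library. It signs a payload with HMAC-SHA256 in the spirit of a JWT; it is not a real JWT implementation (no header, no expiry), and production systems should use a vetted JWT library instead. The secret and payload fields are hypothetical:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key, kept off the client

def sign_token(payload):
    """Encode the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject the request: signature invalid or forged
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_token({"user": "alice", "role": "customer"})
print(verify_token(token))                              # valid token → payload
print(verify_token(token.split(".")[0] + ".deadbeef"))  # forged signature → None
```

Because the secret never leaves the backend, only authorized code can mint valid tokens, which is exactly the property the unprotected endpoint above is missing.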
8. Testability
Testability is how easily QA members can test the software and log defects, and how easy the software
is to automate. Your application design should focus on making testing easier and faster.
Consider an example: you have designed a web application that doesn’t contain any uniquely identifiable
locators. This makes automation more complicated. Adding ids, data-testid attributes, and classes helps
cover more scenarios through automation.
Nowadays automation testing is widely recommended, as it makes delivery faster. If your application is
automation-friendly, it can easily be automated using tools like Testsigma, and defects can be identified
at an early stage.
9. Flexibility
Technology changes are common nowadays. Flexibility refers to how quickly your application can adapt
to current and future technology demands. Indirectly, the flexibility attribute is tied to your
competitiveness in the market: if your application is not up to date with the latest technology, you might
not be able to deliver on all user needs.
For example, you are using a third-party library for styling your application. For some reason, the
third-party library declares the end of its development. The question then is how quickly your application
can switch to another library; if it takes long, it might cost your business.
Never develop applications tightly coupled to the libraries they reference; rather, design applications
generically and hook the libraries in behind an abstraction.
10. Scalability
Scalability is how easily your system can handle increasing demand without affecting the application’s
performance. Vertical scalability and horizontal scalability are the two primary approaches for meeting
scalability criteria.
For example, an e-commerce website announces a Black Friday sale. On the sale day, the application sees
huge traffic. If your application is designed with the scalability attribute in mind, then as traffic increases,
the system should automatically add server nodes and distribute the traffic. Failing to do so may result in
your application going down.
11. Compatibility
Compatibility focuses on the ability of software systems to work seamlessly across different operating
systems and browsers, as expected, without any functionality being affected.
A common problem with web applications is that they work on Chrome but not on Firefox. This is a
compatibility issue. When it comes to business, you can’t restrict users to a specific browser. Such issues
can be tested using popular tools like Testsigma, which provides 3000+ real devices and multiple
browsers for executing automated compatibility tests.
12. Supportability
Supportability is the degree to which a software system can provide useful information for identifying
and resolving issues when an application or functionality stops working. Enabling logging, monitoring,
and health checks is the most useful way to achieve supportability.
For example, your application has a profile information page that works perfectly for most users, while a
few users face issues with it. In this case, the problem is difficult to reproduce in a lower-level
environment. Logs come in handy for identifying the call stack and the referenced APIs that are causing
the issue.
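The logging advice above can be sketched with Python’s standard `logging` module; `fetch_profile`, the logger name, and the user ids are hypothetical stand-ins for real application code:

```python
import logging

# Route logs to a file or aggregator in production; a basic config suffices here.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
log = logging.getLogger("profile_page")

def fetch_profile(user_id):
    # stand-in for a real data-access call; fails for one specific user
    if user_id == "u-314":
        raise KeyError("profile record missing")
    return {"id": user_id, "name": "demo user"}

def load_profile(user_id):
    log.info("loading profile for user %s", user_id)
    try:
        return fetch_profile(user_id)
    except Exception:
        # log.exception() records the full stack trace, which is exactly
        # what you need later to trace user-specific failures
        log.exception("profile load failed for user %s", user_id)
        raise

load_profile("u-100")       # succeeds quietly
try:
    load_profile("u-314")   # failure is logged with its call stack
except KeyError:
    pass
```

With per-user ids in every log line, the handful of affected users in the scenario above can be isolated without reproducing the bug locally.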
13. Reusability
Reusability is the degree to which software components can be reused in another application or in the
same application. Reusable software components reduce development cost and effort, which is one of the
reasons most companies encourage component-based development.
For example, your organization is developing two different applications, and both need sign-in and
sign-up forms. Without a reusable-components strategy, you need to develop the same thing twice. On
the other hand, if you have identified common components and designed them as reusable, shareable
components, it is much easier to integrate the sign-in and sign-up screens into both applications.
Reusable components can also decrease maintenance costs.
14. Interoperability
Interoperability refers to the ability to communicate or exchange data between different systems,
whether operating systems, databases, or protocols. Interoperability problems arise from legacy code
bases, poorly architected applications, and poor code quality.
For example, your application needs to communicate with a payment gateway, but you face challenges
integrating with it due to mismatched standards. Had a standards-based approach to security, data, and
API design been taken, this wouldn’t have happened.
Importance of Software Quality Attributes
Software quality attributes determine a software system’s usefulness and success, and they often
safeguard you from reputation damage. Software quality attributes should be introduced at an early
stage of the software development life cycle (SDLC): while architecting your application, you should
consider the quality attributes. If all the software quality attributes are taken care of, your application
can sustain itself in the market for a long time.
Adopting software quality attributes is not a one-time task; rather, it is continuous and can evolve.
Management should enforce the adoption of software quality attributes to achieve high standards. Keep
in mind that adopting quality attributes at a later stage can be challenging and more expensive: since
many software quality attributes bear on core-level architecture, they should be introduced at an early
stage.
Software Quality Standards
Software Quality Standards are formalized benchmarks and guidelines that ensure software
products meet predefined quality levels. These standards are developed by international
organizations, industry groups, and regulatory bodies to improve software reliability, security,
usability, and maintainability.
Importance of Software Quality Standards
•Consistency: Provide a framework for ensuring consistency in software development and
testing.
•Reliability: Ensure software performs reliably under specified conditions.
•Compliance: Help organizations meet regulatory requirements and industry norms.
•Customer Satisfaction: Enhance user satisfaction by ensuring quality, security, and usability.
•Risk Reduction: Minimize risks associated with software failures or security vulnerabilities.
Key Software Quality Standards
A. ISO/IEC Standards (International Organization for Standardization (ISO) /
International Electrotechnical Commission (IEC))
1. ISO/IEC 25010 (System and Software Quality Models):
Defines characteristics like functional suitability, performance, compatibility, usability, reliability,
security, maintainability, and portability.
Focuses on both product quality and quality in use.
B. IEEE Standards (Institute of Electrical and Electronics Engineers (IEEE))
1. IEEE 730 (Software Quality Assurance Plans):
◦ Provides guidelines for creating and implementing quality assurance plans.
D. Six Sigma
• Focuses on minimizing defects and improving quality through data-driven methodologies.
• Uses techniques like DMAIC (Define, Measure, Analyze, Improve, Control).
E. Other Standards
1. ISTQB (International Software Testing Qualifications Board):
◦ Provides certifications and guidelines for software testing professionals.
Benefits of Adhering to Standards
Improved Quality: Standards help detect and fix defects early.
Enhanced Communication: Create a common language for all stakeholders.
Reduced Costs: Avoid costly rework by implementing quality practices upfront.
Market Recognition: Compliance with standards can be a competitive advantage.
Implementation of Standards
• Gap Analysis: Assessing current practices against the standards.
• Process Improvement: Developing workflows and tools to align with standards.
• Training: Educating teams about the standards and their importance.
• Audits and Reviews: Regularly checking compliance with established standards.
• Certification: Obtaining certifications like ISO 9001 to demonstrate adherence.
Professional and Ethical Responsibilities in Testing
Testing is an integral part of the software development lifecycle, and testers hold a significant
responsibility to ensure the quality, reliability, and safety of the product. Professional and ethical
responsibilities in testing go beyond technical skills, emphasizing integrity, transparency, and
accountability.
1. Importance of Professional and Ethical Responsibilities in Testing
•Ensures trust between developers, clients, and end-users.
•Helps identify critical risks and issues that could harm users or organizations.
•Reduces liability for organizations by complying with legal and ethical standards.
•Encourages accountability and fosters a culture of quality.
2. Core Ethical Principles in Testing
Testers must adhere to certain ethical principles to uphold the integrity of their work:
Integrity:
• Testers must report results honestly without exaggerating or hiding issues.
• Avoid false claims about the quality of the system.
Objectivity:
• Testers must remain impartial, avoiding personal biases or influences.
• Strive for accuracy in evaluating system behavior and reporting findings.
Confidentiality:
• Protect sensitive information encountered during testing (e.g., user data, company secrets).
• Avoid disclosing test outcomes or project details without permission.
Accountability:
• Testers should take responsibility for their work, including test design, execution, and reporting.
• Own up to mistakes and rectify them transparently.
Respect for Stakeholders:
• Consider the needs and expectations of users, developers, and business stakeholders.
• Deliver results that align with the organization’s goals while safeguarding end-users.
3. Professional Responsibilities of Testers
As professionals, testers are expected to maintain high standards in their work, including:
Adherence to Standards:
• Follow industry standards such as ISTQB guidelines, IEEE 829, and ISO/IEC testing frameworks.
• Ensure testing practices comply with laws like GDPR (for data privacy) and HIPAA (for healthcare).
Continuous Learning:
• Stay updated with the latest tools, techniques, and methodologies in testing (e.g., automation, AI-driven testing).
• Regularly improve skills through training and certification.
Quality Assurance:
• Ensure that the testing process adequately assesses the software’s functionality, performance, and security.
• Work towards delivering a defect-free product to users.
Effective Communication:
• Share test results clearly with all stakeholders.
• Highlight risks, limitations, and potential impacts of unresolved defects.
Collaboration:
• Work closely with developers, business analysts, and product owners to ensure shared understanding and
alignment on quality goals.
4. Ethical Challenges in Testing
Testers often face dilemmas requiring ethical decision-making. Key challenges include:
Pressure to Manipulate Results:
• Organizations may pressure testers to downplay issues or make results look better than they are.
• Testers must resist this and communicate risks honestly.
Incomplete Testing:
• Tight deadlines may push testers to skip critical test cases or release products prematurely.
• Testers should advocate for sufficient testing time to ensure product quality.
Conflicts of Interest:
• Bias may arise if testers have personal stakes in the project (e.g., working on the same development
team).
• Independent testing or unbiased reviews should be prioritized.
Lack of Clear Requirements:
• Ambiguities in requirements can make testing challenging and lead to missed defects.
• Testers must seek clarification and document assumptions transparently.
Handling Sensitive Data:
• Data used in testing, such as customer information, must be anonymized or handled securely.
• Breaches of data ethics can lead to legal and reputational consequences.
5. Ethical Practices in Test Reporting
• Report defects and issues with complete transparency.
• Avoid blaming individuals; focus on solutions and quality improvement.
• Clearly state test coverage limitations (e.g., untested areas or scenarios).
• Ensure that metrics and KPIs (e.g., defect density, test coverage) are not manipulated to misrepresent
quality.
6. Codes of Ethics and Compliance Standards
Codes of Ethics:
• ISTQB Code of Ethics includes integrity, professional competence, and confidentiality.
• ACM Code of Ethics outlines principles such as avoiding harm and respecting privacy.
Legal and Compliance Standards:
• GDPR: Ensuring user data is handled securely in testing processes.
• Accessibility Laws (e.g., ADA): Verifying that products are accessible to all users.
• Industry-Specific Regulations: Adhering to standards like HIPAA (for healthcare) or PCI DSS (for
payment systems).
7. Practical Examples of Ethical Behavior
Scenario 1: Discovering a critical defect late in the release cycle.
• Ethical Action: Report the issue immediately, even if it delays the release, to avoid harm
to users.
Scenario 2: Pressured to skip security testing to meet deadlines.
• Ethical Action: Advocate for the importance of security testing and document potential
risks if testing is skipped.
Scenario 3: Handling sensitive test data.
• Ethical Action: Use anonymized or synthetic data and ensure compliance with privacy
laws.
8. Consequences of Unethical Behavior in Testing
For Users: Harm from defects (e.g., data breaches, physical harm from faulty systems).
For Organizations: Loss of reputation, legal action, and financial penalties.
For Testers: Professional repercussions, loss of credibility, and potential disqualification from
certifications.