
Comparative Analysis of Software Quality

Assurance Approaches in Development Models


D. I. Silva
Sri Lanka Institute of Information Technology Malabe
L. K. B. Siriwardana ([email protected])
Sri Lanka Institute of Information Technology Malabe

Research Article

Keywords: Scrum, Kanban, Waterfall, Iterative, Software Testing, Quality Assurance

Posted Date: October 27th, 2023

DOI: https://doi.org/10.21203/rs.3.rs-3458412/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.

Additional Declarations: No competing interests reported.

Abstract
This study presents a comprehensive comparison of software quality assurance techniques across
several software development methodologies. It addresses critical knowledge gaps by examining the
application of quality assurance processes, associated challenges, and outcomes in diverse software
development contexts. The study covers a wide spectrum of development approaches, including classical
Waterfall procedures and Agile alternatives such as Scrum, Kanban, Iterative, and Extreme Programming.
The research delves into the responsibilities, experiences, and domains of experts involved in these
approaches, providing valuable insights into the software development landscape. Key
findings of the study underscore the pervasive use of core testing approaches such as regression testing,
unit testing, integration testing, performance testing, and security testing. The study also emphasizes the
importance of guaranteeing requirement traceability, as well as providing insights on the technologies
utilized for this purpose. Moreover, the research highlights the critical importance of achieving
comprehensive test coverage and provides insights into a range of techniques for accomplishing this
goal. The paper discusses the challenges encountered while testing complicated or interconnected
software systems and offers practical solutions such as shift-left testing and exploratory testing to
mitigate them. Notably, it highlights the substantial financial benefits derived from effective quality
assurance processes, including reduced defect-fixing costs, minimized rework, optimized resource
utilization, and heightened customer retention. Moreover, the paper delves into cost-effective quality
assurance solutions and assesses the impact of quality assurance practices on user experience and
customer satisfaction.

I. Introduction
Software quality assurance (SQA) plays a pivotal role in ensuring that software applications meet defined
requirements, adhere to industry standards, and deliver an exceptional user experience. As the software
industry has evolved, diverse quality assurance (QA) methodologies have emerged, each equipped with
its unique concepts, procedures, and techniques. Within the realm of software testing methodologies, a
wide array of techniques and test types have been developed to ensure that tested applications align
with client requirements. To comprehend these testing methodologies fully, it is imperative to first
understand the diverse development processes they are integrated into. The discipline of project
management methodologies, including Agile frameworks and the traditional Waterfall methodology,
provides a systematic framework that guides project managers and teams in effectively managing every
facet of a project, from specifying goals and scope through delivering the final product. These methodologies
offer a set of rules and procedures to ensure projects are completed efficiently, on schedule, and within
budget, facilitating decision-making, resource allocation, risk management, and transparent
communication with stakeholders. With a multitude of approaches to accommodate varied project types,
sizes, and complexities, the software development landscape is rich with choices.

Among the most popular project management methodologies are the Waterfall model, Agile, Iterative,
Rapid Application Development (RAD), and Six Sigma [1]. When discussing software testing, four general
tiers: unit testing, integration testing, system testing, and acceptance testing, come to the forefront. A
diverse range of testing approaches, such as white-box testing, black-box testing, and grey-box testing,
are commonly employed to ensure software functions as intended [10]. Testing types encompass a
variety of tests conducted at specific test levels, each serving a distinct purpose. Some of the common
test types include performance testing, security testing, exploratory testing, usability testing, among
others [11].

This study aims to address the fundamental question: ‘How do different development methodologies,
such as Waterfall, Agile, and Iterative, differ in their underlying testing principles and practices’? By
comparing and contrasting testing principles and practices across various development methodologies,
this study seeks to evaluate their effectiveness, challenges, and real-world implications. The study will
assess each methodology based on the following characteristics:

QA methodology and processes: This dimension focuses on how well various QA approaches ensure
that software meets requirements, complies with industry standards, and delivers a superior user
experience. The goal is to identify the most effective methods in practical software development
settings.
Implementation challenges: The study will analyze the difficulties and limitations encountered when
implementing different QA approaches. This includes considerations like resource constraints, team
acclimatization to new procedures, and securing management support for the implementation of
specific strategies.
Best practices: Identifying the best practices adopted by businesses in conjunction with their chosen
QA methods to produce high-quality software products is another vital aspect. Understanding how
effective strategies are adapted and applied across diverse development environments is crucial.
Performance metrics: The research will employ performance metrics, including defect density,
customer satisfaction, release frequency, and time-to-market, to assess each QA approach's efficacy.
These metrics provide concrete criteria for evaluating the performance of diverse QA techniques.
Flexibility and adaptability: This dimension will consider each QA methodology's ability to adapt to
changes in project scope, requirements, and market dynamics; a critical factor in today's rapidly
evolving landscape.
Cost and resource implications: Evaluating the resource requirements and cost-effectiveness of
different QA methodologies is paramount. This assessment involves taking into account both
immediate and long-term impacts on organizations.
User experience and feedback: The study aims to investigate whether specific QA approaches
enhance user satisfaction and the overall end-user experience. Understanding this element is
essential to comprehensively evaluate the impact of QA practices on customer interactions.
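Several of the metrics listed above are simple ratios once basic project data has been collected. The sketch below is purely illustrative (the functions, figures, and project names are hypothetical, not drawn from this study) and shows how defect density and release frequency might be computed:

```python
# Illustrative metric calculations; all figures are hypothetical.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def release_frequency(releases: int, period_days: int) -> float:
    """Average releases per 30-day month."""
    return releases / (period_days / 30)

# Hypothetical project data for two methodologies.
scrum = {"defects": 42, "kloc": 120.0, "releases": 12, "days": 180}
waterfall = {"defects": 97, "kloc": 120.0, "releases": 2, "days": 180}

for name, p in (("Scrum", scrum), ("Waterfall", waterfall)):
    dd = defect_density(p["defects"], p["kloc"])
    rf = release_frequency(p["releases"], p["days"])
    print(f"{name}: {dd:.2f} defects/KLOC, {rf:.1f} releases/month")
```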

The foundational assumption of this study is that there exists a correlation between software quality and development methodologies. To empirically investigate this relationship, this study formulates the following hypotheses:

H1 (Alternative Hypothesis)

Agile methodologies, such as Scrum and Kanban, exhibit significantly greater adaptability to changes in project scope and requirements than traditional Waterfall approaches, resulting in improved software quality outcomes.

H0 (Null Hypothesis)

Agile methodologies, such as Scrum and Kanban, do not exhibit significantly greater adaptability to changes in project scope and requirements than traditional Waterfall approaches, and do not yield improved software quality outcomes.

Declarations

Non-Financial Disclosure: The authors declare no non-financial conflicts of interest that may be relevant to this work. This includes, but is not limited to, personal, professional, political, or academic relationships that might influence the interpretation or presentation of the research.

Research Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Involvement in Organizations: The authors declare that they are not currently, nor have been in the recent past, associated with organizations that may have a direct or indirect interest in the subject matter or materials discussed in this manuscript.

Patent or Intellectual Property Interests: The authors declare that they have no patent or intellectual property interests related to the content of this manuscript.

Human and Animal Rights: The authors confirm that the work described in this manuscript complies with relevant human and animal rights, as applicable to their research.

Data Sharing: The authors will provide access to the data and materials associated with this research, upon request, in accordance with the journal's policies. In compliance with best practices for transparency and reproducibility, we are committed to making the data used in this research available to other researchers, provided that data sharing accords with ethical and legal standards and permissions and does not compromise the privacy and confidentiality of individuals or entities involved in this study. For inquiries regarding access to the data, please contact the corresponding author: [email protected]. Availability of data may be subject to specific institutional or legal restrictions, such as privacy regulations, participant consent agreements, or intellectual property rights; we will make every reasonable effort to facilitate data sharing to the extent possible within these constraints. The shared data will include both raw and processed data, in a format that allows for the replication and verification of the findings presented in this manuscript. We encourage interested researchers to cite this paper when using the provided data to ensure proper attribution.
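To illustrate how such a hypothesis pair could be evaluated empirically, the sketch below computes Welch's t-statistic over two small samples of a quality metric. The data, metric choice, and sample sizes are hypothetical; a real analysis would compare |t| against the appropriate critical value (or compute a p-value) before rejecting H0:

```python
import math
from statistics import mean, variance

# Hypothetical defect densities (defects/KLOC), one value per project.
agile = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0]
waterfall = [1.6, 1.9, 1.4, 2.2, 1.7, 1.5]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

t = welch_t(agile, waterfall)
# A large |t| relative to the t-distribution's critical value would
# justify rejecting H0 in favour of H1.
print(f"Welch's t = {t:.2f}")
```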


II. Background
A. Project Management Methodologies

1. Waterfall Model [1],[2]: The Waterfall model is a well-established project management methodology
characterized by a sequential and linear approach. Requirements analysis, design, implementation,
testing, and deployment are the different steps that must be completed in the right order. Each phase
must be completed before the next one begins. The Waterfall model assumes that project requirements
are consistent and explicit. It provides a clear framework and a predictable period of time, but once a
phase is complete, it may be less adaptable to changes that may occur during the development process.
The Waterfall model is displayed in Fig. 1.

2. Agile [1, 2]: Agile is a method for managing projects and creating software that prioritizes flexibility,
teamwork, and responsiveness to change. It sets itself apart from previous, more rigid processes by
emphasizing the incremental delivery of value to both consumers and stakeholders rather than aiming to
predict and plan out every component in advance. Scrum and Kanban are two Agile approaches that are
particularly well suited for projects where flexibility or the capacity to react to changing needs quickly is
essential. Agile encourages a culture of cooperation and adaptability. The Agile model is shown in Fig. 2.

3. Scrum [3]: Work is separated into sprints, which are time-bound iterations often lasting one to four
weeks, in the iterative and incremental Scrum methodology. A cross-functional team works together on a
prioritized set of tasks pulled from the product backlog throughout each sprint. To enhance
communication and transparency, Scrum incorporates key rituals, including:

a. Daily Stand-Up Meetings: These brief, daily gatherings synchronize team members, fostering a
shared understanding of ongoing work and identifying potential hurdles.
b. Sprint Planning: During this session, the team outlines the work to be completed in the upcoming
sprint, refining product backlog items into actionable tasks.

c. Sprint Review: At the end of each sprint, a review is held to demonstrate completed work to
stakeholders, gather feedback, and assess progress against project goals.
d. Sprint Retrospective: This introspective meeting allows the team to reflect on the sprint, pinpoint
areas for improvement, and make necessary adjustments to enhance future performance.

Figure 3 illustrates the Scrum framework in action.

The Scrum framework's three fundamental components are the Product Owner, Development Team, and
Scrum Master, while the primary components directing their duties are the Product Backlog, Sprint
Backlog, Product Increment, and Definition of Done. Five essential processes—Backlog refinement, Sprint
planning, Daily Scrum meetings, Sprint reviews, and Sprint retrospectives—are jointly carried out by these
three components. Scrum has a reputation for being a prescriptive process due to its well-defined roles,
artifacts, and events that offer a systematic approach to project management and the persistent pursuit
of project objectives [3, 4].

4. Kanban [1, 3]: The two primary principles of Kanban are to visualize work on a board and to maintain a
constant flow of operations throughout different phases. It does not follow fixed iterations like Scrum
does. Instead, to avoid bottlenecks and boost production, Kanban strongly emphasizes reducing the
amount of work that is still in process. According to the team's capabilities and the level of demand, work
is drawn into the system, promoting a continual flow of tasks. Teams are constantly looking to reduce
downtime by identifying process bottlenecks. Kanban encourages a culture of continuous improvement
by optimizing throughput and allowing for flexible workflow modifications. This ensures that processes
adapt to new demands and conditions. Figure 4 displays the Kanban model.

When compared directly with Scrum, the Kanban technique is noticeably less prescriptive in terms of mandated patterns and standards. While Kanban maintains a more flexible approach, Scrum
meticulously defines various components and roles, such as the product owner overseeing product
development, the development team responsible for delivering release-ready increments post-sprint, and
the Scrum master ensuring thorough understanding and effective application of the Scrum methodology.
This distinction underscores the adaptability of Kanban, enabling teams to customize their workflow and procedures according to their particular needs while emphasizing continuous flow and limiting work in progress [3].

5. Iterative [2]: Projects are divided into smaller, more manageable cycles known as iterations using the
well-known software development process known as Iterative development. Each iteration includes the
requirements analysis, design, coding, testing, and assessment phases of the development cycle. A notable aspect of this method is that it delivers a working product version at the end of each cycle.
This Iterative process encourages frequent user feedback and Agile modifications, ultimately producing a
more specific and refined final product. When requirements are not completely defined up front and are
likely to change over time, Iterative development is very beneficial. This strategy, which is frequently coupled with Agile approaches, guarantees adaptability and ongoing improvement. Refer to Fig. 5 for a
visual representation of the Iterative model.

6. Extreme Programming (XP) [2]: Extreme Programming (XP) is a well-known software development
methodology that falls within the family of Agile approaches. XP highly values principles such as collaboration, clear communication, adaptability, and the production of high-quality code. It advocates quick iterations and frequent "releases" of the product, which inherently improve system effectiveness and serve as checkpoints for promptly addressing client requests. The client-centric approach
used by XP ensures that the needs and expectations of the intended client are put first when developing
software. Figure 6 shows how the XP framework works in practice.

B. Software Testing
Software testing should be aligned with system requirements to guarantee efficient use of time and resources throughout development. In the traditional life cycle, testing is conducted once the development phase is complete. Four main levels of software testing are commonly distinguished.

1. Unit Testing: This testing method uses a specific control path to find errors in every software
component. By testing the interfaces, a proper information flow is verified. Thorough testing is
conducted for boundary conditions, independent paths, error handling paths, and fundamental
paths. To establish and maintain data integrity, local data is thoroughly analyzed [16].
2. Integration Testing: This method primarily addresses software development and verification-related
issues when components interact with one another. The overall software integration plan, along with the specific tests, should be clearly documented in the test specification [16].
3. System Testing: An integrated software system is evaluated as a component of system testing to
ensure that its requirements are being met. It verifies the complex connections between each
component, ensuring that all modules and programs run without issues. Performance, reliability,
usability, and security evaluations are just a few of the functional (evaluating software functionality)
and non-functional (evaluating software quality) tests that fall under this phase. System testing
is typically performed by dedicated testers [17].
4. Acceptance Testing: Verifying the software's compliance with customer requirements is the objective
of this testing. Its objectives are to evaluate the overall acceptability of the system and make sure the
software performs the required functions for the customer [17].
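As a concrete, hypothetical illustration of the unit-testing level described above (the function and cases are invented for this sketch), the following tests exercise boundary conditions and an error-handling path:

```python
import unittest

def grade(score: int) -> str:
    """Hypothetical unit under test: map a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 50 else "fail"

class GradeUnitTests(unittest.TestCase):
    def test_boundary_conditions(self):
        # Thorough testing of boundary values, per the description above.
        self.assertEqual(grade(0), "fail")     # lower boundary
        self.assertEqual(grade(49), "fail")    # just below the pass mark
        self.assertEqual(grade(50), "pass")    # the pass mark itself
        self.assertEqual(grade(100), "pass")   # upper boundary

    def test_error_handling_path(self):
        # The error-handling path must reject out-of-range input.
        with self.assertRaises(ValueError):
            grade(-1)
        with self.assertRaises(ValueError):
            grade(101)

# Run the suite programmatically (equivalently: python -m unittest <module>).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(GradeUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```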

The creation of test cases is the initial step in starting the testing process. During this phase, numerous
testing methodologies are used to guarantee testing efficiency and accuracy.

White box testing stands out as a strategy that is especially helpful for finding and fixing bugs. This approach, sometimes known as "clear box testing," entails designing tests with a thorough grasp of the internal behavior of the code. To find errors, testers investigate the complex relationships between software elements. Because it depends on detailed knowledge of the code, however, it is used less frequently. White box testing includes several methods for analyzing the code, such as statement coverage, decision coverage, prime path testing, data flow testing, and others. These techniques provide a systematic approach to ensuring code integrity and robustness [19].
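The coverage criteria just mentioned can be made concrete with a small, hypothetical example. The sketch below shows a function with two decisions and a two-case test set achieving full decision (branch) coverage; a comment notes how statement coverage alone can be satisfied without it:

```python
def classify(age: int, member: bool) -> str:
    """Hypothetical unit under test with two decisions."""
    if age < 18:                  # decision 1: true / false
        category = "minor"
    else:
        category = "adult"
    if member:                    # decision 2: true / false
        category += "-member"
    return category

# Decision coverage requires each decision outcome to be exercised at
# least once; two cases suffice here.
assert classify(10, True) == "minor-member"   # d1 true,  d2 true
assert classify(30, False) == "adult"         # d1 false, d2 false

# Contrast: the pair classify(10, True) and classify(30, True) would
# execute every statement yet never take decision 2's false branch --
# statement coverage without decision coverage.
```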

Black box testing is an approach designed to assess application functionality without reference to internal implementation details. It examines whether the program complies with user requirements at each stage of the Software Development Life Cycle (SDLC) and is applicable across all of those stages. This assessment of numerous functionalities involves looking for weaknesses and closely examining edge cases, such as minimum, maximum, and base values. Black box testing is a well-known and widely used testing method that is simple but thorough [18].

Error guessing, decision table testing, state transition testing, and all-pair testing are a few of the various
types of black box testing, each of which has a specific function in validating the quality and compliance
of software [19].
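The edge-case examination of minimum, maximum, and base values mentioned above is often formalized as boundary value analysis, one of the black box techniques. A minimal sketch, using a hypothetical input field that accepts values from 1 to 100:

```python
def accepts(value: int, lo: int = 1, hi: int = 100) -> bool:
    """Hypothetical system under test: a field accepting 1..100."""
    return lo <= value <= hi

# Boundary value analysis derives tests at and around each boundary,
# plus a nominal (base) value; no knowledge of the code is needed.
boundary_cases = {
    0: False,    # just below the minimum
    1: True,     # minimum
    2: True,     # just above the minimum
    50: True,    # nominal (base) value
    99: True,    # just below the maximum
    100: True,   # maximum
    101: False,  # just above the maximum
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case {value} failed"
```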

Grey box testing combines the benefits of both the white box and black box approaches. Grey box testers use their knowledge of the software's internal mechanisms to improve the effectiveness of functional testing. This strategy is comparable to black box testing in that it ensures complete evaluations, while taking advantage of the testers' familiarity with the inner workings of the system [18].

Grey box testing proves especially helpful in situations such as penetration testing [19], where the hybrid technique enables a thorough examination that integrates components of both white box and black box testing, efficiently revealing flaws and helping to assure software security.

To assure the quality and stability of software applications, a wide variety of testing types and
methodologies are available in the field of software testing. These testing techniques are essential for
finding errors, evaluating performance, enhancing security, and venturing into new areas within software
systems. Listed below are some of the testing types:

A. Performance Testing

Performance testing is a vital non-functional testing type that carefully analyzes how the software
functions under various circumstances, including both favorable and unfavorable scenarios. Critical time-
related metrics like load time, access time, run time, and execution time are thoroughly investigated.
Performance testing also evaluates the reliability of software overall, success rates, failure frequencies,
and Mean Time Between Failures (MTBF). Stress testing and load testing are the two main methods used
in performance testing [23].

Stress testing involves subjecting a system to extremely high loads in order to ascertain its upper capacity limits. This test evaluates the system's robustness and its capacity for continuing heavy usage, and determines whether the system can support sustained, continuous loads without experiencing problems [23].

Spike testing, by contrast, involves suddenly and sharply raising user counts or load levels in order to monitor system behavior under such circumstances [23].

Performance testing is often carried out using specialized tools such as SoapUI, JMeter, LoadRunner,
and various IBM tools due to its complexity. Notably, recent software failures have been linked to
inadequate performance testing, emphasizing its critical role in software development [23].
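The reliability figures discussed above, such as success rate and Mean Time Between Failures (MTBF), are straightforward to derive from a test run's log. A sketch over hypothetical log data:

```python
# Hypothetical log of a performance run: (timestamp_seconds, succeeded).
run_log = [
    (0, True), (60, True), (120, False), (180, True),
    (240, True), (300, False), (360, True), (420, True),
]

total = len(run_log)
failures = sum(1 for _, ok in run_log if not ok)
success_rate = (total - failures) / total

# MTBF = total operating time / number of failures observed.
operating_time = run_log[-1][0] - run_log[0][0]   # 420 seconds
mtbf = operating_time / failures if failures else float("inf")

print(f"success rate: {success_rate:.0%}, MTBF: {mtbf:.0f} s")
```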

B. Security Testing
Timing and buffer overflow attacks are now common threats in the realm of software security. Object-oriented systems introduce design-level vulnerabilities such as problems with error handling. Additional design-level issues further increase security risk, including exposed data pathways, missing or ineffective access control methods, auditing gaps, poor logging procedures, and timing/ordering mistakes. To protect against these threats, software must undergo security testing, which entails assessing crucial security aspects including secure authentication, cryptography, access control, and various security processes [23].

Two main approaches are commonly used in security testing: first, assessing the software's functional
security features; and second, using a risk-based strategy that takes into account prospective attacker
techniques. Penetration testing is one type of security assessment in which assessors try to get past a
system's security measures based on their knowledge of the system's implementation and design [23].
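As a small illustration of the timing attacks mentioned at the start of this section: naive equality checks can return early at the first mismatching character, leaking information through response time. The sketch below (the token values are hypothetical) shows the standard Python mitigation using the standard library's constant-time comparison:

```python
import hmac

def insecure_check(token: str, expected: str) -> bool:
    # `==` can short-circuit at the first differing character, so the
    # comparison time leaks how many leading characters match.
    return token == expected

def secure_check(token: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, defeating the timing side channel.
    return hmac.compare_digest(token.encode(), expected.encode())

assert secure_check("s3cret", "s3cret")
assert not secure_check("guess!", "s3cret")
```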

C. Usability Testing

Usability testing is a technique employed to evaluate the user-friendliness of a product or website.
Through this method, User Experience (UX) researchers can determine whether real users find the product
or website easy to use. There are two techniques for conducting usability testing [24]:

1. Laboratory Usability Testing: This method is performed in a controlled lab environment in front of
observers. Testers are assigned tasks to complete, while observers monitor their actions and report
the results. Observers remain silent throughout the testing. Both observers and testers are physically
present in the same location during this testing [25].
2. Remote Usability Testing: In this approach, observers and testers are located remotely. Testers
perform predetermined tasks while remotely connected to the system under test. A computer
program captures the tester’s voice, screen activity, and facial expressions. The test results are
reported once the observers assess this data [25].

D. Exploratory Testing

Exploratory testing is a strategy that seeks to emulate individual end users' freedom and preferences. It emphasizes investigation, discovery, and curiosity. In contrast to structured testing approaches, testers independently navigate the software to evaluate the quality of the user experience it provides. Exploratory testing involves minimal planning: to begin testing and make an initial assessment of the software, testers develop only a fundamental test concept. The choices testers make about which features and activities to examine during this process are made largely on instinct. This strategy reflects the wide range of preferences and actions of actual end users, so compared with typical test cases, exploratory testing tends to uncover more problems and edge cases [26].

E. Ad hoc Testing

After formal testing has been completed, ad hoc testing, a non-systematic, informal technique for software testing, is used to find potential system defects. This strategy lacks established test cases, structured test design, and documentation. Ad hoc testing can be carried out at any stage of the SDLC testing phase, notably during acceptance testing, regression testing, and smoke testing, although it can only be done after the system has been completely built and is working. A thorough knowledge of the system's functionality is, however, essential for the tester. The three main types of ad hoc testing are [27]:

Buddy Testing: Involves collaboration between a developer and a tester, allowing for early issue detection
and resolution through random input testing after module completion and unit testing [27].

Pair Testing: Encourages idea exchange, viewpoints, and knowledge sharing for more efficient module
testing by including two testers from the testing team in the testing of a module. One tester performs
random tests while the second tester records findings [27].

Monkey Testing: Uses random inputs without any predefined test cases to evaluate the system, keeping
track of its functioning and behavior without any input boundaries [27].
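The monkey-testing idea of unbounded random inputs can be sketched in a few lines. Here a hypothetical parser (invented for this example) is fed random strings; the harness treats clean rejections as acceptable and records only unexpected crashes:

```python
import random
import string

def parse_quantity(text: str) -> int:
    """Hypothetical system under test: parse a non-negative quantity."""
    value = int(text)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

def monkey_test(runs: int = 1000, seed: int = 0) -> list:
    """Feed random inputs with no predefined cases. Expected failures
    (ValueError) are clean rejections; anything else is an unexpected
    crash worth reporting."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        length = rng.randint(0, 8)
        text = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            parse_quantity(text)
        except ValueError:
            pass                      # invalid input rejected cleanly
        except Exception as exc:      # unexpected defect surfaced
            crashes.append((text, exc))
    return crashes

print(f"unexpected crashes: {len(monkey_test())}")
```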

III. Related works


The implementation and analysis of an Iterative software testing process employing automated open-
source testing tools were investigated within two different project environments in a comparative study
that is described in reference [5]. One of these settings used the standard Waterfall software development technique, while the other adopted the Scrum Agile software development process. The results
of this thorough examination were noteworthy, with both strategies producing significant advancements
in terms of software upgrades, stability, and general maturity. Regression testing was easily introduced
into the iterative development cycles in the Scrum project, with the help of automated testing technology.
The Waterfall project successfully implemented a similar testing model, using automated testing tools for
functional testing despite schedule restrictions. Both instances demonstrated the benefits of this strategy,
which were characterized by decreased errors, quicker testing procedures, and increased collaboration
between the development and testing teams. The early involvement of the testing team in the development process and the flexibility of the testing approaches in satisfying the particular requirements of each project are two important lessons to be drawn from this study. In both instances, the use of automated testing tools clearly demonstrated their value in boosting testing efficiency and overall software quality.
The paper gives an in-depth analysis of test automation software development inside the Scrum
framework and broader Agile software development methodology in reference [6]. The inherent value of
testing in the software development lifecycle and the crucial part test automation performs in reducing
testing expenses are at the core of the discussion. The authors support a tactical strategy that smoothly
integrates Agile concepts into the development of software products as well as the production of tools for
test automation. This methodology stands out because it can provide a functional test framework early
in the development process, giving it clear advantages over more traditional methodologies like the
Waterfall approach. The article then explores the Scrum roles and meetings, providing insights into how
the teams responsible for product development and those devoted to test automation work together.
Specifically, the document supports the development of separate product backlogs for both teams and
the amalgamation of some sessions. These steps strengthen testing and development processes by
ensuring effective team collaboration. The paper concludes with an industrial case study to illustrate the
practical use of this methodology, highlighting the concrete advantages and practical applicability of this
integrated approach to test automation software development.

A case study of implementing Agile testing in legacy software product development is presented in the
paper [7]. The testing team faced difficulties when the product development team moved from a Waterfall
methodology to Agile Scrum. The challenges and recommendations from the study are displayed in
Table 1 below.

Metrics and supporting data that reflect the effectiveness of Agile testing in the product are discussed.
Before implementing Agile testing, the authors noticed that a number of important tests, including the
smoke test, API test, load test, and performance test, were either not conducted or performed on a limited
scale. The low level of critical testing was mostly due to the expense (extended testing time delays the
release of new product versions to the market) and the low accuracy of manual testing. The impact and
efficacy of adopting Agile testing are displayed in Fig. 7.

The paper [8] discusses the challenges of testing in Agile methodologies, solutions to the problems
discovered, and tools that support these solutions. As software development shifts from traditional
Waterfall approaches to Agile, the testing process faces several challenges: infrastructure setup, test
documentation, insufficient test coverage, broken code following frequent builds, early defect
identification, insufficient API testing, and a lack of focused testing. The key solutions offered by the
paper for these difficulties are listed below:

To simplify developing and managing test environments, it advises using self-service provisioning
and on-demand scalability.
For test documentation, the paper discusses using a testing strategy, checklists, and exploratory
testing to handle the frequently changing requirements of Agile development.
The study suggests methods such as automating acceptance testing, executing unit tests, and
running tests in regression suites to address poor test coverage.
Peer code reviews and static analysis techniques can help find errors early on.
The study recommends applying automated testing tools and continuous integration solutions such
as CruiseControl and Hudson for handling broken code after frequent builds.

A selection of software testing tools that can be used to address some of the problems with Agile
development is also listed in the article. NUnit, STAF (Software Testing Automation Framework),
SmartBear, Cucumber, FitNesse, and Robot Framework are some of these tools [8].
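The tools above target several languages and frameworks; as a language-neutral sketch (the function under test and its values are invented for illustration), an automated unit/regression test of the kind these frameworks run on every build might look like this in Python's built-in unittest:

```python
import unittest

# Hypothetical function under test: the kind of small unit an
# automated regression suite re-verifies on every build.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because such checks are cheap to re-run, they catch the "broken code following frequent builds" problem the paper describes when wired into a CI server.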

Table 1. Challenges and Solutions in Adopting Agile Testing in a Legacy Software Project/Product [7]

Challenge: Traditional testing roles
Solutions: 1. New roles and responsibilities for the Agile test manager
           2. New roles and responsibilities for Agile testers

Challenge: Collaboration, communication, and dedicated integration/system testing
Solutions: 1. Virtual testing stand-up
           2. Virtual integration and system testing team

Challenge: Agile test developers' skills
Solutions: 1. Technical skills
           2. Behavioral skills

Challenge: Testing strategy
Solutions: 1. Strategy for the Agile testing approach/guidance, mapped to the test quadrants
           2. Initial position in the test quadrants
           3. Internal test debt stories
           4. Transforming manual testing into exploratory testing
           5. Moving towards Q4 of the test quadrants: focus on load and performance test automation
           6. Focusing on production testing
           7. Moving towards Q1 of the test quadrants: focus on API and unit testing

Source [9] covers the development of a system using the Kanban technique in order to get around
limitations in existing club management systems. Giving university clubs and organizations a centralized
platform for effective management, communication, and organizing is the main goal. The effectiveness
and usability of the system are verified through functional and usability testing. The system satisfies
important usability characteristics, according to the findings of the usability testing. According to the
study's findings, the Kanban approach can be successfully used to create functional software for club
management systems in academic settings.

Functionality, reliability, usability, efficiency, maintainability, and portability are the six fundamental
quality characteristics included in the ISO 25010:2011 model for software quality. These basic
characteristics generate 27 sub-criteria for both internal and external quality [12].

Quality impacts customer satisfaction. Businesses must implement a DevOps-aligned strategy in order to
improve quality. Continuous testing and continuous quality monitoring are intrinsically tied to continuous
development, continuous build, and continuous deployment procedures. Think about Test-Driven
Development (TDD) and Behavior-Driven Development (BDD), which enable everyone in the team to share
knowledge of the program's planned functionality and operations [13].
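As a minimal sketch of the TDD rhythm mentioned above (the function and its specification are invented for illustration, not taken from the cited work), the test is written first and states the intended behavior; the implementation follows:

```python
# Test-first sketch: the assertion below is written before the
# implementation and doubles as shared documentation of intent,
# which is the knowledge-sharing benefit TDD/BDD aim for.

def test_normalize_email():
    # Specification, written first: emails compare case-insensitively
    # and ignore surrounding whitespace.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

# Implementation, written second: just enough code to pass the test.
def normalize_email(address: str) -> str:
    return address.strip().lower()

test_normalize_email()
print("test passed")
```

In BDD the same specification would be phrased in a business-readable Given/When/Then form, but the test-before-code ordering is the same.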

Quality cannot be solely the responsibility of the Quality Engineering team; it must be a shared
responsibility. It is imperative to consider quality coverage when creating test scenarios that include
validation tests and are based on objective requirements. It is also important to implement metrics to
manage QA effectiveness. These measures should consider release quality, iteration performance (which
evaluates defect leakage), the quantity of high-severity defects, and the number of features delivered [14].
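The defect-leakage metric mentioned above can be sketched in a few lines; note that the exact formula is an assumption here, since teams define leakage differently:

```python
def defect_leakage(found_in_qa: int, found_in_production: int) -> float:
    """Percentage of defects that escaped QA into production.

    One common formulation (an assumption, not the paper's definition):
    production defects divided by all defects found, as a percentage.
    """
    total = found_in_qa + found_in_production
    return 0.0 if total == 0 else 100 * found_in_production / total

# Hypothetical iteration: 45 defects caught in QA, 5 escaped to production.
print(f"defect leakage: {defect_leakage(45, 5):.1f}%")  # defect leakage: 10.0%
```

Tracked per iteration, a rising leakage percentage signals that test scenarios are missing the validation coverage discussed above.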

The paper [15] examines the challenges faced by the product industry when developing software
programs that adhere to quality requirements and financial constraints. But, as with any novel idea, there
were opponents: 72% of participants said QA should be an essential component of each SDLC step,
whereas 28% held different opinions. In a face-to-face discussion, one participant identified the unstable
economic climate as a significant impediment to the widespread adoption of QA throughout all phases of
the SDLC. The choice to include QA at each stage of the SDLC was driven by a number of factors,
including concerns about the lengthy process, inconsistent methodology, projects meeting deadlines, and
unclear requirements that frequently result in project failures.

IV. Methodology
In order to gather data for this study, a questionnaire survey was employed. This approach was chosen
for several valid reasons. First and foremost, it allowed the authors to quickly and efficiently gather
insights from a variety of subject matter experts. The survey offered an organized means of obtaining
data from a diverse group of information technology (IT) professionals, given the complexity of the
topics under investigation. It also facilitated the acquisition of consistent responses, enabling the
comparison of various QA approaches and development models. The questionnaire survey approach,
with its thoughtfully designed questions, played a pivotal role in ensuring the accuracy and reliability of
the collected data. It aided in the thorough analysis and comprehension of the relationships between
different software development methodologies and QA techniques.

A. Data Collection

Data collection for the study involved distributing a questionnaire designed to elicit insights from industry
experts. The questionnaire encompassed several key characteristics, including cost and resource
implications, flexibility and adaptability, best practices, performance metrics, QA methodologies and
processes, implementation challenges, and user experience and feedback.

B. Comparative Analysis

Upon completion of the survey, a comparative analysis was conducted to assess different methodologies
based on specific criteria. These criteria included evaluating which QA approach reduces time-to-market
for software products and which methodology effectively addresses the identification and correction of
defects discovered during QA.

C. Methodologies Selection

To ensure a comprehensive examination of development processes, a range of methodologies was


selected for research, including Waterfall, Scrum, Kanban, Iterative, and XP. This selection spanned both
traditional and modern development methods.

D. Sample Consideration

To account for variations in QA requirements across different domains, the study included organizations
from diverse sectors such as healthcare, finance, e-commerce, and insurance. Moreover, to encompass
variances in QA processes related to organizational size and resource availability, both large-scale and
smaller organizations were chosen. Geographic diversity was also considered by including organizations
from various regions, including the US, Australia, and other locations.

E. Pilot Test

To ensure the clarity and effectiveness of the survey, a pilot test was conducted with a smaller group
within the target audience before distributing the questionnaire to a larger sample. After the pilot test
participants submitted the survey, feedback was collected on various aspects, including question clarity,
survey flow, the understandability of terms, and completion time. The participant input for the pilot test
was thoroughly examined, and necessary refinements were made to the survey instrument. This pilot test
helped identify ambiguities and issues related to the questionnaire.

F. Survey Participants

The survey received responses from forty IT specialists across QA, development, delivery, and security
professions, representing a diverse range of roles and experiences within their respective organizations.

To strike a balance between statistical significance and practical viability, it was decided to target a
sample of 80 IT specialists. This sample size was deemed sufficient to yield meaningful findings given
the complexity of the research topic and the requirement to gather comprehensive data; a larger sample
would have risked reducing the depth of responses.

G. Data Analysis

To reach appropriate decisions regarding the comparison of QA methodologies, the collected data
underwent thorough examination. The survey responses were analyzed using statistical and analytical
methods such as descriptive statistics, inferential statistics, and data visualization. These methods made
it possible to measure essential variables, identify trends, and make statistical comparisons between
various methodologies.
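As a sketch of the descriptive-statistics step (the ratings below are invented, not the study's data), Python's standard library suffices to summarize Likert-style responses:

```python
import statistics
from collections import Counter

# Hypothetical 1-5 adaptability ratings for one methodology.
ratings = [4, 5, 4, 3, 5, 4, 4, 2, 5, 4]

print("mean:  ", statistics.mean(ratings))
print("median:", statistics.median(ratings))
print("stdev: ", round(statistics.stdev(ratings), 2))
print("counts:", dict(sorted(Counter(ratings).items())))
```

The frequency counts feed directly into the kind of bar-chart visualization the figures in this study use.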

H. Minimizing Biases and Errors in Data Collection and Analysis

The survey questions were designed carefully to ensure clarity and objectivity. Ambiguous or leading
questions that might influence respondents' opinions were avoided. The language of the questions
was neutral and did not favor any particular methodology.
The questionnaire's structure was designed to be logical and cohesive to ensure that questions
flowed naturally and did not generate bias or ambiguity.
Participants were assured of the anonymity of their responses, encouraging honest and candid
feedback. This assurance alleviated concerns about potential repercussions or judgment based on
their responses.
Measures were implemented to minimize errors during the data analysis stage, including the use of
data validation checks to identify outliers and address answer inconsistencies.
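One simple form such an outlier check can take is sketched below, with invented data; Tukey's 1.5×IQR rule is one common choice, not necessarily the one used in the study:

```python
import statistics

def iqr_outliers(values):
    """Flag values outside 1.5x the interquartile range (Tukey's rule),
    a simple validation check for survey responses."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical survey completion times in minutes; 95 is a likely entry error.
times = [12, 14, 13, 15, 11, 16, 14, 95]
print(iqr_outliers(times))  # [95]
```

Flagged values can then be followed up for the answer inconsistencies mentioned above rather than silently kept or dropped.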

V. Results and discussion


A. Distribution of Job Roles and Experiences

Among the surveyed professionals, as depicted in Fig. 8, 52.5% are in QA roles, 35% are in development,
and the remaining 12.5% represent roles in project management, data science, and cybersecurity. As
illustrated in Fig. 9, respondents with less than five years of experience constitute 37.5%, with 23% having
4 to 6 years, 15% having 7 to 10 years, and 17.5% having over 10 years of experience.

B. Adoption of Development Methodologies

The study results shed light on the approaches used by organizations in their software development
processes. Scrum was used by the majority of respondents (40%), demonstrating a significant preference
for this Agile methodology. Kanban was used by 15% of respondents, whereas the classic Waterfall
model was used by 5%. A smaller proportion (2.5%) indicated using Iterative development alongside
Scrum, or both Scrum and Agile. Furthermore, 20% of respondents stated that they used both
Kanban and Scrum, demonstrating a dual-methodology approach. The remaining 15% reported using
Waterfall in addition to Iterative techniques, or a combination of Waterfall, Kanban, Scrum, and Iterative
techniques.

These data show the wide range of techniques used by businesses, demonstrating their flexibility in
choosing approaches that best suit their project requirements and development philosophies. Figure 10
provides a visual representation of the methodologies used in the organizations.

The details in Table 2 include the domain and the number of team members employed by the
participants' organizations.

Table 2
Details of Domain and Team Count

Team Details           Given Options                      Number of Responders
Domains                Education                          12
                       Customer Relationship Management   10
                       Healthcare                         8
                       Finance                            8
                       Telecommunication                  8
                       E-commerce                         8
                       Gaming                             4
                       Insurance                          6
                       Real estate                        4
                       Science and research               2
                       Integrations                       2
                       Car industry                       2
                       Culture management                 2
                       Supply chain management            2
                       Lifestyle                          2
No. of team members    Less than 5                        24
                       5 to 10                            28
                       11 to 20                           16
                       More than 20                       12

C. QA Procedure Adoption

Most organizations employ regression testing, unit testing, integration testing, performance testing, and
security testing. Figure 11 illustrates the distribution of these testing types.

Ensuring requirements traceability is an important aspect of requirement management. Establishing
traceability between requirements helps with dependency management, and the connection between
requirements and test cases enables test coverage analysis. Delivering products that meet customer
expectations requires the ability to measure test coverage. Traceability also enables quick impact
evaluation in the event of changes. Reusability of requirements becomes another important feature in
complex systems that are constructed with connected components and shared requirements [21].
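The traceability, coverage, and impact-analysis ideas above can be sketched with a small requirement-to-test-case mapping of the kind a tracking tool or spreadsheet would maintain (all identifiers invented):

```python
# Hypothetical requirement-to-test-case traceability matrix.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no linked test case yet
    "REQ-004": ["TC-04"],
}

# Coverage analysis: which requirements lack a linked test case?
uncovered = [req for req, tests in traceability.items() if not tests]
coverage = 100 * (len(traceability) - len(uncovered)) / len(traceability)

print(f"requirements coverage: {coverage:.0f}%")  # requirements coverage: 75%
print("missing tests for:", uncovered)            # ['REQ-003']

# Impact analysis: if REQ-001 changes, these tests must be re-run.
print("re-run on REQ-001 change:", traceability["REQ-001"])
```

The same links, read in reverse, support requirement reuse: a shared requirement carries its proven test cases into the next component.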

Jira is used by 60% of the respondents' organizations to ensure the link between test cases and
requirements. 47.5% of respondents utilize Excel sheets, while 27.5% use Azure DevOps to track
requirements and test cases. To stay organized, 20% utilize mind maps and 10% use Confluence. Other
tools that have been used to provide traceability between requirements and test cases include ClickUp,
Application Lifecycle Management (ALM) systems, and Git.

D. Test Coverage and Analysis Tools

Test coverage is a crucial aspect of software testing, consisting of fault, code, and requirements
coverage. Requirements coverage ensures that all declared requirements are met, while code coverage
identifies untested code. Test coverage is crucial for early defect discovery, functionality verification,
software quality, risk reduction, and faster regression testing [22].
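The code-coverage side can be illustrated with a toy example (the line sets are invented; real projects would obtain them from a coverage tool rather than by hand):

```python
# Toy illustration of line coverage: which lines of a 20-line module
# were executed by the test suite, and which remain untested.
source_lines = set(range(1, 21))
executed = {1, 2, 3, 4, 5, 6, 9, 10, 11, 12, 15, 16}

covered_pct = 100 * len(executed & source_lines) / len(source_lines)
untested = sorted(source_lines - executed)

print(f"code coverage: {covered_pct:.0f}%")  # code coverage: 60%
print("untested lines:", untested)
```

The untested-lines report is what points testers at the gaps that requirements coverage alone would not reveal.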

The survey's findings show that a majority of respondents (68.3%) do not use analysis tools for
evaluating test coverage. This suggests that a significant percentage of respondents may use different
approaches or manual processes to evaluate and manage their test coverage, which may present an
opportunity for those who are not already using analysis tools to learn more about the advantages of
doing so.

Achieving complete test coverage in software testing is an important goal for ensuring a software
product's quality and reliability. Respondents to the survey have stated numerous effective approaches
for achieving this goal.

First, 37.5% of respondents highlighted the importance of developing a clear mapping between software
requirements and test cases. This method ensures that each defined requirement has a corresponding
test case, lowering the risk of testing oversights. Furthermore, 47.5% of respondents emphasized the
importance of evaluating user stories against acceptance criteria in order to ensure thorough coverage.
This approach ensures that all areas of the program are adequately reviewed by aligning testing efforts
with user expectations. In addition, 35% of participants perform manual test coverage checks when time
permits, offering a practical technique for ensuring comprehensive testing. In contrast, 32.5% of
respondents use code analysis tools to assess test coverage, while 27.5% do not actively evaluate code
coverage.

Only 32.5% of the surveyed companies utilize tools such as SonarQube, Azure DevOps, ALM, and
Acunitex to assess the test coverage of their products. A majority of 67.5% do not employ any tool for
measuring the test coverage of their products.

E. Challenges in Testing Complex Systems

Half of the surveyed companies have encountered challenges in testing complex or interconnected
software systems, primarily attributed to insufficient test coverage. Other significant hurdles when testing
such systems include identifying all potential system interactions and dependencies, time-intensive test
setup and configuration due to complexity, coordinating testing activities across multiple teams
responsible for various components, managing intricate data flow and integration points, difficulty in
replicating defects within complex system interactions, and a lack of comprehensive documentation.
Figure 12 provides a detailed breakdown of the responses to each of these challenges encountered
during testing complex or interconnected software systems.

F. Challenges in QA Processes

The survey indicates that time constraints, experienced by 85% of IT professionals, are their biggest
concern. Limited resources come second at 77.5%. Due to a lack of tooling and support, 50% of
respondents face difficulties. 45% of respondents agree that resistance to change and team adaptation
are other prevalent obstacles, while 30% report communication problems.

G. Effective QA Practices

The following practices were identified as successful: shift-left testing, which moves QA activities earlier
in the development cycle; exploratory testing, for flexible QA adaptability in complex systems; integrated
test automation frameworks, for efficient QA automation; risk-based testing, which focuses QA efforts on
high-priority areas; and DevOps practices, for seamless QA integration into continuous pipelines. Figure
13 demonstrates how respondents adopted these recommended practices.

H. Handling Changing Requirements


The majority of participants (77.5%) actively engage in the routine evaluation and updating of their
test plans in response to changing project specifications and scope requirements. In addition,
62.5% of participants regularly organize test cases according to the most recent project scope. 45% of
respondents report keeping in regular contact with both stakeholders and QA teams in order to maintain
a continuous and effective line of communication. While 35% of participants have included change
management approaches in their QA procedures, 22.5% of participants perform impact analysis to
determine how changes would affect the QA process.

I. Communication and Collaboration

A significant proportion of IT professionals (83.5%) hold regular meetings in order to build good
communication and promote productive cooperation during the development process. To improve
collaboration and communication, 77.5% also use collaborative platforms such as Microsoft Teams,
Slack, and comparable technologies. 42.5% keep thorough documentation as a safeguard against failures
and to guarantee rigorous attention to detail. To further improve their development procedures, 30% keep
open channels for feedback, while 32.5% regularly perform retrospectives. Notably, pair programming is
used as a collaborative strategy by 22.5%.

J. Performance Metrics

55% of survey respondents use defect density as a significant performance metric to assess the quality
of their projects. The test case pass rate is used by 50% of participants as a trustworthy measure of
performance. Furthermore, 42.5% of participants used release frequency and code coverage metrics to
evaluate project success, demonstrating their dedication to quality control. 32.5% of respondents
consider time-to-market a crucial performance indicator. It is important to note, nevertheless, that 12.5%
of participants stated they did not include any particular performance indicators in their projects.
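Two of these metrics are straightforward to compute; the sketch below uses common textbook definitions (an assumption, as organizations vary in how they define them) and invented numbers:

```python
def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

def pass_rate(passed: int, executed: int) -> float:
    """Percentage of executed test cases that passed."""
    return 100 * passed / executed

# Hypothetical release: 36 defects in a 120 KLOC product,
# with 470 of 500 executed test cases passing.
print(f"defect density: {defect_density(36, 120):.2f} defects/KLOC")  # 0.30
print(f"test pass rate: {pass_rate(470, 500):.1f}%")                  # 94.0%
```

Tracking both across releases makes trends visible even when the product's size and test suite grow.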

K. Factors Impacting QA Adaptability

In the examination of factors affecting the adaptability of a QA methodology to changing requirements,
the following list presents the identified factors and the level of agreement among respondents regarding
each factor:

Collaboration and Communication – 70%
Iterative Development Cycles – 45%
Cross-Functional Teams – 35%
Frequent Retrospectives – 32.5%
Prioritization Mechanisms – 62.5%
Resource Allocation – 55%
Clear Documentation – 40%

The results of the survey in response to the question, "What techniques are employed to effectively
manage shifting priorities and evolving project objectives within your QA process?" yield interesting
conclusions. A sizeable majority of respondents (60%) prioritize collaborative interaction with
stakeholders to fully grasp changing project priorities, highlighting the critical importance of good
communication in change management. About 57.5% of participants use a proactive strategy, regularly
reviewing and modifying their testing backlog in line with the adaptability emphasized by the Agile
methodology. Additionally, 57.5% of respondents emphasize the value of working in sync with the
development team to coordinate testing efforts with shifting priorities. Nearly 45% of respondents use
adaptable testing frameworks, allowing testing approaches to be changed seamlessly as project
dynamics change. Additionally, 37.5% support impact analysis as a proactive way to prepare for
difficulties as project objectives change. Notably, 10% of respondents do not have defined techniques
and may need more definite strategies to manage shifting priorities within their QA processes.

Results from the questionnaire, rated on a scale of 1 to 5, provided insight into how respondents
evaluated their QA approaches for managing changing needs during ongoing development. The largest
group of respondents (n = 16) gave Scrum a rating of 4, indicating that it performed particularly well and
was remarkably adaptable to changing requirements. Scrum received the highest rating of 5 from 12
respondents, highlighting its versatility, and two ratings of 3, indicating modest performance in those
cases. In contrast, Kanban received a variety of opinions, most of them favorable, with several
respondents acknowledging its efficiency in managing changes. Waterfall had mixed results, with some
respondents emphasizing its limited adaptability. Different combinations of methodologies showed
different levels of adaptability, with some displaying high effectiveness. These results highlight how
crucial it is to choose a technique that aligns with project dynamics and has the flexibility needed to
respond to changing requirements. Figure 14 shows a visual representation of the ratings given for the
respondents' current development methodologies.

L. Cost and Resource Implications

The study results highlight the significant influence of efficient QA techniques on long-term cost
reductions in the software development industry. A majority of respondents (67.5%) confirmed a
significant decrease in defect-fixing costs at later phases of development, primarily due to the early
detection and prevention of problems. The same percentage also noted a corresponding decrease in the
need for rework and the occurrence of fewer production issues, which translated to lower downtime
expenses. These results are a powerful demonstration of how important QA is to improving product
quality and, ultimately, customer retention, a sentiment shared by 55% of respondents. Furthermore,
47.5% of the participants emphasized the effective resource utilization achieved by streamlined testing
techniques, reinforcing the inherent cost-effectiveness of well-executed QA projects. Interestingly, just
7.5% of respondents thought that QA had no impact on cost savings, confirming the broad acceptance of
its considerable financial advantages. In conclusion, the survey highlights the dual advantages of QA
processes, not only in reducing costs but also in raising product quality, resulting in long-term benefits for
an organization's financial well-being and consumer satisfaction.

According to the study's results, organizations use a variety of cost-saving techniques in their QA
approaches: utilizing open-source testing tools to avoid licensing costs (47.5%), conducting risk-based
testing to focus on critical areas (45%), incorporating shift-left testing to catch defects early (37.5%),
embracing Continuous Integration/Continuous Deployment (CI/CD) to streamline testing (55%), and
implementing efficient test case management and reuse (47.5%). These approaches not only reduce
expenses but also improve testing productivity and overall software quality. However, a small percentage
(5%) of respondents reported having no explicit cost-saving efforts, indicating an opportunity for these
organizations to investigate and apply such strategies to optimize their QA processes and reduce costs
efficiently.

M. User Experience and Feedback

The survey results reveal variations in the frequency with which end users or stakeholders are involved in
the QA process for feedback. The majority of respondents (52.5%) reported engaging with end users or
stakeholders at the end of each sprint in the Agile methodology, ensuring the continuous integration of
feedback into the development process. In contrast, 12.5% preferred monthly involvement, 17.5% favored
quarterly involvement, and 5% preferred biannual engagement. However, a significant 25% of respondents
indicated infrequent participation, potentially representing a missed opportunity to gather valuable
insights and align software development with user expectations.

The survey results confirm the significant impact of QA practices on customer satisfaction and user
experience within the context of software solutions. An overwhelming 80% of respondents strongly agree
that QA methods are crucial for ensuring that software aligns with and meets user expectations, thus
highlighting their importance in delivering user-centric solutions. Moreover, a substantial 62.5% of
participants believe that QA-driven problem prevention significantly enhances the overall user experience,
and a similar number acknowledges the importance of QA in early issue detection, effectively reducing
user incidents. Additionally, 62.5% of respondents concur that QA procedures ensure software reliability,
ultimately leading to increased customer satisfaction. Lastly, 55% of participants agree that
comprehensive testing is linked to a reduction in post-release customer concerns, underscoring the
significance of quality control in improving overall satisfaction with software products.

N. Comparative Analysis
In the survey, we also conducted a comparative analysis of various development methodologies to
identify those that possess distinctive attributes, such as those offering the most advantages or being the
easiest to use. The following questions were asked:

Based on your experience, which QA approach focuses on eliminating waste and maximizing value?
In your opinion, which QA methodology is the most cost-effective for your company?
Among the listed approaches, which one is more adaptable for handling unexpected changes in
project scope?
From your perspective, which methodology best prioritizes and resolves defects identified
during QA?
In your opinion, which approach impacts the ability to iterate and improve QA processes over time?
Which QA approach generally leads to a shorter time-to-market for software products?

Table 3 provides a comparative analysis of various development approaches based on the key attributes
of software QA practices. The following section offers an analysis of the findings:

A. Waste Reduction and Value Maximization (A1)

Scrum is the most widely recognized approach for its focus on waste reduction and value maximization,
with 62.5% of respondents acknowledging its effectiveness. Waterfall, Iterative, and XP are mentioned by
fewer respondents in this context, suggesting that they are not as strongly associated with waste
reduction and value maximization.

B. Cost Effectiveness (A2)

Scrum is considered the most cost-effective QA methodology for businesses, with 72.5% of respondents
in agreement. Kanban is also acknowledged for its cost-effectiveness, with 20% of respondents viewing it
as a viable option. Waterfall and Iterative receive limited support for cost-effectiveness; however, XP is not
mentioned in this context.

C. Adaptability in the Face of Unexpected Changes (A3)

Scrum and Iterative approaches are considered the most adaptable for managing unexpected changes in
project scope, chosen by 55% and 30% of respondents, respectively. Kanban and Waterfall are mentioned
less frequently in relation to adaptability, while no respondents selected XP in this context.

D. Prioritization and Defect Resolution (A4)

Scrum is considered effective at prioritizing and resolving defects during QA, with 57.5% of respondents
in agreement. Kanban and Iterative approaches to defect resolution are also acknowledged, although to a
lesser degree. Waterfall and XP are less favored for this aspect of QA.

E. Effect on the Ability to Iterate and Improve QA Processes (A5)

Scrum is highly regarded by 70% of respondents for its significant impact on the ability to iterate and
enhance QA processes over time. Iterative development is also acknowledged for its potential in this
regard, though to a lesser extent. Kanban and Waterfall are less frequently associated with iterative
quality improvement, and respondents did not associate XP with a substantial influence on the ability to
iterate and enhance QA processes.

F. Time-to-Market (A6)

Scrum is perceived by 69.2% of respondents as the methodology that generally leads to the shortest time-
to-market for software products. The Iterative methodology receives some recognition for reducing
time-to-market, while Waterfall, Kanban, and XP receive little.

Scrum is acknowledged for its agility and efficiency, making it an excellent choice for projects with rapid
changes. Kanban is known for its cost-effectiveness, which is particularly advantageous for resource
management and budget constraints. Iterative techniques are valued for their adaptability in handling
unforeseen developments. The efficiency of Scrum in defect prioritization and resolution is highlighted,
underscoring its capacity to streamline QA processes. Scrum's iterative nature promotes continuous
quality improvement. Furthermore, Scrum's reputation for minimizing time-to-market underscores its
utility in a competitive software industry, enabling organizations to capitalize on market opportunities
more swiftly.

Table 3 presents the results of a comparative analysis of different development methodologies.

Table 3
Results of the Comparative Analysis
Attribute   Waterfall   Scrum   Kanban   Iterative   XP
A1          10%         62.5%   10%      12.5%       5%
A2          0%          72.5%   20%      7.5%        0%
A3          0%          55%     15%      30%         0%
A4          10%         57.5%   17.5%    15%         0%
A5          7.5%        70%     12.5%    10%         0%
A6          5.1%        69.2%   5.1%     12.8%       7.7%
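As a quick illustration, the percentages in Table 3 can be transcribed into a small script and queried programmatically; the sketch below simply reports the most-favored methodology per attribute, with the values copied from the table above.

```python
# Survey percentages transcribed from Table 3 (attribute -> methodology -> % of respondents).
TABLE_3 = {
    "A1": {"Waterfall": 10.0, "Scrum": 62.5, "Kanban": 10.0, "Iterative": 12.5, "XP": 5.0},
    "A2": {"Waterfall": 0.0, "Scrum": 72.5, "Kanban": 20.0, "Iterative": 7.5, "XP": 0.0},
    "A3": {"Waterfall": 0.0, "Scrum": 55.0, "Kanban": 15.0, "Iterative": 30.0, "XP": 0.0},
    "A4": {"Waterfall": 10.0, "Scrum": 57.5, "Kanban": 17.5, "Iterative": 15.0, "XP": 0.0},
    "A5": {"Waterfall": 7.5, "Scrum": 70.0, "Kanban": 12.5, "Iterative": 10.0, "XP": 0.0},
    "A6": {"Waterfall": 5.1, "Scrum": 69.2, "Kanban": 5.1, "Iterative": 12.8, "XP": 7.7},
}

def top_methodology(attribute: str) -> str:
    """Return the methodology with the highest respondent share for an attribute."""
    scores = TABLE_3[attribute]
    return max(scores, key=scores.get)

for attr in sorted(TABLE_3):
    print(attr, "->", top_methodology(attr))
```

Running this confirms the discussion above: Scrum leads on every one of the six attributes.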

VI. Conclusion
The study explored a diverse spectrum of software development approaches, encompassing both the
classical Waterfall model and Agile methodologies such as Scrum, Kanban, Iterative, and XP. It delved
into the roles, experiences, and domains of the experts involved in these approaches, shedding light on
the intricacies of real-world software development scenarios.

One of the prominent findings of this study is the widespread adoption of fundamental testing
methodologies, including regression testing, unit testing, integration testing, performance testing, and
security testing. Emphasis has been placed on the critical need for requirements traceability, which
facilitates effective management, coverage analysis, and impact assessment in response to evolving
project dynamics. In this context, tools such as Jira, Excel sheets, and Azure DevOps have been found to
be critical for maintaining this traceability.
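The requirement-to-test traceability described above (management, coverage analysis, and impact assessment) can be sketched as a minimal in-memory matrix. All requirement and test-case IDs below are hypothetical; in practice, as noted, teams typically hold this data in Jira, Excel sheets, or Azure DevOps.

```python
from collections import defaultdict

class TraceabilityMatrix:
    """Minimal requirement-to-test-case traceability matrix (illustrative sketch)."""

    def __init__(self):
        self.req_to_tests = defaultdict(set)

    def link(self, requirement_id: str, test_id: str) -> None:
        """Record that a test case verifies a requirement."""
        self.req_to_tests[requirement_id].add(test_id)

    def uncovered(self, all_requirements):
        """Requirements with no linked test case (coverage gap analysis)."""
        return [r for r in all_requirements if not self.req_to_tests[r]]

    def impacted_tests(self, changed_requirement: str):
        """Tests to re-run when a requirement changes (impact assessment)."""
        return sorted(self.req_to_tests[changed_requirement])

# Hypothetical IDs for illustration only.
matrix = TraceabilityMatrix()
matrix.link("REQ-001", "TC-101")
matrix.link("REQ-001", "TC-102")
matrix.link("REQ-002", "TC-201")
```

With this structure, a change to "REQ-001" immediately yields the affected tests, and any requirement absent from the matrix surfaces as a coverage gap.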

The study highlights the importance of attaining comprehensive test coverage, which may be
accomplished through a variety of methods, such as establishing clear mapping between requirements
and test cases, reviewing user stories against acceptance criteria, and conducting manual test coverage
assessments. While some companies employ code analysis tools for this purpose, there remains room
for further improvement in this domain.
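The first of these methods — a clear mapping between requirements and test cases — reduces to a simple metric: the fraction of requirements that have at least one mapped test. A minimal sketch, with hypothetical IDs:

```python
def requirement_coverage(requirements, req_to_tests):
    """Percentage of requirements with at least one mapped test case."""
    if not requirements:
        return 100.0  # vacuously covered
    covered = sum(1 for r in requirements if req_to_tests.get(r))
    return 100.0 * covered / len(requirements)

# Hypothetical mapping: REQ-3 has no test case yet, so coverage is 2/3.
reqs = ["REQ-1", "REQ-2", "REQ-3"]
mapping = {"REQ-1": ["TC-1"], "REQ-2": ["TC-2", "TC-3"]}
print(f"Requirement coverage: {requirement_coverage(reqs, mapping):.1f}%")
```

Note that this measures requirement coverage, not code coverage; the code analysis tools mentioned above address the latter.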

Time constraints, resource limitations, and resistance to change were recognized as major hurdles in
testing complex or interconnected software systems. The study also elucidated strategies for addressing
these challenges, including the adoption of best practices such as shift-left testing, exploratory testing,
integrated test automation frameworks, risk-based testing, and embracing DevOps principles.
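Among the practices listed, risk-based testing is the easiest to make concrete: each test is given a risk score (here, a hypothetical likelihood × impact product) and constrained testing time is spent on the highest-risk areas first. This is an illustrative sketch, not a scheme prescribed by the study.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: int  # 1 (rare) .. 5 (frequent), estimated by the team
    impact: int              # 1 (cosmetic) .. 5 (critical), estimated by the team

    @property
    def risk(self) -> int:
        """Simple multiplicative risk score."""
        return self.failure_likelihood * self.impact

def prioritize(tests):
    """Order tests by descending risk score (risk-based testing)."""
    return sorted(tests, key=lambda t: t.risk, reverse=True)

# Hypothetical suite: under time pressure, run from the top of this ordering.
suite = [
    TestCase("login_flow", failure_likelihood=2, impact=5),        # risk 10
    TestCase("report_export", failure_likelihood=4, impact=2),     # risk 8
    TestCase("payment_rounding", failure_likelihood=3, impact=5),  # risk 15
]
```

The same ordering idea underpins shift-left testing as well: the riskiest checks are the ones worth moving earliest in the pipeline.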

Furthermore, the survey results highlighted the tangible financial benefits derived from effective QA
processes, such as reduced defect-fixing costs, minimized rework, optimized resource utilization, and
heightened customer retention. These results demonstrate the measurable return on investment
associated with a well-implemented QA approach.

Moreover, cost-saving solutions in QA were investigated, ranging from open-source testing tools to
implementing risk-based testing and CI/CD practices. These methods not only reduce expenses but also
improve testing productivity and software quality.

The extent of end-user or stakeholder involvement in the QA process exhibited variations, with Agile
techniques encouraging constant engagement for user feedback. This emphasizes the significance of
regularly interacting with users to align software development with user expectations.

The study revealed that the effective implementation of QA methods has a major impact on user
experience and customer satisfaction, ensuring that software meets user expectations, preventing
problems, and enhancing software stability. These findings highlight the critical importance of QA in
developing user-centric solutions.

Finally, the results suggest that Scrum exhibits excellence across various dimensions, including waste
reduction, cost-effectiveness, adaptability to changes, defect resolution, continuous iterative
improvements, and accelerated time-to-market. However, it is imperative to recognize that each approach
has its unique strengths and applicability, depending on specific project requirements and constraints. To
optimize software development processes, the chosen QA technique should align seamlessly with the
project’s goals and constraints.

Declarations
Corresponding Author Details

Corresponding author – L K B Siriwardana

Corresponding author’s email – [email protected]

Author’s Contribution

D. I. De Silva – reviewed the paper and made the necessary updates, wrote the ‘methodology’ section,
analysed the survey results, and wrote the ‘results and discussion’ section

L. K. B. Siriwardana – conducted the survey, conducted the literature review and wrote the ‘introduction’,
‘background’ and ‘related work’ sections, wrote the ‘methodology’ section, analysed the survey results,
and wrote the ‘results and discussion’ section

Funding

No funding was obtained for this study.

Conflicts of Interest

Financial Disclosure: The authors of this manuscript declare that they have no financial conflicts of
interest with respect to the research, authorship, and publication of this article. This includes, but is not
limited to, any financial associations, funding sources, or financial relationships with organizations or
entities that could be perceived to influence the research or its outcomes.

Non-Financial Disclosure: The authors declare no non-financial conflicts of interest that may be relevant
to this work. This includes, but is not limited to, personal, professional, political, or academic relationships
that might influence the interpretation or presentation of the research.

Research Funding: This research did not receive any specific grant from funding agencies in the public,
commercial, or not-for-profit sectors.

Involvement in Organizations: The authors declare that they are not currently, nor have been in the recent
past, associated with organizations that may have a direct or indirect interest in the subject matter or
materials discussed in this manuscript.

Patent or Intellectual Property Interests: The authors declare that they have no patent or intellectual
property interests related to the content of this manuscript.

Human and Animal Rights: The authors confirm that the work described in this manuscript complies with
relevant human and animal rights, as applicable to their research.

Data Sharing: The authors will provide access to the data and materials associated with this research,
upon request, in accordance with the journal's policies.

Data availability statement

The data that support the findings of this study are available from the corresponding author upon
reasonable request. Data availability is subject to restrictions imposed by ethical and privacy
considerations.

In compliance with best practices for transparency and reproducibility, we are committed to making the
data used in this research available to other researchers, provided that the data sharing is in accordance
with ethical and legal standards and permissions, and does not compromise the privacy and
confidentiality of individuals or entities involved in this study.

For inquiries regarding access to the data, please contact the corresponding author: [email protected]

Please note that the availability of data may be subject to specific institutional or legal restrictions, such
as privacy regulations, participant consent agreements, or intellectual property rights. We will make every
reasonable effort to facilitate data sharing to the extent possible within these constraints.

The data shared will include raw data and processed data. We aim to provide data in a format that allows
for the replication and verification of the findings presented in this manuscript.

We also encourage interested researchers to cite this paper when using the provided data to ensure
proper attribution.

References
1. M. Martin, “Project Management Methodologies Tutorial,” Jul. 15, 2023. [Online]. Available:
https://www.guru99.com/types-project-methodology.html. [Accessed: Aug. 22, 2023].
2. T. Hamilton, “Software Testing Methodologies: QA Models,” Jul. 01, 2023. [Online]. Available:
https://www.guru99.com/testing-methodology.html. [Accessed: Aug. 22, 2023].
3. S. M. Saleh, S. M. Huq, and M. A. Rahman, “Comparative Study within Scrum, Kanban, XP Focused on
Their Practices,” in 2019 International Conference on Electrical, Computer and Communication
Engineering (ECCE), Feb. 2019, pp. 1–6. doi: 10.1109/ECACE.2019.8679334.
4. J. López-Martínez, R. Juárez-Ramírez, C. Huertas, S. Jiménez, and C. Guerra-García, “Problems
in the Adoption of Agile-Scrum Methodologies: A Systematic Literature Review,” in 2016 4th International
Conference in Software Engineering Research and Innovation (CONISOFT), Apr. 2016, pp. 141–148.
doi: 10.1109/CONISOFT.2016.30.
5. K. Martin and M. Pamela, “Automated GUI testing on the Android platform,” IMVS Fokus Report,
vol. 4, no. 1, pp. 33–36, 2010.
6. A. K. Sultanía, “Developing software product and test automation software using Agile
methodology,” in Proceedings of the Third International Conference on Computer, Communication,
Control and Information Technology (C3IT), Feb. 2015, pp. 1–4. doi: 10.1109/C3IT.2015.7060120.
7. R. K. Gupta, P. Manikreddy, and A. GV, “Challenges in Adapting Agile Testing in a Legacy
Product,” in 2016 IEEE 11th International Conference on Global Software Engineering (ICGSE), Aug. 2016,
pp. 104–108. doi: 10.1109/ICGSE.2016.21.
8. A. U. Rehman, A. Nawaz, M. T. Ali, and M. Abbas, “A Comparative Study of Agile Methods,
Testing Challenges, Solutions & Tool Support,” in 2020 14th International Conference on Open Source
Systems and Technologies (ICOSST), Dec. 2020, pp. 1–5. doi: 10.1109/ICOSST51357.2020.9332965.
9. A. Rahmat and N. A. M. Hanifiah, “Usability Testing in Kanban Agile Process for Club
Management System,” in 2020 6th International Conference on Interactive Digital Media (ICIDM), Dec.
2020, pp. 1–6. doi: 10.1109/ICIDM51048.2020.9339668.
10. M. A. Umar, “Comprehensive study of software testing: Categories, levels, techniques, and
types,” International Journal of Advance Research, Ideas and Innovations in Technology, vol. 5, no. 6,
pp. 32–40, 2019.
11. M. A. Umar, “A Study of Software Testing: Categories, Levels, Techniques, and Types,”
TechRxiv, Jun. 2020. doi: 10.36227/techrxiv.12578714.v1.
12. ISO Online Browsing Platform, “Systems and software engineering — Systems and software Quality
Requirements and Evaluation (SQuaRE) — System and software quality models,” ISO/IEC 25010. [Online].
Available: https://www.iso.org/obp/ui/en/#iso:std:iso-iec:25010:ed-1:v1:en. [Accessed: Aug. 26, 2023].
13. Capgemini and Sogeti, “DevOps with Quality: Achieving the desired quality at every stage of the
DevOps lifecycle,” [Online]. Available:
https://www.sogeti.fi/globalassets/global/downloads/testing/pov_devops-with-quality_ok.pdf.
[Accessed: Aug. 25, 2023].
14. M. M. A. Ibrahim, S. M. Syed-Mohamad, and M. H. Husin, “Managing Quality Assurance Challenges of
DevOps through Analytics,” in Proc. 2019 8th International Conference on Software and Computer
Applications (ICSCA), New York, NY, USA: Association for Computing Machinery, 2019, pp. 194–198.
doi: 10.1145/3316615.3316670.
15. A. Mateen Buttar, M. Jahanzaib, and N. Iqbal, “The Role of Quality Assurance in Software
Development Projects: Project Failures and Business Performance,” Jan. 2017.
16. A. Nayyar, Instant Approach to Software Testing. India: BPB Publications, 2019.
17. A. Dennis, B. H. Wixom, and R. M. Roth, Systems Analysis and Design, 5th ed. USA: John Wiley &
Sons, Inc., 2012.
18. A. K. Arumugam, “Software Testing Techniques: New Trends,” IJERT, vol. 8, no. 12, Jan. 2020.
doi: 10.17577/IJERTV8IS120318.
19. K. Sneha and G. M. Malle, “Research on software testing techniques and software automation testing
tools,” in 2017 International Conference on Energy, Communication, Data Analytics and Soft
Computing (ICECDS), Aug. 2017, pp. 77–81. doi: 10.1109/ICECDS.2017.8389562.
20. P. Perera, M. Bandara, and I. Perera, “Evaluating the impact of DevOps practice in Sri Lankan
software development organizations,” in 2016 Sixteenth International Conference on Advances in ICT for
Emerging Regions (ICTer), Sep. 2016, pp. 281–287. doi: 10.1109/ICTER.2016.7829932.
21. S. M. Ooi, R. Lim, and C. C. Lim, “An integrated system for end-to-end traceability and requirements
test coverage,” in 2014 IEEE 5th International Conference on Software Engineering and Service
Science, Jun. 2014, pp. 45–48. doi: 10.1109/ICSESS.2014.6933511.
22. QMetry, “The Importance of Test Coverage in Software Testing: Ensuring Quality and
Reliability,” Medium, Jul. 06, 2023. [Online]. Available: https://medium.com/@QMetry/the-importance-of-test-coverage-in-software-testing-ensuring-quality-and-reliability-bfe81b3ec538.
[Accessed: Sep. 04, 2023].
23. I. Hooda and R. Singh Chhillar, “Software Test Process, Testing Types and Techniques,”
IJCA, vol. 111, no. 13, pp. 10–14, Feb. 2015. doi: 10.5120/19597-1433.
24. C. Chi, “The Beginner’s Guide to Usability Testing [+ Sample Questions],” Jul. 28, 2021. [Online].
Available: https://blog.hubspot.com/marketing/usability-testing. [Accessed: Sep. 09, 2023].
25. T. Hamilton, “What is Usability Testing? Software UX,” Aug. 26, 2023. [Online]. Available:
https://www.guru99.com/usability-testing-tutorial.html. [Accessed: Sep. 09, 2023].
26. S. Bose, “Exploratory Testing: A Detailed Guide,” Mar. 16, 2023. [Online]. Available:
https://www.browserstack.com/guide/exploratory-testing. [Accessed: Sep. 09, 2023].
27. I. Bhatti, J. A. Siddiqi, A. Moiz, and Z. A. Memon, “Towards Ad hoc Testing Technique Effectiveness in
Software Testing Life Cycle,” in 2019 2nd International Conference on Computing, Mathematics
and Engineering Technologies (iCoMET), Jan. 2019, pp. 1–6. doi: 10.1109/ICOMET.2019.8673390.
28. “The Agile Journey: A Scrum overview,” Jun. 23, 2021. [Online]. Available: https://www.pm-partners.com.au/the-agile-journey-a-scrum-overview/. [Accessed: Sep. 09, 2023].
29. V. Karuna, “Lean Kanban Methodology to Application Support and Maintenance,” Sep. 13, 2015.
[Online]. Available: https://agilegnostic.wordpress.com/2015/09/13/lean-kanban-methodology-to-application-support-and-maintenance/. [Accessed: Sep. 09, 2023].

Figures

Figure 1

Waterfall model [2]

Figure 2

Agile model

Figure 3
Scrum model [28]

Figure 4

Kanban model [29]

Figure 5

Iterative model [2]

Figure 6

Extreme programming model [2]

Figure 7

Impact and effectiveness of adopting Agile testing [7]

Figure 8

Distribution of the job role

Figure 9

Distribution of the respondents’ work experience

Figure 10

Number of methodologies adopted in organizations

Figure 11

Different testing types used in organizations

Figure 12

Challenges related to testing complex or interconnected software systems

Figure 13

Best Practices

Figure 14

Visual representation of the ratings

