SOFTWARE TESTING AND USER EXPERIENCE
Nastaran Nazar Zadeh
Toronto Academic Press
© 2024
ISBN: 978-1-77956-199-2 (e-book)
This book contains information obtained from highly regarded resources. Reprinted material sources are indicated
and copyright remains with the original owners. Copyright for images and other graphics remains with the original
owners as indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable
data. Authors, editors, and publishers are not responsible for the accuracy of the information in the published
chapters or the consequences of their use. The publisher assumes no responsibility for any damage or grievance to
persons or property arising out of the use of any materials, instructions, methods, or thoughts in the book. The
authors, editors, and publisher have attempted to trace the copyright holders of all material reproduced in this
publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not
been acknowledged, please write to us so we may rectify the omission.
Notice: Registered trademarks of products or corporate names are used only for explanation and identification
without intent of infringement.
Toronto Academic Press publishes a wide variety of books and eBooks. For more information about Toronto
Academic Press and its products, visit our website at www.tap-books.com.
ABOUT THE AUTHOR
Nastaran Nazar Zadeh is a highly experienced computer engineer, researcher, and advisor in the fields
of robotics, artificial intelligence and computer science. She holds a Master of Science in Computer
Engineering from Mapua University of the Philippines and pursued her Ph.D. in Electronic Engineering
at the same institution. With over seven years of teaching experience, Nastaran has taught electronic
and computer engineering programs at several reputable academic institutions, where she has also led
numerous thesis studies. Her research focuses on developing robotics systems with A.I. and machine
learning, which enables her to stay up-to-date with the latest advancements in the field and implement
cutting-edge technologies.
Contents
Preface xv
List of Figures ix
List of Tables xi
List of Abbreviations xiii

1 Introduction to Software Testing 1
Unit Introduction 1
1.1. Quality Process 4
1.2. Quality Plan 6
1.3. Quality Process Monitoring 7
1.4. Verification and Validation 9
1.5. Functional and Model-Based Testing 10
1.5.1. Functional Testing 11
1.5.2. Model-Based Testing 14
1.6. Testing Levels 16
1.6.1. Module Testing 16
1.6.2. Integration and Component-Based Testing 18
1.6.3. System, Acceptance, and Regression Testing 20
Summary 24
Review Questions 24
Multiple Choice Questions 24
References 25

2 Types of Software Testing 33
Unit Introduction 33
2.1. Unit Testing 35
2.2. Unit Testing in Introductory Courses 35
2.3. Test-Driven Development (TDD) 36
2.4. Unit Testing in Java With JUnit 37
2.5. Extensions and Advanced Features 42
2.6. Unit Testing for Automated Project Grading 42
Summary 44
Review Questions 44
Multiple Choice Questions 44
References 45

3 Goals, Scope, and History of Software Testing 49
Unit Introduction 49
3.1. The Testing Techniques Taxonomy 51
3.1.1. The Goal of Testing 51
3.2. The Testing Spectrum 51
3.3. Dynamic Analysis and Static Analysis 52
3.4. Structural Technique and Functional Technique 52
3.5. Scope of the Study 53
3.5.1. Technical Scope 53
3.5.2. Goal and Standard of Progress 54
3.6. The History of Testing Techniques 54
3.6.1. Concept Evolution 54
3.7. Major Technical Contributions 57
3.8. Technology Maturation 61
3.8.1. Redwine/Riddle Software Technology Maturation Model 62
3.9. Brief History of Software Engineering 63
3.10. Testing Process Models 64
3.11. The Major Stages of Research & Development Trends 64
3.11.1. 1950 – 1970: Ad Hoc 65
3.11.2. 1971 – 1985: Emphasize Implementation and Single Program 65
3.11.3. 1986 – Current: Emphasize Specification and System 66

4.6. Agile Model-Driven Development 92
4.6.1. Model-Driven Agile Development 93
Summary 94
Review Questions 94
Multiple Choice Questions 94
References 95

5.1. Usable Meaning 103
5.2. What Makes Something Less Usable? Why Are So Many High-Tech Products So Hard To Use? 105
5.2.1. Five Reasons Why Products Are Hard to Use 105
5.3. What Makes Products More Usable? 111
5.3.1. An Early Emphasis on Users and Tasks 112
5.3.2. Evaluation and Measurement of Product Usage 113
5.3.3. Iterative Design and Testing 113
5.4. Characteristics of Organizations that Train UCD Practices 113
5.4.1. Stages That Include User Input 113

6 Usability Testing 127
Unit Introduction 127
6.1. Why Test? Goals of Testing 129
6.1.1. Informing Design 129
6.3. Basic Elements of Usability Testing 132
6.4. When Should You Test? 132
6.4.1. Our Types of Tests: An Overview 133
6.5. Exploratory or Formative Study 134
6.5.1. When 134

7.1.4. It Provides Focal Point for Milestone and Test 152
7.2. The Test Plan Parts 152
7.2.1. Review Goal & Purpose of Test 153
7.3. Communicate Research Questions 154
7.4. Summarize Participant Characteristics 156
7.4.1. Description of the Method 157
7.4.2. Independent Groups Design 158
7.4.3. Within-Subjects Design 159
7.5. Testing Multiple Product Versions 160
7.6. Testing Multiple User Groups 160
7.7. List the Tasks 161
7.8. Parts of a Task for the Test Plan 161
7.8.1. The Materials and Machine States Required to Perform the Task 161

8.2.2. Create a Strategy and Business Case 180
8.2.3. Build on Successes 181
8.2.4. Set Up Long-Term Relationships 181
8.3. Sell Yourself and What You are Doing 182
8.3.1. Strategize: Choose Your Battles Carefully 182
8.4. Formalize Processes and Practices 183
8.4.1. Establish a Central Residency for User-Centered Design 183
8.5. Add Usability-Related Activities to the Product Life Cycle 184
8.5.1. Educate Others Within the Organization 185
8.5.2. Identify and Cultivate Champions 186
8.5.3. Publicize the Usability Success Stories 186
8.5.4. Link Usability to Economic Benefits 187
8.6. Expand UCD Throughout the Organization 188
8.6.1. Pursue More Formal Educational Opportunities 188
8.6.2. Standardize Participant Recruitment Policies and Procedures 189
8.6.3. Align Closely with Market Research and Industrial Design 189
8.6.4. Evaluate Product Usability in the Field after Product Release 189
8.6.5. Evaluate the Value of Your Usability Engineering Efforts 190
8.6.6. Develop Design Standards 190
8.6.7. Focus Your Efforts Early in the Product Life Cycle 190
Summary 191
Multiple Choice Questions 191
References 192

INDEX 197
List of Figures
Figure 1.1. A typical fault distribution for a system grows over time
Figure 1.2. The distinction between validation and verification
Figure 1.3. Testing a partition based on categories, a basic set of categories, options, and restrictions; this is the example catalog handler. Options for each grouping appear in their respective columns. Limits are denoted by square brackets
Figure 1.4. An unrestrained set of classifications and options
Figure 1.5. A model of the shopping cart’s finite state machines derived from its vague specification
Figure 1.6. A StateCharts description of the shopping cart from the previous section
Figure 1.7. The V-shaped progression of development and evaluation
Figure 1.8. A systematic approach to system testing
Figure 2.1. Dynamic unit test environment
Figure 3.1. Testing information flow
Figure 3.2. Major research results in the area of software testing techniques
Figure 3.3. Technology maturation analysis of software testing techniques
Figure 4.1. The waterfall life cycle
Figure 4.2. The waterfall life cycle as the V-model
Figure 4.4. Rapid prototyping life cycle
Figure 4.5. Executable specification
Figure 4.6. Generic agile life cycle
Figure 4.7. The extreme programming life cycle
Figure 4.8. Test-driven development life cycle
Figure 4.9. The scrum life cycle
Figure 4.10. The agile model-driven development life cycle
Figure 5.1. Bailey’s human performance model
Figure 5.2. Nonintegrated approach to product development
Figure 5.3. Integrated approach to product development
Figure 5.4. Questions and methods for answering them
Figure 6.1. Usability testing throughout the product lifecycle
Figure 6.2. Test monitor and participant exploring the product
Figure 6.3. Web page navigation interface
Figure 7.1. An example of research questions from a usability test of a hotel reservations website
Figure 7.2. Sample participant characteristics and desired mix
XP Extreme Programming
PREFACE
Software testing is an essential aspect of the software development process that ensures software
applications’ quality, reliability, and functionality. However, testing alone is not enough to guarantee a
great user experience. Integrating user experience design and testing throughout the entire software
development process is crucial to creating intuitive, efficient, and enjoyable software applications.
This book aims to provide an in-depth understanding of software testing and user experience design
and testing. We will explore the different types of software testing, the goals, scope, and history of
software testing, and how testing goes beyond just unit testing. We will also delve into the fundamentals
of user experience design and usability testing, as well as the process of conducting a test.
The first chapter begins with an introduction to software testing. It explores what software testing is,
why it is important, and its role in the software development process. It also covers the different types of
testing and their significance in ensuring the quality and reliability of software applications.
The second chapter delves into the various types of software testing. It discusses unit testing, integration
testing, system testing, acceptance testing, and other types of testing. It also explores the strengths and
weaknesses of each type of testing and the best practices for implementing them.
The third chapter will explore the goals, scope, and history of software testing. It examines the evolution
of software testing from its early days to its current state. It also discusses the different goals of software
testing, such as detecting defects, verifying software requirements, and improving software quality.
The fourth chapter will look beyond unit testing and explore other types of testing, such as exploratory
testing, regression testing, and performance testing. It also discusses the challenges and opportunities
presented by these testing methods.
The fifth chapter will focus on user experience design. It examines what user experience is,
its importance, and how it affects the success of software applications. It also explores the different
elements of user experience design, such as usability, accessibility, and aesthetics.
The sixth chapter will explore usability testing. It examines the importance of usability testing, the
different methods used in usability testing, and the best practices for conducting a usability test.
The seventh chapter will delve into the process of conducting a test. It examines the different
stages of testing, including planning, preparation, execution, and reporting. We will also discuss the
different tools and techniques used in software testing.
The final chapter discusses how to expand from usability testing to designing the user experience.
—Author
CHAPTER 1
INTRODUCTION TO SOFTWARE
TESTING
UNIT INTRODUCTION
Large software system development is a difficult and error-prone process. Errors
can emerge at any level of development; however, these errors need to be found and
fixed as quickly as possible in order to limit their propagation and reduce the expenses
associated with verification. Engineers specializing in quality assurance need to be a part
of the product development process from the very beginning so that they can determine
which qualities are necessary and evaluate how those qualities will affect the process
(DeMillo et al., 1988). Their responsibilities encompass the entirety of the development
cycle, extending beyond product release into areas such as maintenance and post-mortem
examination. It is not a simple task to develop and implement an effective quality process.
Doing so requires integrating a large number of quality-related activities with the product's
attributes, the established process, the available skills and resources, and the financial
constraints (Baresi & Pezze, 2006).
Building huge software products requires a lot of different operations, all of which need
to be coordinated effectively in order to achieve the objectives that have been set. Among
these responsibilities, we can distinguish the actions that primarily contribute to building the
product from the operations that inspect the quality of the evolving process and of the artifacts
it generates. This categorization is not as sharp as it might seem, given that most actions
contribute, even if only slightly, to both promoting progress and monitoring quality. This
characterization of tasks is therefore not always exact. Still, it helps
Key Terms
• Integration Testing
• Model Based Testing
• Module Testing
• Quality Process
• Software Testing
• Testing Levels
• Verification and Validation
Figure 1.1. A typical fault distribution for a system grows over time.
The quality engineer notes the start and end dates, the resources used, and the progress
of each activity, and responds to variances from the current plan either by adapting the
plan when deviations are acceptable or by adopting a new plan when deviations are severe.
Quantitative evaluation of development progress is extremely challenging and has only
lately come into use. It entails acquiring details about fault distribution and comparing
them with historical data (Kano & Nakagawa, 2008).
Figure 1.1 depicts the allocation of faults over releases while accounting for three
degrees of severity. The graphic shows that the quantity of faults increases over the first
builds before dropping. The number of defects then decreases at varying rates: severe
faults drop more quickly than ordinary faults, and average faults may even increase slightly.
Different fault distributions suggest potential quality issues: if the frequency of faults does
not decrease in the initial versions, it could be a sign of inadequate testing. On the other
hand, if the frequency of faults does not decrease in the subsequent releases, it may
indicate poor detection and resolution of the faults (Gowen et al., 2008).
The orthogonal defect classification (ODC), established by IBM in the 1990s, provides
a thorough classification of defects and advocates monitoring the different distributions
to identify potential quality issues (Gowen et al., 2012).
For example, we can verify that users can add a specific item to their shopping cart with no more
than four mouse clicks from the home page, and that the application responds within one second
of the click while serving up to ten thousand concurrent users. As a result, system testing
begins as soon as requirements specifications are written. Mature development
methods plan inspection tasks to evaluate the testability of the specifications and maximize the
software product's verifiability (Babuska & Oden, 2004).
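As a rough illustration of such a verifiable requirement, the sketch below (an assumption made for this discussion, not code from the book) turns the one-second response requirement into an automated JUnit check; the lookup method is a placeholder for a real call into the system under test.

import junit.framework.TestCase;

public class ResponseTimeTest extends TestCase {

    // Placeholder for a real catalog request; an actual test would invoke
    // the deployed application here.
    private String lookup(String itemId) {
        return "details for " + itemId;
    }

    public void testLookupRespondsWithinOneSecond() {
        long start = System.currentTimeMillis();
        lookup("ITEM-42");
        long elapsed = System.currentTimeMillis() - start;
        // The one-second budget comes from the requirement quoted above.
        assertTrue("response took " + elapsed + " ms", elapsed < 1000);
    }
}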
Certain methods are needed for the validation and verification of various
properties. Usability attributes, for instance, need special-purpose procedures for
their validation, unlike dependability properties, which can be verified through
model-based testing methodologies outlined below. In this instance, a typical
process consists of the following key steps:
i. Checking specifications using custom checklists and traditional inspection
methods.
ii. Evaluating preliminary prototypes created by simulating user interfaces.
iii. Testing periodic releases with end users and usability professionals
(Wallace & Fujii, 1989).
iv. Final system and acceptability testing should take into account user-
based testing, comparative testing, expert-based assessment and
evaluation, and automatic analysis (Sargent, 1987).
In contrast to functional and model-based testing, which do not need
user involvement, usability testing depends largely on users. The usability
team identifies the categories of users, chooses appropriate samples of the
population based on the identified classes, defines groups of interactions that
accurately reflect important usages of the system, observes the interaction
of the selected users with the system, and then analyzes the results (Ryan &
Wheatcraft, 2017).
Figure 1.4. An unrestrained set of classifications and options.
of options can thus expose the majority of likely failures. Combinatorial testing
and category partition can be successfully integrated by first restricting the
combination of options and then only considering pairwise possibilities (Caldwell
et al., 1985).
We identify both abnormal values as well as boundary and error criteria
while choosing options for the categories that have been defined. Many flaws
typically lurk in unique situations that rely on the kind of factors being taken
into account (Wuyts et al., 2007). For instance, test experts advise taking into
account at least one value inside the limits, the low and high bounds themselves,
the values before and after each bound, and at least one other value outside
the limit when dealing with a range of values [low, high]. Expert test designers’
knowledge can be preserved in catalogs that identify all scenarios that must
be taken into account for each specification (Favaloro et al., 2010). We have
the option of creating both general-purpose and niche catalogs. The first can
be applied in the majority of situations, whereas the second only applies to
special fields that are distinguished by specific cases. Specifications with a
clear structure can benefit from catalogs. In order to provide a full set of test
cases, catalog-based testing generally translates specifications into pre- and
post-conditions, variables, definitions, and functions first (Kuvin & Karas, 2003).
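As a concrete illustration of the boundary advice above, the following sketch (constructed here for illustration, not an excerpt from the catalogs cited) derives JUnit test values for a simple range check over [low, high]: a value inside the limits, the bounds themselves, the values on each side of the bounds, and a value far outside.

import junit.framework.TestCase;

public class RangeBoundaryTest extends TestCase {

    // Illustrative function under test: is value within [low, high]?
    private static boolean inRange(int value, int low, int high) {
        return value >= low && value <= high;
    }

    public void testBoundaryValues() {
        int low = 10, high = 20;
        assertTrue(inRange(15, low, high));         // a value inside the limits
        assertTrue(inRange(low, low, high));        // the low bound itself
        assertTrue(inRange(high, low, high));       // the high bound itself
        assertTrue(inRange(low + 1, low, high));    // just after the low bound
        assertTrue(inRange(high - 1, low, high));   // just before the high bound
        assertFalse(inRange(low - 1, low, high));   // just outside the low bound
        assertFalse(inRange(high + 1, low, high));  // just outside the high bound
        assertFalse(inRange(1000, low, high));      // a value far outside the limit
    }
}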
Figure 1.7. The V-shaped progression of development and evaluation.
Consider, for example, an effective sorting utility that must handle extremely small sets, sets that are
somewhat large, and sets that are larger than the available memory (Marinissen
et al., 2002). A straightforward quadratic algorithm such as bubblesort is
sufficient for sorting small sets, quicksort may be recommended for sorting
large sets that fit in memory, and treesort is better for sorting sets that
are larger than the available memory (Li et al., 2005).
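A minimal sketch of that specification is shown below; the size thresholds, method names, and the use of Arrays.sort as a stand-in for quicksort and for an external tree/merge sort are assumptions made purely for illustration.

import java.util.Arrays;

public class AdaptiveSorter {

    static final int SMALL_LIMIT = 16;             // assumed threshold for "small"
    static final int IN_MEMORY_LIMIT = 1_000_000;  // assumed threshold for "fits in memory"

    public static void sort(int[] data) {
        if (data.length <= SMALL_LIMIT) {
            bubbleSort(data);       // quadratic, but adequate for tiny inputs
        } else if (data.length <= IN_MEMORY_LIMIT) {
            Arrays.sort(data);      // stands in for quicksort on in-memory sets
        } else {
            externalSort(data);     // stands in for a tree/merge-based external sort
        }
    }

    private static void bubbleSort(int[] data) {
        for (int i = 0; i < data.length - 1; i++) {
            for (int j = 0; j < data.length - 1 - i; j++) {
                if (data[j] > data[j + 1]) {
                    int tmp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = tmp;
                }
            }
        }
    }

    private static void externalSort(int[] data) {
        // Placeholder: a real implementation would sort runs on disk and merge them.
        Arrays.sort(data);
    }
}

Functional test cases for such a utility would then include at least one input drawn from each of the three size categories.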
Functional testing covers the functionality of the code; structural testing
examines the code's structure and addresses scenarios that are not covered
by functional testing. The application of structural testing typically occurs in two
steps: first, programmers determine the code coverage using straightforward
coverage tools that show the percentage of covered code and highlight
the missing portions, and then they create test cases that exercise the portions
of the code not yet covered. Several code elements might be considered
when calculating coverage (Jha et al., 2009). The simplest coverage criterion
is statement-centric: it measures the proportion of statements that are actually
executed by the test cases. Branch coverage criteria measure the number of
branches that are exercised by the tests. Path coverage criteria measure the
number of paths that the tests cover. Depending on how paths are chosen,
several path coverage criteria can be defined. Additional criteria refer to data
flow. Readers can find more details on code coverage in (Abuelnaga et al., 2021).
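The following small example (constructed for this discussion, not taken from the sources above) shows how the criteria differ: a single test with a negative input executes every statement of absValue, yet never exercises the outcome in which the condition is false, so statement coverage is complete while branch coverage is not.

public class CoverageExample {

    static int absValue(int x) {
        int result = x;
        if (x < 0) {
            result = -x;   // executed only when x is negative
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(absValue(-5)); // covers every statement, but only the true branch
        System.out.println(absValue(3));  // needed to cover the false branch as well
    }
}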
Coverage criteria consider all code elements, including those that are not
executable. Unfortunately, there is no general solution to the problem of
recognizing executable elements, so we are unable to automatically select only
the subset of elements that can actually be executed (Cappa et al., 2018).
Code coverage is therefore typically used as an approximate metric to track module
testing activities rather than as an absolute indicator. For instance, with
statement coverage, the presence of up to 10–15% of non-executable
statements may be tolerable. A higher percentage, however, may indicate either a
poor design that results in an excessive number of non-executable statements,
or a poor specification that makes it impossible to derive an adequate set of
functional test cases. When the coverage drops below a desirable level, test
developers and designers look more closely at the module to find and fix the
issue (Wohlgemuth & Kurtz, 2011).
In some critical settings, all components that are not exercised are examined
to see whether they can be tested at all, or which factors lead to the presence of
non-executable statements. For instance, the quality standards generally used in
the avionics industry, RTCA/DO-178B "Software Considerations in Airborne
Systems and Equipment Certification," and its European equivalent, EUROCAE
ED-12B, mandate MCDC coverage for on-board software and manual testing
of components (Yan et al., 2013).
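To give a sense of what MCDC requires, the sketch below (an illustration constructed here, not an excerpt from the standard) exercises the decision a && (b || c) with a minimal set of four test vectors, each condition being shown to independently affect the outcome.

public class McdcExample {

    // The decision under test: a && (b || c).
    static boolean decision(boolean a, boolean b, boolean c) {
        return a && (b || c);
    }

    public static void main(String[] args) {
        // (T,T,F)->true vs. (F,T,F)->false : a independently affects the outcome
        // (T,T,F)->true vs. (T,F,F)->false : b independently affects the outcome
        // (T,F,T)->true vs. (T,F,F)->false : c independently affects the outcome
        boolean[][] mcdcSet = {
            {true, true, false},
            {false, true, false},
            {true, false, false},
            {true, false, true},
        };
        for (boolean[] v : mcdcSet) {
            System.out.println("a=" + v[0] + " b=" + v[1] + " c=" + v[2]
                    + " -> " + decision(v[0], v[1], v[2]));
        }
    }
}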
Figure 1.8. A systematic approach to system testing.
Rerunning all the test cases created for the prior versions is
a straightforward regression testing strategy known as the "retest
all" approach, and it is used to see whether the new version exhibits any
odd behaviors that were not present in the previous versions. This
straightforward method can result in significant expense and
nontrivial issues, because it requires adapting test cases that cannot
be directly reused on the new version. Also, the price of performing
all test cases again could be prohibitive and perhaps useless (Seymour et al.,
2007).
With ad-hoc procedures created for the particular application, the quantity
of test cases that need to be re-run can be decreased. Selection techniques
may be based on code: the people who work on the code keep track of the
program elements that were exercised in earlier versions and choose test
cases that exercise the elements altered in the current release (Davis, 1989).
Several code-based selection methods concentrate on various program
features, such as control flow, data flow, and so on. Code-based selection techniques
have good tool support and work even when specifications are not kept
up to date, but they are difficult to scale up: they work well for small, local
modifications, but present challenges when changes affect significant areas of the
product (Khorasani & Zeyun, 2014).
Selection techniques based on changes to the specifications scale up
significantly better than code-based solutions, because they are not constrained
by the quantity of altered code; they do, however, require adequately maintained
specifications (Roubtsov & Heck, 2006). They perform especially well with
model-based testing methodologies, which can be enhanced with traceability
information to pinpoint tests that need to be run again. For instance, if the
system is described using finite state machines, it is simple to adapt traditional
test generation criteria to concentrate on the finite state machine elements
that have been altered or added in the most recent build (Ding et al., 2019).
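A minimal sketch of this idea is given below, under the assumption that each existing test case records the labels of the finite state machine transitions it exercises; the representation and method names are illustrative, not taken from the cited work.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class FsmRegressionSelector {

    // Each test case is represented simply as the list of transition labels it
    // exercises, e.g. ["addItem", "checkout"]. A test is selected for re-execution
    // only if it touches at least one transition changed or added in the new build.
    public static List<List<String>> select(List<List<String>> tests,
                                            Set<String> changedTransitions) {
        List<List<String>> selected = new ArrayList<List<String>>();
        for (List<String> test : tests) {
            for (String transition : test) {
                if (changedTransitions.contains(transition)) {
                    selected.add(test);
                    break;
                }
            }
        }
        return selected;
    }
}

The same idea carries over to richer specification models; only the mapping from each test case to the model elements it exercises needs to be maintained.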
Test case prioritization techniques define preferences among tests and
provide various execution orders rather than choosing a subset of test cases.
Priorities are set so as to delay the execution of cases that are less likely to
uncover flaws, in order to maximize the effectiveness of testing. Defect-detection
effectiveness, execution histories, and code structure are the foundations
of common priority schemas (ALraja, 2015). Recently executed test cases are
given low priority by history-based priority schemas; by doing this, we can ensure
that every test case will eventually be run again. For frequent runs, such as
overnight regression testing, this technique excels. Priority schemas that concentrate on
fault detection increase the priority of tests that exposed flaws in the latest
versions, making them more likely to exercise unstable sections of the code
and expose flaws that have been present for a while. Schemas with structural
priorities give precedence either to test cases that exercise elements that have
not been executed recently or to test cases that produce good coverage (Liker & Sindi, 1997).
In the first scenario, they reduce the likelihood that certain sections
of code go untested for a long period of time; in the second scenario,
they reduce the number of tests that must be run repeatedly in
order to attain sufficient coverage (Hanssen & Haugset, 2009). By locating,
SUMMARY
Since software testing has been a topic of ongoing research for many years, quality
engineers today can draw on a wide variety of findings, tools, and approaches. Even though
many conventional research areas remain open, advances in design and application continually
raise new challenges. The majority of the outcomes from research on testing theory to date
have been negative, indicating the discipline's limitations yet urging further research. We still
need a solid foundation for comparing different criteria and methods. While useful, the testing
methods now in use are still not entirely satisfactory. In order to address new programming
paradigms we need new approaches, but more importantly, we need greater test automation
support.
Complex computer systems, heterogeneous mobile applications, and component-
based development present additional difficulties. For modern software systems, it is
often difficult to forecast all potential applications and execution frameworks, so we must
shift from traditional testing schemes that function primarily before deployment to those
that function after deployment, such as dynamic analysis and self-organizing software.
REVIEW QUESTIONS
1. What is software testing, and why is it essential in the software development
process?
2. What are the different types of software testing, and what are the advantages
and disadvantages of each approach?
3. What is the difference between functional and non-functional testing, and why is
it important to test both?
4. What are some common challenges associated with software testing, and how
can they be addressed?
5. How do you design an effective testing strategy, and what factors should be
considered when creating a test plan?
6. What are some best practices for managing and reporting software defects, and
how can defect tracking systems help streamline the process?
REFERENCES
1. Abuelnaga, A., Narimani, M., & Bahman, A. S., (2021). A review on IGBT module
failure modes and lifetime testing. IEEE Access, 9, 9643–9663.
2. Aichernig, B. K., Mostowski, W., Mousavi, M. R., Tappler, M., & Taromirad, M.,
(2018). Model learning and model-based testing. In: Machine Learning for Dynamic
Software Analysis: Potentials and Limits: International Dagstuhl Seminar 16172,
Dagstuhl Castle, Germany, April 24–27, 2016, Revised Papers (pp. 74–100).
3. Alegroth, E., Nass, M., & Olsson, H. H., (2013). JAutomate: A tool for system-and
acceptance-test automation. In: 2013 IEEE Sixth International Conference on Software
Testing, Verification and Validation (pp. 439–446).
4. Alexander, A., Bergman, P., Hagströmer, M., & Sjöström, M., (2006). IPAQ
environmental module; reliability testing. Journal of Public Health, 14, 76–80.
5. Ali, A. S. B., & Money, W. H., (2005). A study of project management system
acceptance. In: Proceedings of the 38th Annual Hawaii International Conference on
System Sciences (pp. 234c).
6. ALraja, M. N., (2015). User acceptance of information technology: A field study of
an e-mail system adoption from the individual students’ perspective. Mediterranean
Journal of Social Sciences, 6(6 S1), 19.
7. Arditi, D., & Gunaydin, H. M., (1997). Total quality management in the construction
process. International Journal of Project Management, 15(4), 235–243.
8. Babuska, I., & Oden, J. T., (2004). Verification and validation in computational
engineering and science: Basic concepts. Computer Methods in Applied Mechanics
and Engineering, 193(36–38), 4057–4066.
9. Baresi, L., & Pezze, M., (2006). An introduction to software testing. Electronic Notes
in Theoretical Computer Science, 148(1), 89–111.
10. Barr, E. T., Harman, M., McMinn, P., Shahbaz, M., & Yoo, S., (2014). The oracle
problem in software testing: A survey. IEEE Transactions on Software Engineering,
41(5), 507–525.
11. Berke, P. R., & French, S. P., (1994). The influence of state planning mandates on
local plan quality. Journal of Planning Education and Research, 13(4), 237–250.
12. Berke, P., & Godschalk, D., (2009). Searching for the good plan: A meta-analysis
of plan quality studies. Journal of Planning Literature, 23(3), 227–240.
13. Beydeda, S., & Gruhn, V., (2001). An integrated testing technique for component-
based software. In: Proceedings ACS/IEEE International Conference on Computer
Systems and Applications (pp. 328–334).
14. Binder, R. V., Legeard, B., & Kramer, A., (2015). Model-based testing: Where does
it stand? Communications of the ACM, 58(2), 52–56.
15. Brahme, D., & Abraham, J. A., (1984). Functional testing of microprocessors. IEEE
Transactions on Computers, 33(06), 475–485.
16. Brody, S. D., (2003). Are we learning to make better plans? A longitudinal analysis
of plan quality associated with natural hazards. Journal of Planning Education and
Research, 23(2), 191–201.
17. Caldwell, G., Gow, S. M., Sweeting, V. M., Kellett, H. A., Beckett, G. J., Seth,
J., & Toft, A. D., (1985). A new strategy for thyroid function testing. The Lancet,
325(8438), 1117–1119.
18. Candea, G., Bucur, S., & Zamfir, C., (2010). Automated software testing as a service.
In: Proceedings of the 1st ACM Symposium on Cloud Computing (pp. 155–160).
19. Cappa, C., Mont, D., Loeb, M., Misunas, C., Madans, J., Comic, T., & De Castro, F.,
(2018). The development and testing of a module on child functioning for identifying
children with disabilities on surveys. III: Field testing. Disability and Health Journal,
11(4), 510–518.
20. Carson, J. S., (2002). Model verification and validation. In: Proceedings of the Winter
Simulation Conference (Vol. 1, pp. 52–58).
21. Causevic, A., Sundmark, D., & Punnekkat, S., (2010). An industrial survey on
contemporary aspects of software testing. In: 2010 Third International Conference
on Software Testing, Verification and Validation (Vol. 1, pp. 393–401).
22. Chowdhury, R. S., & Forsmark, C. E., (2003). Pancreatic function testing. Alimentary
Pharmacology & Therapeutics, 17(6), 733–750.
23. Ciortea, L., Zamfir, C., Bucur, S., Chipounov, V., & Candea, G., (2010). Cloud9: A
software testing service. ACM SIGOPS Operating Systems Review, 43(4), 5–10.
24. Cooper, B. G., (2011). An update on contraindications for lung function testing.
Thorax, 66(8), 714–723.
25. Crapo, R. O., (1994). Pulmonary-function testing. New England Journal of Medicine,
331(1), 25–30.
26. Crnkovic, I., (2001). Component‐based software engineering—New challenges in
software development. Software Focus, 2(4), 127–133.
27. Dalal, S. R., Jain, A., Karunanithi, N., Leaton, J. M., Lott, C. M., Patton, G. C., &
Horowitz, B. M., (1999). Model-based testing in practice. In: Proceedings of the 21st
International Conference on Software Engineering (pp. 285–294).
28. Davis, F. D., (1989). Technology acceptance model: TAM. In: Al-Suqri, M. N., &
Al-Aufi, A. S., (eds.), Information Seeking Behavior and Technology Adoption (pp.
205–219).
29. Davis, F. D., (1993). User acceptance of information technology: System characteristics,
user perceptions and behavioral impacts. International Journal of Man-Machine
Studies, 38(3), 475–487.
30. DeMillo, R. A., Guindi, D. S., McCracken, W. M., Offutt, A. J., & King, K. N., (1988).
An extended overview of the Mothra software testing environment. In: Workshop on
Software Testing, Verification, and Analysis (pp. 142, 143).
31. Dias-Neto, A. C., & Travassos, G. H., (2010). A picture from the model-based testing
area: Concepts, techniques, and challenges. In: Advances in Computers (Vol. 80,
pp. 45–120).
32. Ding, Z., Saide, S., Astuti, E. S., Muwardi, D., Najamuddin, N., Jannati, M., &
Herzavina, H., (2019). An adoption of acceptance model for the multi-purpose system
in university library. Economic Research-Ekonomska Istraživanja, 32(1), 2393–2403.
33. Donabedian, A., (1968). Promoting quality through evaluating the process of patient
care. Medical Care, 6(3), 181–202.
34. Douglas, P. S., Hoffmann, U., Patel, M. R., Mark, D. B., Al-Khalidi, H. R., Cavanaugh,
B., & Lee, K. L., (2015). Outcomes of anatomical versus functional testing for coronary
artery disease. New England Journal of Medicine, 372(14), 1291–1300.
35. Faisal, A., Handayanna, F., & Purnamasari, I., (2021). Implementation technology
acceptance model (tam) on acceptance of the zoom application in online learning.
Jurnal Riset Informatika, 3(2), 85–92.
36. Favaloro, E. J., Lippi, G., & Franchini, M., (2010). Contemporary platelet function
testing. Clinical Chemistry and Laboratory Medicine, 48(5), 579–598.
37. Fawzy, S. F., & Esawai, N., (2017). Internet banking adoption in Egypt: Extending
technology acceptance model. Journal of Business and Retail Management Research,
12(1), 109–118.
38. Garousi, V., & Mäntylä, M. V., (2016). When and what to automate in software testing?
A multi-vocal literature review. Information and Software Technology, 76, 92–117.
39. Gill, N. S., & Grover, P. S., (2003). Component-based measurement: Few useful
guidelines. ACM SIGSOFT Software Engineering Notes, 28(6), 4.
40. Gowen, A. A., O’donnell, C. P., Cullen, P. J., & Bell, S. E. J., (2008). Recent
applications of chemical imaging to pharmaceutical process monitoring and quality
control. European Journal of Pharmaceutics and Biopharmaceutics, 69(1), 10–22.
41. Gowen, A. A., O’Sullivan, C., & O’Donnell, C. P., (2012). Terahertz time domain
spectroscopy and imaging: Emerging techniques for food process monitoring and
quality control. Trends in Food Science & Technology, 25(1), 40–46.
42. Hanssen, G. K., & Haugset, B., (2009). Automated acceptance testing using fit. In:
2009 42nd Hawaii International Conference on System Sciences (pp. 1–8).
43. Hartman, A., & Nagin, K., (2004). The AGEDIS tools for model-based testing. ACM
SIGSOFT Software Engineering Notes, 29(4), 129–132.
44. Harvey, L., (2005). A history and critique of quality evaluation in the UK. Quality
Assurance in Education, 13(4), 263–276.
45. Hasan, B., (2006). Delineating the effects of general and system-specific computer
self-efficacy beliefs on IS acceptance. Information & Management, 43(5), 565–571.
46. Hayes, I. J., (1986). Specification directed module testing. IEEE Transactions on
Software Engineering, (1), 124–133.
47. Howden, W. E., (1980). Functional program testing. IEEE Transactions on Software
Engineering, (2), 162–169.
48. Jha, P. C., Gupta, D., Yang, B., & Kapur, P. K., (2009). Optimal testing resource
allocation during module testing considering cost, testing effort and reliability.
Computers & Industrial Engineering, 57(3), 1122–1130.
49. Juristo, N., Moreno, A. M., & Strigel, W., (2006). Guest editors’ introduction: Software
testing practices in industry. IEEE Software, 23(4), 19–21.
50. Kano, M., & Nakagawa, Y., (2008). Data-based process monitoring, process control,
and quality improvement: Recent developments and applications in steel industry.
Computers & Chemical Engineering, 32(1, 2), 12–24.
51. Kehrel, B. E., & Brodde, M. F., (2013). State of the art in platelet function testing.
Transfusion Medicine and Hemotherapy, 40(2), 73–86.
52. Khorasani, G., & Zeyun, L., (2014). Implementation of technology acceptance model
(TAM) in business research on web-based learning system. International Journal of
Innovative Technology and Exploring Engineering, 3(11), 112–116.
53. Kim, S., Park, S., Yun, J., & Lee, Y., (2008). Automated continuous integration
of component-based software: An industrial experience. In: 2008 23rd IEEE/ACM
International Conference on Automated Software Engineering (pp. 423–426).
54. Kuvin, J. T., & Karas, R. H., (2003). Clinical utility of endothelial function testing:
Ready for prime time? Circulation, 107(25), 3243–3247.
55. Labiche, Y., Thévenod-Fosse, P., Waeselynck, H., & Durand, M. H., (2000). Testing
levels for object-oriented software. In: Proceedings of the 22nd International Conference
on Software Engineering (pp. 136–145).
56. Le Traon, Y., Mouelhi, T., & Baudry, B., (2007). Testing security policies: Going
beyond functional testing. In: The 18th IEEE International Symposium on Software
Reliability (ISSRE’07) (pp. 93–102).
57. Lemos, O. A. L., Silveira, F. F., Ferrari, F. C., & Garcia, A., (2018). The impact of
software testing education on code reliability: An empirical assessment. Journal of
Systems and Software, 137, 497–511.
58. Leung, H. K., & Wong, P. W., (1997). A study of user acceptance tests. Software
Quality Journal, 6, 137–149.
59. Li, H. Y., Li, W. H., Wong, L. Y., & Hwang, N., (2005). Built-in via module test
structure for backend interconnection in-line process monitor. In: Proceedings of
the 12th International Symposium on the Physical and Failure Analysis of Integrated
Circuits, 2005; IPFA 2005 (pp. 167–170).
60. Lieb, II. J. G., & Draganov, P. V., (2008). Pancreatic function testing: Here to stay
for the 21st century. World Journal of Gastroenterology: WJG, 14(20), 3149.
61. Liker, J. K., & Sindi, A. A., (1997). User acceptance of expert systems: A test of
the theory of reasoned action. Journal of Engineering and Technology Management,
14(2), 147–173.
62. Lyles, W., & Stevens, M., (2014). Plan quality evaluation 1994–2012: Growth and
contributions, limitations, and new directions. Journal of Planning Education and
Research, 34(4), 433–450.
63. Mahmood, S., Lai, R., & Kim, Y. S., (2007). Survey of component-based software
development. IET Software, 1(2), 57–66.
64. Marinissen, E. J., Iyengar, V., & Chakrabarty, K., (2002). A set of benchmarks for
modular testing of SOCs. In: Proceedings. International Test Conference (pp. 519–528).
65. Mentzer, J. T., Flint, D. J., & Hult, G. T. M., (2001). Logistics service quality as a
segment-customized process. Journal of Marketing, 65(4), 82–104.
66. Meyers, D. C., Durlak, J. A., & Wandersman, A., (2012). The quality implementation
framework: A synthesis of critical steps in the implementation process. American
Journal of Community Psychology, 50, 462–480.
97. Steenkamp, J. B. E., (1990). Conceptual model of the quality perception process.
Journal of Business Research, 21(4), 309–333.
98. Thapa, B. R., & Walia, A., (2007). Liver function tests and their interpretation. The
Indian Journal of Pediatrics, 74, 663–671.
99. Utting, M., Pretschner, A., & Legeard, B., (2012). A taxonomy of model‐based testing
approaches. Software Testing, Verification and Reliability, 22(5), 297–312.
100. Van, D. B. A., Mummery, C. L., Passier, R., & Van, De. M. A. D., (2019). Personalized
organs-on-chips: Functional testing for precision medicine. Lab on a Chip, 19(2),
198–205.
101. Vitharana, P., (2003). Risks and challenges of component-based software development.
Communications of the ACM, 46(8), 67–72.
102. Wallace, D. R., & Fujii, R. U., (1989). Software verification and validation: An overview.
IEEE Software, 6(3), 10–17.
103. Weyuker, E. J., (1998). Testing component-based software: A cautionary tale. IEEE
Software, 15(5), 54–59.
104. Wohlgemuth, J. H., & Kurtz, S., (2011). Using accelerated testing to predict module
reliability. In: 2011 37th IEEE Photovoltaic Specialists Conference (pp. 003601–003605).
105. Wohlgemuth, J., & Kurtz, S., (2014). Photovoltaic module qualification plus testing.
In: 2014 IEEE 40th Photovoltaic Specialist Conference (PVSC) (pp. 3589–3594).
106. Wu, C. W., Pearn, W. L., & Kotz, S., (2009). An overview of theory and practice on
process capability indices for quality assurance. International Journal of Production
Economics, 117(2), 338–359.
107. Wu, Y., Chen, M. H., & Offutt, J., (2003). UML-based integration testing for component-
based software. In: COTS-Based Software Systems: Second International Conference,
ICCBSS 2003 Ottawa, Canada, February 10–12, 2003 Proceedings 2 (pp. 251–260).
108. Wu, Y., Pan, D., & Chen, M. H., (2001). Techniques for testing component-based
software. In: Proceedings Seventh IEEE International Conference on Engineering
of Complex Computer Systems (pp. 222–232).
109. Wuyts, F. L., Furman, J., Vanspauwen, R., & Van De, H. P., (2007). Vestibular
function testing. Current Opinion in Neurology, 20(1), 19–24.
110. Yan, F., Noble, J., Peltola, J., Wicks, S., & Balasubramanian, S., (2013).
Semitransparent OPV modules pass environmental chamber test requirements.
Solar Energy Materials and Solar Cells, 114, 214–218.
111. Yau, S. S., & Dong, N., (2000). Integration in component-based software development
using design patterns. In: Proceedings 24th Annual International Computer Software
and Applications Conference; COMPSAC 2000 (pp. 369–374).
CHAPTER 2
TYPES OF SOFTWARE TESTING
UNIT INTRODUCTION
In beginner programming courses, the need for testing in high-quality software development
is often underestimated. Unit tests, and test-driven development (TDD) in particular, enhance
learning and foster an emphasis on quality and correctness. Tools like JUnit greatly simplify
the creation of test cases. These systems can also be used to automate the grading of
assignments, which is an added benefit to teachers (Basili & Selby, 1987).
The purpose of validation is to increase the user's confidence that the program is working
as intended; validation often relies on inspection. The purpose of testing is to ensure that a
program meets its requirements, but as Edsger Dijkstra pointed out, testing can only show the
presence of bugs, never their absence. Nonetheless, thorough testing significantly boosts
confidence that a program performs as planned (Kuhn & Reilly, 2002).
As a programmer, never presume that your code is correct. Yet novice programmers
frequently believe that success means a program that runs without syntax errors. Early
exposure to testing can help students develop a testing mindset and a sense of accountability
for the correctness of their work (Hooda & Chhillar, 2015).
Debugging print statements that clutter a program's source code and cause
"scroll blindness" were long the standard approach to software testing. To interpret test
findings, one must sift through a lot of output, which is a time-consuming and highly subjective
procedure. Another significant drawback of this method is that it lacks the automation
and versatility necessary to run tests frequently and quickly during development
(Fraser & Arcuri, 2012).
Learning Objectives
At the end of this lesson, students will be able to:
• Identify and describe the different types of software testing
• Understand the advantages and disadvantages of each testing type
• Comprehend the testing process
• Develop testing strategies
• Learn about the tools and techniques used in software testing
Key Terms
• Software Testing
• Test-Driven Development
• Types of Software Testing
• Unit Testing
• Unit Testing in Java
The BlueJ IDE provides an environment for interactively testing classes and functions.
BlueJ's object bench allows for the interactive execution of an object's methods. Inspectors
make it possible to see the object's current state, providing quick feedback on the results of
invoking a method.
BlueJ's interactive testing features have some restrictions. Tests that find defects must
be manually run again after corrections have been made, because they cannot be automated
or reused. This is especially problematic when a large amount of manually provided data is
required, such as when filling a big array. Regression testing is necessary in these circumstances,
which entails re-running tests to make sure that changes have fixed the issue
and have not introduced any new faults. Such testing must be automated to be
useful (Barriocanal et al., 2002).
It is preferable to require as little human interaction as possible during testing.
The tests should self-check, and results should be sent to the programmer
indicating if each test was successful or unsuccessful. Regression test
development can be time-consuming and sometimes error-prone. In actuality,
more code may be required for tests than is being tested. However, the amount
of work required to generate test cases can be significantly decreased by tools
like JUnit (Janzen, 2005).
Testing can also be done with a debugger. Setting breakpoints, watches, and the like
is an efficient way to track the execution of a program and identify the root cause of a
bug. An interactive debugger, however, is not appropriate for regression
testing. Once an error has been found, it is then possible to isolate it by
debugging. Unit testing can greatly lessen the need for debugging, because it
is best suited to the early identification and eradication of errors (Warren et
al., 2014).
Tests run in the debugger or on the BlueJ object bench are not properly automated
tests. In essence, they can execute only one method call or statement at a time,
and they require human judgment to interpret the outcomes (Morse &
Anderson, 2004).
import junit.framework.TestCase;

public class AccountTest extends TestCase {

    // Account under test (the Account class itself is not shown in the text);
    // it is assumed to start with a balance of 500, so a deposit of 200 yields 700.
    private Account a;

    public AccountTest(String arg0) {
        super(arg0);
    }

    protected void setUp() {
        a = new Account(500);
    }

    public void testDeposit() {
        a.deposit(200);
        assertEquals("deposit error", 700, a.getBalance());
    }
}
The result, represented as a red bar, shows that a failure took place. Once more,
the number of test methods, errors, and failures is displayed. The assertEquals
message string argument, the expected outcome, and the actual result are all displayed
in the failures panel, which also indicates the method that failed.
You can run specific test methods in the hierarchy by using the Run button
to the right of the Results window. Stack traces of the unsuccessful test methods
are shown in the bottom panel.
JUnit distinguishes between failures and errors. Failures are anticipated and
checked for by the assert methods. Errors are unexpected problems signaled by
uncaught exceptions that have propagated out of a test method.
It is necessary to address the issue that has been identified with withdraw.
We can enhance the method by raising an exception (Janzen & Saiedian, 2005).
// Subtracts amt from the balance of this Account
// Throws InsufficientFundsException if balance < amt
public void withdraw(double amt) {
    if (balance < amt)
        throw new InsufficientFundsException();
    else
        balance = balance - amt;
}
Now that the testWithdraw2 method has succeeded, another test is required
to ensure that the exception is thrown when necessary.
public void testWithdraw3() {
    try {
        a.withdraw(501);
        fail("Expected InsufficientFundsException");
    } catch (InsufficientFundsException e) {
        // exception test succeeded
    }
}
The fail method causes the test method to fail immediately if the anticipated
exception is not raised. In this case the test passes, because the
InsufficientFundsException is thrown and caught.
Test suites help organize test cases when a project expands and the
number of test cases grows. The term "test suite" refers to a group of
related tests. A JUnit test suite can be created in a variety of ways.
One is to add each test method by name individually, as in the example that
follows (Bruzual et al., 2020).
public static Test suite() {
    TestSuite suite = new TestSuite("Test everything");
    suite.addTest(new AccountTest("testDeposit"));
    suite.addTest(new AccountTest("testWithdraw1"));
    suite.addTest(new AccountTest("testWithdraw2"));
    suite.addTest(new OtherAcctTests("testWithdraw3"));
    suite.addTest(new OtherAcctTests("testWithdraw4"));
    return suite;
}
A simpler solution is to include all the test methods from a test case class at once, as in this
example.

public static Test suite() {
    // (Reconstructed from the surviving fragment.) addTestSuite picks up every
    // method whose name starts with "test" from the given class.
    TestSuite suite = new TestSuite();
    suite.addTestSuite(AccountTest.class);
    suite.addTestSuite(OtherAcctTests.class);
    return suite;
}
All methods whose names start with "test" in the specified classes
will be included. JUnit test cases are ordinary Java classes, making it simple
to add comments and generate documentation pages. Consequently, documenting
the test plan becomes a straightforward and integral step in the overall process of
developing tests. JUnit can be used manually or from an IDE with
reasonable ease. Simple IDEs for educational purposes, such as BlueJ and DrJava, now
come with built-in JUnit functionality. The open-source Eclipse IDE integrates
JUnit in a very flexible manner at a more advanced level. A wizard in
Eclipse can create test method stubs automatically. IDEs like Eclipse, which
also support hot code replacement, enable the execution of JUnit tests without the
need to recompile the test suite. Eclipse can be set up to start a debugger when
a failure occurs (Artho et al., 2007).
ACTIVITY 2.1:
What are the types of software testing? Give a detailed presentation.

It would be simple to create a script on a Unix/Linux machine to further
automate this procedure by cycling among submissions and sending the
output to files (Olan, 2003).
The testing procedure can be further automated with Java's Ant
build tool, with the outcomes compiled into a structured report and
sent via email to the intended recipient. Notwithstanding the need
for more grader comments, there remains room for improvement
in the grading process (Spacco et al., 2005).
Another option is to assemble a submission system in which students can run
their code against a ready-made test driver to receive feedback on failed
tests. Such a system would also emphasize how crucial accurate specifications
and standards are: the tests cannot be performed if a student's code does not
adhere to the class's specifications for method names and declarations.
Because mistakes that students would probably miss can be found and fixed
before the final submission is due, it might also lower the number of
undesirable submissions. This is consistent with Kent Beck's recommendations
for TDD, notwithstanding the possibility that it would encourage programming
to the test (Desai et al., 2009).
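As a rough sketch of the kind of driver such a script or Ant task could invoke once per submission (an assumption made for illustration, not the grading system described in the cited work), the class below runs the instructor's JUnit suite against the student's compiled classes and prints a summary that can be redirected to a file.

import junit.framework.TestResult;
import junit.framework.TestSuite;

public class GradingDriver {

    public static void main(String[] args) {
        // AccountTest is the instructor-written test case shown earlier in this
        // chapter; TestSuite(Class) picks up every method whose name starts with "test".
        TestResult result = junit.textui.TestRunner.run(new TestSuite(AccountTest.class));
        int passed = result.runCount() - result.failureCount() - result.errorCount();
        System.out.println("Tests run: " + result.runCount()
                + ", passed: " + passed
                + ", failures: " + result.failureCount()
                + ", errors: " + result.errorCount());
    }
}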
SUMMARY
A testing framework is necessary for creating trustworthy software. The addition of unit
testing to a programmer’s toolbox can enhance the design and drastically cut down
on time spent looking for enigmatic errors. Students can be effectively encouraged to
extensively test the software they build by using tools like JUnit. Such tools are more
than just extra bells and whistles for an IDE; they improve the learning process. If unit
testing is introduced to students early on and is supported by these tools, students may
also become “test infected.”
REVIEW QUESTIONS
1. What are the different types of software testing, and how are they used in the
software development life cycle?
2. How does functional testing differ from non-functional testing, and why is it
important to test both aspects of a software system?
3. What are some of the most common techniques used in manual testing, and how
do they compare to automated testing approaches?
4. What is regression testing, and why is it important in ensuring software quality
over time?
5. How can performance testing help identify bottlenecks and other issues that might
affect the overall speed and responsiveness of a software system?
6. What are some of the key considerations when designing and implementing a
software testing strategy, and how can organizations ensure that their testing
efforts are effective and efficient?
3. Which type of testing ensures that the software works as intended across
different devices, platforms, and browsers?
a. Compatibility testing
b. Regression testing
c. Security testing
d. Acceptance testing
4. Which type of testing is conducted by developers to ensure that their code
is working as expected?
a. Unit testing
b. Integration testing
c. System testing
d. Acceptance testing
5. Which type of testing verifies that the software meets the requirements and
specifications agreed upon by the client and development team?
a. Regression testing
b. Acceptance testing
c. System testing
d. Usability testing
REFERENCES
1. Artho, C., Chen, Z., & Honiden, S., (2007). AOP-based automated unit test classification
of large benchmarks. In: 31st Annual International Computer Software and Applications
Conference (COMPSAC 2007) (Vol. 2, pp. 17–22).
2. Barriocanal, E. G., Urbán, M. Á. S., Cuevas, I. A., & Pérez, P. D., (2002). An experience
in integrating automated unit testing practices in an introductory programming course.
ACM SIGCSE Bulletin, 34(4), 125–128.
3. Basili, V. R., & Selby, R. W., (1987). Comparing the effectiveness of software testing
strategies. IEEE Transactions on Software Engineering, (12), 1278–1296.
4. Baumgärtel, P., Grundmann, P., Zeschke, T., Erko, A., Viefhaus, J., Schäfers, F., &
Schirmacher, H., (2019). RAY-UI: New features and extensions. In: AIP Conference
Proceedings (Vol. 2054, No. 1, p. 060034).
5. Bruzual, D., Montoya, F. M. L., & Di Francesco, M., (2020). Automated assessment
of Android exercises with cloud-native technologies. In: Proceedings of the 2020
20. Latorre, R., (2013). Effects of developer experience on learning and applying unit
test-driven development. IEEE Transactions on Software Engineering, 40(4), 381–395.
21. Mackinnon, T., Freeman, S., & Craig, P., (2000). Endo-testing: Unit testing with mock
objects. Extreme Programming Examined, 287–301.
22. Moe, M. M., (2019). Comparative study of test-driven development TDD, behavior-
driven development BDD and acceptance test–driven development ATDD. International
Journal of Trend in Scientific Research and Development, 3, 231–234.
23. Morse, S. F., & Anderson, C. L., (2004). Introducing application design and software
engineering principles in introductory cs courses: Model-view-controller java application
framework. Journal of Computing Sciences in Colleges, 20(2), 190–201.
24. Munir, H., Moayyed, M., & Petersen, K., (2014). Considering rigor and relevance
when evaluating test driven development: A systematic review. Information and
Software Technology, 56(4), 375–394.
25. Nagappan, N., Maximilien, E. M., Bhat, T., & Williams, L., (2008). Realizing quality
improvement through test driven development: Results and experiences of four
industrial teams. Empirical Software Engineering, 13, 289–302.
26. Olan, M., (2003). Unit testing: Test early, test often. Journal of Computing Sciences
in Colleges, 19(2), 319–328.
27. Pancur, M., Ciglaric, M., Trampus, M., & Vidmar, T., (2003). Towards empirical
evaluation of test-driven development in a university environment. In: The IEEE
Region 8 EUROCON 2003; Computer as a Tool (Vol. 2, pp. 83–86).
28. Patterson, A., Kölling, M., & Rosenberg, J., (2003). Introducing unit testing with
BlueJ. ACM SIGCSE Bulletin, 35(3), 11–15.
29. Schneider, S. A., Lang, A. E., Moro, E., Bader, B., Danek, A., & Bhatia, K. P., (2010).
Characteristic head drops and axial extension in advanced chorea‐acanthocytosis.
Movement Disorders, 25(10), 1487–1491.
30. Sherman, M., Bassil, S., Lipman, D., Tuck, N., & Martin, F., (2013). Impact of auto-
grading on an introductory computing course. Journal of Computing Sciences in
Colleges, 28(6), 69–75.
31. Spacco, J., & Pugh, W., (2006). Helping students appreciate test-driven development
(TDD). In: Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented
Programming Systems, Languages, and Applications (pp. 907–913).
32. Spacco, J., Strecker, J., Hovemeyer, D., & Pugh, W., (2005). Software repository mining
with marmoset: An automated programming project snapshot and testing system. In:
Proceedings of the 2005 International Workshop on Mining Software Repositories (pp. 1–5).
33. Sullivan, G. J., Topiwala, P. N., & Luthra, A., (2004). The H.264/AVC advanced
video coding standard: Overview and introduction to the fidelity range extensions.
Applications of Digital Image Processing XXVII, 5558, 454–474.
34. Warren, J., Rixner, S., Greiner, J., & Wong, S., (2014). Facilitating human interaction in
an online programming course. In: Proceedings of the 45th ACM Technical Symposium
on Computer Science Education (pp. 665–670).
35. Weikle, D. A., Lam, M. O., & Kirkpatrick, M. S., (2019). Automating systems course
unit and integration testing: Experience report. In: Proceedings of the 50th ACM
Technical Symposium on Computer Science Education (pp. 565–570).
CHAPTER 3
GOALS, SCOPE, AND HISTORY OF SOFTWARE TESTING
UNIT INTRODUCTION
Software testing has been around since the dawn of digital computing. Testing is an essential part of evaluating and determining software quality, and it should never be skipped. Testing typically consumes between 40% and 50% of development effort, and it demands even more work for systems that require higher levels of reliability. Testing is therefore an important component of software engineering. Fourth-generation programming languages (4GL), which speed up the implementation process, have further increased the proportion of effort spent on testing (Khan, 2010). Because of the growing work needed to maintain and improve existing systems, a large amount of testing will be required to verify those systems after any modifications are made. Even with advances in verification approaches and formal methods, a system still needs to be evaluated by testing before it is put into use. Testing remains the only effective way to assure the quality of non-trivial software systems, and it is one of the most complex yet poorly understood subfields of software engineering. Future trends suggest that testing, already a significant area of research in computer science, will become even more important (Baresi & Pezze, 2006).
This retrospective on 50 years of software testing technique research examines the growth of the field by tracking significant research findings that may have driven its development. It also evaluates the evolution of research paradigms by tracing the types of research questions and methodologies employed.
Learning Objectives
At the end of this chapter, readers will be able to:
• Understand the concept of software testing and its importance in software
development.
• Learn different types of software testing techniques such as black-box testing,
white-box testing, gray-box testing, regression testing, and acceptance testing.
• Gain an understanding of the advantages and disadvantages of each testing
technique and when to use them in software testing.
• Understand the process of creating test cases, test suites, and test plans, and
how to execute them.
• Learn about test-driven development (TDD) and behavior-driven development
(BDD) and how they are used in software testing.
• Understand the importance of test coverage, test automation, and continuous
integration in software testing.
• Learn about the tools and technologies used in software testing such as test
management tools, defect tracking tools, and performance testing tools.
Key Terms
• Redwine/Riddle Software Technology
• Software Testing Techniques
• Structural Techniques
• Testing Spectrum
• Testing Techniques Taxonomy
Testing techniques have changed along with the concept and goals of software testing. Before studying the history of testing approaches, let us quickly go through the progression of the concept of testing, using the testing process model of Gelperin and Hetzel (1988) (Graham & Dayton, 2002).
1. Phase I. The Debugging-Oriented Period (Before 1956): Testing Was Not Separated from Debugging
In 1950, Turing wrote what is regarded as the first renowned piece on program testing. The essay addresses the question: how can we tell whether a program is intelligent? If the requirement is to create such a program, this question is a particular case of "How would we know that the requirements are satisfied by the program?" Turing's operational test stipulated that a human interrogator (the tester) should not be able to tell the difference between the behavior of the program and that of a reference system (a person). This might be regarded as functional testing in an embryonic stage. Debugging, testing, and program checkout were still not distinct ideas (Eickelmann & Richardson, 1996).
2. Phase II. The Demonstration-Oriented Period (1957–78): Testing to Ensure that the Software Meets its Requirements
Testing, sometimes known as program checkout at the time, was not separated from debugging until 1957. Charles Baker observed in 1957 that "program checkout" was thought to have two objectives: "make sure the program works" and "make sure the program solves the problem." Although "making sure" was frequently interpreted as the testing aim of satisfying requirements, the latter objective was seen as testing's primary objective. Debugging and testing are two distinct steps, as seen in Figure 3.1. The difference between testing and debugging depended on how success was defined. Definitions from this period emphasize that testing aims to show correctness: "Thus, an ideal test only passes when a program is error-free."
The notion that software might be exhaustively tested also became popular in the 1970s. This prompted further studies that focused on path coverage testing. "Exhaustive testing was defined either in terms of program paths or a program's input domain," as stated in the 1975 work by Goodenough and Gerhart (Luo, 2001).
3. Phase III. The Destruction-Oriented Period (1979–82): Implementation Faults Detected by Testing
In 1979, The Art of Software Testing by Myers laid the groundwork for designing more efficient test techniques. It introduced the definition of software testing as "the process of executing a program to identify defects," and it stressed that finding errors greatly increases the utility of test cases. As in the demonstration-oriented stage, it is still possible to unintentionally choose test data with a low
Early in the 1990s, there was an uptick in demand for estimating and forecasting software system reliability. Several reliability models used before Jalote and colleagues' work in 1994 relied on functional testing strategies and made reliability predictions based on failure data gathered during testing. These models require data collection, calculation, expertise, and computation to interpret the results. By modeling a software system as a graph and assuming that a node's reliability is a function of how many times it is exercised during testing, the authors offered a novel method based on the coverage history of the program: the more times a node is executed during testing, the more reliable it is assumed to be. The reliabilities of the individual nodes are then used to model and calculate the reliability of the software system. With such a model, adding reliability computation to coverage analysis tools is simple, totally automating reliability estimation.
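To make the coverage-history idea concrete, the sketch below (an illustrative toy, not the model from the 1994 study) records how often each node of the program graph is executed during testing, lets each node's estimated reliability grow with its execution count, and multiplies the node estimates into a system estimate. All class names, method names, and constants are invented for the example.

import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch of coverage-history-based reliability estimation:
 * a node executed more often during testing is assumed to be more reliable,
 * and the system estimate is the product of the per-node estimates.
 * The growth model used here is a toy choice, not the published one.
 */
public class CoverageReliabilitySketch {

    // Execution counts per node, as a coverage tool might report them.
    private final Map<String, Integer> executionCounts = new HashMap<>();

    public void recordExecution(String nodeId) {
        executionCounts.merge(nodeId, 1, Integer::sum);
    }

    // Toy per-node estimate: reliability approaches 1 as the node is executed more often.
    private double nodeReliability(int executions) {
        double assumedFailureProbability = 0.10;   // illustrative constant, not from the study
        return 1.0 - Math.pow(assumedFailureProbability, executions + 1);
    }

    // System estimate: product of the node estimates (assumes independent nodes).
    public double systemReliability() {
        double reliability = 1.0;
        for (int count : executionCounts.values()) {
            reliability *= nodeReliability(count);
        }
        return reliability;
    }

    public static void main(String[] args) {
        CoverageReliabilitySketch model = new CoverageReliabilitySketch();
        model.recordExecution("parseInput");
        model.recordExecution("parseInput");
        model.recordExecution("computeTotal");
        System.out.printf("Estimated system reliability: %.4f%n", model.systemReliability());
    }
}

In a real coverage tool, the per-node estimate would come from the study's statistical model rather than the fixed constant used here.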
Results from both functional and structural testing methods were successful in 1997. The framework for probabilistic functional testing was put forth that year. In their formulation of the testing activity, Bernot and colleagues estimate the reliability of, and the confidence level in the accuracy of, the system being tested. Additionally, they describe how to create suitable distributions for the most prevalent data domains, including unions, intervals of integers, inductively defined sets, and Cartesian products (Boehm, 2006).
Another intriguing study from 1997 used formal architectural description, a rigorous and automatable method, for complex system integration testing. The authors propose modeling the system's desired behavior using the formal specification language CHAM. A graph representing all potential system behaviors, in terms of the interactions between its constituent parts, is derived and simplified. Much as control and data flow graphs are used in structural testing, an appropriate set of reduced graphs can be used to highlight the system's key architectural characteristics and to generate integration tests using a coverage method. The use of formal methods in testing procedures has been a trend since the late 1980s; this study is one example (Buffardi, 2020).
Many software developers have used commercial off-the-shelf (COTS) software and the unified modeling language (UML) since the late 1990s, so this development trend calls for proper testing methods for UML components. In their 2000 study, Hartmann and colleagues at Siemens integrated test generation and test execution technologies with industry-standard UML modeling tools such as Rational Rose to address the issue of testing components. The authors outline their method for modeling components and interactions, explain how test cases are created from these component models, and then walk through how to run them to check that the components behave as expected. The method is evaluated using examples in the Siemens TnT environment. Test cases are derived from StateCharts with annotations.
Figure 3.3. Technology maturation analysis of software testing techniques.
These studies were adopted as the traditional standards, and subsequent research followed them. Figure 3.3 shows that practically all of the significant theoretical structural testing research was published during this time. The transition from simple programs to huge systems challenged the whole software engineering community, yet it took the testing community about five years to adjust (Qvale, 2002).
The figure also shows that there was just one notable result for functional testing during this time. The cause is clear: functional testing is based on requirements and has only used heuristic criteria. Without the ability to express requirements effectively, strictly, and unambiguously, it is not easy to establish when and whether such criteria are satisfied. This served as one of the driving forces for the creation of implementation-based testing approaches, which had the advantage of being automatable and measurable. Thankfully, the work published at this time set a good direction for further studies because it shifted the focus from the low-level, simplistic input/output requirements that testers frequently employed to a higher level: the system's architecture. The question of evaluating "software" rather than a "system" was still of interest to scholars and practitioners throughout this time. Nonetheless, the entire field of software engineering had started to prepare for the transition from the stage of large-scale programming to a higher level (Penuel et al., 2020).
has benefited both functional and structural testing methods (Howitz et al., 2020).
Much testing research has been conducted on these systems due to the widespread development and use of object-oriented technologies, COTS software, and component-based systems. Early in the 1990s, the earliest OO testing experiments were published. Most components, such as classes and other elements, are tested with conventional functional and structural methodologies. Researchers have posed fresh challenges and offered solutions for examining components' relationships and inheritance. They employ structural and functional procedures, and combining the two techniques for testing complex systems has been a successful strategy (Ölveczky & Meseguer, 2004).
ACTIVITY 3.1: What do you know about the history of software testing?
SUMMARY
To assist engineers in creating high-quality systems, testing has been widely adopted. Testing methods have developed from ad hoc activities carried out by a small group of programmers into a disciplined area of study in software engineering. The development of testing methods has been successful but remains insufficient: there is strong pressure to produce higher-quality software at lower cost, and current methodologies are not adequate for this. We should conduct fundamental research that tackles the tough problems, develop tools and methods, and carry out empirical investigations, so that we may anticipate a major change in the way we test software.
To facilitate the transfer of these techniques into practice, researchers should demonstrate the efficacy of existing techniques on large industrial software. The research findings will be validated by their effective application in developing commercial software, which in turn will inspire further study. The widespread use of software and the rising expense of validating it will spur industry-research partnerships to develop innovative methods and speed their adoption. The invention of effective testing methods and tools to produce high-quality software will shortly be one of the most important research fields.
REVIEW QUESTIONS
1. What is the difference between white-box testing and black-box testing? Give an
example of a situation in which you would use each technique.
2. What is boundary value analysis? How can it help you design more effective
tests?
3. Explain the difference between functional testing and structural testing. What are
the advantages and disadvantages of each approach?
4. Describe the different types of test cases that you might use when testing software.
When would you use each type?
5. What is regression testing? Why is it important, and how do you perform it
effectively?
6. How do you measure the effectiveness of your testing? What metrics can you
use to evaluate your test results, and what do they tell you about the quality of
your software?
c. System testing
d. Acceptance testing
2. Which of the following is NOT a black-box testing technique?
a. Equivalence partitioning
b. Boundary value analysis
c. State transition testing
d. Statement coverage
3. Which of the following is a non-functional testing technique?
a. Regression testing
b. Load testing
c. User acceptance testing
d. Boundary value analysis
4. Which of the following is a white-box testing technique?
a. Equivalence partitioning
b. Boundary value analysis
c. Decision table testing
d. Statement coverage
5. Which of the following is a static testing technique?
a. Performance testing
b. Usability testing
c. Walkthrough
d. Exploratory testing
REFERENCES
1. Alagar, V. S., Periyasamy, K., & Periyasamy, K., (2011). Specification of Software
Systems (Vol. 1, pp. 105–128) London, UK: Springer.
2. Baresi, L., & Pezze, M., (2006). An introduction to software testing. Electronic Notes
in Theoretical Computer Science, 148(1), 89–111.
3. Beutel, R. A., (1991). Software engineering practices and the idea/expression
dichotomy: Can structured design methodologies define the scope of software
copyright. Jurimetrics J., 32, 1.
19. Fielding, A. J., (1989). Migration and urbanization in Western Europe since 1950.
The Geographical Journal, 155(1), 60–69.
20. Gelperin, D., & Hetzel, B., (1988). The growth of software testing. Communications
of the ACM, 31(6), 687–695.
21. Graham, M. H., & Dayton, P. K., (2002). On the evolution of ecological ideas:
Paradigms and scientific progress. Ecology, 83(6), 1481–1489.
22. Guo, J., Wang, Q., & Li, Y., (2021). Evaluation-oriented façade defects detection
using rule-based deep learning method. Automation in Construction, 131, 10–20.
23. Heckman, S., & Williams, L., (2008). On establishing a benchmark for evaluating
static analysis alert prioritization and classification techniques. In: Proceedings of
the Second ACM-IEEE International Symposium on Empirical Software Engineering
and Measurement (Vol. 1, pp. 41–50).
24. Hill, R., Halamish, E., Gordon, I. J., & Clark, M., (2013). The maturation of biodiversity
as a global social–ecological issue and implications for future biodiversity science
and policy. Futures, 46, 41–49.
25. Howitz, W. J., McKnelly, K. J., & Link, R. D., (2020). Developing and implementing
a specifications grading system in an organic chemistry laboratory course. Journal
of Chemical Education, 98(2), 385–394.
26. Hsia, P., Kung, D., & Sell, C., (1997). Software requirements and acceptance testing.
Annals of Software Engineering, 3(1), 291–317.
27. Ipate, F., & Holcombe, M., (1997). An integration testing method that is proved to
find all faults. International Journal of Computer Mathematics, 63(3, 4), 159–178.
28. Jones, B. F., Sthamer, H. H., & Eyres, D. E., (1996). Automatic structural testing
using genetic algorithms. Software Engineering Journal, 11(5), 299–306.
29. Khan, M. E., (2010). Different forms of software testing techniques for finding errors.
International Journal of Computer Science Issues (IJCSI), 7(3), 24.
30. Liu, S., & Chen, Y., (2008). A relation-based method combining functional and structural
testing for test case generation. Journal of Systems and Software, 81(2), 234–248.
31. Luo, L., (2001). Software Testing Techniques (Vol. 15232, No. 1–19, p. 19). Institute
for software research international Carnegie Mellon university Pittsburgh, PA.
32. Mariani, L., Pastore, F., & Pezze, M., (2010). Dynamic analysis for diagnosing
integration faults. IEEE Transactions on Software Engineering, 37(4), 486–508.
33. McKenzie, D., (1981). The variation of temperature with time and hydrocarbon
maturation in sedimentary basins formed by extension. Earth and Planetary Science
Letters, 55(1), 87–98.
34. Miller, T., (2012). Using dependency structures for prioritization of functional test
suites. IEEE Transactions on Software Engineering, 39(2), 258–275.
35. Mujumdar, A. S., (2004). Research and development in drying: Recent trends and
future prospects. Drying Technology, 22(1, 2), 1–26.
36. Neuderth, S., Jabs, B., & Schmidtke, A., (2009). Strategies for reducing test anxiety
and optimizing exam preparation in German university students: A prevention-oriented
pilot project of the University of Würzburg. Journal of Neural Transmission, 116,
785–790.
37. Ölveczky, P. C., & Meseguer, J., (2004). Specification and analysis of real-time systems
using real-time Maude. In: Fundamental Approaches to Software Engineering: 7th
International Conference, FASE 2004. Held as Part of the Joint European Conferences
on Theory and Practice of Software, ETAPS 2004, Barcelona, Spain, March 29-April
2, 2004; Proceedings 7 (Vol. 7, pp. 354–358). Springer Berlin Heidelberg.
38. Orso, A., & Rothermel, G., (2014). Software testing: A research travelogue (2000–
2014). In: Future of Software Engineering Proceedings (Vol. 1, pp. 117–132).
39. Penuel, W. R., Riedy, R., Barber, M. S., Peurach, D. J., LeBouef, W. A., & Clark,
T., (2020). Principles of collaborative education research with stakeholders: Toward
requirements for a new research and development infrastructure. Review of Educational
Research, 90(5), 627–674.
40. Pfleeger, S. L., (1999). Making change: Understanding software technology transfer.
The Journal of Systems & Software, 47(2), 111–124.
41. Qvale, T. U., (2002). A case of slow learning?: Recent trends in social partnership
in Norway with particular emphasis on workplace democracy. Concepts and
Transformation, 7(1), 31–55.
42. Riddle, W. E., (1984). The magic number eighteen plus or minus three: A study of
software technology maturation. ACM SIGSOFT Software Engineering Notes, 9(2),
21–37.
43. Rohlf, F. J., (1972). An empirical comparison of three ordination techniques in
numerical taxonomy. Systematic Zoology, 21(3), 271–280.
44. Rountev, A., Kagan, S., & Gibas, M., (2004). Static and dynamic analysis of call
chains in Java. In: Proceedings of the 2004 ACM SIGSOFT International Symposium
on Software Testing and Analysis (Vol. 1, pp. 1–11).
45. Saleh, H., Avdoshin, S., & Dzhonov, A., (2019). Platform for tracking donations of
charitable foundations based on blockchain technology. In: 2019 Actual Problems of
Systems and Software Engineering (APSSE) (Vol. 1, pp. 182–187). IEEE.
46. Sawant, A. A., Bari, P. H., & Chawan, P. M., (2012). Software testing techniques and
strategies. International Journal of Engineering Research and Applications (IJERA),
2(3), 980–986.
47. Schaffartzik, A., Mayer, A., Gingrich, S., Eisenmenger, N., Loy, C., & Krausmann,
F., (2014). The global metabolic transition: Regional patterns and trends of global
material flows, 1950–2010. Global Environmental Change, 26, 87–97.
48. Selby, R. W., (2007). Software Engineering: Barry W. Boehm’s Lifetime Contributions
to Software Development, Management, and Research (Vol. 69, pp. 1–15). John
Wiley & Sons.
49. Sharafi, Z., Soh, Z., & Guéhéneuc, Y. G., (2015). A systematic literature review
on the usage of eye-tracking in software engineering. Information and Software
Technology, 67, 79–107.
50. Shaw, M., (2002). What makes good research in software engineering? International
Journal on Software Tools for Technology Transfer, 4, 1–7.
51. Shi, M., (2010). Software functional testing from the perspective of business practice.
Computer and Information Science, 3(4), 49.
52. Shull, F., Basili, V., Carver, J., Maldonado, J. C., Travassos, G. H., Mendonça, M.,
& Fabbri, S., (2002). Replicating software engineering experiments: Addressing the
tacit knowledge problem. In: Proceedings International Symposium on Empirical
Software Engineering (Vol. 1, pp. 7–16). IEEE.
53. Sim, S. E., Easterbrook, S., & Holt, R. C., (2003). Using benchmarking to advance
research: A challenge to software engineering. In: 25th International Conference on
Software Engineering, 2003; Proceedings (Vol. 1, pp. 74–83). IEEE.
54. Sommerville, I., (2011). Software Engineering, 9/E (Vol. 1, No. 2, pp. 1–23). Pearson
Education India.
55. Stroebe, W., Mensink, W., Aarts, H., Schut, H., & Kruglanski, A. W., (2008). Why
dieters fail: Testing the goal conflict model of eating. Journal of Experimental Social
Psychology, 44(1), 26–36.
56. Thomas, T. K., (2015). Measuring community impact assessment for internal destination
performance evaluation in an exploring tourist destination. ATNA Journal of Tourism
Studies, 10(1), 53–71.
57. Vegas, S., Juristo, N., & Basili, V. R., (2009). Maturing software engineering knowledge
through classifications: A case study on unit testing techniques. IEEE Transactions
on Software Engineering, 35(4), 551–565.
58. Vyatkin, V., (2013). Software engineering in industrial automation: State-of-the-art
review. IEEE Transactions on Industrial Informatics, 9(3), 1234–1249.
59. Wirth, N., (2008). A brief history of software engineering. IEEE Annals of the History
of Computing, 30(3), 32–39.
60. Woods, S. S., Resnick, L. B., & Groen, G. J., (1975). An experimental test of five
process models for subtraction. Journal of Educational Psychology, 67(1), 17.
61. Zhang, M., Li, X., Zhang, L., & Khurshid, S., (2017). Boosting spectrum-based fault
localization using PageRank. In: Proceedings of the 26th ACM SIGSOFT International
Symposium on Software Testing and Analysis (Vol. 1, pp. 261–272).
CHAPTER 4
BEYOND UNIT TESTING
UNIT INTRODUCTION
At the beginning of this chapter, we look at several different software development life cycle models and discuss the implications each of them has for testing. We take a broad perspective and recognize three levels (unit, integration, and system) in terms of their symmetrical relationships within the waterfall model (Luo, 2001). This perspective has been reasonably successful over the course of several decades, and all of these levels continue to exist; however, the emergence of various life cycle models compels us to take a closer look at these testing perspectives. We start with the classic waterfall model, mostly because it is well known and serves as a reference model for more contemporary models. After that, we investigate variations on the waterfall paradigm, and we finish with some mainline agile variants (Jamil et al., 2016).
A significant change also occurs in the way we think. Our capacity to recognize test cases may be helped or hindered by how the item being tested is represented, which is why we are more concerned with how to represent the item being tested (Sawant et al., 2012). If you look at the papers presented at the most important conferences (whether professional or academic), you will see that the number of sessions on specification models and techniques is very close to the number of presentations on testing methodologies. Model-based testing (MBT) is the point at which software modeling and testing come together (Basili & Selby, 1987).
Learning Objectives
When they have finished reading this chapter, learners will be able to do the following:
• Understand the importance of software testing
• Differentiate between different testing techniques
• Understand the process of test case design
• Learn about different levels of testing
• Understand how to measure test coverage
Key Terms
• Agile Testing
• Beyond Unit Testing
• Scrum
• Traditional Waterfall Testing
• Waterfall Testing
iv. The model places a strong emphasis on analysis, almost to the exclusion of synthesis, which comes into play for the first time during the integration testing phase.
v. Given the constraints imposed by staffing levels, massive parallel development at the unit level might not be viable (Chandra, 2015).
vi. Most importantly, "perfect foresight" is necessary, since any errors or omissions made at the requirements level will propagate through the rest of the life cycle stages (Loc & Abedon, 2011).
The "omission" component gave the initial waterfall developers a lot of trouble. As a direct consequence, nearly all of the papers that addressed requirements specification urged that requirements be consistent, complete, and clear.
The majority of requirements definition methodologies make it impossible to establish consistency (decision trees are an exception to this rule), and the necessity of clarity is self-evident. The completeness aspect is particularly intriguing, given that all the successor life cycles begin with the presumption of incompleteness and rely on some form of iteration to eventually arrive at "completeness" (Liu et al., 2011).
The activities of each spiral cycle (analyze risk, create and test, and plan the next iteration) are repeated in an incremental fashion. The spiral grows larger with each succeeding stage of evolution (Lizcano, 2013).
The first approach to regression testing is to repeat the tests from the previous iteration. The second approach is to develop a smaller collection of test cases that focuses explicitly on discovering faults introduced by the change. Both are valid approaches to regression testing. In an environment that relies mostly on manual testing, repeating a whole set of preceding integration tests is undesirable, whereas in an environment that relies more on automation it is acceptable (Cook et al., 2011). The test case failure expectation during regression testing should be (or is) lower than during progression testing. As a general rule, regression tests should fail no more than 5% of the time when they are run after progression tests; during progression testing, this percentage may reach 20%. "Soap opera tests" is an intriguing name for a particular kind of manually performed regression test; the name refers to television soap operas, whose intricate storylines the long, involved regression tests resemble. Whereas a progression test case might fail for only a few reasons, a soap opera test case may fail for many different reasons. If a soap opera test case fails, it is obvious that additional targeted testing is needed to pinpoint where the error is.
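As a rough illustration of the second, selective approach, the JUnit 5 sketch below marks targeted regression tests with a @Tag so that a build can run only that subset after each iteration. The invoice class, the test names, and the Maven Surefire tag filter mentioned in the comment are assumptions made for the example, not part of the text.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical tests for an invoice module; only the tagged tests would run
// in a targeted regression pass (e.g. selected via a JUnit 5 tag filter such
// as Maven Surefire's <groups>regression</groups>).
class InvoiceRegressionTests {

    @Test
    @Tag("regression")
    void totalIsUnchangedForExistingOrders() {
        // Targeted regression check: behavior that earlier iterations already verified.
        assertEquals(150, Invoice.total(new int[] {100, 50}));
    }

    @Test
    void newDiscountRuleAppliesToLargeOrders() {
        // Progression test for the feature added in the current iteration.
        assertEquals(90, Invoice.totalWithDiscount(new int[] {100}, 0.10));
    }
}

// Minimal stand-in implementation so the example is self-contained.
class Invoice {
    static int total(int[] items) {
        int sum = 0;
        for (int item : items) sum += item;
        return sum;
    }

    static int totalWithDiscount(int[] items, double rate) {
        return (int) Math.round(total(items) * (1.0 - rate));
    }
}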
The naming conventions for the builds are what cause the differences in appearance between the three spin-off models. In incremental development, the rationale for distinct builds is typically to reduce the staffing profile, because incremental development works best with small teams. The phases of pure waterfall development, from detailed design all the way to unit testing, can require a significant increase in the number of people working on the project. Because many firms cannot manage such frequent staff fluctuations, the system is segmented into versions that can be serviced by the existing workforce. In evolutionary development, even though a build sequence is presumed, only the very first build is actually defined. Later builds are determined on this basis, often according to priorities stated by the customer or user, so the system evolves to match the changing requirements of the user. This hints at the customer-focused principle that underpins the agile methodology (Yuksek et al., 2006).
In the spiral model, which is a blend of rapid prototyping and evolutionary development, a build is specified first in terms of rapid prototyping and is then subjected to a go/no-go decision based on technology-related risk factors. As a result, we can see that it is impossible to maintain preliminary design as an essential stage in either the evolutionary model or the spiral model. Integration
The rapid prototyping process has no novel repercussions for integration testing, although it has some very interesting repercussions for system testing. Where exactly are the requirements? Is the standard represented by the most recent prototype? How can individual system test cases be traced back to the original prototype? One useful approach to such questions is to use the prototyping cycles as information-gathering activities and then produce a requirements specification in a more traditional way (Stocks & Carrington, 1996). Another alternative is to record what the client does when using the prototypes, characterize these actions as significant customer scenarios, and then use those scenarios as system test cases. They can be considered the ancestors of the user stories used in agile life cycles. The primary contribution of rapid prototyping is that it brings the operational (or behavioral) viewpoint into the requirements phase (Au & Paul, 1994). In most cases, requirements definition methodologies prioritize the system's structure over its behavior. This is unfortunate because the vast majority of customers do not care about structure, yet they do care about behavior (Pandey & Mehtre, 2014).
The idea of rapid prototyping has been extended with executable specifications (Figure 4.5). The requirements are described in a format that can be executed by a computer (such as Petri nets, finite state machines, or StateCharts). The client then runs the specification to observe how the system is supposed to behave and provides feedback, much as in the rapid prototyping paradigm. Complexity is either inherent to the executable models or can be introduced into them; compared with the fully fledged version of StateCharts, this is a significant understatement. Creating an executable model requires skill, and executing it requires an engine (Berzins & Yehudai, 1993). The executable specification approach works particularly well for event-driven systems, especially those in which events may arrive in many different orderings.
David Harel, the author of StateCharts, calls these kinds of systems "reactive" (Harel, 1988) because they respond to events in the outside world. As with rapid prototyping, the goal of an executable specification is to let the client experience the behaviors intended for them. A further similarity is that client feedback may require the executable models to be modified before they can be used. One ancillary benefit is that a decent engine for an operational model facilitates the capture of "interesting" system transactions, and converting them into genuine system test cases is frequently an almost completely mechanical process. When this is done carefully, system testing can be traced all the way back to the requirements (Gandhi & Robertson, 1992).
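The following minimal sketch illustrates the executable-specification idea for a reactive system: the intended behavior is written as a small finite state machine table, a trivial "engine" executes it, and every specified transition is turned into a candidate system test step almost mechanically. The state machine, its events, and all names are invented for illustration and are far simpler than full StateCharts.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy executable specification: a finite state machine describing the
 * intended behavior of a reactive system. Running it lets the customer
 * observe the specified behavior, and each transition in the table can be
 * turned into a system test step almost mechanically.
 */
public class ExecutableSpecSketch {

    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    public void addTransition(String fromState, String event, String toState) {
        transitions.computeIfAbsent(fromState, s -> new HashMap<>()).put(event, toState);
    }

    // The "engine": apply an event to a state as the specification dictates.
    public String fire(String state, String event) {
        Map<String, String> byEvent = transitions.get(state);
        if (byEvent == null || !byEvent.containsKey(event)) {
            return state; // unspecified events leave the state unchanged in this sketch
        }
        return byEvent.get(event);
    }

    // Derive one candidate test step per specified transition (transition coverage).
    public List<String> deriveTestSteps() {
        List<String> steps = new ArrayList<>();
        transitions.forEach((state, byEvent) ->
            byEvent.forEach((event, next) ->
                steps.add("In state '" + state + "', send '" + event
                        + "', expect state '" + next + "'")));
        return steps;
    }

    public static void main(String[] args) {
        ExecutableSpecSketch spec = new ExecutableSpecSketch();
        spec.addTransition("Idle", "insertCard", "AwaitingPin");
        spec.addTransition("AwaitingPin", "pinOk", "MainMenu");
        spec.addTransition("AwaitingPin", "pinBad", "Idle");
        spec.deriveTestSteps().forEach(System.out::println);
    }
}

The derived steps here only cover single transitions; a real engine would also capture longer "interesting" transaction sequences of the kind mentioned above.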
Figure 4.5. Executable specification.
Once again, this life cycle has no bearing on integration testing. One of the most significant distinctions between the two is that the specification document is more explicit than a prototype. More importantly, the generation of system test cases from an executable specification is frequently a mechanical process. Although constructing a workable specification requires more effort, this is partially compensated for by the reduced effort needed to produce system test cases. An additional key difference is that an intriguing sort of system-level structural testing can be carried out when system testing is based on an executable specification. Last but not least, the executable specification stage can be integrated into iterative life cycle models, as we saw with rapid prototyping (Heimdahl & Thompson, 2000).
Figure 4.6. Generic agile life cycle.
The extreme programming (XP) life cycle is presented in Figure 4.7. The fact that user stories drive both the system testing and the release plan demonstrates that it is clearly oriented towards the needs of the client. The release plan defines a series of iterations, each of which is supposed to result in the delivery of a functionally limited component (Beck, 1999). One of the things that sets XP apart from other methodologies is its emphasis on pair programming, in which two programmers collaborate closely and frequently use the same development machine and keyboard.
Figure 4.7. The extreme programming life cycle.
One person works directly with the code, while the other maintains a somewhat higher-level perspective. The two individuals are, in a way, carrying out an ongoing review. A number of parallels can be seen with the fundamental iterative life cycle depicted in Figure 4.7. The absence of an overarching preliminary design step is a key distinction between the two approaches. Why? Because XP (Kircher et al., 2001) entails working from the ground up. It is difficult to conceive of what might take place within the release plan phase if XP were in fact driven by a series of user stories (Paulk, 2001).
Figure 4.8. Test-driven development life cycle.
The second issue is that every developer is prone to making errors, which is a large part of the reason we test in the first place. Consider the following: what gives us reason to believe that a TDD developer (Nagappan et al., 2008) is flawless at conceiving the test cases that drive development? Worse still, later user stories may not match up with earlier ones. A further drawback of TDD is that there is no point in the entire life cycle at which a cross-check can be performed at the user story level (Zhang, 2004).
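To make this concern concrete, here is a minimal, hypothetical red-green TDD step in JUnit: the test is written first and fails until the production class is added, and if the expected value in the test encodes a misunderstanding of the user story, the code written to satisfy it will share the same mistake. The class, method, and discount rule are invented for the example.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 1 (red): the test is written before any production code exists,
// so it fails (or does not compile) until PriceCalculator is implemented.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountOverOneHundred() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(108.0, calculator.priceWithDiscount(120.0), 0.001);
    }
}

// Step 2 (green): the simplest code that makes the test pass.
// Note the caveat from the text: if the expected value above encodes a
// misunderstanding of the user story, test and code will be wrong together.
class PriceCalculator {
    double priceWithDiscount(double amount) {
        return amount > 100.0 ? amount * 0.90 : amount;
    }
}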
4.4. SCRUM
The agile life cycle known as Scrum is probably the one used most frequently. An overarching focus is placed on the individuals who make up the team and on their collaboration. The name comes from a tactic used in rugby in which the two opposing teams lock arms and try to "hook" the ball back onto their own side of the field. Because software development similarly requires well-coordinated cooperation, the process is called a scrum (Schwaber, 1997).
At first glance, Scrum as a development life cycle appears to consist mainly of "new names for old ideas," especially with regard to the jargon commonly used in Scrum. Roles, ceremonies, and artifacts are three examples (Srivastava et al., 2017). When people talk about Scrum roles, they are referring to the people participating in the project; ceremonies are simply meetings, and artifacts are work products. Scrum initiatives are led by people known as Scrum masters, who behave like traditional supervisors with less administrative power (Deemer et al., 2010). The Scrum team functions as a development team, while product owners take the place of traditional clients. This diagram is taken from the "official" Scrum literature published by the Scrum Alliance. Consider the various activities in light of the iterative life cycle depicted in Figure 4.9. The conventional iterations are renamed "sprints," and their duration ranges from two to four weeks (Sutherland, 2001). During a sprint, the Scrum team holds a daily stand-up meeting whose purpose is to review what happened the previous day and what tasks need to be completed next. After that, there is a quick burst of design, coding, and testing, followed at the end of the day by an integration of the work the team has completed. This is the agile part: a daily build that adds to the sprint-level product in a relatively short amount of time. The most notable aspects of the Scrum methodology, in contrast to the conventional approach to iterative development, are the distinctive terminology and the length of the iterations (Cristal et al., 2008).
Figure 4.10. The agile model-driven development life cycle.
The topic of discussion was "Is there any room for design in agile software development?" The majority of the responses to this question affirm the necessity of design in almost any agile cycle. In spite of all of this, it would appear that AMDD has no capacity for integration or system testing (Alfraihi et al., 2018).
ACTIVITY 2.1: What is the difference between waterfall testing and agile testing? Give a detailed presentation.
SUMMARY
In software development, testing is an essential part of the development process to
ensure that the software meets the required specifications and functions correctly. Unit
testing is the most fundamental type of testing that checks the smallest testable parts of
the software, usually individual functions or methods. However, more than unit testing is
needed to ensure the overall quality of the software. Testing beyond unit testing involves
a broader range of testing techniques, including integration testing, system testing,
acceptance testing, and performance testing. Integration testing checks the interaction
between different modules or components of the software, ensuring that they work
together seamlessly. System testing tests the entire system to ensure all components work
together as expected. Acceptance testing involves testing the software against the user’s
requirements to ensure it meets their needs. Performance testing checks the software’s
performance under various conditions, including high load and stress. Testing beyond unit
testing is essential to ensure that the software functions correctly as a whole and meets
the user’s requirements. While unit testing is crucial, it should be complemented by other
testing techniques to ensure the overall quality of the software.
REVIEW QUESTIONS
1. What are the main types of software testing discussed in this chapter, and how
do they differ from one another?
2. What common challenges or pitfalls are associated with software testing, and
how does the chapter recommend addressing them?
3. How does the chapter approach the issue of test coverage, and what strategies
are recommended for ensuring comprehensive testing?
4. What role does automation play in software testing, and how can it be effectively
integrated into the testing process?
5. How does the chapter address the issue of testing in complex or distributed
systems, and what strategies are recommended for ensuring reliable and accurate
testing in these environments?
6. What are the best practices recommended for managing the testing process and
ensuring testing is integrated effectively into the software development lifecycle?
REFERENCES
1. Adenowo, A. A., & Adenowo, B. A., (2013). Software engineering methodologies: A
review of the waterfall model and object-oriented approach. International Journal of
Scientific & Engineering Research, 4(7), 427–434.
2. Alfraihi, H., Lano, K., Kolahdouz-Rahimi, S., Sharbaf, M., & Haughton, H., (2018).
The impact of integrating agile software development and model-driven development:
A comparative case study. In: System Analysis and Modeling – Languages, Methods,
and Tools for Systems Engineering: 10th International Conference, SAM 2018,
Copenhagen, Denmark, October 15–16, 2018, Proceedings 10 (pp. 229–245).
3. Ambler, S. W., (2003). Agile model driven development is good enough. IEEE
Software, 20(5), 71–73.
4. Ambler, S. W., (2006). Agile model driven development (AMDD). In: Xootic Symposium
(Vol. 2006, p. 13).
5. Au, G., & Paul, R. J., (1994). Graphical simulation model specification based on
activity cycle diagrams. Computers & Industrial Engineering, 26(2), 295–306.
6. Barr, E. T., Harman, M., McMinn, P., Shahbaz, M., & Yoo, S., (2014). The oracle
problem in software testing: A survey. IEEE Transactions on Software Engineering,
41(5), 507–525.
7. Basili, V. R., & Selby, R. W., (1987). Comparing the effectiveness of software testing
strategies. IEEE Transactions on Software Engineering, (12), 1278–1296.
8. Beck, K., (1999). Embracing change with extreme programming. Computer, 32(10),
70–77.
9. Berzins, V., & Yehudai, A., (1993). Using transformations in specification-based
prototyping. IEEE Transactions on Software Engineering, 19(5), 436–452.
10. Bhuvaneswari, T., & Prabaharan, S., (2013). A survey on software development life
cycle models. International Journal of Computer Science and Mobile Computing,
2(5), 262–267.
11. Chandra, V., (2015). Comparison between various software development methodologies.
International Journal of Computer Applications, 131(9), 7–10.
12. Chikh, A., & Aldayel, M., (2012). A new traceable software requirements specification
based on IEEE 830. In: 2012 International Conference on Computer Systems and
Industrial Informatics (pp. 1–6).
13. Cook, G. A., Pandit, N. R., & Beaverstock, J. V., (2011). Cultural and economic
complementarities of spatial agglomeration in the British television broadcasting
industry: Some explorations. Environment and Planning A, 43(12), 2918–2933.
14. Cristal, M., Wildt, D., & Prikladnicki, R., (2008). Usage of scrum practices within
a global company. In: 2008 IEEE International Conference on Global Software
Engineering (pp. 222–226).
15. Curcio, K., Navarro, T., Malucelli, A., & Reinehr, S., (2018). Requirements engineering:
A systematic mapping study in agile software development. Journal of Systems and
Software, 139, 32–50.
16. Desai, C., Janzen, D., & Savage, K., (2008). A survey of evidence for test-driven
development in academia. ACM SIGCSE Bulletin, 40(2), 97–101.
17. Elghondakly, R., Moussa, S., & Badr, N., (2015). Waterfall and agile requirements-
based model for automated test cases generation. In: 2015 IEEE Seventh International
Conference on Intelligent Computing and Information Systems (ICICIS) (pp. 607–612).
18. Ereiz, Z., & Mušić, D., (2019). Scrum without a scrum master. In: 2019 IEEE
International Conference on Computer Science and Educational Informatization
(CSEI) (pp. 325–328).
19. Gandhi, M., & Robertson, E. L., (1992). A specification-based data model. In: Entity-
Relationship Approach—ER’92: 11th International Conference on the Entity-Relationship
33. Larman, C., & Basili, V. R., (2003). Iterative and incremental developments: A brief
history. Computer, 36(6), 47–56.
34. Lee, C. S., & Lee, K. W., (2021). The effectiveness of object-oriented-QR monopoly
in enhancing ice-breaking and education UX: A preliminary study. In: 29th International
Conference on Computers in Education (ICCE) (pp. 403–409).
35. Liu, W., Du, Z., Xiao, Y., Bader, D. A., & Xu, C., (2011). A waterfall model to
achieve energy efficient tasks mapping for large scale GPU clusters. In: 2011 IEEE
International Symposium on Parallel and Distributed Processing Workshops and
PHD Forum (pp. 82–92). IEEE.
36. Lizcano, A. S., (2013). Merging computational science and urban planning in the
information age: The use of location-based social media for urban analysis. In:
2013 13th International Conference on Computational Science and its Applications
(pp. 200–203).
37. Loc-Carrillo, C., & Abedon, S. T., (2011). Pros and cons of phage therapy.
Bacteriophage, 1(2), 111–114.
38. Luo, L., (2001). Software Testing Techniques (Vol. 15232, No. 1–19, p. 19). Institute
for software research international Carnegie Mellon university Pittsburgh, PA.
39. Matinnejad, R., (2011). Agile model driven development: An intelligent compromise. In:
2011 Ninth International Conference on Software Engineering Research, Management
and Applications (pp. 197–202).
40. Maximilien, E. M., & Williams, L., (2003). Assessing test-driven development at IBM.
In: 25th International Conference on Software Engineering, 2003; Proceedings (pp.
564–569).
41. Moreira, F., & Ferreira, M. J., (2016). Teaching and learning modeling and specification
based on mobile devices and cloud. In: 2016 11th Iberian Conference on Information
Systems and Technologies (CISTI) (pp. 1–6).
42. Nagappan, N., Maximilien, E. M., Bhat, T., & Williams, L., (2008). Realizing quality
improvement through test driven development: Results and experiences of four
industrial teams. Empirical Software Engineering, 13, 289–302.
43. Pandey, S. K., & Mehtre, B. M., (2014). A lifecycle-based approach for malware
analysis. In: 2014 Fourth International Conference on Communication Systems and
Network Technologies (pp. 767–771).
44. Patisaul, H. B., & Jefferson, W., (2010). The pros and cons of phytoestrogens.
Frontiers in Neuroendocrinology, 31(4), 400–419.
45. Paulk, M. C., (2001). Extreme programming from a CMM perspective. IEEE Software,
18(6), 19–26.
46. Petersen, K., Wohlin, C., & Baca, D., (2009). The waterfall model in large-scale
development. In: Product-Focused Software Process Improvement: 10th International
Conference, PROFES 2009, Oulu, Finland, June 15–17, 2009; Proceedings 10 (pp.
386–400).
47. Puleio, M., (2006). How not to do agile testing. In: AGILE 2006 (AGILE’06) (p. 7).
48. Putra, J., (2020). Data management system for thesis monitoring at STMIK IBBI
using B-model. In: 2020 3rd International Conference on Mechanical, Electronics,
Computer, and Industrial Technology (MECnIT) (pp. 365–369).
49. Razak, R. A., & Fahrurazi, F. R., (2011). Agile testing with selenium. In: 2011
Malaysian Conference in Software Engineering (pp. 217–219).
50. Reitzig, M., (2022). Flat fads or more? From a as in “agile” to z as in “Zi Zhu Jing
Ying Ti” In: Get Better at Flatter: A Guide to Shaping and Leading Organizations
with Less Hierarchy (pp. 173–193).
51. Richardson, D., O’Malley, O., & Tittle, C., (1989). Approaches to specification-based
testing. In: Proceedings of the ACM SIGSOFT’89 Third Symposium on Software
Testing, Analysis, and Verification (pp. 86–96).
52. Sawant, A. A., Bari, P. H., & Chawan, P. M., (2012). Software testing techniques and
strategies. International Journal of Engineering Research and Applications (IJERA),
2(3), 980–986.
53. Schwaber, K., (1997). Scrum development process. In: Business Object Design
and Implementation: OOPSLA’95 Workshop Proceedings 16 October 1995, Austin,
Texas (pp. 117–134).
54. Sharma, G., (2017). Pros and cons of different sampling techniques. International
Journal of Applied Research, 3(7), 749–752.
55. Sharma, S., & Hasteer, N., (2016). A comprehensive study on state of scrum
development. In: 2016 International Conference on Computing, Communication and
Automation (ICCCA) (pp. 867–872).
56. Shylesh, S., (2017). A study of software development life cycle process models. In:
National Conference on Reinventing Opportunities in Management, IT, and Social
Sciences (pp. 534–541).
57. Sinha, A., & Das, P., (2021). Agile methodology Vs. traditional waterfall SDLC: A case
study on quality assurance process in software industry. In: 2021 5th International
Conference on Electronics, Materials Engineering & Nano-Technology (IEMENTech)
(pp. 1–4).
58. Srivastava, A., Bhardwaj, S., & Saraswat, S., (2017). SCRUM model for agile
methodology. In: 2017 International Conference on Computing, Communication and
Automation (ICCCA) (pp. 864–869).
59. Stocks, P., & Carrington, D., (1996). A framework for specification-based testing.
IEEE Transactions on Software Engineering, 22(11), 777–793.
60. Stolberg, S., (2009). Enabling agile testing through continuous integration. In: 2009
Agile Conference (pp. 369–374).
61. Strand, R., (2009). Corporate responsibility in Scandinavian supply chains. Journal
of Business Ethics, 85, 179–185.
62. Streule, T., Miserini, N., Bartlomé, O., Klippel, M., & De Soto, B. G., (2016).
Implementation of scrum in the construction industry. Procedia Engineering, 164,
269–276.
63. Sumrell, M., (2007). From waterfall to agile-how does a QA team transition? In:
Agile 2007 (AGILE 2007) (pp. 291–295).
64. Sutherland, J., (2001). Inventing and reinventing scrum in five companies. Cutter
IT Journal, 14(21), 5–11.
65. Thummadi, B. V., Shiv, O., & Lyytinen, K., (2011). Enacted routines in agile and
waterfall processes. In: 2011 Agile Conference (pp. 67–76).
66. Tsai, B. Y., Stobart, S., Parrington, N., & Thompson, B., (1997). Iterative design
and testing within the software development life cycle. Software Quality Journal, 6,
295–310.
67. Tsai, W. T., Vishnuvajjala, R., & Zhang, D., (1999). Verification and validation of
knowledge-based systems. IEEE Transactions on Knowledge and Data Engineering,
11(1), 202–212.
68. Turhan, B., Layman, L., Diep, M., Erdogmus, H., & Shull, F., (2010). How effective
is test-driven development. Making Software: What Really Works, and Why We
Believe it, 207–217.
69. Williams, L., Maximilien, E. M., & Vouk, M., (2003). Test-driven development as a
defect-reduction practice. In: 14th International Symposium on Software Reliability
Engineering, 2003; ISSRE 2003 (pp. 34–45).
70. Yuksek, O., Komurcu, M. I., Yuksel, I., & Kaygusuz, K., (2006). The role of hydropower
in meeting Turkey’s electric energy demand. Energy Policy, 34(17), 3093–3103.
71. Zhang, Y., & Patel, S., (2010). Agile model-driven development in practice. IEEE
Software, 28(2), 84–91.
72. Zhang, Y., (2004). Test-driven modeling for model-driven development. IEEE Software,
21(5), 80–86.
CHAPTER 5
USER EXPERIENCE
UNIT INTRODUCTION
Despite the importance we place on usability, only a minority of products deliver on this front. There are various factors at play, including history, culture, organization, and finances, but exploring all of them would be too lengthy for this book. The good news is that there are established and trustworthy techniques for determining which aspects of a product's design improve its usability and which need to be tweaked for the product to compete successfully in the market (Hassenzahl & Tractinsky, 2006). Because usability is noticed mainly when it is missing, it can be difficult to define what makes something usable. One example of a usability paradigm shift that has been shown to boost sales is Apple's iPod. Imagine a customer attempting to purchase from your company's online store. They may have thoughts like these in response to the site: I haven't located the information I need. Okay, I think I've found what I'm looking for, but the price is a mystery. Is there any left in stock? Can it be shipped to where I currently am? Should I expect free shipping if I spend this much? Most online shoppers have run into problems like these at some point (Datig, 2015).
It is easy to criticize websites because there are so many of them, but there are countless other situations in which people are forced to deal with inconvenient products and services. How well-versed are you in your alarm clock, phone, and DVR? How simple is it to navigate the voice-activated menu of options when calling a company (Fronemann & Peissner, 2014)?
Learning Objectives
Key Terms
• Characteristics of Organizations
• Emulation and Measurement of Product
• Iterative Design
• Usability Testing
• Usable Meaning
• Usable Product
Let's spend some time discussing what usability is and what it entails before diving into the definition and exploration of usability testing. A product or service becomes more usable when it satisfies the user's needs and is easy to access, learn, and use. Whether a product is useful depends on whether the user is willing to use it to accomplish his or her goals. Without that motivation, all other efforts are pointless, because the product will remain unsold. Even if a user is willing to use a system for free, it will not be adopted if it does not fulfill their needs. Usefulness is probably the factor that gets the least attention during laboratory investigations (Voskobojnikov et al., 2021).
The marketing team is responsible for identifying the features of the product or system that are most important and desired before any other usability aspects are considered. Without this input, the development team will have to make educated guesses or, even worse, use themselves as the user model.
This is where a system-oriented approach to design typically takes root. Efficiency is how quickly a given task can be completed accurately and completely. For usability testing, you might establish a goal such as "95% of all users will be able to load the software within 10 minutes" (Mashapa & van Greunen, 2010).
The effectiveness of a product is measured by how well it fulfills its intended purpose and how easily its intended audience can use it. The error rate is the standard quantitative indicator of this. Like efficiency, the effectiveness goal of your usability test should be expressed in terms of a fraction of the total user base. The standard could read: "95% of all users will be able to load the software correctly on the first attempt."
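As a small illustration of stating usability goals in measurable terms such as the thresholds quoted above, the sketch below tallies task outcomes from a hypothetical test session and checks the completion rate against a 95% target; all names and numbers are invented for the example.

import java.util.List;

// Illustrative check of a quantitative usability goal such as
// "95% of all users will be able to load the software within 10 minutes."
public class UsabilityGoalSketch {

    record TaskResult(boolean completedCorrectly, double minutesTaken) {}

    // Fraction of participants who completed the task correctly within the time limit.
    static double successRate(List<TaskResult> results, double timeLimitMinutes) {
        long successes = results.stream()
                .filter(r -> r.completedCorrectly() && r.minutesTaken() <= timeLimitMinutes)
                .count();
        return results.isEmpty() ? 0.0 : (double) successes / results.size();
    }

    public static void main(String[] args) {
        List<TaskResult> session = List.of(
                new TaskResult(true, 6.5),
                new TaskResult(true, 9.0),
                new TaskResult(false, 10.0),
                new TaskResult(true, 12.0));   // completed, but over the time limit

        double rate = successRate(session, 10.0);
        System.out.printf("Success rate: %.0f%% (goal: 95%%)%n", rate * 100);
        System.out.println(rate >= 0.95 ? "Goal met" : "Goal not met");
    }
}

As the next paragraphs stress, such numbers indicate whether a goal was met, but interpreting why participants failed still requires qualitative analysis.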
The effectiveness of a system also includes its learnability, the degree to which a user can master its controls with a set amount of instruction over a set time. Infrequent users' ability to re-learn the system after long periods of inactivity also falls under this umbrella. By "satisfaction" we mean the information gleaned from interviews with customers, both written and verbal, about their experiences with the product. Users will do better with a product that caters to their requirements and makes them happy. Users are frequently asked to provide feedback on the products they test, which can help pinpoint issues and their root causes (Javahery et al., 2009). Usability goals and objectives are typically defined in measurable terms of one or more of these
traits. However, we must stress that the ability to generate numbers about
performance and satisfaction is never the sole determinant of a product’s usability.
Data can tell us whether a product “works,” but usability also has a
qualitative component that is difficult to quantify. The interpretation of the data
is crucial for finding a solution to a problem, as behavioral data reveals the
root cause of the issue. Blood pressure and pulse rate are two vitals that any
doctor can measure. The real value of a doctor comes from the time they
spend analyzing a patient’s data and coming up with individualized treatment
plans. To effectively treat a design problem, looking at the bigger picture and
evaluating the likelihood of various potential causes is often necessary rather
than relying solely on a few isolated data points. Small nuances exist that the
untrained eye misses (Matz, 2013).
Usability and accessibility are like twins. Regarding accessibility, we usually
mean the ease with which people can access the resources they need to
complete a task. What we mean by “accessibility” in this book, however, is
how easy it is for people with physical impairments to use a given product.
Making a product accessible to those with disabilities or those using it in unique
situations almost always benefits the general public. Designing with accessibility
for people with disabilities in mind can clarify and simplify things for people with
situational or temporary impairments. Luckily, you can get help from various
resources when creating accessible designs. The book’s accompanying website
features links to useful accessibility-related resources (Baird, 2015).
To incorporate usability testing and other forms of user feedback into
your company’s user-centered design (UCD) process, you should learn about
accessibility best practices. The larger field of UCD includes many methods
and techniques we will discuss in this chapter to make things more usable and
accessible. Experience design is a broader, more all-encompassing concept
that builds on the foundation of UCD. Even if a customer can complete their
purchase on your website, they may have questions about the logistics of
product delivery, upkeep, service, and possible return. How does your company
aid in pre-purchase investigation and selection? Experience design takes all of
these into account, so we must consider practicality again (Liu et al., 2013).
Real usability is unnoticeable. When things are going well, you don’t pay
attention to them. Assuming everyone is at a satisfactory temperature, there
will be no complaints. On the other hand, product usability occurs on a scale.
That being said, how practical is your offering? Although users can achieve
their goals, is there a way to make it even easier to use? Is there any point in
trying to enhance it? Usability experts typically focus on fixing existing designs
to reduce user frustration. You’ve set yourself a worthy objective (Tractinsky et
al., 2000). Achieving this goal for each customer can be challenging, though.
And the portion of the user’s experience it impacts while trying to achieve a
goal is minimal. It is impossible to quantify something like usability, even though
Figure 5.1. Bailey’s human performance model.
iii. Developers have traditionally been hired and compensated less for
their interpersonal “people” skills and more for their technical prowess
when tackling complex problems.
iv. The fact that designers historically designed for consumers much like
themselves is a major contributor to the neglect of human needs. There was no point
in taking the time to investigate such a well-known coworker. Thus,
we arrive at our next topic (Woods, 1985).
2. Reason 2: Enlargement and Change of the Target Audience
Because technology has moved so far into the hands of the average
consumer, the demographics of the intended users have shifted significantly.
Development organizations have been slow to respond to this change.
Enthusiasts (also known as early adopters) were the first people to use computer-
based products. These people had extensive experience with computers and
other mechanical devices, a love of learning new things, a penchant for
tinkering, and a sense of pride in their ability to diagnose and fix problems.
Those responsible for creating these items had a lot in common. Users and
creators of these systems were essentially the same people. Because of the
similarities, the team used a “next bench” design technique, which involves
creating products with the end user in mind, even if they are only physically
separated by one bench. As expected, this method was well-received, and
users reported few problems (Gengshen, 2003).
Why would they complain? An integral part of the product’s appeal for
enthusiast users was the challenge of getting it to work, and they took great
pride in their abilities to get products of this complexity up and running. As a
result, a “machine-oriented” or “system-oriented” approach to development was
generally accepted and became the norm (Braun, 2007).
However, all of that is different now in significant ways. Users typically
possess limited expertise with computers and mechanical devices, have little
tolerance for tinkering with their brand-new purchase, and have very different
expectations from those of the designer.
Even more importantly, modern users are not analogous to designers
regarding skill, aptitude, expectation, or any other factor that might be considered
during the design process. Companies that previously would have found Ph.D.
chemists using their products are now more likely to find high school graduates
in those roles. When the gap between the designer and the user is wide, it
is clear that the “next-bench” design approach fails as a viable strategy, and
businesses that use this approach, even unintentionally, will continue to make
products that are difficult to use (Petrovčič et al., 2018).
Nowadays, designers are typically trained professionals with degrees in
fields such as human-computer interaction, industrial design, human factors
engineering, computer science, or a combination of these. The days when the
Figure 5.2. Nonintegrated approach to product development.
than ever because of the growing number of less-tech-savvy users and their
increasingly high standards for usability. To use a computer metaphor, we have
shifted our attention from the system’s inner workings to the environment in
which the program is used (how it communicates). This shift in emphasis has
necessitated a reorientation in the abilities sought after by designers. In the
future, more importance will be placed on conceptualization and less on actual
implementation. Perhaps one day, coding expertise will not be required at all
stages of user interface design (Balogh, 2001).
The five reasons listed above are only the tip of the iceberg when explaining
the prevalence of hard-to-use products and systems. What’s more crucial is the
underlying theme shared by these issues and misconceptions: namely, that
too much focus has been placed on the product itself and not enough on what
the product is meant to accomplish for its users. It is not astonishing that the user
continues to receive too little consideration, especially in the heat of an increasingly rushed
and shortened development process. Designers often lose sight of the fact that
they are shaping the interaction between a product and a person rather than
the product itself. More importantly, designers of this relationship must ensure
that the human is free to concentrate on the task at hand and on achieving the
goal, rather than on how the task is accomplished. They are also responsible for
designing how each part of the product interacts with the others. This necessitates
a high level of coordination between the various groups responsible for the
product’s design and the people who will be using the product in their day-to-
day lives or at their places of employment. Practices that worked in the past cannot
simply be carried over to today’s user base and technology (Reynolds et al., 2005).
Methods and techniques exist that help designers shift their perspective and
the way they design products; chief among them is user-centered design (UCD), which starts
with the user’s wants and needs and works inward toward the product’s actual
implementation. Let’s delve deeper into the concept of UCD, as it is only
within this framework that usability testing makes sense and thrives (Rosenblum
& Ousterhout, 1992).
session, designers must receive training from expert interviewers. If this is not done,
the findings may be highly inaccurate (Herlocker & Konstan, 2001).
Figure 5.4. Questions and methods for answering them.
reviewing the major methods will give you a better idea of their place. It is
important to remember that the techniques presented here would typically be
used in the order presented during different stages of a product’s development
lifecycle (Smith et al., 2006).
5.5.4. Surveys
Many people’s opinions can be gathered about a product’s current or potential
features by conducting surveys. In contrast to the focus group, which can dig
deep into respondents’ motivations and attitudes, surveys use statistical
extrapolation from larger samples to draw conclusions about the population as a whole. One
of the most well-known ongoing surveys, the Nielsen ratings, is used to inform
multimillion-dollar business decisions for the entire country based on the opinions
of about 1500 individuals (Marsh, 1984). While surveys have a place at any
stage of the user lifecycle, they are most commonly used in the beginning to
gain insight into the user base. Surveys require crystal-clear language everyone
can read and comprehend, which is difficult without extensive testing and
planning. Again, observing users in action during a usability test is preferable
to asking them about what they do or have done (Jamsen & Corley, 2007).
5.5.5. Walk-Throughs
By visualizing a user’s path through an early concept or prototype of the
product, walk-throughs are used to investigate how a user might fare with
the product once you have a good idea of who your target users are and the
task goals they have. Typically, the designer in charge of the project leads the
team through simulated user tasks (sometimes even acting as the user), while
a second team member takes notes on any issues or concerns that
arise (Aliaga & Carlbom, 2001). IBM pioneered structured walk-throughs for
code reviews, wherein participants take on defined roles (such as moderator
and recorder) and adhere to strict rules (such as a maximum walk-through
duration of two hours). If possible, it is better to have a real user, perhaps
from a preferred client, take part in the walk-through rather than having the
designer stand in for the user (Smith-Jackson, 2004).
SUMMARY
This chapter provides an introduction to usability testing, which is the process of evaluating
how user-friendly a product or service is by observing how users interact with it. It covers
the benefits of usability testing, including improved user satisfaction, increased efficiency,
and reduced development costs. It also discusses the different types of usability testing,
such as heuristic evaluation, cognitive walkthroughs, and user testing, as well as the
various methods for collecting data during usability testing, including surveys, interviews,
and observation. It also highlights the importance of usability testing throughout the
product development process and the need for ongoing testing and evaluation to ensure
continued usability and user satisfaction.
REVIEW QUESTIONS
1. What is usability testing, and why is it important in developing digital products?
2. Describe the difference between formative and summative usability testing. When
is each type of testing most appropriate?
3. What are some common methods for collecting and analyzing data during usability
testing? How do you decide which methods to use?
4. Explain the concept of “think-aloud” protocols. How do they help researchers
understand user behavior?
5. What are common usability metrics used to evaluate user performance and
satisfaction? What are the strengths and weaknesses of each type of metric?
6. How do you recruit participants for a usability testing study? What are some
common challenges in participant recruitment, and how can they be addressed?
REFERENCES
1. Adriaens, F., & Biltereyst, D., (2012). Glocalized telenovelas and national identities:
A “textual cum production” analysis of the “telenovelle” Sara, the Flemish adaptation
of Yo soy Betty, la fea. Television & New Media, 13(6), 551–567.
2. Aliaga, D. G., & Carlbom, I., (2001). Plenoptic stitching: A scalable method for
reconstructing 3D interactive walk throughs. In: Proceedings of the 28th Annual
Conference on Computer Graphics and Interactive Techniques (Vol. 1, pp. 443–450).
3. Baird, C., (2015). Useful, Usable, Desirable: Applying User Experience Design to
Your Library: Aaron Schmidt and Amanda Etches 2014 (Vol. 168). Chicago, IL: ALA
Techsource, ISBN: 978-0-8389-1226-3.
4. Balogh, L., (2001). Design and application guide for high speed MOSFET gate drive
circuits. In: Power Supply Design Seminar SEM-1400, Topic (Vol. 2, pp. 1–9).
21. Hjelle, K. M., Skutle, O., Førland, O., & Alvsvåg, H., (2016). The reablement team’s
voice: A qualitative study of how an integrated multidisciplinary team experiences
participation in reablement. Journal of Multidisciplinary Healthcare, 1, 575–585.
22. Jacobs, D., & McDaniel, T., (2022). A survey of user experience in usable security
and privacy research. In: HCI for Cybersecurity, Privacy and Trust: 4th International
Conference, HCI-CPT 2022, Held as Part of the 24th HCI International Conference,
HCII 2022, Virtual Event, June 26–July 1, 2022, Proceedings (Vol. 1, pp. 154–172).
Cham: Springer International Publishing.
23. Jamsen, J., & Corley, K., (2007). E-survey methodology. In: Handbook of Research
on Electronic Surveys and Measurements (Vol. 1, pp. 1–8). IGI Global.
24. Javahery, H., Deichman, A., Seffah, A., & Taleb, M., (2009). A user-centered framework
for deriving a conceptual design from user experiences: Leveraging personas and
patterns to create usable designs. Human-Centered Software Engineering: Software
Engineering Models, Patterns and Architectures for HCI, 1(2), 53–81.
25. Jokela, T., (2002). Making user-centered design common sense: Striving for an
unambiguous and communicative UCD process model. In: Proceedings of the Second
Nordic Conference on Human-Computer Interaction (Vol. 1, pp. 19–26).
26. Karat, J., & Karat, C. M., (2003). The evolution of user-centered focus in the human-
computer interaction field. IBM Systems Journal, 42(4), 532–541.
27. Kensing, F., Simonsen, J., & Bodker, K., (1998). MUST: A method for participatory
design. Human-Computer Interaction, 13(2), 167–198.
28. Leontief, W., (1970). Environmental repercussions and the economic structure: An
input-output approach. The Review of Economics and Statistics, 1, 262–271.
29. Li, Y., Cao, X., Everitt, K., Dixon, M., & Landay, J. A., (2010). FrameWire: A tool for
automatically extracting interaction logic from paper prototyping tests. In: Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems (Vol. 1, pp.
503–512).
30. Liu, S., Zheng, X. S., Liu, G., Jian, J., & Peng, K., (2013). Beautiful, usable, and
popular: Good experience of interactive products for Chinese users. Science China
Information Sciences, 56, 1–14.
31. Marsh, C., (1984). Problems with surveys: Method or epistemology? Sociological
Research Methods: An Introduction, 1, 82–102.
32. Mashapa, J., & Van Greunen, D., (2010). User experience evaluation metrics for usable
accounting tools. In: Proceedings of the 2010 Annual Research Conference of the
South African Institute of Computer Scientists and Information Technologists (Vol.
1, pp. 170–181).
33. Matz, K., (2013). Designing Usable Apps: An Agile Approach to User Experience
Design (Vol. 1, pp. 1–10). Winchelsea Press (Winchelsea Systems Ltd).
34. Morton, G., Masters, J., & Cowburn, P. J., (2018). Multidisciplinary team approach
to heart failure management. Heart, 104(16), 1376–1382.
35. Oliver, R. L., & Bearden, W. O., (1985). Disconfirmation processes and consumer
evaluations in product usage. Journal of Business Research, 13(3), 235–246.
36. Onwuegbuzie, A. J., Dickinson, W. B., Leech, N. L., & Zoran, A. G., (2009). A
qualitative framework for collecting and analyzing data in focus group research.
International Journal of Qualitative Methods, 8(3), 1–21.
37. Paul, C. L., (2008). A modified Delphi approach to a new card sorting methodology.
Journal of Usability Studies, 4(1), 7–30.
38. Petrovčič, A., Rogelj, A., & Dolničar, V., (2018). Smart but not adapted enough:
Heuristic evaluation of smartphone launchers with an adapted interface and assistive
technologies for older adults. Computers in Human Behavior, 79, 123–136.
39. Pirolli, P., Pitkow, J., & Rao, R., (1996). Silk from a sow’s ear: Extracting usable
structures from the web. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (Vol. 1, pp. 118–125).
40. Preece, J., Nonnecke, B., & Andrews, D., (2004). The top five reasons for lurking:
Improving community experiences for everyone. Computers in Human Behavior,
20(2), 201–223.
41. Reynolds, P., Bosma, N., Autio, E., Hunt, S., De Bono, N., Servais, I., & Chin, N.,
(2005). Global entrepreneurship monitor: Data collection design and implementation
1998–2003. Small Business Economics, 24, 205–231.
42. Robertson, T., & Simonsen, J., (2012). Challenges and opportunities in contemporary
participatory design. Design Issues, 28(3), 3–9.
43. Rose, S. P., (2013). Making progress amid transition: Apprehension is reasonable
during times of change, but we’ve got to keep moving forward. Healthcare Financial
Management, 67(10), 26–27.
44. Rosenblum, M., & Ousterhout, J. K., (1992). The design and implementation of a
log-structured file system. ACM Transactions on Computer Systems (TOCS), 10(1),
26–52.
45. Sasse, M. A., Brostoff, S., & Weirich, D., (2001). Transforming the ‘weakest link’—A
human/computer interaction approach to usable and effective security. BT Technology
Journal, 19(3), 122–131.
46. Scariot, C. A., Heemann, A., & Padovani, S., (2012). Understanding the collaborative-
participatory design. Work, 41(Supplement 1), 2701–2705.
47. Sefelin, R., Tscheligi, M., & Giller, V., (2003). Paper prototyping-what is it good
for? A comparison of paper-and computer-based low-fidelity prototyping. In: CHI’03
Extended Abstracts on Human Factors in Computing Systems (Vol. 1, pp. 778, 779).
48. Signer, B., & Norrie, M. C., (2007). PaperPoint: A paper-based presentation and
interactive paper prototyping tool. In: Proceedings of the 1st International Conference
on Tangible and Embedded Interaction (Vol. 1, pp. 57–64).
49. Smith, A., Gulliksen, J., & Bannon, L., (2006). Building usability in India: Reflections
from the Indo-European systems usability partnership. In: People and Computers
XIX—The Bigger Picture: Proceedings of HCI 2005 (Vol. 1, pp. 219–232). Springer
London.
50. Smith, J. A., Hayes, C. E., Yolton, R. L., Rutledge, D. A., & Citek, K., (2002). Drug
recognition expert evaluations made using limited data. Forensic Science International,
130(2, 3), 167–173.
51. Smith-Jackson, T. L., (2004). Cognitive walk-through method (CWM). In: Handbook
of Human Factors and Ergonomics Methods (Vol. 1, pp. 785–793). CRC Press.
52. Sokol, M. B., (1994). Adaptation to difficult designs: Facilitating use of new technology.
Journal of Business and Psychology, 8, 277–296.
53. Taati, B., Snoek, J., & Mihailidis, A., (2013). Video analysis for identifying human
operation difficulties and faucet usability assessment. Neurocomputing, 100, 163–169.
54. Tanaka, M., (2003). Multidisciplinary team approach for elderly patients. Geriatrics
& Gerontology International, 3(2), 69–72.
55. Ter Bogt, T. F. M., & Engels, R. C. M. E., (2005). “Partying” hard: Party style,
motives for and effects of MDMA use at rave parties. Substance Use & Misuse,
40(9, 10), 1479–1502.
56. Timmer, M. P., Dietzenbacher, E., Los, B., Stehrer, R., & De Vries, G. J., (2015).
An illustrated user guide to the world input–output database: The case of global
automotive production. Review of International Economics, 23(3), 575–605.
57. Tractinsky, N., Katz, A. S., & Ikar, D., (2000). What is beautiful is usable. Interacting
with Computers, 13(2), 127–145.
58. Tsai, B. Y., Stobart, S., Parrington, N., & Thompson, B., (1997). Iterative design
and testing within the software development life cycle. Software Quality Journal, 6,
295–310.
59. Voskobojnikov, A., Wiese, O., Mehrabi, K. M., Roth, V., & Beznosov, K., (2021). The
U in crypto stands for usable: An empirical study of user experience with mobile
cryptocurrency wallets. In: Proceedings of the 2021 CHI Conference on Human
Factors in Computing Systems (Vol. 1, pp. 1–14).
60. Wan, Z., Xia, X., Lo, D., & Murphy, G. C., (2019). How does machine learning
change software development practices? IEEE Transactions on Software Engineering,
47(9), 1857–1871.
61. Woods, D. D., (1985). Cognitive technologies: The design of joint human-machine
cognitive systems. AI Magazine, 6(4), 86–87.
CHAPTER 6
USABILITY TESTING
UNIT INTRODUCTION
Usability testing is a term that is frequently and sometimes arbitrarily used to describe
any method used to assess a system or product. Frequently, it is clear from the speaker’s
context that they are referring to one of the other strategies (Lindgaard & Chattratichart,
2007).
In this book, the phrase “usability testing” refers to a procedure that employs
participants who are representative of the intended market to assess how well a product
satisfies specified usability requirements. Because it includes actual users, techniques such
as expert evaluations, walkthroughs, and the like, which do not involve actual users in the
process, are not classified here as usability testing (Sonderegger et al., 2016).
Usability testing is a method of inquiry that has its origins in the traditional experimental
approach. One can perform a wide variety of tests, from formal, classical experiments with
enormous sample sizes and intricate test designs to very casual, qualitative investigations
with just one subject (Wichansky, 2000). Each testing strategy has unique goals and needs
various amounts of time and resources. To get results quickly in commercial product
development environments, this book focuses on more casual, less sophisticated tests
(Markopoulos & Bekker, 2003).
Learning Objectives
At the end of this lesson, students will be able to:
• Understand the importance of usability testing in the design and development
process of a product or service.
• Learn about the different types of usability testing methods, including heuristic
evaluation, cognitive walkthrough, and user testing.
• Understand how to plan and conduct a usability test, including defining test goals,
selecting participants, and creating test scenarios.
• Learn how to collect and analyze usability testing data, including identifying
common usability issues and prioritizing them for resolution.
• Understand how to use usability testing results to inform design decisions and
improve the overall user experience of a product or service.
Key Terms
• Dynamic Characteristic
• Formative Study
• Goals of Testing
• Reliability
• Usability Testing
Figure 6.1. Usability testing throughout the product lifecycle.
6.5.1. When
The exploratory study is carried out quite early in the development cycle, when a product
is still being specified and designed (thus the reason it is
occasionally referred to as “formative”). The product’s user profile and usage
design (or problem identification) should have been established by this stage in
the development cycle. The project team is likely wrestling with the functional
requirements and early product prototypes. Perhaps the design stage is about to
start now that the requirements and specifications phase has been finished (Lee, 2011).
6.5.2. Objective
The exploratory study’s primary goal is to assess the viability of early design
concepts. It focuses on the high-level aspects of a user interface or its content,
as distinguished from its more detailed components (Qu & Furnas, 2008).
For instance, it would be extremely beneficial for Web application interface
designers to know from the outset whether the user intuitively understands the
interface’s core components. Designers may wish to know, for instance, how
effectively the interface:
Figure 6.2. Test monitor and participant exploring the product.
well a user can move down numerous menu layers. A horizontal display of
all main aspects and a vertical display of two of the functions might help you
accomplish both goals. The test of such a prototype would involve the user
attempting to do typical tasks. Alternatively, if it is too early to execute tasks,
the user may simply “walk through” or examine the product while being
guided by a test moderator and responding to questions. The user may even
be able to do both in some circumstances. The approach depends on
where you are in the development cycle and how sophisticated the
mockups are (Vulić et al., 2015).
An exploratory test’s procedure is typically highly informal and
resembles a partnership between the test participant and test moderator, with lots
of interaction between them. An examination of the user’s thought process is
essential because so much of the information you need to know is cognitive.
Together, the test moderator and participant may examine the product, with the
moderator either conducting a nearly continuous interview or urging the participant
to “think aloud” as much as possible. The test moderator and participant
can sit next to one another, unlike later tests when there is significantly less
interaction, as seen in Figure 6.2.
Ask participants for suggestions on how to make unclear parts clearer.
In contrast to later tests, when a greater emphasis is placed on quantifying
how well the user can do, here you aim to understand why the user behaves
as he or she does by gathering qualitative data.
The distinguishing characteristic of the exploratory test is its focus on
discussion and examination of high-level ideas and thought processes, which
helps to shape the final design. This is true regardless of whether you use
a working prototype, early manuals, or static screens, and regardless of whether
the user executes tasks or simply “walks through” the product with the test moderator
(Rouquerol et al., 2012).
6.6.1. When
Perhaps the most prevalent kind of usability test undertaken is
the assessment test. It is likely the easiest and most straightforward
test for a beginner usability specialist to create and carry out.
Early or midway through the product development cycle, generally,
after the foundational or high-level design or organization of the
product has been determined, assessment tests are undertaken
(Lam, 2013).
6.6.2. Objective
By assessing the usability of lower-level functions and features
of the product, the assessment test seeks to build on the results
of the exploratory test. Where the exploratory test works on the product’s
skeleton or framework, the assessment test starts to put meat on the bones
(Fancsali et al., 2018). This test looks at how well the concept has
been executed, assuming that the product’s fundamental conceptual model is
sound. Instead of just investigating a product’s intuitiveness, you want to know
how well a user can carry out complete, realistic tasks and to pinpoint any
product usability issues (Setiyana, 2016).
6.7.1. When
The validation test, also known as the verification test, is typically carried
out late in the development cycle and, as the name implies, is intended to
assess a product’s usability against established standards or, in the case
of a verification test, to ensure that earlier issues have been fixed and no new
ones have been introduced. The validation test often takes place much closer to
the product’s delivery than the preceding two tests, which occur during a very
active and ongoing design cycle (Mattocks et al., 2010).
6.7.2. Objective
The goal of the validating test is to assess how the product stacks up against
a preset benchmark or usability standard, such as a project-related quality
standard, an internal corporate standard from the past, or even a performance
standard set by a rival company (Monteiro et al., 2009). Before it is released, it
will be determined whether or not the product complies with this standard, and
if not, what the reason(s) are. The usability targets defined early in the project
are typically where the standards come from. They in turn derive from prior
usability testing, marketing research, user interviews, or just the development
team’s best predictions (Wallace & Fujii, 1989).
Usability goals are frequently expressed in terms of quality standards, such
as effectiveness and efficiency, which measure how successfully and quickly
the user can carry out specific activities and tasks. Alternatively, the goals
could be expressed in terms of user preferences, such as obtaining a specific
ranking or rating. The flavor of a verification test is slightly different. Here, it’s
important to make sure that any usability problems that were discovered during
earlier tests have been properly addressed and fixed (Balci, 1994).
It also makes perfect sense to use the validation test itself to establish company standards
for upcoming products; the same can be done with verification. For instance, if the
setup procedures for a software package can be completed in five minutes with no more
than one mistake, then future iterations of the product must perform at least as well.
Products can then be created with this standard as a goal, ensuring that usability is
maintained as new features are added in subsequent iterations (Balci, 1995).
Evaluation of the end-to-end performance of all a product’s elements is one
of the main goals of the validation test, often for the first time. For instance,
all the stages in a procedure or workflow, or the integration of documentation,
help, and software/hardware. It is impossible to exaggerate the value of an
integrated testing stage. It is not rare for components to fail to work well
together, because they are frequently built in relative isolation
from one another. An organization would be wise to learn this before
releasing anything because, from the user’s perspective, it is all one product
and is expected to function that way (Nayani & Mollaghasemi, 1998).
Another goal of the validation test—in fact, of any test carried out very late in the
development cycle—has come to be characterized in the business as “disaster or
catastrophe insurance.” The possibility of releasing a new product with
significant defects, or one that would need to be recalled, worries management
most at this late phase (Grace & Taghipour, 2004). Slipping the schedule
may be preferable to recalling the product or having to mail “fixes” to every user
if such a fault is found. If you can foresee a significant flaw in the product,
you always have an advantage, even if there is no time to fix it before
launch: the support personnel can be trained, a workaround can be created, and
even public relations responses can be prepared. Despite all these benefits, some
businesses would still prefer to remain unaware of product flaws (Sargent, 2010).
Benchmarks or standards for the test’s tasks are either created or discovered
before the actual test begins (Bekö et al., 2020). This can be as straightforward
as fixing the issues found in earlier experimental tests, or it can involve precise
error or time measures. The test moderator interacts with the participants either
very little or not at all while they are doing tasks (and participants are unlikely
to be asked to “think aloud”) (Ramos et al., 2013).

ACTIVITY 6.1: What is usability testing? Give a detailed demonstration.
The main objective is the gathering of statistical data, but the
causes of poor performance are also noted. If you are comparing
user performance to a standard, you should plan and decide how
the standard will be adhered to and what will happen if the product
does not reach the standard. For instance, if the requirement for
a task is “time to finish,” will you compare the standard to the
average score of all participants, or will you require that 70% of
participants achieve the requirement? What circumstances will
cause the product’s timetable to be changed? Would there be
enough time for the tasks that failed to satisfy the standard to be
retested? All of these issues need to be addressed and settled
before the test (Seifrtova et al., 2009).
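As a hypothetical illustration of the two ways of applying a “time to finish” standard discussed above—comparing the group average against the benchmark versus requiring that 70% of participants individually meet it—the following Python sketch uses made-up task times and a made-up five-minute benchmark.

# Minimal sketch: two ways of applying a "time to finish" standard.
# Task times, the 5-minute benchmark, and the 70% rule are illustrative assumptions.

completion_minutes = [3.5, 4.0, 6.5, 4.5, 5.5, 4.0, 7.0, 3.0]
benchmark_minutes = 5.0
required_share = 0.70  # 70% of participants must meet the benchmark

average = sum(completion_minutes) / len(completion_minutes)
share_meeting = sum(t <= benchmark_minutes for t in completion_minutes) / len(completion_minutes)

print(f"Average time {average:.2f} min vs benchmark {benchmark_minutes} min: "
      f"{'pass' if average <= benchmark_minutes else 'fail'}")
print(f"{share_meeting:.0%} of participants met the benchmark "
      f"(required {required_share:.0%}): {'pass' if share_meeting >= required_share else 'fail'}")

Note that the same data can pass one criterion and fail the other, which is exactly why the choice must be settled before the test.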
Since you’re making significant quantitative judgments about
the product, a validation test places a greater emphasis on
experimental rigor and consistency than an assessment test would.
Ensure that the design team members contributed to and were
on board with the creation of the test standards; that way, they
won’t feel that the standards are excessively high or
unattainable (Landers et al., 2017).
SUMMARY
Usability testing is a technique used to evaluate a product or service by testing it with
representative users. The chapter on usability testing discusses the importance of usability
testing in the product development process and how it can be used to identify usability
issues and improve user satisfaction. The chapter outlines the steps involved in conducting
a usability test, including setting goals, defining tasks, recruiting participants, conducting
the test, and analyzing the results. The chapter also provides guidance on choosing the appropriate
testing method, such as remote testing or in-person testing, and the importance of
considering accessibility and inclusivity in the testing process. The chapter emphasizes
the importance of iterating and improving the product based on the feedback obtained
from usability testing. Finally, the chapter provides tips on communicating usability testing
results to stakeholders and incorporating the findings into the product development process.
REVIEW QUESTIONS
1. What is usability testing, and why is it important in the design process?
2. What are the different types of usability testing methods available, and how do
they differ?
3. How do you select participants for usability testing, and what criteria should you
use?
4. What common usability issues can arise during testing, and how can they be
addressed?
5. How do you analyze and interpret usability testing data, and what are some
common metrics used?
6. What are some best practices for conducting effective and efficient usability testing
sessions?
REFERENCES
1. Al Kilani, M., & Kobziev, V., (2016). An overview of research methodology in
information system (IS). Open Access Library Journal, 3(11), 1–9.
2. AL‐Omar, H., & AL‐Mutairi, A., (2008). Bank‐specific determinants of profitability: The
case of Kuwait. Journal of Economic and Administrative Sciences, 24(2), 20–34.
3. Balci, O., (1994). Validation, verification, and testing techniques throughout the life
cycle of a simulation study. In: Proceedings of Winter Simulation Conference (pp.
215–220).
4. Balci, O., (1995). Principles and techniques of simulation validation, verification, and
testing. In: Proceedings of the 27th Conference on Winter Simulation (pp. 147–154).
5. Bekö, G., Wargocki, P., Wang, N., Li, M., Weschler, C. J., Morrison, G., & Williams,
J., (2020). The indoor chemical human emissions and reactivity (ICHEAR) project:
Overview of experimental methodology and preliminary results. Indoor Air, 30(6),
1213–1228.
6. Bielaczyc, K., (2013). Informing design research: Learning from teachers’ designs
of social infrastructure. Journal of the Learning Sciences, 22(2), 258–311.
7. Cheng, M., & Foley, C., (2018). Understanding the distinctiveness of Chinese post-
80s tourists through an exploration of their formative experiences. Current Issues
in Tourism, 21(11), 1312–1328.
8. Chivanga, S. Y., & Monyai, P. B., (2021). Back to basics: Qualitative research
methodology for beginners. Journal of Critical Reviews, 8(2), 11–17.
9. Collins, D., (2003). Pretesting survey instruments: An overview of cognitive methods.
Quality of Life Research, 12, 229–238.
10. Díez-Mediavilla, M., Alonso-Tristán, C., Rodríguez-Amigo, M. D. C., García-Calderón,
T., & Dieste-Velasco, M. I., (2012). Performance analysis of PV plants: Optimization
for improving profitability. Energy Conversion and Management, 54(1), 17–23.
11. Doolen, T. L., & Hacker, M. E., (2005). A review of lean assessment in organizations:
An exploratory study of lean practices by electronics manufacturers. Journal of
Manufacturing Systems, 24(1), 55–67.
12. Fancsali, S. E., Zheng, G., Tan, Y., Ritter, S., Berman, S. R., & Galyardt, A., (2018).
Using embedded formative assessment to predict state summative test scores. In:
Proceedings of the 8th International Conference on Learning Analytics and Knowledge
(pp. 161–170).
13. Galbreath, J., (2005). Which resources matter the most to firm success? An exploratory
study of resource-based theory. Technovation, 25(9), 979–987.
14. Garousi, V., Mesbah, A., Betin-Can, A., & Mirshokraie, S., (2013). A systematic
mapping study of web application testing. Information and Software Technology,
55(8), 1374–1396.
15. Grace, J. R., & Taghipour, F., (2004). Verification and validation of CFD models and
dynamic similarity for fluidized beds. Powder Technology, 139(2), 99–110.
16. Green, F., (2019). An exploration into the value of formative assessment and the
barriers associated with the implementation of formative strategies. In: K. W. M., K.
B. A., D. W., & L., (eds.), Rethinking Teacher Education for the 21st Century: Trends,
Challenges and New Directions (pp. 203–222).
17. Hallingberg, B., Turley, R., Segrott, J., Wight, D., Craig, P., Moore, L., & Moore, G.,
(2018). Exploratory studies to decide whether and how to proceed with full-scale
evaluations of public health interventions: A systematic review of guidance. Pilot and
Feasibility Studies, 4, 1–12.
18. Holloway, K., & McConigley, R., (2009). Descriptive, exploratory study of the role of
nursing assistants in Australian residential aged care facilities: The example of pain
management. Australasian Journal on Ageing, 28(2), 70–74.
19. Karna, S. K., & Sahai, R., (2012). An overview on Taguchi method. International
Journal of Engineering and Mathematical Sciences, 1(1), 1–7.
20. Kennel, K. A., Drake, M. T., & Hurley, D. L., (2010). Vitamin D deficiency in adults:
When to test and how to treat. In: Mayo Clinic Proceedings (Vol. 85, No. 8, pp.
752–758).
21. Klein, J., Moon, Y., & Picard, R. W., (1999). This computer responds to user
frustration. In: CHI’99 Extended Abstracts on Human Factors in Computing Systems
(pp. 242, 243).
22. Lam, R., (2013). Formative use of summative tests: Using test preparation to promote
performance and self-regulation. The Asia-Pacific Education Researcher, 22, 69–78.
23. Landers, T., Davis, J., Crist, K., & Malik, C., (2017). APIC MegaSurvey: Methodology
and overview. American Journal of Infection Control, 45(6), 584–588.
24. Latten, J. E., (1998). A scheduling-conflict resolution model: Are you frustrated by
scheduling conflicts that take your students away from your performance ensembles?
Read on for ways to possibly eliminate these problems. Music Educators Journal,
84(6), 22–26.
25. LaViolette, P. A., (1985). An introduction to subquantum kinetics: I. An overview of
the methodology. International Journal of General System, 11(4), 281–293.
26. Lee, I., (2011). Formative assessment in EFL writing: An exploratory case study.
Changing English, 18(1), 99–111.
27. LeSage, J. P., & Pace, R. K., (2011). Pitfalls in higher order model extensions of
basic spatial regression methodology. Review of Regional Studies, 41(1), 13–26.
28. Lindgaard, G., & Chattratichart, J., (2007). Usability testing: What have we overlooked?
In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(pp. 1415–1424).
29. Markopoulos, P., & Bekker, M., (2003). On the assessment of usability testing
methods for children. Interacting with Computers, 15(2), 227–243.
30. Mather, L. E., & Tucker, G. T., (1974). Meperidine and other basic drugs: General
method for their determination in plasma. Journal of Pharmaceutical Sciences, 63(2),
306–307.
31. Mattocks, C. J., Morris, M. A., Matthijs, G., Swinnen, E., Corveleyn, A., Dequeker,
E., & Wallace, A., (2010). A standardized framework for the validation and verification
of clinical molecular genetic tests. European Journal of Human Genetics, 18(12),
1276–1288.
32. McCann, J. C., & Ames, B. N., (2005). Is docosahexaenoic acid, an n− 3 long-chain
polyunsaturated fatty acid, required for development of normal brain function? An
overview of evidence from cognitive and behavioral tests in humans and animals.
The American Journal of Clinical Nutrition, 82(2), 281–295.
33. Monteiro, P., Machado, R. J., & Kazman, R., (2009). Inception of software validation
and verification practices within CMMI Level 2. In: 2009 Fourth International Conference
on Software Engineering Advances (pp. 536–541).
34. Morykwas, M. J., Argenta, L. C., Shelton-Brown, E. I., & McGuirt, W., (1997).
Vacuum-assisted closure: A new method for wound control and treatment: Animal
studies and basic foundation. Annals of Plastic Surgery, 38(6), 553–562.
35. Nayani, N., & Mollaghasemi, M., (1998). Validation and verification of the simulation
50. Teixeira, A., (2008). Basic composition: Rapid methodologies. Handbook of Muscle
Foods Analysis, 291–314.
51. Vulić, I., De Smet, W., Tang, J., & Moens, M. F., (2015). Probabilistic topic modeling
in multilingual settings: An overview of its methodology and applications. Information
Processing & Management, 51(1), 111–147.
52. Wahl, N. J., (2000). Student-run usability testing. In: Thirteenth Conference on
Software Engineering Education and Training (pp. 123–131).
53. Wallace, D. R., & Fujii, R. U., (1989). Software verification and validation: An overview.
IEEE Software, 6(3), 10–17.
54. Ward, J. L., & Hiller, S., (2005). Usability testing, interface design, and portals.
Journal of Library Administration, 43(1, 2), 155–171.
55. Wharton, C. M., Cheng, P. W., & Wickens, T. D., (1993). Hypothesis-testing strategies:
Why two goals are better than one. The Quarterly Journal of Experimental Psychology,
46(4), 743–758.
56. Whetzel, D. L., & McDaniel, M. A., (2009). Situational judgment tests: An overview
of current research. Human Resource Management Review, 19(3), 188–202.
57. Wichansky, A. M., (2000). Usability testing in 2000 and beyond. Ergonomics, 43(7),
998–1006.
58. Williams, D. R., & Rast, P., (2020). Back to the basics: Rethinking partial correlation
network methodology. British Journal of Mathematical and Statistical Psychology,
73(2), 187–212.
CHAPTER 7
THE PROCESS OF CONDUCTING A TEST
UNIT INTRODUCTION
The test plan serves as the foundation for the entire test. It covers the who,
what, when, where, why, and how of your usability test.
to fall into the habit of skipping the step of laying out a comprehensive test strategy
when working under the often unyielding time constraint imposed by project deadlines.
It’s possible that you already have a pretty solid concept of what it is you want to put to
the test in your head, and because of this, you opt against writing it down. You should
not take such a casual approach; it is a mistake that will inevitably come back to haunt
you (Pawliuk et al., 2021).
Learning Objectives
At the end of this chapter, readers will be able to:
• Understanding the importance of testing in the software development process
and how it contributes to software quality.
• Learning the different types of testing, such as functional, performance, security,
etc., and their purposes.
• Understanding the testing life cycle and the different stages involved in the testing
process, such as test planning, test design, test execution, and test reporting.
150 Software Testing and User Experience
• Learning how to define test objectives, test cases, and test scenarios that align
with the software requirements.
• Understanding the importance of test automation and how to automate testing
processes using tools and frameworks.
• Learning how to track defects and issues during testing and report them effectively.
Key Terms
• Focal Point
• Major Vehicle
• Test Blueprint
• Test Plan
• Test Plan Parts
You use the test plan to obtain buy-in and input from other team members so that everyone
agrees with what will happen. Because projects are continually evolving from
week to week and day to day, you don’t want a colleague to claim after the test
that their specific objectives were not addressed (Nettleton et al., 2006).
Everyone directly affected by the test results should review the plan, especially
when your organization is just beginning to test. This is both economically and
politically sensible (Lin et al., 2018).
iii. The third week of each month is when the testing meeting rooms are
open (so is the cafeteria every evening).
iv. Lou attended the most recent ACM SIGCHI conference (Association for
Computing Machinery Special Interest Group on Computer-Human
Interaction), where he discovered this extremely cool testing method
(let Lou first explain the method’s advantages to the company).
v. You want to determine whether the market needs this product (backward
logic; a focus group or survey is a more appropriate technique early
on) (Huberty, 2009).
If you are eager to start usability testing, you might think, “I don’t care why
we test as long as we test. The repercussions can be dealt with later.” And indeed,
none of the factors mentioned above causes problems in the short run. But
if you want testing to be a fundamental component of how your company
creates products in the long run, you must link testing to the product’s requirements
and the organization’s overarching business needs. If not, you risk your testing
becoming another passing trend or one of the newest techniques that change
with the seasons (Mill et al., 1992).
Figure 7.1. An example of research questions from a usability test of a hotel reservations website.
As before, you’ll rebalance the order in which the versions are presented
to account for these potential differences. Based on the data in the table
above, some of the eight participants will start with Version A, and others will
start with Version B. The equal number of iterations of each version in both
the first and last positions nullifies the potential for bias (Singh et al., 2017).
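A minimal sketch (not from the text) of this kind of counterbalancing for two versions and eight participants: alternating which version each participant sees first so that Version A and Version B each appear in the first (and last) position four times. The participant IDs are illustrative.

# Minimal sketch: counterbalancing presentation order across eight participants.
# Each version appears first (and last) an equal number of times.

participants = [f"P{i}" for i in range(1, 9)]

orders = []
for i, p in enumerate(participants):
    # Alternate which version is seen first so that A and B each lead 4 times.
    first, second = ("Version A", "Version B") if i % 2 == 0 else ("Version B", "Version A")
    orders.append((p, first, second))

for p, first, second in orders:
    print(f"{p}: {first} then {second}")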
managers and clerks. During your testing, you should look for discrepancies in
product mastery between and among various user demographics. You’re looking
for newcomers and seasoned veterans to each group and any differences
between them. So, you’ll need to have a range of job types and levels within
each. Again, you will use a matrix layout (like the one below) (Lee & Lee,
2018). Different people will fill the four conditions in the table. There
should be at least 16 people involved if you recruit at the shown rate
of 4 people per cell. However, such a design cannot be used if you cannot
afford that many subjects (about four subjects per cell is the minimum needed to
evaluate group differences). Instead, you’ll need to conduct a simpler study with
fewer cells or fewer participants overall. Remember that if fewer
than four people are in a cell, it will be difficult to draw meaningful conclusions
about that group as a whole. You’ll likely have to ignore potential group differences to make
your research manageable (McHugh, 2011).
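The recruiting arithmetic described above can be laid out explicitly. The sketch below uses the example groups mentioned in the text (managers and clerks, newcomers and seasoned veterans) and the four-participants-per-cell minimum, giving 16 participants in total.

# Minimal sketch: a 2 x 2 recruiting matrix at four participants per cell.
# Group labels follow the text's example; per-cell count is the stated minimum.

from itertools import product

job_types = ["manager", "clerk"]
experience_levels = ["newcomer", "seasoned veteran"]
per_cell = 4

cells = {combo: per_cell for combo in product(job_types, experience_levels)}

for (job, experience), n in cells.items():
    print(f"{job:8s} x {experience:16s}: {n} participants")
print("Total participants needed:", sum(cells.values()))  # 16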
only if these materials or simulating the machine state yourself as the test
moderator. For instance, if you were conducting user testing for a website
before the screens were coded or prototyped, you might have given out printed
wireframe drawings of the pages. Alternatively, you (or the participant) could
open the file on the screen at the appropriate time if the page was already on
the computer but not yet connected as part of a working prototype (Nakajima
et al., 2018).
Some tests may be documented while others aren’t, or vice versa. Tests of
wireless network installation instructions should account for the fact that some
steps—such as assigning drives to computers on the new network—will not
require written instructions. When applicable and helpful, your task list may
also indicate which product components are exercised by each task.
For instance, if a task requires the user to enter a customer name into a web
form, you may want to outline the sequence of screens or web pages the user
must visit to complete the task. This information makes it easier to see whether
or not the entire system is being exercised.
Success criteria can be set in several ways, such as when the participant
reaches a predetermined point in the task or screen flow, when they reach
the desired endpoint (even if they make mistakes getting there), or when they
reach the specified number of errors or wrong turns (Butler & Cartier, 2004).
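As a hedged illustration, the alternative success criteria just listed could be encoded as simple checks like the ones below. The screen names and the error limit are hypothetical, and treating an error count as a ceiling for success (rather than a failure trigger) is an assumption made here for clarity; which criterion applies is a per-task decision.

# Minimal sketch: the alternative success criteria for a single task.
# Screen names and the error limit are illustrative assumptions.

def reached_checkpoint(screens: list[str], checkpoint: str = "payment_page") -> bool:
    """Success = participant reached a predetermined point in the screen flow."""
    return checkpoint in screens

def reached_endpoint(screens: list[str], endpoint: str = "order_confirmation") -> bool:
    """Success = participant reached the desired endpoint, mistakes allowed."""
    return endpoint in screens

def within_error_limit(errors: int, max_errors: int = 3) -> bool:
    """Success = participant stayed within the allowed number of errors or wrong turns."""
    return errors <= max_errors

screens = ["home", "search", "product_page", "cart", "payment_page", "order_confirmation"]
print(reached_checkpoint(screens), reached_endpoint(screens), within_error_limit(errors=2))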
about the users’ ability to comprehend the label: “Have the participants explain
the significance of the XYZ label to you once you’ve shown them the label.”
To put it another way, the test moderator will receive feedback
regarding the label. Because the label is the component of the product that
is in question here, this appears to be a straightforward and simple solution.
However, this view oversimplifies the situation. A simple analysis
shows that there are basically three separate processes involved in
using the label in the appropriate manner (Goldberg et al., 2002):
i. Noticing the label
ii. Reading the label
iii. Processing the information and responding correctly
In addition, all three procedures take place within the extremely particular
context of making use of the website in order to upload photos to the internet:
i. If all you do is show the participant the label, then the only processes
you’re addressing are the second and third ones. You will have no way
of knowing whether or not the participants observe the label, which
comes before the other behaviors. You will also make the entirety of
the context irrelevant. During the course of browsing the website, the
participants will be required to carry out a specific task or tasks at the
time when they are meant to be reading the label. Instead of having
someone draw their attention to the label and inquire about their
thoughts, they will be performing the task themselves. This “context” is
essential since it has a significant impact on the individuals’ capabilities
to process the information (Buscher et al., 2009).
ii. You should also examine how the position of the label on the web page
affects things. If it is located among five other labels and other activities,
you should evaluate how well the participants do despite the presence
of these possible distractions.
After conducting research into the application, environment, and location
of labels, you are aware that it is not sufficient to only ask the participants to
describe the meaning of the label. Instead, you need to give them a task in
which they’re expected to utilize the label, and then you need to determine
whether or not they notice the label, read it, and use it in the correct manner
(Czaja et al., 2013).
plan. Because of time restrictions, it is quite rare that you can test all of the
many tasks that make up an entire documentation set, interface, or both
at the same time. Unless you are prepared to spend an excessive amount of
resources, it is not viable to run testing sessions that extend for days at
a time. Instead, you will often need to test a
sample that is representative of the product’s functionality (Lehtola et al., 2004).
It is vital that, while selecting this sample of tasks, you exercise as many
of the most critical elements of the product as is reasonably practical and make
sure to cover all of the test goals. Your task list should be filtered or reduced
to something more manageable, but at the same time you should make it a
priority to uncover as many usability problems as feasible. The following is a
summary of some typical strategies that you can use to prioritize the tasks on
your list, or cut it down, without having to make unnecessary sacrifices
(Burns & Naikar, 2016).
Prioritize by Frequency: Choose tasks that are representative
of the ones carried out most regularly by your end-user population.
The most common tasks are those carried out on a daily basis by the
typical end user and may account for as much as 75% to 80% of total usage time.
For instance, when evaluating a word processing package, you would want to
make sure that the end user can easily carry out the following tasks before
worrying about more obscure ones, such as how to hide a comment so that it
does not print (Claassen et al., 2014):
i. Open a file.
ii. Save a file.
iii. Edit a file.
iv. Print a file.
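A minimal sketch (under assumptions, not from the text) of applying the “75% usage guideline”: rank candidate tasks by how much of total usage time they account for and keep the most frequent ones until roughly 75% is covered. The task names and usage shares below are invented for illustration.

# Minimal sketch: selecting tasks under the "75% usage guideline."
# Task names and usage shares are illustrative assumptions.

usage_share = {                      # fraction of total usage time per task
    "open a file": 0.25,
    "save a file": 0.22,
    "edit a file": 0.20,
    "print a file": 0.12,
    "insert a table": 0.08,
    "record a macro": 0.05,
    "hide a non-printing comment": 0.02,
}

def prioritize_by_frequency(shares: dict[str, float], target: float = 0.75) -> list[str]:
    selected, covered = [], 0.0
    for task, share in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= target:
            break
        selected.append(task)
        covered += share
    return selected

print(prioritize_by_frequency(usage_share))
# ['open a file', 'save a file', 'edit a file', 'print a file']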
Too often, tests consist of a series of arcane tasks that only
around five percent of the user population would ever find, let alone
use. Why? We hypothesize that, because these tasks are typically on the
cutting edge of the product, the development team views those “five percenters”
as the activities that present the most opportunity for intellectual growth and
professional challenge. Sadly, the general end user does not share the
developer’s priority or enthusiasm for these obscure tasks.
If, after applying the “75% usage guideline,” there is still time to test
additional tasks, include tasks that at least 25% of your end-user group performs
consistently. Include tasks that are completed less frequently
only after you have verified that the most common tasks have been
covered (Xuan et al., 2012).
Prioritize by Criticality: When a task results in a support call, data
loss, product damage, or user injury, it is considered to be a critical task.
Critical tasks are ones that, if performed poorly or ignored, could have major
repercussions for the end user, the product, or the company’s reputation. In
essence, you want to be sure that the tasks that cause the most suffering and
maybe negative press are caught.
Prioritize by Vulnerability: In this context, the term “vulnerability” refers
to those tasks that you suspect, even before testing,
will be difficult to carry out or that have known weaknesses
in their design.
Most often, the project team will have a strong grasp of this and, when asked, will voice their concerns about a new feature, procedure, interface design, portion of a document, and so on. In that case, the test should include questions and activities that cover these areas. Sometimes, in the spirit of “being neutral” and to avoid any appearance of prejudice, developers will pretend that all functions work equally well (or equally poorly) and that none is particularly problematic. Whether the motive is well-intentioned or something less admirable, they do not want their known flaws exposed during the test. As a result, activities that are plainly difficult to perform, and that represent whole chunks, components, or pages of a document, are omitted from the test. These omissions later become albatrosses when the test is completed so late that there is no time left to fix them.
To prevent this from happening, apply your best critical judgment to determine which features or tasks are not quite finished, are new or have never been tested before, or have been challenging for in-house people to complete. If you are uncertain, a human performance specialist can help identify the weak points of the product by carrying out an assessment. Such an expert assessment can also help you refine the scope of your overall test objectives (Conaghan et al., 2001).
Prioritize by Readiness: If you leave testing until a very late phase of the development cycle, you may have no choice except to stick with the features that are already ready to be tested, or to forgo testing altogether. Even though this is not ideal, there are occasions when you have no other option; you will not always be able to wait for every screen, component, and user-guide section to be finished. Always keep in mind that it is preferable to test something rather than nothing. One simple way to weigh all four criteria together is sketched below.
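As a companion sketch, and only as one possible interpretation, the four criteria can be combined into a simple weighted score, with readiness treated as a hard filter. The weights, scales, and task data below are assumptions made for illustration.

```python
# Sketch: rank candidate test tasks by weighting frequency, criticality, and
# vulnerability (each scored 0-1), and drop tasks that are not ready to test.
# All names, scores, and weights are illustrative assumptions.
WEIGHTS = {"frequency": 0.4, "criticality": 0.3, "vulnerability": 0.3}

candidates = [
    # (task, frequency, criticality, vulnerability, ready_to_test)
    ("save a file",                 0.9, 0.8, 0.2, True),
    ("recover an unsaved document", 0.2, 1.0, 0.7, True),
    ("hide a non-printing comment", 0.1, 0.1, 0.3, True),
    ("merge tracked changes",       0.4, 0.6, 0.8, False),  # feature not finished yet
]

def score(task):
    _, freq, crit, vuln, _ = task
    return (WEIGHTS["frequency"] * freq
            + WEIGHTS["criticality"] * crit
            + WEIGHTS["vulnerability"] * vuln)

testable = [t for t in candidates if t[4]]          # readiness as a hard filter
for task in sorted(testable, key=score, reverse=True):
    print(f"{task[0]:<30} score = {score(task):.2f}")
```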
ACTIVITY 7.1:
What are the processes of conducting a test? Give a detailed presentation.

information are all examples of preference data, which represent measures of user opinion or thought process. Your research questions should serve as the foundation for the data collection. Sometimes a reference to these measurements will already be present in an earlier part of the test plan, such as the methodology section. Measures of performance and preference may be used statistically or qualitatively, depending on the aims of the test (Szabo et al., 2010). If you include the evaluation metrics you intend to use during the test, any interested parties will have a much easier time reviewing the test plan. It will also ensure that they receive the kind of data they expect from the test, which will be beneficial to them (Wang & Strong, 1996). The following is a selection of the kinds of measurements that might be gathered over the course of a normal test.
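Broadly, such measurements fall into performance measures (what the participant actually does) and preference measures (what the participant thinks or feels). As a rough sketch only, with invented field names and values, one task record might look like this:

```python
# Sketch: a single participant/task record that separates performance measures
# (observed behavior) from preference measures (opinion). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    participant: str
    task: str
    # Performance measures
    time_on_task_sec: float = 0.0
    errors: int = 0
    assists: int = 0
    completed: bool = False
    # Preference measures
    ease_rating_1_to_7: int = 0
    comments: list = field(default_factory=list)

record = TaskRecord(
    participant="P03",
    task="print a file",
    time_on_task_sec=84.0,
    errors=2,
    assists=1,
    completed=True,
    ease_rating_1_to_7=4,
    comments=["Could not find print preview at first."],
)
print(record)
```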
SUMMARY
The process of conducting testing involves various stages to ensure that a product or
system meets the required standards of quality and functionality. The process starts with
planning, where the testing team determines the objectives, scope, and approach for
testing. This stage also includes creating test plans, test cases, and test scripts. The next
stage is the preparation phase, where the testing team sets up the testing environment,
including hardware and software, and prepares the test data. This stage also involves
identifying and allocating resources, such as testers and testing tools. The execution phase
is where the actual testing takes place, and the testing team executes the test cases
and scripts to detect defects and ensure the system’s functionality. During this stage, the
team may also perform regression, performance, and security tests. Once the testing is
complete, the testing team analyzes the results to identify defects and issues. The team
then reports the defects to the development team, who will fix the issues and re-test
the system. The final stage is the closure phase, where the testing team documents the
testing results and produces a final report. This report provides a summary of the testing
process, including the objectives, scope, approach, test cases, and results.
REFERENCES
1. Al‐Bayari, O., & Sadoun, B., (2005). New centralized automatic vehicle location
communications software system under GIS environment. International Journal of
Communication Systems, 18(9), 833–846.
2. Anand, A., Das, S., Singh, O., & Kumar, V., (2022). Testing resource allocation
for software with multiple versions. International Journal of Applied Management
Science, 14(1), 23–37.
3. Aromataris, E., Fernandez, R., Godfrey, C. M., Holly, C., Khalil, H., & Tungpunkom,
P., (2015). Summarizing systematic reviews: Methodological development, conduct
and reporting of an umbrella review approach. JBI Evidence Implementation, 13(3),
132–140.
4. Azeroual, O., Saake, G., & Schallehn, E., (2018). Analyzing data quality issues in
research information systems via data profiling. International Journal of Information
Management, 41, 50–56.
5. Bellotti, V., Dalal, B., Good, N., Flynn, P., Bobrow, D. G., & Ducheneaut, N., (2004).
What a to-do: Studies of task management towards the design of a personal task
list manager. In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (pp. 735–742).
6. Boisvert, M., Lang, R., Andrianopoulos, M., & Boscardin, M. L., (2010). Telepractice
in the assessment and treatment of individuals with autism spectrum disorders: A
systematic review. Developmental Neurorehabilitation, 13(6), 423–432.
7. Bosse, Y., & Gerosa, M. A., (2017). Why is programming so difficult to learn? Patterns
of difficulties related to programming learning mid-stage. ACM SIGSOFT Software
Engineering Notes, 41(6), 1–6.
8. Brown, C. H., Holman, E. W., Wichmann, S., & Velupillai, V., (2008). Automated
classification of the world’s languages: A description of the method and preliminary
results. Language Typology and Universals, 61(4), 285–308.
9. Burns, C. M., & Naikar, N., (2016). Prioritization: A double-edged sword? Journal of
Cognitive Engineering and Decision Making, 10(1), 105–108.
10. Buscher, G., Cutrell, E., & Morris, M. R., (2009). What do you see when you’re
surfing? Using eye tracking to predict salient regions of web pages. In: Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems (pp. 21–30).
11. Butler, D. L., & Cartier, S. C., (2004). Promoting effective task interpretation as an
important work habit: A key to successful teaching and learning. Teachers College
Record, 106(9), 1729–1758.
12. Carpio, L., Hall, P., Lingard, L., & Schryer, C. F., (2008). Interprofessional communication
and medical error: A reframing of research questions and approaches. Academic
Medicine, 83(10), S76–S81.
13. Chicheł, A., Skowronek, J., Kubaszewska, M., & Kanikowski, M., (2007). Hyperthermia–
description of a method and a review of clinical applications. Reports of Practical
Oncology & Radiotherapy, 12(5), 267–275.
14. Claassen, C. A., Pearson, J. L., Khodyakov, D., Satow, P. M., Gebbia, R., Berman,
A. L., & Insel, T. R., (2014). Reducing the burden of suicide in the US: The
aspirational research goals of the national action alliance for suicide prevention
research prioritization task force. American Journal of Preventive Medicine, 47(3),
309–314.
15. Conaghan, P., Edmonds, J., Emery, P., Genant, H., Gibbon, W., Klarlund, M., & Ostergaard, M., (2001). Magnetic resonance imaging in rheumatoid arthritis: Summary of OMERACT activities, current status, and plans. The Journal of Rheumatology, 28(5), 1158–1162.
16. Czaja, S. J., Sharit, J., Lee, C. C., Nair, S. N., Hernández, M. A., Arana, N., &
Fu, S. H., (2013). Factors influencing use of an e-health website in a community
sample of older adults. Journal of the American Medical Informatics Association,
20(2), 277–284.
17. Davis, K., Lo, H. Y., Lichliter, R., Wallin, K., Elegores, G., Jacobson, S., & Doughty,
C., (2022). Twelve tips for creating an escape room activity for medical education.
Medical Teacher, 44(4), 366–371.
18. Desurvire, H., Kondziela, J., & Atwood, M. E., (1992). What is gained and lost when
using methods other than empirical testing. In: Posters and Short Talks of the 1992
SIGCHI Conference on Human Factors in Computing Systems (pp. 125, 126).
19. Dyck, M. J., (1987). Assessing logotherapeutic constructs: Conceptual and psychometric
status of the purpose in life and seeking of noetic goals tests. Clinical Psychology
Review, 7(4), 439–447.
20. Fenech, M., (1993). The cytokinesis-block micronucleus technique: A detailed
description of the method and its application to genotoxicity studies in human
populations. Mutation Research/Fundamental and Molecular Mechanisms of
Mutagenesis, 285(1), 35–44.
21. Ganz, J. B., Davis, J. L., Lund, E. M., Goodwyn, F. D., & Simpson, R. L., (2012).
Meta-analysis of PECS with individuals with ASD: Investigation of targeted versus non-
targeted outcomes, participant characteristics, and implementation phase. Research
in Developmental Disabilities, 33(2), 406–418.
22. Goetz, C. G., Fahn, S., Martinez‐Martin, P., Poewe, W., Sampaio, C., Stebbins,
G. T., & LaPelle, N., (2007). Movement disorder society‐sponsored revision of
the unified Parkinson’s disease rating scale (MDS‐UPDRS): Process, format, and
clinimetric testing plan. Movement Disorders, 22(1), 41–47.
23. Goldberg, J. H., Stimson, M. J., Lewenstein, M., Scott, N., & Wichansky, A. M.,
(2002). Eye tracking in web search tasks: Design implications. In: Proceedings of
the 2002 Symposium on Eye Tracking Research & Applications (pp. 51–58).
24. Greenwald, A. G., (1976). Within-subjects designs: To use or not to use? Psychological
Bulletin, 83(2), 314.
25. Harter, H. L., & Moore, A. H., (1976). An evaluation of exponential and Weibull test
plans. IEEE Transactions on Reliability, 25(2), 100–104.
26. Huberty, T. J., (2009). Test and performance anxiety. Principal Leadership, 10(1),
12–16.
27. Krasny-Pacini, A., Chevignard, M., & Evans, J., (2014). Goal management training
for rehabilitation of executive functions: A systematic review of effectiveness in
patients with acquired brain injury. Disability and Rehabilitation, 36(2), 105–116.
28. Kubey, R., Larson, R., & Csikszentmihalyi, M., (1996). Experience sampling method
applications to communication research questions. Journal of Communication, 46(2),
99–120.
29. Lee, S., & Lee, D. K., (2018). What is the proper way to apply the multiple comparison
test? Korean Journal of Anesthesiology, 71(5), 353–360.
30. Lehtola, L., Kauppinen, M., & Kujala, S., (2004). Requirements prioritization challenges
in practice. In: Product Focused Software Process Improvement: 5th International
Conference, PROFES 2004, Kansai Science City, Japan, April 5–8, 2004. Proceedings
5 (pp. 497–508).
31. Lim, L. P., Garnsey, E., & Gregory, M., (2006). Product and process innovation in
biopharmaceuticals: A new perspective on development. R&D Management, 36(1),
27–36.
32. Lin, X., Yajnanarayana, V., Muruganathan, S. D., Gao, S., Asplund, H., Maattanen,
H. L., & Wang, Y. P. E., (2018). The sky is not the limit: LTE for unmanned aerial
vehicles. IEEE Communications Magazine, 56(4), 204–210.
33. Liu, L., Wang, L., Sun, J., Zhou, Y., Zhong, X., Yan, A., & Xu, N., (2007). An
integrated test-bed for PAT testing and verification of inter-satellite lasercom terminals.
In: Free-Space Laser Communications VII (Vol. 6709, pp. 34–38).
34. Lochbaum, M., Zişan, K. Ç., Kara-Aretha, G., Taylor, W., & Ricardo, Z., (2016).
“Task and ego goal orientations in competitive sport: A quantitative review of the
47. Nettleton, E., Thrun, S., Durrant-Whyte, H., & Sukkarieh, S., (2006). Decentralized
SLAM with low-bandwidth communication for teams of vehicles. In: Field and Service
Robotics (Vol. 24, pp. 179–188).
48. Pawliuk, C., Brown, H. L., Widger, K., Dewan, T., Hermansen, A. M., Grégoire, M.
C., & Siden, H. H., (2021). Optimizing the process for conducting scoping reviews.
BMJ Evidence-Based Medicine, 26(6), 312.
49. Pearlman, M., (2011). Finalizing the test blueprint. In: Building a Validity Argument
for the Test of English as a Foreign Language™ (pp. 241–272).
50. Pissoort, D., & Armstrong, K., (2016). Why is the IEEE developing a standard on
managing risks due to EM disturbances? In: 2016 IEEE International Symposium
on Electromagnetic Compatibility (EMC) (pp. 78–83).
51. Poulton, E. C., (1982). Influential companions: Effects of one strategy on another in
the within-subjects designs of cognitive psychology. Psychological Bulletin, 91(3), 673.
52. Rai, R., (2016). Tips to organize a conference: Experiences from DERMACON 2016,
Coimbatore. Indian Dermatology Online Journal, 7(5), 424.
53. Rochon, J., Gondan, M., & Kieser, M., (2012). To test or not to test: Preliminary
assessment of normality when comparing two independent samples. BMC Medical
Research Methodology, 12(1), 1–11.
54. Roemer, J. E., (1986). Equality of resources implies equality of welfare. The Quarterly Journal of Economics, 101(4), 751–784.
55. Rogevich, M. E., & Perin, D., (2008). Effects on science summarization of a reading
comprehension intervention for adolescents with behavior and attention disorders.
Exceptional Children, 74(2), 135–154.
56. Rude, C. D., (2009). Mapping the research questions in technical communication.
Journal of Business and Technical Communication, 23(2), 174–215.
57. Saavedra, R. H., & Smith, A. J., (1996). Analysis of benchmark characteristics
and benchmark performance prediction. ACM Transactions on Computer Systems
(TOCS), 14(4), 344–384.
58. Sandelowski, M., (2000). Whatever happened to qualitative description? Research
in Nursing & Health, 23(4), 334–340.
59. Saraf, I., Iqbal, J., Shrivastava, A. K., & Khurshid, S., (2022). Modelling reliability
growth for multi‐version open source software considering varied testing and
debugging factors. Quality and Reliability Engineering International, 38(4), 1814–1825.
60. Singh, V. B., Sharma, M., & Pham, H., (2017). Entropy based software reliability
analysis of multi-version open source software. IEEE Transactions on Software
Engineering, 44(12), 1207–1223.
61. Smeltzer, L. R., (1993). Emerging questions and research paradigms in business
communication research. The Journal of Business Communication (1973), 30(2),
181–198.
62. Sprague, J., (1993). Retrieving the research agenda for communication education:
Asking the pedagogical questions that are “embarrassments to theory.” Communication
Education, 42(2), 106–122.
63. Szabo, J. K., Vesk, P. A., Baxter, P. W., & Possingham, H. P., (2010). Regional
avian species declines estimated from volunteer‐collected long‐term data using
list length analysis. Ecological Applications, 20(8), 2157–2169.
64. Tseng, S. T., Balakrishnan, N., & Tsai, C. C., (2009). Optimal step-stress accelerated
degradation test plan for gamma degradation processes. IEEE Transactions on
Reliability, 58(4), 611–618.
65. Wang, A. Y. T., Murdock, R. J., Kauwe, S. K., Oliynyk, A. O., Gurlo, A., Brgoch, J.,
& Sparks, T. D., (2020). Machine learning for materials scientists: An introductory
guide toward best practices. Chemistry of Materials, 32(12), 4954–4965.
66. Wang, R. Y., & Strong, D. M., (1996). Beyond accuracy: What data quality means
to data consumers. Journal of Management Information Systems, 12(4), 5–33.
67. Weidner, K., & Eggum, B. O., (1966). Protein hydrolysis: A description of the method
used at the department of animal physiology in Copenhagen. Acta Agriculturae
Scandinavica, 16(3, 4), 115–119.
68. Weitzner, D. S., & Calamia, M., (2020). Serial position effects on list learning tasks
in mild cognitive impairment and Alzheimer’s disease. Neuropsychology, 34(4), 467.
69. Wilson, T. D., Kraft, D., & Dunn, D. S., (1989). The disruptive effects of explaining
attitudes: The moderating effect of knowledge about the attitude object. Journal of
Experimental Social Psychology, 25(5), 379–400.
70. Xuan, J., Jiang, H., Ren, Z., & Zou, W., (2012). Developer prioritization in bug
repositories. In: 2012 34th International Conference on Software Engineering (ICSE)
(pp. 25–35).
71. Yik, M., Widen, S. C., & Russell, J. A., (2013). The within-subjects design in the
study of facial expressions. Cognition & Emotion, 27(6), 1062–1072.
CHAPTER 8
Expanding from Usability Testing to Designing the User Experience
UNIT INTRODUCTION
Up until this point, the scientific components of drafting, developing, carrying out, and presenting the findings of a usability test have been discussed. Usability testing has also been presented within the perspective of a user-centered method for creating products within a company. This chapter provides a broader perspective on increasing the impact that user-centered design (UCD) and user experience design have on a company. These suggestions are directed largely toward an individual who has been made responsible for usability within his or her business (or who would like to take on this role) but who has received little or no professional training in UCD (Redish, 2007).
A phased program is used to underscore how important it is to construct such a
program in stages over several years. According to the phased diagram, when it comes to
product development, companies that have not previously adopted such a program face a
significant challenge when attempting to implement a user-centered approach (Meloncon,
2017). This undertaking carries the same challenges, risks, and political interests that come with any significant shift in corporate culture. It requires a great deal of planning and close attention to the “human” concerns within the firm. Nevertheless, depending on the level of management support and the resources allocated to usability, an organization may want to move more quickly or more slowly, put these ideas into action in a different order, or steer clear of those that do not make sense (Ellis & Kurniawan, 2000). The critical thing is to recognize and take into consideration the dynamics at play within a specific business. Speaking with those who have been through the process suggests that establishing a program follows a path that is more spontaneous than structured or planned (El Bassiti & Ajhoun, 2013). No foolproof method can be applied in a “cookie cutter” fashion to each and every company, and any effort to do so will put a program in jeopardy. This chapter reflects the collective experiences of many people and groups, in both large and small start-ups and in established businesses, that have at some point put usability, UCD, and experience design into action (Benoît et al., 2010).
Learning Objectives
At the end of this chapter, readers will be able to:
• Understanding the difference between usability testing and user experience design
• Understanding the importance of user experience in product design
• Understanding the user-centered design process
• Learning how to design for user experience
• Learning how to evaluate user experience
• Understanding the role of usability testing in user experience design
Key Terms
• Stealth Mode
• Usability Related Activities
• Usability Testing
• User Experience
1998). This will include identifying where the organization is headed, the UCD
techniques and methods to be employed, hiring requirements, modifications and additions to the product life cycle, usability tests and evaluations, internal
awareness-building, educational opportunities, and external services required,
such as recruiting or market research firms and usability consultants. Guidelines
for projects, including usability checkpoints in the life cycle, usability objectives,
and test types, may also be included in the plan (Schaltegger et al., 2012).
The plan may be developed for personal use or shared with others, depending
on one’s position and level of responsibility within the organization. Emphasis
is placed on a plan at such an early stage because, without it, usability may
falter. As the company becomes more committed to implementing a usability
program, it will need to address the issues in the plan. Even when starting from scratch, the techniques you implement should succeed well enough that people want more; being one jump ahead and ready for that success is essential (Versteeg & Bouwman, 2006). The plan should consider the organization’s political
realities and management’s level of commitment. The right balance should be
kept between stretching the organization and proposing impossible tasks. It’s
important to remember that any plan is subject to revision as circumstances
change. It’s a living document (Welch et al., 2023).
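Purely as an illustration of how such a plan might be kept in a reviewable, easily revised form, the sketch below stores a few plan headings as plain data; every heading and value is an assumption, not a template from the text.

```python
# Sketch: a usability program plan captured as plain data so it can be shared,
# reviewed, and revised as circumstances change. All entries are illustrative.
usability_plan = {
    "direction": "move product teams toward a user-centered design process",
    "ucd_methods": ["user profiling", "task analysis", "usability testing",
                    "expert review"],
    "hiring": {"usability specialists": 1, "contract recruiting firm": True},
    "life_cycle_changes": ["usability checkpoints at design and beta",
                           "usability objectives for each release"],
    "awareness_and_education": ["internal results presentations",
                                "short UCD training for developers"],
    "external_services": ["participant recruiting", "market research",
                          "usability consultants"],
    "status": "living document; revise as commitment and resources change",
}

for heading, items in usability_plan.items():
    print(f"{heading}: {items}")
```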
∼ OR ∼
ii. Usability can be centralized, and a centralized manager can receive reports
from one or more specialists who support various projects. The first method
is typically utilized by companies that are considered to be “usability-mature.”
These companies have many usability specialists on staff, all of whom are
dedicated to certain projects and report directly to the managers of those
projects (Lucchi & Delera, 2020).
from other activities, the more vulnerable it is to be undercut. When actions like
creating usability targets, officially identifying the user, doing a job analysis, and
defining testing checkpoints become second nature, the usability specialist will
know usability has arrived and will be tough to destroy (Carter et al., 2005).
(Schwab et al., 2001). To be more precise, research any usability issues with the
present product before starting any projects that involve a “follow-on” product to
it. Then, resolve these issues by factoring their resolution into the new product’s
usability goals. Sweeping up the error messages, for instance, should be an
inherent usability goal for the new project if you are aware that the present
product’s online error messages are cryptic and the user guides lack clarity
of the errors. Despite the fact that this seems like the most straightforward
common sense, it’s amazing how frequently politics interferes with and stops
such an analysis from occurring (Donner & Tellez, 2008).
SUMMARY
This chapter discusses how the field of user experience (UX) design has evolved beyond
just usability testing to a more holistic approach to designing products that meet users’
needs. The chapter explores the various stages of the UX design process, including
user research, prototyping, and testing, and highlights the importance of involving users
throughout the design process.
REFERENCES
1. Agarwal, N., Calvo, B., & Kumar, V., (2014). Paving the road to success: A students
with disabilities organization in a university setting. College Student Journal, 48(1),
34–44.
2. Ashraf, G., (2012). A review on the models of organizational effectiveness: A look at
Cameron’s model in higher education. International Education Studies, 5(2), 80–87.
3. Atkinson, N. L., Saperstein, S. L., Desmond, S. M., Gold, R. S., Billing, A. S., &
Tian, J., (2009). Rural eHealth nutrition education for limited-income families: An
iterative and user-centered design approach. Journal of Medical Internet Research,
11(2), e1148.
4. Atkinson, P. E., (2012). Selling yourself magically-Persuasion strategies for personal
and organizational change. Management Services, 56(4), 28.
5. Auger, J., (2013). Speculative design: Crafting the speculation. Digital Creativity,
24(1), 11–35.
6. Bendixen, R. M., Fairman, A. D., Karavolis, M., Sullivan, C., & Parmanto, B., (2017).
A user-centered approach: Understanding client and caregiver needs and preferences
in the development of mHealth apps for self-management. JMIR mHealth and
uHealth, 5(9), e7136.
7. Benitez‐Paez, F., Comber, A., Trilles, S., & Huerta, J., (2018). Creating a conceptual
framework to improve the re‐usability of open geographic data in cities. Transactions
in GIS, 22(3), 806–822.
8. Benoît, C., Norris, G. A., Valdivia, S., Ciroth, A., Moberg, A., Bos, U., & Beck, T.,
(2010). The guidelines for social life cycle assessment of products: Just in time!
The International Journal of Life Cycle Assessment, 15, 156–163.
9. Brannon, A. L., Schoenmakers, M. A., Klapwijk, H. P., & Haley, K. B., (1993). Product
value matrices help firms to focus their efforts. Omega, 21(6), 699–708.
10. Brody, A. A., Arbaje, A. I., DeCherrie, L. V., Federman, A. D., Leff, B., & Siu, A.
L., (2019). Starting up a hospital at home program: Facilitators and barriers to
implementation. Journal of the American Geriatrics Society, 67(3), 588–595.
11. Bruce, K., (1998). Can you align IT with business strategy? Strategy & Leadership,
26(5), 16–20.
12. Carter, J. A., Liu, J., Schneider, K., & Fourney, D., (2005). Transforming usability
engineering requirements into software engineering specifications: From PUF to
UML. Human-Centered Software Engineering—Integrating Usability in the Software
Development Lifecycle, 147–169.
13. Cooper, B. L., Watson, H. J., Wixom, B. H., & Goodhue, D. L., (2000). Data
warehousing supports corporate strategy at first American corporation. MIS Quarterly,
547–567.
14. Dhamija, R., & Tygar, J. D., (2005). The battle against phishing: Dynamic security
skins. In: Proceedings of the 2005 Symposium on USABLE Privacy and Security
(pp. 77–88).
15. Dillahunt, T., Wang, Z., & Teasley, S. D., (2014). Democratizing higher education:
Exploring MOOC use among those who cannot afford a formal education. International
Review of Research in Open and Distributed Learning, 15(5), 177–196.
16. Donner, J., & Tellez, C. A., (2008). Mobile banking and economic development:
Linking adoption, impact, and use. Asian Journal of Communication, 18(4), 318–332.
17. El Bassiti, L., & Ajhoun, R., (2013). Toward an innovation management framework: A
life-cycle model with an idea management focus. International Journal of Innovation,
Management and Technology, 4(6), 551.
18. Ellis, R. D., & Kurniawan, S. H., (2000). Increasing the usability of online information
for older users: A case study in participatory design. International Journal of Human-
Computer Interaction, 12(2), 263–276.
19. Friedman, H. S., (2000). Long‐term relations of personality and health: Dynamisms,
mechanisms, tropisms. Journal of Personality, 68(6), 1089–1107.
20. Göransson, B., Lif, M., & Gulliksen, J., (2003). Usability design-extending rational
unified process with a new discipline. In: DSV-IS (pp. 316–330).
21. Gorry, J., Roen, K., & Reilly, J., (2010). Selling your self? The psychological impact
of street sex work and factors affecting support seeking. Health & Social Care in
the Community, 18(5), 492–499.
22. Greenfield, D., Pawsey, M., Hinchcliff, R., Moldovan, M., & Braithwaite, J., (2012).
The standard of healthcare accreditation standards: A review of empirical research
underpinning their development and impact. BMC Health Services Research, 12(1),
1–14.
23. Grossman, J. H., (1991). How to sell yourself: A guide for interviewing effectively.
SAM Advanced Management Journal, 56(2), 33.
24. Grundlingh, A., (2008). “Are we Afrikaners getting too rich?” 1 cornucopia and change
in Afrikanerdom in the 1960s. Journal of Historical Sociology, 21(2, 3), 143–165.
25. Gudipati, M., & Sethi, K. B., (2017). Adapting the user-centered design framework for
K-12 education. Taking Design Thinking to School: How the Technology of Design
can Transform Teachers, Learners, and Classrooms, 94–101.
26. Guiffrida, D. A., (2003). African American student organizations as agents of social
integration. Journal of College Student Development, 44(3), 304–319.
27. Guthman, J., (2008). Bringing good food to others: Investigating the subjects of
alternative food practice. Cultural Geographies, 15(4), 431–447.
28. Hansen, M. T., Nohria, N., & Tierney, T., (2013). What’s your strategy for managing
knowledge? In: The Knowledge Management Yearbook 2000–2001 (pp. 55–69).
29. Harper, M., & Allen, S., (1997). Volunteer to enhance your career and the profession.
Journal of Accountancy, 183(2), 41.
30. Howell, J. M., (2005). The right stuff: Identifying and developing effective champions
of innovation. Academy of Management Perspectives, 19(2), 108–119.
31. Jansen, E. P., (2004). The influence of the curriculum organization on study progress
in higher education. Higher Education, 47, 411–435.
32. Jiang, J. J., Klein, G. S., & Pick, R. A., (1996). Individual differences and system
development. ACM SIGCPR Computer Personnel, 17(3), 3–12.
33. Johnson, G. I., & Westwater, M. G., (1996). Usability and self-service information
technology: Cognitive engineering in product design and evaluation. AT&T Technical
Journal, 75(1), 64–73.
34. Khan, O., Christopher, M., & Creazza, A., (2012). Aligning product design with the
supply chain: A case study. Supply Chain Management: An International Journal,
17(3), 323–336.
35. Labro, E., (2006). Is a focus on collaborative product development warranted from a
cost commitment perspective? Supply Chain Management: An International Journal,
11(6), 503–509.
36. Lampland, M., (2010). False numbers as formalizing practices. Social Studies of
Science, 40(3), 377–404.
37. Lane, I. F., (2007). Change in higher education: Understanding and responding to
individual and organizational resistance. Journal of Veterinary Medical Education,
34(2), 85–92.
38. Le Lann, L., Jouve, P. E., Alarcón-Riquelme, M., Jamin, C., & Pers, J. O., (2020).
Standardization procedure for flow cytometry data harmonization in prospective
multicenter studies. Scientific Reports, 10(1), 11567.
39. Lewis, R. R., III, (2005). Ecological engineering for successful management and restoration of mangrove forests. Ecological Engineering, 24(4), 403–418.
40. Lucchi, E., & Delera, A. C., (2020). Enhancing the historic public social housing
through a user-centered design-driven approach. Buildings, 10(9), 159.
41. Lyall, C., & Fletcher, I., (2013). Experiments in interdisciplinary capacity-building:
The successes and challenges of large-scale interdisciplinary investments. Science
and Public Policy, 40(1), 1–7.
42. Martinsons, M. G., (1993). Cultivating the champions for strategic information systems.
Journal of Systems Management, 44(8), 31.
43. Meloncon, L. K., (2017). Patient experience design: Expanding usability methodologies
for healthcare. Communication Design Quarterly Review, 5(2), 19–28.
44. Menzi-Çetin, N., Alemdağ, E., Tüzün, H., & Yıldız, M., (2017). Evaluation of a
university website’s usability for visually impaired students. Universal Access in the
Information Society, 16, 151–160.
45. Mizrahi, T., & Rosenthal, B. B., (2001). Complexities of coalition building: Leaders’
successes, strategies, struggles, and solutions. Social Work, 46(1), 63–78.
46. Nesbit, R., & Brudney, J. L., (2010). At your service? Volunteering and national
service in 2020. Public Administration Review, 70, S107–S113.
47. Paz, F., Paz, F. A., Villanueva, D., & Pow-Sang, J. A., (2015). Heuristic evaluation
as a complement to usability testing: A case study in web domain. In: 2015 12th
International Conference on Information Technology-New Generations (pp. 546–551).
48. Philippidis, A., (2022). Into orbit: Satellite bio raises $110 M toward tissue therapeutics:
Emerging from stealth mode, company focuses on manufacturing platform, pipeline
growth starting with focus on liver conditions. GEN Edge, 4(1), 400–406.
49. Redish, J. G., (2007). Expanding usability testing to evaluate complex systems.
Journal of Usability Studies, 2(3), 102–111.
50. Richter, D., Kunter, M., Klusmann, U., Lüdtke, O., & Baumert, J., (2014). Professional
development across the teaching career: Teachers’ uptake of formal and informal
learning opportunities. In: Teachers’ Professional Development (pp. 97–121).
51. Rosenbaum, S., (2008). The future of usability evaluation: Increasing impact on
value. Maturing Usability: Quality in Software, Interaction and Value, 344–378.
52. Roth, R. E., Ross, K. S., & MacEachren, A. M., (2015). User-centered design for
interactive maps: A case study in crime analysis. ISPRS International Journal of
Geo-Information, 4(1), 262–301.
53. Schaltegger, S., Lüdeke-Freund, F., & Hansen, E. G., (2012). Business cases for
sustainability: The role of business model innovation for corporate sustainability.
International Journal of Innovation and Sustainable Development, 6(2), 95–119.
54. Schwab, A. L., Knott, R., & Schottdorf, W., (2001). Environmental and economic
benefit of new fungus-tolerant grape varieties and their usability for different training
systems. In: Proceedings of the 12th GESCO Congress 2001 in Montpellier, France
(Vol. 1, pp. 201–204).
55. Sfikas, P. M., (1999). Volunteering your services. The Journal of the American Dental
Association, 130(2), 278–280.
56. Sward, D., & Macarthur, G., (2007). Making user experience a business strategy.
In: Law, E., et al., (eds.), Proceedings of the Workshop on Towards a UX Manifesto
(Vol. 3, pp. 35–40).
57. Thursky, K. A., & Mahemoff, M., (2007). User-centered design techniques for a
computerized antibiotic decision support system in an intensive care unit. International
Journal of Medical Informatics, 76(10), 760–768.
58. Tillinghast, E. D., Hunt, W. F., & Jennings, G. D., (2011). Stormwater control measure
(SCM) design standards to limit stream erosion for Piedmont North Carolina. Journal
of Hydrology, 411(3, 4), 185–196.
59. Van, H. M. C., & Bingham, T., (2013). ‘Surfing the silk road’: A study of users’
experiences. International Journal of Drug Policy, 24(6), 524–529.
60. Versteeg, G., & Bouwman, H., (2006). Business architecture: A new paradigm to
relate business strategy to ICT. Information Systems Frontiers, 8, 91–102.
61. Walden, A., Garvin, L., Smerek, M., & Johnson, C., (2020). User-centered design
principles in the development of clinical research tools. Clinical Trials, 17(6), 703–711.
62. Weiss, E., Mixtaj, L., Weiss, R., & Weiss, G., (2013). Economic evaluation of the
usability of abandoned mining works. In: SGEM 2013: 13th International Multidisciplinary
Scientific Geoconference Science and Technologies in Geology, Exploration and
Mining; Conference Proceedings (Vol. 1, pp. 16–22).
63. Welch, T. D., Smith, T. B., & Niklinski, E., (2023). Creating a business case for
success. Nursing Administration Quarterly, 47(2), 182–194.
64. Wendel, H. E. W., Zarger, R. K., & Mihelcic, J. R., (2012). Accessibility and usability:
Green space preferences, perceptions, and barriers in a rapidly urbanizing city in
Latin America. Landscape and Urban Planning, 107(3), 272–282.
65. Wixon, D., & Wilson, C., (1997). The usability engineering framework for product
design and evaluation. In: Handbook of Human-Computer Interaction (pp. 653–688).
66. Wormald, P. W., & Rodber, M., (2008). Aligning industrial design education to
emerging trends in professional practice and industry. Journal of Design Research,
7(3), 294–303.
67. Zorzetti, M., Signoretti, I., Salerno, L., Marczak, S., & Bastos, R., (2022). Improving
agile software development using user-centered design and lean startup. Information
and Software Technology, 141, 106718.