Angela M. Cirucci, Urszula M. Pruchniewska - UX Research Methods For Media and Communication Studies - An Introduction To Contemporary Qualitative Methods
List of Figures
SECTION ONE
Introduction
1 What Is UX?
2 Biased Digital Structures and UX
3 How Did We Get Here?
4 Interfaces and Navigation
SECTION TWO
Preparing Your Study
SECTION THREE
Methods
Bibliography
Appendix A: Sample Research Plan Template
Appendix B: Academic Report Example
Appendix C: Industry Report Example
Index
Section One
Introduction
1 What Is UX?
What is user experience? Think about something you love doing that requires
a specific physical object, such as drinking coffee out of your favorite mug,
running in your “fastest” sneakers, or writing with your special pen. What is
this experience? Why do you love it? What is your favorite part of using this
object? Now, think about a digital experience you love, such as using a par-
ticular website or app. What is it? Why do you love it? What is your favorite
part of using this product? The answers to these questions form your best user
experience. Of course, user experience can also be negative—think about a
mug that is an awkward size for your hand or a website with buttons that are
clickable but don’t actually perform an action. These interactions are frustrat-
ing and unpleasant to the user. They might make you question who designed
this and why they didn’t think about how somebody would actually be using
it? This is where the field of user experience comes in.
This book is all about user experience, or UX, which is paying attention
to how a person (or “end-user” in technical terms) uses—that is, interacts
with and experiences—a particular product or service. UX typically includes
researchers and designers, though increasingly the field is becoming more
complex, with additional roles such as those for UX writers and UX content
strategists becoming more prevalent. A UX researcher is the person who con-
ducts research to better understand the user and their needs before the product
or service is created; they also conduct research to understand how a current
product or service can be improved from the user perspective. A UX researcher
is also the person who typically tests a designed product to make sure it actu-
ally meets user needs. A UX designer is the person who designs (and redesigns)
the product based on the insights that the UX researcher provides. This book
focuses on the UX of digital products, such as devices, websites, platforms,
and apps, and specifically on the different ways of doing research to inform the
creation of a satisfying user experience.
Maybe you have seen “UX” in a recent blog post or job ad. But studying user experience has a much longer history than you might think. It extends
back much further than digital spaces and is rooted in buildings, cities, and
tangible objects, slightly morphing, but keeping its foundational pieces, as
it moves through early human-computer interaction (HCI) research and into
today’s app-centric world. Large companies like Facebook and Google with
hundreds of UX researchers, as well as smaller companies with only one UX
researcher, all employ some of the methods in the upcoming chapters.
The goal of this book is twofold. While we want to help aspiring UX
researchers learn the necessary skills for the job, we also see this book as a
means of introducing new and exciting research methods into academia. These
creative methods developed in UX can be used in Media and Communication
Studies to better understand how and why different people interact with digi-
tal technologies (and to what ends) and how design impacts communication.
Perhaps even more importantly, the methods outlined in this book provide a
more applied perspective that parallels the design choices made for apps and
websites, allowing academic researchers to ultimately conduct more relevant
studies of digital products.
This book presents some of the most common qualitative UX research meth-
ods. These methods are creative, contemporary, and adaptable. But, as you
will see, they are also reliant on traditional research skills like interviewing,
observing, and focus group moderating. They also rely on the ability to think
critically and outside the box, to find emergent trends, and to use previous find-
ings to create informed, desirable, and feasible solutions to problems. In this
introductory chapter, we introduce you to the field of UX research, tracing the
history of the field and linking it to more traditional research, with which you
may be a little more familiar.
Peter Morville famously identified seven facets that make for a good user experience, often drawn as a honeycomb (see Figure 1.1):
1. Useful—Is the product useful? Does it have a purpose? Note that what
is considered “useful” changes person by person, but generally a product
should have a clear use for some defined target audience.
2. Usable—Can users effectively and efficiently achieve their end objective
with the product?
3. Findable—Is content in the product or are the features of the product easy
to find? Websites, for example, should be easily navigable by novice users.
4. Credible—Does the design enhance credibility of the product, that is, do
the users trust the product?
5. Accessible—Does the product provide an experience that can be accessed
by users with a full range of abilities?
6. Desirable—Is the product design and experience aesthetically pleasing?
This facet includes things like branding, image, and identity.
7. Valuable—Does the product deliver value to the user and to the business? The product should deliver customer satisfaction but should also contribute to the bottom line (in for-profit companies) or to the mission and values (in nonprofits).5
Figure 1.1 Morville’s honeycomb of the seven distinct factors that make for good UX.
The goal of UX is to meet the needs of the end-user through simple and ele-
gant design that makes the product feel like a joy to use and to own. It’s impor-
tant to note that UX includes all aspects of an end-user’s interaction with the
company, its services, and its products—including marketing of the product and
technical assistance or customer support when something goes wrong with the
product. As Don Norman, the inventor of the term “user experience,” puts it, “a product is more than the product.”6
This complete experience clearly goes way beyond just the UI (user interface),
the specific point at which the user interacts with a product, such as a web
page. Obviously, the interface is very important in a user’s experience, but
even if an interface is nice to look at, it could still contain flaws and inconsist-
encies that ruin the overall product or brand experience.7 We get more into UIs
in Chapter 4.
A related concept, usability, is focused on making a space that is easy for the user
to become familiar with and competent in, as well as making it easy to reach
their objective. Usability is certainly an important piece of UX, but it isn’t
exactly the same thing. User experience includes usability but is also so much
more—including the experience beyond the product itself (such as customer
service) and aesthetics (how pleasant the product is to look at).
This Book
In this book, we will focus primarily on the UX of popular websites
and apps. But user experience research originates in the study of how people experience physical spaces and objects. With that in mind, the methods
in this book can be applied to a myriad of digital products, including games,
operating systems, computer programs, and hardware or devices. Particularly
relevant to fields like Media and Communication Studies, the UX research we
cover in this book is mostly concerned with better understanding how apps and
websites communicate with end-users.
Again, our main goal for this book is twofold. First, we want to help Media
and Communication Studies students prepare for the workforce. UX researcher
jobs are abundant, ranging from positions at top digital corporations like Face-
book and Google, to smaller companies looking for one or two researchers,
to consulting companies that exist solely to conduct UX studies for a vari-
ety of clients.
Second, we are passionate about updating and energizing qualitative
research methods within the Media and Communication Studies fields. Many
research methods textbooks think of digital spaces primarily as recruitment
tools—you can use a Facebook group or a tweet to find participants. Others
think of them as places to scrape data (information about and from users),
often not taking into consideration privacy issues and merely assuming peo-
ple’s words and images are up for grabs because they are posted “publicly.” We
want to promote innovative techniques for gathering and analyzing data not
just from but also about digital spaces and products—and the people who use
them! So, while the book is focused on UX in the sense that we are introducing
techniques popular in industry research, the majority of the methods outlined
in this book can also be used creatively for academic research, research that
adds to a body of knowledge about digital technologies and their users.
Even though there are many exciting, innovative methods in the UX field, a
good UX researcher still relies heavily on foundational qualitative research skills
that are typically used in academic research. As you will see in the upcoming
chapters, observation, asking good questions, and thinking critically all are essen-
tial to conducting great UX research. Almost every method we outline in this book
incorporates some type of interviewing or observing, which is why we dedicate
a chapter (Chapter 8) to a brief overview of the most common traditional qualita-
tive methods: interviews, observations, open-ended surveys, and focus groups.
Overview
In the following chapters that comprise Section One, we introduce you to the
world of digital structures and the study of how humans interact with them.
In Chapter 2, we cover the very important and timely topic of bias in digital
structures and UX. We discuss the ways in which norms and goals are “baked in”
to human-created spaces, including how objects can have implications for users
online, as well as in their everyday “offline” lives. We cover how old and new
technologies have affected those who identify with marginalized communities.
In Chapter 3, we briefly trace early digital design that was less about interac-
tivity and more about qualities like usefulness, desirability, and affordability. Early computers were programmable machines designed as if people were just another component of the production system. Command line interfaces, early mice and keyboards, VisiCalc, and WordStar are explored here. In Chapter 3 we also
provide more contemporary definitions of design that place users in the center
and that situate UX researchers’ goals in striving to create experiences that fit
diverse user contexts, characteristics, and patterns of life. Chapter 4 covers
important components related to UX research—interfaces, navigational com-
ponents, and other interface elements such as tooltips and coach marks. While
interface design fundamentals are more prominent in UX design training, UX
researchers are expected to be familiar with these concepts.
Section Two is devoted to helping you understand how to best plan a UX
study. In Chapter 5 we introduce the guiding process for this book—the
Design Thinking Mindset. This process involves five main stages of Design
Thinking—empathizing with users, defining problems, coming up with ideas,
designing/building prototypes, and testing those prototypes. Chapter 6 then
provides a walkthrough of the process of actually planning a UX research pro-
ject. We discuss gathering background information, logistics, recruitment tech-
niques, and ethical and accessibility considerations. Chapter 7 covers reporting
on and presenting your research findings. We compare academic and industry
reports and also outline how to present UX findings well, discussing presenta-
tion tools such as Microsoft PowerPoint, Canva, Mural, and Zoom.
In Section Three we finally get to the exciting UX research methods! We
cover 12 methods, representing all five stages of the Design Thinking Mind-
set. But, we begin the section with an overview of foundational qualitative
research methods.
We also briefly discuss how to practically use each particular method in-person and virtually, given the reality of our even-more-digital world since the COVID-19 pandemic. We end with discussion questions to get your brain juices flowing. Now, let’s get your UX research journey started in Chapter 2, which focuses on the critical consideration of bias in UX research and design.
UX Research Blogs and Journals
• NNgroup.com
• UXplanet.org
• Interaction-design.org
• boxesandarrows.com
• Journal of Usability Studies
• Journal of Human-Computer Interaction
Notes
1. Lisa Jewell, “User Research-What’s Tomato Ketchup Got to Do with It?” UX Planet, May 14, 2018, https://uxplanet.org/user-research-whats-tomato-ketchup-got-to-do-with-it-758bfb536ca3.
2. Ibid.
3. Ibid.
4. Ibid.
5. Peter Morville, “User Experience Design,” Semantic Studios, June 21, 2004, https://semanticstudios.com/user_experience_design/.
6. Donald A. Norman, “The Way I See It: Systems Thinking: A Product Is More Than
the Product,” Interactions 16, no. 5 (2009): 54.
7. Don Norman and Jakob Nielsen, “The Definition of User Experience (UX),” Nielsen Norman Group, accessed July 18, 2021, www.nngroup.com/articles/definition-user-experience/.
8. “ISO 9241–11:2018(en),” ISO, 2018, www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en.
2 Biased Digital Structures and UX
This book is based on a core assumption: digital products are not neutral.
Social media, such as Facebook, Instagram, and Snapchat, are not “just neutral
platforms” for people to communicate through, nor are any digital products,
including devices (such as iPhones), websites, or apps. Things that are designed
and made by people can never be neutral, as people themselves are not neu-
tral, and the society we live in is certainly not neutral. Designers, researchers,
and engineers all have their own beliefs and values, and live in environments
steeped in cultural and societal norms and assumptions. These biases, values,
and assumptions seep into the digital products they work on, whether con-
sciously or unconsciously. In this chapter, we talk about politics, power imbal-
ances, and bias and how these concepts feature in UX research and design.
Usually we discuss people having politics. But, in his popular piece from
1980, “Do Artifacts Have Politics?,” Langdon Winner argues that technical
things have political qualities. Specifically, Winner discusses machines, struc-
tures, and systems that embody forms of power and authority. He talks about
how technologies are shaped by social and economic forces, and how the val-
ues and politics of these forces reside within the technologies themselves. Even
though human-made objects do not have agency, people are biased, and it is
inevitable that they bake in these biases when they create or design products.
So, it’s important to pay attention to the characteristics of technical objects and
the meaning of those characteristics—why are things made the way they are?
What assumptions do things embody? And what outcomes do things create
through their use?1
For example, Winner explains, consider the overpasses in Long Island.
These bridges over the parkways of New York are very low, with some having
as little as nine feet of height clearance. Even though the constrained height
of these bridges might not be noticeable at first glance, these overpasses are
low enough that buses can’t drive under them. These 200 or so overpasses
were intentionally designed to be low by Robert Moses, the master builder and
architect behind much of New York’s transport infrastructure from the 1920s
to the 1970s. Moses wanted to discourage buses on his parkways, so that only
people who owned cars (i.e., white, middle/upper-class people) could use them
freely for leisure and recreation. Anyone who used public transit—primarily
poor people and people of color—did not have easy access to the parkways. As
you can see, Moses’s design decisions had clear discriminatory outcomes, and
this bias lives on in the things—the parkways themselves—long after Moses
died, making them inherently not neutral.2
But surely most designers, builders, architects, and engineers aren’t deliber-
ately prejudiced like Moses? Unfortunately, as Winner points out, bias is often
baked into objects without intention. He highlights how much of the human-
made physical world is (and was even more so in the 20th century) inaccessi-
ble to people with disabilities. This is less about intentional exclusion and more
about neglect stemming from the limited experience of able-bodied architects
and builders creating a world based on their own needs and experiences.3
Some technologies aren’t inherently exclusionary to particular groups of
people but have unintended real-world consequences that can be harmful.
Winner provides the example of the mechanical tomato harvester created in
the 1940s. This machine was much more efficient at picking tomatoes than picking by hand, but it was also much rougher on the plants, shaking the tomatoes loose from their stalks. This led to the need to develop new
tomato varieties that were able to withstand the motion. These new breeds,
coupled with the high cost of the equipment, fundamentally changed how
tomatoes are grown, displacing smaller farms and contributing to the rise of
large agri-business.4
Technologies build order into our world; they structure human activity. Soci-
eties choose structures for technologies that influence how people are going to
work, communicate, travel, and consume. This happens over long periods of
time. But it’s important to clarify here that our assumptions in this book are
not based on technological determinism, the idea that technology shapes the
development of society, forcing social change by its very design. Rather, we
recognize that yes, technologies influence and shape society, but they themselves are also shaped by social and economic forces.
Ultimately, we want to highlight that people are behind technologies. They
make certain choices in the research and design process, and these choices
are based in the socio-cultural context, their goals, and, often, who is paying
for the design and development of a product. Frequently, these choices aren’t
even conscious. Different people are differently situated and possess unequal
degrees of power and unequal levels of awareness—and these differences are
embedded in the products they create.
Many scholars and industry experts have more recently touched on this idea
of digital technologies having prejudices and biases baked in—and have high-
lighted the need for more inclusivity, diversity, and compassion in technology
research, design, and use. In this chapter, we briefly summarize a few scholars
who have contributed important work to the critical analysis of technologies.
We also include industry discussions on how to make applied research and
design less biased and more accessible to all.
Judy Wajcman and Gender Biases
Judy Wajcman specifically focuses on gender and the ways in which technolo-
gies reflect gender inequalities. Not only do men still hold a monopoly in the tech industry, but gender inequalities are also embedded in the technologies themselves.5 Draw-
ing from Judith Butler,6 Wajcman argues that social relations are “materialised
in tools and techniques.”7 The tools that users fold into their lives are socially
constructed, based in prejudices that already exist, privileging those who make
and fund them (mostly straight, able-bodied white men).
For example, Bumble is a dating app designed for women to take control of
the dating game by allowing only the woman to start the conversation after
matching with a potential partner. Scholars Rena Bivens and Anna Shah Hoque8
and Caitlin MacLeod and Victoria McArthur9 studied the interface of Bumble
and found that the app makes normative assumptions about gender, sex, and
sexuality. For example, at the time of their studies, the gender choices when
creating a profile on Bumble were “male” or “female,” and users could choose
whether they were interested in “males,” “females,” or “both.” (Bumble has
since changed these options to include non-binary gender identification.)
In addition, the entire premise of Bumble is that men and women are most
often trying to date each other. Thus, Bumble operates on a “heterosexual
matrix,”10 the idea that bodies identified as female at birth perform femi-
nine gender identity and are attracted to bodies designated as male at birth
that perform masculinity. Static binary logics (male/female, heterosexual/
homosexual) permeate Bumble’s design, and, as such, the app is “optimized
for straight cisgender women.”11 Who would have thought that the “feminist” dating app could, by its very design, be so exclusionary to such a large swathe of people!
UX Takeaways
So, what does this all mean for the subject of this book: UX research? Hope-
fully you can see that one of the key ways of fighting bias and prejudice in
the design of digital products and spaces is to simply do research in the first
place! This is why UX research is crucial in industry—to make sure that tech
products are inclusive, equitable, and sensitive to a variety of scenarios and
users. Even though we as researchers are not directly designing the website or
app, we are undoubtedly shaping how digital spaces are structured—what is
possible, what things look like, who is included, etc.
Actively researching—talking with, observing, empathizing with—the peo-
ple who actually use digital products is a great way of making sure that you’re
not just designing for yourself, based on your own assumptions, ideas, and
values. It’s important to do research before you start creating a website or an
app, but it’s just as important to continue doing research during the design
process and even after the product is launched. You want to make sure you are
considering your users’ needs at all times.
Of course, in order to be truly empathetic and diverse, you need to con-
sult a diverse user base—that is, talk to lots of different people who might
end up using your product! There’s no point in only recruiting users for
research who look and think the same way you do—they will only validate
your biased assumptions. Remember that different people also have different
ideas and experiences from each other—a straight, Black, upper-class, middle-
aged woman living in a large city will probably have different experiences,
viewpoints, needs, and goals for using a particular digital product than a low-
income, Latina grandmother of six living in a rural area. It’s also helpful to
have a diverse team of researchers and designers, so that different viewpoints
and experiences are already at the table internally throughout the product
development process.
We also need to consider our own biases as researchers, beyond making sure
we do user research with a diverse group of people. Consider:
• Are we looking for specific answers when analyzing our research and not
letting the data guide us?
• Are we being as objective as possible and truly putting our users first (and
not our own feelings) when thinking about design solutions?
• How can we make sure we are methodical and that our results are valid?
If possible, it is helpful to have multiple researchers working on the same
project, to validate insights gathered from findings. If it’s not possible to
have multiple people conducting the research (i.e., gathering data through
interviews, observations, or other methods outlined in this book), it’s
important that multiple people can analyze the data and come up with
their own conclusions—and then consolidate them as a team.
• We need to make sure to use the right methods for the questions we’re
asking and that we are nuanced in our analysis (e.g., balancing what an
interviewee tells us versus what they show us during observations in the
conclusions that we draw).
• If possible, we should validate our insights with the people we gathered
data from. Go back to your research participants and ask them if the conclusions that you drew from the study resonate with them. Why or why not?
In addition to collecting and analyzing lots of data from current and potential
users themselves, UX teams can also be deliberate in creating their designs.
Make sure to:
• Pre-empt a variety of possible issues that might arise once people start
using your product and work possible solutions into your design (this is
called proactive rather than reactive design)
• Brainstorm lots of different scenarios or contexts of use with your team
(think of the Facebook “Year in Review” example from earlier in this
chapter)
- Here, it’s important to consider what assumptions we are mak-
ing about how people will use our product. What happens if these
assumptions are wrong?
• Have an awareness of history, culture, and social issues related to your
product. No website, app, or device exists in a vacuum or is used by peo-
ple with no history
Notes
1. Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109, no. 1 (1980), www.jstor.org/stable/20024652.
2. Ibid.
3. Ibid.
4. Ibid.
5. Judy Wajcman, “Feminist Theories of Technology,” Cambridge Journal of Economics 34, no. 1 (2010): 146, https://doi.org/10.1093/cje/ben057.
6. Read more on the concept of “performativity” here: Judith Butler, Gender Trouble
(New York: Routledge, 1999).
7. Wajcman, “Feminist Theories of Technology,” 147.
8. Rena Bivens and Anna Shah Hoque, “Programming Sex, Gender, and Sexuality: Infrastructural Failures in the ‘Feminist’ Dating App Bumble,” Canadian Journal of Communication 44, no. 3 (2019), doi:10.22230/cjc.2019v44n3a3375.
9. Caitlin MacLeod and Victoria McArthur, “The Construction of Gender in Dating Apps: An Interface Analysis of Tinder and Bumble,” Feminist Media Studies 19, no. 6 (2019), https://doi.org/10.1080/14680777.2018.1494618.
10. Bivens and Hoque, “Programming Sex, Gender, and Sexuality,” 449.
11. Ibid.
12. Safiya Umoja Noble, Algorithms of Oppression (New York: NYU Press, 2018).
13. Ibid.
14. Ibid.
15. Sara Wachter-Boettcher, Technically Wrong (New York: WW Norton & Company,
2017).
16. Ibid.
17. Ibid.
22 Introduction
18. Cathy O’Neil, Weapons of Math Destruction (New York: Crown Publishing Group,
2016).
19. Ibid., 7.
20. Ibid., 20.
21. Ibid., 21.
22. Ibid., 70.
23. Ibid.
24. Mar Hicks, “When Did the Fire Start?” in Your Computer Is On Fire, eds. Thomas
S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip (Cambridge: MIT
Press, 2021), 11–25.
25. Ibid., 20.
26. Ibid.
27. Adrienne Shaw, “Encoding and Decoding Affordances: Stuart Hall and Interactive Media Technologies,” Media, Culture & Society 39, no. 4 (2017).
28. Ibid.
29. Ibid., 8.
30. Gaby David and Carolina Cambre, “Screened Intimacies: Tinder and the Swipe Logic,” Social Media + Society 2, no. 2 (2016), https://doi.org/10.1177/2056305116641976.
3 How Did We Get Here?
With each new generation, whatever is prevalent in the digital world at the time
becomes “normal” to that group. If you grew up with the internet, for instance,
having the internet feels normal. It is an expected utility in your life. Think
about younger generations; what will “normal” be for them? Now think about
your parents and grandparents. You can see why there is often conflict; their
“normal,” or what the popular technologies were when they were growing up,
is quite different from your “normal.”
These considerations are also important from a diversity and inclusivity
perspective. With each new generation, the things we continue to allow to be
“normal”—sexism, racism, ableism, etc.—will continue to feel acceptable.
However, if we work to make changes in our field (and our society!) now, we
can change the expected worldview of generations to come.
So, how did we get here? In this chapter, we will briefly (very briefly) review
how we moved from a pre-computer world, through the early computer era, and into our current computer world. We include this chapter to provide some context, but we cer-
tainly do not cover all the interesting history around UX that exists. We suggest
that you read through the sources we provide—understanding how we got here
helps us know where to go.
Before Computers
The word “computer” today is usually used to mean a desktop computer.
But, technically, a lot of things are computers—laptops, tablets, smartphones,
smartwatches, calculators. Anything that computes is a computer. The word
originally referred to people who computed. Before there were high-
powered, programmable computers to solve complex equations, people would
complete long math problems by hand, and so were called “computers.”
The job of computer was often held by women, specifically Black women.
Today we may call them mathematicians, and their job was to support research
like that which was being conducted at NASA.1 (If you read the book or saw the
movie Hidden Figures, you are probably familiar with some of these amazing
women, including Katherine Johnson and Dorothy Vaughan.) Women also played a large role in programming the very first computer, ENIAC, built
for the US Army.2 Today, however, few women complete degrees in computer
science. Studies suggest that this is due to myriad factors, including computer
and video game marketing being targeted to boys, as well as gender attitudes
and behavior within the computer science community.3
In this book, we will use the word “computer” to refer to devices like desk-
top computers, laptops, smartphones, tablets, websites, and apps. So, when we
say “before computers,” we are talking about the human experience before
the digital machines we are so accustomed to today. Of course, ancient tools
were designed to “compute” or measure mass (ancient scales), measure dis-
tance (Jacob’s staff, range finders, odometers), measure time (sundials, water
clocks), compute numbers (abacus, mesolabio, Antikythera mechanism), and
communicate (lighthouses).4 But, the first computers, as we use the word today,
were machines like the US Army’s ENIAC and IBM’s 7090.
These early computers were the size of a room and had no screens, mice,
or keyboards. Instead, a punch card system was used to input information and
output results. A person would punch a stack of cards by hand, creating what-
ever equation they were trying to solve or program they were attempting to
run. They would then hand the stack to the operator, and she would put the
stack in the queue, behind the other jobs. To begin a job, the operator would put
the stack into a hopper and push “run.” Whatever program was punched in the
cards would run (if it worked). This process could take an hour or many hours,
depending on the length and complexity of the program. Once completed, the
“answer” or output was punched out and delivered by the operator.5 Today,
this same process, which could have taken hours, is equivalent to running a
program on your laptop or phone that takes maybe a couple of seconds (usu-
ally less).
In the 1950s, 1960s, and 1970s, controls were designed solely for the purpose of operating the machine. Computers weren’t designed for people; people were merely meant to become another part of the machine. That is, the experience was not user-centric, and computers were not widely used for communication. Computers were still largely viewed as computation machines, and people were expected to adapt to them rather than the other way around. No one had computers in their homes; during this time, computers were located in businesses, government buildings, and universities.
However, some tech-oriented people began to think beyond punch cards—
and this is where things that we’re familiar with today, such as mice, icons,
monitors, and software, started being created. For instance, the mouse was
developed by Stanford Research Laboratory (now SRI) employee Doug Engel-
bart in 1965. It was a cheap replacement for a previous manipulation tool—the
light pen—which never really took off because of the awkward angle at which the user had to constantly hold their arm while using the wand-like pen. In what is now
called “the mother of all demos,” Engelbart conducted a demo of his mouse
(so named because the wire looked like a tail), as well as hypertext, objects
on an interface, dynamic file linking, multiple windows, and communication
between two people over a network with audio and video. In other words,
this demo was a very early look into a human-centered computing experience.
While the “mouse” title stuck, “bug,” which is what Engelbart called the cursor, wasn’t as popular.6, 7 You can watch “the mother of all demos” here: www.youtube.com/watch?v=B6rKUf9DWRI
The first computer advertised with a monitor was the Xerox Alto, released
in the early 1970s. It came with a keyboard and a three-button mouse as well
as an 8 × 10, sideways, television-like screen. Beyond helping consumers com-
plete tasks, the Alto was also marketed as a communication device—the first
public glimpse into computers as interpersonal and public communication
tools. Although it was largely advertised as a personal computer, it was very
unlikely that people would have one in their home—it cost $32,000!8
Formative research continued to be carried out at Xerox PARC. David Can-
field Smith coined the term “icon” in his 1975 Stanford doctoral thesis and
then went on to work for Xerox. Once there, he popularized the idea of icons as
one of the chief designers of Xerox Star, first marketed in 1981. Officially the
“Xerox 8010 Star Information System,” this computer improved upon Xerox’s
Alto, adding a two-button mouse and folders and costing about half the price
of the Alto.9 Beyond early computers, many program advancements that we
take for granted today began at Xerox in the 1970s and early 1980s. Bravo, the
first contemporary word processor, and Draw, an early drawing program, for example, were developed there.
This initial research at Xerox PARC encouraged other companies to cre-
ate more efficient computers and software. Apple began producing similar
style computers, including the Lisa and the Apple II. (Sketches of the Lisa user
interface, as it was being developed, can be seen here: www.folklore.org/
StoryView.py?story=Busy_Being_Born.txt.) Apple was also at the forefront of
creating the foundation for the contemporary, “user-friendly” software we’re
used to now. VisiCalc was the first spreadsheet program and was very similar
to software that we rely on today, like Microsoft Excel. A user could select individual cells, cells would automatically recalculate when values were changed, and labels and formulas would be suggested to the user as they typed.10
WordStar, created by Seymour Rubinstein and John Robbins Barnaby, was
similar to Bravo, but it had a much steeper learning curve due to its com-
plicated interface. However, once users understood the program, it was quite
powerful. WordStar was the foundation for immensely popular software today,
like Microsoft Word.11 Programs like WordStar and VisiCalc finally provided
a concrete reason for families to purchase personal computers for their homes.
WordStar allowed people to type letters and other correspondence, and Visi-
Calc was incredibly popular for personal budgeting. Files could be easily cre-
ated, formatted, stored, and edited.12
These early programs were the first WYSIWYG (fondly pronounced “wizzywig”), or “what you see is what you get,” software. The idea behind
WYSIWYG software is that the user sees no source code, just the end product.
In other words, what the user sees is what the product would look like if it were
to be printed out on a piece of paper. This was a huge change, one that paved the way for the mass adoption of digital computing technologies, the digital tools with which we are familiar today. When the program no longer
requires users to have deep knowledge of programming languages, comput-
ers can finally make their move into households (as long as the hardware is
affordable!).
The WYSIWYG ideal also led to users becoming further and further
removed from fully understanding the mechanisms behind the digital infra-
structures that they trust and fold into their daily lives. WYSIWYG technolo-
gies are often referred to as “user-friendly.” What this really means is that
users (those assumed not to be experts) need less and less knowledge of
how digital programs, websites, and apps actually function. In other words,
most current UX research does not focus on the background processes that are
making apps and websites function the way we expect. Often, in fact, pleasur-
able user experiences include not having to think at all about how an app or
a website works. Many argue that users have become so far removed from
technologies’ inner workings that they are easily misled, exploited, and taken
advantage of, as we explore in Chapter 2. In this book, we hope to inspire you
to look deeper into the digital world and to help your participants and users
become more digitally literate and critical as well.
Notes
1. “Katherine G. Johnson,” NASA, May 25, 2017, www.nasa.gov/feature/katherine-g-johnson.
2. “ENIAC Programmers Project,” ENIAC Programmers, 2021, http://eniacprogrammers.org/.
3. Eileen D. Bunderson and Mary Elizabeth Christensen, “An Analysis of Retention
Problems for Female Students in University Computer Science Programs,” Journal
of Research on Computing in Education 28, no. 1 (1995): 1–18.
4. Cesare Rossi, Flavio Russo, and Ferruccio Russo, Ancient Engineers & Inventions
(Dordrecht: Springer, 2009).
5. Steven Lubar, “ ‘Do Not Fold, Spindle or Mutilate’: A Cultural History of the Punch
Card,” Journal of American Culture 15 (1992).
6. Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,”
Interactions 5, no. 2 (1998).
7. William K. English, Douglas C. Engelbart, and Melvyn L. Berman, “Display-
Selection Techniques for Text Manipulation,” IEEE Transactions on Human Fac-
tors in Electronics 1 (1967).
8. Steven K. Roberts, “The Xerox Alto Computer,” BYTE Magazine, September 1981, https://archive.org/details/byte-magazine-1981-09/page/n59/mode/2up.
9. “Xerox 8010 Star Information System,” National Museum of American History, accessed July 18, 2021.
10. Owen W. Linzmayer, Apple Confidential 2.0: The Definitive History of the World's
Most Colorful Company (San Francisco: No Starch Press, 2004), 14–15.
11. Thomas J. Bergin, “The Origins of Word Processing Software for Personal Com-
puters,” PC Software: Word Processing for Everyone (2006): 32.
12. Linzmayer, Apple Confidential 2.0.
13. Ibid., 109–14.
14. Ibid.
15. Rob Siltanen, “The Real Story Behind Apple’s ‘Think Different’ Campaign,” Forbes, December 14, 2011, www.forbes.com/sites/onmarketing/2011/12/14/the-real-story-behind-apples-think-different-campaign/?sh=41ac8a6362ab.
16. Mitchell Kapor, “A Software Design Manifesto,” in Bringing Design to Software,
ed. Terry Winograd (New York City: ACM Press, 1996).
17. Jakob Nielsen, “A 100-Year View of User Experience,” Nielsen Norman Group,
December 24, 2017, www.nngroup.com/articles/100-years-ux/.
4 Interfaces and Navigation
In 1983, Time Magazine switched up its usual Person of the Year: instead of
naming a person, the magazine named the computer the “Machine of the Year”
and stated, “The Computer Moves In.” As discussed in Chapter 3, computers
were certainly around before 1983, but the idea that computers were made with
the human experience in mind, especially how we view them as social utilities
today, began in the 1980s.
Key components for the idea of user experience lie in interfaces and naviga-
tion. In this chapter, we first outline what interfaces are, breaking them down
into three main types—user interfaces, Advertiser APIs, and Developer APIs.
We then provide brief definitions of a list of popular navigation tools. Although
as a UX researcher you are not designing interfaces and navigational tools per
se, these basics are foundational knowledge for conducting effective user expe-
rience studies of apps and websites.
Understanding interfaces and navigation is also useful if you are conducting
research in academic fields like Media and Communication Studies. Often,
when you are attempting to understand any space in which, or through which,
people communicate, it is important to understand the context. How digital
spaces like apps and websites are designed (as discussed in Chapter 2) pro-
vides specific scripts, constraints, and limitations for users to work in. While it
may at first seem that groups of people just “are” a certain way online, it could
actually be, and is likely, the case that something about the interface is priming
or prompting them to act in ways that reify stereotypes. Being able to speak
the language of human-computer interaction as well as recognizing the func-
tionalities of digital spaces is the first step to conducting critical and inclusive
studies that help us better understand digital communication phenomena.
Interfaces
An interface is where two different systems meet and communicate with one
another. Seismologists study the interfaces between tectonic plates and how
vibration is communicated between them. Physicists working in optics con-
sider the interfaces between the material of a lens and the air and how light
is communicated between them. As a UX researcher, you will consider the
interfaces between human users and the systems (applications) they are using,
and how information is communicated between them.
As a verb, “to interface” is to communicate across this boundary. At the
core, this is what a computer interface does: it allows two or more people
or things to interface through some connection and interaction between hard-
ware, software, and users. Users communicate with the software and then the
software communicates with the hardware and perhaps some other software.
A good example might be a user’s interaction with their email. In order to
access an email message, a user might interface with an email application via
a web browser, then that application might, in turn, interface with a separate
database to access the content of the message. Here, a user communicates with
different types of software and hardware in one interaction.
While we often think of humans interfacing with technology, computers
also interface with other computers with little or no outside human interac-
tion. In some cases, these computers are embedded into devices, or even into
animals (think microchips) and people, in order to allow them to communicate
with each other. Today this network has come to be known as the “internet
of things” (IoT).1 The IoT consists of smart devices that are programmed to
collect and share data. This collection and sharing do not require any direct,
conscious, human-to-computer, or human-to-human interaction. For instance,
a person may have a heart monitor implanted that automatically sends data
to their doctor without any human intervention, or a piece of manufactur-
ing machinery might have sensors that report its status or location to other
machines.2
Hardware interfaces include components like plugs, sockets, and cables.
You may be familiar with a USB socket or an ethernet cable. The USB socket
is a hardware component that allows your flash drive, for example, to inter-
face with your computer’s operating system. Software interfaces are things
like languages, codes, and operating systems, including Windows, Mac, and
Linux. User interfaces include pieces like keyboards, mice, commands, and
menus. A user interfaces with a mouse by moving it around a screen and click-
ing on the desired options. This then triggers some other action to happen, like
opening a file or program, showing that the software and hardware are also
interfacing with one another. All interfaces have some structure that includes
both how the data move through them and an implied function that links to
what that interface actually does.3
Interfaces that have been made specifically for the user, to allow for and
enhance visual user experiences, are called graphical user interfaces (GUIs).
GUIs are the visual representations of the interface between user and com-
puter. Examples are the menus and icons that make up a computer’s
desktop. The user interacts with these components in order to communi-
cate with the operating system and to perform actions on the desktop.4 Prior
to the development of GUIs, users could still perform most of the same
tasks but would interface instead via the command line interface, or CLI.
This is still common in the domains of system administration and software
development, where little is gained by adding the computing overhead of a
GUI. Instead, in these fields, the end goal is to have the system or software
manage itself, and since computers can interface more easily with text com-
mands than pointing devices and clicks, a GUI is not only unnecessary but
often unwanted.
In addition to GUIs, there are voice user interfaces (VUIs) and gesture-based interfaces. VUIs, as the name suggests, are controlled by voice commands instead of other human input like a keyboard stroke or a mouse click.
Popular VUIs include Siri and Alexa. By saying the “wake command” (“Hey
Siri” or “Alexa”) to the Apple or Amazon device, the program “listens” and
attempts to perform the task for which the user has asked.
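To make the wake-command idea concrete, here is a minimal sketch in TypeScript. The wake phrase, the transcript, and the handler logic are all hypothetical; this is not how Siri or Alexa are actually implemented.
// A toy sketch of wake-word behavior: audio is ignored unless the transcript
// starts with the wake command; the remainder is treated as the request.
const WAKE_WORD = "hey assistant"; // hypothetical wake command

function handleRequest(request: string): void {
  // A real assistant would parse the request and attempt the task here.
  console.log(`Trying to handle: "${request}"`);
}

function onTranscript(transcript: string): void {
  const text = transcript.toLowerCase().trim();
  if (!text.startsWith(WAKE_WORD)) return; // not addressed to the device: do nothing
  handleRequest(text.slice(WAKE_WORD.length).replace(/^[,\s]+/, ""));
}

onTranscript("Hey assistant, set a timer for ten minutes");
// -> Trying to handle: "set a timer for ten minutes"
The design point is the same one end-users experience: the device appears passive until the wake command marks an utterance as being addressed to it.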
Gesture-based interfaces are most prevalent in virtual reality (VR) and aug-
mented reality (AR), where a user performs certain body movements to elicit
the connected action.5 A user playing a VR video game might slash with their
empty hands in order to swing a sword in the game. Everyday items are also
beginning to include gesture-based interfaces. For instance, in an effort to pro-
mote safe driving, BMW has included gesture-based interfaces in their cars. If
a driver takes one finger and spins it around, the volume of the radio or GPS
will turn up and down. Or, if the driver puts up two fingers, like a peace sign,
the touch screen turns on and off.6
GUIs
GUIs arguably were first invented at the Xerox Palo Alto Research Center.
In the late 1970s, a team there designed what would become the Xerox Star—software designed to run on a series of personal computers—but the interface was much too slow and it was not commercially successful. It just so happened, however, that Steve Jobs visited
Xerox during this time and saw Xerox Star. Jobs went back to Apple, hired the
original designers of Xerox Star, and created Apple Lisa. This product was also
not successful. But, in 1984, Apple developed the successful Apple Macintosh,
still based on the general Xerox Star vision. Mac’s GUI set the tone for the
look and feel of GUIs used today.10 (We cover a bit more computer history in
Chapter 3.)
The Apple Macintosh was advertised as the computer “for the rest of us.”
The GUI included menus, icons, point-and-click, and mouse-driven process-
ing. It also limited users to contextually correct options—this means that, once
a user makes a selection, the menu limits what can happen next, ensuring that
the options are based on the previous selection.11 For example, if in the main
menu the user selects “volume options,” only sub-options relating to volume
will be visible. This type of technology seems quite fundamental today, and
it is likely you don’t even notice the behind-the-scenes process. But this type
of design is crucial to usability and good experience—and has not been the
standard for that long!
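As a rough illustration, here is a minimal TypeScript sketch of this kind of contextually correct menu; the menu structure and option names are hypothetical.
// A minimal sketch of a "contextually correct" menu: once the user makes a
// selection, only sub-options relevant to that choice are offered next.
const mainMenu: Record<string, string[]> = {
  "volume options": ["mute", "increase volume", "decrease volume"],
  "display options": ["brightness", "contrast", "resolution"],
};

function subOptionsFor(selection: string): string[] {
  // Anything unrelated to the previous selection is simply never shown.
  return mainMenu[selection] ?? [];
}

console.log(subOptionsFor("volume options"));
// -> ["mute", "increase volume", "decrease volume"]
The user can never reach an option that makes no sense in context, which is exactly the behind-the-scenes constraint described above.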
Today, GUIs are best made when keeping three criteria in mind: the visceral (how the design looks and feels), the behavioral (how well it functions in use), and the reflective (what using and owning it means to the user).7
Types of Interfaces
To understand GUIs, it is important to realize that there are different types of users. We often employ the word “user” to describe the more formal category of “end-user.” End-users are the everyday people who are expected to use the product. These outward-facing apps and websites are officially labeled end-user interfaces, or EUIs.
EUIs are generally all that most users of apps and websites see. If you open
an app on your phone or go to the main URL for any site, you are accessing
the EUI. It is deceptively easy to imagine that all that exists is the EUI, largely
because it is all most users directly experience and because most users seem-
ingly have no reason to view, or even really know about, other GUIs.
Although most people use the word “user” in place of end-user, there are
two other main types of users—advertisers and developers. The interfaces for
these two groups are usually referred to as application programming interfaces,
or APIs. This is because APIs are interfaces that provide slightly more backend
access to apps and websites. APIs are where certain types of content are cre-
ated for end-users. This content will become part of EUIs through components
like targeted content and third-party apps.
Advertiser APIs are commonly referred to as dashboards. These dashboards
are generally accessible to anyone—as an end-user, it is possible to open an advertiser API and see the tools that advertisers use to create targeted content on apps and websites. Advertiser APIs are specifically designed to provide
pleasurable experiences for those creating ads. Figure 4.1 shows a screenshot
of Google Ads’ dashboard.
Developer APIs are areas for third-party developers to utilize components of
an app to create their own programs. For example, the Facebook developer API allows third-party programmers to tap into the data Facebook makes available to software developers to create apps and games. You most likely have encountered a
login page that asks you to log in with your Facebook or Google username and
password. This is an example of that app or website utilizing Facebook’s and
Google’s APIs. Instead of the smaller app having to program the complicated, and of course important, task of setting up a secure, encrypted connection,
the programmers leverage Facebook’s and Google’s expertise and well-tested
code. Of course, Facebook and Google get something out of the deal too—the
companies are now privy to data created through this connection, which feeds back into their models and helps increase profits.
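For a concrete sense of how such a “log in with” flow begins, here is a hedged TypeScript sketch of the first step of an OAuth-style authorization request. The endpoint URL, parameter values, and app names are illustrative placeholders, not Facebook’s or Google’s actual API details.
// Sketch of building the redirect URL that sends a user to the platform's
// login screen. Everything below is a generic OAuth-style illustration.
const AUTH_ENDPOINT = "https://identity.example.com/oauth/authorize"; // hypothetical

function buildLoginUrl(clientId: string, redirectUri: string): string {
  const params = new URLSearchParams({
    client_id: clientId,       // identifies the smaller third-party app to the platform
    redirect_uri: redirectUri, // where the platform sends the user back afterward
    response_type: "code",     // ask for a short-lived authorization code, not a password
    scope: "basic_profile",    // the minimal data the third party is requesting
  });
  return `${AUTH_ENDPOINT}?${params.toString()}`;
}

console.log(buildLoginUrl("my-app-id", "https://smallapp.example.com/callback"));
The design point is that the third-party app never sees the user’s password; it only receives a code it can later exchange for limited, user-approved data, leveraging the larger platform’s well-tested security code.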
Although popular UX research discussions are aimed at end-user experi-
ences, UX research is also conducted to ensure that advertisers and developers
are provided with spaces that are usable and pleasurable. While developers,
and even advertisers, often use lower-level interfaces to interact with APIs,
software platforms have recently made available GUIs in the form of dash-
boards for developers. These dashboards often allow developers to see what
parts of the API they are using and turn functionality on and off. An example
of Twitter’s developer API dashboard can be seen in Figure 4.2.
Navigation
An important piece of any GUI is the way a user can navigate through your app or website. Knowing the different tools available to provide users with
great navigation is useful in UX research, even if actually designing them usu-
ally falls under the job of a UX designer or engineer. This section will intro-
duce you to a short list of popular navigational tools.
Breadcrumbs are named as such because they are like the breadcrumbs you
drop to leave a trail so that you can find your way back. Breadcrumbs are visual
representations, usually of simple page names, that let you know where you are
at any time on a website (and sometimes in apps too). Breadcrumbs certainly
help with findability, but they also help users not feel lost; they can always
get back to a previous tier or remember why they are on the page they are on.
Figure 4.3 shows a screenshot of Amazon’s app EUI. At the bottom, you can see the breadcrumbs provided; they let the user know that they are in the “Cooking Methods” section of the “Cookbooks, Food, and Wine” category.
More broadly, the user is in the “Books” section of Amazon.
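As a minimal sketch, a breadcrumb trail is little more than the user’s current location rendered as the path of categories leading to it. The TypeScript below is simplified; the category names mirror the Amazon example above, and the rendering is hypothetical.
// A breadcrumb trail rendered as a single line of linked category names.
const trail: string[] = ["Books", "Cookbooks, Food, and Wine", "Cooking Methods"];

function renderBreadcrumbs(path: string[]): string {
  // In a real interface, each crumb would be a link back to that tier.
  return path.join(" > ");
}

console.log(renderBreadcrumbs(trail));
// -> Books > Cookbooks, Food, and Wine > Cooking Methods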
Tooltips are brief descriptions that explain what a tool or button does. Toolt-
ips are always hidden on the default GUI and can be seen by hovering over
the tool or clicking/tapping. Tooltips save a lot of room and reduce clutter because the wordy instructions stay hidden. This is also nice for experienced users, who don’t want to constantly see instructions they don’t need, while new users, and users who have forgotten what certain functionalities do, can still get the help. Figure 4.4 shows a
screenshot of a tooltip in Google Drive. In the search bar, there is a small icon
that looks like some sort of generic settings. But, when the user hovers, they
get the description “Search options,” reminding the user what will happen if
they click that icon.
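Here is a bare-bones sketch of that show-on-hover behavior in TypeScript; the element id and the description text are made up for illustration.
// A minimal tooltip: the description stays hidden until the user hovers over
// the icon, and disappears when the pointer leaves.
const icon = document.getElementById("search-options-icon"); // hypothetical id

icon?.addEventListener("mouseenter", () => {
  const tip = document.createElement("div");
  tip.id = "tooltip";
  tip.textContent = "Search options"; // the brief, otherwise-hidden description
  document.body.appendChild(tip);
});

icon?.addEventListener("mouseleave", () => {
  document.getElementById("tooltip")?.remove();
});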
Coach marks are like tutorials that individually highlight certain functionalities that the app or website designers think will be useful. Often, coach marks will appear when you have opened an app for the first time, when a new tool is
added, or when you haven’t logged into a site for some time. Figure 4.5 is a
screenshot of TED Talks’ app. Upon opening the app for the first time, a coach
mark points out to the user that if they click the highlighted icon in the upper-
right corner, they can cast the video to their TV.
Sliders are lists that can move horizontally and that provide multiple options.
Sliders allow users to see many options while saving space and decreasing
clutter on a page. In addition, a slider acts as a menu, while the user stays on
their current page. In Figure 4.6, a slider is shown on YouTube with suggested
video content categories.
Popovers are similar to pop-ups (external ads or other content that “pop up” or appear on top of the web content a user is trying to view) but are built
into an app or a website. When a popover appears, the content the user had
been viewing is still visible behind the popover. Popovers are used to get
a user’s attention, providing them with something you want them to see or
do, over and above what they’re currently browsing. For example, in Fig-
ure 4.7, a screenshot of the babysitting app Bambino, a popover asks if the
user wants to log in to Facebook for a more personalized experience. Popo-
vers are usually easily dismissed by simply clicking/tapping outside of the
popover box. As seen in Figure 4.8, a screenshot from TED, popovers can
also be controls. As with the first example, the content can still be seen in the
background. In this example, however, the popover controls are themselves partially transparent, in an effort to interfere with the video as little as possible.
Sidebars are hidden menus that can slide out and quickly show the user a
list of options that otherwise would add clutter and be less aesthetically pleas-
ing. In Figure 4.9, you can see eBay’s app when a user first logs in. Typi-
cal menu options like “Saved,” “Buy Again,” and “Purchases” are seemingly
missing. However, when the user clicks/taps on the three horizontal lines—
fondly known as the “hamburger” because the icon looks like a crudely drawn
burger—the sidebar slides out and provides easy access to multiple options.
Figure 4.9 The eBay app’s initial login view before and after tapping the “hamburger” and revealing the sidebar.
Of course, there are many more navigational tools, and more are being introduced as the world goes increasingly digital. This is just a small sampling of popular examples, but hopefully they begin to get you
thinking about how navigation and interfaces are key to user experience. We
hope that being aware of these different design components can lead you to
conduct effective studies that are also inclusive, keeping users with different
backgrounds and abilities in mind.
Figure 4.9 The eBay app’s initial login view before and after tapping the “hamburger”
and revealing the sidebar.
Notes
1. “Interface,” PCMag, accessed July 18, 2021, www.pcmag.com/encyclopedia/term/
interface.
2. “Internet of Things (IoT),” IoT Agenda, accessed July 18, 2021, https://internetof
thingsagenda.techtarget.com/definition/Internet-of-Things-IoT.
3. “Interface.”
4. “What Is User Interface Design?” Interaction Design Foundation, accessed July 18,
2021, www.interaction-design.org/literature/topics/ui-design.
5. “What Is User Interface Design?”
6. Matt Burns, “BMW’s Magical Gesture Control Finally Makes Sense as Touch-
screens Take Over Cars,” November 4, 2019, https://techcrunch.com/2019/11/04/
bmws-magical-gesture-control-finally-makes-sense-as-touchscreens-take-over-
cars/?guccounter=1.
7. Don Norman, “Emotion & Design: Attractive Things Work Better,” Interactions 9,
no. 4 (2002).
8. Ibid.
9. Ibid.
10. Bernard J. Jansen, “The Graphical User Interface,” ACM SIGCHI Bulletin 30,
no. 4 (1998).
11. Ibid.
12. Ibid.
Section Two
Preparing Your Study
5 The Design Thinking Mindset
The best studies follow a particular method to ensure rigor and valid results.
Perhaps you have already learned about the scientific method: a linear process
that begins with a question, dives into a specific research area, hypothesizes
what the findings may be, tests those hypotheses with experiments, analyzes
the data, interprets the findings, and reports the results. The scientific method
is most often taught in the lab sciences, is viewed as objective, and should
produce results that are replicable and generalizable.
You may have also learned about more qualitative research that follows a
similar, linear pattern. But, this type of research typically starts with an open
research question instead of hypotheses and employs more “qualitative” meth-
ods that are not experiments but still gather data—interviews, observations,
surveys, focus groups, and so on. These “pseudo-scientific” studies are some-
times viewed as less objective, less generalizable, and less replicable. Yet, they
are still rigorous, valid, and highly valued in academic communities for under-
standing cultural norms, smaller groups of people, and snapshots in time.
In the hard sciences and academia alike, it isn’t so much that one type of
research method is better than another, but that one is more fitting for the
context, the type of questions you want to answer, and the types of data you
can collect. In fact,
the scientific, “objective” method has been used to “prove” many racist, sexist,
and biased theories including hard “scientific” studies that were used to sup-
port white and male superiority. It is generally argued that scientific research
often takes on dominant viewpoints, sometimes without the researchers even
realizing it.1
What “science” and “scientific research” are is a timely topic as well. Since
the early 2000s, there has been an ongoing “replication crisis.” Meta-studies
have shown that a large share of scientific studies cannot be reproduced.
This is mainly an issue in the social sciences and medicine,2 but studies have
also shown it to be a problem in the natural sciences.3 Thus, our goals in this
textbook do not include attempting to fit into some “traditional” definition of
scientific research. Instead, we adopt the Design Thinking Mindset to conduct
rigorous and relevant research that considers people with different experiences
and backgrounds. The Design Thinking Mindset takes an empathetic approach
to suggesting data-driven changes to apps and websites, but it is also a helpful,
innovative process for thinking through academic research studies.
IDEO, the design firm that popularized the approach, defines Design Thinking
as “a human-centered approach to innovation that draws from the designer’s
toolkit to integrate the needs of people, the possibilities of technology, and the
requirements for business success.”5
Let’s break that down. First, a human-centered approach is one that takes into
consideration diverse users, varying usage contexts, and inclusive end-user
interfaces. For our purposes, we may consider this a user-centered approach.
But, we want to be sure to not lose the human aspect; only viewing users as
objects to be studied is problematic because it is easy to fall into the trap of
thinking of users as robots that use products similarly to each other and in a
vacuum. Users should be thought of as holistic humans—with different con-
texts, different experiences, and different personalities—all of which shape their
user experience.
As a UX researcher, it is important to remember that you are not the user!
Your experiences cannot be considered representative or generalizable. Instead,
the design choices you make should be based on what you learn from users.
Being human-centered really includes two main things: (1) You design for
the actual people who use your product, could use it, or whom you would
like to use it, and (2)
you conduct studies and suggest changes that take into consideration diverse
and intersectional identities.
Second, drawing from a toolkit to combine what users need and what tech-
nologies can actually do is essential. Here we are talking about keeping on
brand and not suggesting changes or conducting studies that are not in line
with the app’s or website’s mission or that are not feasible from an engi-
neering standpoint. There is nothing worse than a UX researcher who isn’t
aware of the complete process—not just the research they are conducting but
the overall capabilities of the product and what the UX designers, engineers,
programmers, and other members of the team have as goals, pressures, and
constraints.
As for the third part of that definition—business success—even though you
are the UX researcher, that doesn’t mean you can completely step out of a busi-
ness mindset. Everything you do must, in some way, consider the success of
the app or website. Of course, success can be defined in a myriad of ways: Is
the company publicly traded? Privately owned? Non-profit? Are there specific
timelines? Expectations? Goals to be reached? Initiatives to focus on? When
you begin working for a company or on a product, it is important to quickly
learn what type of work is needed that can both provide users with better expe-
riences and help the business remain, or become, successful.
These three parts of the Design Thinking Mindset definition are often
referred to as desirability, feasibility, and viability (note how these catego-
ries overlap with the seven facets of good UX in Morville’s honeycomb from
Chapter 1). Products must be desirable to users, but they must also be techno-
logically feasible and economically viable.6 Tim Brown, a Design Thinking
pioneer, explains this way of thinking in his TED talk. You can watch the video
here: www.youtube.com/watch?v=UAinLaT42xY
Empathize
UX studies typically begin with the Empathize stage, which is all about under-
standing the user. As human-centric researchers, we should always be con-
cerned with who users are, what they care about, and what they need. The
Empathize stage is yet another reminder that, as a researcher, you are not the
user! This can be especially difficult to remember when you are studying an app
or a website that you use every day or have a lot of experience with. It is very
easy to rely on your own experiences or stories from friends when making
choices in the research process. But these are not data, just anecdotal evidence
that is not really representative of all people. Thus, to empathize is to deeply
understand who your users are and what they care about.
This first stage in the Design Thinking process relies on your ability to
engage with users, discovering their thoughts and values. Of course, these
thoughts and values may not look like yours; they may even directly clash
with how you view the world and live your life. But, to empathize isn’t to
agree, just to understand. It isn’t your role as a researcher to force your own
thoughts or only attempt to understand those with whom you agree or share
similar experiences.
During the Empathize stage, you use methods that allow you to observe,
engage, watch, and listen. The goal is to let users be in their own context as
much as possible, sharing stories and experiences with you that provide insight
into how they use the technology, what they like about it, what they find frus-
trating, who they are, and what they need, both from the app or website and
generally as a human. Methods often used in this stage include emotional jour-
ney mapping, contextual inquiry, breakup/love letters, and screenshot diaries.
Empathize stage methods can be used in academic research for studies that
aim to understand people’s subjective experiences and emotions, particularly
when researchers are interested in knowing more about specific groups in
specific contexts. In traditional Communication and Media Studies research,
Empathize methods are qualitative methods such as interviews and focus
groups—methods that focus on attitudes and feelings, rather than behaviors.
Research in the Empathize stage is similar to exploratory research in the aca-
demic paradigm, in that there typically isn’t a very focused research question
guiding the research—it is more about exploring what is out there that can lead
to a more defined project down the line. Parallel to this, in the Design Thinking
process, Empathize is followed by the Define stage.
Define
In the Define stage, you are already aware of the users’ needs, desires, and pain
points because you have discovered them through some Empathize method(s).
Now, you must focus and clarify your findings. The Define stage is particularly
important because it is when, as the researcher, you start to understand
the research problem and create a concise and clear Problem Statement. (In
the academic research process, this would typically be the first stage of the
research process—deciding on the research question or hypothesis to guide
your study—unless the study is exploratory.)
A Problem Statement is a brief, one- or two-sentence description of the
user experience issue that you will be researching, collecting data about, and,
eventually, proposing a solution for. A good Problem Statement focuses on
one small but prominent frustration or misconception that users have with
your app or website. For example, a Problem Statement like “Facebook isn’t
cool anymore” is much too broad and assumes too many things. It is trying to
be everything for everyone. Instead, a Problem Statement may be something
like “Users can’t find Facebook’s Most Recent button” or “Users don’t think
Facebook is doing enough to fight fake news.” These statements are clear and
small, and not only speak to users’ desires but also take into consideration
feasibility and viability. Other stakeholders in the company can see how mak-
ing small changes to the end-user interface and usability upgrades (making a
button more visible) or showing the organization is interested in contemporary
issues like fighting fake news (by, for example, giving more reporting options)
is in line with the company’s mission and helps them continue to be a success-
ful business.
A Problem Statement is necessarily guided by the analysis of the data col-
lected from your Empathize method(s), and it is often also informed by pre-
vious research that has focused on similar issues, ensuring that you, as the
researcher, have the background information and most contemporary methods
to best understand the problem. Methods often used in this stage include prob-
lem trees and personas. Technically, these are more analysis techniques than
research methods in the traditional sense, as they don’t involve collecting new
information. Typically, in the Define phase, different team members (includ-
ing designers, engineers, and other stakeholders) will brainstorm the Problem
Statement together. Here, not only is it important to consider what you discov-
ered about the needs of users but it’s also crucial to talk to other stakeholders
(both internal and external to the organization) to make sure their needs and
desires for a particular product or feature align with users’ needs.
The Define stage as a whole is closest to the research proposal stage in aca-
demia, where the researcher conducts a literature review (of others’ research
findings) to understand what work on the topic was completed previously and
to formulate research questions to answer or hypotheses to test. Again, this
is the key difference between Research Questions and Problem Statements.
Research Questions typically come first in an academic research process,
whereas Problem Statements are typically generated from prior Empathize
research in UX industry work.
Ideate
The third stage of Design Thinking is Ideate. After creating a Problem State-
ment and researching some background information, you can focus on generat-
ing ideas on how to make the users’ experiences more enjoyable and inclusive.
You have identified a problem; now it is time to think about solutions. In this
stage, the goal is to begin thinking about what changes could be made to an
interface or a process. The Design Thinking Mindset challenges researchers
to not just do what has always been done or what is easiest. Instead, methods
are employed that help researchers to think “outside of the box,” dreaming up
solutions that speak to the data collected. In the Ideate stage you are not trying
to find the one, perfect solution—just a lot of different possibilities!
The Ideate stage is unique because it includes external as well as internal
methods. This means that some Ideate methods are completed only by those
working within the company (internal)—such as UX researchers, design-
ers, engineers, product managers—while other methods include recruited
participants (external). Methods often used in this stage include cognitive map-
ping, darkside writing, and card sorting.
Ideate methods can be used in academia whenever brainstorming is needed
or when academics are working on an applied project, where they provide rec-
ommendations. For example, Ideate methods can be used to think through how
a particular technology or communication process can be made more inclusive
to diverse groups of people. Ideate methods are also great activities to be com-
pleted by participants taking part in focus groups, to help researchers observe
how people come up with solutions to problems. Ideate methods are also valu-
able in participatory academic research, where research participants take an
active part in the research, brainstorming solutions together with researchers
for the ultimate purpose of solving a problem in their communities.
Prototype
In the fourth stage of Design Thinking, Prototype, UX designers typically take
over for the researchers. Here the goal is to begin to make new versions of
tools, buttons, interfaces, and so on based on the previous three stages—that is,
you are actually creating the solution to the problem that you brainstormed in
the Ideate phase. A prototype is a mock-up or a replica of what a new product
or an altered app or website would look like and how it would function. There
are different stages of prototyping, ranging from low-fidelity “rough drafts”
that can be as simple as pen-and-paper sketches (these are sometimes called
sketches, mock-ups, or wireframes if they show predominantly layout) to high-
fidelity, interactive, computer-generated renderings of the UI, built using soft-
ware like Figma and Adobe XD.
Prototyping does not have a similar process or stage in academic research,
largely because academic research is usually not applied research. In other
words, it is not typically about creating a tangible product or recommending
solutions. However, as with Ideate methods, academic researchers can use
prototypes as part of their research activities to better understand how partici-
pants would interact with an imagined technology. We believe that attempting
to incorporate prototyping activities into Communication and Media Studies
would lead to less isolated, more collaborative, and more creative research. For
example, researchers could create a prototype of a new social media platform
and ask focus group participants how they would interact with it. This type
of research gives tangible insights into how participants move through digital
spaces and the ways in which tools and functionalities may privilege certain
people or certain uses over others.
Test
The final stage of Design Thinking is Test. Testing involves asking participants
to engage with the created prototypes, whether those are prototypes of completely
new technologies or prototypes of improvements and changes to existing
technologies. In this stage, you’re checking whether the product meets the
needs of the user or what versions of the product provide the best user experi-
ence. Testing allows you to isolate pain points, that is, areas of the design that
the user has problems with. For instance, maybe navigating to the home page
from somewhere else in the website is not intuitive because the “home” button
is not easily findable. Methods often used in this stage include A/B testing and
usability testing.
Testing methods, when used as part of academic research, are particularly
useful for researchers concerned with cultural power imbalances and acces-
sibility. Testing existing technologies, websites, and apps with diverse users
can provide empirical evidence for accessibility issues and power imbalances
around identity that are baked into technologies. This can help show that digi-
tal spaces are often designed for a very narrow range of people with certain
characteristics (often able-bodied, white, middle-class men) and exclude
others by their very design.
To reiterate an important tenet of Design Thinking: unlike most academic
research methods, the Design Thinking process is not meant to be linear.
Instead, projects should be able to move through the five stages organically,
allowing new data and interpretations to guide researchers to the appropriate
next stage. The researcher can return to any previous stage to which the data
lead. The collaborative, iterative, and nonlinear nature of the Design Think-
ing Mindset is certainly different from the traditional research methods you
have probably learned. It straddles the line between inclusive, human-centered
research and business-minded and successful product development.
To see how these five stages work together in practice, let’s walk through a
hypothetical project: imagine you are building a brand-new social media
platform.
Empathize
As the UX researcher, you would start off with the Empathize stage, asking
users about their needs, desires, and pain points. Because you haven’t devel-
oped your platform yet, you might seek out current engaged users of social
media and talk to them about their experiences with and thoughts on social
media. You would probably conduct some interviews, do some contextual
inquiry (observe how users interact with current platforms and ask them about
their use), and map out their emotional journeys when using social media.
You would ask questions not only about their current experiences and pain
points (what’s annoying about current social media apps) but also about an
ideal future scenario. What would people ideally want from a perfect social
media platform? What would make them choose your platform over others that
exist in the market?
Define
From your Empathize research, you might discover that people love interact-
ing with their friends on social media but wish they could feel more “present”
during the interactions. On the basis of this knowledge, you would create a
problem tree, aiming to get to the root of the problem of perceived lack of pres-
ence (for more details on problem trees, see Chapter 14). Through your work
on the problem tree, you might decide that the cause of the problem is the limi-
tations of the screens that are typically used for social media—they are small
and 2D. Here, you would define your Problem Statement as: Social media apps
limit the feeling of presence in online interactions because of screen limita-
tions. Now you know what problem you are trying to solve through the devel-
opment of your new app.
Ideate
In this stage, you would get the team together and brainstorm some ideas on how
to overcome the problem. You could, for instance, organize a darkside writing
exercise (more on this in Chapter 16), asking team members to come up with
the very worst possible social media platform for present interaction (e.g., one
that just lets you text with limited characters, no photos, no videos, no calling
through the app) and flip the “worst idea” statement for possible solutions. Let’s
say that at this stage you come up with the idea to incorporate virtual reality
(VR) into your social media platform, so that users could feel like they’re inter-
acting with their friends in the same space at the same time via the technology.
If following the Design Thinking process linearly, we would move on to
Prototype. However, as the researcher you have yet to really explore virtual
reality options and how diverse users respond to and employ virtual reality func-
tionalities. So, instead of just moving blindly into the Prototype stage, you
could loop back to Define, learning more about contemporary virtual reality
applications.
Define (Again)
Returning to the Define stage, you don’t necessarily need to create a new prob-
lem tree or problem statement. However, it is time to conduct a little research
about virtual reality in social media spaces. Back in the Define stage, you can
research academic articles as well as industry blogs and UX pages like nngroup.
com and interaction-design.org. Similar to a literature review, you can begin
to catalog what has been done before and how other companies have added
virtual reality to their social media platforms.
Ideate (Again)
Now that you are better acquainted with virtual reality trends, you can bring
this information back to the Ideate stage and brainstorm possible changes that
specifically employ virtual reality functionalities, guided by what you have
learned from your second Define stage research. For example, you might
decide that adding holograms or 3D cartoons of friends on social media would
be enough “virtual reality,” or you might play around with the idea of sending
out free VR goggles to everyone who signs up for your platform. Now that you
have some ideas, you are finally ready to prototype!
Prototype
During this stage, the UX designers, working with you as the UX researcher
(and your findings), would create some mock-ups of what this social media
platform would look like. Obviously creating a prototype of a virtual real-
ity environment is hugely technically complicated. So, start small. Choose
one idea from your Ideate findings, such as the idea that your users might
enjoy creating 3D cartoons of themselves that could then talk to friends using
their phones’ microphones instead of just using text or 2D video chat. The UX
designers would create a working prototype of this 3D cartoon feature and
send it to you to then move to the final stage, Test.
Test
In the Test phase, you are finally ready to see how your users respond to the
changes you think best solve the Research Problem. In this example, you may
decide to conduct some A/B testing with different sizes of 3D cartoons or
with holograms versus 3D cartoons, to see which version prospective users
would prefer.
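Mechanically, an A/B test comes down to randomly assigning each participant one variant and tallying the outcomes. Here is a minimal sketch in TypeScript (ours, illustrative only; the variant names echo the example above, and the function names are hypothetical):

    // Randomly assign each participant a variant, then tally preferences.
    type Variant = "3D cartoon" | "hologram";
    const variants: Variant[] = ["3D cartoon", "hologram"];
    const tally = new Map<Variant, number>([["3D cartoon", 0], ["hologram", 0]]);

    function assignVariant(): Variant {
      // Uniform random assignment keeps the comparison between variants fair.
      return variants[Math.floor(Math.random() * variants.length)];
    }

    function recordPreference(variant: Variant, liked: boolean): void {
      if (liked) tally.set(variant, (tally.get(variant) ?? 0) + 1);
    }

    // One simulated participant:
    const shown = assignVariant();
    recordPreference(shown, true);
    console.log(tally);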
As you can see, Design Thinking is not a straight step-by-step process. It
allows for adding, taking away, looping back, rethinking, failing, trying again,
testing, and innovating. It’s flexible and creative and places people right in the
middle of any research and design project. Now that you’ve got the right mind-
set for human-centered design, you can start planning your research project,
which we cover in the next chapter.
Notes
1. e.g., Sandra G. Harding, The Science Question in Feminism (Ithaca: Cornell Univer-
sity Press, 1986).
2. Jonathan W. Schooler, “Metascience Could Rescue the ‘Replication Crisis’,” Nature
News 515, no. 7525 (2014), https://doi.org/10.1038/515009a.
3. Monya Baker, “1,500 Scientists Lift the Lid on Reproducibility,” Nature News 533,
no. 7604 (2016), www.nature.com/articles/533452a.
4. “Design Thinking,” Interaction Design Foundation, accessed July 18, 2021, www.
interaction-design.org/literature/topics/design-thinking.
5. “Design Thinking Defined,” IDEO Design Thinking, accessed July 18, 2021, https://
designthinking.ideo.com/.
6. “History,” IDEO Design Thinking, accessed July 18, 2021, https://designthinking.
ideo.com/history.
7. An Introduction to Design Thinking: Process Guide, Hasso Plattner Institute of
Design at Stanford, 2013, https://web.stanford.edu/~mshanks/MichaelShanks/
files/509554.pdf.
6 Planning Your Research
Planning your research project ensures that everything runs smoothly and that
you get valid insights. This chapter walks you through all the things you need
to do before actually starting your study, including formulating a clear research
question(s) or objective(s), getting a thorough understanding of background
information and previous research, recruiting the right participants, consider-
ing ethical issues and accessibility, and ironing out all the logistical details,
such as the place and time for conducting a study.
Background
Before you start doing your own research, you should first get a thorough
understanding of background information related to your topic. In academia,
this is the literature review, where you find relevant academic articles and
books that have been written by researchers in a particular field on a particu-
lar topic. The point of the literature review is to identify a gap in the body of
knowledge that you can fill by conducting your study. Perhaps a lot of research
has been done about how teens use social media for socialization, but very
little work exists on how seniors socialize on social media platforms. That’s a
gap that you can aim to fill!
In industry, when you are working as a UX researcher, you will want to
review documents in your company that have to do with the study you’re
planning (including any previous related research studies), as well as analyze
your competitors and their products, so that you can understand what
works and what doesn’t. Such a review can also lead to helpful insights into
how changes and new features have been implemented. Make sure that you
understand how your product or new feature differs from your competition.
By thoroughly examining the features and functions of your competitors’
products, you can better understand what your product needs to provide a
unique and hopefully better user experience. This is called a competitive
analysis, and, in addition to looking at what is out there currently, it also
involves identifying untapped markets, to see where your newly developed product
can fit in.
Participants
Participants are the people who take part in your research study. Think care-
fully about your sample—the people that you need to talk to and observe to
get answers to your Research Question or to fulfill your Research Objective. In
UX, the participants you recruit for research should be representative of (pos-
sess characteristics similar to) the target users for your website or app.
The first thing to consider in deciding on your participants is demographic
characteristics. Are you interested in a particular group of people based on age,
income level, location, marital status, disability status, etc.? Are you interested
in comparing different demographic groups? Or are you trying to generally
understand the average person’s experience and want to talk to or observe a
wide range of demographic categories?
Next, think more broadly about other key criteria for participation in your
study. This will vary from project to project. Is it important that you talk only
with people who have had some experience with a particular technology or
who work in a particular field? If you are interested in a broad target user
base (i.e., you’re designing something “for everyone”), consider representing
both the average or mainstream user and more extreme use cases or outliers.
So, if you’re designing digital products, consider recruiting people who have
day-to-day working experience with computers and also those who are not
very digitally literate, to get the full spectrum of possible users.
A note here about diversity in sampling: Think carefully about what is con-
sidered mainstream or average. For decades, the “average” user (that digital
products were typically designed for) was a white, able-bodied, straight male.
As researchers and designers in the 21st century, we should be actively moving
away from such a narrow-minded understanding of the universal person. This
means you should always be thinking about diversity in your research, and par-
ticularly as you are recruiting participants. Sometimes you may be interested
in a very specific group of people (for instance, if you’re designing an app for
moms or for Latina entrepreneurs), and in that case, you obviously want to do
research with your specific target group. But in cases where your target market
is broader, you should make sure you recruit a diverse group of people to ask
questions and test products on.
Once you have decided on the desired make-up of your sample, you need
to think about recruiting (finding and signing up) your participants. Where
or how will you find these people to take part in your study? Some recruit-
ment methods include: tapping into your existing networks and asking people
you know to spread the word about your research (in UX industry research,
researchers often rely on their co-workers to test new products); advertising
and promoting your study on social media; putting up flyers in public places;
getting the word out into the media with press releases (particularly helpful
if you’re working on an exciting, impactful study that’s newsworthy); using
existing email listservs; or reaching out to partner organizations who can help
connect you with the people you want to talk to. You can also do intercept or
guerilla research, by going to a public place and simply asking strangers to
answer a few quick questions. There are also recruitment firms that can do the
recruitment for you, if you have the budget to pay them!
Think about building incentives into your budget—you’re likely to be more
successful in finding participants for your study if you remunerate them for
their time. Incentives can range from a cup of coffee (especially in guerilla
research) to $20–$50 gift cards for an hour-long interview to being entered into
a drawing to win a bigger prize for participation in a focus group. If you are a
student, you are unlikely to have many resources for incentives. Get creative
(perhaps you can tap into some clubs or student networks you belong to, ask
family friends to help, or offer to watch pets or kids in exchange for people
taking part in your research), but also don’t underestimate people’s general
willingness to help out with student work. Don’t be shy about asking everyone you
know to help you find participants for your study!
How many people should you ideally have participating in your study?
The answer is: as many as possible based on the resources (time, budget, and
personnel) you have. Though qualitative research does not require a specific
sample size like quantitative research does (because you are not trying to gen-
eralize beyond your sample), you should aim for at least five participants per
study to get some meaningful insights.
Ethical Considerations
Ethical considerations are a critical part of any research. If you’re doing
research at a university, you need to get permission to conduct your research
from your Institutional Review Board (IRB). An IRB is an internal committee
that evaluates research plans to make sure they are ethical. IRBs are less com-
mon in industry, so industry researchers rely more on their own moral commit-
ment to ethical conduct.
The golden rule of ethics in research is to cause no harm, whether physical
(bodily) harm or psychological harm. While physical harm is easily identified
and unlikely to be a major consideration in UX research (unlike biomedical
research), psychological harm is trickier. Think carefully about whether the
research you’re proposing could make your participants uncomfortable in any
way. Are the questions you’re asking too intrusive, insensitive, or related to
past traumas? For instance, if you’re working on developing an app for track-
ing chronic illness, you need to think ahead of time about how asking participants
about their health conditions might be upsetting and make a plan for working
through such difficulties with empathy and respect.
Other very important ethical considerations in any research are anonymity,
confidentiality, and privacy. Anonymity means that no identifying information
(including name, phone number, etc.) about your participants is collected, so
the data collected cannot be linked to a specific person. No in-person research
can be anonymous. The most common type of research that is anonymous is
surveying, where respondents fill in a questionnaire online about their habits,
attitudes, opinions, etc. but do not provide any personal information. Do be
aware that indirect identifiers, such as gender and race collected with other
information (e.g., personal stories), can jeopardize anonymity. Whenever you
collect any identifying information about your participants, the study is no
longer anonymous and you have to be careful to keep that data confidential.
Confidentiality means that any information collected during the research
process will not be shared beyond the research team. When data are reported
and presented publicly, they should be presented in the aggregate (not per indi-
vidual), or if particular quotes or stories are used as evidence (often the case for
qualitative research), they should be anonymized (i.e., no real names or other
identifying characteristics should be used).
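One common way to keep reported quotes confidential is to swap real names for stable pseudonyms before anything is written up. A minimal sketch in TypeScript (ours, illustrative only; the names and quote are invented):

    // Replace real names with stable pseudonymous IDs before quoting anyone.
    const pseudonyms = new Map<string, string>();
    let participantCount = 0;

    function pseudonymFor(realName: string): string {
      if (!pseudonyms.has(realName)) {
        participantCount += 1;
        pseudonyms.set(realName, `Participant ${participantCount}`);
      }
      return pseudonyms.get(realName)!;
    }

    // e.g., labeling a quote for a report:
    console.log(`${pseudonymFor("Jane Doe")}: "The home button was hard to find."`);
    // Note: the name-to-pseudonym map is itself identifying information and
    // must be stored securely, separate from the anonymized data.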
Whereas confidentiality pertains to storage and sharing of data, privacy per-
tains to the individual and their right to have control over if, how, and when
their personal information is collected and shared. Privacy is related to consent
in research. Whenever there is a reasonable expectation of privacy, you may
only collect information from participants (observing, recording, etc.) with
their consent. For instance, you cannot record conversations in a coffee shop
without getting explicit consent from the people involved.
This is where the notion of “informed consent” in research comes in. Before
conducting any research activity, you should get consent from your partici-
pants to collect their data. The “informed” part means that you tell them what
your study is about (give as much information as possible without affecting the
results) and that their participation is voluntary. This means that a participant
can decide to leave the study at any point, including after data collection has
started. Researchers typically provide consent forms for participants to sign
before a study is conducted, but you can also collect consent verbally if it is
recorded. Remember that you are asking for consent not only to collect their
information and use it for your desired purposes (e.g., to make recommenda-
tions for feature improvements to a product) but also to record the interaction.
A sample consent form is provided on usability.gov (along with a plethora of
other templates, resources, and guides) here: www.usability.gov/how-to-and-
tools/resources/templates/consent-form-adult.html
Ethics in Practice
You might have read about (or maybe even remember) Facebook’s controver-
sial social experiment conducted in 2012. For one week, the company manipu-
lated the algorithms behind the content of its users’ News Feeds, showing some
people more negative content (by filtering out any positive content shared by
each user’s friends) and showing other people more positive content. This was
all done in an effort to see how the sentiment of a News Feed affects users’
moods (as measured through them posting positive or negative status updates
after being exposed to the manipulated feeds). The major controversy around
this study was that Facebook manipulated the feeds without the users’ explicit
consent, arguably causing psychological harm by eliciting negative moods (for
those who were shown more negative content).
As with much research, providing all the information about a study to par-
ticipants ahead of time is not a good practice, as research relies on understand-
ing how people act naturally (without changing their behavior to please the
researcher by acting out “desired” results, for instance). But some acknowledg-
ment that users were inadvertently taking part in research, even if in the form
of a debrief afterward, would have made the Facebook study much more ethi-
cal. Debriefing refers to the process of giving participants more information
about the study after the interview, experiment, focus group, or other research
activity has taken place. This is a crucial part of informed consent, so the participant
can truly understand what they participated in and why—this is particularly
important when deception was necessary as part of the study design (as was the
case with the Facebook social experiment). A debrief also gives participants
the space to ask the researcher questions about the study.
For more on Facebook’s study, see this Wired article: www.wired.com/2014/06/
everything-you-need-to-know-about-facebooks-manipulative-experiment/
Accessibility Considerations
The Americans with Disabilities Act (ADA) of 1990 is a civil rights law
that prohibits discrimination based on disability, whether physical or mental
disability. This not only covers prohibiting active acts of discrimination (such as
not hiring somebody because of their disability) but also ensures equal access
to public and commercial spaces to individuals with disabilities. It includes
the provision of reasonable accommodations to make physical spaces equally
accessible to all (think features such as wheelchair ramps, parking spots des-
ignated for those with disabilities, and Braille used to mark elevator buttons).
Just as we move through physical spaces, we move through digital ones. In
January 2018, Sec-
tion 508 of the Rehabilitation Act of 1973 requiring that “federal agencies’
electronic and information technology is accessible to people with disabili-
ties, including employees and members of the public” was revised with new
standards to ensure equal access to current technologies.1 Any federal website
or app is required to comply with Section 508. Not all websites and apps are
mandated to comply with the revised Section 508 standards, but many web-
sites are still required to be accessible to all users to comply with state laws,
institutional policies, or certain grant requirements. Even if your websites and
apps aren’t covered by any legal or formal requirements, it is still best practice
to ensure your products are accessible equally to all.2
What does this mean in practice? Considerations around accessibility
include:
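For example, two widely recommended practices are providing text alternatives for visual content and giving icon-only controls accessible names. Here is a minimal sketch in TypeScript with browser DOM APIs (our illustration, not an official Section 508 example; the file name and labels are hypothetical):

    // Provide text alternatives so screen readers can describe visual content.
    const photo = document.createElement("img");
    photo.src = "team-photo.jpg"; // hypothetical file
    photo.alt = "Five team members gathered around a whiteboard"; // read aloud by screen readers

    // Give icon-only controls an accessible name.
    const closeButton = document.createElement("button");
    closeButton.textContent = "✕";
    closeButton.setAttribute("aria-label", "Close dialog");
    // Other common considerations: sufficient color contrast, captions for
    // audio and video, and full keyboard operability for every control.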
Notes
1. “About the ICT Accessibility 508 Standards and 255 Guidelines,” U.S. Access Board,
accessed July 18, 2021, www.access-board.gov/ict/.
2. “Accessibility Testing for Websites and Software,” Section508.gov, accessed July 18,
2021, www.section508.gov/test/web-software.
7 Reporting and Presenting Research Findings
After designing your research study, collecting the data, and analyzing the
data, you will want to share your fascinating insights with others. Whether
you’re working in academia or in the UX industry, you need to first and fore-
most consider your audience for any type of report or presentation. Who are
you sharing your findings with and why? Are you sharing insights with your
UX team in a workshop format, so that you can brainstorm decisions together
about a future design? Are you writing a research paper for your professor as
a class project? Are you presenting at a conference, to share the knowledge
you have gathered with others working in your field? There are typical report-
ing (written) and presentation (verbal) formats in both academia and industry,
which we outline in detail in this chapter. You should stick to these formats
at least loosely (though there is always room for a little creativity, especially
when presenting), but always make sure to tweak the specifics so that they
make sense to and are appealing to your audience.
Academic Reports
Abstract
An abstract is a brief (typically 100–300 words) summary of your research.
The abstract should include what your project is about, what research methods
you used, and what the main findings are. Even though an abstract is presented
at the beginning of a research paper, it is typically written last, after you’ve
written the rest of the sections and can easily summarize them.
Introduction
The introduction of a research paper should introduce the reader to the topic
and explain the importance of the study. Frequently, researchers use anecdotes
or examples from popular media as “hooks” to draw the audience into the
“story” of the research.
Literature Review
A literature review is a summary of what research has been previously con-
ducted on the topic, as well as a brief foray into theory, the assumptions about
“how things work” that guide a research project. The literature review is a
substantial section of your paper and should include citations of many aca-
demic journal articles and books from the field of study. The Research Ques-
tion should be presented at the end of this section and should arise naturally
out of gaps in the literature review. You need to tell the reader what you plan
to examine that others have not looked at before.
Methods
In this section, you outline how you conducted the study. This includes:
Findings/Results
This section is the bulk of your report. It is where you present your key
insights, alongside the evidence for these findings (quotes and descriptions
from your interviews, observations, etc.). It’s important to balance description
and analysis here. Explain what was said or what you saw and then present a
brief analysis of “what does this mean,” connected to your Research Question.
Remember to keep your participants anonymous! Don’t use real names or too
many identifying details that could be used to link the words to the actual peo-
ple who said them.
Discussion
In the discussion section, the researcher puts the findings in context and pro-
vides an in-depth analysis and interpretation of the findings, ultimately answer-
ing the Research Question. Oftentimes, the findings/results and discussion
sections are combined in qualitative research reports, as the “objective” find-
ings (what was said and observed) make more sense as a story when they are
woven together with the researcher’s interpretations, instead of separated out
somewhat artificially in separate findings and discussion sections.
Conclusion
In this section, you present a summary of what you found and what it means.
Here you can also include limitations of the current study (e.g., not having a
very diverse sample) or ideas for future studies that were outside the scope of this
particular research.
Works Cited
The works cited or bibliography is an alphabetical list of every scholar or
expert whose published ideas, thoughts, or quotes you have used throughout
your report (most of these can be found in your literature review). This section
should follow one of the common citation formats, such as APA (American
Psychological Association) or Chicago style. For more on citation formats and
style guides, see Purdue OWL’s guide here: https://owl.purdue.edu/owl/
purdue_owl.html
Appendices
This is where you can include any research materials that you used in the study,
such as a list of interview questions.
Industry Reports
When you’re working for a company or organization on the UX team, you
need to provide reports of your research that have very different purposes
from academic research papers. Typically, a UX researcher will provide some
recommendations based on their research findings, in the form of a research
summary, for the design team to use to help create or improve the app or web-
site. Industry reports tend to be much shorter and more to-the-point than academic
research papers—these papers do not aim to contribute to a body of knowledge
but are instead used to apply the findings internally for product development.
(Sometimes industry research ends up as part of a white paper, which is a
longer, in-depth document that is shared externally, that is, outside of the com-
pany. White papers present information on a particular topic while also typi-
cally advocating for a specific solution or position—usually one that benefits
the company. White papers are out of the scope of this book, but we encourage
you to read up on them if you’re interested!)
An industry report or research summary is typically made up of a few key
sections as outlined in the following, though the specifics will vary based on
the needs of the organization and the specific audience. Check out Appendix C
for a sample industry report.
Background
This is similar to a literature review in an academic report and includes a brief
summary of any research that was done previously in your company that is
relevant to your current project, as well as any other background information
pertaining to the project aims and research design decisions.
Stakeholders
This typically refers to internal (to the company) people. In this section, you
list all the people in your company, by name, who are part of or have a stake
in the project. Some are people that need to be consulted as subject matter
experts, others are part of the ideation workshops that will use these find-
ings (these are typically UX designers and product managers), yet others
are the CEO and other chiefs, who are concerned with the business goals,
bottom line, and how your research project impacts both. This section can also
include external stakeholders who will be affected by this research project,
such as “end-users” (of course!) or external regulatory bodies that need to be
consulted.
Methodology
As in an academic report, here you explain how you collected and analyzed
your data, listing the specific methods used, as well as the time frame for the
research.
Participants
Here, as in an academic report, you tell the reader who took part in the study,
including the specific criteria for participation and the participant demographics.
Key Findings
In this section, one of the most important in the report, you write out the key
findings from your study that are related to your Problem Statement. What did
you see and hear as you conducted your research? Unlike academic reports,
this section can use bullet points, especially in environments that are fast-
paced and include a lot of iterative research for multiple projects happening
simultaneously.
Known Limitations
Here, list anything that makes you less confident about your findings. This
could be something like a small or insufficiently diverse sample or limited
observation opportunities.
Recommendations
This is the part where the academic report and the industry report differ. Once
you have written out your key findings or insights based on the data collected,
you need to make some recommendations for the design (or re-design) and
development of the digital product based on these findings.
References
This is a list of all the documents (both internal and external) that were used in
the production of this report.
Appendices
In this section, you provide the materials used in the study, such as interview
questions, instructions that you provided to the participants, and any other rel-
evant information that could help the reader get more detail on how the study
was practically conducted.
Presenting
Often, you will be asked to present the findings of your research orally. Even
more so than with a written report, you need to consider your audience. You
only get one chance to present your findings—if an audience member doesn’t
understand something, they can’t go back and re-read your presentation! (Though, they do have
the chance to ask you questions for clarification, which you should be prepared
to answer on the spot.)
Displaying
There are many tools or programs to create visual presentations that you can
show while speaking about your project. We list some of our favorites here:
There are a few rules to keep in mind when creating research presentations.
First, you’ll want to use as few words as possible. Stick to keywords and
important phrases. The goal is to get the audience to pay attention to you, not
to have to read paragraphs of text in a presentation. The text is really there to
act as guideposts for both you and your audience as you walk them through
the project.
Instead of filling your presentation with words, use visual elements like
graphs, charts, tables, and images to summarize your findings. The best
presentations embed interesting findings in dynamic images that
help you, as the speaker, tell your story. When an audience can see instead
of just hear your findings, they are much more likely to immerse themselves
in, and get excited about, your research. For example, if you used the method
of screenshot diaries (see Chapter 10), the presentation should contain a few
exemplar entries. These don’t need to have text accompanying them; you can
walk the audience through what you found, using the entries as illustrations.
If you are working with text, say, for example, because you conducted a brain
writing session (see Chapter 16), it is OK to paste words in as cropped screen-
shots to show the text from your participants in-context. This is much more
exciting than just retyping their words onto your slides.
You should certainly also include relevant pictures for interest. But, be care-
ful not to just drop in lots of random images and clipart from Google image
searches. These can not only make the presentation look sloppy but can also
lead audience members to wonder how this random imagery is connected to
your research. This
is a very easy mistake to make in tools like PowerPoint and Canva, because
the programs include so many options for shapes and pictures. Remember that
every piece of a presentation should be working to help the audience under-
stand, and become invested in, your research—so choose the right images to
illustrate your story.
Be sure to use colors and other aesthetic elements well. Don’t use colors that
clash or that may be difficult to read because of poor contrast. In academia,
presentations will often be “on brand” by using a college or university’s colors
and logo on all the slides. In fact, many schools have PowerPoint templates
for student and faculty use. In industry, staying “on brand” may mean
using the company’s branding materials (logo, colors, fonts, etc.) during the
presentation. Also, be careful of using anything that flashes or moves quickly,
as those types of images can trigger seizures in people with certain medical
conditions.
Section Three
Methods
8 Traditional Research Methods Overview
The two most commonly used qualitative methods in Media and Communica-
tion Studies are interviews and observations. Many of the specific processes,
tools, and skill sets that researchers draw on when interviewing and observ-
ing form the basis of the contemporary UX methods that we outline in this
book. Two other methods, focus groups and qualitative surveys, are also used
frequently in qualitative academic research. Qualitative data are typically ana-
lyzed using a process called thematic analysis. This chapter provides a brief
overview of interviews, focus groups, qualitative surveys, observations, and
thematic analysis, as subsequent chapters assume reader familiarity with these
basics of qualitative research.
Interviews
Interviews are, at the core, directed conversations, aimed at finding out some
information. Interviewers ask interviewees questions to better understand their
thinking, feelings, experiences, and motivations for behavior.
In academia, interviews are one of the most common methods used for
qualitative research in social science fields such as Psychology, Sociology,
Anthropology, Communication Studies, and Media Studies. They are used in
academic settings where researchers are curious about how individuals per-
ceive their social realities. As you probably know, interviews are not just a
research method—they are common in other settings such as job seeking and
news articles. The difference between interviews for research purposes and
interviews for jobs or journalistic interviews is that researchers typically ask
multiple people similar questions to draw a more generalized conclusion about
a particular phenomenon, rather than diving deep into only one person’s story.
In UX research, interviews are usually used at the early stages of product
design, in the Empathize stage, to find out what users want and need, and
also to find out what they currently don’t like about their experience. Because
there are always other stakeholders involved in the creation of a website or
app (e.g., the company that needs to make a profit, engineers that need to build
the product), interviews are also used in industry to understand these various
stakeholders’ requirements of the product.
Before an interview, a researcher needs to create an interview guide, which
is a set of questions that will be asked during the interview. Typically, inter-
views consist of open-ended questions, such as “how” or “why” questions, to
prompt the interviewee to provide in-depth, detailed answers. Interviews can
be structured, semi-structured, or unstructured, depending on the needs of the
research (whether the research is more exploratory or whether the researcher
already has a sense of the important questions that need to be answered). In
structured interviews, researchers strictly follow the interview guide—all par-
ticipants are asked the same set of questions in the same order. Semi-structured
interviews take a looser approach, with the interviewer adding or removing
questions as the conversation unfolds. And finally, an unstructured interview
flows like a conversation, with the interviewer typically coming into the inter-
view only with some general themes they would like to cover.
It’s important for the interviewer to build rapport with the person being
interviewed, so that they feel comfortable sharing their insights. To build rap-
port, researchers might want to ask some easier questions early on, questions
that are not connected to the research itself (e.g., talk about the weather, pets,
and current events). During the interview, the interviewer’s job is primarily
to listen and guide the interaction by prompting the interviewee and asking
follow-up questions. Don’t insert yourself into the interview by telling too
many of your own stories and sharing your own experiences! Treat the inter-
viewee as the expert.
Interviews can take place in person, but remote interviews were popu-
lar with researchers long before the COVID-19 pandemic forced most in-
person interactions online. Remote interviews can be conducted over the phone,
but video interviews (such as through Zoom or Skype) provide the additional
advantage of showing nonverbal communication (such as facial expressions
and gestures), which can be a source of important insights. Virtual interviews
are also easily recorded, whether through the software being used or via voice
recorders on the researcher’s phone.
Interviews provide data in the form of transcripts (recorded interviews are
transcribed into text files), which are then analyzed by the researcher, who will
look through the transcripts for patterns and themes. Because interviews tend
to be at least 30 minutes long, they generate a lot of data—a 30-minute inter-
view creates about 10 pages of single-spaced text.
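To give a feel for what that analysis works with, here is a minimal sketch in TypeScript (ours, a deliberately simplified stand-in for the thematic analysis described in this chapter) of coded transcript excerpts being tallied for recurring patterns:

    // Each excerpt carries the codes a researcher assigned while reading.
    interface Excerpt {
      participant: string; // a pseudonym, never a real name
      text: string;
      codes: string[];
    }

    const excerpts: Excerpt[] = [
      { participant: "P1", text: "I never know where the home button is.", codes: ["navigation confusion"] },
      { participant: "P2", text: "I gave up looking for my saved items.", codes: ["navigation confusion", "frustration"] },
    ];

    // Count how often each code appears to surface recurring themes.
    const counts = new Map<string, number>();
    for (const e of excerpts) {
      for (const code of e.codes) counts.set(code, (counts.get(code) ?? 0) + 1);
    }
    console.log(counts); // Map { "navigation confusion" => 2, "frustration" => 1 }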
Focus Groups
Focus groups are somewhat like group “interviews,” with a researcher talking
with five to ten participants at once. It’s important to clarify, though, that focus
groups are not interviews. Focus groups are more like a group discussion and
are particularly valuable because of the interactions between the participants,
rather than the insights gleaned from the individual answers from each partici-
pant. A small group setting can inspire participants to share personal stories and
bounce off others’ answers in ways that they would not have been comfortable
doing in a one-on-one interview. Focus groups are useful for bringing to light
ideas that might have been taken for granted, things that some individual par-
ticipants might not have thought of before, but that emerge in a group setting.
During a focus group, it’s important to pay attention to group dynamics—the
amount of agreement or disagreement around a particular topic can highlight
what’s important to take into consideration during product development in UX.
Typically, a research study will include at least three focus groups. (Five
to eight are ideal!) Because the format of a focus group is discussion-based,
where respondents are encouraged to interact with each other, the researcher
serves as a facilitator or moderator, rather than as an interviewer. Focus
group participants can be recruited so that groups are homogeneous (where
participants all share demographic and other characteristics relevant to the
study) or heterogeneous (where diverse participants make up each focus group).
This decision depends on whether the researcher wants to see how different people
approach, discuss, and negotiate the same topic or whether the purpose of the
research is to understand how a very specific group of people understands or
feels about a topic.
A focus group moderator will use a focus group guide, a pre-prepared list
of prompts, questions, and activities, to help facilitate participant interaction
during the discussion. As with interview guides, the first questions should be
broad and easy, so that participants can feel at ease in the group and to encour-
age sharing. Focus group guides are particularly important when facilitators
other than the primary researcher are used across different groups in the same
study. This ensures that the questions asked and prompts used are the same in
all the groups, to provide comparable data for analysis.
Focus group moderating can be a difficult task. You need to guide the con-
versation in a useful direction related to the Research Question or Problem
Statement at hand, which can be hard to do with many voices at the table,
each with their own stories and opinions. The facilitator also has to make sure
that one particularly chatty participant doesn’t dominate the conversation and
that shy participants are encouraged to speak up. This is why a focus group
moderator should be a skilled communicator and a credible presence to the
focus group, so that they can (gently but firmly) assert their authority and keep
the conversation on track. In UX, focus groups are used to assess user needs
and feelings about a particular product. In academic Media and Communica-
tion Studies research, focus groups are really helpful for understanding group
norms and dynamics around communication.
Qualitative Surveys
Surveys are another way of gathering attitudinal information from participants.
Surveys consist of questions and scales (sets of questions that measure a par-
ticular attribute, such as a personality characteristic or digital literacy) that
get at the who, what, and where of your users. In academic research, surveys
are usually considered a quantitative method, in that they gather information
that can be analyzed and presented numerically, such as demographics, num-
bers around media or technology use (time, frequency, etc.), and scales where
respondents can rate an option on a scale from 1 to 5 (strongly disagree to
strongly agree). Qualitative surveys, on the other hand, ask open-ended ques-
tions of participants (typically asking how or why), similar to interviews, but
in this case, participants respond in written format, without an interviewer
present. These types of questions demand long-form written answers, ideally
paragraphs, that tell an anecdote or shed light on participants’ feelings and
attitudes around a particular topic.
Surveys are usually conducted online using programs such as SurveyMon-
key or Google Forms, where a link is sent to all participants to answer the
survey questions at their convenience. Surveys can also be sent out via mail
(with a stamped return envelope for participants to send back the completed
questionnaire) or conducted over the phone (typically using automatic sys-
tems). No doubt you’ve experienced a survey before—for instance, after calling
customer service, it is common for a quick two- or three-question survey to
be presented to callers, asking them to rate their satisfaction with the
interaction from 1 to 5.
Surveys are used in UX research to solicit feedback from users, either in
the Empathize stage (to understand users’ current needs) or in the Test stage,
where researchers want to evaluate a website or app. Surveys are often built
into the digital product itself—through a “popover” with a couple of ques-
tions that pops up while a person is interacting with the website or app. When
conducting qualitative surveys, you should keep the questions minimal—one
or two open-ended questions are the maximum you should ask users to answer,
particularly if they’re presented in a popover format
(e.g., tell us what we could improve about this website . . .). Too many questions
can turn off the user—how often have you started a survey and then stopped
because the question set was dragging on and it seemed like too much effort?
The main limitation of using interviews, focus groups, and qualitative sur-
veys for UX research is that, while they give you great insights into how and
what people think (attitudinal data) as well as self-report data on their behav-
iors and habits, they do not give researchers insight into how people actually
behave (behavioral data). Often, what people do and what people say they do
are different things. When you’re concerned with how people use a digital
product, observations can provide more accurate data on actual use.
Observations
Observations, as the name suggests, involve a researcher watching and listen-
ing to people. Observations have historically been conducted in person, but
as more and more of our lives have gone digital, researchers do more online
observations or digital ethnographies, observing people in digital spaces.
Observations are a very common technique in UX research because ulti-
mately UX researchers are interested in understanding how people interact
with something—how they actually behave—so that the experience can be
improved. In UX, observational research takes place during the Empathize
stage, when researchers want to discover how people currently use a particu-
lar product, but also during the Test stage, when researchers want to evaluate
whether a product is easy and efficient to use. As you’ll see later in this book,
many different UX methods have an observation component.
In academic studies, especially in Communication and Media Studies,
observations are used for a similar purpose, to discover and explain people’s
behavior, particularly around communication devices and media consumption.
As mentioned previously in this book, the key difference between academic
research and UX industry research is that academics are not typically working
to provide recommendations for the creation of a new media or communication
product or product improvements but instead are adding to the body of knowl-
edge about when, how, or why certain media is used or certain communica-
tion patterns occur. For example, observations as part of an academic research
study can show how different family members may use a plethora of devices
for entertainment all in the same space. This sheds light on changing cultural
trends, such as the shift from a family sitting down to watch the same TV show
at dinner (common before the rise of mobile technology devices) to each fam-
ily member choosing their own entertainment on a different device and being
together in the same space but not consuming the same content.
Observations as a method in both academic and UX research range from
quick observations of people completing specific tasks to months-long or even
years-long ethnographic field studies. Short observations can be done in the lab
or in the field (in the place where people would typically use the product in real
life). These types of observations are usually concerned with a better under-
standing of how a person uses a specific device or product. In UX, the most
commonly used limited-time observation method is usability testing, where a
researcher observes a user navigating through a website or an app and takes
note of pain points or issues.
Longer observations typically take place in the environment of the user,
where a researcher lives and/or works with the people they are studying,
observing every move and trying to understand the culture as a whole. For
example, a UX researcher might do an ethnographic study at a college, observ-
ing how students go about their daily lives on campus, including going to
lectures and socializing, to inform the design of a new university website.
Observing students on campus can provide many rich insights that can lead
to ultimately creating a better website. For example, perhaps the researcher
sees that students often get lost when trying to find lecture halls. Based on this
observation, they might recommend that a clickable campus map, with clearly
labeled classrooms, be added to the home page of the university student portal,
so that students have that resource handy as they’re navigating the physical
environment of campus.
Observations can be messy, given the huge amounts and varied types of data
collected, particularly during an ethnographic study. A solid ethnographic
study will yield hundreds of pages of notes, photographs, videos, recordings,
artifacts (objects or documents from the culture or place being studied), and
more. The scope of the observation will dictate what data are collected. For
example, a usability testing session where the researcher watches a user navi-
gate to a specific page on a website will be more manageable to document
(simply record the screen and note pain points) than an ethnographic study of
a college campus, as described earlier. This is why researchers need to have a
solid plan prior to conducting the observation, so that they know what to focus
on while they’re in the field.
Field notes are detailed notes that an observer takes while they are observing,
jotting down details such as who is being observed, their behaviors and interac-
tions, the physical environment itself, the participants’ movements within the
space, etc. Recording observations, whether via video or audio, as much as
possible is also helpful. Ultimately, you want to focus your observations on
what you need in order to answer your research questions or help fulfill your
research objectives. This might seem obvious, but it’s crucial to carefully think
through what sort of data you might want to home in on. For instance, if you
want to understand interpersonal communication patterns between students
at a coffee shop, you will want to pay attention to interactions between groups
of people more so than observing solo students go about their business.
Often, observations are conducted in conjunction with interviews (read
more about contextual inquiry, a type of UX method incorporating observa-
tions with interviews, in Chapter 12). Interviews conducted before, during, or
after observations can provide researchers with insights into the motivations
for and thought processes around user behavior.
Final Thoughts
As you read through the UX methods in this book, you will notice that we
often mention asking “good” questions of your users or paying attention to
participants’ behaviors. All of these integral pieces of more contemporary UX
methods are still rooted in traditional qualitative research methods discussed in
this chapter. Critical thinking has always been necessary in research, and that
is no different for UX. The mindset necessary and the methods used may seem
very different from academic research, but really UX methods are, at the core,
innovative versions of asking questions and observing how people act. And,
just as in traditional academic research, UX data analysis includes time spent
reviewing and re-reviewing data, finding patterns and themes in the data, and
using your expertise and previous studies’ findings to arrive at your insights.
Further Reading
“How to Conduct User Observations,” Interaction Design Foundation. www.interaction-design.org/literature/article/how-to-conduct-user-observations.
Brennen, Bonnie S. Qualitative Research Methods for Media Studies. New York: Routledge, 2017.
Lindlof, Thomas R., and Bryan C. Taylor. Qualitative Communication Research Methods. Los Angeles: Sage Publications, 2017.
Pernice, Kara. “User Interviews: How, When, and Why to Conduct Them,” Nielsen Norman Group, October 7, 2018. www.nngroup.com/articles/user-interviews/.
9 Emotional Journey Mapping
Quick Tips
Tools: Whiteboard and Post-It notes, or spreadsheet software like Excel
Use When: You want to understand how a space makes a user feel; focus on emotions and improving experience.
The Method
Emotional journey mapping invites participants to take the researcher on a sort
of tour. Also known as customer journey mapping, emotional journey mapping
is primarily concerned with understanding a user’s complete experience from
opening an app or a website to closing the app or browser tab.
The first mention of the idea of mapping a customer’s journey is in Chip Bell
and Ron Zemke’s 1989 book, Service Wisdom. Calling it the “cycle of service
mapping,” Bell and Zemke encouraged researchers to map experiences in
physical spaces.1 A popular application was the 1999 study of what would
eventually become the Acela high-speed train in the northeastern US. IDEO
and C+CO conducted emotional journey mapping to capture riders’ complete
experiences with the service.2
To conduct an emotional journey map, touchpoints, or each area visited
or task completed, are mapped chronologically. For example, a customer
journey map of a grocery store may include parking, store appearance,
entry, produce section, deli, bakery, canned goods, and checkout. At each of
these touchpoints, researchers note how pleasing or frustrating the experi-
ence was, how long the participant stayed in that space, what they did while
there, and any responses to interview questions. After mapping several jour-
neys, researchers look for trends: What is the grocery store doing well and
where are shoppers having an enjoyable experience? What can the store
improve and where are shoppers getting frustrated? Where are the highs
and lows?
Maybe, after mapping several journeys, researchers notice that most partici-
pants are fine moving through the store, but checkout is frustrating. Shoppers
may then rate the grocery store poorly overall because checkout is the last
thing they remember. But researchers can now suggest that the store look into
how to fix the checkout experience. Then, the overall ratings and experi-
ences may drastically improve.
As another example, maybe researchers notice a trend that the parking expe-
rience and the appearance of the storefront lead to annoyed shoppers. Subse-
quently, no parts of the shopping journey are gratifying because the first few
minutes were so unenjoyable. In this case, researchers may suggest that the
store improve its parking and storefront before making internal changes, to
see whether a better first impression improves the entire experience.
Of course, other trends may be found somewhere in the middle. If a task
takes longer than a certain amount of time, that may mean the difference
between an enjoyable experience and an annoying one. This could help the
grocery store recognize where they may need to place extra staff during certain
busy times. For example, shoppers may generally feel gratified at the deli, but
once the wait is longer than ten minutes, they typically feel it is a bad experi-
ence. Tracking when the participants are shopping, researchers realize that this
long wait is common on Sunday afternoons. The store can now staff the deli
with more workers on Sundays or research other ways to make the Sunday deli
shoppers happier.
Emotional journeys for website and app users work in a very similar way.
As the researcher, you map how users move through the digital structures, not-
ing what areas or tasks are enjoyable and which are frustrating. Touchpoints
represent steps in a digital process like changing a password, posting a photo,
scrolling through a news feed, or attempting to purchase a product. Indeed,
touchpoints need not be formal procedural steps, but they can be.
The nice thing about emotional journey mapping is that it is a very flexible
method. You get to decide exactly what you are measuring. First, the researcher
decides what the emotional spectrum will be based on the research problem,
the company’s mission/goals, and the tasks being mapped. For example, the
spectrum may go from entertained to bored, from satisfied to disappointed, or
from pleased to frustrated. Second, the researcher chooses what types of data
to collect beyond the touchpoints—times, notes, images/screenshots, and so
on. Third, the researcher also decides what the journey will be. Open-ended
journeys invite the participant to use an app or website as they normally would.
In other words, they implicitly define what the touchpoints are as well as their
order. In a close-ended journey, researchers decide on the same list of touch-
points, in the same order, for every participant. A combination of the two can
also be used. Maybe you give each participant the same goal, but let them cre-
ate the path to that goal on their own.
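If you are tracking journeys in a spreadsheet or in code rather than on Post-Its, it can help to sketch out what one touchpoint record will hold before you collect any data. Below is a minimal sketch in Python; the field names and the -2 (frustrated) to +2 (pleased) emotion scale are our own illustrative assumptions, not a standard that journey-mapping tools share.

from dataclasses import dataclass, field

# One record per touchpoint, per participant. The fields mirror the three
# decisions described above: the emotion spectrum, the extra data types,
# and the touchpoints themselves. All names here are illustrative.
@dataclass
class Touchpoint:
    name: str             # e.g., "deli" or "change password"
    minutes_spent: float  # how long the participant stayed at this touchpoint
    emotion: int          # your chosen spectrum, e.g., -2 (frustrated) to +2 (pleased)
    notes: str = ""       # observer notes or interview responses
    screenshots: list = field(default_factory=list)  # paths to images

# A journey is simply an ordered list of touchpoints:
journey = [
    Touchpoint("open app", 0.5, 1),
    Touchpoint("scroll news feed", 4.0, 2),
    Touchpoint("change password", 6.0, -2, notes="could not find the setting"),
]

Structuring records this way keeps open-ended and close-ended journeys comparable later: every participant’s data, whatever path they took, ends up in the same shape.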
Mapping users’ emotional journeys is similar to traditional academic meth-
ods like observations and ethnographies. Like observations, the goal is to
watch as participants do the relevant task. As a researcher, it is your job to note
as much as you can about the experience. Similarly, like ethnographies, user
emotional journeys are not just about collecting objective, surface data like
which touchpoint is visited, when, and for how long. Journeys aren’t just
about getting a Likert-scale-like number that represents emotion. Instead,
as with ethnography, as the researcher you want to understand what the expe-
rience is like for the participant, empathizing with their highs and lows and
recognizing why things may be satisfying or frustrating, even if the participant
isn’t quite sure why.
Use When
Emotional journey mappings are typically employed at the beginning of a
study, in the Empathize stage, usually before a research problem is formed.
This is because emotional journey mappings are meant to uncover how a user
interacts with a product, how the product makes them feel, and what specific
touchpoints are enjoyable and which are frustrating. Thus, emotional journeys
are best used when you are beginning your user-centric project. The method
can tell you a lot about what an app or a website is doing well and what aspects
are providing subpar experiences.
In academic research, emotional journey maps can be used to visualize a
person’s experience. For example, if you are studying interpersonal commu-
nication at a doctor’s office, you can ask your participants to describe their
emotions through each touchpoint—parking, entering the building, checking-
in, filling out forms, waiting in the waiting room, reading any signage, going
into the exam room, communicating with the nurse, talking to the doctor, being
examined, checking out, and leaving. The emotional journey maps can help
researchers learn about common interpersonal experiences at doctors’ offices.
For instance, perhaps the study can be used to compare experiences at different
offices or between different groups of patients.
Case Study
In 2018 Nike used emotional journey mapping to study customers’ experi-
ences with using the company’s new app and then entering a brick-and-mortar
store. Michael Martin, VP of Digital Products at Nike, has a background in
entertainment and gaming products. For Nike’s new “Nike Direct” product,
Martin first studied how customers experience brick-and-mortar Nike sneaker
shopping by mapping touchpoints and their delights and frustrations.3
He found that customers in a physical Nike store experience peak excite-
ment when they find a sneaker they like. But this can quickly flip as they
wonder things like “is this the only color?” and “what if they don’t have my
size?” Because of emotional journey mapping, Martin was able to identify and
suggest solutions for these lows. He then used these findings to make recom-
mendations for the new product—Nike Direct. On this app, customers can see
a product they like, scan it, immediately know about color and size options,
and request that a salesperson bring out the customer’s choice fairly quickly.
Customers can also reserve time slots, scheduling one-on-one time with an
employee for an almost personal-shopping-like experience. Martin used jour-
ney mapping to understand when Nike customers are traditionally pleased or
frustrated, and he helped create a data-driven app that works on increasing the
highs and decreasing the lows.
Steps
1. Decide what you will be mapping.
a. Is it a complete experience, from start to finish like using the app on a
normal day?
b. Is it using a specific tool or functionality like changing privacy set-
tings or searching?
c. Is it implementing a specific process like signing up or posting a
photo?
2. Decide if you will lay out the touchpoints ahead of time or if you want the
participants to create their own paths.
3. Observe as the participants complete the process, noting touchpoints, time
spent at each touchpoint, emotions felt, and context of usage.
a. If you can be physically with the person, you can watch over their
shoulder in a more casual setting. Or, in a more lab-like setting, you
may connect a laptop or phone to a larger screen that you can watch
through screen mirroring. Ask for screenshots or take your own pho-
tos of the process if you can.
b. If you cannot be with the person, you can complete the mapping over
a program like Zoom. The participant can either perform a laptop hug
(holding their laptop backward, on their lap, so the camera picks up
what is happening on their phone) or, if the study is web-based, they
can share their screen. Take screenshots if you can.
4. Ask good questions.
a. Be sure to include thought-provoking, but unobtrusive, questions.
b. Provide cues for participants who are less chatty and provide guidance
for participants who may begin to include irrelevant information.
c. Ask participants why they made a certain choice, why they stayed at
one touchpoint for a very long, or very short, period of time, and why
they felt the emotion they did.
5. Create maps of touchpoints on an x-y axis, allowing the x-axis to represent
time and the y-axis to represent emotion. Attach relevant information to
this map, including notes and images/screenshots. (A small plotting sketch
follows these steps.)
6. Analyze the mappings by comparing variables like emotion and time
spent. Remember to consider varied user-types. Try to find trends, noting
where it seems a lot of participants may be unsatisfied. You should also be
noting trends related to where participants feel extremely satisfied—these
insights can help improve other pieces of the app or website.
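If you chart your maps digitally rather than on a whiteboard or in Excel, a few lines of code can draw the x-y map described in Step 5. The following is a minimal sketch using Python’s matplotlib library, with hypothetical touchpoint data; it assumes you recorded, for each touchpoint, the minutes elapsed and an emotion rating on a -2 (frustrated) to +2 (pleased) scale.

import matplotlib.pyplot as plt

# Hypothetical journey for one participant: (touchpoint, minutes elapsed, emotion).
journey = [
    ("open app", 0, 1),
    ("search", 2, 1),
    ("browse results", 5, 0),
    ("checkout", 9, -2),  # the low point worth investigating
    ("confirmation", 12, 1),
]

minutes = [m for _, m, _ in journey]
emotions = [e for _, _, e in journey]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(minutes, emotions, marker="o")      # x-axis: time; y-axis: emotion
for label, x, y in journey:
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
ax.axhline(0, color="gray", linewidth=0.5)  # neutral emotion line
ax.set_xlabel("Time (minutes)")
ax.set_ylabel("Emotion (-2 frustrated to +2 pleased)")
ax.set_title("Emotional journey: Participant 1")
plt.tight_layout()
plt.show()

Plotting several participants’ journeys on the same axes makes the trend-spotting in Step 6 much easier, because shared highs and lows line up visually.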
Discussion Questions
1. Think about a task you perform each day. It could be showering, brush-
ing your teeth, driving to school, making dinner, or something else. What
are the touchpoints? How do you feel at each one? Satisfied? Frustrated?
How could you improve your experience of that task by altering one
touchpoint?
2. Working with a partner or group, consider the same daily task. Do you all
have the same touchpoints? Does each touchpoint mean the same thing
or have the same significance to everyone in the discussion group? Are
frustrating touchpoints the same for everyone in the discussion group?
Notes
1. Ron Zemke and Chip R. Bell, Service Wisdom: Creating and Maintaining the Cus-
tomer Service Edge (Minneapolis: Lakewood Books, 1989).
2. “The Story of the Journey Map—The Most Used Service Design Technique,” Inter-
national Service Design Institute, 2020, https://internationalservicedesigninstitute.
com/the-story-of-the-journey-map-the-most-used-service-esign/.
3. Yasmin Gagne, “Nike’s New Concept Store Feeds Its Neighbors’ Hypebeast and Dad-
Show Dreams,” Fast Company, July 12, 2018, www.fastcompany.com/90201272/
nikes-new-concept-store-feeds-its-neighbors-hypebeast-and-dad-shoe-dreams.
10 Screenshot Diaries
Quick Tips
Tools: Google Drive
Use When: You want to understand users’ everyday experiences in context and over some set period of time.
The Method
In traditional academic research, the diary method is a sort of assignment. After
recruitment, researchers ask their participants to begin journaling, or keeping
a diary, related to the phenomenon being studied. For example, a Media Stud-
ies researcher may want to better understand how college students consume
streaming services like Netflix and Hulu. Using a diary study, they can capture
topics like content viewed, when, and for how long, as well as more personal
and contextual information like why that content was chosen, why that content
was viewed at the specific time, how the participants felt while viewing, and
so on.
Traditionally, entries are written by hand in a notebook or journal pro-
vided by the researcher. The participants then return their diaries, with all of
their entries, once the allotted time period has ended. Today, with virtual tools,
participants may be asked to keep their entries on their computer and email
them at the end of the study. Or, if the researcher wants to see entries in real-
time, they may ask participants to email an entry daily or weekly, or to com-
plete the entry in some shared space like Google Drive.
Employing a diary study makes results much more personal, largely because
the participants are in their typical, natural environments. The only change to
participants’ routines is completing a diary entry at the end of the day or after
they have finished the related task. Because the participants are not in a lab
or sitting face to face with an interviewer, they are more likely to contribute
authentic data. In addition, diary studies are often performed over a longer
period—a week, a month, a year. These longitudinal data help researchers understand
how participants experience something over time. In a weekly study, research-
ers could find trends related to different days of the week. In a longer, maybe
a yearly, study, researchers may find trends related to weather changes or holi-
days celebrated.
In traditional diary studies, the parameters for what and how a person dia-
rizes are up to the researcher. When participants log an entry and what they
include in that entry should be directly related to the study’s goals, questions, or
hypotheses. Some diary studies are open and simply ask participants to jot down
whatever they feel is relevant to a topic. Other diary studies include specific
questions to be answered at specific times during the day or after specific tasks
have been completed. So, in our streaming services example, participants may
be asked to complete a diary entry every night and to respond to four questions:
What did you watch today? For how long did you watch it? When did you watch
it? Why did you choose that content to watch? Or, they may simply be asked to
generally write about their streaming experiences at the end of each week.
This traditional mode of diary studies can certainly be used in UX. You
could ask participants to complete entries about their experiences with an app
or a website, or you could ask them to focus on a specific tool or functionality.
For example, if a research problem has to do with making a particular online
shopping experience better (and more regular, from a business sense), partici-
pants could be asked to write down if they bought anything from that specific
website, what it was, when exactly they bought it, and why. Answering these
questions over weeks or months allows researchers to understand shopping
trends.
Screenshot diaries are a new visual twist on traditional diary studies. The
setup remains the same. As the researcher, you decide the parameters for the
study—again, these should be related to your research problem and should con-
sider the time and resources available to you. However, instead of just asking
participants to journal, you ask the participants to take screenshots throughout
the day and to then caption and annotate these images.
A great way to conduct a screenshot diary study is through Google Drive.
Set up folders for each of your participants (being sure that the privacy is set
so that each participant can only access their own folder). You can put the
prompts and questions right in the documents, and then all your participants
need to do is drag or paste in their screenshots to the folder every day. Google
Docs tools make it easy to annotate and caption an image once it is in the file.
As with traditional diary studies, you can ask your participants to simply take
screenshots whenever something they view as important happens within an
app or a website. Or, you can be more specific, asking that they take a screen-
shot whenever they are frustrated or to take a screenshot of the first thing they
do when they visit a site or open an app. You can even ask them to take screen-
shots of, for example, their work computer desktop at specific times during the
day. Because using Google Drive allows for more organization, you can, if it’s
relevant to your study, ask that each image includes a date and time.
Let’s think about examples related to Instagram. As one instance, perhaps a
research problem is focused on users’ frustrations with targeted content. The
researchers could ask the participants to take a screenshot whenever they expe-
rience frustrating content. They would normally be asked to also include the
date and time, as well as a caption and annotations to explain what is going on.
Figure 10.1 is an example of this type of entry.
Figure 10.1 Screenshot diary entry showing when a user is frustrated with certain
content.
As another example, imagine a research problem has to do with the func-
tionality of updating a password. For this specific task, you could ask your
participants to screenshot each step in the process of updating their password
and to explain what they were trying to accomplish and how they were feeling.
Figure 10.2 shows an example of a screenshot diary entry related to changing
passwords in Instagram.
It is incredibly helpful to give your participants examples so that they
know what you are expecting from them. (Feel free to use the two above
as is or with some changes!) If you want really long captions that are more
like traditional diary entries, provide examples that display that. If you want
lots of annotations with emojis, arrows, and circles, provide those types of
examples.
Be sure that your participants know how to take a screenshot on their device.
Different mobile devices have slight variations in buttons pressed when it
comes to taking screenshots. A PC laptop is slightly different from a Mac.
It’s probably best that participants don’t literally take a shot of their screen
with another camera. Those images are never clear, and it becomes more work
for the participants to get the images into their Google Drive folder. A few tips
before the study begins or a how-to guide in the participants’ Google Drive
folders can go a long way!
Figure 10.2 Screenshot diary entry showing that a user cannot find the password
change setting.
Use When
Like other Empathize stage methods, screenshot diaries are great to use
when beginning a new study, when you’re trying to figure out users’ rela-
tionships with your product. Screenshot diaries give personal and authentic
looks into users’ everyday lives—how they use your website or app and the
way that your product makes them feel as they move through their daily
routines. Screenshot diaries are one of the few UX research methods that
send participants off on their own for the entirety of the study, so participants
have very little contact with you, the researcher. This means that they are
less likely than in other studies to feel like they are being studied and more
likely to share their true feelings, providing you with a very human-centered
perspective.
Case Study
Researchers from the University of California, San Diego, and the Nokia
Research Center partnered to better understand how mobile device users expe-
rience digital history across their personal devices. Even though cloud services
allow data to be saved for later access on any device, previous research had
found that users still re-search and re-access content anew when they move
to a different device. To study how users re-access content across devices, the
researchers used screenshot diaries.1
Fifteen participants were asked to take screenshots when they re-accessed
content on their mobile device. They were also asked to annotate the
screenshots later in the day in a nightly journal, which they were sent daily
reminders to complete. The researchers found interesting patterns related to
where content was first accessed and where it was re-accessed. For example,
when content is first accessed on a computer and then re-accessed on a mobile
device, it is likely because users are showing their friends that content. On
the other hand, when content is first accessed on a mobile device and then
re-accessed on a computer, it is likely that users experienced some technical
barrier that forced them to move to a device with more capabilities. Overall,
planning ahead to efficiently share content across devices is a lot of work and
is rarely considered in the moment, leading users to waste time re-finding
and re-accessing content.
Through their findings, the researchers propose three main solutions. First,
content could be identified as likely to be re-accessed and tagged with loca-
tion and time context so that it can be reshown to the user at a better time.
Second, tools should allow for better automatic sharing of bookmarks and web
history. Third, content should be freed from “application silos.”2 For example,
if someone looks up directions via Google Maps, that content should automati-
cally be saved to the user’s mobile device.
Steps
1. Set up a Google Drive folder that is exclusively for your screenshot diary
study.
2. Decide what you want from participants. How many entries and how
often? Will entries be more open, or do you want to list out specific things
your participants should take screenshots of?
3. Create a study instructions document, as well as a how-to guide includ-
ing sample entries and steps for taking a screenshot on multiple types of
devices.
4. Create folders for each participant. Each folder should contain the docu-
ments created in Step 3.
5. Contact participants, linking them to their personal folder and directing
them to the instructions and how-to guide.
6. Once submissions are complete, code each entry by participant and num-
ber. Then, you can easily move them all to one folder so they are more
easily read but not mixed up. (A small file-organizing sketch follows these
steps.)
7. Look for trends related to what participants took screenshots of or how
they felt about the specific screenshots you asked them to take.
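If you download the participants’ Google Drive folders to your own computer at the end of the study, a short Python script can handle the coding and gathering in Step 6. This is a minimal sketch under assumed, hypothetical conventions: one downloaded subfolder per participant (diary_study/P01/, diary_study/P02/, and so on), with each entry saved as a PNG screenshot.

import shutil
from pathlib import Path

source = Path("diary_study")             # one subfolder per participant (assumed layout)
combined = Path("diary_study_combined")  # all entries, coded, end up here
combined.mkdir(exist_ok=True)

for participant_dir in sorted(source.iterdir()):
    if not participant_dir.is_dir():
        continue
    # Code each entry by participant and number, e.g., P01_03.png
    for n, image in enumerate(sorted(participant_dir.glob("*.png")), start=1):
        coded_name = f"{participant_dir.name}_{n:02d}{image.suffix}"
        shutil.copy(image, combined / coded_name)

Because the participant code is baked into each file name, the entries can sit together in one folder for easy reading without getting mixed up.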
Discussion Questions
1. How are screenshot diaries similar to emotional journey mappings?
How are they different? Come up with a scenario where you would use a
screenshot diary as well as a similar scenario where mapping an emotional
journey would make more sense.
2. Screenshot diaries are usually considered an Empathize method. What
steps can you take to ensure that participants are revealing their emotions
related to experiences within your app or website during a screenshot
diary study?
3. Think about an example Problem Statement that would warrant a week-
long screenshot diary study. Now think about an example Problem State-
ment that would warrant a six-month-long screenshot diary study. Discuss
how these two Problem Statements are different with a small group.
Notes
1. Elizabeth Bales, Timothy Sohn, and Vidya Setlur, “Planning, Apps, and the High-
End SmartPhone: Exploring the Landscape of Modern Cross-Device Reaccess,” in
International Conference on Pervasive Computing, eds. Kent Lyons, Jeffrey High-
tower, and Elaine M. Huang (Berlin: Springer-Verlag, 2011).
2. Ibid., 16.
11 Breakup and Love Letters
Quick Tips
Tools: Paper and pens; Google Docs and Zoom
Use When: You want to understand why users are loyal to your product or why they may leave.
The Method
As we discussed with the Heinz ketchup study in Chapter 1, just asking partici-
pants to talk about how they use something through an interview or a survey
rarely reveals actual day-to-day experiences, delights, and frustrations. Often,
directly asking your study participants to list what they love and what they hate
only gets them talking about surface-level tools and functionalities that they
think you want them to mention or that they may have read about in recent
news. Instead, asking your participants to write either a breakup letter or a love
letter taps into personal feelings that begin to reveal what really keeps users
invested and what may make them sign off for good. Often, participants relive
salient moments that would have gone unmentioned through more traditional
methods like interviewing.
In 2009 Smart Design, a strategic design company, came up with the idea of
having research participants write letters either breaking up with or showing
their love for products and brands. The goal was to create a method that better
tapped into the relationship that users have with products and brands. They
found that asking participants to write letters, addressed to products instead of
people, helped users communicate their loyalties and frustrations in ways that
interviewing couldn’t. In 2010 they piloted this idea, asking some conference
participants to write breakup letters.1 You can watch some clips here: https://vimeo.com/11854531.
Asking participants to write a breakup or love letter usually works best when
you ask them to address the letter to a product or broad experience instead
of one tool or functionality. Therefore, it is best used at the beginning of the
design process, most often during the Empathize stage, or as an exploratory
method for a research question that focuses on participant experiences or emo-
tions. For example, you could simply ask participants to break up with Insta-
gram or write a love letter to a video game. Or, if you are interested in studying
a brand, participants could be asked to break up with Google or Apple. If, as
a Media Studies researcher you are interested in, say, viewers’ experiences
while binging a show on Netflix, you could ask that they write a love letter or
breakup letter that speaks to their experience.
Once you decide on the topic for the breakup or love letter, participants
should be given about ten minutes to complete their letter. Before they begin,
it is often a good idea to provide them with an unrelated example—if you are
studying an app, show them a letter addressed to a food or clothing brand. This
way your participants will know the type of letter you are expecting them to
write but they won’t be swayed to use some of your ideas. Usually, the
researcher decides in advance whether all participants will write a breakup
letter or a love letter. You may decide that half of your participants will write
one type and the other half the other, or you may let each participant choose
which type of letter they would like to write. In any case, best practices include only asking
your participants to write one or the other. Asking each person to write both a
breakup letter and a love letter doesn’t allow them to focus and immerse them-
selves in the emotions involved.
Writing letters can be completed in-person or virtually. If completed in-
person, participants can choose how they would like to write the letter—in
their own notebook, on a piece of paper you provide, on their laptop, etc. If
completed virtually, participants can use tools like Google Docs or email to get
their letters to you. In both cases, you want to collect the letters for analysis.
If possible, an incredibly powerful second step is to ask participants to read
their letters aloud. Being able to observe facial expressions, body language,
and tone adds even more insight into what the participants are thinking and
feeling. This second step is not always possible but if there is any way to listen
and watch your participants read their letters, make it a part of your study.
Writing letters together can act as a sort of focus group. Five to ten partici-
pants can be in the room together, silently write their breakup or love letters
for about ten minutes, and then, one by one, stand up and read their letters. If
holding the study virtually, you can conduct it synchronously, meeting with
five to ten participants over a program like Zoom. Letters can still be written
silently together, and then participants can take turns reading what they wrote.
If synchronous focus groups are not possible, you can have participants read
their letters aloud to just you. Or, if meeting synchronously can’t happen at all,
participants can record themselves reading their letters.
Use When
Breakup and love letters are best used in the early stages of UX research, typi-
cally in the Empathize phase, to tap into salient experiences that speak to why
your users are loyal or why they may leave. These insights are then later help-
ful in constructing your problem tree and eventual Problem Statement.
In Communication and Media Studies research, breakup and love letters are
great tools for understanding specific communication experiences, media con-
tent, or tech devices, uncovering the ways in which diverse groups of people
enjoy them or are frustrated by them. As with the other Empathize methods
covered in this book, breakup and love letters help participants to remem-
ber important details that may go unmentioned in traditional methods like
interviewing.
Consider, for example, these snippets from three love letters addressed to Amazon:

Your pricing is always the best and I have rarely ever had to make returns, you
have been there for me and have been easy to get a hold of whenever I need you.
You are always there when I need you, no matter what time of the day or night
it is!! I know I can rely on you to show me items that I love with prices I can
afford!
You are the love of my life, Amazon, because you know the way to my cheap-
skate heart—amazing prices on products I need and want.
What are some themes you notice right away from reading these three snippets?
In each of these three examples, participants comment on Amazon’s pricing.
The why is fairly clear—people don’t want to spend more money than they
have to. But there is also an implied trust that Amazon is the best price and
perhaps even a hint that participants have compared prices previously and are
fairly certain that Amazon’s pricing is often the cheapest.
So, let’s plug “pricing” into our problem tree trunk in the next stage, Define,
and think about the causes and effects of Amazon’s pricing model. Following
the Design Thinking process, we may eventually find that users would appre-
ciate comparative pricing pulled from other, competing websites, so Amazon
customers could more easily compare prices, products, and shipping fees. Users
were often already doing this, so adding it would mean less work for custom-
ers as well as a brand lift for Amazon—this choice would also show that, as a
company, Amazon is confident they can provide lower prices most of the time.
As you can see, the pricing was not a “problem,” but it could lead to potential
product changes that may keep more users and even usher in new users.
Case Study
In February of 2018, Danielle Dennie and Susie Breier, from Concordia Uni-
versity, used breakup and love letters to study how undergraduate students
experienced their Library Research Skills Tutorial. The site was created to help
students begin their university-level research journey by showing them how
to find relevant information, evaluate it, and use it well. The researchers gave
students 20 minutes to write their letters. Through their analysis, they learned
whether students enjoyed the website (most did not). But, perhaps more importantly,
Dennie and Breier also learned how students feel about the research process
and if their site was fulfilling student needs.2
You can see their PowerPoint presentation here: https://spectrum.library.concordia.ca/985363/1/breier-dennie-forum-2019-final-20190426.pdf
Steps
1. Decide what you will be studying.
a. Is it some experience?
b. Is it a particular product?
2. Decide what types of letters will be written. Remember to keep an open
mind at this point and not let your personal experiences or biases drive this
decision.
a. All love letters?
b. All breakup letters?
c. Half and half?
d. Participants get to choose?
3. Decide if the study will take place in-person or virtually.
a. If in-person, begin organizing a schedule and inviting participants,
aiming for five to ten participants per focus group session.
b. If virtual, decide on the software you want to use. We recommend a
virtual focus group via Zoom and a shared Google Drive folder where
participants can write their letters through Google Docs.
4. Meet with participants and provide necessary instructions and example(s).
a. If in-person, provide pens, pencils, paper, notebooks, and/or laptops.
b. If virtual, walk participants through the Google Folder, making sure
everyone has access and knows how to add their own Google Doc.
As an alternative, you could set up a private Doc for each participant
before the focus group starts, so they can just go into their own Doc
and begin typing.
5. Give participants some time to ask questions and review the example(s).
6. Now it’s writing time. It is best to give participants about ten minutes to
write their letters. This should be ample time to get good content without
allowing for any overthinking.
7. Once the writing is complete, ask each participant, one at a time, to read
their letters to the group. Take note of their emotions, tone, and body lan-
guage. If virtual, you may have to ask that a participant turn on their camera.
In most instances, you can’t force this action. So, if the participant wants to
remain a black square, you can still take note of their auditory cues.
8. Allow participants to chat about each other’s letters. Remember that the
main reason to conduct traditional focus groups is for the interactions that
take place. Be sure to take note of points that participants raise, agree on,
or even argue about.
9. Ask any questions that you feel are relevant and would be helpful to better
understand participants’ experiences, love, and frustrations.
10. Be sure you collect all letters whether they are hard copies or digital
copies.
11. Thank the participants for their time, and end the focus group sessions.
12. Begin your thematic analysis, reading over the letters and related notes
multiple times. Try to formulate two or three themes that are common
across many of the letters. (A small theme-counting sketch follows these
steps.)
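Thematic analysis is a close-reading task and should not be automated away, but once you have drafted candidate themes in Step 12, a quick tally can check how common each theme actually is across the letters. Below is a minimal sketch in Python, assuming the letters have been typed up as plain text files in a letters/ folder; the themes and keywords are illustrative, echoing the Amazon snippets earlier in this chapter.

from pathlib import Path

# Candidate themes and the keywords that signal them (illustrative only).
themes = {
    "pricing": ["price", "pricing", "cheap", "afford"],
    "reliability": ["rely", "always there", "there for me"],
}

letters = [p.read_text(encoding="utf-8").lower() for p in Path("letters").glob("*.txt")]

for theme, keywords in themes.items():
    hits = sum(any(kw in letter for kw in keywords) for letter in letters)
    print(f"{theme}: mentioned in {hits} of {len(letters)} letters")

A count like this only confirms prevalence; the why behind each theme still comes from reading the letters and watching participants read them aloud.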
Discussion Questions
1. Have you ever written a love letter or breakup letter to a person? How do
you think this method is similar to, and different from, writing a letter to a
partner or ex?
2. Try this method with a group. Pick a product and half of you write a love
letter while the other half write a breakup letter. What was difficult? What
was easy? How would you use what you learned by being a participant to
help guide your participants if you were to employ this method?
3. What could you do if, when asked to write a love letter, a participant states
that they cannot think of one thing positive to write. Or, what if, when
asked to write a breakup letter, a participant says that they cannot think of
one negative thing to write?
Notes
1. “Smart Design: The Breakup Letter,” vimeo, 2010, https://vimeo.com/11854531.
2. Danielle Dennie and Susie Breier, “The Wind Beneath My Wings: Falling in and
Out of Love with an Online Library Research Skills Tutorial,” Concordia Library,
2019, https://spectrum.library.concordia.ca/985363/1/breier-dennie-forum-2019-final-20190426.pdf.
12 Contextual Inquiry
Quick Tips
Tools: Recorder (camera ideally), note-taking tools (paper and pen); Zoom or other video conferencing program (for virtual sessions)
Use When: You want to understand how a person moves through a particular routine or flow in their natural environment, typically a work environment.
The Method
As the name suggests, contextual inquiry consists of observations in a natu-
ral environment (in context) while also asking questions of participants about
what they are doing and why (inquiry). In this way, contextual inquiry com-
bines the two most commonly used qualitative methods: observations and
interviews. Importantly, in the UX field, contextual inquiry is seen as a co-
creation method, so it is less about the interviewer asking questions that the
participant answers and the researcher then interprets, and more about the inter-
viewer and interviewee interpreting the task together. Contextual inquiry is
particularly useful for understanding how people conduct activities in the way
they normally would, such as their typical behaviors and practices in a work
environment, and for thoroughly understanding the point of view of the user.
Hugh Beyer and Karen Holtzblatt developed the contextual inquiry method
as a way to better understand workflows and processes in organizations. They
came up with this method to overcome the limitations of other qualitative
methods, such as interviews and qualitative surveys, which collect after-the-
fact, attitudinal data (what people think), but not in-the-moment or behavioral
data (what people do in the moment).1
A significant strength of contextual inquiry is that it does not only rely on a
participant’s recall of their actions (which is subject to errors and bias), because
it also includes observations of in-the-moment activities. Contextual inquiry
also helps the researcher understand the reasoning and motivations for certain
actions during an activity, by including interviews with participants as they
work. Not only is contextual inquiry great for understanding activities in the
moment, it also allows researchers a glimpse into naturally occurring activi-
ties, unlike the artificial setting of a lab. Because the activities in contextual
inquiry unfold naturally, the researcher also sees all the unconscious, habitual
parts of a workflow that a user might not be aware of enough to self-report. For
example, a researcher watching someone work in their word processor
can see whether the user saves their progress using mouse clicks in the menu bar or
keyboard shortcuts, as saving work regularly is a largely unconscious activity
for those who use word processors daily. Observations like this
can guide the development of intuitive software interfaces.
Contextual inquiry, as is the case with other qualitative methods, is most
powerful when a researcher observes and interviews multiple people working
in the same context on similar tasks and then looks for trends or patterns across
these interactions. Contextual inquiry is great for understanding the in-depth
thinking processes of a user, as well as the unconscious habits and structures
they adhere to in a workflow. It is less relevant in scenarios where you’re more
interested in how a system works and whether it is generally usable.
This method is perfect for centering humans in the design process, align-
ing well with the Design Thinking Mindset, because it assumes that partici-
pants are the subject matter experts in their own lives and that the researcher
can learn from observing and asking them questions. Another way of think-
ing about contextual inquiry is as a craftsperson/master-apprenticeship model,
with the participant being the craftsperson sharing the knowledge and experi-
ence with their work, while the researcher is the apprentice observing and ask-
ing questions to learn more about it.
Contextual inquiry consists of four grounding principles: context (observe and interview in the user’s own environment, while they do their real work), partnership (the researcher and participant collaborate to understand the work, in the apprentice-craftsperson spirit described above), interpretation (the researcher shares their interpretations with the participant, who can confirm or correct them), and focus (the session is guided by a clearly defined research scope, so the researcher knows what to pay attention to).
Use When
Contextual inquiry is typically used during the Empathize stage in UX, to
understand user workflows, processes, and interactions with various digital
products in a real-life setting. Insights gleaned from contextual inquiry can
help designers design more intuitive products that fit into how people already
work and so make people’s jobs easier.
Contextual inquiry can be categorized as a type of field research or eth-
nography when thinking about it from a more academic research sense, as it
combines observing and interviewing in a person’s natural environment. It is
useful for understanding Media and Communication Studies settings, such as
newsrooms, TV stations, and classrooms.
Case Study
Bryan Dosono, Jordan Hayes, and Yang Wang, all Syracuse University
researchers, employed contextual inquiry to explore the experiences of people
with visual impairments during the authentication processes of logging into
computers, mobile phones, and websites. The researchers visited their partici-
pants, all people with varying visual impairments, where they usually use their
devices—their homes, workplaces, public libraries, etc.3
During their contextual inquiries, the researchers watched as their partici-
pants attempted to find login fields and enter usernames and passwords on
programs like email, banking websites, social media, and mobile phone oper-
ating systems. Most participants used assistive technology, such as the screen
readers JAWS and ZoomText (screen readers provide speech output for the
elements of a web page). After analyzing their contextual inquiry findings,
Dosono et al. found that their participants had issues finding where to log in
and how to authenticate once a username and password were entered. Screen
readers, like JAWS and ZoomText, read from top to bottom. Most programs
contain a lot of text and graphic content at the top of the screen, which means it
takes users of JAWS and ZoomText a long time to find where to actually enter
their credentials. Even when they did find where to log in, the participants were
often unsure if authentication was successful.4
A key takeaway is the limitations that the researchers found with assistive
techs like JAWS and ZoomText. The programs mask passwords, lack feedback
when entering case-sensitive passwords, provide little output regarding error
messages, and make password recovery difficult for those with vision impair-
ments. On the basis of these findings, Dosono et al. provided four suggestions.
First, they implored digital designers to improve the accessibility of the login
areas, moving them to more obvious positions. Second, they suggested more
obvious confirmation messages when successful authentication had occurred.
Third, the researchers argued that JAWS, ZoomText, and other assistive tech-
nologies should include keyboard shortcuts that help users jump directly to the
login areas. Fourth, Dosono et al. argued that digital design standards should
include consistent terminology for labeling login and authentication areas.5
As you can see from this example, just by meeting with 12 digital users
in the places where they normally use their devices, the researchers uncovered rich data
using contextual inquiry. They were able to really experience what users with
visual impairments experience, noting many snags and frustrations that likely
would have been left out of traditional interviews or qualitative surveys. The
researchers were able to produce data-driven suggestions that would not only
help visually impaired users but likely make the interfaces more user-friendly
for other users as well.
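If your team also writes front-end code, suggestions like Dosono et al.’s can even be checked programmatically. Below is a minimal, illustrative Python sketch (our own, not Dosono et al.’s method; the sample HTML is invented) that flags form inputs lacking both an associated label and an aria-label, the kind of gap that leaves screen reader users guessing what a field is for.

from html.parser import HTMLParser

class LoginFieldAuditor(HTMLParser):
    # Collects <label for="..."> targets and visible <input> fields so we can
    # flag inputs that have neither an associated label nor an aria-label.
    def __init__(self):
        super().__init__()
        self.labeled_ids = set()
        self.inputs = []  # (id, aria-label) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.labeled_ids.add(attrs["for"])
        elif tag == "input" and attrs.get("type") not in ("hidden", "submit"):
            self.inputs.append((attrs.get("id"), attrs.get("aria-label")))

# A hypothetical login form: the password field has no label at all.
page = """
<form>
  <label for="user">Username</label>
  <input id="user" type="text">
  <input id="pass" type="password">
</form>
"""

auditor = LoginFieldAuditor()
auditor.feed(page)
for field_id, aria in auditor.inputs:
    if field_id not in auditor.labeled_ids and not aria:
        print(f"Unlabeled input: id={field_id!r}")  # flags the password field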
Steps
1. Decide what activities and who you want to observe (define your focus).
2. Decide if you will conduct the session in-person or virtually.
a. If in-person, schedule a time (or times) for the contextual inquiry with
the participants in their natural environment. A contextual inquiry
can be as short as a couple of hours or as long as a couple of weeks
of observing different activities. Make sure you find a spot in the
physical space where you can be unobtrusive but also where you can
observe the action.
b. If virtual, schedule a time (or times) to meet over a service like Zoom.
The participants should still be in their natural environments, but you
will be “with them” on their laptop, desktop computer, phone, or
tablet.
3. Set up any recording equipment (if you can, record from multiple angles).
If you are conducting the study virtually, you can use Zoom’s recording
feature or ask the participants to do a “laptop hug,” as explained earlier in
this chapter.
4. Start off the session by breaking the ice—introduce yourself and build rap-
port with the participant.
5. Next, clarify what the purpose of the session is with the participant, telling
them what exactly you will be observing and detailing how the session
will proceed. Make sure to explain that you’ll be largely observing and
asking questions as they arise. Also clarify with the participant when it’s
OK to interrupt and when it’s not.
6. Start the contextual inquiry. The majority of your time should be spent
watching the person go about their activities. Make sure you maintain
focus on understanding what you’re seeing to address the Problem State-
ment guiding your research.
7. Stop the observation, and ask questions when you don’t understand what
the participant is doing or why. Ask them to walk you through their actions
and motivations and to provide lots of details!
8. Continue the contextual inquiry with alternating periods of observation
and periods of discussion. Make sure to take copious notes!
a. When you make interpretations about a particular part of the work
process, validate these interpretations with your participant (remem-
ber the partnership principle)—but don’t overwhelm them with
interpretations in the moment; you can confirm your interpretations
toward the end of the session. Providing interpretations continuously
and probing in particular directions during the session can cause the
participant to change their behavior to please you.
b. When observing and asking questions, get information on whether
what you’re observing is the standard way of doing things or if it’s an
anomaly (and if it’s an anomaly, ask why it is occurring).
9. At the end, summarize the session and share your interpretations with the
participant. Ask them to weigh in on these—they might need to clarify or
correct some of your interpretations.
Discussion Questions
1. Think about some specific scenarios of when you would conduct a con-
textual inquiry rather than an interview. Why would a contextual inquiry
make more sense here? In what scenarios would a contextual inquiry not
make sense?
2. How could you make sure that you are unobtrusive in your observations in
a physical space? Provide some practical tips.
3. What may be better captured in-person during a contextual inquiry than
virtually? Can you think of any examples of data that would actually be
captured better during a virtual session and that may be missed during an
in-person session?
Notes
1. Karen Holtzblatt and Hugh Beyer, Contextual Design: Defining Customer-Centered
Systems (Amsterdam: Elsevier, 1997).
2. Kim Salazar, “Contextual Inquiry: Inspire Design by Observing and Interviewing
Users in Their Context,” Nielsen Norman Group, December 6, 2020, www.nngroup.
com/articles/contextual-inquiry/.
3. Bryan Dosono, Jordan Hayes, and Yang Wang, “ ‘I’m Stuck!’: A Contextual Inquiry of People with Visual Impairments in Authentication,” in Eleventh Symposium on Usable Privacy and Security (SOUPS 2015), 2015, https://www.usenix.org/conference/soups2015/proceedings/presentation/dosono.
4. Ibid.
5. Ibid.
13 Personas and Scenarios
Personas are all about designing with a specific user in mind. Personas are fic-
tional representations or profiles of different target users that a researcher cre-
ates based on information gathered during the Empathize stage. So, personas
are not a data collection method but more a data analysis method—they are
a way of organizing your Empathize findings to be used throughout the itera-
tive design process. Personas, also called model characters, should represent
an “ideal” user and should help you categorize possible different user types in
simplistic ways, to aid design.1
Scenarios work in tandem with personas. They are fictional narratives of
how, when, and in what context a typical user would visit your digital space.
Scenarios can be either goal/task-based or more elaborated, delving deep into
the ideal user’s motivations and circumstances. Scenarios can be used with
personas to help UX designers better understand the end-users that they’re
designing for—it’s easier to design for a name, face, personality, and story than
for a generic “target user”! Think of personas and scenarios in the language of
filmmaking: Where the persona is the main character in the story, the scenario
is a particular (typical) scene in the movie (one in which the character interacts
with a digital product).2
Quick Tips
Tools: Pen, paper, software for document creation
Use When: You want to categorize and organize user data collected during the Empathize stage into different user types to shape your design.
The Method
Personas are fictional representations or “hypothetical archetypes” of poten-
tial users that can help guide design decisions. The concept of personas was
developed in the 1980s by Alan Cooper, a software designer. After conducting
some interviews with his target users, Cooper decided to play-act as imag-
ined characters loosely based on the user information he had gathered when
brainstorming the development of new software. He found this role-playing an
effective way of putting himself in the user’s mindset and representing differ-
ent points of view in the design process.3 And, personas were born! For more
on Cooper’s fascinating journey with personas, check out his essay (it’s in the
references of this chapter).4
There are different ways of developing personas, depending on the needs of
a project. Traditional personas are in-depth, fleshed-out profiles that categorize
different user types and that answer the question “who are we designing for?”
(as well as connected questions, such as, how and in what context will these
users be interacting with our product?). Personas can be created in two ways:
from secondary research (existing documents, previous research findings) and
from Empathize research for a specific project. Basically, before you develop
personas, you need to know something about your targeted users—be it from
published studies and asking subject matter experts or from speaking to pos-
sible users themselves (in the form of interviews, focus groups, screenshot
diaries, etc.).
A persona is an ideal, reliable, and realistic but fictional representation of a
specific target audience segment. Personas are archetypes of users that highlight their key characteristics, and they should represent the different types of possible users who might be interested in your product. Personas are written by UX researchers and designers as one- to two-page documents that look like a biography or profile page. Personas typically include a name and photo, demographic details such as age and occupation, goals and needs, attitudes and skills, and frustrations or pain points.
Personas should include rich fictional details, such as quotes and specific
details that are relevant to the context of the digital product. For example, per-
sonas developed for a dating app should focus on relationship characteristics,
such as what a person wants from a partner, whereas personas developed for a
health app might focus on things like nutrition and exercise habits.
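Some teams also keep persona templates as structured data alongside their documents. Here is a minimal Python sketch of such a template; the field names are typical persona details, and the values are invented, loosely based on the Keisha persona for Cuppa Joe Quick Coffee.

from dataclasses import dataclass, field

@dataclass
class Persona:
    # Typical persona template fields; extend with context-specific details.
    name: str
    age: int
    occupation: str
    goals: list[str] = field(default_factory=list)
    frustrations: list[str] = field(default_factory=list)
    quote: str = ""

keisha = Persona(
    name="Keisha",
    age=27,                      # invented for illustration
    occupation="Office worker",  # invented for illustration
    goals=["Order coffee online with minimal effort, for pickup on the walk to work"],
    frustrations=["Rushed mornings", "Roommate uses up the shared coffee beans"],
    quote="I'm always running late, and I need my coffee.",
)
print(f"{keisha.name}: {keisha.quote}")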
An important thing to remember about personas is that they should not
be “made up” but should be based on some actual data about your possible
users, whether it’s secondary research or data you collect yourself. In addi-
tion, personas should be connected to a particular context, not just be random
characterizations.
Figure 13.1 is an example of a persona related to a (fictional) new coffee
delivery app, Cuppa Joe Quick Coffee, that delivers coffee quickly to specific
pick-up points.
Personas like this give a lot of insight into our target users. Because complex
personas provide a lot of information and can take a long time to develop, it is
sometimes easier to create lean personas, which are essentially more concise
versions. These personas present only the most pertinent information and are
developed from a few (five to ten) quick user interviews conducted just for this
purpose, rather than lots of background and previous user research.
Some questions to ask in lean persona interviews include who the user is, what they are trying to accomplish with the product, how they currently accomplish it, and what frustrates them along the way.
If you don’t have time to do even a few interviews (as is often the case!),
you can also do some online research, validate sample personas with people
in your network (students, co-workers, family, etc.) who match the profile,
and check out competitors to understand better what types of people they are
targeting. Sometimes personas are created based on assumptions held by the team rather than on research; treat these as provisional until you can check them against real user data. To see how a scenario works alongside a persona, consider the following example:
Keisha (our persona from earlier in this chapter) runs late to work a lot.
She has trouble sleeping and a hard time waking up, which means she
often rushes through her morning routine. To make matters worse, her
roommate Riley frequently uses up the coffee beans that they share and
rarely replaces the bag. Keisha needs coffee so she has enough energy
to get through her busy day. But she’s typically running late already
and has not much time to spare, so she is looking for the most efficient
way of getting her caffeine fix. She is looking for ways to order coffee
regularly online, with minimal effort and time, to pick up on her walk
to work. She sees Cuppa Joe Quick Coffee advertised on her Instagram
and decides to explore the app, to see how it can help her mornings run
more smoothly.
Case Study
The UX team for Spotify, the music streaming app, developed personas in
order to better understand their current and potential users. An app like Spotify
is technically “for everyone”—all types of people listen to music—so using
personas can really help designers to focus on the needs of different listen-
ers (to differentiate mentally between users), and not have to design for some
ambiguous mass audience. Because personas are an internal method, Spotify
does not share the actual personas themselves (with all the pictures and char-
acteristics) outside of the company. So even though we can’t provide Spotify’s
full persona details here, we still see value in sharing the persona development
process and how it was used by the company.5
The team at Spotify started the persona development process by analyz-
ing their current user base. The UX researchers conducted diary studies and
contextual inquiries with listeners of different ages, lifestyles, etc., to better
understand why people listened to music and their needs. Early on, the team
realized that the reasons that people listen to music are actually pretty similar
across groups (e.g., for entertainment, to kill boredom) but that there were key
differences around device use, the contexts (how and where people listen to
music), and whether people were prepared to pay for music or not.
In the next phase of the project, the team decided to home in on better under-
standing how people listen to music together. They went through the user data
again, specifically focusing on contexts where people listened together. They
then created five personas, each representing a person who listened to music
with others in different ways and spaces, such as at parties and on commutes.
The team picked genders, names, and characteristics of the personas randomly,
based on the range of actual users from user research, and created cartoon
pictures for each persona. Next, the research team shared the personas across
the company so that different teams (design, marketing, sales, etc.) could refer
to them to guide their work in specific areas. Not only did the team share
the personas as static images but they also created an interactive website, ran
workshops, and even created a card game so that everyone at Spotify could
really understand and relate to the personas in their work.
Steps
1. Collect user data, whether through your own or secondary research. This
step includes conducting interviews and observations and parsing through
existing documents. It’s also a good idea to talk to other people in the
organization to put together everything that they know about your users.
If you’re working on improving an existing product, you can also rely on
existing analytics (such as web analytics—information about existing cus-
tomer behavior on your website, such as number of clicks or time spent on
the website) to guide your persona development. This step is really impor-
tant, so you don’t simply create stereotypes of users based on common
knowledge. You should collect some real data as a basis for your personas.
2. Once you have gathered user data, you’re ready to analyze. Analyzing
requires going through the data (interview transcripts, observation notes,
analytics, etc.) and looking for patterns, particularly looking for patterns
that highlight different behaviors across all the users. Repeated behavioral
patterns should form the basis of each of your personas.
3. The next step is actually creating the personas themselves. It’s helpful
to start with a persona template that includes all the details you want to
flesh out per persona, such as name, age, occupation, attitudes, skills, etc.
(Remember to make your persona context-specific, that is, connected to
the digital product you are interested in designing or improving.) Quick
tip: Don’t base your personas on people you know (use fake names and
photos), and also don’t simply write out the characteristics of one specific
user to serve as your persona. Personas should be based on a combination
and distillation of information from multiple real users.
4. After you have created personas, you need to create scenarios for them.
Personas become much more powerful when they have a story behind
them.
5. Once you’ve created your personas, you should “socialize” them. This
means you share your personas and scenarios with the rest of the team,
so they can start talking about them and designing and developing the
product with these “ideal users” in mind. Personas are particularly powerful
in that they provide a tangible focus for designers and engineers as they
go about their work—everyone works with an actual (if fictional) user in
mind, not just a nameless, faceless audience segment.
Discussion Questions
1. Think of your favorite website or app and how you navigate through it.
Write a persona based on your behavior on the app, as well as a scenario
of typical use, focused on improving this app in some way that you would
like to see. Interview some of your friends who also use this app. What
other personas and scenarios could you create based on their answers?
2. How might you use personas in creative ways as activities during interviews or focus groups? Think particularly about diversity,
equity, and inclusion and the value of getting people to empathize with
others in your research.
Notes
1. Rikke Friis Dam and Teo Yu Siang, “Personas: A Simple Introduction,” Interaction
Design Foundation, January 2021, www.interaction-design.org/literature/article/
personas-why-and-how-you-should-use-them.
2. “Scenarios,” Usability.gov, accessed July 18, 2021, www.usability.gov/how-to-and-
tools/methods/scenarios.html.
3. Rikke Friis Dam and Teo Yu Siang, “Personas.”
4. Alan Cooper, “The Long Road to Inventing Design Personas,” OneZero, Febru-
ary 4, 2020, https://onezero.medium.com/in-1983-i-created-secret-weapons-for-
interactive-design-d154eb8cfd58.
5. Mady Torres de Souza, Olga Hording, and Sohit Karol, “The Story of Spotify Per-
sonas,” Spotify Design, March 2019, https://spotify.design/article/the-story-of-s
potify-personas.
14 Problem Trees
Remember that, as a UX researcher, you are worried not only about desirability
but also about feasibility and viability. Your studies and recommendations can-
not exist in a vacuum; they should always be linked to the culture and current
reality of your app or website, including the company’s mission, so it’s a good
idea to consider these factors when planning a study. A problem tree takes
preliminary data from the Empathize stage, as well as previous research, and
assists you with recognizing the core issue, outlining the causes and effects,
and constructing a clear and concise Problem Statement.
Quick Tips
Tools: PowerPoint, Word, Illustrator, Gimp, Google Jamboard, Mural
Use When: You are attempting to craft your Problem Statement and need to define the causes and effects of your problem.
The Method
Problem trees are visualizations of a core problem with your product. The
“trunk” of the tree represents the core issue, the “roots” are the causes of that
issue, and the “branches” are the potential effects. Essentially, you are mapping
the core problem that you found via Empathize methods and the causes and
effects of that problem. Some causes and effects may actually be found through
conducting analysis of participants’ responses during user research. You will
also apply your general/common knowledge to add other possible causes and
effects. Creating a problem tree helps you to break down the problem into
smaller, more manageable pieces.1
It is extremely likely that you will actually formulate more than one Prob-
lem Statement. While at first this may seem like bad news because it means
that you have to “pick” one or that you have just given yourself more work,
in reality, it is a big positive. As a UX researcher, you want to constantly have
multiple projects ready to go that provide the rationale for why your job is
important and necessary. Also, of course, it’s your job as the UX researcher
to be proactive, always suggesting small changes to the app or website before
users become overly frustrated. Depending on the size of your team, you may
be working on several projects at once, or you may work on one Problem
Statement at a time.
After analyzing your Empathize results, decide what the problem or issue
is that needs to be addressed. If you find more than one large issue, you can
construct more than one problem tree! The core problem can be broad at this
point. The process of building the tree will help to narrow your actual research
problem (or a more specific Problem Statement). Once you have your core
trunk, you will map the roots (causes) and branches (effects). Again, these
will come from both your participants and your knowledge of the product.
You don’t necessarily have to map the problem, causes, and effects as a literal
tree. But, just be sure the way you map it shows a clear cause and effect visu-
alization. It is also common to have more than one layer of causes and effects.
One cause may lead to another, secondary cause. And, one effect may lead to
another, secondary, effect.
Here is a sample problem tree for a study abroad website. Let’s imagine that,
through some Empathize research, you find that participants are having issues
moving through the mobile version of the site—notice this is the “trunk” or
core issue.
What do you notice after reading through this problem tree? First, take note
that the core issue is directly driven by data collected during the Empathize
stage. The problem is still vague and messy—it would be difficult to conduct
an effective Ideate method at this point. The roots tap into different pieces of
the website, including coding, design, and ethical considerations. The possible
effects not only list that users may be turned off and not return but also speak
to larger branding issues and ethical issues, including ostracization caused by
lack of inclusivity.
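If it helps to see the structure behind the drawing, here is a minimal Python sketch of this problem tree as plain data; the wording of the causes and effects is paraphrased from the description above, and a real tree would likely include secondary layers as well.

# The study abroad problem tree as plain data (illustrative wording only).
problem_tree = {
    "trunk": "Users have issues moving through the mobile version of the site",
    "roots": [  # causes
        "Code does not render well on small screens",
        "Design buries key pages several taps deep",
        "Content overlooks inclusive design considerations",
    ],
    "branches": [  # effects
        "Users are turned off and do not return",
        "Larger branding issues for the program",
        "Some users feel ostracized by the lack of inclusivity",
    ],
}

print("CORE ISSUE:", problem_tree["trunk"])
for cause in problem_tree["roots"]:
    print("  CAUSE:  ", cause)
for effect in problem_tree["branches"]:
    print("  EFFECT: ", effect)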
As you read over some Ideate methods in later chapters, keep this problem
tree in mind. What could be some Problem Statements that could come out of
this tree? What would make for good Ideate methods to use on this project?
Are there any pieces that may warrant looping back and conducting another
Empathize method?
A related method is task analysis, in which you create a diagram with all the steps and substeps that a user needs to take in order to
complete some goal (e.g., all the steps needed to “post a picture” on Insta-
gram). The UX team can then study this diagram to inform their design, figur-
ing out ways to make the task sequence easier and more intuitive for the user.
For instance, some of the listed steps might need additional user support built
into them or some steps can be eliminated completely by designing a more
intuitive product. Once a task analysis is complete, you can validate it—that is,
check whether all the steps you laid out are indeed the steps that users take—
through usability testing (see Chapter 19).
For example, in a very basic task analysis of posting a picture on Instagram,
the steps might go something like this: open the app, tap the create (“+”) icon, select a photo, apply any edits or filters, write a caption and add hashtags, and tap “Share.”
If you looked at the previous discussion to help define your Problem State-
ment, you might decide to focus on eliminating some steps to make the flow
more efficient. So, for example, you might decide to add a feature that would
automatically generate some possible hashtags for a picture, using AI to cate-
gorize the picture. To learn more about task analysis, check out Usability Body
of Knowledge’s page (www.usabilitybok.org/task-analysis) or Interaction Design Foundation’s page (www.interaction-design.org/literature/article/task-analysis-a-ux-designer-s-best-friend). Now, back to problem trees!
Use When
Problem trees are best constructed when some preliminary research is avail-
able, and you are ready to decide on your Problem Statement. Nearly every UX
project should use a problem tree (or a similar method, like a task analysis) to
define and organize problems with the app or website. This is why it is part of
the Define stage of the Design Thinking Mindset. It should be used fairly early
on, but some preliminary research is necessary to ensure that your eventual
Problem Statement is relevant, timely, and user-centered.
A problem tree can also be used to help sort through and organize possible
academic research topics. Often, students begin with research questions that
are too broad or that have already been heavily covered. Using a problem tree
in tandem with writing a literature review can help you arrive at a clear and concise Research Question. For example, if you have an idea for your research
problem, begin by putting that into your problem tree as the “trunk” or core
issue. This can be something broad, perhaps a trend you have noticed in your
personal daily life, a friend’s experience, or even something you read about or
saw on the news. Now, as you conduct a search for relevant literature, you can
start to add roots and branches, or causes and effects, respectively, to your tree.
By picking one or two of these causes or effects, you will be able to construct
a targeted and important Research Question!
What You Need
To begin, you will need preliminary data, most likely from Empathize method
results. To construct the actual tree, you can use tools as simple as a paper and
pencil or a whiteboard. A whiteboard is especially helpful in the earlier stages
of problem tree creation because it allows for constant editing and smoother
collaboration.
To formalize the tree, you can use simple digital tools like Microsoft Word
and PowerPoint. These programs allow for the creation of shapes, arrows, and
text boxes, which is all you really need for a problem tree. If you are comfort-
able with software, and want to get a little fancier, you can use programs like
Adobe Illustrator or Gimp. If you need to virtually collaborate to create your
problem tree, Google’s Jamboard or Mural are both great collaborative white-
board tools.
Good Problem Statements usually come out of a cause (root) or effect (branch).
Because the core issue, or the trunk, is often too broad, the problem tree visu-
alization helps us to link causes and effects and hone our research problem. Your
final Problem Statement will probably be similar to your trunk but made more
precise through at least one cause, one effect, or some combination.
Case Study
It’s difficult to find a published problem tree because it is such an internal
method. Instead, we will share another sample problem tree. In this problem
tree, imagine you are interested in studying Pinterest. You ask participants,
during the Empathize stage, to write breakup letters. After analyzing that data,
you find that users were experiencing findability issues—they couldn’t find
specific content on the platform. As you can see in Figure 14.2, listing the
causes and potential effects breaks down the issue into more digestible chunks.
Steps
1. Complete your analysis of Empathize method results.
2. Decide what your core issue is. It is OK if it is broad at this point.
3. Make this core issue the “trunk” of your tree.
4. Use your participants’ contributions, as well as your own knowledge, to
decide what the causes of this issue are.
5. Map these causes as the “roots” of your tree.
6. Again, use your participants’ contributions as well as your knowledge to
decide what the possible effects of this issue are.
7. Map these effects as the “branches” of your tree.
8. Consider if there are secondary roots and branches. These are causes of causes and effects of effects!
9. Map these secondary roots and branches as well.
10. Edit your tree to see if you can match up some roots and effects, noticing
possible linear processes.
11. Analyze your tree to recognize what potential studies may solve more than
one issue!
12. Finally, decide on a concise Problem Statement to tackle.
Discussion Questions
1. How is the process of creating a Problem Statement different from con-
structing a Research Question? How is it similar?
2. Try drawing a problem tree for some process at your college or university.
For example, maybe there is limited parking for students. Or, maybe the
food on campus just isn’t great. What are some potential causes and effects
for this problem? Do any solutions jump out that were not obvious before?
Notes
1. e.g., Ingie Hovland, Successful Communication: A Toolkit for Researchers and Civil
Society Organisations (London: Overseas Development Institute, October 2005),
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.1057&rep=rep1&type=pdf.
2. See these and more discussion questions in: Hovland, Successful Communication, 13.
15 Cognitive Mapping
Quick Tips
Tools: Paper and pencil; Zoom, Skype, Google Jamboard
Use When: You want a visualization of what users remember about an app or website, their interpretation of the space, or how they would change an interface or process, so you can incorporate these perspectives into your design.
The Method
It is now understood, more than ever, that people learn in different
ways. Guidelines like Universal Design for Learning (UDL) focus on the vary-
ing ways people best engage with, represent, and express new information.1
In particular, guides strongly suggest that people are offered multiple modes
of expressing learned content, including traditional writing, creative essays,
performances, drawings, and videos. These ideas about learning are not only
beneficial in education, they are also extremely helpful in qualitative research.
Traditional academic methods in Communication and Media Studies include
surveys, focus groups, or one-on-one interviews. These methods ask people to
either talk about some experience or write about it. But, these types of outlets
are not always the best ways for people to express their emotions, feelings,
needs, desires, or frustrations. Cognitive mapping is a creative approach that
instead asks users to create sketches, drawings, or mappings of certain con-
cepts or products. In essence, as the researcher, you are asking participants to
create mental maps.
The original idea for cognitive mapping can be traced back to 1948, when
psychologist Edward Tolman was experimenting with brain mapping, mazes,
and rats, showing how rats create mental maps of their environments.2 In the late 1980s and early 1990s, cognitive mapping started to become a popular method in the social and behavioral sciences.3 For instance, cognitive mappings are popular in geography and social work. Researchers
ask their participants to draw physical spaces like city blocks and the inside
of buildings to better understand how people move through physical spaces,
what they remember (and thus what is important to them), and how included or excluded they feel in certain areas.4 Interestingly, cognitive mapping works well as a method for apps and websites because digital spaces are similar to phys-
ical spaces. Users “move” through digital spaces, utilizing different tools and
feeling different emotions based on who they are, their cultural backgrounds,
and their relevant goals as they go.
Cognitive mapping is also a great method to really involve your participants
in the Ideate process. Instead of asking participants to look at a current inter-
face or process and then comment on what they may change or where they feel
excluded, you can provide a blank slate. Beyond asking participants to draw
what they remember from a current version of an app or a website, you can also
ask them to draw how they wish an interface looked or how a process is organ-
ized. Without the current version of your product to act as a guide, participants
are more likely to provide authentic feedback that truly captures what they
would like to see, not what they think the app is supposed to be doing based
on the current version.
In addition, you can also ask that participants draw a different, but related,
product that they like better. For example, perhaps you work for Pandora, the
radio station app, and you have found that users think the way stations are
organized is confusing. You may ask that participants draw a different music
streaming app that they believe provides a more intuitive organizational expe-
rience. From these results, you are not looking for exact changes to your prod-
uct to be perfectly visualized, but rather you want to understand what it is that
another product provides that yours does not.
It is up to you to decide how much or how little guidance you will give to your
participants. This is, of course, guided by your Problem Statement or Research
Question and the goals you are trying to reach with your study. You can leave
it up to them to draw, sketch, map, or chart however they please. Or, you can
specifically ask for a sketch or flowchart.
As with many methods discussed in this book, a large part of a great cog-
nitive mapping study is asking good questions. Cognitive mapping can be
conducted virtually or in-person. But, in both cases we recommend a synchro-
nous meeting. As your participant draws, you can ask that they think aloud,
explaining what they are including and why. Then, after they have completed
each drawing, you can ask them questions to further explain their mappings,
taking notes or directly annotating their images.
In-person cognitive mappings can be done simply with sheets of paper and
pencils. You can take notes while participants walk you through their drawings
as well as annotate their mappings once they have completed each one and
handed it to you. Virtually, tools like Google Jamboard or Mural (both digital
whiteboards) paired with video communication software like Zoom work well.
Speaking with your participants through Zoom, you can view a shared Google
Jamboard and watch as the participants “draw.” You can then easily save these
boards for later analysis, including the notes and annotations you added.
Use When
For this textbook, we are suggesting cognitive maps be used in the Ideate
stage. Cognitive maps can also be an Empathize method, when you ask partici-
pants to draw current systems or processes. But because mappings can provide
creative looks into how participants would change a product or help solve a
research problem, we like to think of them as an Ideate method. Remember,
as UX researchers, we are not the users. Thus, cognitive maps provide unique
insight when working to decide what a prototype will include. Cognitive maps
are best used in the Ideate stage with prompts like “draw what you would change about this product” or “draw another app/website that includes a simi-
lar feature but does it better.”
However, cognitive maps can also be used in the Empathize stage. Instead
of asking participants to draw potential solutions or changes, as discussed,
you can instead use cognitive mapping to tap into how users feel about your
product currently. At a macro-level, you can ask that participants just “draw”
your product. This straightforward prompt offers insight into how users view
your website or app on two levels. The first is through tools and functionali-
ties—noting what your participants include and omit speaks to what they find
important and memorable. The second is more symbolic—what users include
in their mappings can be analyzed to provide a deeper understanding of their
thoughts on the space.
As one example, take some time to view the two sketches of Twitter in Figure 15.1. Let’s imagine you provide participants with the simple prompt: “Draw Twitter.” What do you notice about the two contributions? How are they similar? How are they different?
In the two sample sketches in the figure, it is clear that both include
a few of the same elements—which are the main pieces of Twitter—tweets
and hashtags. But we can also see that the first is mainly focused on content,
not giving much attention at all in their drawing to the actual structural or
aesthetic elements of the app. The second, however, includes more structural
elements, including profile photos, the settings gear, ads, suggested accounts,
and the general organizational aesthetic of the Twitter feed. These findings,
paired with interview questions and demographic information, would begin to
provide insight into what is important to users.
The example in the figure would more likely be the result of an Empathize
method, or a Research Question that is attempting to understand how users
feel about, or experience, an app or an website. On the other hand, cognitive
mapping is great to use in the Ideate stage. Let’s imagine that, through your
Empathize stage, you found that participants were not posting to Twitter and
had some frustrations with the posting process. You may then prompt your
participants with: “Draw an app’s posting interface that you enjoy.” Findings
may vary. For instance, participants may all sketch different apps. But, remem-
ber, after collecting cognitive mappings from your participants, you should
not expect exact prototype changes or Research Question answers to pop out.
Instead, a thorough analysis is needed first.
Case Study
In 2018, Michael A. DeVito, Ashley Marie Walker, and Jeremy Birnholtz, all
scholars from Northwestern University, used cognitive maps as a method to
understand how LGBTQ+ people self-present their identities on social media
platforms. Participants were provided with paper and colored markers. They
were then asked to map how they present their LGBTQ+ selves online and
to be creative when doing so. By employing the cognitive mapping method,
the researchers helped their participants to visualize their digital identities and relationships on their own terms, without the overarching platforms themselves getting in the way. The researchers then used the sketches during the
interview phase, asking participants to explain what they had included.5
DeVito, Walker, and Birnholtz found that their participants use social media
to both express their LGBTQ+ identities and avoid ostracization and harass-
ment. Because different social media platforms provide different audiences and
expected post types, these participants were able to perform the selves they
wanted to and needed to based on who was watching. In particular, different
platforms, or even multiple accounts on the same platform, allowed for audi-
ence segregation to ensure proper identity management.6
While this example is a more traditionally academic study, utilizing cog-
nitive mapping clearly helped the researchers gain insight that was unlikely
to come out during a traditional interview or qualitative survey. In addition,
because participants could sketch whatever they wanted, the researchers were
provided with a myriad of images including sliding scales, social media maps,
lists, and cartoon representations, which made for a richer, more nuanced
analysis.
Steps
1. Decide what you will be studying.
a. Are you using cognitive mappings as an Ideate method? What prompts
will you include?
b. Are you using cognitive mappings as an Empathize method? What
prompts will you include?
2. Decide if the study will take place in-person or virtually.
a. If taking place in-person, prepare paper and writing implements
(pens, pencils, markers, etc.) for sketches.
b. If taking place virtually, set up Zoom meetings and Google Jamboards.
3. Decide if you would like participants to draw a specific type of map—
sketch, flowchart, list, etc.—or if you will allow them to freely map.
4. Once you are sitting down with each participant, explain what creating
cognitive maps entails. Be sure you let participants know that drawing
ability doesn’t matter. Often, participants are worried they are going to be
“graded” on the basis of their artistic skills.
5. Before providing the first prompt, be sure the participants are comfortable
with the tools provided and understand their role.
6. Begin by providing the first prompt. Be sure you only provide the prompt
and not too much explanation. If you give too many examples or coaching,
participants’ drawings will not authentically represent their own experi-
ences or ideas.
7. Ask that the participants think aloud as they draw. You can take notes dur-
ing this time.
8. Once they say they have completed the first mapping, provide the second
prompt. Continue this process until you are through all of your prompts.
9. About five prompts is usually enough; more than that and participants often get tired.
10. Once you have the mappings, ask participants questions about what they
included. These questions may be aimed at one sketch, or they may tap
into how sketches are related to, or different from, one another.
11. Thank the participants for their time.
12. Analyze the sketches, annotations, and your notes. Take note of what was
included and what was left out. If using cognitive mapping as an Ideate method, critically inter-
pret why certain tools and functionalities are desired. How would they
change the experience? What are they really providing that your prod-
uct currently is not? Compare these trends within and between identities,
including age, gender, and disability status.
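Once sketches are coded, the tallying in step 12 takes only a few lines of Python. The sketch below is illustrative only, with hypothetical participants and coded elements echoing the “Draw Twitter” example.

from collections import Counter

# Which elements each participant included in their sketch, after coding.
sketches = {
    "P1": {"tweets", "hashtags"},
    "P2": {"tweets", "hashtags", "profile photos", "ads", "settings gear"},
    "P3": {"tweets", "profile photos"},
}

counts = Counter()
for elements in sketches.values():
    counts.update(elements)

# Widely included elements are likely memorable and important to users;
# elements almost no one drew were left out, which is itself a finding.
for element, n in counts.most_common():
    print(f"{element}: included by {n} of {len(sketches)} participants")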
Discussion Questions
1. Have you ever drawn some type of image or chart instead of just taking
notes to help you learn about a topic? How did it provide a different per-
spective than traditional note-taking? If you haven’t, can you think of a
time it would have been helpful? How so?
2. How might you use cognitive maps when your participants express varied ways of processing and communicating information?
3. What do you think you would do if one of your participants told you that
they are “bad at drawing?”
4. How is cognitive mapping similar to, and different from, emotional jour-
ney mapping?
Notes
1. “The UDL Guidelines,” CAST, 2018, https://udlguidelines.cast.org/.
2. Edward C. Tolman, “Cognitive Maps in Rats and Men,” Psychological Review 55,
no. 4 (1948), https://doi.org/10.1037/h0061626.
3. e.g., Colin Eden, “Cognitive Mapping,” European Journal of Operational Research
36, no. 1 (1988).
4. e.g., Daniel R. Montello, “Cognitive Map-Design Research in the Twentieth Cen-
tury: Theoretical and Empirical Approaches,” Cartography and Geographic Infor-
mation Science 29, no. 3 (2002).
5. Michael A. DeVito, Ashley Marie Walker, and Jeremy Birnholtz, “ ‘Too Gay for
Facebook:’ Presenting LGBTQ+ Identity Throughout the Personal Social Media
Ecosystem,” Proceedings of the ACM on Human-Computer Interaction 2, no. CSCW
(2018).
6. Ibid.
16 Brain Writing and Darkside Writing
Even with piles of data from participants in the early Empathize stage of the Design Thinking process, UX researchers and designers often find themselves stuck in ruts—tied to old ways of thinking about a platform and trying similar fixes over and over that rarely lead to long-term, more enjoyable experiences. Brain writing and darkside writing help teams think outside the box, offering simple but effective methods that lead to creative solutions to UX problems. While
brain writing is more often used with external participants, darkside writing is
a great internal method.
Quick Tips
Tools: Notepad & sticky notes; Google Sheets & Zoom/Skype
Use When: You want to generate new ideas or solutions.
The Methods
Both brain writing and darkside writing are similar to the traditional method of
focus groups—they rely on social dynamics to more deeply dive into the issue
at hand. Group brainstorming is a well-known process for generating ideas, but it has some issues, including the need for a skilled leader and the risk of frequent disruptions, especially if there is conflict or if one participant speaks more than the others. Brain writing, however, allows for a different atmosphere.1
that brain writing sessions generate higher quality ideas than brainstorming
sessions.2
When implementing brain writing, you ask participants to tackle your Prob-
lem Statement head-on. In a group setting, participants are provided with the
current Problem Statement, phrased simply. Each participant begins with a sheet of paper and writes down a solution to the problem. Then, each participant passes the paper to the next person. Each time they receive a new sheet, par-
ticipants build on the ideas that were written previously. The goal is to create
as many “ideas” as there are participants. But, instead of creating siloed ideas
from each person, you are instead getting a myriad of collaborative ideas that,
most likely, no one person would have constructed alone.
For example, let’s say you work for Netflix and your Problem Statement is:
“Users perceive Netflix to overly promote their original content.” One partici-
pant may begin by writing a broad, unmanageable solution. But, by the end of
the session that one sheet of paper may look like the example in Figure 16.1,
held in a Google Doc. You can see how much the solution evolved during the
session, providing a great starting point for you as a researcher to make deci-
sions regarding the next step of the Design Thinking process: Prototype.
As mentioned earlier, brain writing is based on the popular method of brainstorm-
ing, but, as the researcher, you ask your participants to write down their ideas
instead of verbally communicating them. This can really benefit participants who
are not as comfortable speaking in front of a group or who worry that they will be
ostracized or judged for having a certain opinion or idea. It also helps with those
who may talk too much and not allow space for the rest of the group to add their
ideas. Because everyone gets one piece of paper, writes down an idea, passes it,
and adds to the next sheet of paper, everyone has an equal chance to contribute.
Figure 16.1 A sample brain writing sheet (in a Google Doc) for the Netflix Problem Statement; each paragraph is a different participant’s addition:

Netflix probably has some sort of algorithm in place to suggest videos to the users, like YouTube. Maybe we change that algorithm.

Obviously Netflix wants to promote their own content, so the recommendation algorithm wouldn’t just stop advertising Netflix content. But, maybe there could be better organization for like who sees what when.

I feel like their content is for younger people. What happened to the older shows that we all loved? Why isn’t that promoted more?

I think all content is promoted, and I do think that what is shown is related to who is watching. But also they do promote their own content way more and it’s just not usually as good.

Maybe there is a way to promote both contents at the same time? Like what if old and new content were presented next to each other? 50/50 split? This way it would be giving equal time, Netflix still gets to advertise their stuff, but we still also see other stuff.

Even better: promote similar content side-by-side. If there is a remake or a show that is obviously trying to re-create an older idea or just similar shows, advertise them side-by-side. This way Netflix still gets to promote their original content, but they connect back to older shows that people loved. This could spark interest in the newer shows too.

Darkside writing flips this process. Instead of asking participants to solve the Problem Statement, you present an inverted, “dark” version of it (for example, “How could we make Netflix promote its original content even more aggressively?”) and ask participants to generate “solutions” to that. Participants then take a dark “solution” that someone else wrote and flip it into a solution for your actual Problem Statement. Working from the problem’s dark side loosens ingrained thinking and often surfaces ideas that a head-on approach would miss.
Use When
Brain writing is best used when you want an external, Ideate phase method.
Brain writing is valuable because it places users (or potential users) together
in a focus group setting and allows them to collaborate on several ideas that
are direct “solutions” for your Problem Statement. As with traditional focus
groups, your role as researcher is not to be an interviewer, but a moderator.
So, you’ll want to use this method if you are comfortable providing the groups
with the initial Problem Statement and then letting them work together to
create a few solutions. Brain writing is an excellent method for co-creation, a
popular concept in the UX industry that means that internal teams and users
or customers interpret research findings, generate ideas, and design solutions
together.
Darkside writing is best used when you want to conduct an internal, Ideate
phase method. Sometimes it makes the most sense for your Ideate process to
be internal, maybe due to resource availability: Do you have time for external
participants? Do you have a way to reimburse them for their time? Other times
an internal Ideate process is necessary just due to the nature of the Problem
Statement. It may be that Empathize and Define method results presented a
Problem Statement that needs to first be thought through by those who are
within the company. Remember, the Design Thinking process is not meant
to be linear; you may find at first that you need to conduct an internal Ideate
study and then, later, need to conduct another Ideate, but this time using an
external method.
Case Study
In 2018, researchers from Denmark conducted research in an effort to create
a mobile app that teens with cancer could use to communicate and, hope-
fully, experience an increase in health-related quality of life. The research-
ers note that, while there are apps for teens with cancer, these existing apps
were not created with data from the actual teens themselves. Thus, their goal
was to use co-creative methods to suggest a more inclusive and effective user
experience.3
One method employed by the researchers was brain writing. Using this
method, they found that their participants would like an app that includes
a community forum, an information library, and a symptom and side-effect
tracking tool. The researchers also learned that bright, warm colors are pre-
ferred. This case study is a great example of how we need to remember that
we are not the users. Previous apps were created by people who were quite
unlikely to be the actual end users. But, with a little insight from teens,
through a UX method like brain writing, the researchers were able to begin
designing a product that could have real impact on the lives of many young
adults.4
Steps
1. Organize your participants.
a. If working in-person, find a space that will work best for brain writing
or darkside writing.
b. If working virtually, set up the Zoom meeting and either the Google
Docs or the Google sheet.
2. Provide whatever tools and instructions are necessary.
3. Obtain any demographic information you may need from your participants.
This may be completed via a short survey or by just asking them a few
quick questions before the writing session begins.
4. Define the Problem Statement. Remember that it may not be exactly as
you have defined it internally. You may need to change convoluted lan-
guage or jargon.
a. If conducting brain writing sessions, remember you will essentially
be providing your Problem Statement, perhaps with some modifica-
tions, to your participants.
b. If conducting darkside writing sessions, you will present the “flipped,”
“dark” version of your Problem Statement.
5. Begin the session.
a. For brain writing, ask that each participant takes a set amount of time
to write a one- to two-sentence solution to the Problem Statement.
b. If conducting a darkside writing session, ask that each person takes a
set amount of time to create a “solution.”
6. After the set amount of time has lapsed, move on to the next step.
a. For brain writing, ask each participant to pass their sheet of paper to
the right. If you are working virtually, you can ask that each participant moves to the next numbered Doc (a short sketch of this rotation appears after these steps). For example, if you have eight participants, you will label the Docs 1–8. A participant that began on Doc 2 would move to Doc 3. The participant on Doc 8 would move to Doc 1.
b. For darkside writing, ask participants to take a sticky from the board
or table that they did not write, read it over, and create a new sticky
that flips that dark “solution” into a solution for your actual Problem
Statement.
7. Repeat these steps until the session is complete.
8. Thank your participants for their time.
9. Begin analyzing your solutions. Remember to pay attention to fit, possi-
bility, and resources needed. Also remember to consider demographic and
cultural differences.
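The virtual rotation in step 6a is just modular arithmetic. Here is a minimal Python sketch of the rule, assuming Docs numbered 1 through n; the function name is ours, for illustration only.

def next_doc(current: int, n_docs: int) -> int:
    # Move to the next numbered Doc; the last Doc wraps around to Doc 1.
    return current % n_docs + 1

n = 8
for doc in range(1, n + 1):
    print(f"Doc {doc} -> Doc {next_doc(doc, n)}")  # Doc 8 -> Doc 1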
Discussion Questions
1. Think of two Problem Statements or Research Questions, one that would
call for a brain writing session and one that would call for a darkside
writing session. Why do you think each would be best for their related
method?
2. Why do you think the social dynamic of focus groups is useful in UX
research if most device usage is usually a solo, personal activity?
3. How could you get creative with these two methods and adapt them for
participants who find it difficult to find their voices through writing?
Notes
1. Arthur B. VanGundy, “Brain Writing for New Product Ideas: An Alternative to Brain-
storming,” Journal of Consumer Marketing 1, no. 2 (1984), https://doi.org/10.1108/
eb008097.
2. Peter A. Heslin, “Better Than Brainstorming? Potential Contextual Boundary Condi-
tions to Brain Writing for Idea Generation in Organizations,” Journal of Occupa-
tional and Organizational Psychology 82, no. 1 (2009).
3. Abbey Elsbernd et al., “Using Cocreation in the Process of Designing a Smartphone
App for Adolescents and Young Adults With Cancer: Prototype Development Study,”
JMIR Formative Research 2, no. 2 (2018), http://formative.jmir.org/2018/2/e23/.
4. Ibid.
17 Card Sorting (and Tree Testing)
When you think about an app or a website, you may think about its home page
or maybe you think about a few of the specific features you like about it. What
you may not think about are all of the different pages, options, tools, menus,
and so on of a specific platform. This is not accidental. Digital spaces don’t
want their users to notice clunky menu options or to get lost moving through
pages. Instead, they want users to easily find what they are looking for; they
want to provide a logical, easy-to-follow organization. Here is where card sort-
ing comes in. Card sorting allows participants to take the role of designer,
organizing content on a website or an app in a way that makes sense for them.
Quick Tips
Tools: Index cards or Optimal Sort at optimalworkshop.com
Use When: You need to discover how to best organize page structure or menus or categorize content.
The Method
Card sorting allows users to take a hands-on approach to helping designers
organize content that is presented in digital products. As the researcher, you
are essentially asking participants to become a part of your app or website’s
organization design. Card sorting is used often because it is quick, cheap, and
reliable.1 Card sorting is particularly useful when your Problem Statement
involves findability or usability or when your Research Question involves
understanding how people categorize information.
Card sorting comes out of the “Q Methodology,” created by psycholo-
gist William Stephenson. Participants perform “Q sorts,” where they rank
statements written on cards based on some provided context so that researchers
can better understand their viewpoints on a particular topic. Cards can also be
images or videos (if the sorting is completed on a digital device).2
There are generally two types of card sorting—closed sort and open sort.
In both scenarios, you provide participants with cards that have the content
you want to be organized on them. For example, let’s say you are helping
a new department store provide a better user experience on their website.
Each card would list a different item that the store sells. You may not be able
to include every single item, because that might get a little tiring for your
participants. But, including a representative sample (e.g., sneakers, sandals,
flats, dress shoes, etc. as part of the “shoe” inventory) will still work. In a
closed sort, you first provide the participants with the different categories
the store wants to include—accessories, teen clothing, baby clothes, shoes,
kitchen appliances, furniture, and so on. Then, participants match up the item
cards with what they believe to be the appropriate category. In an open sort,
participants are given the item cards, but the category headers are blank; it
is up to the participants to decide the categories that the different items fit into. Card sorting is often used to categorize menu content on a webpage, for
example, what sort of stuff should go on the “about” page, what fits under
the “FAQs,” etc.
You could also design some hybrid of the two types of card sort. Maybe you
provide the participants with some categories but also provide additional blank
cards so they can add more categories that you didn’t think of. Maybe there
are some blank item (content) cards. Or maybe participants have the ability to
edit created cards. If you have the time and resources, you can also do multiple
phases of card sorting. For instance, you could start with an open sort to allow
your participants to create categories that they think the different items would
fit under. Then, in phase two you could conduct a closed sort to test whether
the categories that came out of the initial open sort work for other participants.
Any combination can work, as long as you match what you want to learn with
what you are asking your participants to do.
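When you have results from several open sorts, a common way to analyze them is a co-occurrence (similarity) count: how often did participants put two cards in the same group? Here is a minimal Python sketch; the sample sorts are hypothetical, reusing the department store example.

from collections import Counter
from itertools import combinations

# One dict per participant: their category names -> the cards they put there.
sorts = [
    {"footwear": ["sneakers", "sandals", "flats"], "formal": ["dress shoes"]},
    {"shoes": ["sneakers", "sandals", "dress shoes", "flats"]},
    {"casual": ["sneakers", "sandals"], "dressy": ["flats", "dress shoes"]},
]

pair_counts = Counter()
for participant in sorts:
    for group in participant.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often likely belong in one category.
for (a, b), n in pair_counts.most_common(3):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")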
At this point, it’s important to also briefly cover tree testing, which is basi-
cally the reverse of card sorting. It is a usability technique (often presented
as part of usability testing) that helps researchers validate whether or not the
information architecture (how content is laid out, organized, and categorized
on a website or app) makes sense. Tree testing is called “tree” testing because
the information architecture of a website typically looks like a tree, with a core
trunk (the home page) and large branches with smaller and smaller branches
shooting off it. (Tree testing is not to be confused with problem trees, as dis-
cussed in Chapter 14!)
Whereas in card sorting, participants are first given a set of cards and asked
to categorize them, in tree testing, they are provided a final text version of a site
map (basically all the menu categories of a website and all their submenus) and
asked to highlight where they think a particular piece of information or items
would be found in this categorization.
Card sorting comes before tree testing; tree testing is essentially a way of
confirming the results of a card sort and validating the general findability of
your site. As you can probably guess from the name, tree testing is commonly
used during the Test stage. An example of a tree test would be to provide a par-
ticipant with a site map of a shopping website and ask them to find their “cart”
or to find information on how to return an item. Following the user’s reasoning
while they navigate, the researcher can get some ideas on what improvements
need to be made to make the site more easily navigable and the information
(e.g., carts or return policies) on it more findable.
Tree testing provides qualitative data in the form of participants' reasoning (of course, this is only possible in a moderated tree test, where participants are asked to think out loud as they navigate), but it can also provide quantitative data, such as how many participants correctly completed the task and how long it took them. Tree testing is typically done with more participants than other qualitative methods because the quantitative component needs a larger sample in order to be seen as valid.
Use When
Card sorting is mostly used during the Ideate phase of the Design Thinking
process. Remember, in the Ideate phase, the goal is to think outside the box and
begin exploring data-driven solutions to your research problem. Often, meth-
ods within the Ideate phase are internal—you or you and your team (inside
the company) work together, looking at participant data and brainstorming
changes to a process or interface design. Card sorting is unique because it
brings participants—actual users—into the Ideate process.
The card sorting method is best used when you already have an idea of what
a space will include but you aren’t completely sure how those things should be
displayed, organized, or grouped. For example, let’s say you are helping to get
an exercise app up and running. The developers know that they want to include
different types of workouts, offering slightly different workouts depending on
equipment needed and time available. But they aren’t sure how to best group
these varied workouts. A card sort could present workout type, length, and equipment needed and ask participants to organize these elements into groupings that they view as relevant and intuitive.
Card sorting can be used as a creative method for any academic research
that is interested in understanding how people categorize information. Qualita-
tive academic research is typically interested in understanding what, why, and
how. So, for example, you could ask different people to categorize workouts
(as in the example provided earlier) and then ask them why they categorized
them as such. It might be interesting to see whether different groups categorize
workouts differently and what the reasoning for this is. For instance, you might
find that participants of varying genders categorize the same type of workout
(e.g., yoga) under different categories (e.g., strength training? cool-down?).
You could interpret this as evidence that gender stereotypes around exercise
(and what is perceived as a “real” workout) persist.
What You Need
In person, card sorting is as easy as getting index cards and writing a piece of
content (an item) or a category on each. You then ask participants to move these
cards around on a big table or workspace. This can also be done in groups or
individually to get different perspectives, similar to how individual interviews yield different findings than focus groups do—the group dynamic builds on itself, providing insights into group interactions rather than singular attitudinal data.
Virtually, websites like OptimalWorkshop.com provide a tool—Optimal Sort—that allows participants to use their own computers at home to drag and drop cards to different areas on a screen, much as they would move index cards around on a table. Beyond just sending participants a link to the Optimal Sort workspace, you can bring participants together using software
like Zoom, Google Hangouts, and Skype and have them work on the virtual
card sort together. In the in-person model, be sure you have an efficient way
to take note of the different ways participants organize the cards. You can
choose to take notes, but a better way is to take a photo or record the interac-
tion. Online, Optimal Sort not only saves the structures for you but provides
some initial analyses like similarity matrices and dendrograms (taxonomic,
tree diagrams).
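If you are curious what those analyses look like under the hood, the core of a similarity matrix is just counting how often participants placed each pair of cards in the same group; hierarchical clustering of the result produces the dendrogram. Here is a hedged Python sketch, assuming a simple home-grown data format rather than Optimal Sort's actual export:

```python
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Hypothetical open-sort results: each participant's groupings of the cards.
sorts = [
    [{"sneakers", "sandals"}, {"mugs", "kettles"}],
    [{"sneakers", "sandals", "mugs"}, {"kettles"}],
    [{"sneakers", "sandals"}, {"mugs", "kettles"}],
]
cards = sorted({card for sort in sorts for group in sort for card in group})
index = {card: i for i, card in enumerate(cards)}

# Similarity = share of participants who placed a pair of cards together.
similarity = np.zeros((len(cards), len(cards)))
for sort in sorts:
    for group in sort:
        for a, b in combinations(group, 2):
            similarity[index[a], index[b]] += 1
            similarity[index[b], index[a]] += 1
similarity /= len(sorts)

# Convert to distances and cluster; the resulting dendrogram mirrors the
# taxonomic tree diagrams that tools like Optimal Sort generate.
distance = 1 - similarity
np.fill_diagonal(distance, 0)
tree = linkage(squareform(distance), method="average")
dendrogram(tree, labels=cards, no_plot=True)  # set no_plot=False to draw it
print(np.round(similarity, 2))
```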
In both in-person and virtual card sorting, it is important to randomize the
order in which you present participants with the cards to prevent order bias.
This is commonly done by “shuffling” the stack of content cards that you are
asking participants to sort. You wouldn’t want your findings to be driven by
participants unconsciously thinking that the first few cards are the most impor-
tant or by participants beginning to feel tired by the end and not thinking as
deeply about the same few final cards. It is up to you if the category cards
remain in the same order for each participant. If you already know how the
labels will be ordered on your site, you can present them in that order for each
participant. If you aren’t sure yet, or if you don’t want the category cards’ order
to play a role in your findings, mix them around. In person, randomizing the cards to prevent order bias means shuffling the deck yourself before each new participant comes in. Online, tools like Optimal Sort provide a checkbox that, once ticked, shuffles the cards automatically for each participant.
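If you are scripting your own virtual sort rather than using a tool with that checkbox, per-participant shuffling is only a few lines of Python. A minimal sketch, with hypothetical card names:

```python
import random

content_cards = ["sneakers", "sandals", "flats", "dress shoes", "kettles", "mugs"]

def card_order_for(participant_id: str) -> list:
    """Return a shuffled copy of the content cards for one participant."""
    order = content_cards.copy()
    # Seeding with the participant ID keeps each ordering reproducible
    # for your records while still varying it across participants.
    random.Random(participant_id).shuffle(order)
    return order

print(card_order_for("P1"))
print(card_order_for("P2"))  # a different, but reproducible, order
```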
Case Study
Alison Doubleday, a researcher at the University of Illinois at Chicago, used
card sorting to study students’ experiences with a new science curriculum on
the course management site Blackboard. To begin, Doubleday conducted an
open card sort, before the new curriculum was implemented. Cards were cre-
ated by faculty, and card names were based on the content that was previ-
ously available on Blackboard. The participants were asked to group these
cards and then to name the groupings. Groupings included Course Informa-
tion (with content like syllabus, course schedule, and course policies), Edu-
cational Resources (with content like research articles, animations, and links
to websites), and Daily Materials (with content like online lessons, tutorials,
and case studies). After analyzing the results, the Blackboard site for the new
course was created.3
Doubleday then conducted another card sorting study to see what, if any,
changes were needed to make the Blackboard site more user-friendly. But this
time, the research was a semi-closed sort. The same cards were used from
the open sort, but the categories Doubleday decided on after the open sort
were given to the new participants (instead of letting these participants create
their own groupings). The participants were asked to move the cards under
these categories, but they were also allowed to edit group names, delete group
names, and add new group names.4
Doubleday’s results from the second semi-closed sort were actually slightly
different than the first. This points both to the importance of understanding how
different types of card sorts can provide you with different results and to the
importance of repeating a Design Thinking stage if necessary. As with much
UX research, the card sorting results were surprising to the research team. Had
they used anecdotal evidence, they would have organized the site completely
differently. Remember, you are not the user—you need to do research to under-
stand how users actually think and behave!
Steps
1. Decide how you will conduct the study. Choose if you will do an open
sort, a closed sort, or some combination of the two. Remember to let your
Problem Statement or Research Question guide your decision.
2. Prepare the study materials. Card labels should be short enough that par-
ticipants can read them quickly, but also descriptive enough that they are
clear.
a. If you are conducting the card sort in-person, create your index cards
for sorting.
b. If you are conducting the study virtually, create the digital cards using
a site like Optimal Workshop.
3. Conduct the study.
a. If working in-person, bring participants in one at a time, explain their
task to them, and watch as they sort the cards.
b. If working virtually, send the link to your participants. You can choose
to have them do the sort on their own time and look at all of the results
after. Or, you can choose to conduct a synchronous card sort, using a
service like Zoom to watch the participants as they sort the cards, so
the process is closer to an in-person session.
4. After the cards are sorted, ask participants (if they are in-person or meet-
ing synchronously online) questions based on the choices they made. Find
out why they made the choices they did.
5. Be sure to save the information architecture (the classifications and organ-
ization) that your participants create. If your study is virtual, tools like
Optimal Sort save your results for you. But, if you are working in-person
with index cards, be sure to take an overhead photo of each structure or
to record the session. Save these images with your notes so that you can
organize them together.
6. Analyze your data. Remember that you are not looking for some exact
structure or for every participant to provide the same exact organization.
Instead, you want to study the structures created, as well as your notes
from asking questions of your participants. Understand the choices made
and why those choices were made. Then use these, along with your exper-
tise and findings from your Define stage, to move into the Prototype phase,
and create a design with the most intuitive organization structure.
Discussion Questions
1. How would you study your college’s online course management system
(Blackboard, Canvas, D2L, etc.)? Try creating cards out of the content
contained in a specific course site. How would you organize an open sort
study? A closed sort? Something in between?
2. Can you think of an example when card sorting would work better as
images or videos instead of words?
3. How could you still capture participants’ thoughts if they complete a card
sort virtually and asynchronously (without you watching)?
Notes
1. Donna Spencer and Todd Warfel, “Card Sorting: A Definitive Guide,” Boxes
and Arrows, April 7, 2004, para 4, https://boxesandarrows.com/card-sorting-
a-definitive-guide/.
2. William Stephenson, The Study of Behavior: Q-Technique and Its Methodology (Chi-
cago: University of Chicago Press, 1953).
3. Alison Doubleday, “Use of Card Sorting for Online Course Site Organization Within
an Integrated Science Curriculum,” Journal of Usability Studies 8, no. 2 (2013).
4. Ibid.
18 Heuristic Evaluations and Critical Analysis Walkthroughs
Quick Tips
Tools: App/website to test; screen recording software or video camera
The Method
In a walkthrough, the researcher or other assessor downloads the app or
accesses the website and moves through it like a typical user, signing up,
navigating through different screens, and eventually logging out. As they move
through it, they will note their thoughts and especially note parts of the process
that are difficult or not intuitive.
A heuristic evaluation or expert review is a specific type of UX walkthrough
conducted by a usability expert (who could be the researcher themselves, if
they are a usability expert!), to check that the website or app meets certain
standards of usability. Walkthrough methods help the researcher see where
there are usability problems with the site, but they’re also great for making
explicit the implicit values and biases present in apps and websites, from an
academic research perspective. Ben Light, Jean Burgess, and Stefanie Duguay
propose a walkthrough method for academic researchers as a way of conduct-
ing a critical analysis of an app, bringing to light certain assumptions that the
app is based on, and pointing out the limitations resulting from these assump-
tions.1 For readability purposes, to distinguish this method from other walkthrough methods, we have termed this a “Critical Analysis Walkthrough” in this chapter.
In a heuristic evaluation, the standards most commonly checked against are the ten usability heuristics first developed by Jakob Nielsen and Rolf Molich2 and since refined by Nielsen:3
1. Visibility of system status: When users interact with the website or app,
they should always know what is going on in the system, through imme-
diate feedback on every interaction between the user and the system. For
example, when you click “download” on a website, you will get a pop-up
window telling you the download has started and keeping you informed as
to the percentage complete as the file is downloading.
2. Match between system and the real world: The language and concepts
used on the website or app should be natural to the user and free of jargon.
Use terms that users would typically use in their daily lives, not technical
phrases and concepts. For example, most software uses the icon of a trash
can as a space designated for deleted items.
3. User control and freedom: Users should be able to move through the sys-
tem backward and forward unhampered, and have the option to go back
if they make mistakes or change their minds. The options to “undo” and
“redo” should be part of every system.
4. Consistency and standards: Every website or app should follow platform conventions, such as using the “home” button for the landing page or being able to scroll down a page through the scroll bar on the right-hand side. Check out Apple’s Human Interface Guidelines4 and Google’s Material Design Guidelines5 for examples of consistent design conventions.
5. Error prevention: The system should guide users through it in a way that
makes errors less likely, through built-in constraints (e.g., in a phone num-
ber field, the option to add letters should be disabled, “forcing” the user to
type in numbers) and confirmation messages asking a user whether they
definitely want to commit to an action.
6. Recognition rather than recall: Users should be able to move through the
system easily without having to remember parts of the system. That is, all
possible options should be clearly visible or easily retrievable at all times
(think of the menu bar in Microsoft Word and how it presents the user with
a range of options, such as save, edit, etc. using intuitive icons and text).
7. Flexibility and efficiency of use: The design should allow both experi-
enced and novice users the ability to move through the system efficiently.
For example, keyboard shortcuts allow seasoned users to quickly produce
certain frequent actions. Flexibility, in the form of customization and
personalization based on personal preferences, should also be part of the
system.
8. Aesthetic and minimalist design: The design should contain only what is
necessary for efficient and enjoyable use. Cut out unnecessary words, dec-
orative but distracting features, and other noise from the system design.
9. Help users recognize, diagnose, and recover from errors: Error messages
should be precise, should not use codes or jargon (rather, use plain, real-
world language), and should offer a solution to the problem.
10. Help and documentation: Help and documentation information should be
easy to find and concrete in the solutions presented.6
Use When
Walkthroughs are used in the Test stage of the Design Thinking process, to
ensure a website or an app works the way it should or to check through what
improvements could be made to an existing product. In a heuristic evaluation, the website or app is checked against a list of usability standards, such as Nielsen's heuristics described earlier.
In academia, researchers can conduct critical analysis walkthroughs to bet-
ter understand how an app or a website guides the user through it in particular
ways that are connected to broader societal norms and expectations.7 In this type
of walkthrough, researchers can use features of the app as evidence to show
how the way the space is designed connects to cultural biases and assumptions.
A researcher will navigate through the app and, for instance, find that at sign-
up, the app provides limited gender options, forcing the user to pick between
male and female only. This assumption about the gender binary presents limi-
tations to users who do not identify as male or female.
Both heuristic evaluations and critical analysis walkthroughs can be com-
bined with user research, such as interviews or usability testing, to include the
perspective and actual behaviors of the users in the analysis.
What You Need
Walkthroughs do not need any special equipment—all you need is the website
or app to be evaluated and a way of taking notes. You (or the assessor, if you’re
hiring a usability expert) can either think out loud as you walk through and
record yourself (it’s helpful to record the screen as you walk through so you
can clearly see what you’re referring to) or you can jot down your notes. You
might consider taking screenshots of the walkthrough as you’re going if you’re
not able to video record, as it’s easier to refer to pictures for design purposes.
(Because a walkthrough does not involve participants, the consideration of in-
person versus remote walkthroughs is moot.)
Case Study
A great example of all the steps in a heuristic evaluation can be found in an
evaluation case study of yuppiechef.com, a South African-based online kitch-
enware store, conducted by UX specialist Marli Ritter. As a heuristic eval-
uation is systematic and very long (the assessor must do a thorough job of
assessing the whole site or app on multiple criteria or heuristics), we provide
some highlights here but encourage you to read the entire case study.8
One of Nielsen’s ten heuristics is “match between the system and the real
world.” Yuppiechef.com failed this standard in one section of the site, by using
confusing (not standardized) icons for the action of repeat subscriptions of
kitchen items—a calendar icon when, according to Ritter, “a more suitable
icon would be something that illustrates continuous movement.” However,
in another area of the website, the “match between system and real world”
standard was “passed”—images of actual handwritten notes that accom-
pany gifts were shown to illustrate what happens when you mark your order
as a “gift.” This is a great example of using a real-life object in the digital
experience.9
As you can see just from this short example, heuristic evaluations can be
very complicated because they are holistic, looking at every part of a website
(so different parts might pass/fail the same heuristic), and also because they
are so detailed. However, a thorough walkthrough is worth it in terms of really
making sure that your product is fully user-friendly from start to end!
Discussion Questions
1. Think of some different scenarios for when you might want to conduct a
heuristic evaluation and when you might want to conduct usability testing
(getting novice or typical users—not experts—to move through an app or
website). What are the pros and cons of each?
2. Conduct a walkthrough as a critical analysis of your favorite app. What
are some assumptions that the app makes about you as a typical user? Are
there parts of the app’s design that frustrate or delight you—when you feel
it is “not getting you at all” or “really speaking to you”—what are these
parts and why do they make you feel that way? If you were to redesign the
app to be more “for you,” what would you change about it?
Notes
1. Ben Light, Jean Burgess, and Stefanie Duguay, “The Walkthrough Method: An
Approach to the Study of Apps,” New Media & Society 20, no. 3 (2018), https://doi.
org/10.1177/1461444816675438.
2. Jakob Nielsen and Rolf Molich, “Heuristic Evaluation of User Interfaces,” Proceed-
ings of the SIGCHI Conference on Human Factors in Computing Systems, 1990,
https://doi.org/10.1145/97243.97281.
3. Jakob Nielsen, “10 Usability Heuristics for User Interface Design,” Nielsen Norman
Group, November 15, 2020, www.nngroup.com/articles/ten-usability-heuristics/#
poster.
4. “Human Interface Guidelines,” Apple, accessed July 18, 2021, https://developer.
apple.com/design/human-interface-guidelines/.
5. “Guidelines,” Google, accessed July 18, 2021, https://material.io/design/guidelines-
overview.
6. Nielsen, “10 Usability Heuristics for User Interface Design.”
7. Light, Burgess, and Duguay, “The Walkthrough Method.”
8. Marli Ritter, “Heuristic Analysis of yuppiechef.com: A UX Case Study,” UX Collective,
October 7, 2018, https://uxdesign.cc/ux-case-study-heuristic-analysis-of-yuppiechef-
com-c92098052ce4.
9. Ibid.
19 Usability Testing
Quick Tips
Tools: Task list; camera or screen-share capabilities (if unmoderated testing)
Use When: You want to test a product for usability.
The Method
Have you ever used a website where you couldn’t figure out how to find some-
thing simple or where you kept clicking on multiple broken links? Experi-
ences like this are common and can be very frustrating to users. This is why
usability tests should be conducted throughout the design process to avoid such
scenarios. It’s important to make sure that a website or an app is usable and
that users can move through it without problems. Areas of focus in usability
testing include testing whether users can navigate through the product easily,
whether the product is effective (it does what it says it does), whether it can
be navigated efficiently (within a reasonable timeframe), as well as testing for
accessibility (people with different abilities can navigate it).
Usability tests consist of three important pieces: the researcher (also known
as the facilitator or moderator in this instance), the user, and tasks for the user
to complete using the digital product. The researcher provides the tasks for the
user to complete on the website or app and then observes the user and asks for
feedback as the user moves through the tasks.
Usability tests can be run in-person, with the facilitator and user in the same
room looking at the device while the user navigates, or they can be run remotely.
In remote usability tests, the user shares their screen with the researcher, either
using screen sharing on their computer or using a camera pointed at the screen,
as they complete the tasks.
Aside from being categorized as in-person or remote, usability tests can
also be categorized as moderated or unmoderated. In moderated tests, the
researcher guides the user through a series of tasks and prompts them to talk
about what they are doing and why. In an unmoderated test, the user is pro-
vided a list of prompts and tasks in written form that they move through on
their own. Because unmoderated tests are always recorded, the researcher can
see both the user behavior on the website or app (where they click/hover, how
they navigate through the website, etc.) and the reasoning behind the behavior.
Some unmoderated tests even include a recording of the participant’s face, so
that the researcher can see nonverbal reactions to the website or app. There are
many online services, such as usertesting.com, that recruit participants and run
remote usability tests for researchers.
Usability testing can be either qualitative or quantitative, though qualitative
testing is more common. And, even more commonly, the two are used together
to complement each other. Qualitative usability testing focuses on observa-
tions and think-aloud comments to understand the problems in a website or
app. Quantitative usability testing typically focuses on two metrics: task suc-
cess (whether the user completes the task or not) and time-on-task (how long it
takes a user to complete a task).
A tool used frequently in quantitative usability testing is the System Usability Scale (SUS),1 which provides a quick measurement of usability based on ten statements rated by the user after going through the tasks, with answers ranging from “strongly agree” to “strongly disagree.” The ten statements are:
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
Scores from users provide an estimate of the ease of use of a website or app (for scoring, see https://hell.meiert.org/core/pdf/sus.pdf).
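The scoring rule behind that link is compact enough to compute yourself: positively worded (odd-numbered) items contribute their rating minus 1, negatively worded (even-numbered) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to give a 0–100 score. A sketch in Python (the sample ratings are made up):

```python
def sus_score(ratings: list) -> float:
    """Standard SUS scoring for ten ratings on a 1-5 scale."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    # 0-based index i: an even i corresponds to an odd-numbered (positive) item.
    total = sum((r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # 90.0
```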
As a more qualitative example, let’s say you have created a website for sell-
ing chairs. You have a large inventory, with hundreds of different products on
offer. Importantly, you are also able to provide customization for some chair
styles—so customers can choose the material, the size, the leg height, etc., that
suits them best. You now want to test whether people coming to your website
understand what types of customization are available so they can easily place
a unique custom order. You start your moderated usability session by show-
ing participants the home page of your website and you task them with “find-
ing out what customization options are available.” The participants will then
explore the site, talking out loud as they are navigating.
Most of the participants easily find the “customize” button for select chair
styles, navigating there directly with a few clicks. So far, so good. Once they
select the “customize” button, they are presented with a list of customization
options. You ask the participants to explain to you, from looking at this list,
what customization options they think are available to them. A few participants
mention at this stage that they’re not sure whether the word “color” in the cus-
tomization options refers to the frame or to the cushioning of a chair. So, when
they see the different colors provided at that option, they’re not actually sure
what the final customized chair will look like. You can now use that finding to
change the design of the site and create two separate items in the customization
menu—“frame color” and “cushion color.” Remember, usability is all about
ease of use, which includes findability of content and eliminating frustration or
confusion for the user as they move toward their end goal (in this case, buying
a chair customized to their liking).
If you want to take it to the next level, eye movement tracking and heat maps
are two specific UX techniques that can be used during usability testing to get
an even more precise understanding of how users navigate through a digital
space, rather than simply watching people navigate and speak their thoughts
out loud. As the name suggests, in eye movement tracking, a sensor measures
user eye movement to see where a person is looking on a screen. Knowing
where a person looks first or where their gaze lingers can tell a researcher
where points of interest (or possible trouble spots) are on the page, even if
the participant is not actually conscious of this (and might not have told you
through thinking out loud). Heat mapping is a behavioral analytics tool for
websites (companies such as Hotjar provide heat mapping services). It is simi-
lar to eye movement tracking except that software logs where a user moves,
hovers, and clicks on a page and then the software presents the researcher
with a heatmap of the most used areas. These tools are useful in providing
more objective, physiological measures of user experience in conjunction with
think-aloud and observational data.
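Under the hood, a click heat map is essentially spatial binning: each logged coordinate increments a grid cell, and the densest cells are rendered hottest. A minimal Python sketch of that aggregation step, assuming a hypothetical click log (real services like Hotjar handle the collection and rendering for you):

```python
import numpy as np

# Hypothetical click log: (x, y) pixel coordinates from recorded sessions.
clicks = [(120, 80), (125, 84), (118, 79), (640, 300), (642, 305)]

WIDTH, HEIGHT, CELL = 1280, 720, 40  # assumed screen size; 40px grid cells
grid = np.zeros((HEIGHT // CELL, WIDTH // CELL), dtype=int)
for x, y in clicks:
    grid[y // CELL, x // CELL] += 1  # bin each click into its grid cell

# The busiest cell is a candidate point of interest (or trouble spot).
row, col = np.unravel_index(grid.argmax(), grid.shape)
print(f"Hottest cell: row {row}, col {col}, {grid[row, col]} clicks")
```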
Usability testing is most similar to experimental methods in scientific
research, where users are provided with a task in a particular scenario and then
the researcher observes them completing this task, to test whether the research
hypothesis is supported or refuted (e.g., people who are sleep deprived will
perform worse on a reading task than those who are not sleep deprived). The goal of usability testing, however, is to evaluate an entire website or app, so it is a much more holistic method than an experiment testing a specific hypothesis.
Use When
Usability testing helps to uncover problems in a digital product and also pro-
vides ideas for opportunities for new features or improvements. It’s important
to conduct usability testing throughout the product design process because
even the best designers can’t fully know how actual users will interact with a
product in a real-world context.
Usability testing is useful in Media and Communication Studies for answer-
ing research questions related to cultural differences, bias, and inaccessibil-
ity of certain technologies by people with different abilities, identities, and
cultures. Usability testing with these various groups can bring to the surface
problems with the ease of use of websites for particular groups, thus providing
evidence for pervasive unequal power dynamics built into tech.
Case Study
Steve Krug is a usability consultant who has written numerous books on the
topic of UX and specifically usability. In 2010, he wrote the book Rocket Sur-
gery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability
Problems to show that usability testing can be very simple: all it entails is observing someone using a digital product while they talk through how they are using it.2
Krug created a video of a demo usability test as a companion to his book,
and you can watch it here! www.youtube.com/watch?v=1UCDUOB_aS8
In the video, Krug shows a screen recording of a participant completing
various tasks on the Zipcar website (Zipcar is a car-sharing company where
members pay hourly or daily rates to rent cars, along with their membership).
In the video, Krug starts off by welcoming the participant and explaining the
test. Next, the participant is prompted to provide a narrative about the home
page, to see if it’s clear to users what this site is for. The participant states that
she knows this site is for Zipcar and that she knows what that is (renting cars),
but that the bow on the picture of the car is confusing, as it makes her feel like
she is buying a car. Krug could take this insight back to the content team and
have them change the image on the front page to a car without a bow, to make
it clearer that users will be renting a car and not buying one.
The participant then walks through the website, thinking out loud as she
does so. She highlights certain phrases and exclaims “I don’t know what this
means.” Once again, Krug could use this as an actionable insight to change
the wording on the site to be clearer. Krug prompts the participant loosely
throughout, asking her more questions about certain parts of the home page.
Next, Krug takes the participant through specific tasks using scenarios prompt-
ing her to move through the site in particular ways (e.g., explore your options
for using Zipcar for occasional trips) and observes her working through these
tasks. At the end of the demo, Krug highlights three problems that the usability
test made clear and then he suggests improvements to the site based on these
insights. Watch the full demo, and make notes of what you think the three big-
gest usability problems are and how you would suggest fixing them.
Steps
1. Create (or have someone on your team, like a UX designer, create) a wire-
frame or prototype of what you want to test.
2. Write out two to three tasks that you would like the participants to com-
plete on your website or app.
3. Decide on the type of test you will run:
a. Moderated versus unmoderated
If you choose to run unmoderated tests (without you being physically
present), you need to make sure that the instructions and task lists
are clearly written out for the participant, so they can understand
what they need to do without needing clarification.
b. Remote versus in-person
i. For both types of tests, you should record the session. If you are
in-person, you can sit next to the person and watch the screen
over their shoulder as they work.
ii. If you are remote, you can connect via Zoom (or another video
communication software) and ask the participant to share their
screen with you. If using Zoom, you will be able to see what is
happening on the participant’s screen, as well as being able to see
their face, which is useful for analyzing nonverbal cues (such as
a frown signifying confusion).
4. Start the test by welcoming the user and explaining what will happen dur-
ing the test (remember, if you are conducting an unmoderated session,
this all has to be written down for the participant to read on their own. Or
you could record yourself introducing the test and providing the instruc-
tions and have the participant watch the video on their own). Make sure
to clarify that you are not testing them, but the website/app. There are no
right or wrong answers! Explain that you will ask them to think aloud as
they’re moving through the tasks, telling you what they’re doing and why
they’re doing it.
5. First, ask the participant about the home page. You might include ques-
tions such as: What is this page for? What could you do here? What stands
out to you about this page? This tests first impression messaging reso-
nance of your website or app.
6. Next, ask the participant to move through tasks on the site, such as finding
something specific. Make sure to remind the participant to think out loud
as they’re completing the tasks (if you are running an unmoderated ses-
sion, you can have popover reminders flash on the screen).
7. When the test is finished, thank participants for their time.
8. Analyze the results of multiple tests, paying attention to frequent pain
points.
9. Report on the top usability problems your test showed and provide rec-
ommendations to the content creation and UI design teams to fix these
problems.
10. Repeat with subsequent iterations, refining the design periodically.
Discussion Questions
1. Usability testing and contextual inquiry are similar in that both encom-
pass talking with and observing users as they navigate through a digital
space. How are they different? What different Research Questions might
you answer using contextual inquiry than usability testing?
2. What are the pros and cons of in-person versus remote usability tests?
What are the pros and cons of moderated versus unmoderated usability
tests?
3. When do you think additional tools like eye movement tracking and heat
maps are necessary? What are the pros and cons of including these in a
usability study?
Notes
1. John Brooke, “SUS: A ‘Quick and Dirty’ Usability Scale,” in Usability Evaluation in Industry (London: Taylor & Francis, 1996), 189–194.
2. Steve Krug, Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and
Fixing Usability Problems (Indianapolis: New Riders, 2009).
20 A/B Testing
Quick Tips
Tools: Google Forms, SurveyMonkey
Use When: You want to compare your new prototype to the current version, or to compare two new prototypes.
The Method
Toward the end of the Design Thinking process, you will finally have some
idea as to how you can resolve your research problem. You should now have
a prototype that incorporates the small changes you have built through Empa-
thize, Define, and Ideate methods. You have some ideas about different design
or content options for the prototype, but how will you know which versions
users prefer? Or, if you are building an improved version of a product that
already exists, how do you know which version users prefer—your new one or
the original? To answer these questions, you can use one of the most popular
UX methods: A/B testing. A/B testing places two options next to one another
and asks participants to choose: “A” or “B”?
You can see an example of this in Figure 20.1. There are two options for a
clothing site. Upon first glance they look quite similar, but you will notice that
the first option’s button says “BUY NOW,” while the second option’s button
says “ORDER NOW.”
There are two main ways to organize an A/B test. In the first scenario,
“A” and “B” are both prototypes that you have created. Perhaps you found,
through your research, that there are two slightly different ways a problem
could be solved. In this case, you would create two new prototypes, each
reflecting the small change. Then, you would ask participants to choose the
prototype that they found to be better, as relevant to your study—more effi-
cient, easier to use, more enjoyable, more aesthetically pleasing, etc.
In the second scenario, one example presented to the participants is your pro-
totype and the other is how the product currently functions. For example, let’s
say you are working to redesign Gmail’s icon. In this scenario, you would pre-
sent participants with what the icon currently looks like and the new version you have designed, and ask them which one they prefer.
Use When
A/B testing is best used when you are ready to test a new tool or functionality
that you have developed after collecting data from participants in the Empa-
thize stage. A/B testing is great for comparing two prototypes or your new
prototype to an already existing product. Be careful to ensure that your “A”
and “B” versions are only testing one or two small changes at a time. If there
is too much variability, it will be difficult to determine why participants chose
one option over the other. So, if you change the color of a button, for example,
make sure the wording, shape, placement, etc. stays the same.
In academic research, A/B testing can be used in a similar manner—to
better understand which option your participants prefer. For instance, if you
study any type of persuasive communication, such as health communication or
political communication, you can use A/B testing to understand what type of
wording or content works best in campaigns. A/B testing can be used in tan-
dem with traditional methods like qualitative surveys and interviews to better
understand attitudes or opinions towards different choices on the topic under
study. You can even present an A/B choice during a focus group and see how
the participants discuss which is preferred.
Case Study
At the end of 2007, Dan Siroker, now the founder and CEO of Scribe AI,
worked as a Google product manager. But, he took a leave of absence after
meeting Barack Obama, then a presidential candidate, at a talk at Google head-
quarters. As an Obama campaign digital advisor, Siroker introduced Obama to
A/B testing.1
Obama’s digital team was having a big issue with turning site visitors into
subscribers—getting a visitor to subscribe meant the campaign team collected
an email, and once emails were collected subscribers could turn into donors.
The original website began with a splash page—the introductory page that
pops up whenever any page of your website is visited—that included a tur-
quoise-shaded photo of Obama and a bright-red “sign up” button. Although the
page got many visitors, not many clicked the “sign up” button.2
Working with the team, Siroker pulled out the two main parts of the splash
page—the image and the button. Using A/B testing, the team tested differ-
ent button wording including “Learn More,” “Join Us Now,” and “Sign Up
Now.” They also tested a few different visuals—a black-and-white photo of
the Obama family and a video of Obama speaking at a rally. The “Learn More”
button led to 18.6% more signups. The black-and-white photo outperformed
all the other visuals by 13.1%. Interestingly, the team’s “instinct” was that the
video would be much more effective. But, it was actually 30.3% worse! This
is a perfect example of how researchers are not the users. Changes should be
data-driven. Together, using the new black-and-white family photo and the
“Learn More” button, the team reached 40% more signups. It was estimated
that about $75 million of campaign money was brought in due to the site’s UX
changes.3
Steps
1. Receive your prototype(s) from the UX designer, or create them yourself
using a tool like Figma or Adobe XD.
2. Decide if you will conduct the A/B tests in-person or virtually.
a. If conducting in-person, prepare the materials. These include the pro-
totypes you are testing and possibly a device for your participant to
view them on if they are not using their own. But, remember, it is best
to have participants use their own devices so that the test is as close as
possible to their day-to-day usage.
b. If conducting virtually, create your survey using a resource like
Google Forms or SurveyMonkey. Remember that if you are using
static images you can include them right in the survey. If you are
using working prototypes, you should link the participants to each in
the survey.
3. To prevent order bias, be sure to randomize the order that the participants
see the options.
4. If working in-person, be sure to explain to your participants the process
they should follow before they begin. If working virtually, don’t forget to
include any relevant instructions at the top of the survey. It is usually a
good idea to put the instructions on their own, introductory page. Partici-
pants can then click a “next” button to move on to the actual test.
5. Once all of your participants have completed the test, analyze the results. Remember to look at the quantitative aspects (how many participants picked which option; see the sketch below), as well as perform a thematic analysis of your qualitative, “why” responses.
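For the quantitative count in step 5, a quick significance check tells you whether the preference split could plausibly be chance. A hedged sketch using SciPy (the counts here are invented):

```python
from scipy.stats import binomtest

# Hypothetical results: 62 of 100 participants preferred option "A".
result = binomtest(k=62, n=100, p=0.5)
print(f"62% chose A (two-sided p = {result.pvalue:.3f})")
# A small p-value (conventionally < .05) suggests the preference is
# unlikely to be an even 50/50 split arising by chance.
```

Note that live industry A/B tests (like the Obama campaign example) compare conversion rates between two randomly assigned groups of site visitors rather than explicit preferences, but the underlying logic of checking results against chance is the same.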
Discussion Questions
1. In what situations would testing static images be better than testing work-
ing (interactive) prototypes? In what situations would testing working
prototypes be better than testing static images?
2. Although A/B testing is considered quite popular in the UX community,
some think it is overused or used improperly. Can you think of a sce-
nario for which A/B testing would not be appropriate? Suggest another
test method discussed in this book and explain why it would be better than
an A/B test in your example.
3. A/B tests are rarely used in academic settings. Why do you think this is the
case? Can you think of a Research Question for which an A/B test could
be combined with interviewing, focus groups, or surveys?
Notes
1. Brian Christian, “The A/B Test: Inside the Technology That’s Changing the Rules of
Business,” Wired, April 25, 2012, www.wired.com/2012/04/ff-abtesting/.
2. Ibid.
3. Ibid.
Appendix A: Sample Research Plan Template
Background
Relevance: Why are you doing this project?
• Why is it important/timely?
• What problem are you trying to solve?
Prior Research: What research has been done on the topic? Summarize prior
findings (this is the Literature Review in academic research)
Participants
How many people will you talk to?
Where/how will you find them (recruitment)? e.g., through social media
posts
Who are they? (demographics—age, gender, location, marital status,
income, etc.)
Criteria for participation in this project—what types of people are you try-
ing to understand? (e.g., must haves: familiarity with a technology, be a
pet owner, drink coffee)
Method
What specific method will you use? (e.g., contextual inquiry, usability
testing)
Where will the study take place?
How will you answer your question or fulfill your objectives?
• Include all study details, such as the physical setup and questions you
will ask the participants (interview guide)
Timeline
Details on timeline for:
• Participant recruitment
• Data collection
• Data analysis
• Presentation of findings
References/Appendixes
What other documents are useful to understanding this research plan? For
example, recruitment ads (this is a bibliography in academic research)
Appendix B: Academic Report Example
Normative Interfaces
Affordances
The use of “affordance” as a noun was coined by Gibson in his 1979 book The
Ecological Approach to Visual Perception. Writing about animals and their
biological environments, Gibson uses his theory of affordances to explain that,
to really understand where and how animals live, we must comprehend how
animals visually perceive what their environments offer them:
I mean by it something that refers to both the environment and the animal in a way no existing term does. It implies the complementarity of the
animal and the environment. . . . Affordance cuts across the dichotomy
of subjective-objective and helps us to understand its inadequacy. It is
equally a fact of the environment and a fact of behavior. It is both physical
and psychical, yet neither. An affordance points both ways, to the environ-
ment and to the observer.
(pp. 119, 121)
In other words, there is a symbiotic relationship between the animal and its
environment.
Gibson continues by explaining what has happened now that humans have
added on to the environment—the shapes and substances of our world have
been changed because humans want to make more available what benefits
them. The large, and important, change to the conception of affordances then
is that human-made objects are no longer neutral. The symbiotic process of
animal-environment takes on a third party—designers.
Gibson’s original argument is actually situated in ignoring the material
makeup of objects and instead analyzing affordances—what the object can do,
is supposed to do, and is promoted as doing. At the time of his writing, Gibson
believed that people paid too much attention to the dimensions or physical
qualities of things—their smaller pieces, their material makeup. Instead, he
wanted researchers to focus on what objects may afford animals in the sym-
biotic relationship of animal-environment. We have perhaps come full circle,
often omitting material analyses of mediated structures. And, media technolo-
gies present different issues than trees or flowers because they are made by
humans and are thus not neutral. Indeed, the general social network site user is
rarely, if ever, provoked to think about the material makeup of the site.
In sum, to apply Gibson’s theory of affordances is to focus on the negotia-
tion that exists at the intersection of user, interface, and designer; affordances
do not exist without interaction (Nagy and Neff 2015). Without considering all
three pieces, we miss the dynamic relationship that all users have with medi-
ated technologies. Users display a stark differentiation between their practices
and the actual interfaces. Theoretically, we can think of things like features,
apps, and devices as distinct objects, but users make sense of technological
systems in a variety of ways that span and conflate these objects and interfaces
(McVeigh-Schultz and Baym 2015).
McVeigh-Schultz and Baym (2015, 2) define affordances as the “percep-
tion between bodies and artifact.” Similarly, Van Dijck (2013) argues that plat-
forms are techno-social constructs that are some aggregation of technology,
content, and users. However, neither explicitly states the new third prong—
designers. Thus, I argue, to study affordances is to study the negotiation that
happens between users and designers, through the created interface. While it
is important to view interfaces and digital objects more like mediators instead
of intermediaries (because they do much more than just move information but
transform it) (e.g., Galloway 2013; Latour 2005), it is integral to also include
investigations into how and why specific designer choices have implications
for users.
As such, using affordance to mean “choice” or “constraint” is not helpful
when examining the three-pronged negotiation (McVeigh-Schultz and Baym
2015). Instead, affordances are relational negotiations (e.g., Hutchby 2001a)
that are constantly occurring as the cycle of designers designing and users
using continues. While objects may visually convey action-capacities (Nor-
man 1988), digital capacities can be hidden by designers in an effort to pro-
mote desired perceptions and uses (McVeigh-Schultz and Baym 2015).
In addition, because designers have the most power in affordance nego-
tiation, their decisions work to prime, resist, and shape the ways users make
sense of the technology (McVeigh-Schultz and Baym 2015). “When a pro-
grammer decides which gesture to render, then [they are] deciding not what
to communicate, but what possible messages to allow; such decisions dic-
tate the communication potential of a space” (Kolko 1999, 180). Interfaces
are both semiotic and institutional structures (Giddens 1984) that influence
how narratives are shared and shaped (Duguay 2016). As media technolo-
gies become more advanced, designers work harder to design “user-friendly”
spaces that obscure how spaces actually function. Less space is provided to
tinker and, as a general culture, we are encouraged to feel a passivity to tech-
nology (Gillespie 2006).
Previous research in online gaming has highlighted the importance of
avatar design because of its strong tendencies to affect offline interactions
(e.g., Kolko 1999). While this work was situated in games, social network
site profiles are very similar to personalized avatars (e.g., Cirucci 2013). At
sign-up, as with initial avatar creation before a game can be played, Face-
book begins shaping user performances and experiences (Light and McGrath
2010). Facebook’s “real name” policy, for example, along with a myriad of
other design choices are generally in contention with identities that are fluid
and complex (Lingel and Golub 2015).
Instead, Facebook is concerned with investing time and money into singular
profiles (Lingel and Golub 2015). Some identity choices are inescapable while
others are simply omitted (Zhao, Grasmuck, and Martin 2008). These flat pro-
files produce data that conflate identity performances and contexts and feed
Facebook data that can then be repurposed as both targeted ads for users and
valuable fodder for third-parties ready to cross-reference databases and build
even more dynamic “profiles” of people. However, these databases are never
perfect and privilege one point of view while muzzling many others (Light
2007). Facebook has thus become a polemical and political site of analysis
(e.g., Bowker and Star 1999).
With the above in mind, this study aims to better understand the non-neu-
tral interfaces that guide user identity perception on Facebook. Specifically,
I explore identifications related to gender and race in an effort to better realize
how and why Facebook potentially proliferates social division and misrepre-
sentation. Because digital technologies are created by people, they are neces-
sarily couched in political, economic, and cultural powers (e.g., Winner 1980).
Therefore, a look into designer choices, paired with users’ perceptions and
daily usage habits, offers insight into Facebook’s affordances.
Methods
Nagy and Neff (2015) outline a new way of conceptualizing affordances—
imagined affordances. The notion of imagined affordance takes into account
the experiences not consciously realized by users. They argue that users real-
ize specific affordances, not some full set as presented by each space. On the
other hand, McVeigh-Schultz and Baym (2015) found that people make sense
of material structures at different, nested layers and that this sense-making pro-
cess does not involve speaking about material parts separately. Clearly, there
is a disconnect between how we can better investigate structures, their design,
and their implications for users.
In response to these two studies, this study was conducted in two parts:
a structural discourse analysis and focus groups. Pairing an analysis of pre-
sented, non-neutral tools with users’ experiences with these tools provides a
dynamic look into the negotiation and interaction that is affordances.
Focus Groups
After many readings of each tool and functionality, I conducted nine focus
groups with college students (n=45) at a large, urban, east-coast univer-
sity in the US. Beyond an analysis of Facebook’s interface, I was inter-
ested in learning how everyday users identify and make sense of the site.
Instead of just presenting architectural pieces as choices or constraints,
I provided a space for informants to share their sense-making processes
(McVeigh-Schultz and Baym 2015). As Duguay (2015) notes, employing
only a walkthrough method is limiting because there is no inclusion of
user perception. At the same time, only completing an analysis of user
performances fails to recognize and consider the mandatory setting of the
mediating interface.
Participants were 18–30 and declared their racial affiliations as: white
(71%), Black (13%), Asian (9%), Latinx (4%), and Other (2%). Each focus
group lasted between 45 minutes and 1.5 hours. Topics for conversation were
derived from my structural discourse analysis findings. Informants were asked
to share stories regarding how they have used different Facebook tools and
were asked to speak about the extent to which they are aware of their non-
neutrality. While some general questions were inspired by the discourse analy-
sis, the focus groups were generally open-ended.
It is important to note that all participants were given a short demographics
survey to complete. Spaces to define gender and race affiliations were open-
ended. However, no participants included a gender affiliation beyond female or
male. Focus groups were also mixed gender and race, which may have silenced
those who may have felt their comments would have been seen as marginal-
ized or “different.” However, I chose focus groups because they are social and
thus relevant when the content up for discussion is also social (Frey and Fon-
tana 1993). As discussed in detail below, the makeup of my sample became a
finding in and of itself, displaying how those in more privileged socio-cultural
positions (or those who at least feel the need to speak in a way that conforms
to the majority) negotiate affordances in very specific ways.
Findings
The following sections outline three main themes that emerged through the
structural discourse analysis and focus groups. These three themes are rooted
in Facebook’s gender and race affordances. Each section outlines related func-
tionalities, presents a brief discourse analysis, and shares participant experi-
ences, as a way of exploring Facebook’s affordances.
Digital Gender
In early 2014,3 beyond binary options, Facebook provided US users 50+ gen-
der affiliations.4 It is safe to say that gender selection is important to the site;
while on the About Page users can choose something other than a binary affili-
ation, at sign up new users still must choose from only “female” and “male.”
For Facebook, third party marketers, and database companies gender is a cru-
cial demographic.
The study of digital gender is certainly not new. The importance of ascrib-
ing binary gender to virtual bodies has been integral to online worlds since
early multi-user dungeons (MUDs), gaming spaces, and chat rooms. While
some may have thought that users logged in and became “disembodied,” it
was quickly made apparent that electronic worlds are not separable from the
physical self (e.g., Kolko 1999). In a study of Gaydar, a dating site for gay
men, Light (2007) found that suggestions for users as they created their dat-
ing profiles were stereotypically masculine, omitting groups from drop-down
boxes, specifically effeminate men. Through its digital structure, Gaydar users
were pressured to conform to certain cultures with little room for resistance.
This 2007 study is but one example of the ways in which designer choices
of digital spaces make negotiation of gender affordances unequal and compel
users to adhere to heteronormative expectations.
Although Facebook’s About Page gender change occurred shortly before
I spoke with my informants, only half were familiar with the additions, and
only one had changed the gender option (from female to cisfemale). For my
informants, and perhaps in line with Facebook’s goals, the gender prompt is
not perceived as a space for expression, but as a box to check that mirrors their
birth certificates or medical forms.
Facebook's own explanation for requiring a sex selection, posted on the company blog in 2008, reflects this framing:
However, we’ve gotten feedback from translators and users in other countries that translations wind up being too confusing when people have not
specified a sex on their profiles. People who haven’t selected what sex
they are frequently get defaulted to the wrong sex entirely in Mini-Feed
stories.
(Gleit 2008)
In her post, Gleit labels the affiliations “sex.” This is how Facebook previ-
ously labeled the category. Gleit includes this in her post to connote a more
objective sense of the identity label. Besides conforming to the English (US)
delineation on the site, she is also implying that all users need to check off a
sex, just as the hospital did when they were born. Later in the post, however,
Gleit refers to the selection as “gender.”
Some of my participants even noted that using the space to perform more
than the gender ascribed at birth is identification “overload.”
Davina shares a story that explains her distaste for non-binary gender per-
formances online. Although, she claims, people offline may already know
this about you, she questions why it is necessary online. For this participant,
a change is confusing because Facebook compels users to perform “legal”
or “real” identifications and because these performances are closely linked
to official forms that rarely offer more than “female” and “male.” It is clear
that Facebook’s gender affordances are both unused by, and confusing for, my
participants.
I argue that these perceptions are in part because the more fluid gender
options feel inconsistent with other affordances Facebook offers that the site
links to “authenticity.” For example, at sign up, users are still provided only
“female” and “male” from which to choose. They must also input their full
birthday and their full, “real” name (as it appears on their birth certificate or
driver’s license). This type of rhetoric, paired with other options, prompted
my informants to share that the increased gender options were probably just a
pandering tactic.
From a programming perspective, the obvious heuristic is a model in
which data are pulled for algorithmic ranking, personalized site character-
istics, and third-party marketing first as “user gender is binary” or “user
gender is not binary”; second as “user is female,” “user is male,” or “user
is other”; and, as a third point, what is actually selected as the custom
affiliation(s). In other words, users’ digital gender identifications are still
largely reliant on some binary estimation. On one hand, the new gender
choice is appeasement; on the other, it serves as additional marketing data
(Kellaway 2015).
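
To make this heuristic concrete, the following is a minimal sketch, written in
TypeScript with entirely invented field names (Facebook’s actual schema is not
public), of a model that stores a custom affiliation while downstream systems
still read a binary estimate:

// Hypothetical data model; field names are invented for illustration
// and do not reflect Facebook's actual schema.
type BinaryEstimate = "female" | "male" | "other";

interface GenderRecord {
  isBinary: boolean;            // first tier: "user gender is binary" or not
  estimate: BinaryEstimate;     // second tier: coarse bucket used downstream
  customAffiliations: string[]; // third tier: what the user actually selected
}

// A user who selects "cisfemale" as a custom option is still resolved
// to a binary-adjacent bucket for ranking, personalization, and marketing.
const user: GenderRecord = {
  isBinary: false,
  estimate: "female",
  customAffiliations: ["cisfemale"],
};

// Downstream consumers read the coarse estimate, not the custom label.
function marketingSegment(record: GenderRecord): BinaryEstimate {
  return record.estimate;
}

console.log(marketingSegment(user)); // -> "female"

However the real pipeline is built, the analytic point is the same: the custom
label sits on top of, rather than replaces, a binary estimation.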
LJ, a 19-year-old white female: Umm, I like that there isn’t one actually;
I think that it’s good that [Facebook’s] color blind.
Ashley, a 20-year-old white female: I just don’t think it [race] matters that
much. It doesn’t define you.
Ryan B., a 21-year-old white male: I’m just, I’m indifferent about it,
I guess. I mean, it’s something that I don’t think, you know, represents
the individual.
In line with other racial discourses present in the US, participants read Face-
book’s lack of explicit racial identification spaces as an indication that Face-
book positions race as no longer a defining characteristic of people. Again,
Facebook’s omission promotes a very specific perception of racial affairs.
Whether these are perceptions that started offline and were reified online, or
they are perceptions that Facebook helped to cultivate, my participants were
quick to assume that digital spaces are creators of new, equal environments.
In contrast, many argue that we learn to interact with one another through physi-
cal features (e.g., Alcoff 2006; Nakamura and Chow-White 2012). As Naka-
mura (2002) argues, the internet is a space for cybertyping. Digital spaces
harbor hegemonic ideals, and race becomes just as important online as it is
offline (Martin, Trego, and Nakayama 2010; Tynes, Reynolds, and Greenfield
2004). Thus, it could be argued that Facebook works to support notions that
the US is post-racial.
The point of this analysis is not to critique my informants for their opin-
ions and interpretations of Facebook’s goals. Rather, I intend to show one of
many instances in which affordances online can have strong impacts on users.
When considering how Facebook’s interface is constantly constructing reality,
highlighting specific traits and trends while squelching others,⁶ we can con-
clude that many young users’ views of the US are cultivated through digital
affordances.
Visual culture. Ultimately, discussions with the focus groups led to con-
siderations of our current, highly visual culture. Visible, corporeal identifiers,
namely profile pictures, are seen as making explicit racial/ethnic affiliation
unnecessary.
Stephanie, a 27-year-old white female: I think it’s the fact that you can post
a picture of your race but you can’t post a picture of your gender.
JM, a 20-year-old white female: What I’m saying is you can see what they
look like. And, if you want to know what their race is, you can ask them,
sort of. But looking at them you really can’t, like, sexuality doesn’t have
a color. I think people identify with that more, not to say that I agree with
that; but I feel like that’s how Facebook is saying it.
Throughout my focus groups, both when speaking directly about gender and
race, and otherwise, being visible was integral. Users want to see those with
whom they interact. There is a certain cultural anxiety that exists for many
when they cannot decipher another’s gender or race. This is perhaps rooted in
the fact that first impressions help us to define people, and stereotypes aid in
the process. As such, online functionalities exploit this “stereotypical short-
hand” (Kolko 1999, 181). Stereotypes are easy because they quickly describe
users and are easy to fit into databases. Designers’ first choices are often the
tools and functionalities that are derived from a very small selection of stereo-
types (Kolko 1999).
In particular, Facebook promotes a very visible culture (Cirucci 2015) that
begins with the profile photo. My informants shared many stories in which
they were “creeped out” or annoyed when another Facebooker did not have
a recognizable image of their face in their profile photo. Users rely on these
photos as a means to summarize others’ lives in a matter of minutes
(Farquhar 2013). My participants labeled the profile picture as “prime
real estate” and placed much value in its ability to not only add aesthetic value
to a profile, but also provide identification validation. Many admitted to not
interacting with faceless users even when they know the person offline and
know for certain that the profile is controlled by their offline friend.
The adherence to, and expectation for, visible selves allows users to rely on
visible tells for race/ethnicity. As Stephanie notes, there must be a gender
selection because gender can be “hidden” in a photo, but you cannot “hide your
race.” In the case of my participants, the assumption that visible qualities make
it easy to guess someone’s race/ethnicity is driven by the site’s omission of
race and ethnicity categories, as well as its stark adherence to selves that are
extremely visible.
In addition, it is intriguing to question why Facebook would leave out race/
ethnicity, given that it is a valuable marketing tool. Beyond the notion that
Facebook employs powerful algorithms that likely infer a user’s race/
ethnicity from stereotypical likes and browsing history, Stephanie may be
correct. It is not impossible, or even improbable, that Facebook “guesses” race/
ethnicity based on profile, uploaded, and tagged photographs, among other
data collected.
Gaver (1991) describes hidden affordances as functionalities that are
afforded but with no information available about them. This is essentially what
Facebook does—providing spaces that do not ask for race or ethnicity socially
while, at the same time, building databases that still collect this information at
the institutional level. In a sense, users are aiding in the negotiation of the
affordances without even realizing it. It is perhaps important to note here that
what a “profile” or “user” looks like to a user is drastically different from what
a “profile” or “user” looks like in Facebook’s databases. Thus, it would be
naïve to assume that just because Facebook makes a tool visible, it matters
greatly to the database; conversely, when an affordance is hidden, it can still
matter greatly. In sum, when an affordance is hidden by Facebook, the full
process and purpose are hidden, and the company does not want users to
have much active say in the meaning negotiation.
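
The gap between the user-facing profile and the institutional record might be
sketched as follows; this is purely hypothetical, and every field name here is
invented for illustration:

// What a user sees on the interface versus what a platform's database
// might hold; a hypothetical contrast, not Facebook's actual schema.
interface VisibleProfile {
  name: string;
  gender: string;          // the editable, user-facing field
  profilePhotoUrl: string;
}

interface InstitutionalRecord extends VisibleProfile {
  // Fields never shown on the interface: inferred rather than asked for.
  inferredAffinity?: string; // e.g., estimated from likes and browsing
  faceFeatures?: number[];   // e.g., derived from uploaded photos
}

// The interface renders only the visible subset of the record.
function renderProfile(record: InstitutionalRecord): VisibleProfile {
  const { name, gender, profilePhotoUrl } = record;
  return { name, gender, profilePhotoUrl };
}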
Indeed, Facebook has implemented two processes in particular that are fairly
hidden to the general user and that play a large role in how they categorize
race and ethnicity—DeepFace and Multicultural Affinity Targeting. DeepFace
is 3D modeling software that can verify faces with 97.35% accuracy, less than
one percent away from human-level capabilities (Taigman et al. 2014). Thus, it
is quite easy for Facebook to document skin color and other facial features that
may align with racial and ethnic differences. There is no mention of DeepFace
in Facebook’s Help section when searching for and reading about what the site
does to and with user images. In fact, when you search “deepface” in the Help
Center, no matches are returned.
Multicultural Affinity Targeting helps advertisers, and Facebook, target peo-
ple with “multicultural interests.” They define multicultural affinity as “the
quality of people who are interested in and likely to respond well to multicul-
tural content” (Fussell 2016). As with Nakamura’s (2002) early study of Lamb-
daMOO where she shows that white is the “default” race and all others must
be defined otherwise, white Facebookers do not have ethnic affinities—they
are reserved for African American, Asian, and Latinx users (Thomas 2016).
Thus, paired with DeepFace data, how users perform through the site allows
Facebook to ascribe an “affinity.” Then, advertisers can choose to exclude cer-
tain affinities from their marketing sample. This means that I could post an
advertisement that excludes all users who have been deemed African Ameri-
can, Asian, or Latinx, in the hope that my ad will be visible only to white
users. Facebook is careful to label these “affinities” and not ethnicities to avoid
lawsuits or bad press (Thomas 2016).
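
A purely illustrative sketch of such an exclusion, with invented field and
segment names rather than anything drawn from Facebook’s actual ads tooling,
might look like this:

// Hypothetical ad-targeting spec illustrating affinity-based exclusion.
interface TargetingSpec {
  ageRange: [number, number];
  excludedAffinities: string[]; // labeled "affinities," not ethnicities
}

const adTargeting: TargetingSpec = {
  ageRange: [21, 65],
  // Excluding these segments means the ad is effectively shown only to
  // users who were never assigned a multicultural affinity (Thomas 2016).
  excludedAffinities: [
    "African American affinity",
    "Asian affinity",
    "Latinx affinity",
  ],
};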
Above, I quote JM, who suddenly realized, toward the end of the focus
group, that Facebook perhaps is sending an implicit message through the omis-
sion of explicit race/ethnicity spaces. This is representative of many of my par-
ticipants’ comments toward the end of their focus group. After much thought
and conversation, some began to think that perhaps Facebook compels them
to think about gender and race in specific ways. To be clear, just as Facebook
cannot solve social divisions, it is not my goal to argue that the site is solely
creating new social divisions. Clearly, the issues discussed herein are not new.
However, in a space that has been so seamlessly folded into many lives, it is
important to investigate what stereotypical norms are promoted.
Facebook’s active decision to leave race/ethnicity off the user interface,
while constantly updating and posting about gender affiliation, led my inform-
ants to view gender as a more important, but less messy, identification piece.
Viewing US society as post-racial and guessing race/ethnicity through skin
color and other physical “tells” are normalized through Facebook’s function-
alities—our identities are “necessarily shaped by platform design choices”
(Lingel and Golub 2015, 547).
JM, a 20-year-old white female: I think when I filled out Facebook it was,
like, so long that it was just kind of like, kind of like checking off a physi-
cal form, like, male, female, what are you interested in . . .
Alessia, a 20-year-old white female: When you first start out with Facebook,
it’s an application process too . . .
The perception that Facebook is some official, patrolled space is in line with
comments from its creator, Mark Zuckerberg:
You have one identity. The days of you having a different image for your
work friends or co-workers and for the other people you know are prob-
ably coming to an end pretty quickly. Having two identities for yourself is
an example of a lack of integrity.
(Kirkpatrick 2011, 199)
Conclusion
It is perhaps becoming less and less a secret that Facebook strategically decides
how identities will be shaped in an effort to construct more efficient data col-
lection, algorithmic, and marketing models. The process of selecting which
identification affiliations to request, and which to simply leave off the user
interface, places value on specific identifications. As supported through my
findings, Facebookers are led to adopt specific expectations and norms regard-
ing the identification process and important cultural issues. As my focus group
participants demonstrated, some believe that gender is a more important issue
than race because Facebook explicitly asks users to define it. Others noted that
race is a more complex and important fight than gender, and Facebook is right
in “staying out.” Thus, just as offline expectations follow us into online spaces,
prejudices that we learn online journey with us into offline spaces—they are
naturalized and reified through our constant, digital performances guided by
the site’s design.
Through the structural discourse analysis and focus groups, two main con-
clusions emerged. First, the negotiation of affordances, as defined by Gibson
and updated for social network sites, is not, and in fact cannot be, equal,
because the power roles at play are not equal. Facebook, as Gibson’s
framework suggests, creates a
non-neutral space that makes more available what is beneficial to them. They
control both the institutional data and the social interface. Facebook’s employ-
ees decide how tools, functionalities, and buttons will be designed, how the
data will be catalogued and saved, and what will happen to them over time.
They decide which data are important and which are “throwaway,” included at
the social level for user appeasement. Thus, while users are certainly allowed
to do as much as they can within the site, they play only a small role in what the
affordances are. This is made especially clear by the way in which my partici-
pants view the site as an “official” space or a social utility. Just as they would
not want to lie on a form, they do not want to cheat the Facebook system.
The second conclusion situates my sample within the generally heter-
onormative Facebooker type—when users affiliate with more privileged and
socially accepted identifications (whether by choice or through social pres-
sure), they are not inspired to tinker with the site or resist the norms being
cultivated. This would at first seem counter-intuitive—those with more social
power should have more power in the negotiation of affordances. However,
those with marginalized identities are used to fighting against how they are
shoved into categories in spaces exactly like job applications or medical forms.
Marginalized individuals, especially when considering gender and race/
ethnicity affiliations, are not concerned with being “accurate” because many
official forms do not even provide them with the correct spaces and options
to be “accurate.” For these groups, being authentic still means defying what
forms present as identifications. Therefore, it is clear that users who are part
of privileged identity groups, as most of my participants were, are not
likely to even think about how they could, or should, subvert the Facebook
architecture from the inside. Facebook’s functionalities and policies reflect
particular assumptions of identity that privilege some users over others. But,
it is those who are marginalized that attempt to find workarounds (Lingel and
Golub 2015).
The structural rules implemented by Facebook, although not always
designed as visible affordances at the user level, both set parameters for what
is possible (Hutchby 2001b) and compel users to act in particular
ways that, in turn, implicitly support and adhere to heteronormative identity
expectations. It is not that each user is determined by Facebook’s structure,
but that, through a “regulated process of repetition that both conceals itself and
enforces its rules precisely through the production of substantializing effects,”
users are molded (Butler 2006, 198). Agency, then, can only be located in some
break of that repetition. This subversion is difficult, however, because unlike
dressing in drag offline, if a cisfemale user decides to upload a profile photo
wherein she is dressed in a stereotypically masculine way, Facebook will have
enough other data points to continue to view, and market to, her as “female.”
In addition, until mainstream media break down the complex negotiation of
digital affordances, most users will remain comfortable in, or at the very least
unaware of, Facebook’s promoted culture.
Notes
1. Although the main structural discourse analysis took place in January 2014, small
pieces were added and changed throughout the course of this project as the interface
changed. This study represents but a snapshot of time because Facebook’s interface
is constantly being updated.
2. http://zuckerbergfiles.org/
3. February 13, 2014.
4. At the time of publication, Facebook allows users to type anything in the “gender”
box, after they have selected “custom” as their main gender, instead of “female” or
“male.”
5. Each informant was asked to choose a pseudonym and was provided a blank space to
provide identifying information including, but not limited to: age, racial affiliation,
ethnicity, gender, and socio-economic class.
6. It was reported that Facebook took down an iconic photo, “Napalm Girl,” from the
Vietnam War (Wong 2016).
7. In this context, “accurately” is defined generally as performing some legal and cor-
poreal self.
Works Cited
Alcoff, L. M. 2006. Visible Identities: Race, Gender and the Self. Oxford: Oxford Uni-
versity Press.
Bowker, G., and S. L. Star. 1999. Sorting Things Out. Classification and Its Conse-
quences. Cambridge, MA: MIT Press.
boyd, d. 2014. It’s Complicated: The Social Lives of Networked Teens. New Haven, CT:
Yale University Press.
Butler, J. 2006. Gender Trouble. Feminism and the Subversion of Identity. New York:
Routledge.
Cirucci, A. M. 2013. “First Person Paparazzi: Why Social Media Should Be Studied
More Like Video Games.” Telematics and Informatics 30 (1): 47–59.
———. 2015. “Facebook’s Affordances, Visible Culture, and Anti-Anonymity.”
Proceedings of the 2015 International Conference on Social Media & Society.
doi:10.1145/2789187.2789202.
Duguay, S. 2015. “Is Being #Instagay Different From an #lgbttakeover? A Cross-Plat-
form Investigation of Sexual and Gender Identity Performances.” In SM&S: Social
Media and Society 2015 International Conference. Toronto: Ted Rogers School of
Management, Ryerson University, July 27–29.
———. 2016. “He Has a Way Gayer Facebook Than I Do: Investigating Sexual Iden-
tity Disclosure and Context Collapse on a Social Networking Site.” New Media &
Society 18 (6): 891–907.
Fairclough, N. 1995. Media Discourse. London: Arnold.
Farquhar, L. 2013. “Performing and Interpreting Identity Through Facebook Imagery.”
Convergence 19 (4): 446–71.
Frey, J. H., and A. Fontana. 1993. “The Group Interview in Social Research.” In Suc-
cessful Focus Groups: Advancing the State of the Art, edited by D. L. Morgan, 175–
87. Newbury Park: Sage Publications.
Fussell, S. 2016. “Facebook Might Be Assigning You an ‘Ethnic Affinity’ You Can’t
Change.” Fusion, October. http://fusion.net/facebook-might-be-assigning-you-an-
ethnic-affinity-you-1793863259.
Galloway, A. R. 2013. The Interface Effect. Malden, MA: Polity Press.
Gaver, W. W. 1991. “Technology Affordances.” In Proceedings of the SIGCHI Confer-
ence on Human Factors in Computing Systems, edited by S. P. Robertson, G. M.
Olson, and J. S. Olson, 79–84. New York: ACM, April.
Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Boston, MA:
Houghton Mifflin.
Giddens, A. 1984. The Constitution of Society: Outline of the Theory of Structuration.
Berkeley, CA: University of California Press.
Gillespie, T. 2006. “Designed to ‘Effectively Frustrate’: Copyright, Technology and the
Agency of Users.” New Media & Society 8 (4): 651–69.
Gleit, N. 2008. “He/She/They: Grammar and Facebook.” Facebook, June. www.facebook.com/notes/facebook/heshethey-grammar-andfacebook/21089187130.
Hutchby, I. 2001a. Conversation and Technology: From the Telephone to the Internet.
Malden, MA: Blackwell.
———. 2001b. “Technologies, Texts and Affordances.” Sociology: The Journal of the
British Sociological Association 35 (2): 441.
Kellaway, M. 2015. “Facebook Now Allows Users to Define Custom Gender.” Advo-
cate, February. www.advocate.com/politics/transgender/2015/02/27/facebook-now-
allows-users-define-custom-gender.
Kirkpatrick, D. 2011. The Facebook Effect: The Inside Story of the Company That Is
Connecting the World. New York: Simon & Schuster.
Kolko, B. E. 1999. “Representing Bodies in Virtual Space: The Rhetoric of Avatar
Design.” The Information Society 15 (3): 177–86.
———. 2000. “Erasing @race: Going White in the (Inter)face.” In Race in Cyberspace,
edited by B. Kolko, L. Nakamura, and G. B. Rodman, 213–32. New York: Routledge.
Latour, B. 2005. Reassembling the Social: An Introduction to Actor-Network Theory.
Cambridge, MA: Harvard University Press.
Light, B. 2007. “Introducing Masculinity Studies to Information Systems Research:
The Case of Gaydar.” European Journal of Information Systems 16 (5): 658–65.
Light, B., and K. McGrath. 2010. “Ethics and Social Networking Sites: A Disclosive
Analysis of Facebook.” Information Technology & People 23 (4): 290–311.
Lingel, J., and A. Golub. 2015. “In Face on Facebook: Brooklyn’s Drag Community and
Sociotechnical Practices of Online Communication.” Journal of Computer‐Mediated
Communication 20 (5): 536–53.
Magnet, S. 2007. “Feminist Sexualities, Race and the Internet: An Investigation of Sui-
cidegirls.com.” New Media & Society 9 (4): 577–602.
Martin, J. N., A. B. Trego, and T. K. Nakayama. 2010. “College Students’ Racial Atti-
tudes and Friendship Diversity.” The Howard Journal of Communication 21: 97–118.
doi:10.1080/10646171003727367.
McNicol, A. 2013. “None of Your Business? Analyzing the Legitimacy and Effects of
Gendering Social Spaces Through System Design.” In Unlike Us Reader: Social
Media Monopolies and Their Alternatives. INC Reader #8, 200–19. Amsterdam:
Institute of Network Cultures. www.exhipigeonist.net/files/Unlike%20Us%20
Reader%20-%20Social%20Media%20Monopolies%20And%20Their%20Alterna-
tives.pdf#page=202.
McVeigh-Schultz, J., and N. K. Baym. 2015. “Thinking of You: Vernacular Affordance
in the Context of the Microsocial Relationship App, Couple.” Social Media+ Society
1 (2): 2056305115604649.
Nagy, P., and G. Neff. 2015. “Imagined Affordance: Reconstructing a Keyword for
Communication Theory.” Social Media+ Society 1 (2): 2056305115603385.
Nakamura, L. 2002. Cybertypes: Race, Ethnicity, and Identity on the Internet. New
York: Routledge.
Nakamura, L., and P. A. Chow-White. 2012. “Introduction—Race and Digital Technol-
ogy: Code, the Color Line, and the Information Society.” In Race After the Internet,
edited by L. Nakamura and P. A. Chow-White, 1–18. New York: Routledge.
Norman, D. A. 1988. The Psychology of Everyday Things. New York: Basic Books.
Papacharissi, Z. 2009. “The Virtual Geographies of Social Networks. A Comparative
Analysis of Facebook, LinkedIn, and a Small World.” New Media & Society 11
(1–2): 199–220. doi:10.1177/1461444808099577.
Raynes-Goldie, K. 2010. “Aliases, Creeping, and Wall Cleaning: Understanding Pri-
vacy in the Age of Facebook.” First Monday 15 (1). http://firstmonday.org/ojs/index.
php/fm/article/viewArticle/2775/2432.
Rheingold, H. 1996. “A Slice of My Life in My Virtual Community.” High Noon on the
Electronic Frontier: Conceptual Issues in Cyberspace, 413–36.
Taigman, Y., M. Yang, M. Ranzato, and L. Wolf. 2014. “DeepFace: Closing the Gap to
Human-Level Performance in Face Verification.” In Proceedings of the 2014 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), June. www.facebook.com/publications/546316888800776/.
Thomas, D. 2016. “Facebook Doesn’t Know You’re White.” Vice News, November. https://
news.vice.com/story/facebook-tracks-your-ethnic-affinity-unless-youre-white.
Turkle, S. 1995. Life on the Screen. Identity in the Age of the Internet. New York:
Simon & Schuster.
Tynes, B., L. Reynolds, and P. M. Greenfield. 2004. “Adolescence, Race, and Eth-
nicity on the Internet: A Comparison of Discourse in Monitored vs. Unmonitored
Chat Rooms.” Journal of Applied Developmental Psychology 25 (6): 667–84.
doi:10.1016/j.appdev.2004.09.003.
Van Dijck, J. 2013. The Culture of Connectivity: A Critical History of Social Media.
Oxford: Oxford University Press.
Williams, L. 2016. “Rinstagram or Finstagram? The Curious Duality of the Modern Ins-
tagram User.” The Guardian, September. www.theguardian.com/technology/2016/
sep/26/rinstagram-finstagram-instagram-accounts.
Winner, L. 1980. “Do Artifacts Have Politics?” Daedalus 109 (1): 121–36.
Wong, J. C. 2016. “Mark Zuckerberg Accused of Abusing Power After Facebook Deletes
‘Napalm Girl’ Post.” The Guardian, September. www.theguardian.com/technology/2016/
sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war.
Zhao, S., S. Grasmuck, and J. Martin. 2008. “Identity Construction on Facebook: Digi-
tal Empowerment in Anchored Relationships.” Computers in Human Behavior 24
(5): 1816–36. doi:10.1016/j.chb.2008.02.012.
Appendix C
Industry Report Example
Background
Petz.com is a one-stop online shop for pet owners to buy supplies—food, toys,
medication, etc.—for their pets. As part of ongoing Petz.com website improve-
ment initiative (see Petz.com Website Improvements List), the “customer reg-
istration page” has been identified as a key part of the website to redesign for
usability. The purpose of this research was to understand how new users to the
website navigate through the registration page to sign up to be regular custom-
ers of Petz.com.
Stakeholders
Sam Smith—UX Researcher
Dolly Rhea—UX Designer
Silvio Buresco—Software Developer
Poppy Lupus—Product Manager
Eddie Wharton—CEO
Methodology
Usability testing
Participants
Ten pet owners who were interested in signing up with Petz.com to buy pet
supplies regularly. Participants were recruited through popover surveys on the
website, shown only to new visitors on their first visit (before registration).
Key Findings
Key findings from usability tests are summarized:
Known Limitations
The following is a known limitation of the data collected:
Recommendations
Key recommendations are summarized:
• Increase the size of the “register now” button, and make it a darker color,
so it stands out more and is easier to find on the home page.
• Show an error message alerting users when they have not entered a correctly
formatted email address (e.g., missing an @); see the sketch after this list.
• Make the “date of birth” data collection box optional, and provide a tooltip
to explain why this data is being collected (this feature is used to send
loyal customers “happy birthday” messages along with a discount code to
use on their birthday, but it is not mandatory for registering as a regular
shopper on the website).
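
The following is a minimal sketch of the kind of format check the second
recommendation describes; the function name and message text are illustrative
rather than taken from Petz.com’s codebase:

// Returns an error message for a malformed email address, or null if
// the address passes a basic structural check (something@domain.tld).
function validateEmail(input: string): string | null {
  const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!emailPattern.test(input)) {
    // This message would be rendered next to the email field.
    return "Please enter a valid email address (e.g., name@example.com).";
  }
  return null;
}

// A missing "@" now produces a visible error instead of failing silently.
console.log(validateEmail("jane.doe.example.com"));
// -> "Please enter a valid email address (e.g., name@example.com)."
console.log(validateEmail("jane.doe@example.com")); // -> null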
Reference
Petz.com Website Improvements List, gathered from complaints from previous
users
Appendix
Instructions and script for usability testing
Index
A/B Testing 12, 51, 53, 157–163, 164
accessibility 10, 51, 55, 59–61, 92, 105, 146, 151, 153, 167
Adobe 50, 119, 159, 161–162
Amazon 32, 36, 98
analytics 17, 113, 152
anonymity 58, 183, 186
API 30, 34–35
Apple 25–27, 32–33, 91, 96, 145
augmented reality (AR) 32
Bambino 38–39
brainwriting 11, 68, 129–135, 167
breadcrumbs 35–36
breakup letters 11, 48, 95–100, 120, 167
Bumble 16
Butler, J. 16, 185
coach marks 10, 36–37
cognitive mapping 11, 50, 122–128, 165–166
competitive analysis 56, 119
confidentiality 58
consent 58–59
contextual inquiry 8, 11, 48, 51, 78, 101–107, 156, 165, 167, 169
Cooper, A. 109
critical analysis walkthrough 143–148
darkside writing 11, 50, 52, 129–135
Define 11, 47, 48–49, 52, 53, 98, 104, 108, 112, 115, 116, 118, 132, 134, 142, 157
Design Thinking 8, 10, 11, 45–53, 69, 98, 102, 118, 129, 130, 132, 138, 141, 144, 146, 157, 159, 162, 165–166
diary studies 10, 88–90, 94, 113
eBay 38, 40
emotional journey map 10, 48, 80–87, 94, 128, 154
Empathize 47–49, 51–52, 60, 73, 76–77, 80, 82, 88, 92, 94–97, 101, 103, 108, 109, 111–112, 114–116, 119–121, 124–127, 129, 132, 157, 160
Engelbart, D. 24–25
ENIAC 23–24
ethics 58–59, 187
ethnography 76–78, 82, 101, 104
EUI 34, 36
expert review 144
eye movement tracking 152, 156
Facebook 4, 9, 16–18, 21, 34, 38–39, 48, 59, 131
Figma 159, 161–162
focus groups 45, 48, 50, 57, 59, 73–76, 78, 96–97, 99–100, 103, 109, 114, 122, 129, 131–132, 134, 139, 160, 163, 174–176
Gmail 27, 158, 159
Google 4, 8, 9, 16–17, 18, 27, 34, 35f4.1, 36, 67–68, 76, 88–93, 95–97, 99, 112, 115, 119, 122, 124, 126–127, 129, 130, 132–133, 139, 145, 157, 159, 161–162
GUI 31–35
human computer interaction (HCI) 3
heat maps 152
heuristic evaluation 11, 143–149, 166
Heinz 4–5, 95
Hicks, M. 18
Hotjar 152
IBM 24
Ideate 11, 47, 49–50, 52–53, 116, 122–127, 129–132, 137–138, 157
IDEO 46, 81
Instagram 14, 19, 90–91, 96, 111, 117–118
Interaction Design Foundation 79, 118
interviews 4–5, 8–11, 20, 45, 48, 51, 56–57, 59–61, 63–64, 66, 73–76, 78, 81, 83, 89, 95–97, 101–106, 109–110, 113–114, 122, 125–126, 129, 131, 139, 146, 160, 163, 170
IRB 58
JAWS 105
Jewell, L. 4–5
Krug, S. 154–155
Linux 31
literature review 49, 52, 56, 63–65, 118, 169
love letter 11, 48, 95–100
Mac 26, 31, 33, 63
Microsoft 10, 25, 27, 67, 83, 119, 145
moderator 75, 131, 151
Morville, P. 5–7, 5f1.1, 47
Mural 10, 68–69, 79, 115, 119, 124
Netflix 88, 96, 130
Nielsen, J. 144, 148
Nike 85
Noble, S. 16–17
Norman, D. 6, 27, 32–33
O’Neil, C. 17–18
observations 4–5, 8–11, 20, 45, 55, 61, 63–64, 66, 73, 76–78, 82, 102–104, 106, 113, 147, 151, 153
Optimal Sort 136, 139, 142
pain points 7, 27, 48, 51, 77–78, 104, 109, 154, 156
personas 11, 49, 110–114, 159, 164, 167
Pinterest 120–121
popovers 38–39, 76, 156, 189
Problem Statement 11, 48–49, 52, 66, 75, 83, 94, 97–98, 104, 106, 115–116, 118–119, 121, 123, 129–132, 134, 136–137, 141
privacy 9, 58, 61, 85, 90, 107, 165, 182, 187
problem tree 11, 49, 52, 97, 98, 115–121, 137
Prototype 7, 10, 12, 47, 50, 52–53, 124–126, 130, 142, 150, 153, 155, 157–163
recruitment 9, 10, 57, 88, 169, 170
reporting 10, 49, 62, 69, 112
research problem 7, 11, 48, 53, 82, 89–91, 116, 118–119, 124, 126, 138, 157
Research Question 46, 48–49, 55–56, 63–64, 75, 78, 96, 115, 118–119, 121, 122–123, 125, 131, 134, 136, 141, 153, 156, 163, 169
scenarios 11, 17, 19, 21, 68, 102, 106, 108, 111–112, 114, 137, 149, 150, 155, 167
scientific research 45, 153
screenshot diaries 11, 48, 68, 88–94, 109
Shaw, A. 19
sidebars 38, 40
Siroker, D. 162
sliders 38
Smart Design 96
Snapchat 14
Spotify 112–113
stakeholders 46, 49, 65, 73–74, 189
Stephenson, W. 136, 142, 167
surveys 5, 8–10, 45, 58, 73, 75–76, 78, 95, 102, 105, 122, 126, 133, 159–161, 163, 175, 189
SurveyMonkey 76, 159, 161, 163
System Usability Scale (SUS) 151
task analysis 116, 118
technical walkthrough 146, 148
TED 37–39, 47
Test 47, 50–51, 53, 76–77, 143, 146, 150, 157
thematic analysis 63, 73, 78–79, 92, 97–98, 100, 104, 112, 161, 163
Tinder 19
Tolman, E. 123
tooltips 10, 36–37, 190
touchpoints 10, 80–83, 85–86
tree testing 11, 136–138
Twitter 35–36, 124–125
Universal Design for Learning 122
usability 5, 7–8, 12, 27, 32–33, 49, 51, 55, 59–61, 63, 77–78, 101, 118, 136–137, 140, 143–149, 150–156, 169, 189–190
usability testing 8, 12, 51, 55, 63, 77, 78, 101, 118, 137, 146, 149, 150–156, 169, 189, 190
VR 32, 52, 53
Wachter-Boettcher, S. 17
Wajcman, J. 16
walkthrough 10, 11, 143–144, 146–149, 166, 175
Winner, L. 14–15
wireframe 7, 50, 150, 153, 155
WYSIWYG 25–26
Xerox 25, 33
YouTube 25, 38, 130
Zemke, R. 80
ZoomText 105