
The People Nerds Guide to

Usability
TABLE OF CONTENTS

Usability: An introduction

Usability goals and best practices

How to conduct a usability test

Usability test checklist

How Dropbox approaches usability

Usability with dscout

The People Nerds’ Guide to Successful Cross-Team Collaboration 2


An intro to usability
What is usability?

Usability is the ability for someone to:

• Access your product or service
• Complete the tasks they need
• Achieve whatever goal or outcome they expect

For example, with a clothing company, a user would go to the website with the
expectation of being able to purchase clothing. But just because a user can buy clothing
doesn’t mean it’s an easy or satisfying experience.

So we can break usability down further into three areas:

1. Effectiveness:
Whether a user can accurately complete tasks and an overarching goal.

2. Efficiency:
How much time and effort it takes the user to accurately complete tasks
and the overarching goal.

3. Satisfaction:
How comfortable and satisfied a user is with completing the tasks and goal.

Usability testing, whether you use metrics or run a qualitative usability test, looks at these
three factors to determine whether or not a product or service is usable.



What are we testing? And how do we know?
There are many aspects you could test, even with a simple product. But the point of a
usability test is to ensure that users can complete their most common tasks to achieve
their goals.

Again, one of a user’s main goals on a clothing website would be to purchase clothing.
There can be smaller goals within that larger goal, such as comparing clothing options.

Then, you can break this down into the tasks people must do to achieve those goals.
For instance:

1. Searching for specific types of clothing with keywords
2. Filtering through colors, brands, sizes
3. Sorting by reviews, prices
4. Opening multiple windows to compare different options
5. Saving clothing to a favorites list
6. Reading (and understanding) the size and fit of clothes
7. Adding a piece of clothing to a basket
8. Checking out and paying for the clothing
9. Receiving a confirmation of purchase
10. Receiving the clothing

These are all tasks associated with the larger goal. With usability testing, we ask people
to do these critical tasks to assess whether or not they can achieve them and the larger
goal in an efficient, effective, and satisfactory way.

If someone can do these tasks, they can get to their expected outcome. However,
if efficiency, effectiveness, or satisfaction suffer during this process, they may get
frustrated and give up or go to a different website.

We’ve all encountered this—an infuriating user experience that made us rage-click,
throw our phones (against a soft surface, of course), and give up on a product or service.



Usability testing goals
and best practices
When is usability testing an appropriate method?
Usability testing can help us understand:

• Whether or not a user can use a product for its intended function
• Whether or not a product allows a user to reach their goals
• How a user uses a product, separate from how we think a user should use a product
• How a product functions when placed in front of a human
• Where bugs and complicated user experiences lie within a product

What usability testing will NOT tell us:

Usability won’t help you understand:

• The emotions a user is feeling outside of their immediate actions
(instead, try generative research)
• Statistically significant quantitative data on usage patterns or trends
(instead, try product analytics)
• Preferences between two versions of a design (instead, try A/B testing
or comparative usability testing)
• The desirability of a product (instead, try market research)
• Complete market demand and value (instead, try market research)
• What people will pay for a product (instead, try the Van Westendorp
Price Sensitivity Meter)

Now, let’s dive into some goals for usability testing, because the best thing you can do is
start with goals and a research plan.



Common goals for usability tests:
Before you dive right in, determine what exactly you want to uncover from the project.
Some ideas to keep in mind:

• Learn about people’s current pain points, frustrations, and barriers with a
[current process/current tool] and how they would improve it
• Uncover the current tools people use to [achieve goal], their experience with
those tools, and how they would improve them
• Evaluate how people are currently interacting with a [product/website/
app/service]



How to conduct
a usability test
Once you’ve identified the goal of the test, you can start putting the pieces in place to
conduct one.

Recruitment and sample size


Recruiting the right people is essential for a good user research study. It’s incredibly
costly to fill a 60-minute usability test, only to find out you ran it on the wrong people—
plus, it’s awkward.

For usability tests, ask yourself the following questions, with some examples from above,
to understand who your target users would be:

• What are the particular behaviors I am looking for?
(ex. They have purchased clothing online in the past month)
• Have they needed to use the product? And in what timeframe?
(ex. They have used our product in the past month to purchase clothing)
• What goals are important to our users?
(ex. Getting the right size and fit of clothing without returns)
• What habits might they have?
(ex. Visiting our website frequently, at least once every other week)

One note about the sample size for usability testing: a general idea for evaluative
research is testing with five people. While this could be correct, it isn’t a hard and
fast rule.

For the shopping example above, if you pick five random people off the street to
test your product, you likely won’t find 85% of the issues. The fine print behind “five
people per usability study” is that it means five users per segment.



Some general guidelines for the top evaluative methods:
Usability testing
• Moderated: Recruit at least five participants per segment
• Unmoderated: Recruit at least 15 participants per segment, in case you get messy data

Concept testing
• Moderated: Recruit at least eight participants per segment
• Unmoderated: Recruit at least 15 participants per segment, in case you get messy data

Card sorting
• Recruit about 20-30 participants per segment

Benchmarking
• Recruit 25 or more participants per segment, since we’re looking at quantitative data
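The per-segment numbers above come from problem-discovery math: the expected share of usability problems found by n users is 1 − (1 − p)^n, where p is the probability that any one user encounters a given problem. Here is a minimal Python sketch, assuming the classic Nielsen/Landauer estimate of p ≈ 0.31 (the actual rate varies by product and segment):

```python
def problems_found(n_users, p=0.31):
    """Expected proportion of usability problems uncovered by n users.

    p is the probability that a single user encounters a given problem;
    0.31 is the classic Nielsen/Landauer estimate, not a universal constant.
    """
    return 1 - (1 - p) ** n_users

# With the classic estimate, five users surface roughly 85% of problems:
print(round(problems_found(5), 2))  # 0.84
```

The curve flattens quickly, which is the logic behind "five users per segment": past that point, covering another segment usually uncovers more than adding a sixth user to the same one.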

Finally, it’s important to consider other characteristics beyond demographics when
screening would-be usability participants. Do you need specific customer groups,
personas, or maybe users of particular features? Not all usability testing is the same,
so be thoughtful about who is included in and excluded from each test.

If you plan to create a rolling usability program, for example, consistency across outputs
might necessitate a panel sampling method. If, on the other hand, you are managing
many requests from multiple stakeholder teams, homing in on the 3-5 key screening
criteria is key to keeping ahead.

Want to learn more about dscout’s recruiting options? See our platform in action.



Task-writing
When constructing usability testing tasks, consider these steps:

1. Start with a goal

Start with what you want the user’s end goal to be, not the goal of the task. What does
the user need (or want) to accomplish at the end of the action? What is their goal for
using this product?

2. Include some context

Instead of throwing participants into action with no relevant information, give them
context on why they need to use the product. You can also consider the context and
background information for why they would use the product in the real world.

3. Give them relevant information

Since you’re recording metrics, you don’t want to be vague in your instructions. If users
need to input dates, locations, or particular data in a form, give them that information.
You don’t want the user to guess what you want them to do, resulting in skewed data.

4. Ensure there is an end the user can reach

If you’re trying to get someone to accomplish a task, make sure they can. There
should be a reachable “end,” which satisfies the participant and helps you record if the
participants could complete the task.

5. Write your task scenario(s)

Once you’ve brainstormed this information, it’s time to write your task scenario. Don’t
shy away from creating questions that are open-ended, qualitative, or media-based.
You can use different data types to triangulate your findings, adding rigor to your
recommendations.

• Think-aloud questions – Participants can articulate why they are making the choices
they are. This can be more accessible and inclusive for some participants who find typing
tedious. As a researcher, it also gives you a rich set of data to triangulate, especially if you
have access to screen recordings.

• Selfie camera recordings – Offer a view of a participant’s face, including the
expressions of delight when they figure something out and grimaces at bottlenecks.



• Concluding open ends – Offer participants a moment to share any feedback you didn’t
think to ask about. This might reveal information useful to a product or even a brand
stakeholder. At the very least, it should help you sharpen your next study.

6. Conduct a dry run (or two!)

After writing down your task scenarios, it can be extremely beneficial to try a dry run
with internal employees, friends, family, etc. This will allow you to practice the test flow,
make sure the tasks make sense, and indicate whether there are too many.

In terms of how many tasks should be in one usability test, it depends on the complexity
of the tasks and how much time you have with the participant.

Pro tip: For a 45-60 minute session, five to seven tasks is an appropriate number.
By including a dry run in your process, you’ll know how many tasks you can fit into
the session.

A potential task example for the above clothing company might look like this:

Winter is coming up, and you’re looking for a new winter coat to keep you warm that is
under $150.

Another task example could look like the following image.

Learn how dscout makes task writing and follow up quick and easy with Express.



Usability metrics
Although usability tests can be qualitative, they are more likely to be quantitative.
Metrics are the backbone of task-based usability, offering a numerical representation of
what a product experience is “doing” for a given user.

Don’t forget the core research skill of observation, which will serve you well in
interpreting and contextualizing any metric. Keep in mind that if you’re looking for
qualitative feedback, usability metrics won’t be a good fit.

Before we dive into the actual metrics, keep in mind the three cornerstones of usability,
because the metrics you collect will be measuring these things:

Effectiveness: Whether or not a user can accurately complete a task that allows them
to achieve their goals. Can a user complete a task? Can a user complete a task without
making errors?

Efficiency: The amount of cognitive resources it takes for a user to complete tasks. How
long does it take a user to complete a task? Do users have to expend a lot of mental
energy when completing a task?

Satisfaction: The comfort and acceptability of a given website/app/product. Is the
customer satisfied with the task?

Effectiveness
• Task Success: This simple metric tells you if a user can complete a given task
(0=Fail, 1=Pass). You can get fancier with this by assigning more numbers that
denote the difficulty users had with the task, but you need to determine the levels
with your team before the study.

• Number of Errors: This metric gives you the number of errors a user committed while
trying to complete a task. You can also gain insight into common mistakes users run
into while attempting the task. If several users try to complete a task in an unintended
way, a common trend of errors may emerge.

• Single Ease Question (SEQ): The SEQ is one question (on a seven-point scale)
measuring the participant’s perceived task ease. Ask the SEQ after each completed
(or failed) task.

• Confidence: Confidence is a seven-point scale that asks users to rate how
confident they were that they completed the task successfully.



Combining these metrics can help you highlight high-priority problem areas. For
example, suppose participants respond confidently that they completed a task, yet most
actually fail. That gap between perceived and actual success signals a serious usability
problem worth prioritizing.
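As a quick illustration of combining effectiveness metrics, the sketch below tallies completion rate, mean errors, and mean confidence for one task, then flags a confidence/success discrepancy. The data is invented for the example:

```python
# Hypothetical per-participant results for a single task: pass/fail (1/0),
# error count, and a 7-point confidence rating.
results = [
    {"passed": 0, "errors": 3, "confidence": 7},
    {"passed": 0, "errors": 2, "confidence": 6},
    {"passed": 1, "errors": 0, "confidence": 7},
    {"passed": 0, "errors": 4, "confidence": 6},
    {"passed": 1, "errors": 1, "confidence": 5},
]

n = len(results)
completion_rate = sum(r["passed"] for r in results) / n
mean_errors = sum(r["errors"] for r in results) / n
mean_confidence = sum(r["confidence"] for r in results) / n

print(f"completion {completion_rate:.0%}, errors {mean_errors:.1f}, "
      f"confidence {mean_confidence:.1f}/7")

# High confidence paired with low completion is the red flag to prioritize:
if mean_confidence >= 5 and completion_rate < 0.5:
    print("Flag: participants believe they succeeded, but most failed")
```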

Efficiency
• Time on Task: This metric measures how long it takes participants to complete
or fail a given task. It gives you a few different options to report on: average task
completion time, average task failure time, or overall average task time (across both
completed and failed tasks).

• Subjective Mental Effort Question (SMEQ): The SMEQ allows users to rate
how much mental effort a task took to complete.

Satisfaction
• System Usability Scale (SUS): The SUS has become an industry standard and
measures the perceived usability of a product or experience. Because of its popularity,
you can reference published statistics (for example, the average SUS score is 68).
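SUS scoring follows a fixed formula: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the total is multiplied by 2.5 to give a 0-100 score. A small sketch:

```python
def sus_score(responses):
    """Score one participant's ten-item SUS questionnaire.

    responses: ten ratings on a 1-5 scale, in question order.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5  # scale the 0-40 raw total to 0-100

# A neutral participant (all 3s) lands exactly in the middle:
print(sus_score([3] * 10))  # 50.0
```

Individual scores are then averaged across participants before comparing against published benchmarks like the 68-point average.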

Overarching metrics
• SUM (Single Usability Metric): This measurement enables you to take completion
rates, ease, and time on task and combine them into a single metric describing the
usability and experience of a task.

• Standardized User Experience Percentile Rank Questionnaire (SUPR-Q):
This questionnaire is ideal for benchmarking a product’s user experience. It allows
participants to rate the overall quality of a product’s user experience based on four
factors: usability, trust/credibility, appearance, and loyalty.
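To make the SUM idea concrete, the sketch below averages three components after normalizing each to a 0-1 range: completion rate, mean SEQ rating, and time on task relative to a target time. Treat this as a simplified illustration; the published SUM method standardizes components as z-scores rather than simple proportions, and the target time here is an invented parameter:

```python
def simple_sum(completion_rate, mean_seq, mean_time, target_time):
    """Simplified single usability metric for one task (0-1 scale).

    completion_rate: proportion of participants who passed (0-1)
    mean_seq: average Single Ease Question rating (1-7)
    mean_time / target_time: observed vs. target time on task
    """
    ease = (mean_seq - 1) / 6                  # map the 1-7 scale onto 0-1
    speed = min(target_time / mean_time, 1.0)  # cap faster-than-target at 1
    return (completion_rate + ease + speed) / 3

print(round(simple_sum(0.8, 5.5, mean_time=90, target_time=60), 2))  # 0.74
```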



A note on following up:

Although usability testing leans quantitative, focusing on medians and averages of
instruments and matching them to specific experience areas for prioritization, this doesn’t
mean there aren’t moments when user responses might require more digging.

If your research tooling allows, consider following up with a handful of participants
to dig into their motivations, rationale, or specific actions. You might choose the one
participant who struggled the most, or the person who represents a customer group
your stakeholders are interested in building new products for.

A few conversations can go a long way toward sharpening your recommendations, adding
context and richness to the hard quant your usability test likely produced.

This can proceed in the other direction, too, where a few repeated examples of friction or
delight points might create the opportunity to trend-spot with a larger sample.

Here, going from a 15-person usability test to a 1500-person survey adds confidence you
can weave into your recommendations, especially for future-proofing the product.

A few ways you can dig deeper into your data:

• Schedule 1-1 interviews with a few of your most engaged participants to better
understand their feedback and gather additional insights.
• Ask a handful of your most engaged participants to join a series of studies to
understand their product usage habits over time.
• To gut-check a potential trend, conduct another usability test with a wider audience.



Analysis
Once you conduct the usability test, it’s time to begin the process of unwinding your data
into patterns and trends you can act on.

Qualitative usability testing

When synthesizing qualitative usability tests, it can help to focus on global tags and
affinity diagramming. A few examples of global tags are:

• Goals: What the person is trying to accomplish as an outcome.
• Needs: What a person requires to fulfill a goal.
• Tasks: What a person does to achieve a goal.
• Pain points: Barriers or difficulties toward accomplishing a goal.

After each usability session, it can help to do a quick debrief and split the global tags into
four quadrants and note what happened during that interview concerning each quadrant.

For example, if you were usability testing the clothing website, a session debrief (one
participant) might look like this:

Goals
• Purchase a new weatherproof winter jacket to keep warm during snowstorms and in
below-freezing weather.
• Get the highest quality for the lowest price by comparing different products on
the website.

Needs
• Weatherproof and durable jacket, knowing this is the case through a description
• Lasts for more than five years
• Warm enough for below-freezing weather
• Under $250



Tasks
• Searching for winter jackets using keywords
• Filtering by weatherproof or durability
• Reading the description to understand more about the product
• Sorting by price

Pain points
• Not knowing if a coat is waterproof or weatherproof for the necessary conditions
• Understanding fit and size with additional layers
• Knowing the length of the coat
• Understanding other peoples’ experiences with the coat in similar situations

Completing this small synthesis session makes for easier work at the end of the study.
Once you complete all the sessions and each debrief, bring all the debriefs together to
assess patterns and trends (anything three or more participants are saying).

Streamline your analysis process and quickly view completion, time on task,
average ease rating, and other metrics with dscout. Learn more.



Quantitative usability testing (with metrics)

One way to report quantitative usability testing is with a stoplight report.

A stoplight report:
• Conveys whether or not a user has passed a usability testing task
• Includes how severe the problem is in a task
• Shows the amount of time spent on a task, by task and participant
• Highlights, on average, how many participants passed/failed a given task
• Summarizes how the usability test went through visuals

The most valuable part of the stoplight approach is how visual it is. It can quickly provide
a stakeholder with a holistic overview of how the usability test went.

A stoplight chart includes the following components (but you don’t have to include all of them!):
• Each participant has a column, with a participant summary at the bottom
• Each task has a row, with an average task summary
• The three colors indicate:
• Whether a participant succeeded (green)
• Whether a participant struggled with a task (orange)
• Whether a participant failed the task (red)

The time for each task is recorded within the task participant bubble and averaged per
task. To try it out yourself, grab a copy of our Usability Quant Testing Template.
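Before reaching for a template, the stoplight layout is easy to prototype as a text chart. The sketch below uses invented results and prints one row per task, with the per-task pass count and average time:

```python
# Hypothetical stoplight data: per task, each participant's status
# ("pass" / "struggle" / "fail") and time on task in seconds.
results = {
    "Search for a coat": [("pass", 35), ("pass", 42), ("struggle", 88)],
    "Filter by price": [("pass", 20), ("fail", 120), ("struggle", 95)],
    "Check out": [("fail", 140), ("fail", 150), ("pass", 60)],
}

COLOR = {"pass": "GREEN", "struggle": "ORANGE", "fail": "RED"}

def task_summary(cells):
    """Return (passed count, total, average time) for one task's row."""
    passed = sum(1 for status, _ in cells if status == "pass")
    avg_time = sum(t for _, t in cells) / len(cells)
    return passed, len(cells), avg_time

for task, cells in results.items():
    row = "  ".join(f"{COLOR[s]:<6}{t:>4}s" for s, t in cells)
    passed, total, avg = task_summary(cells)
    print(f"{task:<18} {row} | {passed}/{total} passed, avg {avg:.0f}s")
```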

Now that you’ve got the basics on conducting and analyzing a usability test, we’ll go
through a handy usability testing checklist and offer specific use cases showing how
other prominent companies have approached this method.



Usability Test Checklist
Make sure every usability test goes off without a hitch. We’ve built a robust checklist
to ensure you’re prepared for everything prior, during, and after the test.

GET THE CHECKLIST



How Dropbox
approaches usability
We talked with Design Researcher Meghan Earley about how Dropbox met user needs
by going beyond standard usability testing—leveraging video and diary studies to look
longitudinally. As a result, they gained more confidence in their issue reports and saw
increased investment from their stakeholders.

The challenge:
People use Dropbox in the browser to host and share files—relegating it to more passive,
background usage. So when the company began development of their new desktop
app, they strove to create a single workspace for users to organize their content,
connect their tools, and bring their teammates together.

But when you build a product for more flexible and frequent usage—you have to be sure
that it’ll work as intended. And to be confident that it’ll work as intended, the insights that
you need are often more extensive than what you could glean from a typical usability test.

And so, Meghan turned to dscout.

“I looked at dscout because we wanted to do a longitudinal study,” she says. “It was the
first time we had people using the product outside of interviewing and concept testing.
So we really wanted to get a sense for their day-to-day: How are they interacting with
this app, and what are their attitudes towards it?”

“We were hoping to get some in-context feedback; we needed participants to submit
surveys in the moment they were doing things. That made a major impact in our attempts
to understand what the real issues were.”



The solution:
The Dropbox study took four weeks total—longer than most. However, the length was
necessary for the type of insights Dropbox was looking for, as well as the product they
wanted to release.

“We kept it pretty open-ended in the beginning,” Meghan says, “We didn’t want people
to feel like they were doing something right or wrong. We really just wanted to be a fly
on the wall and understand what was going on.”

The second half of the study was more straightforward. Participants were sent specific
survey-like questions asking about different parts of the product. And as the study
concluded, Meghan conducted a Live mission—pulling specific users in for 1:1 interviews
about their experience.

“The interviews were nice because we already had so much context,” Meghan says.
“They were really efficient. You can get straight to the heart of things after having heard
from this person on dscout for the past four weeks.”



The impact:
Leaning on dscout’s platform, Dropbox was able to conduct a longer study and
expand the breadth of their insights as a result.

“dscout offered us a complete understanding,” Meghan says. “Usability problems are
glaringly important from an evaluative perspective. In a typical usability test, we’ll see
someone encounter something once over the course of an interview. But when we’re
seeing people encountering things over and over, it’s definitely a signal that they’re
more important.”

This allowed Dropbox to address those issues and focus on the areas where they could
really impact their users’ needs.

“We identified some key problems pertaining to the new functionality that we’re
adding. We’re adding features to help people collaborate and work with each other
better. And there were some pretty key blockers to people being able to do that.

“So identifying that has informed our design direction and understanding what we need
to do in order to help people collaborate more in Dropbox.”

Dive deeper into the study and see a few of Meghan’s tips for ensuring a successful
project in our Field Report.

READ MORE



Benefits of usability testing
with dscout
dscout can help you gather the in-the-moment insights you need on your timeline.
See how we can support your usability testing needs.

A centralized experience keeps you focused

With built-in operations like recruitment, screeners, and incentive processing, plus a
single-view research activity builder, usability testing is more accessible and nimble.

A platform approach extends insight impact

Why stop at a single usability test? dscout’s suite of moderated and unmoderated tools
offers more variety for usability testing: follow-up interviews, pre-session trend spotting,
or a rolling iterative approach are all possible from the same product.

A partner to augment your team

dscout’s staff of trained researchers can help translate research briefs, advise on analysis
approaches, or help with field management. This support goes beyond just tech help: our
team becomes an extension of your own, critical in these lean times.

A balanced approach to automation

Thoughtfully integrated automation like response quality checks, expressiveness filtering,
full session transcriptions, and usability-specific analysis (e.g., time on task) keeps
momentum and opens time for deeper work where you need it most.

A way to engage stakeholders

Make the insights stick by bringing collaborators and decision makers into the process,
whether that’s through easy-access viewer status, one-click data share links, or
customized video playlists. Usability with dscout makes research a team sport.

“It was great that I was able to share the results from the analyze page
with someone else on my team. Huge time-saver! Otherwise, I spend
a lot of time taking the results and re-summarizing it somewhere else.”
Mary Mascari, UX Researcher at a Leading U.S. Airline



Ready to scale your usability
testing and entire
research program?
Successfully scaling research requires effective stakeholder collaboration, flexible
methods, and insights designed for impact.

See how dscout can help

You’re fascinated by the why.
We break down the hows.
Get the People Nerds blog sent to your inbox to hear from top UXR practitioners and pick
up a few pieces of novel advice for every stage of your research project cycle.

Subscribe to the People Nerds Newsletter

