THIS SUX!

Ritvij Gautam is the co-founder and CEO of TryMyUI. He has a background in
philosophy of mind, which informs and influences his approach to the study of
usability and user-centric design. Ritvij has a big-picture outlook that seeks
to apply broad concepts and principles, and is quick to draw on knowledge from
outside fields to enhance his vision.

Timothy Rotolo is a UX Architect at TryMyUI. His background is in international
relations, but his knack for design and skills in research led him to the user
experience field. Tim has run and observed hundreds of usability studies on
websites of every kind, and authors TryMyUI’s “UX Wars” series of usability
analysis articles.
Copyright © 2015 by TryMyUI, Inc.
All rights reserved.

Authors: Ritvij Gautam, Timothy Rotolo


Contributors: Jeff Sauro, Jennifer Romano Bergstrom, Guiseppe Getto, Karan Saggi
Edited by Timothy Rotolo.

Book design by Jenny Cang.


Cover design by Timothy Rotolo.

ISBN 978-0-9968284-0-6

Published by: TryMyUI, Inc.


1200 Park Place, Suite 290
San Mateo, CA 94403

No part of this publication text may be uploaded or posted online without the prior written
permission of the publisher.
For permission requests, write to [email protected]
CONTENTS

INTRODUCTION

ASKING & OBSERVING IN UX RESEARCH

4 Chapter I Farewell to Focus Groups

10 Chapter II Occurrence, Not Recall

CONCEPTS & PRINCIPLES

14 Chapter III Qualitative & Quantitative

18 Chapter IV Designing an Effective Usability Test Jeff Sauro

24 Chapter V Running a Comparative Usability Study

30 Chapter VI Experts vs. the Crowd

TOOLS FOR EFFECTIVE UX DATA COLLECTION

36 Chapter VII Standardizing UX Feedback

41 Chapter VIII Measuring Task Usability

PRACTICING USER-CENTRIC DESIGN

45 Chapter IX Collaborative Analysis

50 Chapter X Iterative Testing Jennifer Romano Bergstrom

57 Chapter XI When Design Drowns Out Message

62 Chapter XII It’s the Little Things


UX IN HIGHER EDUCATION

65 Chapter XIII Higher Ed and UX Guiseppe Getto & Karan Saggi

APPENDICES

74 UX Wars Spotify vs. Tidal

85 UX Wars Apple Music

96 UX Wars OkCupid vs. Match.com

107 UX Wars Priceline vs. Expedia


Introduction
“This sucks!”

If you do a lot of usability testing, you’ve probably heard that a lot. Every
time users stumble over a poor UI, wherever there’s a gap between how the
product is and how it should be, the user experience suffers; conversions
drop, user satisfaction decreases -- and for product people, that sucks.

When you’re working on a product, the best thing you can do for your
designs is to get into your user’s head. What do they want from the product?
What do they need from it? What will they expect from it, and how will they
use it?

It is tempting to think that we know the answers to these questions, but
more often than not our preconceptions, attachment to our creations, and
familiarity with the product preclude any kind of objective understanding of
what users really think. We might think we’ve designed the perfect product,
when in reality our users are saying, “This sucks!”

That’s what usability testing is for.

By going straight to the source, we eliminate the guessing and put the
experience of real users at the center of the decision-making process. But
getting high-quality feedback is a skill in and of itself; knowing what to do
and what not to do in running a usability study can make all the difference in
the relevance, accuracy, and usefulness of your research.

Perhaps the central balance that one must strike in conducting UX research
is between asking and observing. Quiet, unobtrusive observation is crucial to
collecting feedback that reflects a genuine user experience, but on the other
hand to really get at the root of a user’s thought process can take some
pushing and prodding through restrained, carefully worded questions. The
trick is knowing when (and how) to push, and when to let the user journey
follow its natural flow.

Often it’s the unprompted moments that can say the most in usability
testing: just the word “oh” can tell so much, for example, depending on the

user’s timing, tone, and inflection. Noticing the significance and implications
of these moments is almost as important as making sure that they can
organically arise in the first place. This requires removing ourselves from the
equation as much as possible, minimizing our own voices and just watching
and listening.

Asking questions, especially the wrong questions, can produce misleading or


incomplete information, which in the end is counterproductive to the design
process. Asking the right thing, though, can complement and enhance our
understanding of not just the issues in our designs but also their relative
importance to those who must deal with them — and as a wrap-up, direct
questioning is a valuable gauge of what the user took away from their
experience and what stands out about it in their memory.

The next three sections of this book will address the place of asking and
observing in UX — what we can accomplish with each of them, and how to
employ these different methods most effectively in our research. Then we
will take a look at how to use the insights we collect to be user-minded in
our designs and to make UX research a continuing part of the design process.

It’s easy to be persuaded into thinking we know what we’re doing; usability
testing is for proving us wrong. And through this process of being proved
wrong, we learn how to make our products that much better.

I Farewell to Focus Groups
The future of user research

A number of years ago, British Airways needed to find out what customers
wanted. They were adding mini refrigerators to first-class seating sections so
passengers could help themselves to a snack during long overnight flights,
and needed to find out exactly what kind of snacks their passengers would
be interested in. So they put together a few focus groups.

Fruit, said the focus groups. Salads.

And so the mini fridges were filled with healthy fruits and salads. But one
longtime flight attendant objected – after years spent waiting on airline pas-
sengers and observing what they wanted in practice, she insisted that a few
chocolates and cakes be stocked too.

At the end of that flight, the chocolates and cakes were gone, and fruits and
salads still filled the fridges.
Why are focus groups so prone to serious error? Because they rely on asking
rather than observing, and that extra layer adds all sorts of complications:

1. People are very good at answering questions, even when they don’t
know the answer.
This has been shown time and again - for example, in Jimmy Kimmel’s Lie
Witness News segments, in which interviewees off the street are presen-
ted with a piece of false news and asked for their response. Unwilling to
reveal that they don’t know, people concoct blatant lies to tell the inter-
viewer. One man, when asked what he thought of Landon Donovan’s
play in the 2014 World Cup (Donovan was cut from the team before the
tournament), said it was “definitely pretty good; he took one to the...
nose, was it? and he kept playing.”

2. People don’t know why they like the things they like.
Often, our likes and preferences have reasons that we don’t really under-
stand. We know we like something, but we can’t really say why. But once
again, when pressed for explanations, people would rather make one up
than admit that they don’t know - after all, aren’t we supposed to know
why we like the things we like?



3. People aren’t in touch with their subconscious.
This accounts for a good chunk of the reason we don’t know why we like
things, and also why we’re bad at predicting our own behavior in hypo-
thetical situations. Much of this information lies within our subconscious,
influencing our decisions in the moment without us realizing it. We just
don’t have access to it. It’s easy to think you’d want salads in the fridge
when you’re not half-awake thousands of feet off the ground in the
middle of the night.

4. People want to make themselves look good.


We aren’t likely to give an embarrassing answer, or one that paints us in
a bad light or makes us look foolish, when surrounded by other people.
We generally have a pretty good handle on the kind of answers that
others will respond positively to, or which ones they won’t like, and then
make sure to pick the most popular or inoffensive answers. Self-
deception plays a big part, too: people are often liable to talk about
themselves in ways that they wish were true, rather than what is really
true, painting their personality and character in an idealized manner.

5. People want to please the moderator.

Besides their peers, focus group participants also want to make their
moderator happy. This means guessing at the answers he or she is look-



ing for, and returning those, instead of more honest replies. We don’t
want to be the one giving the bad news, so we play to the hopes and
expectations of the person asking the questions. In addition, people may
sometimes believe that their own answer is wrong if the questioner, who
is assumed to know more about the product, seems to be expecting a
different response.

And then there are the problems with recall, the act of retrieving infor-
mation from the memory after the fact - including divided attention,
primacy and recency effects, time delay, context dependency, and more. All
of these issues come into play when that extra layer is added between the
focus group’s conductor and the participants.

Focus groups are expensive

Focus group costs:

 Participant recruiting & incentives
 Facility rental
 Participant transportation
 Refreshments & meals
 Moderator services
 Videotaping & note-taking

On average, these typically add up to at least $4,000, and frequently more.


Market Trends Research says at least two focus groups are necessary to get
useful results. So if you ran focus groups on just one topic per quarter,

that’s a low-end cost of $32,000 per year.
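For the record, the arithmetic behind that figure is simple enough to lay out as a quick sketch (using the low-end per-group estimate quoted above):

    cost_per_group = 4_000    # low-end cost of a single focus group, per the estimate above
    groups_per_topic = 2      # the minimum Market Trends Research recommends
    topics_per_year = 4       # one topic per quarter

    annual_cost = cost_per_group * groups_per_topic * topics_per_year
    print(annual_cost)        # 32000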

Beyond the price tag, there are the costs of time and effort - personnel must
commit a good deal of energy to planning and developing the focus group,
and then observing it while it takes place. All this for 2 hours’ worth of
contorted, questionable feedback.

$32,000 is the cost to run an effective focus group program.

Now, say you wanted to hire the same number of people to get the same amount
of feedback, without the failings of the focus group format and the significant
investment of personnel time and energy. Using remote usability testing, you
could cut your budget by more than 90%, eliminate the hassle of recruiting,
transporting, and feeding participants, and save personnel time too. You could
even significantly increase your number of testers, or what you’re getting out
of them, or how frequently you test, while still spending just a fraction of
what you would be paying for focus groups.

90% of expenses are saved by using remote usability testing.



Getting closer to users

Focus groups create a superficial environment that returns superficial advice.


To really understand what users want, you have to strip away that extra
layer of asking that creates a barrier between you and the user and rely on
observation, unobtrusive and neutral. Get your participants out of the focus
group rooms, away from the one-way mirrors, where they can act, judge,
and respond independently of others’ influence. Freeing your participants
from their social and psychological traps is the best thing you could do for
your feedback.

II Occurrence, Not Recall
Peering into users’ heads

Recall: the retrieval of information stored in the memory.

We all rely on it every day – Where did I leave my phone charger? Which gas
station has the lowest prices? What filename did I save that document
under?

In usability research, we rely on recall to get feedback from everyday users,


testers, and focus group subjects with surveys, direct questioning, and other
methods – What did you think of the registration process? Did you have any
trouble finding the contact information? How does the site make you feel?

Problems with recall

A number of psychological factors exert a major influence on recall accuracy,


including:

 Attention
Divided attention has been shown to seriously hamper the memory-
encoding process that allows recall later on
 Motivation
The greater the incentive for accuracy, the more reliable respondents’
recollections are likely to be
 Primacy & Recency effects
People tend to be better at remembering the first and last elements of a
series than the middle elements
 Interference
A delay between the encoding of a memory and the subsequent
remembering, especially if filled with a separate task, impairs recall
 Context & state dependency
Items are recalled more reliably in the same environment or mental
state in which they were initially encoded, and less in different ones

On top of all this, your respondents’ answers can be subject to various other
manipulations depending on the format, including social pressures coming
from fellow testers leading to conformity, bandwagoning, or lying to hide
what could be perceived as incompetence; inadvertent pressure from a test
administrator to answer a certain way or confirm a given expectation;

question-framing issues that influence tester responses (think ‘leading the
witness’); and more.

The human mind is capable of endless


shape-shifting, over-imagination, and
self-deception – more than enough
reasons to think twice about your
focus group results.

So if waiting till the end to ask your questions is such a feedback faux pas,
what’s the solution? Problems like division of attention or context depen-
dency might be diminished by confining testers to a controlled environment,
but at the cost of losing a genuine, true-to-life look at the user experience.
Primacy, recency, and interference could be combatted by asking testers
questions at the end of each individual task, but at the cost of obstructing
the natural flow of their journey through your website.

There is simply no way to eliminate all the distortions at once from a


usability study that follows the format:

1. Have testers use the website

2. Ask questions later



Replacing recall with occurrence

The only way to close the gap between what users do on your site and what
they remember doing on your site, or what they say they’ve done on your
site, is to look at their occurrent thoughts - that is, the thoughts that pass
through their mind in the exact moment of those actions.

No recall is required - no flawed mental filters, no forgetting of middle


elements or transitioning between mental states - just verbalization of the
thought process as it happens. It’s something that comes naturally to us:
most people talk to themselves, especially while alone. Remote usability
testing allows you to tap into that natural instinct, listening to testers’
thoughts in real-time and getting the full, accurate, unadulterated picture.

Effectively, you’re looking into your user’s mind, and understanding


what they do and why at the most direct level. The pitfalls of relying on
recall are avoided, and the problems that crop up when testers are
influenced by judgmental peers, by hovering researchers, and by the
wording of questions never arise. They have simply to open their
mouths and let their minds flow out.



III Qualitative & Quantitative
Research through dual lenses

In essence, a set of binoculars is no more than a pair of telescopes
side-by-side. But in terms of utility, the difference is much more: 5 times
the field of vision, much-enhanced depth perception, and a testament to the
usefulness of looking at things through two different lenses.

In the same way, zooming in on usability with the dual lenses of qualitative
and quantitative feedback returns a much broader, more solidly context-
ualized picture than either would alone.

Stories from qualitative

At the very core of user experience is the subjective and emotional response
of the individual user – in short, the way a website makes visitors feel. These
feelings run the gamut from delighted, impressed, or hooked to confused,
frustrated, and angry. All these welling emotions, and the ones in between,
have one thing in common: they won’t show up in the numbers.

Your data may tell you which pages people visited, and how long they
stayed, and where they came from and where they went to, but the story
itself is missing – the feelings aren’t there. Did that user stay on your site
because his interest was captured by a great piece of content, or because he
was fruitlessly searching for an "About" section? When that visitor clicked to
a new page, did they move a step closer to their objective or had they mere-
ly mistaken the page for something else? And if so, what were the cues that
led them to that wrong turn?



Listening to a user narrate their journey, hearing their reactions as they
navigate the ins and outs of a site, fills in those blanks. The ups and downs,
the irritations and confusions, the aha moments, the satisfaction of a task
completed, all come together to tell a story, and
the best and worst things about your website
stand out like warm bodies through infrared
goggles. Qualitative feedback tells the “why” at a
level that doesn’t otherwise come out.

Context from quantitative

The second lens on a set of binoculars lets the viewer gauge depth by taking
advantage of parallax, allowing the mind to compare and reconcile two
overlapping but distinct images to understand the object at hand in 3-D.

Similarly, quantitative feedback allows you to understand your site’s


usability in the context of its virtual surroundings. Unlike qualitative
information, it can be used to make easy, reliable comparisons – usability
tools like the SUS (System Usability Scale) and the SEQ (Single Ease Question)
that measure and quantify usability can map the individual’s user expe-
rience, chart usability increases and decreases over time, and show how
your website performs compared to others.



This last function is perhaps the most important, because your site does not
operate in a vacuum; it exists within a diverse online world that offers stiff
competition and supplies potential visitors with endless expectations, habits,
and pre-suppositions. Seeing where you rank in that world breaks open a
whole new level of self-awareness and points you in the right direction as
you make UX improvements.

Putting qualitative and quantitative together

Delving into the individual stories of users enormously increases the value
of your research. Your users aren’t just data points; each one is the author
of a unique journey through your site, and exploring those journeys broa-
dens your view by many multiples. Placing your usability into context with
empirically robust quantification and comparison tools deepens your
understanding. Together, they render a much-enhanced, complete picture
in all its detail.



IV Designing an Effective
Usability Test
Written by Jeff Sauro

Designing a user test is a bit like designing a website: inevitably, users find
every way you never thought of to misinterpret and misuse what’s put
before them. Most of the time, though, that isn’t their fault – in fact, it’s

probably yours.

Test designers frequently make oversights and errors that cripple the ability
of their tests to gather useful feedback and show the interactions they're
looking for. A poorly designed usability test will seriously impact the results,
including (1) users’ ability to correctly carry out the tasks in your user flow,
and (2) the amount of insight you get into real usability issues with your

application. When your testers don’t understand the tasks, aren’t prepared
to approach them properly, or misinterpret the instructions, the returns on
your research aren’t being optimized and you may well not learn what you
hoped to about your interface.

Here are 6 guidelines for designing a test that will avoid common errors and
maximize the information and insight you get out of your usability research.

1 Create an engaging, immersive scenario


To get the best feedback from your user tests, the tester should be
immersed as much as possible in the mindset of someone using your
product in a real-life context. To achieve this, the scenario you design for
your testers should be detailed and realistic. Write a scenario that sounds
like a story, not a set of instructions.

A poor scenario, for example might sound like this: “You need to buy
renter’s insurance and want to explore your options.” A great scenario
would be, “Your friend just paid thousands of dollars to repair damage
caused when a guest accidentally caused a kitchen fire. You have guests
over often and want to be covered in case something like this happens.”



This allows the tester to dive into the experience and explore your product
with the perspective of a real-life user. If you’ve done your demographic
selection well, a detailed scenario like this one may be pretty identifiable or
at least imaginable for your testers.

2 Test user impressions


Your landing page is a billboard for your brand. A good way to measure
whether that billboard is projecting the right message is an impression test.
The way we do this is to show testers the landing page for up to 15 seconds
(the average amount of time visitors spend on a webpage) and then ask
users to talk about words they remember from the site, its general feel,
products or services offered, and overall impressions. It’s a great way to
gauge whether visitors are receiving the message you’re trying to send, and
understand the visual and verbal cues that shaped, guided, or obstructed
their understanding of what the site is about.

Making the right impression on new visitors can be the difference between
losing or keeping them. Including an impression test at the beginning of your
user test also helps to orient the user and gives them the chance to under-
stand what the site is for before jumping into the tasks.



3 Design a journey, not a to-do list
Remember that although your tester will be instructed through your app-
lication by clearly delineated tasks, it should mimic a real-life user journey
as much as possible. Think about how someone visiting the site might
progress through different pages and phases. You probably have a pretty
good idea of how visitors move through your site; use that knowledge to
lay out a smooth and natural journey that a real user might actually take.

A task doesn’t always have to involve a concrete action like signing up or


making a purchase. Sometimes asking the user to browse around or explore
for a bit, whether in a specific section of the website or across the whole
thing, makes sense. And leaving them to navigate and make decisions for
themselves can tell you a lot about the way people process what they see
and how they’re approaching the content of your site.

4 Don’t make the tasks dependent


It is important to alternate the presentation order of tasks, as there is a sig-
nificant learning effect that happens. If your tasks have dependencies (e.g.,
create a file in one task, then delete the same file in another task), then if a
user fails one task they will often necessarily fail the other. Do your best to
avoid dependencies (e.g., have the user delete a different file). This isn’t always
possible if you’re testing an installation process, but be cognizant of both the
bias and complications introduced by adding dependencies.

5 Don’t lead the witness


Your word choice when writing tasks should avoid keywords used on your
application – tell testers the end goal to be achieved, not the action to take.
For example: “Save a product you like so you can come back to it later”
instead of “Add an item you like to your wishlist.” Not only will this show you
how easily the user identifies and locates the way to complete the task, it
might reveal a totally different way they may think about achieving that goal
– maybe their first instinct is to bookmark it on their browser.

The bottom line is, if the words you use are the same as the words that
appear on your site, the interactions you see in your results won’t be genu-
ine; you’re effectively giving the tester the ‘answer.’ Refrain from leading the
witness, and you’ll learn a lot more.



6 Write for first-time visitors

Being so intimately familiar with your own product, it can be easy to


accidentally talk about it in a way that doesn’t make sense to new users.
Remember that your testers probably know nothing about your product and
how it works. Steer clear of industry terms or brand-speak; these are likely
to just confuse your testers, and can sometimes result in a tester mistakenly
believing he has completed a task and moving on without actually doing it at
all. When you write your tasks, use simple and generic wording and think
hard about what will and will not make sense to first-time visitors.

There are many other ways to optimize your research. But with these 6
guidelines, you will be able to avoid the main potholes and get valuable,
relevant usability feedback.

About the Author

Jeff Sauro is the founding principal of MeasuringU, a


company providing statistics and usability consulting to
Fortune 1000 companies. He is a pioneer in quantifying the
user experience, and the author of over 15 peer-reviewed
research articles and 5 books on statistics and the user
experience. Jeff has worked for Oracle, PeopleSoft, Intuit, and
General Electric.



V Running a Comparative
Usability Study

No website exists in a vacuum, and seeing how yours compares to your


competitors' is critical for making important roadmap decisions.

Where does your competitor's website or app hold the edge? What are they
doing right that you can learn from? And where are the strong points in your
own design? Learning the answers to these questions will give you a strong
grounding for understanding where to take your product and how to market
it.

Here are the top 5 things we tell customers who are looking to run a
comparative usability study.

1 Keep tasks the same

One of the key tricks is to keep your tasks as similar as possible so that the
results are directly comparable. If you can, choose a scenario and set of tasks
that are exactly the same, including order and word choice. This may require
you to get creative - make sure to frame your tasks in a way that is equally
applicable to both sites, and use words that aren't found on either so you
don't give an advantage to one or the other.

However, two sites, even competitors, are rarely identical, and it's likely you'll
have to make some accommodations. Sometimes similar sites will be de-
signed with the same functions in a different order, or with one or two
central functions that are sharply different.

The key is to design a genuine, true-to-life user journey for each site that will
return relevant insights, while also ensuring that your test designs are close
enough to allow side-by-side comparison.



2 Different testers for different sites

Typically, you'll want to use different testers for each site, rather than
having the same people test both. Since both sites are offering a com-
peting product, service, or experience, the design and structure of the
first site will invariably affect testers' perceptions and expectations of the
second. People create schemas for how different functions should look,
feel, and work, and once they have seen one site's version, they are more
likely to have trouble with versions that differ.

This is the same reason we recommend using different testers for longit-
udinal research on a single site – once people have learned a system, it
colors their subsequent experiences, and their test results will not reflect
a typical user's journey.

3 Recognize the important issues

Not all usability problems are created equal. Understanding the weight of
various issues is important to seeing how two sites really compare. When
thinking about what matters most, these are some questions to consider:



 How serious is the problem itself? Does it completely obstruct the
function at hand, or only slow it down?

 How crucial is the affected function? Is it auxiliary to the user
experience, or fundamental?

 How did users respond to the problem? Were they annoyed, frustrated?
Who did they blame for the problem?

The more successful site is not the one with the smallest tally of issues, but
the one that better enables users to achieve their end goals. For example, if
your website centers around a search function and the search is unusable,
no amount of UX brownie points from the menu layout or the locator can
make up for it.

4 Make quantitative comparisons

Measuring the user experience in quantifiable terms is a great way to take


an objective look at comparative usability.

The bulk of your insights will come from watching users struggle with
usability issues firsthand, but looking at the results through a quantitative
lens is very important for grounding and contextualizing, as well as elimi-
nating personal biases.

You may think you're objective, but it's easy to subconsciously minimize the
issues your own site has while focusing heavily on the problems of your
competitor's. Widely-used quantification scales like SUS and the SEQ are
great tools for taking a more clear-eyed look at the results, and also allow
direct side-by-side comparison between system and system, task and task.
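As an illustration of that kind of side-by-side comparison, here is a minimal sketch in Python; the scores are invented and the choice of a two-sample t-test is our own addition, not something this chapter prescribes:

    from statistics import mean, stdev

    # Hypothetical SUS scores (0-100) from separate tester pools on each site
    our_site   = [72.5, 65.0, 80.0, 77.5, 70.0, 62.5, 75.0]
    competitor = [60.0, 55.0, 67.5, 72.5, 57.5, 65.0, 50.0]

    print(f"Our site:   mean {mean(our_site):.1f}, sd {stdev(our_site):.1f}, n={len(our_site)}")
    print(f"Competitor: mean {mean(competitor):.1f}, sd {stdev(competitor):.1f}, n={len(competitor)}")

    # With SciPy available, a two-sample t-test gives a rough sense of whether
    # the gap is bigger than you would expect from noise alone:
    # from scipy.stats import ttest_ind
    # t, p = ttest_ind(our_site, competitor, equal_var=False)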

5 Go below the surface

Not every usability problem will look like a problem. Some issues are subtle
enough that the user doesn't notice that their experience has suffered. This
may occur when users shoulder the blame for mistakes themselves, and
therefore don't say anything about it. Other times it's not that there's a
problem, but rather that there's simply room for improvement – an obser-
vation that's much easier to detect in a comparative usability study. Keep an
eye out for spots where the user experience may be just alright. Turning
those moments into stellar experiences is a key to creating a successful
website that people will want to return to.



There's always something your
competitor is doing right that
you can learn from; insights
such as these are some of the
most valuable takeaways from
any comparative usability test.



VI Experts vs. the Crowd
The wisdom of crowdsourcing

Experts or the Crowd?


It’s a debate that vexes numerous and diverse areas of thought, from
sociology and psychology to government (authoritarianism or democracy?),
economics (central planning or free markets?), information dissemination
(Encyclopedia Britannica or Wikipedia?) and more.

So when it comes to UX, who can tell you more – the experts, or the crowd?
The answer may not be as clear-cut as you think.

The wisdom of crowds

In 2004, James Surowiecki gave a name to the truth and accuracy of the
aggregated many: “the wisdom of crowds.” It’s the idea, basically, that the
collected knowledge of a large number of people tends to be remarkably
correct.

The apple of this particular strain of thought fell on the head of a British
scientist named Francis Galton, a perfectly stuffy elitist certain that proper
breeding and the concentration of power in the hands of a suitable few were
the key to a successful society. Observing a contest to guess the weight of a
well-fattened ox at a country fair, Galton was inspired to run statistical
tests on the participants’ responses, and discovered, to his surprise, that
the average of all 787 responses deviated from the ox’s true weight by a
single pound.

“Galton was interested in figuring out what the ‘average voter’ was capable of
because he wanted to prove that the average voter was capable of very little.
So he turned the competition into an impromptu experiment… He added all the
contestants’ estimates, and calculated the mean of the group’s guesses. That
number represented, you could say, the collective wisdom of the Plymouth
crowd. If the crowd were a single person, that was how much it would have
guessed the ox weighed. Galton undoubtedly thought that the average guess of
the group would be way off the mark. After all, mix a few very smart people
with some mediocre people and a lot of dumb people, and it seems like you’d
end up with a dumb answer. But Galton was wrong.”

-- James Surowiecki (From The Wisdom of Crowds, p. xii-xiii)



787 contestants’ average estimate
was only 1 pound off from the right answer

The wisdom of crowds lies in the great diversity of independent opinion: as


overestimation, underestimation, opposition, endorsement, half-truths, and
whole truths are averaged together, the voice of the crowd converges on
correctness. The key, however, is the independence of each individual’s
decision-making. In Galton’s statistical test, each entrant was individually
responsible for his/her own estimate; they may have discussed the ox’s
weight with others before submitting their answer, but in the end their
judgment was entirely in their own hands.

It is under circumstances like these that the wisdom of crowds can reach its
full potential: untainted by the shadows of peer pressure, conformism, and
bandwagoning, independently-generated thoughts and ideas contribute to
the diversity and wholeness of the collective voice, and counteracting forces
(think underestimation and overestimation, or opposition and support) are
allowed to balance the sum and bring the crowd to the best judgment.
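A toy simulation makes this balancing effect easy to see; the numbers below are invented for the sketch rather than taken from Galton’s records:

    import random
    from statistics import mean

    random.seed(1)
    true_weight = 1198            # a stand-in "true" ox weight, in pounds

    # 787 independent guesses: each one is noisy, but the errors are unbiased
    guesses = [true_weight + random.gauss(0, 75) for _ in range(787)]

    typical_individual_error = mean(abs(g - true_weight) for g in guesses)
    crowd_error = abs(mean(guesses) - true_weight)

    print(f"Typical individual error: ~{typical_individual_error:.0f} lbs")
    print(f"Error of the crowd's average: {crowd_error:.1f} lbs")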

This is why the wisdom of crowds works as an argument for democracy:



though governance and decision-making are discussed at a large scale, when
it comes time to vote there is only the lone citizen and the ballot, together
in the voting booth. Each citizen is free to make whatever decision he feels is
best, and together all of these independent judgments choose presidents
and congressmen and mayors and councilmen.

Applications to UX

Remote usability testing is not so different from a guess-the-weight-of-the-ox
contest. Sure, the participants aren’t actually competing, but each of
them, with their varying knowledge, experience, and skill levels, contributes
a new point of view that leads us closer to an accurate and precise eval-
uation of the subject at hand.

“ Why are experts not that smart? Because experts tend to be and
think alike, and thus do not reflect maximum diversity of opinions.

-- Aldo Matteucci

But are they better than experts? At some things, they certainly can be
(after all, none of the cattle experts guessed within a pound of the prize ox’s
true weight).



That’s not to say that experts don’t have anything to offer; on the contrary,
the wealth of deeper understanding, analytical thinking, and problem reso-
lution skills that a UX expert brings to the table are great tools.

But they, too, are human, subject to their own personal biases and the biases
of their field, caught in the bubble of their own minds. No individual, no
matter their expertise, can compete with the crowd for completeness and
all-encompassingness. There are too many angles for one person’s opinion to be
100% accurate; aggregation will always be able to achieve a more perfect
picture.

How, then, can we maximize what we learn from the crowd?



Putting the “wisdom of crowds” into action

What does crowdsourcing usability look like? The UXCrowd is one way – a
tool for identifying and prioritizing usability stress points by harnessing the
wisdom of crowds.

Step 1
After completing a task-based user test, the first 5 testers submit 2 things
they liked, 2 things they didn’t like, and 2 ideas for improving the site.
Each tester’s submissions are made before they have seen anyone else’s input.
Thus, each idea is the result of independent thinking and insight.

Step 2
After all the answers from the first 5 testers are gathered, the rest of the
testers assign weighted votes to the ones they most agree with. This allows
the best ideas to bubble up to the top. To prevent bandwagoning, the testers
can’t see the standing vote count of the submissions as they vote on them.
This allows testers to approach each idea with an open mind.

By preserving the freedom of each tester to think independently and draw


his or her own conclusions, the UXCrowd draws on the wisdom of the crowd
without allowing room for the vices of group thinking to infringe on the final
product.
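The exact weighting scheme isn’t spelled out above, so purely as an illustration, here is one way the vote-and-rank step could be tallied; the idea text, the 3/2/1 point split, and the ballots are all hypothetical:

    from collections import defaultdict

    # Each later tester distributes weighted votes across the submitted ideas,
    # without ever seeing the running totals, so every ballot stays independent
    ballots = [
        {"Checkout has too many steps": 3, "Add a progress bar to signup": 2, "Search is easy to find": 1},
        {"Add a progress bar to signup": 3, "Checkout has too many steps": 2, "Search is easy to find": 1},
        {"Checkout has too many steps": 3, "Search is easy to find": 2},
    ]

    totals = defaultdict(int)
    for ballot in ballots:
        for idea, weight in ballot.items():
            totals[idea] += weight

    # The highest-weighted submissions "bubble up to the top"
    for idea, score in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:>2}  {idea}")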



VII Standardizing UX Feedback
The System Usability Scale

Usability testing fills a critical need in understanding users’ responses to


your website’s UX, but the inherent subjectiveness of the video/audio
feedback format means that it can be difficult to distill meaning from a large
number of responses. Your testers are writing you a novel with their words –

the plot is no secret, but what’s their great underlying theme?

Fortunately, a tool already exists to consolidate and synthesize individual


testers’ responses into a coherent description of a site’s strengths and
inadequacies – one that is widely respected and is something of an industry
standard in usability research. The System Usability Scale (SUS) has long
been a favorite for its simplicity and accuracy:

ten questions, a five-point “strongly agree” to “strongly disagree” response
system, and a quick scoring algorithm yield an extremely reliable usability
score on a scale of 0 to 100.

The ten items of the SUS questionnaire:

1. I think that I would like to use this system frequently.


2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person
to be able to use this system.
5. I found the various functions in this system were well
integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this
system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going
with this system.
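The scoring itself fits in a few lines. The standard SUS calculation (summarized here from John Brooke’s published scoring method, not quoted from this chapter) subtracts 1 from each odd-numbered response, subtracts each even-numbered response from 5, and multiplies the sum by 2.5:

    def sus_score(responses):
        """Score one completed SUS questionnaire.

        responses: ten answers on the 1-5 scale (1 = strongly disagree,
        5 = strongly agree), in the order of the items listed above.
        """
        if len(responses) != 10:
            raise ValueError("SUS needs exactly ten responses")
        total = 0
        for item_number, answer in enumerate(responses, start=1):
            if item_number % 2 == 1:      # odd, positively worded items
                total += answer - 1
            else:                         # even, negatively worded items
                total += 5 - answer
        return total * 2.5                # raw 0-40 sum scaled to 0-100

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))   # prints 82.5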

With thousands of previously documented uses to compare to, SUS gives
you a solid indication of users’ overall satisfaction with your website, and
can even be broken down into usability and learnability dimensions. The
percentile ranking contextualizes your raw score, allowing you to under-
stand how your site performs relative to others; and some researchers have
tried, with some success, to map adjectives like “excellent,” “poor,” or
“worst imaginable” to individual scores for extra insight.

By various accounts, the mean SUS score hovers around 68-70.5 (a score
that roughly corresponds, as it happens, to the adjective “good,” though
falling rather short of “excellent”). Normalizing the score distribution with per-
centiles therefore places a 68 (or a 70.5) at the 50th percentile – better than
half of all other systems tested, and worse than the other half.

Quartile breakdown of SUS scores and suggested adjective rankings from the Journal of
Usability Studies

Though described by its inventor as a “quick and dirty” measure, studies


have found SUS to be among the most accurate and reliable of all usability
surveys, across sample sizes. It has today become one of the most successful
metrics for quantifying system satisfaction, with thousands using it to gauge
user-friendliness over a wide range of products online and off.

It is these qualities that make SUS the key to normalizing a diverse array of
tester feedback, and aggregating responses into a meaningful but concise
picture of your UX. If individual test videos are the trees, SUS shows you not
only the forest, but the entire ecosystem into which your system fits; it
describes your site not as a standalone entity, but as a part of the wider web
world that surrounds and thus helps to define it. With a widely-trusted
industry standard to rely on, you can take a step back from your own com-
pany and see how you stack up in the bigger picture.

VIII Measuring Task Usability
The Single-Ease Question

The idea behind task-based usability testing is that any application's user
experience is composed of various steps along the user’s journey, each of
which must be optimized for simplicity and ease of use to guide the user to
his or her end goal.

Each task, then, while contributing to a cohesive whole, is also a unique


opportunity to create an intuitive and seamless interaction for the user. It is
in finding where we fail to do this that we are able to improve our websites
and applications.

So if tasks are the building blocks of usability testing, is there a way
to think quantitatively about the individual usability of the tasks
we ask our testers to complete?

Qualitative feedback identifying problem spots is an invaluable (and the


primary) return of user testing, but it does not allow us to compare general
usability across tasks and see the relative weight users assign to the
problems (or ease of use) they faced in each separate task.

SUS, the System Usability Scale, provides quantitative UX feedback on


overall system satisfaction and usability. Although it’s only a short 10-item
questionnaire, SUS could still quickly become burdensome for testers when
applied repeatedly after each task.

Measurable metrics like number of


clicks or time taken per task are useful
in getting a handle on the effectiveness
and simplicity of task designs, and are
built into the bones of usability testing
anyway, but are not comprehensive;
they are more suited for extrapolation
and setting targets to hit.



The Single Ease Question

A more broadly focused method, which does not pile significant amounts of
time, effort, or complexity onto the tester, is the Single Ease Question, or
SEQ.

Like SUS, the Single Ease Question uses a Likert Scale-style response system,
but the similarities stop there. As its name implies, SEQ is just one question:

“How difficult or easy did you find the task?”

And the response scale has 7 points, not 5.

This adds room for more nuances and a greater diversity of responses, while
still preserving the one-question-only simplicity of the SEQ.

The Single Ease Question has been found to be just as effective a measure as
other, longer task usability scales, and also correlates (though not especially
strongly) with metrics like time taken and completion rate.
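As a sketch of how SEQ responses might be rolled up per task (the task names and scores below are hypothetical):

    from statistics import mean

    # 7-point SEQ answers (1 = very difficult, 7 = very easy), grouped by task
    seq_responses = {
        "Sign up for an account":       [6, 7, 5, 6, 7],
        "Find the contact information": [3, 4, 2, 5, 3],
    }

    for task, scores in seq_responses.items():
        # A low mean flags a task worth revisiting in the qualitative feedback
        print(f"{task}: mean SEQ {mean(scores):.1f} (n={len(scores)})")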

In addition to its usefulness as a quantification tool, the SEQ can provide


important diagnostic information with the inclusion of one more query:
“Why?” Though this does in fact double the length of this short question-



naire, the critical value it adds is in tying feedback to a causal relationship
with specific problems that you can then act on to improve your website.

With or without this addendum, the SEQ is a valuable tool helping web
developers understand the usability not only of the website as a whole, but
also of the individual steps on the user’s pathway through it.



IX Collaborative Analysis
Workflows within a UX team

How can you make usability research relevant to all of your stakeholders
without bogging the whole team down in a morass of data?

For the UXer, extracting insights from the data isn’t the end goal; the
findings, rather, are a tool for spurring necessary changes to the product.
Thus the challenge is to engage the whole team with those findings and
achieve alignment within the team so that the product can move forward.
This is a tricky proposition, though. Watching all of the video results is
neither feasible nor necessary for every team member; much as it may add
to the nuance of the analysis, the time investment will almost always be
insurmountable for most stakeholders.

It falls to the UX Researcher to watch all of the results and identify key
findings, judge what is important and what isn’t, and then persuasively
communicate these to decision-makers.

To achieve his or her goals, therefore, what the UX Researcher really


needs is a way to easily pinpoint and reference critical, demonstrative
moments in the results to back up their arguments and justify making
changes to the design.

The decision-maker’s objective, on the other hand, is to make an informed


decision about the direction of the product without wasting too much time
on the specifics.

What the decision-maker needs to achieve their goals is an efficient


means of knowing and understanding the issues, and seeing the
evidence needed to green-light changes.

The goal of the designer, for their part, is to see why particular design
elements work or do not work, and then use that information to create new
solutions based on real user behavior.

For the designer to achieve his or her goals, they need to be able to
directly access the results and witness users interacting with the product
at key junctures and hear their thoughts and reactions when they run
into walls.

Conceptualizing a UX research workflow

Given these specific goals and responsibilities, one can flesh out a picture
of a team UX workflow that engages every stakeholder with the data at the
appropriate level of involvement and achieves the goals of all involved, as
well as the team as a whole.

The researcher mines for insight and patterns, categorizes and synthesizes
large amounts of feedback, and makes note of key teachable moments.

The designer reviews the researcher’s work, observes the critical inter-
actions, and deepens the analysis of the results with their own unique
contributions and insights.

The product manager or other decision-maker weighs the recommenda-


tions of the researcher, substantiated by video highlights, user quotes, or
other data, as well as the input and suggestions of the designer, and decides
the course of the next sprint.

Tools for the team that collaborates

TryMyUI built a platform for teams to collaborate on their UX research.


There are several features crucial to an efficient, replicable workflow.

The first big piece of the puzzle for us is annotations: a way for each
account member with whom the video results are shared to add tagged,
timestamped notes the whole team can see and access. This is primarily
for researchers – the tags make it easy to identify and keep track of use
patterns and recurring issues, and the timestamps with one-click
playback are an easy way to share key moments and highlights that
make the case for design changes.

If there is more than one researcher, such a feature enables a more


efficient division of labor, allowing everyone to see at a glance the
primary takeaways of each video, whether or not they have watched it
in full.

For the designer, being able to see the researcher’s annotations makes it
simple to locate important teaching moments and then watch them firsthand
to more firmly grasp the issues. They can also add their own
insights, and with author-based filtering the whole team can read the
designer’s unique input and perspective on the data.
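To make the idea concrete, here is a rough sketch of the kind of record such an annotation feature might store and filter on; the field names are illustrative, not TryMyUI’s actual data model:

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        video_id: str      # which test video the note belongs to
        timestamp: float   # seconds into the recording, for one-click playback
        author: str        # the researcher or designer who wrote the note
        tag: str           # e.g. "navigation", "checkout", "terminology"
        note: str

    annotations = [
        Annotation("video-01", 134.0, "researcher", "navigation", "Tester never noticed the main menu"),
        Annotation("video-02",  47.5, "designer",   "navigation", "Hover state reads as disabled"),
    ]

    # Tag- and author-based filtering pulls out recurring issues or one person's input
    navigation_notes = [a for a in annotations if a.tag == "navigation"]
    designer_notes   = [a for a in annotations if a.author == "designer"]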

The other big piece is the UX Insights summary: a way to create a
compact list of top findings from the test data and share it with stake-
holders. With all the important points consolidated into one place, such a
feature cuts away the extra and delivers only what the decision-maker
needs to know to move forward. Because it is integrated with the rest of
the platform, though, the video evidence that substantiates each recom-
mendation is just one click away for any viewer.

With all these elements in place, implementing an efficient UX research


workflow that puts the team into alignment and turns the wheels towards
reaching product goals is a seamless exercise. And with everyone working
on the same platform, this highly replicable exercise can be an ongoing
process that continues to power your team across product sprints or
different projects.

X Iterative Testing

Written by Jennifer Romano Bergstrom

Iterative testing is a well-known technique that is advocated by many


usability practitioners. In iterative testing, a usability test is conducted with a
preliminary version of a product; changes are made based on findings from
that study; another round of testing occurs, perhaps with a slightly higher-
fidelity product; changes are again made based on results from testing, and
another round of testing occurs, and so on, until the usability goals are
achieved or until a critical development deadline is reached. Incorporating
testing from the earliest stage in the design process allows for the most
effective iterative testing.

If we assume that stakeholders want users to be successful in using their site,
iterative testing prior to launching a website should be effective in that
developers are able to make quick changes based on the users’ interactions
with the design and test the revised design using measures of success (e.g.,
efficiency and accuracy).

In actual experience, however, practitioners and project managers often


find that limited resources, such as time and money, management and
developer resistance, and meeting the logistical requirements of multiple
tests do not permit iterative testing and that the best they can do is conduct
one usability test. However, experience shows that conducting iterative
testing is worthwhile, and the benefits of iterative testing can be realized,
even when challenges arise.

8 tips for the iteration practitioner

1. Take the initiative to set an expectation from the beginning that there
will be iterative tests and that all of the key stakeholders will be part of
the process.

Set usability-related goals before the project begins, and agree to meet
regularly with the whole team to plan each iteration of tests and to dis-
cuss findings and recommendations for improving the site; being proactive

in this regard will help to ingrain usability testing into the process without
issue later on.

2. Acknowledge that schedule, cost, scale, and technical constraints of a


project will influence decisions to make enhancements that are beyond
the control of the design and development staff.

The need to work under pressure for quick turnaround can present logis-
tical challenges, such as obtaining results and producing preliminary re-
ports quickly; further challenges like tight budgets and technical limits on
what the developers can and cannot achieve may also interfere with an
iterative design process. Therefore, some of the results of usability test-
ing may have to be sacrificed or deferred to meet deadlines and manage
costs.

3. Start with paper prototypes, before any hard coding has been done, so
developers can get on board with the design-test-refine process.

Paper is a medium that is easy to manipulate and to change. When


creating a working relationship with developers on your team, it helps
when they have not yet created the back-end of the application (i.e., no-

thing has been hard coded, no application actually exists), which can oft-
en wed developers to the design.

4. Plan to include some tasks that can be repeated in all of the iterations
so that the team has a measure of progress as they proceed.

By repeating tasks across iterations, you can evaluate whether there


were continual improvements from iteration to iteration. With design
changes from one iteration to the next, it is possible to assess whether
participants were successful with the new design or whether the changes
and additional available functionality cause new problems.

5. Collect quantitative measures that can be repeated over the iterations.

In each iteration, evaluate the user interface of the new design by examining
participants’ success, satisfaction, and understanding of the site, as
measured by their performance on tasks, self-rated satisfaction, or other
quantifiable measures. (A brief sketch of this kind of tracking appears after
these tips.)

6. Encourage developers, project leads, and programmers to attend usa-


bility sessions as observers.

Watching participants interact with the design first-hand is a very


valuable experience for programmers and project managers. At the end
of each session, discuss the usability issues and possible fixes with all the
observers. By doing so, you can help get your teammates into the
mindset of anticipating modifications to their design when they are still
willing to make changes. This has a lasting impact throughout the entire
iterative cycle.

7. Have regular, ongoing discussions about the findings and recommenda-


tions with the design-and-development team.

The whole team should be involved in task development, test observa-


tion, and post-test discussion. By meeting regularly with the design and
development team members, you can collaboratively come up with ideas

for improvements and solutions to test. It is important that each team
values and respects what the others have to offer.

8. Give documented results to the development team as soon as possible,


within days of the last session, while the issues they observed are still
fresh in their minds.

Quick delivery of usability results allows your team to incorporate the


findings into their work without delay and keeps the product cycle mov-
ing; if your team has been involved in the testing and discussion process,
though, it is also possible for them to continue moving forward with the
testing insights in mind even before the delivery of these results.
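Here is the brief tracking sketch referenced in tip 5; the iteration labels and outcomes are hypothetical, but repeating the same task and the same measure across rounds makes progress (or regressions) visible at a glance:

    # Completion outcomes for the same repeated task, across test iterations
    iterations = {
        "Iteration 1 (paper prototype)":  [True, False, False, True, False],
        "Iteration 2 (clickable mockup)": [True, True, False, True, True],
        "Iteration 3 (coded beta)":       [True, True, True, True, True],
    }

    for label, outcomes in iterations.items():
        completion = 100 * sum(outcomes) / len(outcomes)
        print(f"{label}: {completion:.0f}% completion (n={len(outcomes)})")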

Although most design teams are accustomed to addressing usability at the


end of the product development cycle, usability is better addressed
throughout the development cycle, so that few surprises occur with the
final product.

By getting the product into the users’ hands and finding out what they need,
you can quickly identify usability issues and achieve an efficient develop-
test-change-retest process. Iterative testing is a useful and productive tactic,

and usability researchers on any digital product development team should
aim to include several iterations in their test plans.

About the Author

Jennifer Romano Bergstrom is a UX Researcher at Facebook,


where she works to understand the UX of Facebook in
emerging markets. She has 12 years of experience planning,
conducting, and managing user-centered research projects. In
addition to being a skilled UX researcher and practitioner,
Jennifer specializes in experimental design, eye tracking, and
quantitative analysis.

XI When Design Drowns Out the Message
Signal-to-Noise Ratio in design

In engineering, the signal-to-noise ratio refers to the relative strength of an


electrical signal compared to the strength of the background noise. Too
much noise, and the signal can become difficult or impossible to decipher.

Signal-to-noise ratio is an important consideration in design, too. Here's a


simple example.

This lunch menu shows 8 combination choices available for the same price. The “signal” is the information differentiating each of the combinations – specifically, the unique dish offered alongside the Pad Thai. If the whole list was boiled down to just the signal, it would look something like:

Each combination includes a Pad Thai and an additional dish, as listed

A – Red Curry
B – Green Curry
C – Mussaman Curry…

What happens in the menu design above, though, is that noise crowds the list and makes it almost visually jarring. First of all, the word "Combination" is repeated 8 times, above every combination choice. This clutters the page and distracts from the useful information, all while adding nothing. It is a list of lunch combinations; labeling every entry "Combination" is entirely unnecessary.

Additionally, all of the text is bold, undoing any of the usefulness that bolded text offers: if everything is important, nothing is. No piece of content is given priority, and instead the entire list is just made visually louder.

The signal is left to jostle for the user's eye among a cacophony of extraneous content. The list is still readable, but not quickly or easily comprehensible. The reader must work harder and take longer to get to the "signal" that they really want. And each time they look away, looking back requires a moment of re-orientation and re-learning the list.

Signal-to-Noise Ratio in web design

If a simple lunch menu can suffer from poor SNR, imagine how much easier
it is to drown the signal on a complex, interactive webpage. The temptation
to add more noise is ever-present for the designer, and it's a hard line to
walk between aesthetic choices that support the design, and ones that
distract from it.

Here's another example of a disorienting list of options to pick from, taken from an apparel website. While the visual effect is similar to the Thai menu, this drop-down menu is different in that there actually aren't a lot of wasted words. Most every item contributes to the signal (though there does seem to be a lot of redundancy).



One "noisy" design choice does stand out, however: the use of all capital
letters for everything. Text in all capitals is known to be less readable than
lower-case text, because it eliminates the physical contours that help our
brains recognize and process words. Clearly this was a stylistic decision, but
on a list as encyclopedic as this one, it makes comprehension even slower.
Skimming for the right category would be much easier with lower-case
entries.

This is an aesthetic choice that ends up distracting from the signal, creating
counterproductive noise that works against the website's goals and de-
creases usability.

Design that undermines

Design decisions almost always affect usability, because they tend to either
support or distract from the core purpose of the site or feature. When they
don't contribute to the communication of the "signal," they can become
noise-generating distractions, and actually drown out the information neces-
sary for using a platform to its fullest.

So each time you make a design decision, especially for aesthetics, stop and
ask yourself: "Am I drowning my message, or supporting it?"



XII It’s the Little Things

It’s a saying we apply to a lot of things: It’s the little things in life. It’s the
little things in a relationship.

It’s also the little things in web design.

It’s surprising the reaction a simple interaction can cause. I was watching the
results of a test we had run on a survey-building site a few weeks ago when
the tester typed out a survey question along the lines of “Do you plan to
participate in any outdoor activities this month?” He had scarcely finished
typing the question when the four blank multiple-choice options below
auto-transformed into just two: “Yes” and “No”.

After the second or so that it took for this clever swap to register, he ex-
claimed, with pleasant surprise, “That was really really cool!” Then, after a
moment’s additional thought: “What if I want to say yes, no, or maybe?” He
added a third slot to fill with this last option, and immediately the site
changed it to say “Maybe.” UX folks like to bandy around the word “delight”
in talking about experience design, but I doubt many have ever heard a
grown man literally squeal in jubilant shock in response to a web interaction.
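
For what it’s worth, the mechanism behind a moment like that doesn’t have to be elaborate. The sketch below is purely our illustration, not the survey site’s actual code, of how a builder might guess sensible default answer choices from the wording of a question; the function name and heuristics are assumptions for the example.

```typescript
// Illustrative sketch: guess default answer options from the question text.
function suggestAnswerOptions(question: string, slots: number): string[] {
  const q = question.trim().toLowerCase();

  // Questions that open with an auxiliary verb usually expect a yes/no answer.
  const yesNoOpeners = ["do ", "does ", "did ", "are ", "is ", "was ", "will ", "would ", "have ", "can "];
  const looksYesNo = q.endsWith("?") && yesNoOpeners.some(opener => q.startsWith(opener));

  if (!looksYesNo) {
    // Otherwise leave the choices blank for the author to fill in.
    return Array.from({ length: slots }, () => "");
  }

  // Offer Yes/No by default, and grow gracefully if the author adds more slots.
  const defaults = ["Yes", "No", "Maybe", "Not sure"];
  return defaults.slice(0, Math.max(2, slots));
}

// The tester's question with two slots -> ["Yes", "No"]; adding a third slot -> ["Yes", "No", "Maybe"].
console.log(suggestAnswerOptions("Do you plan to participate in any outdoor activities this month?", 2));
console.log(suggestAnswerOptions("Do you plan to participate in any outdoor activities this month?", 3));
```

A tiny heuristic like this is cheap to build, but as the reaction above shows, it can carry an outsized share of the experience.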

The thing is, that specific interaction was not especially critical or central to
the user’s journey on that site. In fact, only a minority of the testers I
watched even encountered it. But the disproportionate positive reaction
this small sequence generated cast a glow over the rest of the experience,
and set that website apart as an innovative and canny provider in their field.
And I guarantee you that the users who experienced that interaction
remembered it.

Just as easily as the little things can delight, though, they can also infuriate.
Think of it like the disproportionate anger you feel when you stub your toe,
or your headphones fall out, or a video abruptly stops to buffer.

When these little things surface in web design, they’re often things that users want or expect on a site but aren’t there; I once saw a tester get unexpectedly frustrated when he was unable to find the social sharing links, which were hidden in a poorly-worded text link. Other times it’s things that are there that users don’t want. I’ve seen users get angry with sites that switch pages with the left and right arrow keys, because a single accidental tap of the keyboard causes an unwanted page to load. It’s little things like this that get users irrationally and unnecessarily annoyed.

So while you’re appreciating all the little things in life as the year draws to a
close, don’t forget that the little things matter in web design too. A bit of
attention to detail goes a long way, and so does neglecting it. Here’s to a
new UX era, and to keeping users surprised.



XIII UX Partnerships with Higher Education
Co-written by Guiseppe Getto and Karan Saggi

The field of UX is becoming an exciting, lucrative job market for young pro-
fessionals. A field dedicated to bringing engaging digital products and
services to life, UX centers on developing digital applications that serve the
needs of the user. Rather than design an application that may find a market,
or put months of development time into a new website that may meet cus-
tomer needs, companies ranging from startups to the Fortune 500 invest
millions of dollars into usability testing, prototyping, and interviews with
existing customers. And yet, a definite gap exists between the tools available
to students and the tools actually being used in fields like computer science, industrial design, and technical communication. Many of the skills that are key to UX, and the tools used by UX professionals, are out of reach for the average college classroom.

There is a need for partnerships between academics and UX professionals that stand to benefit both. Academics need access to the current generation of UX tools and networks of UX skills, and UX professionals need workers who come into their organizations with a working knowledge of the field. At the same time, such partnerships require a leap of faith by both parties. Academics are not precisely customers or clients. UX professionals are not precisely trained educators.


However, it is clear to students like Kristi Wiley, who is starting a Ph.D. in Rhetoric and Writing at Michigan State University, that the benefits of such partnerships outweigh the costs. This is what she had to say about learning UX in Guiseppe’s graduate UX class that she took at East Carolina University:

“Being put into a class that handed us a real life project and asked us to show results and findings allowed us to learn as we worked. This allowed us to learn in a way that challenged us to expand our skills. Throughout the class we were asked to think about information in a different way than we had traditionally been taught.”

Kristi’s understanding of UX tools and techniques is an outcome of a new generation of college curricula being created by initiatives like TryMyUI’s Educational Partnerships or EDU Program. The intent of this program is to foster partnerships between higher education and the UX industry. One challenge for higher education in engaging with industry professionals and using commercial tools is that they come with a price tag. We need partnerships that fit the budget of academic institutions and are beneficial to both the institution and the industry. To that end, TryMyUI provides its usability testing tools to college instructors and students at no cost. In return, college instructors can make the TryMyUI software available to students like Kristi, who learn a method of design that they can pass on to their own students.

Of course, not all students will go on to be college professors. Some, like Nick Hall, also a graduate of East Carolina University, will go on to be content strategists, information architects, UX strategists, and other members of the UX community. For students like these, academic-industry partnerships stand to help them connect their classroom experiences “with a form of high-level, strategic thinking that is valued in the workplace.” As Nick said about his experiences while learning UX:


“I’d say one of the biggest things that resonated with me early on was the
importance of the user, first, second, last, and always. There was and is
something very reassuring about the notion that if you want to improve
the user’s experience, you learn what they like, you understand how they
use something and how they want to use it. Basically, it’s all about them.


That’s something I think about whether I’m working on a webpage or just
editing content.”

For both students who want to become professors of UX-related fields, and
for those who want to become the next generation of UX designers,
academic-industry partnerships are win-win.

Why Experiential UX Learning Matters

Students like Kristi and Nick need more than a theoretical knowledge of UX. They need to be able to work on actual design projects. They need to leave the classroom with an understanding of UX principles that goes beyond what can be gained by reading the growing body of trade magazines, books, and blogs on design best practices (e.g. User Experience Magazine, UX Magazine, UX Matters, etc.). They need to go beyond academic research on design from storehouses such as the Institute of Electrical and Electronics Engineers’ Xplore Digital Library and the Association for Computing Machinery’s Digital Library.

While all these sources contain invaluable information for any UX class, stu-
dents need experience using the concepts they make available.



Timothy Rotolo, UX Architect at TryMyUI, points out that “there is a
disconnect between what’s taught and the reality of the work because this
field accelerates quickly. When we welcome summer interns, for instance,
we spend days training students about the latest tools that our clients use
and trends in the industry. With the EDU Program, we hope that all students,
not just our interns, have pre-existing knowledge of these tools. What we’re
trying to do is to help students practice research techniques that designers
and UX architects actually use on the job, so they’re entering the workforce
with a solid grounding in those methods and don’t feel adrift.”

Why UX Tools Are Important for College Classrooms

Higher education, especially in the U.S., is a powerhouse for social and commercial entrepreneurship. College students have the support and guidance of the higher education community that lowers the risks of startup failure during their degrees.



It is the ideal environment to prototype ideas and learn from practice.
Students who invest time in entrepreneurial initiatives often step out to
launch their own startups. MIT’s student-run entrepreneurship competition,
for example, has created more than 130 companies. These startups scale up
and go on to compete with large commercial brands, and they invest heavily
in usability testing. Through early access to usability testing tools, student
entrepreneurs can polish their products’ digital interfaces to better opera-
tionalize their ideas.

TryMyUI helps ensure that a digital interface does not fall into the Design Free Design Trap (Eggers & O’Leary, 2009), where rushing through the design phase risks a sub-par final product. Techniques for avoiding the Design Free Design Trap are better learned from practice than from theory. This is why experiential learning through tools such as TryMyUI usability tests is an important addition to a UX curriculum. As college graduates build the tech sector, they should take the words of Eggers & O’Leary to heart:


Don’t confuse good intentions with good design. No one cares about how


high-minded your design is if it doesn’t work in the real world.



Thinkers such as these advise tech dreamers and enthusiasts to “test and
retest your design through multiple small-scale trials with real world users.”
An educational partnership enables design students to do exactly this, in
addition to receiving support from the professional community.

Even for students who don’t wish to become entrepreneurs, UX partnerships are valuable. Sectors of the economy as diverse as healthcare, education, and government are realizing the power and necessity of producing digital experiences that are usable and engaging. Students who wish to go into these service-focused industries will benefit from an understanding of UX principles and will be a huge value-add to their future organizations, many of which will suffer from a lack of access to experienced UX professionals.

Programs like TryMyUI’s EDU Program provide opportunities that simply don’t exist in higher education. The opportunity to learn industry-ready skills, tools, and techniques is one of the prizes most sought after by millions of college students all over the world. This opportunity enriches student learning, provides a value-add to degree programs seeking to recruit new students, and helps the industry to engage and recruit the next generation of UX designers.



About the Authors

Guiseppe Getto is a college professor based in North Carolina who does freelance writing, UX consulting, digital marketing, and custom WordPress websites. He consults with a broad range of organizations who want to develop better customer experiences, better writing, better content, better SEO, better designs, and better reach for their target audience. Visit him online at guiseppegetto.com.

Karan Saggi serves as the Director of Educational Partnerships at TryMyUI, where he builds engagement between higher education and the UX industry. He writes about leadership, industrial-organizational psychology, and nonprofits. Karan holds a Bachelor’s in Economics and Leadership Studies from Claremont McKenna College.



Appendices

Appendix A: UX Wars (April, 2015)

Spotify vs. Tidal

If you pay even a sliver of attention to the entertainment world, you’ve probably heard of Tidal by now. Jay-Z and his famous co-owners (Beyonce, Kanye West, Rihanna, Usher, and more) have gone all-out promoting their new music streaming service, with TV ads, an over-the-top press conference, and a social media frenzy that has turned many celebrity accounts a bright turquoise.

Behind all the marketing buzz, though, Tidal is essentially a re-imagined Spotify, and it’s aiming to compete directly with the streaming giant.



Usability can be tricky for a small brand breaking into a market where
established players have set the rules: they must strike a fine balance
between putting their unique mark on the product, and following the same
old design conventions that users are already familiar with.

Has Tidal found that balance? Could the new platform’s designers have even
improved on the Spotify model to create an ultimately more usable product?
We ran a few tests with current Spotify users to find out…

Task 1: Browse the main page & talk about what interests you

Immediately we see the extent to which Tidal has replicated the Spotify UX
for its own design backbone. The familiar sidebar-menu-on-the-left,
content-panels-on-the-right layout quickly orients new users while
minimizing the pains of adjusting.

Like Spotify, the color scheme mostly sticks to dark grays and blacks that
put the content front and center; describing their first impressions, testers
noted the very dark background, calling it “chill” and “suitable for a lot of
different moods” – appropriate for a music platform.

Subtle differences in Tidal’s color scheme, however, communicate a slightly different message from Spotify; the contrasts are sharper, the images brighter, the lines cleaner. Combined with bold turquoise highlights, these choices signal a more refined, slick character that testers interpreted as “modern” and “professional” – an identity in line with the service’s pay-only model and high-quality branding claims.

The videos drew special attention from our testers, not only because of the visual prominence they receive but also because it’s a feature that Spotify doesn’t have. As tester Abraham from LA noted, the emphasis on video is a double-edged sword for Tidal, because while it showcases a unique feature, it also misleads visitors about Tidal’s primary identity – as a music streaming service, not a video hub.

The sidebar menu



Both platforms use a sidebar menu as the primary means of navigation, and
in some ways here Tidal’s design outperforms Spotify’s. The Spotify menu is
jammed full of too many options, some of them nebulous and relatively un-
useful. Meanwhile, the majority of users’ playlists, which constitute the
central part of the Spotify user experience, are pushed offscreen.

Tidal has kept it simple with their menu options – the sidebar is compact
and much less cluttered than Spotify’s. However, there are nota-
ble hierarchy issues that inject confusion into what could have been a
superior design: the set of menu options below My Music are of slightly
different coloration and size, and the reason is not clear.

The visual cues half suggest that they are subcategories of My Music, yet
there is a clear separation between the two. They are also indented at the
same level as the rest of the links; are they subcategories or not? Addi-
tionally, one is called Playlists, the same as another link further up on the
sidebar, and the difference between the two is entirely unclear.

All in all, Tidal does a relatively good job of presenting a main page that is
unique but still holds to the UX conventions users will recognize and under-
stand. The younger platform had an opportunity to one-up Spotify’s design
with a cleaner menu but squandered it with poorly communicated hierarchy.



Tidal isn’t necessarily worse than Spotify, but ties go to the established
player, never the upstart.

Spotify: 1, Tidal: 0.

Task 2: Find a favorite artist

Tidal runs into usability problems pretty quickly on this task, because half of
the testers never even noticed the search bar, tucked away in the top right
corner. One tried clicking the Artists link on the sidebar, which in fact is for
saved artists. Another assumed that the only way to get to an artist page
was through tracks shown in playlists.

Spotify’s search bar is more visible above the sidebar, where users are
likelier to see it, and it is also larger. Both platforms have issues with
their search results, though. Each gives results for several distinct categories,
including artists, albums, playlists, and tracks; the artist and album results
for both are filled with redundant or irrelevant entries, and album results
don’t filter out singles.

But Spotify at least makes these poor-quality results secondary, leaving the focus on song listings. Tidal uses up much more space to display these categories, which may have been a conscious decision since user activity tends to center on playlists over single tracks. If so, they should have committed fully to such a re-conceptualization, and put playlist results first; on Tidal these are more interesting and better-quality anyways.

Tidal’s song results are abysmal, failing to take relevance and popularity into
account. The first 14 tracks returned for keyword Beyonce, for example, are
songs literally entitled Beyonce, none of which are by the singer herself.

Beyonce, by HD and Lil Rod: Not really what I was looking for…

Tidal once again fails to capitalize on an opportunity for improving on Spotify’s model, and definitively loses this round.

Spotify: 2, Tidal: 0.

Task 3: Listen to a few tracks by your chosen artist

Tidal falls further behind with some signalling missteps on its song listings.

Each track entry shows the title, the artist name, and the album, all of which become underlined when users hover over them. This is a pretty standard signal that the underlined words are a link, and the artist and album name do indeed link to artist and album pages. The song title, however, just plays the song when clicked, which is of course what it ought to do; but it should not appear as a link to users. Visual cues should be consistent across any site or app.

The song title is underlined like a link when users hover over it, confusing
people as to how it will behave

As Steve Krug’s famous UX maxim goes: Don’t make me think. Users should
never have to wonder how an element will behave, or what the outcome of
interacting with it will be. One user was left believing that the only way to
play songs on Tidal was through the Play Now option listed under More
Options at the far right.

Another misleading element is the Play button which appears to the left of
the track titles (on some, but not all, pages – another mistake). Pressing it
for the first time plays the song; when pressed a second time, it does not
pause the song, as testers expected it to, but instead starts playing it again
from the beginning.

Lastly, the Add to Playlist icon at the right of the track listing (between the Favorite star and the More Options dots) is so small and messy as to be unidentifiable without closer inspection. Tidal once again falls short, leaving the score at 3-0, Spotify.

Task 4: Explore the playlists

Playlists are the most commonly used function of streaming services like
Spotify. People use playlists to complement their mood, to be a soundtrack
to their daily activities, to motivate them while they work out, and often as a
means to find new music. They are arguably the most important feature for
a music streaming platform, and if Tidal finishes strong on this task it might
still nab a tie.

Here again, Tidal chose to replicate almost exactly the layout and style of
Spotify for displaying playlists, and like in many other instances, did so with
a better use of space.

Tidal also chose to separate mood- and theme-based playlists from genre-based playlists, which Spotify combines; whether this is better for creating a
cleaner infrastructure, or worse for adding complexity, could be argued
either way. Some testers really liked having a Genres link on the sidebar
menu and said they would use it frequently.



The resemblance between Tidal (bottom) and Spotify (top) is clear



While Tidal pitches the quality of its expert-curated playlists as a selling
point, as far as usability goes there is nothing that stands out about its
design. If Tidal wins this round, it is only by a hair, and certainly not by
enough to make up for its deficit from the first 3 tasks.

The challenge for an underdog in any market is to stand out enough to attract users away from the top dogs. Building a superior user experience is one way, but not the only way, to achieve this. However, when the products are as fundamentally similar as Tidal and Spotify, user experience is one of the most significant differentiators.

Thus far, Tidal has not shown an ability to move UX design forward in the
music streaming field, even where there is clearly room for improvement.

This UX War’s winner is…



Appendix B: UX Wars Update (July, 2015)

Apple Music

For music lovers around the world, Spotify has become the go-to for stream-
ing and sharing music over the past few years. However, with its new music
streaming service entering the ring last week, Apple has set out to challenge
that market dominance.

In April, we tested streaming newcomer Tidal to see how it measured up to the industry’s established giant. This month, we ran a few tests with music lovers once again to take a closer look at Apple Music’s UI, and see if the latest addition to the streaming world might succeed where Tidal failed.

Task 1: Register for Apple Music membership

Give users what they expect

The 3-month free trial Apple Music offers is a smart move that has already drawn a lot of adopters. However, once users get past the landing page, there is almost no mention of the free trial. And when users click on a signup option (“Individual” or “Family”) the resulting popup prompts them to either “Buy” or “Cancel” — a confusing choice to someone who was trying to sign up for a free trial.

The confusion continues when the next page asks for credit card infor-
mation. “Is this still the free trial? Why do they need my credit card if the
first 3 months are free?” asked Paul from Indiana at this point. Other users
expressed similar doubts.



After inputting their credit card information, users are greeted by another
popup asking, “Do you still want to buy Individual Apple Music membership
for $9.99?” This question is thrown at users more than once in a series of
frustrating popups that also repeatedly ask users to sign in with their Apple
ID every step of the way.

As we see customers struggle with the signup process, 2 takeaways for improving the user experience become clear:

1. Give users what they expect. Presenting people with a “Buy” option when
they are trying to sign up for a free trial only adds confusion about whether
they are following the right path. A “Start free trial” button would be more
intuitive.

2. Don’t create extra obstacles. Asking users if they “still want to” move forward makes them think twice about their purchase, which may implicitly discourage some from continuing. The other (clearly redundant) popups during the signup process created additional annoyance as well.

Task 2: Pick your favorite genres and artists

More options, please?

Immediately after signing up for membership, users are asked to communicate their music preferences to the app by clicking on red bubbles that bounce gently around the screen.



While clicking on bubbles is a novel and fun way to choose music prefe-
rences, it also presents some usability issues. For example, Juliana from New
Jersey found the bubbles somewhat difficult to click on as they bounced
around the screen, accidentally clicking on genres she didn’t like several
times (some of them even bounced off the screen).

The bubbles’ constant motion also makes it hard to remove less-liked genres
and artists, with the little hovering X button sometimes floating out from
under the user’s cursor.

Another potential problem arises with the categorizations themselves. To some people, they are lacking in specificity: for example, “rock” is too general for someone who only enjoys Britpop rock. This is more of an issue since you can’t go back and update your preferences later.

Task 3: Listen to a few tracks by your chosen artists and add them to your
playlist

Be nice to first-time users

Apple Music is built as an extension to the existing iTunes framework. For people who are not familiar with iTunes’ layout, though, the icons can be confusing. For example, what does the heart icon next to each song do?



There is no explanation of the hearts, and the library column for them
doesn’t even have a label. How is it different from rating? What happens if
you “heart” a song that you haven’t added to your music? Can you re-find
hearted songs that you haven’t saved?

Other icons add to the confusion: What are the checkboxes in front of each
song for? What does the cloud-shaped icon do? No obvious explanation is
offered.

One of the most important features of a music streaming service is the ability to create playlists and add music to them. However, Apple Music manages to make this step extremely hard for new users. There isn’t any indication of how to save songs to your library until you hover your cursor over the song title, after which a tiny circle with 3 dots appears. This little icon opens a dropdown that allows users to “Add to my music,” among a number of other options.

You can’t even directly add songs to existing playlists without first adding
them to the main music library.

Additionally, some users ran into problems trying to play music from the search results; when Juliana tried clicking a song title, it took her to the album page instead of playing the song. To actually play it, users must click on the small “Play” arrow on the album picture to the left of the title.

Other users felt the search results page was “cluttered and unclear,” in the
words of Andrew from San Diego, who compared it negatively to the Spotify
results page.



Lastly, it was unclear for many users how to make songs available offline.

Task 4: Explore the public playlists

Explaining to do

Apple does a good job providing a unique selection of public playlists cate-
gorized into “Apple Editors Playlists,” “Activities Playlists,” and “Curators
Playlists”. The background picture for each playlist category and sub-
category makes for a striking, aesthetically pleasing presentation. Organi-
zationally, the feature mostly makes sense, though some users were thrown
off by the multi-layer nesting of playlists.



For people who are familiar with the Spotify system, it is confusing what the
actions “Follow” and “Add” mean — whether they involve getting notifica-
tions about new or related songs, or about updates to the playlist; or
whether all the songs on that playlist will be added to ‘My Music’; or how
following and adding are even different.

There is no explanation of any of these functions to new users, and they are
left to wonder on their own.

Evaluating Apple Music

At a very fundamental level, Apple Music approached the challenge of designing a competitive streaming service in a very different way from Tidal. Jay-Z’s service emulated Spotify as much as possible, with a layout and design sense that clearly evoked the larger company’s UI.

Apple, on the other hand, did not copy Spotify but instead based the Apple
Music interface on iTunes, aiming to turn their music library tool into a more
all-encompassing platform.

This endeavor meets with mixed results. Apple Music successfully, if un-
evenly, integrates music streaming capabilities into the iTunes platform. It is
far from perfect though — with a few glitches and a consistent lack of expla-
nation for some less intuitive aspects of the design, Apple Music presents
undeniable usability challenges to users who are looking for an easy and
convenient Spotify replacement.

However, it’s clear that Apple Music has created something unique from its
primary competitor. Like Tidal, it offers exclusive content like influencer-
picked playlists and artist communications; it also offers users the oppor-
tunity to integrate their existing music libraries with songs they don’t
actually own.

From little twists like the red bubbles to bigger differentiators like its radio feature, Apple Music provides a fun and unique alternative to Spotify, something that Tidal struggled to do. But the platform’s learning curve is rather steep, and some serious thought needs to be given to making the whole experience less confusing and more obvious. Apple Music is still brand new, so expect the usability gap to close in the coming months. But for now,

this UX War’s winner is, again:



Appendix C: UX Wars (February, 2015)

OkCupid vs. Match.com

Love is in the air... and, more recently, on the internet. February is the month of Valentine's Day, and we felt this would be the perfect time to pit two online dating heavyweights, OkCupid & Match.com, against each other to see who has the most usable product. We ran a few tests with real, single men and women, and here’s what we found...

Task 1: Create an account & fill in some info about yourself

OkCupid pulls ahead for an early lead –

Both sites follow the same 3-stage registration recipe: get the basic details
together, add a username and password, mix well. A couple of things put
OkCupid in the lead here:

(1) Playful, tongue-in-cheek style: OkCupid keeps things interesting with an informal tone that adds character and charm and puts users at ease as they sign up. From interactive little flairs like “Ahh, Miami” (zip code recognition) to static page headings like “The last step. Don’t stop now!” OkCupid engages the user and subtly nudges them through the signup. Match’s process is functionally just as good, but misses out on style points.

(2) Username selection: Both sites have users in numbers that make picking
a unique new username a challenge – a problem not unique to these two
sites. But OkCupid indicates (with a checkmark or an X) when a username is
already taken, while on Match users only find out once their submission is
rejected.

Match.com’s early setbacks continue as account setup progresses. Pop quiz: What’s the #1 thing that people want to do when they join an online dating site? We don’t have any statistics on this, but our guess is seeing matches, as soon as possible. Here’s where OkCupid does something else right.

On their mobile site, new signups can see matches right away; no further
steps are required. Of course, the matching algorithms will have nothing to
work with, but you know what they say – seeing is believing. The details can
be filled in later.



The desktop site adds a few extra steps: an “About me” field, and 5 to 7 yes-or-no questions (Could you date someone who was really messy? Do you like scary movies?) with pleasing card-flicking visuals. Oh, and all of it is skippable.

Match.com, on the other hand, packs the next step with questions that keep new users from even catching the scent of a match for as long as 7 minutes. It doesn’t help that there’s no indication of when the interrogation will end. Eduardo from Phoenix slogged through 6 questions before grumbling, “This is a lot of questions.” 20 more passed before he reaffirmed his earlier analysis, adding “Can I just get to the dating site?”

Task 2: Browse through some matches. How would you message them?

A more even stage

Both OkCupid and Match.com start off this stage with the same treatment – the new account owner is shown photos of possible matches so they can pick and choose the ones they like, giving the algorithms a better idea of their ‘type’. OkCupid gives users endless scrolling and unlimited options (picky people rejoice!) though for many, Match.com’s dozen are probably enough. A bigger issue is that all 12 options are checked by default, which users often don’t realize until they accidentally deselect (and then confusedly reselect) the first person they like.

Also, this looks like the banner ads people are used to ignoring
elsewhere
After this brief funnel step, users can browse potential matches at will, or see
matches one at a time. The tradeoff of Match’s previous thorough probe into
new users’ traits and preferences is that now, the first actual matches people
see are pretty well-suited to them.



Match.com users by default first view the complete profiles of their “daily matches”, who are specifically chosen for them, and have an option to like or pass on them. A ‘like’ calls up a nice big chat box in its place, making it easy to send a message immediately.

Match makes messaging easy

OkCupid doesn’t feature the ‘like’ and ‘message’ options quite so promi-
nently on matches’ profiles, but they are still quite visible and easy to use.
Both sites allow for pretty easy browsing of nearby matches. OkCupid wins
this round too, but not by as much.



Task 3: You want to get better matches. Try to get more refined match
results.

Match.com closes the gap

There are two ways to go about this task – applying filters to the match results,
or taking an action that tells the algorithm more about what you like and
permanently improves the matches you get. On Match, this action is to rate
your daily matches, and the site informs users of this with a clearly linked
banner stating as much:

On OkCupid, the action is to answer “match questions”, and users are informed of this, and other things, in an inbox message from the staff (heralded by a bright pink notification that no one could miss). The problem is, the message does not contain a link to these match questions, and they aren’t exactly easy to find. Some users wandered for a while before finding them in the “questions” tab of their profile.

It’s not all going Match’s way, though – on mobile, the tides reverse. OkCupid gets it right with a direct, straightforward “Improve Matches” button on the footer, while Match doesn’t really indicate anywhere how to get better matches. For Match, though, the best is yet to come…

They really thought of everything

When it comes to filtering your results, Match.com blows OkCupid out of the water with a hyper-extensive search system. Almost any attribute you could think of can be selected for, with a simple and intuitive checkbox interface. OkCupid’s filters aren’t bad, but they don’t even compare for depth and breadth; and on mobile, users can’t even multi-select options (it’s only white OR black, Buddhist OR Hindu). Match doesn’t do mobile great either, with an ambiguous heading that sounds like you’re changing your own details – but all the functions are there.

Minus points: Neither site allows users to filter for guys with long hair. Sorry,
Claudia from Miami.

Match.com wins this round by a solid margin. It’s not enough to put things
even, but they’ve got a fighting chance again.

Task 4: Go and fill out your profile a little more.

OkCupid falters…

A relatively easy task, and both sites do it pretty well. OkCupid wins style
points again for their word choice (Drinks: Very often, often, socially, rarely,
desperately) but it doesn’t make up for one big misstep: the languages
dropdown.



Phenomenally comprehensive language offerings; itty-bitty scrolling bar

When the #1 number-two language in the country starts with an “S”, it’s not
a good idea to have users choose from an alphabetical dropdown of
hundreds of languages without a search option. What were they thinking?

Match wins this round, and closes the gap still further. As far as we’re
concerned, though, they have yet to make up for the slew of questions
during signup.



Task 5: Make sure you will get notifications when someone ‘likes’ you.

A draw?

For the last task, we wanted to investigate the usability of the settings by asking users to find and potentially change their notification details. But both Match.com and OkCupid do pretty well on this, and there are no issues to report.

So does OkCupid win on their current lead? Let’s factor in a few more
general components.

(1) Visuals: OkCupid wins for visual style, and it’s not really much of a
contest. Bold yet tasteful colors and plenty of blank space contribute to the
site’s fresh, fun, disarming appeal. Match.com is much blander in its design
and doesn’t quite feel up-to-date.

(2) Remembering users: Match does remember returning users, but it doesn’t keep them signed in like OkCupid does. Each time you go to Match.com, you land on the signup page, and have to click or tap onwards to the member sign-in to get logged back in again.



(3) Account deletion: Understandably, neither one wants users to leave, but
while OkCupid still makes it pretty easy (very bottom of the settings),
deleting your Match.com account is a nightmare – especially from mobile. In
fact, there’s no mobile-optimized page for it; users have to switch to the
desktop version and pinch-and-zoom their way down so they can confirm
their password twice and then delete.

And with that, OkCupid puts the final nails in Match.com’s coffin. As far as
usability goes, OkCupid is the champ of the online dating heavyweights. Of
course, usability isn’t the only important thing for these two; but ensuring
that the user experience is smooth, enjoyable, and painless is one of the
best ways to get people on board as well as keep them coming back.

This month's UX Wars winner is...



Appendix D: UX Wars (May, 2015)

Priceline vs. Expedia

Summer is coming, and that means it’s time to start booking travel arrange-
ments for that vacation you've been daydreaming about. But where to start?
We decided to pit two of the biggest online travel planners against each
other to see who has the more usable product. So we set up some user
tests...

Scenario: You're planning a family vacation to Washington, DC this summer over July 4th weekend. Your plan is to arrive on July 2nd and leave on the 8th. You've come here to book your flight and a hotel for the trip.



Task 1: How much will a mid-range flight/hotel package cost?

A lesson in clarity...

Expedia puts all the search options right on the table.

Expedia makes this search very easy: flight and hotel combination packages
are the default search type when the home page opens. Other options, for
users who need either more or less, are obvious and clearly labelled.

Priceline, on the other hand, offers users a hotel-only search by default. Flight/hotel combinations are available as an option under the Flights tab (but not the Hotels tab, oddly) and in Vacation Packages as well, but this was enough of a barrier to complicate the process for users.

"Add a tab that's labeled Flights and Hotels" – much like a certain competitor
already has.

Kennedy from Phoenix advised in his written responses afterwards that the
site should "add a tab that's labeled Flights and Hotels" to clear up the
confusion – a Vacation Packages search with 0 results had earlier left him
wondering if he was even looking in the right place.



Once users got to their search results, the task was not difficult on either
site. Neither offered the kind of exact price filtering several users were
looking for, that would have allowed them to set explicit minimum and
maximum rates and throw out the rest of the results. But price sorting was
available on both, and users found that $700-800 per person was generally a
reasonable price.

Both sites were very clear about what the listing prices represented and
included, a fact which was appreciated by users. Expedia takes round 1 with
a more straightforward experience.

Task 2: Select a hotel in walking distance of the National Mall & other main
sights.

Specificity is difficult

This time it was Expedia that left users wishing for a feature the other site
already thought of. "It would be nice to have the option to narrow my hotel
choice by nearby attractions," Barbara from Chicago commented.

There was a filter for neighborhoods, which helped; but as another tester pointed out, "If I'm not familiar with the area, I don't know what the neighborhoods are." And there's nothing that describes or defines them on the site.

Priceline, on the other hand, prominently features a Hotels Near... tab at the
top of the results page, which allows users to winnow down their list of
prospective stays based on closeness to a comprehensive list of
landmarks.

Hotels by proximity to Washington Monument on Priceline



This is exactly the feature Expedia's users were looking for, though they pro-
bably would have wanted it to work more smoothly than Priceline's does.
The list of landmarks, for example, is so long that finding anything takes lots
of scrolling through dropdowns; also, the dropdown categories themselves
don't make much sense (hint: you won't find the Washington Monument
under Parks & Monuments).

Expedia's very small, no-frills map

Expedia does have a map feature, but it has no options for filtering, and the map itself is quite small – much more screen space could have been allotted to it, especially considering that every user relied on it at some point to determine hotels' closeness to DC landmarks. Since there is no search, users had to look around for the map labels, which aren't always visible at different zoom levels.

Priceline did well to make their map much bigger, but it is also clunkier: the
popups for the hotels are awkwardly sized and positioned, and stick around
for too long, becoming almost as annoying as they are helpful. Priceline still
wins this round, but not by a lot.

Task 3: Choose a flight that suits your plans.

Unavoidable confusion?

The two sites handle this step in opposite ways. Priceline automatically
selects the cheapest flight (going both ways) matching the user's travel
dates and specified airports, and offers the opportunity to review and select
other flights. Expedia does not select any flight by default, but presents the
user with a broad list of departure and return flights on or near their travel
dates.



Both methods caused some confusion for some users. On Expedia,
Chicago Barbara picked her return flight without realizing it, then had to go
back and start over so she could choose a round trip arrangement instead of
getting 2 mix-and-match flights. Charles from Atlanta, on the other hand,
thought he had selected both flights after only choosing his departure trip.

Priceline's pre-picked trips: "How do I choose between these 2? Am I doing something wrong?"

Priceline's system seems somewhat prone to misinterpretation too. Irene of Philadelphia puzzled over how to select one of the 2 flights she saw listed under Departing (they were actually the 2 consecutive legs of a pre-selected flight with a stop). Then, when picking new flights, she thought the Choose Return button was for viewing a list of return flights to choose from – not a confirmation of the single result showing to the right.

With only 1 result, Irene thought Choose Return was for seeing a full list,
not confirming it.

Priceline caused more serious issues for the one tester that ran into them,
but those issues were also less likely to arise than the ones that Expedia
testers came across. So it looks like Priceline ekes out ahead on this task too.



A draw?

One thing that stuck out about the Priceline tests in a general sense was that
it seemed to just work less well. Irene's 1 flight option for returning from DC
to Philly on July 8th, shopping a month and a half in advance, is not even the
worst example of this.

Kennedy's search for a flight + hotel vacation package in Task 1 returned no results, and for the remainder of the test he had to look for and book his hotel and flight separately. Yet there were options for both on the days of his planned trip; so why did the vacation package search come up empty?

These seem like glaring errors in the core functionality of the product, and are suspect to say the least. Because issues like this had a considerable negative impact on the entire experience, Priceline must be docked points.

This month's UX Wars winner is...

