This Sux
ISBN 978-0-9968284-0-6
No part of this publication text may be uploaded or posted online without the prior written
permission of the publisher.
For permission requests, write to [email protected]
INTRODUCTION
If you do a lot of usability testing, you’ve probably heard that a lot. Every
time users stumble over a poor UI, wherever there’s a gap between how the
product is and how it should be, the user experience suffers; conversions
drop, user satisfaction decreases -- and for product people, that sucks.
When you’re working on a product, the best thing you can do for your
designs is to get into your user’s head. What do they want from the product?
What do they need from it? What will they expect from it, and how will they
use it?
It is tempting to think that we know the answers to these questions, but
more often than not our preconceptions, attachment to our creations, and
familiarity with the product preclude any kind of objective understanding of
what users really think. We might think we’ve designed the perfect product,
when in reality our users are saying, “This sucks!”
By going straight to the source, we eliminate the guessing and put the
experience of real users at the center of the decision-making process. But
getting high-quality feedback is a skill in and of itself; knowing what to do
and what not to do in running a usability study can make all the difference in
the relevance, accuracy, and usefulness of your research.
Perhaps the central balance that one must strike in conducting UX research
is between asking and observing. Quiet, unobtrusive observation is crucial to
collecting feedback that reflects a genuine user experience, but on the other
hand to really get at the root of a user’s thought process can take some
pushing and prodding through restrained, carefully worded questions. The
trick is knowing when (and how) to push, and when to let the user journey
follow its natural flow.
Often it’s the unprompted moments that can say the most in usability
testing: just the word “oh” can tell so much, for example, depending on the
user’s timing, tone, and inflection. Noticing the significance and implications
of these moments is almost as important as making sure that they can
organically arise in the first place. This requires removing ourselves from the
equation as much as possible, minimizing our own voices and just watching
and listening.
The next three sections of this book will address the place of asking and
observing in UX — what we can accomplish with each of them, and how to
employ these different methods most effectively in our research. Then we
will take a look at how to use the insights we collect to be user-minded in
our designs and to make UX research a continuing part of the design process.
It’s easy to be persuaded into thinking we know what we’re doing; usability
testing is for proving us wrong. And through this process of being proved
wrong, we learn how to make our products that much better.
I Farewell to Focus Groups
The future of user research
A number of years ago, British Airways needed to find out what customers
wanted. They were adding mini refrigerators to first-class seating sections so
passengers could help themselves to a snack during long overnight flights,
and needed to find out exactly what kind of snacks their passengers would
be interested in. So they put together a few focus groups, and the participants said they wanted healthy options.
And so the mini fridges were filled with healthy fruits and salads. But one
longtime flight attendant objected – after years spent waiting on airline pas-
sengers and observing what they wanted in practice, she insisted that a few
chocolates and cakes be stocked too.
At the end of that flight, the chocolates and cakes were gone, and fruits and
salads still filled the fridges.
Why are focus groups so prone to serious error? Because they rely on asking
rather than observing, and that extra layer adds all sorts of complications:
1. People are very good at answering questions, even when they don’t
know the answer.
This has been shown time and again - for example, in Jimmy Kimmel’s Lie
Witness News segments, in which interviewees off the street are presen-
ted with a piece of false news and asked for their response. Unwilling to
reveal that they don’t know, people concoct blatant lies to tell the inter-
viewer. One man, when asked what he thought of Landon Donovan’s
play in the 2014 World Cup (Donovan was cut from the team before the
tournament), said it was “definitely pretty good; he took one to the...
nose, was it? and he kept playing.”
2. People don’t know why they like the things they like.
Often, our likes and preferences have reasons that we don’t really under-
stand. We know we like something, but we can’t really say why. But once
again, when pressed for explanations, people would rather make one up
than admit that they don’t know - after all, aren’t we supposed to know
why we like the things we like?
Besides their peers, focus group participants also want to make their moderator happy. This means guessing at the answers he or she is looking for, rather than answering honestly.
And then there are the problems with recall, the act of retrieving infor-
mation from the memory after the fact - including divided attention,
primacy and recency effects, time delay, context dependency, and more. All
of these issues come into play when that extra layer is added between the
focus group’s conductor and the participants.
And then there's the price tag: all the expenses of running focus groups regularly add up to a low-end cost of $32,000 per year.
Beyond the price tag, there are the costs of time and effort - personnel must
commit a good deal of energy to planning and developing the focus group,
and then observing it while it takes place. All this for 2 hours’ worth of
contorted, questionable feedback.
$32,000 is the cost to run an effective focus group.

Now, say you wanted to hire the same number of people to get the same amount of feedback, without the failings of the focus group format and the significant investment of personnel time and energy. With remote usability testing, you could significantly increase your number of testers, what you're getting out of them, or how frequently you test, while still spending just a fraction of what you would be paying for focus groups.
II Occurrence, Not Recall
Peering into users’ heads
We all rely on recall every day – Where did I leave my phone charger? Which gas
station has the lowest prices? What filename did I save that document
under?
Problems with recall

Attention: Divided attention has been shown to seriously hamper the memory-encoding process that allows recall later on.

Motivation: The greater the incentive for accuracy, the more reliable respondents' recollections are likely to be.

Primacy & recency effects: People tend to be better at remembering the first and last elements of a series than the middle elements.

Interference: A delay between the encoding of a memory and the subsequent remembering, especially if filled with a separate task, impairs recall.

Context & state dependency: Items are recalled more reliably in the same environment or mental state in which they were initially encoded, and less in different ones.
On top of all this, your respondents’ answers can be subject to various other
manipulations depending on the format, including social pressures coming
from fellow testers leading to conformity, bandwagoning, or lying to hide
what could be perceived as incompetence; inadvertent pressure from a test
administrator to answer a certain way or confirm a given expectation;
question-framing issues that influence tester responses (think ‘leading the
witness’); and more.
So if waiting till the end to ask your questions is such a feedback faux pas,
what’s the solution? Problems like division of attention or context depen-
dency might be diminished by confining testers to a controlled environment,
but at the cost of losing a genuine, true-to-life look at the user experience.
Primacy, recency, and interference could be combatted by asking testers
questions at the end of each individual task, but at the cost of obstructing
the natural flow of their journey through your website.
The only way to close the gap between what users do on your site and what
they remember doing on your site, or what they say they’ve done on your
site, is to look at their occurrent thoughts - that is, the thoughts that pass
through their mind in the exact moment of those actions.
With two lenses instead of one, the difference is much more than double: 5 times the field of vision, much-enhanced depth perception, and a testament to the usefulness of looking at things through two different lenses.
In the same way, zooming in on usability with the dual lenses of qualitative
and quantitative feedback returns a much broader, more solidly context-
ualized picture than either would alone.
At the very core of user experience is the subjective and emotional response
of the individual user – in short, the way a website makes visitors feel. These
feelings run the gamut from delighted, impressed, or hooked to confused,
frustrated, and angry. All these welling emotions, and the ones in between,
have one thing in common: they won’t show up in the numbers.
Your data may tell you which pages people visited, and how long they
stayed, and where they came from and where they went to, but the story
itself is missing – the feelings aren’t there. Did that user stay on your site
because his interest was captured by a great piece of content, or because he
was fruitlessly searching for an "About" section? When that visitor clicked to
a new page, did they move a step closer to their objective or had they mere-
ly mistaken the page for something else? And if so, what were the cues that
led them to that wrong turn?
The second lens on a set of binoculars lets the viewer gauge depth by taking
advantage of parallax, allowing the mind to compare and reconcile two
overlapping but distinct images to understand the object at hand in 3-D.
Delving into the individual stories of users enormously increases the value
of your research. Your users aren’t just data points; each one is the author
of a unique journey through your site, and exploring those journeys broa-
dens your view by many multiples. Placing your usability into context with
empirically robust quantification and comparison tools deepens your
understanding. Together, they render a much-enhanced, complete picture
in all its detail.
Designing a user test is a bit like designing a website: inevitably, users find
every way you never thought of to misinterpret and misuse what’s put
before them. Most of the time, though, that isn’t their fault – in fact, it’s
probably yours.
Test designers frequently make oversights and errors that cripple the ability
of their tests to gather useful feedback and show the interactions they're
looking for. A poorly designed usability test will seriously impact the results,
including (1) users’ ability to correctly carry out the tasks in your user flow,
and (2) the amount of insight you get into real usability issues with your
application. When your testers don’t understand the tasks, aren’t prepared
to approach them properly, or misinterpret the instructions, the returns on
your research aren’t being optimized and you may well not learn what you
hoped to about your interface.
Here are 6 guidelines for designing a test that will avoid common errors and
maximize the information and insight you get out of your usability research.
A poor scenario, for example, might sound like this: “You need to buy renter’s insurance and want to explore your options.” A great scenario would be, “Your friend just paid thousands of dollars to repair damage caused when a guest accidentally started a kitchen fire. You have guests over often and want to be covered in case something like this happens.”
Making the right impression on new visitors can be the difference between
losing or keeping them. Including an impression test at the beginning of your
user test also helps to orient the user and gives them the chance to under-
stand what the site is for before jumping into the tasks.
The bottom line is, if the words you use are the same as the words that
appear on your site, the interactions you see in your results won’t be genu-
ine; you’re effectively giving the tester the ‘answer.’ Refrain from leading the
witness, and you’ll learn a lot more.
There are many other ways to optimize your research. But with these 6
guidelines, you will be able to avoid the main potholes and get valuable,
relevant usability feedback.
Where does your competitor's website or app hold the edge? What are they
doing right that you can learn from? And where are the strong points in your
own design? Learning the answers to these questions will give you a strong
grounding for understanding where to take your product and how to market
it.
Here are the top 5 things we tell customers who are looking to run a
comparative usability study.
Keep tasks the same
One of the key tricks is to keep your tasks as similar as possible so that the
results are directly comparable. If you can, choose a scenario and set of tasks
that are exactly the same, including order and word choice. This may require
you to get creative - make sure to frame your tasks in a way that is equally
applicable to both sites, and use words that aren't found on either so you
don't give an advantage to one or the other.
However, two sites, even competitors, are rarely identical, and it's likely you'll
have to make some accommodations. Sometimes similar sites will be de-
signed with the same functions in a different order, or with one or two
central functions that are sharply different.
The key is to design a genuine, true-to-life user journey for each site that will
return relevant insights, while also ensuring that your test designs are close
enough to allow side-by-side comparison.
Typically, you'll want to use different testers for each site, rather than
having the same people test both. Since both sites are offering a com-
peting product, service, or experience, the design and structure of the
first site will invariably affect testers' perceptions and expectations of the
second. People create schemas for how different functions should look,
feel, and work, and once they have seen one site's version, they are more
likely to have trouble with versions that differ.
This is the same reason we recommend using different testers for longit-
udinal research on a single site – once people have learned a system, it
colors their subsequent experiences, and their test results will not reflect
a typical user's journey.
Not all usability problems are created equal. Understanding the weight of
various issues is important to seeing how two sites really compare. When
thinking about what matters most, these are some questions to consider:
How did users respond to the problem? Were they annoyed, frustrated?
Who did they blame for the problem?
The more successful site is not the one with the smallest tally of issues, but
the one that better enables users to achieve their end goals. For example, if
your website centers around a search function and the search is unusable,
no amount of UX brownie points from the menu layout or the locator can
make up for it.
The bulk of your insights will come from watching users struggle with usability problems.
You may think you're objective, but it's easy to subconsciously minimize the
issues your own site has while focusing heavily on the problems of your
competitor's. Widely-used quantification scales like SUS and the SEQ are
great tools for taking a more clear-eyed look at the results, and also allow
direct side-by-side comparison between system and system, task and task.
Not every usability problem will look like a problem. Some issues are subtle
enough that the user doesn't notice that their experience has suffered. This
may occur when users shoulder the blame for mistakes themselves, and therefore don't say anything about it. Other times it's not that there's a
problem, but rather that there's simply room for improvement – an obser-
vation that's much easier to detect in a comparative usability study. Keep an
eye out for spots where the user experience may be just alright. Turning
those moments into stellar experiences is a key to creating a successful
website that people will want to return to.
So when it comes to UX, who can tell you more – the experts, or the crowd?
The answer may not be as clear-cut as you think.
The wisdom of crowds
In 2004, James Surowiecki gave a name to the truth and accuracy of the
aggregated many: “the wisdom of crowds.” It’s the idea, basically, that the
collected knowledge of a large number of people tends to be remarkably
correct.
It is under circumstances like these that the wisdom of crowds can reach its
full potential: untainted by the shadows of peer pressure, conformism, and
bandwagoning, independently-generated thoughts and ideas contribute to
the diversity and wholeness of the collective voice, and counteracting forces
(think underestimation and overestimation, or opposition and support) are
allowed to balance the sum and bring the crowd to the best judgment.
Applications to UX
“Why are experts not that smart? Because experts tend to be and think alike, and thus do not reflect maximum diversity of opinions.”
– Aldo Matteucci
But are they better than experts? At some things, they certainly can be
(after all, none of the cattle experts guessed within a pound of the prize ox’s
true weight).
What does crowdsourcing usability look like? The UXCrowd is one way – a
tool for identifying and prioritizing usability stress points by harnessing the
wisdom of crowds.
Step 1: After completing a task-based user test, the first 5 testers submit 2 things they liked, 2 things they didn’t like, and 2 ideas for improving the site. Each tester’s submissions are made before they have seen anyone else’s input. Thus, each idea is the result of independent thinking and insight.

Step 2: After all the answers from the first 5 testers are gathered, the rest of the testers assign weighted votes to the ones they most agree with. This allows the best ideas to bubble up to the top. To prevent bandwagoning, the testers can’t see the standing vote count of the submissions as they vote on them. This allows testers to approach each idea with an open mind.
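To make the voting step concrete, here is a minimal sketch of how weighted votes might be tallied and ranked. The submissions, vote weights, and code structure are illustrative assumptions, not the actual UXCrowd implementation:

from collections import defaultdict

# Hypothetical submissions from the first 5 testers (id -> idea text)
submissions = {
    1: "Liked: the search bar is easy to find",
    2: "Didn't like: checkout requires creating an account",
    3: "Idea: add a progress indicator to the signup flow",
}

# Hypothetical weighted votes from the remaining testers: (submission_id, weight)
votes = [(3, 3), (2, 2), (3, 1), (1, 1), (2, 3), (3, 2)]

totals = defaultdict(int)
for submission_id, weight in votes:
    totals[submission_id] += weight

# The best-supported ideas "bubble up to the top" of the ranking
for submission_id, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{total} points - {submissions[submission_id]}")

Because each vote is cast without seeing the running totals, the ranking that emerges reflects independent judgments rather than a bandwagon.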
VII Standardizing UX Feedback
Each test video tells its own story: the plot is no secret, but what’s their great underlying theme? That’s where the System Usability Scale (SUS) comes in. Ten questions, a five-point “strongly agree” to “strongly disagree” response system, and a quick scoring algorithm yield an extremely reliable usability score on a scale of 0 to 100.
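To make the arithmetic concrete, here is a minimal sketch of the standard SUS scoring rule, in which odd-numbered (positively worded) items contribute their response minus 1 and even-numbered (negatively worded) items contribute 5 minus their response; the example responses are made up:

def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    # Odd items contribute (response - 1); even items contribute (5 - response).
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    # The 0-40 total is scaled by 2.5 to land on the 0-100 SUS scale.
    return total * 2.5

# One tester's answers to the ten SUS statements, in order:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 3]))  # 80.0

Averaging these per-tester scores across all of your testers gives the overall SUS score for the system.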
With thousands of previously documented uses to compare to, SUS gives
you a solid indication of users’ overall satisfaction with your website, and
can even be broken down into usability and learnability dimensions. The
percentile ranking contextualizes your raw score, allowing you to under-
stand how your site performs relative to others; and some researchers have
tried, with some success, to map adjectives like “excellent,” “poor,” or
“worst imaginable” to individual scores for extra insight.
By various accounts, the mean SUS score hovers around 68-70.5 (a score
that roughly corresponds, as it happens, to the adjective “good,” though
falling rather short of “excellent”). Normalizing the score distribution with percentiles therefore puts a 68 (or a 70.5) at the 50th percentile – better than half of all other systems tested, and worse than the other half.
Quartile breakdown of SUS scores and suggested adjective rankings from the Journal of
Usability Studies
It is these qualities that make SUS the key to normalizing a diverse array of
tester feedback, and aggregating responses into a meaningful but concise
picture of your UX. If individual test videos are the trees, SUS shows you not
only the forest, but the entire ecosystem into which your system fits; it
describes your site not as a standalone entity, but as a part of the wider web
world that surrounds and thus helps to define it. With a widely-trusted industry standard to rely on, you can take a step back from your own company and see how you stack up in the bigger picture.
VIII Measuring Task Usability
The Single-Ease Question
The idea behind task-based usability testing is that any application's user
experience is composed of various steps along the user’s journey, each of
which must be optimized for simplicity and ease of use to guide the user to
his or her end goal.
So if tasks are the building blocks of usability testing, is there a way
to think quantitatively about the individual usability of the tasks
we ask our testers to complete?
One method that does not pile significant amounts of time, effort, or complexity onto the tester is the Single Ease Question, or SEQ.
Like SUS, the Single Ease Question uses a Likert scale-style response system, but the similarities stop there. As its name implies, the SEQ is just one question, asking how difficult or easy the task was to complete, typically answered on a seven-point scale rather than a five-point one. The wider scale adds room for more nuance and a greater diversity of responses, while still preserving the one-question-only simplicity of the SEQ.
The Single Ease Question has been found to be just as effective a measure as
other, longer task usability scales, and also correlates (though not especially
strongly) with metrics like time taken and completion rate.
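As a rough illustration of rolling raw SEQ ratings up into per-task averages, here is a minimal sketch; the task names, ratings, and the flagging threshold are hypothetical choices rather than part of the SEQ itself:

from statistics import mean

# Hypothetical SEQ ratings (1 = very difficult, 7 = very easy), grouped by task
seq_ratings = {
    "Sign up for an account": [6, 7, 5, 6, 7],
    "Find the pricing page": [3, 4, 2, 5, 3],
}

for task, ratings in seq_ratings.items():
    avg = mean(ratings)
    # Published SEQ benchmarks put the average task at roughly 5.5, so tasks
    # that fall well below that are worth a closer look in the test videos.
    flag = "  <-- investigate" if avg < 5.0 else ""
    print(f"{task}: mean SEQ {avg:.1f}{flag}")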
However you use it, the SEQ is a valuable tool, helping web developers understand the usability not only of the website as a whole, but also of the individual steps on the user’s pathway through it.
IX Collaborative Analysis
How can you make usability research relevant to all of your stakeholders
without bogging the whole team down in a morass of data?
For the UXer, extracting insights from the data isn’t the end goal; the
findings, rather, are a tool for spurring necessary changes to the product.
Thus the challenge is to engage the whole team with those findings and
achieve alignment within the team so that the product can move forward.
This is a tricky proposition, though. Watching all of the video results is
neither feasible nor necessary for every team member; much as it may add
to the nuance of the analysis, the time investment will almost always be
prohibitive for most stakeholders.
It falls to the UX Researcher to watch all of the results and identify key
findings, judge what is important and what isn’t, and then persuasively
communicate these to decision-makers.
The goal of the designer, for their part, is to see why particular design
elements work or do not work, and then use that information to create new
solutions based on real user behavior.
To achieve these goals, designers need to be able to directly access the results, witness users interacting with the product at key junctures, and hear their thoughts and reactions when they run into walls.
Conceptualizing a UX research workflow
Given these specific goals and responsibilities, one can flesh out a picture
of a team UX workflow that engages every stakeholder with the data at the
appropriate level of involvement and achieves the goals of all involved, as
well as the team as a whole.
The researcher mines for insight and patterns, categorizes and synthesizes large amounts of feedback, and makes note of key teachable moments.
The designer reviews the researcher’s work, observes the critical inter-
actions, and deepens the analysis of the results with their own unique
contributions and insights.
Tools for the team that collaborates
The first big piece of the puzzle for us is annotations: a way for each
account member with whom the video results are shared to add tagged,
timestamped notes the whole team can see and access. This is primarily
for researchers – the tags make it easy to identify and keep track of use
patterns and recurring issues, and the timestamps with one-click
playback are an easy way to share key moments and highlights that
make the case for design changes.
For the designer, being able to see the researcher’s annotations makes it simple to locate important teaching moments and then watch them firsthand to more firmly grasp the issues. Designers can also add their own insights, and with author-based filtering the whole team can read the designer’s unique input and perspective on the data.
The other big piece is the UX Insights summary: a way to create a
compact list of top findings from the test data and share it with stake-
holders. With all the important points consolidated into one place, such a
feature cuts away the extra and delivers only what the decision-maker
needs to know to move forward. Because it is integrated with the rest of
the platform, though, the video evidence that substantiates each recom-
mendation is just one click away for any viewer.
X Iterative Testing
If we assume that stakeholders want users to be successful in using their site, iterative testing prior to launching a website is effective because developers can make quick changes based on users’ interactions with the design, then test the revised design using measures of success (e.g., efficiency and accuracy).
1. Take the initiative to set an expectation from the beginning that there
will be iterative tests and that all of the key stakeholders will be part of
the process.
Set usability-related goals before the project begins, and agree to meet
regularly with the whole team to plan each iteration of tests and to dis-
cuss findings and recommendations for improving the site; being proactive
in this regard will help to ingrain usability testing into the process without
issue later on.
The need to work under pressure for quick turnaround can present logis-
tical challenges, such as obtaining results and producing preliminary re-
ports quickly; further challenges like tight budgets and technical limits on
what the developers can and cannot achieve may also interfere with an
iterative design process. Therefore, some of the results of usability test-
ing may have to be sacrificed or deferred to meet deadlines and manage
costs.
3. Start with paper prototypes, before any code has been written, so developers can get on board with the design-test-refine process.
Paper prototypes are easy to throw away (until something has been hard coded, no application actually exists), whereas committed code can often wed developers to the design.
4. Plan to include some tasks that can be repeated in all of the iterations
so that the team has a measure of progress as they proceed.
In each iteration, evaluate the user interface of the new design by examining participants’ success, satisfaction, and understanding of the site, as
measured by their performance on tasks, self-rated satisfaction, or other
quantifiable measures.
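For the repeated tasks, even a simple completion-rate tally makes progress between iterations visible. Here is a minimal sketch with made-up pass/fail data:

# Hypothetical results for the same repeated task across two design iterations;
# True means the tester completed the task, False means they failed or gave up.
results = {
    "Iteration 1": [True, False, False, True, False],
    "Iteration 2": [True, True, False, True, True],
}

for iteration, outcomes in results.items():
    completion_rate = sum(outcomes) / len(outcomes)
    print(f"{iteration}: {completion_rate:.0%} task completion")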
ITERATIVE TESTING 54
Bring the whole team together to generate ideas for improvements and solutions to test. It is important that each team values and respects what the others have to offer.
By getting the product into the users’ hands and finding out what they need,
you can quickly identify usability issues and achieve an efficient develop-
test-change-retest process. Iterative testing is a useful and productive tactic,
and usability researchers on any digital product development team should
aim to include several iterations in their test plans.
XI When Design Drowns Out the Message
Signal-to-Noise Ratio in design
This lunch menu shows 8 combination choices available for the same price. The “signal” is the information differentiating each of the combinations – specifically, the unique dish offered alongside the Pad Thai. If the whole list was boiled down to just the signal, it would look something like:
A – Red Curry
B – Green Curry
C – Mussaman Curry…
The signal is left to jostle for the user's eye among a cacophony of extraneous content. The list is still readable, but not quickly or easily comprehensible. The reader must work harder and take longer to get to the "signal" that they really want. And each time they look away, looking back requires a moment of re-orientation and re-learning the list.
Signal-to-Noise Ratio in web design
If a simple lunch menu can suffer from poor SNR, imagine how much easier
it is to drown the signal on a complex, interactive webpage. The temptation
to add more noise is ever-present for the designer, and it's a hard line to
walk between aesthetic choices that support the design, and ones that
distract from it.
Often, an aesthetic choice ends up distracting from the signal, creating counterproductive noise that works against the website's goals and decreases usability.
Design decisions almost always affect usability, because they tend to either
support or distract from the core purpose of the site or feature. When they
don't contribute to the communication of the "signal," they can become
noise-generating distractions, and actually drown out the information neces-
sary for using a platform to its fullest.
So each time you make a design decision, especially for aesthetics, stop and
ask yourself: "Am I drowning my message, or supporting it?"
It’s a saying we apply to a lot of things: It’s the little things in life. It’s the
little things in a relationship.
It’s surprising the reaction a simple interaction can cause. I was watching the
results of a test we had run on a survey-building site a few weeks ago when
the tester typed out a survey question along the lines of “Do you plan to
participate in any outdoor activities this month?” He had scarcely finished
typing the question when the four blank multiple-choice options below
auto-transformed into just two: “Yes” and “No”.
After the second or so that it took for this clever swap to register, he ex-
claimed, with pleasant surprise, “That was really really cool!” Then, after a
moment’s additional thought: “What if I want to say yes, no, or maybe?” He
added a third slot to fill with this last option, and immediately the site
changed it to say “Maybe.” UX folks like to bandy around the word “delight”
in talking about experience design, but I doubt many have ever heard a
grown man literally squeal in jubilant shock in response to a web interaction.
The thing is, that specific interaction was not especially critical or central to
the user’s journey on that site. In fact, only a minority of the testers I
watched even encountered it. But the disproportionate positive reaction
this small sequence generated cast a glow over the rest of the experience,
and set that website apart as an innovative and canny provider in their field.
And I guarantee you that the users who experienced that interaction
remembered it.
Just as easily as the little things can delight, though, they can also infuriate.
Think of it like the disproportionate anger you feel when you stub your toe,
or your headphones fall out, or a video abruptly stops to buffer.
When these little things surface in web design, they’re often things that
users wanted or expected on a site but are not there; I once saw a tester get
So while you’re appreciating all the little things in life as the year draws to a
close, don’t forget that the little things matter in web design too. A bit of
attention to detail goes a long way, and so does neglecting it. Here’s to a
new UX era, and to keeping users surprised.
The field of UX is becoming an exciting, lucrative job market for young pro-
fessionals. A field dedicated to bringing engaging digital products and
services to life, UX centers on developing digital applications that serve the
needs of the user. Rather than design an application that may find a market,
or put months of development time into a new website that may meet cus-
tomer needs, companies ranging from startups to the Fortune 500 invest
millions of dollars into usability testing, prototyping, and interviews with
existing customers. And yet, a definite gap exists between the tools available
to students and the tools actually being used in fields like computer science.
However, it is clear to students like Kristi Wiley, who is starting a Ph.D. in Rhetoric and Writing at Michigan State University, that the benefits of such academic-industry partnerships outweigh the costs. This is what she had to say about learning UX in Guiseppe’s graduate UX class that she took:

“Being put into a class that handed us a real life project and asked us to show results and findings allowed us to learn as we worked. This allowed us to learn in a way that challenged us to expand our skills. Throughout the class we were asked to think about information in a different way than we had traditionally been taught.”

“I’d say one of the biggest things that resonated with me early on was the importance of the user, first, second, last, and always. There was and is something very reassuring about the notion that if you want to improve the user’s experience, you learn what they like, you understand how they use something and how they want to use it. Basically, it’s all about them. That’s something I think about whether I’m working on a webpage or just editing content.”
For both students who want to become professors of UX-related fields, and
for those who want to become the next generation of UX designers,
academic-industry partnerships are win-win.
Students like Kristi and Nick need more than a theoretical knowledge of UX.
They need to be able to work on actual design projects. They need to leave
the classroom with an understanding of UX principles that goes beyond what a textbook can teach. While books, articles, and lectures contain invaluable information for any UX class, students need experience using the concepts they make available.
TryMyUI ensures that a digital interface does not fall into the Design Free
Design Trap (Eggers & O’Leary, 2009), where rushing through the design
phase risks a sub-par final product. Techniques for avoiding the Design Free Design Trap are better learnt from practice than from theory. This is why experiential learning through tools such as TryMyUI usability tests is an important addition to a UX curriculum. As college graduates build the tech sector, they
should take the words of Eggers & O’Leary to heart:
“Don’t confuse good intentions with good design. No one cares about how high-minded your design is if it doesn’t work in the real world.”
Has Tidal found that balance? Could the new platform’s designers have even
improved on the Spotify model to create an ultimately more usable product?
We ran a few tests with current Spotify users to find out…
Task 1: Browse the main page & talk about what interests you
Immediately we see the extent to which Tidal has replicated the Spotify UX
for its own design backbone. The familiar sidebar-menu-on-the-left,
content-panels-on-the-right layout quickly orients new users while
minimizing the pains of adjusting.
Like Spotify, the color scheme mostly sticks to dark grays and blacks that
put the content front and center; describing their first impressions, testers
noted the very dark background, calling it “chill” and “suitable for a lot of
different moods” – appropriate for a music platform.
The videos drew special attention from our testers, both because of the visual prominence they receive and because it’s a feature that Spotify doesn’t have. As tester Abraham from LA noted, the emphasis on video is a double-edged sword.
Tidal has kept it simple with their menu options – the sidebar is compact
and much less cluttered than Spotify’s. However, there are nota-
ble hierarchy issues that inject confusion into what could have been a
superior design: the set of menu options below My Music are of slightly
different coloration and size, and the reason is not clear.
The visual cues half suggest that they are subcategories of My Music, yet
there is a clear separation between the two. They are also indented at the
same level as the rest of the links; are they subcategories or not? Addi-
tionally, one is called Playlists, the same as another link further up on the
sidebar, and the difference between the two is entirely unclear.
All in all, Tidal does a relatively good job of presenting a main page that is
unique but still holds to the UX conventions users will recognize and under-
stand. The younger platform had an opportunity to one-up Spotify’s design
with a cleaner menu but squandered it with poorly communicated hierarchy.
Spotify: 1, Tidal: 0.
Tidal runs into usability problems pretty quickly on this task, because half of
the testers never even noticed the search bar, tucked away in the top right
corner. One tried clicking the Artists link on the sidebar, which in fact is for
saved artists. Another assumed that the only way to get to an artist page
was through tracks shown in playlists.
Spotify‘s search bar is more visible above the sidebar, where users are
likelier to see it, and it is also larger. Both platforms have issues with
their search results, though. Each gives results for several distinct categories,
including artists, albums, playlists, and tracks; the artist and album results
for both are filled with redundant or irrelevant entries, and album results
don’t filter out singles.
But Spotify at least makes these poor-quality results secondary, leaving the
focus on song listings. Tidal uses up much more space to display these cate-
gories, which may have been a conscious decision since user activity tends to
Tidal’s song results are abysmal, failing to take relevance and popularity into
account. The first 14 tracks returned for keyword Beyonce, for example, are
songs literally entitled Beyonce, none of which are by the singer herself.
Beyonce, by HD and Lil Rod: Not really what I was looking for…
Spotify: 2, Tidal: 0.
Tidal falls further behind with some signalling missteps on its song listings.
Each track entry shows the title, the artist name, and the album, all of which
become underlined when users hover over them. This is a pretty standard
signal that the underlined words are a link, and the artist and album name do
The song title is underlined like a link when users hover over it, confusing
people as to how it will behave
As Steve Krug’s famous UX maxim goes: Don’t make me think. Users should
never have to wonder how an element will behave, or what the outcome of
interacting with it will be. One user was left believing that the only way to
play songs on Tidal was through the Play Now option listed under More
Options at the far right.
Another misleading element is the Play button which appears to the left of
the track titles (on some, but not all, pages – another mistake). Pressing it
for the first time plays the song; when pressed a second time, it does not
pause the song, as testers expected it to, but instead starts playing it again
from the beginning.
Lastly, the Add to Playlist icon at the right of the track listing (between the
Playlists are the most commonly used function of streaming services like
Spotify. People use playlists to complement their mood, to be a soundtrack
to their daily activities, to motivate them while they work out, and often as a
means to find new music. They are arguably the most important feature for
a music streaming platform, and if Tidal finishes strong on this task it might
still nab a tie.
Here again, Tidal chose to replicate almost exactly the layout and style of
Spotify for displaying playlists, and like in many other instances, did so with
a better use of space.
Tidal also chose to separate mood and theme based playlists from genre
based playlists, which Spotify combines; whether this is better for creating a
cleaner infrastructure, or worse for adding complexity, could be argued
either way. Some testers really liked having a Genres link on the sidebar
menu and said they would use it frequently.
Thus far, Tidal has not shown an ability to move UX design forward in the
music streaming field, even where there is clearly room for improvement.
Apple Music
For music lovers around the world, Spotify has become the go-to for stream-
ing and sharing music over the past few years. However, with its new music
streaming service entering the ring last week, Apple has set out to challenge
that market dominance.
The 3-month free trial Apple Music offers is a smart move that has already
drawn a lot of adopters. However, once users get past the landing page,
there is almost no mention of the free trial. And when users click on a signup
option (“Individual” or “Family”), the resulting popup prompts them to “Buy” rather than to start a free trial.
The confusion continues when the next page asks for credit card infor-
mation. “Is this still the free trial? Why do they need my credit card if the
first 3 months are free?” asked Paul from Indiana at this point. Other users
expressed similar doubts.
1. Give users what they expect. Presenting people with a “Buy” option when
they are trying to sign up for a free trial only adds confusion about whether
they are following the right path. A “Start free trial” button would be more
intuitive.
2. Don’t create extra obstacles. Asking users if they “still want to” move forward makes them think twice about their purchase, which may implicitly discourage them from continuing.
The bubbles’ constant motion also makes it hard to remove less-liked genres
and artists, with the little hovering X button sometimes floating out from
under the user’s cursor.
Task 3: Listen to a few tracks by your chosen artists and add them to your
playlist
Other icons add to the confusion: What are the checkboxes in front of each
song for? What does the cloud-shaped icon do? No obvious explanation is
offered.
You can’t even directly add songs to existing playlists without first adding
them to the main music library.
Other users felt the search results page was “cluttered and unclear,” in the
words of Andrew from San Diego, who compared it negatively to the Spotify
results page.
Explaining to do
Apple does a good job providing a unique selection of public playlists cate-
gorized into “Apple Editors Playlists,” “Activities Playlists,” and “Curators
Playlists”. The background picture for each playlist category and sub-
category makes for a striking, aesthetically pleasing presentation. Organi-
zationally, the feature mostly makes sense, though some users were thrown
off by the multi-layer nesting of playlists.
There is no explanation of any of these functions to new users, and they are
left to wonder on their own.
Apple, on the other hand, did not copy Spotify but instead based the Apple
Music interface on iTunes, aiming to turn their music library tool into a more
all-encompassing platform.
This endeavor meets with mixed results. Apple Music successfully, if un-
evenly, integrates music streaming capabilities into the iTunes platform. It is
far from perfect though — with a few glitches and a consistent lack of expla-
nation for some less intuitive aspects of the design, Apple Music presents
undeniable usability challenges to users who are looking for an easy and
convenient Spotify replacement.
However, it’s clear that Apple Music has created something unique from its
primary competitor. Like Tidal, it offers exclusive content like influencer-
picked playlists and artist communications; it also offers users the oppor-
tunity to integrate their existing music libraries with songs they don’t
actually own.
From little twists like the red bubbles to bigger differentiators like its radio
feature, Apple Music provides a fun and unique alternative to Spotify, some-
thing that Tidal struggled to do. But the platform’s learning curve is rather steep.
UX Wars: OkCupid vs. Match.com
Both sites follow the same 3-stage registration recipe: get the basic details
together, add a username and password, mix well. A couple of things put
OkCupid in the lead here:
(2) Username selection: Both sites have users in numbers that make picking
a unique new username a challenge – a problem not unique to these two
sites. But OkCupid indicates (with a checkmark or an X) when a username is
already taken, while on Match users only find out once their submission is
rejected.
On their mobile site, new signups can see matches right away; no further
steps are required. Of course, the matching algorithms will have nothing to
work with, but you know what they say – seeing is believing. The details can
be filled in later.
Task 2: Browse through some matches. How would you message them?
Both OkCupid and Match.com start off this stage with the same treatment –
Also, this looks like the banner ads people are used to ignoring
elsewhere
After this brief funnel step, users can browse potential matches at will, or see
matches one at a time. The tradeoff of Match’s previous thorough probe into
new users’ traits and preferences is that now, the first actual matches people
see are pretty well-suited to them.
OkCupid doesn’t feature the ‘like’ and ‘message’ options quite so promi-
nently on matches’ profiles, but they are still quite visible and easy to use.
Both sites allow for pretty easy browsing of nearby matches. OkCupid wins
this round too, but not by as much.
There are two ways to go about this task – applying filters to the match results,
or taking an action that tells the algorithm more about what you like and
permanently improves the matches you get. On Match, this action is to rate
your daily matches, and the site informs users of this with a clearly linked
banner stating as much:
It’s not all going Match’s way, though – on mobile, the tides reverse.
OkCupid gets it right with a direct, straightforward “Improve Matches” button
Minus points: Neither site allows users to filter for guys with long hair. Sorry,
Claudia from Miami.
Match.com wins this round by a solid margin. It’s not enough to put things
even, but they’ve got a fighting chance again.
OkCupid falters…
A relatively easy task, and both sites do it pretty well. OkCupid wins style
points again for their word choice (Drinks: Very often, often, socially, rarely,
desperately) but it doesn’t make up for one big misstep: the languages
dropdown.
When the country’s most widely spoken second language starts with an “S”, it’s not
a good idea to have users choose from an alphabetical dropdown of
hundreds of languages without a search option. What were they thinking?
Match wins this round, and closes the gap still further. As far as we’re
concerned, though, they have yet to make up for the slew of questions
during signup.
A draw?
For the last task, we wanted to investigate the usability of the settings by
asking users to find and potentially change their notification details. But both Match.com and OkCupid do pretty well on this, and there are no issues to
report.
So does OkCupid win on their current lead? Let’s factor in a few more
general components.
(1) Visuals: OkCupid wins for visual style, and it’s not really much of a
contest. Bold yet tasteful colors and plenty of blank space contribute to the
site’s fresh, fun, disarming appeal. Match.com is much blander in its design
and doesn’t quite feel up-to-date.
And with that, OkCupid puts the final nails in Match.com’s coffin. As far as
usability goes, OkCupid is the champ of the online dating heavyweights. Of
course, usability isn’t the only important thing for these two; but ensuring
that the user experience is smooth, enjoyable, and painless is one of the
best ways to get people on board as well as keep them coming back.
UX Wars: Priceline vs. Expedia
Summer is coming, and that means it’s time to start booking travel arrange-
ments for that vacation you've been daydreaming about. But where to start?
We decided to pit two of the biggest online travel planners against each
other to see who has the more usable product. So we set up some user
tests...
A lesson in clarity...
Expedia makes this search very easy: flight and hotel combination packages
are the default search type when the home page opens. Other options, for
users who need either more or less, are obvious and clearly labelled.
"Add a tab that's labeled Flights and Hotels" – much like a certain competitor
already has.
Kennedy from Phoenix advised in his written responses afterwards that the
site should "add a tab that's labeled Flights and Hotels" to clear up the
confusion – a Vacation Packages search with 0 results had earlier left him
wondering if he was even looking in the right place.
Both sites were very clear about what the listing prices represented and
included, a fact which was appreciated by users. Expedia takes round 1 with
a more straightforward experience.
Task 2: Select a hotel within walking distance of the National Mall & other main
sights.
Specificity is difficult
This time it was Expedia that left users wishing for a feature the other site
already thought of. "It would be nice to have the option to narrow my hotel
choice by nearby attractions," Barbara from Chicago commented.
There was a filter for neighborhoods, which helped; but as another tester
pointed out, "If I'm not familiar with the area, I don't know what the neighborhoods are."
Priceline, on the other hand, prominently features a Hotels Near... tab at the
top of the results page, which allows users to winnow down their list of
prospective stays based on closeness to a comprehensive list of
landmarks.
Expedia does have a map feature, but it has no options for filtering, and the map itself is small.
Priceline did well to make their map much bigger, but it is also clunkier: the
popups for the hotels are awkwardly sized and positioned, and stick around
for too long, becoming almost as annoying as they are helpful. Priceline still
wins this round, but not by a lot.
Unavoidable confusion?
The two sites handle this step in opposite ways. Priceline automatically
selects the cheapest flight (going both ways) matching the user's travel
dates and specified airports, and offers the opportunity to review and select
other flights. Expedia does not select any flight by default, but presents the
user with a broad list of departure and return flights on or near their travel
dates.
With only 1 result, Irene thought Choose Return was for seeing a full list,
not confirming it.
Priceline caused more serious issues for the one tester that ran into them,
but those issues were also less likely to arise than the ones that Expedia
testers came across. So it looks like Priceline ekes out ahead on this task too.
One thing that stuck out about the Priceline tests in a general sense was that
it seemed to just work less well. Irene's 1 flight option for returning from DC
to Philly on July 8th, shopping a month and a half in advance, is not even the
worst example of this.
These seem like glaring errors in the core functionality of the product, and
are suspect to say the least. Because issues like this had a considerable negative impact on the entire experience, Priceline must be docked points.
References
https://www.qualtrics.com/blog/market-research-safety-not-always-in-numbers/
http://marciosaito.com/2011/10/13/crowdsourcing-knowledge-has-a-long-tail/
Surowiecki, James. The Wisdom of Crowds. Doubleday, 2004. ISBN 978-0-385-50386-0.
http://www.slideshare.net/ChristofHammel/process-iceberg-21703547