Human Computer Interaction - What is it?

Measuring usability is essential for identifying issues, tracking progress, benchmarking against competitors, and justifying investments in product improvements. Key usability testing metrics include effectiveness (success rate and error rate), efficiency (time on task and overall efficiency), and satisfaction (System Usability Scale, Single Ease Question, and Subjective Mental Effort Questionnaire). These metrics provide valuable insights into user experience and help prioritize necessary changes to enhance product usability.

Uploaded by

Alisha Bhagat

NUMERICAL SECTION:

Measuring the usability of your digital product is crucial for several reasons:

 Spotting usability issues: by keeping track of the usability testing metrics for your product, you’re able to quickly spot areas that need improvement and fine-tune them for a better user experience.
 Tracking progress: measuring metrics over time or after making a certain improvement can
help to evaluate their effectiveness and see how well you’ve solved the issue.
 Benchmarking: by comparing your product’s usability metrics to industry standards or even your competitors’, you get a clear picture of how well it performs against the competition and can find ways to outperform them.
 Justifying investments: we all know stakeholders love numbers. Usability testing metrics provide tangible evidence that a problem exists and help demonstrate the value and potential return on investment.
Overall, usability testing is a proven method of evaluating your product and uncovering hidden
usability issues. It brings both qualitative and quantitative insights to the table, and by keeping
track of some of its metrics you’re able to really make sense of these huge amounts of data
and prioritize changes that matter.

By constantly measuring the usability of your product you’re committing to improving it and
fine-tuning its design and functionality to suit users’ needs.

Types of usability testing metrics


There are 3 main types of usability testing metrics that you can track:

 Effectiveness: describes whether or not a user is able to complete certain tasks with your
product and how effectively they manage to do it.
 Efficiency: efficiency metrics assess how quickly and with what amount of effort users are
able to complete those tasks.
 Satisfaction: satisfaction metrics help to evaluate users’ overall satisfaction with the
experience your product provides. They help assess user perceptions, preferences, and
emotional responses.
As we can see, these are categorized based on the different types of information they help
to obtain. This type of categorization can help you greatly with choosing the right metrics to
track in your own usability study.

For example, say you want to find out how easy it is for users to complete a certain task with your product. Based on that goal, you can choose the specific metrics that will help you find out and analyze those details. In this case, to measure efficiency, you’ll need to track the “Time on task” and “Efficiency” metrics.
Usability testing metrics for measuring effectiveness

Success Rate

The success rate is one of the most essential and basic metrics in a task-based usability test.
It shows the percentage of the participants who successfully completed the task vs. the
ones who did not.

The formula for calculating the success rate manually is:

The total number of successful tasks of all respondents divided by the total number of tasks
performed by all respondents.
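The manual formula above translates into a few lines of code. This is an illustrative sketch, not an official implementation; the participant data and names are made up:

```python
def success_rate(task_results):
    """Successful tasks divided by all tasks attempted, as a percentage."""
    outcomes = [o for results in task_results.values() for o in results]
    return 100 * sum(outcomes) / len(outcomes)

# Hypothetical data: each participant attempted 3 tasks (True = success).
results = {
    "p1": [True, True, False],
    "p2": [True, False, False],
    "p3": [True, True, True],
}
print(round(success_rate(results), 1))  # 6 successes / 9 tasks -> 66.7
```

Note that this pools all tasks together; you can just as easily compute the rate per task by passing only that task's outcomes.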

A 2011 study by Jeff Sauro found that the average success rate is 78%, so if you score anything above that in your test, that’s a great result. A lower result signifies that there’s still room for improvement and work to be done.

Users being able to complete the task should be your number one priority. If half
of your testers are not able to successfully perform a certain action with your
product this is most likely a sign of major usability issues.


Number of errors

The number of errors, also known as the error rate, measures how many errors users make when completing your usability test and interacting with the product. It’s essentially the opposite of the success rate.

In task-based usability tests, you always define an ideal path a user should take to complete
the task, where they should start and finish. The error rate shows how many times users move
away from this path, click the wrong button, open the wrong page, etc. The higher your
product’s error rate is, the more usability problems there are.

To count the error rate, divide the total number of errors by the number of attempts. This
is the calculation for tracking multiple errors across your study.

However, there’s another formula for calculating the error rate for a specific task. For it, you
need to divide the number of times an error occurred by the total number of opportunities for
that error.

To give you an example, if a task has 3 error opportunities (specific elements or places in your UI where testers can make a mistake) and 10 users try to complete it, the total number of opportunities is 30. Imagine you then observed 4 errors during the test.

Here’s how you’d calculate the error rate for that task: 4/30 ≈ 13.3%.
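The per-task calculation above is easy to express as a small helper. This is a sketch, with the function name chosen for illustration:

```python
def error_rate(errors_observed, opportunities_per_task, participants):
    """Errors divided by total opportunities (opportunities per task x users)."""
    return 100 * errors_observed / (opportunities_per_task * participants)

# 3 error opportunities, 10 participants, 4 errors observed:
print(round(error_rate(4, 3, 10), 1))  # -> 13.3
```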

Usability testing metrics for measuring efficiency

Time on task

Time on task is the simplest metric for tracking how quickly and efficiently users perform specific tasks with your product.

Ideally, you want them to do it as quickly and simply as possible. Therefore, when a certain task takes users longer to complete, this might be a sign that they are experiencing confusion, possibly due to usability issues.

There’s no perfect time for this metric, as it always depends on the task and its complexity. However, you can usually tell when a task takes longer to complete than it should.

For example, you’re asking the user of your e-shop to add an item to the cart. This process
should only take a couple of seconds, until they find the right icon. If you see that the user
hesitates or it takes them a couple of minutes to complete this task, you probably need to look
into it and see what confused them.

There could be a number of reasons, such as an unfamiliar icon, a bug on the product page, an
inactive button etc.
Efficiency

Efficiency is a more complicated metric to track because of its formula. It is calculated with the help of other metrics, specifically the success rate and time on task:

Time-based efficiency = ( Σⱼ₌₁ᴿ Σᵢ₌₁ᴺ nᵢⱼ / tᵢⱼ ) / (N × R)

Where:

N = the total number of tasks

R = the total number of users

nᵢⱼ = the result of task ‘i’ by user ‘j’. If the user completed the task, then nᵢⱼ = 1; if not, then nᵢⱼ = 0

tᵢⱼ = the time spent by user ‘j’ to complete task ‘i’. If the task is not completed, time is measured until the user quits the task.

This metric helps to measure time-based efficiency, meaning, how quickly and easily users
can complete tasks with your product.
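Using the definitions of N, R, nᵢⱼ, and tᵢⱼ above, time-based efficiency can be computed as follows. This is a minimal sketch with invented sample data; the result is in goals per second:

```python
def time_based_efficiency(completed, times):
    """completed[j][i] is 1 if user j finished task i, else 0;
    times[j][i] is the time user j spent on task i, in seconds.
    Returns the average of n_ij / t_ij over all N * R attempts."""
    R = len(completed)       # number of users
    N = len(completed[0])    # number of tasks
    total = sum(completed[j][i] / times[j][i]
                for j in range(R) for i in range(N))
    return total / (N * R)

# 2 users x 2 tasks; user 2 failed task 2 after 60 seconds.
completed = [[1, 1], [1, 0]]
times = [[20, 40], [25, 60]]
print(time_based_efficiency(completed, times))  # (1/20 + 1/40 + 1/25 + 0) / 4
```

Higher values mean users reach their goals faster; a failed task contributes zero regardless of the time spent on it.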
Usability testing metrics for measuring satisfaction

System Usability Scale (SUS)

The System Usability Scale is a questionnaire that is sent out to the participants after the test
and helps to assess the perceived usability of the product.

It always consists of the same 10 Likert-scale questions and has been used to measure
usability for decades. The answers to those questions range from 1 = strongly disagree, to 5 =
strongly agree.

Here are just a couple of examples of the questions in the SUS:

1. I think that I would like to use this system frequently


2. I found the system unnecessarily complex
3. I needed to learn a lot of things before I could get going with this system.
The average SUS score is 68. The higher it is, the better your product’s usability.

Overall, the SUS is a proven metric for assessing how easy your product is to use for the target audience.
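Scoring the SUS follows a standard procedure: odd-numbered items contribute (answer − 1), even-numbered items contribute (5 − answer), and the sum is multiplied by 2.5 to land on a 0–100 scale. Here is a sketch; the example responses are invented:

```python
def sus_score(responses):
    """responses: ten answers (1-5), in questionnaire order.
    Odd items score (answer - 1); even items score (5 - answer);
    the total is scaled by 2.5 to a 0-100 range."""
    assert len(responses) == 10, "SUS always has exactly 10 items"
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

A result like 85.0 would sit comfortably above the 68-point average mentioned above.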

Single Ease Question (SEQ)

The Single Ease Question metric helps to assess how easy each task was for the user to
complete. It is administered immediately after the task completion and consists of just one
question and a 7-point rating scale:

Overall, how difficult or easy was the task to complete? (1 = very difficult, 7 = very easy)

This metric helps to evaluate how easy or difficult it is for users to perform a certain action
with your product. Studies show that the average SEQ score is around 5.5.

If most of the participants vote for the task to be difficult and hard to complete, this may be a
sign that you need to simplify the user flow and make the design more intuitive. It may also
signify that there’s a major usability issue that keeps users from completing the task with ease.
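Because the SEQ yields a single 7-point answer per participant, the analysis is simply an average that you compare against the benchmark. A minimal sketch, with made-up responses:

```python
from statistics import mean

# Hypothetical SEQ answers (1 = very difficult, 7 = very easy) after one task.
seq_answers = [6, 7, 5, 4, 6, 7, 5, 6]
print(round(mean(seq_answers), 2))  # -> 5.75, compared against the ~5.5 benchmark
```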

Subjective Mental Effort Questionnaire

SMEQ helps to measure the subjective mental effort of each respondent after completing the
task.
It consists of one question and is especially helpful for measuring the cognitive load required from the participant to complete the task. Its scale has nine labels, from “Not at all hard to do” to “Tremendously hard to do”, placed along a vertical scale measured in millimeters from 0 to 150 above the baseline.

How to compare two designs using usability testing metrics

Usability testing metrics offer a great opportunity to quantify your usability and use those
numbers to compare two different designs to each other.

This, for example, can be useful when you’re making a redesign of your product or trying to
compare your design to one of your competitor’s.

Case #1: Redesign

The process here is to start by running a usability test on the old version of your product, analyzing the results, and calculating the usability metrics that you want to compare later.

After the redesign, conduct a usability test on the prototype of the updated product. We recommend testing prototypes specifically, and not the already developed version, for a reason: this leaves you the opportunity to spot usability issues and eliminate them before development.

After you have the usability testing metrics for both designs, you can compare them and
evaluate the effectiveness and success of your redesign. By doing this you can still spot areas
for improvement. This comparison also gives you precious data to present to stakeholders and
explain how your solutions made the product better.

Case #2: Competitive Usability Testing

Alternatively, you can use usability testing metrics to compare your product to one of your
competitor’s. This can be done by conducting Competitive Usability Testing on your
competitor’s website, for example.

With the help of the UXtweak Chrome extension, you’re able to perform the very same usability test with your competitor’s product as you would with your own! Don’t believe us?

Conduct two usability tests using the same tasks, one on yours and one on your competitor’s
product. This will get you specific usability testing metrics that you can then compare and
analyze where they are doing better. This is a perfect opportunity to optimize the UX of your
product by replicating some of the best practices used by your competitors!

After the comparison you’ll probably get a similar table for each of the metrics*:

Metric         Your website    Competitor's website
Task 1         20s             17s
Task 2         47s             49s
Task 3         60s             27s
Success Rate   85.5%           80%

*the table compares the success rate and the time taken for each of the tasks.
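A comparison like this is easy to produce programmatically. The following sketch uses the hypothetical task times from the table above; variable names are illustrative:

```python
# Time on task (seconds) for each design, per task.
yours = {"Task 1": 20, "Task 2": 47, "Task 3": 60}
theirs = {"Task 1": 17, "Task 2": 49, "Task 3": 27}

for task in yours:
    delta = yours[task] - theirs[task]           # positive = you are slower
    verdict = "slower" if delta > 0 else "faster or equal"
    print(f"{task}: {delta:+d}s ({verdict})")
```

Here Task 3 stands out with a +33s gap, which is exactly the kind of signal that tells you where to dig deeper with qualitative analysis.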

Most of the time, you can’t really make any serious product decisions only based on the
comparison of usability testing metrics. This would require a more detailed analysis of
usability testing results to find out how users actually interact with the product and where they
get confused. However, such a table will allow you to pinpoint where your product is
lacking and then conduct further analysis in that area.

Wrapping up
Measuring usability is essential to keep track of your product’s performance and spot issues
early on. As we can see there are a lot of different metrics one can use to evaluate the usability
of their product. Choose your metrics wisely and never base the analysis on a single one.

What are usability metrics?


Usability metrics are a system of measurement of the effectiveness, efficiency, and
satisfaction of users working with a product.

To put it simply, such metrics are used to measure how easy and effective the product is for
users.

Most usability metrics are calculated based on the data collected during usability testing. Users
are asked to complete a task while researchers observe the user behaviour and take notes. A
task can be "Find the price of delivery to Japan" or "Register on the website."
The minimum number of users for measuring usability is 5. Jakob Nielsen, the founder of the Nielsen Norman Group, recommends running quantitative usability testing with 20 users.
Let’s take a closer look at the most used usability metrics. We’ll start with the metrics for
effectiveness measurement.

Success score
However long your list of usability metrics is, the success score will probably be at the top of
the list. Before we go into the details of usability, we have to find out if the design
works. Success, or completion, means that a user managed to complete a task that they were
given.
The basic formula for the success score is:

Success score = (number of successfully completed tasks) / (total number of task attempts)
The success score will be between 0 and 1 (or 0% and 100%). 0 and 1 are not just simple numbers: in this binary system, they refer to the task being completed successfully or not. All other, more nuanced situations are overlooked, and partial task success is considered a failure.

To have a more nuanced picture, UX researchers can include tasks performed with errors
in a separate group. For example, the task is to purchase a pair of yellow shoes. The "partial
success" options can be buying a pair of shoes of the wrong size, not being able to pay with a
credit card, or entering the wrong data.
Let's say there were 20 users, 10 of whom successfully bought the right shoes, 5 chose the
wrong type of delivery, 2 entered their address incorrectly, and 3 could not make the purchase.
If we were counting just 0 or 1, we would have a rather low 50% success score. By counting
all kinds of "partially successful" tasks, we get a whole spectrum.
Note! Avoid counting "wrong address" as 0.5 of a success and adding it to the overall average, as it distorts the results.
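The shoe-purchase example above can be tallied like this. The outcome labels are invented for illustration; the point is to report each "partial success" group separately instead of folding it into the average:

```python
from collections import Counter

# 20 users: 10 full successes, 5 wrong delivery, 2 wrong address, 3 failures.
outcomes = (["success"] * 10 + ["wrong delivery"] * 5
            + ["wrong address"] * 2 + ["failed"] * 3)

groups = Counter(outcomes)                          # the full spectrum
strict_success = 100 * groups["success"] / len(outcomes)
print(dict(groups))
print(strict_success)  # -> 50.0 with strict binary scoring
```

The grouped counts point at *where* the flow breaks (delivery step, address form), which a single 50% figure hides.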

Each "partially successful" group can tell us more than a general success score: using these
groups, we can understand where the problem lies. We expect this more often from qualitative
UX research, while quantitative gives us a precise but narrow-focused set of data.
For you to consider that a product has good usability, the success score doesn't have to be
100%. The average score is around 78%.
Number of errors
In user testing, an error is any wrong action performed while completing a task. There are two
types of errors: slips and mistakes.

Slips are those errors that are made with the right goal (for example, a typo when entering the
date of birth), and mistakes are errors made with the wrong goal (for instance, entering today’s
date instead of birth date).

There are two ways of measuring errors: measuring all of them (error rate) or focusing on
one error (error occurrence rate).
To find the error occurrence rate, we have to calculate the total number of errors and divide it
by the number of attempts. It is recommended to count every error, even the repetitive ones.
For example, if a user tries to click an unclickable zone more than once, count each one.

The error rate counts all possible errors. To calculate it, we need to define all possible slips and mistakes and the number of error opportunities, which can be bigger or smaller depending on the complexity of the task. After that, we apply this simple formula:

Error rate = (total number of errors) / (total number of error opportunities)

Can there be a perfect user interface that prevents people from making typos? Unlikely. That
is why the error rate seldom equals zero. Making mistakes is human nature, so having usability
testing errors is fine.

As Jeff Sauro states in his "Practical Guide to Measuring Usability," only about 10% of tasks are completed without any errors, and the average number of errors per task is 0.7.
Success score and error rate measure the effectiveness of the product. The following metrics
are used to measure efficiency.

Task time
Good usability typically means that users can perform their tasks successfully and fast. The
concept of task time metric is simple, yet there are some tricks to using it efficiently.

Having the average time, how do we know if the result is good or bad? There are industry standards for some other metrics, but there can't be any for task time.
Still, you can find an "ideal" task time: the time an experienced user would need. To calculate it, you add up the average time for each small action, like "pointing with the mouse" and "clicking," using the Keystroke-Level Model (KLM). This system allows us to estimate the time quite precisely.
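A KLM estimate is just a sum of per-operator times. The sketch below uses commonly cited approximate values; exact numbers vary by source, so treat the table as an assumption rather than a canonical reference:

```python
# Approximate KLM operator times in seconds (values vary across sources).
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Sum operator times for a sequence like 'MPBB' (think, point, click)."""
    return sum(KLM_TIMES[op] for op in operators)

print(round(klm_estimate("MPBB"), 2))  # 1.35 + 1.10 + 0.10 + 0.10 -> 2.65
```

Comparing a user's measured task time against such an estimate shows how far they are from expert-level performance.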
The task time metric is often measured to compare the results with older versions of the design
or competitors.

Often the difference in time will be tiny, but caring about task times is not just perfectionism. Remember, we live in a world where most people leave a website if it hasn't loaded after 3 seconds. Saving those few seconds can greatly impact the user experience.

Efficiency
There are many ways of measuring efficiency. One of the most basic is time-based efficiency,
which combines task time and success score.
Now that we have figured out how to measure both effectiveness and efficiency, we get
to measuring satisfaction, the key to user experience studies.

There are many satisfaction metrics, but we'll cover the two that we consider the most useful. For these metrics, the data is collected during usability testing by asking users to fill in a questionnaire.

Single Ease Question (SEQ)


This is one of those easy and genius solutions that every UX researcher loves. Compared to
all those complex formulas, this one is as simple as it gets: a single question is asked after the
task.

Image credit: measuringu.com


While most task-based usability metrics aim at finding objective parameters, SEQ is tapping
into the essence of user experience: its subjectivity. Maybe the task took a user longer to
complete, but they had no such impression.

What if the user just reacts more slowly? Or was distracted for a moment? The user's subjective evaluation of difficulty is no less important than the number of errors they made.
On average, users evaluate task difficulty at 4.8. Make sure your results are no lower than that.

System Usability Scale (SUS)


For those who don't trust the single-question solution, there is a list of 10 questions known as the System Usability Scale. Based on the answers, the product gets a score on a scale from 0 to 100 (each question contributes up to 10 points).

Image credit: Bentley University.


This scale comes in handy when you want to compare your product with the others: the
average SUS is 68 points. Results over 80 are considered excellent.

You might also like