
How to Conduct a Heuristic Evaluation

Read this: http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/

Adapted from slides by Karen Tang and Ryan Baker


What is an evaluation?
• Gather data about the usability of a product or design by a particular group of users, for a particular activity or task, within a particular environment or context

• Evaluation goals:
• Assess extent of system’s functionality
• Assess effect of interface on user
• Identify specific problems with system
HE vs. user testing
• When we can, we want to test with real users

• HE is a “discount” usability technique

• When it's useful:
– When real users are unavailable
– Very early in the design
– As a sanity check (but not a replacement for user testing)
Why HE is great
• Cheap
– Doesn't "spend" users
• Fast
– 1-2 days (instead of 1 week)
• Good
– Proven effective: the more careful you are, the better it gets
• Easy to use
– Relatively easy to learn, can be taught
Heuristic Evaluation
• A type of discount usability testing
• A rational method – an expert applies "heuristics"
– Mentally apply a theory or rule to the design and see if that theory/rule's advice is being followed
• Key Idea: multiple expert evaluators independently apply a set of heuristics to an interface, produce Usability Action Reports (UARs), then combine & prioritize their findings
What can you evaluate with a HE?
• Any interface that has been "developed" (it can be fully implemented or only exist as a sketch)
– A pre-existing webpage
– A sketch of a future interface
• This method can be applied to your own interface, or a competitor's
• You will evaluate the interface according to a standard set of 10 heuristics
How many evaluators are needed?
• Nielsen recommends at least 3, but go for 5!
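
Not in these slides, but Nielsen and Landauer's published model helps explain the "at least 3, go for 5" advice: one evaluator finds roughly 31% of the problems on average, and independent evaluators add up quickly. A minimal sketch, assuming their average single-evaluator rate:

```python
# Sketch of Nielsen & Landauer's problem-finding model (not from these slides):
# proportion of problems found by n evaluators is 1 - (1 - L)**n, where L is the
# average share one evaluator finds (~0.31 in their reported data).
def problems_found(n_evaluators: int, single_rate: float = 0.31) -> float:
    """Estimated proportion of usability problems found by n evaluators."""
    return 1 - (1 - single_rate) ** n_evaluators

for n in (1, 3, 5):
    print(f"{n} evaluator(s): ~{problems_found(n):.0%} of problems found")
# Roughly: 1 -> 31%, 3 -> 67%, 5 -> 84%
```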
Who should do the HE?
• Anyone who knows the appropriate heuristics can do a HE
• But heuristic evaluation experts find almost twice as many problems as novices
• Heuristic evaluation experts who are also domain experts find almost three times as many problems as novices
Phases of Heuristic Evaluation
0) Pre-evaluation training (optional)
Give evaluators the needed domain knowledge & information on the scenario
1) Evaluate the interface to find usability problems
2) Record the problems
3) Aggregate the problems
4) Assign severity ratings
5) Assign solution complexity ratings
#1: evaluate the interface
Which heuristics to use?
• Many possible heuristic sets
• Some standard sets (e.g. Nielsen's usability heuristics)
• You might create your own heuristics, e.g. for specific applications
• We'll focus on Nielsen's, which cover a range of general usability issues
Find which heuristic is violated
Nielsen's 10 Heuristics:
1. Simple & Natural Dialog
2. Speak the User's Language
3. Minimize the User's Memory Load
4. Consistency
5. Feedback
6. Clearly Marked Exits
7. Shortcuts
8. Good Error Messages
9. Prevent Errors
10. Help & Documentation
http://www.nngroup.com/articles/ten-usability-heuristics/
Examples of applying the heuristics

• http://www.slideshare.net/sacsprasath/ten-usability-heuristics-with-example
#2: record the problem
Record the problem
• Each evaluator writes a Usability Action Report (UAR) describing each usability problem they encounter
• HEs are typically used to report problems
• However, UARs can be used to report both the good and bad qualities of an interface in other usability evaluations…
Sample UAR
• EVALUATOR: XXXXX
• ID NUMBER: XXX
• NAME: Descriptive name for the problem
• EVIDENCE: Describe the violation, and why you wrote this report.
• EXPLANATION: Your interpretation: what heuristic was violated, and why.
• Severity: Write up at the end of the evaluation
• Fixability: Write up at the end of the evaluation
• Possible Fix: Write up at the end of the evaluation
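
As a rough illustration (not part of the original slides), the template above maps naturally onto a small data structure; the field names here simply mirror the bullets, and the last three fields are left empty until the end of the evaluation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UAR:
    """One Usability Action Report, mirroring the template above (sketch only)."""
    evaluator: str                      # EVALUATOR
    uar_id: str                         # ID NUMBER
    name: str                           # NAME: descriptive name for the problem
    evidence: str                       # EVIDENCE: the violation you observed
    explanation: str                    # EXPLANATION: which heuristic is violated, and why
    severity: Optional[int] = None      # SEVERITY: filled in at the end (1-4)
    fixability: Optional[int] = None    # FIXABILITY / solution complexity: filled in at the end (1-4)
    possible_fix: Optional[str] = None  # POSSIBLE FIX: suggestion for developers
```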
Keep looking for problems!
• Usually takes a few hours
• A shorter time may not find important problems
• A longer time will exhaust the evaluator, and they may become less productive
• For very large interfaces, it is good to break the heuristic evaluation into several sessions
What about multiple problems?
• This happens a lot; record them separately.
• This is not busywork…
• It may be possible to fix some of the problems, but not all of them
• The problems might not always be linked to each other – one may show up in other situations too
You are not done yet…
• You still need to address the bottom half of the UAR:
• Severity
• Solution Complexity
• Possible Fix
• You may want to take a break before finishing these UARs…
#3: aggregate the problems
Aggregate Problems
• Wait until all UARs are in
• You are aggregating across all evaluators
• Aggregating usability problems:
• Combine problems by consensus
• Gain a sense of relative importance after you've seen a few problems
• At this point, decide which entries are and aren't problems (but keep the original version of each report somewhere)
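
A hypothetical sketch of the aggregation step, assuming the evaluators have already agreed on a shared problem name for duplicates during the consensus discussion (the `UAR` class is the sketch from the Sample UAR slide):

```python
from collections import defaultdict

def aggregate(uars_by_evaluator: dict[str, list[UAR]]) -> dict[str, list[UAR]]:
    """Group all evaluators' UARs under an agreed problem name (sketch only).

    In practice the grouping happens by discussion; here the `name` field is
    assumed to hold the name the evaluators settled on by consensus.
    """
    merged: dict[str, list[UAR]] = defaultdict(list)
    for uars in uars_by_evaluator.values():
        for uar in uars:
            merged[uar.name].append(uar)  # keep the original reports for reference
    return dict(merged)
```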
#4: assign each problem a severity rating
Assign Severity Rating to UARs
• Severity Ratings help project leads determine which problems should be given more developer time
• Not all problems can be fixed
• Some problems will have more severe consequences
• Each evaluator should assign severity separately
Assign Severity Rating to UARs
Based on a combination of:
• Frequency
• How common or rare is the problem?
• Impact
• How easy is it to overcome the problem?
• How disastrous might the problem be?
• Persistence
• Will users experience the problem repeatedly?
• Are workarounds learnable?
Assign an Overall Severity Rating
• Having one severity rating per problem helps developers allocate resources
• Therefore, evaluators need to combine their opinions of a problem's Frequency, Impact, & Persistence into one overall Severity rating
Nielsen’s Severity Ratings
1. Usability Blemish. Mild annoyance or cosmetic problem. Easily avoidable.
2. Minor usability problem. Annoying, misleading, unclear, or confusing. Can be avoided or easily learned. May occur only once.
3. Major usability problem. Prevents users from completing tasks. Highly confusing or unclear. Difficult to avoid. Likely to occur more than once.
4. Critical usability problem. Users won't be able to accomplish their goals, and may quit using the system.
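
For reference, the same 1-4 scale written as an enum; this is only a sketch mirroring the list above, not code from the slides:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Nielsen-style severity scale from the list above (sketch)."""
    BLEMISH = 1    # cosmetic problem, easily avoidable
    MINOR = 2      # annoying or confusing, but avoidable or easily learned
    MAJOR = 3      # prevents task completion, difficult to avoid, likely to recur
    CRITICAL = 4   # users cannot accomplish their goals and may quit the system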
False positives
• There's no virtue in finding 6,233 problems if very few of them actually cause problems for a user
• Every problem reported in a heuristic evaluation takes time for the developers to consider
• Some interface aspects that seem like problems at first might not be problems at all
#5: assign each solution a complexity rating
5: Solution Complexity Rating
• Some problems take more time to fix than others, so it's important to allocate developers' time well
• Ideally this rating would be made by a developer, or by someone who is familiar with development on the target platform
Solution Complexity Rating
1. Trivial to fix. Textual changes and cosmetic changes. Minor code tweaking.
2. Easy to fix. Minimal redesign and straightforward code changes. Solution known and understood.
3. Difficult to fix. Redesign and re-engineering required. Significant code changes. Solution identifiable but details not fully understood.
4. Nearly impossible to fix. Requires massive re-engineering or use of new technology. Solution not known or understood at all.
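
A sketch of the complexity scale above, plus one possible way to order the aggregated problems for developers. The slides do not prescribe any particular ordering; sorting by severity first and then by cheapest fix is only an illustration (and it reuses the `UAR` sketch from earlier):

```python
from enum import IntEnum

class Complexity(IntEnum):
    """Solution complexity scale from the list above (sketch)."""
    TRIVIAL = 1            # textual/cosmetic changes, minor code tweaking
    EASY = 2               # minimal redesign, straightforward code changes
    DIFFICULT = 3          # redesign and re-engineering, significant code changes
    NEARLY_IMPOSSIBLE = 4  # massive re-engineering or new technology needed

def prioritize(uars: list[UAR]) -> list[UAR]:
    """One possible ordering (not prescribed by the slides): most severe
    problems first; among equally severe ones, the cheapest fixes first."""
    return sorted(uars, key=lambda u: (-(u.severity or 0), u.fixability or 5))
```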
Record Possible Fixes
• While evaluating solution complexity, the evaluator may have thought about how the problem could be fixed
• Record these possible fixes as suggestions to the developers
• Don't focus on the feasibility of solutions (that is their job)
• Your suggestions may be thought-provoking
Why HE?
• They find a reasonably large set of problems
• They are one of the easiest, quickest, and cheapest methods available
HE vs. User Testing
• User tests are more effective at revealing when a system's manifest model or metaphor is confusing
• User tests are less effective at finding obscure problems
• User tests are also much more expensive
• Advice: use HE first, to find the obvious problems, then user test.
Heuristic Evaluation Is Not User Testing
• Evaluator is not the user either
– Maybe closer to being a typical user than you are, though
• Analogy: code inspection vs. testing
• HE finds problems that UT often misses
– Inconsistent fonts
– Fitts’s Law problems
• But UT is the gold standard for usability
In-class assignment

Perform HE on the UMBC class search with PeopleSoft

Use the template form from the blog
HW: Perform HE on your sites
• Go through the 5 stages of HE for your website
– If on a team, each member goes through the HE individually, then combine later
• Turn in 1 completed form for each incident
• Come up with at least 8 UARs for each webpage
• Aggregate, then finish filling out the template!
• Everyone writes 100-200 words describing what they learned
