Week 3 - Writing Findings
Overview
- A pluralistic walkthrough involves a few types of participants:
- Representative users
- Product designers and developers
- Usability professionals
- Other stakeholders
- Can be used with paper prototypes.
- Participants are given tasks.
- All participants are asked to assume the role of “the user”.
- Participants mark on their hardcopies the action they would take in pursuing the task.
- Developers also serve as “living publications”: if participants need information they
would normally look for in the manual, they can ask the developers aloud.
What it is not
- It is not simply multiple reviewers contributing their individual ideas, findings, and
feedback to a single document.
- I.e., it is not like Project 1, where you combine your team’s individual
findings.
- Surprisingly, the pluralistic walkthrough has relatively little theoretical basis for its use.
- Rather than being founded on fundamentals, it is founded on practicality.
Process
1. The walkthrough moderator provides the tasks.
2. Participants are asked to mark on their hard copy of the first screen the action(s) they
would take in attempting the task.
3. After everyone has written their independent responses, the walkthrough moderator
announces the “right answer.”
4. Users verbalize their responses first and discuss potential usability problems.
5. Product developers explain why the design is the way it is.
6. Usability experts facilitate the discussion and help come up with solutions.
7. Participants can be given a usability questionnaire after each task and at the end of the
day.
Limitations
- The walkthrough can progress only at the pace of the slowest participant (everyone
must finish before the group discussion can begin).
- Participants may not get a good feel for the flow.
- Multiple correct paths cannot be simulated. Only one path can be evaluated at a time.
- This precludes participants from exploring, even though exploration might produce learning.
- Participants who picked a correct path that was not selected for the walkthrough
must “reset.”
- Participants who performed a “wrong” action must mentally reset as well.
- Product designers and developers have to be thick-skinned and treat users’ comments
with respect.
Benefits
- It is a cost- and time-effective method.
- It provides early performance and satisfaction data.
- It often enables redesign on the fly.
- “I got it right but...”
- Participant responses that were correct but made with uncertainty can be
discussed.
- It increases developers’ sensitivity to users’ concerns, which leads to increased buy-in.
Competitive Evaluation
- It involves evaluating 2 or more products with similar functionality. For example:
- “Us” vs. a key competitor / key competitors
- Only key competitors
- Goals can vary. For example:
- Find usability issues in our product and tell us if there is anything that the
competitors are doing better.
- We don’t care about the usability issues of the other products.
- Compare approaches and make recommendations for our new product.
- We want to know best practices and things we should avoid.
- Inspection methods are not a substitute for testing, but how can we make them more
effective?
Number of Evaluators
- 4-5 evaluators find ~75% of all problems that an inspection can find but:
- 1 evaluator is better than none
- 2 are better than 1
- Etc.
- But at some point, costs will outweigh the benefits (see the sketch below).
- Is it better to have 1 evaluator for 18 hours or 2 evaluators for 9 hours each?
- Is it better to have 2 evaluators for 9 hours each or 3 evaluators for 6 hours each?
- What about 6 evaluators for 3 hours each?
- Or 9 evaluators for 2 hours each?
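These questions can be reasoned about with the commonly cited Nielsen & Landauer problem-discovery model, found(i) = 1 − (1 − λ)^i. Below is a minimal sketch; note that the per-evaluator detection rate λ is an assumption, calibrated only so that 5 evaluators find ~75%, matching the figure above.

    # Toy sketch of the Nielsen & Landauer problem-discovery model.
    # ASSUMPTION: lam (per-evaluator detection rate) is calibrated so
    # that found(5) ~= 0.75, matching the "~75%" figure in these notes;
    # it is not a measured value.
    lam = 1 - 0.25 ** (1 / 5)  # ~0.24

    def found(i: int) -> float:
        """Expected fraction of findable problems caught by i evaluators."""
        return 1 - (1 - lam) ** i

    for i in range(1, 10):
        print(f"{i} evaluator(s): {found(i):.0%}")
    # ~24%, ~43%, ~56%, ~67%, ~75%, ~81%, ~86%, ~89%, ~92%:
    # each added evaluator helps, but the marginal gain shrinks,
    # which is why costs eventually outweigh the benefits.

Note that this model says nothing about hours per evaluator, so the 18-vs-9-vs-3-hour questions above additionally depend on how an individual’s discovery rate grows with time; the model only captures the diminishing returns of adding evaluators.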
Evaluators’ Expertise
- Evaluators with user testing experience are better than evaluators with no such
experience (even when using heuristics).
- Double experts (usability and domain) find about 1.5 times as many problems as
usability-only experts (see the sketch below).
- Pet insurance example (screenshots shown in lecture; not reproduced here): Is this good enough?
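Combining the 1.5× figure with the toy model above (a back-of-the-envelope assumption, not data from the lecture): if a double expert’s per-pass detection rate is 1.5× that of a usability-only expert, roughly three double experts match five usability-only evaluators.

    # Toy extension of the discovery model above.
    # ASSUMPTIONS: lam_single is the calibrated value from the earlier
    # sketch; the 1.5x multiplier is applied per evaluator.
    lam_single = 1 - 0.25 ** (1 / 5)           # ~0.24
    lam_double = 1.5 * lam_single              # ~0.36, per the 1.5x figure
    print(f"{1 - (1 - lam_double) ** 3:.0%}")  # ~74%: 3 double experts
                                               # roughly match 5 single experts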
Actionable Recommendations
- Actionable recommendations are ones that stakeholders/clients can actually use.
- Usability findings tell stakeholders what’s wrong. Recommendations tell stakeholders
how to fix the problem.
- What makes for a good, actionable recommendation?
How Bad Can Bad Recommendations Be?
- Generic or vague recommendations can be worse than the omission of
recommendations altogether.
- Vague recommendations may not be actionable (e.g., “make the navigation more
intuitive” gives the team nothing concrete to change). As a result, the overall value of
conducting the usability test is greatly diminished, despite key usability findings.
- Implementing changes from vague recommendations can actually create more usability
issues or cause a Web site or application to become less usable!
Text-Only Recommendations
- Some actionable recommendations can be communicated effectively through text.
- Effective when used to recommend changes to:
- Labeling and terminology
- Removal of screen elements
- Messaging/descriptions
- Information hierarchy/taxonomies